TY - THES A1 - Hanna, John T1 - Computational Fracture Modeling and Design of Encapsulation-Based Self-Healing Concrete Using XFEM and Cohesive Surface Technique N2 - Encapsulation-based self-healing concrete (SHC) is the most promising technique for providing a self-healing mechanism to concrete. This is due to its capacity to heal fractures effectively without human intervention, extending the operational life and lowering maintenance costs. The healing mechanism is created by embedding capsules containing the healing agent inside the concrete. The healing agent is released once the capsules are fractured, and healing occurs in the vicinity of the damaged part. The healing efficiency of SHC is still not clear and depends on several factors; in the case of microcapsule-based SHC, the fracture of the microcapsules is the most important prerequisite for releasing the healing agents and hence healing the cracks. This study contributes to verifying the healing efficiency of SHC and the fracture mechanism of the microcapsules. The extended finite element method (XFEM) is a flexible and powerful discrete crack method that allows crack propagation without re-meshing and has shown high accuracy for modeling fracture in concrete. In this thesis, a computational fracture modeling approach for encapsulation-based SHC is proposed based on the XFEM and the cohesive surface technique (CS) to study the healing efficiency as well as the potential fracture and debonding of the microcapsules or of the solidified healing agents from the concrete matrix. Both the concrete matrix and the microcapsule shell are modeled by XFEM and coupled by CS. The effects of the healed-crack length, the interfacial fracture properties, and the microcapsule size on the load-carrying capacity and fracture pattern of the SHC have been studied. The obtained results are compared to those obtained from the zero-thickness cohesive element approach to demonstrate the accuracy and validity of the proposed simulation. The present fracture simulation is extended to study the influence of capsular clustering on the fracture mechanism by varying the contact surface area of the CS between the microcapsule shell and the concrete matrix. The proposed fracture simulation is expanded to 3D to validate the 2D computational simulations and to estimate the difference in accuracy between 2D and 3D simulations. In addition, a design method is developed to determine the size of the microcapsules in consideration of a sufficient volume of healing agent to heal the expected crack width. This method is based on the configuration of the unit cell (UC), the representative volume element (RVE) and periodic boundary conditions (PBC), relating them to the volume fraction (Vf) and the crack width as variables. The proposed microcapsule design is verified through computational fracture simulations.
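As background to the cohesive surface technique used in the abstract above, interfacial debonding in such models is typically governed by a traction-separation law; a generic bilinear form is sketched below for orientation only (the symbols t_max, delta_0, delta_f and G_c are generic and not taken from the thesis):

\[
t(\delta) =
\begin{cases}
\dfrac{t_{\max}}{\delta_0}\,\delta, & 0 \le \delta \le \delta_0,\\[4pt]
t_{\max}\,\dfrac{\delta_f-\delta}{\delta_f-\delta_0}, & \delta_0 < \delta \le \delta_f,\\[4pt]
0, & \delta > \delta_f,
\end{cases}
\qquad
G_c = \tfrac{1}{2}\, t_{\max}\,\delta_f .
\]

Damage initiates once the interfacial traction reaches the strength t_max, and the surfaces are fully debonded when the dissipated work equals the fracture energy G_c; the interfacial fracture properties varied in the study enter through parameters of exactly this kind.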
KW - Beton KW - Bruchverhalten KW - Finite-Elemente-Methode KW - Self-healing concrete KW - Computational fracture modeling KW - Capsular clustering KW - Design of microcapsules KW - XFEM KW - Cohesive surface technique KW - Mikrokapsel KW - Selbstheilendem Beton KW - Computermodellierung des Bruchverhaltens KW - Entwurf von Mikrokapseln KW - Kapselclustern KW - Erweiterte Finite-Elemente-Methode KW - Kohäsionsflächenverfahren Y1 - 2022 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20221124-47467 ER - TY - THES A1 - Jenabidehkordi, Ali T1 - An Efficient Adaptive PD Formulation for Complex Microstructures N2 - The computational costs of newly developed numerical simulations play a critical role in their acceptance in both academic use and industrial employment. Normally, the refinement of a method in the area of interest reduces the computational cost. This is unfortunately not true for most nonlocal simulations, since refinement typically increases the size of the material point neighborhood. Reducing the discretization size while keeping the neighborhood size will often require extra consideration. Peridynamics (PD) is a newly developed numerical method of nonlocal nature. Its straightforward integral-form equation of motion allows simulating dynamic problems without any extra consideration. The formation and propagation of cracks are natural to peridynamics. This means that discontinuity is a result of the simulation and does not demand any post-processing. As with other nonlocal methods, PD is considered an expensive method. Refining the nodal spacing while keeping the neighborhood size (i.e., the horizon radius) constant gives rise to several nonphysical phenomena. This research aims to reduce the peridynamic computational and implementation costs. A novel refinement approach is introduced. The proposed approach takes advantage of the flexibility of PD in choosing the shape of the horizon by introducing multiple domains (with no intersections) for the nodes of the refinement zone. It will be shown that no ghost forces are created when changing the horizon sizes in both subdomains. The approach is applied to both bond-based and state-based peridynamics and verified for a simple wave propagation refinement problem, illustrating the efficiency of the method. Further development of the method for higher dimensions proves to have a direct relationship with the mesh sensitivity of PD. A method for resolving the mesh sensitivity of PD is introduced. The application of the method will be examined by solving a crack propagation problem similar to those reported in the literature. A new software architecture is proposed considering both academic and industrial use. The available simulation tools for employing PD will be collected, and their advantages and drawbacks will be addressed. The challenges of implementing any node-based nonlocal method while maximizing the software flexibility for further development and modification will be discussed and addressed. A software package named Relation-Based Simulator (RBS) is developed for examining the proposed architecture. The exceptional capabilities of RBS will be explored by simulating three distinguished models. RBS is publicly available and open to further development. The industrial acceptance of RBS will be tested by targeting its performance on one Mac and two Linux distributions.
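For orientation, the bond-based peridynamic equation of motion underlying the horizon concept discussed in the abstract above can be written in its standard integral form (generic notation, not specific to the thesis):

\[
\rho(\mathbf{x})\,\ddot{\mathbf{u}}(\mathbf{x},t)
= \int_{H_{\mathbf{x}}} \mathbf{f}\big(\mathbf{u}(\mathbf{x}',t)-\mathbf{u}(\mathbf{x},t),\,\mathbf{x}'-\mathbf{x}\big)\,\mathrm{d}V_{\mathbf{x}'}
+ \mathbf{b}(\mathbf{x},t),
\]

where H_x is the neighborhood (horizon) of the material point x, f is the pairwise bond force density and b is the body force density. Refining the nodal spacing while keeping the radius of H_x fixed increases the number of neighbors per node, which is the computational burden the proposed refinement approach addresses.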
KW - Peridynamik KW - Numerical Simulations KW - Peridynamics Y1 - 2022 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20221124-47422 ER - TY - THES A1 - Jenabidehkordi, Ali T1 - An efficient adaptive PD formulation for complex microstructures N2 - The computational costs of newly developed numerical simulations play a critical role in their acceptance in both academic use and industrial employment. Normally, the refinement of a method in the area of interest reduces the computational cost. This is unfortunately not true for most nonlocal simulations, since refinement typically increases the size of the material point neighborhood. Reducing the discretization size while keeping the neighborhood size will often require extra consideration. Peridynamics (PD) is a newly developed numerical method of nonlocal nature. Its straightforward integral-form equation of motion allows simulating dynamic problems without any extra consideration. The formation and propagation of cracks are natural to peridynamics. This means that discontinuity is a result of the simulation and does not demand any post-processing. As with other nonlocal methods, PD is considered an expensive method. Refining the nodal spacing while keeping the neighborhood size (i.e., the horizon radius) constant gives rise to several nonphysical phenomena. This research aims to reduce the peridynamic computational and implementation costs. A novel refinement approach is introduced. The proposed approach takes advantage of the flexibility of PD in choosing the shape of the horizon by introducing multiple domains (with no intersections) for the nodes of the refinement zone. It will be shown that no ghost forces are created when changing the horizon sizes in both subdomains. The approach is applied to both bond-based and state-based peridynamics and verified for a simple wave propagation refinement problem, illustrating the efficiency of the method. Further development of the method for higher dimensions proves to have a direct relationship with the mesh sensitivity of PD. A method for resolving the mesh sensitivity of PD is introduced. The application of the method will be examined by solving a crack propagation problem similar to those reported in the literature. A new software architecture is proposed considering both academic and industrial use. The available simulation tools for employing PD will be collected, and their advantages and drawbacks will be addressed. The challenges of implementing any node-based nonlocal method while maximizing the software flexibility for further development and modification will be discussed and addressed. A software package named Relation-Based Simulator (RBS) is developed for examining the proposed architecture. The exceptional capabilities of RBS will be explored by simulating three distinguished models. RBS is publicly available and open to further development. The industrial acceptance of RBS will be tested by targeting its performance on one Mac and two Linux distributions.
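The adaptive refinement described in the two abstracts above hinges on handling node-dependent horizons in a non-uniform particle discretization. The following minimal Python sketch (not taken from the thesis; array names, radii and the example grid are purely illustrative) shows one way to build the neighbor lists needed for such a scheme, including the dual-horizon-style reverse lists in which node j interacts with node i whenever i lies inside the horizon of j:

```python
import numpy as np

def build_horizons(coords, horizon_radii):
    """Neighbor lists for a particle discretization with node-dependent horizons.

    coords        : (N, dim) array of node coordinates (non-uniform spacing allowed)
    horizon_radii : (N,) array with the horizon radius of each node

    Returns two lists of index arrays:
      horizon[i]      -- nodes j lying inside the horizon of node i
      dual_horizon[i] -- nodes j whose horizon contains node i
    """
    n = len(coords)
    horizon, dual_horizon = [None] * n, [None] * n
    for i in range(n):
        dist = np.linalg.norm(coords - coords[i], axis=1)
        dist[i] = np.inf                                       # exclude the node itself
        horizon[i] = np.where(dist <= horizon_radii[i])[0]
        dual_horizon[i] = np.where(dist <= horizon_radii)[0]   # i inside the horizon of j
    return horizon, dual_horizon

# Example: a 1-D bar refined on its right half, horizon = 3 x local spacing
x_coarse = np.arange(0.0, 0.5, 0.05)
x_fine = np.arange(0.5, 1.0, 0.025)
coords = np.concatenate([x_coarse, x_fine])[:, None]
radii = np.where(coords[:, 0] < 0.5, 3 * 0.05, 3 * 0.025)
horizon, dual_horizon = build_horizons(coords, radii)
```

Looping over both lists allows the unbalanced interactions between nodes with different horizon sizes to be accounted for, which is the mechanism that avoids ghost forces when the horizon changes between subdomains.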
KW - Peridynamik KW - Peridynamics KW - Numerical Simulation Y1 - 2022 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20221116-47389 UR - https://e-pub.uni-weimar.de/opus4/frontdoor/index/index/docId/4742 ER - TY - THES A1 - Zacharias, Christin T1 - Numerical Simulation Models for Thermoelastic Damping Effects N2 - Finite Element Simulations of dynamically excited structures are mainly influenced by the mass, stiffness, and damping properties of the system, as well as external loads. The prediction quality of dynamic simulations of vibration-sensitive components depends significantly on the use of appropriate damping models. Damping phenomena have a decisive influence on the vibration amplitude and the frequencies of the vibrating structure. However, developing realistic damping models is challenging due to the multiple sources that cause energy dissipation, such as material damping, different types of friction, or various interactions with the environment. This thesis focuses on thermoelastic damping, which is the main cause of material damping in homogeneous materials. The effect is caused by temperature changes due to mechanical strains. In vibrating structures, temperature gradients arise in adjacent tension and compression areas. Depending on the vibration frequency, they result in heat flows, leading to increased entropy and the irreversible transformation of mechanical energy into thermal energy. The central objective of this thesis is the development of efficient simulation methods to incorporate thermoelastic damping in finite element analyses based on modal superposition. The thermoelastic loss factor is derived from the structure's mechanical mode shapes and eigenfrequencies. In subsequent analyses that are performed in the time and frequency domain, it is applied as modal damping. Two approaches are developed to determine the thermoelastic loss in thin-walled plate structures, as well as in three-dimensional solid structures. The realistic representation of the dissipation effects is verified by comparing the simulation results with experimentally determined data. Therefore, an experimental setup is developed to measure material damping, excluding other sources of energy dissipation. The three-dimensional solid approach is based on the determination of the generated entropy, and therefore the generated heat, per vibration cycle, which is a measure of the thermoelastic loss in relation to the total strain energy. For thin plate structures, the amount of bending energy in a modal deformation is calculated and summarized in the so-called Modal Bending Factor (MBF). The highest amount of thermoelastic loss occurs in the state of pure bending. Therefore, the MBF enables a quantitative classification of the mode shapes concerning the thermoelastic damping potential. The results of the developed simulations are in good agreement with the experimental results and are suitable for predicting thermoelastic loss factors. Both approaches are based on modal superposition with the advantage of a high computational efficiency. Overall, the modeling of thermoelastic damping represents an important component in a comprehensive damping model, which is necessary to perform realistic simulations of vibration processes. N2 - Die Finite-Elemente Simulation von dynamisch angeregten Strukturen wird im Wesentlichen durch die Steifigkeits-, Massen- und Dämpfungseigenschaften des Systems sowie durch die äußere Belastung bestimmt.
Die Vorhersagequalität von dynamischen Simulationen schwingungsanfälliger Bauteile hängt wesentlich von der Verwendung geeigneter Dämpfungsmodelle ab. Dämpfungsphänomene haben einen wesentlichen Einfluss auf die Schwingungsamplitude, die Frequenz und teilweise sogar die Existenz von Vibrationen. Allerdings ist die Entwicklung von realitätsnahen Dämpfungsmodellen oft schwierig, da eine Vielzahl von physikalischen Effekten zur Energiedissipation während eines Schwingungsvorgangs führt. Beispiele hierfür sind die Materialdämpfung, verschiedene Formen der Reibung sowie vielfältige Wechselwirkungen mit dem umgebenden Medium. Diese Dissertation befasst sich mit thermoelastischer Dämpfung, die in homogenen Materialien die dominante Ursache der Materialdämpfung darstellt. Der thermoelastische Effekt wird ausgelöst durch eine Temperaturänderung aufgrund mechanischer Spannungen. In der schwingenden Struktur entstehen während der Deformation Temperaturgradienten zwischen benachbarten Regionen unter Zug- und Druckbelastung. In Abhängigkeit von der Vibrationsfrequenz führen diese zu Wärmeströmen und irreversibler Umwandlung mechanischer in thermische Energie. Die Zielstellung dieser Arbeit besteht in der Entwicklung recheneffizienter Simulationsmethoden, um thermoelastische Dämpfung in zeitabhängigen Finite-Elemente Analysen, die auf modaler Superposition beruhen, zu integrieren. Der thermoelastische Verlustfaktor wird auf der Grundlage der mechanischen Eigenformen und -frequenzen bestimmt. In nachfolgenden Analysen im Zeit- und Frequenzbereich wird er als modaler Dämpfungsgrad verwendet. Zwei Ansätze werden entwickelt, um den thermoelastischen Verlustfaktor in dünn-wandigen Plattenstrukturen, sowie in dreidimensionalen Volumenbauteilen zu simulieren. Die realitätsnahe Vorhersage der Energiedissipation wird durch die Verifizierung an experimentellen Daten bestätigt. Dafür wird ein Versuchsaufbau entwickelt, der eine Messung von Materialdämpfung unter Ausschluss anderer Dissipationsquellen ermöglicht. Für den Fall der Volumenbauteile wird ein Ansatz verwendet, der auf der Berechnung der Entropieänderung und damit der erzeugte Wärmeenergie während eines Schwingungszyklus beruht. Im Verhältnis zur Formänderungsenergie ist dies ein Maß für die thermoelastische Dämpfung. Für dünne Plattenstrukturen wird der Anteil an Biegeenergie in der Eigenform bestimmt und im sogenannten modalen Biegefaktor (MBF) zusammengefasst. Der maximale Grad an thermoelastischer Dämpfung kann im Zustand reiner Biegung auftreten, sodass der MBF eine quantitative Klassifikation der Eigenformen hinsichtlich ihres thermoelastischen Dämpfungspotentials zulässt. Die Ergebnisse der entwickelten Simulationsmethoden stimmen sehr gut mit den experimentellen Daten überein und sind geeignet, um thermoelastische Dämpfungsgrade vorherzusagen. Beide Ansätze basieren auf modaler Superposition und ermöglichen damit zeitabhängige Simulationen mit einer hohen Recheneffizienz. Insgesamt stellt die Modellierung der thermoelastischen Dämpfung einen Baustein in einem umfassenden Dämpfungsmodell dar, welches zur realitätsnahen Simulation von Schwingungsvorgängen notwendig ist. 
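As background to the thermoelastic damping treated in the two abstracts above, the classical Zener estimate for the loss factor of a thin beam of thickness h vibrating in bending is often quoted; it is reproduced here only for orientation, since the thesis determines modal loss factors numerically rather than from this closed form:

\[
Q^{-1}(\omega) = \frac{E\,\alpha^{2}\,T_{0}}{\rho\,c_{p}}\;\frac{\omega\tau}{1+(\omega\tau)^{2}},
\qquad
\tau = \frac{h^{2}}{\pi^{2}\chi}, \quad \chi = \frac{k}{\rho\,c_{p}},
\]

where E is Young's modulus, α the coefficient of thermal expansion, T_0 the ambient temperature, ρ c_p the volumetric heat capacity, k the thermal conductivity and χ the thermal diffusivity. The loss peaks at ωτ = 1, i.e., when the vibration period is comparable to the time heat needs to diffuse across the thickness, which is the frequency dependence the modal loss factors in the thesis capture.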
T3 - ISM-Bericht // Institut für Strukturmechanik, Bauhaus-Universität Weimar - 2022,8 KW - Werkstoffdämpfung KW - Finite-Elemente-Methode KW - Strukturdynamik KW - Thermoelastic damping KW - modal damping KW - decay experiments KW - energy dissipation Y1 - 2022 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20221116-47352 ER - TY - THES A1 - Zhang, Yongzheng T1 - A Nonlocal Operator Method for Quasi-static and Dynamic Fracture Modeling N2 - Material failure can be tackled by so-called nonlocal models, which introduce an intrinsic length scale into the formulation and, in the case of material failure, restore the well-posedness of the underlying boundary value problem or initial boundary value problem. Among nonlocal models, peridynamics (PD) has attracted a lot of attention as it allows the natural transition from continuum to discontinuum and thus allows modeling of discrete cracks without the need to describe and track the crack topology, which has been a major obstacle in traditional discrete crack approaches. This is achieved by replacing the divergence of the Cauchy stress tensor with an integral over so-called bond forces, which account for the interaction of particles. A quasi-continuum approach is then used to calibrate the material parameters of the bond forces, i.e., equating the PD energy with the energy of a continuum. One major issue for the application of PD to general complex problems is that it is limited to fairly simple material behavior and purely mechanical problems based on explicit time integration. PD has been extended to other applications, but it simultaneously loses its simplicity and ease in modeling material failure. Furthermore, conventional PD suffers from instability and hourglass modes that require stabilization. It also requires the use of constant horizon sizes, which drastically reduces its computational efficiency. The latter issue was resolved by the so-called dual-horizon peridynamics (DH-PD) formulation and the introduction of the duality of horizons. Within the nonlocal operator method (NOM), the concept of nonlocality is further extended and can be considered a generalization of DH-PD. Combined with the energy functionals of various physical models, the nonlocal forms based on the dual-support concept can be derived. In addition, the variation of the energy functional allows implicit formulations of the nonlocal theory. While traditional integral equations are formulated in an integral domain, the dual-support approaches are based on dual integral domains. One prominent feature of NOM is its compatibility with variational and weighted residual methods. The NOM yields a direct numerical implementation based on the weighted residual method for many physical problems without the need for shape functions. Only the definition of the energy or boundary value problem is needed, which drastically facilitates the implementation. The nonlocal operator plays an equivalent role to the derivatives of the shape functions in meshless methods and finite element methods (FEM). Based on the variational principle, the residual and the tangent stiffness matrix can be obtained with ease by a series of matrix multiplications. In addition, NOM can be used to derive many nonlocal models in strong form. The principal contributions of this dissertation are the implementation and application of NOM, and also the development of approaches for dealing with fractures within the NOM, mostly for dynamic fractures.
The primary coverage and results of the dissertation are as follows: -The first/higher-order implicit NOM and explicit NOM, including a detailed description of the implementation, are presented. The NOM is based on so-called support, dual-support, nonlocal operators, and an operator energy functional ensuring stability. The nonlocal operator is a generalization of the conventional differential operators. Combined with the method of weighted residuals and variational principles, NOM establishes the residual and the tangent stiffness matrix of the operator energy functional through simple matrix operations, without the need for shape functions as in other classical computational methods such as FEM. NOM only requires the definition of the energy, drastically simplifying its implementation. For the sake of conciseness, the implementation in this chapter is focused on linear elastic solids only, though the NOM can handle more complex nonlinear problems. An explicit nonlocal operator method for the dynamic analysis of elastic solid problems is also presented. The explicit NOM avoids the calculation of the tangent stiffness matrix required in the implicit NOM model. The explicit scheme employs the velocity-Verlet algorithm. The NOM can be very flexible and efficient for solving partial differential equations (PDEs). It is also quite easy for readers to use the NOM and extend it to solve other complicated physical phenomena described by one or a set of PDEs. Several numerical examples are presented to show the capabilities of this method. -A nonlocal operator method for the dynamic analysis of (thin) Kirchhoff plates is proposed. The nonlocal Hessian operator is derived from a second-order Taylor series expansion. NOM is higher-order continuous, which is exploited for thin plate analysis that requires $C^1$ continuity. The nonlocal dynamic governing formulation and operator energy functional for Kirchhoff plates are derived from a variational principle. The velocity-Verlet algorithm is used for time discretization. After confirming the accuracy of the nonlocal Hessian operator, several numerical examples are simulated by the nonlocal dynamic Kirchhoff plate formulation. -A nonlocal fracture model is developed and applied to the simulation of quasi-static and dynamic fractures using the NOM. The nonlocal weak form of the phase field and the associated strong form are derived from a variational principle. The NOM requires only the definition of the energy. We present both a nonlocal implicit phase field model and a nonlocal explicit phase field model for fracture; the first approach is better suited for quasi-static fracture problems, while the key application of the latter is dynamic fracture. To demonstrate the performance of the underlying approach, several benchmark examples for quasi-static and dynamic fracture are solved.
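To make the role of the nonlocal operators in the abstract above more concrete, one common discrete form of a first-order nonlocal gradient over a support S_i can be written as follows (the notation is ours and may differ from the thesis):

\[
\tilde{\nabla}\mathbf{u}_{i}
= \Bigg(\sum_{j\in S_{i}} w(\mathbf{r}_{ij})\,(\mathbf{u}_{j}-\mathbf{u}_{i})\otimes\mathbf{r}_{ij}\Bigg)\,\mathbf{K}_{i}^{-1},
\qquad
\mathbf{K}_{i} = \sum_{j\in S_{i}} w(\mathbf{r}_{ij})\,\mathbf{r}_{ij}\otimes\mathbf{r}_{ij},
\]

with r_ij = x_j − x_i and w a weight function. An operator of this form reproduces the exact gradient of any linear field, which is the consistency property the operator energy functional is meant to enforce and stabilize; the residual and the tangent stiffness matrix then follow from matrix products of such operator matrices with the nodal unknowns.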
T3 - ISM-Bericht // Institut für Strukturmechanik, Bauhaus-Universität Weimar - 2022,9 KW - Variationsprinzip KW - Partial Differential Equations KW - Taylor Series Expansion KW - Peridynamics KW - Variational principle KW - Phase field method KW - Peridynamik KW - Phasenfeldmodell KW - Partielle Differentialgleichung KW - Nichtlokale Operatormethode Y1 - 2022 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20221026-47321 ER - TY - THES A1 - Yousefi, Hassan T1 - Discontinuous propagating fronts: linear and nonlinear systems N2 - The aim of this study is to control the spurious oscillations developing around discontinuous solutions of both linear and nonlinear wave equations or hyperbolic partial differential equations (PDEs). The equations include both first-order and second-order (wave) hyperbolic systems. In these systems, even smooth initial conditions or smoothly varying source (load) terms can lead to discontinuous propagating solutions (fronts). For first-order hyperbolic PDEs, the concept of central high resolution schemes is integrated with multiresolution-based adaptation to properly capture both discontinuous propagating fronts and the effects of fine-scale responses on those of larger scales in a multiscale manner. This integration leads to using central high resolution schemes on non-uniform grids; however, such a simulation is unstable, as central schemes were originally developed to work properly on uniform cells/grids. Hence, the main concern is the stable collaboration of central schemes and multiresolution-based cell adapters. Regarding central schemes, the considered approaches are: 1) second-order central and central-upwind schemes; 2) third-order central schemes; 3) third- and fourth-order central weighted non-oscillatory schemes (central-WENO or CWENO); 4) piecewise parabolic methods (PPMs) obtained with two different local stencils. For these methods, the corresponding (nonlinear) stability conditions are studied and modified as well. Based on these stability conditions, several limiters are modified/developed as follows: 1) several second-order limiters with the total variation diminishing (TVD) feature, 2) second-order uniformly high-order accurate non-oscillatory (UNO) limiters, 3) two third-order nonlinear scaling limiters, 4) two new limiters for PPMs. Numerical results show that adaptive solvers lead to cost-effective computations (e.g., in some 1-D problems, the number of adapted grid points is less than 200 during the simulations, while in the uniform-grid case 2049 points are needed to reach the same accuracy). Also, in some cases, it is confirmed that fine-scale responses have considerable effects on higher scales. In the numerical simulation of nonlinear first-order hyperbolic systems, the two main concerns are convergence and uniqueness. The former is important due to the development of spurious oscillations, numerical dispersion and numerical dissipation. Convergence of a numerical solution does not guarantee that it is the physical/real one (the uniqueness feature). Indeed, a nonlinear system can converge to several numerical results (all of which are mathematically valid). In this work, convergence and uniqueness are directly studied on non-uniform grids/cells by the concepts of local numerical truncation error and numerical entropy production, respectively. Also, both of these concepts have been used for cell/grid adaptations.
Thus, the performance of these concepts is also compared with that of the multiresolution-based method. Several 1-D and 2-D numerical examples are examined to confirm the efficiency of the adaptive solver. The examples involve problems with convex and non-convex fluxes. In the latter case, due to the development of complex waves, properly capturing the physical solutions needs more attention. For this purpose, the use of method adaptation seems to be essential (in parallel to the cell/grid adaptation). This new type of adaptation is also performed in the framework of the multiresolution analysis. Regarding second-order hyperbolic PDEs (mechanical waves), the regularization concept is used to cure artificial (numerical) oscillation effects, especially for high-gradient or discontinuous solutions. There, oscillations are removed by the regularization concept acting as a post-processor. Simulations are performed directly on the second-order form of the wave equations. It should be mentioned that it is possible to rewrite second-order wave equations as a system of first-order waves and then simulate the new system by high resolution schemes. However, this approach leads to an increased number of variables (especially for 3-D problems). The numerical discretization is performed by compact finite difference (FD) formulations with desired features, e.g., methods with spectral-like or optimized-error properties. These FD methods are developed to handle high-frequency waves (such as waves near earthquake sources). The performance of several regularization approaches is studied (both theoretically and numerically); finally, a proper regularization approach controlling the Gibbs phenomenon is recommended. At the end, some numerical results are provided to confirm the efficiency of the numerical solvers enhanced by the regularization concept. In this part, shock-like responses due to local and abrupt changes of physical properties, as well as stress wave propagation in stochastic-like domains, are studied. KW - Partielle Differentialgleichung KW - Adaptives System KW - Wavelet KW - Tichonov-Regularisierung KW - Hyperbolic PDEs KW - Adaptive central high resolution schemes KW - Wavelet based adaptation KW - Tikhonov regularization Y1 - 2022 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20220922-47178 ER - TY - THES A1 - Nouri, Hamidreza T1 - Mechanical Behavior of two dimensional sheets and polymer compounds based on molecular dynamics and continuum mechanics approach N2 - In brief, this thesis encompasses two major parts examining the mechanical responses of polymer compounds and two-dimensional materials: 1- A molecular dynamics approach is used to study the transverse impact behavior of polymers, polymer compounds and two-dimensional materials. 2- The large deflection of circular and rectangular membranes is examined by employing a continuum mechanics approach. Two-dimensional (2D) materials, including graphene and molybdenum disulfide (MoS2), exhibit new and promising physical and chemical properties, opening new opportunities to be utilized alone or to enhance the performance of conventional materials. These 2D materials have attracted tremendous attention owing to their outstanding physical properties, especially concerning transverse impact loading. Polymers, whether with a carbon backbone (organic polymers) or without carbon atoms in the backbone (inorganic polymers), such as polydimethylsiloxane (PDMS), have extraordinary characteristics; in particular, their flexibility allows various easy ways of forming and casting.
This ease of shape processing makes polymers an excellent material, often used as a matrix in composites (polymer compounds). In this PhD work, classical molecular dynamics (MD) is implemented to study the transverse impact loading of 2D materials as well as of polymer compounds reinforced with graphene sheets. In particular, MD was adopted to investigate the perforation of the target and the impact resistance force. By employing the MD approach, the minimum velocity of the projectile that creates perforation and passes through the target is obtained. The largest part of the investigation focused on how graphene could enhance the impact properties of the compound. A further purpose of this work was to discover the effect of the atomic arrangement of 2D materials on the impact problem. To this aim, the impact properties of two different 2D materials, graphene and MoS2, are studied. The simulation of chemical functionalization was carried out systematically, either with covalently bonded molecules or with non-bonded ones, focusing the following efforts on the covalently bonded species, revealed as the most efficient linkers. To study the transverse impact behavior using the classical MD approach, the Large-scale Atomic/Molecular Massively Parallel Simulator (LAMMPS) software, which is well known among researchers, is employed. The simulation is done through predefined commands in LAMMPS. Generally, these commands (atom_style, pair_style, angle_style, dihedral_style, improper_style, kspace_style, read_data, fix, run, compute and so on) are used to set up and run the model for the desired outputs. Depending on the particle and model types, suitable interatomic potentials (force fields) are considered. The ensembles, constraints and boundary conditions are applied depending upon the problem definition. To do so, the atomic structures must be created. Python codes are developed to generate the particles describing the atomic arrangement of each model. Each atomic arrangement is introduced separately to LAMMPS for simulation. After applying constraints and boundary conditions, LAMMPS also includes integrators such as the velocity-Verlet integrator, Brownian dynamics, or other types of integrators to run the simulation, and finally the outputs are produced. The outputs are inspected carefully to appreciate the natural behavior of the problem. Appreciation of the natural properties of the materials assists us in designing new applicable materials. In the investigation of the large deflection of circular and rectangular membranes, which is related to the second part of this thesis, a continuum mechanics approach is implemented. Nonlinear Föppl membrane theory, which retains the nonlinear terms of the governing equations of motion, is considered to establish the nonlinear partial differential equilibrium equations of the membranes under distributed and central point loads. The Galerkin and energy methods are utilized to solve the nonlinear partial differential equilibrium equations of the circular and rectangular plates, respectively. The maximum deflection as well as the stress through the film region, which are of concern in many industrial applications, are obtained.
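The abstract above mentions that Python scripts are used to generate the atomic arrangements that are then read into LAMMPS. A minimal sketch of such a particle-generation script is given below; it builds a rectangular graphene sheet from a four-atom orthorhombic cell (C-C bond length 1.42 Å) and writes a LAMMPS data file suitable for atom_style atomic. The file name, sheet size and vacuum spacing are illustrative and not taken from the thesis:

```python
import numpy as np

A_CC = 1.42                                       # C-C bond length in angstrom
CELL = (3.0 * A_CC, np.sqrt(3.0) * A_CC)          # orthorhombic unit cell (x, y)
BASIS = [(0.0, 0.0),
         (0.5 * A_CC, 0.5 * np.sqrt(3.0) * A_CC),
         (1.5 * A_CC, 0.5 * np.sqrt(3.0) * A_CC),
         (2.0 * A_CC, 0.0)]

def graphene_sheet(nx, ny, z=0.0):
    """Return (N, 3) carbon coordinates for an nx-by-ny tiling of the unit cell."""
    atoms = []
    for ix in range(nx):
        for iy in range(ny):
            for bx, by in BASIS:
                atoms.append((ix * CELL[0] + bx, iy * CELL[1] + by, z))
    return np.array(atoms)

def write_lammps_data(filename, coords, vacuum=20.0):
    """Write a LAMMPS data file (atom_style atomic, one atom type, carbon)."""
    xhi = coords[:, 0].max() + A_CC
    yhi = coords[:, 1].max() + A_CC
    with open(filename, "w") as f:
        f.write("# graphene sheet generated by script\n\n")
        f.write(f"{len(coords)} atoms\n1 atom types\n\n")
        f.write(f"0.0 {xhi:.6f} xlo xhi\n0.0 {yhi:.6f} ylo yhi\n")
        f.write(f"{-vacuum:.6f} {vacuum:.6f} zlo zhi\n\n")
        f.write("Masses\n\n1 12.011\n\n")
        f.write("Atoms\n\n")
        for i, (x, y, z) in enumerate(coords, start=1):
            f.write(f"{i} 1 {x:.6f} {y:.6f} {z:.6f}\n")

if __name__ == "__main__":
    sheet = graphene_sheet(nx=10, ny=17)          # roughly a 4 nm x 4 nm sheet
    write_lammps_data("graphene.data", sheet)
```

In a LAMMPS input script, such a file would then be read with the read_data command, and a carbon potential (e.g., AIREBO) would be assigned via pair_style and pair_coeff before the impact run is started.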
T2 - Mechanisches Verhalten von zweidimensionalen Schichten und Polymerverbindungen basierend auf molekulardynamischer und kontinuumsmechanischem Ansatz KW - Molekulardynamik KW - Polymerverbindung KW - Auswirkung KW - Molecular Dynamics Simulation KW - Continuum Mechanics KW - Polymer compound KW - Impact Y1 - 2022 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20220713-46700 ER - TY - THES A1 - Liu, Bokai T1 - Stochastic multiscale modeling of polymeric nanocomposites using Data-driven techniques N2 - In recent years, lightweight materials such as polymer nanocomposites (PNCs) have been studied and developed due to their excellent physical and chemical properties. Structures composed of these composite materials are widely used in aerospace engineering structures, automotive components, and electrical devices. The excellent and outstanding mechanical, thermal, and electrical properties of carbon nanotubes (CNTs) make them an ideal filler to strengthen the corresponding properties of polymer materials. The heat transfer of composite materials has very promising engineering applications in many fields, especially in electronic devices and energy storage equipment. It is essential in high-energy-density systems, since electronic components need heat dissipation functionality; in other words, in electronic devices the generated heat should ideally be dissipated by light and small heat sinks. Polymeric composites consist of fillers embedded in a polymer matrix; the fillers significantly affect the overall (macroscopic) performance of the material. There are many common carbon-based fillers such as single-walled carbon nanotubes (SWCNT), multi-walled carbon nanotubes (MWCNT), carbon nanobuds (CNB), fullerene, and graphene. Additives inside the matrix have become a popular subject for researchers. Some extraordinary characteristics, such as high load-bearing performance, lightweight design, excellent chemical resistance, easy processing, and heat transfer, make the design of polymeric nanotube composites (PNCs) flexible. Due to the reinforcing effects of different fillers on composite materials, there is a higher degree of design freedom, and the structure can be tailored to the needs of specific applications. As already stated, our research focus will be on SWCNT-enhanced PNCs. Since experiments are time-consuming, sometimes expensive, and cannot shed light on phenomena taking place, for instance, at the interfaces/interphases of composites, they are often complemented by theoretical and computational analysis. While most studies are based on deterministic approaches, there is a comparatively lower number of stochastic methods accounting for uncertainties in the input parameters. In deterministic models, the output of the model is fully determined by the parameter values and the initial conditions. However, uncertainties in the input parameters, such as the aspect ratio, the volume fraction, and the thermal properties of fiber and matrix, need to be taken into account for reliable predictions. In this research, a stochastic multiscale method is provided to study the influence of numerous uncertain input parameters on the thermal conductivity of the composite. Therefore, a hierarchical multi-scale method based on computational homogenization is presented to predict the macroscopic thermal conductivity based on the fine-scale structure. In order to study the inner mechanism, we use the finite element method and employ surrogate models to conduct a Global Sensitivity Analysis (GSA).
The SA is performed in order to quantify the influence of the conductivity of the fiber, the matrix, the Kapitza resistance, the volume fraction and the aspect ratio on the macroscopic conductivity. Therefore, we compute first-order and total-effect sensitivity indices with different surrogate models. As stochastic multiscale models are computationally expensive, surrogate approaches are commonly exploited. With the emergence of high performance computing and artificial intelligence, machine learning has become a popular modeling tool for numerous applications. Machine learning (ML) is commonly used in regression and maps data through specific rules with algorithms to build input and output models. It is particularly useful for nonlinear input-output relationships when sufficient data are available. ML has also been used in the design of new materials and in multiscale analysis. For instance, artificial neural networks and integrated learning seem to be ideal for such a task. They can theoretically simulate any non-linear relationship through the connection of neurons. Mapping relationships are employed to carry out data-driven simulations of inputs and outputs in stochastic modeling. This research aims to develop stochastic multi-scale computational models of PNCs in heat transfer. Multi-scale stochastic modeling with uncertainty analysis and machine learning methods consists of the following components: -Uncertainty Analysis. A surrogate-based global sensitivity analysis is coupled with a hierarchical multi-scale method employing computational homogenization. The effect of the conductivity of the fibers and the matrix, the Kapitza resistance, the volume fraction and the aspect ratio on the 'macroscopic' conductivity of the composite is systematically studied. All selected surrogate models consistently yield the conclusion that the most influential input parameter is the aspect ratio, followed by the volume fraction. The Kapitza resistance has no significant effect on the thermal conductivity of the PNCs. The most accurate surrogate model in terms of the R2 value is the moving least squares (MLS). -Hybrid Machine Learning Algorithms. A combination of an artificial neural network (ANN) and particle swarm optimization (PSO) is applied to estimate the relationship between variable input and output parameters. The ANN is used for modeling the composite, while PSO improves the prediction performance through an optimized global minimum search. The thermal conductivity of the fibers and the matrix, the Kapitza resistance, the volume fraction and the aspect ratio are selected as input parameters. The output is the macroscopic (homogenized) thermal conductivity of the composite. The results show that the PSO significantly improves the predictive ability of this hybrid intelligent algorithm, which outperforms traditional neural networks. -Stochastic Integrated Machine Learning. A stochastic integrated machine learning based multiscale approach for the prediction of the macroscopic thermal conductivity in PNCs is developed. Seven types of machine learning models are exploited in this research, namely Multivariate Adaptive Regression Splines (MARS), Support Vector Machine (SVM), Regression Tree (RT), Bagging Tree (Bag), Random Forest (RF), Gradient Boosting Machine (GBM) and Cubist. They are used as components of the stochastic modeling to construct the relationship between the uncertain input variables and the macroscopic thermal conductivity of PNCs.
Particle Swarm Optimization (PSO) is used for hyper-parameter tuning to find the global optimal values, leading to a significant reduction in the computational cost. The advantages and disadvantages of the various methods are also analyzed in terms of computing time and model complexity to finally give a recommendation for the applicability of the different models. T3 - ISM-Bericht // Institut für Strukturmechanik, Bauhaus-Universität Weimar - 2022,3 KW - Polymere KW - Nanoverbundstruktur KW - multiscale KW - nanocomposite KW - stochastic KW - Data-driven Y1 - 2022 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20220503-46379 ER - TY - THES A1 - Valizadeh, Navid T1 - Developments in Isogeometric Analysis and Application to High-Order Phase-Field Models of Biomembranes N2 - Isogeometric analysis (IGA) is a numerical method for solving partial differential equations (PDEs), which was introduced with the aim of integrating finite element analysis with computer-aided design (CAD) systems. The main idea of the method is to use the same spline basis functions which describe the geometry in CAD systems for the approximation of solution fields in the finite element method (FEM). Originally, NURBS, a standard technology employed in CAD systems, were adopted as basis functions in IGA, but there are several variants of IGA using other technologies, such as T-splines, PHT-splines, and subdivision surfaces, as basis functions. In general, IGA offers two key advantages over classical FEM: (i) by describing the CAD geometry exactly using smooth, high-order spline functions, the mesh generation process is simplified and the interoperability between CAD and FEM is improved; (ii) IGA can be viewed as a high-order finite element method which offers basis functions with high inter-element continuity and can therefore provide a primal variational formulation of high-order PDEs in a straightforward fashion. The main goal of this thesis is to further advance isogeometric analysis by exploiting these major advantages, namely precise geometric modeling and the use of smooth high-order splines as basis functions, and to develop robust computational methods for problems with complex geometry and/or complex multi-physics. As the first contribution of this thesis, we leverage the precise geometric modeling of isogeometric analysis and propose a new method for its coupling with meshfree discretizations. We exploit the strengths of both methods by using IGA to provide a smooth, geometrically exact surface discretization of the problem domain boundary, while the Reproducing Kernel Particle Method (RKPM) discretization is used to provide the volumetric discretization of the domain interior. The coupling strategy is based upon the higher-order consistency or reproducing conditions that are directly imposed in the physical domain. The resulting coupled method enjoys several favorable features: (i) it preserves the geometric exactness of IGA, (ii) it circumvents the need for global volumetric parameterization of the problem domain, (iii) it achieves arbitrary-order approximation accuracy while preserving higher-order smoothness of the discretization. Several numerical examples are solved to show the optimal convergence properties of the coupled IGA–RKPM formulation, and to demonstrate its effectiveness in constructing volumetric discretizations for complex-geometry objects.
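The spline basis functions referred to in the paragraph above are built from the Cox–de Boor recursion; for a knot vector Ξ = {ξ_1, …, ξ_{n+p+1}}, the B-spline basis of degree p and the corresponding NURBS basis with weights w_i read (standard definitions, reproduced here only as background):

\[
N_{i,0}(\xi) =
\begin{cases}
1, & \xi_{i} \le \xi < \xi_{i+1},\\
0, & \text{otherwise},
\end{cases}
\qquad
N_{i,p}(\xi) = \frac{\xi-\xi_{i}}{\xi_{i+p}-\xi_{i}}\,N_{i,p-1}(\xi)
+ \frac{\xi_{i+p+1}-\xi}{\xi_{i+p+1}-\xi_{i+1}}\,N_{i+1,p-1}(\xi),
\]
\[
R_{i}^{p}(\xi) = \frac{N_{i,p}(\xi)\,w_{i}}{\sum_{j=1}^{n} N_{j,p}(\xi)\,w_{j}}.
\]

Because the same R_i^p describe both the CAD geometry and the discrete solution fields, the geometry is represented exactly and inter-element continuity up to C^{p-1} is available, which is what the IGA–RKPM coupling above and the high-order phase-field formulations described next exploit.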
As for the next contribution, we exploit the use of smooth, high-order spline basis functions in IGA to solve high-order surface PDEs governing the morphological evolution of vesicles. These governing equations often consist of geometric PDEs, high-order PDEs on stationary or evolving surfaces, or a combination of them. We propose an isogeometric formulation for solving these PDEs. In the context of geometric PDEs, we consider phase-field approximations of mean curvature flow and Willmore flow problems and numerically study the convergence behavior of isogeometric analysis for these problems. As a model problem for high-order PDEs on stationary surfaces, we consider the Cahn–Hilliard equation on a sphere, where the surface is modeled using a phase-field approach. As for the high-order PDEs on evolving surfaces, a phase-field model of a deforming multi-component vesicle, which consists of two fourth-order nonlinear PDEs, is solved using isogeometric analysis in a primal variational framework. Through several numerical examples in 2D, 3D and axisymmetric 3D settings, we show the robustness of IGA for solving the considered phase-field models. Finally, we present a monolithic, implicit formulation based on isogeometric analysis and generalized-alpha time integration for simulating the hydrodynamics of vesicles according to a phase-field model. Compared to earlier works, the number of equations of the phase-field model which need to be solved is reduced by leveraging the high continuity of NURBS functions, and the algorithm is extended to 3D settings. We use the residual-based variational multiscale method (RBVMS) for solving the Navier–Stokes equations, while the rest of the PDEs in the phase-field model are treated using a standard Galerkin-based IGA. We introduce the resistive immersed surface (RIS) method into the formulation, which can be employed for an implicit description of complex geometries using a diffuse-interface approach. The implementation highlights the robustness of the RBVMS method for the Navier–Stokes equations of incompressible flows with non-trivial localized forcing terms, including the bending and tension forces of the vesicle. The potential of the phase-field model and isogeometric analysis for the accurate simulation of a variety of fluid-vesicle interaction problems in 2D and 3D is demonstrated. T3 - ISM-Bericht // Institut für Strukturmechanik, Bauhaus-Universität Weimar - 2022,1 KW - Phasenfeldmodell KW - Vesikel KW - Hydrodynamik KW - Multiphysics KW - Isogeometrische Analyse KW - Isogeometric Analysis KW - Vesicle dynamics KW - Phase-field modeling KW - Geometric Partial Differential Equations KW - Residual-based variational multiscale method Y1 - 2022 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20220114-45658 ER - TY - THES A1 - Wang, Jiasheng T1 - Lebensdauerabschätzung von Bauteilen aus globularem Grauguss auf der Grundlage der lokalen gießprozessabhängigen Werkstoffzustände N2 - Das Ziel der Arbeit ist, eine mögliche Verbesserung der Güte der Lebensdauervorhersage für Gusseisenwerkstoffe mit Kugelgraphit zu erreichen, wobei die Gießprozesse verschiedener Hersteller berücksichtigt werden. Im ersten Schritt wurden Probenkörper aus GJS500 und GJS600 von mehreren Gusslieferanten gegossen und daraus Schwingproben erstellt.
Insgesamt wurden Schwingfestigkeitswerte der einzelnen gegossenen Proben sowie der Proben des Bauteils von verschiedenen Gussherstellern weltweit entweder durch direkte Schwingversuche oder durch eine Sammlung von Betriebsfestigkeitsversuchen bestimmt. Dank der metallografischen Arbeit und Korrelationsanalyse konnten drei wesentliche Parameter zur Bestimmung der lokalen Dauerfestigkeit festgestellt werden: 1. statische Festigkeit, 2. Ferrit- und Perlitanteil der Mikrostrukturen und 3. Kugelgraphitanzahl pro Flächeneinheit. Basierend auf diesen Erkenntnissen wurde ein neues Festigkeitsverhältnisdiagramm (sogenanntes Sd/Rm-SG-Diagramm) entwickelt. Diese neue Methodik sollte vor allem ermöglichen, die Bauteildauerfestigkeit auf der Grundlage der gemessenen oder durch eine Gießsimulation vorhergesagten lokalen Zugfestigkeitswerte sowie Mikrogefügestrukturen besser zu prognostizieren. Mithilfe der Versuche sowie der Gießsimulation ist es gelungen, unterschiedliche Methoden der Lebensdauervorhersage unter Berücksichtigung der Herstellungsprozesse weiterzuentwickeln. KW - Grauguss KW - Lebensdauerabschätzung KW - Werkstoffprüfung Y1 - 2022 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20220111-45542 ER - TY - THES A1 - Mauludin, Luthfi Muhammad T1 - Computational Modeling of Fracture in Encapsulation-Based Self-Healing Concrete Using Cohesive Elements N2 - Encapsulation-based self-healing concrete has received a lot of attention nowadays in the civil engineering field. In this approach, capsules containing a healing agent are embedded in the cementitious matrix during concrete mixing. When cracks appear, the embedded capsules that are placed along the path of an incoming crack are fractured and then release healing agents in the vicinity of the damage. The capsule materials need to be designed in such a way that they are able to break with small deformation, so that the internal fluid can be released to seal the crack. This study focuses on the computational modeling of fracture in encapsulation-based self-healing concrete. Numerical 2D and 3D models with randomly packed aggregates and capsules have been developed to analyze the fracture mechanism that plays a significant role in the fracture probability of the capsules and consequently in the self-healing process. The capsules are assumed to be made of Poly Methyl Methacrylate (PMMA), and the potential cracks are represented by pre-inserted cohesive elements with tension and shear softening laws along the element boundaries of the mortar matrix, aggregates, and capsules, and at the interfaces between these phases. The effects of the volume fraction, core-wall thickness ratio, and mismatched fracture properties of the capsules on the load carrying capacity of the self-healing concrete and the fracture probability of the capsules are investigated. The output of this study will become a valuable tool to assist not only experimentalists but also manufacturers in designing an appropriate capsule material for self-healing concrete. KW - Beton KW - Bruch KW - self healing concrete KW - cohesive elements KW - Fracture KW - Fracture Computational Model Y1 - 2021 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20211008-45204 ER - TY - THES A1 - Khademi Zahedi, Reza T1 - Stress Distribution in Buried Defective PE Pipes and Crack Propagation in Nanosheets N2 - Buried PE pipelines are the main choice for transporting hazardous hydrocarbon fluids and are used in urban gas distribution networks. Molecular dynamics (MD) simulations are used to investigate the material behavior at the nanoscale.
KW - Gasleitung KW - gas pipes KW - Riss KW - Defekt KW - defects KW - nanosheets KW - crack KW - maximum stress Y1 - 2021 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20210803-44814 ER - TY - THES A1 - Alkam, Feras T1 - Vibration-based Monitoring of Concrete Catenary Poles using Bayesian Inference N2 - This work presents a robust status monitoring approach for detecting damage in cantilever structures based on logistic functions. Also, a stochastic damage identification approach based on changes of eigenfrequencies is proposed. The proposed algorithms are verified using catenary poles of electrified railway tracks. The proposed damage features overcome the limitation of frequency-based damage identification methods available in the literature, which are valid for detecting damage in structures at Level 1 only. Changes in the eigenfrequencies of cantilever structures are enough to identify possible local damage at Level 3, i.e., to cover damage detection, localization, and quantification. The proposed algorithms identified the damage with relatively small errors, even at a high noise level. KW - Parameteridentifikation KW - Bayesian Inference KW - Uncertainty Quantification KW - Inverse Problems KW - Damage Identification KW - Concrete catenary pole KW - SHM KW - Inverse Probleme KW - Bayes’schen Inferenz KW - Unschärfequantifizierung KW - Schadenerkennung KW - Oberleitungsmasten Y1 - 2021 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20210526-44338 UR - https://asw-verlage.de/katalog/vibration_based_monitoring_of_co-2363.html VL - 2021 PB - Bauhaus-Universitätsverlag CY - Weimar ER - TY - THES A1 - Ren, Huilong T1 - Dual-horizon peridynamics and Nonlocal operator method N2 - In the last two decades, peridynamics (PD) has attracted much attention in the field of fracture mechanics. One key feature of PD is its nonlocality, which is quite different from the ideas in conventional methods such as FEM and meshless methods. However, conventional PD suffers from problems such as the constant horizon, explicit algorithms, and hourglass modes. In this thesis, by examining nonlocality with scrutiny, we propose several new concepts such as the dual-horizon (DH) in PD, the dual-support (DS) in smoothed particle hydrodynamics (SPH), nonlocal operators and the operator energy functional. The conventional PD (SPH) is incorporated in the DH-PD (DS-SPH), which can adopt an inhomogeneous discretization and inhomogeneous support domains. The DH-PD (DS-SPH) can be viewed as a fundamental improvement on the conventional PD (SPH). The dual formulation of PD and SPH allows h-adaptivity while satisfying the conservation of linear momentum, angular momentum and energy. By developing the concept of nonlocality further, we introduce the nonlocal operator method as a generalization of DH-PD. Combined with the energy functionals of various physical models, the nonlocal forms based on the dual-support concept can be derived. In addition, the variation of the energy functional allows implicit formulations of the nonlocal theory. Finally, we develop the higher-order nonlocal operator method, which is capable of solving higher-order partial differential equations on arbitrary domains in higher-dimensional space. Since the concepts are developed gradually, we describe our findings chronologically. In chapter 2, we develop a DH-PD formulation that includes varying horizon sizes and solves the "ghost force" issue.
The concept of the dual-horizon considers the unbalanced interactions between particles with different horizon sizes. The present formulation fulfills both the balance of linear momentum and the balance of angular momentum exactly with arbitrary particle discretizations. All three peridynamic formulations, namely bond-based, ordinary state-based and non-ordinary state-based peridynamics, can be implemented within the DH-PD framework. A simple adaptive refinement procedure (h-adaptivity) is proposed, reducing the computational cost. Both two- and three-dimensional examples, including the Kalthoff-Winkler experiment and a plate with branching cracks, are tested to demonstrate the capability of the method. In chapter 3, a nonlocal operator method (NOM) based on the variational principle is proposed for the solution of the waveguide problem in computational electromagnetics. Common differential operators as well as the variational forms are defined within the context of nonlocal operators. The present nonlocal formulation allows assembling the tangent stiffness matrix with ease, which is necessary for the eigenvalue analysis of the waveguide problem. The present formulation is applied to solve the 1D Schrödinger equation, a 2D electrostatic problem and the differential electromagnetic vector wave equations based on electric fields. In chapter 4, a general nonlocal operator method is proposed which is applicable for solving partial differential equations (PDEs) of mechanical problems. The nonlocal operator can be regarded as the integral form "equivalent" to the differential form in the sense of a nonlocal interaction model. The variation of a nonlocal operator plays a role equivalent to that of the derivatives of the shape functions in meshless methods or in the finite element method. Based on the variational principle, the residual and the tangent stiffness matrix can be obtained with ease. The nonlocal operator method is enhanced here also with an operator energy functional to satisfy the linear consistency of the field. A highlight of the present method is that the functional derived based on the nonlocal operator can convert the construction of the residual and stiffness matrix into a series of matrix multiplications using the predefined nonlocal operators. The nonlocal strong forms of different functionals can be obtained easily via the concept of support and dual-support. Several numerical examples of different types of PDEs are presented. In chapter 5, we extend the NOM to a higher-order scheme by using a higher-order Taylor series expansion of the unknown field. Such a higher-order scheme improves the original NOM of chapters 3 and 4, which can only achieve first-order convergence. The higher-order NOM obtains all partial derivatives up to a specified maximal order simultaneously without resorting to shape functions. The functional based on the nonlocal operators converts the construction of the residual and stiffness matrix into a series of matrix multiplications on the nonlocal operator matrix. Several numerical examples solved in strong form or weak form are presented to show the capabilities of this method. In chapter 6, it is noted that the NOM proposed as a particle-based method in chapters 3, 4 and 5 has difficulty in accurately imposing boundary conditions of various orders. In this chapter, we convert the particle-based NOM into a scheme with the interpolation property.
The new scheme describes partial derivatives of various orders at a point by the nodes in its support and takes advantage of a background mesh for numerical integration. The boundary conditions are enforced via the modified variational principle. The particle-based NOM can be viewed as a special case of the NOM with interpolation property when nodal integration is used. The scheme based on numerical integration greatly improves the stability of the method; as a consequence, the operator energy functional of the particle-based NOM is not required. We demonstrate the capabilities of the current method by solving gradient solid problems and comparing the numerical results with the available exact solutions. In chapter 7, we derive the DS-SPH for solids within the framework of the variational principle. The tangent stiffness matrix of SPH can be obtained with ease and serves as the basis for the present implicit SPH. We propose an hourglass energy functional, which allows the direct derivation of the hourglass force and the hourglass tangent stiffness matrix. The dual-support is involved in all derivations based on variational principles and is automatically satisfied in the assembling of the stiffness matrix. The implementation of the stiffness matrix comprises two steps: the nodal assembly based on the deformation gradient and the global assembly over all nodes. Several numerical examples are presented to validate the method. KW - Peridynamik KW - Variational principle KW - weighted residual method KW - gradient elasticity KW - phase field fracture method KW - smoothed particle hydrodynamics KW - numerical methods KW - PDEs Y1 - 2021 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20210412-44039 ER - TY - THES A1 - Harirchian, Ehsan T1 - Improved Rapid Assessment of Earthquake Hazard Safety of Existing Buildings Using a Hierarchical Type-2 Fuzzy Logic Model N2 - Although it is impractical to avert subsequent natural disasters, advances in simulation science and seismological studies make it possible to lessen the catastrophic damage. There currently exists in many urban areas a large number of structures that are prone to damage by earthquakes. These were constructed without the guidance of a national seismic code, either before it existed or before it was enforced. For instance, in Istanbul, Turkey, a highly seismic area, around 90% of the buildings are substandard, a situation that can be generalized to other earthquake-prone regions in Turkey. The reliability of this building stock with respect to earthquake-induced collapse is currently uncertain. Nonetheless, it is also not feasible to perform a detailed seismic vulnerability analysis of each building as a solution to the scenario, as it would be too complicated and expensive. This indicates the necessity of a reliable, rapid, and computationally easy method for seismic vulnerability assessment, commonly known as Rapid Visual Screening (RVS). In the RVS methodology, an observational survey of buildings is performed, and according to the data collected during the visual inspection, a structural score is calculated without performing any structural calculations in order to determine the expected damage of a building and whether the building needs a detailed assessment.
Although this method can save time and resources, the evaluation process relies on the subjective and qualitative judgments of the experts who perform the inspection and is therefore dominated by vagueness and uncertainties. The vagueness can be handled adequately through fuzzy set theory, which, however, does not cover all sorts of uncertainties because of its crisp membership functions. In this study, a novel method for the rapid visual hazard safety assessment of buildings against earthquakes is introduced, in which an interval type-2 fuzzy logic system (IT2FLS) is used to cover uncertainties. In addition, the proposed method provides the possibility to evaluate the earthquake risk of the building by considering factors related to the building importance and exposure. A smartphone app prototype of the method has been introduced. For validation of the proposed method, two case studies have been selected, and the results of the analysis demonstrate the robust efficiency of the proposed method. KW - Fuzzy-Logik KW - Erdbebensicherheit KW - Fuzzy logic KW - RC Buildings KW - Rapid Visual Assessment KW - Seismic Vulnerability KW - Uncertainty Y1 - 2021 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20210326-43963 ER - TY - THES A1 - Goswami, Somdatta T1 - Phase field modeling of fracture with isogeometric analysis and machine learning methods N2 - This thesis presents advances and applications of phase field modeling in fracture analysis. In this approach, the sharp crack surface topology in a solid is approximated by a diffusive crack zone governed by a scalar auxiliary variable. The uniqueness of phase field modeling is that the crack paths are automatically determined as part of the solution and no interface tracking is required. The damage parameter varies continuously over the domain. But this flexibility comes with associated difficulties: (1) a very fine spatial discretization is required to represent sharp local gradients correctly; (2) the fine discretization results in high computational cost; (3) higher-order derivatives must be computed to obtain improved convergence rates; and (4) conventional numerical integration techniques suffer from the curse of dimensionality. As a consequence, the practical applicability of phase field models is severely limited. The research presented in this thesis addresses the difficulties of the conventional numerical integration techniques for phase field modeling in quasi-static brittle fracture analysis. The first method relies on polynomial splines over hierarchical T-meshes (PHT-splines) in the framework of isogeometric analysis (IGA). An adaptive h-refinement scheme is developed based on the variational energy formulation of phase field modeling. The fourth-order phase field model provides increased regularity in the exact solution of the phase field equation and improved convergence rates for numerical solutions on a coarser discretization, compared to the second-order model. However, second-order derivatives of the phase field are required in the fourth-order model. Hence, basis functions with at least C1 continuity are essential, which is achieved using hierarchical cubic B-splines in IGA. PHT-splines enable the refinement to remain local at singularities and high gradients, consequently reducing the computational cost greatly. Unfortunately, when modeling complex geometries, multiple parameter spaces (patches) are joined together to describe the physical domain, and there is typically a loss of continuity at the patch boundaries.
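For orientation, the second-order phase field formulation referred to above is commonly written as the minimization of an energy functional of the following standard form (a sketch only; sign and normalization conventions vary between authors):

    \[
    \Pi(\mathbf{u},\phi) \;=\; \int_\Omega (1-\phi)^2\,\psi_e\big(\boldsymbol{\varepsilon}(\mathbf{u})\big)\,\mathrm{d}\Omega
    \;+\; G_c \int_\Omega \left( \frac{\phi^2}{2\ell_0} + \frac{\ell_0}{2}\,\lvert\nabla\phi\rvert^2 \right) \mathrm{d}\Omega ,
    \]

where φ = 0 denotes intact and φ = 1 fully broken material, ψ_e is the elastic energy density, G_c the critical energy release rate and ℓ0 the regularization length. Fourth-order models typically augment the crack surface density with a term in (Δφ)^2, which is why bases with at least C1 continuity are required.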
This decrease of smoothness is dictated by the geometry description, where C0 parameterizations are normally used to deal with kinks and corners in the domain. Hence, the application of the fourth-order model is severely restricted. To overcome the high computational cost for the second-order model, we develop a dual-mesh adaptive h-refinement approach. This approach uses a coarser discretization for the elastic field and a finer discretization for the phase field. Independent refinement strategies have been used for each field. The next contribution is based on physics informed deep neural networks. The network is trained based on the minimization of the variational energy of the system described by general non-linear partial differential equations while respecting any given law of physics, hence the name physics informed neural network (PINN). The developed approach needs only a set of points to define the geometry, contrary to the conventional mesh-based discretization techniques. The concept of “transfer learning” is integrated with the developed PINN approach to improve the computational efficiency of the network at each displacement step. This approach allows a numerically stable crack growth even with larger displacement steps. An adaptive h-refinement scheme based on the generation of more quadrature points in the damage zone is developed in this framework. For all the developed methods, displacement-controlled loading is considered. The accuracy and the efficiency of both methods are studied numerically showing that the developed methods are powerful and computationally efficient tools for accurately predicting fractures. T3 - ISM-Bericht // Institut für Strukturmechanik, Bauhaus-Universität Weimar - 2021,1 KW - Phasenfeldmodell KW - Neuronales Netz KW - Sprödbruch KW - Isogeometric Analysis KW - Physics informed neural network KW - phase field KW - deep neural network KW - brittle fracture Y1 - 2021 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20210304-43841 ER - TY - THES A1 - Winkel, Benjamin T1 - A three-dimensional model of skeletal muscle for physiological, pathological and experimental mechanical simulations T1 - Ein dreidimensionales Skelettmuskel-Modell für physiologische, pathologische und experimentelle mechanische Simulationen N2 - In recent decades, a multitude of concepts and models were developed to understand, assess and predict muscular mechanics in the context of physiological and pathological events. Most of these models are highly specialized and designed to selectively address fields in, e.g., medicine, sports science, forensics, product design or CGI; their data are often not transferable to other ranges of application. A single universal model, which covers the details of biochemical and neural processes, as well as the development of internal and external force and motion patterns and appearance could not be practical with regard to the diversity of the questions to be investigated and the task to find answers efficiently. With reasonable limitations though, a generalized approach is feasible. The objective of the work at hand was to develop a model for muscle simulation which covers the phenomenological aspects, and thus is universally applicable in domains where up until now specialized models were utilized.
This includes investigations on active and passive motion, the structural interaction of muscles within the body and with external elements, for example in crash scenarios, but also research topics like the verification of in vivo experiments and parameter identification. For this purpose, elements for the simulation of incompressible deformations were studied, adapted and implemented into the finite element code SLang. Various anisotropic, visco-elastic muscle models were developed or enhanced. The applicability was demonstrated on the basis of several examples, and a general basis for the implementation of further material models was developed and elaborated. T3 - ISM-Bericht // Institut für Strukturmechanik, Bauhaus-Universität Weimar - 2020,3 KW - Biomechanik KW - Nichtlineare Finite-Elemente-Methode KW - Muskel KW - Brustkorb KW - Muscle model KW - FEM KW - Biomechanics KW - Incompressibility KW - Thorax Y1 - 2020 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20201211-43002 ER - TY - THES A1 - Rabizadeh, Ehsan T1 - Goal-oriented A Posteriori Error Estimation and Adaptive Mesh Refinement in 2D/3D Thermoelasticity Problems T1 - Zielorientierte a posteriori Fehlerabschätzung und adaptive Netzverfeinerung bei 2D- und 3D-thermoelastischen Problemen N2 - In recent years, substantial attention has been devoted to thermoelastic multifield problems and their numerical analysis. Thermoelasticity is one of the important categories of multifield problems which deals with the effect of mechanical and thermal disturbances on an elastic body. In other words, thermoelasticity encompasses the phenomena that describe the elastic and thermal behavior of solids and their interactions under thermo-mechanical loadings. Since providing an analytical solution for general coupled thermoelasticity problems is mathematically complicated, the development of alternative numerical solution techniques seems essential. Due to the nature of numerical analysis methods, the presence of error in the results is inevitable; therefore, in any numerical simulation, the main concern is the accuracy of the approximation. There are different error estimation (EE) methods to assess the overall quality of a numerical approximation. In many real-life numerical simulations, not only the overall error, but also the local error or the error in a particular quantity of interest is of main interest. The error estimation techniques which are developed to evaluate the error in the quantity of interest are known as “goal-oriented” error estimation (GOEE) methods. This project, for the first time, investigates classical a posteriori error estimation and goal-oriented a posteriori error estimation in 2D/3D thermoelasticity problems. Generally, a posteriori error estimation techniques can be categorized into the two major branches of recovery-based and residual-based error estimators. In this research, the application of both recovery- and residual-based error estimators in thermoelasticity is studied. Moreover, in order to reduce the error in the quantity of interest efficiently and optimally in 2D and 3D thermoelastic problems, goal-oriented adaptive mesh refinement is performed. As the first application category, error estimation in classical thermoelasticity (CTE) is investigated.
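As a hedged illustration of the kind of constitutive split that anisotropic, nearly incompressible muscle models of the type described above often build on (the specific models implemented in SLang may differ), a common general form of the strain energy is

    \[
    \Psi \;=\; U(J) \;+\; \bar\Psi_{\mathrm{iso}}(\bar I_1,\bar I_2) \;+\; \bar\Psi_{\mathrm{fib}}(\bar I_4),
    \qquad J=\det\mathbf{F}, \quad \bar I_4=\mathbf{a}_0\cdot\bar{\mathbf{C}}\,\mathbf{a}_0,
    \]

where U(J) penalizes volume change (incompressibility), the isochoric part depends on the modified invariants of the distortional right Cauchy-Green tensor, and the fiber term, written in terms of the reference fiber direction a0, carries the along-fiber passive and, in active models, contractile response.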
In the first step, an rh-adaptive thermo-mechanical formulation based on goal-oriented error estimation is proposed. The developed goal-oriented error estimation relies on different stress recovery techniques, i.e., the superconvergent patch recovery (SPR), L2-projection patch recovery (L2-PR), and weighted superconvergent patch recovery (WSPR). Moreover, a new adaptive refinement strategy (ARS) is presented that minimizes the error in a quantity of interest and refines the discretization such that the error is equally distributed in the refined mesh. The method is validated by numerous numerical examples where an analytical solution or a reference solution is available. After investigating error estimation in classical thermoelasticity and evaluating the quality of the presented error estimators, we extended the application of the developed goal-oriented error estimation and the associated adaptive refinement technique to classical fully coupled dynamic thermoelasticity. In this part, we present an adaptive method for coupled dynamic thermoelasticity problems based on goal-oriented error estimation. We use dimensionless variables in the finite element formulation, and for the time integration we employ the acceleration-based Newmark-β method. In this part, the SPR, L2-PR, and WSPR recovery methods are exploited to estimate the error in the quantity of interest (QoI). By using adaptive refinement in space, the error in the quantity of interest is minimized. Therefore, the discretization is refined such that the error is equally distributed in the refined mesh. We demonstrate the efficiency of this method by numerous numerical examples. After studying the recovery-based error estimators, we investigated residual-based error estimation in thermoelasticity. In the last part of this research, we present a 3D adaptive method for thermoelastic problems based on goal-oriented error estimation where the error is measured with respect to a pointwise quantity of interest. We developed a method for a posteriori error estimation and mesh adaptation based on the dual weighted residual (DWR) method, which relies on duality principles and consists of an adjoint problem solution. Here, we consider the application of the derived estimator and mesh refinement to two-/three-dimensional (2D/3D) thermo-mechanical multifield problems. In this study, the goal is considered to be given by singular pointwise functions, such as the point value or point value derivative at a specific point of interest (PoI). An adaptive algorithm has been adopted to refine the mesh so as to minimize the error in the goal quantity of interest. The mesh adaptivity procedure based on the DWR method is performed by adaptive local h-refinement/coarsening with allowed hanging nodes. According to the proposed DWR method, the error contribution of each element is evaluated. In the refinement process, the contribution of each element to the goal error is considered as the mesh refinement criterion. In this study, we substantiate the accuracy and performance of this method by several numerical examples with available analytical solutions. Here, 2D and 3D problems under thermo-mechanical loadings are considered as benchmark problems. To show how accurately the derived estimator captures the exact error in the evaluation of the pointwise quantity of interest, the goal error effectivity index, as a standard measure of the quality of an estimator, is calculated in all examples using the analytical solutions.
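For orientation, the dual weighted residual idea and the effectivity index mentioned above can be summarized in the standard form (a sketch; the estimator derived in the thesis may differ in detail):

    \[
    J(u) - J(u_h) \;\approx\; \sum_{K} \eta_K, \qquad \eta_K = \rho_K(u_h)\,(z - I_h z),
    \qquad \theta_{\mathrm{eff}} = \frac{\eta}{\lvert J(u) - J(u_h)\rvert},
    \]

where J is the quantity of interest, u and u_h are the exact and discrete solutions, ρ_K is the element residual of the primal problem, z the adjoint (dual) solution associated with J, I_h z its interpolant, and θ_eff the effectivity index, which should be close to one for a reliable estimator.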
Moreover, in order to demonstrate the efficiency of the proposed method and show the optimal behavior of the employed refinement method, the results of different conventional error estimators and refinement techniques (e.g., global uniform refinement, Kelly, and weighted Kelly techniques) are used for comparison. N2 - Einleitung und Motivation: 1- Im Laufe der letzten Jahrzehnte wurde den Mehrfeldproblemen und ihrer numerischen Analyse große Aufmerksamkeit gewidmet. Bei Mehrfeldproblemen wird die Wechselwirkung zwischen verschiedenen Feldern wie elastischen, elektrischen, magnetischen, chemischen oder thermischen Feldern untersucht. Eine wichtige Kategorie von Mehrfeldproblemen ist die Thermoelastizität. In der Thermoelastizität werden neben dem mechanischen Feld (Verschiebungen) auch das thermische Feld (Temperatur) und deren Auswirkungen aufeinander untersucht. 2- In fortgeschrittenen und sensible Anwendungen mit Temperaturänderung (z. B. LNG-, CNG- oder LPG-Speichertanks bei Sonnentemperatur im Sommer) ist die Elastizitätstheorie, die nur Verschiebungen berücksichtigt, nicht ausreichend. In diesen Fällen ist die Verwendung einer thermoelastischen Formulierung unumgänglich, um zuverlässige Ergebnisse zu erzielen. 3- Da eine analytische Lösung für thermoelastische Probleme sehr selten bestimmbar ist, wird sie durch numerische Methoden ersetzt. Allerdings sind die numerischen Ergebnisse nicht exakt und approximieren nur die exakte Lösung. Daher sind Fehler in den numerischen Ergebnissen unvermeidlich. 4- In jeder numerischen Simulation ist die Genauigkeit der Approximation das Hauptanliegen. Daher wurden verschiedene Fehlerschätzungstechniken entwickelt, um den Fehler der numerischen Lösung zu schätzen. Die herkömmlichen Fehlerschätzungsmethoden geben nur einen allgemeinen Überblick über die Gesamtgenauigkeit einer Näherungslösung. Bei vielen realen Problemen ist jedoch anstelle der Gesamtgenauigkeit die örtliche Genauigkeit (z. B. die Genauigkeit an einem bestimmten Punkt) von großem Interesse 5- Herkömmliche Fehlerschätzer berechnen Fehler in gewissen Normen. In der Ingenieurpraxis interessieren allerdings Fehler in anderen Zielgrößen, beispielsweise in der Last-Verformungs-Kurve oder in gewissen Spannungs-komponenten und speziellen Positionen. Dafür wurden sog. zielorientierte Fehlerschätzer entwickelt. 6- Die meisten numerischen Methoden unterteilen das Gebiet in kleine Teile (Element/Zelle), um das Problem zu lösen. Die Verwendung sehr feiner Elemente erhöht die Simulationsgenauigkeit, erhöht aber auch die Rechenzeit drastisch. Dieses Problem wird durch adaptive Methoden (AM) gelöst. AM können die Rechenzeit deutlich verringern. Bei adaptiven Methoden spielt die Fehlerschätzung eine Schlüsselrolle. Die Verfeinerung der Diskretisierung wird von einer Fehlerschätzung der Lösung kontrolliert und gesteuert (Elemente mit einem höheren geschätzten Fehler werden zur Verfeinerung/Aufteilung ausgewählt). Problemstellung und Zielsetzung der Arbeit 7- Die thermoelastischen Probleme können in zwei Hauptgruppen eingeteilt werden: Klassische Thermoelastizität (KTE) und klassische gekoppelte Thermoelastizität (KKTE). In jeder Gruppe werden verschiedene thermoelastische Probleme mit verschiedenen Geometrien, und Rand-/Anfangsbedingungen untersucht. In dieser Untersuchung werden die KTE- und KKTE-Probleme numerisch gelöst und alle numerischen Lösungen durch Fehlerschätzung bewertet. 
8- In dieser Arbeit werden die Gesamtgenauigkeit der numerischen Lösung durch herkömmliche globale Fehlerschätzverfahren (auch als recovery-basierte Methoden bekannt) und die Genauigkeit der Lösung in bestimmten Punkten durch neue lokale Methoden (z. B. Dual-gewichtete Residuumsmethode oder DWR-Methode) bewertet. 9- Bei den dynamischen thermoelastischen Problemen ändern sich die Problembedin-gungen und anschließend die Lösung mit der Zeit. Daher werden die Fehler in jedem Zeitschritt geschätzt, um die Genauigkeit über die Zeit zu erhalten. 10- In dieser Dissertation wurde eine neue adaptive Gitter-Verfeinerung (AGV)-Technik entwickelt und für thermoelastische Probleme implementiert. Stand der Wissenschaft 11- Da die Thermoelastizität im Vergleich zu anderen mechanischen Bereichen wie der Elastizität nicht so umfangreich untersucht ist, wurden nur sehr begrenzte Untersuchungen durchgeführt, um die numerischen Fehler abzuschätzen und zu kontrollieren. Alle diese Untersuchungen konzentrierten sich auf die konventionellen Techniken, die nur den Gesamtfehler abschätzen können. Um die lokalen Fehler (wie punktweise Fehler oder Fehler an einem bestimmten Punkt) abzuschätzen, ist die Verwendung der zielorientierten Fehlerschätzungstechniken unvermeidlich. Die Implementierung der recovery-basierten zielorientierten Fehlerschätzung in der Thermoelastizität wird vor diesem Projekt nicht untersucht. 12- Viele numerische Analysen der dynamischen thermoelastischen Probleme basieren auf der Laplace-Transformationsmethode. Bei dieser Methode ist es praktisch nicht möglich, den Fehler in jedem Zeitschritt abzuschätzen. Daher wurden bisher die herkömmlichen globalen oder lokalen zielorientierten Fehlerschätzungsverfahren nicht in der dynamischen Thermoelastizität implementiert. 13- Eine der neuesten fortgeschrittenen zielorientierten Fehlerschätzungsmethoden ist die Dual-gewichtete Residuumsmethode (DWR-Methode). Die DWR-Methode, die punktweise Fehler (wie Verschiebungs-, mechanische Spannungs- oder Dehnungsfehler an einem bestimmten Punkt) abschätzen kann, wird bei elastischen Problemen angewendet. Es wurde jedoch kein Versuch unternommen, die DWR-Methode für die thermoelastischen Probleme zu formulieren. 14- In numerischen Simulationen sollte das Gitter verfeinert werden, um den Fehler zu verringern. Viele Verfeinerungstechniken basieren auf den globalen Fehlerschätzern, die versuchen, den Fehler der gesamten Lösung zu reduzieren. Daher sind diese Verfeinerungsmethoden zum reduzieren der lokalen Fehler nicht effizient. Wenn nur die Lösung an bestimmten Punkten interessiert ist und der Fehler dort reduziert werden will, sollten die zielorientierten Verfeinerungsmethoden angewendet werden, die vor dieser Untersuchung nicht in thermoelastischen Problemen entwickelt und implementiert wurden. 15- Die realen Probleme sind in der Regel 3D-Probleme, und die Simulation mit vereinfachten 2D-Fällen zeigt nicht alle Aspekte des Problems. Wie bereits erwähnt, sollten in der numerischen Simulation zur Erhöhung der Genauigkeit Gitterverfeinerungstechniken eingesetzt werden. Die konventionell verfeinerten Gitter, die durch gleichmäßige Aufteilung aller Elemente erreicht werden, erhöhen die Rechenzeit. Diese Simulationszeiterhöhung bei 3D-Problemen ist enorm. Dieses Problem wird durch die Verwendung der intelligenten Verfeinerung anstelle der globalen gleichmäßigen Verfeinerung gelöst. 
In diesem Projekt wurde erstmals die zielorientierte adaptive Gitterverfeinerung (AGV) bei thermoelastischen 3D-Problemen entwickelt und implementiert. Forschungsmethodik 16- In dieser Arbeit werden die beiden Haupttypen der thermoelastischen Probleme (KTE und KKTE) untersucht. Das System der partiellen Differentialgleichung der Thermoelastizität besteht aus zwei Hauptgleichungen: der herkömmlichen Gleichgewichtsgleichung und der Energiebilanzgleichung. 17- In diesem Projekt wird die Finite-Elemente-Methode (FEM) verwendet, um die Probleme numerisch zu simulieren. 18- Der Computercode zur Lösung von 2D- und 3D-Problemen wurde in den Programmiersprachen MATLAB bzw. C++ entwickelt. Um die Rechenzeit zu verkürzen und die Computerressourcen effizient zu nutzen, wurden Parallelprogrammierungs- und Optimierungsalgorithmen eingesetzt. 19- Nachdem die Probleme numerisch gelöst wurden, wurden zwei verschiedene Arten von globalen und lokalen Fehlerschätzungstechniken implementiert, um den Fehler zu schätzen und die Genauigkeit der Lösung zu messen. Der globale Typ ist die recovery-basierte zielorientierte Fehlerabschätzung, die wiederum in drei Unterkategorien von SPR-, L2-PR- und WSPR-Methoden unterteilt ist. Der lokale Typ ist die dual-gewichtete residuumsbasierte zielorientierte Fehlerabschätzung. Die Formulierung dieser Methoden wurde für thermoelastische Probleme entwickelt. 20- Schließlich wurde nach der Fehlerschätzung die entwickelte AGV-Methode implementiert. Wesentliche Ergebnisse und Schlussfolgerungen 21- In diesem Projekt wurde die Fehlerschätzung der Thermoelastizität in den folgenden drei Schritten untersucht: 1- Recovery-basierte Fehlerschätzung in statischen thermo Problemen (KTE), 2- Recovery-basierte Fehlerabschätzung in dynamischen thermo Problemen (KKTE), 3- Residuumsbasierte Fehlerschätzung in statischen thermo Problemen (KTE), 22- Im ersten Schritt, wurde das recovery-basierte Fehlerschätzverfahren auf mehrere stationäre thermoelastische Probleme angewendet. Einige der untersuchten Probleme verfügen über analytische Lösungen. Der Vergleich der numerischen Ergebnisse mit der analytischen (exakten) Lösung zeigt, dass die WSPR-Methode die genaueste unter den SPR, L2-PR und WSPR Techniken ist. 23- Darüber hinaus schließen wir aus den Ergebnissen des ersten Schritts, dass die zielorientierte Verfeinerung, im Vergleich zur herkömmlichen gleichmäßigen Total-Verfeinerungsmethode, nur ein Drittel der Unbekannten erfordert, um das Problem mit der gleichen Genauigkeit zu lösen. Daher benötigt die zielorientierte Adaptivität im Vergleich zu herkömmlichen Methoden viel weniger Rechenzeit, um die gleiche Genauigkeit zu erreichen. 24- Im zweiten Schritt, sind die Fehlerschätzungstechniken dieselben wie im ersten Schritt, aber die untersuchten Probleme sind dynamisch und nicht statisch. Der Vergleich der numerischen Ergebnisse mit den analytischen Ergebnissen in einem Benchmark-Problem bestätigt die Genauigkeit der verwendeten Methode. 25- Die Ergebnisse des zweiten Schritts zeigen, dass die geschätzten Fehler in allen gekoppelten Problemen niedriger sind als die ähnlichen ungekoppelten. Bei diesen Problemen reduziert die Implementierung der entwickelten adaptiven Methode den Fehler erheblich. 26- Im dritten Schritt wurde das residuumsbasierte Fehlerabschätzungsverfahren auf mehrere thermoelastische Probleme im stationären Zustand angewendet. In allen Beispielen wird die Genauigkeit der Methode durch analytische Lösungen überprüft.
Die numerischen Ergebnisse zeigen eine sehr gute Übereinstimmung mit der analytischen Lösung sowohl bei 2D- als auch bei 3D-Problemen. 27- Im dritten Schritt werden die Ergebnisse der DWR-Verfeinerung mit Kelly-, W-Kelly- und gleichmäßigen Total-Verfeinerungstechniken verglichen. Die entwickelte DWR-Methode zeigt im Vergleich zu den anderen Methoden die beste Effizienz. Um beispielsweise die Fehlertoleranz von 10^-6 zu erreichen, enthält das DWR-Gitter nur 2% unbekannte Parameter im Vergleich zu einem gleichmäßig verfeinerten Gitter. Die Verwendung des DWR-Verfahrens spart daher erhebliche Rechenzeit und Kosten. KW - Mesh Refinement KW - Thermoelastizität KW - Goal-oriented A Posteriori Error Estimation KW - 2D/3D Adaptive Mesh Refinement KW - Thermoelasticity KW - Deal ii C++ code KW - recovery-based and residual-based error estimators Y1 - 2020 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20201113-42864 ER - TY - THES A1 - Oucif, Chahmi T1 - Analytical Modeling of Self-Healing and Super Healing in Cementitious Materials N2 - Self-healing materials have recently become more popular due to their capability to autonomously and autogenously repair damage in cementitious materials. The concept of self-healing gives the damaged material the ability to recover its stiffness. This distinguishes it from a material that is not subjected to healing: once such a material is damaged, it cannot sustain loading due to the stiffness degradation. Numerical modeling of self-healing materials is still in its infancy. Numerous experimental studies have been conducted in the literature to describe the self-healing behavior of cementitious materials. However, few numerical investigations have been undertaken. The thesis presents an analytical framework of self-healing and super healing materials based on continuum damage-healing mechanics. Through this framework, we aim to describe the recovery and strengthening of material stiffness and strength. A simple damage healing law is proposed and applied to concrete. The proposed damage-healing law is based on a new time-dependent healing variable. The damage-healing model is applied to an isotropic concrete material at the macroscale under tensile load. Both autonomous and autogenous self-healing mechanisms are simulated under different loading conditions. These two mechanisms are denoted in the present work by coupled and uncoupled self-healing mechanisms, respectively. We assume in the coupled self-healing that the healing occurs at the same time as damage evolution, while we assume in the uncoupled self-healing that the healing occurs when the material is deformed and subjected to a rest period (damage is constant). In order to describe both coupled and uncoupled healing mechanisms, a one-dimensional element is subjected to different types of loading history. In the same context, a derivation of nonlinear self-healing theory is given, and a comparison of linear and nonlinear damage-healing models is carried out using both coupled and uncoupled self-healing mechanisms. The nonlinear healing theory includes generalized nonlinear and quadratic healing models. The healing efficiency is studied by varying the values of the healing rest period and the parameter describing the material characteristics. In addition, theoretical formulations of different self-healing variables are presented for both isotropic and anisotropic materials.
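A relation commonly used in continuum damage-healing mechanics, given here only for orientation (notation and conventions for the damage and healing variables vary between authors), links the nominal and effective stresses as

    \[
    \bar\sigma \;=\; \frac{\sigma}{1 - \varphi\,(1-h)}, \qquad \varphi\in[0,1],\; h\in[0,1],
    \]

so that h = 0 recovers classical damage mechanics and h = 1 corresponds to full recovery of the load-carrying cross-section.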
The healing variables are defined based on the recovery in elastic modulus, shear modulus, Poisson's ratio, and bulk modulus. The evolution of the healing variable calculated based on the cross-section, as a function of the healing variable calculated based on the elastic stiffness, is presented under both hypotheses of elastic strain equivalence and elastic energy equivalence. The components of the fourth-rank healing tensor are also obtained in the cases of isotropic elasticity, plane stress and plane strain. Recent research revealed that self-healing also presents a crucial solution for the strengthening of materials. This new concept has been termed “Super Healing”. Once the stiffness of the material is recovered, further healing can result in a strengthening of the material. In the present thesis, a new theory of super healing materials is defined for isotropic and anisotropic cases using sound mathematical and mechanical principles, which are applied in linear and nonlinear super healing theories. Additionally, the link of the proposed theory with the theory of undamageable materials is outlined. In order to describe the super healing efficiency in the linear and nonlinear theories, the ratio of effective stress to nominal stress is calculated as a function of the super healing variable. In addition, the hypotheses of elastic strain and elastic energy equivalence are applied. In the same context, a new super healing matrix in plane strain is proposed based on continuum damage-healing mechanics. In the present work, we also focus on numerical modeling of the impact behavior of reinforced concrete slabs using the commercial finite element package Abaqus/Explicit. Plain and reinforced concrete slabs of unconfined compressive strength 41 MPa are simulated under the impact of an ogive-nosed hard projectile. The constitutive material modeling of the concrete and the steel reinforcement bars is performed using the Johnson-Holmquist-2 damage model and the Johnson-Cook plasticity model, respectively. Damage diameters and residual velocities obtained by the numerical model are compared with the experimental results, and the effect of steel reinforcement and projectile diameter is studied. KW - Schaden KW - Beschädigung KW - Selbstheilung KW - Zementbeton KW - Damage KW - Healing KW - Concrete KW - Autonomous KW - Autogenous KW - Super Healing Y1 - 2020 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20200831-42296 ER - TY - THES A1 - Chan, Chiu Ling T1 - Smooth representation of thin shells and volume structures for isogeometric analysis N2 - The purpose of this study is to develop self-contained methods for obtaining smooth meshes which are compatible with isogeometric analysis (IGA). The study contains three main parts. We start by developing a better understanding of shapes and splines through the study of an image-related problem. Then we proceed towards obtaining smooth volumetric meshes of the given voxel-based images. Finally, we treat the smoothness issue on multi-patch domains with C1 coupling. The following are the highlights of each part. First, we present a B-spline convolution method for the boundary representation of voxel-based images. We adopt the filtering technique to compute the B-spline coefficients and gradients of the images effectively. We then implement the B-spline convolution to develop a non-rigid image registration method. The proposed method is in some sense “isoparametric”, in that all the computation is done within the B-spline framework.
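As a minimal sketch of the prefilter-then-evaluate idea behind such a B-spline image representation, the snippet below uses SciPy's built-in spline routines as a stand-in for the convolution/filtering step described above; it is not the thesis' own implementation, and the image and sampling coordinates are placeholders.

    import numpy as np
    from scipy import ndimage

    # toy grayscale image
    image = np.random.default_rng(1).random((64, 64))

    # compute cubic B-spline coefficients once (the filtering step) ...
    coeffs = ndimage.spline_filter(image, order=3)

    # ... then evaluate the smooth representation at arbitrary, non-grid coordinates,
    # e.g. along a deformed sampling grid used during registration
    rows = np.linspace(5.3, 40.7, 50)
    cols = np.linspace(10.1, 55.9, 50)
    R, C = np.meshgrid(rows, cols, indexing="ij")
    values = ndimage.map_coordinates(coeffs, [R.ravel(), C.ravel()], order=3, prefilter=False)
    print(values.shape)  # (2500,)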
In particular, updating the images by using B-spline composition promotes a smooth transformation map between the images. We show possible medical applications of our method by applying it to the registration of brain images. Secondly, we develop a self-contained volumetric parametrization method based on the B-spline boundary representation. We aim to convert given voxel-based data into a matching C1 representation with hierarchical cubic splines. The concept of the osculating circle is employed to enhance the geometric approximation, which is done by a single template and linear transformations (scaling, translations, and rotations) without the need to solve an optimization problem. Moreover, we use Laplacian smoothing and refinement techniques to avoid irregular meshes and to improve the mesh quality. We show with several examples that the method is capable of handling complex 2D and 3D configurations. In particular, we parametrize the 3D Stanford bunny, which contains irregular shapes and voids. Finally, we propose the Bézier ordinates approach and the splines approach for C1 coupling. In the first approach, the new basis functions are defined in terms of the Bézier Bernstein polynomials. For the second approach, the new basis is defined as a linear combination of C0 basis functions. The methods are not limited to planar or bilinear mappings. They allow the modeling of solutions to fourth-order partial differential equations (PDEs) on complex geometric domains, provided that the given patches are G1 continuous. Both methods have their advantages. In particular, the Bézier approach offers more degrees of freedom, while the spline approach is more computationally efficient. In addition, we propose partial degree elevation to overcome the C1-locking issue caused by the over-constraining of the solution space. We demonstrate the potential of the resulting C1 basis functions for applications in IGA which involve fourth-order PDEs, such as those appearing in Kirchhoff-Love shell models, the Cahn-Hilliard phase field application, and biharmonic problems. T3 - ISM-Bericht // Institut für Strukturmechanik, Bauhaus-Universität Weimar - 2020,2 KW - Modellierung KW - Isogeometrische Analyse KW - NURBS KW - Geometric Modeling KW - Isogeometric Analysis KW - NURBS Y1 - 2020 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20200812-42083 ER - TY - THES A1 - Salavati, Mohammad T1 - Multi-Scale Modeling of Mechanical and Electrochemical Properties of 1D and 2D Nanomaterials, Application in Battery Energy Storage Systems N2 - Material properties play a critical role in the manufacturing of durable products. Estimation of precise characteristics at different scales requires complex and expensive experimental measurements. Potentially, computational methods can provide a platform to determine the fundamental properties before the final experiment. Multi-scale computational modeling covers the modeling of various time and length scales, including the nano, micro, meso, and macro scales. These scales can be modeled separately or in correlation with coarser scales. Depending on the scales of interest, the right selection of multi-scale methods leads to reliable results at an affordable computational cost. The present dissertation deals with problems at various length and time scales using computational methods, including density functional theory (DFT), molecular mechanics (MM), molecular dynamics (MD), and finite element (FE) methods.
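A minimal sketch of the Cox-de Boor recursion, the basic building block behind the B-spline and NURBS bases discussed above, is given below; it is illustrative only, since production IGA codes evaluate the bases far more efficiently.

    def bspline_basis(i, p, u, knots):
        """Value of the i-th B-spline basis function of degree p at parameter u."""
        if p == 0:
            return 1.0 if knots[i] <= u < knots[i + 1] else 0.0
        left = 0.0
        if knots[i + p] != knots[i]:
            left = (u - knots[i]) / (knots[i + p] - knots[i]) * bspline_basis(i, p - 1, u, knots)
        right = 0.0
        if knots[i + p + 1] != knots[i + 1]:
            right = (knots[i + p + 1] - u) / (knots[i + p + 1] - knots[i + 1]) * bspline_basis(i + 1, p - 1, u, knots)
        return left + right

    # cubic basis on an open knot vector; the basis functions form a partition of unity
    knots = [0.0, 0.0, 0.0, 0.0, 0.5, 1.0, 1.0, 1.0, 1.0]
    u = 0.3
    print(sum(bspline_basis(i, 3, u, knots) for i in range(len(knots) - 4)))  # ~ 1.0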
Physical and chemical interactions at lower scales determine the coarser-scale properties. Modeling particle interactions and exploring fundamental properties are significant challenges of computational science. Finer-scale models require more computational effort due to the large number of interacting atoms/particles. To deal with this problem and treat a fine-scale (nano) problem as a coarse-scale (macro) one, we extended an atomic-continuum framework. The discrete atomic models are solved as a continuum problem using the computationally efficient FE method. The MM or force field method approximates a solution on the atomic scale based on a set of assumptions. In this method, atoms and bonds are modeled as harmonic oscillators, i.e., as a system of masses and springs, and the negative gradient of the potential energy gives the forces on each atom. In this way, each bond's total potential energy, including bonded and non-bonded energies, is simulated as an equivalent structural strain energy. Finally, the chemical nature of the atomic bond is modeled as a piezoelectric beam element that is solved by the FE method. Exploring novel materials with unique properties is in demand for various industrial applications. During the last decade, many two-dimensional (2D) materials have been synthesized and have shown outstanding properties. Investigating probable defects during the formation/fabrication process and studying their strength under severe service conditions are critical tasks for exploring performance prospects. We studied various defects, including nano-crack, notch, and point vacancy (Stone-Wales defect) defects, employing MD analysis. Classical MD has been used to simulate a considerable number of molecules at micro- and meso-scales. Pristine and defective nanosheet structures were considered under uniaxial tensile loading at various temperatures using the open-source LAMMPS code. The results were visualized with the open-source software OVITO and VMD. Quantum-based first-principles calculations have been conducted at the electronic scale and are known as the most accurate ab initio methods. However, they are computationally expensive to apply to large systems. We used density functional theory (DFT) to estimate the mechanical and electrochemical response of the 2D materials. The many-body Schrödinger equation describes the motion and interactions of the solid-state particles. A solid is described as a system of positive nuclei and negative electrons, all electromagnetically interacting with each other, where wave function theory describes the quantum state of the set of particles. However, dealing with the 3N coordinates of the electrons and nuclei, and the N coordinates of the electron spin components, makes the governing equation unsolvable for more than a few interacting atoms. Assumptions and theories such as the Born-Oppenheimer approximation, the Hartree-Fock mean field, and the Hohenberg-Kohn theorems are needed to treat this equation. First, the Born-Oppenheimer approximation reduces it to the electronic coordinates only. Then Kohn and Sham, building on the Hartree-Fock and Hohenberg-Kohn theories, assumed an equivalent fictitious system of non-interacting electrons, formulated as a functional of the electron density, such that its ground state energy equals that of the set of interacting electrons. Exchange-correlation energy functionals are responsible for satisfying the equivalence between both systems. The exact form of the exchange-correlation functional is not known.
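For orientation, the Kohn-Sham construction referred to above leads to the standard single-particle equations

    \[
    \Big[-\tfrac{\hbar^2}{2m}\nabla^2 + v_{\mathrm{ext}}(\mathbf r) + v_H[n](\mathbf r) + v_{xc}[n](\mathbf r)\Big]\,\psi_i(\mathbf r)
    = \varepsilon_i\,\psi_i(\mathbf r),
    \qquad n(\mathbf r) = \sum_{i\,\in\,\mathrm{occ}} \lvert\psi_i(\mathbf r)\rvert^2,
    \]

which are solved self-consistently; the approximation enters entirely through the exchange-correlation potential v_xc, whose common choices are listed next.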
However, there are widely used methods for deriving functionals, such as the local density approximation (LDA), the generalized gradient approximation (GGA), and hybrid functionals (e.g., B3LYP). In our study, DFT was performed using the VASP code within the GGA/PBE approximation, and visualization/post-processing of the results was realized via the open-source software VESTA. Extensive DFT calculations were conducted to assess the prospects of 2D nanomaterials as anode/cathode electrode materials for batteries. The performance of metal-ion batteries strongly depends on the design of novel electrode materials. Two-dimensional (2D) materials have attracted remarkable interest for use as electrodes in battery cells due to their excellent properties. Desirable battery energy storage systems (BESS) must offer high energy density, safe operation, and efficient production costs. Batteries are used in electronic devices, provide a solution to environmental issues, and store the intermittent energy generated from renewable wind or solar power plants. Therefore, exploring optimal electrode materials can improve storage capacity and charging/discharging rates, leading to the design of advanced batteries. Our results at multiple scales highlight not only the efficiency of the proposed and employed methods but also the promising prospects of recently synthesized nanomaterials and their application as anode materials. In this way, first, a novel approach was developed for modeling a 1D nanotube as a continuum piezoelectric beam element. The results converged and matched closely with those from experiments and other more complex models. Then the mechanical properties of nanosheets were estimated, and the failure mechanism results provide a useful guide for further use in prospective applications. Our results give a comprehensive and useful picture of the mechanical properties of nanosheets with and without defects. Finally, the mechanical and electrochemical properties of several 2D nanomaterials are explored for the first time; their performance as anode materials indicates high potential for manufacturing super-stretchable and ultrahigh-capacity battery energy storage systems (BESS). Our results exhibited better performance in comparison to the available commercial anode materials. KW - Batterie KW - Modellierung KW - Nanostrukturiertes Material KW - Mechanical properties KW - Multi-scale modeling KW - Energiespeichersystem KW - Elektrodenmaterial KW - Elektrode KW - Mechanische Eigenschaft KW - Elektrochemische Eigenschaft KW - Electrochemical properties KW - Battery development KW - Nanomaterial Y1 - 2020 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20200623-41830 ER - TY - THES A1 - Abu Bakar, Ilyani Akmar T1 - Computational Analysis of Woven Fabric Composites: Single- and Multi-Objective Optimizations and Sensitivity Analysis in Meso-scale Structures N2 - This study presents a reliability analysis to address the mechanical behaviour issues existing in the current structural design of fabric structures. Purely predictive material models are highly desirable to facilitate an optimized design scheme and to significantly reduce time and cost at the design stage, for example for experimental characterization. The present study examines the role of three major tasks: a) single-objective optimization, b) sensitivity analyses and c) multi-objective optimization on proposed weave structures for woven fabric composites.
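Relating to the electrode-capacity considerations above, the theoretical gravimetric capacity of a candidate electrode material is commonly estimated from Faraday's constant; the short sketch below is illustrative, with the function name and example values chosen for this sketch rather than taken from the thesis.

    F = 96485.33  # Faraday constant, C/mol

    def theoretical_capacity_mAh_per_g(n_electrons, molar_mass_g_mol):
        """Theoretical gravimetric capacity, C = n * F / (3.6 * M), in mAh/g."""
        return n_electrons * F / (3.6 * molar_mass_g_mol)

    # sanity check with graphite (LiC6: one Li per six carbons, M(C6) = 72.06 g/mol)
    print(theoretical_capacity_mAh_per_g(1, 72.06))  # ~372 mAh/g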
For the single-objective optimization task, the first goal is to optimize the elastic properties of the proposed complex weave structure on a unit cell basis using periodic boundary conditions. We predict the geometric characteristics related to the skewness of woven fabric composites via an Evolutionary Algorithm (EA) and a parametric study. We also demonstrate the effect of complex weave structures on the fray tendency in woven fabric composites via a tightness evaluation. We utilize a procedure which does not require a numerical averaging process for evaluating the elastic properties of woven fabric composites. The fray tendency and skewness of woven fabrics depend upon the behaviour of the floats, which is related to the factor of weave. The results of this study may suggest a broader view for further research into the effects of complex weave structures or may provide an alternative to the fray and skewness problems of current weave structures in woven fabric composites. A comprehensive study is carried out on the complex weave structure model which adopts the dry woven fabric of the most promising pattern from the single-objective optimization, incorporating the uncertainty parameters of woven fabric composites. The comprehensive study covers regression-based and variance-based sensitivity analyses. The goal of the second task is to introduce the fabric uncertainty parameters and to elaborate how they can be incorporated into finite element models for macroscopic material parameters such as the elastic modulus and shear modulus of dry woven fabric subjected to uni-axial and biaxial deformations. Significant correlations in the study would indicate the need for a thorough investigation of woven fabric composites under uncertainty parameters. The study described here could serve as an alternative way to identify effective material properties without prolonged time consumption and expensive experimental tests. The last part focuses on a hierarchical stochastic multi-scale optimization approach (fine-scale and coarse-scale optimizations) under geometrical uncertainty parameters for hybrid composites considering the complex weave structure. The fine-scale optimization determines the best lamina pattern that maximizes its macroscopic elastic properties, conducted by EA under the following uncertain mesoscopic parameters: yarn spacing, yarn height, yarn width and misalignment of the yarn angle. The coarse-scale optimization has been carried out to optimize the stacking sequences of a symmetric hybrid laminated composite plate with uncertain mesoscopic parameters by employing the Ant Colony Algorithm (ACO). The objective functions of the coarse-scale optimization are to minimize the cost (C) and weight (W) of the hybrid laminated composite plate considering the fundamental frequency and the buckling load factor as the design constraints. Based on the uncertainty criteria of the design parameters, the appropriate variation required for the structural design standards can be evaluated using the reliability tool, and an optimized design decision in consideration of cost can subsequently be determined.
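As a toy illustration of the non-dominated (Pareto) filtering that underlies the cost/weight multi-objective optimization described above (this is not the EA or ACO machinery used in the thesis, and the layups and numbers are hypothetical):

    def pareto_front(designs):
        """Return the non-dominated designs for two minimization objectives (cost, weight)."""
        front = []
        for a in designs:
            dominated = any(
                b["cost"] <= a["cost"] and b["weight"] <= a["weight"]
                and (b["cost"] < a["cost"] or b["weight"] < a["weight"])
                for b in designs
            )
            if not dominated:
                front.append(a)
        return front

    # hypothetical stacking-sequence candidates
    candidates = [
        {"layup": "A", "cost": 1.0, "weight": 5.2},
        {"layup": "B", "cost": 1.4, "weight": 4.1},
        {"layup": "C", "cost": 1.6, "weight": 4.5},  # dominated by B
    ]
    print([d["layup"] for d in pareto_front(candidates)])  # ['A', 'B']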
T3 - ISM-Bericht // Institut für Strukturmechanik, Bauhaus-Universität Weimar - 2020,1 KW - Verbundwerkstoff KW - Gewebeverbundwerkstoff KW - woven composites Y1 - 2020 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20200605-41762 SN - 1610-7381 ER - TY - THES A1 - Radmard Rahmani, Hamid T1 - Artificial Intelligence Approach for Seismic Control of Structures N2 - In the first part of this research, the utilization of tuned mass dampers in the vibration control of tall buildings during earthquake excitations is studied. The main issues, such as optimizing the parameters of the dampers and studying the effects of the frequency content of the target earthquakes, are addressed. The non-dominated sorting genetic algorithm method is improved by upgrading its genetic operators and is utilized to develop a framework for determining the optimum placement and parameters of dampers in tall buildings. A case study is presented in which the optimal placement and properties of dampers are determined for a model of a tall building under different earthquake excitations through computer simulations. In the second part, a novel framework for the brain learning-based intelligent seismic control of smart structures is developed. In this approach, a deep neural network learns how to improve structural responses during earthquake excitations using feedback control. The reinforcement learning method is improved and utilized to develop a framework for training the deep neural network as an intelligent controller. The efficiency of the developed framework is examined through two case studies, including a single-degree-of-freedom system and a high-rise building under different earthquake excitation records. The results show that the controller gradually develops an optimum control policy to reduce the vibrations of a structure under earthquake excitation through a cyclical process of actions and observations. It is shown that the controller efficiently improves the structural responses under new earthquake excitations for which it was not trained. Moreover, it is shown that the controller has a stable performance under uncertainties. KW - Erdbeben KW - seismic control KW - tuned mass damper KW - reinforcement learning KW - earthquake KW - machine learning KW - Operante Konditionierung KW - structural control Y1 - 2020 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20200417-41359 ER - TY - THES A1 - Hossein Nezhad Shirazi, Ali T1 - Multi-Scale Modeling of Lithium ion Batteries: a thermal management approach and molecular dynamic studies N2 - Rechargeable lithium ion batteries (LIBs) play a very significant role in power supply and storage. In recent decades, LIBs have attracted tremendous attention in mobile communication, portable electronics, and electric vehicles. Furthermore, global warming has become a worldwide issue due to the ongoing production of greenhouse gases, which motivates solutions such as renewable sources of energy. Solar and wind energy are the most important renewable energy sources. As these technologies progress, they will require batteries to store the produced power and to balance power generation and consumption. Nowadays, rechargeable batteries such as LIBs are considered one of the best solutions. They provide high specific energy and high rate performance, while their rate of self-discharge is low. The performance of LIBs can be improved through the modification of battery characteristics.
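For orientation, the reinforcement-learning controller described above rests on value-update ideas whose classical tabular form is

    \[
    Q(s_t,a_t) \;\leftarrow\; Q(s_t,a_t) + \alpha\Big[r_{t+1} + \gamma\,\max_{a'} Q(s_{t+1},a') - Q(s_t,a_t)\Big],
    \]

with learning rate α, discount factor γ, state s, action a and reward r; the thesis replaces the table with a deep neural network, so this expression is only an orientation sketch, not the algorithm actually implemented there.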
The size of the solid particles in the electrodes can impact the specific energy and the cyclability of batteries. It can improve the lithium content in the electrode, which is a vital parameter for the capacity and capability of a battery. There exist different sources of heat generation in LIBs, such as the heat produced during electrochemical reactions and the internal resistance of the battery. The size of the electrode's electroactive particles can directly affect the heat produced in the battery. It will be shown that smaller solid particles enhance the thermal characteristics of LIBs. Thermal issues such as overheating, temperature maldistribution in the battery, and thermal runaway have confined the applications of LIBs. Such thermal challenges reduce the life cycle of LIBs. They may also lead to dangerous conditions such as fire or even explosion of batteries. However, recent advances in the fabrication of advanced materials such as graphene and carbon nanotubes, with extraordinary thermal conductivity and electrical properties, offer new opportunities to enhance their performance. Since experimental work is expensive, our objective is to use computational methods to investigate the thermal issues in LIBs. Dissipation of the heat produced in the battery can improve the cyclability and specific capacity of LIBs. In real applications, LIB packs consisting of several battery cells are used as the power source. Therefore, it is worth investigating the thermal characteristics of battery packs under cycles of charging/discharging operation at different applied current rates. To remove the heat produced in batteries, they can be surrounded by materials with high thermal conductivity. Paraffin wax absorbs a large amount of energy since it has a high latent heat; this absorption of energy occurs at a nearly constant temperature during the phase change. In addition, the thermal conductivity of paraffin can be enhanced with nano-materials such as graphene, CNTs, and fullerenes to form a nano-composite medium. Improving the thermal conductivity of LIBs increases the heat dissipation from batteries, which is a vital issue in battery thermal management systems. The application of two-dimensional (2D) materials has been on the rise since the exfoliation of graphene from bulk graphite. 2D materials are single-layered, with thicknesses on the order of nanometers, and show superior thermal, mechanical, and optoelectronic properties. They are potential candidates for energy storage and supply, particularly as electrode materials in lithium ion batteries. The high thermal conductivity of graphene and graphene-like materials can play a significant role in the thermal management of batteries. However, defects always exist in nano-materials since there is no ideal fabrication process. One of the most important defects in materials is the nano-crack, which can dramatically weaken the mechanical properties of the material. The newly synthesized crystalline carbon nitride with the stoichiometry C3N has attracted much attention due to its extraordinary mechanical and thermal properties. Another nano-material is phagraphene, which shows anisotropic mechanical characteristics that are attractive for the production of nanocomposites. It shows ductile fracture behavior when subjected to uniaxial loading. It is worth investigating the thermo-mechanical properties of these materials in their pristine and defective states. We hope that the findings of our work will not only be useful for both experimental and theoretical research but also help in the design of advanced electrodes for LIBs.
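As a hedged sketch of how elastic constants are typically extracted from the uniaxial MD tension runs mentioned above, the snippet below fits the small-strain slope of a stress-strain curve; the data are synthetic placeholders, not output of any simulation reported in the thesis.

    import numpy as np

    # placeholder engineering stress-strain data from a uniaxial tension run
    strain = np.linspace(0.0, 0.05, 51)
    stress = 800.0 * strain - 3000.0 * strain**2   # toy response in arbitrary stress units

    # Young's modulus from a linear fit over the small-strain region (here below 1 %)
    small = strain < 0.01
    E = np.polyfit(strain[small], stress[small], 1)[0]
    print(f"fitted E ~ {E:.0f} (same units as the stress column)")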
KW - Akkumulator KW - Battery KW - Batterie Y1 - 2020 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20200214-40986 ER - TY - THES A1 - Hossain, Md Naim T1 - Isogeometric analysis based on Geometry Independent Field approximaTion (GIFT) and Polynomial Splines over Hierarchical T-meshes N2 - This thesis addresses an adaptive higher-order method based on a Geometry Independent Field approximaTion (GIFT) of polynomial/rational splines over hierarchical T-meshes (PHT/RHT-splines). In isogeometric analysis, the basis functions used for constructing geometric models in computer-aided design (CAD) are also employed to discretize the partial differential equations (PDEs) for numerical analysis. Non-uniform rational B-splines (NURBS) are the most commonly used basis functions in CAD. However, they may not be ideal for numerical analysis where local refinement is required. The alternative method GIFT deploys different splines for geometry and numerical analysis. NURBS are utilized for the geometry representation, while for the field solution, PHT/RHT-splines are used. PHT-splines not only inherit the useful properties of B-splines and NURBS, but also possess the capabilities of local refinement and hierarchical structure. The smooth basis function properties of PHT-splines make them suitable for analysis purposes. While most problems considered in isogeometric analysis can be solved efficiently when the solution is smooth, many non-trivial problems have rough solutions. For example, this can be caused by the presence of re-entrant corners in the domain. For such problems, a tensor-product basis (as in the case of NURBS) is less suitable for resolving the singularities that appear, since refinement propagates throughout the computational domain. Hierarchical bases and local refinement (as in the case of PHT-splines) allow for a more efficient way to resolve these singularities by adding more degrees of freedom where they are necessary. In order to drive the adaptive refinement, an efficient recovery-based error estimator is proposed in this thesis. The estimator produces a recovery solution which is a more accurate approximation than the computed numerical solution. Several two- and three-dimensional numerical investigations with PHT-splines of higher order and continuity prove that the proposed method is capable of obtaining results with higher accuracy, better convergence, fewer degrees of freedom and less computational cost than NURBS for smooth solution problems. The adaptive GIFT method utilizing PHT-splines with the recovery-based error estimator is used for solutions with discontinuities or singularities, where adaptive local refinement in particular domains of interest achieves higher accuracy with fewer degrees of freedom. This method also proves that it can handle complicated multi-patch domains for two- and three-dimensional problems, outperforming uniform refinement in terms of degrees of freedom and computational cost.
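For orientation, recovery-based error indicators of the kind proposed above are commonly written as

    \[
    \eta_K = \lVert \sigma^{*} - \sigma_h \rVert_{L^2(K)}, \qquad
    \eta = \Big(\sum_K \eta_K^2\Big)^{1/2},
    \]

where σ_h is the computed stress (or flux) and σ* a smoother recovered field; elements with the largest η_K are marked for local refinement. This is the generic form only; the estimator developed in the thesis is tailored to the PHT-spline setting.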
T2 - Die isogeometrische Analysis basierend auf der geometrieunabhängigen Feldnäherung (GIFT) und polynomialen Splines über hierarchischen T-Netzen KW - Finite-Elemente-Methode KW - Isogeometric Analysis KW - Geometry Independent Field Approximation KW - Polynomial Splines over Hierarchical T-meshes KW - Recovery Based Error Estimator Y1 - 2019 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20191129-40376 ER - TY - THES A1 - Nickerson, Seth T1 - Thermo-Mechanical Behavior of Honeycomb, Porous, Microcracked Ceramics BT - Characterization and analysis of thermally induced stresses with specific consideration of synthetic, porous cordierite honeycomb substrates N2 - The underlying goal of this work is to reduce the uncertainty related to thermally induced stress prediction. This is accomplished by considering the use of non-linear material behavior, notably path-dependent thermal hysteresis in the elastic properties. The primary novel factors of this work center on two aspects. 1. Broad material characterization and mechanistic material understanding, giving insight into why this class of material behaves in its characteristic manner. 2. Development and implementation of a thermal hysteresis material model and its use to determine the impact on overall macroscopic stress predictions. The results highlight microcracking evolution and behavior as the dominant mechanism for material property complexity in this class of materials. Additionally, it was found that for the cases studied, thermal hysteresis behavior impacts the relevant peak stress predictions of a heavy-duty diesel particulate filter undergoing a drop-to-idle regeneration by less than ~15% for all conditions tested. It is also found that path-independent heating curves may be utilized under a linear solution assumption to simplify the analysis. This work brings forth a newly conceived concept of a 3-state, 4-path, thermally induced microcrack evolution process; demonstrates experimental behavior that is consistent with the proposed mechanisms; develops a mathematical framework that describes the process; and quantifies the impact in a real-world application space. T3 - ISM-Bericht // Institut für Strukturmechanik, Bauhaus-Universität Weimar - 2019,4 KW - Keramik KW - ceramics Y1 - 2019 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20190911-39753 ER - TY - THES A1 - Schemmann, Christoph T1 - Optimierung von radialen Verdichterlaufrädern unter Berücksichtigung empirischer und analytischer Vorinformationen mittels eines mehrstufigen Sampling Verfahrens T1 - Optimization of Centrifugal Compressor Impellers by a Multi-fidelity Sampling Method Taking Analytical and Empirical Information into Account N2 - Turbomachinery plays an important role in many cases of energy generation or conversion. Turbomachinery is therefore a promising starting point for optimization aimed at increasing the efficiency of energy use. In recent years, the use of automated optimization strategies in combination with numerical simulation has become increasingly popular in many fields of engineering. The complex interactions between fluid and solid mechanics encountered in turbomachines on the one hand and the high computational expense needed to calculate the performance on the other hand have, however, prevented a widespread use of these techniques in this field of engineering. The objective of this work was the development of a strategy for efficient metamodel-based optimization of centrifugal compressor impellers.
In this context, the main focus is the reduction of the required numerical expense. The central idea followed in this research was the incorporation of preliminary information acquired from low-fidelity computation methods and empirical correlations into the sampling process to identify promising regions of the parameter space. This information was then used to concentrate the numerically expensive high-fidelity computations of the fluid dynamic and structural mechanical performance of the impeller in these regions while still maintaining a good coverage of the whole parameter space. The development of the optimization strategy can be divided into three main tasks. Firstly, the available preliminary information had to be researched and rated. This research identified loss models based on one-dimensional flow physics and empirical correlations as the best-suited method to predict the aerodynamic performance. The loss models were calibrated using available performance data to obtain a high prediction quality. As no sufficiently exact models for the prediction of the mechanical loading of the impeller could be identified, a metamodel based on finite element computations was chosen for this estimation. The second task was the development of a sampling method which concentrates samples in regions of the parameter space where high-quality designs are predicted by the preliminary information while maintaining a good overall coverage. As available methods like rejection sampling or Markov-chain Monte-Carlo methods did not meet the requirements in terms of sample distribution and input correlation, a new multi-fidelity sampling method called “Filtered Sampling” has been developed. The last task was the development of an automated computational workflow. This workflow encompasses geometry parametrization, geometry generation, grid generation and computation of the aerodynamic performance and the structural mechanical loading. Special emphasis was put on the development of a geometry parametrization strategy based on fluid mechanic considerations to prevent the generation of physically inexpedient designs. Finally, the optimization strategy, which utilizes the previously developed tools, was successfully employed to carry out three optimization tasks. The efficiency of the method was proven by the first and second test case, where an existing compressor design was optimized by the presented method. The results were comparable to optimizations which did not take preliminary information into account, while the required computational expense could be halved. In the third test case, the method was applied to generate a new impeller design. In contrast to the previous examples, this optimization featured larger variations of the impeller designs. Therefore, the applicability of the method to parameter spaces with significantly varying designs could be proven, too. N2 - Turbomaschinen sind eine entscheidende Komponente in vielen Energiewandlungs- oder Energieerzeugungsprozessen und daher als vielversprechender Ansatzpunkt für eine Effizienzsteigerung der Energie- und Ressourcennutzung anzusehen. Im Laufe des letzten Jahrzehnts haben automatisierte Optimierungsmethoden in Verbindung mit numerischer Simulation zunehmend breitere Verwendung als Mittel zur Effizienzsteigerung in vielen Bereichen der Ingenieurwissenschaften gefunden.
Allerdings standen die komplexen Interaktionen zwischen Strömungs- und Strukturmechanik sowie der hohe numerische Aufwand einem weitverbreiteten Einsatz dieser Methoden im Turbomaschinenbereich bisher entgegen. Das Ziel dieser Forschungsaktivität ist die Entwicklung einer effizienten Strategie zur metamodellbasierten Optimierung von radialen Verdichterlaufrädern. Dabei liegt der Schwerpunkt auf einer Reduktion des benötigten numerischen Aufwandes. Der in diesem Vorhaben gewählte Ansatz ist das Einbeziehen analytischer und empirischer Vorinformationen (“low-fidelity”) in den Sampling-Prozess, um vielversprechende Bereiche des Parameterraumes zu identifizieren. Diese Informationen werden genutzt, um die aufwendigen numerischen Berechnungen (“high-fidelity”) des strömungs- und strukturmechanischen Verhaltens der Laufräder in diesen Bereichen zu konzentrieren, während gleichzeitig eine ausreichende Abdeckung des gesamten Parameterraumes sichergestellt wird. Die Entwicklung der Optimierungsstrategie ist in drei zentrale Arbeitspakete aufgeteilt. In einem ersten Schritt werden die verfügbaren empirischen und analytischen Methoden gesichtet und bewertet. In dieser Recherche sind Verlustmodelle basierend auf eindimensionaler Strömungsmechanik und empirischen Korrelationen als bestgeeignete Methode zur Vorhersage des aerodynamischen Verhaltens der Verdichter identifiziert worden. Um eine hohe Vorhersagegüte sicherzustellen, sind diese Modelle anhand verfügbarer Leistungsdaten kalibriert worden. Da zur Vorhersage der mechanischen Belastung des Laufrades keine brauchbaren analytischen oder empirischen Modelle ermittelt werden konnten, ist hier ein Metamodell basierend auf Finite-Elemente-Berechnungen gewählt worden. Das zweite Arbeitspaket beinhaltet die Entwicklung der angepassten Samplingmethode, welche Samples in Bereichen des Parameterraumes konzentriert, die auf Basis der Vorinformationen als vielversprechend angesehen werden können. Gleichzeitig müssen eine gleichmäßige Abdeckung des gesamten Parameterraumes und ein niedriges Niveau an Eingangskorrelationen sichergestellt sein. Da etablierte Methoden wie Markov-Ketten-Monte-Carlo-Methoden oder die Verwerfungsmethode diese Voraussetzungen nicht erfüllen, ist ein neues, mehrstufiges Samplingverfahren (“Filtered Sampling”) entwickelt worden. Das letzte Arbeitspaket umfasst die Entwicklung eines automatisierten Simulations-Workflows. Dieser Workflow umfasst Geometrieparametrisierung, Geometrieerzeugung, Netzerzeugung sowie die Berechnung des aerodynamischen Betriebsverhaltens und der strukturmechanischen Belastung. Dabei liegt ein Schwerpunkt auf der Entwicklung eines Parametrisierungskonzeptes, welches auf strömungsmechanischen Zusammenhängen beruht, um so physikalisch nicht zielführende Parameterkombinationen zu vermeiden. Abschließend ist die auf den zuvor entwickelten Werkzeugen aufbauende Optimierungsstrategie erfolgreich eingesetzt worden, um drei Optimierungsfragestellungen zu bearbeiten. Im ersten und zweiten Testcase sind bestehende Verdichterlaufräder mit der vorgestellten Methode optimiert worden. Die erzielten Optimierungsergebnisse sind von ähnlicher Güte wie die solcher Optimierungen, die keine Vorinformationen berücksichtigen, allerdings wird nur die Hälfte an numerischem Aufwand benötigt. In einem dritten Testcase ist die Methode eingesetzt worden, um ein neues Laufraddesign zu erzeugen. Im Gegensatz zu den vorherigen Beispielen werden im Rahmen dieser Optimierung stark unterschiedliche Designs untersucht.
Dadurch kann an diesem dritten Beispiel aufgezeigt werden, dass die Methode auch für Parameterräume mit stark variierenden Designs funktioniert. T3 - ISM-Bericht // Institut für Strukturmechanik, Bauhaus-Universität Weimar - 2019,3 KW - Simulation KW - Maschinenbau KW - Optimierung KW - Strömungsmechanik KW - Strukturmechanik Y1 - 2019 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20190910-39748 ER - TY - THES A1 - Tan, Fengjie T1 - Shape Optimization Design of Arch Type Dams under Uncertainties N2 - Due to an increased need for hydro-electricity, water storage, and flood protection, it is assumed that a series of new dams will be built throughout the world. Comparing existing design methodologies for arch-type dams, model-based shape optimization can effectively reduce construction costs and leverage the properties of construction materials. To apply the means of shape optimization, suitable variables need to be chosen to formulate the objective function, which here is the volume of the arch dam. In order to increase the consistency with practical conditions, a great number of geometrical and behavioral constraints are included in the mathematical model. An optimization method, namely the Genetic Algorithm, is adopted, which allows a global search. Traditional optimization techniques are realized based on a deterministic approach, which means that the material properties and loading conditions are assumed to be fixed values. As a result, the real-world structures that are optimized by these approaches suffer from uncertainties that one needs to be aware of. Hence, in any optimization process for arch dams, it is necessary to find a methodology that is capable of considering the influences of uncertainties and generating a solution which is robust enough against the uncertainties. The focus of this thesis is the formulation and the numerical method for the optimization of the arch dam under uncertainties. The two main models, the probabilistic model and the non-probabilistic models, are introduced and discussed. Classic procedures of probabilistic approaches under uncertainties, such as RDO (robust design optimization) and RBDO (reliability-based design optimization), are in general computationally expensive and rely on estimates of the system’s response variance and failure probabilities. Instead, the robust optimization (RO) method, which is based on the non-probabilistic model, does not follow a full probabilistic approach but works with pre-defined confidence levels. This leads to a bi-level optimization program where the volume of the dam is optimized under the worst combination of the uncertain parameters. By this, robust and reliable designs are obtained and the result is independent of any assumptions on stochastic properties of the random variables in the model. The optimization of an arch-type dam is realized here by a robust optimization method under load uncertainty, where hydraulic and thermal loads are considered. The load uncertainty is modeled as an ellipsoidal expression. Compared with a traditional deterministic optimization (DO) method, which only concerns the minimum objective value and offers a solution candidate close to limit-states, the RO method provides a robust solution against uncertainties. All the above-mentioned methods are applied to the optimization of the arch dam and compared with the optimal design obtained by DO methods. The results are compared and analyzed to discuss the advantages and drawbacks of each method.
In order to reduce the computational cost, a ranking strategy and an approximation model are further employed for a preliminary screening. By means of these, the robust design can generate an improved arch dam structure which ensures both safety and serviceability during its lifetime. T3 - ISM-Bericht // Institut für Strukturmechanik, Bauhaus-Universität Weimar - 2019,2 KW - Wasserbau KW - Staudamm KW - dams Y1 - 2019 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20190819-39608 ER - TY - THES A1 - Alalade, Muyiwa T1 - An Enhanced Full Waveform Inversion Method for the Structural Analysis of Dams N2 - Since the Industrial Revolution in the 1700s, the high emission of gaseous wastes into the atmosphere from the usage of fossil fuels has caused a general increase in temperatures globally. To combat the environmental imbalance, there is an increase in the demand for renewable energy sources. Dams play a major role in the generation of “green” energy. However, these structures require frequent and strict monitoring to ensure safe and efficient operation. To tackle the challenges faced in the application of conventional dam monitoring techniques, this work proposes the inverse analysis of numerical models to identify damaged regions in the dam. Using a dynamic coupled hydro-mechanical Extended Finite Element Method (XFEM) model and a global optimization strategy, damage (a crack) in the dam is identified. By employing seismic waves to probe the dam structure, more detailed information on the distribution of heterogeneous materials and damaged regions is obtained by the application of the Full Waveform Inversion (FWI) method. The FWI is based on a local optimization strategy and is thus highly dependent on the starting model. A variety of data acquisition setups are investigated, and an optimal setup is proposed. The effect of different starting models and noise in the measured data on the damage identification is considered. Combining the starting-model independence of the global-optimization-based dynamic coupled hydro-mechanical XFEM method and the detailed output of the local-optimization-based FWI method, an enhanced Full Waveform Inversion is proposed for the structural analysis of dams. T3 - ISM-Bericht // Institut für Strukturmechanik, Bauhaus-Universität Weimar - 2019,1 KW - Talsperre KW - Staumauer KW - Damage identification KW - Inverse analysis KW - Dams KW - Full waveform inversion KW - Wave propagation Y1 - 2019 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20190813-39566 ER - TY - THES A1 - ZHANG, CHAO T1 - Crack Identification using Dynamic Extended Finite Element Method and Thermal Conductivity Engineering for Nanomaterials N2 - Identification of flaws in structures is a critical element in the management of maintenance and quality assurance processes in engineering. Nondestructive testing (NDT) techniques based on a wide range of physical principles have been developed and are used in common practice for structural health monitoring. However, basic NDT techniques are usually limited in their ability to provide accurate information on the locations, dimensions and shapes of flaws. One alternative to extract additional information from the results of NDT is to combine it with a computational model that provides a detailed analysis of the physical process involved and enables the accurate identification of the flaw parameters.
The aim here is to develop strategies to uniquely identify cracks in two-dimensional (2D) structures under dynamic loadings. A local NDT technique that combines the eXtended Finite Element Method (XFEM) with dynamic loading in order to identify cracks in structures quickly and accurately is developed in this dissertation. The Newmark-β time integration method with Rayleigh damping is used for the time integration. We apply the Nelder-Mead (NM) and Quasi-Newton (QN) methods for identifying the crack tip in a plate. The inverse problem is solved iteratively, in which XFEM is used for solving the forward problem in each iteration. For a time-harmonic excitation with a single frequency and a short-duration signal measured along part of the external boundary, the crack is detected through the solution of an inverse time-dependent problem. Compared to static loads, we show that dynamic loads are more effective for crack detection problems. Moreover, we tested different dynamic loads and find that the NM method works more efficiently under the harmonic load than under the pounding load, while the QN method achieves almost the same results for both load types. A global strategy, the Multilevel Coordinate Search (MCS) with XFEM (XFEM-MCS) methodology under dynamic electric load, to detect multiple cracks in 2D piezoelectric plates is proposed in this dissertation. The Newmark-β method is employed for the time integration, and in each iteration the forward problem is solved by XFEM for various cracks. The objective functional is minimized by using the global search algorithm MCS. The test problems show that the XFEM-MCS algorithm under dynamic electric load can be effectively employed for multiple-crack detection in piezoelectric materials, and it proves to be robust in identifying defects in piezoelectric structures. Fiber-reinforced composites (FRCs) are extensively applied in practical engineering since they have high stiffness and strength. Experiments reveal a so-called interphase zone, i.e. the space between the outside interface of the fiber and the inside interface of the matrix. The interphase strength between the fiber and the matrix strongly affects the mechanical properties as a result of the large interface/volume ratio. For the purpose of understanding the mechanical properties of FRCs with a functionally graded interphase (FGI), a closed-form expression of the interface strength between a fiber and a matrix is obtained in this dissertation using a continuum modeling approach according to the van der Waals (vdW) forces. Based on the interatomic potential, we develop a new modified nonlinear cohesive law, which is applied to study the interface delamination of FRCs with FGI under different loadings. The analytical solutions show that the delamination behavior strongly depends on the interphase thickness, the fiber radius, and the Young’s moduli and Poisson’s ratios of the fiber and the matrix. Thermal conductivity is the property of a material to conduct heat. With the development and intensive research of 2D materials, especially graphene and molybdenum disulfide (MoS2), the thermal conductivity of 2D materials has attracted wide attention. The thermal conductivity of graphene nanoribbons (GNRs) is found to decrease under tensile strain in classical molecular dynamics (MD) simulations.
Hence, strain effects in graphene can play a key role in the continuous tunability and applicability of its thermal conductivity at the nanoscale, and the degradation of thermal conductivity is an obstacle for thermal management applications. Up to now, the thermal conductivity of graphene under shear deformation has not been investigated. From a practical point of view, good thermal management of GNRs has significant potential for future GNR-based thermal nanodevices, which can greatly improve the performance of nanosized devices by controlling heat dissipation. Meanwhile, since graphene is a thin membrane structure, it is also important to understand its wrinkling behavior under shear deformation. MoS2 exists in the stable semiconducting 1H phase (1H-MoS2), while the metallic 1T phase (1T-MoS2) is unstable at ambient conditions. It is well known that much attention has been focused on studying the nonlinear optical properties of 1H-MoS2. In very recent research, the 1T-type monolayer crystals of TMDCs, MX2 (MoS2, WS2, ...), were reported to have an intrinsic in-plane negative Poisson’s ratio. Luckily, nearly at the same time, unprecedented long-term (>3 months) air stability of 1T-MoS2 could be achieved by using the donor lithium hydride (LiH). Therefore, it is very important to study the thermal conductivity of 1T-MoS2. The thermal conductivity of graphene under shear strain is systematically studied in this dissertation by MD simulations. The results show that, in contrast to the dramatic decrease of the thermal conductivity of graphene under uniaxial tension, the thermal conductivity of graphene is not sensitive to shear strain, and the thermal conductivity decreases by only 12-16%. The wrinkle evolves when the shear strain is around 5%-10%, but the thermal conductivity barely changes. The thermal conductivities of single-layer 1H-MoS2 (1H-SLMoS2) and single-layer 1T-MoS2 (1T-SLMoS2) with different sample sizes, temperatures and strain rates have been studied systematically in this dissertation. We find that the thermal conductivities of 1H-SLMoS2 and 1T-SLMoS2 in both the armchair and the zigzag directions increase with increasing sample length, while increasing the width of the sample has only a minor effect on the thermal conduction of these two structures. The thermal conductivity of 1H-SLMoS2 is smaller than that of 1T-SLMoS2 due to this size effect. Furthermore, the temperature effect results show that the thermal conductivities of both 1H-SLMoS2 and 1T-SLMoS2 decrease with increasing temperature. The thermal conductivities of 1H-SLMoS2 and 1T-SLMoS2 are nearly the same (difference <6%) in both chiral orientations at corresponding temperatures, especially in the armchair direction (difference <2.8%). Moreover, we find that the strain effects on the thermal conductivity of 1H-SLMoS2 and 1T-SLMoS2 are different. More specifically, the thermal conductivity decreases with increasing tensile strain rate for 1T-SLMoS2, while it fluctuates with growing strain for 1H-SLMoS2. Finally, we find that the thermal conductivity of same-sized 1H-SLMoS2 is similar to that of the strained 1H-SLMoS2 structure.
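The crack-identification part of the abstract above boils down to an iterative inverse problem: repeatedly solve a forward dynamic model for a trial flaw and let an optimizer (Nelder-Mead, Quasi-Newton or MCS) minimize the mismatch with boundary measurements. The sketch below illustrates only that loop; the cheap analytic `forward_response` is an invented stand-in for the dynamic XFEM forward solve, and the sensor layout and signals are synthetic assumptions, not data or code from the dissertation.

```python
# Illustrative sketch: inverse crack-tip identification via Nelder-Mead,
# with a toy analytic forward model replacing the dynamic XFEM solver.
import numpy as np
from scipy.optimize import minimize

sensors = np.linspace(0.0, 1.0, 8)            # sensor positions along a boundary edge

def forward_response(crack_tip, t):
    """Surrogate forward model: a crack tip at (x, y) delays the sensor signals.
    In the real setting this would be a time-dependent XFEM simulation."""
    x, y = crack_tip
    dist = np.sqrt((sensors - x) ** 2 + y ** 2)             # sensor-to-tip distances
    return np.sin(2 * np.pi * (t[None, :] - dist[:, None])) # one signal per sensor

t = np.linspace(0.0, 2.0, 200)
true_tip = np.array([0.62, 0.35])
measured = forward_response(true_tip, t)      # synthetic "measurement"

def misfit(params):
    """Least-squares mismatch between measured and simulated boundary signals."""
    return np.sum((forward_response(params, t) - measured) ** 2)

result = minimize(misfit, x0=np.array([0.4, 0.2]), method="Nelder-Mead",
                  options={"xatol": 1e-6, "fatol": 1e-10})
print("identified crack tip:", np.round(result.x, 3), " true tip:", true_tip)
```

Swapping `minimize` for a global search (as the XFEM-MCS strategy does) changes only the outer optimizer; the forward-solve-in-the-loop structure stays the same.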
T3 - ISM-Bericht // Institut für Strukturmechanik, Bauhaus-Universität Weimar - 2018,6 KW - crack KW - Wärmeleitfähigkeit KW - crack identification KW - thermal conductivity Y1 - 2018 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20190119-38478 ER - TY - THES A1 - Keßler, Andrea T1 - Matrix-free voxel-based finite element method for materials with heterogeneous microstructures T1 - Matrixfreie voxelbasierte Finite-Elemente-Methode für Materialien mit komplizierter Mikrostruktur N2 - Modern imaging techniques such as micro computer tomography (μCT), magnetic resonance imaging (MRI) and scanning electron microscopy (SEM) provide us with high-resolution images of the microstructure of materials in a non-invasive and convenient way. They form the basis for the geometrical models of high-resolution analysis, so-called image-based analysis. However, especially in 3D, discretizations of these models easily reach a size of 100 million degrees of freedom and require extensive hardware resources in terms of main memory and computing power to solve the numerical model. Consequently, the focus of this work is to combine and adapt numerical solution methods to reduce the memory demand first and then the computation time, and thereby enable the execution of image-based analysis on modern desktop computers. Hence, the numerical model is a straightforward grid discretization of the voxel-based (pixels with a third dimension) geometry, which omits the boundary detection algorithms and allows reduced storage of the finite element data structure and a matrix-free solution algorithm. This in turn reduces the effort of almost all applied grid-based solution techniques and results in memory-efficient and numerically stable algorithms for the microstructural models. Two variants of the matrix-free algorithm are presented. The efficient iterative solution method of conjugate gradients is used with preconditioners that can be applied matrix-free, such as the Jacobi and the especially well-suited multigrid method. The jagged material boundaries of the voxel-based mesh are smoothed through embedded boundary elements, which contain different material information at the integration points and are integrated sub-cell-wise, though without additional boundary detection. The efficiency of the matrix-free methods can be retained. N2 - Moderne bildgebende Verfahren wie Mikro-Computertomographie (μCT), Magnetresonanztomographie (MRT) und Rasterelektronenmikroskopie (SEM) liefern nicht-invasiv hochauflösende Bilder der Mikrostruktur von Materialien. Sie bilden die Grundlage der geometrischen Modelle der hochauflösenden bildbasierten Analysis. Allerdings erreichen vor allem in 3D die Diskretisierungen dieser Modelle leicht die Größe von 100 Mill. Freiheitsgraden und erfordern umfangreiche Hardware-Ressourcen in Bezug auf Hauptspeicher und Rechenleistung, um das numerische Modell zu lösen. Der Fokus dieser Arbeit liegt daher darin, numerische Lösungsmethoden zu kombinieren und anzupassen, um den Speicherplatzbedarf und die Rechenzeit zu reduzieren und damit eine Ausführung der bildbasierten Analyse auf modernen Computer-Desktops zu ermöglichen. Daher ist als numerisches Modell eine einfache Gitterdiskretisierung der voxelbasierten (Pixel mit der Tiefe als dritter Dimension) Geometrie gewählt, die die Oberflächenerstellung weglässt und eine reduzierte Speicherung der finiten Elemente und einen matrixfreien Lösungsalgorithmus ermöglicht.
Dies wiederum verringert den Aufwand von fast allen angewandten gitterbasierten Lösungsverfahren und führt zu Speichereffizienz und numerisch stabilen Algorithmen für die Mikrostrukturmodelle. Es werden zwei Varianten der Anpassung der matrixfreien Lösung präsentiert, die Element-für-Element-Methode und eine Knoten-Kanten-Variante. Die Methode der konjugierten Gradienten in Kombination mit dem Mehrgitterverfahren als sehr effizientem Vorkonditionierer wird für den matrixfreien Lösungsalgorithmus adaptiert. Der stufige Verlauf der Materialgrenzen durch die voxelbasierte Diskretisierung wird durch Elemente geglättet, die am Integrationspunkt unterschiedliche Materialinformationen enthalten und über Teilzellen integriert werden (embedded boundary elements). Die Effizienz der matrixfreien Verfahren bleibt erhalten. T3 - ISM-Bericht // Institut für Strukturmechanik, Bauhaus-Universität Weimar - 2018,7 KW - Dissertation KW - Finite-Elemente-Methode KW - Konjugierte-Gradienten-Methode KW - Mehrgitterverfahren KW - conjugate gradient method KW - multigrid method KW - grid-based KW - finite element method KW - matrix-free Y1 - 2018 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20190116-38448 ER - TY - THES A1 - Vollmering, Max T1 - Damage Localization of Mechanical Structures by Subspace Identification and Krein Space Based H-infinity Estimation N2 - This dissertation is devoted to the theoretical development and experimental laboratory verification of a new damage localization method: the state projection estimation error (SP2E). This method is based on the subspace identification of mechanical structures, Krein space based H-infinity estimation and oblique projections. To explain the SP2E method, several theories are discussed and laboratory experiments have been conducted and analysed. A fundamental approach of structural dynamics is outlined first by explaining mechanical systems based on first principles. Following that, a fundamentally different approach, subspace identification, is comprehensively explained. While both approaches, first-principles and subspace-identification-based descriptions of mechanical systems, may be seen as widespread methods, less widely known and new techniques follow. Therefore, the indefinite quadratic estimation theory is explained. Based on a Popov function approach, this leads to the Krein space based H-infinity theory. Subsequently, a new method for damage identification, namely SP2E, is proposed. Here, the introduction of a difference process, its analysis by means of the average process power and the application of oblique projections are discussed in depth. Finally, the new method is verified in laboratory experiments. To this end, the identification of a laboratory structure at Leipzig University of Applied Sciences is elaborated. Then structural alterations are experimentally applied, which are localized by SP2E afterwards. In the end, four experimental sensitivity studies are shown and discussed. For each measurement series the structural alteration was increased, which was successfully tracked by SP2E. The experimental results are plausible and in accordance with the developed theories. By repeating these experiments, the applicability of SP2E for damage localization is experimentally proven.
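The abstract above rests on subspace identification as the first ingredient of SP2E. As a point of reference, the sketch below shows only the textbook core of covariance-driven, output-only subspace identification, a block Hankel matrix of outputs, an SVD, and the shift invariance of the observability matrix, applied to a simulated single-degree-of-freedom oscillator. The simulated structure, the chosen orders and the frequency extraction are assumptions for illustration; none of this is the SP2E implementation or its Krein-space H-infinity estimation stage.

```python
# Minimal sketch of covariance-driven, output-only subspace identification
# (textbook illustration, not the SP2E algorithm).
import numpy as np
from scipy.linalg import expm

def simulate_sdof(n=3000, dt=0.01, f0=1.5, zeta=0.02, seed=0):
    """White-noise excited single-DOF oscillator, displacement output."""
    rng = np.random.default_rng(seed)
    w0 = 2 * np.pi * f0
    Ac = np.array([[0.0, 1.0], [-w0 ** 2, -2 * zeta * w0]])
    A = expm(dt * Ac)                          # exact discretization of the free dynamics
    x = np.zeros(2); y = np.empty(n)
    for k in range(n):
        x = A @ x + np.array([0.0, dt]) * rng.standard_normal()
        y[k] = x[0]
    return y, dt

def subspace_identify(y, order=2, rows=20):
    """Estimate the discrete system matrix A from output data only."""
    cols = len(y) - 2 * rows
    H = np.array([y[i:i + cols] for i in range(2 * rows)])   # Hankel matrix of outputs
    cov = H[rows:] @ H[:rows].T / cols                       # future-past covariance block
    U, s, _ = np.linalg.svd(cov, full_matrices=False)
    Obs = U[:, :order] * np.sqrt(s[:order])                  # observability matrix estimate
    return np.linalg.pinv(Obs[:-1]) @ Obs[1:]                # shift invariance gives A

y, dt = simulate_sdof()
A = subspace_identify(y)
lam = np.linalg.eigvals(A).astype(complex)
freqs = np.abs(np.log(lam)) / (2 * np.pi * dt)               # identified natural frequencies
print("identified natural frequencies [Hz]:", np.round(freqs, 3))
```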
T3 - ISM-Bericht // Institut für Strukturmechanik, Bauhaus-Universität Weimar - 2018,5 KW - Strukturmechanik KW - Schätztheorie Y1 - 2018 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20180730-37728 ER - TY - THES A1 - Hamdia, Khader T1 - On the fracture toughness of polymeric nanocomposites: Comprehensive stochastic and numerical studies N2 - Polymeric nanocomposites (PNCs) are considered for numerous nanotechnology applications, such as nano-biotechnology, nano-systems, nanoelectronics, and nano-structured materials. Commonly, they are formed by a polymer (epoxy) matrix reinforced with a nanosized filler. The addition of rigid nanofillers to the epoxy matrix has offered great improvements in the fracture toughness without sacrificing other important thermo-mechanical properties. The physics of fracture in PNCs is rather complicated and is influenced by different parameters. The presence of uncertainty in the predicted output is expected as a result of stochastic variance in the factors affecting the fracture mechanism. Consequently, evaluating the improved fracture toughness in PNCs is a challenging problem. An artificial neural network (ANN) and an adaptive neuro-fuzzy inference system (ANFIS) have been employed to predict the fracture energy of polymer/particle nanocomposites. The ANN and ANFIS models were constructed, trained, and tested based on a collection of 115 experimental datasets gathered from the literature. The performance evaluation indices of the developed ANN and ANFIS showed relatively small errors, with high coefficients of determination (R2), and low root mean square error and mean absolute percentage error. In the framework of uncertainty quantification of PNCs, a sensitivity analysis (SA) has been conducted to examine the influence of uncertain input parameters on the fracture toughness of polymer/clay nanocomposites (PNCs). The phase-field approach is employed to predict the macroscopic properties of the composite considering six uncertain input parameters. The efficiency, robustness, and repeatability are compared and evaluated comprehensively for five different SA methods. The Bayesian method is applied to develop a methodology in order to evaluate the performance of different analytical models used in predicting the fracture toughness of polymeric particle nanocomposites. The developed method considers the model and parameter uncertainties based on different reference data (experimental measurements) gathered from the literature. Three analytical models differing in theory and assumptions were examined. The coefficients of variation of the model predictions relative to the measurements are calculated using the approximated optimal parameter sets. Then, the model selection probability is obtained with respect to the different reference data. Stochastic finite element modeling is implemented to predict the fracture toughness of polymer/particle nanocomposites. For this purpose, a 2D finite element model containing an epoxy matrix and rigid nanoparticles surrounded by an interphase zone is generated. The crack propagation is simulated by the cohesive segments method and phantom nodes. Considering the uncertainties in the input parameters, a polynomial chaos expansion (PCE) surrogate model is constructed, followed by a sensitivity analysis.
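The abstract above uses data-driven regression (ANN/ANFIS) to map material and filler parameters to fracture energy. The following is a minimal sketch of that regression idea only, using scikit-learn on synthetic data; the chosen features (filler volume fraction, particle diameter, neat-matrix fracture energy), the toy data-generating relation and the network size are all invented stand-ins for the 115 literature datasets and the models of the thesis.

```python
# Illustrative sketch: neural-network regression of fracture energy on
# synthetic nanocomposite features (not the thesis data or model).
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.metrics import r2_score

rng = np.random.default_rng(1)
n = 115
vf = rng.uniform(0.0, 0.15, n)            # filler volume fraction [-]
d = rng.uniform(20.0, 200.0, n)           # particle diameter [nm]
g_matrix = rng.uniform(100.0, 300.0, n)   # neat-matrix fracture energy [J/m^2]
# Synthetic "measured" fracture energy with noise (toy relation, not physics).
g_c = g_matrix * (1.0 + 8.0 * vf - 0.002 * d * vf) + rng.normal(0.0, 10.0, n)

X = np.column_stack([vf, d, g_matrix])
X_train, X_test, y_train, y_test = train_test_split(X, g_c, test_size=0.25, random_state=0)

model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=5000, random_state=0))
model.fit(X_train, y_train)
print("R2 on held-out data:", round(r2_score(y_test, model.predict(X_test)), 3))
```

With real experimental datasets in place of the synthetic arrays, the same train/test split and R2-style evaluation reproduce the kind of performance indices reported in the abstract.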
T3 - ISM-Bericht // Institut für Strukturmechanik, Bauhaus-Universität Weimar - 2018,4 KW - Bruch KW - Unsicherheit KW - Rissausbreitung KW - Bayes KW - Sensitivitätsanalyse KW - Fracture mechanics KW - Uncertainty analysis KW - Polymer nanocomposites KW - Bayesian method KW - Phase-field modeling Y1 - 2018 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20180712-37652 ER - TY - THES A1 - Wang, Cuixia T1 - Nanomechanical Resonators Based on Quasi-two-dimensional Materials N2 - Advances in nanotechnology have led to the development of nano-electro-mechanical systems (NEMS) such as nanomechanical resonators with ultra-high resonant frequencies. These ultra-high-frequency resonators have recently received significant attention for wide-ranging applications such as molecular separation, molecular transportation, ultra-high sensitive sensing, high-frequency signal processing, and biological imaging. It is well known that at the micrometer length scale, the first-principles technique, the most accurate approach, faces serious limitations for comparisons with experimental studies. For such larger sizes, classical molecular dynamics (MD) simulations are desirable, which require interatomic potentials. Additionally, a mesoscale method such as the coarse-grained (CG) method is another useful approach to support simulations for even larger system sizes. Furthermore, quasi-two-dimensional (Q2D) materials have attracted intensive research interest over the past decades due to their many novel properties. However, the energy dissipation mechanisms of nanomechanical resonators based on several Q2D materials are still unknown. In this work, the main issues addressed include the development of CG models for molybdenum disulphide (MoS2), the investigation of mechanism effects on black phosphorus (BP) nanoresonators, and the application of graphene nanoresonators. The primary coverage and results of the dissertation are as follows: Method development. Firstly, a two-dimensional (2D) CG model for single-layer MoS2 (SLMoS2) is analytically developed. The Stillinger-Weber (SW) potential for this 2D CG model is further parametrized, in which all SW geometrical parameters are determined analytically according to the equilibrium condition for each individual potential term, while the SW energy parameters are derived analytically based on the valence force field model. Next, the 2D CG model is further simplified to a one-dimensional (1D) CG model, which describes the 2D SLMoS2 structure using a 1D chain model. This 1D CG model is applied to investigate the relaxed configuration and the resonant oscillation of folded SLMoS2. Owing to the simplicity of the 1D CG model, the relaxed configuration of the folded SLMoS2 is determined analytically, and the resonant oscillation frequency is derived analytically. Considering the increasing interest in studying the properties of other 2D layered materials, and in particular those in the semiconducting transition metal dichalcogenide class like MoS2, the CG models proposed in the current work provide valuable simulation approaches. Mechanism understanding. Two energy dissipation mechanisms of BP nanoresonators are focused on exclusively, i.e., mechanical strain effects and defect effects (including vacancy and oxidation). A vacancy defect is an intrinsic damping factor for the quality (Q-)factor, while mechanical strain and oxidation are extrinsic damping factors. Intrinsic dissipation (induced by thermal vibrations) in BP resonators (BPRs) is investigated first.
Specifically, classical MD simulations are performed to examine the temperature dependence of the Q-factor of the single-layer BPR (SLBPR) along the armchair and zigzag directions, where a two-step fitting procedure is used to extract the frequency and Q-factor from the kinetic energy time history. The Q-factors of BPRs are evaluated through comparison with those of graphene and MoS2 nanoresonators. Next, the effects of mechanical strain, vacancy and oxidation on BP nanoresonators are investigated in turn. Considering the increasing interest in studying the properties of BP, and in particular the lack of theoretical studies of BPRs, the results in the current work provide a useful reference. Application. A novel application for graphene nanoresonators, using them to self-assemble small nanostructures such as water chains, is proposed. All of the underlying physics enabling this phenomenon is elucidated. In particular, by drawing inspiration from macroscale self-assembly using the higher-order resonant modes of Chladni plates, classical MD simulations are used to investigate the self-assembly of water molecules using graphene nanoresonators. An analytic formula for the critical resonant frequency based on the interaction between water molecules and graphene is provided. Furthermore, the properties of the water chains assembled by the graphene nanoresonators are studied. T3 - ISM-Bericht // Institut für Strukturmechanik, Bauhaus-Universität Weimar - 2018,3 KW - Nanomechanik KW - Resonator KW - Nanomechanical Resonators Y1 - 2018 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20180709-37609 ER - TY - THES A1 - Abbas, Tajammal T1 - Assessment of Numerical Prediction Models for Aeroelastic Instabilities of Bridges N2 - The phenomenon of aerodynamic instability caused by the wind is usually a major design criterion for long-span cable-supported bridges. If the wind speed exceeds the critical flutter speed of the bridge, this constitutes an Ultimate Limit State. The prediction of the flutter boundary therefore requires accurate and robust models. The complexity and uncertainty of models for such engineering problems demand strategies for model assessment. This study is an attempt to use the concepts of sensitivity and uncertainty analyses to assess the aeroelastic instability prediction models for long-span bridges. The state-of-the-art theory concerning the determination of the flutter stability limit is presented. Since flutter is a coupling of aerodynamic forcing with a structural dynamics problem, different types and classes of structural and aerodynamic models can be combined to study the interaction. Here, both numerical approaches and analytical models are utilised and coupled in different ways to assess the prediction quality of the coupled model.
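The flutter-boundary determination referred to in the abstract above is, in its simplest form, an eigenvalue-tracking exercise: assemble a heave/pitch section model, add wind-speed-dependent aerodynamic damping and stiffness, and find the speed at which one mode's effective damping vanishes. The schematic sketch below shows only that workflow; the structural data and the constant "aerodynamic derivative" matrices are invented placeholders with no physical calibration, and this is not one of the prediction models assessed in the thesis.

```python
# Schematic sketch of an eigenvalue-based aeroelastic stability sweep
# (placeholder numbers, not calibrated flutter derivatives).
import numpy as np

m, I = 2.0e4, 4.0e5                 # mass and mass moment of inertia per unit length
fh, fa = 0.10, 0.28                 # heave / pitch natural frequencies [Hz]
M = np.diag([m, I])
K = np.diag([m * (2 * np.pi * fh) ** 2, I * (2 * np.pi * fa) ** 2])
Cs = 0.01 * (M + K)                 # light structural (Rayleigh-like) damping
rho, B = 1.25, 30.0                 # air density, deck width

# Invented aerodynamic coefficient matrices (quasi-steady flavour).
Ca_hat = np.array([[ 0.2,  0.8],
                   [-0.5, -0.9]])
Ka_hat = np.array([[ 0.0,  1.2],
                   [ 0.0, -1.6]])

def max_real_eigenvalue(U):
    """Largest real part of the aeroelastic state-matrix eigenvalues at speed U."""
    C = Cs - rho * U * B * Ca_hat            # total damping (structural + aerodynamic)
    Keff = K - rho * U ** 2 * Ka_hat         # total stiffness
    n = M.shape[0]
    A = np.block([[np.zeros((n, n)), np.eye(n)],
                  [-np.linalg.solve(M, Keff), -np.linalg.solve(M, C)]])
    return np.max(np.linalg.eigvals(A).real)

for U in np.arange(5.0, 120.0, 1.0):
    if max_real_eigenvalue(U) > 0.0:
        print(f"aeroelastic instability first detected near U = {U:.0f} m/s")
        break
else:
    print("no instability found in the scanned speed range")
```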
T3 - Schriftenreihe des DFG Graduiertenkollegs 1462 Modellqualitäten // Graduiertenkolleg Modellqualitäten - 16 KW - Brücke KW - Flattern KW - Unsicherheit KW - Flutter KW - Bridges KW - Sensitivity KW - Uncertainty KW - Model assessment Y1 - 2016 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20180515-27161 UR - https://asw-verlage.de/katalog/assessment_of_numerical_prediction_models_for_aeroelastic_instabilities_of_bridges-1897.html PB - Jonas Verlag CY - Weimar ER - TY - THES A1 - Nariman, Nazim T1 - Numerical Methods for the Multi-Physical Analysis of Long Span Cable-Stayed Bridges N2 - The main categories of wind effects on long span bridge decks are buffeting, flutter, and vortex-induced vibrations (VIV), which are often critical for the safety and serviceability of the structure. With the rapid increase of bridge spans, research on controlling wind-induced vibrations of long span bridges has been a problem of great concern. The development of vibration control theories has led to the wide use of tuned mass dampers (TMDs), which have been proven effective for suppressing these vibrations both analytically and experimentally. Fire incidents are also of special interest for the stability and safety of long span bridges due to the significant role of the complex triple interaction between the deck, the incoming wind flow, and the thermal boundary of the surrounding air. This work begins with analyzing the buffeting response and flutter instability of three-dimensional computational structural dynamics (CSD) models of a cable stayed bridge under strong wind excitations using the commercial finite element software ABAQUS. Optimization and global sensitivity analysis are utilized to target the vertical and torsional vibrations of the segmental deck by considering three aerodynamic parameters (wind attack angle, deck streamlined length and viscous damping of the stay cables). The numerical simulation results, in conjunction with the frequency analysis results, confirmed the existence of these vibrations and showed that further theoretical studies are possible with a high level of accuracy. Model validation is performed by comparing the lift and moment coefficients of the created CSD models with two benchmarks from the literature, flat plate theory and the flat plate results of Xavier and co-authors, which resulted in very good agreement between them. Optimum values of the parameters have been identified. Global sensitivity analysis based on the Monte Carlo sampling method was utilized to formulate the surrogate models and calculate the sensitivity indices. The rational effect and the role of each parameter on the aerodynamic stability of the structure were calculated, and valuable insight into the stability of the long span bridge was gained. 2D computational fluid dynamics (CFD) models of the decks are created with the support of MATLAB codes to simulate and analyze the vortex shedding and VIV of the deck. Three aerodynamic parameters (wind speed, deck streamlined length and dynamic viscosity of the air) are selected to study their effects on the kinetic energy of the system and the vortex shapes and patterns. Two benchmarks from the literature, (Von Karman) and (Dyrbye and Hansen), are used to validate the numerical simulations of the vortex shedding for the CFD models. Good agreement between the results was observed.
The Latin hypercube experimental method is used to generate the surrogate models for the kinetic energy of the system and the generated lift forces. Variance-based sensitivity analysis is utilized to calculate the main sensitivity indices and the interaction orders for each parameter. The kinetic energy approach performed very well in revealing the rational effect and the role of each parameter in the generation of vortex shedding and in predicting the early VIV and the critical wind speed. Both one-way fluid-structure interaction (one-way FSI) simulations and two-way fluid-structure interaction (two-way FSI) co-simulations for the 2D models of the deck are executed to calculate the shedding frequencies for the associated wind speeds in the lock-in region in addition to the lift and drag coefficients. Validation is carried out against the results of (Simiu and Scanlan) and the flat plate theory results compiled by (Munson and co-authors), respectively. A high level of agreement between all the results was observed. A decrease in the critical wind speed and the shedding frequencies in the (two-way FSI) case was identified compared to those obtained in the (one-way FSI) case. The results from the (two-way FSI) approach predicted an appreciable decrease in the lift and drag forces as well as earlier VIV at lower critical wind speeds, with lock-in regions occurring at lower natural frequencies of the system. These conclusions help designers to efficiently plan for the design and safety of the long span bridge before and after construction. A multiple tuned mass dampers (MTMDs) system has been applied to the three-dimensional CSD models of the cable stayed bridge to analyze its control efficiency in suppressing both wind-induced vertical and torsional vibrations of the deck by optimizing three design parameters of the TMDs (mass ratio, frequency ratio and damping ratio), supported by actual field data and a minimax optimization technique in addition to MATLAB codes and the Fast Fourier Transform technique. The optimum values of each parameter were identified and validated against two benchmarks from the literature, first (Wang and co-authors) and then (Lin and co-authors). The validation procedure showed good agreement between the results. The Box-Behnken experimental method is used to formulate the surrogate models representing the control efficiency for the vertical and torsional vibrations. Sobol's sensitivity indices are calculated for the design parameters in addition to their interaction orders. The optimization results revealed better performance of the MTMDs in controlling both the vertical and the torsional vibrations for higher mode shapes. Furthermore, the calculated rational effect of each design parameter helps to increase the control efficiency of the MTMDs and, in conjunction with the surrogate models, simplifies the vibration control analysis to a great extent. A novel structural modification approach has been adopted to eliminate the early coupling between the bending and torsional mode shapes of the cable stayed bridge. Two lateral steel beams are added to the middle span of the structure. Frequency analysis is used to obtain the natural frequencies of the first eight mode shapes of vibration before and after the structural modification. Numerical simulations of wind excitations are conducted for the 3D model of the cable stayed bridge.
Both vertical and torsional displacements are calculated at the mid span of the deck to analyze the bending and the torsional stiffness of the system before and after the structural modification. The results of the frequency analysis after adding the lateral steel beams showed that the coupling between the vertical and torsional mode shapes of vibration is shifted to larger natural frequency magnitudes and to higher, rarely occurring critical wind speeds, providing a high factor of safety. Finally, thermal fluid-structure interaction (TFSI) and coupled thermal-stress analysis are utilized to identify the effects of transient and steady-state heat transfer on the VIV and fatigue of the deck due to fire incidents. Numerical simulations of TFSI models of the deck are used to calculate the lift and drag forces and to determine the lock-in regions, once using FSI models and once using TFSI models. Vorticity and thermal fields of three fire scenarios are simulated and analyzed. The benchmark of (Simiu and Scanlan) is used to validate the TFSI models, where good agreement between the two results was observed. The extended finite element method (XFEM) is adopted to create 3D models of the cable stayed bridge to simulate the fatigue of the deck considering three fire scenarios. The benchmark of (Choi and Shin) is used to validate the damaged models of the deck, for which good agreement was observed. The results revealed that the TFSI models and the coupled thermal-stress models are significant in detecting earlier vortex-induced vibration and lock-in regions, in addition to predicting damage and fatigue of the deck and identifying the role of wind-induced vibrations in speeding up the damage generation and the collapse of the structure in critical situations. KW - Stabilität KW - Brückenbau KW - Aerodynamic Stability KW - Vortex Induced Vibration KW - Fluid-Structure Interaction KW - Mass Tuned Damper KW - Thermal Fluid-Structure Interaction Y1 - 2017 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20171122-37105 ER - TY - THES A1 - Msekh, Mohammed Abdulrazzak T1 - Phase Field Modeling for Fracture with Applications to Homogeneous and Heterogeneous Materials N2 - The thesis presents an implementation, including different applications, of a variational-based approach for gradient-type standard dissipative solids. The phase field model for brittle fracture is an application of the variational-based framework for gradient-type solids. This model allows the prediction of different crack topologies and states. A significant part of this work is the implementation of the theoretical and numerical formulation of the phase field model in the commercial finite element software Abaqus in 2D and 3D. The fully coupled incremental variational formulation of the phase field method is implemented by using the UEL and UMAT subroutines of Abaqus. The phase field method considerably reduces the implementation complexity of fracture problems as it removes the need for numerical tracking of discontinuities in the displacement field that are characteristic of discrete crack methods. This is accomplished by replacing the sharp discontinuities with a scalar damage phase field representing the diffuse crack topology, wherein the amount of diffusion is controlled by a regularization parameter. The nonlinear coupled system consisting of the linear momentum equation and a diffusion-type equation governing the phase field evolution is solved simultaneously via a Newton-Raphson approach.
Post-processing of the simulation results for visualization purposes is performed via an additional UMAT subroutine and the standard Abaqus viewer. In the same context, we propose a simple yet effective algorithm to initiate and propagate cracks in 2D geometries which is independent of both particular constitutive laws and specific element technology and dimension. It consists of a localization limiter in the form of the screened Poisson equation with, optionally, local mesh refinement. A staggered scheme for the standard equilibrium and screened Cauchy equations is used. The remeshing part of the algorithm consists of a sequence of mesh subdivision and element erosion steps. Element subdivision is based on edge split operations using a given constitutive quantity (either damage or void fraction). Mesh smoothing makes use of edge contraction as a function of a given constitutive quantity such as the principal stress or void fraction. To assess the robustness and accuracy of this algorithm, we use both quasi-brittle benchmarks and ductile tests. Furthermore, we introduce a computational approach for mechanical loading at the microscale of an inelastically deforming composite material. The nanocomposite material of fully exfoliated clay/epoxy is modeled to predict macroscopic elastic and fracture-related material parameters based on its fine-scale features. Two different configurations of the polymer nanocomposite material (PNCs) have been studied. These configurations are fully bonded PNCs and PNCs with an interphase zone formation between the matrix and the clay reinforcement. The representative volume elements of PNC specimens with different clay weight contents, aspect ratios, and interphase zone thicknesses are generated by Python scripting. Different constitutive models are employed for the matrix, the clay platelets, and the interphase zones. The brittle fracture behavior of the epoxy matrix and the interphase zone material is modeled using the phase field approach, whereas the stiff silicate clay platelets of the composite are treated as a linear elastic material. The comprehensive study investigates the elastic and fracture behavior of the PNC composites, in addition to predicting Young's modulus, tensile strength, fracture toughness, surface energy dissipation, and crack surface area in the composite for different material parameters, geometries, and interphase zone properties and thicknesses. T2 - Phasenfeldmodellierung für Brüche mit Anwendungen auf homogene und heterogene Materialien KW - Finite-Elemente-Methode KW - Phase field model KW - Fracture KW - Abaqus KW - Finite Element Model Y1 - 2017 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20170615-32291 ER - TY - THES A1 - Abeltshauser, Rainer T1 - Identification and separation of physical effects of coupled systems by using defined model abstractions N2 - The thesis investigates the computer-aided simulation process for operational vibration analysis of complex coupled systems. As part of the internal methods project “Absolute Values” of the BMW Group, the thesis deals with the analysis of the structural dynamic interactions and excitation interactions. The overarching aim of the methods project is to predict the operational vibrations of engines. Simulations are usually used to analyze technical aspects (e.g. operational vibrations, strength, ...) of single components in industrial development.
The boundary conditions of submodels are mostly based on experience. As a result, the interactions with neighboring components and systems are neglected. To obtain physically more realistic results while keeping the simulations efficient, this work aims to support the engineer during the preprocessing phase with useful criteria. At first, suitable abstraction levels based on the existing literature are defined to identify structural dynamic interactions and excitation interactions of coupled systems. This makes it possible to separate different effects of the coupled subsystems. On this basis, criteria are derived to assess the influence of interactions between the considered systems. These criteria can be used during the preprocessing phase and help the engineer to build up efficient models with respect to the interactions with neighboring systems. The method was developed by using several models with different complexity levels. Furthermore, the method is proven to be applicable in an industrial environment using the example of a current combustion engine. T2 - Identifikation und Separation physikalischer Effekte von gekoppelten Systemen mittels definierter Modellabstraktionen T3 - ISM-Bericht // Institut für Strukturmechanik, Bauhaus-Universität Weimar - 2017,1 KW - Strukturdynamik KW - Wechselwirkung KW - Schwingung KW - Berechnung KW - Numerische Berechnung KW - Modellbildung KW - Schwingungsanalyse KW - Simulationsprozess Y1 - 2017 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20170314-28600 ER - TY - THES A1 - Schwedler, Michael T1 - Integrated structural analysis using isogeometric finite element methods N2 - The gradual digitization in the architecture, engineering, and construction industry over the past fifty years has led to an extremely heterogeneous software environment, which today is embodied by the multitude of different digital tools and proprietary data formats used by the many specialists contributing to the design process in a construction project. Though these projects become increasingly complex, the demands on financial efficiency and completion within a tight schedule grow at the same time. The digital collaboration of project partners has been identified as one key issue in successfully dealing with these challenges. Yet currently, the numerous software applications and their respective individual views on the design process severely impede that collaboration. An approach to establish a unified basis for the digital collaboration, regardless of the existing software heterogeneity, is a comprehensive digital building model contributed to by all project partners. This type of data management, known as building information modeling (BIM), has many benefits, yet its adoption is associated with many difficulties and thus proceeds only slowly. One aspect in the field of conflicting requirements on such a digital model is the cooperation of architects and structural engineers. Traditionally, these two disciplines use different abstractions of reality for their models that in consequence lead to incompatible digital representations thereof. The onset of isogeometric analysis (IGA) promised to ease the discrepancy in design and analysis model representations. Yet, that initial focus quickly shifted towards using these methods as a more powerful basis for numerical simulations. Furthermore, the isogeometric representation alone is not capable of solving the model abstraction problem.
It is thus the intention of this work to contribute to an improved digital collaboration of architects and engineers by exploring an integrated analysis approach on the basis of a unified digital model and solid geometry expressed by splines. In the course of this work, an analysis framework is developed that utilizes such models to automatically conduct numerical simulations commonly required in construction projects. In essence, this makes it possible to retrieve structural analysis results from BIM models in a fast and simple manner, thereby facilitating rapid design iterations and profound design feedback. The BIM implementation Industry Foundation Classes (IFC) is reviewed with regard to its capabilities of representing the unified model. The current IFC schema strongly supports the use of redundant model data, a major pitfall in digital collaboration. Additionally, it does not allow the geometry to be described by volumetric splines. As the pursued approach builds upon a single model for both architectural and structural design, and furthermore requires solid geometry, necessary schema modifications are suggested. Structural entities are modeled by volumetric NURBS patches, each of which constitutes an individual subdomain that, with regard to the analysis, is incompatible with the remaining full model. The resulting consequences for numerical simulation are elaborated in this work. The individual subdomains have to be weakly coupled, for which the mortar method is used. Different approaches to discretize the interface traction fields are implemented and their respective impact on the analysis results is evaluated. All necessary coupling conditions are automatically derived from the related geometry model. The weak coupling procedure leads to a linear system of equations in saddle point form, which, owing to the volumetric modeling, is large in size, and whose coefficient matrix has a high bandwidth due to the use of higher-degree basis functions. The peculiarities of the system require adapted solution methods that generally cause higher numerical costs than the standard procedures for symmetric, positive-definite systems do. Different methods to solve the specific system are investigated and an efficient parallel algorithm is finally proposed. When the structural analysis model is derived from the unified model in the BIM data, it generally does not initially meet the discretization requirements that are necessary to obtain sufficiently accurate analysis results. The consequently necessary patch refinements must be controlled automatically to allow for an entirely automatic analysis procedure. For that purpose, an empirical refinement scheme based on the geometrical and possibly mechanical properties of the specific entities is proposed. The level of refinement may be selectively manipulated by the structural engineer in charge. Furthermore, a Zienkiewicz-Zhu type error estimator is adapted for use with isogeometric analysis results. It is shown that this estimator can also be used to steer an adaptive refinement procedure. T3 - ISM-Bericht // Institut für Strukturmechanik, Bauhaus-Universität Weimar - 2016,2 KW - Finite-Elemente-Methode KW - NURBS KW - Isogeometrische Analyse KW - finite element method KW - isogeometric analysis KW - mortar method KW - building information modelling Y1 - 2017 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20170130-27372 ER - TY - THES A1 - Nanthakumar, S.S.
T1 - Inverse and optimization problems in piezoelectric materials using Extended Finite Element Method and Level sets T1 - Inverse und Optimierungsprobleme für piezoelektrische Materialien mit der Extended Finite Elemente Methode und Level sets N2 - Piezoelectric materials are used in several applications as sensors and actuators, where they experience high stress and electric field concentrations, as a result of which they may fail due to fracture. Although there are many analytical and experimental works on piezoelectric fracture mechanics, there are very few studies about damage detection, which is an interesting way to prevent the failure of these ceramics. An iterative method to treat the inverse problem of detecting cracks and voids in piezoelectric structures is proposed. The extended finite element method (XFEM) is employed for solving the inverse problem, as it allows the use of a single regular mesh for a large number of iterations with different flaw geometries. Firstly, minimization of the cost function is performed by the Multilevel Coordinate Search (MCS) method. The XFEM-MCS methodology is applied to two-dimensional electromechanical problems where the flaws considered are straight cracks and elliptical voids. Then a numerical method based on the combination of the classical shape derivative and the level set method for front propagation, as used in structural optimization, is utilized to minimize the cost function. The results obtained show that the XFEM-level set methodology is effectively able to determine the number of voids in a piezoelectric structure and their corresponding locations. The XFEM-level set methodology is improved to solve the inverse problem of detecting inclusion interfaces in a piezoelectric structure. The material interfaces are implicitly represented by level sets, which are identified by applying regularisation using total variation penalty terms. The formulation is presented for three-dimensional structures, and inclusions made of different materials are detected by using multiple level sets. The results obtained prove that the proposed iterative procedure can determine the location and approximate shape of material subdomains in the presence of higher noise levels. Piezoelectric nanostructures exhibit size-dependent properties because of surface elasticity and surface piezoelectricity. Initially, a study to understand the influence of surface elasticity on the optimization of nano elastic beams is performed. The boundary of the nanostructure is implicitly represented by a level set function, which is considered as the design variable in the optimization process. Two objective functions, minimizing the total potential energy of a nanostructure subjected to a material volume constraint and minimizing the least square error compared to a target displacement, are chosen for the numerical examples. The numerical examples demonstrate the importance of size and aspect ratio in determining how surface effects impact the optimized topology of nanobeams. Finally, a conventional cantilever energy harvester with a piezoelectric nano layer is analysed. The presence of surface piezoelectricity in nano beams and nano plates leads to an increase in the electromechanical coupling coefficient. Topology optimization of these piezoelectric structures in an energy harvesting device to further increase energy conversion is performed using an appropriately modified XFEM-level set algorithm.
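As background to the inverse problem described in the record above, the sketch below shows the general structure of such a flaw-identification loop: a cost function measuring the mismatch between simulated and measured boundary responses is minimized over the flaw parameters. It is only an assumption-laden illustration: `forward_response` is a hypothetical placeholder for an XFEM solve on a fixed mesh, and a generic derivative-free optimizer (Nelder-Mead) stands in for the Multilevel Coordinate Search used in the thesis.

```python
# Minimal sketch (not the thesis code): identify an elliptical void from boundary
# measurements by minimizing a least-squares cost function over the flaw parameters.
import numpy as np
from scipy.optimize import minimize

def forward_response(flaw, sensors):
    """Hypothetical forward model: sensor response for a void with parameters
    flaw = (xc, yc, a, b). A real implementation would run an XFEM analysis."""
    xc, yc, a, b = flaw
    r = np.hypot(sensors[:, 0] - xc, sensors[:, 1] - yc)
    return (a * b) / (1.0 + r**2)            # smooth placeholder response

sensors = np.column_stack([np.linspace(0.0, 1.0, 20), np.ones(20)])   # sensor locations
true_flaw = np.array([0.4, 0.3, 0.05, 0.02])
u_measured = forward_response(true_flaw, sensors)                     # synthetic "measurement"

def cost(flaw):
    # least-squares mismatch between simulated and measured responses
    return np.sum((forward_response(flaw, sensors) - u_measured) ** 2)

result = minimize(cost, x0=[0.5, 0.5, 0.1, 0.1], method="Nelder-Mead")
print("identified flaw parameters:", result.x)
```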
KW - Finite-Elemente-Methode KW - Piezoelectricity KW - Inverse problems KW - Optimization problems KW - Nanostructures KW - XFEM KW - level set method KW - Surface effects Y1 - 2016 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20161128-27095 ER - TY - THES A1 - Ghasemi, Hamid T1 - Stochastic optimization of fiber reinforced composites considering uncertainties N2 - Briefly, the two basic questions that this research is supposed to answer are: 1. How much fiber is needed, and how should the fibers be distributed through a fiber reinforced composite (FRC) structure, in order to obtain the optimal and reliable structural response? 2. How do uncertainties influence the optimization results and the reliability of the structure? To answer the above questions, a double-stage sequential optimization algorithm for finding the optimal content of short fiber reinforcements and their distribution in the composite structure, considering uncertain design parameters, is presented. In the first stage, the optimal amount of short fibers in an FRC structure with uniformly distributed fibers is determined in the framework of a Reliability Based Design Optimization (RBDO) problem. The presented model considers material, structural and modeling uncertainties. In the second stage, the fiber distribution optimization (with the aim of a further increase in structural reliability) is performed by defining a fiber distribution function through a Non-Uniform Rational B-Spline (NURBS) surface. The advantages of using the NURBS surface as a fiber distribution function include: using the same data set for the optimization and analysis; a high convergence rate due to the smoothness of the NURBS; mesh independency of the optimal layout; no need for any post-processing technique; and its non-heuristic nature. The output of stage 1 (the optimal fiber content for homogeneously distributed fibers) is considered as the input of stage 2. The output of stage 2 is the reliability index (β) of the structure with the optimal fiber content and distribution. The first order reliability method (in order to approximate the limit state function) as well as different material models including the rule of mixtures, Mori-Tanaka, an energy-based approach and stochastic multi-scales are implemented in different examples. The proposed combined model is able to capture the role of the available uncertainties in FRC structures through a computationally efficient algorithm using sequential, NURBS-based and sensitivity-based techniques. The methodology is successfully implemented for interfacial shear stress optimization in sandwich beams and also for the optimization of the internal cooling channels in a ceramic matrix composite. Finally, after some changes and modifications by combining isogeometric analysis, level set and point-wise density mapping techniques, the computational framework is extended to topology optimization of piezoelectric / flexoelectric materials.
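To make the reliability index mentioned in the record above concrete, the sketch below shows the standard first order reliability method (FORM) computed with the Hasofer-Lind-Rackwitz-Fiessler iteration in standard normal space; the quadratic limit state function used here is a made-up example, not the FRC limit state of the thesis.

```python
# Minimal FORM sketch (illustrative only): the reliability index beta is the distance
# from the origin to the design point of the limit state g(u) = 0 in standard normal space.
import numpy as np

def g(u):
    # hypothetical limit state: failure when g(u) < 0
    return 3.0 - u[0] - 0.5 * u[1] + 0.1 * u[0] ** 2

def grad(f, u, h=1e-6):
    # central finite-difference gradient
    return np.array([(f(u + h * e) - f(u - h * e)) / (2 * h) for e in np.eye(len(u))])

u = np.zeros(2)                                  # start at the mean point
for _ in range(50):
    gu, dg = g(u), grad(g, u)
    u_new = (dg @ u - gu) / (dg @ dg) * dg       # HLRF update
    if np.linalg.norm(u_new - u) < 1e-8:
        u = u_new
        break
    u = u_new

beta = np.linalg.norm(u)                         # reliability index
print(f"design point {u}, reliability index beta = {beta:.3f}")
```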
T3 - ISM-Bericht // Institut für Strukturmechanik, Bauhaus-Universität Weimar - 2016,1 KW - Optimization KW - Fiber Reinforced Composite KW - Finite Element Method KW - Isogeometric Analysis KW - Flexoelectricity KW - Finite-Elemente-Methode KW - Optimierung Y1 - 2016 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20161117-27042 ER - TY - THES A1 - Amiri, Fatemeh T1 - Computational modelling of fracture with local maximum entropy approximations N2 - The key objective of this research is to study fracture with a meshfree method, local maximum entropy approximations, and to model fracture in thin shell structures with complex geometry and topology. This topic is of high relevance for real-world applications, for example in the automotive industry and in aerospace engineering. The shell structure can be described efficiently by meshless methods, which are capable of describing complex shapes as a collection of points instead of a structured mesh. In order to find the appropriate numerical method to achieve this goal, the first part of the work was the development of a method based on local maximum entropy (LME) shape functions together with enrichment functions used in partition of unity methods to discretize problems in linear elastic fracture mechanics. We obtain improved accuracy relative to the standard extended finite element method (XFEM) at a comparable computational cost. In addition, we keep the advantages of the LME shape functions, such as smoothness and non-negativity. We show numerically that optimal convergence (the same as in FEM) for the energy norm and stress intensity factors can be obtained through the use of geometric (fixed area) enrichment with no special treatment of the nodes near the crack such as blending or shifting. As the extension of this method to three-dimensional problems and complex thin shell structures with arbitrary crack growth is cumbersome, we developed a phase field model for fracture using LME. Phase field models provide a powerful tool to tackle moving interface problems and have been extensively used in physics and materials science. Phase field methods are gaining popularity in a wide set of applications in applied science and engineering; recently, a second order phase field approximation for brittle fracture has gathered significant interest in computational fracture, in which sharp crack discontinuities are modeled by a diffusive crack. By minimizing the system energy with respect to the mechanical displacements and the phase field, subject to an irreversibility condition to avoid crack healing, this model can describe crack nucleation, propagation, branching and merging. One of the main advantages of the phase field modeling of fracture is the unified treatment of the interfacial tracking and mechanics, which potentially leads to simple, robust, scalable computer codes applicable to complex systems. In other words, this approximation considerably reduces the implementation complexity because numerical tracking of the fracture is not needed, at the expense of a high computational cost. We present a fourth-order phase field model for fracture based on local maximum entropy (LME) approximations. The higher order continuity of the meshfree LME approximation makes it possible to directly solve the fourth-order phase field equations without splitting the fourth-order differential equation into two second order differential equations.
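As background to the phase-field description above, a standard second-order regularized energy for brittle fracture (the widely used AT2-type form, shown here only for orientation) is given below; the fourth-order model of the thesis additionally involves second derivatives of the phase field d, whose exact coefficients are not reproduced here.

```latex
% u: displacement field, d: phase field (d = 0 intact, d = 1 fully broken),
% \psi_e: elastic energy density, G_c: fracture energy, \ell: regularization length.
E(\mathbf{u}, d) =
  \int_\Omega (1-d)^2 \, \psi_e\big(\boldsymbol{\varepsilon}(\mathbf{u})\big)\,\mathrm{d}\Omega
  + G_c \int_\Omega \left( \frac{d^2}{2\ell} + \frac{\ell}{2}\,\lvert \nabla d \rvert^2 \right)\mathrm{d}\Omega ,
\qquad \dot{d} \ge 0 .
% The solution minimizes E with respect to u and d under the irreversibility
% constraint on d, which yields nucleation, propagation, branching and merging
% of cracks without explicit crack tracking.
```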
Notably, in contrast to previous discretizations that use at least a quadratic basis, only linear completeness is needed in the LME approximation. We show that the crack surface can be captured more accurately in the fourth-order model than in the second-order model. Furthermore, fewer nodes are needed for the fourth-order model to resolve the crack path. Finally, we demonstrate the performance of the proposed meshfree fourth-order phase-field formulation for five representative numerical examples. Computational results are compared to analytical solutions within linear elastic fracture mechanics and to experimental data for three-dimensional crack propagation. In the last part of this research, we present a phase-field model for fracture in Kirchhoff-Love thin shells using the local maximum-entropy (LME) meshfree method. Since the crack is a natural outcome of the analysis, it does not require an explicit representation and tracking, which is advantageous over techniques such as the extended finite element method that require tracking of the crack paths. The geometric description of the shell is based on statistical learning techniques that allow dealing with general point set surfaces while avoiding a global parametrization, which can be applied to tackle surfaces of complex geometry and topology. We show the flexibility and robustness of the present methodology for two examples: a plate in tension and a set of open connected pipes. KW - Fracture mechanics KW - Local maximum entropy approximants KW - PU Enrichment method KW - Phase-field model KW - Thin shell KW - Kirchhoff-Love theory Y1 - 2016 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20160719-26310 ER - TY - THES A1 - Vu, Bac Nam T1 - Stochastic uncertainty quantification for multiscale modeling of polymeric nanocomposites N2 - Nanostructured materials are extensively applied in many fields of material science for new industrial applications, particularly in the automotive and aerospace industry, due to their exceptional physical and mechanical properties. Experimental testing of nanomaterials is expensive, time-consuming, challenging and sometimes unfeasible. Therefore, computational simulations have been employed as an alternative method to predict macroscopic material properties. The behavior of polymeric nanocomposites (PNCs) is highly complex. The origins of macroscopic material properties reside in the properties and interactions taking place on finer scales. It is therefore essential to use a multiscale modeling strategy to properly account for all the length and time scales associated with these material systems, which span many orders of magnitude. Numerous multiscale models of PNCs have been established; however, most of them connect only two scales. There are only a few multiscale models for PNCs bridging four length scales (nano-, micro-, meso- and macro-scales). In addition, nanomaterials are stochastic in nature, and the prediction of macroscopic mechanical properties is influenced by many factors such as fine-scale features. The mechanical properties predicted by traditional approaches significantly deviate from the values measured in experiments because the uncertainty of material features is neglected. This discrepancy indicates that the effective macroscopic properties of materials are highly sensitive to various sources of uncertainty, such as loading and boundary conditions and material characteristics, while very few stochastic multiscale models for PNCs have been developed.
Therefore, it is essential to construct PNC models within the framework of stochastic modeling and to quantify the stochastic effect of the input parameters on the macroscopic mechanical properties of those materials. This study aims to develop computational models at four length scales (nano-, micro-, meso- and macro-scales) and hierarchical upscaling approaches bridging length scales from nano- to macro-scale. A framework for uncertainty quantification (UQ) applied to predict the mechanical properties of the PNCs in dependence of material features at different scales is studied. Sensitivity and uncertainty analysis are of great help in quantifying the effect of the input parameters, considering both main and interaction effects, on the mechanical properties of the PNCs. To achieve this major goal, the following tasks are carried out: At the nano-scale, molecular dynamics (MD) simulations were used to investigate the deformation mechanism of glassy amorphous polyethylene (PE) in dependence of temperature and strain rate. Steered molecular dynamics (SMD) simulations were also employed to investigate the interfacial characteristics of the PNCs. At the micro-scale, we developed an atomistic-based continuum model represented by a representative volume element (RVE) in which the SWNT's properties and the SWNT/polymer interphase are modeled at the nano-scale, while the surrounding polymer matrix is modeled by solid elements. Then, a two-parameter model was employed at the meso-scale. A hierarchical multiscale approach has been developed to obtain the structure-property relations at one length scale and to transfer the effect to the higher length scales. In particular, we homogenized the RVE into an equivalent fiber. The equivalent fiber was then employed in a micromechanical analysis (i.e. the Mori-Tanaka model) to predict the effective macroscopic properties of the PNC. Furthermore, an averaging homogenization process was also used to obtain the effective stiffness of the PNC at the meso-scale. Stochastic modeling and uncertainty quantification consist of the following ingredients: - Simple random sampling, Latin hypercube sampling, Sobol' quasirandom sequences, and Iman and Conover's method (inducing correlation in Latin hypercube sampling) are employed to generate independent and dependent sample data, respectively. - Surrogate models, such as polynomial regression, moving least squares (MLS), a hybrid method combining polynomial regression and MLS, Kriging regression, and penalized spline regression, are employed as approximations of the mechanical model. The advantage of the surrogate models is their high computational efficiency and robustness, as they can be constructed from a limited amount of available data. - Global sensitivity analysis (SA) methods, such as variance-based methods for models with independent and dependent input parameters, Fourier-based techniques for performing variance-based methods, and partial derivatives and elementary effects in the context of local SA, are used to quantify the effects of the input parameters and their interactions on the mechanical properties of the PNCs. A bootstrap technique is used to assess the robustness of the global SA methods with respect to their performance. In addition, the probability distributions of the mechanical properties are determined by using the probability plot method. The upper and lower bounds of the predicted Young's modulus according to 95 % prediction intervals are provided. The above-mentioned methods study the behaviour of intact materials.
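The ingredients listed above (sampling, surrogate modeling, variance-based sensitivity analysis) fit together as in the following sketch, which is purely illustrative: the "mechanical model" is a hypothetical analytic function standing in for an RVE simulation, the surrogate is a simple quadratic least-squares fit, and the first-order sensitivity indices are estimated by brute force on the cheap surrogate rather than by the specific estimators used in the thesis.

```python
# Minimal sketch: Latin hypercube sampling -> polynomial surrogate -> first-order
# variance-based sensitivity indices S_i = Var(E[y | x_i]) / Var(y).
import numpy as np

rng = np.random.default_rng(0)

def latin_hypercube(n, dim):
    # one stratified, randomly permuted sample per interval and dimension
    u = (rng.random((n, dim)) + np.arange(n)[:, None]) / n
    for j in range(dim):
        u[:, j] = rng.permutation(u[:, j])
    return u

def model(x):                                   # hypothetical stand-in for the RVE solve
    return 1.0 + 2.0 * x[:, 0] + 0.5 * x[:, 1] ** 2 + 0.3 * x[:, 0] * x[:, 2]

# 1) design of experiments and training data
X = latin_hypercube(200, 3)
y = model(X)

# 2) quadratic polynomial surrogate fitted by least squares
def features(X):
    ones = np.ones((X.shape[0], 1))
    cross = np.column_stack([X[:, i] * X[:, j] for i in range(3) for j in range(i, 3)])
    return np.hstack([ones, X, cross])

coef, *_ = np.linalg.lstsq(features(X), y, rcond=None)
surrogate = lambda Z: features(Z) @ coef

# 3) first-order indices estimated on the surrogate
total_var = surrogate(rng.random((5000, 3))).var()
for i in range(3):
    cond_means = []
    for xi in np.linspace(0.0, 1.0, 50):
        Z = rng.random((1000, 3))
        Z[:, i] = xi                            # freeze x_i, average over the rest
        cond_means.append(surrogate(Z).mean())
    print(f"S_{i} ~ {np.var(cond_means) / total_var:.3f}")
```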
Novel numerical methods such as a node-based smoothed extended finite element method (NS-XFEM) and an edge-based smoothed phantom node method (ES-Phantom node) were developed for fracture problems. These methods can be used to account for cracks at the macro-scale in future work. The predicted mechanical properties were validated and verified. They show good agreement with previous experimental and simulation results. KW - Polymere KW - nanocomposite KW - Nanoverbundstruktur KW - stochastic KW - multiscale Y1 - 2015 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20160322-25551 ER - TY - THES A1 - Jia, Yue T1 - Methods based on B-splines for model representation, numerical analysis and image registration N2 - The thesis consists of inter-connected parts for modeling and analysis using newly developed isogeometric methods. The main parts are reproducing kernel triangular B-splines, extended isogeometric analysis for solving weakly discontinuous problems, collocation methods using superconvergent points, and B-spline bases in image registration applications. Each topic is oriented towards the application of isogeometric analysis basis functions to ease the process of integrating the modeling and analysis phases of simulation. First, we develop a reproducing kernel triangular B-spline-based FEM for solving PDEs. We review the triangular B-splines and their properties. By definition, the triangular basis function is very flexible in modeling complicated domains. However, instability results when it is applied for analysis. We modify the triangular B-spline by a reproducing kernel technique, calculating a correction term for the triangular kernel function from the chosen surrounding basis. The improved triangular basis is capable of obtaining results with higher accuracy and almost optimal convergence rates. Second, we propose an extended isogeometric analysis for dealing with weakly discontinuous problems such as material interfaces. The original IGA is combined with XFEM-like enrichments, which are continuous functions themselves but have discontinuous derivatives. Consequently, the resulting solution space can approximate solutions with weak discontinuities. The method is also applied to curved material interfaces, where the inverse mapping and curved triangular elements are considered. Third, we develop an IGA collocation method using superconvergent points. Collocation methods are efficient because no numerical integration is needed. In particular, when a higher-order polynomial basis is applied, the method has a lower computational cost than Galerkin methods. However, the positions of the collocation points are crucial for the accuracy of the method, as they affect the convergence rate significantly. The proposed IGA collocation method uses superconvergent points instead of the traditional Greville abscissae points. The numerical results show that the proposed method can achieve better accuracy and optimal convergence rates, while the traditional IGA collocation has optimal convergence only for even polynomial degrees. Lastly, we propose a novel dynamic multilevel technique for handling image registration. It is an application of B-spline functions in image processing. The procedure considered aims to align a target image with a reference image by a spatial transformation. The method starts with an energy function which is the same as in FEM-based image registration. However, we simplify the solving procedure, working on the energy function directly.
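Since all parts of the record above build on B-spline bases, the short sketch below evaluates univariate B-spline basis functions with the standard Cox-de Boor recursion; it is a generic textbook illustration, not code from the thesis, and the tensor products of such functions underlie the NURBS and registration bases mentioned there.

```python
# Minimal sketch: Cox-de Boor recursion for B-spline basis functions.
import numpy as np

def bspline_basis(i, p, knots, x):
    """Value of the i-th B-spline basis function of degree p at x."""
    if p == 0:
        return 1.0 if knots[i] <= x < knots[i + 1] else 0.0
    left_den = knots[i + p] - knots[i]
    right_den = knots[i + p + 1] - knots[i + 1]
    left = 0.0 if left_den == 0.0 else (x - knots[i]) / left_den * bspline_basis(i, p - 1, knots, x)
    right = 0.0 if right_den == 0.0 else (knots[i + p + 1] - x) / right_den * bspline_basis(i + 1, p - 1, knots, x)
    return left + right

# open (clamped) knot vector for a quadratic basis with 4 basis functions
knots = np.array([0.0, 0.0, 0.0, 0.5, 1.0, 1.0, 1.0])
p, n = 2, 4
for x in (0.25, 0.75):
    values = [bspline_basis(i, p, knots, x) for i in range(n)]
    print(x, values, "sum =", sum(values))   # partition of unity: the values sum to 1
```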
We dynamically solve for the control points, which are the coefficients of the B-spline basis functions. The new approach is simpler and faster. Moreover, it is also enhanced by a multilevel technique in order to prevent instabilities. The numerical testing consists of two artificial images and four real biomedical MRI brain and CT heart images, and the results show that our registration method is accurate, fast and efficient, especially for large deformation problems. KW - Finite-Elemente-Methode KW - isogeometric methods Y1 - 2015 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20151210-24849 ER - TY - THES A1 - Budarapu, Pattabhi Ramaiah T1 - Adaptive multiscale methods for fracture T1 - Adaptive Multiskalen-Methoden zur Modellierung von Materialversagen N2 - One major research focus in the Material Science and Engineering Community in the past decade has been to obtain a more fundamental understanding of the phenomenon of 'material failure'. Such an understanding is critical for engineers and scientists developing new materials with higher strength and toughness, developing robust designs against failure, or for those concerned with an accurate estimate of a component's design life. Defects like cracks and dislocations evolve at nano scales and influence macroscopic properties such as the strength, toughness and ductility of a material. In engineering applications, the global response of the system is often governed by the behaviour at the smaller length scales. Hence, the sub-scale behaviour must be computed accurately for good predictions of the full-scale behaviour. Molecular dynamics (MD) simulations promise to reveal the fundamental mechanics of material failure by modeling the atom-to-atom interactions. Since the atomistic dimensions are of the order of Angstroms (Å), approximately 85 billion atoms are required to model a 1 μm³ volume of copper. Therefore, pure atomistic models are prohibitively expensive for everyday engineering computations involving macroscopic cracks and shear bands, which are much larger than the atomistic length and time scales. To reduce the computational effort, multiscale methods are required, which are able to couple a continuum description of the structure with an atomistic description. In such paradigms, cracks and dislocations are explicitly modeled at the atomistic scale, whilst a self-consistent continuum model is used elsewhere. Many multiscale methods for fracture are developed for "fictitious" materials based on "simple" potentials such as the Lennard-Jones potential. Moreover, multiscale methods for evolving cracks are rare, and efficient methods to coarse grain the fine scale defects are missing. The existing multiscale methods for fracture do not adaptively adjust the fine scale domain as the crack propagates. Most methods therefore only "enlarge" the fine scale domain and thereby drastically increase the computational cost. Adaptive adjustment requires the fine scale domain to be refined and coarsened. One of the major difficulties in multiscale methods for fracture is to up-scale fracture-related material information from the fine scale to the coarse scale, in particular for complex crack problems. Most of the existing approaches were therefore applied to examples with comparatively few macroscopic cracks. Key contributions: The bridging scale method is enhanced using the phantom node method so that cracks can be modeled at the coarse scale.
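The "85 billion atoms per cubic micrometer of copper" figure quoted above can be checked with a two-line estimate using the FCC lattice constant of copper (about 0.3615 nm, 4 atoms per conventional unit cell); the snippet below is only that back-of-the-envelope check, not part of the thesis.

```python
# Quick verification of the atom-count estimate for 1 cubic micrometer of copper.
a = 0.3615e-9          # FCC lattice constant of copper in meters
atoms_per_cell = 4     # atoms in the conventional FCC unit cell
volume = 1e-18         # 1 cubic micrometer expressed in cubic meters
n_atoms = atoms_per_cell * volume / a**3
print(f"{n_atoms:.2e} atoms")   # ~8.5e10, i.e. roughly 85 billion
```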
To ensure self-consistency in the bulk, a virtual atom cluster is devised that provides the response of the intact material at the coarse scale. A molecular statics model is employed at the fine scale, where crack propagation is modeled by naturally breaking the bonds. The fine scale and coarse scale models are coupled by enforcing displacement boundary conditions on the ghost atoms. An energy criterion is used to detect the crack tip location. Adaptive refinement and coarsening schemes are developed and implemented during the crack propagation. The results were observed to be in excellent agreement with pure atomistic simulations. The developed multiscale method is one of the first adaptive multiscale methods for fracture. A robust and simple three-dimensional coarse graining technique to convert a given atomistic region into an equivalent coarse region, in the context of multiscale fracture, has been developed. The developed method is the first of its kind. The developed coarse graining technique can be applied to identify and upscale defects such as cracks, dislocations and shear bands. The current method has been applied to estimate the equivalent coarse scale models of several complex fracture patterns obtained from pure atomistic simulations. The upscaled fracture patterns agree well with the actual fracture patterns. The error in the potential energy between the pure atomistic and the coarse grained model was observed to be acceptable. Furthermore, a first meshless adaptive multiscale method for fracture has been developed. The phantom node method is replaced by a meshless differential reproducing kernel particle method, which is comparatively more expensive but allows for a more "natural" coupling between the two scales due to the meshless interpolation functions. The higher order continuity is also beneficial. The centro-symmetry parameter is used to detect the crack tip location. The developed multiscale method is employed to study complex crack propagation. Results based on the meshless adaptive multiscale method were observed to be in excellent agreement with pure atomistic simulations. The developed multiscale methods are applied to study fracture in practical materials like graphene and graphene on a silicon surface. Bond stretching and bond reorientation were observed to be the main mechanisms of crack growth in graphene. The influence of the time step on the crack propagation was studied using two different time steps. Pure atomistic simulations of fracture in graphene on a silicon surface are presented. Details of the three-dimensional multiscale method to study fracture in graphene on a silicon surface are discussed. T3 - ISM-Bericht // Institut für Strukturmechanik, Bauhaus-Universität Weimar - 2015,1 KW - Material KW - Strukturmechanik KW - Materialversagen KW - material failure Y1 - 2015 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20150507-23918 ER - TY - THES A1 - Mai, Luu T1 - Structural Control Systems in High-speed Railway Bridges N2 - Structural vibration control of high-speed railway bridges using tuned mass dampers, semi-active tuned mass dampers, fluid viscous dampers and magnetorheological dampers to reduce resonant structural vibrations is studied. In this work, the main issues addressed include modeling of the dynamic interaction of the structures, optimization of the parameters of the dampers and a comparison of their efficiency.
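For orientation only: the record above optimizes tuned mass damper parameters with H-infinity criteria and a DK iteration on an uncertain model; the classical Den Hartog rules for a single TMD attached to an undamped, harmonically forced primary mass are the usual textbook baseline such designs are compared against, and they read as follows (mass ratio μ = m_d / m_s, ω_d and ω_s the damper and structure frequencies).

```latex
% Classical Den Hartog tuning of a single tuned mass damper (textbook baseline only,
% not the multi-mode H-infinity design of the thesis):
\frac{\omega_d}{\omega_s} = \frac{1}{1+\mu},
\qquad
\zeta_{d,\mathrm{opt}} = \sqrt{\frac{3\mu}{8\,(1+\mu)^{3}}} .
```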
A new approach to optimize multiple tuned mass damper systems on an uncertain model is proposed based on the H-infinity optimization criteria and the DK iteration procedure with norm-bounded uncertainties in the frequency domain. The parameters of the tuned mass dampers are optimized directly and simultaneously on the different modes contributing significantly to the multi-resonant peaks in order to explore the different possible combinations of parameters. The effectiveness of the present method is also evaluated through comparison with a previous method. In the case of semi-active tuned mass dampers, an optimization algorithm is derived to control the magnetorheological damper in these semi-active damping systems. The use of the proposed algorithm can generate various combinations of control gains and state variables. This can improve the ability of the MR dampers to track the desired control forces. An uncertain model to reduce detuning effects is also considered in this work. Next, for fluid viscous dampers, in order to tune their optimal parameters to the vicinity of the exact values, analytical formulae which can include structural damping are developed based on the perturbation method. The proposed formulae can also be considered as an improvement of previous analytical formulae, especially for bridge beams with large structural damping. Finally, a new combination of magnetorheological dampers and a double-beam system to improve the vibration performance of the primary structure is proposed. An algorithm to control the magnetorheological dampers in this system is developed by using standard linear matrix inequality techniques. Weight functions as a loop shaping procedure are also introduced in the feedback controllers to improve the tracking ability of the magnetorheological damping forces. To this end, the effectiveness of the magnetorheological dampers controlled by the proposed scheme, along with the effects of the uncertain and time-delay parameters on the models, is evaluated through numerical simulations. Additionally, a comparison of the dampers based on their performance is considered in this work. T3 - ISM-Bericht // Institut für Strukturmechanik, Bauhaus-Universität Weimar - 2014,3 KW - High-speed railway bridge KW - Control system KW - Passive damper KW - Semi-active damper KW - Railway bridges Y1 - 2014 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20141223-23391 SN - 1610-7381 ER - TY - THES A1 - Ahmad, Sofyan T1 - Reference Surface-Based System Identification N2 - Environmental and operational variables and their impact on structural responses have been acknowledged as one of the most important challenges for the application of ambient vibration-based damage identification in structures. The damage detection procedures may yield poor results if the impacts of the loading and environmental conditions of the structures are not considered. The reference-surface-based method proposed in this thesis is intended to overcome this problem. In the proposed method, meta-models are used to take into account significant effects of the environmental and operational variables. The usage of the approximation models allows the proposed method to handle multiple non-damaged variable effects simultaneously in a simple way, which for other methods seems to be very complex. The inputs of the meta-model are the multiple non-damaged variables, while the output is a damage indicator.
The reference-surface-based method diminishes the effect of the non-damaged variables on the vibration-based damage detection results. Hence, the structural condition assessed by using ambient vibration data at any time becomes more reliable. Immediate and reliable information regarding the structural condition is required to quickly respond to an event, in order to take the necessary actions concerning the future use or further investigation of the structures, for instance shortly after extreme events such as earthquakes. The critical part of the proposed damage detection method is the learning phase, where the meta-models are trained by using the input-output relation of observation data. Significant problems that may be encountered during the learning phase are outlined and some remedies to overcome these problems are suggested. The proposed damage identification method is applied to numerical and experimental models. In addition to the natural frequencies, wavelet energy and stochastic subspace damage indicators are used. T3 - ISM-Bericht // Institut für Strukturmechanik, Bauhaus-Universität Weimar - 2013,3 KW - System Identification KW - Schadensdetektionsverfahren KW - Referenzfläche Y1 - 2013 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20140205-21132 ER - TY - THES A1 - Zhao, Jun-Hua T1 - Multiscale modeling of nanodevices based on carbon nanotubes and polymers T1 - Multiskalige Modellierung von auf Kohlenstoffnanoröhren und Polymeren basierenden Nanobauteilen N2 - This thesis concerns the physical and mechanical interactions of carbon nanotubes and polymers studied by multiscale modeling. CNTs have attracted considerable interest in view of their unique mechanical, electronic, thermal, optical and structural properties, which enable many potential applications. Carbon nanotubes exist in several structural forms, from individual single-walled carbon nanotubes (SWCNTs) and multi-walled carbon nanotubes (MWCNTs) to carbon nanotube bundles and networks. The mechanical properties of SWCNTs and MWCNTs have been extensively studied by continuum modeling and molecular dynamics (MD) simulations in the past decade, since these properties are important in CNT-based devices. CNT bundles and networks feature outstanding mechanical performance, hierarchical structures and network topologies, and have been considered as potential energy-saving materials. In the synthesis of nanocomposites, the formation of CNT bundles and networks remains a challenge with regard to understanding how to measure and predict the properties of such large systems. Therefore, a mesoscale method such as a coarse-grained (CG) method should be developed to study the nanomechanical characterization of CNT bundle and network formation. In this thesis, the main contributions can be summarized as follows: (1) Explicit solutions for the cohesive energy between carbon nanotubes, graphene and substrates are obtained through continuum modeling of the van der Waals interaction between them. (2) The CG potentials of SWCNTs are established by a molecular mechanics model. (3) The binding energy between two parallel and crossing SWCNTs and MWCNTs is obtained by continuum modeling of the van der Waals interaction between them. Crystalline and amorphous polymers are increasingly used in modern industry as structural materials due to their important mechanical and physical properties.
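The simplest textbook instance of the continuum van der Waals modeling mentioned in contribution (1) above, given here only for orientation and not as the explicit CNT/graphene solutions derived in the thesis, is the interaction of a single atom with an infinite plane: integrating the Lennard-Jones pair potential V(r) = 4ε[(σ/r)¹² − (σ/r)⁶] over a plane with areal atom density ρ_s yields the so-called 10-4 surface potential.

```latex
% Atom at height z above an infinite plane with areal atom density \rho_s:
V(z) = 2\pi\epsilon\sigma^{2}\rho_s
       \left[ \frac{2}{5}\left(\frac{\sigma}{z}\right)^{10}
            - \left(\frac{\sigma}{z}\right)^{4} \right].
% Cohesive energies between two bodies (tube-tube, tube-substrate, sheet-sheet)
% follow from a further integration of this expression over the second body.
```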
For crystalline polyethylene (PE), despite its importance and the available MD simulation and continuum model studies, the link between molecular and continuum descriptions of its mechanical properties is still not well established. For amorphous polymers, the effect of chain length and temperature on their elastic and elastic-plastic properties has been reported based on united-atom (UA) and CG MD simulations in our previous work. However, the effect of the chain length and temperature on the failure behavior is not yet well understood. In particular, the failure behavior under shear has scarcely been reported in previous work. Therefore, understanding the molecular origins of macroscopic fracture behavior, such as the fracture energy, is a fundamental scientific challenge. In this thesis, the main contributions can be summarized as follows: (1) An analytical molecular mechanics model is developed to obtain the size-dependent elastic properties of crystalline PE. (2) We show that the two molecular mechanics models, the stick-spiral and the beam models, predict considerably different mechanical properties of materials based on energy equivalence. The difference between the two models is independent of the material. (3) The dependence of the tensile and shear failure behavior of amorphous polymers on chain length and temperature is scrutinized using molecular dynamics simulations. Finally, the influence of the dispersion of a polymer wrapped around two neighbouring SWNTs on their load transfer is investigated by molecular dynamics (MD) simulations, in which the effects of the SWNTs' position, the polymer chain length and the temperature on the interaction force are systematically studied. T3 - ISM-Bericht // Institut für Strukturmechanik, Bauhaus-Universität Weimar - 2014,1 KW - Mehrskalenmodell KW - Kohlenstoff Nanoröhre KW - Polymere KW - Multiscale modeling KW - Carbon nanotubes KW - Polymers Y1 - 2014 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20140130-21078 ER - TY - THES A1 - Schrader, Kai T1 - Hybrid 3D simulation methods for the damage analysis of multiphase composites T1 - Hybride 3D Simulationsmethoden zur Abbildung der Schädigungsvorgänge in Mehrphasen-Verbundwerkstoffen N2 - Modern digital material approaches for the visualization and simulation of heterogeneous materials make it possible to investigate the behavior of complex multiphase materials with their physically nonlinear material response at various scales. However, these computational techniques require extensive hardware resources with respect to computing power and main memory to numerically solve large-scale discretized models in 3D. Due to a very high number of degrees of freedom, which may rapidly increase to the two-digit million range, the limited hardware resources have to be utilized in the most efficient way to enable execution of the numerical algorithms in minimal computation time. Hence, in the field of computational mechanics, various methods and algorithms can lead to an optimized runtime behavior of nonlinear simulation models; several such approaches are proposed and investigated in this thesis. Today, the numerical simulation of damage effects in heterogeneous materials is performed by the adaption of multiscale methods.
A consistent modeling in three-dimensional space with an appropriate discretization resolution on each scale (based on a hierarchical or concurrent multiscale model), however, still poses computational challenges with respect to the convergence behavior, the scale transition and the solver performance of the weakly coupled problems. The computational efficiency and the distribution among the available hardware resources (often based on a parallel hardware architecture) can be significantly improved. In the past years, high-performance computing (HPC) and graphics processing unit (GPU) based computation techniques were established for the investigation of scientific objectives. Their application results in the modification of existing and the development of new computational methods for the numerical implementation, which makes it possible to take advantage of massively clustered computer hardware resources. In the field of numerical simulation in material science, e.g. within the investigation of damage effects in multiphase composites, the suitability of such models is often restricted by the number of degrees of freedom (d.o.f.s) of the three-dimensional spatial discretization. This proves to be difficult for the type of implementation method used for the nonlinear simulation procedure and, at the same time, has a great influence on memory demand and computational time. In this thesis, a hybrid discretization technique has been developed for the three-dimensional discretization of a three-phase material, which respects the numerical efficiency of nonlinear (damage) simulations of these materials. The increase in computational efficiency is enabled by the improved scalability of the numerical algorithms. Consequently, substructuring methods for partitioning the hybrid mesh were implemented, tested and adapted to the HPC computing framework using several hundred CPU (central processing unit) nodes for building the finite element assembly. A memory-efficient, iterative and parallelized equation solver combined with a special preconditioning technique for solving the underlying equation system was modified and adapted to enable combined CPU and GPU based computations. Hence, the author recommends applying the substructuring method to hybrid meshes, which respects the different material phases and their mechanical behavior and makes it possible to split the structure into elastic and inelastic parts. The consideration of the nonlinear material behavior, specified for the corresponding phase, is then limited to the inelastic domains only, which decreases the computing time of the nonlinear procedure. Due to the high numerical effort of such simulations, an alternative approach for the nonlinear finite element analysis, based on sequential linear analysis, was implemented with respect to scalable HPC. The incremental-iterative procedure in finite element analysis (FEA) during the nonlinear step was then replaced by a sequence of linear FE analyses whenever damage in critical regions occurred, known in the literature as the saw-tooth approach. As a result, qualitative (smeared) crack initiation in 3D multiphase specimens has been efficiently simulated.
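The record above relies on a preconditioned iterative solver for large sparse symmetric positive-definite systems; the sketch below is a minimal serial analogue only (a Jacobi-preconditioned conjugate gradient iteration), not the parallel CPU/GPU solver or the special preconditioner of the thesis.

```python
# Minimal sketch: Jacobi (diagonal) preconditioned conjugate gradient for A x = b.
import numpy as np

def pcg(A, b, tol=1e-10, max_iter=1000):
    """Solve A x = b for symmetric positive-definite A with Jacobi preconditioning."""
    M_inv = 1.0 / np.diag(A)            # preconditioner: inverse of the diagonal of A
    x = np.zeros_like(b)
    r = b - A @ x                       # initial residual
    z = M_inv * r
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p       # update search direction
        rz = rz_new
    return x

# small SPD test system (1D Laplacian-like stiffness matrix)
n = 50
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
x = pcg(A, b)
print("residual norm:", np.linalg.norm(b - A @ x))
```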
T3 - ISM-Bericht // Institut für Strukturmechanik, Bauhaus-Universität Weimar - 2013,2 KW - high-performance computing KW - finite element method KW - heterogeneous material KW - domain decomposition KW - scalable smeared crack analysis KW - FEM KW - multiphase KW - damage KW - HPC KW - solver Y1 - 2012 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20131021-20595 ER - TY - THES A1 - Brehm, Maik T1 - Vibration-based model updating: Reduction and quantification of uncertainties N2 - Numerical models and their combination with advanced solution strategies are standard tools for many engineering disciplines to design or redesign structures and to optimize designs with the purpose of improving specific requirements. As the successful application of numerical models depends on their suitability to represent the behavior related to the intended use, they should be validated by experimentally obtained results. If the discrepancy between numerically derived and experimentally obtained results is not acceptable, a model revision or a revision of the experiment needs to be considered. Model revision is divided into two classes, model updating and a basic revision of the numerical model. The presented thesis is related to a special branch of model updating, vibration-based model updating. Vibration-based model updating is a tool to improve the correlation of the numerical model by adjusting uncertain model input parameters by means of results extracted from vibration tests. Evidently, uncertainties related to the experiment, the numerical model, or the applied numerical solving strategies can influence the correctness of the identified model input parameters. The reduction of uncertainties for two critical problems and the quantification of uncertainties related to the investigation of several nominally identical structures are the main emphases of this thesis. First, the reduction of uncertainties by optimizing reference sensor positions is considered. The presented approach relies on predicted power spectral amplitudes and an initial finite element model as a basis to define the assessment criterion for predefined sensor positions. In combination with geometry-based design variables, which represent the sensor positions, genetic and particle swarm optimization algorithms are applied. The applicability of the proposed approach is demonstrated on a numerical benchmark study of a simply supported beam and a case study of a real test specimen. Furthermore, the theory of determining the predicted power spectral amplitudes is validated with results from vibration tests. Second, the possibility of reducing uncertainties related to an inappropriate assignment of numerically derived and experimentally obtained modes is investigated. In the context of vibration-based model updating, the correct pairing is essential. The most common criterion for indicating corresponding mode shapes is the modal assurance criterion. Unfortunately, this criterion fails in certain cases and is not reliable for automatic approaches. Hence, an alternative criterion, the energy-based modal assurance criterion, is proposed. This criterion combines the mathematical characteristic of orthogonality with the physical properties of the structure by means of modal strain energies. A numerical example and a case study with experimental data are presented to show the advantages of the proposed energy-based modal assurance criterion in comparison to the traditional modal assurance criterion.
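The modal assurance criterion referred to above has the standard form below (written for real-valued mode shapes); the energy-based variant proposed in the thesis additionally weights the comparison by modal strain energies, and its exact expression is not reproduced here.

```latex
% Modal assurance criterion for a numerical mode shape \phi_i and an
% experimental mode shape \phi_j (values close to 1 indicate corresponding modes):
\mathrm{MAC}(\boldsymbol{\phi}_i, \boldsymbol{\phi}_j) =
  \frac{\left| \boldsymbol{\phi}_i^{\mathsf{T}} \boldsymbol{\phi}_j \right|^{2}}
       {\left( \boldsymbol{\phi}_i^{\mathsf{T}} \boldsymbol{\phi}_i \right)
        \left( \boldsymbol{\phi}_j^{\mathsf{T}} \boldsymbol{\phi}_j \right)}
  \in [0, 1].
```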
Third, the application of optimization strategies combined with information-theory-based objective functions is analyzed for the purpose of stochastic model updating. This approach serves as an alternative to the common sensitivity-based stochastic model updating strategies, whose success depends strongly on the defined initial model input parameters. In contrast, approaches based on optimization strategies can be more flexible. It can be demonstrated that the investigated nature-inspired optimization strategies in combination with the Bhattacharyya distance and the Kullback-Leibler divergence are appropriate. The obtained accuracies and the respective computational effort are comparable with those of sensitivity-based stochastic model updating strategies. The application of model updating procedures to improve the quality and suitability of a numerical model is always related to additional costs. The presented innovative approaches will contribute to reducing and quantifying uncertainties within a vibration-based model updating process. Therefore, the increased benefit can compensate for the additional effort which is necessary to apply model updating procedures. N2 - Eine typische Anwendung von numerischen Modellen und den damit verbundenen numerischen Lösungsstrategien ist das Entwerfen oder Ertüchtigen von Strukturen und das Optimieren von Entwürfen zur Verbesserung spezifischer Eigenschaften. Der erfolgreiche Einsatz von numerischen Modellen ist abhängig von der Eignung des Modells bezüglich der vorgesehenen Anwendung. Deshalb ist eine Validierung mit experimentellen Ergebnissen sinnvoll. Zeigt die Validierung inakzeptable Unterschiede zwischen den Ergebnissen des numerischen Modells und des Experiments, sollte das numerische Modell oder das experimentelle Vorgehen verbessert werden. Für die Modellverbesserung gibt es zwei verschiedene Möglichkeiten, zum einen die Kalibrierung des Modells und zum anderen die grundsätzliche Änderung von Modellannahmen. Die vorliegende Dissertation befasst sich mit der Kalibrierung von numerischen Modellen auf der Grundlage von Schwingungsversuchen. Modellkalibrierung ist eine Methode zur Verbesserung der Korrelation zwischen einem numerischen Modell und einer realen Struktur durch Anpassung von Modelleingangsparametern unter Verwendung von experimentell ermittelten Daten. Unsicherheiten bezüglich des numerischen Modells, des Experiments und der angewandten numerischen Lösungsstrategie beeinflussen entscheidend die erzielbare Qualität der identifizierten Modelleingangsparameter. Die Schwerpunkte dieser Dissertation sind die Reduzierung von Unsicherheiten für zwei kritische Probleme und die Quantifizierung von Unsicherheiten extrahiert aus Experimenten nominal gleicher Strukturen. Der erste Schwerpunkt beschäftigt sich mit der Reduzierung von Unsicherheiten durch die Optimierung von Referenzsensorpositionen. Das Bewertungskriterium für vordefinierte Sensorpositionen basiert auf einer theoretischen Abschätzung von Amplituden der Spektraldichtefunktion und einem dazugehörigen Finite Elemente Modell. Die Bestimmung der optimalen Konfiguration erfolgt durch eine Anwendung von Optimierungsmethoden basierend auf genetischen Algorithmen und Schwarmintelligenzen. Die Anwendbarkeit dieser Methoden wurde anhand einer numerischen Studie an einem einfach gelagerten Balken und einem real existierenden komplexen Versuchskörper nachgewiesen.
Mit Hilfe einer experimentellen Untersuchung wird die Abschätzung der statistischen Eigenschaften der Antwortspektraldichtefunktionen an diesem Versuchskörper validiert. Im zweiten Schwerpunkt konzentrieren sich die Untersuchungen auf die Reduzierung von Unsicherheiten, hervorgerufen durch ungeeignete Kriterien zur Eigenschwingformzuordnung. Diese Zuordnung ist entscheidend für Modellkalibrierungen basierend auf Schwingungsversuchen. Das am Häufigsten verwendete Kriterium zur Zuordnung ist das modal assurance criterion. In manchen Anwendungsfällen ist dieses Kriterium jedoch kein zuverlässiger Indikator. Das entwickelte alternative Kriterium, das energy-based modal assurance criterion, kombiniert das mathematische Merkmal der Orthogonalität mit den physikalischen Eigenschaften der untersuchten Struktur mit Hilfe von modalen Formänderungsarbeiten. Ein numerisches Beispiel und eine Sensitivitätsstudie mit experimentellen Daten zeigen die Vorteile des vorgeschlagenen energiebasierten Kriteriums im Vergleich zum traditionellen modal assurance criterion. Die Anwendung von Optimierungsstrategien auf stochastische Modellkalibrierungsverfahren wird im dritten Schwerpunkt analysiert. Dabei werden Verschiedenheitsmaße der Informationstheorie zur Definition von Zielfunktionen herangezogen. Dieser Ansatz stellt eine Alternative zu herkömmlichen Verfahren dar, welche auf gradientenbasierten Sensitivitätsmatrizen zwischen Eingangs- und Ausgangsgrößen beruhen. Deren erfolgreicher Einsatz ist abhängig von den Anfangswerten der Eingangsgrößen, wobei die vorgeschlagenen Optimierungsstrategien weniger störanfällig sind. Der Bhattacharyya Abstand und die Kullback-Leibler Divergenz als Zielfunktion, kombiniert mit stochastischen Optimierungsverfahren, erwiesen sich als geeignet. Bei vergleichbarem Rechenaufwand konnten ähnliche Genauigkeiten wie bei den Modellkalibrierungsverfahren, die auf Sensitivitätsmatrizen basieren, erzielt werden. Die Anwendung von Modellkalibrierungsverfahren zur Verbesserung der Eignung eines numerischen Modells für einen bestimmten Zweck ist mit einem Mehraufwand verbunden. Die präsentierten innovativen Verfahren tragen zu einer Reduzierung und Quantifizierung von Unsicherheiten innerhalb eines Modellkalibrierungsprozesses basierend auf Schwingungsversuchen bei. Mit dem zusätzlich generierten Nutzen kann der Mehraufwand, der für eine Modellkalibrierung notwendig ist, nachvollziehbar begründet werden. T2 - Modellkalibrierung basierend auf Schwingungsversuchen: Reduzierung und Quantifizierung von Unsicherheiten T3 - ISM-Bericht // Institut für Strukturmechanik, Bauhaus-Universität Weimar - 2011,1 KW - Dynamik KW - Optimierung KW - Modellkalibrierung KW - Modezuordung KW - optimale Sensorpositionierung KW - model updating KW - mode pairing KW - optimal sensor positions KW - dissimilarity measures KW - optimization Y1 - 2011 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20110926-15553 ER -
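For reference, the two information-theoretic dissimilarity measures named in the preceding record, used there as objective functions for stochastic model updating, are commonly defined for probability densities p and q as follows; only these textbook definitions are given here, not the specific formulation of the thesis.

```latex
% Kullback-Leibler divergence and Bhattacharyya distance between densities p and q:
D_{\mathrm{KL}}(p \,\|\, q) = \int p(x)\,\ln\frac{p(x)}{q(x)}\,\mathrm{d}x ,
\qquad
D_B(p, q) = -\ln \int \sqrt{p(x)\,q(x)}\,\mathrm{d}x .
```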