
A coupled thermo-hydro-mechanical model of jointed hard rock for compressed air energy storage
(2014)

Renewable energy resources such as wind and solar are intermittent, which causes instability when they are connected to the electricity grid. Compressed air energy storage (CAES) provides an economically and technically viable solution to this problem by utilizing a subsurface rock cavern to store the electricity generated from renewable sources in the form of compressed air. Though CAES has been used for over three decades, it has been restricted to salt rock or aquifers for reasons of air tightness. In this paper, the technical feasibility of utilizing hard rock for CAES is investigated using a coupled thermo-hydro-mechanical (THM) model of non-isothermal gas flow. Governing equations are derived from the principles of energy balance, mass balance, and static equilibrium. Cyclic volumetric mass source and heat source models are applied to simulate gas injection and production. Evaluation is carried out for intact rock and rock with a discrete crack, respectively. In both cases, the heat and pressure losses using air mass control and supplementary air injection are compared.
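The cyclic injection and production described above can be illustrated with a minimal schedule function. The phase durations, rates, and function name below are illustrative assumptions for a daily charge/store/discharge cycle, not values from the paper.

```python
def cyclic_mass_source(t, q_inj=1.0, t_inj=8 * 3600.0, t_idle=4 * 3600.0,
                       t_prod=8 * 3600.0, period=24 * 3600.0):
    """Piecewise-constant daily schedule: inject, idle, produce, idle.

    Returns the volumetric mass source (positive = injection,
    negative = production). All names and values are illustrative.
    """
    tau = t % period
    if tau < t_inj:
        return q_inj                       # compression / charging phase
    if tau < t_inj + t_idle:
        return 0.0                         # storage phase
    if tau < t_inj + t_idle + t_prod:
        return -q_inj * t_inj / t_prod     # discharge, mass-balanced over a cycle
    return 0.0                             # idle phase
```

The production rate is scaled so that injected and produced mass balance over one period, mirroring the supplementary air-injection bookkeeping mentioned in the abstract.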

Explicit solutions for the cohesive energy between carbon nanotubes, graphene and substrates are obtained through continuum modeling of the van der Waals interaction between them. The dependence of the cohesive energy on their size, spacing and crossing angles is analyzed. Checking against full-atom molecular dynamics calculations and available experimental results shows that the continuum solution has high accuracy. The equilibrium distances between the nanotubes, graphene and substrates with minimum cohesive energy are also provided explicitly. The obtained analytical solution should be of great help in understanding the interaction between nanostructures and substrates, and in designing composites and nanoelectromechanical systems.
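As a toy illustration of how such an equilibrium spacing arises, the sketch below minimises a generic Lennard-Jones 9-3 surface potential numerically and checks it against the analytic minimiser h* = (2/5)^(1/6)·σ. This generic wall potential is an assumption standing in for the paper's explicit continuum solutions, which it does not reproduce.

```python
def lj_93_energy(h, eps=1.0, sigma=1.0):
    """Lennard-Jones 9-3 surface potential per unit area (an illustrative
    stand-in for a continuum vdW result between a sheet and a substrate)."""
    return eps * ((2.0 / 15.0) * (sigma / h) ** 9 - (sigma / h) ** 3)

def equilibrium_spacing(eps=1.0, sigma=1.0, lo=0.5, hi=3.0, n=200001):
    """Locate the spacing that minimises the cohesive energy by a dense scan."""
    hs = [lo + i * (hi - lo) / (n - 1) for i in range(n)]
    return min(hs, key=lambda h: lj_93_energy(h, eps, sigma))

# The analytic minimiser of the 9-3 potential is h* = (2/5)**(1/6) * sigma.
```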

An analytical molecular mechanics model for the elastic properties of crystalline polyethylene
(2012)

We present an analytical model for the elastic properties of crystalline polyethylene based on a molecular mechanics approach. Along the polymer chain direction, the united-atom (UA) CH2-CH2 bond stretching and angle bending potentials are replaced with equivalent Euler-Bernoulli beams. Between any two polymer chains, explicit formulae are derived for the van der Waals interaction, represented by linear springs of different stiffnesses. The nine independent elastic constants are then evaluated systematically using these formulae. The analytical model is validated against our united-atom molecular dynamics (MD) simulations and against available all-atom molecular dynamics results in the literature. The established analytical model provides an efficient route for the mechanical characterization of crystalline polymers and related materials.
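The bond-to-beam replacement can be sketched with the identification commonly used in molecular structural mechanics, EA = k_r·L and EI = k_θ·L (with d = 4·√(k_θ/k_r) for a circular cross-section). Whether the paper uses exactly this mapping is an assumption; the numbers below are placeholders, not fitted UA force constants.

```python
import math

def beam_from_force_constants(k_r, k_theta, L):
    """Map a bond stretching constant k_r and an angle bending constant
    k_theta to equivalent Euler-Bernoulli beam properties, using the
    standard molecular-structural-mechanics identification."""
    EA = k_r * L                          # axial rigidity
    EI = k_theta * L                      # bending rigidity
    d = 4.0 * math.sqrt(k_theta / k_r)    # circular section: A/I = 16/d**2
    E = EA / (math.pi * d * d / 4.0)      # implied Young's modulus
    return EA, EI, d, E
```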

This thesis concerns the physical and mechanical interactions of carbon nanotubes (CNTs) and polymers, studied by multiscale modeling. CNTs have attracted considerable interest in view of their unique mechanical, electronic, thermal, optical and structural properties, which enable many potential applications.
Carbon nanotubes exist in several structural forms, from individual single-walled carbon nanotubes (SWCNTs) and multi-walled carbon nanotubes (MWCNTs) to carbon nanotube bundles and networks. The mechanical properties of SWCNTs and MWCNTs have been extensively studied by continuum modeling and molecular dynamics (MD) simulations over the past decade, since these properties are important for CNT-based devices. CNT bundles and networks feature outstanding mechanical performance, hierarchical structures and network topologies, and have been considered a potential energy-saving material. In the synthesis of nanocomposites, a remaining challenge is to understand, measure and predict the properties of the large systems formed by CNT bundles and networks. Therefore, a mesoscale method such as a coarse-grained (CG) method should be developed for the nanomechanical characterization of CNT bundle and network formation.
In this part of the thesis, the main contributions are as follows: (1) Explicit solutions for the cohesive energy between carbon nanotubes, graphene and substrates are obtained through continuum modeling of the van der Waals interaction between them. (2) The CG potentials of SWCNTs are established by a molecular mechanics model. (3) The binding energy between two parallel and crossing SWCNTs and MWCNTs is obtained by continuum modeling of the van der Waals interaction between them.
Crystalline and amorphous polymers are increasingly used in modern industry as structural materials due to their important mechanical and physical properties. For crystalline polyethylene (PE), despite its importance and the available MD simulations and continuum models, the link between molecular and continuum descriptions of its mechanical properties is still not well established. For amorphous polymers, the chain length and temperature effects on their elastic and elastic-plastic properties have been reported based on united-atom (UA) and CG MD simulations in our previous work. However, the effect of chain length and temperature on the failure behavior is not yet well understood. In particular, the failure behavior under shear has scarcely been reported in previous work. Therefore, understanding the molecular origins of macroscopic fracture behavior, such as the fracture energy, is a fundamental scientific challenge.
The further main contributions of this thesis are as follows: (1) An analytical molecular mechanics model is developed to obtain the size-dependent elastic properties of crystalline PE.
(2) We show that the two molecular mechanics models, the stick-spiral and the beam models, predict considerably different mechanical properties of materials based on energy equivalence, and that the difference between the two models is independent of the material. (3) The dependence of the tensile and shear failure behavior on chain length and temperature in amorphous polymers is scrutinized using MD simulations. Finally, the influence of the dispersion of polymer wrapped around two neighbouring SWNTs on their load transfer is investigated by MD simulations, in which the effects of the SWNTs' positions, the polymer chain length and the temperature on the interaction force are systematically studied.

Tensile and compressive strains can greatly affect the thermal conductivity of graphene nanoribbons (GNRs). However, the effect of shear strain, which is also one of the main strain effects, has not yet been studied systematically. In this work, we employ reverse non-equilibrium molecular dynamics (RNEMD) to systematically study the thermal conductivity of GNRs (with a model size of 4 nm × 15 nm) under shear strain. Our studies show that the thermal conductivity of GNRs is not sensitive to shear strain, decreasing by only 12–16% before the pristine structure breaks. Furthermore, the phonon frequencies and the changes in the microstructure of the GNRs, such as bond angles and bond lengths, are analyzed to explain this tendency of the thermal conductivity. The results show that the main influence of shear strain is on the in-plane phonon density of states (PDOS), whose G band (higher-frequency peaks) shifts to lower frequencies, thereby decreasing the thermal conductivity. The unique thermal properties of GNRs under shear strain suggest great potential for graphene nanodevices, particularly in thermal management and thermoelectric applications.
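In the Müller-Plathe RNEMD scheme referenced above, the conductivity follows from Fourier's law: the kinetic energy exchanged between slabs drives heat fluxes through both halves of the periodic box (hence the factor 2), and dividing the flux by the measured temperature gradient yields κ. A minimal post-processing sketch (symbol names and units are illustrative):

```python
def thermal_conductivity_rnemd(delta_E, t, area, dT_dx):
    """Mueller-Plathe RNEMD estimate of thermal conductivity.

    delta_E : total kinetic energy exchanged between hot/cold slabs
    t       : simulation time over which delta_E accumulated
    area    : cross-sectional area perpendicular to the flux
    dT_dx   : measured steady-state temperature gradient
    """
    J = delta_E / (2.0 * t * area)   # heat flux per direction (periodic box)
    return J / dT_dx                 # Fourier's law: kappa = J / (dT/dx)
```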

Identification of flaws in structures is a critical element in the management of maintenance and quality assurance processes in engineering. Nondestructive testing (NDT) techniques based on a wide range of physical principles have been developed and are used in common practice for structural health monitoring. However, basic NDT techniques are usually limited in their ability to provide accurate information on the locations, dimensions and shapes of flaws. One alternative for extracting additional information from the results of NDT is to complement it with a computational model that provides a detailed analysis of the physical process involved and enables accurate identification of the flaw parameters. The aim here is to develop strategies to uniquely identify cracks in two-dimensional (2D) structures under dynamic loading.
A local NDT technique that combines the eXtended Finite Element Method (XFEM) with dynamic loading in order to identify cracks in structures quickly and accurately is developed in this dissertation. The Newmark-β time integration method with Rayleigh damping is used for the time integration. We apply the Nelder-Mead (NM) and Quasi-Newton (QN) methods to identify the crack tip in a plate. The inverse problem is solved iteratively, with XFEM used for solving the forward problem in each iteration. For a time-harmonic excitation with a single frequency and a short-duration signal measured along part of the external boundary, the crack is detected through the solution of an inverse time-dependent problem. We show that dynamic loads are more effective for crack detection problems than static loads. Moreover, we tested different dynamic loads and find that the NM method works more efficiently under a harmonic load than under a pounding load, while the QN method achieves almost the same results for both load types.
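The iterative inverse procedure can be sketched with a toy problem: a forward model predicts boundary-sensor responses for a candidate crack-tip location, and a derivative-free search minimises the misfit to measured data. Everything below (the surrogate forward model, the sensor layout, and a pattern search standing in for Nelder-Mead) is an illustrative assumption, not the dissertation's XFEM solver.

```python
def forward(tip, sensors):
    """Hypothetical surrogate for the XFEM forward solve: a smooth response
    amplitude at each boundary sensor for a crack tip at `tip`."""
    return [1.0 / (1e-6 + (sx - tip[0]) ** 2 + (sy - tip[1]) ** 2)
            for sx, sy in sensors]

def misfit(tip, sensors, measured):
    """Least-squares mismatch between predicted and measured responses."""
    return sum((p - m) ** 2 for p, m in zip(forward(tip, sensors), measured))

def identify_tip(sensors, measured, start, step=0.5, tol=1e-4):
    """Derivative-free pattern search standing in for the Nelder-Mead loop;
    as in the dissertation's procedure, the forward model is re-evaluated
    in every iteration of the inverse solve."""
    x = list(start)
    while step > tol:
        best = misfit(x, sensors, measured)
        moved = False
        for dx, dy in ((step, 0.0), (-step, 0.0), (0.0, step), (0.0, -step)):
            cand = [x[0] + dx, x[1] + dy]
            val = misfit(cand, sensors, measured)
            if val < best:
                x, best, moved = cand, val, True
        if not moved:
            step *= 0.5   # refine the search radius once no move improves
    return x
```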
A global strategy, Multilevel Coordinate Search (MCS) combined with XFEM (XFEM-MCS) under dynamic electric loading, is proposed in this dissertation to detect multiple cracks in 2D piezoelectric plates. The Newmark-β method is employed for the time integration, and in each iteration the forward problem is solved by XFEM for the various cracks. The objective functional is minimized using the global search algorithm MCS. The test problems show that the XFEM-MCS algorithm under dynamic electric loading can be effectively employed for the detection of multiple cracks in piezoelectric materials, and it proves to be robust in identifying defects in piezoelectric structures.
Fiber-reinforced composites (FRCs) are extensively applied in practical engineering since they have high stiffness and strength. Experiments reveal a so-called interphase zone, i.e. the space between the outer interface of the fiber and the inner interface of the matrix. The interphase strength between the fiber and the matrix strongly affects the mechanical properties as a result of the large interface-to-volume ratio. For the purpose of understanding the mechanical properties of FRCs with a functionally graded interphase (FGI), a closed-form expression for the interface strength between a fiber and a matrix is obtained in this dissertation using a continuum modeling approach based on the van der Waals (vdW) forces. Based on the interatomic potential, we develop a new modified nonlinear cohesive law, which is applied to study the interface delamination of FRCs with FGI under different loadings. The analytical solutions show that the delamination behavior strongly depends on the interphase thickness, the fiber radius, and the Young's moduli and Poisson's ratios of the fiber and the matrix.
Thermal conductivity is the property of a material to conduct heat.
With the development and deepening research of 2D materials, especially graphene and molybdenum disulfide (MoS2), the thermal conductivity of 2D materials has attracted wide attention. The thermal conductivity of graphene nanoribbons (GNRs) is found by classical molecular dynamics (MD) simulations to decrease under tensile strain. Hence, strain effects can play a key role in the continuous tunability and applicability of the thermal conductivity of graphene at the nanoscale, while the degradation of thermal conductivity is an obstacle for thermal management applications. Up to now, the thermal conductivity of graphene under shear deformation has not been investigated. From a practical point of view, good thermal management of GNRs has significant potential for future GNR-based thermal nanodevices, which can greatly improve the performance of nanosized devices affected by heat dissipation. Since graphene is a thin membrane structure, it is also important to understand its wrinkling behavior under shear deformation.
MoS2 exists in the stable semiconducting 1H phase (1H-MoS2), while the metallic 1T phase (1T-MoS2) is unstable at ambient conditions. Much attention has been focused on studying the nonlinear optical properties of 1H-MoS2. In very recent research, the 1T-type monolayer crystals of TMDCs, MX2 (MoS2, WS2, ...), were reported to have an intrinsic in-plane negative Poisson's ratio. Fortunately, nearly at the same time, unprecedented long-term (>3 months) air stability of 1T-MoS2 was achieved by using the donor lithium hydride (LiH). Therefore, it is very important to study the thermal conductivity of 1T-MoS2.
The thermal conductivity of graphene under shear strain is systematically studied in this dissertation by MD simulations. The results show that, in contrast to the dramatic decrease of the thermal conductivity of graphene under uniaxial tension, the thermal conductivity of graphene is not sensitive to shear strain and decreases by only 12-16%. Wrinkles evolve when the shear strain is around 5%-10%, but the thermal conductivity barely changes.
The thermal conductivities of single-layer 1H-MoS2 (1H-SLMoS2) and single-layer 1T-MoS2 (1T-SLMoS2) with different sample sizes, temperatures and strain rates have been studied systematically in this dissertation. We find that the thermal conductivities of 1H-SLMoS2 and 1T-SLMoS2 in both the armchair and the zigzag directions increase with increasing sample length, while increasing the sample width has only a minor effect on the thermal conduction of these two structures. With respect to this size effect, the thermal conductivity of 1H-SLMoS2 is smaller than that of 1T-SLMoS2. Furthermore, the temperature results show that the thermal conductivities of both 1H-SLMoS2 and 1T-SLMoS2 decrease with increasing temperature. The thermal conductivities of 1H-SLMoS2 and 1T-SLMoS2 are nearly the same (difference <6%) in both chiral orientations at corresponding temperatures, especially in the armchair direction (difference <2.8%). Moreover, we find that the strain effects on the thermal conductivities of 1H-SLMoS2 and 1T-SLMoS2 are different: the thermal conductivity decreases with increasing tensile strain for 1T-SLMoS2, while it fluctuates with growing strain for 1H-SLMoS2. Finally, we find that the thermal conductivity of same-sized 1H-SLMoS2 is similar to that of the strained 1H-SLMoS2 structure.

Renewable energy use is on the rise, and these alternative resources of energy can help combat climate change. Around 80% of the world's electricity comes from coal and petroleum; however, renewables are the fastest-growing source of energy in the world. Solar, wind, hydro, geothermal and biogas are the most common forms of renewable energy. Among them, wind energy is emerging as a reliable and large-scale source of power production. Recent research and confidence in its performance have led to the construction of more and bigger wind turbines around the world. As wind turbines are getting bigger, concerns regarding their safety are also being discussed. Wind turbines are expensive machinery to construct, and the enormous capital investment is one of the main reasons why many countries are unable to adopt wind energy. Generally, a reliable wind turbine will perform better and help minimize the cost of operation. If a wind turbine fails, it is a loss of investment and can be harmful for the surrounding habitat. This thesis aims at estimating the reliability of an offshore wind turbine. A model of a jacket-type offshore wind turbine is prepared using the finite element software package ABAQUS and is evaluated against the structural failure criteria of the wind turbine tower. UQLab, a general uncertainty quantification framework developed at ETH Zürich, is used for the reliability analysis. Several probabilistic methods are included in the UQLab framework, including Monte Carlo simulation, the First Order Reliability Method and Adaptive Kriging Monte Carlo simulation. This reliability study is performed only for the structural failure of the wind turbine, but it can be extended to many other forms of failure, e.g. reliability of power production or reliability of different component failures.
It is a useful tool for estimating the reliability of future wind turbines, which could result in safer and better-performing wind turbines.
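The crude Monte Carlo approach among the methods listed above amounts to counting limit-state violations over random samples. A self-contained sketch with a toy resistance-versus-load limit state follows; the distributions, names, and sample size are illustrative assumptions, not the thesis's turbine model.

```python
import random

def failure_probability(limit_state, sample, n=100_000, seed=0):
    """Crude Monte Carlo estimate of the failure probability P[g(X) <= 0].
    `limit_state` is the performance function g (failure when g <= 0) and
    `sample` draws one random input vector."""
    rng = random.Random(seed)
    fails = sum(1 for _ in range(n) if limit_state(sample(rng)) <= 0.0)
    return fails / n

# Toy limit state: resistance R ~ N(5, 1) versus load S ~ N(3, 1); g = R - S.
# The exact failure probability is Phi(-2/sqrt(2)), roughly 0.079.
def draw(rng):
    return rng.gauss(5.0, 1.0), rng.gauss(3.0, 1.0)

def g(x):
    r, s = x
    return r - s
```

The same loop structure underlies the more efficient estimators mentioned in the abstract, which replace the raw sampling with importance-directed or surrogate-assisted evaluations.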

In recent years, the demand for dynamic analyses of existing structures in civil engineering has increased remarkably. These analyses are mainly based on numerical models, so the generated results depend on the quality of the models used. It is therefore very important that the models describe the considered systems such that the behaviour of the physical structure is realistically represented. As any model is based on assumptions, a certain degree of uncertainty is always present in the results of a simulation based on the respective numerical model. To minimise these uncertainties in the prediction of the response of a structure to a certain loading, it has become common practice to update or calibrate the parameters of a numerical model based on observations of the structural behaviour of the respective existing system.
The determination of the behaviour of an existing structure requires experimental investigations. If the numerical analyses concern the dynamic response of a structure, it is sensible to direct the experimental investigations towards the identification of the dynamic structural behaviour, which is determined by the modal parameters of the system. In consequence, several methods for the experimental identification of modal parameters have been developed since the 1980s.
Due to various technical restraints in civil engineering which limit the possibilities of exciting a structure with economically reasonable effort, several methods have been developed that allow a modal identification from tests with ambient excitation. The approach of identifying modal parameters only from measurements of the structural response, without precise knowledge of the excitation, is known as output-only or operational modal analysis.
Since operational modal analysis (OMA) can be considered a link between numerical modelling and simulation on the one hand and the dynamic behaviour of an existing structure on the other, the respective algorithms connect the concepts of structural dynamics with the mathematical tools applied in the processing of experimental data. Accordingly, the related theoretical topics are reviewed after an introduction to the subject.
Several OMA methods have been developed over the last decades. The most established algorithms are presented here, and their application is illustrated by means of both a small numerical example and an experimental one. Since experimentally obtained results are always subject to manifold influences, an appropriate postprocessing of the results is necessary for a proper quality assessment. This quality assessment does not only require suitable indicators but should also include the quantification of uncertainties.
One special feature of modal testing is that it is common to instrument the structure in different sensor setups to improve the spatial resolution of the identified mode shapes. The modal information identified from tests in several setups needs to be merged a posteriori. Algorithms to cope with this problem are also presented.
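The a-posteriori merging step can be sketched as follows: partial mode-shape vectors from different setups share a few reference DOFs, and a least-squares scale factor over those references glues the setups together. The dict-based representation and single-reference-setup assumption below are illustrative simplifications, not a specific algorithm from the thesis.

```python
def merge_setups(setups, ref_keys):
    """Merge partial mode-shape vectors identified in separate sensor
    setups. Each setup is a dict {dof_name: amplitude}; the shared
    reference DOFs in `ref_keys` fix the relative scaling between setups."""
    base = setups[0]
    merged = dict(base)
    for s in setups[1:]:
        # least-squares scale factor between the shared reference DOFs
        num = sum(base[k] * s[k] for k in ref_keys)
        den = sum(s[k] * s[k] for k in ref_keys)
        alpha = num / den
        for k, v in s.items():
            merged.setdefault(k, alpha * v)   # keep reference DOFs from base
    return merged
```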
Because the amount of data generated in modal tests can become very large, manual processing can become extremely expensive or even impossible, for example in the case of long-term continuous structural monitoring. In these situations, automated analysis and postprocessing are essential. Descriptions of the respective methodologies are therefore also included in this work.
Every structural system in civil engineering is unique, and so every identification of modal parameters has its specific challenges. Some aspects that can be faced in practical applications of operational modal analysis are presented and discussed in a chapter dedicated to specific problems that an analyst may have to overcome. Case studies of systems with very closely spaced modes and with limited accessibility, as well as the application of different OMA methods, are described and discussed. In this context the focus is put on the several types of uncertainty that may occur in the multiple stages of an operational modal analysis. In the literature, only very specific uncertainties at certain stages of the analysis are addressed. Here, the topic of uncertainties is considered in a broader sense, and approaches for treating the respective problems are suggested.
Eventually, it is concluded that the methodologies of operational modal analysis and the related technical solutions are already well engineered. However, as in any discipline that includes experiments, a certain degree of uncertainty always remains in the results. From these conclusions, a demand for further research and development has been derived, directed towards the minimisation of these uncertainties and towards a corresponding optimisation of the steps and parameters of an operational modal analysis.

Advances in nanotechnology lead to the development of nano-electro-mechanical systems (NEMS) such as nanomechanical resonators with ultra-high resonant frequencies. Ultra-high-frequency resonators have recently received significant attention for wide-ranging applications such as molecular separation, molecular transportation, ultra-high-sensitivity sensing, high-frequency signal processing, and biological imaging. It is well known that at the micrometer length scale, first-principles techniques, the most accurate approach, pose serious limitations for comparisons with experimental studies. For such larger sizes, classical molecular dynamics (MD) simulations, which require interatomic potentials, are desirable. Additionally, a mesoscale method such as the coarse-grained (CG) method is useful for supporting simulations of even larger system sizes.
Furthermore, quasi-two-dimensional (Q2D) materials have attracted intensive research interest over the past decades due to their many novel properties. However, the energy dissipation mechanisms of nanomechanical resonators based on several Q2D materials are still unknown. The main issues addressed in this work include the development of CG models for molybdenum disulphide (MoS2), the investigation of energy dissipation mechanisms in black phosphorus (BP) nanoresonators, and an application of graphene nanoresonators. The primary coverage and results of the dissertation are as follows:
Method development. Firstly, a two-dimensional (2D) CG model for single-layer MoS2 (SLMoS2) is analytically developed. The Stillinger-Weber (SW) potential for this 2D CG model is further parametrized, in which all SW geometrical parameters are determined analytically according to the equilibrium condition for each individual potential term, while the SW energy parameters are derived analytically based on the valence force field model. Next, the 2D CG model is further simplified to a one-dimensional (1D) CG model, which describes the 2D SLMoS2 structure using a 1D chain model. This 1D CG model is applied to investigate the relaxed configuration and the resonant oscillation of folded SLMoS2. Owing to the simplicity of the 1D CG model, the relaxed configuration of the folded SLMoS2 is determined analytically, and the resonant oscillation frequency is derived analytically. Considering the increasing interest in studying the properties of other 2D layered materials, and in particular those in the semiconducting transition metal dichalcogenide class like MoS2, the CG models proposed in the current work provide valuable simulation approaches.
Mechanism understanding. Two energy dissipation mechanisms of BP nanoresonators are focused on exclusively, i.e. mechanical strain effects and defect effects (including vacancies and oxidation). Vacancy defects are an intrinsic damping factor for the quality (Q) factor, while mechanical strain and oxidation are extrinsic damping factors. Intrinsic dissipation (induced by thermal vibrations) in BP resonators (BPRs) is investigated first. Specifically, classical MD simulations are performed to examine the temperature dependence of the Q-factor of the single-layer BPR (SLBPR) along the armchair and zigzag directions, where a two-step fitting procedure is used to extract the frequency and Q-factor from the kinetic energy time history. The Q-factors of BPRs are evaluated through comparison with those of graphene and MoS2 nanoresonators. Next, the effects of mechanical strain, vacancies and oxidation on BP nanoresonators are investigated in turn. Considering the increasing interest in studying the properties of BP, and in particular the lack of theoretical studies of BPRs, the results of the current work provide a useful reference.
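The two-step extraction of frequency and Q-factor from a decaying time history can be sketched as below: frequency from zero-crossing spacing, then the decay rate γ from a log-linear fit to the peak envelope, with Q = ω/(2γ). This simplified form of the procedure is an assumption for illustration, not the dissertation's exact fit.

```python
import math

def extract_freq_and_q(times, signal):
    """Two-step estimate from a decaying oscillation: (1) frequency from
    the mean spacing of upward zero crossings, (2) decay rate gamma from a
    log-linear least-squares fit to the peak envelope; Q = omega/(2*gamma)."""
    # step 1: upward zero crossings give the oscillation frequency
    crossings = [times[i] for i in range(1, len(signal))
                 if signal[i - 1] < 0.0 <= signal[i]]
    freq = (len(crossings) - 1) / (crossings[-1] - crossings[0])
    # step 2: log-linear fit through the interior local maxima (the envelope)
    peaks = [(times[i], math.log(abs(signal[i])))
             for i in range(1, len(signal) - 1)
             if signal[i] > signal[i - 1] and signal[i] > signal[i + 1]]
    n = len(peaks)
    sx = sum(t for t, _ in peaks)
    sy = sum(y for _, y in peaks)
    sxx = sum(t * t for t, _ in peaks)
    sxy = sum(t * y for t, y in peaks)
    gamma = -(n * sxy - sx * sy) / (n * sxx - sx * sx)
    return freq, math.pi * freq / gamma   # Q = omega / (2 * gamma)
```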
Application. A novel application of graphene nanoresonators, using them to self-assemble small nanostructures such as water chains, is proposed. The underlying physics enabling this phenomenon is elucidated. In particular, drawing inspiration from macroscale self-assembly using the higher-order resonant modes of Chladni plates, classical MD simulations are used to investigate the self-assembly of water molecules using graphene nanoresonators. An analytic formula for the critical resonant frequency, based on the interaction between the water molecules and graphene, is provided. Furthermore, the properties of the water chains assembled by the graphene nanoresonators are studied.

This paper presents a novel numerical procedure based on the combination of an edge-based smoothed finite element method (ES-FEM) with a phantom-node method for 2D linear elastic fracture mechanics. In the standard phantom-node method, cracks are formulated by adding phantom nodes, and the cracked element is replaced by two new superimposed elements. This approach is quite simple to implement in existing explicit finite element programs. The shape functions associated with the discontinuous elements are similar to those of the standard finite elements, which simplifies the implementation in existing codes. The phantom-node method allows modeling discontinuities at an arbitrary location in the mesh. The ES-FEM model possesses a close-to-exact stiffness that is much softer than that of lower-order finite element methods (FEM). Taking advantage of both the ES-FEM and the phantom-node method, we introduce an edge-based strain smoothing technique for the phantom-node method. Numerical results show that the proposed method achieves high accuracy compared with the extended finite element method (XFEM) and other reference solutions.

Nanostructured materials are extensively applied in many fields of material science for new industrial applications, particularly in the automotive and aerospace industries, due to their exceptional physical and mechanical properties. Experimental testing of nanomaterials is expensive, time-consuming, challenging and sometimes unfeasible. Therefore, computational simulations have been employed as an alternative method to predict macroscopic material properties. The behavior of polymeric nanocomposites (PNCs) is highly complex.
The origins of macroscopic material properties reside in the properties and interactions taking place on finer scales. It is therefore essential to use a multiscale modeling strategy to properly account for all the length and time scales associated with these material systems, which span many orders of magnitude. Numerous multiscale models of PNCs have been established; however, most of them connect only two scales. Only a few multiscale models for PNCs bridge four length scales (nano-, micro-, meso- and macro-scales). In addition, nanomaterials are stochastic in nature, and the prediction of macroscopic mechanical properties is influenced by many factors, such as fine-scale features. Mechanical properties predicted by traditional approaches deviate significantly from the values measured in experiments because the uncertainty of material features is neglected. This discrepancy indicates that the effective macroscopic properties of materials are highly sensitive to various sources of uncertainty, such as loading and boundary conditions and material characteristics, while very few stochastic multiscale models for PNCs have been developed. Therefore, it is essential to construct PNC models within the framework of stochastic modeling and to quantify the stochastic effect of the input parameters on the macroscopic mechanical properties of these materials.
This study aims to develop computational models at four length scales (nano-, micro-, meso- and macro-scales) and hierarchical upscaling approaches bridging length scales from nano to macro. A framework for uncertainty quantification (UQ), applied to predict the mechanical properties of the PNCs as functions of material features at different scales, is studied. Sensitivity and uncertainty analysis are of great help in quantifying the effects of the input parameters, considering both main and interaction effects, on the mechanical properties of the PNCs. To achieve this major goal, the following tasks are carried out:
At the nano-scale, molecular dynamics (MD) simulations were used to investigate the deformation mechanisms of glassy amorphous polyethylene (PE) as functions of temperature and strain rate. Steered molecular dynamics (SMD) simulations were also employed to investigate the interfacial characteristics of the PNCs.
At the micro-scale, we developed an atomistic-based continuum model represented by a representative volume element (RVE), in which the SWNT's properties and the SWNT/polymer interphase are modeled at the nano-scale, while the surrounding polymer matrix is modeled by solid elements. A two-parameter model was then employed at the meso-scale. A hierarchical multiscale approach has been developed to obtain the structure-property relations at one length scale and transfer their effect to the higher length scales. In particular, we homogenized the RVE into an equivalent fiber.
The equivalent fiber was then employed in a micromechanical analysis (i.e. the Mori-Tanaka model) to predict the effective macroscopic properties of the PNC. Furthermore, an averaging homogenization process was also used to obtain the effective stiffness of the PNC at the meso-scale.
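As a minimal illustration of averaging homogenization, the elementary Voigt (iso-strain) and Reuss (iso-stress) averages bracket the effective stiffness that schemes like Mori-Tanaka refine. This is a generic textbook sketch under a two-phase, scalar-stiffness assumption, not the dissertation's actual RVE computation.

```python
def voigt_reuss_bounds(E_f, E_m, v_f):
    """Elementary averaging bounds on the effective Young's modulus of a
    two-phase composite with fiber fraction v_f: the Voigt (iso-strain,
    upper) and Reuss (iso-stress, lower) averages."""
    E_voigt = v_f * E_f + (1.0 - v_f) * E_m            # arithmetic average
    E_reuss = 1.0 / (v_f / E_f + (1.0 - v_f) / E_m)    # harmonic average
    return E_voigt, E_reuss
```

Any physically admissible homogenization result, including the Mori-Tanaka estimate mentioned above, must fall between these two values.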
Stochastic modeling and uncertainty quantification consist of the following ingredients:
- Simple random sampling, Latin hypercube sampling, Sobol' quasi-random sequences, and Iman and Conover's method (inducing correlation in Latin hypercube sampling) are employed to generate independent and correlated sample data, respectively.
- Surrogate models, such as polynomial regression, moving least squares (MLS), a hybrid method combining polynomial regression and MLS, Kriging regression, and penalized spline regression, are employed as approximations of the mechanical model. The advantages of the surrogate models are their high computational efficiency and robustness, as they can be constructed from a limited amount of available data.
- Global sensitivity analysis (SA) methods, such as variance-based methods for models with independent and dependent input parameters, Fourier-based techniques for performing variance-based methods, and partial derivatives and elementary effects in the context of local SA, are used to quantify the effects of input parameters and their interactions on the mechanical properties of the PNCs. A bootstrap technique is used to assess the robustness of the global SA methods with respect to their performance.
In addition, the probability distributions of the mechanical properties are determined by using the probability plot method. The upper and lower bounds of the predicted Young’s modulus according to 95 % prediction intervals were provided.
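The variance-based measures listed above can be illustrated with a minimal sketch: a pick-and-freeze (Saltelli-type) estimator of first-order Sobol’ indices, applied to a toy linear model whose indices are known analytically. The model and all names below are illustrative, not taken from the thesis.

```python
import numpy as np

def first_order_sobol(model, d, n=100_000, seed=0):
    """Pick-and-freeze estimate of the first-order Sobol' indices S_i."""
    rng = np.random.default_rng(seed)
    A = rng.random((n, d))                 # two independent input sample matrices
    B = rng.random((n, d))
    fA, fB = model(A), model(B)
    var = fA.var()
    S = np.empty(d)
    for i in range(d):
        ABi = A.copy()
        ABi[:, i] = B[:, i]                # replace only column i ("freeze" the rest)
        S[i] = np.mean(fB * (model(ABi) - fA)) / var
    return S

# toy model: y = 4*x1 + 2*x2 + x3 with independent U(0,1) inputs;
# analytically S = (16, 4, 1) / 21
model = lambda X: 4*X[:, 0] + 2*X[:, 1] + X[:, 2]
S = first_order_sobol(model, d=3)
```

For the linear toy model the estimated indices converge to the coefficient-squared ratios, which makes the sketch easy to verify.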
The above-mentioned methods address the behaviour of intact materials. Novel numerical methods, such as a node-based smoothed extended finite element method (NS-XFEM) and an edge-based smoothed phantom node method (ES-Phantom node), were developed for fracture problems. These methods can be used to account for cracks at macro-scale in future work. The predicted mechanical properties were validated and verified. They show good agreement with previous experimental and simulation results.

This dissertation is devoted to the theoretical development and experimental laboratory verification of a new damage localization method: the state projection estimation error (SP2E). This method is based on the subspace identification of mechanical structures, Krein space based H-infinity estimation, and oblique projections. To explain the SP2E method, several theories are discussed, and laboratory experiments have been conducted and analysed.
A fundamental approach of structural dynamics is outlined first by explaining mechanical systems based on first principles. Following that, a fundamentally different approach, subspace identification, is comprehensively explained. While both theories, first principle and subspace identification based mechanical systems, may be seen as widespread methods, the techniques that follow are new and barely known. Therefore, the indefinite quadratic estimation theory is explained. Based on a Popov function approach, this leads to the Krein space based H-infinity theory. Subsequently, a new method for damage identification, namely SP2E, is proposed. Here, the introduction of a difference process, the analysis of its average process power, and the application of oblique projections are discussed in depth.
Finally, the new method is verified in laboratory experiments. To this end, the identification of a laboratory structure at Leipzig University of Applied Sciences is elaborated. Then structural alterations are experimentally applied, which are subsequently localized by SP2E. In the end, four experimental sensitivity studies are shown and discussed. For each measurement series the structural alteration was increased, which was successfully tracked by SP2E. The experimental results are plausible and in accordance with the developed theories. By repeating these experiments, the applicability of SP2E for damage localization is experimentally proven.

The extended finite element method (XFEM) offers an elegant tool to model material discontinuities and cracks within a regular mesh, so that the element edges do not necessarily coincide with the discontinuities. This allows the modeling of propagating cracks without the requirement to adapt the mesh incrementally. Using a regular mesh offers the advantage that simple refinement strategies based on the quadtree data structure can be used to refine the mesh in regions that require a high mesh density. An additional benefit of the XFEM is that the transmission of cohesive forces through a crack can be modeled in a straightforward way without introducing additional interface elements. Finally, different criteria for the determination of the crack propagation angle are investigated and applied to numerical tests of cracked concrete specimens, which are compared with experimental results.
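The quadtree-based refinement mentioned above can be sketched in a few lines; the "crack tip" location and the refinement criterion below are hypothetical stand-ins, chosen only to show how cells are split locally:

```python
# Minimal quadtree refinement sketch: cells flagged by a criterion are
# recursively split into four children until a minimum size is reached.
class Quad:
    def __init__(self, x, y, size):
        self.x, self.y, self.size = x, y, size
        self.children = []

    def refine(self, needs_refinement, min_size):
        """Recursively split cells flagged by the criterion."""
        if self.size > min_size and needs_refinement(self):
            h = self.size / 2
            self.children = [Quad(self.x + dx, self.y + dy, h)
                             for dx in (0, h) for dy in (0, h)]
            for c in self.children:
                c.refine(needs_refinement, min_size)

    def leaves(self):
        if not self.children:
            return [self]
        return [leaf for c in self.children for leaf in c.leaves()]

# refine towards a hypothetical crack tip at (0.3, 0.3)
tip = (0.3, 0.3)
near_tip = lambda q: (q.x <= tip[0] <= q.x + q.size and
                      q.y <= tip[1] <= q.y + q.size)
root = Quad(0.0, 0.0, 1.0)
root.refine(near_tip, min_size=1.0 / 16)
cells = root.leaves()
```

Only the branch of cells containing the tip is subdivided, so the mesh density grows locally while the rest of the domain keeps coarse cells.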

PARAMETER IDENTIFICATION OF MESOSCALE MODELS FROM MACROSCOPIC TESTS USING BAYESIAN NEURAL NETWORKS
(2010)

In this paper, a parameter identification procedure using Bayesian neural networks is proposed. Based on a training set of numerical simulations, where the material parameters are simulated in a predefined range using Latin Hypercube sampling, a Bayesian neural network, which has been extended to describe the noise of multiple outputs using a full covariance matrix, is trained to approximate the inverse relation from the experiment (displacements, forces etc.) to the material parameters. The method offers not only the possibility to determine the parameters themselves, but also the accuracy of the estimates and the correlation between these parameters. As a result, a set of experiments can be designed to calibrate a numerical model.
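The idea of learning the inverse relation from simulated forward runs can be sketched with a deliberately simplified stand-in: ordinary least squares replaces the Bayesian neural network, and a hypothetical two-parameter forward model replaces the finite element simulation. All names and the model are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

# hypothetical forward model: "measurements" as a function of two parameters
def forward(theta):
    E, nu = theta[..., 0], theta[..., 1]
    return np.stack([E * (1 - nu), E * nu, E / (1 + nu)], axis=-1)

# training set: parameters sampled in a predefined range (plain uniform
# sampling here stands in for Latin Hypercube sampling)
theta_train = rng.uniform([1.0, 0.1], [10.0, 0.4], size=(500, 2))
y_train = forward(theta_train) + 0.01 * rng.standard_normal((500, 3))

# fit the inverse relation y -> theta with linear least squares
# (a crude surrogate for the Bayesian neural network of the paper)
X = np.hstack([y_train, np.ones((500, 1))])
W, *_ = np.linalg.lstsq(X, theta_train, rcond=None)

# identify parameters from a synthetic "experiment"
theta_true = np.array([5.0, 0.3])
y_exp = forward(theta_true)
theta_est = np.hstack([y_exp, 1.0]) @ W
```

Unlike the Bayesian network of the paper, this linear surrogate yields no uncertainty estimate; it only shows the train-on-simulations, invert-the-measurement workflow.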

From a macroscopic point of view, failure within concrete structures is characterized by the initiation and propagation of cracks. In the first part of the thesis, a methodology for macroscopic crack growth simulations for concrete structures using a cohesive discrete crack approach based on the extended finite element method is introduced. Particular attention is turned to the investigation of criteria for crack initiation and crack growth. A drawback of the macroscopic simulation is that the real physical phenomena leading to the nonlinear behavior are only modeled phenomenologically. For concrete, the nonlinear behavior is characterized by the initiation of microcracks which coalesce into macroscopic cracks. In order to obtain a higher resolution of these failure zones, a mesoscale model for concrete is developed that models particles, mortar matrix and the interfacial transition zone (ITZ) explicitly. The essential features are a representation of particles using a prescribed grading curve, a material formulation based on a cohesive approach for the ITZ and a combined model with damage and plasticity for the mortar matrix. Compared to numerical simulations, the response of real structures exhibits a stochastic scatter. This is e.g. due to the intrinsic heterogeneities of the structure. For mesoscale models, these intrinsic heterogeneities are simulated by using a random distribution of particles and by a simulation of spatially variable material parameters using random fields. There are two major problems related to numerical simulations on the mesoscale. First of all, the material parameters for the constitutive description of the materials are often difficult to measure directly. In order to estimate material parameters from macroscopic experiments, a parameter identification procedure based on Bayesian neural networks is developed which is universally applicable to any parameter identification problem in numerical simulations based on experimental results.
This approach offers information about the most probable set of material parameters based on experimental data and information about the accuracy of the estimate. Consequently, this approach can be used a priori to determine a set of experiments to be carried out in order to fit the parameters of a numerical model to experimental data. The second problem is the computational effort required for mesoscale simulations of a full macroscopic structure. For this purpose, a coupling between the mesoscale and macroscale model is developed. Representative mesoscale simulations are used to train a metamodel that is finally used as a constitutive model in a macroscopic simulation. Special focus is placed on appropriately simulating unloading.

The focus of the thesis is to process measurements acquired from a continuous monitoring system at a railway bridge. Temperature, strain and ambient vibration records are analysed, and two main directions of investigation are pursued.
The first and most demanding task is to develop processing routines able to extract modal parameters from ambient vibration measurements. For this purpose, reliable experimental models are achieved on the basis of a stochastic system identification (SSI) procedure. A fully automated algorithm based on a three-stage clustering is implemented to perform a modal parameter estimation for every single measurement. After selecting a baseline of modal parameters, the evolution of eigenfrequencies is studied and correlated to environmental and operational factors.
The second aspect deals with the structural response to passing trains. Corresponding triggered records of strain and temperature are processed, and their assessment is accomplished using the average strains induced by each train as the reference parameter. Three influences due to speed, temperature and loads are distinguished and treated individually. An attempt to estimate the maximum response variation due to each factor is also carried out.

Damping in Bolted Joints
(2013)

With the help of modern CAE-based simulation processes, it is possible to predict the dynamic behavior in fatigue strength problems in order to improve products of many industries, e.g. the building, machine construction, or automotive industries. Amongst others, it can be used to improve the acoustic design of automobiles at an early development stage.
Nowadays, the acoustics of automobiles plays a crucial role in the process of vehicle development. Because of increased comfort demands and due to statutory rules, manufacturers are faced with the challenge of optimizing their cars’ sound emissions. The optimization includes not only the reduction of noise: lately, with the trend towards hybrid and electric cars, it has been shown that vehicles can also become too quiet. Thus, the prediction of structural and acoustic properties based on FE simulations is becoming increasingly important before any experimental prototype is examined. With the current state of the art, qualitative comparisons between different implementations are possible. However, an accurate and reliable quantitative prediction is still a challenge.
One aspect of increasing the prediction quality of acoustic (or generally oscillating) problems, especially in power-trains of automobiles, is the more accurate implementation of damping in jointed structures. While material damping occurs globally and homogeneously in a structural system, the damping due to joints is a very local phenomenon, since energy is dissipated mainly in the vicinity of joints.
This paper focuses on experimental and numerical studies performed on a single (extracted) screw connection. Starting with experimental studies that are used to identify the underlying physical model of the energy loss, the locally influencing parameters (e.g. the damping factor) are identified. In contrast to similar research projects, this approach tends towards a more local consideration within the joint interface. Tangential stiffness and energy loss within the interface are spatially distributed, and interactions between the influencing parameters are taken into account. As a result, the damping matrix is no longer proportional to the mass or stiffness matrix, since it is composed of the global material damping and the local joint damping. With this new approach, the prediction quality can be increased, since the local distribution of the physical parameters within the joint interface corresponds much more closely to reality.

This paper presents a novel numerical procedure based on the framework of isogeometric analysis for static, free vibration, and buckling analysis of laminated composite plates using the first-order shear deformation theory. The isogeometric approach utilizes non-uniform rational B-splines to implement quadratic, cubic, and quartic elements. A shear locking problem still exists in the stiffness formulation; it can be significantly alleviated by a stabilization technique. Several numerical examples are presented to show the performance of the method, and the results obtained are compared with other available ones.
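The NURBS basis underlying such isogeometric elements can be evaluated with a short Cox-de Boor recursion; the quadratic example below is an illustrative sketch (not code from the paper) that checks the partition-of-unity property on an open knot vector:

```python
import numpy as np

def bspline_basis(i, p, u, U):
    """Cox-de Boor recursion: value of the B-spline basis N_{i,p} at u."""
    if p == 0:
        return 1.0 if U[i] <= u < U[i + 1] else 0.0
    left = right = 0.0
    if U[i + p] != U[i]:
        left = (u - U[i]) / (U[i + p] - U[i]) * bspline_basis(i, p - 1, u, U)
    if U[i + p + 1] != U[i + 1]:
        right = ((U[i + p + 1] - u) / (U[i + p + 1] - U[i + 1])
                 * bspline_basis(i + 1, p - 1, u, U))
    return left + right

def nurbs_basis(p, u, U, w):
    """Rational basis R_{i,p}(u) = w_i N_{i,p}(u) / sum_j w_j N_{j,p}(u)."""
    N = np.array([bspline_basis(i, p, u, U) for i in range(len(w))])
    return w * N / (w * N).sum()

# quadratic (p = 2) basis on an open knot vector with uniform weights,
# i.e. the rational basis degenerates to the plain B-spline basis
U = [0.0, 0.0, 0.0, 0.5, 1.0, 1.0, 1.0]
w = np.ones(4)
R = nurbs_basis(2, 0.25, U, w)
```

With non-uniform weights the same routine yields genuinely rational basis functions, which is what allows NURBS to represent conic sections exactly.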

From the design experience of arch dams in the past, carrying out the shape optimization of arch dams has significant practical value: it can make full use of the material characteristics and reduce the cost of construction. Suitable variables need to be chosen to formulate the objective function, e.g. to minimize the total volume of the arch dam. Additionally, a series of constraints are derived, and a reasonable and convenient penalty function is formed, which can easily enforce the characteristics of the constraints and the optimal design. For the optimization method, a Genetic Algorithm is adopted to perform a global search. Simultaneously, ANSYS is used to perform the mechanical analysis under coupled thermal and hydraulic loads. One of the constraints on the newly designed dam is to fulfill requirements on structural safety. Therefore, a reliability analysis is applied to offer good decision support concerning predictions of both the safety and the service life of the arch dam. By this, the key factors which significantly influence the stability and safety of the arch dam can be identified, supplying a good way to take preventive measures to prolong the service life of an arch dam and enhance the safety of the structure.
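The interplay of a penalty function and a Genetic Algorithm can be sketched on a toy problem; the objective, constraint, and all parameters below are illustrative stand-ins for the dam volume and safety constraints, not the actual formulation:

```python
import numpy as np

rng = np.random.default_rng(42)

# toy stand-in: minimize a "volume" f(x) subject to a "safety" constraint
# g(x) <= 0, enforced by a quadratic exterior penalty
f = lambda x: x[..., 0]**2 + x[..., 1]**2          # objective ("volume")
g = lambda x: 1.0 - x[..., 0] - x[..., 1]          # feasible iff g <= 0
penalized = lambda x: f(x) + 1e3 * np.maximum(g(x), 0.0)**2

def genetic_algorithm(obj, bounds, pop=60, gens=150):
    lo, hi = bounds
    X = rng.uniform(lo, hi, size=(pop, 2))
    best = X[np.argmin(obj(X))]
    for _ in range(gens):
        fit = obj(X)
        a, b = rng.integers(0, pop, (2, pop))      # tournament selection
        parents = np.where((fit[a] < fit[b])[:, None], X[a], X[b])
        alpha = rng.random((pop, 1))               # arithmetic crossover
        X = alpha * parents + (1 - alpha) * parents[rng.permutation(pop)]
        X += 0.05 * rng.standard_normal(X.shape)   # Gaussian mutation
        X = np.clip(X, lo, hi)
        X[0] = best                                # elitism
        cand = X[np.argmin(obj(X))]
        if obj(cand) < obj(best):
            best = cand
    return best

best = genetic_algorithm(penalized, bounds=(0.0, 2.0))
```

The analytical optimum of this toy problem is x = (0.5, 0.5) with f = 0.5; the penalty term steers the population back to the feasible side of the constraint.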

Due to an increased need for hydro-electricity, water storage, and flood protection, it is assumed that a series of new dams will be built throughout the world. Comparing existing design methodologies for arch-type dams, model-based shape optimization can effectively reduce construction costs and leverage the properties of construction materials. To apply the means of shape optimization, suitable variables need to be chosen to formulate the objective function, which is the volume of the arch dam here. In order to increase the consistency with practical conditions, a great number of geometrical and behavioral constraints are included in the mathematical model. An optimization method, namely Genetic Algorithm is adopted which allows a global search.
Traditional optimization techniques are realized based on a deterministic approach, which means that the material properties and loading conditions are assumed to be fixed values. As a result, the real-world structures that are optimized by these approaches suffer from uncertainties that one needs to be aware of. Hence, in any optimization process for arch dams, it is necessary to find a methodology that is capable of considering the influences of uncertainties and generating a solution which is robust enough against the uncertainties.
The focus of this thesis is the formulation of and the numerical method for the optimization of the arch dam under uncertainties. The two main models, the probabilistic model and the non-probabilistic model, are introduced and discussed. Classic procedures of probabilistic approaches under uncertainties, such as RDO (robust design optimization) and RBDO (reliability-based design optimization), are in general computationally expensive and rely on estimates of the system’s response variance and failure probabilities. Instead, the robust optimization (RO) method, which is based on the non-probabilistic model, does not follow a full probabilistic approach but works with pre-defined confidence levels. This leads to a bi-level optimization program where the volume of the dam is optimized under the worst combination of the uncertain parameters. By this, robust and reliable designs are obtained, and the result is independent of any assumptions on stochastic properties of the random variables in the model.
The optimization of an arch-type dam is realized here by a robust optimization method under load uncertainty, where hydraulic and thermal loads are considered. The load uncertainty is modeled as an ellipsoidal expression. Compared with any traditional deterministic optimization (DO) method, which only concerns the minimum objective value and offers a solution candidate close to limit states, the RO method provides a robust solution against uncertainties.
All the above-mentioned methods are applied to the optimization of the arch dam, comparing their optimal designs with those of the DO methods. The results are compared and analyzed to discuss the advantages and drawbacks of each method.
In order to reduce the computational cost, a ranking strategy and an approximation model are further involved to perform a preliminary screening. By means of these, the robust design can generate an improved arch dam structure which ensures both safety and serviceability during its lifetime.

A simple multiscale analysis framework for heterogeneous solids based on a computational homogenization technique is presented. The macroscopic strain is linked kinematically to the boundary displacement of a circular or spherical representative volume which contains the microscopic information of the material. The macroscopic stress is obtained from the energy principle between the macroscopic scale and the microscopic scale. This new method is applied to several standard examples to show the accuracy and consistency of the proposed method.
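The volume-averaging step at the heart of such homogenization schemes reduces, in its simplest (Voigt-type, uniform-strain) form, to a few lines; the two-phase example below is an illustrative sketch with made-up material values, not the proposed method itself:

```python
import numpy as np

rng = np.random.default_rng(0)

# hypothetical two-phase RVE: per-cell scalar "stiffness" (matrix vs.
# inclusion), with a uniform strain applied (Voigt-type upper bound)
n = 64 * 64
phase = rng.random(n) < 0.2            # ~20 % stiff inclusions
C = np.where(phase, 200.0, 10.0)       # stiffness per cell
eps_macro = 0.01                       # prescribed macroscopic strain

sigma_micro = C * eps_macro            # microscopic stress field
sigma_macro = sigma_micro.mean()       # volume average = macroscopic stress
C_eff = sigma_macro / eps_macro        # effective stiffness (rule of mixtures)
```

Under the uniform-strain assumption the effective stiffness is simply the volume-fraction-weighted mean (about 0.2·200 + 0.8·10 = 48 here); the energy-consistent scheme of the paper replaces this crude bound with a boundary-value problem on the representative volume.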

Meshfree methods (MMs) such as the element free Galerkin (EFG) method have gained popularity because of some advantages over other numerical methods such as the finite element method (FEM). A group of problems that has attracted a great deal of attention from the EFG method community includes the treatment of large deformations and dealing with strong discontinuities such as cracks. One efficient solution for modeling cracks is adding special enrichment functions to the standard shape functions, as in the extended FEM, within the FEM context, and the cracking particles method, based on the EFG method. It is well known that explicit time integration in dynamic applications is conditionally stable. Furthermore, in enriched methods, the critical time step may tend to very small values, leading to computationally expensive simulations. In this work, we study the stability of enriched MMs and propose two mass-lumping strategies. We then show that the critical time step for enriched MMs based on lumped mass matrices is of the same order as the critical time step of MMs without enrichment. Moreover, we show that, in contrast to extended FEM, even with a consistent mass matrix, the critical time step does not vanish even when the crack directly crosses a node.
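The effect of mass lumping on the critical time step can be reproduced in a few lines for a 1-D bar with linear finite elements (a standard textbook setting, not the enriched meshfree setting of the paper): row-sum lumping raises the critical step of the unenriched problem by a factor of sqrt(3).

```python
import numpy as np

# 1-D free-free bar, linear elements: critical time step dt = 2 / omega_max
# for consistent vs. row-sum lumped mass matrices
n, E, rho, A = 20, 1.0, 1.0, 1.0
h = 1.0 / n

K = np.zeros((n + 1, n + 1))
M = np.zeros((n + 1, n + 1))
ke = E * A / h * np.array([[1.0, -1.0], [-1.0, 1.0]])
me = rho * A * h / 6 * np.array([[2.0, 1.0], [1.0, 2.0]])
for e in range(n):                      # standard assembly
    K[e:e + 2, e:e + 2] += ke
    M[e:e + 2, e:e + 2] += me

M_lumped = np.diag(M.sum(axis=1))       # row-sum lumping

def dt_crit(mass):
    w2 = np.linalg.eigvals(np.linalg.solve(mass, K)).real
    return 2.0 / np.sqrt(w2.max())

dt_consistent = dt_crit(M)              # h / (sqrt(3) * c)
dt_lumped = dt_crit(M_lumped)           # h / c
```

With wave speed c = sqrt(E/rho) = 1 the lumped estimate equals h exactly, while the consistent mass gives the smaller h/sqrt(3); the paper's contribution is that analogous lumping keeps the step from collapsing in *enriched* discretizations.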

The concept of isogeometric analysis, where functions that are used to describe geometry in CAD software are used to approximate the unknown fields in numerical simulations, has received great attention in recent years. The method has the potential to have profound impact on engineering design, since the task of meshing, which in some cases can add significant overhead, has been circumvented. Much of the research effort has been focused on finite element implementations of the isogeometric concept, but at present, little has been seen on the application to the Boundary Element Method. The current paper proposes an Isogeometric Boundary Element Method (BEM), which we term IGABEM, applied to two-dimensional elastostatic problems using Non-Uniform Rational B-Splines (NURBS). We find it is a natural fit with the isogeometric concept since both the NURBS approximation and BEM deal with quantities entirely on the boundary. The method is verified against analytical solutions where it is seen that superior accuracies are achieved over a conventional quadratic isoparametric BEM implementation.

Paraffin Nanocomposites for Heat Management of Lithium-Ion Batteries: A Computational Investigation
(2016)

Lithium-ion (Li-ion) batteries are currently considered vital components for advances in mobile technologies such as those in communications and transport. Nonetheless, Li-ion batteries suffer from temperature rises which sometimes lead to operational damage or may even cause fire. An appropriate solution to control the temperature changes during the operation of Li-ion batteries is to embed the batteries inside a paraffin matrix to absorb and dissipate heat. In the present work, we aimed to investigate the possibility of making paraffin nanocomposites for better heat management of a Li-ion battery pack. To fulfill this aim, heat generation during battery charging/discharging cycles was simulated using Newman’s well-established electrochemical pseudo-2D model. We coupled this model to a 3D heat transfer model to predict the temperature evolution during battery operation. In the latter model, we considered different paraffin nanocomposite structures made by the addition of graphene, carbon nanotubes, and fullerene, assuming the same thermal conductivity for all fillers. This way, our results mainly correlate with the geometry of the fillers. Our results assess the degree of enhancement in heat dissipation of Li-ion batteries through the use of paraffin nanocomposites and may be used as a guide for experimental set-ups to improve the heat management of Li-ion batteries.

FCS-MBFLEACH: Designing an Energy-Aware Fault Detection System for Mobile Wireless Sensor Networks
(2019)

Wireless sensor networks (WSNs) consist of large numbers of sensor nodes that are densely and randomly distributed over a geographical region for monitoring, identifying, and analyzing physical events. The crucial challenge in wireless sensor networks is the very high dependence of the sensor nodes on limited battery power to exchange information wirelessly, as well as the non-rechargeable batteries of the wireless sensor nodes, which make the management and monitoring of these nodes in terms of abnormal changes very difficult. These anomalies arise from faults, including hardware and software faults as well as attacks by intruders, all of which affect the comprehensiveness of the data collected by wireless sensor networks. Hence, crucial measures should be taken to detect faults in the network early, despite the limitations of the sensor nodes. Machine learning methods offer solutions that can be used to detect sensor node faults in the network. The purpose of this study is to use several classification methods to compute the fault detection accuracy with different densities under two scenarios in regions of interest, such as MB-FLEACH, one-class support vector machine (SVM), fuzzy one-class SVM, or a combination of SVM and FCS-MBFLEACH methods. It should be noted that, in the studies so far, no super cluster head (SCH) selection has been performed to detect node faults in the network. The simulation outcomes demonstrate that the FCS-MBFLEACH method has the best performance in terms of fault detection accuracy, false-positive rate (FPR), average remaining energy, and network lifetime compared to the other classification methods.

A novel combination of the ant colony optimization algorithm (ACO) and computational fluid dynamics (CFD) data is proposed for modeling multiphase chemical reactors. The proposed intelligent model presents a probabilistic computational strategy for predicting various levels of three-dimensional bubble column reactor (BCR) flow. The results prove an enhanced communication between the ant colony prediction and CFD data in different sections of the BCR.

Evaporation is a very important process; it is one of the most critical factors in agricultural, hydrological, and meteorological studies. Due to the interactions of multiple climatic factors, evaporation is considered a complex and nonlinear phenomenon to model. Thus, machine learning methods have gained popularity in this realm. In the present study, four machine learning methods, Gaussian Process Regression (GPR), K-Nearest Neighbors (KNN), Random Forest (RF) and Support Vector Regression (SVR), were used to predict the pan evaporation (PE). Meteorological data, including PE, temperature (T), relative humidity (RH), wind speed (W), and sunny hours (S), were collected from 2011 through 2017. The accuracy of the studied methods was determined using the statistical indices of Root Mean Squared Error (RMSE), correlation coefficient (R) and Mean Absolute Error (MAE). Furthermore, Taylor charts were utilized for evaluating the accuracy of the mentioned models. The results of this study showed that at Gonbad-e Kavus, Gorgan and Bandar Torkman stations, GPR with RMSE of 1.521 mm/day, 1.244 mm/day, and 1.254 mm/day, KNN with RMSE of 1.991 mm/day, 1.775 mm/day, and 1.577 mm/day, RF with RMSE of 1.614 mm/day, 1.337 mm/day, and 1.316 mm/day, and SVR with RMSE of 1.55 mm/day, 1.262 mm/day, and 1.275 mm/day had more appropriate performances in estimating PE values. It was found that GPR for Gonbad-e Kavus Station with input parameters of T, W and S and GPR for Gorgan and Bandar Torkmen stations with input parameters of T, RH, W and S had the most accurate predictions and were proposed for precise estimation of PE. The findings of the current study indicated that the PE values may be accurately estimated with few easily measured meteorological parameters.
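The accuracy indices used above are straightforward to compute; the sketch below (with made-up observation and prediction values, not data from the study) shows RMSE, MAE, and R for a small sample:

```python
import numpy as np

def rmse(y, yhat):
    """Root Mean Squared Error."""
    return float(np.sqrt(np.mean((y - yhat) ** 2)))

def mae(y, yhat):
    """Mean Absolute Error."""
    return float(np.mean(np.abs(y - yhat)))

def r_coef(y, yhat):
    """Pearson correlation coefficient."""
    return float(np.corrcoef(y, yhat)[0, 1])

# hypothetical pan evaporation observations vs. model predictions (mm/day)
y_obs = np.array([3.1, 4.5, 5.2, 2.8, 6.0, 4.1])
y_mod = np.array([3.4, 4.2, 5.5, 2.5, 5.7, 4.4])
```

Here every prediction is off by 0.3 mm/day, so RMSE and MAE coincide at 0.3; with real residuals of varying size, RMSE exceeds MAE and the gap indicates the influence of large errors.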

The gradual digitization in the architecture, engineering, and construction industry over the past fifty years led to an extremely heterogeneous software environment, which today is embodied by the multitude of different digital tools and proprietary data formats used by the many specialists contributing to the design process in a construction project. Though these projects become increasingly complex, the demands on financial efficiency and the completion within a tight schedule grow at the same time. The digital collaboration of project partners has been identified as one key issue in successfully dealing with these challenges. Yet currently, the numerous software applications and their respective individual views on the design process severely impede that collaboration.
An approach to establish a unified basis for the digital collaboration, regardless of the existing software heterogeneity, is a comprehensive digital building model contributed to by all project partners. This type of data management, known as building information modeling (BIM), has many benefits, yet its adoption is associated with many difficulties and thus proceeds only slowly. One aspect in the field of conflicting requirements on such a digital model is the cooperation of architects and structural engineers. Traditionally, these two disciplines use different abstractions of reality for their models, which in consequence lead to incompatible digital representations thereof.
The onset of isogeometric analysis (IGA) promised to ease the discrepancy between design and analysis model representations. Yet, that initial focus quickly shifted towards using these methods as a more powerful basis for numerical simulations. Furthermore, the isogeometric representation alone is not capable of solving the model abstraction problem. It is thus the intention of this work to contribute to an improved digital collaboration of architects and engineers by exploring an integrated analysis approach on the basis of a unified digital model and solid geometry expressed by splines. In the course of this work, an analysis framework is developed that utilizes such models to automatically conduct numerical simulations commonly required in construction projects. In essence, this allows structural analysis results to be retrieved from BIM models in a fast and simple manner, thereby facilitating rapid design iterations and profound design feedback.
The BIM implementation Industry Foundation Classes (IFC) is reviewed with regard to its capabilities of representing the unified model. The current IFC schema strongly supports the use of redundant model data, a major pitfall in digital collaboration. Additionally, it does not allow the geometry to be described by volumetric splines. As the pursued approach builds upon a unique model for both architectural and structural design, and furthermore requires solid geometry, necessary schema modifications are suggested.
Structural entities are modeled by volumetric NURBS patches, each of which constitutes an individual subdomain that, with regard to the analysis, is incompatible with the remaining full model. The resulting consequences for numerical simulation are elaborated in this work. The individual subdomains have to be weakly coupled, for which the mortar method is used. Different approaches to discretize the interface traction fields are implemented and their respective impact on the analysis results is evaluated. All necessary coupling conditions are automatically derived from the related geometry model.
The weak coupling procedure leads to a linear system of equations in saddle point form which, owing to the volumetric modeling, is large in size and whose coefficient matrix, due to the use of higher-degree basis functions, has a high bandwidth. The peculiarities of the system require adapted solution methods that generally cause higher numerical costs than the standard procedures for symmetric, positive-definite systems do. Different methods to solve the specific system are investigated, and an efficient parallel algorithm is finally proposed.
When the structural analysis model is derived from the unified model in the BIM data, it initially does not, in general, meet the requirements on the discretization that are necessary to obtain sufficiently accurate analysis results. The consequently necessary patch refinements must be controlled automatically to allow for an entirely automatic analysis procedure. For that purpose, an empirical refinement scheme based on the geometrical and possibly mechanical properties of the specific entities is proposed. The level of refinement may be selectively manipulated by the structural engineer in charge. Furthermore, a Zienkiewicz-Zhu type error estimator is adapted for use with isogeometric analysis results. It is shown that this estimator can also be used to steer an adaptive refinement procedure.
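A Zienkiewicz-Zhu type estimator compares the raw FE gradient against a smoothed, recovered gradient. The 1-D sketch below illustrates only the principle, on ordinary linear elements rather than the isogeometric setting of the thesis:

```python
import numpy as np

# 1-D sketch of a Zienkiewicz-Zhu type estimator: the element-wise constant
# gradient of a linear FE solution is compared against a smoothed (nodally
# averaged) recovered gradient; the difference drives adaptive refinement
x = np.linspace(0.0, 1.0, 11)           # mesh nodes
u = x ** 3                              # nodal values of some computed field
grad = np.diff(u) / np.diff(x)          # element gradients (piecewise constant)

# recovery: average the gradients of the elements adjacent to each node
grad_nodal = np.empty_like(x)
grad_nodal[1:-1] = 0.5 * (grad[:-1] + grad[1:])
grad_nodal[0], grad_nodal[-1] = grad[0], grad[-1]

# element error indicator: difference between recovered and raw gradient,
# scaled by the element size
h = np.diff(x)
grad_rec_mid = 0.5 * (grad_nodal[:-1] + grad_nodal[1:])
eta = np.sqrt(h) * np.abs(grad_rec_mid - grad)
refine = eta > eta.mean()               # mark elements for refinement
```

Elements where the raw and recovered gradients disagree most, here near the steep end of the cubic field, receive the largest indicators and are flagged for refinement.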

The numerical simulation of microstructure models in 3D requires, due to the enormous number of d.o.f., significant memory resources as well as parallel computational power. Compared to homogeneous materials, the material heterogeneity on the microscale induced by different material phases demands adequate computational methods for the discretization and solution process of the resulting highly nonlinear problem. To enable an efficient and scalable solution process of the linearized equation systems, the heterogeneous FE problem is described by a FETI-DP (Finite Element Tearing and Interconnecting - Dual Primal) discretization. The fundamental FETI-DP equation can be solved by a number of different approaches. In our approach, the FETI-DP problem is reformulated as a saddle point system by eliminating the primal and Lagrangian variables. For the reduced saddle point system, defined only by interior and dual variables, special Uzawa algorithms can be adapted for iteratively solving the FETI-DP saddle point equation system (FETI-DP SPE). A conjugate gradient version of the Uzawa algorithm is shown, as well as some numerical tests regarding the FETI-DP discretization of small examples using the presented solution technique. Furthermore, the inversion of the interior-dual Schur complement operator can be approximated using different techniques building an adequate preconditioning matrix, thereby leading to substantial gains in computing time efficiency.
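A conjugate gradient Uzawa iteration for a saddle point system [[A, Bᵀ], [B, 0]] can be sketched as follows; the small dense system and the direct inner solve below are generic stand-ins, not the actual FETI-DP operators:

```python
import numpy as np

rng = np.random.default_rng(3)

# random SPD block A and full-rank coupling block B as a toy saddle point
# system: A x + B^T lam = f,  B x = g
n, m = 12, 4
Q = rng.standard_normal((n, n))
A = Q @ Q.T + n * np.eye(n)
B = rng.standard_normal((m, n))
f = rng.standard_normal(n)
g = rng.standard_normal(m)

def uzawa_cg(A, B, f, g, tol=1e-10):
    """CG on the Schur complement S = B A^{-1} B^T (applied matrix-free)."""
    solveA = lambda r: np.linalg.solve(A, r)   # inner solve with the SPD block
    lam = np.zeros(B.shape[0])
    r = B @ solveA(f) - g                      # residual of S lam = B A^{-1} f - g
    p, rs = r.copy(), r @ r
    for _ in range(2 * B.shape[0]):
        Sp = B @ solveA(B.T @ p)               # Schur complement times p
        alpha = rs / (p @ Sp)
        lam += alpha * p
        r -= alpha * Sp
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    x = solveA(f - B.T @ lam)                  # recover the primal unknowns
    return x, lam

x, lam = uzawa_cg(A, B, f, g)
```

Since the Schur complement is only applied, never formed, each outer CG step costs one inner solve with A; in the FETI-DP context this inner solve is itself iterative and preconditioned.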

Modern digital material approaches for the visualization and simulation of heterogeneous materials make it possible to investigate the behavior of complex multiphase materials, with their physically nonlinear material response, at various scales. However, these computational techniques require extensive hardware resources with respect to computing power and main memory to numerically solve large-scale discretized models in 3D. Due to a very high number of degrees of freedom, which may rapidly increase into the two-digit million range, the limited hardware resources are to be utilized in the most efficient way to enable execution of the numerical algorithms in minimal computation time. Hence, in the field of computational mechanics, various methods and algorithms can lead to an optimized runtime behavior of nonlinear simulation models; several such approaches are proposed and investigated in this thesis.
Today, the numerical simulation of damage effects in heterogeneous materials is performed by adapting multiscale methods. A consistent model in three-dimensional space with an appropriate discretization resolution on each scale (based on a hierarchical or concurrent multiscale model), however, still poses computational challenges with respect to the convergence behavior, the scale transition, and the solver performance of the weakly coupled problems. The computational efficiency and the distribution of work among the available hardware resources (often a parallel hardware architecture) can be improved significantly. In the past years, high-performance computing (HPC) and graphics processing unit (GPU) based computation techniques have been established for the investigation of scientific objectives. Their application results in the modification of existing and the development of new computational methods, which make it possible to take advantage of massively clustered computer hardware resources. In the field of numerical simulation in materials science, e.g. the investigation of damage effects in multiphase composites, the suitability of such models is often restricted by the number of degrees of freedom (d.o.f.) in the three-dimensional spatial discretization. This complicates the choice of implementation method for the nonlinear simulation procedure and, at the same time, has a great influence on memory demand and computation time.
In this thesis, a hybrid discretization technique has been developed for the three-dimensional discretization of a three-phase material that respects the numerical efficiency of nonlinear (damage) simulations of these materials. The increase in computational efficiency is achieved by the improved scalability of the numerical algorithms. Consequently, substructuring methods for partitioning the hybrid mesh were implemented, tested, and adapted to the HPC computing framework, using several hundred CPU (central processing unit) nodes for building the finite element assembly. A memory-efficient, iterative, and parallelized equation solver, combined with a special preconditioning technique for solving the underlying equation system, was modified and adapted to enable combined CPU- and GPU-based computations.
Hence, the author recommends applying the substructuring method to hybrid meshes, which respects the different material phases and their mechanical behavior and makes it possible to split the structure into elastic and inelastic parts. The consideration of the nonlinear material behavior, specified for the corresponding phase, is then limited to the inelastic domains only, which reduces the computing time of the nonlinear procedure. Because of the high numerical effort of such simulations, an alternative approach to nonlinear finite element analysis, based on sequentially linear analysis, was implemented with respect to scalable HPC. The incremental-iterative procedure of a nonlinear finite element analysis (FEA) step is then replaced by a sequence of linear FE analyses whenever damage occurs in critical regions, known in the literature as the saw-tooth approach. As a result, qualitative (smeared) crack initiation in 3D multiphase specimens has been simulated efficiently.
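The saw-tooth idea — replacing the incremental-iterative nonlinear solve by a sequence of purely linear analyses with stepwise stiffness reduction — can be illustrated on a toy system of springs loaded in parallel. This is only a hedged sketch of the general technique; the function name, the reduction factors, and the spring model are illustrative and not the implementation from the thesis.

```python
def sawtooth_parallel_springs(k, strength, k_drop=0.5, s_drop=0.7, steps=2):
    """Sequentially linear ('saw-tooth') analysis of springs sharing a
    common displacement u.

    Each step is one *linear* analysis: with the current stiffnesses, the
    load is scaled until the most critical spring reaches its current
    strength; that spring is then softened (stiffness times k_drop,
    strength times s_drop).  After `steps` softening events the spring is
    removed.  The returned (load, displacement) pairs trace the
    characteristic saw-tooth curve instead of a smooth softening branch.
    """
    k, strength = list(k), list(strength)
    hits = [0] * len(k)
    history = []
    while any(ki > 0.0 for ki in k):
        K = sum(k)                            # linear system: P = K * u
        # failure displacement of each intact spring: u_i = strength_i / k_i
        u_crit, i = min((s / ki, j) for j, (ki, s)
                        in enumerate(zip(k, strength)) if ki > 0.0)
        history.append((K * u_crit, u_crit))  # load level at this damage event
        hits[i] += 1
        if hits[i] >= steps:
            k[i] = 0.0                        # spring fully failed
        else:
            k[i] *= k_drop                    # saw-tooth stiffness reduction
            strength[i] *= s_drop
    return history
```

Each pass through the loop corresponds to one linear FE analysis in the saw-tooth procedure; no Newton iterations are needed, which is what makes the approach attractive for scalable HPC implementations.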

Turbomachinery plays an important role in many forms of energy generation and conversion. It is therefore a promising starting point for optimization aimed at increasing the efficiency of energy use. In recent years, the use of automated optimization strategies in combination with numerical simulation has become increasingly popular in many fields of engineering. The complex interactions between fluid and solid mechanics encountered in turbomachines on the one hand, and the high computational expense needed to calculate the performance on the other, have, however, prevented a widespread use of these techniques in this field. The objective of this work was the development of a strategy for efficient metamodel-based optimization of centrifugal compressor impellers, with the main focus on reducing the required numerical expense. The central idea of this research was to incorporate preliminary information, acquired from low-fidelity computation methods and empirical correlations, into the sampling process in order to identify promising regions of the parameter space. This information was then used to concentrate the numerically expensive high-fidelity computations of the fluid dynamic and structural mechanic performance of the impeller in these regions while still maintaining a good coverage of the whole parameter space. The development of the optimization strategy can be divided into three main tasks. Firstly, the available preliminary information had to be researched and rated. This research identified loss models based on one-dimensional flow physics and empirical correlations as the best-suited method to predict the aerodynamic performance. The loss models were calibrated using available performance data to obtain a high prediction quality.
As no sufficiently exact models for the prediction of the mechanical loading of the impeller could be identified, a metamodel based on finite element computations was chosen for this estimation. The second task was the development of a sampling method which concentrates samples in regions of the parameter space where the preliminary information predicts high-quality designs, while maintaining a good overall coverage. As available methods like rejection sampling or Markov chain Monte Carlo did not meet the requirements in terms of sample distribution and input correlation, a new multi-fidelity sampling method called "Filtered Sampling" was developed. The last task was the development of an automated computational workflow, encompassing geometry parametrization, geometry generation, grid generation, and the computation of the aerodynamic performance and the structural mechanic loading. Special emphasis was put on the development of a geometry parametrization strategy based on fluid mechanic considerations to prevent the generation of physically inexpedient designs. Finally, the optimization strategy, which utilizes the previously developed tools, was successfully employed in three optimization tasks. The efficiency of the method was proven by the first and second test cases, in which an existing compressor design was optimized by the presented method. The results were comparable to optimizations which did not take preliminary information into account, while the required computational expense could be halved. In the third test case, the method was applied to generate a new impeller design. In contrast to the previous examples, this optimization featured larger variations of the impeller designs. Therefore, the applicability of the method to parameter spaces with significantly varying designs was proven as well.
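The thesis' actual "Filtered Sampling" algorithm is not reproduced here; the sketch below only illustrates the general multi-fidelity idea it is built on: oversample the parameter space cheaply, rank the candidates with a low-fidelity score, and mix the best-ranked candidates with a few random ones so that overall coverage is preserved. All names and parameter values are illustrative assumptions.

```python
import random

def filtered_sampling(lowfi_score, n_samples, dim,
                      oversample=10, exploit_frac=0.7, seed=0):
    """Illustrative multi-fidelity sampling: draw many cheap candidates,
    rank them with a low-fidelity surrogate, keep the best fraction
    (exploitation) plus random remaining candidates (coverage)."""
    rng = random.Random(seed)
    # cheap candidate pool in the unit hypercube [0, 1]^dim
    candidates = [[rng.random() for _ in range(dim)]
                  for _ in range(n_samples * oversample)]
    ranked = sorted(candidates, key=lowfi_score)      # lower score = better
    n_best = int(round(exploit_frac * n_samples))
    # best-ranked designs plus random picks from the rest of the pool
    picked = ranked[:n_best] + rng.sample(ranked[n_best:], n_samples - n_best)
    return picked
```

The expensive high-fidelity evaluations (CFD and FE computations of the impeller) would then be run only on the designs returned by such a filter, which is what reduces the overall numerical expense.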