TY - THES
A1 - Hatahet, Tareq
T1 - On the Analysis of the Disproportionate Structural Collapse in RC Buildings
N2 - Increasing structural robustness is a goal of great interest to the structural engineering community. The partial collapse of RC buildings is the subject of this dissertation. Understanding the robustness of RC buildings will guide the development of safer structures against abnormal loading scenarios such as explosions, earthquakes, fire, and/or long-term accumulation effects leading to deterioration or fatigue. Any of these may result in immediate local structural damage that can propagate to the rest of the structure, causing what is known as disproportionate collapse. This work handles collapse propagation through various analytical approaches that simplify the mechanical description of reinforced concrete structures damaged by an extreme accidental event.
T3 - ISM-Bericht // Institut für Strukturmechanik, Bauhaus-Universität Weimar - 2018,2
KW - Beton
KW - disproportionate collapse
KW - buildings
KW - reinforced concrete
KW - catenary action
KW - compressive arching
KW - dynamic amplification
KW - structural robustness
Y1 - 2018
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20180329-37405
ER -
TY - THES
A1 - Fröbel, Toni
T1 - Data coupled civil engineering applications: Modeling and quality assessment methods
T1 - Datenkopplung für Anwendungen im Bauingenieurwesen: Methoden zur Modellierung und Qualitätsbewertung
N2 - The planning process in civil engineering is highly complex and not manageable in its entirety. The state of the art decomposes complex tasks into smaller, manageable sub-tasks. Due to the close interrelatedness of the sub-tasks, it is essential to couple them. From a software engineering point of view, however, this is quite challenging because of the numerous incompatible software applications on the market. This study is concerned with two main objectives. The first is the generic formulation of coupling strategies in order to support engineers in the implementation and selection of adequate coupling strategies. This has been achieved by the use of a coupling pattern language combined with a four-layered metamodel architecture, whose applicability has been demonstrated on a real coupling scenario. The second is the quality assessment of coupled software, which has been developed on the basis of evaluated schema mapping. This approach has been described using mathematical expressions derived from set theory and graph theory, taking the various mapping patterns into account. Moreover, the coupling quality has been evaluated within the formalization process by considering the uncertainties that arise during mapping, resulting in global quality values which the user can employ to assess the exchange. Finally, the applicability of the proposed approach has been shown using an engineering case study.
N2 - Der Planungsprozess im Bauwesen ist hochkomplex und daher in seiner Gesamtheit nicht zu erfassen. Deshalb wird dieser in kleinere und beherrschbarere Teilaufgaben zerlegt. Auf Grund ihrer starken Wechselwirkungen ist deren Kopplung unabdingbar. Aus Sicht der Informatik wird dies jedoch durch eine große Anzahl inkompatibler Softwareanwendungen erschwert. Die Arbeit beschäftigt sich daher mit zwei wesentlichen Aufgabenfeldern im Bereich der Softwarekopplung.
Als erstes werden Kopplungskonzepte unabhängig von spezifischen Hardware- oder Softwareeigenschaften beschrieben, um den Ingenieur bei der Durchführung und Auswahl von entsprechenden Kopplungsstrategien zu unterstützen. Dies wird durch eine Kopplungs-Mustersprache in Verbindung mit einer Meta-Modell-Architektur erreicht. Deren Anwendbarkeit wird an einem Kopplungsszenario gezeigt. Das zweite Aufgabenfeld beschäftigt sich mit der Qualität von gekoppelten Softwaresystemen. Eine Qualitätsbewertung erfolgt hierbei auf Basis von bewertetem Schema-Mapping. Der Ansatz ist auf Grundlage der Mengen- und Graphentheorie mathematisch beschrieben. Er berücksichtigt die gängigen Mapping-Muster und Unsicherheiten, die während des Mappingprozesses auftreten können. Der Bewertungsprozess liefert einen globalen Qualitätswert, der vom Ingenieur direkt verwendet werden kann, um den Austausch zu bewerten. Die Anwendbarkeit wird an einem Beispiel gezeigt.
T3 - Schriftenreihe des DFG Graduiertenkollegs 1462 Modellqualitäten // Graduiertenkolleg Modellqualitäten - 6
KW - Data exchange
KW - Schema mapping
KW - Quality assessment
KW - Uncertainty
KW - Coupling
KW - BIM
KW - Design patterns
KW - Metamodel architecture
Y1 - 2013
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20130128-18366
SN - 978-3-86068-486-3
PB - Verlag der Bauhaus-Universität Weimar
CY - Weimar
ER -
TY - THES
A1 - Vu, Bac Nam
T1 - Stochastic uncertainty quantification for multiscale modeling of polymeric nanocomposites
N2 - Nanostructured materials are extensively applied in many fields of materials science for new industrial applications, particularly in the automotive and aerospace industries, due to their exceptional physical and mechanical properties. Experimental testing of nanomaterials is expensive, time-consuming, challenging and sometimes unfeasible. Therefore, computational simulations have been employed as an alternative method to predict macroscopic material properties. The behavior of polymeric nanocomposites (PNCs) is highly complex. The origins of macroscopic material properties reside in the properties and interactions taking place on finer scales. It is therefore essential to use a multiscale modeling strategy to properly account for all large length and time scales associated with these material systems, which span many orders of magnitude. Numerous multiscale models of PNCs have been established; however, most of them connect only two scales. There are only a few multiscale models for PNCs bridging four length scales (nano-, micro-, meso- and macro-scales). In addition, nanomaterials are stochastic in nature, and the prediction of macroscopic mechanical properties is influenced by many factors such as fine-scale features. The mechanical properties predicted by traditional approaches deviate significantly from the values measured in experiments because the uncertainty of the material features is neglected. This discrepancy indicates that the effective macroscopic properties of materials are highly sensitive to various sources of uncertainty, such as loading and boundary conditions and material characteristics, while very few stochastic multiscale models for PNCs have been developed. Therefore, it is essential to construct PNC models within the framework of stochastic modeling and to quantify the stochastic effect of the input parameters on the macroscopic mechanical properties of those materials.
This study aims to develop computational models at four length scales (nano-, micro-, meso- and macro-scales) and hierarchical upscaling approaches bridging the length scales from nano to macro. A framework for uncertainty quantification (UQ), applied to predict the mechanical properties of the PNCs in dependence of material features at different scales, is studied. Sensitivity and uncertainty analysis are of great help in quantifying the effect of input parameters, considering both main and interaction effects, on the mechanical properties of the PNCs. To achieve this major goal, the following tasks are carried out: At the nano-scale, molecular dynamics (MD) simulations were used to investigate the deformation mechanism of glassy amorphous polyethylene (PE) in dependence of temperature and strain rate. Steered molecular dynamics (SMD) simulations were also employed to investigate the interfacial characteristics of the PNCs. At the micro-scale, we developed an atomistic-based continuum model represented by a representative volume element (RVE), in which the SWNT's properties and the SWNT/polymer interphase are modeled at the nano-scale, while the surrounding polymer matrix is modeled by solid elements. Then, a two-parameter model was employed at the meso-scale. A hierarchical multiscale approach has been developed to obtain the structure-property relations at one length scale and transfer the effect to the higher length scales. In particular, we homogenized the RVE into an equivalent fiber. The equivalent fiber was then employed in a micromechanical analysis (i.e. the Mori-Tanaka model) to predict the effective macroscopic properties of the PNC. Furthermore, an averaging homogenization process was also used to obtain the effective stiffness of the PNC at the meso-scale. Stochastic modeling and uncertainty quantification consist of the following ingredients:
- Simple random sampling, Latin hypercube sampling, Sobol' quasirandom sequences, and Iman and Conover's method (inducing correlation in Latin hypercube sampling) are employed to generate independent and dependent sample data, respectively (see the sketch after this list).
- Surrogate models, such as polynomial regression, moving least squares (MLS), a hybrid method combining polynomial regression and MLS, Kriging regression, and penalized spline regression, are employed as approximations of the mechanical model. The advantage of the surrogate models is their high computational efficiency and robustness, as they can be constructed from a limited amount of available data.
- Global sensitivity analysis (SA) methods, such as variance-based methods for models with independent and dependent input parameters, Fourier-based techniques for performing variance-based methods, and partial derivatives and elementary effects in the context of local SA, are used to quantify the effects of the input parameters and their interactions on the mechanical properties of the PNCs. A bootstrap technique is used to assess the robustness of the global SA methods with respect to their performance.
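As a toy illustration of the first and third ingredients above (Latin hypercube sampling and a variance-based first-order index), the following Python sketch samples a made-up two-parameter response and estimates Sobol' indices with a Saltelli-type estimator. The model function, its dimension and all numbers are illustrative assumptions, not the models or code of this thesis.

```python
# Minimal sketch: LHS sampling + first-order Sobol' indices on a toy model.
import numpy as np
from scipy.stats import qmc

def toy_model(x):
    # Hypothetical response, e.g. a stand-in for a homogenized modulus:
    # parameter 0 dominates, parameter 1 contributes weakly.
    return 10.0 * x[:, 0] + np.sin(2.0 * np.pi * x[:, 1]) + x[:, 0] * x[:, 1]

n, d = 4096, 2
sampler = qmc.LatinHypercube(d=d, seed=0)
A, B = sampler.random(n), sampler.random(n)   # two independent LHS designs
yA, yB = toy_model(A), toy_model(B)
V = np.var(np.concatenate([yA, yB]))          # total output variance

for i in range(d):
    ABi = A.copy()
    ABi[:, i] = B[:, i]                       # replace column i of A by B's
    Si = np.mean(yB * (toy_model(ABi) - yA)) / V   # Saltelli (2010) estimator
    print(f"first-order index S_{i} = {Si:.3f}")
```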
In addition, the probability distributions of the mechanical properties are determined by using the probability plot method, and the upper and lower bounds of the predicted Young's modulus according to 95 % prediction intervals are provided. The above-mentioned methods address the behaviour of intact materials. Novel numerical methods, such as a node-based smoothed extended finite element method (NS-XFEM) and an edge-based smoothed phantom node method (ES-Phantom node), were developed for fracture problems. These methods can be used to account for cracks at the macro-scale in future work. The predicted mechanical properties were validated and verified; they show good agreement with previous experimental and simulation results.
KW - Polymere
KW - nanocomposite
KW - Nanoverbundstruktur
KW - stochastic
KW - multiscale
Y1 - 2015
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20160322-25551
ER -
TY - THES
A1 - Hanna, John
T1 - Computational Fracture Modeling and Design of Encapsulation-Based Self-Healing Concrete Using XFEM and Cohesive Surface Technique
N2 - Encapsulation-based self-healing concrete (SHC) is the most promising technique for providing a self-healing mechanism to concrete, due to its capacity to heal cracks effectively without human intervention, extending the operational life and lowering the maintenance costs. The healing mechanism is created by embedding capsules containing the healing agent inside the concrete. The healing agent is released once the capsules are fractured, and healing occurs in the vicinity of the damaged part. The healing efficiency of SHC is still not fully understood and depends on several factors; in the case of microcapsule-based SHC, the fracture of the microcapsules is the most important prerequisite for releasing the healing agents and hence healing the cracks. This study contributes to verifying the healing efficiency of SHC and the fracture mechanism of the microcapsules. The extended finite element method (XFEM) is a flexible and powerful discrete crack method that allows crack propagation without the requirement for re-meshing and has shown high accuracy for modeling fracture in concrete. In this thesis, a computational fracture modeling approach for encapsulation-based SHC is proposed, based on the XFEM and the cohesive surface (CS) technique, to study the healing efficiency as well as the potential for fracture and debonding of the microcapsules, or of the solidified healing agents, from the concrete matrix. The concrete matrix and the microcapsule shell are both modeled by the XFEM and joined together by the CS. The effects of the healed-crack length, the interfacial fracture properties, and the microcapsule size on the load-carrying capacity and the fracture pattern of the SHC have been studied. The obtained results are compared to those obtained from the zero-thickness cohesive element approach to demonstrate the accuracy and validity of the proposed simulation. The fracture simulation is further developed to study the influence of capsular clustering on the fracture mechanism by varying the contact surface area of the CS between the microcapsule shell and the concrete matrix. The proposed fracture simulation is also expanded to 3D in order to validate the 2D computational simulations and to estimate the accuracy difference between 2D and 3D simulations. In addition, a design method is developed to size the microcapsules in consideration of a sufficient volume of healing agent to heal the expected crack width. This method is based on the configuration of the unit cell (UC), the representative volume element (RVE) and periodic boundary conditions (PBC), relating them to the volume fraction (Vf) and the crack width as variables. The proposed microcapsule design is verified through computational fracture simulations.
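At its core, the sizing question in the last sentences is a volume balance between the expected crack volume and the healing agent carried by the ruptured capsules. The sketch below is only that back-of-envelope balance under assumed numbers; it is not the UC/RVE/PBC-based design method developed in the thesis.

```python
# Back-of-envelope sketch (assumptions throughout, not the thesis method):
# size a spherical capsule so that the capsules crossing a crack can fill it.
import math

def required_capsule_radius(crack_width, crack_area, n_ruptured,
                            core_fraction=0.9):
    """crack_width [m], crack_area [m^2]: expected crack opening and face area.
    n_ruptured: assumed number of capsules broken by the crack.
    core_fraction: assumed volumetric share of healing agent per capsule."""
    v_agent = crack_width * crack_area              # agent volume to fill crack
    v_capsule = v_agent / (n_ruptured * core_fraction)
    return (3.0 * v_capsule / (4.0 * math.pi)) ** (1.0 / 3.0)

# Example: 0.3 mm crack over 50 cm^2 of crack face, 20 ruptured capsules.
r = required_capsule_radius(0.3e-3, 50.0e-4, 20)
print(f"required capsule radius = {r * 1e3:.2f} mm")
```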
KW - Beton
KW - Bruchverhalten
KW - Finite-Elemente-Methode
KW - Self-healing concrete
KW - Computational fracture modeling
KW - Capsular clustering
KW - Design of microcapsules
KW - XFEM
KW - Cohesive surface technique
KW - Mikrokapsel
KW - Selbstheilendem Beton
KW - Computermodellierung des Bruchverhaltens
KW - Entwurf von Mikrokapseln
KW - Kapselclustern
KW - Erweiterte Finite-Elemente-Methode
KW - Kohäsionsflächenverfahren
Y1 - 2022
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20221124-47467
ER -
TY - THES
A1 - Nanthakumar, S.S.
T1 - Inverse and optimization problems in piezoelectric materials using Extended Finite Element Method and Level sets
T1 - Inverse und Optimierungsprobleme für piezoelektrische Materialien mit der Extended Finite Elemente Methode und Level sets
N2 - Piezoelectric materials are used in several applications as sensors and actuators, where they experience high stress and electric field concentrations, as a result of which they may fail due to fracture. Though there are many analytical and experimental works on piezoelectric fracture mechanics, there are very few studies about damage detection, which is an interesting way to prevent the failure of these ceramics. An iterative method to treat the inverse problem of detecting cracks and voids in piezoelectric structures is proposed. The extended finite element method (XFEM) is employed for solving the inverse problem, as it allows the use of a single regular mesh for a large number of iterations with different flaw geometries. Firstly, the minimization of the cost function is performed by the Multilevel Coordinate Search (MCS) method. The XFEM-MCS methodology is applied to two-dimensional electromechanical problems, where the flaws considered are straight cracks and elliptical voids. Then a numerical method based on a combination of the classical shape derivative and the level set method for front propagation, as used in structural optimization, is utilized to minimize the cost function. The results obtained show that the XFEM-level set methodology is effectively able to determine the number of voids in a piezoelectric structure and their corresponding locations. The XFEM-level set methodology is improved to solve the inverse problem of detecting inclusion interfaces in a piezoelectric structure. The material interfaces are implicitly represented by level sets, which are identified by applying regularisation using total variation penalty terms. The formulation is presented for three-dimensional structures, and inclusions made of different materials are detected by using multiple level sets. The results obtained prove that the proposed iterative procedure can determine the location and approximate shape of material subdomains even in the presence of higher noise levels. Piezoelectric nanostructures exhibit size-dependent properties because of surface elasticity and surface piezoelectricity. Initially, a study is performed to understand the influence of surface elasticity on the optimization of nano elastic beams. The boundary of the nanostructure is implicitly represented by a level set function, which is considered as the design variable in the optimization process. Two objective functions, minimizing the total potential energy of a nanostructure subjected to a material volume constraint and minimizing the least square error compared to a target displacement, are chosen for the numerical examples. The numerical examples demonstrate the importance of size and aspect ratio in determining how surface effects impact the optimized topology of nanobeams.
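A minimal sketch of the implicit geometry description used above: a level set function on a grid whose smoothed Heaviside yields the material volume fraction entering a volume constraint. The grid, the circular initial design and the smoothing width are assumptions for illustration, not the thesis implementation.

```python
# Level set representation of a design domain and its volume fraction.
import numpy as np

n = 200
h = 1.0 / (n - 1)                               # grid spacing on the unit square
x, y = np.meshgrid(np.linspace(0, 1, n), np.linspace(0, 1, n))
phi = 0.35 - np.sqrt((x - 0.5)**2 + (y - 0.5)**2)   # phi > 0 inside a disc

eps = 2.0 * h                                   # smoothing width
H = 0.5 * (1.0 + np.tanh(phi / eps))            # smoothed Heaviside indicator
vf = H.mean()                                   # approximates the area integral
print(f"material volume fraction = {vf:.3f} (exact: {np.pi * 0.35**2:.3f})")
```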
Finally, a conventional cantilever energy harvester with a piezoelectric nano layer is analysed. The presence of surface piezoelectricity in nano beams and nano plates leads to an increase in the electromechanical coupling coefficient. Topology optimization of these piezoelectric structures in an energy harvesting device, to further increase the energy conversion, is performed using an appropriately modified XFEM-level set algorithm.
KW - Finite-Elemente-Methode
KW - Piezoelectricity
KW - Inverse problems
KW - Optimization problems
KW - Nanostructures
KW - XFEM
KW - level set method
KW - Surface effects
Y1 - 2016
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20161128-27095
ER -
TY - THES
A1 - Wang, Cuixia
T1 - Nanomechanical Resonators Based on Quasi-two-dimensional Materials
N2 - Advances in nanotechnology have led to the development of nano-electro-mechanical systems (NEMS) such as nanomechanical resonators with ultra-high resonant frequencies. Ultra-high-frequency resonators have recently received significant attention for wide-ranging applications such as molecular separation, molecular transportation, ultra-sensitive sensing, high-frequency signal processing, and biological imaging. It is well known that at the micrometer length scale, first-principles techniques, the most accurate approach, pose serious limitations for comparisons with experimental studies. For such larger sizes, classical molecular dynamics (MD) simulations, which require interatomic potentials, are desirable. Additionally, a mesoscale method such as the coarse-grained (CG) method is another useful means to support simulations of even larger system sizes. Furthermore, quasi-two-dimensional (Q2D) materials have attracted intensive research interest over the past decades due to their many novel properties. However, the energy dissipation mechanisms of nanomechanical resonators based on several Q2D materials are still unknown. In this work, the main issues addressed include the development of CG models for molybdenum disulphide (MoS2), the investigation of mechanism effects on black phosphorus (BP) nanoresonators, and an application of graphene nanoresonators. The primary coverage and results of the dissertation are as follows: Method development. Firstly, a two-dimensional (2D) CG model for single-layer MoS2 (SLMoS2) is analytically developed. The Stillinger-Weber (SW) potential for this 2D CG model is further parametrized, in which all SW geometrical parameters are determined analytically according to the equilibrium condition for each individual potential term, while the SW energy parameters are derived analytically based on the valence force field model. Next, the 2D CG model is further simplified to a one-dimensional (1D) CG model, which describes the 2D SLMoS2 structure using a 1D chain model. This 1D CG model is applied to investigate the relaxed configuration and the resonant oscillation of folded SLMoS2. Owing to the simplicity of the 1D CG model, the relaxed configuration of the folded SLMoS2 is determined analytically, and the resonant oscillation frequency is derived analytically. Considering the increasing interest in studying the properties of other 2D layered materials, in particular those in the semiconducting transition metal dichalcogenide class like MoS2, the CG models proposed in the current work provide valuable simulation approaches. Mechanism understanding. Two energy dissipation mechanisms of BP nanoresonators are focused on exclusively, i.e.
mechanical strain effects and defect effects (including vacancies and oxidation). The vacancy defect is an intrinsic damping factor for the quality (Q) factor, while mechanical strain and oxidation are extrinsic damping factors. Intrinsic dissipation (induced by thermal vibrations) in BP resonators (BPRs) is investigated first. Specifically, classical MD simulations are performed to examine the temperature dependence of the Q-factor of the single-layer BPR (SLBPR) along the armchair and zigzag directions, where a two-step fitting procedure is used to extract the frequency and Q-factor from the kinetic energy time history. The Q-factors of BPRs are evaluated through comparison with those of graphene and MoS2 nanoresonators. Next, the effects of mechanical strain, vacancies and oxidation on BP nanoresonators are investigated in turn. Considering the increasing interest in studying the properties of BP, and in particular the lack of theoretical studies of BPRs, the results of the current work provide a useful reference. Application. A novel application for graphene nanoresonators, using them to self-assemble small nanostructures such as water chains, is proposed, and the underlying physics enabling this phenomenon is elucidated. In particular, drawing inspiration from macroscale self-assembly using the higher-order resonant modes of Chladni plates, classical MD simulations are used to investigate the self-assembly of water molecules using graphene nanoresonators. An analytic formula for the critical resonant frequency, based on the interaction between the water molecules and graphene, is provided. Furthermore, the properties of the water chains assembled by the graphene nanoresonators are studied.
T3 - ISM-Bericht // Institut für Strukturmechanik, Bauhaus-Universität Weimar - 2018,3
KW - Nanomechanik
KW - Resonator
KW - Nanomechanical Resonators
Y1 - 2018
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20180709-37609
ER -
TY - THES
A1 - Vollmering, Max
T1 - Damage Localization of Mechanical Structures by Subspace Identification and Krein Space Based H-infinity Estimation
N2 - This dissertation is devoted to the theoretical development and experimental laboratory verification of a new damage localization method: the state projection estimation error (SP2E). This method is based on the subspace identification of mechanical structures, Krein space based H-infinity estimation and oblique projections. To explain the method SP2E, several theories are discussed, and laboratory experiments have been conducted and analysed. A fundamental approach of structural dynamics is outlined first by explaining mechanical systems based on first principles. Following that, a fundamentally different approach, subspace identification, is comprehensively explained. While both theories, first-principles and subspace-identification based descriptions of mechanical systems, may be seen as widespread methods, barely known and new techniques follow. Therefore, the indefinite quadratic estimation theory is explained. Based on a Popov function approach, this leads to the Krein space based H-infinity theory. Subsequently, a new method for damage identification, namely SP2E, is proposed. Here, the introduction of a difference process, its analysis by means of the average process power, and the application of oblique projections are discussed in depth. Finally, the new method is verified in laboratory experiments. To this end, the identification of a laboratory structure at Leipzig University of Applied Sciences is elaborated.
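To make the subspace-identification ingredient tangible, here is a minimal output-only sketch on simulated data: the measured response is stacked into a block Hankel matrix, and the singular value spectrum suggests a model order. Signal content, noise level and block size are assumptions; the actual SP2E method additionally involves projections and Krein space based H-infinity estimation, which are not shown.

```python
# First step of output-only subspace identification on toy data.
import numpy as np

rng = np.random.default_rng(1)
fs, T = 100.0, 60.0                        # sampling rate [Hz], duration [s]
t = np.arange(0.0, T, 1.0 / fs)
# Toy "ambient" response: two lightly damped modes plus measurement noise.
y = (np.sin(2 * np.pi * 3.1 * t) * np.exp(-0.02 * t)
     + 0.5 * np.sin(2 * np.pi * 7.4 * t) * np.exp(-0.03 * t)
     + 0.1 * rng.standard_normal(t.size))

i = 30                                     # number of block rows
N = y.size - 2 * i + 1
H = np.array([y[k:k + N] for k in range(2 * i)])   # (2i x N) block Hankel
s = np.linalg.svd(H, compute_uv=False)
print("normalized singular values:", np.round(s[:8] / s[0], 3))
# Two modes -> a drop after four singular values is expected (model order 4).
```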
Structural alterations are then applied experimentally, which are subsequently localized by SP2E. Finally, four experimental sensitivity studies are shown and discussed. For each measurement series the structural alteration was increased, which was successfully tracked by SP2E. The experimental results are plausible and in accordance with the developed theories. By repeating these experiments, the applicability of SP2E for damage localization is experimentally proven.
T3 - ISM-Bericht // Institut für Strukturmechanik, Bauhaus-Universität Weimar - 2018,5
KW - Strukturmechanik
KW - Schätztheorie
Y1 - 2018
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20180730-37728
ER -
TY - THES
A1 - Liu, Bokai
T1 - Stochastic multiscale modeling of polymeric nanocomposites using Data-driven techniques
N2 - In recent years, lightweight materials such as polymeric nanocomposites (PNCs) have been studied and developed due to their excellent physical and chemical properties. Structures composed of these composite materials are widely used in aerospace engineering structures, automotive components, and electrical devices. The outstanding mechanical, thermal, and electrical properties of carbon nanotubes (CNTs) make them an ideal filler to strengthen the corresponding properties of polymer materials. The heat transfer of composite materials has very promising engineering applications in many fields, especially in electronic devices and energy storage equipment. It is essential in high-energy-density systems, since electronic components need heat dissipation functionality; in other words, in electronic devices the generated heat should ideally be dissipated by light and small heat sinks. Polymeric composites consist of fillers embedded in a polymer matrix; the former significantly affect the overall (macroscopic) performance of the material. There are many common carbon-based fillers, such as single-walled carbon nanotubes (SWCNTs), multi-walled carbon nanotubes (MWCNTs), carbon nanobuds (CNBs), fullerene, and graphene. Additives inside the matrix have become a popular subject for researchers. Some extraordinary characteristics, such as high load-bearing performance, lightweight design, excellent chemical resistance, easy processing, and heat transfer, make the design of polymeric nanotube composites flexible. Due to the reinforcing effects of different fillers on composite materials, there is a high degree of design freedom, and the structure can be tailored to the needs of specific applications. As already stated, our research focus will be on SWCNT-enhanced PNCs. Since experiments are time-consuming, sometimes expensive, and cannot shed light on phenomena taking place, for instance, at the interfaces/interphases of composites, they are often complemented by theoretical and computational analysis. While most studies are based on deterministic approaches, there is a comparatively lower number of stochastic methods accounting for uncertainties in the input parameters. In deterministic models, the output of the model is fully determined by the parameter values and the initial conditions. However, uncertainties in input parameters such as the aspect ratio, the volume fraction, and the thermal properties of fiber and matrix need to be taken into account for reliable predictions. In this research, a stochastic multiscale method is provided to study the influence of numerous uncertain input parameters on the thermal conductivity of the composite.
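A hypothetical illustration of what such uncertainty propagation can look like: uncertain filler conductivity and volume fraction are sampled and pushed through a closed-form Maxwell-Garnett estimate (valid for spherical fillers), standing in here for the thesis's computational homogenization of CNT composites. All distributions and material values are assumptions.

```python
# Monte Carlo propagation of uncertain inputs to an effective conductivity.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
km = 0.2                                   # matrix conductivity [W/(m K)], assumed
kf = rng.normal(2000.0, 200.0, n)          # filler conductivity, assumed spread
phi = rng.uniform(0.01, 0.05, n)           # filler volume fraction, assumed range

# Maxwell-Garnett estimate for spherical inclusions:
keff = km * (kf + 2*km + 2*phi*(kf - km)) / (kf + 2*km - phi*(kf - km))
print(f"k_eff: mean = {keff.mean():.3f} W/(m K), std = {keff.std():.3f} W/(m K)")
```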
To this end, a hierarchical multi-scale method based on computational homogenization is presented to predict the macroscopic thermal conductivity from the fine-scale structure. In order to study the inner mechanism, we use the finite element method and employ surrogate models to conduct a Global Sensitivity Analysis (GSA). The SA is performed in order to quantify the influence of the conductivity of the fiber, the conductivity of the matrix, the Kapitza resistance, the volume fraction and the aspect ratio on the macroscopic conductivity. To this end, we compute first-order and total-effect sensitivity indices with different surrogate models. As stochastic multiscale models are computationally expensive, surrogate approaches are commonly exploited. With the emergence of high-performance computing and artificial intelligence, machine learning has become a popular modeling tool for numerous applications. Machine learning (ML) is commonly used in regression, mapping data through algorithm-based rules to build input-output models. It is particularly useful for nonlinear input-output relationships when sufficient data is available. ML has also been used in the design of new materials and in multiscale analysis. For instance, artificial neural networks and integrated learning seem to be ideal for such a task, as they can theoretically approximate any non-linear relationship through the connection of neurons. Mapping relationships are employed to carry out data-driven simulations of inputs and outputs in stochastic modeling. This research aims to develop stochastic multi-scale computational models of PNCs in heat transfer. Multi-scale stochastic modeling with uncertainty analysis and machine learning methods consists of the following components:
- Uncertainty analysis. A surrogate-based global sensitivity analysis is coupled with a hierarchical multi-scale method employing computational homogenization. The effects of the conductivity of the fibers and the matrix, the Kapitza resistance, the volume fraction and the aspect ratio on the "macroscopic" conductivity of the composite are systematically studied. All selected surrogate models consistently yield the conclusion that the most influential input parameter is the aspect ratio, followed by the volume fraction, while the Kapitza resistance has no significant effect on the thermal conductivity of the PNCs. The most accurate surrogate model in terms of the R2 value is the moving least squares (MLS) model.
- Hybrid machine learning algorithms. A combination of an artificial neural network (ANN) and particle swarm optimization (PSO) is applied to estimate the relationship between variable input and output parameters. The ANN is used for modeling the composite, while the PSO improves the prediction performance through an optimized global minimum search (see the sketch after this list). The thermal conductivity of the fibers and the matrix, the Kapitza resistance, the volume fraction and the aspect ratio are selected as input parameters; the output is the macroscopic (homogenized) thermal conductivity of the composite. The results show that the PSO significantly improves the predictive ability of this hybrid intelligent algorithm, which outperforms traditional neural networks.
- Stochastic integrated machine learning. A stochastic integrated machine learning based multiscale approach for the prediction of the macroscopic thermal conductivity of PNCs is developed. Seven types of machine learning models are exploited in this research, namely Multivariate Adaptive Regression Splines (MARS), Support Vector Machine (SVM), Regression Tree (RT), Bagging Tree (Bag), Random Forest (RF), Gradient Boosting Machine (GBM) and Cubist. They are used as components of the stochastic modeling to construct the relationship between the uncertain input variables and the macroscopic thermal conductivity of the PNCs. Particle Swarm Optimization (PSO) is used for hyper-parameter tuning to find the global optimum, leading to a significant reduction in the computational cost. The advantages and disadvantages of the various methods are also analyzed in terms of computing time and model complexity to finally give a recommendation on the applicability of the different models.
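A sketch of the global-search component only: a basic particle swarm in plain NumPy minimizing a toy objective. All swarm parameters and the objective are assumptions; in the thesis, the PSO is coupled with an ANN (for weight and hyper-parameter search) rather than used on a stand-alone function.

```python
# Minimal particle swarm optimization (PSO) over a vectorized objective.
import numpy as np

def pso(objective, dim, n_particles=30, iters=200, seed=0,
        w=0.7, c1=1.5, c2=1.5, bounds=(-5.0, 5.0)):
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))      # particle positions
    v = np.zeros_like(x)                             # particle velocities
    pbest, pval = x.copy(), objective(x)             # personal bests
    g = pbest[np.argmin(pval)]                       # global best
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        # inertia + cognitive pull + social pull:
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        val = objective(x)
        better = val < pval
        pbest[better], pval[better] = x[better], val[better]
        g = pbest[np.argmin(pval)]
    return g, pval.min()

# Toy objective: sphere function, minimum 0 at the origin.
best, fbest = pso(lambda X: np.sum(X**2, axis=1), dim=4)
print(best, fbest)
```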
T3 - ISM-Bericht // Institut für Strukturmechanik, Bauhaus-Universität Weimar - 2022,3
KW - Polymere
KW - Nanoverbundstruktur
KW - multiscale
KW - nanocomposite
KW - stochastic
KW - Data-driven
Y1 - 2022
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20220503-46379
ER -
TY - THES
A1 - Schemmann, Christoph
T1 - Optimierung von radialen Verdichterlaufrädern unter Berücksichtigung empirischer und analytischer Vorinformationen mittels eines mehrstufigen Sampling Verfahrens
T1 - Optimization of Centrifugal Compressor Impellers by a Multi-fidelity Sampling Method Taking Analytical and Empirical Information into Account
N2 - Turbomachinery plays an important role in many cases of energy generation or conversion. Therefore, turbomachinery is a promising point of approach for optimization in order to increase the efficiency of energy use. In recent years, the use of automated optimization strategies in combination with numerical simulation has become increasingly popular in many fields of engineering. However, the complex interactions between fluid and solid mechanics encountered in turbomachines on the one hand, and the high computational expense needed to calculate the performance on the other hand, have prevented a widespread use of these techniques in this field of engineering. The objective of this work was the development of a strategy for the efficient metamodel-based optimization of centrifugal compressor impellers. In this context, the main focus is the reduction of the required numerical expense. The central idea followed in this research was the incorporation of preliminary information, acquired from low-fidelity computation methods and empirical correlations, into the sampling process in order to identify promising regions of the parameter space. This information was then used to concentrate the numerically expensive high-fidelity computations of the fluid dynamic and structural mechanic performance of the impeller in these regions, while still maintaining a good coverage of the whole parameter space. The development of the optimization strategy can be divided into three main tasks. Firstly, the available preliminary information had to be researched and rated. This research identified loss models based on one-dimensional flow physics and empirical correlations as the best-suited method to predict the aerodynamic performance. The loss models were calibrated using available performance data to obtain a high prediction quality. As no sufficiently exact models for the prediction of the mechanical loading of the impeller could be identified, a metamodel based on finite element computations was chosen for this estimation.
The second task was the development of a sampling method which concentrates samples in regions of the parameter space where high-quality designs are predicted by the preliminary information, while maintaining a good overall coverage. As available methods like rejection sampling or Markov chain Monte Carlo methods did not meet the requirements in terms of sample distribution and input correlation, a new multi-fidelity sampling method called "Filtered Sampling" has been developed. The last task was the development of an automated computational workflow. This workflow encompasses geometry parametrization, geometry generation, grid generation and the computation of the aerodynamic performance and the structural mechanic loading. Special emphasis was placed on the development of a geometry parametrization strategy based on fluid mechanic considerations in order to prevent the generation of physically inexpedient designs. Finally, the optimization strategy, which utilizes the previously developed tools, was successfully employed to carry out three optimization tasks. The efficiency of the method was proven by the first and second test cases, in which an existing compressor design was optimized by the presented method. The results were comparable to those of optimizations which did not take preliminary information into account, while the required computational expense could be halved. In the third test case, the method was applied to generate a new impeller design. In contrast to the previous examples, this optimization featured larger variations of the impeller designs. Thereby, the applicability of the method to parameter spaces with significantly varying designs could be proven, too.
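The abstract does not spell out the "Filtered Sampling" algorithm itself, so the following is only a guessed, generic sketch of the multi-fidelity idea it describes: draw a large space-filling candidate set, score it with a cheap low-fidelity predictor, and keep a design that is biased towards promising candidates while reserving a share of points for global coverage. Every rule, ratio and function below is an assumption, not the published method.

```python
# Generic multi-fidelity "filter" over a space-filling candidate set.
import numpy as np
from scipy.stats import qmc

def cheap_low_fidelity(x):
    # Stand-in for calibrated 1D loss models: a made-up efficiency predictor.
    return 1.0 - np.sum((x - 0.6)**2, axis=1)

rng = np.random.default_rng(3)
d, n_candidates, n_keep = 5, 2000, 100
candidates = qmc.LatinHypercube(d=d, seed=3).random(n_candidates)
score = cheap_low_fidelity(candidates)

n_best = int(0.7 * n_keep)                  # assumed 70/30 exploit/explore split
best = np.argsort(score)[-n_best:]          # top candidates by low-fidelity score
rest = np.setdiff1d(np.arange(n_candidates), best)
cover = rng.choice(rest, n_keep - n_best, replace=False)   # keep global coverage
design = candidates[np.concatenate([best, cover])]
print(design.shape)   # (100, 5): points for the expensive high-fidelity runs
```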
N2 - Turbomaschinen sind eine entscheidende Komponente in vielen Energiewandlungs- oder Energieerzeugungsprozessen und daher als vielversprechender Ansatzpunkt für eine Effizienzsteigerung der Energie- und Ressourcennutzung anzusehen. Im Laufe des letzten Jahrzehnts haben automatisierte Optimierungsmethoden in Verbindung mit numerischer Simulation zunehmend breitere Verwendung als Mittel zur Effizienzsteigerung in vielen Bereichen der Ingenieurwissenschaften gefunden. Allerdings standen die komplexen Interaktionen zwischen Strömungs- und Strukturmechanik sowie der hohe numerische Aufwand einem weitverbreiteten Einsatz dieser Methoden im Turbomaschinenbereich bisher entgegen. Das Ziel dieser Forschungsaktivität ist die Entwicklung einer effizienten Strategie zur metamodellbasierten Optimierung von radialen Verdichterlaufrädern. Dabei liegt der Schwerpunkt auf einer Reduktion des benötigten numerischen Aufwandes. Der in diesem Vorhaben gewählte Ansatz ist das Einbeziehen analytischer und empirischer Vorinformationen ("low-fidelity") in den Sampling-Prozess, um vielversprechende Bereiche des Parameterraumes zu identifizieren. Diese Informationen werden genutzt, um die aufwendigen numerischen Berechnungen ("high-fidelity") des strömungs- und strukturmechanischen Verhaltens der Laufräder in diesen Bereichen zu konzentrieren, während gleichzeitig eine ausreichende Abdeckung des gesamten Parameterraumes sichergestellt wird. Die Entwicklung der Optimierungsstrategie ist in drei zentrale Arbeitspakete aufgeteilt. In einem ersten Schritt werden die verfügbaren empirischen und analytischen Methoden gesichtet und bewertet. In dieser Recherche sind Verlustmodelle basierend auf eindimensionaler Strömungsmechanik und empirischen Korrelationen als bestgeeignete Methode zur Vorhersage des aerodynamischen Verhaltens der Verdichter identifiziert worden. Um eine hohe Vorhersagegüte sicherzustellen, sind diese Modelle anhand verfügbarer Leistungsdaten kalibriert worden. Da zur Vorhersage der mechanischen Belastung des Laufrades keine brauchbaren analytischen oder empirischen Modelle ermittelt werden konnten, ist hier ein Metamodell basierend auf Finite-Elemente-Berechnungen gewählt worden. Das zweite Arbeitspaket beinhaltet die Entwicklung der angepassten Samplingmethode, welche Samples in Bereichen des Parameterraumes konzentriert, die auf Basis der Vorinformationen als vielversprechend angesehen werden können. Gleichzeitig müssen eine gleichmäßige Abdeckung des gesamten Parameterraumes und ein niedriges Niveau an Eingangskorrelationen sichergestellt sein. Da etablierte Methoden wie Markov-Ketten-Monte-Carlo-Methoden oder die Verwerfungsmethode diese Voraussetzungen nicht erfüllen, ist ein neues, mehrstufiges Samplingverfahren ("Filtered Sampling") entwickelt worden. Das letzte Arbeitspaket umfasst die Entwicklung eines automatisierten Simulations-Workflows. Dieser Workflow umfasst Geometrieparametrisierung, Geometrieerzeugung, Netzerzeugung sowie die Berechnung des aerodynamischen Betriebsverhaltens und der strukturmechanischen Belastung. Dabei liegt ein Schwerpunkt auf der Entwicklung eines Parametrisierungskonzeptes, welches auf strömungsmechanischen Zusammenhängen beruht, um so physikalisch nicht zielführende Parameterkombinationen zu vermeiden. Abschließend ist die auf den zuvor entwickelten Werkzeugen aufbauende Optimierungsstrategie erfolgreich eingesetzt worden, um drei Optimierungsfragestellungen zu bearbeiten. Im ersten und zweiten Testcase sind bestehende Verdichterlaufräder mit der vorgestellten Methode optimiert worden. Die erzielten Optimierungsergebnisse sind von ähnlicher Güte wie die solcher Optimierungen, die keine Vorinformationen berücksichtigen, allerdings wird nur die Hälfte an numerischem Aufwand benötigt. In einem dritten Testcase ist die Methode eingesetzt worden, um ein neues Laufraddesign zu erzeugen. Im Gegensatz zu den vorherigen Beispielen werden im Rahmen dieser Optimierung stark unterschiedliche Designs untersucht. Dadurch kann an diesem dritten Beispiel aufgezeigt werden, dass die Methode auch für Parameterräume mit stark variierenden Designs funktioniert.
T3 - ISM-Bericht // Institut für Strukturmechanik, Bauhaus-Universität Weimar - 2019,3
KW - Simulation
KW - Maschinenbau
KW - Optimierung
KW - Strömungsmechanik
KW - Strukturmechanik
Y1 - 2019
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20190910-39748
ER -
TY - THES
A1 - Zabel, Volkmar
ED - Könke, Carsten
ED - Lahmer, Tom
ED - Rabczuk, Timon
T1 - Operational modal analysis - Theory and aspects of application in civil engineering
N2 - In recent years the demand for dynamic analyses of existing structures in civil engineering has increased remarkably. These analyses are mainly based on numerical models. Accordingly, the generated results depend on the quality of the models used. It is therefore very important that the models describe the considered systems such that the behaviour of the physical structure is realistically represented. As any model is based on assumptions, a certain degree of uncertainty is always present in the results of a simulation based on the respective numerical model.
To minimise these uncertainties in the prediction of the response of a structure to a certain loading, it has become common practice to update or calibrate the parameters of a numerical model based on observations of the structural behaviour of the respective existing system. The determination of the behaviour of an existing structure requires experimental investigations. If the numerical analyses concern the dynamic response of a structure, it is sensible to direct the experimental investigations towards the identification of the dynamic structural behaviour, which is determined by the modal parameters of the system. In consequence, several methods for the experimental identification of modal parameters have been developed since the 1980s. Due to various technical constraints in civil engineering which limit the possibilities to excite a structure with economically reasonable effort, several methods have been developed that allow a modal identification from tests with ambient excitation. The approach of identifying modal parameters only from measurements of the structural response, without precise knowledge of the excitation, is known as output-only or operational modal analysis. Since operational modal analysis (OMA) can be considered as a link between numerical modelling and simulation on the one hand and the dynamic behaviour of an existing structure on the other hand, the respective algorithms connect both the concepts of structural dynamics and the mathematical tools applied within the processing of experimental data. Accordingly, the related theoretical topics are revised after an introduction into the topic. Several OMA methods have been developed over the last decades. The most established algorithms are presented here, and their application is illustrated by means of both a small numerical and an experimental example. Since experimentally obtained results always underlie manifold influences, an appropriate postprocessing of the results is necessary for a respective quality assessment. This quality assessment does not only require respective indicators but should also include the quantification of uncertainties. One special feature of modal testing is that it is common to instrument the structure in different sensor setups to improve the spatial resolution of the identified mode shapes. The modal information identified from tests in several setups then needs to be merged a posteriori; algorithms to cope with this problem are also presented. Due to the fact that the amount of data generated in modal tests can become very large, manual processing can become extremely expensive or even impossible, for example in the case of long-term continuous structural monitoring. In these situations an automated analysis and postprocessing are essential. Descriptions of the respective methodologies are therefore also included in this work. Every structural system in civil engineering is unique, and so every identification of modal parameters has its specific challenges. Some aspects that can be faced in practical applications of operational modal analysis are presented and discussed in a chapter that is dedicated to specific problems that an analyst may have to overcome. Case studies of systems with very close modes and with limited accessibility, as well as the application of different OMA methods, are described and discussed. In this context the focus is put on several types of uncertainty that may occur in the multiple stages of an operational modal analysis.
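As a minimal, simulated illustration of the output-only idea, the sketch below applies plain peak picking to the spectrum of an ambient response. This is only the simplest conceivable stand-in; the established OMA algorithms treated in this work (e.g. stochastic subspace or frequency-domain decomposition methods) go well beyond it, and all signal parameters here are assumptions.

```python
# Peak picking on the Welch spectrum of a simulated ambient response.
import numpy as np
from scipy.signal import welch, find_peaks

rng = np.random.default_rng(2)
fs = 200.0
t = np.arange(0.0, 120.0, 1.0 / fs)
# Simulated response with modes near 2.0 Hz and 5.5 Hz plus broadband noise.
y = (np.sin(2 * np.pi * 2.0 * t + rng.uniform(0, 2 * np.pi))
     + 0.6 * np.sin(2 * np.pi * 5.5 * t + rng.uniform(0, 2 * np.pi))
     + 0.3 * rng.standard_normal(t.size))

f, Pyy = welch(y, fs=fs, nperseg=4096)        # averaged power spectral density
peaks, _ = find_peaks(Pyy, height=Pyy.max() * 0.05)
print("identified natural frequencies [Hz]:", f[peaks])
```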
In the literature, only very specific uncertainties at certain stages of the analysis are addressed. Here, the topic of uncertainties has been considered in a broader sense, and approaches for treating the respective problems are suggested. Eventually, it is concluded that the methodologies of operational modal analysis and the related technical solutions are already well-engineered. However, as in any discipline that includes experiments, a certain degree of uncertainty always remains in the results. From these conclusions, a demand for further research and development has been derived, which should be directed towards the minimisation of these uncertainties and towards a respective optimisation of the steps and corresponding parameters involved in an operational modal analysis.
T3 - ISM-Bericht // Institut für Strukturmechanik, Bauhaus-Universität Weimar - 2019,5
KW - Modalanalyse
KW - Strukturdynamik
KW - Operational modal analysis
KW - modal analysis
KW - structural dynamics
Y1 - 2019
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20191030-40061
ER -