TY - THES A1 - Brehm, Maik T1 - Vibration-based model updating: Reduction and quantification of uncertainties N2 - Numerical models and their combination with advanced solution strategies are standard tools in many engineering disciplines for designing or redesigning structures and for optimizing designs to meet specific requirements. As the successful application of numerical models depends on their suitability to represent the behavior related to the intended use, they should be validated against experimentally obtained results. If the discrepancy between numerically derived and experimentally obtained results is not acceptable, a revision of the model or of the experiment needs to be considered. Model revision can be divided into two classes: model updating and basic revision of the numerical model. This thesis is concerned with a special branch of model updating, namely vibration-based model updating. Vibration-based model updating improves the correlation of the numerical model with the real structure by adjusting uncertain model input parameters by means of results extracted from vibration tests. Evidently, uncertainties related to the experiment, the numerical model, or the applied numerical solving strategies can influence the correctness of the identified model input parameters. The reduction of uncertainties for two critical problems and the quantification of uncertainties related to the investigation of several nominally identical structures are the main emphases of this thesis. First, the reduction of uncertainties by optimizing reference sensor positions is considered. The presented approach relies on predicted power spectral amplitudes and an initial finite element model as a basis to define the assessment criterion for predefined sensor positions. In combination with geometry-based design variables, which represent the sensor positions, genetic and particle swarm optimization algorithms are applied. 
The applicability of the proposed approach is demonstrated on a numerical benchmark study of a simply supported beam and a case study of a real test specimen. Furthermore, the theory of determining the predicted power spectral amplitudes is validated with results from vibration tests. Second, the possibility of reducing uncertainties related to an inappropriate assignment of numerically derived and experimentally obtained modes is investigated. In the context of vibration-based model updating, the correct pairing is essential. The most common criterion for indicating corresponding mode shapes is the modal assurance criterion. Unfortunately, this criterion fails in certain cases and is not reliable for automatic approaches. Hence, an alternative criterion, the energy-based modal assurance criterion, is proposed. This criterion combines the mathematical characteristic of orthogonality with the physical properties of the structure by means of modal strain energies. A numerical example and a case study with experimental data are presented to show the advantages of the proposed energy-based modal assurance criterion in comparison to the traditional modal assurance criterion. Third, the application of optimization strategies combined with objective functions based on information theory is analyzed for the purpose of stochastic model updating. This approach serves as an alternative to the common sensitivity-based stochastic model updating strategies, whose success depends strongly on the defined initial model input parameters. In contrast, approaches based on optimization strategies can be more flexible. It can be demonstrated that the investigated nature-inspired optimization strategies in combination with the Bhattacharyya distance and the Kullback-Leibler divergence are appropriate. The obtained accuracies and the respective computational effort are comparable with those of sensitivity-based stochastic model updating strategies. 
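For reference, the classical modal assurance criterion named above has a compact closed form; the following is a minimal illustrative sketch (the thesis's energy-based variant additionally weights the shapes with modal strain energies and is not reproduced here):

```python
def mac(phi_a, phi_e):
    """Modal assurance criterion between two real mode-shape vectors.

    Values near 1 indicate well-correlated (likely corresponding) numerical
    and experimental mode shapes; the criterion is invariant to scaling.
    """
    dot_ae = sum(a * e for a, e in zip(phi_a, phi_e))
    dot_aa = sum(a * a for a in phi_a)
    dot_ee = sum(e * e for e in phi_e)
    return dot_ae ** 2 / (dot_aa * dot_ee)

phi = [1.0, 0.5, -0.3]
print(mac(phi, [2.0 * p for p in phi]))  # scaling-invariant: ~1.0
print(mac([1.0, 0.0], [0.0, 1.0]))       # orthogonal shapes: 0.0
```

Because the criterion only measures the cosine of the angle between the shape vectors, it can pair spatially aliased or closely spaced modes incorrectly, which is the failure mode the abstract refers to.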
The application of model updating procedures to improve the quality and suitability of a numerical model always entails additional costs. The presented innovative approaches contribute to reducing and quantifying uncertainties within a vibration-based model updating process. Therefore, the increased benefit can compensate for the additional effort that is necessary to apply model updating procedures. N2 - Eine typische Anwendung von numerischen Modellen und den damit verbundenen numerischen Lösungsstrategien ist das Entwerfen oder Ertüchtigen von Strukturen und das Optimieren von Entwürfen zur Verbesserung spezifischer Eigenschaften. Der erfolgreiche Einsatz von numerischen Modellen ist abhängig von der Eignung des Modells bezüglich der vorgesehenen Anwendung. Deshalb ist eine Validierung mit experimentellen Ergebnissen sinnvoll. Zeigt die Validierung inakzeptable Unterschiede zwischen den Ergebnissen des numerischen Modells und des Experiments, sollte das numerische Modell oder das experimentelle Vorgehen verbessert werden. Für die Modellverbesserung gibt es zwei verschiedene Möglichkeiten: zum einen die Kalibrierung des Modells und zum anderen die grundsätzliche Änderung von Modellannahmen. Die vorliegende Dissertation befasst sich mit der Kalibrierung von numerischen Modellen auf der Grundlage von Schwingungsversuchen. Modellkalibrierung ist eine Methode zur Verbesserung der Korrelation zwischen einem numerischen Modell und einer realen Struktur durch Anpassung von Modelleingangsparametern unter Verwendung von experimentell ermittelten Daten. Unsicherheiten bezüglich des numerischen Modells, des Experiments und der angewandten numerischen Lösungsstrategie beeinflussen entscheidend die erzielbare Qualität der identifizierten Modelleingangsparameter. Die Schwerpunkte dieser Dissertation sind die Reduzierung von Unsicherheiten für zwei kritische Probleme und die Quantifizierung von Unsicherheiten, die aus Experimenten an nominell gleichen Strukturen extrahiert werden. 
Der erste Schwerpunkt beschäftigt sich mit der Reduzierung von Unsicherheiten durch die Optimierung von Referenzsensorpositionen. Das Bewertungskriterium für vordefinierte Sensorpositionen basiert auf einer theoretischen Abschätzung von Amplituden der Spektraldichtefunktion und einem dazugehörigen Finite-Elemente-Modell. Die Bestimmung der optimalen Konfiguration erfolgt durch eine Anwendung von Optimierungsmethoden basierend auf genetischen Algorithmen und Schwarmintelligenzen. Die Anwendbarkeit dieser Methoden wurde anhand einer numerischen Studie an einem einfach gelagerten Balken und einem real existierenden komplexen Versuchskörper nachgewiesen. Mit Hilfe einer experimentellen Untersuchung wird die Abschätzung der statistischen Eigenschaften der Antwortspektraldichtefunktionen an diesem Versuchskörper validiert. Im zweiten Schwerpunkt konzentrieren sich die Untersuchungen auf die Reduzierung von Unsicherheiten, hervorgerufen durch ungeeignete Kriterien zur Eigenschwingformzuordnung. Diese Zuordnung ist entscheidend für Modellkalibrierungen basierend auf Schwingungsversuchen. Das am häufigsten verwendete Kriterium zur Zuordnung ist das modal assurance criterion. In manchen Anwendungsfällen ist dieses Kriterium jedoch kein zuverlässiger Indikator. Das entwickelte alternative Kriterium, das energy-based modal assurance criterion, kombiniert das mathematische Merkmal der Orthogonalität mit den physikalischen Eigenschaften der untersuchten Struktur mit Hilfe von modalen Formänderungsarbeiten. Ein numerisches Beispiel und eine Sensitivitätsstudie mit experimentellen Daten zeigen die Vorteile des vorgeschlagenen energiebasierten Kriteriums im Vergleich zum traditionellen modal assurance criterion. Die Anwendung von Optimierungsstrategien auf stochastische Modellkalibrierungsverfahren wird im dritten Schwerpunkt analysiert. Dabei werden Verschiedenheitsmaße der Informationstheorie zur Definition von Zielfunktionen herangezogen. 
Dieser Ansatz stellt eine Alternative zu herkömmlichen Verfahren dar, welche auf gradientenbasierten Sensitivitätsmatrizen zwischen Eingangs- und Ausgangsgrößen beruhen. Deren erfolgreicher Einsatz ist abhängig von den Anfangswerten der Eingangsgrößen, wobei die vorgeschlagenen Optimierungsstrategien weniger störanfällig sind. Der Bhattacharyya-Abstand und die Kullback-Leibler-Divergenz als Zielfunktion, kombiniert mit stochastischen Optimierungsverfahren, erwiesen sich als geeignet. Bei vergleichbarem Rechenaufwand konnten ähnliche Genauigkeiten wie bei den Modellkalibrierungsverfahren, die auf Sensitivitätsmatrizen basieren, erzielt werden. Die Anwendung von Modellkalibrierungsverfahren zur Verbesserung der Eignung eines numerischen Modells für einen bestimmten Zweck ist mit einem Mehraufwand verbunden. Die präsentierten innovativen Verfahren tragen zu einer Reduzierung und Quantifizierung von Unsicherheiten innerhalb eines Modellkalibrierungsprozesses basierend auf Schwingungsversuchen bei. Mit dem zusätzlich generierten Nutzen kann der Mehraufwand, der für eine Modellkalibrierung notwendig ist, nachvollziehbar begründet werden. 
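Both dissimilarity measures named in the abstracts above (Bhattacharyya distance, Kullback-Leibler divergence) have closed forms when the compared quantities are modeled as univariate Gaussians; the following is a minimal illustrative sketch, not the thesis's implementation:

```python
import math

def kl_divergence_normal(mu1, s1, mu2, s2):
    """Kullback-Leibler divergence KL(N(mu1, s1^2) || N(mu2, s2^2))."""
    return math.log(s2 / s1) + (s1**2 + (mu1 - mu2)**2) / (2.0 * s2**2) - 0.5

def bhattacharyya_normal(mu1, s1, mu2, s2):
    """Bhattacharyya distance between two univariate normal distributions."""
    v1, v2 = s1**2, s2**2
    return (0.25 * (mu1 - mu2)**2 / (v1 + v2)
            + 0.5 * math.log((v1 + v2) / (2.0 * s1 * s2)))

# Identical distributions have zero dissimilarity under both measures;
# the values grow as mean or spread of the distributions drift apart.
print(kl_divergence_normal(1.0, 0.2, 1.0, 0.2))   # ~0.0
print(bhattacharyya_normal(1.0, 0.2, 1.05, 0.25))
```

Used as objective functions, such measures compare the distribution of predicted outputs with the experimentally observed one, which is what makes them suitable for stochastic model updating of nominally identical structures.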
T2 - Modellkalibrierung basierend auf Schwingungsversuchen: Reduzierung und Quantifizierung von Unsicherheiten T3 - ISM-Bericht // Institut für Strukturmechanik, Bauhaus-Universität Weimar - 2011,1 KW - Dynamik KW - Optimierung KW - Modellkalibrierung KW - Modezuordnung KW - optimale Sensorpositionierung KW - model updating KW - mode pairing KW - optimal sensor positions KW - dissimilarity measures KW - optimization Y1 - 2011 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20110926-15553 ER - TY - THES A1 - Nickerson, Seth T1 - Thermo-Mechanical Behavior of Honeycomb, Porous, Microcracked Ceramics BT - Characterization and analysis of thermally induced stresses with specific consideration of synthetic, porous cordierite honeycomb substrates N2 - The underlying goal of this work is to reduce the uncertainty related to thermally induced stress prediction. This is accomplished by considering the use of non-linear material behavior, notably path-dependent thermal hysteresis behavior in the elastic properties. The primary novel factors of this work center on two aspects: (1) broad material characterization and mechanistic material understanding, giving insight into why this class of material behaves in characteristic manners; and (2) the development and implementation of a thermal hysteresis material model and its use to determine the impact on overall macroscopic stress predictions. Results highlight microcracking evolution and behavior as the dominant mechanism behind the material property complexity in this class of materials. Additionally, it was found that for the cases studied, thermal hysteresis behavior impacts relevant peak stress predictions of a heavy-duty diesel particulate filter undergoing a drop-to-idle regeneration by less than ~15% for all conditions tested. It is also found that path-independent heating curves may be utilized under a linear solution assumption to simplify analysis. 
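To make the role of path-dependent elastic properties concrete: for a fully constrained elastic member, the thermally induced stress is sigma = E * alpha * delta_T, so any hysteresis in the modulus E directly shifts the predicted stress. A minimal sketch with assumed, purely illustrative cordierite-like values (not data from the thesis):

```python
def constrained_thermal_stress(E, alpha, delta_T):
    """Thermal stress in a fully constrained elastic bar: sigma = E*alpha*dT.

    If E is path dependent (thermal hysteresis from microcrack opening and
    closing), the same temperature change yields different stresses on
    heating and cooling branches.
    """
    return E * alpha * delta_T

# Illustrative (assumed) values: E in GPa, alpha in 1/K, delta_T in K.
sigma_heating = constrained_thermal_stress(E=10.0, alpha=1.5e-6, delta_T=400.0)
sigma_cooling = constrained_thermal_stress(E=12.0, alpha=1.5e-6, delta_T=400.0)
print(sigma_heating, sigma_cooling)  # GPa; the stiffer branch carries more stress
```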
This work brings forth a newly conceived concept of a 3-state, 4-path, thermally induced microcrack evolution process; demonstrates experimental behavior that is consistent with the proposed mechanisms; develops a mathematical framework that describes the process; and quantifies the impact in a real-world application space. T3 - ISM-Bericht // Institut für Strukturmechanik, Bauhaus-Universität Weimar - 2019,4 KW - Keramik KW - ceramics Y1 - 2019 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20190911-39753 ER - TY - THES A1 - Mai, Luu T1 - Structural Control Systems in High-speed Railway Bridges N2 - Structural vibration control of high-speed railway bridges using tuned mass dampers, semi-active tuned mass dampers, fluid viscous dampers and magnetorheological dampers to reduce resonant structural vibrations is studied. The main issues addressed in this work include modeling of the dynamic interaction of the structures, optimization of the parameters of the dampers and comparison of their efficiency. A new approach to optimizing multiple tuned mass damper systems on an uncertain model is proposed based on the H-infinity optimization criteria and the DK iteration procedure with norm-bounded uncertainties in the frequency domain. The parameters of the tuned mass dampers are optimized directly and simultaneously on the different modes contributing significantly to the multi-resonant peaks in order to explore the different possible combinations of parameters. The effectiveness of the present method is also evaluated through comparison with a previous method. In the case of semi-active tuned mass dampers, an optimization algorithm is derived to control the magnetorheological damper in these semi-active damping systems. The proposed algorithm can generate various combinations of control gains and state variables, which improves the ability of the MR dampers to track the desired control forces. 
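As a point of reference for the damper-parameter optimization described above, Den Hartog's classical closed-form tuning for a single tuned mass damper on an undamped primary structure under harmonic forcing reads f_opt = 1/(1+mu) and zeta_opt = sqrt(3*mu / (8*(1+mu)^3)). A sketch of this textbook baseline (not the H-infinity/DK procedure of the thesis):

```python
import math

def den_hartog_tmd(mass_ratio):
    """Den Hartog's optimal tuning of a single tuned mass damper.

    Assumes an undamped primary structure under harmonic excitation.
    mass_ratio: auxiliary mass / modal mass of the primary structure.
    Returns (frequency ratio omega_tmd/omega_structure, TMD damping ratio).
    """
    mu = mass_ratio
    freq_ratio = 1.0 / (1.0 + mu)
    damping_ratio = math.sqrt(3.0 * mu / (8.0 * (1.0 + mu) ** 3))
    return freq_ratio, damping_ratio

f_opt, zeta_opt = den_hartog_tmd(0.05)   # 5% auxiliary mass
print(f_opt, zeta_opt)                   # ~0.95, ~0.13
```

Robust methods such as the H-infinity/DK iteration above exist precisely because this classical tuning degrades when the structural parameters are uncertain (detuning).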
An uncertain model to reduce detuning effects is also considered in this work. Next, in order to tune the parameters of fluid viscous dampers to the vicinity of their exact optimal values, analytical formulae which can include structural damping are developed based on the perturbation method. The proposed formulae can also be considered an improvement of previous analytical formulae, especially for bridge beams with large structural damping. Finally, a new combination of magnetorheological dampers and a double-beam system to improve the vibration performance of the primary structure is proposed. An algorithm to control the magnetorheological dampers in this system is developed by using standard linear matrix inequality techniques. Weight functions, as a loop shaping procedure, are also introduced in the feedback controllers to improve the tracking ability of the magnetorheological damping forces. To this end, the effectiveness of magnetorheological dampers controlled by the proposed scheme, along with the effects of the uncertain and time-delay parameters on the models, are evaluated through numerical simulations. Additionally, a comparison of the dampers based on their performance is also considered in this work. T3 - ISM-Bericht // Institut für Strukturmechanik, Bauhaus-Universität Weimar - 2014,3 KW - High-speed railway bridge KW - Control system KW - Passive damper KW - Semi-active damper KW - Railway bridges Y1 - 2014 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20141223-23391 SN - 1610-7381 ER - TY - THES A1 - Ghasemi, Hamid T1 - Stochastic optimization of fiber reinforced composites considering uncertainties N2 - Briefly, the two basic questions that this research is supposed to answer are: 1. How much fiber is needed, and how should fibers be distributed through a fiber reinforced composite (FRC) structure, in order to obtain the optimal and reliable structural response? 2. 
How do uncertainties influence the optimization results and the reliability of the structure? To answer the above questions, a two-stage sequential optimization algorithm for finding the optimal content of short fiber reinforcements and their distribution in the composite structure, considering uncertain design parameters, is presented. In the first stage, the optimal amount of short fibers in an FRC structure with uniformly distributed fibers is determined in the framework of a Reliability Based Design Optimization (RBDO) problem. The presented model considers material, structural and modeling uncertainties. In the second stage, the fiber distribution optimization (with the aim of further increasing structural reliability) is performed by defining a fiber distribution function through a Non-Uniform Rational B-Spline (NURBS) surface. The advantages of using the NURBS surface as a fiber distribution function include: use of the same data set for the optimization and the analysis; a high convergence rate due to the smoothness of the NURBS; mesh independency of the optimal layout; no need for any post-processing technique; and its non-heuristic nature. The output of stage 1 (the optimal fiber content for homogeneously distributed fibers) is considered as the input of stage 2. The output of stage 2 is the reliability index (β) of the structure with the optimal fiber content and distribution. The first-order reliability method (to approximate the limit state function) as well as different material models including the Rule of Mixtures, Mori-Tanaka, an energy-based approach and stochastic multi-scale models are implemented in different examples. The proposed combined model is able to capture the role of the uncertainties present in FRC structures through a computationally efficient algorithm using sequential, NURBS-based and sensitivity-based techniques. 
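Of the material models listed above, the Rule of Mixtures is the simplest; a minimal sketch of the Voigt (parallel) estimate of the longitudinal composite modulus, with assumed illustrative material values:

```python
def rule_of_mixtures(E_fiber, E_matrix, v_fiber):
    """Voigt (parallel) rule-of-mixtures estimate of the longitudinal
    modulus of a fiber reinforced composite.

    E_fiber, E_matrix: constituent moduli (same units);
    v_fiber: fiber volume fraction in [0, 1].
    """
    return v_fiber * E_fiber + (1.0 - v_fiber) * E_matrix

# Illustrative (assumed) values: glass fibers (~72 GPa) in epoxy (~3.5 GPa)
# at 30% fiber volume fraction.
print(rule_of_mixtures(72.0, 3.5, 0.3))  # ~24.05 GPa
```

In a stochastic setting such as the RBDO problem above, v_fiber and the constituent moduli would be treated as random variables rather than fixed inputs, so the composite modulus itself becomes a random quantity.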
The methodology is successfully implemented for interfacial shear stress optimization in sandwich beams and also for optimization of the internal cooling channels in a ceramic matrix composite. Finally, after some changes and modifications combining Isogeometric Analysis, level set and point-wise density mapping techniques, the computational framework is extended for topology optimization of piezoelectric / flexoelectric materials. T3 - ISM-Bericht // Institut für Strukturmechanik, Bauhaus-Universität Weimar - 2016,1 KW - Optimization KW - Fiber Reinforced Composite KW - Finite Element Method KW - Isogeometric Analysis KW - Flexoelectricity KW - Finite-Elemente-Methode KW - Optimierung Y1 - 2016 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20161117-27042 ER - TY - THES A1 - Liu, Bokai T1 - Stochastic multiscale modeling of polymeric nanocomposites using Data-driven techniques N2 - In recent years, lightweight materials such as polymeric nanocomposites (PNCs) have been studied and developed due to their excellent physical and chemical properties. Structures composed of these composite materials are widely used in aerospace engineering structures, automotive components, and electrical devices. The excellent and outstanding mechanical, thermal, and electrical properties of carbon nanotubes (CNTs) make them an ideal filler to strengthen the corresponding properties of polymer materials. The heat transfer behavior of composite materials has very promising engineering applications in many fields, especially in electronic devices and energy storage equipment. It is essential in high-energy-density systems, since electronic components need heat dissipation functionality; in other words, in electronic devices the generated heat should ideally be dissipated by light and small heat sinks. Polymeric composites consist of fillers embedded in a polymer matrix; the fillers significantly affect the overall (macroscopic) performance of the material. 
There are many common carbon-based fillers such as single-walled carbon nanotubes (SWCNT), multi-walled carbon nanotubes (MWCNT), carbon nanobuds (CNB), fullerene, and graphene. Additives inside the matrix have become a popular subject for researchers. Some extraordinary characteristics, such as high load-bearing performance, lightweight design, excellent chemical resistance, easy processing, and heat transfer, make the design of polymeric nanotube composites (PNCs) flexible. Due to the reinforcing effects of different fillers on composite materials, there is a high degree of design freedom, and the structure can be tailored to the needs of specific applications. As already stated, our research focus will be on SWCNT-enhanced PNCs. Since experiments are time-consuming, sometimes expensive, and cannot shed light on phenomena taking place, for instance, at the interfaces/interphases of composites, they are often complemented by theoretical and computational analysis. While most studies are based on deterministic approaches, there is a comparatively lower number of stochastic methods accounting for uncertainties in the input parameters. In deterministic models, the output of the model is fully determined by the parameter values and the initial conditions. However, uncertainties in the input parameters, such as aspect ratio, volume fraction, and thermal properties of fiber and matrix, need to be taken into account for reliable predictions. In this research, a stochastic multiscale method is provided to study the influence of numerous uncertain input parameters on the thermal conductivity of the composite. Therefore, a hierarchical multi-scale method based on computational homogenization is presented to predict the macroscopic thermal conductivity based on the fine-scale structure. In order to study the inner mechanism, we use the finite element method and employ surrogate models to conduct a Global Sensitivity Analysis (GSA). 
The GSA is performed in order to quantify the influence of the conductivity of the fiber, the conductivity of the matrix, the Kapitza resistance, the volume fraction and the aspect ratio on the macroscopic conductivity. Therefore, we compute first-order and total-effect sensitivity indices with different surrogate models. As stochastic multiscale models are computationally expensive, surrogate approaches are commonly exploited. With the emergence of high performance computing and artificial intelligence, machine learning has become a popular modeling tool for numerous applications. Machine learning (ML) is commonly used for regression, mapping inputs to outputs through rules learned by algorithms. It is particularly useful for nonlinear input-output relationships when sufficient data is available. ML has also been used in the design of new materials and in multiscale analysis. For instance, artificial neural networks and integrated learning seem ideally suited for such a task, as they can theoretically approximate any non-linear relationship through the connection of neurons. Mapping relationships are employed to carry out data-driven simulations of inputs and outputs in stochastic modeling. This research aims to develop stochastic multi-scale computational models of PNCs in heat transfer. The multi-scale stochastic modeling with uncertainty analysis and machine learning methods consists of the following components: -Uncertainty Analysis. A surrogate-based global sensitivity analysis is coupled with a hierarchical multi-scale method employing computational homogenization. The effect of the conductivity of the fibers and the matrix, the Kapitza resistance, the volume fraction and the aspect ratio on the ’macroscopic’ conductivity of the composite is systematically studied. All selected surrogate models consistently yield the conclusion that the most influential input parameter is the aspect ratio, followed by the volume fraction. 
The Kapitza resistance has no significant effect on the thermal conductivity of the PNCs. The most accurate surrogate model in terms of the R2 value is the moving least squares (MLS) model. -Hybrid Machine Learning Algorithms. A combination of an artificial neural network (ANN) and particle swarm optimization (PSO) is applied to estimate the relationship between the variable input and output parameters. The ANN is used for modeling the composite, while PSO improves the prediction performance through an optimized global minimum search. The thermal conductivity of the fibers and the matrix, the Kapitza resistance, the volume fraction and the aspect ratio are selected as input parameters. The output is the macroscopic (homogenized) thermal conductivity of the composite. The results show that the PSO significantly improves the predictive ability of this hybrid intelligent algorithm, which outperforms traditional neural networks. -Stochastic Integrated Machine Learning. A stochastic integrated machine learning based multiscale approach for the prediction of the macroscopic thermal conductivity in PNCs is developed. Seven types of machine learning models are exploited in this research, namely Multivariate Adaptive Regression Splines (MARS), Support Vector Machine (SVM), Regression Tree (RT), Bagging Tree (Bag), Random Forest (RF), Gradient Boosting Machine (GBM) and Cubist. They are used as components of the stochastic modeling to construct the relationship between the uncertain input variables and the macroscopic thermal conductivity of PNCs. Particle Swarm Optimization (PSO) is used for hyper-parameter tuning to find the globally optimal values, leading to a significant reduction in the computational cost. The advantages and disadvantages of the various methods are also analyzed in terms of computing time and model complexity to finally give a recommendation for the applicability of the different models. 
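Particle swarm optimization, used above both for hybrid ANN training and for hyper-parameter tuning, can be sketched in one dimension as follows; a toy quadratic stands in for the network loss, and all parameter values are conventional defaults rather than the thesis's settings:

```python
import random

def pso_minimize(f, bounds, n_particles=20, n_iters=100,
                 w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm optimization for a 1D objective f on [lo, hi].

    w: inertia weight; c1, c2: cognitive and social acceleration factors.
    """
    rng = random.Random(seed)
    lo, hi = bounds
    x = [rng.uniform(lo, hi) for _ in range(n_particles)]  # positions
    v = [0.0] * n_particles                                # velocities
    pbest = x[:]                   # each particle's best position so far
    gbest = min(pbest, key=f)      # swarm-wide best position
    for _ in range(n_iters):
        for i in range(n_particles):
            v[i] = (w * v[i]
                    + c1 * rng.random() * (pbest[i] - x[i])
                    + c2 * rng.random() * (gbest - x[i]))
            x[i] = min(max(x[i] + v[i], lo), hi)   # clamp to the bounds
            if f(x[i]) < f(pbest[i]):
                pbest[i] = x[i]
        gbest = min(pbest, key=f)
    return gbest

# The swarm converges near the minimizer x = 2 of the toy objective.
x_opt = pso_minimize(lambda x: (x - 2.0) ** 2, (-10.0, 10.0))
```

In the hybrid scheme described above, the objective would instead be the ANN's validation error, and the particles would fly through the space of weights or hyper-parameters rather than a scalar interval.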
T3 - ISM-Bericht // Institut für Strukturmechanik, Bauhaus-Universität Weimar - 2022,3 KW - Polymere KW - Nanoverbundstruktur KW - multiscale KW - nanocomposite KW - stochastic KW - Data-driven Y1 - 2022 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20220503-46379 ER - TY - THES A1 - Chan, Chiu Ling T1 - Smooth representation of thin shells and volume structures for isogeometric analysis N2 - The purpose of this study is to develop self-contained methods for obtaining smooth meshes which are compatible with isogeometric analysis (IGA). The study contains three main parts. We start by developing a better understanding of shapes and splines through the study of an image-related problem. Then we proceed towards obtaining smooth volumetric meshes of given voxel-based images. Finally, we treat the smoothness issue on multi-patch domains with C1 coupling. The following are the highlights of each part. First, we present a B-spline convolution method for boundary representation of voxel-based images. We adopt a filtering technique to compute the B-spline coefficients and the gradients of the images effectively. We then build on the B-spline convolution to develop a non-rigid image registration method. The proposed method is in some sense “isoparametric”, as all the computation is done within the B-spline framework. In particular, updating the images by using B-spline composition promotes a smooth transformation map between the images. We show possible medical applications of our method by applying it to the registration of brain images. Secondly, we develop a self-contained volumetric parametrization method based on the B-spline boundary representation. We aim to convert given voxel-based data to a matching C1 representation with hierarchical cubic splines. 
The concept of the osculating circle is employed to enhance the geometric approximation; this is done with a single template and linear transformations (scaling, translations, and rotations) without the need to solve an optimization problem. Moreover, we use Laplacian smoothing and refinement techniques to avoid irregular meshes and to improve mesh quality. We show with several examples that the method is capable of handling complex 2D and 3D configurations. In particular, we parametrize the 3D Stanford bunny, which contains irregular shapes and voids. Finally, we propose a Bézier ordinates approach and a splines approach for C1 coupling. In the first approach, the new basis functions are defined in terms of the Bézier Bernstein polynomials. In the second approach, the new basis is defined as a linear combination of C0 basis functions. The methods are not limited to planar or bilinear mappings. They allow the modeling of solutions to fourth-order partial differential equations (PDEs) on complex geometric domains, provided that the given patches are G1 continuous. Both methods have their advantages. In particular, the Bézier approach offers more degrees of freedom, while the spline approach is more computationally efficient. In addition, we propose partial degree elevation to overcome the C1-locking issue caused by over-constraining of the solution space. We demonstrate the potential of the resulting C1 basis functions for applications in IGA which involve fourth-order PDEs, such as those appearing in Kirchhoff-Love shell models, the Cahn-Hilliard phase field application, and biharmonic problems. 
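The hierarchical cubic B-spline bases underlying the C1 constructions above are defined by the Cox-de Boor recursion; a minimal sketch of evaluating one basis function (uniform knot vector chosen for illustration):

```python
def bspline_basis(i, p, t, knots):
    """Cox-de Boor recursion: i-th B-spline basis function of degree p at t."""
    if p == 0:
        return 1.0 if knots[i] <= t < knots[i + 1] else 0.0
    left = right = 0.0
    if knots[i + p] != knots[i]:
        left = ((t - knots[i]) / (knots[i + p] - knots[i])
                * bspline_basis(i, p - 1, t, knots))
    if knots[i + p + 1] != knots[i + 1]:
        right = ((knots[i + p + 1] - t) / (knots[i + p + 1] - knots[i + 1])
                 * bspline_basis(i + 1, p - 1, t, knots))
    return left + right

# Cubic (p = 3) basis on an open uniform knot vector over [0, 4]:
# the 7 basis functions form a partition of unity inside the domain.
knots = [0, 0, 0, 0, 1, 2, 3, 4, 4, 4, 4]
total = sum(bspline_basis(i, 3, 1.5, knots) for i in range(7))
print(total)  # ~1.0
```

With simple (non-repeated) interior knots, cubic B-splines are C2 across knot lines within a patch; the thesis's contribution concerns retaining at least C1 continuity across patch boundaries, where this automatic smoothness is lost.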
T3 - ISM-Bericht // Institut für Strukturmechanik, Bauhaus-Universität Weimar - 2020,2 KW - Modellierung KW - Isogeometrische Analyse KW - NURBS KW - Geometric Modeling KW - Isogeometric Analysis KW - NURBS Y1 - 2020 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20200812-42083 ER - TY - THES A1 - Tan, Fengjie T1 - Shape Optimization Design of Arch Type Dams under Uncertainties N2 - Due to an increased need for hydro-electricity, water storage, and flood protection, it is assumed that a series of new dams will be built throughout the world. Compared with existing design methodologies for arch-type dams, model-based shape optimization can effectively reduce construction costs and leverage the properties of construction materials. To apply the means of shape optimization, suitable variables need to be chosen to formulate the objective function, which here is the volume of the arch dam. In order to increase the consistency with practical conditions, a great number of geometrical and behavioral constraints are included in the mathematical model. An optimization method, namely a genetic algorithm, is adopted, which allows a global search. Traditional optimization techniques are based on a deterministic approach, which means that the material properties and loading conditions are assumed to be fixed values. As a result, the real-world structures that are optimized by these approaches suffer from uncertainties that one needs to be aware of. Hence, in any optimization process for arch dams, it is necessary to find a methodology that is capable of considering the influences of uncertainties and generating a solution which is robust enough against them. The focus of this thesis is the formulation and the numerical method for the optimization of the arch dam under uncertainties. The two main classes of models, probabilistic and non-probabilistic, are introduced and discussed. 
Classic procedures of probabilistic approaches under uncertainties, such as RDO (robust design optimization) and RBDO (reliability-based design optimization), are in general computationally expensive and rely on estimates of the system’s response variance and failure probabilities. Instead, the robust optimization (RO) method, which is based on the non-probabilistic model, does not follow a full probabilistic approach but works with pre-defined confidence levels. This leads to a bi-level optimization program where the volume of the dam is optimized under the worst combination of the uncertain parameters. By this, robust and reliable designs are obtained, and the result is independent of any assumptions on stochastic properties of the random variables in the model. The optimization of an arch-type dam is realized here by a robust optimization method under load uncertainty, where hydraulic and thermal loads are considered. The load uncertainty is modeled as an ellipsoidal expression. Compared with any traditional deterministic optimization (DO) method, which only concerns the minimum objective value and offers a solution candidate close to limit states, the RO method provides a robust solution against uncertainties. All the above-mentioned methods are applied to the optimization of the arch dam and compared with the optimal design obtained by DO methods. The results are compared and analyzed to discuss the advantages and drawbacks of each method. In order to reduce the computational cost, a ranking strategy and an approximation model are further involved to do a preliminary screening. By means of these, the robust design can generate an improved arch dam structure which ensures both safety and serviceability during its lifetime. 
T3 - ISM-Bericht // Institut für Strukturmechanik, Bauhaus-Universität Weimar - 2019,2 KW - Wasserbau KW - Staudamm KW - dams Y1 - 2019 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20190819-39608 ER - TY - THES A1 - Ahmad, Sofyan T1 - Reference Surface-Based System Identification N2 - Environmental and operational variables and their impact on structural responses have been acknowledged as one of the most important challenges for the application of ambient-vibration-based damage identification in structures. Damage detection procedures may yield poor results if the impacts of the loading and environmental conditions of the structures are not considered. The reference-surface-based method proposed in this thesis addresses this problem. In the proposed method, meta-models are used to take significant effects of the environmental and operational variables into account. The use of approximation models allows the proposed method to handle multiple non-damage variable effects simultaneously, which seems to be very complex for other methods. The inputs of the meta-model are the multiple non-damage variables, while the output is a damage indicator. The reference-surface-based method diminishes the effect of the non-damage variables on the vibration-based damage detection results. Hence, the structural condition that is assessed by using ambient vibration data at any time is more reliable. Immediate reliable information regarding the structural condition is required to respond quickly to an event, that is, to take necessary actions concerning the future use or further investigation of the structures, for instance shortly after extreme events such as earthquakes. The critical part of the proposed damage detection method is the learning phase, where the meta-models are trained by using the input-output relation of observation data. 
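The meta-model idea can be illustrated in its simplest one-variable form, with a least-squares line standing in for the reference surface; all data values below are synthetic and assumed for illustration, not taken from the thesis:

```python
def fit_reference_line(xs, ys):
    """Least-squares line y = a + b*x serving as a one-variable 'reference
    surface': it predicts the undamaged response from an environmental
    variable, so that residuals isolate damage from environmental effects."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    b = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
    a = mean_y - b * mean_x
    return a, b

# Synthetic training data: natural frequency (Hz) of the undamaged
# structure drifts linearly with temperature (deg C).
temps = [-5.0, 0.0, 5.0, 10.0, 15.0, 20.0, 25.0]
freqs = [4.2 - 0.004 * t for t in temps]
a, b = fit_reference_line(temps, freqs)

def damage_indicator(temp, freq_measured):
    """Residual w.r.t. the reference prediction; a large residual signals
    damage rather than mere environmental variation."""
    return freq_measured - (a + b * temp)
```

In the thesis's setting the reference surface takes several operational and environmental variables at once, and the residual is built from indicators such as natural frequencies, wavelet energy, or stochastic subspace features.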
Significant problems that may be encountered during the learning phase are outlined, and some remedies to overcome them are suggested. The proposed damage identification method is applied to numerical and experimental models. In addition to natural frequencies, wavelet energy and stochastic subspace damage indicators are used. T3 - ISM-Bericht // Institut für Strukturmechanik, Bauhaus-Universität Weimar - 2013,3 KW - System Identification KW - Schadensdetektionsverfahren KW - Referenzfläche Y1 - 2013 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20140205-21132 ER - TY - THES A1 - Goswami, Somdatta T1 - Phase field modeling of fracture with isogeometric analysis and machine learning methods N2 - This thesis presents the advances and applications of phase field modeling in fracture analysis. In this approach, the sharp crack surface topology in a solid is approximated by a diffusive crack zone governed by a scalar auxiliary variable. The uniqueness of phase field modeling is that the crack paths are determined automatically as part of the solution and no interface tracking is required. The damage parameter varies continuously over the domain. However, this flexibility comes with associated difficulties: (1) a very fine spatial discretization is required to represent sharp local gradients correctly; (2) the fine discretization results in high computational cost; (3) higher-order derivatives must be computed for improved convergence rates; and (4) conventional numerical integration techniques suffer from the curse of dimensionality. As a consequence, the practical applicability of phase field models is severely limited. The research presented in this thesis addresses the difficulties of the conventional numerical integration techniques for phase field modeling in quasi-static brittle fracture analysis. The first method relies on polynomial splines over hierarchical T-meshes (PHT-splines) in the framework of isogeometric analysis (IGA). 
An adaptive h-refinement scheme is developed based on the variational energy formulation of phase field modeling. The fourth-order phase field model provides increased regularity in the exact solution of the phase field equation and improved convergence rates for numerical solutions on a coarser discretization, compared to the second-order model. However, second-order derivatives of the phase field are required in the fourth-order model. Hence, at least C1-continuous basis functions are essential, which is achieved using hierarchical cubic B-splines in IGA. PHT-splines enable the refinement to remain local at singularities and high gradients, greatly reducing the computational cost. Unfortunately, when modeling complex geometries, multiple parameter spaces (patches) are joined together to describe the physical domain, and there is typically a loss of continuity at the patch boundaries. This decrease of smoothness is dictated by the geometry description, where C0 parameterizations are normally used to deal with kinks and corners in the domain. Hence, the application of the fourth-order model is severely restricted. To overcome the high computational cost of the second-order model, we develop a dual-mesh adaptive h-refinement approach. This approach uses a coarser discretization for the elastic field and a finer discretization for the phase field. Independent refinement strategies are used for each field. The next contribution is based on physics-informed deep neural networks. The network is trained by minimizing the variational energy of the system described by general non-linear partial differential equations while respecting any given law of physics, hence the name physics-informed neural network (PINN). The developed approach needs only a set of points to define the geometry, in contrast to conventional mesh-based discretization techniques. 
The concept of 'transfer learning' is integrated with the developed PINN approach to improve the computational efficiency of the network at each displacement step. This approach allows numerically stable crack growth even with larger displacement steps. An adaptive h-refinement scheme based on the generation of more quadrature points in the damage zone is developed in this framework. For all the developed methods, displacement-controlled loading is considered. The accuracy and the efficiency of both methods are studied numerically, showing that the developed methods are powerful and computationally efficient tools for accurately predicting fracture. T3 - ISM-Bericht // Institut für Strukturmechanik, Bauhaus-Universität Weimar - 2021,1 KW - Phasenfeldmodell KW - Neuronales Netz KW - Sprödbruch KW - Isogeometric Analysis KW - Physics informed neural network KW - phase field KW - deep neural network KW - brittle fracture Y1 - 2021 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20210304-43841 ER - TY - THES A1 - Jaouadi, Zouhour T1 - Pareto and Reliability-Oriented Aeroelastic Shape Optimization of Bridge Decks N2 - Due to the development of new technologies and materials, optimized bridge design has recently gained more attention. The aim is to reduce the material in the bridge components and the CO2 emissions from the cement manufacturing process. Thus, most long-span bridges are designed with high flexibility, low structural damping, and longer, more slender spans. Such designs lead, however, to aeroelastic challenges. Moreover, the consideration of both the structural and aeroelastic behavior of bridges leads to contradictory solutions, as the structural constraints favor deep deck prototypes, which provide high inertia-to-material-volume ratios. On the other hand, considering solely the aerodynamic requirements, slender airfoil-shaped bridge box girders are recommended, since they prevent vortex shedding and exhibit minimum drag. 
Within this framework, this study provides approaches to find optimal bridge deck cross-sections while considering aerodynamic effects. Shape optimization of the deck cross-section is usually formulated to minimize the amount of material by finding adequate parameters, such as the depth, the height, and the thickness, while ensuring the overall stability of the structure through the application of constraints. Codes and studies have been implemented to analyze wind phenomena and the structural responses of bridge deck cross-sections, where simplifications have been adopted due to the complexity and uniqueness of such components, as well as the difficulty of obtaining a final model of the aerodynamic behavior. In this thesis, two main perspectives have been studied. The first is fully deterministic and presents a novel framework for generating optimal aerodynamic shapes for streamlined and trapezoidal cross-sections based on a meta-modeling approach. Both single- and multi-objective optimizations were carried out, and a Pareto front was generated. The performance of the optimal designs is checked afterwards. In the second part, a new strategy based on Reliability-Based Design Optimization (RBDO) to mitigate the vortex-induced vibration (VIV) of the Trans-Tokyo Bay bridge is proposed. Small changes in the leading and trailing edges are introduced, and uncertainties are considered in the structural system. Probabilistic constraints based on polynomial regression are evaluated, and the problem is solved by applying the Reliability Index Approach (RIA) and the Performance Measure Approach (PMA). The results obtained in the first part showed that the aspect ratio has a significant effect on the aerodynamic behavior: deeper cross-sections have lower resistance against flutter and should be avoided. 
In the second part, the adopted RBDO approach succeeded in mitigating the VIV, and it is shown that designs with a narrow or prolonged bottom-base length and featuring an abrupt surface change at the leading and trailing edges can lead to high vertical vibration amplitudes. It is expected that this research will help engineers select an adequate deck cross-section layout and encourage researchers to apply optimization concepts in this field and to develop the presented approaches in further studies. T3 - ISM-Bericht // Institut für Strukturmechanik, Bauhaus-Universität Weimar - 2022,10 KW - Gestaltoptimierung KW - Vibration KW - Deck cross-sections KW - Reliability-based design optimization KW - Shape optimization KW - Pareto Front KW - Vortex-induced vibration Y1 - 2023 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20230303-49352 ER -