TY - JOUR
A1 - Ouaer, Hocine
A1 - Hosseini, Amir Hossein
A1 - Amar, Menad Nait
A1 - Ben Seghier, Mohamed El Amine
A1 - Ghriga, Mohammed Abdelfetah
A1 - Nabipour, Narjes
A1 - Andersen, Pål Østebø
A1 - Mosavi, Amir
A1 - Shamshirband, Shahaboddin
T1 - Rigorous Connectionist Models to Predict Carbon Dioxide Solubility in Various Ionic Liquids
JF - Applied Sciences
N2 - Estimating the solubility of carbon dioxide in ionic liquids, using reliable models, is of paramount importance from both environmental and economic points of view. In this regard, the current research aims at evaluating the performance of two data-driven techniques, namely multilayer perceptron (MLP) and gene expression programming (GEP), for predicting the solubility of carbon dioxide (CO2) in ionic liquids (ILs) as a function of pressure, temperature, and four thermodynamic parameters of the ionic liquid. To develop the above techniques, 744 experimental data points for 13 ILs, derived from the literature, were used (80% of the points for training and 20% for validation). Two backpropagation-based methods, namely the Levenberg–Marquardt algorithm (LMA) and Bayesian Regularization (BR), were applied to optimize the MLP algorithm. Various statistical and graphical assessments were applied to check the credibility of the developed techniques. The results were then compared with those calculated using the Peng–Robinson (PR) or Soave–Redlich–Kwong (SRK) equations of state (EoS). The highest coefficient of determination (R2 = 0.9965) and the lowest root mean square error (RMSE = 0.0116) were recorded for the MLP-LMA model on the full dataset (with a negligible difference to the MLP-BR model). The comparison of results from this model with the widely applied thermodynamic equation-of-state models revealed slightly better performance, although the EoS approaches also performed well, with R2 ranging from 0.984 to 0.996. Lastly, the newly established correlation based on the GEP model exhibited very satisfactory results, with overall values of R2 = 0.9896 and RMSE = 0.0201.
KW - Maschinelles Lernen
KW - Machine learning
KW - OA-Publikationsfonds2020
Y1 - 2019
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20200107-40558
UR - https://www.mdpi.com/2076-3417/10/1/304
VL - 2020
IS - Volume 10, Issue 1, 304
PB - MDPI
ER -
TY - JOUR
A1 - Shamshirband, Shahaboddin
A1 - Joloudari, Javad Hassannataj
A1 - GhasemiGol, Mohammad
A1 - Saadatfar, Hamid
A1 - Mosavi, Amir
A1 - Nabipour, Narjes
T1 - FCS-MBFLEACH: Designing an Energy-Aware Fault Detection System for Mobile Wireless Sensor Networks
JF - Mathematics
N2 - Wireless sensor networks (WSNs) include large-scale sensor nodes that are densely distributed over a geographical region that is completely randomized for monitoring, identifying, and analyzing physical events. The crucial challenge in wireless sensor networks is the sensor nodes' strong dependence on limited, non-rechargeable battery power to exchange information wirelessly, which makes the management and monitoring of these nodes in terms of abnormal changes very difficult. These anomalies arise from faults, including hardware and software faults as well as attacks by intruders, all of which affect the completeness of the data collected by wireless sensor networks. Hence, appropriate measures should be taken to detect faults in the network early, despite the limitations of the sensor nodes.
Machine learning methods offer solutions that can be used to detect sensor node faults in the network. The purpose of this study is to use several classification methods to compute the fault detection accuracy at different node densities under two scenarios in regions of interest, namely MB-FLEACH, one-class support vector machine (SVM), fuzzy one-class SVM, and a combination of the SVM and FCS-MBFLEACH methods. It should be noted that, in studies so far, no super cluster head (SCH) selection has been performed to detect node faults in the network. The simulation outcomes demonstrate that the FCS-MBFLEACH method has the best performance in terms of fault detection accuracy, false-positive rate (FPR), average remaining energy, and network lifetime compared to the other classification methods.
KW - Vernetzung
KW - wireless sensor networks
KW - machine learning
KW - Funktechnik
KW - Sensor
KW - Maschinelles Lernen
KW - Internet of Things
KW - OA-Publikationsfonds2019
Y1 - 2019
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20200107-40541
UR - https://www.mdpi.com/2227-7390/8/1/28
VL - 2020
IS - Volume 8, Issue 1, article 28
PB - MDPI
ER -
TY - THES
A1 - Hossain, Md Naim
T1 - Isogeometric analysis based on Geometry Independent Field approximaTion (GIFT) and Polynomial Splines over Hierarchical T-meshes
N2 - This thesis addresses an adaptive higher-order method based on a Geometry Independent Field approximaTion (GIFT) of polynomial/rational splines over hierarchical T-meshes (PHT/RHT-splines). In isogeometric analysis, the basis functions used for constructing geometric models in computer-aided design (CAD) are also employed to discretize the partial differential equations (PDEs) for numerical analysis. Non-uniform rational B-splines (NURBS) are the most commonly used basis functions in CAD. However, they may not be ideal for numerical analysis where local refinement is required. The alternative GIFT method deploys different splines for geometry and numerical analysis: NURBS are utilized for the geometry representation, while PHT/RHT-splines are used for the field solution. PHT-splines not only inherit the useful properties of B-splines and NURBS, but also possess the capabilities of local refinement and hierarchical structure. The smooth basis function properties of PHT-splines make them suitable for analysis purposes. While most problems considered in isogeometric analysis can be solved efficiently when the solution is smooth, many non-trivial problems have rough solutions, caused for example by the presence of re-entrant corners in the domain. For such problems, a tensor-product basis (as in the case of NURBS) is less suitable for resolving the singularities that appear, since refinement propagates throughout the computational domain. Hierarchical bases and local refinement (as in the case of PHT-splines) allow for a more efficient way to resolve these singularities by adding more degrees of freedom where they are necessary. In order to drive the adaptive refinement, an efficient recovery-based error estimator is proposed in this thesis. The estimator produces a recovery solution which is a more accurate approximation than the computed numerical solution. Several two- and three-dimensional numerical investigations with PHT-splines of higher order and continuity prove that the proposed method is capable of obtaining results with higher accuracy, better convergence, fewer degrees of freedom and less computational cost than NURBS for smooth solution problems.
The adaptive GIFT method utilizing PHT-splines with the recovery-based error estimator is used for solutions with discontinuities or singularities, where adaptive local refinement in particular domains of interest achieves higher accuracy with fewer degrees of freedom. The method is also shown to handle complicated multi-patch domains for two- and three-dimensional problems, outperforming uniform refinement in terms of degrees of freedom and computational cost.
T2 - Die isogeometrische Analysis basierend auf der geometrieunabhängigen Feldnäherung (GIFT) und polynomialen Splines über hierarchischen T-Netzen
KW - Finite-Elemente-Methode
KW - Isogeometric Analysis
KW - Geometry Independent Field Approximation
KW - Polynomial Splines over Hierarchical T-meshes
KW - Recovery Based Error Estimator
Y1 - 2019
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20191129-40376
ER -
TY - THES
A1 - Zabel, Volkmar
ED - Könke, Carsten
ED - Lahmer, Tom
ED - Rabczuk, Timon
T1 - Operational modal analysis - Theory and aspects of application in civil engineering
N2 - In recent years the demand for dynamic analyses of existing structures in civil engineering has increased remarkably. These analyses are mainly based on numerical models. Accordingly, the generated results depend on the quality of the models used. It is therefore very important that the models describe the considered systems such that the behaviour of the physical structure is realistically represented. As any model is based on assumptions, there is always a certain degree of uncertainty present in the results of a simulation based on the respective numerical model. To minimise these uncertainties in the prediction of the response of a structure to a certain loading, it has become common practice to update or calibrate the parameters of a numerical model based on observations of the structural behaviour of the respective existing system. The determination of the behaviour of an existing structure requires experimental investigations. If the numerical analyses concern the dynamic response of a structure, it is sensible to direct the experimental investigations towards the identification of the dynamic structural behaviour, which is determined by the modal parameters of the system. In consequence, several methods for the experimental identification of modal parameters have been developed since the 1980s. Due to various technical constraints in civil engineering, which limit the possibilities to excite a structure with economically reasonable effort, several methods have been developed that allow a modal identification from tests with ambient excitation. The approach of identifying modal parameters only from measurements of the structural response, without precise knowledge of the excitation, is known as output-only or operational modal analysis. Since operational modal analysis (OMA) can be considered as a link between numerical modelling and simulation on the one hand and the dynamic behaviour of an existing structure on the other hand, the respective algorithms connect both the concepts of structural dynamics and the mathematical tools applied in the processing of experimental data. Accordingly, the related theoretical topics are revised after an introduction into the topic. Several OMA methods have been developed over the last decades. The most established algorithms are presented here and their application is illustrated by means of both a small numerical and an experimental example.
Since experimentally obtained results are always subject to manifold influences, an appropriate post-processing of the results is necessary for a respective quality assessment. This quality assessment does not only require respective indicators but should also include the quantification of uncertainties. One special feature of modal testing is that it is common to instrument the structure in different sensor setups to improve the spatial resolution of identified mode shapes. The modal information identified from tests in several setups needs to be merged a posteriori. Algorithms to cope with this problem are also presented. Due to the fact that the amount of data generated in modal tests can become very large, manual processing can become extremely expensive or even impossible, for example in the case of long-term continuous structural monitoring. In these situations an automated analysis and post-processing are essential. Descriptions of respective methodologies are therefore also included in this work. Every structural system in civil engineering is unique, and so every identification of modal parameters has its specific challenges. Some aspects that can be faced in practical applications of operational modal analysis are presented and discussed in a chapter that is dedicated to specific problems an analyst may have to overcome. Case studies of systems with very close modes and with limited accessibility, as well as the application of different OMA methods, are described and discussed. In this context the focus is put on several types of uncertainty that may occur in the multiple stages of an operational modal analysis. In the literature only very specific uncertainties at certain stages of the analysis are addressed. Here, the topic of uncertainties has been considered in a broader sense and approaches for treating respective problems are suggested. Eventually, it is concluded that the methodologies of operational modal analysis and related technical solutions are already well engineered. However, as in any discipline that includes experiments, a certain degree of uncertainty always remains in the results. From these conclusions, a demand for further research and development has been derived, which should be directed towards the minimisation of these uncertainties and a respective optimisation of the steps and corresponding parameters included in an operational modal analysis.
T3 - ISM-Bericht // Institut für Strukturmechanik, Bauhaus-Universität Weimar - 2019,5
KW - Modalanalyse
KW - Strukturdynamik
KW - Operational modal analysis
KW - modal analysis
KW - structural dynamics
Y1 - 2019
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20191030-40061
ER -
TY - THES
A1 - Zafar, Usman
T1 - Probabilistic Reliability Analysis of Wind Turbines
N2 - Renewable energy use is on the rise, and these alternative sources of energy can help combat climate change. Around 80% of the world's electricity comes from coal and petroleum; however, renewables are the fastest growing source of energy in the world. Solar, wind, hydro, geothermal and biogas are the most common forms of renewable energy. Among them, wind energy is emerging as a reliable and large-scale source of power production. Recent research and the growing confidence in their performance have led to the construction of more and bigger wind turbines around the world. As wind turbines are getting bigger, concerns regarding their safety are also under discussion.
Wind turbines are expensive machinery to construct, and the enormous capital investment is one of the main reasons why many countries are unable to adopt wind energy. Generally, a reliable wind turbine will result in better performance and assist in minimizing the cost of operation. If a wind turbine fails, it is a loss of investment and can be harmful to the surrounding habitat. This thesis aims at estimating the reliability of an offshore wind turbine. A model of a jacket-type offshore wind turbine is prepared using the finite element software package ABAQUS and is compared with the structural failure criteria of the wind turbine tower. UQLab, a general uncertainty quantification framework developed at ETH Zürich, is used for the reliability analysis. Several probabilistic methods are included in the UQLab framework, among them Monte Carlo simulation, first-order reliability analysis, and adaptive Kriging Monte Carlo simulation. This reliability study is performed only for the structural failure of the wind turbine, but it can be extended to many other forms of failure, e.g. reliability of power production or reliability of different component failures. It is a useful tool that can be utilized to estimate the reliability of future wind turbines, which could result in safer and better-performing wind turbines.
KW - Windturbine
KW - Windenergie
KW - Wind Turbines
KW - Wind Energy
KW - Reliability Analysis
KW - Zuverlässigkeitsanalyse
Y1 - 2019
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20240507-39773
ER -
TY - THES
A1 - Nickerson, Seth
T1 - Thermo-Mechanical Behavior of Honeycomb, Porous, Microcracked Ceramics
BT - Characterization and analysis of thermally induced stresses with specific consideration of synthetic, porous cordierite honeycomb substrates
N2 - The underlying goal of this work is to reduce the uncertainty related to thermally induced stress prediction. This is accomplished by considering the use of non-linear material behavior, notably path-dependent thermal hysteresis behavior in the elastic properties. The primary novel contributions of this work center on two aspects: 1. broad material characterization and mechanistic material understanding, giving insight into why this class of material behaves in characteristic manners; 2. development and implementation of a thermal hysteresis material model and its use to determine the impact on overall macroscopic stress predictions. The results highlight microcracking evolution and behavior as the dominant mechanism behind the material property complexity in this class of materials. Additionally, it was found that for the cases studied, thermal hysteresis behavior impacts the relevant peak stress predictions of a heavy-duty diesel particulate filter undergoing a drop-to-idle regeneration by less than ~15% for all conditions tested. It is also found that path-independent heating curves may be utilized under a linear solution assumption to simplify the analysis. This work brings forth a newly conceived concept of a three-state, four-path, thermally induced microcrack evolution process; demonstrates experimental behavior that is consistent with the proposed mechanisms; develops a mathematical framework that describes the process; and quantifies the impact in a real-world application space.
T3 - ISM-Bericht // Institut für Strukturmechanik, Bauhaus-Universität Weimar - 2019,4
KW - Keramik
KW - ceramics
Y1 - 2019
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20190911-39753
ER -
TY - THES
A1 - Schemmann, Christoph
T1 - Optimierung von radialen Verdichterlaufrädern unter Berücksichtigung empirischer und analytischer Vorinformationen mittels eines mehrstufigen Sampling Verfahrens
T1 - Optimization of Centrifugal Compressor Impellers by a Multi-fidelity Sampling Method Taking Analytical and Empirical Information into Account
N2 - Turbomachinery plays an important role in many cases of energy generation or conversion. Therefore, turbomachinery is a promising starting point for optimization in order to increase the efficiency of energy use. In recent years, the use of automated optimization strategies in combination with numerical simulation has become increasingly popular in many fields of engineering. The complex interactions between fluid and solid mechanics encountered in turbomachines on the one hand and the high computational expense needed to calculate the performance on the other hand have, however, prevented a widespread use of these techniques in this field of engineering. The objective of this work was the development of a strategy for efficient metamodel-based optimization of centrifugal compressor impellers. In this context, the main focus is the reduction of the required numerical expense. The central idea followed in this research was the incorporation of preliminary information acquired from low-fidelity computation methods and empirical correlations into the sampling process to identify promising regions of the parameter space. This information was then used to concentrate the numerically expensive high-fidelity computations of the fluid dynamic and structure mechanic performance of the impeller in these regions while still maintaining a good coverage of the whole parameter space. The development of the optimization strategy can be divided into three main tasks. Firstly, the available preliminary information had to be researched and rated. This research identified loss models based on one-dimensional flow physics and empirical correlations as the best suited method to predict the aerodynamic performance. The loss models were calibrated using available performance data to obtain a high prediction quality. As no sufficiently exact models for the prediction of the mechanical loading of the impeller could be identified, a metamodel based on finite element computations was chosen for this estimation. The second task was the development of a sampling method which concentrates samples in regions of the parameter space where high quality designs are predicted by the preliminary information while maintaining a good overall coverage. As available methods like rejection sampling or Markov chain Monte Carlo methods did not meet the requirements in terms of sample distribution and input correlation, a new multi-fidelity sampling method called "Filtered Sampling" has been developed. The last task was the development of an automated computational workflow. This workflow encompasses geometry parametrization, geometry generation, grid generation and computation of the aerodynamic performance and the structure mechanic loading. Special emphasis was placed on the development of a geometry parametrization strategy based on fluid mechanic considerations to prevent the generation of physically inexpedient designs.
Finally, the optimization strategy, which utilizes the previously developed tools, was successfully employed to carry out three optimization tasks. The efficiency of the method was proven by the first and second test cases, in which an existing compressor design was optimized with the presented method. The results were comparable to those of optimizations which did not take preliminary information into account, while the required computational expense could be halved. In the third test case, the method was applied to generate a new impeller design. In contrast to the previous examples, this optimization featured larger variations of the impeller designs. Therefore, the applicability of the method to parameter spaces with significantly varying designs could be proven as well.
N2 - Turbomaschinen sind eine entscheidende Komponente in vielen Energiewandlungs- oder Energieerzeugungsprozessen und daher als vielversprechender Ansatzpunkt für eine Effizienzsteigerung der Energie- und Ressourcennutzung anzusehen. Im Laufe des letzten Jahrzehnts haben automatisierte Optimierungsmethoden in Verbindung mit numerischer Simulation zunehmend breitere Verwendung als Mittel zur Effizienzsteigerung in vielen Bereichen der Ingenieurwissenschaften gefunden. Allerdings standen die komplexen Interaktionen zwischen Strömungs- und Strukturmechanik sowie der hohe numerische Aufwand einem weitverbreiteten Einsatz dieser Methoden im Turbomaschinenbereich bisher entgegen. Das Ziel dieser Forschungsaktivität ist die Entwicklung einer effizienten Strategie zur metamodellbasierten Optimierung von radialen Verdichterlaufrädern. Dabei liegt der Schwerpunkt auf einer Reduktion des benötigten numerischen Aufwandes. Der in diesem Vorhaben gewählte Ansatz ist das Einbeziehen analytischer und empirischer Vorinformationen ("low-fidelity") in den Sampling-Prozess, um vielversprechende Bereiche des Parameterraumes zu identifizieren. Diese Informationen werden genutzt, um die aufwendigen numerischen Berechnungen ("high-fidelity") des strömungs- und strukturmechanischen Verhaltens der Laufräder in diesen Bereichen zu konzentrieren, während gleichzeitig eine ausreichende Abdeckung des gesamten Parameterraumes sichergestellt wird. Die Entwicklung der Optimierungsstrategie ist in drei zentrale Arbeitspakete aufgeteilt. In einem ersten Schritt werden die verfügbaren empirischen und analytischen Methoden gesichtet und bewertet. In dieser Recherche sind Verlustmodelle basierend auf eindimensionaler Strömungsmechanik und empirischen Korrelationen als bestgeeignete Methode zur Vorhersage des aerodynamischen Verhaltens der Verdichter identifiziert worden. Um eine hohe Vorhersagegüte sicherzustellen, sind diese Modelle anhand verfügbarer Leistungsdaten kalibriert worden. Da zur Vorhersage der mechanischen Belastung des Laufrades keine brauchbaren analytischen oder empirischen Modelle ermittelt werden konnten, ist hier ein Metamodell basierend auf Finite-Element-Berechnungen gewählt worden. Das zweite Arbeitspaket beinhaltet die Entwicklung der angepassten Samplingmethode, welche Samples in Bereichen des Parameterraumes konzentriert, die auf Basis der Vorinformationen als vielversprechend angesehen werden können. Gleichzeitig müssen eine gleichmäßige Abdeckung des gesamten Parameterraumes und ein niedriges Niveau an Eingangskorrelationen sichergestellt sein. Da etablierte Methoden wie Markov-Ketten-Monte-Carlo-Methoden oder die Verwerfungsmethode diese Voraussetzungen nicht erfüllen, ist ein neues, mehrstufiges Samplingverfahren ("Filtered Sampling") entwickelt worden.
Das letzte Arbeitspaket umfasst die Entwicklung eines automatisierten Simulations-Workflows. Dieser Workflow umfasst Geometrieparametrisierung, Geometrieerzeugung, Netzerzeugung sowie die Berechnung des aerodynamischen Betriebsverhaltens und der strukturmechanischen Belastung. Dabei liegt ein Schwerpunkt auf der Entwicklung eines Parametrisierungskonzeptes, welches auf strömungsmechanischen Zusammenhängen beruht, um so physikalisch nicht zielführende Parameterkombinationen zu vermeiden. Abschließend ist die auf den zuvor entwickelten Werkzeugen aufbauende Optimierungsstrategie erfolgreich eingesetzt worden, um drei Optimierungsfragestellungen zu bearbeiten. Im ersten und zweiten Testcase sind bestehende Verdichterlaufräder mit der vorgestellten Methode optimiert worden. Die erzielten Optimierungsergebnisse sind von ähnlicher Güte wie die solcher Optimierungen, die keine Vorinformationen berücksichtigen, allerdings wird nur die Hälfte an numerischem Aufwand benötigt. In einem dritten Testcase ist die Methode eingesetzt worden, um ein neues Laufraddesign zu erzeugen. Im Gegensatz zu den vorherigen Beispielen werden im Rahmen dieser Optimierung stark unterschiedliche Designs untersucht. Dadurch kann an diesem dritten Beispiel aufgezeigt werden, dass die Methode auch für Parameterräume mit stark variierenden Designs funktioniert.
T3 - ISM-Bericht // Institut für Strukturmechanik, Bauhaus-Universität Weimar - 2019,3
KW - Simulation
KW - Maschinenbau
KW - Optimierung
KW - Strömungsmechanik
KW - Strukturmechanik
Y1 - 2019
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20190910-39748
ER -
TY - THES
A1 - Tan, Fengjie
T1 - Shape Optimization Design of Arch Type Dams under Uncertainties
N2 - Due to an increased need for hydro-electricity, water storage, and flood protection, it is assumed that a series of new dams will be built throughout the world. Compared with existing design methodologies for arch-type dams, model-based shape optimization can effectively reduce construction costs and leverage the properties of construction materials. To apply the means of shape optimization, suitable variables need to be chosen to formulate the objective function, which here is the volume of the arch dam. In order to increase the consistency with practical conditions, a great number of geometrical and behavioral constraints are included in the mathematical model. An optimization method, namely the genetic algorithm, is adopted, which allows a global search. Traditional optimization techniques are based on a deterministic approach, which means that the material properties and loading conditions are assumed to be fixed values. As a result, the real-world structures that are optimized by these approaches suffer from uncertainties that one needs to be aware of. Hence, in any optimization process for arch dams, it is necessary to find a methodology that is capable of considering the influences of uncertainties and generating a solution which is robust enough against them. The focus of this thesis is the formulation and the numerical method for the optimization of the arch dam under uncertainties. The two main models, the probabilistic model and the non-probabilistic model, are introduced and discussed. Classic procedures of probabilistic approaches under uncertainties, such as RDO (robust design optimization) and RBDO (reliability-based design optimization), are in general computationally expensive and rely on estimates of the system's response variance and failure probabilities.
Instead, the robust optimization (RO) method, which is based on the non-probabilistic model, does not follow a full probabilistic approach but works with pre-defined confidence levels. This leads to a bi-level optimization program in which the volume of the dam is optimized under the worst combination of the uncertain parameters. In this way, robust and reliable designs are obtained, and the result is independent of any assumptions on the stochastic properties of the random variables in the model. The optimization of an arch-type dam is realized here by a robust optimization method under load uncertainty, where hydraulic and thermal loads are considered. The load uncertainty is modeled as an ellipsoidal expression. Compared with a traditional deterministic optimization (DO) method, which only concerns the minimum objective value and offers a solution candidate close to limit states, the RO method provides a robust solution against uncertainties. All the above-mentioned methods are applied to the optimization of the arch dam, and the outcomes are compared with the optimal design obtained by DO methods. The results are compared and analyzed to discuss the advantages and drawbacks of each method. In order to reduce the computational cost, a ranking strategy and an approximation model are further employed for a preliminary screening. By means of these, the robust design can generate an improved arch dam structure which ensures both safety and serviceability during its lifetime.
T3 - ISM-Bericht // Institut für Strukturmechanik, Bauhaus-Universität Weimar - 2019,2
KW - Wasserbau
KW - Staudamm
KW - dams
Y1 - 2019
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20190819-39608
ER -
TY - THES
A1 - Alalade, Muyiwa
T1 - An Enhanced Full Waveform Inversion Method for the Structural Analysis of Dams
N2 - Since the Industrial Revolution in the 1700s, the high emission of gaseous wastes into the atmosphere from the use of fossil fuels has caused a general increase in temperatures globally. To combat the environmental imbalance, there is an increasing demand for renewable energy sources. Dams play a major role in the generation of "green" energy. However, these structures require frequent and strict monitoring to ensure safe and efficient operation. To tackle the challenges faced in the application of conventional dam monitoring techniques, this work proposes the inverse analysis of numerical models to identify damaged regions in the dam. Using a dynamic coupled hydro-mechanical Extended Finite Element Method (XFEM) model and a global optimization strategy, damage (a crack) in the dam is identified. By employing seismic waves to probe the dam structure, more detailed information on the distribution of heterogeneous materials and damaged regions is obtained through the application of the Full Waveform Inversion (FWI) method. The FWI is based on a local optimization strategy and is thus highly dependent on the starting model. A variety of data acquisition setups are investigated, and an optimal setup is proposed. The effects of different starting models and of noise in the measured data on the damage identification are considered. Combining the starting-model independence of the global-optimization-based dynamic coupled hydro-mechanical XFEM method with the detailed output of the local-optimization-based FWI method, an enhanced Full Waveform Inversion is proposed for the structural analysis of dams.
T3 - ISM-Bericht // Institut für Strukturmechanik, Bauhaus-Universität Weimar - 2019,1
KW - Talsperre
KW - Staumauer
KW - Damage identification
KW - Inverse analysis
KW - Dams
KW - Full waveform inversion
KW - Wave propagation
Y1 - 2019
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20190813-39566
ER -
TY - INPR
A1 - Radmard Rahmani, Hamid
A1 - Könke, Carsten
T1 - Passive Control of Tall Buildings Using Distributed Multiple Tuned Mass Dampers
N2 - The vibration control of tall buildings during earthquake excitations is a challenging task due to their complex seismic behavior. This paper investigates the optimum placement and properties of the Tuned Mass Dampers (TMDs) in tall buildings, which are employed to control the vibrations during earthquakes. An algorithm was developed to spend a limited mass either in a single TMD or in multiple TMDs and to distribute them optimally over the height of the building. The Non-dominated Sorting Genetic Algorithm (NSGA-II) method was improved by adding multi-variant genetic operators and was utilized to simultaneously study the optimum design parameters and the optimum placement of the TMDs. The results showed that under earthquake excitations with noticeable amplitude in the higher modes, distributing TMDs over the height of the building is more effective in mitigating the vibrations than using a single TMD system. From the optimization, it was observed that the locations of the TMDs were related to the stories corresponding to the maximum modal displacements in the lower modes and to the stories corresponding to the maximum modal displacements in the modes which were highly activated by the earthquake excitations. It was also noted that the frequency content of the earthquake has a significant influence on the optimum location of the TMDs.
KW - Schwingungsdämpfer
KW - Hochbau
KW - tall buildings
KW - passive control
KW - genetic algorithm
KW - tuned mass dampers
Y1 - 2019
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20190311-38597
UR - https://www.researchgate.net/publication/330508976_Seismic_Control_of_Tall_Buildings_Using_Distributed_Multiple_Tuned_Mass_Dampers
ER -