TY - JOUR
A1 - Zhuang, Xiaoying
A1 - Huang, Runqiu
A1 - Liang, Chao
A1 - Rabczuk, Timon
T1 - A coupled thermo-hydro-mechanical model of jointed hard rock for compressed air energy storage
JF - Mathematical Problems in Engineering
N2 - Renewable energy resources such as wind and solar are intermittent, which causes instability when they are connected to the electricity utility grid. Compressed air energy storage (CAES) provides an economically and technically viable solution to this problem by utilizing subsurface rock caverns to store the electricity generated from renewable energy in the form of compressed air. Although CAES has been used for over three decades, it has been restricted to salt rock or aquifers for air-tightness reasons. In this paper, the technical feasibility of utilizing hard rock for CAES is investigated using coupled thermo-hydro-mechanical (THM) modelling of nonisothermal gas flow. Governing equations are derived from the rules of energy balance, mass balance, and static equilibrium. Cyclic volumetric mass source and heat source models are applied to simulate gas injection and production. The evaluation is carried out for intact rock and for rock with a discrete crack, respectively. In both cases, the heat and pressure losses using air mass control and supplementary air injection are compared.
KW - Energiespeicherung
KW - Druckluft
KW - Kaverne
KW - Modellierung
Y1 - 2014
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20170428-31726
ER -

TY - THES
A1 - Zhao, Jun-Hua
T1 - Multiscale modeling of nanodevices based on carbon nanotubes and polymers
T1 - Multiskalige Modellierung von auf Kohlenstoffnanoröhren und Polymeren basierenden Nanobauteilen
N2 - This thesis concerns the physical and mechanical interactions of carbon nanotubes and polymers studied by multiscale modeling.
CNTs have attracted considerable interest in view of their unique mechanical, electronic, thermal, optical and structural properties, which enable many potential applications. Carbon nanotubes exist in several structural forms, from individual single-walled carbon nanotubes (SWCNTs) and multi-walled carbon nanotubes (MWCNTs) to carbon nanotube bundles and networks. The mechanical properties of SWCNTs and MWCNTs have been extensively studied by continuum modeling and molecular dynamics (MD) simulations in the past decade, since these properties are important in CNT-based devices. CNT bundles and networks feature outstanding mechanical performance, hierarchical structures and network topologies, which makes them a potential energy-saving material. In the synthesis of nanocomposites, the formation of CNT bundles and networks remains a challenge, particularly in understanding how to measure and predict the properties of such large systems. Therefore, a mesoscale method such as a coarse-grained (CG) method should be developed to study the nanomechanical characterization of CNT bundle and network formation. In this thesis, the main contributions can be summarized as follows: (1) Explicit solutions for the cohesive energy between carbon nanotubes, graphene and substrates are obtained through continuum modeling of the van der Waals interaction between them. (2) The CG potentials of SWCNTs are established by a molecular mechanics model. (3) The binding energy between two parallel and crossing SWCNTs and MWCNTs is obtained by continuum modeling of the van der Waals interaction between them. Crystalline and amorphous polymers are increasingly used in modern industry as structural materials due to their important mechanical and physical properties.
For crystalline polyethylene (PE), despite its importance and the available MD simulations and continuum models, the link between molecular and continuum descriptions of its mechanical properties is still not well established. For amorphous polymers, the chain length and temperature effects on their elastic and elastic-plastic properties have been reported based on united-atom (UA) and CG MD simulations in our previous work. However, the effect of chain length and temperature on the failure behavior is not yet well understood. In particular, the failure behavior under shear has scarcely been reported in previous work. Therefore, understanding the molecular origins of macroscopic fracture behavior, such as the fracture energy, is a fundamental scientific challenge. In this thesis, the main contributions can be summarized as follows: (1) An analytical molecular mechanics model is developed to obtain the size-dependent elastic properties of crystalline PE. (2) We show that two molecular mechanics models, the stick-spiral and the beam models, predict considerably different mechanical properties of materials based on energy equivalence. The difference between the two models is independent of the material. (3) The dependence of the tensile and shear failure behavior on chain length and temperature in amorphous polymers is scrutinized using molecular dynamics simulations. Finally, the influence of the dispersion of polymer wrapped around two neighbouring SWNTs on their load transfer is investigated by molecular dynamics (MD) simulations, in which the effects of the SWNTs' positions, the polymer chain length and the temperature on the interaction force are systematically studied.
T3 - ISM-Bericht // Institut für Strukturmechanik, Bauhaus-Universität Weimar - 2014,1
KW - Mehrskalenmodell
KW - Kohlenstoff Nanoröhre
KW - Polymere
KW - Multiscale modeling
KW - Carbon nanotubes
KW - Polymers
Y1 - 2014
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20140130-21078
ER -

TY - JOUR
A1 - Zhang, Yongzheng
A1 - Ren, Huilong
T1 - Implicit implementation of the nonlocal operator method: an open source code
JF - Engineering with Computers
N2 - In this paper, we present an open-source code for the first-order and higher-order nonlocal operator method (NOM), including a detailed description of the implementation. The NOM is based on so-called support, dual-support, nonlocal operators, and an operator energy functional ensuring stability. The nonlocal operator is a generalization of the conventional differential operators. Combined with the method of weighted residuals and variational principles, the NOM establishes the residual and tangent stiffness matrix of the operator energy functional through simple matrix operations, without the need for shape functions as in other classical computational methods such as FEM. The NOM only requires the definition of the energy, drastically simplifying its implementation. The implementation in this paper is focused on linear elastic solids for the sake of conciseness, though the NOM can handle more complex nonlinear problems. The NOM is flexible and efficient for solving partial differential equations (PDEs), and it is straightforward for readers to use the NOM and extend it to solve other complicated physical phenomena described by one or a set of PDEs. Finally, we present some classical benchmark problems, including the classical cantilever beam and plate-with-a-hole problems, and we also extend the method to complicated problems including phase-field fracture modeling and gradient elasticity materials.
KW - Strukturmechanik
KW - Nonlocal operator method
KW - Operator energy functional
KW - Implicit
KW - Dual-support
KW - Variational principle
KW - Taylor series expansion
KW - Stiffness matrix
Y1 - 2022
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20220216-45930
UR - https://link.springer.com/article/10.1007/s00366-021-01537-x
VL - 2022
SP - 1
EP - 35
PB - Springer
CY - London
ER -

TY - JOUR
A1 - Zhang, Yongzheng
T1 - Nonlocal dynamic Kirchhoff plate formulation based on nonlocal operator method
JF - Engineering with Computers
N2 - In this study, we propose a nonlocal operator method (NOM) for the dynamic analysis of (thin) Kirchhoff plates. The nonlocal Hessian operator is derived based on a second-order Taylor series expansion. The NOM does not require any shape functions and associated derivatives as 'classical' approaches such as FEM do, drastically facilitating the implementation. Furthermore, the NOM is higher-order continuous, which is exploited for thin plate analysis that requires C1 continuity. The nonlocal dynamic governing formulation and operator energy functional for Kirchhoff plates are derived from a variational principle. The Verlet-velocity algorithm is used for the time discretization. After confirming the accuracy of the nonlocal Hessian operator, several numerical examples are simulated using the nonlocal dynamic Kirchhoff plate formulation.
KW - Angewandte Mathematik
KW - nonlocal operator method
KW - nonlocal Hessian operator
KW - operator energy functional
KW - dual-support
KW - variational principle
Y1 - 2022
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20220209-45849
UR - https://link.springer.com/article/10.1007/s00366-021-01587-1
VL - 2022
SP - 1
EP - 35
PB - Springer
CY - London
ER -

TY - THES
A1 - Zhang, Yongzheng
T1 - A Nonlocal Operator Method for Quasi-static and Dynamic Fracture Modeling
N2 - Material failure can be tackled by so-called nonlocal models, which introduce an intrinsic length scale into the formulation and, in the case of material failure, restore the well-posedness of the underlying boundary value problem or initial boundary value problem. Among nonlocal models, peridynamics (PD) has attracted a lot of attention as it allows a natural transition from continuum to discontinuum and thus allows modeling of discrete cracks without the need to describe and track the crack topology, which has been a major obstacle in traditional discrete crack approaches. This is achieved by replacing the divergence of the Cauchy stress tensor with an integral over so-called bond forces, which account for the interaction of particles. A quasi-continuum approach is then used to calibrate the material parameters of the bond forces, i.e., by equating the PD energy with the energy of a continuum. One major issue for the application of PD to general complex problems is that it is limited to fairly simple material behavior and purely mechanical problems based on explicit time integration. PD has been extended to other applications, but at the cost of losing its simplicity and ease in modeling material failure. Furthermore, conventional PD suffers from instability and hourglass modes that require stabilization. It also requires the use of constant horizon sizes, which drastically reduces its computational efficiency.
The latter issue was resolved by the so-called dual-horizon peridynamics (DH-PD) formulation and the introduction of the duality of horizons. Within the nonlocal operator method (NOM), the concept of nonlocality is further extended, and the NOM can be considered a generalization of DH-PD. Combined with the energy functionals of various physical models, the nonlocal forms based on the dual-support concept can be derived. In addition, the variation of the energy functional allows implicit formulations of the nonlocal theory. While traditional integral equations are formulated in an integral domain, the dual-support approaches are based on dual integral domains. One prominent feature of the NOM is its compatibility with variational and weighted residual methods. The NOM yields a direct numerical implementation based on the weighted residual method for many physical problems without the need for shape functions. Only the definition of the energy or the boundary value problem is needed, which drastically facilitates the implementation. The nonlocal operator plays a role equivalent to that of the derivatives of the shape functions in meshless methods and finite element methods (FEM). Based on the variational principle, the residual and the tangent stiffness matrix can be obtained with ease by a series of matrix multiplications. In addition, the NOM can be used to derive many nonlocal models in strong form. The principal contributions of this dissertation are the implementation and application of the NOM, as well as the development of approaches for dealing with fracture within the NOM, mostly dynamic fracture. The primary coverage and results of the dissertation are as follows: - The first/higher-order implicit NOM and explicit NOM, including a detailed description of the implementation, are presented. The NOM is based on so-called support, dual-support, nonlocal operators, and an operator energy functional ensuring stability.
The nonlocal operator is a generalization of the conventional differential operators. Combined with the method of weighted residuals and variational principles, the NOM establishes the residual and tangent stiffness matrix of the operator energy functional through simple matrix operations, without the need for shape functions as in other classical computational methods such as FEM. The NOM only requires the definition of the energy, drastically simplifying its implementation. For the sake of conciseness, the implementation in this chapter is focused on linear elastic solids only, though the NOM can handle more complex nonlinear problems. An explicit nonlocal operator method for the dynamic analysis of elastic solid problems is also presented. The explicit NOM avoids the calculation of the tangent stiffness matrix required in the implicit NOM model. The explicit scheme comprises the Verlet-velocity algorithm. The NOM is flexible and efficient for solving partial differential equations (PDEs), and it is straightforward for readers to use the NOM and extend it to solve other complicated physical phenomena described by one or a set of PDEs. Several numerical examples are presented to show the capabilities of this method. - A nonlocal operator method for the dynamic analysis of (thin) Kirchhoff plates is proposed. The nonlocal Hessian operator is derived from a second-order Taylor series expansion. The NOM is higher-order continuous, which is exploited for thin plate analysis that requires $C^1$ continuity. The nonlocal dynamic governing formulation and operator energy functional for Kirchhoff plates are derived from a variational principle. The Verlet-velocity algorithm is used for the time discretization. After confirming the accuracy of the nonlocal Hessian operator, several numerical examples are simulated by the nonlocal dynamic Kirchhoff plate formulation. - A nonlocal fracture model is developed and applied to the simulation of quasi-static and dynamic fracture using the NOM.
The phase field's nonlocal weak form and associated strong form are derived from a variational principle. The NOM requires only the definition of the energy. We present both a nonlocal implicit phase field model and a nonlocal explicit phase field model for fracture; the first approach is better suited for quasi-static fracture problems, while the key application of the latter is dynamic fracture. To demonstrate the performance of the underlying approach, several benchmark examples for quasi-static and dynamic fracture are solved.
T3 - ISM-Bericht // Institut für Strukturmechanik, Bauhaus-Universität Weimar - 2022,9
KW - Variationsprinzip
KW - Partial Differential Equations
KW - Taylor Series Expansion
KW - Peridynamics
KW - Variational principle
KW - Phase field method
KW - Peridynamik
KW - Phasenfeldmodell
KW - Partielle Differentialgleichung
KW - Nichtlokale Operatormethode
Y1 - 2022
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20221026-47321
ER -

TY - JOUR
A1 - Zhang, Chao
A1 - Hao, Xiao-Li
A1 - Wang, Cuixia
A1 - Wei, Ning
A1 - Rabczuk, Timon
T1 - Thermal conductivity of graphene nanoribbons under shear deformation: A molecular dynamics simulation
JF - Scientific Reports
N2 - Tensile strain and compressive strain can greatly affect the thermal conductivity of graphene nanoribbons (GNRs). However, the effect of shear strain on GNRs, which is also one of the main strain effects, has not yet been studied systematically. In this work, we employ reverse nonequilibrium molecular dynamics (RNEMD) for a systematic study of the thermal conductivity of GNRs (with a model size of 4 nm × 15 nm) under shear strain. Our studies show that the thermal conductivity of GNRs is not sensitive to the shear strain, and the thermal conductivity decreases only 12–16% before the pristine structure is broken.
Furthermore, the phonon frequencies and the changes in the microstructure of the GNRs, such as bond angles and bond lengths, are analyzed to explain the tendency of the thermal conductivity. The results show that the main influence of shear strain is on the in-plane phonon density of states (PDOS), whose G band (higher-frequency peaks) shifts to lower frequencies; thus the thermal conductivity is decreased. The unique thermal properties of GNRs under shear strain suggest their great potential for graphene nanodevices as well as for thermal management and thermoelectric applications.
KW - Wärmeleitfähigkeit
KW - Graphen
KW - Schubspannung
Y1 - 2017
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20170428-31718
ER -

TY - THES
A1 - Zhang, Chao
T1 - Crack Identification using Dynamic Extended Finite Element Method and Thermal Conductivity Engineering for Nanomaterials
N2 - The identification of flaws in structures is a critical element in the management of maintenance and quality assurance processes in engineering. Nondestructive testing (NDT) techniques based on a wide range of physical principles have been developed and are used in common practice for structural health monitoring. However, basic NDT techniques are usually limited in their ability to provide accurate information on the locations, dimensions and shapes of flaws. One alternative for extracting additional information from the results of NDT is to combine it with a computational model that provides a detailed analysis of the physical process involved and enables the accurate identification of the flaw parameters. The aim here is to develop strategies to uniquely identify cracks in two-dimensional (2D) structures under dynamic loading. A local NDT technique that combines the eXtended Finite Element Method (XFEM) with dynamic loading in order to identify cracks in structures quickly and accurately is developed in this dissertation.
The Newmark-β time integration method with Rayleigh damping is used for the time integration. We apply the Nelder-Mead (NM) and Quasi-Newton (QN) methods for identifying the crack tip in a plate. The inverse problem is solved iteratively, in which XFEM is used for solving the forward problem in each iteration. For a time-harmonic excitation with a single frequency and a short-duration signal measured along part of the external boundary, the crack is detected through the solution of an inverse time-dependent problem. Compared to static loads, we show that dynamic loads are more effective for crack detection problems. Moreover, we tested different dynamic loads and found that the NM method works more efficiently under the harmonic load than under the pounding load, while the QN method achieves almost the same results for both load types. A global strategy, Multilevel Coordinate Search (MCS) with XFEM (XFEM-MCS) under dynamic electric loading, is proposed in this dissertation to detect multiple cracks in 2D piezoelectric plates. The Newmark-β method is employed for the time integration, and in each iteration the forward problem is solved by XFEM for the various cracks. The objective functional is minimized using the global search algorithm MCS. The test problems show that the XFEM-MCS algorithm under dynamic electric loading can be effectively employed for the detection of multiple cracks in piezoelectric materials, and it proves to be robust in identifying defects in piezoelectric structures. Fiber-reinforced composites (FRCs) are extensively applied in practical engineering since they have high stiffness and strength. Experiments reveal a so-called interphase zone, i.e. the space between the outer interface of the fiber and the inner interface of the matrix. The interphase strength between the fiber and the matrix strongly affects the mechanical properties as a result of the large interface/volume ratio.
For the purpose of understanding the mechanical properties of FRCs with a functionally graded interphase (FGI), a closed-form expression for the interface strength between a fiber and a matrix is obtained in this dissertation using a continuum modeling approach based on the van der Waals (vdW) forces. Based on the interatomic potential, we develop a new modified nonlinear cohesive law, which is applied to study the interface delamination of FRCs with FGI under different loadings. The analytical solutions show that the delamination behavior strongly depends on the interphase thickness, the fiber radius, and the Young's moduli and Poisson's ratios of the fiber and the matrix. Thermal conductivity is the property of a material to conduct heat. With the development and deeper study of 2D materials, especially graphene and molybdenum disulfide (MoS2), the thermal conductivity of 2D materials has attracted wide attention. The thermal conductivity of graphene nanoribbons (GNRs) is found to decrease under tensile strain in classical molecular dynamics (MD) simulations. Hence, strain effects in graphene can play a key role in the continuous tunability and applicability of its thermal conductivity at the nanoscale, and the reduction of thermal conductivity is an obstacle for thermal management applications. Up to now, the thermal conductivity of graphene under shear deformation has not been investigated. From a practical point of view, good thermal management of GNRs has significant potential applications in future GNR-based thermal nanodevices, which can greatly improve the performance of nanosized devices by dissipating heat. Since graphene is a thin membrane structure, it is also important to understand its wrinkling behavior under shear deformation. MoS2 exists in the stable semiconducting 1H phase (1H-MoS2), while the metallic 1T phase (1T-MoS2) is unstable at ambient conditions.
Much attention has been focused on studying the nonlinear optical properties of 1H-MoS2. In a very recent study, the 1T-type monolayer crystals of TMDCs, MX2 (MoS2, WS2 ...), were reported to have an intrinsic in-plane negative Poisson's ratio. Fortunately, nearly at the same time, unprecedented long-term (>3 months) air stability of 1T-MoS2 was achieved by using the donor lithium hydride (LiH). Therefore, it is very important to study the thermal conductivity of 1T-MoS2. The thermal conductivity of graphene under shear strain is systematically studied in this dissertation by MD simulations. The results show that, in contrast to the dramatic decrease of the thermal conductivity of graphene under uniaxial tension, the thermal conductivity of graphene is not sensitive to shear strain, and the thermal conductivity decreases only 12-16%. A wrinkle evolves when the shear strain is around 5%-10%, but the thermal conductivity barely changes. The thermal conductivities of single-layer 1H-MoS2 (1H-SLMoS2) and single-layer 1T-MoS2 (1T-SLMoS2) with different sample sizes, temperatures and strain rates have been studied systematically in this dissertation. We find that the thermal conductivities of 1H-SLMoS2 and 1T-SLMoS2 in both the armchair and the zigzag directions increase with increasing sample length, while increasing the width of the sample has a minor effect on the thermal conductivities of these two structures. The thermal conductivity of 1H-SLMoS2 is smaller than that of 1T-SLMoS2 due to this size effect. Furthermore, the temperature results show that the thermal conductivities of both 1H-SLMoS2 and 1T-SLMoS2 decrease with increasing temperature. The thermal conductivities of 1H-SLMoS2 and 1T-SLMoS2 are nearly the same (difference <6%) in both chiral orientations at corresponding temperatures, especially in the armchair direction (difference <2.8%).
Moreover, we find that the strain effects on the thermal conductivities of 1H-SLMoS2 and 1T-SLMoS2 are different. More specifically, the thermal conductivity decreases with increasing tensile strain rate for 1T-SLMoS2, while it fluctuates with the growth of the strain for 1H-SLMoS2. Finally, we find that the thermal conductivity of the same-sized 1H-SLMoS2 is similar to that of the strained 1H-SLMoS2 structure.
T3 - ISM-Bericht // Institut für Strukturmechanik, Bauhaus-Universität Weimar - 2018,6
KW - crack
KW - Wärmeleitfähigkeit
KW - crack identification
KW - thermal conductivity
Y1 - 2018
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20190119-38478
ER -

TY - THES
A1 - Zafar, Usman
T1 - Probabilistic Reliability Analysis of Wind Turbines
N2 - Renewable energy use is on the rise, and these alternative energy resources can help combat climate change. Around 80% of the world's electricity comes from coal and petroleum; however, renewables are the fastest growing source of energy in the world. Solar, wind, hydro, geothermal and biogas are the most common forms of renewable energy. Among them, wind energy is emerging as a reliable and large-scale source of power production. Recent research and confidence in their performance have led to the construction of more and bigger wind turbines around the world. As wind turbines are getting bigger, concerns regarding their safety are also under discussion. Wind turbines are expensive machines to construct, and the enormous capital investment is one of the main reasons why many countries are unable to adopt wind energy. Generally, a reliable wind turbine will result in better performance and assist in minimizing the cost of operation. If a wind turbine fails, it is a loss of investment and can be harmful to the surrounding habitat. This thesis aims at estimating the reliability of an offshore wind turbine.
A model of a jacket-type offshore wind turbine is prepared using the finite element software package ABAQUS and is compared against the structural failure criteria of the wind turbine tower. UQLab, a general uncertainty quantification framework developed at ETH Zürich, is used for the reliability analysis. Several probabilistic methods are included in the UQLab framework, including Monte Carlo simulation, the First Order Reliability Method and Adaptive Kriging Monte Carlo simulation. This reliability study is performed only for the structural failure of the wind turbine, but it can be extended to many other forms of failure, e.g. the reliability of power production or the reliability of different component failures. It is a useful tool that can be utilized to estimate the reliability of future wind turbines, which could result in safer and better-performing wind turbines.
KW - Windturbine
KW - Windenergie
KW - Wind Turbines
KW - Wind Energy
KW - Reliability Analysis
KW - Zuverlässigkeitsanalyse
Y1 - 2019
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20240507-39773
ER -

TY - THES
A1 - Zacharias, Christin
T1 - Numerical Simulation Models for Thermoelastic Damping Effects
N2 - Finite element simulations of dynamically excited structures are mainly influenced by the mass, stiffness, and damping properties of the system, as well as by external loads. The prediction quality of dynamic simulations of vibration-sensitive components depends significantly on the use of appropriate damping models. Damping phenomena have a decisive influence on the vibration amplitude and the frequencies of the vibrating structure. However, developing realistic damping models is challenging due to the multiple sources of energy dissipation, such as material damping, different types of friction, or various interactions with the environment. This thesis focuses on thermoelastic damping, which is the main cause of material damping in homogeneous materials.
The effect is caused by temperature changes due to mechanical strains. In vibrating structures, temperature gradients arise in adjacent tension and compression areas. Depending on the vibration frequency, they result in heat flows, leading to increased entropy and the irreversible transformation of mechanical energy into thermal energy. The central objective of this thesis is the development of efficient simulation methods to incorporate thermoelastic damping into finite element analyses based on modal superposition. The thermoelastic loss factor is derived from the structure's mechanical mode shapes and eigenfrequencies. In subsequent analyses, performed in the time and frequency domains, it is applied as modal damping. Two approaches are developed to determine the thermoelastic loss in thin-walled plate structures as well as in three-dimensional solid structures. The realistic representation of the dissipation effects is verified by comparing the simulation results with experimentally determined data. To this end, an experimental setup is developed to measure material damping while excluding other sources of energy dissipation. The three-dimensional solid approach is based on the determination of the generated entropy, and therefore the generated heat, per vibration cycle, which is a measure of the thermoelastic loss in relation to the total strain energy. For thin plate structures, the amount of bending energy in a modal deformation is calculated and summarized in the so-called Modal Bending Factor (MBF). The highest amount of thermoelastic loss occurs in the state of pure bending. Therefore, the MBF enables a quantitative classification of the mode shapes with respect to their thermoelastic damping potential. The results of the developed simulations are in good agreement with the experimental results and are suitable for predicting thermoelastic loss factors. Both approaches are based on modal superposition, with the advantage of high computational efficiency.
Overall, the modeling of thermoelastic damping represents an important component of a comprehensive damping model, which is necessary to perform realistic simulations of vibration processes.
N2 - Die Finite-Elemente-Simulation von dynamisch angeregten Strukturen wird im Wesentlichen durch die Steifigkeits-, Massen- und Dämpfungseigenschaften des Systems sowie durch die äußere Belastung bestimmt. Die Vorhersagequalität von dynamischen Simulationen schwingungsanfälliger Bauteile hängt wesentlich von der Verwendung geeigneter Dämpfungsmodelle ab. Dämpfungsphänomene haben einen wesentlichen Einfluss auf die Schwingungsamplitude, die Frequenz und teilweise sogar die Existenz von Vibrationen. Allerdings ist die Entwicklung von realitätsnahen Dämpfungsmodellen oft schwierig, da eine Vielzahl von physikalischen Effekten zur Energiedissipation während eines Schwingungsvorgangs führt. Beispiele hierfür sind die Materialdämpfung, verschiedene Formen der Reibung sowie vielfältige Wechselwirkungen mit dem umgebenden Medium. Diese Dissertation befasst sich mit thermoelastischer Dämpfung, die in homogenen Materialien die dominante Ursache der Materialdämpfung darstellt. Der thermoelastische Effekt wird ausgelöst durch eine Temperaturänderung aufgrund mechanischer Spannungen. In der schwingenden Struktur entstehen während der Deformation Temperaturgradienten zwischen benachbarten Regionen unter Zug- und Druckbelastung. In Abhängigkeit von der Vibrationsfrequenz führen diese zu Wärmeströmen und irreversibler Umwandlung mechanischer in thermische Energie. Die Zielstellung dieser Arbeit besteht in der Entwicklung recheneffizienter Simulationsmethoden, um thermoelastische Dämpfung in zeitabhängige Finite-Elemente-Analysen, die auf modaler Superposition beruhen, zu integrieren. Der thermoelastische Verlustfaktor wird auf der Grundlage der mechanischen Eigenformen und -frequenzen bestimmt. In nachfolgenden Analysen im Zeit- und Frequenzbereich wird er als modaler Dämpfungsgrad verwendet.
Zwei Ansätze werden entwickelt, um den thermoelastischen Verlustfaktor in dünnwandigen Plattenstrukturen sowie in dreidimensionalen Volumenbauteilen zu simulieren. Die realitätsnahe Vorhersage der Energiedissipation wird durch die Verifizierung an experimentellen Daten bestätigt. Dafür wird ein Versuchsaufbau entwickelt, der eine Messung von Materialdämpfung unter Ausschluss anderer Dissipationsquellen ermöglicht. Für den Fall der Volumenbauteile wird ein Ansatz verwendet, der auf der Berechnung der Entropieänderung und damit der erzeugten Wärmeenergie während eines Schwingungszyklus beruht. Im Verhältnis zur Formänderungsenergie ist dies ein Maß für die thermoelastische Dämpfung. Für dünne Plattenstrukturen wird der Anteil an Biegeenergie in der Eigenform bestimmt und im sogenannten modalen Biegefaktor (MBF) zusammengefasst. Der maximale Grad an thermoelastischer Dämpfung kann im Zustand reiner Biegung auftreten, sodass der MBF eine quantitative Klassifikation der Eigenformen hinsichtlich ihres thermoelastischen Dämpfungspotentials zulässt. Die Ergebnisse der entwickelten Simulationsmethoden stimmen sehr gut mit den experimentellen Daten überein und sind geeignet, um thermoelastische Dämpfungsgrade vorherzusagen. Beide Ansätze basieren auf modaler Superposition und ermöglichen damit zeitabhängige Simulationen mit einer hohen Recheneffizienz. Insgesamt stellt die Modellierung der thermoelastischen Dämpfung einen Baustein in einem umfassenden Dämpfungsmodell dar, welches zur realitätsnahen Simulation von Schwingungsvorgängen notwendig ist.
T3 - ISM-Bericht // Institut für Strukturmechanik, Bauhaus-Universität Weimar - 2022,8 KW - Werkstoffdämpfung KW - Finite-Elemente-Methode KW - Strukturdynamik KW - Thermoelastic damping KW - modal damping KW - decay experiments KW - energy dissipation Y1 - 2022 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20221116-47352 ER - TY - THES A1 - Zabel, Volkmar ED - Könke, Carsten ED - Lahmer, Tom ED - Rabczuk, Timon T1 - Operational modal analysis - Theory and aspects of application in civil engineering N2 - In recent years the demand for dynamic analyses of existing structures in civil engineering has increased remarkably. These analyses are mainly based on numerical models. Accordingly, the generated results depend on the quality of the models used. Therefore it is very important that the models describe the considered systems such that the behaviour of the physical structure is realistically represented. As any model is based on assumptions, there is always a certain degree of uncertainty present in the results of a simulation based on the respective numerical model. To minimise these uncertainties in the prediction of the response of a structure to a certain loading, it has become common practice to update or calibrate the parameters of a numerical model based on observations of the structural behaviour of the respective existing system. The determination of the behaviour of an existing structure requires experimental investigations. If the numerical analyses concern the dynamic response of a structure, it is sensible to direct the experimental investigations towards the identification of the dynamic structural behaviour, which is determined by the modal parameters of the system. In consequence, several methods for the experimental identification of modal parameters have been developed since the 1980s. 
Due to various technical restraints in civil engineering, which limit the possibilities to excite a structure with economically reasonable effort, several methods have been developed that allow a modal identification from tests with ambient excitation. The approach of identifying modal parameters only from measurements of the structural response, without precise knowledge of the excitation, is known as output-only or operational modal analysis. Since operational modal analysis (OMA) can be considered as a link between numerical modelling and simulation on the one hand and the dynamic behaviour of an existing structure on the other hand, the respective algorithms connect both the concepts of structural dynamics and the mathematical tools applied in the processing of experimental data. Accordingly, the related theoretical topics are revised after an introduction into the topic. Several OMA methods have been developed over the last decades. The most established algorithms are presented here and their application is illustrated by means of both a small numerical and an experimental example. Since experimentally obtained results are always subject to manifold influences, an appropriate postprocessing of the results is necessary for a respective quality assessment. This quality assessment does not only require respective indicators but should also include the quantification of uncertainties. One special feature of modal testing is that it is common to instrument the structure in different sensor setups to improve the spatial resolution of identified mode shapes. The modal information identified from tests in several setups needs to be merged a posteriori. Algorithms to cope with this problem are also presented. Due to the fact that the amount of data generated in modal tests can become very large, manual processing can become extremely expensive or even impossible, for example in the case of long-term continuous structural monitoring. 
In these situations an automated analysis and postprocessing are essential. Descriptions of the respective methodologies are therefore also included in this work. Every structural system in civil engineering is unique, and so every identification of modal parameters has its specific challenges. Some aspects that can be faced in practical applications of operational modal analysis are presented and discussed in a chapter that is dedicated to specific problems that an analyst may have to overcome. Case studies of systems with very close modes and with limited accessibility, as well as the application of different OMA methods, are described and discussed. In this context the focus is put on several types of uncertainty that may occur in the multiple stages of an operational modal analysis. In the literature only very specific uncertainties at certain stages of the analysis are addressed. Here, the topic of uncertainties has been considered in a broader sense and approaches for treating the respective problems are suggested. Eventually, it is concluded that the methodologies of operational modal analysis and related technical solutions are already well-engineered. However, as in any discipline that includes experiments, a certain degree of uncertainty always remains in the results. From these conclusions, a demand for further research and development has been derived, directed towards the minimisation of these uncertainties and a respective optimisation of the steps and corresponding parameters included in an operational modal analysis. 
T3 - ISM-Bericht // Institut für Strukturmechanik, Bauhaus-Universität Weimar - 2019,5 KW - Modalanalyse KW - Strukturdynamik KW - Operational modal analysis KW - modal analysis KW - structural dynamics Y1 - 2019 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20191030-40061 ER - TY - THES A1 - Yousefi, Hassan T1 - Discontinuous propagating fronts: linear and nonlinear systems N2 - The aim of this study is to control the spurious oscillations that develop around discontinuous solutions of both linear and nonlinear wave equations, or hyperbolic partial differential equations (PDEs). The equations include both first-order and second-order (wave) hyperbolic systems. In these systems, even smooth initial conditions or smoothly varying source (load) terms can lead to discontinuous propagating solutions (fronts). For first-order hyperbolic PDEs, the concept of central high resolution schemes is integrated with multiresolution-based adaptation to properly capture both discontinuous propagating fronts and the effects of fine-scale responses on those of larger scales in a multiscale manner. This integration leads to using central high resolution schemes on non-uniform grids; however, such a simulation is unstable, as the central schemes were originally developed to work properly on uniform cells/grids. Hence, the main concern is the stable collaboration of central schemes and multiresolution-based cell adapters. Regarding central schemes, the considered approaches are: 1) second order central and central-upwind schemes; 2) third order central schemes; 3) third and fourth order central weighted non-oscillatory schemes (central-WENO or CWENO); 4) piecewise parabolic methods (PPMs) obtained with two different local stencils. For these methods, the corresponding (nonlinear) stability conditions are studied and modified as well. 
Based on these stability conditions, several limiters are modified or developed as follows: 1) several second-order limiters with the total variation diminishing (TVD) feature, 2) second-order uniformly high order accurate non-oscillatory (UNO) limiters, 3) two third-order nonlinear scaling limiters, 4) two new limiters for PPMs. Numerical results show that adaptive solvers lead to cost-effective computations (e.g., in some 1-D problems, the number of adapted grid points remains below 200 during the simulations, while in the uniform-grid case 2049 points are needed to obtain the same accuracy). Also, in some cases, it is confirmed that fine-scale responses have considerable effects on larger scales. In the numerical simulation of nonlinear first-order hyperbolic systems, the two main concerns are convergence and uniqueness. The former is important due to the development of spurious oscillations, numerical dispersion and numerical dissipation. Convergence of a numerical solution does not guarantee that it is the physical/real one (the uniqueness feature). Indeed, a nonlinear system can converge to several numerical results (all of which are mathematically valid). In this work, convergence and uniqueness are studied directly on non-uniform grids/cells by the concepts of the local numerical truncation error and the numerical entropy production, respectively. Both of these concepts have also been used for cell/grid adaptation, and their performance is compared with that of the multiresolution-based method. Several 1-D and 2-D numerical examples are examined to confirm the efficiency of the adaptive solver. The examples involve problems with convex and non-convex fluxes. In the latter case, due to the development of complex waves, properly capturing the physical solutions needs more attention. For this purpose, method adaptation (in parallel to the cell/grid adaptation) seems to be essential. 
This new type of adaptation is also performed in the framework of the multiresolution analysis. Regarding second-order hyperbolic PDEs (mechanical waves), the regularization concept is used to cure artificial (numerical) oscillation effects, especially for high-gradient or discontinuous solutions. There, oscillations are removed by the regularization concept acting as a post-processor. Simulations are performed directly on the second-order form of the wave equations. It should be mentioned that it is possible to rewrite second-order wave equations as a system of first-order equations and then simulate the new system with high resolution schemes; however, this approach increases the number of variables (especially for 3-D problems). The numerical discretization is performed by compact finite difference (FD) formulations with desired features, e.g., methods with spectral-like or optimized-error properties. These FD methods are developed to handle high frequency waves (such as waves near earthquake sources). The performance of several regularization approaches is studied, both theoretically and numerically; finally, a proper regularization approach controlling the Gibbs phenomenon is recommended. At the end, some numerical results are provided to confirm the efficiency of the numerical solvers enhanced by the regularization concept. In this part, shock-like responses due to local and abrupt changes of physical properties, as well as stress wave propagation in stochastic-like domains, are studied. 
KW - Partielle Differentialgleichung KW - Adaptives System KW - Wavelet KW - Tichonov-Regularisierung KW - Hyperbolic PDEs KW - Adaptive central high resolution schemes KW - Wavelet based adaptation KW - Tikhonov regularization Y1 - 2022 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20220922-47178 ER - TY - THES A1 - Winkel, Benjamin T1 - A three-dimensional model of skeletal muscle for physiological, pathological and experimental mechanical simulations T1 - Ein dreidimensionales Skelettmuskel-Modell für physiologische, pathologische und experimentelle mechanische Simulationen N2 - In recent decades, a multitude of concepts and models were developed to understand, assess and predict muscular mechanics in the context of physiological and pathological events. Most of these models are highly specialized and designed to selectively address fields in, e.g., medicine, sports science, forensics, product design or CGI; their data are often not transferable to other fields of application. A single universal model, which covers the details of biochemical and neural processes as well as the development of internal and external force and motion patterns and appearance, would not be practical with regard to the diversity of the questions to be investigated and the task of finding answers efficiently. With reasonable limitations, though, a generalized approach is feasible. The objective of the work at hand was to develop a model for muscle simulation which covers the phenomenological aspects and is thus universally applicable in domains where, up until now, specialized models were utilized. This includes investigations of active and passive motion, the structural interaction of muscles within the body and with external elements, for example in crash scenarios, but also research topics like the verification of in vivo experiments and parameter identification. 
For this purpose, elements for the simulation of incompressible deformations were studied, adapted and implemented into the finite element code SLang. Various anisotropic, visco-elastic muscle models were developed or enhanced. The applicability was demonstrated on the basis of several examples, and a general basis for the implementation of further material models was developed and elaborated. T3 - ISM-Bericht // Institut für Strukturmechanik, Bauhaus-Universität Weimar - 2020,3 KW - Biomechanik KW - Nichtlineare Finite-Elemente-Methode KW - Muskel KW - Brustkorb KW - Muscle model KW - FEM KW - Biomechanics KW - Incompressibility KW - Thorax Y1 - 2020 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20201211-43002 ER - TY - THES A1 - Will, Johannes T1 - Beitrag zur Standsicherheitsberechnung im geklüfteten Fels in der Kontinuums- und Diskontinuumsmechanik unter Verwendung impliziter und expliziter Berechnungsstrategien T1 - Structural safety analysis for jointed rock with continuum and discontinuum mechanics in implicit and explicit codes KW - Staumauer KW - Standsicherheit KW - Klüftung KW - Finite-Elemente-Methode KW - Diskrete-Elemente-Methode KW - Kontinuumsmechanik KW - Diskontinuumsmechanik KW - jointed rock KW - continuum mechanics KW - discontinuum mechanics Y1 - 1999 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20040310-613 ER - TY - THES A1 - Wang, Jiasheng T1 - Lebensdauerabschätzung von Bauteilen aus globularem Grauguss auf der Grundlage der lokalen gießprozessabhängigen Werkstoffzustände N2 - The aim of this work is to achieve a possible improvement in the quality of fatigue life prediction for cast iron materials with spheroidal graphite, taking into account the casting processes of different manufacturers. In a first step, specimens of GJS500 and GJS600 were cast by several casting suppliers, and fatigue test specimens were produced from them. 
In total, fatigue strength values of the individual cast specimens as well as of specimens taken from the component were determined for various casting manufacturers worldwide, either by direct fatigue tests or from a collection of operational strength tests. Thanks to the metallographic work and a correlation analysis, three essential parameters for determining the local endurance limit could be identified: 1. the static strength, 2. the ferrite and pearlite fractions of the microstructure, and 3. the number of graphite nodules per unit area. Based on these findings, a new strength-ratio diagram (the so-called Sd/Rm-SG diagram) was developed. Above all, this new methodology should make it possible to better predict the component endurance limit on the basis of local tensile strength values and microstructures that are either measured or predicted by a casting simulation. With the help of the tests as well as the casting simulation, it was possible to further develop different methods of fatigue life prediction that take the manufacturing processes into account. KW - Grauguss KW - Lebensdauerabschätzung KW - Werkstoffprüfung Y1 - 2022 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20220111-45542 ER - TY - THES A1 - Wang, Cuixia T1 - Nanomechanical Resonators Based on Quasi-two-dimensional Materials N2 - Advances in nanotechnology have led to the development of nano-electro-mechanical systems (NEMS) such as nanomechanical resonators with ultra-high resonant frequencies. Ultra-high-frequency resonators have recently received significant attention for wide-ranging applications such as molecular separation, molecular transportation, ultra-high sensitive sensing, high-frequency signal processing, and biological imaging. It is well known that at the micrometer length scale, first-principles techniques, the most accurate approach, pose serious limitations for comparisons with experimental studies. 
For such larger sizes, classical molecular dynamics (MD) simulations, which require interatomic potentials, are desirable. Additionally, a mesoscale method such as the coarse-grained (CG) method is another useful approach to support simulations of even larger system sizes. Furthermore, quasi-two-dimensional (Q2D) materials have attracted intensive research interest due to their many novel properties over the past decades. However, the energy dissipation mechanisms of nanomechanical resonators based on several Q2D materials are still unknown. In this work, the main issues addressed include the development of CG models for molybdenum disulphide (MoS2), the investigation of mechanism effects on black phosphorus (BP) nanoresonators and the application of graphene nanoresonators. The primary coverage and results of the dissertation are as follows: Method development. Firstly, a two-dimensional (2D) CG model for single layer MoS2 (SLMoS2) is analytically developed. The Stillinger-Weber (SW) potential for this 2D CG model is further parametrized, in which all SW geometrical parameters are determined analytically according to the equilibrium condition for each individual potential term, while the SW energy parameters are derived analytically based on the valence force field model. Next, the 2D CG model is further simplified to a one-dimensional (1D) CG model, which describes the 2D SLMoS2 structure using a 1D chain model. This 1D CG model is applied to investigate the relaxed configuration and the resonant oscillation of folded SLMoS2. Owing to the simplicity of the 1D CG model, the relaxed configuration of the folded SLMoS2 is determined analytically, and the resonant oscillation frequency is derived analytically. 
Considering the increasing interest in studying the properties of other 2D layered materials, and in particular those in the semiconducting transition metal dichalcogenide class like MoS2, the CG models proposed in the current work provide valuable simulation approaches. Mechanism understanding. The focus is on two energy dissipation mechanisms of BP nanoresonators, i.e. mechanical strain effects and defect effects (including vacancies and oxidation). Vacancy defects are an intrinsic damping factor for the quality (Q) factor, while mechanical strain and oxidation are extrinsic damping factors. Intrinsic dissipation (induced by thermal vibrations) in BP resonators (BPRs) is investigated first. Specifically, classical MD simulations are performed to examine the temperature dependence of the Q-factor of the single layer BPR (SLBPR) along the armchair and zigzag directions, where a two-step fitting procedure is used to extract the frequency and Q-factor from the kinetic energy time history. The Q-factors of BPRs are evaluated through comparison with those of graphene and MoS2 nanoresonators. Next, the effects of mechanical strain, vacancies and oxidation on BP nanoresonators are investigated in turn. Considering the increasing interest in studying the properties of BP, and in particular the lack of theoretical studies of BPRs, the results in the current work provide a useful reference. Application. A novel application for graphene nanoresonators, using them to self-assemble small nanostructures such as water chains, is proposed. All of the underlying physics enabling this phenomenon is elucidated. In particular, by drawing inspiration from macroscale self-assembly using the higher order resonant modes of Chladni plates, classical MD simulations are used to investigate the self-assembly of water molecules using graphene nanoresonators. An analytic formula for the critical resonant frequency, based on the interaction between water molecules and graphene, is provided. 
Furthermore, the properties of the water chains assembled by the graphene nanoresonators are studied. T3 - ISM-Bericht // Institut für Strukturmechanik, Bauhaus-Universität Weimar - 2018,3 KW - Nanomechanik KW - Resonator KW - Nanomechanical Resonators Y1 - 2018 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20180709-37609 ER - TY - JOUR A1 - Vu-Bac, N. A1 - Nguyen-Xuan, Hung A1 - Chen, Lei A1 - Lee, C.K. A1 - Zi, Goangseup A1 - Zhuang, Xiaoying A1 - Liu, G.R. A1 - Rabczuk, Timon T1 - A phantom-node method with edge-based strain smoothing for linear elastic fracture mechanics JF - Journal of Applied Mathematics N2 - This paper presents a novel numerical procedure based on the combination of an edge-based smoothed finite element method (ES-FEM) with a phantom-node method for 2D linear elastic fracture mechanics. In the standard phantom-node method, cracks are formulated by adding phantom nodes, and the cracked element is replaced by two new superimposed elements. This approach is quite simple to implement in existing explicit finite element programs. The shape functions associated with discontinuous elements are similar to those of the standard finite elements, which leads to certain simplifications when implementing the method in existing codes. The phantom-node method allows modeling discontinuities at an arbitrary location in the mesh. The ES-FEM model possesses a close-to-exact stiffness that is much softer than that of lower-order finite element methods (FEM). Taking advantage of both the ES-FEM and the phantom-node method, we introduce an edge-based strain smoothing technique for the phantom-node method. Numerical results show that the proposed method achieves high accuracy compared with the extended finite element method (XFEM) and other reference solutions. 
KW - Finite-Elemente-Methode KW - Steifigkeit KW - Bruchmechanik KW - Riss Y1 - 2013 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20170426-31676 ER - TY - THES A1 - Vu, Bac Nam T1 - Stochastic uncertainty quantification for multiscale modeling of polymeric nanocomposites N2 - Nanostructured materials are extensively applied in many fields of materials science for new industrial applications, particularly in the automotive and aerospace industries, due to their exceptional physical and mechanical properties. Experimental testing of nanomaterials is expensive, time-consuming, challenging and sometimes unfeasible. Therefore, computational simulations have been employed as an alternative method to predict macroscopic material properties. The behavior of polymeric nanocomposites (PNCs) is highly complex. The origins of macroscopic material properties reside in the properties and interactions taking place on finer scales. It is therefore essential to use a multiscale modeling strategy to properly account for all length and time scales associated with these material systems, which span many orders of magnitude. Numerous multiscale models of PNCs have been established; however, most of them connect only two scales. Only a few multiscale models for PNCs bridge four length scales (nano-, micro-, meso- and macro-scales). In addition, nanomaterials are stochastic in nature, and the prediction of macroscopic mechanical properties is influenced by many factors, such as fine-scale features. The mechanical properties predicted by traditional approaches significantly deviate from the values measured in experiments because the uncertainty of material features is neglected. This discrepancy indicates that the effective macroscopic properties of materials are highly sensitive to various sources of uncertainty, such as loading and boundary conditions and material characteristics, while very few stochastic multiscale models for PNCs have been developed. 
Therefore, it is essential to construct PNC models within the framework of stochastic modeling and to quantify the stochastic effect of the input parameters on the macroscopic mechanical properties of those materials. This study aims to develop computational models at four length scales (nano-, micro-, meso- and macro-scales) and hierarchical upscaling approaches bridging length scales from nano- to macro-scale. A framework for uncertainty quantification (UQ) applied to predict the mechanical properties of the PNCs in dependence on material features at different scales is studied. Sensitivity and uncertainty analyses are of great help in quantifying the effect of input parameters, considering both main and interaction effects, on the mechanical properties of the PNCs. To achieve this major goal, the following tasks are carried out: At the nano-scale, molecular dynamics (MD) simulations were used to investigate the deformation mechanism of glassy amorphous polyethylene (PE) in dependence on temperature and strain rate. Steered molecular dynamics (SMD) was also employed to investigate the interfacial characteristics of the PNCs. At the micro-scale, we developed an atomistic-based continuum model represented by a representative volume element (RVE), in which the SWNT’s properties and the SWNT/polymer interphase are modeled at the nano-scale, while the surrounding polymer matrix is modeled by solid elements. Then, a two-parameter model was employed at the meso-scale. A hierarchical multiscale approach has been developed to obtain the structure-property relations at one length scale and transfer the effect to the higher length scales. In particular, we homogenized the RVE into an equivalent fiber. The equivalent fiber was then employed in a micromechanical analysis (i.e. the Mori-Tanaka model) to predict the effective macroscopic properties of the PNC. Furthermore, an averaging homogenization process was also used to obtain the effective stiffness of the PNC at the meso-scale. 
Stochastic modeling and uncertainty quantification consist of the following ingredients: - Simple random sampling, Latin hypercube sampling, Sobol’ quasirandom sequences, and Iman and Conover’s method (inducing correlation in Latin hypercube sampling) are employed to generate independent and dependent sample data, respectively. - Surrogate models, such as polynomial regression, moving least squares (MLS), a hybrid method combining polynomial regression and MLS, Kriging regression, and penalized spline regression, are employed as approximations of a mechanical model. The advantage of the surrogate models is their high computational efficiency and robustness, as they can be constructed from a limited amount of available data. - Global sensitivity analysis (SA) methods, such as variance-based methods for models with independent and dependent input parameters, Fourier-based techniques for performing variance-based methods and partial derivatives, and elementary effects in the context of local SA, are used to quantify the effects of input parameters and their interactions on the mechanical properties of the PNCs. A bootstrap technique is used to assess the robustness of the global SA methods with respect to their performance. In addition, the probability distributions of the mechanical properties are determined using the probability plot method. The upper and lower bounds of the predicted Young’s modulus according to 95 % prediction intervals are provided. The above-mentioned methods address the behaviour of intact materials. Novel numerical methods such as a node-based smoothed extended finite element method (NS-XFEM) and an edge-based smoothed phantom node method (ES-Phantom node) were developed for fracture problems. These methods can be used to account for cracks at the macro-scale in future work. The predicted mechanical properties were validated and verified. They show good agreement with previous experimental and simulation results. 
KW - Polymere KW - nanocomposite KW - Nanoverbundstruktur KW - stochastic KW - multiscale Y1 - 2015 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20160322-25551 ER - TY - THES A1 - Vollmering, Max T1 - Damage Localization of Mechanical Structures by Subspace Identification and Krein Space Based H-infinity Estimation N2 - This dissertation is devoted to the theoretical development and experimental laboratory verification of a new damage localization method: the state projection estimation error (SP2E). This method is based on the subspace identification of mechanical structures, Krein space based H-infinity estimation and oblique projections. To explain the SP2E method, several theories are discussed and laboratory experiments have been conducted and analysed. A fundamental approach of structural dynamics is outlined first by explaining mechanical systems based on first principles. Following that, a fundamentally different approach, subspace identification, is comprehensively explained. While both theories, first-principles and subspace-identification-based descriptions of mechanical systems, may be seen as widespread methods, new and barely known techniques follow. Therefore, the indefinite quadratic estimation theory is explained. Based on a Popov function approach, this leads to the Krein space based H-infinity theory. Subsequently, a new method for damage identification, namely SP2E, is proposed. Here, the introduction of a difference process, its analysis by the average process power and the application of oblique projections are discussed in depth. Finally, the new method is verified in laboratory experiments. To this end, the identification of a laboratory structure at Leipzig University of Applied Sciences is elaborated. Structural alterations were then experimentally applied and afterwards localized by SP2E. In the end, four experimental sensitivity studies are shown and discussed. 
For each measurement series the structural alteration was increased, and it was successfully tracked by SP2E. The experimental results are plausible and in accordance with the developed theories. By repeating these experiments, the applicability of SP2E for damage localization is experimentally proven. T3 - ISM-Bericht // Institut für Strukturmechanik, Bauhaus-Universität Weimar - 2018,5 KW - Strukturmechanik KW - Schätztheorie Y1 - 2018 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20180730-37728 ER - TY - THES A1 - Valizadeh, Navid T1 - Developments in Isogeometric Analysis and Application to High-Order Phase-Field Models of Biomembranes N2 - Isogeometric analysis (IGA) is a numerical method for solving partial differential equations (PDEs), which was introduced with the aim of integrating finite element analysis with computer-aided design systems. The main idea of the method is to use the same spline basis functions which describe the geometry in CAD systems for the approximation of solution fields in the finite element method (FEM). Originally, NURBS, a standard technology employed in CAD systems, were adopted as basis functions in IGA, but several variants of IGA have used other technologies, such as T-splines, PHT-splines, and subdivision surfaces, as basis functions. In general, IGA offers two key advantages over classical FEM: (i) by describing the CAD geometry exactly using smooth, high-order spline functions, the mesh generation process is simplified and the interoperability between CAD and FEM is improved; (ii) IGA can be viewed as a high-order finite element method which offers basis functions with high inter-element continuity and can therefore provide a primal variational formulation of high-order PDEs in a straightforward fashion. 
The main goal of this thesis is to further advance isogeometric analysis by exploiting these major advantages, namely precise geometric modeling and the use of smooth high-order splines as basis functions, and to develop robust computational methods for problems with complex geometry and/or complex multi-physics. As the first contribution of this thesis, we leverage the precise geometric modeling of isogeometric analysis and propose a new method for its coupling with meshfree discretizations. We exploit the strengths of both methods by using IGA to provide a smooth, geometrically-exact surface discretization of the problem domain boundary, while the Reproducing Kernel Particle Method (RKPM) discretization is used to provide the volumetric discretization of the domain interior. The coupling strategy is based upon the higher-order consistency or reproducing conditions that are directly imposed in the physical domain. The resulting coupled method enjoys several favorable features: (i) it preserves the geometric exactness of IGA, (ii) it circumvents the need for a global volumetric parameterization of the problem domain, (iii) it achieves arbitrary-order approximation accuracy while preserving the higher-order smoothness of the discretization. Several numerical examples are solved to show the optimal convergence properties of the coupled IGA–RKPM formulation, and to demonstrate its effectiveness in constructing volumetric discretizations for complex-geometry objects. As for the next contribution, we exploit the use of smooth, high-order spline basis functions in IGA to solve high-order surface PDEs governing the morphological evolution of vesicles. These governing equations often consist of geometric PDEs, high-order PDEs on stationary or evolving surfaces, or a combination of them. We propose an isogeometric formulation for solving these PDEs. 
In the context of geometric PDEs, we consider phase-field approximations of mean curvature flow and Willmore flow problems and numerically study the convergence behavior of isogeometric analysis for these problems. As a model problem for high-order PDEs on stationary surfaces, we consider the Cahn–Hilliard equation on a sphere, where the surface is modeled using a phase-field approach. As for the high-order PDEs on evolving surfaces, a phase-field model of a deforming multi-component vesicle, which consists of two fourth-order nonlinear PDEs, is solved using isogeometric analysis in a primal variational framework. Through several numerical examples in 2D, 3D and axisymmetric 3D settings, we show the robustness of IGA for solving the considered phase-field models. Finally, we present a monolithic, implicit formulation based on isogeometric analysis and generalized-alpha time integration for simulating hydrodynamics of vesicles according to a phase-field model. Compared to earlier works, the number of equations of the phase-field model which need to be solved is reduced by leveraging the high continuity of NURBS functions, and the algorithm is extended to 3D settings. We use the residual-based variational multi-scale method (RBVMS) for solving the Navier–Stokes equations, while the remaining PDEs in the phase-field model are treated using a standard Galerkin-based IGA. We introduce the resistive immersed surface (RIS) method into the formulation, which can be employed for an implicit description of complex geometries using a diffuse-interface approach. The implementation highlights the robustness of the RBVMS method for the Navier–Stokes equations of incompressible flows with non-trivial localized forcing terms including bending and tension forces of the vesicle. The potential of the phase-field model and isogeometric analysis for accurate simulation of a variety of fluid-vesicle interaction problems in 2D and 3D is demonstrated. 
T3 - ISM-Bericht // Institut für Strukturmechanik, Bauhaus-Universität Weimar - 2022,1 KW - Phasenfeldmodell KW - Vesikel KW - Hydrodynamik KW - Multiphysics KW - Isogeometrische Analyse KW - Isogeometric Analysis KW - Vesicle dynamics KW - Phase-field modeling KW - Geometric Partial Differential Equations KW - Residual-based variational multiscale method Y1 - 2022 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20220114-45658 ER - TY - CHAP A1 - Unger, Jörg F. A1 - Könke, Carsten ED - Gürlebeck, Klaus ED - Könke, Carsten T1 - DISCRETE CRACK SIMULATION OF CONCRETE USING THE EXTENDED FINITE ELEMENT METHOD N2 - The extended finite element method (XFEM) offers an elegant tool to model material discontinuities and cracks within a regular mesh, so that the element edges do not necessarily coincide with the discontinuities. This allows the modeling of propagating cracks without the requirement to adapt the mesh incrementally. Using a regular mesh offers the advantage that simple refinement strategies based on the quadtree data structure can be used to refine the mesh in regions that require a high mesh density. An additional benefit of the XFEM is that the transmission of cohesive forces through a crack can be modeled in a straightforward way without introducing additional interface elements. Finally, different criteria for the determination of the crack propagation angle are investigated and applied to numerical tests of cracked concrete specimens, which are compared with experimental results. KW - Architektur KW - CAD KW - Computerunterstütztes Verfahren Y1 - 2006 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20170327-30303 UR - http://euklid.bauing.uni-weimar.de/ikm2006/index.php_lang=de&what=papers.html ER - TY - CHAP A1 - Unger, Jörg F. 
A1 - Könke, Carsten ED - Gürlebeck, Klaus ED - Könke, Carsten T1 - PARAMETER IDENTIFICATION OF MESOSCALE MODELS FROM MACROSCOPIC TESTS USING BAYESIAN NEURAL NETWORKS N2 - In this paper, a parameter identification procedure using Bayesian neural networks is proposed. Based on a training set of numerical simulations, where the material parameters are simulated in a predefined range using Latin Hypercube sampling, a Bayesian neural network, which has been extended to describe the noise of multiple outputs using a full covariance matrix, is trained to approximate the inverse relation from the experiment (displacements, forces, etc.) to the material parameters. The method offers not only the possibility to determine the parameters themselves, but also the accuracy of the estimate and the correlation between these parameters. As a result, a set of experiments can be designed to calibrate a numerical model. KW - Angewandte Informatik KW - Angewandte Mathematik KW - Architektur KW - Computerunterstütztes Verfahren KW - Computer Science Models in Engineering; Multiscale and Multiphysical Models; Scientific Computing Y1 - 2010 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20170314-28984 UR - http://euklid.bauing.uni-weimar.de/ikm2009/paper.html SN - 1611-4086 ER - TY - THES A1 - Unger, Jörg F. T1 - Neural networks in a multiscale approach for concrete N2 - From a macroscopic point of view, failure within concrete structures is characterized by the initiation and propagation of cracks. In the first part of the thesis, a methodology for macroscopic crack growth simulations for concrete structures using a cohesive discrete crack approach based on the extended finite element method is introduced. Particular attention is turned to the investigation of criteria for crack initiation and crack growth. A drawback of the macroscopic simulation is that the real physical phenomena leading to the nonlinear behavior are only modeled phenomenologically. 
For concrete, the nonlinear behavior is characterized by the initiation of microcracks which coalesce into macroscopic cracks. In order to obtain a higher resolution of these failure zones, a mesoscale model for concrete is developed that models particles, mortar matrix and the interfacial transition zone (ITZ) explicitly. The essential features are a representation of particles using a prescribed grading curve, a material formulation based on a cohesive approach for the ITZ and a combined model with damage and plasticity for the mortar matrix. Compared to numerical simulations, the response of real structures exhibits a stochastic scatter. This is due, for example, to the intrinsic heterogeneities of the structure. For mesoscale models, these intrinsic heterogeneities are simulated by using a random distribution of particles and by a simulation of spatially variable material parameters using random fields. There are two major problems related to numerical simulations on the mesoscale. First of all, the material parameters for the constitutive description of the materials are often difficult to measure directly. In order to estimate material parameters from macroscopic experiments, a parameter identification procedure based on Bayesian neural networks is developed which is universally applicable to any parameter identification problem in numerical simulations based on experimental results. This approach offers information about the most probable set of material parameters based on experimental data and information about the accuracy of the estimate. Consequently, this approach can be used a priori to determine a set of experiments to be carried out in order to fit the parameters of a numerical model to experimental data. The second problem is the computational effort required for mesoscale simulations of a full macroscopic structure. For this purpose, a coupling between the mesoscale and the macroscale model is developed. 
Representative mesoscale simulations are used to train a metamodel that is finally used as a constitutive model in a macroscopic simulation. Special focus is placed on appropriately simulating unloading. N2 - Makroskopisch betrachtet kann das Versagen von Beton durch die Entstehung und das Wachstum von Rissen beschrieben werden. Im ersten Teil der Arbeit wird eine Methode zur Simulation der makroskopischen Rissentwicklung von Beton unter Verwendung von kohäsiven diskreten Rissen basierend auf der erweiterten Finiten Elemente Methode vorgestellt. Besondere Bedeutung liegt dabei auf der Untersuchung von Kriterien zur Rissentstehung und zum Risswachstum. Ein Nachteil von makroskopischen Simulationen liegt in der nur phänomenologischen Berücksichtigung der tatsächlichen Vorgänge. Nichtlineares Verhalten von Beton ist durch die Entstehung von Mikrorissen gekennzeichnet, die bei weiterer Belastung zu makroskopischen Rissen zusammenwachsen. Um die Versagenszone realitätsnah abbilden zu können, wurde ein Mesoskalenmodell von Beton entwickelt, welches Zuschläge, Matrix und Übergangszone zwischen beiden Materialien (ITZ) direkt abbildet. Hauptmerkmale sind die Simulation der Zuschläge nach einer Sieblinie, eine kohäsive Materialformulierung der ITZ und ein kombiniertes Modell aus Schädigung und Plastizität für das Matrixmaterial. Im Gegensatz zu numerischen Simulationen ist die Systemantwort reeller Strukturen eine unscharfe Größe. Dies liegt u.a. an Heterogenitäten innerhalb der Struktur, die im Rahmen der Arbeit durch eine zufällige Verteilung der Zuschläge und über räumlich variierende Materialparameter unter Verwendung von Zufallsfeldern simuliert werden. Zwei Hauptprobleme sind bei den Mesoskalensimulationen aufgetreten. Einerseits sind Materialparameter auf der Mesoskala oft schwer zu bestimmen. 
Deswegen wurde eine Methode basierend auf Bayes neuronalen Netzen entwickelt, die eine Parameteridentifikation unter Verwendung von makroskopischen Versuchen erlaubt. Diese Methode ist aber universell anwendbar auf alle Parameteridentifikationsprobleme in numerischen Simulationen basierend auf experimentellen Daten. Der Ansatz liefert sowohl Informationen über den wahrscheinlichsten Parametersatz des Modells zur numerischen Simulation eines Experiments als auch eine Einschätzung der Genauigkeit dieses Schätzers. Die Methode kann auch verwendet werden, um a priori einen Satz von Experimenten auszuwählen, der notwendig ist, um die Parameter eines numerischen Modells zu bestimmen. Ein zweites Problem ist der numerische Aufwand von Mesoskalensimulationen für makroskopische Strukturen. Aus diesem Grund wurde eine Kopplungsstrategie zwischen Meso- und Makromodell entwickelt, bei dem repräsentative Simulationen auf der Mesoebene verwendet werden, um ein Metamodell zu generieren, welches dann die Materialformulierung in einer makroskopischen Simulation darstellt. Ein Fokus liegt dabei auf der korrekten Abbildung von Entlastungen. T2 - Neuronale Netze in einem Multiskalenansatz für Beton T3 - ISM-Bericht // Institut für Strukturmechanik, Bauhaus-Universität Weimar - 2009,1 KW - Beton KW - Mehrskalenmodell KW - Mehrskalenanalyse KW - Neuronales Netz KW - Monte-Carlo-Simulation KW - Simulation KW - Monte-Carlo-Integration KW - Kontinuierliche Simul KW - Bayes neuronale Netze KW - Parameteridentification KW - Bayesian neural networks KW - parameter identification Y1 - 2009 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20090626-14763 ER - TY - THES A1 - Udrea, Mihai-Andrei T1 - Assessment of Data from Dynamic Bridge Monitoring N2 - The focus of the thesis is to process measurements acquired from a continuous monitoring system at a railway bridge. Temperature, strain and ambient vibration records are analysed and two main directions of investigation are pursued. 
The first and the most demanding task is to develop processing routines able to extract modal parameters from ambient vibration measurements. For this purpose, reliable experimental models are achieved on the basis of a stochastic system identification (SSI) procedure. A fully automated algorithm based on a three-stage clustering is implemented to perform a modal parameter estimation for every single measurement. After selecting a baseline of modal parameters, the evolution of eigenfrequencies is studied and correlated to environmental and operational factors. The second aspect deals with the structural response to passing trains. Corresponding triggered records of strain and temperature are processed and their assessment is accomplished using the average strains induced by each train as the reference parameter. Three influences due to speed, temperature and loads are distinguished and treated individually. An attempt to estimate the maximum response variation due to each factor is also carried out. KW - automatic modal analysis KW - stochastic subspace identification KW - modal tracking KW - modal parameter estimation KW - clustering KW - Messtechnik Y1 - 2014 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20140429-21742 ER - TY - CHAP A1 - Theiler, Michael A1 - Könke, Carsten ED - Maia, Nuno T1 - Damping in Bolted Joints T2 - Proceedings of International Conference on Structural Engineering Dynamics (ICEDyn) 2013 N2 - With the help of modern CAE-based simulation processes, it is possible to predict the dynamic behavior of fatigue strength problems in order to improve products of many industries, e.g. the building, the machine construction or the automotive industry. Amongst others, it can be used to improve the acoustic design of automobiles in an early development stage. Nowadays, the acoustics of automobiles plays a crucial role in the process of vehicle development. 
Because of increasing comfort demands and statutory rules, manufacturers are faced with the challenge of optimizing their cars' sound emissions. The optimization includes not only the reduction of noise. Lately, with the trend towards hybrid and electric cars, it has been shown that vehicles can become too quiet. Thus, the prediction of structural and acoustic properties based on FE-simulations is becoming increasingly important before any experimental prototype is examined. With the state of the art, qualitative comparisons between different implementations are possible. However, an accurate and reliable quantitative prediction is still a challenge. One aspect in the context of increasing the prediction quality of acoustic (or general oscillating) problems - especially in power-trains of automobiles - is the more accurate implementation of damping in joint structures. While material damping occurs globally and homogeneously in a structural system, the damping due to joints is a very local problem, since energy is especially dissipated in the vicinity of joints. This paper focusses on experimental and numerical studies performed on a single (extracted) screw connection. Starting with experimental studies that are used to identify the underlying physical model of the energy loss, the locally influencing parameters (e.g. the damping factor) should be identified. In contrast to similar research projects, the approach tends to a more local consideration within the joint interface. Tangential stiffness and energy loss within the interface are spatially distributed and interactions between the influencing parameters are taken into account. As a result, the damping matrix is no longer proportional to the mass or stiffness matrix, since it is composed of the global material damping and the local joint damping. 
With this new approach, the prediction quality can be increased, since the local distribution of the physical parameters within the joint interface corresponds much more closely to reality. KW - Damping Y1 - 2013 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20130701-19709 SN - 978-989-96276-4-2 ER - TY - CHAP A1 - Tan, Fengjie A1 - Lahmer, Tom A1 - Siddappa, Manju Gyaraganahalll ED - Gürlebeck, Klaus ED - Lahmer, Tom T1 - SECTION OPTIMIZATION AND RELIABILITY ANALYSIS OF ARCH-TYPE DAMS INCLUDING COUPLED MECHANICAL-THERMAL AND HYDRAULIC FIELDS T2 - Digital Proceedings, International Conference on the Applications of Computer Science and Mathematics in Architecture and Civil Engineering : July 20 - 22 2015, Bauhaus-University Weimar N2 - Based on past design experience with arch dams, carrying out the shape optimization of arch dams has significant practical value, as it can fully exploit material characteristics and reduce construction costs. Suitable variables need to be chosen to formulate the objective function, e.g. to minimize the total volume of the arch dam. Additionally, a series of constraints is derived and a reasonable and convenient penalty function has been formed, which can easily enforce the characteristics of the constraints and the optimal design. For the optimization method, a Genetic Algorithm is adopted to perform a global search. Simultaneously, ANSYS is used to perform the mechanical analysis under the coupling of thermal and hydraulic loads. One of the constraints on the newly designed dam is to fulfill requirements on structural safety. Therefore, a reliability analysis is applied to offer good decision support for matters concerning predictions of both safety and service life of the arch dam. 
By this, the key factors that significantly influence the stability and safety of an arch dam can be identified, providing a good way to take preventive measures to prolong the service life of an arch dam and enhance the safety of the structure. KW - Angewandte Informatik KW - Angewandte Mathematik KW - Building Information Modeling KW - Computerunterstütztes Verfahren KW - Data, information and knowledge modeling in civil engineering; Function theoretic methods and PDE in engineering sciences; Mathematical methods for (robotics and) computer vision; Numerical modeling in engineering; Optimization in engineering applications Y1 - 2015 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20170314-28212 SN - 1611-4086 ER - TY - THES A1 - Tan, Fengjie T1 - Shape Optimization Design of Arch Type Dams under Uncertainties N2 - Due to an increased need for hydro-electricity, water storage, and flood protection, it is assumed that a series of new dams will be built throughout the world. Comparing existing design methodologies for arch-type dams, model-based shape optimization can effectively reduce construction costs and leverage the properties of construction materials. To apply the means of shape optimization, suitable variables need to be chosen to formulate the objective function, which is the volume of the arch dam here. In order to increase the consistency with practical conditions, a great number of geometrical and behavioral constraints are included in the mathematical model. An optimization method, namely a Genetic Algorithm, is adopted, which allows a global search. Traditional optimization techniques are realized based on a deterministic approach, which means that the material properties and loading conditions are assumed to be fixed values. As a result, the real-world structures that are optimized by these approaches suffer from uncertainties that one needs to be aware of. 
Hence, in any optimization process for arch dams, it is necessary to find a methodology that is capable of considering the influences of uncertainties and generating a solution which is robust enough against the uncertainties. The focus of this thesis is the formulation and the numerical method for the optimization of arch dams under uncertainties. The two main models, the probabilistic and the non-probabilistic model, are introduced and discussed. Classic procedures of probabilistic approaches under uncertainties, such as RDO (robust design optimization) and RBDO (reliability-based design optimization), are in general computationally expensive and rely on estimates of the system's response variance and failure probabilities. Instead, the robust optimization (RO) method, which is based on the non-probabilistic model, does not follow a full probabilistic approach but works with pre-defined confidence levels. This leads to a bi-level optimization program where the volume of the dam is optimized under the worst combination of the uncertain parameters. By this, robust and reliable designs are obtained and the result is independent of any assumptions on stochastic properties of the random variables in the model. The optimization of an arch-type dam is realized here by a robust optimization method under load uncertainty, where hydraulic and thermal loads are considered. The load uncertainty is modeled as an ellipsoidal expression. Compared with a traditional deterministic optimization (DO) method, which is only concerned with the minimum objective value and offers a solution candidate close to limit states, the RO method provides a robust solution against uncertainties. All the above-mentioned methods are applied to the optimization of the arch dam and compared with the optimal design obtained by DO methods. The results are compared and analyzed to discuss the advantages and drawbacks of each method. 
In order to reduce the computational cost, a ranking strategy and an approximation model are further employed for a preliminary screening. By means of these, the robust design can generate an improved arch dam structure which ensures both safety and serviceability during its lifetime. T3 - ISM-Bericht // Institut für Strukturmechanik, Bauhaus-Universität Weimar - 2019,2 KW - Wasserbau KW - Staudamm KW - dams Y1 - 2019 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20190819-39608 ER - TY - JOUR A1 - Talebi, Hossein A1 - Zi, Goangseup A1 - Silani, Mohammad A1 - Samaniego, Esteban A1 - Rabczuk, Timon T1 - A simple circular cell method for multilevel finite element analysis JF - Journal of Applied Mathematics N2 - A simple multiscale analysis framework for heterogeneous solids based on a computational homogenization technique is presented. The macroscopic strain is linked kinematically to the boundary displacement of a circular or spherical representative volume which contains the microscopic information of the material. The macroscopic stress is obtained from the energy principle between the macroscopic scale and the microscopic scale. This new method is applied to several standard examples to show the accuracy and consistency of the proposed method. KW - Finite-Elemente-Methode KW - Feststoff Y1 - 2012 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20170426-31639 ER - TY - JOUR A1 - Shirazi, A. H. N. A1 - Mohebbi, Farzad A1 - Azadi Kakavand, M. R. A1 - He, B. A1 - Rabczuk, Timon T1 - Paraffin Nanocomposites for Heat Management of Lithium-Ion Batteries: A Computational Investigation JF - JOURNAL OF NANOMATERIALS N2 - Lithium-ion (Li-ion) batteries are currently considered as vital components for advances in mobile technologies such as those in communications and transport. Nonetheless, Li-ion batteries suffer from temperature rises which sometimes lead to operational damage or may even cause fire. 
An appropriate solution to control the temperature changes during the operation of Li-ion batteries is to embed batteries inside a paraffin matrix to absorb and dissipate heat. In the present work, we aimed to investigate the possibility of making paraffin nanocomposites for better heat management of a Li-ion battery pack. To fulfill this aim, heat generation during battery charging/discharging cycles was simulated using Newman's well-established electrochemical pseudo-2D model. We couple this model to a 3D heat transfer model to predict the temperature evolution during the battery operation. In the latter model, we considered different paraffin nanocomposite structures made by the addition of graphene, carbon nanotubes, and fullerene by assuming the same thermal conductivity for all fillers. This way, our results mainly correlate with the geometry of the fillers. Our results assess the degree of enhancement in heat dissipation of Li-ion batteries through the use of paraffin nanocomposites. Our results may be used as a guide for experimental set-ups to improve the heat management of Li-ion batteries. KW - Batterie KW - Wärmeleitfähigkeit Y1 - 2016 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20170411-31141 ER - TY - INPR A1 - Sheikh Khozani, Zohreh A1 - Kumbhakar, Manotosh T1 - Discussion of “Estimation of one-dimensional velocity distribution by measuring velocity at two points” by Yeganeh and Heidari (2020) N2 - The concept of information entropy together with the principle of maximum entropy to open channel flow is essentially based on some physical consideration of the problem under consideration. This paper is a discussion on Yeganeh and Heidari (2020)’s paper, who proposed a new approach for measuring vertical distribution of streamwise velocity in open channels. The discussers argue that their approach is conceptually incorrect and thus leads to a physically unrealistic situation. 
In addition, the discussers find some incorrect mathematical expressions (which are assumed to be typos) in the paper, and also point out that the authors did not cite some of the original papers on the topic. KW - Geschwindigkeit KW - Entropie Y1 - 2021 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20210216-43663 N1 - This is the pre-peer reviewed version of the following article: https://www.sciencedirect.com/science/article/pii/S0955598621000017 ; https://doi.org/10.1016/j.flowmeasinst.2021.101886 ER - TY - JOUR A1 - Shamshirband, Shahaboddin A1 - Joloudari, Javad Hassannataj A1 - GhasemiGol, Mohammad A1 - Saadatfar, Hamid A1 - Mosavi, Amir A1 - Nabipour, Narjes T1 - FCS-MBFLEACH: Designing an Energy-Aware Fault Detection System for Mobile Wireless Sensor Networks JF - Mathematics N2 - Wireless sensor networks (WSNs) include large-scale sensor nodes that are densely and randomly distributed over a geographical region for monitoring, identifying, and analyzing physical events. The crucial challenge in wireless sensor networks is the very high dependence of the sensor nodes on limited battery power to exchange information wirelessly as well as the non-rechargeable battery of the wireless sensor nodes, which makes the management and monitoring of these nodes in terms of abnormal changes very difficult. These anomalies arise from faults, including hardware and software faults, anomalies, and attacks by intruders, all of which affect the comprehensiveness of the data collected by wireless sensor networks. Hence, crucial measures should be taken to detect early faults in the network, despite the limitations of the sensor nodes. Machine learning methods offer solutions that can be used to detect sensor node faults in the network. 
The purpose of this study is to use several classification methods to compute the fault detection accuracy with different densities under two scenarios in regions of interest such as MB-FLEACH, one-class support vector machine (SVM), fuzzy one-class, or a combination of SVM and FCS-MBFLEACH methods. It should be noted that in the study so far, no super cluster head (SCH) selection has been performed to detect node faults in the network. The simulation outcomes demonstrate that the FCS-MBFLEACH method has the best performance in terms of the accuracy of fault detection, false-positive rate (FPR), average remaining energy, and network lifetime compared to other classification methods. KW - Vernetzung KW - wireless sensor networks KW - machine learning KW - Funktechnik KW - Sensor KW - Maschinelles Lernen KW - Internet of Things KW - OA-Publikationsfonds2019 Y1 - 2019 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20200107-40541 UR - https://www.mdpi.com/2227-7390/8/1/28 VL - 2020 IS - Volume 8, Issue 1, article 28 PB - MDPI ER - TY - JOUR A1 - Shamshirband, Shahaboddin A1 - Babanezhad, Meisam A1 - Mosavi, Amir A1 - Nabipour, Narjes A1 - Hajnal, Eva A1 - Nadai, Laszlo A1 - Chau, Kwok-Wing T1 - Prediction of flow characteristics in the bubble column reactor by the artificial pheromone-based communication of biological ants JF - Engineering Applications of Computational Fluid Mechanics N2 - A novel combination of the ant colony optimization algorithm (ACO) and computational fluid dynamics (CFD) data is proposed for modeling the multiphase chemical reactors. The proposed intelligent model presents a probabilistic computational strategy for predicting various levels of three-dimensional bubble column reactor (BCR) flow. The results prove an enhanced communication between ant colony prediction and CFD data in different sections of the BCR. 
KW - Maschinelles Lernen KW - Machine learning KW - Bubble column reactor KW - ant colony optimization algorithm (ACO) KW - flow pattern KW - computational fluid dynamics (CFD) KW - big data KW - OA-Publikationsfonds2020 Y1 - 2020 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20200227-41013 UR - https://www.tandfonline.com/doi/full/10.1080/19942060.2020.1715842 VL - 2020 IS - volume 14, issue 1 SP - 367 EP - 378 PB - Taylor & Francis ER - TY - JOUR A1 - Shabani, Sevda A1 - Samadianfard, Saeed A1 - Sattari, Mohammad Taghi A1 - Mosavi, Amir A1 - Shamshirband, Shahaboddin A1 - Kmet, Tibor A1 - Várkonyi-Kóczy, Annamária R. T1 - Modeling Pan Evaporation Using Gaussian Process Regression K-Nearest Neighbors Random Forest and Support Vector Machines; Comparative Analysis JF - Atmosphere N2 - Evaporation is a very important process; it is one of the most critical factors in agricultural, hydrological, and meteorological studies. Due to the interactions of multiple climatic factors, evaporation is considered a complex and nonlinear phenomenon to model. Thus, machine learning methods have gained popularity in this realm. In the present study, four machine learning methods of Gaussian Process Regression (GPR), K-Nearest Neighbors (KNN), Random Forest (RF) and Support Vector Regression (SVR) were used to predict the pan evaporation (PE). Meteorological data including PE, temperature (T), relative humidity (RH), wind speed (W), and sunny hours (S) were collected from 2011 through 2017. The accuracy of the studied methods was determined using the statistical indices of Root Mean Squared Error (RMSE), correlation coefficient (R) and Mean Absolute Error (MAE). Furthermore, the Taylor charts were utilized for evaluating the accuracy of the mentioned models. 
The results of this study showed that at Gonbad-e Kavus, Gorgan and Bandar Torkman stations, GPR with RMSE of 1.521 mm/day, 1.244 mm/day, and 1.254 mm/day, KNN with RMSE of 1.991 mm/day, 1.775 mm/day, and 1.577 mm/day, RF with RMSE of 1.614 mm/day, 1.337 mm/day, and 1.316 mm/day, and SVR with RMSE of 1.55 mm/day, 1.262 mm/day, and 1.275 mm/day had more appropriate performances in estimating PE values. It was found that GPR for Gonbad-e Kavus Station with input parameters of T, W and S and GPR for Gorgan and Bandar Torkman stations with input parameters of T, RH, W and S had the most accurate predictions and were proposed for precise estimation of PE. The findings of the current study indicated that the PE values may be accurately estimated with a few easily measured meteorological parameters. KW - Maschinelles Lernen KW - Machine learning KW - Deep learning Y1 - 2020 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20200110-40561 UR - https://www.mdpi.com/2073-4433/11/1/66 VL - 2020 IS - Volume 11, Issue 1, 66 ER - TY - THES A1 - Schwedler, Michael T1 - Integrated structural analysis using isogeometric finite element methods N2 - The gradual digitization in the architecture, engineering, and construction industry over the past fifty years led to an extremely heterogeneous software environment, which today is embodied by the multitude of different digital tools and proprietary data formats used by the many specialists contributing to the design process in a construction project. Though these projects become increasingly complex, the demands on financial efficiency and the completion within a tight schedule grow at the same time. The digital collaboration of project partners has been identified as one key issue in successfully dealing with these challenges. Yet currently, the numerous software applications and their respective individual views on the design process severely impede that collaboration. 
An approach to establish a unified basis for digital collaboration, regardless of the existing software heterogeneity, is a comprehensive digital building model contributed to by all project partners. This type of data management, known as building information modeling (BIM), has many benefits, yet its adoption is associated with many difficulties and thus proceeds only slowly. One aspect in the field of conflicting requirements on such a digital model is the cooperation of architects and structural engineers. Traditionally, these two disciplines use different abstractions of reality for their models, which in consequence lead to incompatible digital representations thereof. The onset of isogeometric analysis (IGA) promised to ease the discrepancy between design and analysis model representations. Yet, that initial focus quickly shifted towards using these methods as a more powerful basis for numerical simulations. Furthermore, the isogeometric representation alone is not capable of solving the model abstraction problem. It is thus the intention of this work to contribute to an improved digital collaboration of architects and engineers by exploring an integrated analysis approach on the basis of a unified digital model and solid geometry expressed by splines. In the course of this work, an analysis framework is developed that utilizes such models to automatically conduct numerical simulations commonly required in construction projects. In essence, this makes it possible to retrieve structural analysis results from BIM models in a fast and simple manner, thereby facilitating rapid design iterations and profound design feedback. The BIM implementation Industry Foundation Classes (IFC) is reviewed with regard to its capabilities of representing the unified model. The current IFC schema strongly supports the use of redundant model data, a major pitfall in digital collaboration. Additionally, it does not allow the geometry to be described by volumetric splines.
As the pursued approach builds upon a unique model for both architectural and structural design, and furthermore requires solid geometry, necessary schema modifications are suggested. Structural entities are modeled by volumetric NURBS patches, each of which constitutes an individual subdomain that, with regard to the analysis, is incompatible with the remaining full model. The resulting consequences for numerical simulation are elaborated in this work. The individual subdomains have to be weakly coupled, for which the mortar method is used. Different approaches to discretize the interface traction fields are implemented and their respective impact on the analysis results is evaluated. All necessary coupling conditions are automatically derived from the related geometry model. The weak coupling procedure leads to a linear system of equations in saddle point form, which, owing to the volumetric modeling, is large in size and whose coefficient matrix, due to the use of higher-degree basis functions, has a high bandwidth. The peculiarities of the system require adapted solution methods that generally cause higher numerical costs than the standard procedures for symmetric, positive-definite systems do. Different methods to solve the specific system are investigated and an efficient parallel algorithm is finally proposed. When the structural analysis model is derived from the unified model in the BIM data, it does in general initially not meet the requirements on the discretization that are necessary to obtain sufficiently accurate analysis results. The consequently necessary patch refinements must be controlled automatically to allow for an entirely automatic analysis procedure. For that purpose, an empirical refinement scheme based on the geometrical and possibly mechanical properties of the specific entities is proposed. The level of refinement may be selectively manipulated by the structural engineer in charge.
Furthermore, a Zienkiewicz-Zhu type error estimator is adapted for use with isogeometric analysis results. It is shown that this estimator can also be used to steer an adaptive refinement procedure. T3 - ISM-Bericht // Institut für Strukturmechanik, Bauhaus-Universität Weimar - 2016,2 KW - Finite-Elemente-Methode KW - NURBS KW - Isogeometrische Analyse KW - finite element method KW - isogeometric analysis KW - mortar method KW - building information modelling Y1 - 2017 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20170130-27372 ER - TY - CHAP A1 - Schrader, Kai A1 - Könke, Carsten ED - Gürlebeck, Klaus ED - Könke, Carsten T1 - SPARSE APPROXIMATE COMPUTATION OF SADDLE POINT PROBLEMS ARISING FROM FETI-DP DISCRETIZATION N2 - The numerical simulation of microstructure models in 3D requires, due to the enormous number of d.o.f., significant resources of memory as well as parallel computational power. Compared to homogeneous materials, the material heterogeneity on the microscale induced by different material phases demands adequate computational methods for the discretization and solution process of the resulting highly nonlinear problem. To enable an efficient and scalable solution process for the linearized equation systems, the heterogeneous FE problem will be described by a FETI-DP (Finite Element Tearing and Interconnecting - Dual Primal) discretization. The fundamental FETI-DP equation can be solved by a number of different approaches. In our approach, the FETI-DP problem will be reformulated as a saddle point system by eliminating the primal and Lagrangian variables. For the reduced saddle point system, defined only by interior and dual variables, special Uzawa algorithms can be adapted for iteratively solving the FETI-DP saddle-point equation system (FETI-DP SPE). A conjugate gradient version of the Uzawa algorithm will be shown, as well as some numerical tests regarding the FETI-DP discretization of small examples using the presented solution technique.
Furthermore, the inversion of the interior-dual Schur complement operator can be approximated using different techniques to build an adequate preconditioning matrix, thereby leading to substantial gains in computing time efficiency. KW - Angewandte Informatik KW - Angewandte Mathematik KW - Architektur KW - Computerunterstütztes Verfahren KW - Computer Science Models in Engineering; Multiscale and Multiphysical Models; Scientific Computing Y1 - 2010 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20170314-28874 UR - http://euklid.bauing.uni-weimar.de/ikm2009/paper.html SN - 1611-4086 ER - TY - THES A1 - Schrader, Kai T1 - Hybrid 3D simulation methods for the damage analysis of multiphase composites T1 - Hybride 3D Simulationsmethoden zur Abbildung der Schädigungsvorgänge in Mehrphasen-Verbundwerkstoffen N2 - Modern digital material approaches for the visualization and simulation of heterogeneous materials make it possible to investigate the behavior of complex multiphase materials with their physically nonlinear material response at various scales. However, these computational techniques require extensive hardware resources with respect to computing power and main memory to numerically solve large-scale discretized models in 3D. Due to a very high number of degrees of freedom, which may rapidly increase into the two-digit million range, the limited hardware resources must be utilized in the most efficient way to enable execution of the numerical algorithms in minimal computation time. Hence, in the field of computational mechanics, various methods and algorithms can lead to an optimized runtime behavior of nonlinear simulation models; several such approaches are proposed and investigated in this thesis. Today, the numerical simulation of damage effects in heterogeneous materials is performed by the adaptation of multiscale methods.
A consistent modeling in three-dimensional space with an appropriate discretization resolution on each scale (based on a hierarchical or concurrent multiscale model), however, still poses computational challenges with respect to the convergence behavior, the scale transition, and the solver performance of the weakly coupled problems. The computational efficiency and the distribution among available hardware resources (often based on a parallel hardware architecture) can be significantly improved. In the past years, high-performance computing (HPC) and graphics processing unit (GPU) based computation techniques were established for the investigation of scientific objectives. Their application results in the modification of existing and the development of new computational methods for the numerical implementation, which makes it possible to take advantage of massively clustered computer hardware resources. In the field of numerical simulation in material science, e.g. within the investigation of damage effects in multiphase composites, the suitability of such models is often restricted by the number of degrees of freedom (d.o.f.s) in the three-dimensional spatial discretization. This proves to be difficult for the type of implementation method used for the nonlinear simulation procedure and simultaneously has a great influence on memory demand and computational time. In this thesis, a hybrid discretization technique has been developed for the three-dimensional discretization of a three-phase material, which respects the numerical efficiency of nonlinear (damage) simulations of these materials. The increase in computational efficiency is enabled by the improved scalability of the numerical algorithms. Consequently, substructuring methods for partitioning the hybrid mesh were implemented, tested and adapted to the HPC computing framework, using several hundred CPU (central processing unit) nodes for building the finite element assembly.
A memory-efficient, iterative and parallelized equation solver combined with a special preconditioning technique for solving the underlying equation system was modified and adapted to enable combined CPU and GPU based computations. Hence, the author recommends applying the substructuring method to hybrid meshes, which respects the different material phases and their mechanical behavior and makes it possible to split the structure into elastic and inelastic parts. The consideration of the nonlinear material behavior, specified for the corresponding phase, is then limited to the inelastic domains only, thereby decreasing the computing time of the nonlinear procedure. Due to the high numerical effort of such simulations, an alternative approach for the nonlinear finite element analysis, based on sequential linear analysis, was implemented with respect to scalable HPC. The incremental-iterative procedure in finite element analysis (FEA) during the nonlinear step was then replaced by a sequence of linear FE analyses when damage in critical regions occurred, known in the literature as the saw-tooth approach. As a result, qualitative (smeared) crack initiation in 3D multiphase specimens has been efficiently simulated.
T3 - ISM-Bericht // Institut für Strukturmechanik, Bauhaus-Universität Weimar - 2013,2 KW - high-performance computing KW - finite element method KW - heterogeneous material KW - domain decomposition KW - scalable smeared crack analysis KW - FEM KW - multiphase KW - damage KW - HPC KW - solver Y1 - 2012 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20131021-20595 ER - TY - JOUR A1 - Schmidt, Albrecht A1 - Lahmer, Tom T1 - Efficient domain decomposition based reliability analysis for polymorphic uncertain material parameters JF - Proceedings in Applied Mathematics & Mechanics N2 - Realistic uncertainty description incorporating aleatoric and epistemic uncertainties can be described within the framework of polymorphic uncertainty, which is computationally demanding. Utilizing a domain decomposition approach for random field based uncertainty models, the proposed level-based sampling method can reduce these computational costs significantly and shows good agreement with a standard sampling technique. While 2-level configurations tend to become unstable with decreasing sampling density, 3-level setups show encouraging results for the investigated reliability analysis of a structural unit square.
KW - Polymorphie KW - Stoffeigenschaft KW - Stochastik KW - polymorphe Unschärfemodellierung KW - Materialverhalten KW - hybride Werkstoffe Y1 - 2021 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20220112-45563 UR - https://onlinelibrary.wiley.com/doi/full/10.1002/pamm.202100014 VL - 2021 IS - Volume 21, issue 1 SP - 1 EP - 4 PB - Wiley-VHC CY - Weinheim ER - TY - THES A1 - Schemmann, Christoph T1 - Optimierung von radialen Verdichterlaufrädern unter Berücksichtigung empirischer und analytischer Vorinformationen mittels eines mehrstufigen Sampling Verfahrens T1 - Optimization of Centrifugal Compressor Impellers by a Multi-fidelity Sampling Method Taking Analytical and Empirical Information into Account N2 - Turbomachinery plays an important role in many cases of energy generation or conversion. Therefore, turbomachinery is a promising starting point for optimization in order to increase the efficiency of energy use. In recent years, the use of automated optimization strategies in combination with numerical simulation has become increasingly popular in many fields of engineering. The complex interactions between fluid and solid mechanics encountered in turbomachines on the one hand, and the high computational expense needed to calculate the performance on the other hand, have, however, prevented the widespread use of these techniques in this field of engineering. The objective of this work was the development of a strategy for efficient metamodel based optimization of centrifugal compressor impellers. In this context, the main focus is the reduction of the required numerical expense. The central idea followed in this research was the incorporation of preliminary information acquired from low-fidelity computation methods and empirical correlations into the sampling process to identify promising regions of the parameter space.
This information was then used to concentrate the numerically expensive high-fidelity computations of the fluid dynamic and structural mechanic performance of the impeller in these regions, while still maintaining a good coverage of the whole parameter space. The development of the optimization strategy can be divided into three main tasks. Firstly, the available preliminary information had to be researched and rated. This research identified loss models based on one-dimensional flow physics and empirical correlations as the best suited method to predict the aerodynamic performance. The loss models were calibrated using available performance data to obtain a high prediction quality. As no sufficiently exact models for the prediction of the mechanical loading of the impeller could be identified, a metamodel based on finite element computations was chosen for this estimation. The second task was the development of a sampling method which concentrates samples in regions of the parameter space where high quality designs are predicted by the preliminary information, while maintaining a good overall coverage. As available methods like rejection sampling or Markov-chain Monte-Carlo methods did not meet the requirements in terms of sample distribution and input correlation, a new multi-fidelity sampling method called “Filtered Sampling“ has been developed. The last task was the development of an automated computational workflow. This workflow encompasses geometry parametrization, geometry generation, grid generation and computation of the aerodynamic performance and the structural mechanic loading. Special emphasis was placed on the development of a geometry parametrization strategy based on fluid mechanic considerations to prevent the generation of physically inexpedient designs. Finally, the optimization strategy, which utilizes the previously developed tools, was successfully employed to carry out three optimization tasks.
The efficiency of the method was proven by the first and second test case, in which an existing compressor design was optimized by the presented method. The results were comparable to those of optimizations which did not take preliminary information into account, while the required computational expense could be halved. In the third test case, the method was applied to generate a new impeller design. In contrast to the previous examples, this optimization featured larger variations of the impeller designs. Therefore, the applicability of the method to parameter spaces with significantly varying designs could be proven, too. N2 - Turbomaschinen sind eine entscheidende Komponente in vielen Energiewandlungs- oder Energieerzeugungsprozessen und daher als vielversprechender Ansatzpunkt für eine Effizienzsteigerung der Energie- und Ressourcennutzung anzusehen. Im Laufe des letzten Jahrzehnts haben automatisierte Optimierungsmethoden in Verbindung mit numerischer Simulation zunehmend breitere Verwendung als Mittel zur Effizienzsteigerung in vielen Bereichen der Ingenieurwissenschaften gefunden. Allerdings standen die komplexen Interaktionen zwischen Strömungs- und Strukturmechanik sowie der hohe numerische Aufwand einem weitverbreiteten Einsatz dieser Methoden im Turbomaschinenbereich bisher entgegen. Das Ziel dieser Forschungsaktivität ist die Entwicklung einer effizienten Strategie zur metamodellbasierten Optimierung von radialen Verdichterlaufrädern. Dabei liegt der Schwerpunkt auf einer Reduktion des benötigten numerischen Aufwandes. Der in diesem Vorhaben gewählte Ansatz ist das Einbeziehen analytischer und empirischer Vorinformationen (“low-fidelity“) in den Sampling Prozess, um vielversprechende Bereiche des Parameterraumes zu identifizieren.
Diese Informationen werden genutzt, um die aufwendigen numerischen Berechnungen (“high-fidelity“) des strömungs- und strukturmechanischen Verhaltens der Laufräder in diesen Bereichen zu konzentrieren, während gleichzeitig eine ausreichende Abdeckung des gesamten Parameterraumes sichergestellt wird. Die Entwicklung der Optimierungsstrategie ist in drei zentrale Arbeitspakete aufgeteilt. In einem ersten Schritt werden die verfügbaren empirischen und analytischen Methoden gesichtet und bewertet. In dieser Recherche sind Verlustmodelle basierend auf eindimensionaler Strömungsmechanik und empirischen Korrelationen als bestgeeignete Methode zur Vorhersage des aerodynamischen Verhaltens der Verdichter identifiziert worden. Um eine hohe Vorhersagegüte sicherzustellen, sind diese Modelle anhand verfügbarer Leistungsdaten kalibriert worden. Da zur Vorhersage der mechanischen Belastung des Laufrades keine brauchbaren analytischen oder empirischen Modelle ermittelt werden konnten, ist hier ein Metamodell basierend auf Finite-Elemente-Berechnungen gewählt worden. Das zweite Arbeitspaket beinhaltet die Entwicklung der angepassten Samplingmethode, welche Samples in Bereichen des Parameterraumes konzentriert, die auf Basis der Vorinformationen als vielversprechend angesehen werden können. Gleichzeitig müssen eine gleichmäßige Abdeckung des gesamten Parameterraumes und ein niedriges Niveau an Eingangskorrelationen sichergestellt sein. Da etablierte Methoden wie Markov-Ketten-Monte-Carlo-Methoden oder die Verwerfungsmethode diese Voraussetzungen nicht erfüllen, ist ein neues, mehrstufiges Samplingverfahren (“Filtered Sampling“) entwickelt worden. Das letzte Arbeitspaket umfasst die Entwicklung eines automatisierten Simulations-Workflows. Dieser Workflow umfasst Geometrieparametrisierung, Geometrieerzeugung, Netzerzeugung sowie die Berechnung des aerodynamischen Betriebsverhaltens und der strukturmechanischen Belastung.
Dabei liegt ein Schwerpunkt auf der Entwicklung eines Parametrisierungskonzeptes, welches auf strömungsmechanischen Zusammenhängen beruht, um so physikalisch nicht zielführende Parameterkombinationen zu vermeiden. Abschließend ist die auf den zuvor entwickelten Werkzeugen aufbauende Optimierungsstrategie erfolgreich eingesetzt worden, um drei Optimierungsfragestellungen zu bearbeiten. Im ersten und zweiten Testcase sind bestehende Verdichterlaufräder mit der vorgestellten Methode optimiert worden. Die erzielten Optimierungsergebnisse sind von ähnlicher Güte wie die solcher Optimierungen, die keine Vorinformationen berücksichtigen, allerdings wird nur die Hälfte an numerischem Aufwand benötigt. In einem dritten Testcase ist die Methode eingesetzt worden, um ein neues Laufraddesign zu erzeugen. Im Gegensatz zu den vorherigen Beispielen werden im Rahmen dieser Optimierung stark unterschiedliche Designs untersucht. Dadurch kann an diesem dritten Beispiel aufgezeigt werden, dass die Methode auch für Parameterräume mit stark variierenden Designs funktioniert. T3 - ISM-Bericht // Institut für Strukturmechanik, Bauhaus-Universität Weimar - 2019,3 KW - Simulation KW - Maschinenbau KW - Optimierung KW - Strömungsmechanik KW - Strukturmechanik Y1 - 2019 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20190910-39748 ER - TY - JOUR A1 - Saqlai, Syed Muhammad A1 - Ghani, Anwar A1 - Khan, Imran A1 - Ahmed Khan Ghayyur, Shahbaz A1 - Shamshirband, Shahaboddin A1 - Nabipour, Narjes A1 - Shokri, Manouchehr T1 - Image Analysis Using Human Body Geometry and Size Proportion Science for Action Classification JF - Applied Sciences N2 - Gestures are one of the basic modes of human communication and are usually used to represent different actions. Automatic recognition of these actions forms the basis for solving more complex problems like human behavior analysis, video surveillance, event detection, and sign language recognition.
Action recognition from images is a challenging task, as key information like temporal data, object trajectory, and optical flow is not available in still images. Measuring the size of different regions of the human body, i.e., step size, arm span, and the length of the arm, forearm, and hand, however, provides valuable clues for the identification of human actions. In this article, a framework for the classification of human actions is presented in which humans are detected and localized through faster region-convolutional neural networks followed by morphological image processing techniques. Furthermore, geometric features from the human blob are extracted and incorporated into the classification rules for the six human actions, i.e., standing, walking, single-hand side wave, single-hand top wave, both hands side wave, and both hands top wave. The performance of the proposed technique has been evaluated using precision, recall, omission error, and commission error. The proposed technique has been comparatively analyzed in terms of overall accuracy with existing approaches, showing that it performs well in contrast to its counterparts. KW - Bildanalyse KW - Mensch KW - Größenverhältnis KW - Geometrie KW - Körper KW - action recognition KW - rule based classification KW - human body proportions KW - human blob KW - OA-Publikationsfonds2020 Y1 - 2020 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20200904-42322 UR - https://www.mdpi.com/2076-3417/10/16/5453 VL - 2020 IS - volume 10, issue 16, article 5453 PB - MDPI CY - Basel ER - TY - THES A1 - Salavati, Mohammad T1 - Multi-Scale Modeling of Mechanical and Electrochemical Properties of 1D and 2D Nanomaterials, Application in Battery Energy Storage Systems N2 - Material properties play a critical role in the manufacturing of durable products. Estimation of the precise characteristics at different scales requires complex and expensive experimental measurements.
Potentially, computational methods can provide a platform to determine the fundamental properties before the final experiment. Multi-scale computational modeling involves modeling at various time and length scales, including the nano, micro, meso, and macro scales. These scales can be modeled separately or in correlation with coarser scales. Depending on the scales of interest, the right selection of multi-scale methods leads to reliable results at an affordable computational cost. The present dissertation deals with problems at various length and time scales using computational methods, including density functional theory (DFT), molecular mechanics (MM), molecular dynamics (MD), and finite element (FE) methods. Physical and chemical interactions at lower scales determine the coarser-scale properties. Modeling particle interactions and exploring fundamental properties are significant challenges of computational science. Fine-scale modeling needs more computational effort due to the large number of interacting atoms/particles. To deal with this problem and treat a fine-scale (nano) problem as a coarse-scale (macro) one, we extended an atomic-continuum framework. The discrete atomic models are solved as a continuum problem using the computationally efficient FE method. The MM or force field method approximates a solution on the atomic scale based on a set of assumptions. In this method, atoms and bonds are modeled as harmonic oscillators, i.e., a system of masses and springs. The negative gradient of the potential energy equals the force on each atom. In this way, the total potential energy of each bond, including bonded and non-bonded energies, is simulated as an equivalent structural strain energy. Finally, the chemical nature of the atomic bond is modeled as a piezoelectric beam element that is solved by the FE method. Exploring novel materials with unique properties is in demand for various industrial applications.
During the last decade, many two-dimensional (2D) materials have been synthesized and have shown outstanding properties. Investigation of probable defects during the formation/fabrication process and studying their strength under severe service life are critical tasks for exploring performance prospects. We studied various defects, including nano-crack, notch, and point vacancy (Stone-Wales) defects, employing MD analysis. Classical MD has been used to simulate a considerable number of molecules at the micro- and meso-scales. Pristine and defective nanosheet structures were considered under uniaxial tensile loading at various temperatures using the open-source LAMMPS code. The results were visualized with the open-source software OVITO and VMD. Quantum-based first-principles calculations have been conducted at electronic scales and are known as the most accurate ab initio methods. However, they are computationally expensive to apply to large systems. We used density functional theory (DFT) to estimate the mechanical and electrochemical response of the 2D materials. The many-body Schrödinger equation describes the motion and interactions of the solid-state particles. The solid is described as a system of positive nuclei and negative electrons, all electromagnetically interacting with each other, where wave function theory describes the quantum state of the set of particles. However, dealing with the 3N coordinates of the electrons and nuclei and the N spin coordinates of the electrons makes the governing equation unsolvable for more than a few interacting atoms. Assumptions and theories like the Born-Oppenheimer approximation and the Hartree-Fock mean-field and Hohenberg-Kohn theories are needed to treat this equation. First, the Born-Oppenheimer approximation reduces it to the electronic coordinates only.
Then Kohn and Sham, based on the Hartree-Fock and Hohenberg-Kohn theories, assumed an equivalent fictitious system of non-interacting electrons, described as a functional of the electron density, such that its ground state energy equals that of the set of interacting electrons. Exchange-correlation energy functionals are responsible for satisfying the equivalence between both systems. The exact form of the exchange-correlation functional is not known. However, there are widely used methods to derive functionals, like the local density approximation (LDA), the generalized gradient approximation (GGA), and hybrid functionals (e.g., B3LYP). In our study, DFT was performed using the VASP code within the GGA/PBE approximation, and visualization/post-processing of the results was realized via the open-source software VESTA. Extensive DFT calculations were conducted on the prospects of 2D nanomaterials as anode/cathode electrode materials for batteries. The performance of metal-ion batteries strongly depends on the design of novel electrode materials. Two-dimensional (2D) materials have attracted remarkable interest for use as electrodes in battery cells due to their excellent properties. Desirable battery energy storage systems (BESS) must provide high energy density, safe operation, and efficient production costs. Batteries have been used in electronic devices and provide a solution to environmental issues by storing the intermittent energy generated from renewable wind or solar power plants. Therefore, exploring optimal electrode materials can improve storage capacity and charging/discharging rates, leading to the design of advanced batteries. Our results at multiple scales highlight not only the efficiency of the proposed and employed methods but also the promising prospects of recently synthesized nanomaterials and their applications as anode materials. In this way, first, a novel approach was developed for modeling a 1D nanotube as a continuum piezoelectric beam element.
The results converged and matched closely with those from experiments and other more complex models. Then the mechanical properties of nanosheets were estimated, and the failure mechanism results provide a useful guide for further use in prospective applications. Our results give a comprehensive and useful picture of the mechanical properties of nanosheets with and without defects. Finally, the mechanical and electrochemical properties of several 2D nanomaterials are explored for the first time; their performance as anode materials illustrates high potential for manufacturing super-stretchable and ultrahigh-capacity battery energy storage systems (BESS). Our results exhibited better performance in comparison to the available commercial anode materials. KW - Batterie KW - Modellierung KW - Nanostrukturiertes Material KW - Mechanical properties KW - Multi-scale modeling KW - Energiespeichersystem KW - Elektrodenmaterial KW - Elektrode KW - Mechanische Eigenschaft KW - Elektrochemische Eigenschaft KW - Electrochemical properties KW - Battery development KW - Nanomaterial Y1 - 2020 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20200623-41830 ER - TY - JOUR A1 - Sadeghzadeh, Milad A1 - Maddah, Heydar A1 - Ahmadi, Mohammad Hossein A1 - Khadang, Amirhosein A1 - Ghazvini, Mahyar A1 - Mosavi, Amir Hosein A1 - Nabipour, Narjes T1 - Prediction of Thermo-Physical Properties of TiO2-Al2O3/Water Nanoparticles by Using Artificial Neural Network JF - Nanomaterials N2 - In this paper, an artificial neural network is implemented for the sake of predicting the thermal conductivity ratio of TiO2-Al2O3/water nanofluid. TiO2-Al2O3/water, an innovative type of nanofluid, was synthesized by the sol–gel method. The results indicated that a nanoparticle concentration of 1.5 vol.% enhanced the thermal conductivity by up to 25%.
It was shown that the heat transfer coefficient was linearly augmented with increasing nanoparticle concentration, but its variation with temperature was nonlinear. It should be noted that an increase in concentration may cause the particles to agglomerate, and then the thermal conductivity is reduced. An increase in temperature also increases the thermal conductivity, due to an increase in the Brownian motion and collision of particles. In this research, for the sake of predicting the thermal conductivity of TiO2-Al2O3/water nanofluid as a function of volumetric concentration and temperature, an artificial neural network is implemented. In this way, for predicting thermal conductivity, SOM (self-organizing map) and BP-LM (Back Propagation-Levenberg-Marquardt) algorithms were used. Based on the results obtained, these algorithms can be considered an exceptional tool for predicting thermal conductivity. Additionally, the correlation coefficient values were equal to 0.938 and 0.98 when implementing the SOM and BP-LM algorithms, respectively, which is highly acceptable. KW - Wärmeleitfähigkeit KW - Fluid KW - Neuronales Netz KW - Thermal conductivity KW - Nanofluid KW - Artificial neural network Y1 - 2020 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20200421-41308 UR - https://www.mdpi.com/2079-4991/10/4/697 VL - 2020 IS - Volume 10, Issue 4, 697 PB - MDPI CY - Basel ER - TY - JOUR A1 - Saadatfar, Hamid A1 - Khosravi, Samiyeh A1 - Hassannataj Joloudari, Javad A1 - Mosavi, Amir A1 - Shamshirband, Shahaboddin T1 - A New K-Nearest Neighbors Classifier for Big Data Based on Efficient Data Pruning JF - Mathematics N2 - The K-nearest neighbors (KNN) machine learning algorithm is a well-known non-parametric classification method. However, like other traditional data mining methods, applying it to big data comes with computational challenges.
Indeed, KNN determines the class of a new sample based on the classes of its nearest neighbors; however, identifying the neighbors in a large amount of data imposes so large a computational cost that the task is no longer feasible on a single computing machine. One of the proposed techniques to make classification methods applicable to large datasets is pruning. LC-KNN is an improved KNN method which first clusters the data into smaller partitions using K-means clustering and then, for each new sample, applies KNN on the partition whose center is nearest. However, because the clusters have different shapes and densities, selecting the appropriate cluster is a challenge. In this paper, an approach is proposed to improve the pruning phase of the LC-KNN method by taking these factors into account. The proposed approach helps to choose a more appropriate cluster of data in which to look for the neighbors, thereby increasing the classification accuracy. The performance of the proposed approach is evaluated on different real datasets. The experimental results show the effectiveness of the proposed approach and its higher classification accuracy and lower time cost in comparison to other recent relevant methods.
KW - Maschinelles Lernen KW - Machine learning KW - K-nearest neighbors KW - KNN KW - classifier KW - big data KW - clustering KW - cluster shape KW - cluster density KW - classification KW - reinforcement learning KW - data science KW - computation KW - artificial intelligence KW - OA-Publikationsfonds2020 Y1 - 2020 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20200225-40996 UR - https://www.mdpi.com/2227-7390/8/2/286 VL - 2020 IS - volume 8, issue 2, article 286 PB - MDPI ER - TY - INPR A1 - Rezakazemi, Mashallah A1 - Mosavi, Amir A1 - Shirazian, Saeed T1 - ANFIS pattern for molecular membranes separation optimization N2 - In this work, molecular separation of aqueous-organic mixtures was simulated using combined soft-computing and mechanistic approaches. The considered separation system was a microporous membrane contactor for the separation of benzoic acid from water by contact with an organic phase containing extractor molecules. Indeed, extractive separation is carried out using membrane technology, where a solute-organic complex is formed at the interface. The main focus was to develop a simulation methodology for predicting the concentration distribution of the solute (benzoic acid) on the feed side of the membrane system, as the removal efficiency of the system is determined by the concentration distribution of the solute in the feed channel. The pattern of the Adaptive Neuro-Fuzzy Inference System (ANFIS) was optimized by finding the optimum membership function, learning percentage, and number of rules. The ANFIS was trained using data extracted from the CFD simulation of the membrane system. The comparisons between the concentration distribution predicted by ANFIS and the CFD data revealed that the optimized ANFIS pattern can be used as a predictive tool for simulation of the process. An R2 higher than 0.99 was obtained for the optimized ANFIS model.
The main advantage of the developed methodology is its very low computational time for simulating the system; it can be used as a rigorous simulation tool for the understanding and design of membrane-based systems. Highlights: molecular separation using microporous membranes; development of a hybrid ANFIS-CFD model for the separation process; optimization of the ANFIS structure for prediction of the separation process. KW - Fluid KW - Simulation KW - Molecular Liquids KW - optimization KW - machine learning KW - Membrane contactors KW - CFD Y1 - 2018 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20181122-38212 N1 - This is the pre-peer reviewed version of the following article: https://www.sciencedirect.com/science/article/pii/S0167732218345008, which has been published in final form at https://doi.org/10.1016/j.molliq.2018.11.017. VL - 2018 SP - 1 EP - 20 ER - TY - JOUR A1 - Ren, Huilong A1 - Zhuang, Xiaoying A1 - Oterkus, Erkan A1 - Zhu, Hehua A1 - Rabczuk, Timon T1 - Nonlocal strong forms of thin plate, gradient elasticity, magneto-electro-elasticity and phase-field fracture by nonlocal operator method JF - Engineering with Computers N2 - The derivation of nonlocal strong forms for many physical problems remains cumbersome in traditional methods. In this paper, we apply the variational principle/weighted residual method based on the nonlocal operator method to derive nonlocal forms for elasticity, thin plate, gradient elasticity, electro-magneto-elasticity and the phase-field fracture method. The nonlocal governing equations are expressed as an integral form on the support and dual-support. The first example shows that nonlocal elasticity has the same form as dual-horizon non-ordinary state-based peridynamics. The derivation is simple and general, and it can efficiently convert many local physical models into their corresponding nonlocal forms.
In addition, a criterion based on the instability of the nonlocal gradient is proposed for fracture modelling in linear elasticity. Several numerical examples are presented to validate nonlocal elasticity and the nonlocal thin plate. KW - Bruchmechanik KW - Elastizität KW - Peridynamik KW - energy form KW - weak form KW - peridynamics KW - variational principle KW - explicit time integration Y1 - 2021 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20211207-45388 UR - https://link.springer.com/article/10.1007/s00366-021-01502-8 VL - 2021 SP - 1 EP - 22 ER - TY - THES A1 - Ren, Huilong T1 - Dual-horizon peridynamics and Nonlocal operator method N2 - In the last two decades, peridynamics (PD) has attracted much attention in the field of fracture mechanics. One key feature of PD is its nonlocality, which is quite different from the ideas underlying conventional methods such as FEM and meshless methods. However, conventional PD suffers from problems such as the constant horizon, explicit algorithms, and hourglass modes. In this thesis, by examining nonlocality with scrutiny, we propose several new concepts: the dual-horizon (DH) in PD, the dual-support (DS) in smoothed particle hydrodynamics (SPH), nonlocal operators, and the operator energy functional. The conventional PD (SPH) is incorporated in the DH-PD (DS-SPH), which can adopt an inhomogeneous discretization and inhomogeneous support domains. The DH-PD (DS-SPH) can be viewed as a fundamental improvement of the conventional PD (SPH). The dual formulation of PD and SPH allows h-adaptivity while satisfying the conservation of linear momentum, angular momentum, and energy. By developing the concept of nonlocality further, we introduce the nonlocal operator method as a generalization of DH-PD. Combined with the energy functionals of various physical models, the nonlocal forms based on the dual-support concept are derived. In addition, the variation of the energy functional allows an implicit formulation of the nonlocal theory.
Finally, we developed the higher order nonlocal operator method, which is capable of solving higher order partial differential equations on arbitrary domains in higher dimensional spaces. Since the concepts were developed gradually, we describe our findings chronologically. In chapter 2, we developed a DH-PD formulation that includes varying horizon sizes and solves the "ghost force" issue. The concept of the dual-horizon accounts for the unbalanced interactions between particles with different horizon sizes. The present formulation fulfills both the balance of linear momentum and the balance of angular momentum exactly with arbitrary particle discretization. All three peridynamic formulations, namely bond based, ordinary state based and non-ordinary state based peridynamics, can be implemented within the DH-PD framework. A simple adaptive refinement procedure (h-adaptivity) is proposed, reducing the computational cost. Both two- and three-dimensional examples, including the Kalthoff-Winkler experiment and a plate with branching cracks, are tested to demonstrate the capability of the method. In chapter 3, a nonlocal operator method (NOM) based on the variational principle is proposed for the solution of the waveguide problem in computational electromagnetics. Common differential operators as well as the variational forms are defined within the context of nonlocal operators. The present nonlocal formulation allows the assembly of the tangent stiffness matrix with ease, which is necessary for the eigenvalue analysis of the waveguide problem. The present formulation is applied to solve the 1D Schrödinger equation, a 2D electrostatic problem, and the differential electromagnetic vector wave equations based on electric fields. In chapter 4, a general nonlocal operator method is proposed which is applicable to solving partial differential equations (PDEs) of mechanical problems.
The nonlocal operator can be regarded as the integral form "equivalent" to the differential form in the sense of a nonlocal interaction model. The variation of a nonlocal operator plays a role equivalent to that of the derivatives of the shape functions in meshless methods or the finite element method. Based on the variational principle, the residual and the tangent stiffness matrix can be obtained with ease. The nonlocal operator method is also enhanced here with an operator energy functional to satisfy the linear consistency of the field. A highlight of the present method is that the functional derived from the nonlocal operator converts the construction of the residual and stiffness matrix into a series of matrix multiplications using the predefined nonlocal operators. The nonlocal strong forms of different functionals can be obtained easily via the concepts of support and dual-support. Several numerical examples for different types of PDEs are presented. In chapter 5, we extended the NOM to a higher order scheme by using a higher order Taylor series expansion of the unknown field. Such a higher order scheme improves the original NOM of chapters 3 and 4, which can only achieve first-order convergence. The higher order NOM obtains all partial derivatives up to a specified maximal order simultaneously, without resorting to shape functions. The functional based on the nonlocal operators converts the construction of the residual and stiffness matrix into a series of matrix multiplications on the nonlocal operator matrix. Several numerical examples solved in strong form or weak form are presented to show the capabilities of this method. In chapter 6, we address the difficulty that the NOM proposed as a particle-based method in chapters 3, 4 and 5 has in accurately imposing boundary conditions of various orders: we converted the particle-based NOM into a scheme with the interpolation property.
The new scheme describes the partial derivatives of various orders at a point in terms of the nodes in its support and takes advantage of a background mesh for numerical integration. The boundary conditions are enforced via the modified variational principle. The particle-based NOM can be viewed as a special case of the NOM with interpolation property when nodal integration is used. The scheme based on numerical integration greatly improves the stability of the method; as a consequence, the operator energy functional of the particle-based NOM is not required. We demonstrated the capabilities of the current method by solving gradient solid problems and comparing the numerical results with the available exact solutions. In chapter 7, we derived the DS-SPH for solids within the framework of the variational principle. The tangent stiffness matrix of SPH can be obtained with ease and serves as the basis for the present implicit SPH. We proposed an hourglass energy functional, which allows the direct derivation of the hourglass force and the hourglass tangent stiffness matrix. The dual-support is involved in all derivations based on variational principles and is automatically satisfied in the assembly of the stiffness matrix. The implementation of the stiffness matrix comprises two steps: nodal assembly based on the deformation gradient and global assembly over all nodes. Several numerical examples are presented to validate the method. KW - Peridynamik KW - Variational principle KW - weighted residual method KW - gradient elasticity KW - phase field fracture method KW - smoothed particle hydrodynamics KW - numerical methods KW - PDEs Y1 - 2021 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20210412-44039 ER - TY - JOUR A1 - Rafiee, Roham A1 - Rabczuk, Timon A1 - Milani, Abbas S. A1 - Tserpes, Konstantinos I.
T1 - Advances in Characterization and Modeling of Nanoreinforced Composites JF - JOURNAL OF NANOMATERIALS N2 - This special issue deals with a range of recently developed characterization and modeling techniques employed to better understand and predict the response of nanoreinforced composites at different scales. KW - Physikalische Eigenschaft KW - Werkstoff KW - nanoreinforced composites Y1 - 2016 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20170411-31134 ER - TY - INPR A1 - Radmard Rahmani, Hamid A1 - Könke, Carsten T1 - Passive Control of Tall Buildings Using Distributed Multiple Tuned Mass Dampers N2 - The vibration control of tall buildings during earthquake excitations is a challenging task due to their complex seismic behavior. This paper investigates the optimum placement and properties of Tuned Mass Dampers (TMDs) in tall buildings, which are employed to control the vibrations during earthquakes. An algorithm was developed to allocate a limited mass either to a single TMD or to multiple TMDs and to distribute them optimally over the height of the building. The Non-dominated Sorting Genetic Algorithm (NSGA-II) method was improved by adding multi-variant genetic operators and utilized to simultaneously study the optimum design parameters and the optimum placement of the TMDs. The results showed that under earthquake excitations with noticeable amplitude in the higher modes, distributing TMDs over the height of the building is more effective in mitigating the vibrations than using a single TMD system. From the optimization, it was observed that the TMD locations were related to the stories corresponding to the maximum modal displacements in the lower modes and in the modes which were highly activated by the earthquake excitations. It was also noted that the frequency content of the earthquake has a significant influence on the optimum location of the TMDs.
KW - Schwingungsdämpfer KW - Hochbau KW - tall buildings KW - passive control KW - genetic algorithm KW - tuned mass dampers Y1 - 2019 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20190311-38597 UR - https://www.researchgate.net/publication/330508976_Seismic_Control_of_Tall_Buildings_Using_Distributed_Multiple_Tuned_Mass_Dampers ER - TY - THES A1 - Radmard Rahmani, Hamid T1 - Artificial Intelligence Approach for Seismic Control of Structures N2 - In the first part of this research, the utilization of tuned mass dampers in the vibration control of tall buildings during earthquake excitations is studied. The main issues, such as optimizing the parameters of the dampers and studying the effects of the frequency content of the target earthquakes, are addressed. The non-dominated sorting genetic algorithm method is improved by upgrading the genetic operators and is utilized to develop a framework for determining the optimum placement and parameters of dampers in tall buildings. A case study is presented in which the optimal placement and properties of dampers are determined for a model of a tall building under different earthquake excitations through computer simulations. In the second part, a novel framework for brain-learning-based intelligent seismic control of smart structures is developed. In this approach, a deep neural network learns how to improve structural responses during earthquake excitations using feedback control. The reinforcement learning method is improved and utilized to develop a framework for training the deep neural network as an intelligent controller. The efficiency of the developed framework is examined through two case studies, including a single-degree-of-freedom system and a high-rise building under different earthquake excitation records.
The results show that the controller gradually develops an optimum control policy to reduce the vibrations of a structure under earthquake excitation through a cyclical process of actions and observations. It is shown that the controller efficiently improves the structural responses under new earthquake excitations for which it was not trained. Moreover, it is shown that the controller has a stable performance under uncertainties. KW - Erdbeben KW - seismic control KW - tuned mass damper KW - reinforcement learning KW - earthquake KW - machine learning KW - Operante Konditionierung KW - structural control Y1 - 2020 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20200417-41359 ER - TY - THES A1 - Rabizadeh, Ehsan T1 - Goal-oriented A Posteriori Error Estimation and Adaptive Mesh Refinement in 2D/3D Thermoelasticity Problems T1 - Zielorientierte a posteriori Fehlerabschätzung und adaptive Netzverfeinerung bei 2D- und 3D-thermoelastischen Problemen N2 - In recent years, substantial attention has been devoted to thermoelastic multifield problems and their numerical analysis. Thermoelasticity is one of the important categories of multifield problems which deals with the effect of mechanical and thermal disturbances on an elastic body. In other words, thermoelasticity encompasses the phenomena that describe the elastic and thermal behavior of solids and their interactions under thermo-mechanical loadings. Since providing an analytical solution for general coupled thermoelasticity problems is mathematically complicated, the development of alternative numerical solution techniques is essential. Due to the nature of numerical analysis methods, the presence of error in the results is inevitable; therefore, in any numerical simulation, the main concern is the accuracy of the approximation. There are different error estimation (EE) methods to assess the overall quality of a numerical approximation.
In many real-life numerical simulations, not only the overall error but also the local error, or the error in a particular quantity of interest, is of main interest. The error estimation techniques developed to evaluate the error in the quantity of interest are known as “goal-oriented” error estimation (GOEE) methods. This project, for the first time, investigates classical a posteriori error estimation and goal-oriented a posteriori error estimation in 2D/3D thermoelasticity problems. Generally, a posteriori error estimation techniques can be categorized into the two major branches of recovery-based and residual-based error estimators. In this research, the application of both recovery- and residual-based error estimators in thermoelasticity is studied. Moreover, in order to reduce the error in the quantity of interest efficiently and optimally in 2D and 3D thermoelastic problems, goal-oriented adaptive mesh refinement is performed. As the first application category, error estimation in classical thermoelasticity (CTE) is investigated. In the first step, an rh-adaptive thermo-mechanical formulation based on goal-oriented error estimation is proposed. The developed goal-oriented error estimation relies on different stress recovery techniques, i.e., the superconvergent patch recovery (SPR), L2-projection patch recovery (L2-PR), and weighted superconvergent patch recovery (WSPR). Moreover, a new adaptive refinement strategy (ARS) is presented that minimizes the error in a quantity of interest and refines the discretization such that the error is equally distributed in the refined mesh. The method is validated by numerous numerical examples where an analytical solution or reference solution is available.
After investigating error estimation in classical thermoelasticity and evaluating the quality of the presented error estimators, we extended the application of the developed goal-oriented error estimation and the associated adaptive refinement technique to classical fully coupled dynamic thermoelasticity. In this part, we present an adaptive method for coupled dynamic thermoelasticity problems based on goal-oriented error estimation. We use dimensionless variables in the finite element formulation, and for the time integration we employ the acceleration-based Newmark-β method. The SPR, L2-PR, and WSPR recovery methods are exploited to estimate the error in the quantity of interest (QoI). By using adaptive refinement in space, the error in the quantity of interest is minimized; the discretization is refined such that the error is equally distributed in the refined mesh. We demonstrate the efficiency of this method by numerous numerical examples. After studying the recovery-based error estimators, we investigated residual-based error estimation in thermoelasticity. In the last part of this research, we present a 3D adaptive method for thermoelastic problems based on goal-oriented error estimation, where the error is measured with respect to a pointwise quantity of interest. We developed a method for a posteriori error estimation and mesh adaptation based on the dual weighted residual (DWR) method, which relies on duality principles and involves the solution of an adjoint problem. Here, we consider the application of the derived estimator and mesh refinement to two-/three-dimensional (2D/3D) thermo-mechanical multifield problems. In this study, the goal is considered to be given by singular pointwise functions, such as the point value or a point value derivative at a specific point of interest (PoI). An adaptive algorithm has been adopted to refine the mesh so as to minimize the goal error in the quantity of interest.
The mesh adaptivity procedure based on the DWR method is performed by adaptive local h-refinement/coarsening with hanging nodes allowed. According to the proposed DWR method, the error contribution of each element is evaluated. In the refinement process, the contribution of each element to the goal error is taken as the mesh refinement criterion. In this study, we substantiate the accuracy and performance of this method by several numerical examples with available analytical solutions. Here, 2D and 3D problems under thermo-mechanical loadings are considered as benchmark problems. To show how accurately the derived estimator captures the exact error in the evaluation of the pointwise quantity of interest, in all examples the goal error effectivity index, as a standard measure of the quality of an estimator, is calculated with respect to the analytical solutions. Moreover, in order to demonstrate the efficiency of the proposed method and show the optimal behavior of the employed refinement method, the results of different conventional error estimators and refinement techniques (e.g., global uniform refinement, Kelly, and weighted Kelly techniques) are used for comparison. N2 - Einleitung und Motivation: 1- Im Laufe der letzten Jahrzehnte wurde den Mehrfeldproblemen und ihrer numerischen Analyse große Aufmerksamkeit gewidmet. Bei Mehrfeldproblemen wird die Wechselwirkung zwischen verschiedenen Feldern wie elastischen, elektrischen, magnetischen, chemischen oder thermischen Feldern untersucht. Eine wichtige Kategorie von Mehrfeldproblemen ist die Thermoelastizität. In der Thermoelastizität werden neben dem mechanischen Feld (Verschiebungen) auch das thermische Feld (Temperatur) und deren Auswirkungen aufeinander untersucht. 2- In fortgeschrittenen und sensiblen Anwendungen mit Temperaturänderung (z. B. LNG-, CNG- oder LPG-Speichertanks bei Sonnentemperatur im Sommer) ist die Elastizitätstheorie, die nur Verschiebungen berücksichtigt, nicht ausreichend.
In diesen Fällen ist die Verwendung einer thermoelastischen Formulierung unumgänglich, um zuverlässige Ergebnisse zu erzielen. 3- Da eine analytische Lösung für thermoelastische Probleme sehr selten bestimmbar ist, wird sie durch numerische Methoden ersetzt. Allerdings sind die numerischen Ergebnisse nicht exakt und approximieren nur die exakte Lösung. Daher sind Fehler in den numerischen Ergebnissen unvermeidlich. 4- In jeder numerischen Simulation ist die Genauigkeit der Approximation das Hauptanliegen. Daher wurden verschiedene Fehlerschätzungstechniken entwickelt, um den Fehler der numerischen Lösung zu schätzen. Die herkömmlichen Fehlerschätzungsmethoden geben nur einen allgemeinen Überblick über die Gesamtgenauigkeit einer Näherungslösung. Bei vielen realen Problemen ist jedoch anstelle der Gesamtgenauigkeit die örtliche Genauigkeit (z. B. die Genauigkeit an einem bestimmten Punkt) von großem Interesse. 5- Herkömmliche Fehlerschätzer berechnen Fehler in gewissen Normen. In der Ingenieurpraxis interessieren allerdings Fehler in anderen Zielgrößen, beispielsweise in der Last-Verformungs-Kurve oder in gewissen Spannungskomponenten und speziellen Positionen. Dafür wurden sog. zielorientierte Fehlerschätzer entwickelt. 6- Die meisten numerischen Methoden unterteilen das Gebiet in kleine Teile (Element/Zelle), um das Problem zu lösen. Die Verwendung sehr feiner Elemente erhöht die Simulationsgenauigkeit, erhöht aber auch die Rechenzeit drastisch. Dieses Problem wird durch adaptive Methoden (AM) gelöst. AM können die Rechenzeit deutlich verringern. Bei adaptiven Methoden spielt die Fehlerschätzung eine Schlüsselrolle. Die Verfeinerung der Diskretisierung wird von einer Fehlerschätzung der Lösung kontrolliert und gesteuert (Elemente mit einem höheren geschätzten Fehler werden zur Verfeinerung/Aufteilung ausgewählt).
Problemstellung und Zielsetzung der Arbeit 7- Die thermoelastischen Probleme können in zwei Hauptgruppen eingeteilt werden: Klassische Thermoelastizität (KTE) und klassische gekoppelte Thermoelastizität (KKTE). In jeder Gruppe werden verschiedene thermoelastische Probleme mit verschiedenen Geometrien und Rand-/Anfangsbedingungen untersucht. In dieser Untersuchung werden die KTE- und KKTE-Probleme numerisch gelöst und alle numerischen Lösungen durch Fehlerschätzung bewertet. 8- In dieser Arbeit werden die Gesamtgenauigkeit der numerischen Lösung durch herkömmliche globale Fehlerschätzverfahren (auch als recovery-basierte Methoden bekannt) und die Genauigkeit der Lösung in bestimmten Punkten durch neue lokale Methoden (z. B. Dual-gewichtete Residuumsmethode oder DWR-Methode) bewertet. 9- Bei den dynamischen thermoelastischen Problemen ändern sich die Problembedingungen und anschließend die Lösung mit der Zeit. Daher werden die Fehler in jedem Zeitschritt geschätzt, um die Genauigkeit über die Zeit zu erhalten. 10- In dieser Dissertation wurde eine neue adaptive Gitter-Verfeinerung (AGV)-Technik entwickelt und für thermoelastische Probleme implementiert. Stand der Wissenschaft 11- Da die Thermoelastizität im Vergleich zu anderen mechanischen Bereichen wie der Elastizität nicht so umfangreich untersucht ist, wurden nur sehr begrenzte Untersuchungen durchgeführt, um die numerischen Fehler abzuschätzen und zu kontrollieren. Alle diese Untersuchungen konzentrierten sich auf die konventionellen Techniken, die nur den Gesamtfehler abschätzen können. Um die lokalen Fehler (wie punktweise Fehler oder Fehler an einem bestimmten Punkt) abzuschätzen, ist die Verwendung der zielorientierten Fehlerschätzungstechniken unvermeidlich. Die Implementierung der recovery-basierten zielorientierten Fehlerschätzung in der Thermoelastizität wurde vor diesem Projekt nicht untersucht.
12- Viele numerische Analysen der dynamischen thermoelastischen Probleme basieren auf der Laplace-Transformationsmethode. Bei dieser Methode ist es praktisch nicht möglich, den Fehler in jedem Zeitschritt abzuschätzen. Daher wurden bisher die herkömmlichen globalen oder lokalen zielorientierten Fehlerschätzungsverfahren nicht in der dynamischen Thermoelastizität implementiert. 13- Eine der neuesten fortgeschrittenen zielorientierten Fehlerschätzungsmethoden ist die Dual-gewichtete Residuumsmethode (DWR-Methode). Die DWR-Methode, die punktweise Fehler (wie Verschiebungs-, mechanische Spannungs- oder Dehnungsfehler an einem bestimmten Punkt) abschätzen kann, wird bei elastischen Problemen angewendet. Es wurde jedoch kein Versuch unternommen, die DWR-Methode für die thermoelastischen Probleme zu formulieren. 14- In numerischen Simulationen sollte das Gitter verfeinert werden, um den Fehler zu verringern. Viele Verfeinerungstechniken basieren auf den globalen Fehlerschätzern, die versuchen, den Fehler der gesamten Lösung zu reduzieren. Daher sind diese Verfeinerungsmethoden zum Reduzieren der lokalen Fehler nicht effizient. Wenn nur die Lösung an bestimmten Punkten von Interesse ist und der Fehler dort reduziert werden soll, sollten die zielorientierten Verfeinerungsmethoden angewendet werden, die vor dieser Untersuchung nicht in thermoelastischen Problemen entwickelt und implementiert wurden. 15- Die realen Probleme sind in der Regel 3D-Probleme, und die Simulation mit vereinfachten 2D-Fällen zeigt nicht alle Aspekte des Problems. Wie bereits erwähnt, sollten in der numerischen Simulation zur Erhöhung der Genauigkeit Gitterverfeinerungstechniken eingesetzt werden. Die konventionell verfeinerten Gitter, die durch gleichmäßige Aufteilung aller Elemente erreicht werden, erhöhen die Rechenzeit. Diese Simulationszeiterhöhung bei 3D-Problemen ist enorm.
Dieses Problem wird durch die Verwendung der intelligenten Verfeinerung anstelle der globalen gleichmäßigen Verfeinerung gelöst. In diesem Projekt wurde erstmals die zielorientierte adaptive Gitterverfeinerung (AGV) bei thermoelastischen 3D-Problemen entwickelt und implementiert. Forschungsmethodik 16- In dieser Arbeit werden die beiden Haupttypen der thermoelastischen Probleme (KTE und KKTE) untersucht. Das System der partiellen Differentialgleichung der Thermoelastizität besteht aus zwei Hauptgleichungen: der herkömmlichen Gleichgewichtsgleichung und der Energiebilanzgleichung. 17- In diesem Projekt wird die Finite-Elemente-Methode (FEM) verwendet, um die Probleme numerisch zu simulieren. 18- Der Computercode zur Lösung von 2D- und 3D-Problemen wurde in den Programmiersprachen MATLAB bzw. C++ entwickelt. Um die Rechenzeit zu verkürzen und die Computerressourcen effizient zu nutzen, wurden Parallelprogrammierungs- und Optimierungsalgorithmen eingesetzt. 19- Nachdem die Probleme numerisch gelöst wurden, wurden zwei verschiedene Arten von globalen und lokalen Fehlerschätzungstechniken implementiert, um den Fehler zu schätzen und die Genauigkeit der Lösung zu messen. Der globale Typ ist die recovery-basierte zielorientierte Fehlerabschätzung, die wiederum in drei Unterkategorien von SPR-, L2-PR- und WSPR-Methoden unterteilt ist. Der lokale Typ ist die dual-gewichtete residuumsbasierte zielorientierte Fehlerabschätzung. Die Formulierung dieser Methoden wurde für thermoelastische Probleme entwickelt. 20- Schließlich wurde nach der Fehlerschätzung die entwickelte AGV-Methode implementiert.
Wesentliche Ergebnisse und Schlussfolgerungen 21- In diesem Projekt wurde die Fehlerschätzung der Thermoelastizität in den folgenden drei Schritten untersucht: 1- Recovery-basierte Fehlerschätzung in statischen thermoelastischen Problemen (KTE), 2- Recovery-basierte Fehlerabschätzung in dynamischen thermoelastischen Problemen (KKTE), 3- Residuumsbasierte Fehlerschätzung in statischen thermoelastischen Problemen (KTE). 22- Im ersten Schritt wurde das recovery-basierte Fehlerschätzverfahren auf mehrere stationäre thermoelastische Probleme angewendet. Einige der untersuchten Probleme verfügen über analytische Lösungen. Der Vergleich der numerischen Ergebnisse mit der analytischen (exakten) Lösung zeigt, dass die WSPR-Methode die genaueste unter den SPR-, L2-PR- und WSPR-Techniken ist. 23- Darüber hinaus schließen wir aus den Ergebnissen des ersten Schritts, dass die zielorientierte Verfeinerung, im Vergleich zur herkömmlichen gleichmäßigen Total-Verfeinerungsmethode, nur ein Drittel der Unbekannten erfordert, um das Problem mit der gleichen Genauigkeit zu lösen. Daher benötigt die zielorientierte Adaptivität im Vergleich zu herkömmlichen Methoden viel weniger Rechenzeit, um die gleiche Genauigkeit zu erreichen. 24- Im zweiten Schritt sind die Fehlerschätzungstechniken dieselben wie im ersten Schritt, aber die untersuchten Probleme sind dynamisch und nicht statisch. Der Vergleich der numerischen Ergebnisse mit den analytischen Ergebnissen in einem Benchmark-Problem bestätigt die Genauigkeit der verwendeten Methode. 25- Die Ergebnisse des zweiten Schritts zeigen, dass die geschätzten Fehler in allen gekoppelten Problemen niedriger sind als bei den entsprechenden ungekoppelten Problemen. Bei diesen Problemen reduziert die Implementierung der entwickelten adaptiven Methode den Fehler erheblich. 26- Im dritten Schritt wurde das residuumsbasierte Fehlerabschätzungsverfahren auf mehrere thermoelastische Probleme im stationären Zustand angewendet.
In all examples, the accuracy of the method is verified against analytical solutions. The numerical results show very good agreement with the analytical solution for both 2D and 3D problems. 27- In the third step, the results of the DWR refinement are compared with the Kelly, W-Kelly, and uniform total refinement techniques. The developed DWR method shows the best efficiency of these methods. For example, to reach an error tolerance of 10^-6, the DWR mesh contains only 2% of the unknown parameters of a uniformly refined mesh. Using the DWR method therefore saves considerable computation time and cost. KW - Mesh Refinement KW - Thermoelastizität KW - Goal-oriented A Posteriori Error Estimation KW - 2D/3D Adaptive Mesh Refinement KW - Thermoelasticity KW - Deal ii C++ code KW - recovery-based and residual-based error estimators Y1 - 2020 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20201113-42864 ER - TY - JOUR A1 - Rabczuk, Timon A1 - Guo, Hongwei A1 - Zhuang, Xiaoying A1 - Chen, Pengwan A1 - Alajlan, Naif T1 - Stochastic deep collocation method based on neural architecture search and transfer learning for heterogeneous porous media JF - Engineering with Computers N2 - We present a stochastic deep collocation method (DCM) based on neural architecture search (NAS) and transfer learning for heterogeneous porous media. We first carry out a sensitivity analysis to determine the key hyper-parameters of the network to reduce the search space and subsequently employ hyper-parameter optimization to finally obtain the parameter values. The presented NAS based DCM also saves the weights and biases of the most favorable architectures, which are then used in the fine-tuning process. We also employ transfer learning techniques to drastically reduce the computational cost. 
The presented DCM is then applied to the stochastic analysis of heterogeneous porous material. To this end, a three-dimensional stochastic flow model is built, providing a benchmark for the simulation of groundwater flow in highly heterogeneous aquifers. The performance of the presented NAS based DCM is verified in different dimensions using the method of manufactured solutions. We show that it significantly outperforms finite difference methods in both accuracy and computational cost. KW - Maschinelles Lernen KW - Neuronales Lernen KW - Fehlerabschätzung KW - deep learning KW - neural architecture search KW - randomized spectral representation Y1 - 2022 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20220209-45835 UR - https://link.springer.com/article/10.1007/s00366-021-01586-2 VL - 2022 SP - 1 EP - 26 PB - Springer CY - London ER - TY - CHAP A1 - Pham, Hoang Anh ED - Gürlebeck, Klaus ED - Könke, Carsten T1 - ADAPTIVE EXCITATION FOR SELECTIVE SENSITIVITY-BASED STRUCTURAL IDENTIFICATION N2 - Major problems in applying selective sensitivity to system identification are the requirement of precise knowledge of the system parameters and the realization of the required system of forces. This work presents a procedure that is able to derive selectively sensitive excitation through iterative experiments. The first step is to determine the selectively sensitive displacement and selectively sensitive force patterns. These values are obtained by introducing prior information on the system parameters into an optimization which minimizes the sensitivities of the structural response with respect to the unselected parameters while keeping the sensitivities with respect to the selected parameters constant. In a second step the force pattern is used to derive dynamic loads on the tested structure and measurements are carried out. An automatic control ensures the required excitation forces. In a third step, measured outputs are employed to update the prior information. 
The strategy is to minimize the difference between a predicted displacement response, formulated as a function of the unknown parameters and the measured displacements, and the selectively sensitive displacement calculated in the first step. With the updated values of the parameters, a re-analysis of selective sensitivity is performed and the experiment is repeated until the displacement responses of the model and the actual structure agree. As an illustration, a simply supported steel beam subjected to harmonic excitation is investigated, demonstrating that the adaptive excitation can be obtained efficiently. KW - Architektur KW - CAD KW - Computerunterstütztes Verfahren Y1 - 2006 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20170327-30015 UR - http://euklid.bauing.uni-weimar.de/ikm2006/index.php_lang=de&what=papers.html ER - TY - THES A1 - Oucif, Chahmi T1 - Analytical Modeling of Self-Healing and Super Healing in Cementitious Materials N2 - Self-healing materials have recently become more popular due to their capability to autonomously and autogenously repair damage in cementitious materials. The concept of self-healing gives the damaged material the ability to recover its stiffness. This distinguishes it from a material that is not subjected to healing: once such a material is damaged, it cannot sustain loading due to the stiffness degradation. Numerical modeling of self-healing materials is still in its infancy. Numerous experimental studies in the literature describe the self-healing behavior of cementitious materials; however, few numerical investigations have been undertaken. The thesis presents an analytical framework for self-healing and super healing materials based on continuum damage-healing mechanics. Through this framework, we aim to describe the recovery and strengthening of material stiffness and strength. A simple damage healing law is proposed and applied to concrete. 
The proposed damage-healing law is based on a new time-dependent healing variable. The damage-healing model is applied to isotropic concrete at the macroscale under tensile load. Both autonomous and autogenous self-healing mechanisms are simulated under different loading conditions. These two mechanisms are denoted in the present work as coupled and uncoupled self-healing mechanisms, respectively. In the coupled self-healing, we assume that healing occurs at the same time as damage evolution, while in the uncoupled self-healing, we assume that healing occurs when the material is deformed and subjected to a rest period (damage is constant). In order to describe both coupled and uncoupled healing mechanisms, a one-dimensional element is subjected to different types of loading history. In the same context, a derivation of nonlinear self-healing theory is given, and a comparison of linear and nonlinear damage-healing models is carried out using both coupled and uncoupled self-healing mechanisms. The nonlinear healing theory includes generalized nonlinear and quadratic healing models. The healing efficiency is studied by varying the values of the healing rest period and the parameter describing the material characteristics. In addition, a theoretical formulation of different self-healing variables is presented for both isotropic and anisotropic materials. The healing variables are defined based on the recovery in elastic modulus, shear modulus, Poisson's ratio, and bulk modulus. The evolution of the healing variable calculated based on cross-section, as a function of the healing variable calculated based on elastic stiffness, is presented under both the hypothesis of elastic strain equivalence and that of elastic energy equivalence. The components of the fourth-rank healing tensor are also obtained in the cases of isotropic elasticity, plane stress, and plane strain. Recent research revealed that self-healing also presents a crucial solution for the strengthening of materials. 
This new concept has been termed "Super Healing". Once the stiffness of the material is recovered, further healing results in a strengthening of the material. In the present thesis, a new theory of super healing materials is defined for the isotropic and anisotropic cases using sound mathematical and mechanical principles, which are applied in linear and nonlinear super healing theories. Additionally, the link of the proposed theory with the theory of undamageable materials is outlined. In order to describe the super healing efficiency in the linear and nonlinear theories, the ratio of effective stress to nominal stress is calculated as a function of the super healing variable. In addition, the hypotheses of elastic strain and elastic energy equivalence are applied. In the same context, a new super healing matrix in plane strain is proposed based on continuum damage-healing mechanics. In the present work, we also focus on numerical modeling of the impact behavior of reinforced concrete slabs using the commercial finite element package Abaqus/Explicit. Plain and reinforced concrete slabs of unconfined compressive strength 41 MPa are simulated under the impact of an ogive-nosed hard projectile. The constitutive material modeling of the concrete and the steel reinforcement bars is performed using the Johnson-Holmquist-2 damage and Johnson-Cook plasticity material models, respectively. Damage diameters and residual velocities obtained by the numerical model are compared with the experimental results, and the effect of steel reinforcement and projectile diameter is studied. 
KW - Schaden KW - Beschädigung KW - Selbstheilung KW - Zementbeton KW - Damage KW - Healing KW - Concrete KW - Autonomous KW - Autogenous KW - Super Healing Y1 - 2020 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20200831-42296 ER - TY - JOUR A1 - Ouaer, Hocine A1 - Hosseini, Amir Hossein A1 - Amar, Menad Nait A1 - Ben Seghier, Mohamed El Amine A1 - Ghriga, Mohammed Abdelfetah A1 - Nabipour, Narjes A1 - Andersen, Pål Østebø A1 - Mosavi, Amir A1 - Shamshirband, Shahaboddin T1 - Rigorous Connectionist Models to Predict Carbon Dioxide Solubility in Various Ionic Liquids JF - Applied Sciences N2 - Estimating the solubility of carbon dioxide in ionic liquids, using reliable models, is of paramount importance from both environmental and economic points of view. In this regard, the current research aims at evaluating the performance of two data-driven techniques, namely multilayer perceptron (MLP) and gene expression programming (GEP), for predicting the solubility of carbon dioxide (CO2) in ionic liquids (ILs) as a function of pressure, temperature, and four thermodynamic parameters of the ionic liquid. To develop the above techniques, 744 experimental data points derived from the literature, covering 13 ILs, were used (80% of the points for training and 20% for validation). Two backpropagation-based methods, namely Levenberg–Marquardt (LM) and Bayesian Regularization (BR), were applied to optimize the MLP algorithm. Various statistical and graphical assessments were applied to check the credibility of the developed techniques. The results were then compared with those calculated using Peng–Robinson (PR) or Soave–Redlich–Kwong (SRK) equations of state (EoS). The highest coefficient of determination (R2 = 0.9965) and the lowest root mean square error (RMSE = 0.0116) were recorded for the MLP-LMA model on the full dataset (with a negligible difference to the MLP-BR model). 
The comparison of results from this model with the widely applied thermodynamic equation of state models revealed slightly better performance, but the EoS approaches also performed well, with R2 from 0.984 up to 0.996. Lastly, the newly established correlation based on the GEP model exhibited very satisfactory results with overall values of R2 = 0.9896 and RMSE = 0.0201. KW - Maschinelles Lernen KW - Machine learning KW - OA-Publikationsfonds2020 Y1 - 2019 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20200107-40558 UR - https://www.mdpi.com/2076-3417/10/1/304 VL - 2020 IS - Volume 10, Issue 1, 304 PB - MDPI ER - TY - THES A1 - Nouri, Hamidreza T1 - Mechanical Behavior of two dimensional sheets and polymer compounds based on molecular dynamics and continuum mechanics approach N2 - In brief, this thesis encompasses two major parts examining the mechanical responses of polymer compounds and two-dimensional materials: 1- A molecular dynamics approach is used to study the transverse impact behavior of polymers, polymer compounds and two-dimensional materials. 2- The large deflection of circular and rectangular membranes is examined by employing a continuum mechanics approach. Two-dimensional (2D) materials, including graphene and molybdenum disulfide (MoS2), exhibit new and promising physical and chemical properties, opening new opportunities to be utilized alone or to enhance the performance of conventional materials. These 2D materials have attracted tremendous attention owing to their outstanding physical properties, especially concerning transverse impact loading. Polymers, whether with a carbon backbone (organic polymers) or without carbon atoms in the backbone (inorganic polymers), such as polydimethylsiloxane (PDMS), have extraordinary characteristics; in particular, their flexibility allows easy forming and casting. 
These simple shaping processes make polymers an excellent material, often used as a matrix in composites (polymer compounds). In this PhD work, classical Molecular Dynamics (MD) is implemented to calculate the transverse impact loading of 2D materials as well as polymer compounds reinforced with graphene sheets. In particular, MD was adopted to investigate perforation of the target and the impact resistance force. By employing the MD approach, the minimum velocity of the projectile that creates perforation and passes through the target is obtained. The largest investigation focused on how graphene could enhance the impact properties of the compound. A further purpose of this work was to determine the effect of the atomic arrangement of 2D materials on the impact problem. To this aim, the impact properties of two different 2D materials, graphene and MoS2, are studied. The simulation of chemical functionalization was carried out systematically, either with covalently bonded molecules or with non-bonded ones, focusing the following efforts on the covalently bonded species, revealed as the most efficient linkers. To study transverse impact behavior using the classical MD approach, the Large-scale Atomic/Molecular Massively Parallel Simulator (LAMMPS) software, which is well known among researchers, is employed. The simulation is done through predefined commands in LAMMPS. Generally these commands (atom style, pair style, angle style, dihedral style, improper style, kspace style, read data, fix, run, compute and so on) are used to simulate and run the model for the desired outputs. Depending on the particle and model types, suitable inter-atomic potentials (force fields) are considered. The ensembles, constraints and boundary conditions are applied according to the problem definition. To do so, the atomic structures must first be created. Python codes are developed to generate the particles that define the atomic arrangement of each model. 
Each atomic arrangement is introduced separately to LAMMPS for simulation. After the constraints and boundary conditions are applied, LAMMPS integrators such as the velocity-Verlet integrator or Brownian dynamics are used to run the simulation, and finally the outputs are produced. The outputs are inspected carefully to understand the natural behavior of the problem; understanding the natural properties of materials assists in the design of new applicable materials. The investigation of the large deflection of circular and rectangular membranes, which constitutes the second part of this thesis, employs a continuum mechanics approach. Nonlinear Föppl membrane theory, which carefully derives the nonlinear governing equations of motion, is used to establish the nonlinear partial differential equilibrium equations of the membranes under distributed and centric point loads. The Galerkin and energy methods are utilized to solve the nonlinear partial differential equilibrium equations of circular and rectangular plates, respectively. The maximum deflection as well as the stress through the film region, which are key issues in many industrial applications, are obtained. 
T2 - Mechanisches Verhalten von zweidimensionalen Schichten und Polymerverbindungen basierend auf molekulardynamischer und kontinuumsmechanischem Ansatz KW - Molekulardynamik KW - Polymerverbindung KW - Auswirkung KW - Molecular Dynamics Simulation KW - Continuum Mechnics KW - Polymer compound KW - Impact Y1 - 2022 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20220713-46700 ER - TY - JOUR A1 - Noori, Hamidreza A1 - Mortazavi, Bohayra A1 - Keshtkari, Leila A1 - Zhuang, Xiaoying A1 - Rabczuk, Timon T1 - Nanopore creation in MoS2 and graphene monolayers by nanoparticles impact: a reactive molecular dynamics study JF - Applied Physics A N2 - In this work, extensive reactive molecular dynamics simulations are conducted to analyze nanopore creation by nanoparticle impact on single-layer molybdenum disulfide (MoS2) with 1T and 2H phases. We also compare the results with a graphene monolayer. In our simulations, nanosheets are exposed to a spherical rigid carbon projectile with high initial velocities ranging from 2 to 23 km/s. Results for three different structures are compared to examine the most critical factors in the perforation and resistance force during the impact. To analyze the perforation and impact resistance, the kinetic energy and displacement time history of the projectile as well as the perforation resistance force of the projectile are investigated. Interestingly, although the elastic modulus and tensile strength of graphene are almost five times higher than those of MoS2, the results demonstrate that the 1T and 2H MoS2 phases are more resistant to impact loading and perforation than graphene. For the MoS2 nanosheets, we find that the 2H phase is more resistant to impact loading than the 1T counterpart. Our reactive molecular dynamics results highlight that in addition to strength and toughness, the atomic structure is another crucial factor that can contribute substantially to the impact resistance of 2D materials. 
The obtained results can be useful for guiding experimental setups for nanopore creation in MoS2 or other 2D lattices. KW - Nanomechanik KW - Molekülstruktur KW - Nanoporöser Stoff KW - MoS2 KW - molecular dynamics KW - Nanopore KW - Graphene Y1 - 2021 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20210804-44756 UR - https://link.springer.com/article/10.1007/s00339-021-04693-5 VL - 2021 IS - volume 127, article 541 SP - 1 EP - 13 PB - Springer CY - Heidelberg ER - TY - GEN A1 - Nikulla, Susanne T1 - Untersuchung des dynamischen Verhaltens von Eisenbahnbrücken bei wechselnden Umweltbedingungen N2 - In the course of upgrading railway lines for high-speed traffic, it must be ensured that no resonance occurs between the periodically acting wheel loads and the natural frequencies of the bridge. Investigations of individual structures have in some cases revealed quite large seasonal variations in the dynamic behavior. To characterize these observations more precisely, acceleration measurements were carried out over a period of 15 months on two selected rolled-girder-in-concrete bridges. The acquired data were evaluated with the stochastic subspace method, which is explained in more detail in the first part of the work. For all eigenmodes, a decrease of the natural frequency with rising temperature was observed. To investigate the causes in more detail, a finite element model of one of the two bridges was created with the program SLang. By means of a sensitivity analysis, the system properties governing the vibration behavior were identified. The subsequent structural optimization, using a genetic algorithm as well as the adaptive response surface method, revealed the temperature dependence of individual material parameters, which represents at least one cause of the variations in the natural frequencies. 
KW - Dynamik KW - Systemidentifikation KW - Beschleunigungsmessung KW - Strukturoptimierung KW - Modalanalyse KW - Lufttemperatur KW - Zustandsraummodell KW - Stochastic Subspace Identification Y1 - 2008 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20081020-14324 ER - TY - THES A1 - Nickerson, Seth T1 - Thermo-Mechanical Behavior of Honeycomb, Porous, Microcracked Ceramics BT - Characterization and analysis of thermally induced stresses with specific consideration of synthetic, porous cordierite honeycomb substrates N2 - The underlying goal of this work is to reduce the uncertainty related to thermally induced stress prediction. This is accomplished by considering the use of non-linear material behavior, notably path-dependent thermal hysteresis behavior in the elastic properties. The primary novel factors of this work center on two aspects. 1. Broad material characterization and mechanistic material understanding, giving insight into why this class of material behaves in characteristic manners. 2. Development and implementation of a thermal hysteresis material model and its use to determine the impact on overall macroscopic stress predictions. Results highlight microcracking evolution and behavior as the dominant mechanism behind the material property complexity of this class of materials. Additionally, it was found that for the cases studied, thermal hysteresis behavior impacts relevant peak stress predictions of a heavy-duty diesel particulate filter undergoing a drop-to-idle regeneration by less than ~15% for all conditions tested. It is also found that path-independent heating curves may be utilized under a linear solution assumption to simplify analysis. 
This work brings forth a newly conceived concept of a three-state, four-path, thermally induced microcrack evolution process; demonstrates experimental behavior that is consistent with the proposed mechanisms; develops a mathematical framework that describes the process; and quantifies the impact in a real-world application space. T3 - ISM-Bericht // Institut für Strukturmechanik, Bauhaus-Universität Weimar - 2019,4 KW - Keramik KW - ceramics Y1 - 2019 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20190911-39753 ER - TY - CHAP A1 - Nguyen-Tuan, Long A1 - Lahmer, Tom A1 - Datcheva, Maria A1 - Stoimenova, Eugenia A1 - Schanz, Tom ED - Gürlebeck, Klaus ED - Lahmer, Tom T1 - PARAMETER IDENTIFICATION APPLYING IN COMPLEX THERMO-HYDRO-MECHANICAL PROBLEMS LIKE THE DESIGN OF BUFFER ELEMENTS T2 - Digital Proceedings, International Conference on the Applications of Computer Science and Mathematics in Architecture and Civil Engineering : July 20 - 22 2015, Bauhaus-University Weimar N2 - This study contributes to the identification of coupled THM constitutive model parameters via back analysis against information-rich experiments. A sampling-based back analysis approach is proposed, comprising both the model parameter identification and the assessment of the reliability of the identified model parameters. The results obtained in the context of buffer elements indicate that sensitive parameter estimates generally obey the normal distribution. Based on the sensitivity of the parameters and the probability distribution of the samples, we can provide confidence intervals for the estimated parameters and thus allow a qualitative assessment of the identified parameters, which in future work will be used as inputs for prognosis computations of buffer elements. These elements play an important role, e.g., in the design of nuclear waste repositories. 
KW - Angewandte Informatik KW - Angewandte Mathematik KW - Building Information Modeling KW - Computerunterstütztes Verfahren KW - Data, information and knowledge modeling in civil engineering; Function theoretic methods and PDE in engineering sciences; Mathematical methods for (robotics and) computer vision; Numerical modeling in engineering; Optimization in engineering applications Y1 - 2015 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20170314-28162 SN - 1611-4086 ER - TY - CHAP A1 - Nguyen-Thanh, Nhon A1 - Rabczuk, Timon ED - Gürlebeck, Klaus ED - Könke, Carsten T1 - A SMOOTHED FINITE ELEMENT METHOD FOR THE STATIC AND FREE VIBRATION ANALYSIS OF SHELLS N2 - A four-node quadrilateral shell element with smoothed membrane-bending based on Mindlin-Reissner theory is proposed. The element is a combination of a plate bending and membrane element. It is based on mixed interpolation where the bending and membrane stiffness matrices are calculated on the boundaries of the smoothing cells while the shear terms are approximated by independent interpolation functions in natural coordinates. The proposed element is robust, computationally inexpensive and free of locking. Since the integration is done on the element boundaries for the bending and membrane terms, the element is more accurate than the MITC4 element for distorted meshes. This will be demonstrated for several numerical examples. 
KW - Angewandte Informatik KW - Angewandte Mathematik KW - Architektur KW - Computerunterstütztes Verfahren KW - Computer Science Models in Engineering; Multiscale and Multiphysical Models; Scientific Computing Y1 - 2010 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20170314-28777 UR - http://euklid.bauing.uni-weimar.de/ikm2009/paper.html SN - 1611-4086 ER - TY - THES A1 - Nariman, Nazim T1 - Numerical Methods for the Multi-Physical Analysis of Long Span Cable-Stayed Bridges N2 - The main categories of wind effects on long span bridge decks are buffeting, flutter, and vortex-induced vibrations (VIV), which are often critical for the safety and serviceability of the structure. With the rapid increase of bridge spans, research on controlling wind-induced vibrations of long span bridges has become a problem of great concern. The development of vibration control theories has led to the wide use of tuned mass dampers (TMDs), which have been proven, both analytically and experimentally, to be effective for suppressing these vibrations. Fire incidents are also of special interest for the stability and safety of long span bridges due to the significant role of the complex triple interaction between the deck, the incoming wind flow, and the thermal boundary of the surrounding air. This work begins with analyzing the buffeting response and flutter instability of three-dimensional computational structural dynamics (CSD) models of a cable-stayed bridge under strong wind excitations using the commercial finite element software ABAQUS. Optimization and global sensitivity analysis are utilized to target the vertical and torsional vibrations of the segmental deck by considering three aerodynamic parameters (wind attack angle, deck streamlined length and viscous damping of the stay cables). 
The numerical simulation results, in conjunction with the frequency analysis results, confirmed the existence of these vibrations, and further theoretical studies are possible with a high level of accuracy. Model validation is performed by comparing the results for the lift and moment coefficients between the created CSD models and two benchmarks from the literature, flat plate theory and the flat plate of Xavier and co-authors, which resulted in very good agreement. Optimum values of the parameters have been identified. Global sensitivity analysis based on the Monte Carlo sampling method was utilized to formulate the surrogate models and calculate the sensitivity indices. The rational effect and the role of each parameter on the aerodynamic stability of the structure were calculated, and efficient insight into the stability of the long span bridge was obtained. 2D computational fluid dynamics (CFD) models of the deck are created with the support of MATLAB codes to simulate and analyze the vortex shedding and VIV of the deck. Three aerodynamic parameters (wind speed, deck streamlined length and dynamic viscosity of the air) are used to study their effects on the kinetic energy of the system and on the vortex shapes and patterns. Two benchmarks from the literature, von Kármán and Dyrbye and Hansen, are used to validate the numerical simulations of the vortex shedding for the CFD models. Good agreement between the results was observed. The Latin hypercube experimental method is used to generate the surrogate models for the kinetic energy of the system and the generated lift forces. Variance-based sensitivity analysis is utilized to calculate the main sensitivity indices and the interaction orders for each parameter. The kinetic energy approach performed very well in revealing the rational effect and the role of each parameter in the generation of vortex shedding and in predicting the early VIV and the critical wind speed. 
Both one-way fluid-structure interaction (one-way FSI) simulations and two-way fluid-structure interaction (two-way FSI) co-simulations for the 2D models of the deck are executed to calculate the shedding frequencies for the associated wind speeds in the lock-in region, in addition to the lift and drag coefficients. Validation is carried out against the results of Simiu and Scanlan and the flat plate theory results compiled by Munson and co-authors, respectively. High levels of agreement between all results were observed. A decrease in the critical wind speed and the shedding frequencies was identified for the two-way FSI compared with the one-way FSI. The results from the two-way FSI approach predicted an appreciable decrease in the lift and drag forces, as well as earlier VIV at lower critical wind speeds and lock-in regions located at lower natural frequencies of the system. These conclusions help designers to efficiently plan for the design and safety of the long span bridge before and after construction. A multiple tuned mass dampers (MTMDs) system has been applied in the three-dimensional CSD models of the cable-stayed bridge to analyze its control efficiency in suppressing both wind-induced vertical and torsional vibrations of the deck by optimizing three design parameters (mass ratio, frequency ratio and damping ratio) of the TMDs, supported by actual field data and a minimax optimization technique, in addition to MATLAB codes and the Fast Fourier Transform technique. The optimum values of each parameter were identified and validated with two benchmarks from the literature, first with Wang and co-authors and then with Lin and co-authors. The validation procedure showed good agreement between the results. The Box-Behnken experimental method is used to formulate the surrogate models representing the control efficiency for the vertical and torsional vibrations. 
Sobol's sensitivity indices are calculated for the design parameters, in addition to their interaction orders. The optimization results revealed better performance of the MTMDs in controlling both the vertical and the torsional vibrations for higher mode shapes. Furthermore, the calculated rational effect of each design parameter helps increase the control efficiency of the MTMDs, in conjunction with the support of the surrogate models, which simplifies the analysis process for vibration control to a great extent. A novel structural modification approach has been adopted to eliminate the early coupling between the bending and torsional mode shapes of the cable-stayed bridge: two lateral steel beams are added to the middle span of the structure. Frequency analysis is used to obtain the natural frequencies of the first eight mode shapes of vibration before and after the structural modification. Numerical simulations of wind excitations are conducted for the 3D model of the cable-stayed bridge. Both vertical and torsional displacements are calculated at the mid span of the deck to analyze the bending and the torsional stiffness of the system before and after the structural modification. The frequency analysis results after applying the lateral steel beams showed that the coupling between the vertical and torsional mode shapes of vibration has been shifted to larger natural frequency magnitudes and higher, rarely occurring critical wind speeds, with a high factor of safety. Finally, thermal fluid-structure interaction (TFSI) and coupled thermal-stress analysis are utilized to identify the effects of transient and steady-state heat transfer on the VIV and fatigue of the deck due to fire incidents. Numerical simulations of TFSI models of the deck are used to calculate the lift and drag forces, in addition to determining the lock-in regions once using FSI models and again using TFSI models. 
Vorticity and thermal fields of three fire scenarios are simulated and analyzed. The benchmark of Simiu and Scanlan is used to validate the TFSI models, and good agreement was found between the two sets of results. The extended finite element method (XFEM) is adopted to create 3D models of the cable-stayed bridge to simulate the fatigue of the deck considering three fire scenarios. The benchmark of Choi and Shin is used to validate the damaged models of the deck, with which good coincidence was observed. The results revealed that the TFSI models and the coupled thermal-stress models are significant in detecting earlier vortex-induced vibration and lock-in regions, in addition to predicting damage and fatigue of the deck and identifying the role of wind-induced vibrations in accelerating damage generation and the collapse of the structure in critical situations. KW - Stabilität KW - Brückenbau KW - Aerodynamic Stability KW - Vortex Induced Vibration KW - Fluid-Structure Interaction KW - Mass Tuned Damper KW - Thermal Fluid-Structure Interaction Y1 - 2017 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20171122-37105 ER - TY - THES A1 - Nanthakumar, S.S. T1 - Inverse and optimization problems in piezoelectric materials using Extended Finite Element Method and Level sets T1 - Inverse und Optimierungsprobleme für piezoelektrische Materialien mit der Extended Finite Elemente Methode und Level sets N2 - Piezoelectric materials are used in several applications as sensors and actuators, where they experience high stress and electric field concentrations, as a result of which they may fail due to fracture. Though there are many analytical and experimental works on piezoelectric fracture mechanics, there are very few studies about damage detection, which is an interesting way to prevent the failure of these ceramics. An iterative method to treat the inverse problem of detecting cracks and voids in piezoelectric structures is proposed. 
The extended finite element method (XFEM) is employed for solving the inverse problem, as it allows the use of a single regular mesh for a large number of iterations with different flaw geometries. Firstly, minimization of the cost function is performed by the Multilevel Coordinate Search (MCS) method. The XFEM-MCS methodology is applied to two-dimensional electromechanical problems where the flaws considered are straight cracks and elliptical voids. Then a numerical method based on the combination of the classical shape derivative and the level set method for front propagation, as used in structural optimization, is utilized to minimize the cost function. The results obtained show that the XFEM-level set methodology is able to effectively determine the number of voids in a piezoelectric structure and their corresponding locations. The XFEM-level set methodology is improved to solve the inverse problem of detecting inclusion interfaces in a piezoelectric structure. The material interfaces are implicitly represented by level sets, which are identified by applying regularisation using total variation penalty terms. The formulation is presented for three-dimensional structures, and inclusions made of different materials are detected by using multiple level sets. The results obtained prove that the proposed iterative procedure can determine the location and approximate shape of material subdomains in the presence of higher noise levels. Piezoelectric nanostructures exhibit size-dependent properties because of surface elasticity and surface piezoelectricity. Initially, a study is performed to understand the influence of surface elasticity on the optimization of nano elastic beams. The boundary of the nanostructure is implicitly represented by a level set function, which is considered as the design variable in the optimization process. 
Two objective functions, minimizing the total potential energy of a nanostructure subjected to a material volume constraint and minimizing the least-square error with respect to a target displacement, are chosen for the numerical examples. The numerical examples demonstrate the importance of size and aspect ratio in determining how surface effects impact the optimized topology of nanobeams. Finally, a conventional cantilever energy harvester with a piezoelectric nano layer is analysed. The presence of surface piezoelectricity in nano beams and nano plates leads to an increase in the electromechanical coupling coefficient. Topology optimization of these piezoelectric structures in an energy harvesting device, to further increase energy conversion, is performed using an appropriately modified XFEM-level set algorithm. KW - Finite-Elemente-Methode KW - Piezoelectricity KW - Inverse problems KW - Optimization problems KW - Nanostructures KW - XFEM KW - level set method KW - Surface effects Y1 - 2016 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20161128-27095 ER - TY - JOUR A1 - Nabipour, Narjes A1 - Mosavi, Amir A1 - Baghban, Alireza A1 - Shamshirband, Shahaboddin A1 - Felde, Imre T1 - Extreme Learning Machine-Based Model for Solubility Estimation of Hydrocarbon Gases in Electrolyte Solutions JF - Processes N2 - Calculating the solubility of the hydrocarbon components of natural gases is known as one of the important issues in operational work in petroleum and chemical engineering. In this work, a novel solubility estimation tool is proposed for hydrocarbon gases (including methane, ethane, propane, and butane) in aqueous electrolyte solutions, based on the extreme learning machine (ELM) algorithm. Comparing the ELM outputs with a comprehensive real databank of 1175 solubility points yielded R-squared values of 0.985 and 0.987 for the training and testing phases, respectively. 
Furthermore, a visual comparison of the estimated and actual hydrocarbon solubilities confirmed the ability of the proposed solubility model. Additionally, sensitivity analysis was employed on the input variables of the model to identify their impacts on hydrocarbon solubility. Such a comprehensive and reliable study can help engineers and scientists to successfully determine the important thermodynamic properties, which are key factors in optimizing and designing different industrial units such as refineries and petrochemical plants. KW - Maschinelles Lernen KW - Machine learning KW - Deep learning Y1 - 2020 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20200113-40624 UR - https://www.mdpi.com/2227-9717/8/1/92 VL - 2020 IS - Volume 8, Issue 1, 92 PB - MDPI ER - TY - JOUR A1 - Nabipour, Narjes A1 - Dehghani, Majid A1 - Mosavi, Amir A1 - Shamshirband, Shahaboddin T1 - Short-Term Hydrological Drought Forecasting Based on Different Nature-Inspired Optimization Algorithms Hybridized With Artificial Neural Networks JF - IEEE Access N2 - Hydrological drought forecasting plays a substantial role in water resources management. Hydrological drought highly affects water allocation and hydropower generation. In this research, short-term hydrological drought was forecasted based on the hybridization of novel nature-inspired optimization algorithms and Artificial Neural Networks (ANN). For this purpose, the Standardized Hydrological Drought Index (SHDI) and the Standardized Precipitation Index (SPI) were calculated over one, three, and six aggregated months. Then, three states were proposed for SHDI forecasting, and 36 input-output combinations were extracted based on cross-correlation analysis. 
In the next step, newly proposed optimization algorithms, including the Grasshopper Optimization Algorithm (GOA), the Salp Swarm Algorithm (SSA), Biogeography-Based Optimization (BBO), and Particle Swarm Optimization (PSO), hybridized with the ANN, were utilized for SHDI forecasting, and the results were compared to the conventional ANN. Results indicated that the hybridized models outperformed the conventional ANN. PSO performed better than the other optimization algorithms. The best models forecasted SHDI1 with R2 = 0.68 and RMSE = 0.58, SHDI3 with R2 = 0.81 and RMSE = 0.45, and SHDI6 with R2 = 0.82 and RMSE = 0.40. KW - Maschinelles Lernen KW - Machine learning KW - Deep learning KW - Hydrological drought KW - precipitation KW - hydrology Y1 - 2020 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20200213-40796 UR - https://ieeexplore.ieee.org/document/8951168 VL - 2020 IS - volume 8 SP - 15210 EP - 15222 PB - IEEE ER - TY - THES A1 - Msekh, Mohammed Abdulrazzak T1 - Phase Field Modeling for Fracture with Applications to Homogeneous and Heterogeneous Materials N2 - The thesis presents an implementation, including different applications, of a variational-based approach for gradient-type standard dissipative solids. The phase field model for brittle fracture is an application of the variational-based framework for gradient-type solids. This model allows the prediction of different crack topologies and states. Of significant concern is the implementation of the theoretical and numerical formulation of phase field modeling in the commercial finite element software Abaqus in 2D and 3D. The fully coupled incremental variational formulation of the phase field method is implemented using the UEL and UMAT subroutines of Abaqus. The phase field method considerably reduces the implementation complexity of fracture problems, as it removes the need for numerical tracking of discontinuities in the displacement field that are characteristic of discrete crack methods. 
This is accomplished by replacing the sharp discontinuities with a scalar damage phase field representing the diffuse crack topology, wherein the amount of diffusion is controlled by a regularization parameter. The nonlinear coupled system, consisting of the linear momentum equation and a diffusion-type equation governing the phase field evolution, is solved simultaneously via a Newton-Raphson approach. Post-processing of simulation results for visualization is performed via an additional UMAT subroutine implemented for the standard Abaqus viewer. In the same context, we propose a simple yet effective algorithm to initiate and propagate cracks in 2D geometries which is independent of both particular constitutive laws and specific element technology and dimension. It consists of a localization limiter in the form of the screened Poisson equation with, optionally, local mesh refinement. A staggered scheme for the standard equilibrium and screened Cauchy equations is used. The remeshing part of the algorithm consists of a sequence of mesh subdivision and element erosion steps. Element subdivision is based on edge-split operations using a given constitutive quantity (either damage or void fraction). Mesh smoothing makes use of edge contraction as a function of a given constitutive quantity such as the principal stress or void fraction. To assess the robustness and accuracy of this algorithm, we use both quasi-brittle benchmarks and ductile tests. Furthermore, we introduce a computational approach to mechanical loading at the microscale of an inelastically deforming composite material. The nanocomposite material of fully exfoliated clay/epoxy is modeled to predict macroscopic elastic and fracture-related material parameters based on its fine-scale features. Two different configurations of polymer nanocomposite material (PNCs) have been studied. 
These configurations are fully bonded PNCs and PNCs with an interphase zone formed between the matrix and the clay reinforcement. The representative volume elements of PNC specimens with different clay weight contents, different aspect ratios, and different interphase zone thicknesses are generated by Python scripting. Different constitutive models are employed for the matrix, the clay platelets, and the interphase zones. The brittle fracture behavior of the epoxy matrix and the interphase zone material is modeled using the phase field approach, whereas the stiff silicate clay platelets of the composite are treated as a linear elastic material. The comprehensive study investigates the elastic and fracture behavior of PNC composites, in addition to predicting Young's modulus, tensile strength, fracture toughness, surface energy dissipation, and crack surface area in the composite for different material parameters, geometries, and interphase zone properties and thicknesses. T2 - Phasenfeldmodellierung für Brüche mit Anwendungen auf homogene und heterogene Materialien KW - Finite-Elemente-Methode KW - Phase field model KW - Fracture KW - Abaqus KW - Finite Element Model Y1 - 2017 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20170615-32291 ER - TY - JOUR A1 - Mousavi, Seyed Nasrollah A1 - Steinke Júnior, Renato A1 - Teixeira, Eder Daniel A1 - Bocchiola, Daniele A1 - Nabipour, Narjes A1 - Mosavi, Amir A1 - Shamshirband, Shahaboddin T1 - Predictive Modeling the Free Hydraulic Jumps Pressure through Advanced Statistical Methods JF - Mathematics N2 - Pressure fluctuations beneath hydraulic jumps potentially endanger the stability of stilling basins. This paper deals with the mathematical modeling of the results of laboratory-scale experiments to estimate the extreme pressures. Experiments were carried out on a smooth stilling basin underneath free hydraulic jumps downstream of an Ogee spillway. 
From the probability distribution of the measured instantaneous pressures, pressures with different probabilities could be determined. It was verified that the maximum pressure fluctuations, and the negative pressures, are located near the spillway toe. Also, the minimum pressure fluctuations are located downstream of the hydraulic jumps. It was possible to assess the cumulative curves of pressure data related to the characteristic points along the basin for different Froude numbers. To benchmark the results, the dimensionless forms of statistical parameters, including mean pressures (P*m), the standard deviations of pressure fluctuations (σ*X), pressures with different non-exceedance probabilities (P*k%), and the statistical coefficient of the probability distribution (Nk%), were assessed. It was found that an existing method can be used to interpret the present data, and the pressure distribution under similar conditions, by using new second-order fractional relationships for σ*X and Nk%. The values of the Nk% coefficient indicated a single mean value for each probability. KW - Maschinelles Lernen KW - Machine learning KW - mathematical modeling KW - extreme pressure KW - hydraulic jump KW - stilling basin KW - standard deviation of pressure fluctuations KW - statistical coefficient of the probability distribution Y1 - 2020 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20200402-41140 UR - https://www.mdpi.com/2227-7390/8/3/323 VL - 2020 IS - Volume 8, Issue 3, 323 PB - MDPI CY - Basel ER - TY - CHAP A1 - Most, Thomas A1 - Eckardt, Stefan A1 - Schrader, Kai A1 - Deckner, T. ED - Gürlebeck, Klaus ED - Könke, Carsten T1 - AN IMPROVED COHESIVE CRACK MODEL FOR COMBINED CRACK OPENING AND SLIDING UNDER CYCLIC LOADING N2 - The modeling of crack propagation in plain and reinforced concrete structures is still an active research field for many researchers. 
If a macroscopic description of the cohesive cracking process of concrete is applied, generally the Fictitious Crack Model is utilized, where force transmission over micro cracks is assumed. In most applications of this concept the cohesive model represents the relation between the normal crack opening and the normal stress, which is mostly defined as an exponential softening function, independently of the shear stresses in the tangential direction. The cohesive forces are then calculated only from the normal stresses. An improved model was developed by Carol et al. (1997) using a coupled relation between the normal and shear damage based on an elasto-plastic constitutive formulation. This model is based on a hyperbolic yield surface depending on the normal and shear stresses and on the tensile and shear strengths. It also represents the effect of shear-traction-induced crack opening. Due to the elasto-plastic formulation, where the inelastic crack opening is represented by plastic strains, this model is limited to applications with monotonic loading. In order to enable its application to cases with un- and reloading, the existing model is extended in this study using a combined plastic-damage formulation, which enables the modeling of crack opening and crack closure. Furthermore, the corresponding algorithmic implementation using a return mapping approach is presented, and the model is verified by means of several numerical examples. Finally, an investigation concerning the identification of the model parameters by means of neural networks is presented. In this analysis an inverse approximation of the model parameters is performed by using a given set of points of the load-displacement curves as input values and the model parameters as output terms. It is shown that the elasto-plastic model parameters can be identified well with this approach, but this requires a huge number of simulations. 
KW - Architektur KW - CAD KW - Computerunterstütztes Verfahren Y1 - 2006 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20170327-29933 UR - http://euklid.bauing.uni-weimar.de/ikm2006/index.php_lang=de&what=papers.html ER - TY - CHAP A1 - Most, Thomas A1 - Bucher, Christian A1 - Macke, M. ED - Gürlebeck, Klaus ED - Könke, Carsten T1 - A NATURAL NEIGHBOR BASED MOVING LEAST SQUARES APPROACH WITH INTERPOLATING WEIGHTING FUNCTION N2 - The Element-free Galerkin Method has become a very popular tool for the simulation of mechanical problems with moving boundaries. The internally applied Moving Least Squares approximation uses in general Gaussian or cubic weighting functions and has compact support. Due to the approximative character of this method the obtained shape functions do not fulfill the interpolation condition, which causes additional numerical effort for the imposition of the essential boundary conditions. The application of a singular weighting function, which leads to singular coefficient matrices at the nodes, can solve this problem, but requires a very careful placement of the integration points. Special procedures for the handling of such singular matrices were proposed in literature, which require additional numerical effort. In this paper a non-singular weighting function is presented, which leads to an exact fulfillment of the interpolation condition. This weighting function leads to regular values of the weights and the coefficient matrices in the whole interpolation domain even at the nodes. Furthermore this function gives much more stable results for varying size of the influence radius and for strongly distorted nodal arrangements than classical weighting function types. Nevertheless, for practical applications the results are similar as these obtained with the regularized weighting type presented by the authors in previous publications. 
Finally, a new concept is presented which enables an efficient analysis of systems with strongly varying node density. In this concept the nodal influence domains are adapted depending on the nodal configuration by interpolating the influence radius for each direction from the distances to the natural neighbor nodes. This approach requires a Voronoi diagram of the domain, which is available in this study since Delaunay triangles are used as integration background cells. The numerical examples show that this method leads to a more uniform and reduced number of influencing nodes for systems with varying node density than classical circular influence domains, which means that the small additional numerical effort for interpolating the influence radius leads to a remarkable reduction of the total numerical cost in a linear analysis while obtaining similar results. For nonlinear calculations this advantage would be even more significant. KW - Architektur KW - CAD KW - Computerunterstütztes Verfahren Y1 - 2006 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20170327-29943 UR - http://euklid.bauing.uni-weimar.de/ikm2006/index.php_lang=de&what=papers.html ER - TY - CHAP A1 - Most, Thomas A1 - Bucher, Christian ED - Gürlebeck, Klaus ED - Könke, Carsten T1 - ADAPTIVE RESPONSE SURFACE APPROACH USING ARTIFICIAL NEURAL NETWORKS AND MOVING LEAST SQUARES N2 - In engineering science the modeling and numerical analysis of complex systems and relations plays an important role. In order to perform such an investigation, for example a stochastic analysis, in a reasonable computational time, approximation procedures have been developed. A very popular approach is the response surface method, where the relation between input and output quantities is represented, for example, by global polynomials or local interpolation schemes such as Moving Least Squares (MLS). In recent years artificial neural networks (ANN) have been applied for such purposes as well. 
Recently an adaptive response surface approach for reliability analyses was proposed, which is very efficient concerning the number of expensive limit state function evaluations. Due to the applied simplex interpolation the procedure is limited to small dimensions. In this paper this approach is extended to larger dimensions using combined ANN and MLS response surfaces for evaluating the adaptation criterion with only one set of joined limit state points. As the adaptation criterion, a combination of the maximum difference in the conditional probabilities of failure and the maximum difference in the approximated radii is applied. Compared to response surfaces on directional samples or to plain directional sampling, the failure probability can be estimated with a much smaller number of limit state points. KW - Architektur KW - CAD KW - Computerunterstütztes Verfahren Y1 - 2006 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20170327-29922 UR - http://euklid.bauing.uni-weimar.de/ikm2006/index.php_lang=de&what=papers.html ER - TY - JOUR A1 - Mosavi, Amir A1 - Shamshirband, Shahaboddin A1 - Esmaeilbeiki, Fatemeh A1 - Zarehaghi, Davoud A1 - Neyshabouri, Mohammadreza A1 - Samadianfard, Saeed A1 - Ghorbani, Mohammad Ali A1 - Nabipour, Narjes A1 - Chau, Kwok-Wing T1 - Comparative analysis of hybrid models of firefly optimization algorithm with support vector machines and multilayer perceptron for predicting soil temperature at different depths JF - Engineering Applications of Computational Fluid Mechanics N2 - This research aims to model soil temperature (ST) using the machine learning models of the multilayer perceptron (MLP) algorithm and the support vector machine (SVM) in hybrid form with the firefly optimization algorithm, i.e. MLP-FFA and SVM-FFA. In the current study, measured ST and meteorological parameters of the Tabriz and Ahar weather stations over the period 2013–2015 are used for training and testing of the studied models with delays of one and two days. 
To ascertain conclusive results for validation of the proposed hybrid models, the error metrics are benchmarked over an independent testing period. Moreover, Taylor diagrams were utilized for that purpose. The obtained results showed that, in the case of a one-day delay, except in predicting ST at 5 cm below the soil surface (ST5cm) at the Tabriz station, MLP-FFA produced superior results compared with the MLP, SVM, and SVM-FFA models. However, for a two-day delay, MLP-FFA showed increased accuracy in predicting ST5cm and ST20cm at the Tabriz station and ST10cm at the Ahar station in comparison with SVM-FFA. Additionally, for all of the prescribed models, the performance of the MLP-FFA and SVM-FFA hybrid models in the testing phase was found to be meaningfully superior to that of the classical MLP and SVM models. KW - Bodentemperatur KW - Algorithmus KW - Maschinelles Lernen KW - Neuronales Netz KW - firefly optimization algorithm KW - soil temperature KW - artificial neural networks KW - hybrid machine learning KW - OA-Publikationsfonds2019 Y1 - 2020 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20200911-42347 UR - https://www.tandfonline.com/doi/full/10.1080/19942060.2020.1788644 VL - 2020 IS - Volume 14, Issue 1 SP - 939 EP - 953 ER - TY - JOUR A1 - Mosavi, Amir A1 - Najafi, Bahman A1 - Faizollahzadeh Ardabili, Sina A1 - Shamshirband, Shahaboddin A1 - Rabczuk, Timon T1 - An Intelligent Artificial Neural Network-Response Surface Methodology Method for Accessing the Optimum Biodiesel and Diesel Fuel Blending Conditions in a Diesel Engine from the Viewpoint of Exergy and Energy Analysis JF - Energies N2 - Biodiesel, the main alternative to diesel fuel, is produced from renewable and available resources and improves engine emissions during combustion in diesel engines. In this study, the biodiesel is produced initially from waste cooking oil (WCO). 
The fuel samples are tested in a diesel engine and the engine performance is assessed from the viewpoint of exergy and energy analyses. Engine tests are performed at a constant speed of 1500 rpm with various loads and fuel samples. The obtained experimental data are also used to develop an artificial neural network (ANN) model. Response surface methodology (RSM) is employed to optimize the exergy and energy efficiencies. Based on the results of the energy analysis, optimal engine performance is obtained at 80% of full load in the presence of the B10 and B20 fuels. However, based on the exergy analysis results, optimal engine performance is obtained at 80% of full load in the presence of the B90 and B100 fuels. The optimum values of the exergy and energy efficiencies are in the range of 25–30% of full load, which is the same as the range obtained from the mathematical modeling. KW - Biodiesel KW - ANN modeling KW - biodiesel KW - Artificial Intelligence KW - diesel engines KW - energy, exergy KW - mathematical modeling KW - OA-Publikationsfonds2018 Y1 - 2018 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20180507-37467 UR - http://www.mdpi.com/1996-1073/11/4/860 VL - 2018 IS - 11, 4 PB - MDPI CY - Basel ER - TY - JOUR A1 - Mosavi, Amir Hosein A1 - Shokri, Manouchehr A1 - Mansor, Zulkefli A1 - Qasem, Sultan Noman A1 - Band, Shahab S. A1 - Mohammadzadeh, Ardashir T1 - Machine Learning for Modeling the Singular Multi-Pantograph Equations JF - Entropy N2 - In this study, a new approach on the basis of intelligent systems and machine learning algorithms is introduced for solving singular multi-pantograph differential equations (SMDEs). For the first time, a type-2 fuzzy-logic-based approach is formulated to find an approximated solution. The rules of the suggested type-2 fuzzy logic system (T2-FLS) are optimized by the square root cubature Kalman filter (SCKF) such that the proposed fitness function is minimized. 
Furthermore, the stability and boundedness of the estimation error are proved by a novel approach on the basis of the Lyapunov theorem. The accuracy and robustness of the suggested algorithm are verified by several statistical examinations. It is shown that the suggested method results in an accurate solution with rapid convergence and a lower computational cost. KW - Fuzzy-Regelung KW - square root cubature Kalman filter KW - statistical analysis KW - OA-Publikationsfonds2020 Y1 - 2020 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20210122-43436 UR - https://www.mdpi.com/1099-4300/22/9/1041 VL - 2020 IS - volume 22, issue 9, article 1041 SP - 1 EP - 18 PB - MDPI CY - Basel ER - TY - JOUR A1 - Mosavi, Amir Hosein A1 - Qasem, Sultan Noman A1 - Shokri, Manouchehr A1 - Band, Shahab S. A1 - Mohammadzadeh, Ardashir T1 - Fractional-Order Fuzzy Control Approach for Photovoltaic/Battery Systems under Unknown Dynamics, Variable Irradiation and Temperature JF - Electronics N2 - In this paper, the problem of energy/voltage management in photovoltaic (PV)/battery systems was studied, and a new fractional-order control system on the basis of type-3 (T3) fuzzy logic systems (FLSs) was developed. New fractional-order learning rules are derived for the tuning of T3-FLSs such that stability is ensured. In addition, using fractional-order calculus, the robustness was studied versus dynamic uncertainties, perturbations of irradiation and temperature, and abrupt faults in output loads, and, subsequently, new compensators were proposed. In several examinations under difficult operating conditions, such as random temperature, variable irradiation, and abrupt changes in output load, the capability of the designed controller was verified. 
In addition, in comparison with other methods, such as the proportional-integral-derivative (PID) controller, the sliding mode controller (SMC), passivity-based control (PBC), and the linear quadratic regulator (LQR), the superiority of the suggested method was demonstrated. KW - Fuzzy-Logik KW - Fotovoltaik KW - type-3 fuzzy systems KW - fractional-order control KW - battery KW - photovoltaic KW - OA-Publikationsfonds2020 Y1 - 2020 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20210122-43381 UR - https://www.mdpi.com/2079-9292/9/9/1455 VL - 2020 IS - Volume 9, issue 9, article 1455 SP - 1 EP - 19 PB - MDPI CY - Basel ER - TY - JOUR A1 - Mortazavi, Bohayra A1 - Pereira, Luiz Felipe C. A1 - Jiang, Jin-Wu A1 - Rabczuk, Timon T1 - Modelling heat conduction in polycrystalline hexagonal boron-nitride films JF - Scientific Reports N2 - We conducted extensive molecular dynamics simulations to investigate the thermal conductivity of polycrystalline hexagonal boron-nitride (h-BN) films. To this aim, we constructed large atomistic models of polycrystalline h-BN sheets with random and uniform grain configurations. By performing equilibrium molecular dynamics (EMD) simulations, we investigated the influence of the average grain size on the thermal conductivity of polycrystalline h-BN films at various temperatures. Using the EMD results, we constructed finite element models of polycrystalline h-BN sheets to probe the thermal conductivity of samples with larger grain sizes. Our multiscale investigations not only provide a general viewpoint regarding heat conduction in h-BN films but also suggest that polycrystalline h-BN sheets present high thermal conductivity comparable to that of monocrystalline sheets. 
KW - Wärmeleitfähigkeit KW - Bornitrid KW - Finite-Elemente-Methode Y1 - 2015 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20170425-31534 ER - TY - JOUR A1 - Meng, Yinghui A1 - Noman Qasem, Sultan A1 - Shokri, Manouchehr A1 - Shamshirband, Shahaboddin T1 - Dimension Reduction of Machine Learning-Based Forecasting Models Employing Principal Component Analysis JF - Mathematics N2 - In this research, an attempt was made to reduce the dimension of wavelet-ANFIS/ANN (artificial neural network/adaptive neuro-fuzzy inference system) models toward reliable forecasts as well as to decrease computational cost. In this regard, principal component analysis was performed on the input time series decomposed by a discrete wavelet transform to feed the ANN/ANFIS models. The models were applied for dissolved oxygen (DO) forecasting in rivers, which is an important variable affecting aquatic life and water quality. The current values of DO, water surface temperature, salinity, and turbidity were considered as input variables to forecast DO three time steps ahead. The results of the study revealed that PCA can be employed as a powerful tool for dimension reduction of input variables and also to detect their inter-correlation. Results of the PCA-wavelet-ANN models are compared with those obtained from wavelet-ANN models, with the former having the advantage of less computational time than the latter. For ANFIS models, PCA is more beneficial in avoiding wavelet-ANFIS models creating too many rules, which deteriorates the efficiency of the ANFIS models. Moreover, manipulating the wavelet-ANFIS models utilizing PCA leads to a significant decrease in computational time. Finally, it was found that the PCA-wavelet-ANN/ANFIS models can provide reliable forecasts of dissolved oxygen as an important water quality indicator in rivers. 
KW - Maschinelles Lernen KW - machine learning KW - dimensionality reduction KW - wavelet transform KW - water quality KW - principal component analysis KW - OA-Publikationsfonds2020 Y1 - 2020 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20200811-42125 UR - https://www.mdpi.com/2227-7390/8/8/1233 VL - 2020 IS - Volume 8, issue 8, article 1233 PB - MDPI CY - Basel ER - TY - JOUR A1 - Meiabadi, Mohammad Saleh A1 - Moradi, Mahmoud A1 - Karamimoghadam, Mojtaba A1 - Ardabili, Sina A1 - Bodaghi, Mahdi A1 - Shokri, Manouchehr A1 - Mosavi, Amir Hosein T1 - Modeling the Producibility of 3D Printing in Polylactic Acid Using Artificial Neural Networks and Fused Filament Fabrication JF - polymers N2 - Polylactic acid (PLA) is a highly applicable material that is used in 3D printers due to significant features such as its deformability and affordable cost. To improve the end-use quality, it is important to enhance the quality of fused filament fabrication (FFF)-printed objects in PLA. The purpose of this investigation was to boost the toughness and to reduce the production cost of FFF-printed tensile test samples with the desired part thickness. To avoid printing numerous redundant samples, the response surface method (RSM) was used. A statistical analysis was performed by considering extruder temperature (ET), infill percentage (IP), and layer thickness (LT) as controlled factors. The artificial intelligence methods of artificial neural networks (ANN) and an ANN-genetic algorithm hybrid (ANN-GA) were further developed to estimate the dependent variables of toughness, part thickness, and production cost. Results were evaluated by correlation coefficient and RMSE values. 
According to the modeling results, ANN-GA as a hybrid machine learning (ML) technique could enhance the accuracy of modeling by about 7.5, 11.5, and 4.5% for toughness, part thickness, and production cost, respectively, in comparison with the single ANN method. On the other hand, the optimization results confirm that the optimized specimen is cost-effective and able to undergo comparatively large deformation, which enables the usability of printed PLA objects. KW - 3D-Druck KW - Polymere KW - Maschinelles Lernen KW - 3D printing KW - machine learning KW - fused filament fabrication KW - OA-Publikationsfonds2021 Y1 - 2021 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20220110-45518 UR - https://www.mdpi.com/2073-4360/13/19/3219 VL - 2021 IS - Volume 13, issue 19, article 3219 SP - 1 EP - 21 PB - MDPI CY - Basel ER - TY - THES A1 - Mauludin, Luthfi Muhammad T1 - Computational Modeling of Fracture in Encapsulation-Based Self-Healing Concrete Using Cohesive Elements N2 - Encapsulation-based self-healing concrete has recently received considerable attention in the civil engineering field. In this approach, capsules containing healing agents are embedded in the cementitious matrix during concrete mixing. When cracks appear, the embedded capsules placed along the path of the incoming crack fracture and release their healing agents in the vicinity of the damage. The capsule materials need to be designed such that they break at small deformations, so the internal fluid can be released to seal the crack. This study focuses on computational modeling of fracture in encapsulation-based self-healing concrete. 2D and 3D numerical models with randomly packed aggregates and capsules have been developed to analyze the fracture mechanisms that play a significant role in the fracture probability of the capsules and consequently in the self-healing process. 
The capsules are assumed to be made of polymethyl methacrylate (PMMA), and the potential cracks are represented by pre-inserted cohesive elements with tension and shear softening laws along the element boundaries of the mortar matrix, aggregates, and capsules, and at the interfaces between these phases. The effects of the volume fraction, core-wall thickness ratio, and mismatched fracture properties of the capsules on the load-carrying capacity of self-healing concrete and the fracture probability of the capsules are investigated. The output of this study will become a valuable tool to assist not only experimentalists but also manufacturers in designing an appropriate capsule material for self-healing concrete. KW - beton KW - Bruch KW - self healing concrete KW - cohesive elements KW - Fracture KW - Fracture Computational Model Y1 - 2021 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20211008-45204 ER - TY - THES A1 - Mai, Luu T1 - Structural Control Systems in High-speed Railway Bridges N2 - Structural vibration control of high-speed railway bridges using tuned mass dampers, semi-active tuned mass dampers, fluid viscous dampers and magnetorheological dampers to reduce resonant structural vibrations is studied. The main issues addressed in this work include modeling of the dynamic interaction of the structures, optimization of the parameters of the dampers and comparison of their efficiency. A new approach to optimize multiple tuned mass damper systems on an uncertain model is proposed based on the H-infinity optimization criteria and the DK iteration procedure with norm-bounded uncertainties in the frequency domain. The parameters of the tuned mass dampers are optimized directly and simultaneously on the different modes contributing significantly to the multi-resonant peaks in order to explore the different possible combinations of parameters. The effectiveness of the present method is also evaluated through comparison with a previous method. 
In the case of semi-active tuned mass dampers, an optimization algorithm is derived to control the magnetorheological damper in these semi-active damping systems. The proposed algorithm can generate various combinations of control gains and state variables, which improves the ability of the MR dampers to track the desired control forces. An uncertain model to reduce detuning effects is also considered in this work. Next, in order to tune the parameters of fluid viscous dampers to the vicinity of their exact optimal values, analytical formulae which can include structural damping are developed based on the perturbation method. The proposed formulae can be considered an improvement over previous analytical formulae, especially for bridge beams with large structural damping. Finally, a new combination of magnetorheological dampers and a double-beam system is proposed to improve the vibration performance of the primary structure. An algorithm to control the magnetorheological dampers in this system is developed using standard linear matrix inequality techniques. Weight functions, as part of a loop shaping procedure, are also introduced in the feedback controllers to improve the tracking ability of the magnetorheological damping forces. To this end, the effectiveness of magnetorheological dampers controlled by the proposed scheme, along with the effects of the uncertain and time-delay parameters on the models, is evaluated through numerical simulations. Additionally, a comparison of the dampers based on their performance is also presented. 
T3 - ISM-Bericht // Institut für Strukturmechanik, Bauhaus-Universität Weimar - 2014,3 KW - High-speed railway bridge KW - Control system KW - Passive damper KW - Semi-active damper KW - Railway bridges Y1 - 2014 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20141223-23391 SN - 1610-7381 ER - TY - CHAP A1 - Luther, Torsten A1 - Könke, Carsten ED - Gürlebeck, Klaus ED - Könke, Carsten T1 - INVESTIGATION OF CRACK GROWTH IN POLYCRYSTALLINE MESOSTRUCTURES N2 - The design and application of high-performance materials demands extensive knowledge of the material's damage behavior, which significantly depends on the meso- and microstructural complexity. Numerical simulations of crack growth on multiple length scales are promising tools for understanding the damage phenomena in complex materials. In polycrystalline materials it has been observed that grain boundary decohesion is one important mechanism leading to micro-crack initiation. Following this observation, the paper presents a polycrystal mesoscale model consisting of grains with orthotropic material behavior and cohesive interfaces along grain boundaries, which is able to reproduce crack initiation and propagation along grain boundaries in polycrystalline materials. With respect to the importance of modeling the geometry of the grain structure, an advanced Voronoi algorithm is proposed to generate realistic polycrystalline material structures based on measured grain size distributions. The polycrystal model is applied to investigate crack initiation and propagation in statically loaded representative volume elements of aluminum on the mesoscale without the necessity of defining initial damage. Future research work is planned to integrate the mesoscale model into a multiscale model for damage analysis in polycrystalline materials. 
KW - Architektur KW - CAD KW - Computerunterstütztes Verfahren Y1 - 2006 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20170327-29886 UR - http://euklid.bauing.uni-weimar.de/ikm2006/index.php_lang=de&what=papers.html ER - TY - THES A1 - Luther, Torsten T1 - Adaptation of atomistic and continuum methods for multiscale simulation of quasi-brittle intergranular damage N2 - The numerical simulation of damage using phenomenological models on the macroscale was state of the art for many decades. However, such models are not able to capture the complex nature of damage, which simultaneously proceeds on multiple length scales. Furthermore, these phenomenological models usually contain damage parameters which are not physically interpretable. Consequently, a reasonable experimental determination of these parameters is often impossible. In the last twenty years, the ongoing advance in computational capacities has provided new opportunities for more and more detailed studies of microstructural damage behavior. Today, multiphase models with several million degrees of freedom enable the numerical simulation of micro-damage phenomena in naturally heterogeneous materials. This makes the application of multiscale concepts for the numerical investigation of the complex nature of damage feasible. The presented thesis contributes to a hierarchical multiscale strategy for the simulation of brittle intergranular damage in polycrystalline materials, for example aluminum. It aims at the numerical investigation of physical damage phenomena on the atomistic microscale and the integration of this physically based information into damage models on the continuum meso- and macroscale. To this end, numerical methods for damage analysis on the micro- and mesoscale, including the scale transfer, are presented, and the transition to the macroscale is discussed. 
The investigation of brittle intergranular damage on the microscale is realized by the application of the nonlocal Quasicontinuum method, which fully describes the material behavior by atomistic potential functions but reduces the number of atomic degrees of freedom by introducing kinematic couplings. Since this promising method is applied only by a limited group of researchers to special problems, necessary improvements have been realized in a custom parallelized implementation of the 3D nonlocal Quasicontinuum method. The aim of this implementation was to develop and combine robust and efficient algorithms for a general use of the Quasicontinuum method, and thereby to allow for atomistic damage analysis in arbitrary grain boundary configurations. The implementation is applied in analyses of brittle intergranular damage in ideal and nonideal grain boundary models of FCC aluminum, considering arbitrary misorientations. From the microscale simulations, traction-separation laws are derived, which describe grain boundary decohesion on the mesoscale. Traction-separation laws are part of cohesive zone models used to simulate brittle interface decohesion in heterogeneous polycrystal structures. 2D and 3D mesoscale models are presented which are able to reproduce crack initiation and propagation along cohesive interfaces in polycrystals. An improved Voronoi algorithm is developed in 2D to generate polycrystal material structures based on arbitrary grain size distribution functions. The new model is more flexible in representing realistic grain size distributions. Further improvements of the 2D model are realized by the implementation and application of an orthotropic material model with a Hill plasticity criterion for the grains. The 2D and 3D polycrystal models are applied to analyze crack initiation and propagation in statically loaded samples of aluminum on the mesoscale without the necessity of defining initial damage. 
N2 - Strukturmechanische Ermüdungs- und Lebensdaueranalysen basieren meist auf der Anwendung phänomenologischer Modelle der Schädigungs- und Bruchmechanik zur numerischen Simulationen des makroskopischen Schädigungsverhaltens. Ausgehend von einer definierten Anfangsschädigung sind diese Modelle nicht in der Lage, die tatsächlichen Vorgänge der Rissinitiierung und unterschiedlichen Rissausbreitung zu erfassen. Eine physikalische Interpretation der phänomenologisch eingeführten Schädigungsparameter ist oftmals nicht möglich und deren experimentelle Bestimmung schwierig. Die Berücksichtigung des mikrostrukturellen Aufbaus von Materialien in numerischen Modellen der Schädigungs- und Bruchmechanik bietet neue Möglichkeiten, die für die Rissinitiierung und Rissausbreitung ursächlichen physikalischen Phänomene abzubilden. Zunehmende Erkenntnisse über gleichzeitig auftretende Mikro- und Makroschädigungsvorgänge resultieren in verbesserten numerischen Modellen, mit denen aufwändige und kostenintensive Experimente in der Materialentwicklung zum Teil ersetzt werden können. In Kenntnis einer Vielfalt von unterschiedlichen Schädigungsphänomenen in technischen Materialien fokussiert die vorliegende Dissertation auf die Entwicklung und Verbesserung numerischer Methoden der Atomistik und der Kontinuumsmechanik zur Mehrskalenuntersuchung quasi-spröder Korngrenzenschädigung in polykristallinen Werkstoffen, z.B. Aluminium. Die kombinierte Anwendung dieser Methoden ist Teil eines hierarchischen Mehrskalenansatzes zur Integration des physikalisch beschriebenen Materialverhaltens der Atomistik in ein ingenieurmäßiges Kontinuumsschädigungsmodell. Ziel der Dissertation ist die Entwicklung einer Methodik, die es erlaubt, den Verlust atomarer Bindungen als physikalische Ursache spröder Schädigung zu simulieren und Ergebnisse aus diesen atomistischen Mikroskalen-Simulationen zur Parametrisierung von kohäsiven Materialmodellen der Kontinuumsmechanik zu nutzen. 
Diese beschreiben den intergranularen Sprödbruch in heterogenen Polykristallmodellen der Mesoskala. Der Einfluss der Heterogenität wird in nichtlinearen Finite-Elemente-Simulationen durch explizite Abbildung der Kornstruktur im mesoskopischen Polykristallmodell berücksichtigt. Durch den Einsatz des kohäsiven Interface-Gesetzes erlaubt das auf der Mesoskala angewandte Kontinuumsmodell die Simulation spröder Korngrenzenschädigung in statisch belasteten 2D und 3D Modellen ohne die Notwendigkeit der Definition einer Anfangsschädigung, wie dies in klassischen Modellen der linear-elastischen Bruchmechanik notwendig ist. Zur effizienten Realisierung der atomistischen Mikroskalen-Simulationen wird eine Implementation der nichtlokalen 3D Quasikontinuumsmethode angewandt. Diese Methode basiert auf einem atomistischen Ansatz und beschreibt das Materialverhalten auf Grundlage atomarer Bindungskräfte. In Modellgebieten mit gleichmäßigem Verformungsfeld werden kinematische Kopplungen atomarer Freiheitsgrade eingeführt, sodass sich die Zahl unabhängiger Freiheitsgrade stark reduziert. Deren effizienter Einsatz erlaubt Simulationen an größeren Modellen ohne Kopplung mit kontinuumsmechanischen Methoden. Eine verbesserte Vernetzung, ein robuster Optimierungsalgorithmus und die vorgenommene Parallelisierung machen die implementierte nichtlokale 3D Quasikontinuumsmethode zu einem effizienten Werkzeug für die robuste Simulation von physikalischen Schädigungsphänomenen in beliebigen atomistischen Konfigurationen. In quasistatischen Simulationen wird eine deutliche Beschleunigung gegenüber der Methode der Gitterstatik bei vergleichbarer Qualität der Ergebnisse erreicht. 
T2 - Weiterentwicklung numerischer Methoden der Atomistik und Kontinuumsmechanik zur Multiskalen-Simulation quasi-spröder intergranularer Schädigung T3 - ISM-Bericht // Institut für Strukturmechanik, Bauhaus-Universität Weimar - 2010,2 KW - Mechanik KW - Computersimulation KW - Mikro-Scale KW - Meso-Scale KW - Polykristall KW - intergranular damage KW - atomistic simulation methods KW - continuum mechanics KW - quasicontinuum method KW - scale transition Y1 - 2010 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20101101-15245 ER - TY - JOUR A1 - Lizarazu, Jorge A1 - Harirchian, Ehsan A1 - Shaik, Umar Arif A1 - Shareef, Mohammed A1 - Antoni-Zdziobek, Annie A1 - Lahmer, Tom T1 - Application of machine learning-based algorithms to predict the stress-strain curves of additively manufactured mild steel out of its microstructural characteristics JF - Results in Engineering N2 - The study presents a Machine Learning (ML)-based framework designed to forecast the stress-strain relationship of arc-direct energy deposited mild steel. Based on microstructural characteristics previously extracted using microscopy and X-ray diffraction, approximately 1000 new parameter sets are generated by applying the Latin Hypercube Sampling Method (LHSM). For each parameter set, a Representative Volume Element (RVE) is synthetically created via Voronoi tessellation. Input raw data for the ML-based algorithms comprises these parameter sets or RVE images, while output raw data consists of the corresponding stress-strain relationships calculated by a Finite Element (FE) procedure. The input data undergoes preprocessing involving standardization, feature selection, and image resizing. Similarly, the stress-strain curves, initially unsuitable for training traditional ML algorithms, are preprocessed using cubic splines and occasionally Principal Component Analysis (PCA). The latter part of the study focuses on employing multiple ML algorithms, utilizing two main models. 
The first model predicts stress-strain curves based on microstructural parameters, while the second model does so solely from RVE images. The most accurate prediction yields a Root Mean Squared Error of around 5 MPa, approximately 1% of the yield stress. This outcome suggests that ML models offer precise and efficient methods for characterizing dual-phase steels, establishing a framework for accurate results in material analysis. KW - Maschinelles Lernen KW - Baustahl KW - Spannungs-Dehnungs-Beziehung KW - Arc-direct energy deposition KW - Mild steel KW - Dual phase steel KW - Stress-strain curve KW - OA-Publikationsfonds2023 Y1 - 2023 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20231207-65028 UR - https://www.sciencedirect.com/science/article/pii/S2590123023007144 VL - 2023 IS - Volume 20 (2023) SP - 1 EP - 12 PB - Elsevier CY - Amsterdam ER - TY - THES A1 - Liu, Bokai T1 - Stochastic multiscale modeling of polymeric nanocomposites using Data-driven techniques N2 - In recent years, lightweight materials such as polymeric nanocomposites (PNCs) have been studied and developed due to their excellent physical and chemical properties. Structures composed of these composite materials are widely used in aerospace structures, automotive components, and electrical devices. The outstanding mechanical, thermal, and electrical properties of carbon nanotubes (CNTs) make them an ideal filler for strengthening the corresponding properties of polymer materials. The heat transfer of composite materials has very promising engineering applications in many fields, especially in electronic devices and energy storage equipment. It is essential in high-energy-density systems, since electronic components need heat dissipation functionality; in other words, in electronic devices the generated heat should ideally be dissipated by light and small heat sinks. 
Polymeric composites consist of fillers embedded in a polymer matrix; the fillers significantly affect the overall (macroscopic) performance of the material. There are many common carbon-based fillers such as single-walled carbon nanotubes (SWCNT), multi-walled carbon nanotubes (MWCNT), carbon nanobuds (CNB), fullerene, and graphene. Additives inside the matrix have become a popular subject for researchers. Extraordinary characteristics, such as high load-bearing performance, lightweight design, excellent chemical resistance, easy processing, and good heat transfer, make the design of polymeric nanotube composites (PNCs) flexible. Due to the reinforcing effects of different fillers, such composites offer a high degree of design freedom and can be tailored to the needs of specific applications. As already stated, our research focus is on SWCNT-enhanced PNCs. Since experiments are time-consuming, sometimes expensive, and cannot shed light on phenomena taking place, for instance, at the interfaces/interphases of composites, they are often complemented by theoretical and computational analysis. While most studies are based on deterministic approaches, comparatively few stochastic methods account for uncertainties in the input parameters. In deterministic models, the output of the model is fully determined by the parameter values and the initial conditions. However, uncertainties in input parameters such as the aspect ratio, volume fraction, and thermal properties of fiber and matrix need to be taken into account for reliable predictions. In this research, a stochastic multiscale method is provided to study the influence of numerous uncertain input parameters on the thermal conductivity of the composite. To this end, a hierarchical multi-scale method based on computational homogenization is presented to predict the macroscopic thermal conductivity from the fine-scale structure. 
In order to study the inner mechanism, we use the finite element method and employ surrogate models to conduct a Global Sensitivity Analysis (GSA). The GSA is performed to quantify the influence of the conductivity of the fiber and the matrix, the Kapitza resistance, the volume fraction and the aspect ratio on the macroscopic conductivity. To this end, we compute first-order and total-effect sensitivity indices with different surrogate models. As stochastic multiscale models are computationally expensive, surrogate approaches are commonly exploited. With the emergence of high-performance computing and artificial intelligence, machine learning has become a popular modeling tool for numerous applications. Machine learning (ML) is commonly used in regression and maps inputs to outputs through rules learned algorithmically from data. ML methods are particularly useful for nonlinear input-output relationships when sufficient data is available. ML has also been used in the design of new materials and in multiscale analysis. For instance, artificial neural networks and ensemble learning seem ideally suited for such a task, as they can theoretically approximate any non-linear relationship through the connection of neurons. The learned mapping relationships are employed to carry out data-driven simulations of inputs and outputs in stochastic modeling. This research aims to develop stochastic multi-scale computational models of heat transfer in PNCs. The multi-scale stochastic modeling with uncertainty analysis and machine learning methods consists of the following components: -Uncertainty Analysis. A surrogate-based global sensitivity analysis is coupled with a hierarchical multi-scale method employing computational homogenization. The effect of the conductivity of the fibers and the matrix, the Kapitza resistance, the volume fraction and the aspect ratio on the 'macroscopic' conductivity of the composite is systematically studied. 
All selected surrogate models consistently yield the conclusion that the most influential input parameter is the aspect ratio, followed by the volume fraction, while the Kapitza resistance has no significant effect on the thermal conductivity of the PNCs. The most accurate surrogate model in terms of the R2 value is the moving least squares (MLS) model. -Hybrid Machine Learning Algorithms. A combination of an artificial neural network (ANN) and particle swarm optimization (PSO) is applied to estimate the relationship between the variable input and output parameters. The ANN is used for modeling the composite, while PSO improves the prediction performance through an optimized global minimum search. The thermal conductivity of the fibers and the matrix, the Kapitza resistance, the volume fraction and the aspect ratio are selected as input parameters; the output is the macroscopic (homogenized) thermal conductivity of the composite. The results show that the PSO significantly improves the predictive ability of this hybrid intelligent algorithm, which outperforms traditional neural networks. -Stochastic Integrated Machine Learning. A stochastic integrated machine-learning-based multiscale approach for the prediction of the macroscopic thermal conductivity in PNCs is developed. Seven types of machine learning models are exploited in this research, namely Multivariate Adaptive Regression Splines (MARS), Support Vector Machine (SVM), Regression Tree (RT), Bagging Tree (Bag), Random Forest (RF), Gradient Boosting Machine (GBM) and Cubist. They are used as components of the stochastic modeling to construct the relationship between the uncertain input variables and the macroscopic thermal conductivity of the PNCs. Particle Swarm Optimization (PSO) is used for hyper-parameter tuning to find the globally optimal values, leading to a significant reduction in the computational cost. 
The advantages and disadvantages of the various methods are also analyzed in terms of computing time and model complexity, to finally give a recommendation on the applicability of the different models. T3 - ISM-Bericht // Institut für Strukturmechanik, Bauhaus-Universität Weimar - 2022,3 KW - Polymere KW - Nanoverbundstruktur KW - multiscale KW - nanocomposite KW - stochastic KW - Data-driven Y1 - 2022 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20220503-46379 ER - TY - JOUR A1 - Lashkar-Ara, Babak A1 - Kalantari, Niloofar A1 - Sheikh Khozani, Zohreh A1 - Mosavi, Amir T1 - Assessing Machine Learning versus a Mathematical Model to Estimate the Transverse Shear Stress Distribution in a Rectangular Channel JF - Mathematics N2 - One of the most important subjects of hydraulic engineering is the reliable estimation of the transverse distribution of bed and wall shear stresses in rectangular channels. This study makes use of the Tsallis entropy, genetic programming (GP) and adaptive neuro-fuzzy inference system (ANFIS) methods to assess the shear stress distribution (SSD) in rectangular channels. To evaluate the results of the Tsallis entropy, GP and ANFIS models, laboratory observations were used in which shear stress was measured using an optimized Preston tube, giving the SSD at various aspect ratios in the rectangular channel. To investigate the shear stress percentage, 10 data series with a total of 112 data points were used. The results of the sensitivity analysis show that the most influential parameter for the SSD in smooth rectangular channels is the dimensionless parameter B/H, where B is the transverse coordinate and H is the flow depth. With the parameters (b/B), (B/H) for the bed and (z/H), (B/H) for the wall as inputs, the GP model performed better than the others. 
Based on the analysis, it can be concluded that the use of the GP and ANFIS algorithms is more effective in estimating shear stress in smooth rectangular channels than the Tsallis entropy-based equations. KW - Maschinelles Lernen KW - smooth rectangular channel KW - Tsallis entropy KW - genetic programming KW - artificial intelligence KW - machine learning KW - big data KW - computational hydraulics Y1 - 2021 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20210504-44197 UR - https://www.mdpi.com/2227-7390/9/6/596 VL - 2021 IS - Volume 9, Issue 6, Article 596 PB - MDPI CY - Basel ER - TY - JOUR A1 - Kumari, Vandana A1 - Harirchian, Ehsan A1 - Lahmer, Tom A1 - Rasulzade, Shahla T1 - Evaluation of Machine Learning and Web-Based Process for Damage Score Estimation of Existing Buildings JF - Buildings N2 - The seismic vulnerability assessment of existing reinforced concrete (RC) buildings is a significant input to disaster mitigation plans and rescue services. Different countries have developed various Rapid Visual Screening (RVS) techniques and methodologies to deal with the devastating consequences of earthquakes for the structural characteristics of buildings and with human casualties. Artificial intelligence (AI) methods, such as machine learning (ML) algorithm-based methods, are increasingly used in various scientific and technical applications. The investigation of these techniques in civil engineering applications has shown encouraging results, reducing human intervention and, with it, uncertainty and biased judgment. In this study, several known non-parametric algorithms are investigated for RVS using a dataset compiled from different earthquakes. Moreover, the methodology allows examining the buildings' vulnerability based on factors related to the buildings' importance and exposure. In addition, a web-based application built on Django is introduced. 
The interface is designed to ease seismic vulnerability investigation in real time. The concept was validated using two case studies, and the achieved results showed the proposed approach's potential efficiency. KW - Maschinelles Lernen KW - rapid assessment KW - Machine learning KW - Vulnerability assessment KW - OA-Publikationsfonds2022 Y1 - 2022 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20220509-46387 UR - https://www.mdpi.com/2075-5309/12/5/578 VL - 2022 IS - Volume 12, issue 5, article 578 SP - 1 EP - 23 PB - MDPI CY - Basel ER - TY - INPR A1 - Khosravi, Khabat A1 - Sheikh Khozani, Zohreh A1 - Mao, Luka T1 - A comparison between advanced hybrid machine learning algorithms and empirical equations applied to abutment scour depth prediction N2 - Complex vortex flow patterns around bridge piers, especially during floods, cause a scour process that can result in the failure of foundations. Abutment scour is a complex three-dimensional phenomenon that is difficult to predict, especially with traditional formulas obtained using empirical approaches such as regression. This paper presents a test of a standalone Kstar model and five novel hybrid algorithms - bagging (BA-Kstar), dagging (DA-Kstar), random committee (RC-Kstar), random subspace (RS-Kstar), and weighted instance handler wrapper (WIHW-Kstar) - to predict scour depth (ds) under clear-water conditions. The dataset consists of 99 scour depth data points from flume experiments (Dey and Barbhuiya, 2005) using vertical, semicircular and 45° wing abutment shapes. Four dimensionless parameters - relative flow depth (h/l), excess abutment Froude number (Fe), relative sediment size (d50/l) and relative submergence (d50/h) - were considered for the prediction of the relative scour depth (ds/l). A portion of the dataset was used for calibration (70%), and the remainder for model validation. 
Pearson correlation coefficients helped decide the relevance of the input parameter combinations, and finally four different combinations of input parameters were used. The performance of the models was assessed visually and with quantitative metrics. Overall, the best input combination for the vertical abutment shape is the combination of Fe, d50/l and h/l, while for the semicircular and 45° wing shapes the combination of Fe and d50/l is the most effective. Our results show that incorporating Fe, d50/l and h/l leads to higher performance, while involving d50/h reduces the models' prediction power for the vertical abutment shape; for the semicircular and 45° wing shapes, involving h/l and d50/h leads to more error. WIHW-Kstar provided the highest performance in scour depth prediction around the vertical abutment shape, while the RC-Kstar model outperformed the other models for scour depth prediction around the semicircular and 45° wing shapes. KW - maschinelles Lernen Y1 - 2021 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20210311-43889 N1 - This is the pre-peer reviewed version of the following article: https://www.sciencedirect.com/science/article/abs/pii/S0022169421001475?via%3Dihub ; https://doi.org/10.1016/j.jhydrol.2021.126100 ER - TY - INPR A1 - Khosravi, Khabat A1 - Sheikh Khozani, Zohreh A1 - Cooper, James R. T1 - Predicting stable gravel-bed river hydraulic geometry: A test of novel, advanced, hybrid data mining algorithms N2 - Accurate prediction of stable alluvial hydraulic geometry, in which erosion and sedimentation are in equilibrium, is one of the most difficult but critical topics in the field of river engineering. Data mining algorithms have been gaining more attention in this field due to their high performance and flexibility. However, an understanding of the potential for these algorithms to provide fast, cheap, and accurate predictions of hydraulic geometry is lacking. This study provides the first quantification of this potential. 
Using at-a-station field data, predictions of flow depth, water-surface width and longitudinal water surface slope are made using three standalone data mining techniques (Instance-based Learning (IBK), KStar, and Locally Weighted Learning (LWL)), along with four types of novel hybrid algorithms in which the standalone models are trained with Vote, Attribute Selected Classifier (ASC), Regression by Discretization (RBD), and Cross-validation Parameter Selection (CVPS) algorithms (Vote-IBK, Vote-Kstar, Vote-LWL, ASC-IBK, ASC-Kstar, ASC-LWL, RBD-IBK, RBD-Kstar, RBD-LWL, CVPS-IBK, CVPS-Kstar, CVPS-LWL). Through a comparison of their predictive performance and a sensitivity analysis of the driving variables, the results reveal: (1) Shields stress was the most effective parameter in the prediction of all geometry dimensions; (2) hybrid models had a higher prediction power than standalone data mining models, empirical equations and traditional machine learning algorithms; (3) the Vote-Kstar model had the highest performance in predicting depth and width, and ASC-Kstar in estimating slope, each providing very good prediction performance. Through these algorithms, the hydraulic geometry of any river can potentially be predicted accurately and with ease using just a few, readily available flow and channel parameters. Thus, the results reveal that these models have great potential for use in stable channel design in data-poor catchments, especially in developing nations where technical modelling skills and understanding of the hydraulic and sediment processes occurring in the river system may be lacking.
KW - Maschinelles Lernen KW - Künstliche Intelligenz KW - Data Mining KW - Hydraulic geometry KW - Gravel-bed rivers Y1 - 2021 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20211004-44998 N1 - This is the pre-peer reviewed version of the following article: https://www.sciencedirect.com/science/article/abs/pii/S1364815221002085 ; https://doi.org/10.1016/j.envsoft.2021.105165 VL - 2021 ER - TY - THES A1 - Khademi Zahedi, Reza T1 - Stress Distribution in Buried Defective PE Pipes and Crack Propagation in Nanosheets N2 - Buried PE pipelines are the main choice for transporting hazardous hydrocarbon fluids and are used in urban gas distribution networks. Molecular dynamics (MD) simulations were used to investigate material behavior at the nanoscale. KW - Gasleitung KW - gas pipes KW - Riss KW - Defekt KW - defects KW - nanosheets KW - crack KW - maximum stress Y1 - 2021 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20210803-44814 ER - TY - THES A1 - Keßler, Andrea T1 - Matrix-free voxel-based finite element method for materials with heterogeneous microstructures T1 - Matrixfreie voxelbasierte Finite-Elemente-Methode für Materialien mit komplizierter Mikrostruktur N2 - Modern image detection techniques such as micro computer tomography (μCT), magnetic resonance imaging (MRI) and scanning electron microscopy (SEM) provide us with high resolution images of the microstructure of materials in a non-invasive and convenient way. They form the basis for the geometrical models of high-resolution analysis, so-called image-based analysis. However, especially in 3D, discretizations of these models easily reach 100 million degrees of freedom and require extensive hardware resources in terms of main memory and computing power to solve the numerical model.
Consequently, the focus of this work is to combine and adapt numerical solution methods to reduce first the memory demand and then the computation time, thereby enabling execution of the image-based analysis on modern desktop computers. Hence, the numerical model is a straightforward grid discretization of the voxel-based (pixels with a third dimension) geometry, which omits boundary detection algorithms and allows reduced storage of the finite element data structure and a matrix-free solution algorithm. This in turn reduces the effort of almost all applied grid-based solution techniques and results in memory-efficient and numerically stable algorithms for the microstructural models. Two variants of the matrix-free algorithm are presented. The efficient iterative solution method of conjugate gradients is used with preconditioners that can be applied matrix-free, such as the Jacobi and the especially well-suited multigrid method. The jagged material boundaries of the voxel-based mesh are smoothed through embedded boundary elements which contain different material information at the integration points and are integrated sub-cell-wise, though without additional boundary detection. The efficiency of the matrix-free methods can be retained. N2 - Moderne bildgebende Verfahren wie Mikro-Computertomographie (μCT), Magnetresonanztomographie (MRT) und Rasterelektronenmikroskopie (SEM) liefern nicht-invasiv hochauflösende Bilder der Mikrostruktur von Materialien. Sie bilden die Grundlage der geometrischen Modelle der hochauflösenden bildbasierten Analysis. Allerdings erreichen vor allem in 3D die Diskretisierungen dieser Modelle leicht die Größe von 100 Mill. Freiheitsgraden und erfordern umfangreiche Hardware-Ressourcen in Bezug auf Hauptspeicher und Rechenleistung, um das numerische Modell zu lösen.
Der Fokus dieser Arbeit liegt daher darin, numerische Lösungsmethoden zu kombinieren und anzupassen, um den Speicherplatzbedarf und die Rechenzeit zu reduzieren und damit eine Ausführung der bildbasierten Analyse auf modernen Computer-Desktops zu ermöglichen. Daher ist als numerisches Modell eine einfache Gitterdiskretisierung der voxelbasierten (Pixel mit der Tiefe als dritten Dimension) Geometrie gewählt, die die Oberflächenerstellung weglässt und eine reduzierte Speicherung der finiten Elementen und einen matrixfreien Lösungsalgorithmus ermöglicht. Dies wiederum verringert den Aufwand von fast allen angewandten gitterbasierten Lösungsverfahren und führt zu Speichereffizienz und numerisch stabilen Algorithmen für die Mikrostrukturmodelle. Es werden zwei Varianten der Anpassung der matrixfreien Lösung präsentiert, die Element-für-Element Methode und eine Knoten-Kanten-Variante. Die Methode der konjugierten Gradienten in Kombination mit dem Mehrgitterverfahren als sehr effizienten Vorkonditionierer wird für den matrixfreien Lösungsalgorithmus adaptiert. Der stufige Verlauf der Materialgrenzen durch die voxelbasierte Diskretisierung wird durch Elemente geglättet, die am Integrationspunkt unterschiedliche Materialinformationen enthalten und über Teilzellen integriert werden (embedded boundary elements). Die Effizienz der matrixfreien Verfahren bleibt erhalten. 
T3 - ISM-Bericht // Institut für Strukturmechanik, Bauhaus-Universität Weimar - 2018,7 KW - Dissertation KW - Finite-Elemente-Methode KW - Konjugierte-Gradienten-Methode KW - Mehrgitterverfahren KW - conjugate gradient method KW - multigrid method KW - grid-based KW - finite element method KW - matrix-free Y1 - 2018 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20190116-38448 ER - TY - JOUR A1 - Karimimoshaver, Mehrdad A1 - Hajivaliei, Hatameh A1 - Shokri, Manouchehr A1 - Khalesro, Shakila A1 - Aram, Farshid A1 - Shamshirband, Shahaboddin T1 - A Model for Locating Tall Buildings through a Visual Analysis Approach JF - Applied Sciences N2 - Tall buildings have become an integral part of cities despite all their pros and cons. Some current tall buildings have several problems because of their unsuitable location; the problems include increasing density, imposing traffic on urban thoroughfares, blocking view corridors, etc. Some of these buildings have destroyed desirable views of the city. In this research, different criteria have been chosen, such as environment, access, socio-economic, land-use, and physical context. These criteria and sub-criteria are prioritized and weighted by the analytic network process (ANP) based on experts’ opinions, using Super Decisions V2.8 software. On the other hand, layers corresponding to the sub-criteria were made in ArcGIS 10.3 simultaneously; then, via a weighted overlay (map algebra), a locating plan was created. In the next step, seven hypothetical tall buildings (20 stories) in the best part of the locating plan were considered to evaluate how much of these hypothetical buildings would be visible (fuzzy visibility) from the streets and open spaces throughout the city. These processes have been modeled by MATLAB software, and the final fuzzy visibility plan was created by ArcGIS.
Fuzzy visibility results can help city managers and planners to choose which location is suitable for a tall building and how much visibility may be appropriate. The proposed model can locate tall buildings based on technical and visual criteria in the future development of the city and it can be widely used in any city as long as the criteria and weights are localized. KW - Gebäude KW - Energieeffizienz KW - Sustainability KW - Infrastructures KW - OA-Publikationsfonds2020 Y1 - 2020 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20210122-43350 UR - https://www.mdpi.com/2076-3417/10/17/6072 VL - 2020 IS - Volume 10, issue 17, article 6072 SP - 1 EP - 25 PB - MDPI CY - Basel ER - TY - JOUR A1 - Kargar, Katayoun A1 - Samadianfard, Saeed A1 - Parsa, Javad A1 - Nabipour, Narjes A1 - Shamshirband, Shahaboddin A1 - Mosavi, Amir A1 - Chau, Kwok-Wing T1 - Estimating longitudinal dispersion coefficient in natural streams using empirical models and machine learning algorithms JF - Engineering Applications of Computational Fluid Mechanics N2 - The longitudinal dispersion coefficient (LDC) plays an important role in modeling the transport of pollutants and sediment in natural rivers. As a result of transportation processes, the concentration of pollutants changes along the river. Various studies have been conducted to provide simple equations for estimating LDC. In this study, machine learning methods, namely support vector regression, Gaussian process regression, M5 model tree (M5P) and random forest, and multiple linear regression were examined in predicting the LDC in natural streams. Data sets from 60 rivers around the world with different hydraulic and geometric features were gathered to develop models for LDC estimation. Statistical criteria, including correlation coefficient (CC), root mean squared error (RMSE) and mean absolute error (MAE), were used to scrutinize the models. 
The LDC values estimated by these models were compared with the corresponding results of common empirical models. The Taylor chart was used to evaluate the models, and the results showed that among the machine learning models, M5P had superior performance, with CC of 0.823, RMSE of 454.9 and MAE of 380.9. The model of Sahay and Dutta, with CC of 0.795, RMSE of 460.7 and MAE of 306.1, gave more precise results than the other empirical models. The main advantage of M5P models is their ability to provide practical formulae. In conclusion, the results proved that the developed M5P model with simple formulations was superior to other machine learning models and empirical models; therefore, it can be used as a proper tool for estimating the LDC in rivers. KW - Maschinelles Lernen KW - Gaussian process regression KW - longitudinal dispersion coefficient KW - M5 model tree KW - random forest KW - support vector regression KW - rivers Y1 - 2020 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20200128-40775 UR - https://www.tandfonline.com/doi/full/10.1080/19942060.2020.1712260 VL - 2020 IS - Volume 14, No. 1 SP - 311 EP - 322 PB - Taylor & Francis ER - TY - JOUR A1 - Jilte, Ravindra A1 - Ahmadi, Mohammad Hossein A1 - Kumar, Ravinder A1 - Kalamkar, Vilas A1 - Mosavi, Amir T1 - Cooling Performance of a Novel Circulatory Flow Concentric Multi-Channel Heat Sink with Nanofluids JF - Nanomaterials N2 - Heat rejection from electronic devices such as processors necessitates a high heat removal rate. The present study focuses on a novel liquid-cooled heat sink geometry made from four channels (width 4 mm and depth 3.5 mm) configured in a concentric shape with alternate flow passages (slot of 3 mm gap). In this study, the cooling performance of the heat sink was tested under simulated controlled conditions. The lower bottom surface of the heat sink was heated at a constant heat flux condition based on dissipated power of 50 W and 70 W.
The computations were carried out for different volume fractions of nanoparticles, namely 0.5% to 5%, and water as base fluid at a flow rate of 30 to 180 mL/min. The results showed a higher rate of heat rejection from the nanofluid-cooled heat sink compared with water. The enhancement in performance was analyzed with the help of the temperature difference between the nanofluid outlet temperature and the water outlet temperature under similar operating conditions. The enhancement was ~2% for 0.5% volume fraction nanofluids and ~17% for a 5% volume fraction. KW - Nanostrukturiertes Material KW - Kühlkörper KW - Nasskühlung KW - nanofluid KW - Nanomaterials KW - Machine learning KW - heat sink Y1 - 2020 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20200401-41241 UR - https://www.mdpi.com/2079-4991/10/4/647 VL - 2020 IS - Volume 10, Issue 4, 647 PB - MDPI CY - Basel ER - TY - JOUR A1 - Jiang, Jin-Wu A1 - Zhuang, Xiaoying A1 - Rabczuk, Timon T1 - Orientation dependent thermal conductance in single-layer MoS2 JF - Scientific Reports N2 - We investigate the thermal conductivity in armchair and zigzag MoS2 nanoribbons, by combining the non-equilibrium Green's function approach and the first-principles method. A strong orientation dependence is observed in the thermal conductivity. Particularly, the thermal conductivity is about 673.6 Wm−1 K−1 in the armchair nanoribbon and 841.1 Wm−1 K−1 in the zigzag nanoribbon at room temperature. By calculating the Caroli transmission, we disclose the underlying mechanism for this strong orientation dependence to be the fewer phonon transport channels in the armchair MoS2 nanoribbon in the frequency range of [150, 200] cm−1. Through the scaling of the phonon dispersion, we further illustrate that the thermal conductivity calculated for the MoS2 nanoribbon is essentially consistent with the superior thermal conductivity found for graphene.
KW - Mechanische Eigenschaft KW - Wärmeleitfähigkeit KW - Nanoribbons, thermal conductivity Y1 - 2013 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20170418-31417 ER - TY - THES A1 - Jia, Yue T1 - Methods based on B-splines for model representation, numerical analysis and image registration N2 - The thesis consists of inter-connected parts for modeling and analysis using newly developed isogeometric methods. The main parts are reproducing kernel triangular B-splines, extended isogeometric analysis for solving weakly discontinuous problems, collocation methods using superconvergent points, and B-spline basis in image registration applications. Each topic is oriented towards application of isogeometric analysis basis functions to ease the process of integrating the modeling and analysis phases of simulation. First, we develop a reproducing kernel triangular B-spline-based FEM for solving PDEs. We review the triangular B-splines and their properties. By definition, the triangular basis function is very flexible in modeling complicated domains. However, instability results when it is applied for analysis. We modify the triangular B-spline by a reproducing kernel technique, calculating a correction term for the triangular kernel function from the chosen surrounding basis. The improved triangular basis is capable of obtaining results with higher accuracy and almost optimal convergence rates. Second, we propose an extended isogeometric analysis for dealing with weakly discontinuous problems such as material interfaces. The original IGA is combined with XFEM-like enrichments which are continuous functions themselves but with discontinuous derivatives. Consequently, the resulting solution space can approximate solutions with weak discontinuities. The method is also applied to curved material interfaces, where the inverse mapping and the curved triangular elements are considered. Third, we develop an IGA collocation method using superconvergent points.
The collocation methods are efficient because no numerical integration is needed. In particular, when a higher-order polynomial basis is applied, the method has a lower computational cost than Galerkin methods. However, the positions of the collocation points are crucial for the accuracy of the method, as they affect the convergence rate significantly. The proposed IGA collocation method uses superconvergent points instead of the traditional Greville abscissae points. The numerical results show the proposed method can have better accuracy and optimal convergence rates, while the traditional IGA collocation has optimal convergence only for even polynomial degrees. Lastly, we propose a novel dynamic multilevel technique for handling image registration. It is an application of B-spline functions in image processing. The procedure considered aims to align a target image to a reference image by a spatial transformation. The method starts with an energy function which is the same as in FEM-based image registration. However, we simplify the solving procedure, working on the energy function directly. We dynamically solve for control points which are coefficients of B-spline basis functions. The new approach is simpler and faster. Moreover, it is also enhanced by a multilevel technique in order to prevent instabilities. The numerical testing consists of two artificial images and four real biomedical MRI brain and CT heart images; they show that our registration method is accurate, fast and efficient, especially for large-deformation problems. KW - Finite-Elemente-Methode KW - isogeometric methods Y1 - 2015 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20151210-24849 ER - TY - THES A1 - Jenabidehkordi, Ali T1 - An Efficient Adaptive PD Formulation for Complex Microstructures N2 - The computational costs of newly developed numerical simulations play a critical role in their acceptance within both academic use and industrial employment.
Normally, the refinement of a method in the area of interest reduces the computational cost. This is unfortunately not true for most nonlocal simulations, since refinement typically increases the size of the material point neighborhood. Reducing the discretization size while keeping the neighborhood size will often require extra consideration. Peridynamics (PD) is a newly developed numerical method of nonlocal nature. Its straightforward integral form equation of motion allows simulating dynamic problems without any extra consideration required. The formation of a crack and its propagation are natural to peridynamics. This means that discontinuity is a result of the simulation and does not demand any post-processing. As with other nonlocal methods, PD is considered an expensive method. The refinement of the nodal spacing while keeping the neighborhood size (i.e., horizon radius) constant gives rise to several nonphysical phenomena. This research aims to reduce the peridynamic computational and implementation costs. A novel refinement approach is introduced. The proposed approach takes advantage of the PD flexibility in choosing the shape of the horizon by introducing multiple domains (with no intersections) to the nodes of the refinement zone. It will be shown that no ghost forces will be created when changing the horizon sizes in both subdomains. The approach is applied to both bond-based and state-based peridynamics and verified for a simple wave propagation refinement problem illustrating the efficiency of the method. Further development of the method for higher dimensions proves to have a direct relationship with the mesh sensitivity of the PD. A method for solving the mesh sensitivity of the PD is introduced. The application of the method will be examined by solving a crack propagation problem similar to those reported in the literature. A new software architecture is proposed considering both academic and industrial use.
The available simulation tools for employing PD will be collected, and their advantages and drawbacks will be addressed. The challenges of implementing any node-based nonlocal methods while maximizing the software flexibility for further development and modification will be discussed and addressed. A software named Relation-Based Simulator (RBS) is developed for examining the proposed architecture. The exceptional capabilities of RBS will be explored by simulating three distinguished models. RBS is available publicly and open to further development. The industrial acceptance of the RBS will be tested by targeting its performance on one Mac and two Linux distributions. KW - Peridynamik KW - Peridynamics KW - Numerical Simulations Y1 - 2022 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20221124-47422 ER - TY - THES A1 - Jenabidehkordi, Ali T1 - An efficient adaptive PD formulation for complex microstructures N2 - The computational costs of newly developed numerical simulations play a critical role in their acceptance within both academic use and industrial employment. Normally, the refinement of a method in the area of interest reduces the computational cost. This is unfortunately not true for most nonlocal simulations, since refinement typically increases the size of the material point neighborhood. Reducing the discretization size while keeping the neighborhood size will often require extra consideration. Peridynamics (PD) is a newly developed numerical method of nonlocal nature. Its straightforward integral form equation of motion allows simulating dynamic problems without any extra consideration required. The formation of a crack and its propagation are natural to peridynamics. This means that discontinuity is a result of the simulation and does not demand any post-processing. As with other nonlocal methods, PD is considered an expensive method.
The refinement of the nodal spacing while keeping the neighborhood size (i.e., horizon radius) constant gives rise to several nonphysical phenomena. This research aims to reduce the peridynamic computational and implementation costs. A novel refinement approach is introduced. The proposed approach takes advantage of the PD flexibility in choosing the shape of the horizon by introducing multiple domains (with no intersections) to the nodes of the refinement zone. It will be shown that no ghost forces will be created when changing the horizon sizes in both subdomains. The approach is applied to both bond-based and state-based peridynamics and verified for a simple wave propagation refinement problem illustrating the efficiency of the method. Further development of the method for higher dimensions proves to have a direct relationship with the mesh sensitivity of the PD. A method for solving the mesh sensitivity of the PD is introduced. The application of the method will be examined by solving a crack propagation problem similar to those reported in the literature. A new software architecture is proposed considering both academic and industrial use. The available simulation tools for employing PD will be collected, and their advantages and drawbacks will be addressed. The challenges of implementing any node-based nonlocal methods while maximizing the software flexibility for further development and modification will be discussed and addressed. A software named Relation-Based Simulator (RBS) is developed for examining the proposed architecture. The exceptional capabilities of RBS will be explored by simulating three distinguished models. RBS is available publicly and open to further development. The industrial acceptance of the RBS will be tested by targeting its performance on one Mac and two Linux distributions.
KW - Peridynamik KW - Peridynamics KW - Numerical Simulation Y1 - 2022 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20221116-47389 UR - https://e-pub.uni-weimar.de/opus4/frontdoor/index/index/docId/4742 ER - TY - CHAP A1 - Jaouadi, Zouhour A1 - Lahmer, Tom ED - Gürlebeck, Klaus ED - Lahmer, Tom T1 - Topology optimization of structures subjected to multiple load cases by introducing the Epsilon constraint method T2 - Digital Proceedings, International Conference on the Applications of Computer Science and Mathematics in Architecture and Civil Engineering : July 20 - 22 2015, Bauhaus-University Weimar N2 - A topology optimization method has been developed for structures subjected to multiple load cases (for example, a bridge pier subjected to wind loads, traffic, superstructure...). We formulate the problem as a multi-criteria optimization problem, where the compliance is computed for each load case. Then, the Epsilon constraint method (method proposed by Chankong and Haimes, 1971) is adapted. The strategy of this method is based on the concept of minimizing the maximum compliance resulting from the critical load case while the other remaining compliances are considered in the constraints. In each iteration, the compliances of all load cases are computed and only the maximum one is minimized. The topology optimization process switches from one load case to another according to the variation of the resulting compliance. In this work, we motivate and explain the proposed methodology and provide some numerical examples.
KW - Angewandte Informatik KW - Angewandte Mathematik KW - Building Information Modeling KW - Computerunterstütztes Verfahren KW - Data, information and knowledge modeling in civil engineering; Function theoretic methods and PDE in engineering sciences; Mathematical methods for (robotics and) computer vision; Numerical modeling in engineering; Optimization in engineering applications Y1 - 2015 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20170314-28042 SN - 1611-4086 ER - TY - JOUR A1 - Işık, Ercan A1 - Büyüksaraç, Aydın A1 - Levent Ekinci, Yunus A1 - Aydın, Mehmet Cihan A1 - Harirchian, Ehsan T1 - The Effect of Site-Specific Design Spectrum on Earthquake-Building Parameters: A Case Study from the Marmara Region (NW Turkey) JF - Applied Sciences N2 - The Marmara Region (NW Turkey) has experienced significant earthquakes (M > 7.0) to date. A destructive earthquake is also expected in the region. To determine the effect of the site-specific design spectrum, eleven provinces located in the region were chosen according to the Turkey Earthquake Building Code updated in 2019. Additionally, the differences between the previous and updated regulations of the country were investigated. Peak Ground Acceleration (PGA) and Peak Ground Velocity (PGV) were obtained for each province by using earthquake ground motion levels with 2%, 10%, 50%, and 68% probability of exceedance in 50-year periods. The PGA values in the region range from 0.16 to 0.7 g for earthquakes with a return period of 475 years. For each province, a sample reinforced-concrete building with two different numbers of stories but the same ground and structural characteristics was chosen. Static adaptive pushover analyses were performed for the sample reinforced-concrete building using each province’s design spectrum. The variations in the earthquake and structural parameters were investigated according to different geographical locations.
It was determined that the site-specific design spectrum significantly influences target displacements for performance-based assessments of buildings due to the seismicity characteristics of the studied geographic location. KW - Erdbeben KW - earthquake KW - site-specific spectrum KW - Marmara Region KW - seismic hazard analysis KW - adaptive pushover KW - OA-Publikationsfonds2020 Y1 - 2020 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20201022-42758 UR - https://www.mdpi.com/2076-3417/10/20/7247 VL - 2020 IS - Volume 10, issue 20, article 7247 PB - MDPI CY - Basel ER - TY - CHAP A1 - Itam, Zarina ED - Gürlebeck, Klaus ED - Könke, Carsten T1 - NUMERICAL SIMULATION OF THERMO-HYGRAL ALKALI-SILICA REACTION MODEL IN CONCRETE AT THE MESOSCALE N2 - This research aims to model Alkali-Silica Reaction gel expansion in concrete under the influence of hygral and thermal loading, based on experimental results. ASR provokes a heterogeneous expansion in concrete leading to dimensional changes and eventually the premature failure of the concrete structure. This can result in map cracking on the concrete surface which will decrease the concrete stiffness. Factors that influence ASR include the cement alkalinity, the amount of deleterious silica in the aggregate used, concrete porosity, and external factors like temperature, humidity and external sources of alkali from the ingression of deicing salts. Uncertainties of the influential factors make ASR a difficult phenomenon to solve; hence my approach is to solve the problem using stochastic modelling, in which a numerical simulation of a concrete cross-section integrates experimental results from the Finger-Institute for Building Materials Science at the Bauhaus-Universität Weimar. The problem is formulated as a multi-field problem, combining heat transfer, fluid transfer and the reaction rate model with the mechanical stress field.
Simulation is performed as a mesoscale model considering aggregates and mortar matrix. The reaction rate model will be developed using experimental results on concrete expansion due to ASR gained from concrete prism tests. Expansive strain values for transient environmental conditions will be calculated based on the reaction rate model. Results from these models will predict the rate of ASR expansion and the crack propagation that may arise. KW - Angewandte Informatik KW - Angewandte Mathematik KW - Architektur KW - Computerunterstütztes Verfahren KW - Computer Science Models in Engineering; Multiscale and Multiphysical Models; Scientific Computing Y1 - 2010 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20170314-28536 UR - http://euklid.bauing.uni-weimar.de/ikm2009/paper.html SN - 1611-4086 ER - TY - JOUR A1 - Ilyani Akmar, A.B. A1 - Kramer, O. A1 - Rabczuk, Timon T1 - Multi-objective evolutionary optimization of sandwich structures: An evaluation by elitist non-dominated sorting evolution strategy JF - American Journal of Engineering and Applied Sciences N2 - In this study, an application of evolutionary multi-objective optimization algorithms to the optimization of sandwich structures is presented. The solution strategy is known as Elitist Non-Dominated Sorting Evolution Strategy (ENSES), wherein Evolution Strategies (ES) serve as the Evolutionary Algorithm (EA) in the elitist Non-dominated Sorting Genetic Algorithm (NSGA-II) procedure. Evolutionary algorithms are a suitable approach for solving multi-objective optimization problems because they are inspired by natural evolution, which is closely linked to Artificial Intelligence (AI) techniques; elitism has been shown to be an important factor for improving evolutionary multi-objective search. In order to evaluate the performance of ENSES, the well-known case study of sandwich structures is reconsidered.
For Case 1, the goals of the multi-objective optimization are minimization of the deflection and the weight of the sandwich structures. The length and the core and skin thicknesses are the design variables of Case 1. For Case 2, the objective functions are the fabrication cost, the beam weight and the end deflection of the sandwich structures. There are four design variables in Case 2, i.e., the weld height, the weld length, the beam depth and the beam width. Numerical results are presented in terms of Pareto-optimal solutions for both evaluated cases. KW - Optimierung KW - Stahlbau KW - Multi-objective Evolutionary Optimization, Elitist Non-Dominated Sorting Evolution Strategy (ENSES), Sandwich Structure, Pareto-Optimal Solutions, Evolutionary Algorithm Y1 - 2015 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20170418-31402 SP - 185 EP - 201 ER - TY - CHAP A1 - Häfner, Stefan A1 - Vogel, Frank A1 - Könke, Carsten ED - Gürlebeck, Klaus ED - Könke, Carsten T1 - FINITE ELEMENT ANALYSIS OF TORSION FOR ARBITRARY CROSS-SECTIONS N2 - The present article proposes an alternative way to compute the torsional stiffness based on three-dimensional continuum mechanics instead of applying a specific theory of torsion. A thin, representative beam slice is discretized by solid finite elements. Adequate boundary conditions and coupling conditions are integrated into the numerical model to obtain a proper answer on the torsion behaviour, thus on shear center, shear stress and torsional stiffness. This finite element approach only includes general assumptions of beam torsion which are independent of cross-section geometry. These assumptions essentially are: no in-plane deformation, constant torsion and free warping. Thus it is possible to achieve numerical solutions of high accuracy for arbitrary cross-sections.
Due to the direct link to three-dimensional continuum mechanics, it is possible to extend the range of torsion analysis to sections which are composed of different materials, or even to heterogeneous beams at a high resolution. A brief study validates the implementation, and the results are compared to analytical solutions. KW - Angewandte Informatik KW - Angewandte Mathematik KW - Architektur KW - Computerunterstütztes Verfahren KW - Computer Science Models in Engineering; Multiscale and Multiphysical Models; Scientific Computing Y1 - 2010 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20170314-28483 UR - http://euklid.bauing.uni-weimar.de/ikm2009/paper.html SN - 1611-4086 ER - TY - CHAP A1 - Häfner, Stefan A1 - Könke, Carsten ED - Gürlebeck, Klaus ED - Könke, Carsten T1 - DAMAGE SIMULATION OF HETEROGENEOUS SOLIDS BY NONLOCAL FORMULATIONS ON ORTHOGONAL GRIDS N2 - The present paper is part of a comprehensive approach to grid-based modelling. This approach includes geometrical modelling by pixel or voxel models, advanced multiphase B-spline finite elements of variable order, and fast iterative solvers based on the multigrid method. So far, we have only presented these grid-based methods in connection with linear elastic analysis of heterogeneous materials. Damage simulation demands further considerations. The direct stress solution of standard bilinear finite elements is severely defective, especially along material interfaces. Besides achieving objective constitutive modelling, various nonlocal formulations are applied to improve the stress solution. Such corrective data processing can refer either to the input data in terms of Young's modulus, to the attained finite element stress solution, or to a combination of both. A damage-controlled sequentially linear analysis is applied in connection with an isotropic damage law. 
Essentially, given a high resolution of the heterogeneous solid, local isotropic damage on the material subscale makes it possible to simulate complex damage topologies such as cracks. Anisotropic degradation of a material sample can therefore be simulated. Based on an effectively secant global stiffness, the analysis is numerically stable. The iteration step size is controlled for an adequate simulation of the damage path. This requires many steps, but since each new step of the iterative solution process starts from the solution of the prior step, the method is quite efficient. The present paper provides an introduction to the proposed concept for a stable simulation of damage in heterogeneous solids. KW - Architektur KW - CAD KW - Computerunterstütztes Verfahren Y1 - 2006 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20170327-29638 UR - http://euklid.bauing.uni-weimar.de/ikm2006/index.php_lang=de&what=papers.html ER - TY - CHAP A1 - Häfner, Stefan A1 - Könke, Carsten ED - Gürlebeck, Klaus ED - Könke, Carsten T1 - MULTIGRID PRECONDITIONED CONJUGATE GRADIENT METHOD IN THE MECHANICAL ANALYSIS OF HETEROGENEOUS SOLIDS N2 - A fast solver called the multigrid preconditioned conjugate gradient method is proposed for the mechanical analysis of heterogeneous materials on the mesoscale. Even small samples of a heterogeneous material such as concrete show a complex geometry of different phases. These materials can be modelled by projection onto a uniform, orthogonal grid of elements. One major problem is that the possible resolution of the concrete specimen is generally restricted due to (a) computation times and, even more critically, (b) memory demand. Iterative solvers can be based on a local element-based formulation, while orthogonal grids consist of geometrically identical elements. The element-based formulation is short and transparent, and therefore efficient to implement. 
A variation of the material properties in elements or integration points is possible. The multigrid method is a fast iterative solver for which, ideally, the computational effort increases only linearly with the problem size. This optimal property is almost reached in the implementation presented here; in fact, no method is known which scales better than linearly. Therefore the multigrid method gains in importance the larger the problem becomes. However, for heterogeneous models with very large ratios of Young's moduli, the multigrid method slows down considerably by a constant factor. Such large ratios occur in certain heterogeneous solids, as well as in the damage analysis of solids. As a solution to this problem, the multigrid preconditioned conjugate gradient method is proposed. A benchmark highlights the multigrid preconditioned conjugate gradient method as the method of choice for very large ratios of Young's modulus. A proposed modified multigrid cycle shows good results, both in application as a stand-alone solver and as a preconditioner. KW - Architektur KW - CAD KW - Computerunterstütztes Verfahren Y1 - 2006 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20170327-29626 UR - http://euklid.bauing.uni-weimar.de/ikm2006/index.php_lang=de&what=papers.html ER -