TY - JOUR A1 - Chakraborty, Ayan A1 - Anitescu, Cosmin A1 - Zhuang, Xiaoying A1 - Rabczuk, Timon T1 - Domain adaptation based transfer learning approach for solving PDEs on complex geometries JF - Engineering with Computers N2 - In machine learning, if the training data and the test data are independent and identically distributed, then a trained model can make accurate predictions for new samples of data. Conventional machine learning has a strong dependence on massive amounts of domain-specific training data in order to understand their latent patterns. In contrast, domain adaptation and transfer learning are sub-fields within machine learning that are concerned with solving the inescapable problem of insufficient training data by relaxing the domain dependence hypothesis. In this contribution, this issue is addressed and, by making a novel combination of both methods, we develop a computationally efficient and practical algorithm to solve boundary value problems based on nonlinear partial differential equations. We adopt a meshfree analysis framework to integrate the prevailing geometric modelling techniques based on NURBS and present an enhanced deep collocation approach that also plays an important role in the accuracy of solutions. We start with a brief introduction on how these methods expand upon this framework. We observe an excellent agreement between these methods and show how fine-tuning a pre-trained network to a specialized domain may lead to outstanding performance compared to existing approaches. As proof of concept, we illustrate the performance of our proposed model on several benchmark problems. KW - Maschinelles Lernen KW - NURBS KW - Transfer learning KW - Domain Adaptation KW - NURBS geometry KW - Navier–Stokes equations Y1 - 2022 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20220811-46776 UR - https://link.springer.com/article/10.1007/s00366-022-01661-2 VL - 2022 SP - 1 EP - 20 ER - TY - THES A1 - Schrader, Kai T1 - Hybrid 3D simulation methods for the damage analysis of multiphase composites T1 - Hybride 3D Simulationsmethoden zur Abbildung der Schädigungsvorgänge in Mehrphasen-Verbundwerkstoffen N2 - Modern digital material approaches for the visualization and simulation of heterogeneous materials allow investigating the behavior of complex multiphase materials with their physically nonlinear material response at various scales. However, these computational techniques require extensive hardware resources with respect to computing power and main memory to numerically solve large-scale discretized models in 3D. Due to a very high number of degrees of freedom, which may rapidly increase into the two-digit million range, the limited hardware resources have to be utilized in the most efficient way to enable execution of the numerical algorithms in minimal computation time. Hence, in the field of computational mechanics, various methods and algorithms can lead to an optimized runtime behavior of nonlinear simulation models; several such approaches are proposed and investigated in this thesis. Today, the numerical simulation of damage effects in heterogeneous materials is performed by the adaptation of multiscale methods.
A consistent modeling in three-dimensional space with an appropriate discretization resolution on each scale (based on a hierarchical or concurrent multiscale model), however, still poses computational challenges with respect to the convergence behavior, the scale transition or the solver performance of the weakly coupled problems. The computational efficiency and the distribution among available hardware resources (often based on a parallel hardware architecture) can be significantly improved. In the past years, high-performance computing (HPC) and graphics processing unit (GPU) based computation techniques were established for the investigation of scientific objectives. Their application results in the modification of existing and the development of new computational methods for the numerical implementation, which makes it possible to take advantage of massively clustered computer hardware resources. In the field of numerical simulation in material science, e.g. within the investigation of damage effects in multiphase composites, the suitability of such models is often restricted by the number of degrees of freedom (d.o.f.s) in the three-dimensional spatial discretization. This proves to be difficult for the type of implementation method used for the nonlinear simulation procedure and, simultaneously, has a great influence on memory demand and computational time. In this thesis, a hybrid discretization technique has been developed for the three-dimensional discretization of a three-phase material which respects the numerical efficiency of nonlinear (damage) simulations of these materials. The increase of the computational efficiency is enabled by the improved scalability of the numerical algorithms. Consequently, substructuring methods for partitioning the hybrid mesh were implemented, tested and adapted to the HPC computing framework, using several hundred CPU (central processing unit) nodes for building the finite element assembly. A memory-efficient iterative and parallelized equation solver combined with a special preconditioning technique for solving the underlying equation system was modified and adapted to enable combined CPU and GPU based computations. Hence, the author recommends applying the substructuring method to hybrid meshes, which respects the different material phases and their mechanical behavior and which makes it possible to split the structure into elastic and inelastic parts. The consideration of the nonlinear material behavior, specified for the corresponding phase, is then limited to the inelastic domains only, which decreases the computing time for the nonlinear procedure. Due to the high numerical effort of such simulations, an alternative approach for the nonlinear finite element analysis, based on sequential linear analysis, was implemented with respect to scalable HPC. The incremental-iterative procedure in finite element analysis (FEA) during the nonlinear step was replaced by a sequence of linear FE analyses whenever damage in critical regions occurred, known in the literature as the saw-tooth approach. As a result, qualitative (smeared) crack initiation in 3D multiphase specimens has been simulated efficiently.
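As an aside, the saw-tooth approach named above can be illustrated with a minimal, generic Python sketch (it is not taken from the thesis and does not use its HPC framework; all stiffness and strength values are assumed for illustration): a bundle of parallel springs under displacement control is analysed by a sequence of purely linear steps, where each step identifies the most critical spring, records one point of the load-displacement curve, and softens that spring stepwise instead of performing a nonlinear incremental-iterative solve.

```python
import numpy as np

# Toy sequentially linear ("saw-tooth") analysis of a parallel spring bundle
# under displacement control. Values are assumed for illustration only.
rng = np.random.default_rng(0)
n_springs, n_teeth = 20, 5                 # springs in the bundle, saw-teeth per spring
k = np.full(n_springs, 100.0)              # current stiffness of each spring
f = rng.uniform(8.0, 12.0, n_springs)      # current strength of each spring
teeth_left = np.full(n_springs, n_teeth)
curve = [(0.0, 0.0)]                       # recorded (displacement, total load) points

while np.any(k > 0.0):
    # linear step: displacement at which each intact spring reaches its strength
    ratio = np.full(n_springs, np.inf)
    active = k > 0.0
    ratio[active] = f[active] / k[active]
    i = int(np.argmin(ratio))              # critical spring of this linear analysis
    u = ratio[i]
    curve.append((u, float(k.sum()) * u))  # one tooth of the load-displacement curve
    # saw-tooth update instead of a nonlinear iteration
    teeth_left[i] -= 1
    if teeth_left[i] == 0:                 # spring exhausted: remove it
        k[i], f[i] = 0.0, 0.0
    else:
        k[i] *= 0.5                        # stiffness drops faster than strength, so the
        f[i] *= 0.7                        # next tooth of this spring fails at a larger u

for u, p in curve[:8]:
    print(f"u = {u:6.4f}  P = {p:8.2f}")
```

The printed (u, P) pairs trace the characteristic saw-tooth load-displacement curve; in a finite element setting, the scalar springs would correspond to elements or integration points whose stiffness is reduced after each linear solve.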
T3 - ISM-Bericht // Institut für Strukturmechanik, Bauhaus-Universität Weimar - 2013,2 KW - high-performance computing KW - finite element method KW - heterogeneous material KW - domain decomposition KW - scalable smeared crack analysis KW - FEM KW - multiphase KW - damage KW - HPC KW - solver Y1 - 2012 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20131021-20595 ER - TY - JOUR A1 - Guo, Hongwei A1 - Zhuang, Xiaoying A1 - Chen, Pengwan A1 - Alajlan, Naif A1 - Rabczuk, Timon T1 - Analysis of three-dimensional potential problems in non-homogeneous media with physics-informed deep collocation method using material transfer learning and sensitivity analysis JF - Engineering with Computers N2 - In this work, we present a deep collocation method (DCM) for three-dimensional potential problems in non-homogeneous media. This approach utilizes a physics-informed neural network with material transfer learning, reducing the solution of the non-homogeneous partial differential equations to an optimization problem. We tested different configurations of the physics-informed neural network, including smooth activation functions, sampling methods for collocation point generation and combined optimizers. A material transfer learning technique is utilized for non-homogeneous media with different material gradations and parameters, which enhances the generality and robustness of the proposed method. In order to identify the most influential parameters of the network configuration, we carried out a global sensitivity analysis. Finally, we provide a convergence proof of our DCM. The approach is validated through several benchmark problems, also testing different material variations. KW - Deep learning KW - Kollokationsmethode KW - Collocation method KW - Potential problem KW - Activation function KW - Transfer learning Y1 - 2022 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20220811-46764 UR - https://link.springer.com/article/10.1007/s00366-022-01633-6 VL - 2022 SP - 1 EP - 22 ER - TY - THES A1 - Nanthakumar, S.S. T1 - Inverse and optimization problems in piezoelectric materials using Extended Finite Element Method and Level sets T1 - Inverse und Optimierungsprobleme für piezoelektrische Materialien mit der Extended Finite Elemente Methode und Level sets N2 - Piezoelectric materials are used in several applications as sensors and actuators, where they experience high stress and electric field concentrations, as a result of which they may fail due to fracture. Though there are many analytical and experimental works on piezoelectric fracture mechanics, there are very few studies on damage detection, which is an interesting way to prevent the failure of these ceramics. An iterative method to treat the inverse problem of detecting cracks and voids in piezoelectric structures is proposed. The extended finite element method (XFEM) is employed for solving the inverse problem, as it allows the use of a single regular mesh for a large number of iterations with different flaw geometries. Firstly, minimization of the cost function is performed by the Multilevel Coordinate Search (MCS) method. The XFEM-MCS methodology is applied to two-dimensional electromechanical problems where the flaws considered are straight cracks and elliptical voids. Then a numerical method based on the combination of the classical shape derivative and the level set method for front propagation, as used in structural optimization, is utilized to minimize the cost function.
The results obtained show that the XFEM-level set methodology is effectively able to determine the number of voids in a piezoelectric structure and their corresponding locations. The XFEM-level set methodology is improved to solve the inverse problem of detecting inclusion interfaces in a piezoelectric structure. The material interfaces are implicitly represented by level sets, which are identified by applying regularisation using total variation penalty terms. The formulation is presented for three-dimensional structures, and inclusions made of different materials are detected by using multiple level sets. The results obtained prove that the proposed iterative procedure can determine the location and approximate shape of material subdomains in the presence of higher noise levels. Piezoelectric nanostructures exhibit size-dependent properties because of surface elasticity and surface piezoelectricity. Initially, a study to understand the influence of surface elasticity on the optimization of nano elastic beams is performed. The boundary of the nanostructure is implicitly represented by a level set function, which is considered as the design variable in the optimization process. Two objective functions, minimizing the total potential energy of a nanostructure subjected to a material volume constraint and minimizing the least square error compared to a target displacement, are chosen for the numerical examples. The numerical examples demonstrate the importance of size and aspect ratio in determining how surface effects impact the optimized topology of nanobeams. Finally, a conventional cantilever energy harvester with a piezoelectric nano layer is analysed. The presence of surface piezoelectricity in nano beams and nano plates leads to an increase in the electromechanical coupling coefficient. Topology optimization of these piezoelectric structures in an energy harvesting device is performed using an appropriately modified XFEM-level set algorithm in order to further increase the energy conversion. KW - Finite-Elemente-Methode KW - Piezoelectricity KW - Inverse problems KW - Optimization problems KW - Nanostructures KW - XFEM KW - level set method KW - Surface effects Y1 - 2016 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20161128-27095 ER - TY - THES A1 - Ghasemi, Hamid T1 - Stochastic optimization of fiber reinforced composites considering uncertainties N2 - Briefly, the two basic questions that this research is supposed to answer are: 1. How much fiber is needed and how should fibers be distributed through a fiber reinforced composite (FRC) structure in order to obtain the optimal and reliable structural response? 2. How do uncertainties influence the optimization results and the reliability of the structure? To answer the above questions, a double-stage sequential optimization algorithm for finding the optimal content of short fiber reinforcements and their distribution in the composite structure, considering uncertain design parameters, is presented. In the first stage, the optimal amount of short fibers in an FRC structure with uniformly distributed fibers is determined in the framework of a Reliability Based Design Optimization (RBDO) problem. The presented model considers material, structural and modeling uncertainties. In the second stage, the fiber distribution optimization (with the aim of further increasing the structural reliability) is performed by defining a fiber distribution function through a Non-Uniform Rational BSpline (NURBS) surface.
The advantages of using the NURBS surface as a fiber distribution function include: using the same data set for the optimization and the analysis; a high convergence rate due to the smoothness of the NURBS; mesh independency of the optimal layout; no need for any post-processing technique; and its non-heuristic nature. The output of stage 1 (the optimal fiber content for homogeneously distributed fibers) is considered as the input of stage 2. The output of stage 2 is the Reliability Index (β) of the structure with the optimal fiber content and distribution. The first-order reliability method (in order to approximate the limit state function) as well as different material models, including the rule of mixtures, Mori-Tanaka, an energy-based approach and stochastic multi-scale models, are implemented in different examples. The proposed combined model is able to capture the role of the available uncertainties in FRC structures through a computationally efficient algorithm using sequential, NURBS-based and sensitivity-based techniques. The methodology is successfully implemented for interfacial shear stress optimization in sandwich beams and also for optimization of the internal cooling channels in a ceramic matrix composite. Finally, after some changes and modifications by combining Isogeometric Analysis, level set and point-wise density mapping techniques, the computational framework is extended for topology optimization of piezoelectric/flexoelectric materials. T3 - ISM-Bericht // Institut für Strukturmechanik, Bauhaus-Universität Weimar - 2016,1 KW - Optimization KW - Fiber Reinforced Composite KW - Finite Element Method KW - Isogeometric Analysis KW - Flexoelectricity KW - Finite-Elemente-Methode KW - Optimierung Y1 - 2016 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20161117-27042 ER - TY - JOUR A1 - Alalade, Muyiwa A1 - Reichert, Ina A1 - Köhn, Daniel A1 - Wuttke, Frank A1 - Lahmer, Tom ED - Qu, Chunxu ED - Gao, Chunxu ED - Zhang, Rui ED - Jia, Ziguang ED - Li, Jiaxiang T1 - A Cyclic Multi-Stage Implementation of the Full-Waveform Inversion for the Identification of Anomalies in Dams JF - Infrastructures N2 - For the safe and efficient operation of dams, frequent monitoring and maintenance are required. These are usually expensive, time consuming, and cumbersome. To alleviate these issues, we propose applying a wave-based scheme for the location and quantification of damages in dams. To obtain high-resolution “interpretable” images of the damaged regions, we drew inspiration from non-linear full-multigrid methods for inverse problems and applied a new cyclic multi-stage full-waveform inversion (FWI) scheme. Our approach is less susceptible to the stability issues faced by the standard FWI scheme when dealing with ill-posed problems. In this paper, we first selected an optimal acquisition setup and then applied synthetic data to demonstrate the capability of our approach in identifying a series of anomalies in dams by a mixture of reflection and transmission tomography. The results had sufficient robustness, showing the prospects of application in the field of non-destructive testing of dams.
KW - Damm KW - Defekt KW - inverse analysis KW - damage identification KW - full-waveform inversion KW - dams KW - wave propagation KW - OA-Publikationsfonds2022 Y1 - 2022 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20221201-48396 UR - https://www.mdpi.com/2412-3811/7/12/161 VL - 2022 IS - Volume 7, issue 12, article 161 PB - MDPI CY - Basel ER - TY - JOUR A1 - Chowdhury, Sharmistha A1 - Zabel, Volkmar T1 - Influence of loading sequence on wind induced fatigue assessment of bolts in TV-tower connection block JF - Results in Engineering N2 - Bolted connections are widely employed in structures like transmission poles, wind turbines, and television (TV) towers. The behaviour of bolted connections is often complex and plays a significant role in the overall dynamic characteristics of the structure. The goal of this work is to conduct a fatigue lifecycle assessment of such a bolted connection block of a 193 m tall TV tower, for which 205 days of real measurement data have been obtained from the installed monitoring devices. Based on the recorded data, the best-fit stochastic wind distribution for 50 years, the decisive wind action, and the locations to carry out the fatigue analysis have been decided. A 3D beam model of the entire tower is developed to extract the nodal forces corresponding to the connection block location under various mean wind speeds, which is later coupled with a detailed complex finite element model of the connection block, with over three million degrees of freedom, for acquiring stress histories on some pre-selected bolts. The random stress histories are analysed using the rainflow counting algorithm (RCA) and the damage is estimated using Palmgren-Miner's damage accumulation law. A modification is proposed to integrate the loading sequence effect into the RCA, which otherwise is ignored, and the differences between the two RCAs are investigated in terms of the accumulated damage. KW - Schadensakkumulation KW - Lebenszyklus KW - Fatigue life KW - Damage accumulation KW - Wind load KW - Rainflow counting algorithm KW - Loading sequence KW - Windlast KW - OA-Publikationsfonds2022 Y1 - 2022 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20221028-47303 UR - https://www.sciencedirect.com/science/article/pii/S2590123022002730?via%3Dihub VL - 2022 IS - Volume 16, article 100603 SP - 1 EP - 18 PB - Elsevier CY - Amsterdam ER - TY - JOUR A1 - Kumari, Vandana A1 - Harirchian, Ehsan A1 - Lahmer, Tom A1 - Rasulzade, Shahla T1 - Evaluation of Machine Learning and Web-Based Process for Damage Score Estimation of Existing Buildings JF - Buildings N2 - The seismic vulnerability assessment of existing reinforced concrete (RC) buildings is a significant basis for disaster mitigation plans and rescue services. Different countries have evolved various Rapid Visual Screening (RVS) techniques and methodologies to deal with the devastating consequences of earthquakes on the structural characteristics of buildings and human casualties. Artificial intelligence (AI) methods, such as machine learning (ML) algorithm-based methods, are increasingly used in various scientific and technical applications. The investigation toward using these techniques in civil engineering applications has shown encouraging results and reduced human intervention, along with the associated uncertainties and biased judgment. In this study, several known non-parametric algorithms are investigated toward RVS using a dataset employing different earthquakes.
Moreover, the methodology encourages the possibility of examining the buildings’ vulnerability based on the factors related to the buildings’ importance and exposure. In addition, a web-based application built on Django is introduced. The interface is designed with the idea of easing the seismic vulnerability investigation in real time. The concept was validated using two case studies, and the achieved results showed the proposed approach’s potential efficiency. KW - Maschinelles Lernen KW - rapid assessment KW - Machine learning KW - Vulnerability assessment KW - OA-Publikationsfonds2022 Y1 - 2022 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20220509-46387 UR - https://www.mdpi.com/2075-5309/12/5/578 VL - 2022 IS - Volume 12, issue 5, article 578 SP - 1 EP - 23 PB - MDPI CY - Basel ER - TY - JOUR A1 - Faizollahzadeh Ardabili, Sina A1 - Najafi, Bahman A1 - Alizamir, Meysam A1 - Mosavi, Amir A1 - Shamshirband, Shahaboddin A1 - Rabczuk, Timon T1 - Using SVM-RSM and ELM-RSM Approaches for Optimizing the Production Process of Methyl and Ethyl Esters JF - Energies N2 - The production of a desired product needs an effective use of the experimental model. The present study proposes an extreme learning machine (ELM) and a support vector machine (SVM) integrated with the response surface methodology (RSM) to solve the complexity in optimization and prediction of the ethyl ester and methyl ester production process. The novel hybrid models of ELM-RSM and SVM-RSM are further used as a case study to estimate the yield of methyl and ethyl esters through a trans-esterification process from waste cooking oil (WCO) based on American Society for Testing and Materials (ASTM) standards. The results of the prediction phase were also compared with artificial neural networks (ANNs) and an adaptive neuro-fuzzy inference system (ANFIS), which were recently developed by the second author of this study. Based on the results, an ELM with a correlation coefficient of 0.9815 and 0.9863 for methyl and ethyl esters, respectively, had a high estimation capability compared with that for SVM, ANNs, and ANFIS. Accordingly, with ELM-RSM the maximum production yield was 96.86% for ethyl ester at a temperature of 68.48 °C, a catalyst value of 1.15 wt. %, a mixing intensity of 650.07 rpm, and an alcohol to oil molar ratio (A/O) of 5.77; for methyl ester, the production yield was 98.46% at a temperature of 67.62 °C, a catalyst value of 1.1 wt. %, a mixing intensity of 709.42 rpm, and an A/O of 6.09. Therefore, ELM-RSM increased the production yield by 3.6% for ethyl ester and 3.1% for methyl ester, compared with those for the experimental data. KW - Biodiesel KW - Optimierung KW - extreme learning machine KW - machine learning KW - response surface methodology KW - support vector machine KW - OA-Publikationsfonds2018 Y1 - 2018 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20181025-38170 UR - https://www.mdpi.com/1996-1073/11/11/2889 IS - 11, 2889 SP - 1 EP - 20 PB - MDPI CY - Basel ER - TY - THES A1 - Nickerson, Seth T1 - Thermo-Mechanical Behavior of Honeycomb, Porous, Microcracked Ceramics BT - Characterization and analysis of thermally induced stresses with specific consideration of synthetic, porous cordierite honeycomb substrates N2 - The underlying goal of this work is to reduce the uncertainty related to thermally induced stress prediction.
This is accomplished by considering the use of non-linear material behavior, notably path-dependent thermal hysteresis behavior in the elastic properties. Primary novel factors of this work center on two aspects. 1. Broad material characterization and mechanistic material understanding, giving insight into why this class of material behaves in characteristic manners. 2. Development and implementation of a thermal hysteresis material model and its use to determine the impact on overall macroscopic stress predictions. Results highlight microcracking evolution and behavior as the dominant mechanism for material property complexity in this class of materials. Additionally, it was found that for the cases studied, thermal hysteresis behavior impacts relevant peak stress predictions of a heavy-duty diesel particulate filter undergoing a drop-to-idle regeneration by less than ~15% for all conditions tested. It is also found that path-independent heating curves may be utilized for a linear solution assumption to simplify the analysis. This work brings forth a newly conceived concept of a 3-state, 4-path, thermally induced microcrack evolution process; demonstrates experimental behavior that is consistent with the proposed mechanisms; develops a mathematical framework that describes the process; and quantifies the impact in a real-world application space. T3 - ISM-Bericht // Institut für Strukturmechanik, Bauhaus-Universität Weimar - 2019,4 KW - Keramik KW - ceramics Y1 - 2019 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20190911-39753 ER - TY - THES A1 - Hossain, Md Naim T1 - Isogeometric analysis based on Geometry Independent Field approximaTion (GIFT) and Polynomial Splines over Hierarchical T-meshes N2 - This thesis addresses an adaptive higher-order method based on a Geometry Independent Field approximaTion (GIFT) of polynomial/rational splines over hierarchical T-meshes (PHT/RHT-splines). In isogeometric analysis, the basis functions used for constructing geometric models in computer-aided design (CAD) are also employed to discretize the partial differential equations (PDEs) for numerical analysis. Non-uniform rational B-Splines (NURBS) are the most commonly used basis functions in CAD. However, they may not be ideal for numerical analysis where local refinement is required. The alternative GIFT method deploys different splines for the geometry and the numerical analysis. NURBS are utilized for the geometry representation, while for the field solution, PHT/RHT-splines are used. PHT-splines not only inherit the useful properties of B-splines and NURBS, but also possess the capabilities of local refinement and hierarchical structure. The smooth basis function properties of PHT-splines make them suitable for analysis purposes. While most problems considered in isogeometric analysis can be solved efficiently when the solution is smooth, many non-trivial problems have rough solutions. For example, this can be caused by the presence of re-entrant corners in the domain. For such problems, a tensor-product basis (as in the case of NURBS) is less suitable for resolving the singularities that appear, since refinement propagates throughout the computational domain. Hierarchical bases and local refinement (as in the case of PHT-splines) allow for a more efficient way to resolve these singularities by adding more degrees of freedom where they are necessary. In order to drive the adaptive refinement, an efficient recovery-based error estimator is proposed in this thesis.
The estimator produces a recovery solution which is a more accurate approximation than the computed numerical solution. Several two- and three-dimensional numerical investigations with PHT-splines of higher order and continuity prove that the proposed method is capable of obtaining results with higher accuracy, better convergence, fewer degrees of freedom and less computational cost than NURBS for smooth solution problems. The adaptive GIFT method utilizing PHT-splines with the recovery-based error estimator is used for solutions with discontinuities or singularities, where adaptive local refinement in particular domains of interest achieves higher accuracy with fewer degrees of freedom. This method also proves that it can handle complicated multi-patch domains for two- and three-dimensional problems, outperforming uniform refinement in terms of degrees of freedom and computational cost. T2 - Die isogeometrische Analysis basierend auf der geometrieunabhängigen Feldnäherung (GIFT) und polynomialen Splines über hierarchischen T-Netzen KW - Finite-Elemente-Methode KW - Isogeometric Analysis KW - Geometry Independent Field Approximation KW - Polynomial Splines over Hierarchical T-meshes KW - Recovery Based Error Estimator Y1 - 2019 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20191129-40376 ER - TY - THES A1 - Wang, Jiasheng T1 - Lebensdauerabschätzung von Bauteilen aus globularem Grauguss auf der Grundlage der lokalen gießprozessabhängigen Werkstoffzustände N2 - The aim of this work is to achieve a possible improvement in the quality of fatigue life prediction for cast iron materials with spheroidal graphite, taking into account the casting processes of different manufacturers. In a first step, specimens of GJS500 and GJS600 were cast by several casting suppliers and fatigue test specimens were produced from them. In total, fatigue strength values of the individual cast specimens as well as of specimens taken from the component were determined for various casting manufacturers worldwide, either by direct fatigue tests or from a collection of operational fatigue tests. Thanks to the metallographic work and a correlation analysis, three essential parameters for determining the local fatigue strength could be identified: 1. the static strength, 2. the ferrite and pearlite content of the microstructure, and 3. the number of graphite nodules per unit area. Based on these findings, a new strength ratio diagram (the so-called Sd/Rm-SG diagram) was developed. Above all, this new methodology is intended to make it possible to better predict the fatigue strength of components on the basis of local tensile strength values and microstructures that are either measured or predicted by a casting simulation. With the help of the experiments and the casting simulation, it was possible to further develop different methods of fatigue life prediction that take the manufacturing processes into account. KW - Grauguss KW - Lebensdauerabschätzung KW - Werkstoffprüfung Y1 - 2022 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20220111-45542 ER - TY - THES A1 - Ren, Huilong T1 - Dual-horizon peridynamics and Nonlocal operator method N2 - In the last two decades, Peridynamics (PD) has attracted much attention in the field of fracture mechanics. One key feature of PD is the nonlocality, which is quite different from the ideas in conventional methods such as FEM and meshless methods. However, conventional PD suffers from problems such as a constant horizon, explicit algorithms and hourglass modes.
In this thesis, by examining the nonlocality with scrutiny, we proposed several new concepts such as the dual-horizon (DH) in PD, the dual-support (DS) in smoothed particle hydrodynamics (SPH), nonlocal operators and the operator energy functional. The conventional PD (SPH) is incorporated in the DH-PD (DS-SPH), which can adopt an inhomogeneous discretization and inhomogeneous support domains. The DH-PD (DS-SPH) can be viewed as a fundamental improvement on the conventional PD (SPH). The dual formulation of PD and SPH allows h-adaptivity while satisfying the conservation of linear momentum, angular momentum and energy. By developing the concept of nonlocality further, we introduced the nonlocal operator method as a generalization of DH-PD. Combined with the energy functionals of various physical models, the nonlocal forms based on the dual-support concept are derived. In addition, the variation of the energy functional allows an implicit formulation of the nonlocal theory. At last, we developed the higher order nonlocal operator method, which is capable of solving higher order partial differential equations on arbitrary domains in higher dimensional space. Since the concepts are developed gradually, we described our findings chronologically. In chapter 2, we developed a DH-PD formulation that includes varying horizon sizes and solves the "ghost force" issue. The concept of dual-horizon considers the unbalanced interactions between particles with different horizon sizes. The present formulation fulfills both the balance of linear momentum and the balance of angular momentum exactly with arbitrary particle discretization. All three peridynamic formulations, namely bond-based, ordinary state-based and non-ordinary state-based peridynamics, can be implemented within the DH-PD framework. A simple adaptive refinement procedure (h-adaptivity) is proposed, reducing the computational cost. Both two- and three-dimensional examples, including the Kalthoff-Winkler experiment and a plate with branching cracks, are tested to demonstrate the capability of the method. In chapter 3, a nonlocal operator method (NOM) based on the variational principle is proposed for the solution of the waveguide problem in computational electromagnetics. Common differential operators as well as the variational forms are defined within the context of nonlocal operators. The present nonlocal formulation allows the assembling of the tangent stiffness matrix with ease, which is necessary for the eigenvalue analysis of the waveguide problem. The present formulation is applied to solve the 1D Schrödinger equation, a 2D electrostatic problem and the differential electromagnetic vector wave equations based on electric fields. In chapter 4, a general nonlocal operator method is proposed which is applicable for solving partial differential equations (PDEs) of mechanical problems. The nonlocal operator can be regarded as the integral form, "equivalent" to the differential form in the sense of a nonlocal interaction model. The variation of a nonlocal operator plays a role equivalent to that of the derivatives of the shape functions in meshless methods or in the finite element method. Based on the variational principle, the residual and the tangent stiffness matrix can be obtained with ease. The nonlocal operator method is also enhanced here with an operator energy functional to satisfy the linear consistency of the field.
A highlight of the present method is that the functional derived from the nonlocal operators converts the construction of the residual and the stiffness matrix into a series of matrix multiplications using the predefined nonlocal operators. The nonlocal strong forms of different functionals can be obtained easily via the concept of support and dual-support. Several numerical examples of different types of PDEs are presented. In chapter 5, we extended the NOM to a higher order scheme by using a higher order Taylor series expansion of the unknown field. Such a higher order scheme improves the original NOM in chapters 3 and 4, which can only achieve first-order convergence. The higher order NOM obtains all partial derivatives up to a specified maximal order simultaneously without resorting to shape functions. The functional based on the nonlocal operators converts the construction of the residual and the stiffness matrix into a series of matrix multiplications on the nonlocal operator matrix. Several numerical examples solved in strong form or weak form are presented to show the capabilities of this method. In chapter 6, the NOM proposed as a particle-based method in chapters 3, 4 and 5 has difficulty in accurately imposing boundary conditions of various orders. In this chapter, we converted the particle-based NOM into a scheme with interpolation property. The new scheme describes partial derivatives of various orders at a point by the nodes in the support and takes advantage of the background mesh for numerical integration. The boundary conditions are enforced via the modified variational principle. The particle-based NOM can be viewed as a special case of the NOM with interpolation property when nodal integration is used. The scheme based on numerical integration greatly improves the stability of the method; as a consequence, the operator energy functional in the particle-based NOM is not required. We demonstrated the capabilities of the current method by solving gradient solid problems and comparing the numerical results with the available exact solutions. In chapter 7, we derived the DS-SPH for solids within the framework of the variational principle. The tangent stiffness matrix of SPH can be obtained with ease and can serve as the basis for the present implicit SPH. We proposed an hourglass energy functional, which allows the direct derivation of the hourglass force and the hourglass tangent stiffness matrix. The dual-support is involved in all derivations based on variational principles and is automatically satisfied in the assembling of the stiffness matrix. The implementation of the stiffness matrix comprises two steps: the nodal assembly based on the deformation gradient and the global assembly over all nodes. Several numerical examples are presented to validate the method. KW - Peridynamik KW - Variational principle KW - weighted residual method KW - gradient elasticity KW - phase field fracture method KW - smoothed particle hydrodynamics KW - numerical methods KW - PDEs Y1 - 2021 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20210412-44039 ER - TY - THES A1 - Yousefi, Hassan T1 - Discontinuous propagating fronts: linear and nonlinear systems N2 - The aim of this study is the control of spurious oscillations developing around discontinuous solutions of both linear and non-linear wave equations or hyperbolic partial differential equations (PDEs). The equations include both first-order and second-order (wave) hyperbolic systems.
In these systems, even smooth initial conditions or smoothly varying source (load) terms can lead to discontinuous propagating solutions (fronts). For the first order hyperbolic PDEs, the concept of central high resolution schemes is integrated with multiresolution-based adaptation to properly capture both discontinuous propagating fronts and the effects of fine-scale responses on those of larger scales in a multiscale manner. This integration leads to using central high resolution schemes on non-uniform grids; however, such a simulation is unstable, as the central schemes were originally developed to work properly on uniform cells/grids. Hence, the main concern is the stable collaboration of central schemes and multiresolution-based cell adapters. Regarding central schemes, the considered approaches are: 1) Second order central and central-upwind schemes; 2) Third order central schemes; 3) Third and fourth order central weighted non-oscillatory schemes (central-WENO or CWENO); 4) Piece-wise parabolic methods (PPMs) obtained with two different local stencils. For these methods, the corresponding (nonlinear) stability conditions are studied and modified as well. Based on these stability conditions, several limiters are modified/developed as follows: 1) Several second-order limiters with the total variation diminishing (TVD) feature, 2) Second-order uniformly high order accurate non-oscillatory (UNO) limiters, 3) Two third-order nonlinear scaling limiters, 4) Two new limiters for PPMs. Numerical results show that adaptive solvers lead to cost-effective computations (e.g., in some 1-D problems, the number of adapted grid points is less than 200 during the simulations, while in the uniform-grid case 2049 points are needed to reach the same accuracy). Also, in some cases, it is confirmed that fine-scale responses have considerable effects on higher scales. In the numerical simulation of nonlinear first order hyperbolic systems, the two main concerns are convergence and uniqueness. The former is important due to the development of spurious oscillations, numerical dispersion and numerical dissipation. Convergence of a numerical solution does not guarantee that it is the physical/real one (the uniqueness feature). Indeed, a nonlinear system can converge to several numerical results (all of which are mathematically valid). In this work, convergence and uniqueness are studied directly on non-uniform grids/cells by the concepts of local numerical truncation error and numerical entropy production, respectively. Both of these concepts have also been used for cell/grid adaptation, so their performance is also compared with that of the multiresolution-based method. Several 1-D and 2-D numerical examples are examined to confirm the efficiency of the adaptive solver. The examples involve problems with convex and non-convex fluxes. In the latter case, due to the development of complex waves, proper capturing of the physical solutions needs more attention. For this purpose, the use of method-adaptation seems to be essential (in parallel to the cell/grid adaptation). This new type of adaptation is also performed in the framework of the multiresolution analysis. Regarding second order hyperbolic PDEs (mechanical waves), the regularization concept is used to cure artificial (numerical) oscillation effects, especially for high-gradient or discontinuous solutions. There, oscillations are removed by the regularization concept acting as a post-processor.
Simulations are performed directly on the second-order form of the wave equations. It should be mentioned that it is possible to rewrite second order wave equations as a system of first-order equations and then simulate the new system with high resolution schemes. However, this approach leads to an increased number of variables (especially for 3D problems). The numerical discretization is performed by compact finite difference (FD) formulations with desired features, e.g., methods with spectral-like or optimized-error properties. These FD methods are developed to handle high frequency waves (such as waves near earthquake sources). The performance of several regularization approaches is studied (both theoretically and numerically); finally, a proper regularization approach controlling the Gibbs phenomenon is recommended. At the end, some numerical results are provided to confirm the efficiency of the numerical solvers enhanced by the regularization concept. In this part, shock-like responses due to local and abrupt changes of physical properties, as well as stress wave propagation in stochastic-like domains, are studied. KW - Partielle Differentialgleichung KW - Adaptives System KW - Wavelet KW - Tichonov-Regularisierung KW - Hyperbolic PDEs KW - Adaptive central high resolution schemes KW - Wavelet based adaptation KW - Tikhonov regularization Y1 - 2022 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20220922-47178 ER - TY - JOUR A1 - Lizarazu, Jorge A1 - Harirchian, Ehsan A1 - Shaik, Umar Arif A1 - Shareef, Mohammed A1 - Antoni-Zdziobek, Annie A1 - Lahmer, Tom T1 - Application of machine learning-based algorithms to predict the stress-strain curves of additively manufactured mild steel out of its microstructural characteristics JF - Results in Engineering N2 - The study presents a Machine Learning (ML)-based framework designed to forecast the stress-strain relationship of arc-direct energy deposited mild steel. Based on microstructural characteristics previously extracted using microscopy and X-ray diffraction, approximately 1000 new parameter sets are generated by applying the Latin Hypercube Sampling Method (LHSM). For each parameter set, a Representative Volume Element (RVE) is synthetically created via Voronoi Tessellation. Input raw data for ML-based algorithms comprises these parameter sets or RVE-images, while output raw data includes their corresponding stress-strain relationships calculated after a Finite Element (FE) procedure. Input data undergoes preprocessing involving standardization, feature selection, and image resizing. Similarly, the stress-strain curves, initially unsuitable for training traditional ML algorithms, are preprocessed using cubic splines and occasionally Principal Component Analysis (PCA). The latter part of the study focuses on employing multiple ML algorithms, utilizing two main models. The first model predicts stress-strain curves based on microstructural parameters, while the second model does so solely from RVE images. The most accurate prediction yields a Root Mean Squared Error of around 5 MPa, approximately 1% of the yield stress. This outcome suggests that ML models offer precise and efficient methods for characterizing dual-phase steels, establishing a framework for accurate results in material analysis.
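As a side note to the Latin Hypercube Sampling step named in the abstract above, the short Python sketch below shows how such space-filling parameter sets can be generated with SciPy's quasi-Monte Carlo module; the parameter names and bounds are hypothetical placeholders, not the microstructural ranges used in the study.

```python
import numpy as np
from scipy.stats import qmc

# Hypothetical microstructural parameters and bounds (illustration only).
names = ["mean_grain_size_um", "martensite_fraction", "hardening_exponent"]
l_bounds = [2.0, 0.05, 0.05]
u_bounds = [30.0, 0.40, 0.30]

# Latin Hypercube Sampling: ~1000 space-filling points in the unit hypercube,
# then scaled to the physical parameter ranges.
sampler = qmc.LatinHypercube(d=len(names), seed=42)
samples = qmc.scale(sampler.random(n=1000), l_bounds, u_bounds)

print(samples.shape)                              # (1000, 3)
print(dict(zip(names, np.round(samples[0], 3))))  # one sampled parameter set
```

In a workflow like the one described in the abstract, each sampled row would then parameterize one synthetic RVE for the subsequent finite element step.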
KW - Maschinelles Lernen KW - Baustahl KW - Spannungs-Dehnungs-Beziehung KW - Arc-direct energy deposition KW - Mild steel KW - Dual phase steel KW - Stress-strain curve KW - OA-Publikationsfonds2023 Y1 - 2023 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20231207-65028 UR - https://www.sciencedirect.com/science/article/pii/S2590123023007144 VL - 2023 IS - Volume 20 (2023) SP - 1 EP - 12 PB - Elsevier CY - Amsterdam ER - TY - THES A1 - Alkam, Feras T1 - Vibration-based Monitoring of Concrete Catenary Poles using Bayesian Inference N2 - This work presents a robust status monitoring approach for detecting damage in cantilever structures based on logistic functions. Also, a stochastic damage identification approach based on changes of eigenfrequencies is proposed. The proposed algorithms are verified using catenary poles of electrified railway tracks. The proposed damage features overcome the limitation of the frequency-based damage identification methods available in the literature, which are valid for detecting damage in structures up to Level 1 only. Changes in the eigenfrequencies of cantilever structures are enough to identify possible local damage at Level 3, i.e., to cover damage detection, localization, and quantification. The proposed algorithms identified the damage with relatively small errors, even at a high noise level. KW - Parameteridentifikation KW - Bayesian Inference, Uncertainty Quantification KW - Inverse Problems KW - Damage Identification KW - Concrete catenary pole KW - SHM KW - Inverse Probleme KW - Bayes’schen Inferenz KW - Unschärfequantifizierung KW - Schadenerkennung KW - Oberleitungsmasten Y1 - 2021 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20210526-44338 UR - https://asw-verlage.de/katalog/vibration_based_monitoring_of_co-2363.html VL - 2021 PB - Bauhaus-Universitätsverlag CY - Weimar ER - TY - THES A1 - Goswami, Somdatta T1 - Phase field modeling of fracture with isogeometric analysis and machine learning methods N2 - This thesis presents the advances and applications of phase field modeling in fracture analysis. In this approach, the sharp crack surface topology in a solid is approximated by a diffusive crack zone governed by a scalar auxiliary variable. The uniqueness of phase field modeling is that the crack paths are automatically determined as part of the solution and no interface tracking is required. The damage parameter varies continuously over the domain. But this flexibility comes with associated difficulties: (1) a very fine spatial discretization is required to represent sharp local gradients correctly; (2) the fine discretization results in high computational cost; (3) the computation of higher-order derivatives is needed for improved convergence rates; and (4) the curse of dimensionality arises in conventional numerical integration techniques. As a consequence, the practical applicability of phase field models is severely limited. The research presented in this thesis addresses the difficulties of the conventional numerical integration techniques for phase field modeling in quasi-static brittle fracture analysis. The first method relies on polynomial splines over hierarchical T-meshes (PHT-splines) in the framework of isogeometric analysis (IGA). An adaptive h-refinement scheme is developed based on the variational energy formulation of phase field modeling.
The fourth-order phase field model provides increased regularity in the exact solution of the phase field equation and improved convergence rates for numerical solutions on a coarser discretization, compared to the second-order model. However, second-order derivatives of the phase field are required in the fourth-order model. Hence, at least C1 continuous basis functions are essential, which is achieved using hierarchical cubic B-splines in IGA. PHT-splines enable the refinement to remain local at singularities and high gradients, consequently reducing the computational cost greatly. Unfortunately, when modeling complex geometries, multiple parameter spaces (patches) are joined together to describe the physical domain and there is typically a loss of continuity at the patch boundaries. This decrease of smoothness is dictated by the geometry description, where C0 parameterizations are normally used to deal with kinks and corners in the domain. Hence, the application of the fourth-order model is severely restricted. To overcome the high computational cost for the second-order model, we develop a dual-mesh adaptive h-refinement approach. This approach uses a coarser discretization for the elastic field and a finer discretization for the phase field. Independent refinement strategies have been used for each field. The next contribution is based on physics informed deep neural networks. The network is trained based on the minimization of the variational energy of the system described by general non-linear partial differential equations while respecting any given law of physics, hence the name physics informed neural network (PINN). The developed approach needs only a set of points to define the geometry, contrary to the conventional mesh-based discretization techniques. The concept of "transfer learning" is integrated with the developed PINN approach to improve the computational efficiency of the network at each displacement step. This approach allows numerically stable crack growth even with larger displacement steps. An adaptive h-refinement scheme based on the generation of more quadrature points in the damage zone is developed in this framework. For all the developed methods, displacement-controlled loading is considered. The accuracy and the efficiency of both methods are studied numerically, showing that the developed methods are powerful and computationally efficient tools for accurately predicting fractures. T3 - ISM-Bericht // Institut für Strukturmechanik, Bauhaus-Universität Weimar - 2021,1 KW - Phasenfeldmodell KW - Neuronales Netz KW - Sprödbruch KW - Isogeometric Analysis KW - Physics informed neural network KW - phase field KW - deep neural network KW - brittle fracture Y1 - 2021 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20210304-43841 ER - TY - JOUR A1 - Lashkar-Ara, Babak A1 - Kalantari, Niloofar A1 - Sheikh Khozani, Zohreh A1 - Mosavi, Amir T1 - Assessing Machine Learning versus a Mathematical Model to Estimate the Transverse Shear Stress Distribution in a Rectangular Channel JF - Mathematics N2 - One of the most important subjects of hydraulic engineering is the reliable estimation of the transverse distribution of bed and wall shear stresses in a rectangular channel. This study makes use of the Tsallis entropy, genetic programming (GP) and adaptive neuro-fuzzy inference system (ANFIS) methods to assess the shear stress distribution (SSD) in the rectangular channel.
To evaluate the results of the Tsallis entropy, GP and ANFIS models, laboratory observations were used in which shear stress was measured using an optimized Preston tube. This was then used to measure the SSD for various aspect ratios in the rectangular channel. To investigate the shear stress percentage, 10 data series with a total of 112 different data points were used. The results of the sensitivity analysis show that the most influential parameter for the SSD in a smooth rectangular channel is the dimensionless parameter B/H, where B is the transverse coordinate and H is the flow depth. With the parameters (b/B), (B/H) for the bed and (z/H), (B/H) for the wall as inputs, the GP model performed better than the other models. Based on the analysis, it can be concluded that the use of GP and ANFIS algorithms is more effective in estimating shear stress in smooth rectangular channels than the Tsallis entropy-based equations. KW - Maschinelles Lernen KW - smooth rectangular channel KW - Tsallis entropy KW - genetic programming KW - artificial intelligence KW - machine learning KW - big data KW - computational hydraulics Y1 - 2021 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20210504-44197 UR - https://www.mdpi.com/2227-7390/9/6/596 VL - 2021 IS - Volume 9, Issue 6, Article 596 PB - MDPI CY - Basel ER - TY - JOUR A1 - Harirchian, Ehsan A1 - Lahmer, Tom A1 - Rasulzade, Shahla T1 - Earthquake Hazard Safety Assessment of Existing Buildings Using Optimized Multi-Layer Perceptron Neural Network JF - Energies N2 - The latest earthquakes have proven that several existing buildings, particularly in developing countries, are not secure against earthquake damage. A variety of statistical and machine-learning approaches have been proposed to identify vulnerable buildings for the prioritization of retrofitting. The present work aims to investigate earthquake susceptibility through the combination of six building performance variables that can be used to obtain an optimal prediction of the damage state of reinforced concrete buildings using an artificial neural network (ANN). In this regard, a multi-layer perceptron network is trained and optimized using a database of 484 damaged buildings from the Düzce earthquake in Turkey. The results demonstrate the feasibility and effectiveness of the selected ANN approach to classify concrete structural damage that can be used as a preliminary assessment technique to identify vulnerable buildings in disaster risk-management programs. KW - Erdbeben KW - Maschinelles Lernen KW - earthquake damage KW - seismic vulnerability KW - artificial neural network KW - OA-Publikationsfonds2020 Y1 - 2020 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20200504-41575 UR - https://www.mdpi.com/1996-1073/13/8/2060/htm VL - 2020 IS - Volume 13, Issue 8, 2060 PB - MDPI CY - Basel ER - TY - THES A1 - Hossein Nezhad Shirazi, Ali T1 - Multi-Scale Modeling of Lithium ion Batteries: a thermal management approach and molecular dynamic studies N2 - Rechargeable lithium ion batteries (LIBs) play a very significant role in power supply and storage. In recent decades, LIBs have attracted tremendous attention in mobile communication, portable electronics, and electric vehicles. Furthermore, global warming has become a worldwide issue due to the ongoing production of greenhouse gases. This motivates solutions such as renewable sources of energy. Solar and wind energy are the most important among the renewable energy sources.
As technology progresses, they will require batteries to store the produced power in order to balance power generation and consumption. Nowadays, rechargeable batteries such as LIBs are considered one of the best solutions. They provide high specific energy and high rate performance, while their rate of self-discharge is low. The performance of LIBs can be improved through the modification of battery characteristics. The size of the solid particles in the electrodes can impact the specific energy and the cyclability of batteries. It can improve the amount of lithium content in the electrode, which is a vital parameter for the capacity and capability of a battery. There exist different sources of heat generation in LIBs, such as the heat produced during electrochemical reactions and the internal resistance of the battery. The size of the electrode's electroactive particles can directly affect the heat produced in the battery. It will be shown that smaller solid particle sizes enhance the thermal characteristics of LIBs. Thermal issues such as overheating, temperature maldistribution in the battery, and thermal runaway have confined the applications of LIBs. Such thermal challenges reduce the life cycle of LIBs. As well, they may lead to dangerous conditions such as fire or even explosion in batteries. However, recent advances in the fabrication of advanced materials such as graphene and carbon nanotubes with extraordinary thermal conductivity and electrical properties offer new opportunities to enhance their performance. Since experimental works are expensive, our objective is to use computational methods to investigate the thermal issues in LIBs. Dissipation of the heat produced in the battery can improve the cyclability and specific capacity of LIBs. In real applications, packs of LIBs consist of several battery cells that are used as the power source. Therefore, it is worthwhile to investigate the thermal characteristics of battery packs during their charging/discharging cycles at different applied current rates. To remove the heat produced in batteries, they can be surrounded by materials with high thermal conductivity. Paraffin wax absorbs a large amount of energy since it has a high latent heat. The absorption of high amounts of energy occurs at a constant temperature without phase change. As well, the thermal conductivity of paraffin can be magnified with nano-materials such as graphene, CNTs, and fullerenes to form a nano-composite medium. Improving the thermal conductivity of LIBs increases the heat dissipation from batteries, which is a vital issue in battery thermal management systems. The application of two-dimensional (2D) materials has been on the rise since the exfoliation of graphene from bulk graphite. 2D materials are single-layered structures of nanometer-scale thickness which show superior thermal, mechanical, and optoelectronic properties. They are potential candidates for energy storage and supply, particularly in lithium ion batteries as electrode material. The high thermal conductivity of graphene and graphene-like materials can play a significant role in the thermal management of batteries. However, defects always exist in nano-materials since there is no ideal fabrication process. One of the most important defects in materials is the nano-crack, which can dramatically weaken the mechanical properties of the materials. Newly synthesized crystalline carbon nitride with the stoichiometry of C3N has attracted much attention due to its extraordinary mechanical and thermal properties.
Another nanomaterial is phagraphene, whose anisotropic mechanical characteristics are ideal for the production of nanocomposites. It shows ductile fracture behavior when subjected to uniaxial loading. It is worthwhile to investigate its thermo-mechanical properties in the pristine and defective states. We hope that the findings of this work will not only be useful for experimental and theoretical research but also help in the design of advanced electrodes for LIBs. KW - Akkumulator KW - Battery KW - Batterie Y1 - 2020 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20200214-40986 ER - TY - JOUR A1 - Harirchian, Ehsan A1 - Lahmer, Tom A1 - Buddhiraju, Sreekanth A1 - Mohammad, Kifaytullah A1 - Mosavi, Amir T1 - Earthquake Safety Assessment of Buildings through Rapid Visual Screening JF - Buildings N2 - Earthquakes are among the most devastating natural disasters, causing severe economic, environmental, and social destruction. Earthquake safety assessment and building hazard monitoring can highly contribute to urban sustainability through identification and insight into optimum materials and structures. While the vulnerability of structures mainly depends on the structural resistance, the safety assessment of buildings can be highly challenging. In this paper, we consider the Rapid Visual Screening (RVS) method, which is a qualitative procedure for estimating structural scores for buildings and is suitable for medium- to high-seismicity cases. This paper presents an overview of the common RVS methods, i.e., FEMA P-154, IITK-GGSDMA, and EMPI. To examine their accuracy and validity, a practical comparison is performed between their assessments and the observed damage of reinforced concrete buildings from a street survey in the Bingöl region, Turkey, after the 1 May 2003 earthquake. The results demonstrate that the application of RVS methods is a vital tool for preliminary damage estimation. Furthermore, the comparative analysis showed that FEMA P-154 produces an assessment that overestimates damage states and is not economically viable, while EMPI and IITK-GGSDMA provide more accurate and practical estimations, respectively. KW - Maschinelles Lernen KW - Machine learning KW - Erdbeben KW - buildings KW - earthquake safety assessment KW - earthquake KW - extreme events KW - seismic assessment KW - natural hazard KW - mitigation KW - rapid visual screening Y1 - 2020 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20200331-41153 UR - https://www.mdpi.com/2075-5309/10/3/51 VL - 2020 IS - Volume 10, Issue 3 PB - MDPI ER - TY - JOUR A1 - Harirchian, Ehsan A1 - Lahmer, Tom T1 - Improved Rapid Visual Earthquake Hazard Safety Evaluation of Existing Buildings Using a Type-2 Fuzzy Logic Model JF - Applied Sciences N2 - Rapid Visual Screening (RVS) is a procedure that estimates structural scores for buildings and prioritizes their retrofit and upgrade requirements. Despite the speed and simplicity of RVS, many of the collected parameters are non-commensurable and include subjectivity due to visual observations. This might cause uncertainties in the evaluation, which emphasizes the use of a fuzzy-based method. This study aims to propose a novel RVS methodology based on the interval type-2 fuzzy logic system (IT2FLS) to set the priority of vulnerable buildings for detailed assessment while covering uncertainties and minimizing their effects during evaluation.
The proposed method estimates the vulnerability of a building in terms of a Damage Index, considering the number of stories, age of the building, plan irregularity, vertical irregularity, building quality, and peak ground velocity as inputs, with a single output variable. The applicability of the proposed method has been investigated using a post-earthquake damage database of reinforced concrete buildings from the Bingöl and Düzce earthquakes in Turkey. KW - Fuzzy-Logik KW - Erdbeben KW - Fuzzy Logic KW - Rapid Visual Screening KW - Vulnerability assessment KW - OA-Publikationsfonds2020 Y1 - 2020 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20200331-41161 UR - https://www.mdpi.com/2076-3417/10/7/2375 VL - 2020 IS - Volume 10, Issue 3, 2375 PB - MDPI CY - Basel ER - TY - THES A1 - Radmard Rahmani, Hamid T1 - Artificial Intelligence Approach for Seismic Control of Structures N2 - In the first part of this research, the utilization of tuned mass dampers in the vibration control of tall buildings during earthquake excitations is studied. The main issues, such as optimizing the parameters of the dampers and studying the effects of the frequency content of the target earthquakes, are addressed. The non-dominated sorting genetic algorithm method is improved by upgrading its genetic operators, and is utilized to develop a framework for determining the optimum placement and parameters of dampers in tall buildings. A case study is presented in which the optimal placement and properties of dampers are determined for a model of a tall building under different earthquake excitations through computer simulations. In the second part, a novel framework for the brain learning-based intelligent seismic control of smart structures is developed. In this approach, a deep neural network learns how to improve structural responses during earthquake excitations using feedback control. The reinforcement learning method is improved and utilized to develop a framework for training the deep neural network as an intelligent controller. The efficiency of the developed framework is examined through two case studies, including a single-degree-of-freedom system and a high-rise building under different earthquake excitation records. The results show that the controller gradually develops an optimum control policy to reduce the vibrations of a structure under an earthquake excitation through a cyclical process of actions and observations. It is shown that the controller efficiently improves the structural responses under new earthquake excitations for which it was not trained. Moreover, it is shown that the controller has a stable performance under uncertainties. KW - Erdbeben KW - seismic control KW - tuned mass damper KW - reinforcement learning KW - earthquake KW - machine learning KW - Operante Konditionierung KW - structural control Y1 - 2020 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20200417-41359 ER - TY - THES A1 - Rabizadeh, Ehsan T1 - Goal-oriented A Posteriori Error Estimation and Adaptive Mesh Refinement in 2D/3D Thermoelasticity Problems T1 - Zielorientierte a posteriori Fehlerabschätzung und adaptive Netzverfeinerung bei 2D- und 3D-thermoelastischen Problemen N2 - In recent years, substantial attention has been devoted to thermoelastic multifield problems and their numerical analysis.
Thermoelasticity is one of the important categories of multifield problems which deals with the effect of mechanical and thermal disturbances on an elastic body. In other words, thermoelasticity encompasses the phenomena that describe the elastic and thermal behavior of solids and their interactions under thermo-mechanical loadings. Since providing an analytical solution for general coupled thermoelasticity problems is mathematically complicated, the development of alternative numerical solution techniques seems essential. Due to the nature of numerical analysis methods, the presence of error in the results is inevitable; therefore, in any numerical simulation, the main concern is the accuracy of the approximation. There are different error estimation (EE) methods to assess the overall quality of numerical approximation. In many real-life numerical simulations, not only the overall error, but also the local error or error in a particular quantity of interest is of main interest. The error estimation techniques which are developed to evaluate the error in the quantity of interest are known as “goal-oriented” error estimation (GOEE) methods. This project, for the first time, investigates the classical a posteriori error estimation and goal-oriented a posteriori error estimation in 2D/3D thermoelasticity problems. Generally, the a posteriori error estimation techniques can be categorized into two major branches of recovery-based and residual-based error estimators. In this research, the application of both recovery- and residual-based error estimators in thermoelasticity is studied. Moreover, in order to reduce the error in the quantity of interest efficiently and optimally in 2D and 3D thermoelastic problems, goal-oriented adaptive mesh refinement is performed. As the first application category, the error estimation in classical thermoelasticity (CTE) is investigated. In the first step, an rh-adaptive thermo-mechanical formulation based on goal-oriented error estimation is proposed. The developed goal-oriented error estimation relies on different stress recovery techniques, i.e., the superconvergent patch recovery (SPR), L2-projection patch recovery (L2-PR), and weighted superconvergent patch recovery (WSPR). Moreover, a new adaptive refinement strategy (ARS) is presented that minimizes the error in a quantity of interest and refines the discretization such that the error is equally distributed in the refined mesh. The method is validated by numerous numerical examples where an analytical solution or reference solution is available. After investigating error estimation in classical thermoelasticity and evaluating the quality of the presented error estimators, we extended the application of the developed goal-oriented error estimation and the associated adaptive refinement technique to the classical fully coupled dynamic thermoelasticity. In this part, we present an adaptive method for coupled dynamic thermoelasticity problems based on goal-oriented error estimation. We use dimensionless variables in the finite element formulation, and for the time integration we employ the acceleration-based Newmark-β method. In this part, the SPR, L2-PR, and WSPR recovery methods are exploited to estimate the error in the quantity of interest (QoI). By using adaptive refinement in space, the error in the quantity of interest is minimized. Therefore, the discretization is refined such that the error is equally distributed in the refined mesh. We demonstrate the efficiency of this method by numerous numerical examples.
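For illustration of the time-stepping scheme mentioned above, a generic displacement-form Newmark-β step for a linear system M a + C v + K u = f is sketched below in Python (a textbook sketch with the common choices β = 1/4, γ = 1/2; the matrices and settings are placeholders and not taken from the cited thesis, which employs an acceleration-based variant):

import numpy as np

def newmark_step(M, C, K, f_next, u, v, a, dt, beta=0.25, gamma=0.5):
    """Advance M*a + C*v + K*u = f by one step of the Newmark-beta method."""
    # Effective stiffness and effective load at t_{n+1}
    K_eff = K + gamma / (beta * dt) * C + 1.0 / (beta * dt**2) * M
    f_eff = (f_next
             + M @ (u / (beta * dt**2) + v / (beta * dt) + (1.0 / (2 * beta) - 1.0) * a)
             + C @ (gamma / (beta * dt) * u + (gamma / beta - 1.0) * v
                    + dt * (gamma / (2 * beta) - 1.0) * a))
    u_next = np.linalg.solve(K_eff, f_eff)
    a_next = (u_next - u) / (beta * dt**2) - v / (beta * dt) - (1.0 / (2 * beta) - 1.0) * a
    v_next = v + dt * ((1.0 - gamma) * a + gamma * a_next)
    return u_next, v_next, a_next

# Tiny 2-DOF example with placeholder matrices and a constant load
M = np.eye(2)
K = np.array([[4.0, -2.0], [-2.0, 4.0]])
C = 0.02 * K
f = np.array([1.0, 0.0])
u = np.zeros(2); v = np.zeros(2)
a = np.linalg.solve(M, f - C @ v - K @ u)   # consistent initial acceleration
for n in range(100):
    u, v, a = newmark_step(M, C, K, f, u, v, a, dt=0.05)
print("displacement after 100 steps:", u)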
After studying the recovery-based error estimators, we investigated the residual-based error estimation in thermoelasticity. In the last part of this research, we present a 3D adaptive method for thermoelastic problems based on goal-oriented error estimation where the error is measured with respect to a pointwise quantity of interest. We developed a method for a posteriori error estimation and mesh adaptation based on dual weighted residual (DWR) method relying on the duality principles and consisting of an adjoint problem solution. Here, we consider the application of the derived estimator and mesh refinement to two-/three-dimensional (2D/3D) thermo-mechanical multifield problems. In this study, the goal is considered to be given by singular pointwise functions, such as the point value or point value derivative at a specific point of interest (PoI). An adaptive algorithm has been adopted to refine the mesh to minimize the goal in the quantity of interest. The mesh adaptivity procedure based on the DWR method is performed by adaptive local h-refinement/coarsening with allowed hanging nodes. According to the proposed DWR method, the error contribution of each element is evaluated. In the refinement process, the contribution of each element to the goal error is considered as the mesh refinement criterion. In this study, we substantiate the accuracy and performance of this method by several numerical examples with available analytical solutions. Here, 2D and 3D problems under thermo-mechanical loadings are considered as benchmark problems. To show how accurately the derived estimator captures the exact error in the evaluation of the pointwise quantity of interest, in all examples, considering the analytical solutions, the goal error effectivity index as a standard measure of the quality of an estimator is calculated. Moreover, in order to demonstrate the efficiency of the proposed method and show the optimal behavior of the employed refinement method, the results of different conventional error estimators and refinement techniques (e.g., global uniform refinement, Kelly, and weighted Kelly techniques) are used for comparison. N2 - Einleitung und Motivation: 1- Im Laufe der letzten Jahrzehnte wurde den Mehrfeldproblemen und ihrer numerischen Analyse große Aufmerksamkeit gewidmet. Bei Mehrfeldproblemen wird die Wechselwirkung zwischen verschiedenen Feldern wie elastischen, elektrischen, magnetischen, chemischen oder thermischen Feldern untersucht. Eine wichtige Kategorie von Mehrfeldproblemen ist die Thermoelastizität. In der Thermoelastizität werden neben dem mechanischen Feld (Verschiebungen) auch das thermische Feld (Temperatur) und deren Auswirkungen aufeinander untersucht. 2- In fortgeschrittenen und sensible Anwendungen mit Temperaturänderung (z. B. LNG-, CNG- oder LPG-Speichertanks bei Sonnentemperatur im Sommer) ist die Elastizitätstheorie, die nur Verschiebungen berücksichtigt, nicht ausreichend. In diesen Fällen ist die Verwendung einer thermoelastischen Formulierung unumgänglich, um zuverlässige Ergebnisse zu erzielen. 3- Da eine analytische Lösung für thermoelastische Probleme sehr selten bestimmbar ist, wird sie durch numerische Methoden ersetzt. Allerdings sind die numerischen Ergebnisse nicht exakt und approximieren nur die exakte Lösung. Daher sind Fehler in den numerischen Ergebnissen unvermeidlich. 4- In jeder numerischen Simulation ist die Genauigkeit der Approximation das Hauptanliegen. 
Daher wurden verschiedene Fehlerschätzungstechniken entwickelt, um den Fehler der numerischen Lösung zu schätzen. Die herkömmlichen Fehlerschätzungsmethoden geben nur einen allgemeinen Überblick über die Gesamtgenauigkeit einer Näherungslösung. Bei vielen realen Problemen ist jedoch anstelle der Gesamtgenauigkeit die örtliche Genauigkeit (z. B. die Genauigkeit an einem bestimmten Punkt) von großem Interesse 5- Herkömmliche Fehlerschätzer berechnen Fehler in gewissen Normen. In der Ingenieurpraxis interessieren allerdings Fehler in anderen Zielgrößen, beispielsweise in der Last-Verformungs-Kurve oder in gewissen Spannungs-komponenten und speziellen Positionen. Dafür wurden sog. zielorientierte Fehlerschätzer entwickelt. 6- Die meisten numerischen Methoden unterteilen das Gebiet in kleine Teile (Element/Zelle), um das Problem zu lösen. Die Verwendung sehr feiner Elemente erhöht die Simulationsgenauigkeit, erhöht aber auch die Rechenzeit drastisch. Dieses Problem wird durch adaptive Methoden (AM) gelöst. AM können die Rechenzeit deutlich verringern. Bei adaptiven Methoden spielt die Fehlerschätzung eine Schlüsselrolle. Die Verfeinerung der Diskretisierung wird von einer Fehlerschätzung der Lösung kontrolliert und gesteuert (Elemente mit einem höheren geschätzten Fehler werden zur Verfeinerung/Aufteilung ausgewählt). Problemstellung und Zielsetzung der Arbeit 7- Die thermoelastischen Probleme können in zwei Hauptgruppen eingeteilt werden: Klassische Thermoelastizität (KTE) und klassische gekoppelte Thermoelastizität (KKTE). In jeder Gruppe werden verschiedene thermoelastische Probleme mit verschiedenen Geometrien, und Rand-/Anfangsbedingungen untersucht. In dieser Untersuchung werden die KTE- und KKTE-Probleme numerisch gelöst und alle numerischen Lösungen durch Fehlerschätzung bewertet. 8- In dieser Arbeit werden die Gesamtgenauigkeit der numerischen Lösung durch herkömmliche globale Fehlerschätzverfahren (auch als recovery-basierte Methoden bekannt) und die Genauigkeit der Lösung in bestimmten Punkten durch neue lokale Methoden (z. B. Dual-gewichtete Residuumsmethode oder DWR-Methode) bewertet. 9- Bei den dynamischen thermoelastischen Problemen ändern sich die Problembedin-gungen und anschließend die Lösung mit der Zeit. Daher werden die Fehler in jedem Zeitschritt geschätzt, um die Genauigkeit über die Zeit zu erhalten. 10- In dieser Dissertation wurde eine neue adaptive Gitter-Verfeinerung (AGV)-Technik entwickelt und für thermoelastische Probleme implementiert. Stand der Wissenschaft 11- Da die Thermoelastizität im Vergleich zu anderen mechanischen Bereichen wie der Elastizität nicht so umfangreich untersucht ist, wurden nur sehr begrenzte Untersuchungen durchgeführt, um die numerischen Fehler abzuschätzen und zu kontrollieren. Alle diese Untersuchungen konzentrierten sich auf die konventionellen Techniken, die nur den Gesamtfehler abschätzen können. Um die lokalen Fehler (wie punktweise Fehler oder Fehler an einem bestimmten Punkt) abzuschätzen, ist die Verwendung der zielorientierten Fehlerschätzungstechniken unvermeidlich. Die Implementierung der recovery-basierten zielorientierten Fehlerschätzung in der Thermoelastizität wird vor diesem Projekt nicht untersucht. 12- Viele numerische Analysen der dynamischen thermoelastischen Probleme basieren auf der Laplace-Transformationsmethode. Bei dieser Methode ist es praktisch nicht möglich, den Fehler in jedem Zeitschritt abzuschätzen. 
Daher wurden bisher die herkömmlichen globalen oder lokalen zielorientierten Fehlerschätzungsverfahren nicht in der dynamischen Thermoelastizität implementiert. 13- Eine der neuesten fortgeschrittenen zielorientierten Fehlerschätzungsmethoden ist die Dual-gewichtete Residuumsmethode (DWR-Methode). Die DWR-Methode, die punktweise Fehler (wie Verschiebungs-, mechanische Spannungs- oder Dehnungsfehler an einem bestimmten Punkt) abschätzen kann, wird bei elastischen Problemen angewendet. Es wurde jedoch kein Versuch unternommen, die DWR-Methode für die thermoelastischen Probleme zu formulieren. 14- In numerischen Simulationen sollte das Gitter verfeinert werden, um den Fehler zu verringern. Viele Verfeinerungstechniken basieren auf den globalen Fehlerschätzern, die versuchen, den Fehler der gesamten Lösung zu reduzieren. Daher sind diese Verfeinerungsmethoden zum reduzieren der lokalen Fehler nicht effizient. Wenn nur die Lösung an bestimmten Punkten interessiert ist und der Fehler dort reduziert werden will, sollten die zielorientierten Verfeinerungsmethoden angewendet werden, die vor dieser Untersuchung nicht in thermoelastischen Problemen entwickelt und implementiert wurden. 15- Die realen Probleme sind in der Regel 3D-Probleme, und die Simulation mit vereinfachten 2D-Fällen zeigt nicht alle Aspekte des Problems. Wie bereits erwähnt, sollten in der numerischen Simulation zur Erhöhung der Genauigkeit Gitterverfeinerungstechniken eingesetzt werden. Die konventionell verfeinerten Gitter, die durch gleichmäßige Aufteilung aller Elemente erreicht werden, erhöhen die Rechenzeit. Diese Simulationszeiterhöhung bei 3D-Problemen ist enorm. Dieses Problem wird durch die Verwendung der intelligenten Verfeinerung anstelle der globalen gleichmäßigen Verfeinerung gelöst. In diesem Projekt wurde erstmals die zielorientierte adaptive Gitterverfeinerung (AGV) bei thermoelastischen 3D-Problemen entwickelt und implementiert. Forschungsmethodik 16- In dieser Arbeit werden die beiden Haupttypen der thermoelastischen Probleme (KTE und KKTE) untersucht. Das System der partiellen Differentialgleichung der Thermoelastizität besteht aus zwei Hauptgleichungen: der herkömmlichen Gleichgewichtsgleichung und der Energiebilanzgleichung. 17- In diesem Projekt wird die Finite-Elemente-Methode (FEM) verwendet, um die Probleme numerisch zu simulieren. 18- Der Computercode zur Lösung von 2D- und 3D-Problemen wurde in den Program-miersprachen MATLAB bzw. C++ entwickelt. Um die Rechenzeit zu verkürzen und die Computerressourcen effizient zu nutzen, wurden Parallelprogrammierungs- und Optimierungsalgorithmen eingesetzt. 19- Nachdem die Probleme numerisch gelöst wurden, wurden zwei verschiedene Arten von globalen und lokalen Fehlerschätzungstechniken implementiert, um den Fehler zu schätzen und die Genauigkeit der Lösung zu messen. Der globale Typ ist die recovery-basierte zielorientierte Fehlerabschätzung, die wiederum in drei Unterkategorien von SPR-, L2-PR- und WSPR-Methoden unterteilt ist. Der lokale Typ ist die dual-gewichtete residuumsbasierte zielorientierte Fehlerabschätzung. Die Formulierung dieser Methoden wurde für thermoelastische Probleme entwickelt. 20- Schließlich wurde nach der Fehlerschätzung die entwickelte AGV-Methode implementiert. 
Wesentliche Ergebnisse und Schlussfolgerungen 21- In diesem Projekt wurde die Fehlerschätzung der Thermoelastizität in den folgenden drei Schritten untersucht: 1- Recovery-basierte Fehlerschätzung in statischen thermo Problemen (KTE), 2- Recovery-basierte Fehlerabschätzung in dynamischen thermo Problemen (KKTE), 3- Residuumsbasierte Fehlerschätzung in statischen thermo Problemen (KTE), 22- Im ersten Schritt, wurde das recovery-basierte Fehlerschätzverfahren auf mehrere stationäre thermoelastische Probleme angewendet. Einige der untersuchten Probleme verfügen über analytische Lösungen. Der Vergleich der numerischen Ergebnisse mit der analytischen (exakten) Lösung zeigt, dass die WSPR-Methode die genaueste unter den SPR, L2-PR und WSPR Techniken ist. 23- Darüber hinaus schließen wir aus den Ergebnissen des ersten Schritts, dass die zielorientierte Verfeinerung, im Vergleich zur herkömmlichen gleichmäßigen Total-Verfeinerungsmethode, nur ein Drittel der Unbekannten erfordert, um das Problem mit der gleichen Genauigkeit zu lösen. Daher benötigt die zielorientierte Adaptivität im Vergleich zu herkömmlichen Methoden viel weniger Rechenzeit, um die gleiche Genauigkeit zu erreichen. 24- Im zweiten Schritt, sind die Fehlerschätzungstechniken dieselben wie im ersten Schritt, aber die untersuchten Probleme sind dynamisch und nicht statisch. Der Vergleich der numerischen Ergebnisse mit den analytischen Ergebnissen in einem Benchmark-Problem bestätigt die Genauigkeit der verwendeten Methode. 25- Die Ergebnisse des zweiten Schritts zeigen, dass die geschätzten Fehler in allen gekoppelten Problemen niedriger sind als die ähnlichen ungekoppelten. Bei diesen Problemen reduziert die Implementierung der entwickelten adaptiven Methode den Fehler erheblich. 26- Im dritten Schritt wurde das residuumsbasierte Fehlerabschätzungsverfahren auf mehrere thermoelastische Probleme im stationären Zustand angewendet. In allen Beispielen wird die Genauigkeit der Methode durch analytische Lösungen überprüft. Die numerischen Ergebnisse zeigen eine sehr gute Übereinstimmung mit der analytischen Lösung sowohl bei 2D- als auch bei 3D-Problemen. 27- Im dritten Schritt werden die Ergebnisse der DWR-Verfeinerung mit Kelly-, W-Kelly- und gleichmäßigen Total-Verfeinerungstechniken verglichen. Die entwickelte DWR-Methode zeigt im Vergleich zu den anderen Methoden die beste Effizienz. Um beispielsweise die Fehlertoleranz von 10-6 zu erreichen, enthält das DWR-Gitter nur 2% unbekannte Parameter im Vergleich zu einem gleichmäßig verfeinerten Gitter. Die Verwendung des DWR-Verfahrens spart daher erhebliche Rechenzeit und Kosten. KW - Mesh Refinement KW - Thermoelastizität KW - Goal-oriented A Posteriori Error Estimation KW - 2D/3D Adaptive Mesh Refinement KW - Thermoelasticity KW - Deal ii C++ code KW - recovery-based and residual-based error estimators Y1 - 2020 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20201113-42864 ER - TY - THES A1 - Salavati, Mohammad T1 - Multi-Scale Modeling of Mechanical and Electrochemical Properties of 1D and 2D Nanomaterials, Application in Battery Energy Storage Systems N2 - Material properties play a critical role in durable products manufacturing. Estimation of the precise characteristics in different scales requires complex and expensive experimental measurements. Potentially, computational methods can provide a platform to determine the fundamental properties before the final experiment. 
Multi-scale computational modeling addresses various time and length scales, including the nano, micro, meso, and macro scales. These scales can be modeled separately or in correlation with coarser scales. Depending on the scales of interest, the right selection of multi-scale methods leads to reliable results at an affordable computational cost. The present dissertation deals with problems at various length and time scales using computational methods, including density functional theory (DFT), molecular mechanics (MM), molecular dynamics (MD), and finite element (FE) methods. Physical and chemical interactions at the lower scales determine the coarser-scale properties. Modeling particle interactions and exploring fundamental properties are significant challenges of computational science. Fine-scale models need more computational effort due to the large number of interacting atoms/particles. To deal with this problem and treat a fine-scale (nano) problem as a coarse-scale (macro) problem, we extended an atomic-continuum framework. The discrete atomic models are solved as a continuum problem using the computationally efficient FE method. The MM or force field method approximates a solution on the atomic scale based on a set of assumptions. In this method, atoms and bonds are modeled as harmonic oscillators, i.e., as a system of masses and springs. The negative gradient of the potential energy equals the force on each atom. In this way, each bond's total potential energy, including bonded and non-bonded energies, is simulated as an equivalent structural strain energy. Finally, the chemical nature of the atomic bond is modeled as a piezoelectric beam element that is solved by the FE method. Exploring novel materials with unique properties is in demand for various industrial applications. During the last decade, many two-dimensional (2D) materials have been synthesized and have shown outstanding properties. Investigating the probable defects arising during the formation/fabrication process and studying their strength under severe service conditions are critical tasks for exploring performance prospects. We studied various defects, including nano-crack, notch, and point vacancy (Stone-Wales defect) defects, employing MD analysis. Classical MD has been used to simulate a considerable number of molecules at the micro- and meso-scales. Pristine and defective nanosheet structures were considered under uniaxial tensile loading at various temperatures using the open-source LAMMPS code. The results were visualized with the open-source software OVITO and VMD. Quantum-based first-principles calculations are conducted at the electronic scale and are known as the most accurate ab initio methods. However, they are computationally too expensive to apply to large systems. We used density functional theory (DFT) to estimate the mechanical and electrochemical response of the 2D materials. The many-body Schrödinger equation describes the motion and interactions of the solid-state particles. The solid is described as a system of positive nuclei and negative electrons, all electromagnetically interacting with each other, where wave function theory describes the quantum state of the set of particles. However, dealing with the 3N spatial coordinates of the electrons and nuclei and the N spin coordinates of the electrons makes the governing equation unsolvable even for just a few interacting atoms. Assumptions and theories like the Born-Oppenheimer approximation, the Hartree-Fock mean-field theory, and the Hohenberg-Kohn theorems are needed to treat this equation.
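For orientation, the stationary many-body electronic problem referred to above can be written in the standard textbook form (a generic statement, not a formula reproduced from the cited thesis):

$$\hat{H}\,\Psi(\mathbf{r}_1,\dots,\mathbf{r}_N) = E\,\Psi(\mathbf{r}_1,\dots,\mathbf{r}_N), \qquad \hat{H} = -\frac{\hbar^{2}}{2 m_e}\sum_{i}\nabla_{i}^{2} + \sum_{i} v_{\mathrm{ext}}(\mathbf{r}_{i}) + \frac{1}{2}\sum_{i \neq j}\frac{e^{2}}{4\pi\varepsilon_{0}\,\lvert \mathbf{r}_{i}-\mathbf{r}_{j}\rvert},$$

where the external potential $v_{\mathrm{ext}}$ collects the interaction with the (fixed) nuclei.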
First, the Born-Oppenheimer approximation reduces it to the electronic coordinates only. Then Kohn and Sham, building on the Hartree-Fock and Hohenberg-Kohn theories, introduced an equivalent fictitious system of non-interacting electrons, formulated as a functional of the electron density, such that its ground state energy equals that of the set of interacting electrons. Exchange-correlation energy functionals are responsible for enforcing the equivalence between both systems. The exact form of the exchange-correlation functional is not known. However, there are widely used approximations to it, such as the local density approximation (LDA), the generalized gradient approximation (GGA), and hybrid functionals (e.g., B3LYP). In our study, DFT was performed using the VASP code within the GGA/PBE approximation, and visualization/post-processing of the results was realized via the open-source software VESTA. Extensive DFT calculations were conducted to assess the prospects of 2D nanomaterials as anode/cathode electrode materials for batteries. The performance of metal-ion batteries strongly depends on the design of novel electrode materials. Two-dimensional (2D) materials have attracted remarkable interest for use as electrodes in battery cells due to their excellent properties. Desirable battery energy storage systems (BESS) must provide high energy density, safe operation, and efficient production costs. Batteries are used in electronic devices, provide a solution to environmental issues, and store the intermittent energy generated by renewable wind or solar power plants. Therefore, exploring optimal electrode materials can improve storage capacity and charging/discharging rates, leading to the design of advanced batteries. Our results at multiple scales highlight not only the efficiency of the proposed and employed methods but also the promising prospects of recently synthesized nanomaterials and their application as anode materials. In this way, first, a novel approach was developed for modeling a 1D nanotube as a continuum piezoelectric beam element. The results converged and matched closely with those from experiments and other more complex models. Then the mechanical properties of nanosheets were estimated, and the failure mechanism results provide a useful guide for further use in prospective applications. Our results provide a comprehensive and useful picture of the mechanical properties of nanosheets with and without defects. Finally, the mechanical and electrochemical properties of several 2D nanomaterials are explored for the first time; their performance as anode materials illustrates a high potential for manufacturing super-stretchable and ultrahigh-capacity battery energy storage systems (BESS). Our results exhibited better performance in comparison to the available commercial anode materials. KW - Batterie KW - Modellierung KW - Nanostrukturiertes Material KW - Mechanical properties KW - Multi-scale modeling KW - Energiespeichersystem KW - Elektrodenmaterial KW - Elektrode KW - Mechanische Eigenschaft KW - Elektrochemische Eigenschaft KW - Electrochemical properties KW - Battery development KW - Nanomaterial Y1 - 2020 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20200623-41830 ER - TY - INPR A1 - Khosravi, Khabat A1 - Sheikh Khozani, Zohreh A1 - Cooper, James R.
T1 - Predicting stable gravel-bed river hydraulic geometry: A test of novel, advanced, hybrid data mining algorithms N2 - Accurate prediction of stable alluvial hydraulic geometry, in which erosion and sedimentation are in equilibrium, is one of the most difficult but critical topics in the field of river engineering. Data mining algorithms have been gaining more attention in this field due to their high performance and flexibility. However, an understanding of the potential for these algorithms to provide fast, cheap, and accurate predictions of hydraulic geometry is lacking. This study provides the first quantification of this potential. Using at-a-station field data, predictions of flow depth, water-surface width and longitudinal water surface slope are made using three standalone data mining techniques, namely Instance-based Learning (IBK), KStar, and Locally Weighted Learning (LWL), along with four types of novel hybrid algorithms in which the standalone models are trained with Vote, Attribute Selected Classifier (ASC), Regression by Discretization (RBD), and Cross-validation Parameter Selection (CVPS) algorithms (Vote-IBK, Vote-Kstar, Vote-LWL, ASC-IBK, ASC-Kstar, ASC-LWL, RBD-IBK, RBD-Kstar, RBD-LWL, CVPS-IBK, CVPS-Kstar, CVPS-LWL). Through a comparison of their predictive performance and a sensitivity analysis of the driving variables, the results reveal: (1) Shields stress was the most effective parameter in the prediction of all geometry dimensions; (2) hybrid models had a higher prediction power than standalone data mining models, empirical equations and traditional machine learning algorithms; (3) the Vote-Kstar model had the highest performance in predicting depth and width, and ASC-Kstar in estimating slope, each providing very good prediction performance. Through these algorithms, the hydraulic geometry of any river can potentially be predicted accurately and with ease using just a few, readily available flow and channel parameters. Thus, the results reveal that these models have great potential for use in stable channel design in data-poor catchments, especially in developing nations where technical modelling skills and understanding of the hydraulic and sediment processes occurring in the river system may be lacking. KW - Maschinelles Lernen KW - Künstliche Intelligenz KW - Data Mining KW - Hydraulic geometry KW - Gravel-bed rivers Y1 - 2021 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20211004-44998 N1 - This is the pre-peer reviewed version of the following article: https://www.sciencedirect.com/science/article/abs/pii/S1364815221002085 ; https://doi.org/10.1016/j.envsoft.2021.105165 VL - 2021 ER - TY - INPR A1 - Rezakazemi, Mashallah A1 - Mosavi, Amir A1 - Shirazian, Saeed T1 - ANFIS pattern for molecular membranes separation optimization N2 - In this work, the molecular separation of aqueous-organic mixtures was simulated by using combined soft computing-mechanistic approaches. The considered separation system was a microporous membrane contactor for the separation of benzoic acid from water by contact with an organic phase containing extractor molecules. Indeed, extractive separation is carried out using membrane technology where a solute-organic complex is formed at the interface. The main focus was to develop a simulation methodology for the prediction of the concentration distribution of the solute (benzoic acid) in the feed side of the membrane system, as the removal efficiency of the system is determined by the concentration distribution of the solute in the feed channel.
The pattern of the Adaptive Neuro-Fuzzy Inference System (ANFIS) was optimized by finding the optimum membership function, learning percentage, and number of rules. The ANFIS was trained using data extracted from the CFD simulation of the membrane system. Comparisons between the concentration distribution predicted by ANFIS and the CFD data revealed that the optimized ANFIS pattern can be used as a predictive tool for simulation of the process. An R2 higher than 0.99 was obtained for the optimized ANFIS model. The main advantage of the developed methodology is its very low computational time for simulation of the system, so that it can be used as a rigorous simulation tool for the understanding and design of membrane-based systems. Highlights are: molecular separation using microporous membranes; development of a hybrid ANFIS-CFD model for the separation process; optimization of the ANFIS structure for prediction of the separation process. KW - Fluid KW - Simulation KW - Molecular Liquids KW - optimization KW - machine learning KW - Membrane contactors KW - CFD Y1 - 2018 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20181122-38212 N1 - This is the pre-peer reviewed version of the following article: https://www.sciencedirect.com/science/article/pii/S0167732218345008, which has been published in final form at https://doi.org/10.1016/j.molliq.2018.11.017. VL - 2018 SP - 1 EP - 20 ER - TY - INPR A1 - Sheikh Khozani, Zohreh A1 - Kumbhakar, Manotosh T1 - Discussion of “Estimation of one-dimensional velocity distribution by measuring velocity at two points” by Yeganeh and Heidari (2020) N2 - The application of the concept of information entropy, together with the principle of maximum entropy, to open channel flow is essentially based on physical considerations of the problem at hand. This paper is a discussion of the paper by Yeganeh and Heidari (2020), who proposed a new approach for measuring the vertical distribution of streamwise velocity in open channels. The discussers argue that their approach is conceptually incorrect and thus leads to a physically unrealistic situation. In addition, the discussers found some wrong mathematical expressions (which are assumed to be typos) written in the paper, and also point out that the authors did not cite some of the original papers on the topic. KW - Geschwindigkeit KW - Entropie Y1 - 2021 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20210216-43663 N1 - This is the pre-peer reviewed version of the following article: https://www.sciencedirect.com/science/article/pii/S0955598621000017 ; https://doi.org/10.1016/j.flowmeasinst.2021.101886 ER - TY - INPR A1 - Khosravi, Khabat A1 - Sheikh Khozani, Zohreh A1 - Mao, Luka T1 - A comparison between advanced hybrid machine learning algorithms and empirical equations applied to abutment scour depth prediction N2 - Complex vortex flow patterns around bridge piers, especially during floods, cause a scour process that can result in the failure of foundations. Abutment scour is a complex three-dimensional phenomenon that is difficult to predict, especially with traditional formulas obtained using empirical approaches such as regressions. This paper presents a test of a standalone Kstar model along with five novel hybrid algorithms based on bagging (BA-Kstar), dagging (DA-Kstar), random committee (RC-Kstar), random subspace (RS-Kstar), and weighted instance handler wrapper (WIHW-Kstar) to predict scour depth (ds) under clear water conditions.
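As a rough illustration of the bagging idea behind hybrids such as BA-Kstar, a minimal sketch follows (illustrative only: KStar is a WEKA algorithm, so scikit-learn's KNeighborsRegressor stands in here as a hypothetical base learner, and the feature matrix and target are synthetic placeholders, not the experimental data used in the paper):

import numpy as np
from sklearn.ensemble import BaggingRegressor
from sklearn.neighbors import KNeighborsRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

# Hypothetical feature matrix with columns [h/l, Fe, d50/l, d50/h]; target: ds/l
rng = np.random.default_rng(0)
X = rng.random((200, 4))
y = 0.8 * X[:, 1] + 0.3 * X[:, 0] - 0.2 * X[:, 2] + 0.05 * rng.standard_normal(200)

# 70/30 calibration/validation split, as is common practice
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Bagging: train many base learners on bootstrap resamples and average their predictions
model = BaggingRegressor(KNeighborsRegressor(n_neighbors=5), n_estimators=30, random_state=0)
model.fit(X_tr, y_tr)
print("R2 on held-out data:", r2_score(y_te, model.predict(X_te)))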
The dataset consists of 99 scour depth measurements from flume experiments (Dey and Barbhuiya, 2005) using abutment shapes such as vertical, semicircular and 45° wing. Four dimensionless parameters, the relative flow depth (h/l), excess abutment Froude number (Fe), relative sediment size (d50/l) and relative submergence (d50/h), were considered for the prediction of the relative scour depth (ds/l). A portion of the dataset (70%) was used for calibration, and the remainder was used for model validation. Pearson correlation coefficients helped to decide the relevance of the input parameter combinations, and finally four different combinations of input parameters were used. The performance of the models was assessed visually and with quantitative metrics. Overall, the best input combination for the vertical abutment shape is the combination of Fe, d50/l and h/l, while for the semicircular and 45° wing shapes the combination of Fe and d50/l is the most effective input parameter combination. Our results show that, for the vertical abutment shape, incorporating Fe, d50/l and h/l leads to higher performance while involving d50/h reduces the models' prediction power, whereas for the semicircular and 45° wing shapes, involving h/l and d50/h leads to larger errors. The WIHW-Kstar model provided the highest performance in scour depth prediction around the vertical abutment shape, while the RC-Kstar model outperformed the other models for scour depth prediction around the semicircular and 45° wing shapes. KW - maschinelles Lernen Y1 - 2021 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20210311-43889 N1 - This is the pre-peer reviewed version of the following article: https://www.sciencedirect.com/science/article/abs/pii/S0022169421001475?via%3Dihub ; https://doi.org/10.1016/j.jhydrol.2021.126100 ER - TY - THES A1 - Udrea, Mihai-Andrei T1 - Assessment of Data from Dynamic Bridge Monitoring N2 - The focus of the thesis is to process measurements acquired from a continuous monitoring system at a railway bridge. Temperature, strain and ambient vibration records are analysed and two main directions of investigation are pursued. The first and the most demanding task is to develop processing routines able to extract modal parameters from ambient vibration measurements. For this purpose, reliable experimental models are achieved on the basis of a stochastic system identification (SSI) procedure. A fully automated algorithm based on a three-stage clustering is implemented to perform a modal parameter estimation for every single measurement. After selecting a baseline of modal parameters, the evolution of eigenfrequencies is studied and correlated to environmental and operational factors. The second aspect deals with the structural response to passing trains. Corresponding triggered records of strain and temperature are processed and their assessment is accomplished using the average strains induced by each train as the reference parameter. Three influences due to speed, temperature and loads are distinguished and treated individually. An attempt to estimate the maximum response variation due to each factor is also carried out. KW - automatic modal analysis KW - stochastic subspace identification KW - modal tracking KW - modal parameter estimation KW - clustering KW - Messtechnik Y1 - 2014 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20140429-21742 ER - TY - THES A1 - Zafar, Usman T1 - Probabilistic Reliability Analysis of Wind Turbines N2 - Renewable energy use is on the rise, and these alternative energy resources can help combat climate change.
Around 80% of the world's electricity comes from coal and petroleum; however, renewables are the fastest growing source of energy in the world. Solar, wind, hydro, geothermal and biogas are the most common forms of renewable energy. Among them, wind energy is emerging as a reliable and large-scale source of power production. Recent research and confidence in their performance have led to the construction of more and bigger wind turbines around the world. As wind turbines are getting bigger, concerns regarding their safety are also being discussed. Wind turbines are expensive machines to construct, and the enormous capital investment is one of the main reasons why many countries are unable to adopt wind energy. Generally, a reliable wind turbine will result in better performance and assist in minimizing the cost of operation. If a wind turbine fails, the investment is lost and the surrounding habitat can be harmed. This thesis aims at estimating the reliability of an offshore wind turbine. A model of a jacket-type offshore wind turbine is prepared using the finite element software package ABAQUS and is evaluated against the structural failure criteria of the wind turbine tower. UQLab, which is a general uncertainty quantification framework developed at ETH Zürich, is used for the reliability analysis. Several probabilistic methods are included in the UQLab framework, which include Monte Carlo simulation, First Order Reliability Analysis, and Adaptive Kriging Monte Carlo Simulation. This reliability study is performed only for the structural failure of the wind turbine, but it can be extended to many other forms of failure, e.g., reliability of power production or reliability with respect to different component failures. It is a useful tool that can be utilized to estimate the reliability of future wind turbines, which could result in safer and better-performing wind turbines. KW - Windturbine KW - Windenergie KW - Wind Turbines KW - Wind Energy KW - Reliability Analysis KW - Zuverlässigkeitsanalyse Y1 - 2019 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20240507-39773 ER - TY - THES A1 - Zhang, Yongzheng T1 - A Nonlocal Operator Method for Quasi-static and Dynamic Fracture Modeling N2 - Material failure can be tackled by so-called nonlocal models, which introduce an intrinsic length scale into the formulation and, in the case of material failure, restore the well-posedness of the underlying boundary value problem or initial boundary value problem. Among nonlocal models, peridynamics (PD) has attracted a lot of attention as it allows the natural transition from continuum to discontinuum and thus allows modeling of discrete cracks without the need to describe and track the crack topology, which has been a major obstacle in traditional discrete crack approaches. This is achieved by replacing the divergence of the Cauchy stress tensor with an integral over so-called bond forces, which account for the interaction of particles. A quasi-continuum approach is then used to calibrate the material parameters of the bond forces, i.e., equating the PD energy with the energy of a continuum. One major issue for the application of PD to general complex problems is that it is limited to fairly simple material behavior and purely mechanical problems based on explicit time integration. PD has been extended to other applications, but at the cost of losing its simplicity and ease in modeling material failure.
Furthermore, conventional PD suffers from instability and hourglass modes that require stabilization. It also requires the use of constant horizon sizes, which drastically reduces its computational efficiency. The latter issue was resolved by the so-called dual-horizon peridynamics (DH-PD) formulation and the introduction of the duality of horizons. Within the nonlocal operator method (NOM), the concept of nonlocality is further extended and can be considered a generalization of DH-PD. Combined with the energy functionals of various physical models, the nonlocal forms based on the dual-support concept can be derived. In addition, the variation of the energy functional allows implicit formulations of the nonlocal theory. While traditional integral equations are formulated in an integral domain, the dual-support approaches are based on dual integral domains. One prominent feature of NOM is its compatibility with variational and weighted residual methods. The NOM yields a direct numerical implementation based on the weighted residual method for many physical problems without the need for shape functions. Only the definition of the energy or the boundary value problem is needed, which drastically facilitates the implementation. The nonlocal operator plays an equivalent role to the derivatives of the shape functions in meshless methods and finite element methods (FEM). Based on the variational principle, the residual and the tangent stiffness matrix can be obtained with ease by a series of matrix multiplications. In addition, NOM can be used to derive many nonlocal models in strong form. The principal contributions of this dissertation are the implementation and application of NOM, and also the development of approaches for dealing with fractures within the NOM, mostly for dynamic fractures. The primary coverage and results of the dissertation are as follows: -The first/higher-order implicit NOM and explicit NOM, including a detailed description of the implementation, are presented. The NOM is based on so-called support, dual-support, nonlocal operators, and an operator energy functional ensuring stability. The nonlocal operator is a generalization of the conventional differential operators. Combined with the method of weighted residuals and variational principles, the NOM establishes the residual and the tangent stiffness matrix of the operator energy functional through simple matrix operations, without the need for shape functions as in other classical computational methods such as the FEM. The NOM only requires the definition of the energy, drastically simplifying its implementation. For the sake of conciseness, the implementation in this chapter is focused on linear elastic solids only, though the NOM can handle more complex nonlinear problems. An explicit nonlocal operator method for the dynamic analysis of elastic solid problems is also presented. The explicit NOM avoids the calculation of the tangent stiffness matrix required in the implicit NOM model. The explicit scheme employs the Verlet-velocity algorithm. The NOM can be very flexible and efficient for solving partial differential equations (PDEs). It is also quite easy to use the NOM and extend it to solve other complicated physical phenomena described by one or a set of PDEs. Several numerical examples are presented to show the capabilities of this method. -A nonlocal operator method for the dynamic analysis of (thin) Kirchhoff plates is proposed. The nonlocal Hessian operator is derived from a second-order Taylor series expansion.
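In generic terms (an illustrative textbook sketch, not the exact derivation of the cited thesis), such an operator can be motivated by the second-order Taylor expansion of a field $u$ around a particle $\mathbf{x}_i$ over its support,

$$u(\mathbf{x}_j) \approx u(\mathbf{x}_i) + \nabla u(\mathbf{x}_i)\cdot \mathbf{r}_{ij} + \tfrac{1}{2}\,\mathbf{r}_{ij}^{\mathsf T}\,\nabla^{2} u(\mathbf{x}_i)\,\mathbf{r}_{ij}, \qquad \mathbf{r}_{ij}=\mathbf{x}_j-\mathbf{x}_i,$$

from which nonlocal gradient and Hessian approximations follow, e.g., by a weighted least-squares fit over all neighbors $\mathbf{x}_j$ in the support.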
NOM is higher-order continuous, which is exploited for thin plate analysis that requires $C^1$ continuity. The nonlocal dynamic governing formulation and operator energy functional for Kirchhoff plates are derived from a variational principle. The Verlet-velocity algorithm is used for time discretization. After confirming the accuracy of the nonlocal Hessian operator, several numerical examples are simulated by the nonlocal dynamic Kirchhoff plate formulation. -A nonlocal fracture modeling is developed and applied to the simulation of quasi-static and dynamic fractures using the NOM. The phase field's nonlocal weak and associated strong forms are derived from a variational principle. The NOM requires only the definition of energy. We present both a nonlocal implicit phase field model and a nonlocal explicit phase field model for fracture; the first approach is better suited for quasi-static fracture problems, while the key application of the latter one is dynamic fracture. To demonstrate the performance of the underlying approach, several benchmark examples for quasi-static and dynamic fracture are solved. T3 - ISM-Bericht // Institut für Strukturmechanik, Bauhaus-Universität Weimar - 2022,9 KW - Variationsprinzip KW - Partial Differential Equations KW - Taylor Series Expansion KW - Peridynamics KW - Variational principle KW - Phase field method KW - Peridynamik KW - Phasenfeldmodell KW - Partielle Differentialgleichung KW - Nichtlokale Operatormethode Y1 - 2022 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20221026-47321 ER - TY - THES A1 - Zacharias, Christin T1 - Numerical Simulation Models for Thermoelastic Damping Effects N2 - Finite Element Simulations of dynamically excited structures are mainly influenced by the mass, stiffness, and damping properties of the system, as well as external loads. The prediction quality of dynamic simulations of vibration-sensitive components depends significantly on the use of appropriate damping models. Damping phenomena have a decisive influence on the vibration amplitude and the frequencies of the vibrating structure. However, developing realistic damping models is challenging due to the multiple sources that cause energy dissipation, such as material damping, different types of friction, or various interactions with the environment. This thesis focuses on thermoelastic damping, which is the main cause of material damping in homogeneous materials. The effect is caused by temperature changes due to mechanical strains. In vibrating structures, temperature gradients arise in adjacent tension and compression areas. Depending on the vibration frequency, they result in heat flows, leading to increased entropy and the irreversible transformation of mechanical energy into thermal energy. The central objective of this thesis is the development of efficient simulation methods to incorporate thermoelastic damping in finite element analyses based on modal superposition. The thermoelastic loss factor is derived from the structure's mechanical mode shapes and eigenfrequencies. In subsequent analyses that are performed in the time and frequency domain, it is applied as modal damping. Two approaches are developed to determine the thermoelastic loss in thin-walled plate structures, as well as three-dimensional solid structures. The realistic representation of the dissipation effects is verified by comparing the simulation results with experimentally determined data. 
Therefore, an experimental setup is developed to measure material damping, excluding other sources of energy dissipation. The three-dimensional solid approach is based on the determination of the generated entropy and therefore the generated heat per vibration cycle, which is a measure for thermoelastic loss in relation to the total strain energy. For thin plate structures, the amount of bending energy in a modal deformation is calculated and summarized in the so-called Modal Bending Factor (MBF). The highest amount of thermoelastic loss occurs in the state of pure bending. Therefore, the MBF enables a quantitative classification of the mode shapes concerning the thermoelastic damping potential. The results of the developed simulations are in good agreement with the experimental results and are appropriate to predict thermoelastic loss factors. Both approaches are based on modal superposition with the advantage of a high computational efficiency. Overall, the modeling of thermoelastic damping represents an important component in a comprehensive damping model, which is necessary to perform realistic simulations of vibration processes. N2 - Die Finite-Elemente Simulation von dynamisch angeregten Strukturen wird im Wesentlich durch die Steifigkeits-, Massen- und Dämpfungseigenschaften des Systems sowie durch die äußere Belastung bestimmt. Die Vorhersagequalität von dynamischen Simulationen schwingungsanfälliger Bauteile hängt wesentlich von der Verwendung geeigneter Dämpfungsmodelle ab. Dämpfungsphänomene haben einen wesentlichen Einfluss auf die Schwingungsamplitude, die Frequenz und teilweise sogar die Existenz von Vibrationen. Allerdings ist die Entwicklung von realitätsnahen Dämpfungsmodellen oft schwierig, da eine Vielzahl von physikalischen Effekten zur Energiedissipation während eines Schwingungsvorgangs führt. Beispiele hierfür sind die Materialdämpfung, verschiedene Formen der Reibung sowie vielfältige Wechselwirkungen mit dem umgebenden Medium. Diese Dissertation befasst sich mit thermoelastischer Dämpfung, die in homogenen Materialien die dominante Ursache der Materialdämpfung darstellt. Der thermoelastische Effekt wird ausgelöst durch eine Temperaturänderung aufgrund mechanischer Spannungen. In der schwingenden Struktur entstehen während der Deformation Temperaturgradienten zwischen benachbarten Regionen unter Zug- und Druckbelastung. In Abhängigkeit von der Vibrationsfrequenz führen diese zu Wärmeströmen und irreversibler Umwandlung mechanischer in thermische Energie. Die Zielstellung dieser Arbeit besteht in der Entwicklung recheneffizienter Simulationsmethoden, um thermoelastische Dämpfung in zeitabhängigen Finite-Elemente Analysen, die auf modaler Superposition beruhen, zu integrieren. Der thermoelastische Verlustfaktor wird auf der Grundlage der mechanischen Eigenformen und -frequenzen bestimmt. In nachfolgenden Analysen im Zeit- und Frequenzbereich wird er als modaler Dämpfungsgrad verwendet. Zwei Ansätze werden entwickelt, um den thermoelastischen Verlustfaktor in dünn-wandigen Plattenstrukturen, sowie in dreidimensionalen Volumenbauteilen zu simulieren. Die realitätsnahe Vorhersage der Energiedissipation wird durch die Verifizierung an experimentellen Daten bestätigt. Dafür wird ein Versuchsaufbau entwickelt, der eine Messung von Materialdämpfung unter Ausschluss anderer Dissipationsquellen ermöglicht. 
Für den Fall der Volumenbauteile wird ein Ansatz verwendet, der auf der Berechnung der Entropieänderung und damit der erzeugte Wärmeenergie während eines Schwingungszyklus beruht. Im Verhältnis zur Formänderungsenergie ist dies ein Maß für die thermoelastische Dämpfung. Für dünne Plattenstrukturen wird der Anteil an Biegeenergie in der Eigenform bestimmt und im sogenannten modalen Biegefaktor (MBF) zusammengefasst. Der maximale Grad an thermoelastischer Dämpfung kann im Zustand reiner Biegung auftreten, sodass der MBF eine quantitative Klassifikation der Eigenformen hinsichtlich ihres thermoelastischen Dämpfungspotentials zulässt. Die Ergebnisse der entwickelten Simulationsmethoden stimmen sehr gut mit den experimentellen Daten überein und sind geeignet, um thermoelastische Dämpfungsgrade vorherzusagen. Beide Ansätze basieren auf modaler Superposition und ermöglichen damit zeitabhängige Simulationen mit einer hohen Recheneffizienz. Insgesamt stellt die Modellierung der thermoelastischen Dämpfung einen Baustein in einem umfassenden Dämpfungsmodell dar, welches zur realitätsnahen Simulation von Schwingungsvorgängen notwendig ist. T3 - ISM-Bericht // Institut für Strukturmechanik, Bauhaus-Universität Weimar - 2022,8 KW - Werkstoffdämpfung KW - Finite-Elemente-Methode KW - Strukturdynamik KW - Thermoelastic damping KW - modal damping KW - decay experiments KW - energy dissipation Y1 - 2022 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20221116-47352 ER - TY - THES A1 - Liu, Bokai T1 - Stochastic multiscale modeling of polymeric nanocomposites using Data-driven techniques N2 - In recent years, lightweight materials, such as polymer composite materials (PNCs) have been studied and developed due to their excellent physical and chemical properties. Structures composed of these composite materials are widely used in aerospace engineering structures, automotive components, and electrical devices. The excellent and outstanding mechanical, thermal, and electrical properties of Carbon nanotube (CNT) make it an ideal filler to strengthen polymer materials’ comparable properties. The heat transfer of composite materials has very promising engineering applications in many fields, especially in electronic devices and energy storage equipment. It is essential in high-energy density systems since electronic components need heat dissipation functionality. Or in other words, in electronic devices the generated heat should ideally be dissipated by light and small heat sinks. Polymeric composites consist of fillers embedded in a polymer matrix, the first ones will significantly affect the overall (macroscopic) performance of the material. There are many common carbon-based fillers such as single-walled carbon nanotubes (SWCNT), multi-walled carbon nanotubes (MWCNT), carbon nanobuds (CNB), fullerene, and graphene. Additives inside the matrix have become a popular subject for researchers. Some extraordinary characters, such as high-performance load, lightweight design, excellent chemical resistance, easy processing, and heat transfer, make the design of polymeric nanotube composites (PNCs) flexible. Due to the reinforcing effects with different fillers on composite materials, it has a higher degree of freedom and can be designed for the structure according to specific applications’ needs. As already stated, our research focus will be on SWCNT enhanced PNCs. 
Since experiments are time-consuming, sometimes expensive, and cannot shed light on phenomena taking place, for instance, at the interfaces/interphases of composites, they are often complemented by theoretical and computational analysis. While most studies are based on deterministic approaches, comparatively few stochastic methods account for uncertainties in the input parameters. In deterministic models, the output is fully determined by the parameter values and the initial conditions. However, uncertainties in input parameters such as the aspect ratio, the volume fraction, and the thermal properties of fiber and matrix need to be taken into account for reliable predictions. In this research, a stochastic multiscale method is presented to study the influence of numerous uncertain input parameters on the thermal conductivity of the composite. To this end, a hierarchical multiscale method based on computational homogenization is presented to predict the macroscopic thermal conductivity from the fine-scale structure. In order to study the underlying mechanisms, we use the finite element method and employ surrogate models to conduct a Global Sensitivity Analysis (GSA). The GSA is performed in order to quantify the influence of the conductivity of the fiber, the conductivity of the matrix, the Kapitza resistance, the volume fraction, and the aspect ratio on the macroscopic conductivity. Therefore, we compute first-order and total-effect sensitivity indices with different surrogate models. As stochastic multiscale models are computationally expensive, surrogate approaches are commonly exploited. With the emergence of high-performance computing and artificial intelligence, machine learning has become a popular modeling tool for numerous applications. Machine learning (ML) is commonly used for regression and maps input data to output data through algorithmically learned rules. ML models are particularly useful for nonlinear input-output relationships when sufficient data are available, and ML has also been used in the design of new materials and in multiscale analysis. For instance, artificial neural networks and integrated learning seem ideally suited for such a task, since they can in theory approximate any nonlinear relationship through the connection of neurons. The learned mappings are employed to carry out data-driven simulations of inputs and outputs in stochastic modeling. This research aims to develop stochastic multiscale computational models of PNCs in heat transfer. The multiscale stochastic modeling with uncertainty analysis and machine learning methods consists of the following components: - Uncertainty Analysis. A surrogate-based global sensitivity analysis is coupled with a hierarchical multiscale method employing computational homogenization. The effect of the conductivity of the fibers and the matrix, the Kapitza resistance, the volume fraction, and the aspect ratio on the "macroscopic" conductivity of the composite is systematically studied. All selected surrogate models consistently yield the conclusion that the most influential input parameter is the aspect ratio, followed by the volume fraction, while the Kapitza resistance has no significant effect on the thermal conductivity of the PNCs. The most accurate surrogate model in terms of the R2 value is the moving least squares (MLS) model. - Hybrid Machine Learning Algorithms. A combination of an artificial neural network (ANN) and particle swarm optimization (PSO) is applied to estimate the relationship between the variable input and output parameters.
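As a minimal sketch of how such surrogate-based first-order and total-effect sensitivity indices can be estimated, the following Python snippet uses the standard Saltelli/Jansen Monte Carlo estimators. The function surrogate, the parameter bounds, and the sample size are illustrative placeholders and are not taken from the thesis; any trained surrogate of the homogenized conductivity could be substituted.

import numpy as np

def sobol_indices(model, bounds, n=4096, seed=0):
    """Saltelli/Jansen Monte Carlo estimators of first-order (S1) and
    total-effect (ST) Sobol indices of `model` over uniform `bounds`."""
    rng = np.random.default_rng(seed)
    d = len(bounds)
    lo, hi = np.array(bounds).T
    A = lo + (hi - lo) * rng.random((n, d))   # first sample matrix
    B = lo + (hi - lo) * rng.random((n, d))   # second, independent matrix
    fA, fB = model(A), model(B)
    var = np.var(np.concatenate([fA, fB]))    # total output variance
    S1, ST = np.empty(d), np.empty(d)
    for i in range(d):
        ABi = A.copy()
        ABi[:, i] = B[:, i]                   # replace column i of A by B's
        fABi = model(ABi)
        S1[i] = np.mean(fB * (fABi - fA)) / var        # first-order index
        ST[i] = 0.5 * np.mean((fA - fABi) ** 2) / var  # total-effect index
    return S1, ST

# Illustrative analytic stand-in for a homogenized-conductivity surrogate;
# inputs: (fiber conductivity, matrix conductivity, Kapitza resistance,
# volume fraction, aspect ratio). Not the model used in the thesis.
def surrogate(x):
    kf, km, rk, vf, ar = x.T
    return km * (1.0 + 2.0 * vf * ar * kf / (kf + ar * km + 1e3 * rk * kf * km))

bounds = [(100.0, 1000.0),  # fiber conductivity [W/mK]
          (0.1, 0.5),       # matrix conductivity [W/mK]
          (1e-9, 1e-7),     # Kapitza resistance [m^2 K/W]
          (0.001, 0.05),    # volume fraction [-]
          (50.0, 1000.0)]   # aspect ratio [-]
S1, ST = sobol_indices(surrogate, bounds)
print("first-order:", S1.round(3), "total-effect:", ST.round(3))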
The ANN is used to model the composite, while the PSO improves the prediction performance through an optimized global minimum search. The thermal conductivities of the fibers and the matrix, the Kapitza resistance, the volume fraction, and the aspect ratio are selected as input parameters; the output is the macroscopic (homogenized) thermal conductivity of the composite. The results show that the PSO significantly improves the predictive ability of this hybrid intelligent algorithm, which outperforms traditional neural networks. - Stochastic Integrated Machine Learning. A stochastic integrated machine-learning-based multiscale approach for the prediction of the macroscopic thermal conductivity of PNCs is developed. Seven types of machine learning models are exploited in this research, namely Multivariate Adaptive Regression Splines (MARS), Support Vector Machine (SVM), Regression Tree (RT), Bagging Tree (Bag), Random Forest (RF), Gradient Boosting Machine (GBM), and Cubist. They are used as components of the stochastic modeling to construct the relationship between the uncertain input variables and the macroscopic thermal conductivity of the PNCs. Particle Swarm Optimization (PSO) is used for hyper-parameter tuning to find globally optimal values, leading to a significant reduction in computational cost. The advantages and disadvantages of the various methods are also analyzed in terms of computing time and model complexity in order to give a recommendation on the applicability of the different models. T3 - ISM-Bericht // Institut für Strukturmechanik, Bauhaus-Universität Weimar - 2022,3 KW - Polymere KW - Nanoverbundstruktur KW - multiscale KW - nanocomposite KW - stochastic KW - Data-driven Y1 - 2022 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20220503-46379 ER - TY - THES A1 - Valizadeh, Navid T1 - Developments in Isogeometric Analysis and Application to High-Order Phase-Field Models of Biomembranes N2 - Isogeometric analysis (IGA) is a numerical method for solving partial differential equations (PDEs), which was introduced with the aim of integrating finite element analysis with computer-aided design systems. The main idea of the method is to use the same spline basis functions that describe the geometry in CAD systems for the approximation of the solution fields in the finite element method (FEM). Originally, NURBS, the standard technology employed in CAD systems, were adopted as basis functions in IGA, but several variants of IGA use other technologies such as T-splines, PHT-splines, and subdivision surfaces as basis functions. In general, IGA offers two key advantages over classical FEM: (i) by describing the CAD geometry exactly using smooth, high-order spline functions, the mesh generation process is simplified and the interoperability between CAD and FEM is improved; (ii) IGA can be viewed as a high-order finite element method that offers basis functions with high inter-element continuity and can therefore provide a primal variational formulation of high-order PDEs in a straightforward fashion. The main goal of this thesis is to further advance isogeometric analysis by exploiting these major advantages, namely precise geometric modeling and the use of smooth high-order splines as basis functions, and to develop robust computational methods for problems with complex geometry and/or complex multi-physics.
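For reference, the spline basis functions referred to in this abstract take the following standard form: the Cox–de Boor recursion for B-splines and the rational NURBS basis (standard notation, included here only for orientation and not quoted from the thesis):

N_{i,0}(\xi) = \begin{cases} 1 & \text{if } \xi_i \le \xi < \xi_{i+1},\\ 0 & \text{otherwise}, \end{cases}
\qquad
N_{i,p}(\xi) = \frac{\xi-\xi_i}{\xi_{i+p}-\xi_i}\, N_{i,p-1}(\xi) + \frac{\xi_{i+p+1}-\xi}{\xi_{i+p+1}-\xi_{i+1}}\, N_{i+1,p-1}(\xi),
\qquad
R_i^p(\xi) = \frac{N_{i,p}(\xi)\, w_i}{\sum_j N_{j,p}(\xi)\, w_j},

where \xi_i are the knot values, p is the polynomial degree, and w_i are the NURBS weights; geometry and, in IGA, the discrete solution field are then represented as weighted combinations of control points, e.g. \mathbf{C}(\xi) = \sum_i R_i^p(\xi)\,\mathbf{P}_i.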
As the first contribution of this thesis, we leverage the precise geometric modeling of isogeometric analysis and propose a new method for its coupling with meshfree discretizations. We exploit the strengths of both methods by using IGA to provide a smooth, geometrically exact surface discretization of the problem domain boundary, while a Reproducing Kernel Particle Method (RKPM) discretization provides the volumetric discretization of the domain interior. The coupling strategy is based upon higher-order consistency or reproducing conditions that are imposed directly in the physical domain. The resulting coupled method enjoys several favorable features: (i) it preserves the geometric exactness of IGA, (ii) it circumvents the need for a global volumetric parameterization of the problem domain, and (iii) it achieves arbitrary-order approximation accuracy while preserving the higher-order smoothness of the discretization. Several numerical examples are solved to show the optimal convergence properties of the coupled IGA–RKPM formulation and to demonstrate its effectiveness in constructing volumetric discretizations for complex-geometry objects. As the next contribution, we exploit the smooth, high-order spline basis functions of IGA to solve high-order surface PDEs governing the morphological evolution of vesicles. These governing equations often consist of geometric PDEs, high-order PDEs on stationary or evolving surfaces, or a combination of both. We propose an isogeometric formulation for solving these PDEs. In the context of geometric PDEs, we consider phase-field approximations of mean curvature flow and Willmore flow problems and numerically study the convergence behavior of isogeometric analysis for these problems. As a model problem for high-order PDEs on stationary surfaces, we consider the Cahn–Hilliard equation on a sphere, where the surface is modeled using a phase-field approach. As for high-order PDEs on evolving surfaces, a phase-field model of a deforming multi-component vesicle, which consists of two fourth-order nonlinear PDEs, is solved using isogeometric analysis in a primal variational framework. Through several numerical examples in 2D, 3D, and axisymmetric 3D settings, we show the robustness of IGA for solving the considered phase-field models. Finally, we present a monolithic, implicit formulation based on isogeometric analysis and generalized-alpha time integration for simulating the hydrodynamics of vesicles according to a phase-field model. Compared to earlier works, the number of equations of the phase-field model that need to be solved is reduced by leveraging the high continuity of the NURBS functions, and the algorithm is extended to 3D settings. We use the residual-based variational multiscale (RBVMS) method for solving the Navier–Stokes equations, while the remaining PDEs of the phase-field model are treated using standard Galerkin-based IGA. We introduce the resistive immersed surface (RIS) method into the formulation, which can be employed for an implicit description of complex geometries using a diffuse-interface approach. The implementation highlights the robustness of the RBVMS method for the Navier–Stokes equations of incompressible flows with non-trivial localized forcing terms, including the bending and tension forces of the vesicle. The potential of the phase-field model and isogeometric analysis for the accurate simulation of a variety of fluid-vesicle interaction problems in 2D and 3D is demonstrated.
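As an illustration of the type of fourth-order phase-field PDE referred to above, a standard split (mixed) form of the Cahn–Hilliard equation reads as follows (generic flat-domain notation, added here for orientation; the surface formulation and scaling used in the thesis may differ):

\frac{\partial c}{\partial t} = \nabla\cdot\bigl(M\,\nabla\mu\bigr),
\qquad
\mu = f'(c) - \lambda\,\Delta c,
\qquad
f(c) = \tfrac{1}{4}\, c^2 (1-c)^2,

where c is the phase-field variable, M the mobility, \lambda a parameter controlling the interface thickness, and \mu the chemical potential. Eliminating \mu yields a single fourth-order equation in c, whose primal Galerkin discretization requires at least C^1-continuous basis functions, which is precisely the kind of inter-element continuity that the spline spaces of IGA provide.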
T3 - ISM-Bericht // Institut für Strukturmechanik, Bauhaus-Universität Weimar - 2022,1 KW - Phasenfeldmodell KW - Vesikel KW - Hydrodynamik KW - Multiphysics KW - Isogeometrische Analyse KW - Isogeometric Analysis KW - Vesicle dynamics KW - Phase-field modeling KW - Geometric Partial Differential Equations KW - Residual-based variational multiscale method Y1 - 2022 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20220114-45658 ER -