### Refine

#### Document Type

- Article (49)
- Conference Proceeding (10)
- Preprint (2)

#### Institute

- Institut für Strukturmechanik (51)
- Juniorprofessur Stochastik und Optimierung (43)
- Graduiertenkolleg 1462 (4)
- Bauhaus-Institut für zukunftsweisende Infrastruktursysteme (2)
- Professur Angewandte Mathematik (2)
- Professur Informatik im Bauwesen (2)
- Professur Modellierung und Simulation - Konstruktion (1)

#### Keywords

- Angewandte Mathematik (50)
- Stochastik (41)
- Strukturmechanik (41)
- Angewandte Informatik (9)
- Computerunterstütztes Verfahren (9)
- Building Information Modeling (6)
- Data, information and knowledge modeling in civil engineering; Function theoretic methods and PDE in engineering sciences; Mathematical methods for (robotics and) computer vision; Numerical modeling in engineering; Optimization in engineering applications (6)
- Erdbeben (5)
- rapid visual screening (4)
- Maschinelles Lernen (3)

Recent earthquakes have shown that many existing buildings, particularly in developing countries, are not adequately protected against earthquake damage. A variety of statistical and machine-learning approaches have been proposed to identify vulnerable buildings for the prioritization of retrofitting. The present work investigates earthquake susceptibility through the combination of six building performance variables that can be used to obtain an optimal prediction of the damage state of reinforced concrete buildings using an artificial neural network (ANN). In this regard, a multi-layer perceptron network is trained and optimized using a database of 484 damaged buildings from the Düzce earthquake in Turkey. The results demonstrate the feasibility and effectiveness of the selected ANN approach for classifying concrete structural damage, which can serve as a preliminary assessment technique to identify vulnerable buildings in disaster risk-management programs.
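As a rough illustration of the idea (not the authors' model or data), a multi-layer perceptron damage classifier can be sketched as below. The six input features and the damage labels are synthetic stand-ins for the building performance variables and the Düzce survey data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins for the six building performance variables
# (e.g. number of storeys, age, irregularity indices); the actual
# Duzce database of 484 damaged buildings is not reproduced here.
X = rng.normal(size=(200, 6))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float)   # synthetic damage label

# One hidden layer, sigmoid activations, plain gradient descent on log-loss
W1 = rng.normal(scale=0.5, size=(6, 8)); b1 = np.zeros(8)
W2 = rng.normal(scale=0.5, size=(8, 1)); b2 = np.zeros(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(2000):
    H = sigmoid(X @ W1 + b1)              # hidden activations
    p = sigmoid(H @ W2 + b2).ravel()      # predicted damage probability
    g2 = (p - y)[:, None] / len(y)        # output-layer error (log-loss)
    g1 = (g2 @ W2.T) * H * (1 - H)        # backpropagated hidden error
    W2 -= 0.5 * H.T @ g2; b2 -= 0.5 * g2.sum(0)
    W1 -= 0.5 * X.T @ g1; b1 -= 0.5 * g1.sum(0)

p = sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2).ravel()
train_accuracy = ((p > 0.5) == y).mean()
```

In practice the network architecture and training would be tuned (as the abstract's "trained and optimized" implies); this sketch only shows the mechanics of an MLP classifier on six inputs.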

The economic losses from earthquakes can burden a national economy considerably; therefore, models capable of estimating the vulnerability and losses caused by future earthquakes are of great value to emergency planners for risk mitigation. This demands a mass prioritization filtering of structures to identify vulnerable buildings for retrofitting purposes. Applying advanced structural analysis to each building to study its earthquake response is impractical due to complex calculations, long computational times, and exorbitant cost. This creates the need for a fast and reliable method, commonly known as Rapid Visual Screening (RVS). The method serves as a preliminary screening platform, using an optimum number of seismic parameters of the structure and predefined output damage states. In this study, the efficacy of Machine Learning (ML) in damage prediction is investigated through a Support Vector Machine (SVM) model as the damage classification technique. The developed model was trained and examined based on damage data from the 1999 Düzce Earthquake in Turkey, where the building data consist of 22 performance modifiers that have been implemented with supervised machine learning.
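A minimal linear SVM sketch of this kind of damage classifier, again with synthetic stand-in data (the 22 performance modifiers and the 1999 Düzce labels are not reproduced), trained by sub-gradient descent on the hinge loss:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in for the 22 performance modifiers; labels +/-1
# encode a binary damage state for illustration only.
X = rng.normal(size=(300, 22))
w_true = rng.normal(size=22)
y = np.sign(X @ w_true + 0.1 * rng.normal(size=300))

# Linear soft-margin SVM: minimize lam*||w||^2/2 + mean hinge loss
lam, eta = 0.001, 0.1
w, b = np.zeros(22), 0.0
for _ in range(2000):
    viol = y * (X @ w + b) < 1                        # margin violators
    grad_w = lam * w - (y[viol, None] * X[viol]).sum(0) / len(y)
    grad_b = -y[viol].sum() / len(y)
    w -= eta * grad_w
    b -= eta * grad_b

train_accuracy = (np.sign(X @ w + b) == y).mean()
```

A kernelized SVM (as typically used in such studies) follows the same principle but replaces the inner products with a kernel function.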

Rapid Visual Screening (RVS) is a procedure that estimates structural scores for buildings and prioritizes their retrofit and upgrade requirements. Despite the speed and simplicity of RVS, many of the collected parameters are non-commensurable and subjective, as they rely on visual observation. This can cause uncertainties in the evaluation, which motivates the use of a fuzzy-based method. This study proposes a novel RVS methodology based on the interval type-2 fuzzy logic system (IT2FLS) to prioritize vulnerable buildings for detailed assessment while covering uncertainties and minimizing their effects during evaluation. The proposed method estimates the vulnerability of a building, in terms of a Damage Index, considering the number of stories, age of building, plan irregularity, vertical irregularity, building quality, and peak ground velocity as inputs, with a single output variable. The applicability of the proposed method has been investigated using a post-earthquake damage database of reinforced concrete buildings from the Bingöl and Düzce earthquakes in Turkey.
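The key difference from an ordinary (type-1) fuzzy system is that each membership grade is an interval, not a single number. The toy sketch below illustrates this for two of the six inputs with a single invented rule; the footprint of uncertainty is modelled crudely by blurring a triangular membership, and the interval firing strength is collapsed by simple bound averaging, a strong simplification of the Karnik-Mendel procedure used in a full IT2FLS:

```python
def tri(x, a, b, c):
    """Triangular membership value of x for the triangle (a, b, c)."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def it2_membership(x, a, b, c, blur=0.1):
    """Interval membership [lower, upper]: the footprint of uncertainty
    is modelled by shifting a type-1 triangle down and up (capped at 1)."""
    m = tri(x, a, b, c)
    return max(0.0, m - blur), min(1.0, m + blur)

# Invented rule: IF age is 'old' AND storeys is 'tall' THEN damage is high (0.8).
# AND is the min operator, applied to both interval bounds.
age_lo, age_hi = it2_membership(45, 20, 60, 100)   # building age = 45 years
sto_lo, sto_hi = it2_membership(6, 3, 8, 12)       # 6 storeys
fire_lo, fire_hi = min(age_lo, sto_lo), min(age_hi, sto_hi)

# Crude defuzzification: average the interval bounds, scale by rule output
damage_index = 0.8 * (fire_lo + fire_hi) / 2.0
```

The width of the interval `[fire_lo, fire_hi]` carries the uncertainty from the subjective visual observations through to the output.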

The assessment of wind-induced vibrations is considered vital for the design of long-span bridges. The aim of this research is to develop a methodological framework for robust and efficient prediction strategies for complex aerodynamic phenomena using hybrid models that employ numerical analyses as well as meta-models. Here, an approach to predict motion-induced aerodynamic forces is developed using an artificial neural network (ANN). The ANN is implemented in the classical formulation and trained with a comprehensive dataset obtained from computational fluid dynamics forced-vibration simulations. The input to the ANN is the response time histories of a bridge section, whereas the output is the motion-induced forces. The developed ANN has been tested on training and test data for different cross-section geometries, yielding promising predictions. The prediction is also performed for an ambient response input with multiple frequencies. Moreover, the trained ANN for aerodynamic forcing is coupled with the structural model to perform fully coupled fluid-structure interaction analysis and determine the aeroelastic instability limit. The sensitivity of the ANN parameters to the model prediction quality and efficiency has also been highlighted. The proposed methodology has wide application in the analysis and design of long-span bridges.
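The core data-handling step, mapping a short window of the response time history to the instantaneous motion-induced force, can be sketched as follows. Everything here is synthetic: the "response" and its hidden aerodynamic memory kernel are invented, and a single linear layer stands in for the ANN (which would use hidden layers for nonlinear aerodynamics):

```python
import numpy as np

# Synthetic stand-in: a heave displacement history and a lift force that
# depends linearly on a short window of past motion (real data would come
# from CFD forced-vibration simulations of the bridge section).
t = np.linspace(0, 20, 2000)
h = np.sin(1.3 * t) + 0.3 * np.sin(3.1 * t)        # bridge-deck response
kernel = np.array([0.5, -0.2, 0.1, 0.05])           # hidden aerodynamic memory
F = np.convolve(h, kernel, mode="full")[: len(h)]   # motion-induced force

# Build (response-window -> force) training pairs, mimicking the ANN input layer
win = 4
Xw = np.array([h[i - win + 1 : i + 1][::-1] for i in range(win - 1, len(h))])
yw = F[win - 1 :]

# Least-squares fit of the single linear layer recovers the memory kernel
coef, *_ = np.linalg.lstsq(Xw, yw, rcond=None)
err = np.abs(Xw @ coef - yw).max()
```

Once such a model is trained, it can be evaluated inside a time-stepping structural solver, which is the basis of the fully coupled fluid-structure interaction analysis described above.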

Recently, the demand for residence and usage of urban infrastructure has increased, thereby raising the risk to human lives from natural calamities. The occupancy demand has rapidly increased the construction rate, while inadequately designed structures remain highly vulnerable. Buildings constructed before the development of seismic codes are additionally susceptible to earthquake vibrations. Structural collapse causes economic losses as well as setbacks for human lives. Applying different theoretical methods to analyze structural behavior is expensive and time-consuming. Therefore, introducing a rapid vulnerability assessment method to check structural performance is necessary for future developments. The process, as mentioned earlier, is known as Rapid Visual Screening (RVS). This technique has been developed to identify, inventory, and screen structures that are potentially hazardous. Sometimes, poor construction quality means that some of the required parameters are unavailable; in such cases, the RVS process becomes tedious. Hence, to tackle such situations, multiple-criteria decision-making (MCDM) methods for seismic vulnerability assessment open a new gateway. The different parameters required by RVS can be incorporated in MCDM, which evaluates multiple conflicting criteria in decision making across several fields. This paper aims to bridge the gap between RVS and MCDM. Furthermore, to define the correlation between these techniques, the methodologies from Indian, Turkish, and Federal Emergency Management Agency (FEMA) codes have been implemented. The effects of seismic vulnerability of structures have been observed and compared.

Earthquakes are among the most devastating natural disasters, causing severe economic, environmental, and social destruction. Earthquake safety assessment and building hazard monitoring can contribute substantially to urban sustainability through the identification of, and insight into, optimum materials and structures. While the vulnerability of structures mainly depends on structural resistance, the safety assessment of buildings can be highly challenging. In this paper, we consider the Rapid Visual Screening (RVS) method, a qualitative procedure for estimating structural scores for buildings that is suitable for medium- to high-seismicity cases. This paper presents an overview of the common RVS methods, i.e., FEMA P-154, IITK-GGSDMA, and EMPI. To examine their accuracy and validity, a practical comparison is performed between their assessments and the observed damage of reinforced concrete buildings from a street survey in the Bingöl region, Turkey, after the 1 May 2003 earthquake. The results demonstrate that the application of RVS methods for preliminary damage estimation is a vital tool. Furthermore, the comparative analysis showed that FEMA P-154 produces an assessment that overestimates damage states and is not economically viable, while EMPI and IITK-GGSDMA provide more accurate and more practical estimates, respectively.

A Machine Learning Framework for Assessing Seismic Hazard Safety of Reinforced Concrete Buildings
(2020)

Although averting a seismic disturbance and its physical, social, and economic disruption is practically impossible, advances in computational science and numerical modeling can equip humanity to predict its severity, understand the outcomes, and prepare for post-disaster management. Many aging buildings in developed metropolitan areas are still in service; they were designed before national seismic codes were established or without the introduction of construction regulations. In such cases, risk reduction is significant for developing alternatives and designing suitable models to enhance an existing structure's performance. Such models can classify risks and casualties related to possible earthquakes through emergency preparation. Thus, it is crucial to recognize structures that are susceptible to earthquake vibrations and need to be prioritized for retrofitting. However, studying each building's behavior under seismic actions through structural analysis is unrealistic because of the rigorous computations, long duration, and substantial expenditure involved. Therefore, it calls for a simple, reliable, and accurate process known as Rapid Visual Screening (RVS), which serves as a primary screening platform, including an optimum number of seismic parameters and predetermined performance damage conditions for structures. In this study, the damage classification technique was studied, and the efficacy of the Machine Learning (ML) method in damage prediction via a Support Vector Machine (SVM) model was explored. The ML model is trained and tested separately on damage data from four different earthquakes, namely Ecuador, Haiti, Nepal, and South Korea. Each dataset consists of varying numbers of input data and eight performance modifiers. Based on the study and the results, the ML model using SVM classifies the given input data into the corresponding classes and performs well in the hazard safety evaluation of buildings.

In the field of engineering, surrogate models are commonly used to approximate the behavior of a physical phenomenon in order to reduce computational costs. Generally, a surrogate model is created from a set of training data, where a typical method for the statistical design is Latin hypercube sampling (LHS). Even though a space-filling distribution of the training data is reached, the sampling process takes no information on the underlying behavior of the physical phenomenon into account, and new data cannot be sampled in the same distribution if the approximation quality is not sufficient. Therefore, in this study we present a novel adaptive sampling method based on a specific surrogate model, the least-squares support vector regression (LS-SVR). The adaptive sampling method generates training data based on the uncertainty in the local prognosis capabilities of the surrogate model: areas of higher uncertainty require more sample data. The approach offers a cost-efficient calculation due to the properties of the least-squares support vector regression. The advantages of the adaptive sampling method are demonstrated in comparison with LHS on different analytical examples. Furthermore, the adaptive sampling method is applied to the calculation of global sensitivity values according to Sobol, where it shows faster convergence than the LHS method. The applications in this paper show that the presented adaptive sampling method improves the estimation of global sensitivity values, noticeably reducing the overall computational costs.
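The adaptive loop, fit the surrogate, find where its local prognosis is most uncertain, sample there, can be sketched on a 1-D toy problem. This uses a kernel ridge surrogate (a close relative of LS-SVR) and a predictive-variance indicator as the uncertainty measure; it is an illustration of the sampling idea, not the paper's specific formulation:

```python
import numpy as np

def rbf(A, B, ls=0.5):
    """Squared-exponential kernel between 1-D point sets A and B."""
    return np.exp(-((A[:, None] - B[None, :]) ** 2) / (2 * ls ** 2))

f = lambda x: np.sin(3 * x) + 0.5 * x      # toy stand-in for the expensive model
X = np.array([0.0, 0.5, 1.0, 1.5, 2.0])    # initial design
cand = np.linspace(0.0, 2.0, 201)          # candidate pool for new samples
lam = 1e-6                                  # ridge regularization

for _ in range(10):
    K = rbf(X, X) + lam * np.eye(len(X))
    Kc = rbf(cand, X)
    # Predictive variance k(x,x) - k_x^T K^-1 k_x as local-uncertainty indicator
    var = 1.0 - np.einsum("ij,ij->i", Kc @ np.linalg.inv(K), Kc)
    X = np.append(X, cand[np.argmax(var)])  # sample where the surrogate is unsure

# Final surrogate fit and its accuracy on the whole candidate grid
alpha = np.linalg.solve(rbf(X, X) + lam * np.eye(len(X)), f(X))
max_err = np.abs(rbf(cand, X) @ alpha - f(cand)).max()
```

Because a sampled point drives its own variance to (nearly) zero, the loop automatically spreads new samples into the regions the surrogate knows least about.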

The fire resistance of concrete members is controlled by the temperature distribution of the considered cross section. The thermal analysis can be performed with the advanced temperature-dependent physical properties provided by EN 1992-1-2. However, the recalculation of laboratory tests on columns from TU Braunschweig shows that there are deviations between the calculated and measured temperatures. It can therefore be assumed that the mathematical formulation of these thermal properties could be improved. A sensitivity analysis is performed to identify the governing parameters of the temperature calculation, and a nonlinear optimization method is used to enhance the formulation of the thermal properties. The proposed simplified properties are partly validated by the recalculation of measured temperatures of concrete columns. These first results show that the scatter of the differences between calculated and measured temperatures can be reduced by the proposed simple model for the thermal analysis of concrete.

The current study attempts to establish an adequate classification for semi-rigid beam-to-column connections by investigating strength, stiffness, and ductility. For this purpose, an experimental test was carried out to investigate the moment-rotation (M-theta) features of flush end-plate (FEP) connections, including variable parameters such as the size and number of bolts, the thickness of the end-plate, and the size of beams and columns. The initial elastic stiffness and ultimate moment capacity of the connections were determined by an extensive analytical procedure following the methods prescribed by the ANSI/AISC 360-10 and Eurocode 3 Part 1-8 specifications. The behaviour of beams with partially restrained or semi-rigid connections was also studied using classical analysis methods. The results confirmed that the thickness of the column flange and end-plate substantially governs the initial rotational stiffness of flush end-plate connections. The results also clearly showed that EC3 provides a more reliable classification index for flush end-plate (FEP) connections. The findings make a significant contribution to the current literature, as the actual response characteristics of such connections are non-linear; therefore, such semi-rigid behaviour should be accounted for in analysis and design.

Past experience in arch dam design shows that shape optimization has significant practical value: it can make full use of material characteristics and reduce construction costs. Suitable variables need to be chosen to formulate the objective function, e.g. minimizing the total volume of the arch dam. Additionally, a series of constraints are derived, and a reasonable and convenient penalty function is formed, which can easily enforce the characteristics of the constraints and of the optimal design. For the optimization method, a Genetic Algorithm is adopted to perform a global search, while ANSYS is used for the mechanical analysis under coupled thermal and hydraulic loads. One of the constraints on the newly designed dam is to fulfill requirements on structural safety. Therefore, a reliability analysis is applied to offer good decision support for matters concerning predictions of both the safety and the service life of the arch dam. In this way, the key factors that significantly influence the stability and safety of the arch dam can be identified, suggesting preventive measures to prolong the service life of an arch dam and enhance the safety of the structure.

This study contributes to the identification of coupled THM constitutive model parameters via back analysis against information-rich experiments. A sampling-based back analysis approach is proposed, comprising both the model parameter identification and the assessment of the reliability of the identified model parameters. The results obtained in the context of buffer elements indicate that sensitive parameter estimates generally follow a normal distribution. Based on the sensitivity of the parameters and the probability distribution of the samples, we can provide confidence intervals for the estimated parameters and thus allow a qualitative assessment of the identified parameters, which will be used in future work as inputs for prognosis computations of buffer elements. Such elements play an important role, for example, in the design of nuclear waste repositories.

In construction engineering, a schedule’s input data, which is usually not exactly known in the planning phase, is considered deterministic when generating the schedule. As a result, construction schedules become unreliable and deadlines are often not met. While the optimization of construction schedules with respect to costs and makespan has been a matter of research in the past decades, the optimization of the robustness of construction schedules has received little attention. In this paper, the effects of uncertainties inherent to the input data of construction schedules are discussed. Possibilities are investigated to improve the reliability of construction schedules by considering alternative processes for certain tasks and by identifying the combination of processes generating the most robust schedule with respect to the makespan of a construction project.
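The core idea, compare alternative processes not by their expected duration but by a robustness measure of the resulting makespan distribution, can be sketched with a small Monte Carlo simulation. All numbers below are invented for illustration; a real schedule would propagate uncertain durations through the full precedence network:

```python
import random

random.seed(0)

# Two hypothetical alternative processes for one task, each with an uncertain
# duration modelled as triangular (optimistic, most likely, pessimistic);
# the remaining serial schedule is assumed to take a fixed 20 days.
alternatives = {
    "prefabricated": (3.0, 4.0, 10.0),   # fast on average, but can slip badly
    "cast_in_situ": (5.0, 6.0, 7.0),     # slower, but predictable
}

def simulate_makespan(lo, mode, hi, n=20000):
    runs = sorted(20.0 + random.triangular(lo, hi, mode) for _ in range(n))
    mean = sum(runs) / n
    p90 = runs[int(0.9 * n)]             # 90%-quantile as a robustness measure
    return mean, p90

results = {name: simulate_makespan(*tri) for name, tri in alternatives.items()}
# The robust choice minimises the 90%-quantile makespan, not the mean
robust_choice = min(results, key=lambda name: results[name][1])
```

In this invented example the prefabricated variant wins on the mean makespan, yet the cast-in-situ variant is the robust choice: its 90%-quantile is lower, so deadlines are met more reliably.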

A topology optimization method has been developed for structures subjected to multiple load cases (e.g., a bridge pier subjected to wind, traffic, and superstructure loads). We formulate the problem as a multi-criteria optimization problem, where the compliance is computed for each load case. Then the epsilon-constraint method (proposed by Chankong and Haimes, 1971) is adapted. The strategy of this method is to minimize the maximum compliance, resulting from the critical load case, while the remaining compliances are considered as constraints. In each iteration, the compliances of all load cases are computed and only the maximum one is minimized. The topology optimization process thus switches from one load case to another according to the variation of the resulting compliances. In this work we motivate and explain the proposed methodology and provide some numerical examples.
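The switching behaviour of this min-max strategy can be illustrated on a deliberately tiny toy problem (two elements, two load cases, no finite element model). Each load case stresses one element whose stiffness is proportional to its material fraction, so its compliance is simply the reciprocal of that fraction; in each iteration only the critical load case drives the material update:

```python
import numpy as np

# Toy min-max compliance: compliance of load case i is 1/x_i, where x_i is
# the material fraction of element i; total material is fixed (sum x = 1).
x = np.array([0.2, 0.8])
for _ in range(500):
    C = 1.0 / x                      # compliances of the two load cases
    worst = np.argmax(C)             # critical load case in this iteration
    x[worst] += 0.01                 # reinforce the governing element
    x = x / x.sum()                  # re-impose the volume constraint

C = 1.0 / x
```

The iteration alternates between the two load cases as the maximum compliance switches, and the design settles where both compliances are (nearly) equal, the characteristic fixed point of a min-max formulation.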

In order to minimize the probability of foundation failure resulting from cyclic action on structures, researchers have developed various constitutive models to simulate the foundation response and soil interaction under these complex cyclic loads. The efficiency and effectiveness of these models is strongly influenced by the cyclic constitutive parameters. Although much research is being carried out on these relatively new models, few details exist in the literature about the model-based identification of the cyclic constitutive parameters. This can be attributed to the difficulties and complexities of the inverse modeling of such complex phenomena. A variety of optimization strategies are available for solving least-squares problems, as is usually done in the field of model calibration. For the back analysis (calibration) of the soil response to oscillatory load functions, however, this paper gives insight into the model calibration challenges and puts forward a method for the inverse modeling of cyclically loaded foundation response such that high-quality solutions are obtained with minimum computational effort. The resulting model responses adequately describe what would otherwise be observed in the laboratory or field.
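A generic least-squares back-analysis step of the kind discussed here can be sketched with a Gauss-Newton iteration on a toy cyclic response model. The model, its two "constitutive" parameters, and the noisy "measurements" are all invented for illustration:

```python
import numpy as np

# Synthetic "measured" response of a cyclically loaded foundation: an
# oscillation whose amplitude decays with damping parameter c and whose
# frequency is w; both parameters are treated as unknown.
t = np.linspace(0.0, 10.0, 200)

def model(p):
    c, w = p
    return np.exp(-c * t) * np.cos(w * t)

true_p = np.array([0.3, 2.0])
rng = np.random.default_rng(4)
data = model(true_p) + 0.01 * rng.normal(size=t.size)

# Gauss-Newton on the least-squares residual, with a central finite-difference
# Jacobian; the starting guess must be reasonably close, since frequency
# identification under oscillatory loading is a multi-modal problem.
p = np.array([0.4, 1.9])
eps = 1e-6
for _ in range(30):
    r = model(p) - data
    J = np.column_stack([
        (model(p + eps * e) - model(p - eps * e)) / (2 * eps)
        for e in np.eye(2)
    ])
    p = p - np.linalg.lstsq(J, r, rcond=None)[0]
```

The multi-modality hinted at in the comment is exactly the calibration challenge the abstract refers to: a naive least-squares solver started far from the true parameters can lock onto the wrong oscillation cycle.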

Polymer modification of mortar and concrete is a widely used technique to improve their durability properties. Hitherto, the main application fields of such materials have been the repair and restoration of buildings. However, due to steadily increasing service life requirements and its cost efficiency, polymer-modified concrete (PCC) is also used for construction purposes. Therefore, there is a demand for studying the mechanical properties of PCC and its essential differences compared to conventional concrete (CC). It is important to investigate whether all the assumed hypotheses and existing analytical formulations for CC are also valid for PCC. In the present study, analytical models available in the literature for estimating the mechanical properties of concrete are evaluated. The investigated property is the modulus of elasticity, which is estimated with respect to the value of the compressive strength. An existing database was extended and adapted for polymer-modified concrete mixtures along with their experimentally measured mechanical properties. Based on the indexed data, a comparison between model predictions and experiments was conducted by calculating forecast errors.
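The comparison procedure can be sketched as follows, using the EN 1992-1-1 relation E_cm = 22 (f_cm/10)^0.3 (E_cm in GPa, f_cm in MPa) as one representative analytical model. The "measured" PCC values below are invented placeholders, not data from the paper's database:

```python
# One code-type analytical model: EN 1992-1-1, E_cm = 22 * (f_cm/10)**0.3 [GPa]
predicted_E = lambda fcm: 22.0 * (fcm / 10.0) ** 0.3    # fcm in MPa

mixtures = [  # (compressive strength f_cm [MPa], measured E [GPa]) - invented
    (38.0, 30.5),
    (48.0, 33.8),
    (55.0, 36.1),
]

# Forecast errors: prediction minus measurement, summarised as MAE and MAPE
errors = [predicted_E(f) - e for f, e in mixtures]
mae = sum(abs(e) for e in errors) / len(errors)
mape = sum(abs(e) / m for (_, m), e in zip(mixtures, errors)) / len(mixtures)
```

Repeating this for each candidate model over the whole database yields the forecast-error comparison described above; a systematic bias (here, the CC model overpredicting the stiffness of PCC) is exactly the kind of deviation the study looks for.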

We present an extended finite element formulation for dynamic fracture of piezo-electric materials. The method is developed in the context of linear elastic fracture mechanics. It is applied to mode I and mixed mode-fracture for quasi-steady cracks. An implicit time integration scheme is exploited. The results are compared to results obtained with the boundary element method and show excellent agreement.

Safe operation of important civil structures such as bridges can be estimated by using fracture analysis. Since analytical methods are not capable of solving many complicated engineering problems, numerical methods have been increasingly adopted. In this paper, a part of an isotropic material which contains a crack is considered as a partial model, and the quality of the proposed model is evaluated. EXtended IsoGeometric Analysis (XIGA) is a newly developed numerical approach [1, 2] which benefits from the advantages of its origins: the eXtended Finite Element Method (XFEM) and IsoGeometric Analysis (IGA). It is capable of simulating crack propagation problems without the need for remeshing and captures the singular field at the crack tip by using crack tip enrichment functions. Also, exact representation of the geometry is possible using only a few elements. XIGA has also been successfully applied to the fracture analysis of cracked orthotropic bodies [3] and to the simulation of curved cracks [4]. XIGA applies NURBS functions for both the geometry description and the solution field approximation. The drawback of NURBS functions is that local refinement cannot be defined, since they are based on tensor-product constructs, unless multiple patches are used, which also has some limitations. In this contribution, XIGA is further developed to make local refinement feasible by using T-spline basis functions. Work on adopting a recovery-based error estimator in the proposed approach for evaluating the model quality and performing adaptive processes is in progress. Finally, some numerical examples with available analytical solutions are investigated with the developed scheme.

Many structures in different engineering applications suffer from cracking. In order to make reliable prognoses about the serviceability of those structures, it is of utmost importance to identify cracks as precisely as possible by non-destructive testing. A novel approach (XIGA), which combines Isogeometric Analysis (IGA) and the Extended Finite Element Method (XFEM), is used for the forward problem, namely the analysis of a cracked material, see [1]. Applying the NURBS (Non-Uniform Rational B-Spline) based approach from IGA together with XFEM allows arbitrarily shaped cracks to be described effectively and avoids the necessity of remeshing during the crack identification problem. We want to exploit these advantages for the inverse problem of detecting existing cracks by non-destructive testing, see e.g. [2]. The quality of the reconstructed cracks, however, depends on two major issues, namely the quality of the measured data (measurement error) and the discretization of the crack model. The first is taken into account by applying regularizing methods with a posteriori stopping criteria. The second is critical in the sense that too few degrees of freedom, i.e. too few control points of the NURBS, do not allow for a precise description of the crack. An increased number of control points, however, increases the number of unknowns in the inverse analysis and intensifies the ill-posedness. The trade-off between accuracy and stability is sought by applying an inverse multilevel algorithm [3, 4], where the identification is started with short knot vectors which are successively enlarged during the identification process.

In this paper we present an inverse method capable of identifying system components in a hydro-mechanically coupled system, i.e. for fluid flow in porous media. As an example we consider water dams that were constructed more than a hundred years ago but are still in use. Over time, ageing processes have changed the condition of these dams, and fissures may have grown within them. The proposed method is designed to locate these fissures from combined mechanical and hydraulic measurements. In a numerical example, the fissures or damaged zones are described by a smeared crack model. The task is to identify simultaneously the spatial distributions of Young's modulus and the hydraulic permeability, since in damaged regions the mechanical stiffness of the system is reduced and the permeability increased. The inversion is shown to be an ill-posed problem. As a consequence, regularizing methods have to be applied, where the nonlinear Landweber method (a gradient-type method combined with a discrepancy principle) has proven to be an efficient choice.
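The nonlinear Landweber iteration with a discrepancy-principle stopping rule has a compact generic form, u_{k+1} = u_k - omega J(u_k)^T (F(u_k) - y_delta), stopped once ||F(u_k) - y_delta|| <= tau * delta. The sketch below demonstrates it on a small invented nonlinear forward map, not the hydro-mechanical dam model of the paper:

```python
import numpy as np

# Toy nonlinear forward operator F and its Jacobian J (invented example;
# in the paper F would be the coupled hydro-mechanical forward model).
def F(u):
    return np.array([u[0] ** 2 + u[1], u[0] * u[1], u[1] ** 2])

def J(u):
    return np.array([[2 * u[0], 1.0], [u[1], u[0]], [0.0, 2 * u[1]]])

u_true = np.array([1.0, 2.0])
delta = 1e-3                                  # assumed noise level
rng = np.random.default_rng(5)
noise = rng.normal(size=3)
noise *= delta / np.linalg.norm(noise)
y_delta = F(u_true) + noise                   # noisy measurements

# Nonlinear Landweber iteration with discrepancy-principle stopping
u, omega, tau = np.array([0.8, 1.5]), 0.05, 2.0
for _ in range(5000):
    r = F(u) - y_delta
    if np.linalg.norm(r) <= tau * delta:      # discrepancy principle: stop early
        break
    u -= omega * J(u).T @ r
```

Stopping at tau * delta rather than iterating to full convergence is what regularizes the ill-posed problem: further iterations would start fitting the measurement noise instead of the parameters.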

We propose an enhanced iterative scheme for the precise reconstruction of piezoelectric material parameters from electric impedance and mechanical displacement measurements. It is based on finite-element simulations of the full three-dimensional piezoelectric equations, combined with an inexact Newton or nonlinear Landweber iterative inversion scheme. We apply our method to two piezoelectric materials and test its performance. For the first material, the manufacturer provides a full data set; for the second one, no material data set is available. For both cases, our inverse scheme, using electric impedance measurements as input data, performs well.