### Refine

#### Document Type

- Conference Proceeding (32)
- Doctoral Thesis (5)
- Article (3)
- Master's Thesis (1)

#### Institute

- Graduiertenkolleg 1462 (41)

#### Keywords

- Angewandte Informatik (31)
- Angewandte Mathematik (31)
- Computerunterstütztes Verfahren (31)
- Architektur <Informatik> (12)
- Computer Science Models in Engineering; Multiscale and Multiphysical Models; Scientific Computing (12)
- Building Information Modeling (4)
- Data, information and knowledge modeling in civil engineering; Function theoretic methods and PDE in engineering sciences; Mathematical methods for (robotics and) computer vision; Numerical modeling in engineering; Optimization in engineering applications (4)
- Kriechen (2)
- ANSYS (1)
- Affecting factors; Measurement uncertainty; Materials testing; Quantitative comparison; Strain comparison; Tensile test (1)

A numerical analysis of the deformation behaviour of the main load-bearing components of a typical frame-type sloping shaft headgear was performed. The analysis used a design model consisting of plane and solid finite elements, built in the program «LIRA». From the numerical results, the patterns of local stress distribution under a guide pulley bearing were identified, and the parameters of the plane stress state under both emergency and normal working loads were determined. Based on the numerical simulation, guidelines were established to improve the construction of the joints where guide pulleys rest on sub-pulley frame-type structures. Overall, the results obtained provide a basis for improving the engineering procedures for designing the steel structures of sloping shaft headgear.

Bridge vibration due to traffic loading has been a subject of extensive research in recent decades. The focus of such research has been to develop solution algorithms and to investigate the responses or behaviors of interest. However, proving the quality and reliability of model output in structural engineering has become a topic of increasing importance. Therefore, this study attempts to extend the concepts of uncertainty and sensitivity analysis to assess the dynamic response of a coupled model in bridge engineering considering time-dependent vehicular loading. A setting for the sensitivity analysis is proposed which enables performing the analysis for random stochastic processes. The classical and the proposed sensitivity settings are used to identify the input parameters and models with the most influence on the variance of the dynamic response. The sensitivity analysis exercises the model itself and extracts results without the need for measurements or reference solutions; however, it does not offer a means of ranking the coupled models studied. Therefore, concepts of total uncertainty are employed to rank the coupled models according to their fitness in describing the dynamic problem.
The proposed procedures are applied in two examples to assess the output of coupled subsystems and coupled partial models in bridge engineering considering the passage of a heavy vehicle at various speeds.
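
The variance-based sensitivity setting can be sketched with a Sobol' "pick-freeze" estimator. The two-parameter linear model below is a hypothetical stand-in for the coupled bridge model (all names and coefficients are illustrative, not from the study), chosen so that the analytic first-order indices, 0.8 and 0.2, are known in advance.

```python
import random
import statistics

def model(x1, x2):
    # Hypothetical stand-in for the coupled bridge response;
    # analytic first-order Sobol' indices are 0.8 and 0.2.
    return 2.0 * x1 + 1.0 * x2

def first_order_sobol(f, dim=2, n=20000, seed=1):
    rng = random.Random(seed)
    a = [[rng.gauss(0, 1) for _ in range(dim)] for _ in range(n)]
    b = [[rng.gauss(0, 1) for _ in range(dim)] for _ in range(n)]
    ya = [f(*s) for s in a]
    mean_y = statistics.fmean(ya)
    var_y = statistics.pvariance(ya)
    indices = []
    for i in range(dim):
        # pick-freeze: keep X_i from sample A, redraw all other inputs from B
        yc = [f(*[a[k][j] if j == i else b[k][j] for j in range(dim)])
              for k in range(n)]
        cov = statistics.fmean(y1 * y2 for y1, y2 in zip(ya, yc)) - mean_y ** 2
        indices.append(cov / var_y)
    return indices

s = first_order_sobol(model)  # ≈ [0.8, 0.2]
```

For a real coupled model, `model` would wrap a full forward simulation, which is why such analyses are usually the dominant computational cost.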

Numerical simulations are common in civil engineering, both for the design of new structures and for the assessment of existing buildings. The behaviour of these structures is analytically unknown and is approximated with numerical simulation methods such as the Finite Element Method (FEM). The real structure is therefore transferred into a global model (GM, e.g. a concrete bridge) comprising a wide range of sub-models (partial models, PM, e.g. material modelling, creep). These partial models are coupled to predict the behaviour of the observed structure (GM) under different conditions. The engineer has to decide which models are suitable for computing the physical processes that determine the structural behaviour realistically and efficiently. Theoretical knowledge along with experience from prior design processes influences this decision, so model selection is often qualitative. The goal of this paper is to present a quantitative evaluation of the global model quality for the simulation of a bridge subject to direct loading (dead load, traffic) and indirect loading (temperature), which induces restraint effects. The model quality can be investigated separately for each partial model and also for the coupled partial models within a global structural model. Probabilistic simulations, using uncertainty and sensitivity analysis, are necessary for the evaluation of these model qualities. The method is applied to the simulation of a semi-integral concrete bridge with a monolithic connection between the superstructure and the piers, and elastomeric bearings at the abutments. The results show that the evaluation of global model quality depends strongly on the sensitivity of the considered partial models and their related quantitative prediction quality.
This method provides not only a relative comparison between different models but also a quantitative representation of model quality using probabilistic simulation methods, which can support the process of model selection for numerical simulations in research and practice.

Strain measurement is important in mechanical testing. A wide variety of techniques exists for measuring strain in the tensile test: the strain gauge, the extensometer, stress and strain determined from machine crosshead motion, the geometric moiré technique, optical strain measurement techniques and others. Each technique has its own advantages and disadvantages. The purpose of this study is to compare the strain measurement techniques quantitatively. To carry out the tensile test experiments for S 235, sixty samples were cut from the web of an I-profile in the longitudinal and transverse directions in four different dimensions. The geometry of the samples was analysed with a 3D scanner and a vernier caliper. In addition, the strain values were determined using a strain gauge, an extensometer and machine crosshead motion. The three strain measurement techniques are compared quantitatively based on the calculated mechanical properties (modulus of elasticity, yield strength, tensile strength, percentage elongation at maximum force) of structural steel. Statistical methods were used for evaluating the results. Both the extensometer and the strain gauge provided reliable data; however, the extensometer offers several advantages over the strain gauge and crosshead motion for testing structural steel in tension. Furthermore, an estimation of the measurement uncertainty is presented for the basic material parameters extracted through strain measurement.
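
As a minimal illustration of how a material parameter is extracted from strain readings, the sketch below fits the elastic branch of a stress-strain record by least squares. The data points are fabricated for the example, not taken from the study's S 235 measurements.

```python
def youngs_modulus(strain, stress):
    """Least-squares slope of the elastic branch (stress in MPa, strain dimensionless)."""
    n = len(strain)
    me, ms = sum(strain) / n, sum(stress) / n
    num = sum((e - me) * (s - ms) for e, s in zip(strain, stress))
    den = sum((e - me) ** 2 for e in strain)
    return num / den

# Hypothetical elastic-branch readings from an extensometer
strain = [0.0002, 0.0004, 0.0006, 0.0008, 0.0010]
stress = [42.0, 84.0, 126.0, 168.0, 210.0]  # MPa

e_modulus = youngs_modulus(strain, stress)  # ≈ 210000 MPa, i.e. 210 GPa
```

In practice the same slope computed from crosshead motion is biased by machine compliance, which is one reason the extensometer is preferred.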

The aim of this work is to develop methods for determining the prognosis quality of concrete creep models. The methods are distinguished by two initial scenarios: assessment without, and assessment with, the use of specific experimental data on the creep behaviour of the concrete. Model quality is quantified by the total uncertainty of the predicted creep compliance. For the creep prognosis without experimental data, the uncertainty is determined through an uncertainty analysis that accounts for correlated input parameters. When experimental data are used, the stochastic properties of the model parameters are determined by Bayesian updating. The assessment is then again based on an uncertainty analysis and, alternatively, on Bayesian model selection.
Furthermore, a method based on graph theory and sensitivity analyses is developed for the assessment of coupled partial models. It quantifies the influence of a partial model on the behaviour of a global load-bearing structure, identifies interactions between partial models, and provides a measure of the quality of an overall model.

Tests on polymer-modified cement concrete (PCC) have shown considerably large creep deformations. The reasons for this, as well as additional material phenomena, are explained in this paper. Existing creep models developed for standard concrete are studied to determine the time-dependent deformations of PCC. These models are: model B3 by Bažant and Baweja, the models according to Model Code 90 and ACI 209, and model GL2000 by Gardner and Lockman. The calculated creep strains are compared with existing experimental data for PCC and the differences are pointed out. Furthermore, an optimization of the model parameters is performed to fit the models to the experimental data and achieve a better model prognosis.
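
The parameter optimization mentioned above can be sketched as a least-squares fit of a simple power-law compliance function J(t) = q1 + q2·t^m. The data below are generated from known parameters rather than measured PCC curves, so the fit should recover them exactly; a real fit would use the experimental compliance data.

```python
# Hypothetical compliance data generated from a known power law,
# standing in for measured PCC creep curves.
TRUE_Q1, TRUE_Q2, TRUE_M = 35.0, 12.0, 0.28
data = [(t, TRUE_Q1 + TRUE_Q2 * t ** TRUE_M) for t in (1, 7, 28, 90, 365)]

def fit_power_law(data):
    """Fit J(t) = q1 + q2 * t**m: scan the exponent m on a grid and
    solve the remaining linear least-squares problem for q1, q2."""
    best = None
    for m in (i / 100 for i in range(5, 60)):
        xs = [t ** m for t, _ in data]
        ys = [j for _, j in data]
        n = len(data)
        sx, sy = sum(xs), sum(ys)
        sxx = sum(x * x for x in xs)
        sxy = sum(x * y for x, y in zip(xs, ys))
        q2 = (n * sxy - sx * sy) / (n * sxx - sx * sx)
        q1 = (sy - q2 * sx) / n
        sse = sum((q1 + q2 * x - y) ** 2 for x, y in zip(xs, ys))
        if best is None or sse < best[0]:
            best = (sse, q1, q2, m)
    return best[1], best[2], best[3]

q1, q2, m = fit_power_law(data)  # recovers 35.0, 12.0, 0.28
```

The published creep models have fixed functional forms, so in the paper only their free coefficients are adjusted; the grid-plus-linear-solve trick here just keeps the sketch dependency-free.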

This study focuses on the finite element analysis of a model comprising a femur into which the femoral component of a total hip replacement has been implanted. The considered prosthesis is fabricated from a functionally graded material (FGM) comprising a layer of a titanium alloy bonded to a layer of hydroxyapatite. The elastic modulus of the FGM was varied in the radial, longitudinal, and longitudinal-radial directions by altering the volume fraction gradient exponent. Four cases were studied, involving two different methods of anchoring the prosthesis to the spongy bone and two cases of applied loading. The results revealed that the FG prostheses induced more strain energy density (SED) in the bone. The FG prostheses carried less stress, while more stress was transferred to the bone and cement. Meanwhile, lower interface shear stresses developed at the prosthesis-bone interface in the noncemented FG prostheses. The cement-bone interface carried more stress than the prosthesis-cement interface. Stair climbing induced higher stresses, and thus more adverse effects on the implanted femur components, than normal walking. Therefore, stress shielding, the developed stresses, and the interface stresses in the THR components can be adjusted by controlling the stiffness of the FG prosthesis through the volume fraction gradient exponent.
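
The role of the volume fraction gradient exponent can be illustrated with a simple rule-of-mixtures grading law. The moduli, the mixture rule, and the function name below are illustrative assumptions for the sketch, not values or methods taken from the study.

```python
def fgm_modulus(x, e_ha=10.0, e_ti=110.0, n=1.0):
    """Young's modulus (GPa) at normalized position x in [0, 1] across the
    graded layer, with titanium volume fraction V = x**n; n is the volume
    fraction gradient exponent. Phase moduli here are illustrative."""
    v = x ** n
    return (1.0 - v) * e_ha + v * e_ti

# A larger exponent keeps more of the layer near the compliant phase,
# lowering the overall stiffness of the graded prosthesis.
e_soft = fgm_modulus(0.5, n=2.0)   # 35.0 GPa
e_stiff = fgm_modulus(0.5, n=0.5)  # ≈ 80.7 GPa
```

This is the lever the study manipulates: changing n reshapes the stiffness profile, which in turn shifts stress shielding and interface stresses.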

The analysis of the response of complex structural systems requires the description of the material constitutive relations by means of an appropriate material model. The level of abstraction of such a model may strongly affect the quality of the prognosis for the whole structure. In this context, it is necessary to describe the material as exactly as needed but as simply as possible. All material phenomena of crystalline materials such as steel that affect the behavior of a structure rely on physical effects interacting over spatial scales from the subatomic to the macroscopic range. Nevertheless, even if the material is microscopically heterogeneous, it may be appropriate to use phenomenological models for the purposes of civil engineering. Although widely applied, these models are insufficient for steel with microscopic characteristics such as texture, which typically occurs in hot-rolled steel members or in the heat-affected zones of welded joints. Texture manifests itself in crystalline materials as a regular crystallographic structure and crystallite orientation, influencing the macroscopic material properties. The analysis of the structural response of material with texture (e.g. rolled steel or the heat-affected zone of a welded joint) requires extending the phenomenological material description at the macroscopic scale with microscopic information. This paper introduces an enrichment approach for material models based on a hierarchical multiscale methodology: the grain texture is described on a mesoscopic scale and coupled with the macroscopic constitutive relations by means of homogenization. Due to the variety of available homogenization methods, the question of assessing the coupling quality arises. The applicability of the method and the effect of the coupling method on the reliability of the response are presented in an example.

CONSTITUTIVE MODELS FOR SUBSOIL IN THE CONTEXT OF STRUCTURAL ANALYSIS IN CONSTRUCTION ENGINEERING
(2010)

Parameters of constitutive models are generally obtained by comparing the results of forward numerical simulations with measurement data. Mostly, the parameter values are varied by trial and error in order to reach an improved fit and obtain plausible results. However, the description of complex soil behavior requires advanced constitutive models, whose rising complexity mainly increases the number of unknown constitutive parameters. An efficient identification "by hand" thus becomes quite difficult for most practical geotechnical problems. The main focus of this article is on finding a parameter vector in a given search space which minimizes the discrepancy between measurements and the associated numerical results. Classically, the parameter values are estimated from laboratory tests on small samples (triaxial or oedometer tests). For this purpose, an automatic population-based approach is presented to determine the material parameters for reconstituted and natural Bothkennar Clay. After the identification, a statistical assessment of the numerical results is carried out to evaluate the different constitutive models. In addition, a geotechnical problem, stone columns under an embankment, is treated in a well-instrumented field trial in Klagenfurt, Austria. For the identification, measurements from multilevel piezometers, multilevel extensometers and a horizontal inclinometer are available. Based on a simulation of the stone columns in an FE model, the constitutive parameters are identified analogously to the experimental tests, by minimizing the absolute error between the measured and numerical curves.

Buildings can be divided into various types and described by a huge number of parameters. Within the life cycle of a building, especially during the design and construction phases, many engineers with different points of view, proprietary applications and data formats are involved. The collaboration of all participating engineers is characterised by a high amount of communication. Due to these aspects, a homogeneous building model for all engineers is not feasible. The status quo in civil engineering is the segmentation of the complete model into partial models. Currently, the interdependencies of these partial models are not in the focus of available engineering solutions. This paper addresses the problem of coupling partial models in civil engineering. According to the state of the art, applications and partial models are formulated with the object-oriented method. Although this method directly solves basic communication problems such as subclass coupling, many relevant coupling problems remain to be solved. It is therefore necessary to analyse and classify the relevant coupling types in building modelling. Coupling in computer science refers to the relationship between modules and their mutual interaction, and can be divided into different coupling types, which differ in the degree to which the coupled modules rely upon each other. This is exemplified by a general reference example from civil engineering. A uniform formulation of coupling patterns is described analogously to design patterns, a common methodology in software engineering. Design patterns are templates describing a general reusable solution to a commonly occurring problem; a template is independent of the programming language and the operating system. These coupling patterns are selected according to the specific problems of building modelling, and a specific meta-model for coupling problems in civil engineering is introduced.
In our meta-model, the coupling patterns are a semantic description of a specific coupling design.

The planning process in civil engineering is highly complex and not manageable in its entirety.
The state of the art decomposes complex tasks into smaller, manageable sub-tasks. Due to the close interrelatedness of the sub-tasks, it is essential to couple them. However, from a software engineering point of view, this is quite challenging because of the numerous incompatible software applications on the market. This study pursues two main objectives. The first is the generic formulation of coupling strategies in order to support engineers in the implementation and selection of adequate coupling strategies. This has been achieved by the use of a coupling pattern language combined with a four-layered metamodel architecture, whose applicability has been demonstrated on a real coupling scenario. The second is the quality assessment of coupled software, which has been developed based on the evaluated schema mapping. This approach is described using mathematical expressions derived from set theory and graph theory, taking the various mapping patterns into account. Moreover, the coupling quality is evaluated within the formalization process by considering the uncertainties that arise during mapping, resulting in global quality values which the user can employ to assess the exchange. Finally, the applicability of the proposed approach is shown using an engineering case study.

Different types of data provide different types of information. The present research analyzes the prediction error obtained under different data-type availability for calibration. The contributions of different measurement types to model calibration and prognosis are evaluated. A coupled 2D hydro-mechanical model of a water-retaining dam is taken as an example. Here, the mean effective stress in the porous skeleton is reduced due to an increase in pore water pressure under drawdown conditions. Relevant model parameters are identified by scaled sensitivities. Then, Particle Swarm Optimization is applied to determine the optimal parameter values, and finally the prognosis error is determined. We compare the predictions of the optimized models with results from a forward run of the reference model to obtain the actual prediction errors. The analyses presented here were performed by calibrating the hydro-mechanical model to 31 data sets of 100 observations of varying data types. The prognosis results improve when diversified information is used for calibration. However, when several types of information are used, the number of observations has to be increased in order to cover a representative part of the model domain. For an analysis with a constant number of observations, a compromise between data-type availability and domain coverage proves to be the best solution. Which type of calibration information contributes to the best prognoses could not be determined in advance. The error in model prognosis does not depend on the error in calibration, but on the parameter error, which unfortunately cannot be determined in inverse problems since we do not know its real value. The best prognoses were obtained independently of the calibration fit. However, excellent calibration fits led to an increase in the variation of the prognosis error; in the case of excellent fits, parameter values approached the limits of reasonable physical values more often.
To improve prognosis reliability, the expected values of the parameters should be considered as prior information in the optimization algorithm.
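
A minimal Particle Swarm Optimization loop of the kind used for such calibration can be sketched as follows. The two-parameter objective is a synthetic stand-in for the misfit between model output and observations (its true optimum is k = 2, e = 1); in the study, each evaluation would require a run of the hydro-mechanical model.

```python
import random

def misfit(p):
    # Synthetic stand-in for the model-measurement discrepancy.
    k, e = p
    obs = [(1.0, 3.0), (2.0, 5.0), (3.0, 7.0)]  # (input, "measured" output)
    return sum((k * x + e - y) ** 2 for x, y in obs)

def pso(f, bounds, n_particles=20, iters=200, seed=3):
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_f = [f(p) for p in pos]
    gbest = pbest[min(range(n_particles), key=pbest_f.__getitem__)][:]
    gbest_f = f(gbest)
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (0.7 * vel[i][d]                      # inertia
                             + 1.5 * r1 * (pbest[i][d] - pos[i][d])   # cognitive pull
                             + 1.5 * r2 * (gbest[d] - pos[i][d]))     # social pull
                pos[i][d] += vel[i][d]
            fi = f(pos[i])
            if fi < pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], fi
                if fi < gbest_f:
                    gbest, gbest_f = pos[i][:], fi
    return gbest, gbest_f

best, best_f = pso(misfit, [(0.0, 5.0), (0.0, 5.0)])  # best ≈ [2.0, 1.0]
```

The swarm coefficients (inertia 0.7, pulls 1.5) are common textbook defaults, not values from the paper.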

ESTIMATING UNCERTAINTIES FROM INACCURATE MEASUREMENT DATA USING MAXIMUM ENTROPY DISTRIBUTIONS
(2010)

Modern engineering design often considers uncertainties in the geometrical and material parameters and in the loading conditions. Based on initial assumptions about the stochastic properties, such as the mean values, standard deviations and distribution functions of these uncertain parameters, a probabilistic analysis is carried out. In many application fields, probabilities of exceeding failure criteria are computed. The resulting failure probability depends strongly on the initial assumptions about the random variable properties. Measurement data are always more or less inaccurate due to varying environmental conditions during the measurement procedure. Furthermore, the estimation of stochastic properties from a limited number of realisations also causes uncertainties in these quantities. Thus, assuming exactly known stochastic properties and neglecting these uncertainties may not lead to very useful probabilistic measures in a design process. In this paper, we treat the stochastic properties of a random variable as uncertain quantities caused by so-called epistemic uncertainties. Instead of predefined distribution types, we use the maximum entropy distribution, which enables the description of a wide range of distribution functions based on the first four stochastic moments. These moments are in turn taken as random variables to model the epistemic scatter in the stochastic assumptions. The main point of this paper is the discussion of the estimation of these uncertain stochastic properties from inaccurate measurements. We investigate the applicability of the bootstrap algorithm for quantifying the uncertainties in the stochastic properties given imprecise measurement data. Based on the obtained estimates, we apply a standard stochastic analysis to a simple example to demonstrate the difference and the necessity of the proposed approach.
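
The bootstrap step can be sketched as follows: resampling an imprecise measurement series with replacement yields the scatter of the first four moments, which would then parameterize the maximum entropy distribution. The sample below is synthetic; function names are chosen for the sketch.

```python
import random
import statistics

def sample_moments(xs):
    """First four moments: mean, standard deviation, skewness, kurtosis."""
    m = statistics.fmean(xs)
    s = statistics.pstdev(xs)
    n = len(xs)
    skew = sum((x - m) ** 3 for x in xs) / (n * s ** 3)
    kurt = sum((x - m) ** 4 for x in xs) / (n * s ** 4)
    return m, s, skew, kurt

def bootstrap_moment_scatter(xs, n_boot=2000, seed=7):
    """Standard error of each moment estimate by resampling with replacement:
    the epistemic scatter to be carried into the maximum entropy distribution."""
    rng = random.Random(seed)
    reps = [sample_moments([rng.choice(xs) for _ in xs]) for _ in range(n_boot)]
    return [statistics.pstdev(col) for col in zip(*reps)]

# Hypothetical "measurement" sample: 50 noisy readings
rng = random.Random(0)
xs = [rng.gauss(10.0, 2.0) for _ in range(50)]
scatter = bootstrap_moment_scatter(xs)  # one standard error per moment
```

Treating these standard errors as the variability of random moment variables reproduces, in miniature, the paper's move from point estimates to uncertain stochastic properties.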

The aim of this study is to show an application of model robustness measures to soil-structure interaction (SSI) models. Model robustness is a measure of the ability of a model to provide useful answers for input parameters which, in geotechnical engineering, typically have a wide range. The calculation of SSI is a major problem in geotechnical engineering, and several different models exist for its estimation; these can be separated into analytical, semi-analytical and numerical methods. This paper focuses on numerical models of SSI, specifically macro-element-type models and more advanced finite element models using a contact description with continuum or interface elements. A brief description of the models used is given in the paper. Following this description, the applied SSI problem is introduced: the observed event is a statically loaded shallow foundation with an inclined load. The different partial models for the SSI effects are assessed using different robustness measures in a numerical application. The paper investigates the capability of these measures for assessing the model quality of SSI partial models. A variance-based and a mathematical robustness approach are applied. These robustness measures are used in a framework which also allows the investigation of computationally expensive models. Finally, the results show that the concept of using robustness approaches combined with other model-quality indicators (e.g. model sensitivity or model reliability) can lead to a unique model-quality assessment for SSI models.

Through the use of numerical methods and the rapid development of computer technology in recent years, partial models of great variety, complexity, refinement and capability have become available. This can be noticed in the evaluation of the reliability of structures, e.g. the increased use of spatial structural systems. Well-developed partial models already exist for the different fields of civil engineering. Because these partial models are most often used separately, the general view is not entirely illustrated. Until now, there has been no common methodology for evaluating the efficiency of models; the trust in the prediction of a particular engineering model has generally relied on the engineer's experience. In this paper, the basics of evaluating simple models and coupled partial models of frame structures are discussed using sustainable numerical methods. Furthermore, quality classes (levels) of design tasks are defined based on their practical relevance, and analysis methods are systemized. After analysing different published assessment methods, it may be noted that the Efficiency Indicator Method (EWM) is most suitable for the evaluation problem observed. Therefore, the EWM was modified into the Model Efficiency Analysis (MEA) for the purpose of a holistic evaluation. The criteria fall into two groups, benefit and expenditure, and by calculating the quotient (benefit/expenditure) it is possible to make a statement about the efficiency of the observed models. Presently, the expenditure value is not a subject of investigation, so the model efficiency is calculated from the benefit value alone. This paper also contains the associated criteria catalog, different normalization methods, as well as weighting possibilities.

Polymer-modified cement concrete (PCC) is a heterogeneous building material with a hierarchically organized microstructure. Therefore, continuum micromechanics-based multiscale models represent a promising method to estimate the mechanical properties. By means of a bottom-up approach, homogenized properties at the macroscopic scale are derived considering microstructural characteristics. The extension of existing multiscale models for the application to PCC is the main objective of this work. For that, cross-scale experimental studies are required. Both macroscopic and microscopic mechanical tests are performed to characterize the elastic and viscoelastic properties of different PCC. The comparison between experiment and model prediction illustrates the success of the modeling approach.
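
As a minimal sketch of the bottom-up homogenization idea, the classical Voigt and Reuss mixture rules bracket the homogenized stiffness of a two-phase microstructure. The phase moduli and volume fractions below are illustrative assumptions, not measured PCC values, and the actual multiscale models in the work use more refined continuum micromechanics schemes than these elementary bounds.

```python
def voigt(moduli, fractions):
    """Upper (parallel, equal-strain) bound on the homogenized modulus."""
    return sum(e * f for e, f in zip(moduli, fractions))

def reuss(moduli, fractions):
    """Lower (series, equal-stress) bound on the homogenized modulus."""
    return 1.0 / sum(f / e for e, f in zip(moduli, fractions))

# Illustrative two-phase mix: a stiff cementitious matrix with a
# compliant polymer phase (moduli in GPa, volume fractions assumed)
e_upper = voigt([20.0, 3.0], [0.9, 0.1])  # 18.3 GPa
e_lower = reuss([20.0, 3.0], [0.9, 0.1])  # ≈ 12.8 GPa
```

The true homogenized modulus must lie between the two bounds; tighter estimates (e.g. Mori-Tanaka or self-consistent schemes) additionally use phase morphology, which is where the microstructural characterization of PCC enters.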

The present research analyses the prediction error obtained under different data availability scenarios, in order to determine which measurements contribute to an improvement of the model prognosis and which do not. A fully coupled 2D hydro-mechanical model of a water-retaining dam is taken as an example. Here, the mean effective stress in the porous skeleton is reduced due to an increase in pore water pressure under drawdown conditions. Relevant model parameters are ranked by scaled sensitivities, Particle Swarm Optimization is applied to determine the optimal parameter values, and model validation is performed to determine the magnitude of the forecast error. We compare the predictions of the optimized models with results from a forward run of the reference model to obtain the actual prediction errors.
The analyses presented here were performed on 31 data sets of 100 observations of varying data types. Calibrating with multiple information types instead of only one yields better calibration results and improves the model prognosis. However, when several types of information are used, the number of observations has to be increased in order to cover a representative part of the model domain; otherwise, a compromise between data availability and domain coverage proves best. Which type of calibration information contributes to the best prognoses could not be determined in advance, for the error in model prognosis does not depend on the error in calibration, but on the parameter error, which unfortunately cannot be determined in reality since we do not know its real value. Excellent calibration fits with parameter values near the limits of reasonable physical values produced the highest prognosis errors, while models which included excess pore pressure values for calibration provided the best prognoses, independent of the calibration fit.

Nonlinear analyses are characterised by approximations of the fundamental equations of differing quality. Starting with a general description of the nonlinear finite element formulation, the fundamental equations are derived for plane truss elements. Special emphasis is placed on the determination of the internal and external system energy, as well as on the influence of different-quality approximations of the displacement-strain relationship on solution quality. To simplify the solution procedure, the nonlinear function describing the kinematics is expanded into a Taylor series and truncated after the n-th term. The different kinematics influence the speed of convergence as well as the exactness of the solution, which is shown on a simple truss structure. To assess the quality of different formulations of the nonlinear kinematic equation, three approaches are discussed. First, the overall internal and external energy is compared for different kinematical models. In a second step, the energy content related to the single terms describing the displacement-strain relationship is investigated and used for quality control, following two different paths. Based on single ε-terms, an adaptive scheme is used to change the kinematical model depending on the increasing nonlinearity of the structure. The solution quality turned out to be satisfactory compared with the exact result. More detailed investigations are necessary to find criteria for the threshold values of the iterative process as well as for the decision on the number and size of the incremental load steps.
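
The effect of truncating the kinematic relation can be sketched by comparing the exact engineering strain of a truss fibre with its first-order (linear) approximation and with the Green-Lagrange measure, which retains the quadratic terms. The displacement gradients are illustrative values, not from the paper's example.

```python
import math

def eng_strain(up, wp):
    """Exact engineering strain of a truss fibre with axial and
    transverse displacement gradients u', w'."""
    return math.sqrt((1.0 + up) ** 2 + wp ** 2) - 1.0

def strain_linear(up, wp):
    return up                              # first-order (linear) kinematics

def strain_green_lagrange(up, wp):
    return up + 0.5 * (up ** 2 + wp ** 2)  # retains the quadratic terms

# With a growing transverse gradient, the error of the linear truncation
# grows quadratically, while the quadratic measure stays much closer.
up = 0.001
for wp in (0.01, 0.05, 0.2):
    e = eng_strain(up, wp)
    print(wp, e - strain_linear(up, wp), e - strain_green_lagrange(up, wp))
```

This mirrors the paper's observation that the chosen truncation order of the kinematics governs both the exactness of the solution and, in an incremental-iterative scheme, the speed of convergence.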

In this paper, the influence of changes in the mean wind velocity, the wind profile power-law coefficient, the drag coefficient of the terrain and the structural stiffness is investigated for structural models of different complexity. The paper gives a short introduction to wind profile models and to the approach of A. G. Davenport for computing the structural reaction to wind-induced vibrations. First, this approach is illustrated with a simple example (a skyscraper), which allows the reader to study the differences in variance when changing one of the above-mentioned parameters and to see the influence of structural models of different complexity on the result. Furthermore, an approach for estimating the required discretization level is given. With this knowledge, the structural model design methodology can be based on a deeper understanding of the differing behavior of the individual models.

Long-span cable supported bridges are prone to aerodynamic instabilities caused by wind and this phenomenon is usually a major design criterion. If the wind speed exceeds the critical flutter speed of the bridge, this constitutes an Ultimate Limit State. The prediction of the flutter boundary therefore requires accurate and robust models. This paper aims at studying various combinations of models to predict the flutter phenomenon.
Since flutter is a coupling of aerodynamic forcing with a structural dynamics problem, different types and classes of models can be combined to study the interaction. Here, both numerical approaches and analytical models are utilised and coupled in different ways to assess the prediction quality of the hybrid model. The models employed for the aerodynamic forces are the analytical Theodorsen expressions for the motion-induced aerodynamic forces of a flat plate, and Scanlan derivatives as a meta-model. Further, Computational Fluid Dynamics (CFD) simulations using the Vortex Particle Method (VPM) were used to cover numerical models.
The structural representations were dimensionally reduced to two-degree-of-freedom section models calibrated from global models, as well as a fully three-dimensional Finite Element (FE) model. A two-degree-of-freedom system was analysed both analytically and numerically.
Generally, all models were able to predict the flutter phenomenon and relatively close agreement was found for the particular bridge. In conclusion, the model choice for a given practical analysis scenario will be discussed in the context of the analysis findings.

Civil engineers take advantage of models to design reliable structures. In order to fulfill the design goal with a certain amount of confidence, the utilized models should be able to predict the probable structural behavior under the expected loading schemes. A major challenge is therefore to find models which provide less uncertain and more robust responses. The problem becomes twofold when the model to be studied is a global model comprised of different interacting partial models. This study aims at the model quality evaluation of global models, with frame-wall systems as the case study, and presents the results of the first step taken toward accomplishing this goal. To start the model quality evaluation of the global frame-wall system, the main element (i.e. the wall) was studied through nonlinear static and dynamic analysis using two different modeling approaches: the fiber section model and the Multiple-Vertical-Line-Element-Model (MVLEM). The influence of the wall aspect ratio (H/L) and the axial load on the response of the models was studied. The results from the nonlinear static and dynamic analyses of both models are presented and compared. The models produced quite different responses in the range of low-aspect-ratio walls under large axial loads, due to the different contributions of the shear deformations to the top displacement. In the studied cases, the results imply that careful attention should be paid to the model quality evaluation of wall models, specifically when they are to be coupled to other partial models, such as a moment frame or a soil-footing substructure, whose response is sensitive to shear deformations. In this case, even a high-quality wall model would not result in a high-quality coupled system, since it fails to interact properly with the rest of the system.

The evident advances in the computational power of digital computers enable the modeling of the total system of a structure. Such modeling demands compatible representations of the couplings between different structural subsystems. Therefore, models of the dynamic interaction between vehicle and bridge, and models of a bridge bearing, the coupling element between the bridge's superstructure and substructure, are of interest and discussed in this paper. The vehicle-bridge interaction may be described as a function connecting two sets of behavior. In this case, the coupling is embodied by mutual parameters that affect both systems, such as the frequency content of the bridge and the vehicle. The bridge bearings, on the other hand, are elements used specifically to couple; in such elements, the deformation and the transferred loads are used to characterize the coupling. The nature of these couplings and their influence on the bridge response differ. However, the need to assess the amount of dynamic response transferred by or within these couplings is common to both.

There are many different approaches to simulate the mechanical behavior of RC frames with masonry infills. In this paper, selected modeling techniques for masonry infills and reinforced concrete frame members are discussed, with attention on the damaging effects on the individual members and the entire system under quasi-static horizontal loading. The effect of the infill walls on the surrounding frame members is studied using equivalent strut elements. The implemented model considers in-plane failure modes for the infills, such as bed joint sliding and corner crushing. The frame member models differ with respect to their stress state. Finally, examples are provided and compared with experimental data from a full-scale test executed on a three-story RC frame with and without infills. The quality of the model is evaluated on the basis of load-displacement relationships as well as damage progression.

The application of a partly decoupled continuum-mechanics approach facilitates the calculation of structural responses due to welding. The numerical results demonstrate that a qualitative prediction of welded connections is possible. Since it is intended to integrate the local effects of a joint into the structural analysis of steel constructions, higher demands on model quality must be met. A wide array of material parameters affecting the thermal, metallurgical and mechanical behavior is presented, all of which have to be identified. For that purpose, further investigations are necessary to analyze the sensitivity of the models to the different material properties. The experimental determination of every material parameter is not possible due to the extraordinary effort required. Besides that, experimentally identified parameters apply only to the tested steel grade and the measured temperature-time regimes. For that reason, alternative approaches for the identification of material parameters, such as optimization strategies, have to be applied. Once the material parameters are defined, a quantitative prediction of welded connections will also be possible. Numerical results show the effect of the phase transformations activated by the welding process on the residual stress state. As these phenomena occur in local areas on the scale of crystals and grains, describing the microscopic phenomena and propagating them to the macroscopic level by means of homogenization approaches might be expedient. Nevertheless, one should bear in mind the increasing number of material parameters as well as the complexity of their experimental determination. Thus, the microscopic approach should always be judged with respect to the capability and efficiency of the required prediction. Under certain circumstances, a step backwards to a phenomenological approach can also be beneficial.

Building information modeling offers a huge potential for increasing the productivity and quality of construction planning processes. Despite its promising concept, this approach has not yet found widespread use. One of the reasons is the insufficient coupling of the structural models with the general building model. Instead, structural engineers usually set up a structural model that is independent of the building model and consists of mechanical models of reduced dimension. An automatic model generation, which would be valuable in case of model revisions, is therefore not possible. This can be overcome by a volumetric formulation of the problem. A recent approach applied the p-version of the finite element method to this problem. This method, in conjunction with a volumetric formulation, is suited to simulate the structural behaviour of both "thick" solid bodies and thin-walled structures. However, a notable discretization error remains in the numerical models. This paper therefore proposes a new approach for overcoming this situation. It suggests combining isogeometric analysis with the volumetric models in order to integrate the structural design into the digital, building-model-centered planning process and to reduce the discretization error. The concept of isogeometric analysis consists, roughly, in the application of NURBS functions to represent both the geometry and the shape functions of the elements. These functions possess some beneficial properties regarding numerical simulation. Their use, however, leads to some intricacies in the setup of the stiffness matrix. This paper describes some of these properties.
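The NURBS functions mentioned above can be illustrated with a short sketch (not part of the paper): B-spline basis functions evaluated with the Cox-de Boor recursion and then normalised with weights to form the rational NURBS basis. The knot vector and weights below are invented example values.

```python
import numpy as np

def bspline_basis(i, p, u, knots):
    """Cox-de Boor recursion for the i-th B-spline basis function of degree p."""
    if p == 0:
        return 1.0 if knots[i] <= u < knots[i + 1] else 0.0
    left = 0.0
    if knots[i + p] != knots[i]:
        left = (u - knots[i]) / (knots[i + p] - knots[i]) \
               * bspline_basis(i, p - 1, u, knots)
    right = 0.0
    if knots[i + p + 1] != knots[i + 1]:
        right = (knots[i + p + 1] - u) / (knots[i + p + 1] - knots[i + 1]) \
                * bspline_basis(i + 1, p - 1, u, knots)
    return left + right

def nurbs_basis(u, p, knots, weights):
    """Rational (NURBS) basis: weighted B-splines normalised to a partition of unity."""
    N = np.array([bspline_basis(i, p, u, knots) for i in range(len(weights))])
    W = N * weights
    return W / W.sum()

# quadratic curve, open knot vector with one interior knot, 4 control points
knots = [0, 0, 0, 0.5, 1, 1, 1]
weights = np.array([1.0, 0.8, 0.8, 1.0])
R = nurbs_basis(0.25, 2, knots, weights)
print(R, R.sum())   # non-negative values that sum to 1
```

The partition-of-unity and non-negativity seen in the output are among the "beneficial properties" the abstract alludes to; the denser coupling of control points is what complicates the stiffness matrix setup.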

Couplings of software applications in the field of civil engineering are intended to realize an exchange of information that is as error-free as possible. The assessment of the quality of these couplings therefore plays an important role. This work deals with the evaluation of a coupling between the IFC standard and the software Ansys. Based on the state of the art, theories for coupling assessment are presented. Using schema analysis and schema mapping, such a coupling is developed and tested on a reference model. Finally, the coupling is evaluated qualitatively under various aspects.

Known as a sophisticated phenomenon in civil engineering problems, soil-structure interaction has been investigated in depth in the field of geotechnics. At the same time, the advent of powerful computers has led to the development of numerous numerical methods dealing with this phenomenon, resulting in a wide variety of methods that try to simulate the behavior of the soil stratum. This survey studies two common approaches to modeling the soil's behavior in a system consisting of a structure with two degrees of freedom, representing a two-storey steel frame structure whose column rests on a pile embedded in sand at laboratory scale. The effect of the soil simulation technique on the dynamic behavior of the structure is of major interest in the study. The modeling approaches utilized are the so-called holistic method and the substitution of the soil by respective impedance functions.

Methods for model quality assessment aim to find the most appropriate model with respect to accuracy and computational effort for a structural system under investigation. Model error estimation techniques can be applied for this purpose when kinematical models are investigated. These are counted among the class of white-box models, which means that the model hierarchy, and therewith the best model, is known. This thesis gives an overview of discretisation error estimators. Deduced from these, methods for model error estimation are presented. Their general goal is to predict the inaccuracies that are introduced by using the simpler model, without knowing the solution of the more complex model. This information can be used to steer an adaptive process. Techniques for linear and non-linear problems as well as for global and goal-oriented errors are introduced. The estimation of the error in local quantities is realised by solving a dual problem, whose solution serves as a weight for the primal error. So far, such techniques have mainly been applied in material modelling and for dimensional adaptivity. Within the scope of this thesis, available model error estimators are adapted for an application to kinematical models. Their applicability is tested on the question of whether a geometrically non-linear calculation is necessary or not. The analysis is limited to non-linear estimators due to the structure of the underlying differential equations. These methods often involve simplifications, e.g. linearisations. It is investigated to what extent such assumptions lead to meaningful results when applied to kinematical models.
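The dual-problem weighting described above can be sketched on a linear toy system (illustrative only; the matrices and goal functional are invented): the error of a simplified model in a goal quantity is recovered by weighting the residual of its solution with the dual (adjoint) solution of the finer model.

```python
import numpy as np

# Fine ("complex") model A_f u = b, and a simplified model A_c u = b
# that drops the couplings; goal quantity Q(u) = q . u  (hypothetical 3-DOF system).
A_f = np.array([[4.0, -1.0, 0.0],
                [-1.0, 4.0, -1.0],
                [0.0, -1.0, 4.0]])
A_c = np.diag(np.diag(A_f))        # simpler model: off-diagonal terms neglected
b = np.array([1.0, 2.0, 3.0])
q = np.array([0.0, 0.0, 1.0])      # goal: response of DOF 3

u_c = np.linalg.solve(A_c, b)      # cheap primal solution of the simple model
z = np.linalg.solve(A_f.T, q)      # dual solution: weights for the residual
r = b - A_f @ u_c                  # residual of the simple solution in the fine model
est = z @ r                        # dual-weighted residual estimate of the goal error

u_f = np.linalg.solve(A_f, b)      # (only for verification here)
true_err = q @ (u_f - u_c)
print(est, true_err)               # the estimate is exact for a linear fine model
```

For non-linear fine models (the kinematical case in the thesis) the same construction only estimates the goal error, which is precisely where the linearisation assumptions investigated above enter.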

The process of analysis and design in structural engineering requires the consideration of different partial models, for example loading, structural materials, structural elements, and analysis types. The various partial models are combined by coupling several of their components. Due to the large number of available partial models describing similar phenomena, many different model combinations are possible to simulate the same aspects of a structure. The challenging task of the engineer is to select a model combination that ensures a sufficient, reliable prognosis. In order to achieve this reliable prognosis of the overall structural behavior, a high individual quality of the partial models and an adequate coupling of the partial models are required. Several methodologies have been proposed to evaluate the quality of partial models for their intended application, but a detailed study of the coupling quality is still lacking. This paper proposes a new approach to assess the coupling quality of partial models in a quantitative manner. The approach is based on the consistency of the coupled data and applies to uni- and bidirectionally coupled partial models. Furthermore, the influence of the coupling quality on the output quantities of the partial models is considered. The functionality of the algorithm and the effect of the coupling quality are demonstrated using an example of coupled partial models in structural engineering.
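A consistency-based coupling measure of the kind described can be sketched as follows; the metric below is a hypothetical illustration, not the algorithm of the paper, and all data are invented.

```python
import numpy as np

def coupling_consistency(x_sent, x_received):
    """Hypothetical consistency score in [0, 1]: 1 means the data one partial
    model sends equals the data its coupling partner actually uses."""
    x_sent = np.asarray(x_sent, float)
    x_received = np.asarray(x_received, float)
    denom = np.linalg.norm(x_sent) + np.linalg.norm(x_received)
    if denom == 0.0:
        return 1.0
    return 1.0 - np.linalg.norm(x_sent - x_received) / denom

# example: nodal forces sent by a load model vs. forces actually applied by
# the structural model (interpolation onto a coarser mesh loses detail)
f_sent = [10.0, 20.0, 30.0]
f_used = [10.0, 19.5, 30.5]
print(coupling_consistency(f_sent, f_used))   # close to, but below, 1.0
```

In a bidirectional coupling, the same score can be evaluated in both transfer directions and the lower of the two taken as the coupling quality.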

Reinforced concrete walls are commonly selected as the lateral resisting systems in the seismic design of buildings. The design procedure requires reliable and robust models to predict the wall response. Many researchers have therefore focused on using the available experimental data to comment on the quality of the models at hand. What is missing, though, is an uncertainty-aware treatment of the experimental data, since such data can be affected by different sources of uncertainty. In this paper, we introduce a database created for model quality evaluation purposes that accounts for the uncertainties in the experimental data. This is the first step of a larger study on experience-based model quality evaluation of reinforced concrete walls. Here, we briefly present the database as well as six sample validations of the developed numerical model (the quality of which is to be assessed). The database contains information on nearly 300 wall specimens from about 50 sources. Both the database and the numerical model, built for uncertainty and sensitivity analysis purposes, are mainly based on ten parameters. These include geometry, material, reinforcement layout and loading properties. The validation results show that the model is able to predict the wall response satisfactorily. Consequently, the validated numerical model can be used in further quality evaluation studies.

The topic of structural robustness is covered extensively in the current structural engineering literature, and a few evaluation methods already exist. Since these methods are based on different evaluation approaches, their comparison is difficult. All the approaches, however, have one thing in common: they need a structural model that represents the structure to be evaluated. As the structural model is the basis of the robustness evaluation, the question arises whether the quality of the chosen structural model influences the estimated structural robustness index. This paper explains what robustness in structural engineering means and gives an overview of existing assessment methods. One is the reliability-based robustness index, which uses the reliability indices of an intact and a damaged structure. The second is the risk-based robustness index, which estimates the structural robustness from the direct and indirect risks. The paper describes how these approaches to the evaluation of structural robustness work and which parameters they use. Since both approaches need a structural model for the estimation of the structural behavior and the probability of failure, it is necessary to consider the quality of the chosen structural model. In any case, the chosen model has to represent the structure and the input factors and to reflect the damages which occur. Using the example of two different model qualities, it is shown that the model choice strongly influences the quality of the robustness index.
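The two indices mentioned can be sketched in a few lines. The formulas follow commonly cited definitions (a Frangopol-type ratio of reliability indices and a Baker-type risk ratio); whether the paper uses exactly these forms is an assumption here, and the numerical values are invented.

```python
def reliability_robustness(beta_intact, beta_damaged):
    """Reliability-based index: large when damage barely reduces the
    reliability index, diverging as beta_damaged approaches beta_intact."""
    return beta_intact / (beta_intact - beta_damaged)

def risk_robustness(risk_direct, risk_indirect):
    """Risk-based index: 1 if a local failure causes no follow-up (indirect)
    consequences, tending to 0 when indirect risk dominates."""
    return risk_direct / (risk_direct + risk_indirect)

# hypothetical values for an intact vs. damaged frame
print(reliability_robustness(4.2, 2.8))   # ratio of reliability indices
print(risk_robustness(1.0, 4.0))          # share of direct risk in total risk
```

Both indices depend on quantities (reliability index, failure probability, consequences) that come from the structural model, which is exactly why the model quality question raised above matters.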

In spite of the extensive research on dynamic soil-structure interaction (SSI), misconceptions still exist concerning the role of SSI in the seismic performance of structures, especially those founded on soft soil. This is due to the fact that the current analytical SSI models used to evaluate the influence of the soil on the overall structural behavior are approximate models and may involve assumptions and practices that are not always precise. This is especially true of the codified approaches, which include substantial approximations to provide simple frameworks for design. As a direct numerical analysis requires a high computational effort, performing an analysis considering SSI is computationally uneconomical for regular design applications. This paper sets out some milestones for evaluating SSI models. This is achieved by investigating the different assumptions and involved factors, as well as by varying the configurations of R/C moment-resisting frame structures supported by single footings and subjected to seismic excitations. It is noted that the scope of this paper is to highlight, rather than fully resolve, the above subject. A rough draft of the proposed approach is presented in this paper, whereas a thorough illustration will be given in the presentation at the conference.

With the advances of computer technology, structural optimization has become a prominent field in structural engineering. In this study, an unconventional approach to structural optimization is presented which utilizes the Energy method with Integral Material behaviour (EIM), based on Lagrange's principle of minimum potential energy. The equilibrium condition within the EIM, an alternative method for nonlinear analysis, is secured through minimization of the potential energy as an optimization problem. Imposing this problem as an additional constraint on a higher cost function of a structural property, a bilevel programming problem is formulated. A nested strategy is used to solve the bilevel problem, treating the energy and the upper objective function as separate optimization problems. Exploiting the convexity of the potential energy, gradient-based algorithms are employed for its minimization, while the upper cost function, whose properties are unknown, is minimized using gradient-free algorithms. Two practical examples are considered in order to prove the efficiency of the method. The first is a sizing problem of an I steel section within an encased composite cross section, utilizing the material nonlinearity. The second is a discrete shape optimization of a steel truss bridge, which is compared to a previous study based on the finite element method.
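The nested bilevel strategy can be sketched on a one-degree-of-freedom toy system (a spring whose stiffness scales with a sizing variable A; all names and values are hypothetical): the inner, convex energy minimisation uses a gradient-based solver, the outer cost a gradient-free one.

```python
from scipy.optimize import minimize

F = 10.0      # applied load
k0 = 100.0    # stiffness per unit of the sizing variable (hypothetical units)
u_max = 0.05  # displacement limit imposed via a penalty

def equilibrium(A):
    """Lower level: equilibrium from minimising the convex potential energy
    Pi(u) = 0.5*k*u^2 - F*u with a gradient-based solver (BFGS)."""
    k = k0 * A
    res = minimize(lambda u: 0.5 * k * u[0] ** 2 - F * u[0],
                   x0=[0.0], jac=lambda u: [k * u[0] - F], method='BFGS')
    return res.x[0]

def cost(x):
    """Upper level: material use plus a stiff penalty on the displacement limit."""
    A = x[0]
    if A <= 0.0:
        return 1e9
    u = equilibrium(A)
    return A + 1e6 * max(0.0, u - u_max) ** 2

# gradient-free upper level, since cost() has no known derivatives
res = minimize(cost, x0=[1.0], method='Nelder-Mead')
print(res.x[0])   # near F / (k0 * u_max) = 2.0, where the limit becomes active
```

The analytic equilibrium is u = F/(k0*A), so the displacement limit becomes active at A = 2.0; the nested scheme recovers this without ever differentiating the upper cost function.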

Polymer modification of mortar and concrete is a widely used technique for improving their durability. Hitherto, the main application fields of such materials have been the repair and restoration of buildings. However, due to steadily increasing service life requirements and cost efficiency, polymer-modified concrete (PCC) is also used for construction purposes. Therefore, there is a demand for studying the mechanical properties of PCC and its substantive differences compared to conventional concrete (CC). It is important to investigate whether all the hypotheses and existing analytical formulations established for CC are also valid for PCC. In the present study, analytical models available in the literature for estimating the mechanical properties of concrete are evaluated. The investigated property is the modulus of elasticity, which is estimated from the value of the compressive strength. An existing database was extended and adapted to polymer-modified concrete mixtures along with their experimentally measured mechanical properties. Based on the indexed data, a comparison between model predictions and experiments was conducted by calculating forecast errors.
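The forecast-error comparison can be sketched as follows, assuming two widely used strength-to-stiffness expressions (an ACI 318-type square-root formula and an MC90-type cube-root formula) and an invented mini-database; the study's actual database and error measures may differ.

```python
import numpy as np

def e_aci(fc):
    """ACI 318-type estimate: E = 4700 * sqrt(fc), fc and E in MPa."""
    return 4700.0 * np.sqrt(fc)

def e_mc90(fcm):
    """CEB-FIP MC90-type estimate: E = 21500 * (fcm/10)**(1/3), in MPa."""
    return 21500.0 * (fcm / 10.0) ** (1.0 / 3.0)

# hypothetical PCC entries: (compressive strength [MPa], measured E [MPa])
data = [(35.0, 26500.0), (48.0, 30000.0), (55.0, 31500.0)]

for name, model in [("ACI-type", e_aci), ("MC90-type", e_mc90)]:
    rel_err = [(model(fc) - e_meas) / e_meas for fc, e_meas in data]
    rmse = float(np.sqrt(np.mean(np.square(rel_err))))
    print(f"{name}: mean relative error {np.mean(rel_err):+.3f}, RMS {rmse:.3f}")
```

A systematic positive bias in such errors for PCC would indicate that formulations calibrated on conventional concrete overestimate the stiffness of polymer-modified mixtures, which is the kind of conclusion the forecast-error comparison targets.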

Safe operation of important civil structures such as bridges can be assessed by fracture analysis. Since analytical methods are not capable of solving many complicated engineering problems, numerical methods have been increasingly adopted. In this paper, a part of an isotropic material which contains a crack is considered as a partial model, and the quality of the proposed model is evaluated. EXtended IsoGeometric Analysis (XIGA) is a newly developed numerical approach [1, 2] which benefits from the advantages of its origins: the eXtended Finite Element Method (XFEM) and IsoGeometric Analysis (IGA). It is capable of simulating crack propagation problems without remeshing and of capturing the singular field at the crack tip by using crack tip enrichment functions. Also, an exact representation of the geometry is possible using only a few elements. XIGA has also been successfully applied to the fracture analysis of cracked orthotropic bodies [3] and to the simulation of curved cracks [4]. XIGA applies NURBS functions for both the geometry description and the solution field approximation. The drawback of NURBS functions is that local refinement cannot be defined, since they are based on tensor-product constructs, unless multiple patches are used, which also has some limitations. In this contribution, XIGA is further developed to make local refinement feasible by using T-spline basis functions. The adoption of a recovery-based error estimator in the proposed approach, for evaluating the model quality and performing the adaptive processes, is in progress. Finally, some numerical examples with available analytical solutions are investigated with the developed scheme.

Non-destructive techniques for damage detection have become a focus of engineering interest in the last few years. However, applying these techniques to large, complex structures such as civil engineering buildings still has some limitations, since these types of structures are unique and the methodologies often need a large number of specimens for reliable results. For this reason, cost and time can greatly influence the final results. Model Assisted Probability Of Detection (MAPOD) has taken its place among the ranks of damage identification techniques, especially with advances in computer capacity and modeling tools. Nevertheless, the essential condition for a successful MAPOD is having a reliable model in advance. This condition opens the door to model assessment and model quality problems. In this work, an approach is proposed that uses partial models (PM) to compute the Probability Of damage Detection (POD). A simply supported beam, which can be structurally modified and tested under laboratory conditions, is taken as an example. The study includes both experimental and numerical investigations, the application of vibration-based damage detection approaches and a comparison of the results obtained from tests and simulations. Eventually, a proposal for a methodology to assess the reliability and robustness of the models is given.
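The POD computation at the core of MAPOD can be sketched with a hit/miss estimate (hypothetical damage levels and detection outcomes; the study's actual POD procedure, typically a log-odds regression over many model evaluations, may differ):

```python
import numpy as np

def pod_hit_miss(damage_sizes, detected):
    """Hypothetical hit/miss POD: fraction of successful detections per
    damage-size level (a regression over size is the usual refinement)."""
    sizes = np.asarray(damage_sizes, float)
    hits = np.asarray(detected, bool)
    return {a: float(hits[sizes == a].mean()) for a in np.unique(sizes)}

# simulated outcomes of a vibration-based detector for three
# stiffness-reduction levels (in %) of a simply supported beam
sizes    = [5, 5, 5, 5, 10, 10, 10, 10, 20, 20, 20, 20]
detected = [0, 0, 1, 0,  1,  0,  1,  1,  1,  1,  1,  1]
pod = pod_hit_miss(sizes, detected)
print(pod)   # POD grows with damage severity
```

Replacing the simulated outcomes with runs of a validated partial model is what turns this plain POD into a model-assisted one, and the reliability of that model is exactly the quality question raised above.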

The Influence of the Local Concavity on the Functioning of the Bearing Shell of High-Rise Construction (2012)

Areas with various defects and damages, which reduce the load-bearing capacity, were examined in a study of metal chimneys. In this work, the influence of local dimples on the function of metal chimneys was considered. The modeling tasks were completed in the software packages LIRA and ANSYS. Parameters characterizing the local dimples were identified, and a numerical study of the influence of local dimples on the stress-strain state of the shells of metal chimneys was conducted. The distribution fields of circumferential and meridional stress were analyzed in the investigated area. Zones of influence of dimples on the bearing shell of metal chimneys were investigated. The bearing capacities of high-rise structures with various dimple geometries and various shell parameters were determined with respect to specified areas of the trunk. The decrease in bearing capacity of a shell as a function of the dimple parameters is represented graphically. Recommendations for the diameter and shell thickness of metal chimneys were derived from the resulting data.

This paper focuses on the first numerical tests of the coupling between an analytical solution and the finite element method, using an example problem from fracture mechanics. The calculations were done according to the ideas proposed in [1]. The analytical solutions are constructed using an orthogonal basis of holomorphic and anti-holomorphic functions. For the coupling with the finite element method, special elements are constructed using the trigonometric interpolation theorem.

A topology optimization method has been developed for structures subjected to multiple load cases (for example, a bridge pier subjected to wind loads, traffic, superstructure...). We formulate the problem as a multi-criteria optimization problem, where the compliance is computed for each load case. Then, the epsilon-constraint method (proposed by Chankong and Haimes, 1971) is adapted. The strategy of this method is to minimize the maximum compliance, resulting from the critical load case, while the remaining compliances are treated as constraints. In each iteration, the compliances of all load cases are computed and only the maximum one is minimized. The topology optimization process thus switches from one load case to another according to the variation of the resulting compliance. In this work, we motivate and explain the proposed methodology and provide some numerical examples.
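The min-max idea can be sketched on a two-member sizing toy problem (invented stiffness and load data, not the paper's topology formulation): the epsilon-constraint style reformulation minimizes an auxiliary bound t with every load-case compliance constrained below it.

```python
import numpy as np
from scipy.optimize import minimize

E, L, V = 1.0, 1.0, 3.0     # modulus, member length, available material volume
loads = [2.0, 1.0]          # load case j stresses member j (toy model)

def compliances(A):
    """Compliance of each load case: c_j = F_j^2 * L / (E * A_j)."""
    return np.array([loads[j] ** 2 * L / (E * A[j]) for j in range(2)])

# min-max as a bound formulation: x = [A1, A2, t], minimize t subject to
# c_j(A) <= t for all load cases and the volume constraint A1 + A2 = V.
cons = [{'type': 'ineq', 'fun': lambda x, j=j: x[2] - compliances(x[:2])[j]}
        for j in range(2)]
cons.append({'type': 'eq', 'fun': lambda x: V - x[0] - x[1]})

res = minimize(lambda x: x[2], x0=[1.5, 1.5, 5.0], method='SLSQP',
               constraints=cons, bounds=[(1e-6, None)] * 2 + [(0.0, None)])
A_opt = res.x[:2]
print(A_opt)   # material concentrates on the heavier load case
```

At the optimum both compliances are active (here A = [2.4, 0.6], equalizing c1 = c2), which mirrors the switching between load cases described above: whichever case has the largest compliance attracts the update.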

This paper presents a methodology for uncertainty quantification in cyclic creep analysis. Several models, namely the BP model, the Whaley and Neville model, the MC90 model modified for cyclic loading and the Hyperbolic function modified for cyclic loading, are used for the uncertainty quantification. Three types of uncertainty are included in the Uncertainty Quantification (UQ): (i) natural variability in loading and material properties; (ii) data uncertainty due to measurement errors; and (iii) modelling uncertainty and errors during cyclic creep analysis. By considering all types of uncertainty, a measure of the total variation of the model response is achieved. The study finds that the BP, modified Hyperbolic and modified MC90 models perform best for cyclic creep prediction, in that order. Further, a global Sensitivity Analysis (SA) considering uncorrelated and correlated parameters is used to quantify the contribution of each source of uncertainty to the overall prediction uncertainty and to identify the important parameters. Errors in determining the input quantities and in the model itself can produce significant changes in the predicted creep values. The influence of the variability of the random input quantities on the cyclic creep was studied by means of stochastic uncertainty and sensitivity analysis, namely the Gartner et al. method and the Saltelli et al. method. All input imperfections were considered to be random quantities. The Latin Hypercube Sampling (LHS) numerical simulation method (a Monte Carlo type method) was used. The stochastic sensitivity analysis showed that the cyclic creep deformation variability is most sensitive to the elastic modulus of concrete, the compressive strength, the mean stress, the cyclic stress amplitude and the number of cycles, in that order.
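The LHS step can be sketched as follows for the most influential inputs named above; the distribution types, means and coefficients of variation are assumed for illustration and are not the study's values.

```python
import numpy as np
from scipy.stats import lognorm, qmc

# illustrative marginals for four of the inputs named in the study
# (lognormal with assumed means and coefficients of variation)
names = ["E_c [MPa]", "f_c [MPa]", "mean stress [MPa]", "stress amplitude [MPa]"]
mean = np.array([30000.0, 40.0, 12.0, 4.0])
cov = np.array([0.10, 0.15, 0.20, 0.25])

# stratified uniform [0,1) sample: one point per probability bin and dimension
sampler = qmc.LatinHypercube(d=4, seed=0)
u = sampler.random(n=1000)

# transform to lognormal marginals via the inverse CDF
sigma = np.sqrt(np.log(1.0 + cov ** 2))
mu = np.log(mean) - 0.5 * sigma ** 2
x = lognorm.ppf(u, s=sigma, scale=np.exp(mu))

print(x.shape, x.mean(axis=0).round(0))   # sample means near the target means
```

Each column of `x` would then feed one uncertain input of the creep model; evaluating the model over all 1000 rows yields the response sample on which the variance-based sensitivity measures are computed.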

Many structures in different engineering applications suffer from cracking. In order to make a reliable prognosis about the serviceability of those structures, it is of utmost importance to identify cracks as precisely as possible by non-destructive testing. A novel approach (XIGA), which combines Isogeometric Analysis (IGA) and the Extended Finite Element Method (XFEM), is used for the forward problem, namely the analysis of a cracked material, see [1]. Applying the NURBS (Non-Uniform Rational B-Spline) based approach from IGA together with XFEM allows an effective description of arbitrarily shaped cracks and avoids the necessity of remeshing during the crack identification problem. We want to exploit these advantages for the inverse problem of detecting existing cracks by non-destructive testing, see e.g. [2]. The quality of the reconstructed cracks, however, depends on two major issues: the quality of the measured data (measurement error) and the discretization of the crack model. The first is taken into account by applying regularizing methods with a posteriori stopping criteria. The second is critical in the sense that too few degrees of freedom, i.e. control points of the NURBS, do not allow a precise description of the crack. An increased number of control points, however, increases the number of unknowns in the inverse analysis and intensifies the ill-posedness. The trade-off between accuracy and stability is sought by applying an inverse multilevel algorithm [3, 4], in which the identification starts with short knot vectors that are successively enlarged during the identification process.