620 Engineering Sciences (Ingenieurwissenschaften)
Identification of the modal parameters of a space frame structure is a complex task due to the large number of degrees of freedom, closely spaced natural frequencies, and different vibration mechanisms. Previous research on modal identification has focused on rather simple truss structures; so far, less attention has been given to complex three-dimensional truss structures. This work develops a vibration-based methodology for determining modal information of three-dimensional space truss structures and verifies it on a relatively complex space truss structure. Numerical modelling of the system provides modal information about the expected vibration behaviour. The identification process involves closely spaced modes that are characterised by local and global vibration mechanisms. To distinguish between local and global vibrations of the system, modal strain energies are used as an indicator. The experimental validation, which incorporated a modal analysis employing the stochastic subspace identification method, confirmed that relatively high model orders must be considered to identify specific mode shapes. Especially for determining local deformation modes of space truss members, higher model orders have to be taken into account than in the modal identification of most other types of structures.
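The modal strain energy indicator mentioned above can be sketched on a toy model: for each mode, the elastic strain energy is split over the elements, and the fraction stored in a single member flags a local mode. The 3-DOF spring chain, its stiffness values, and the energy split below are illustrative assumptions, not the study's space truss model.

```python
import numpy as np
from scipy.linalg import eigh

# Toy 3-DOF spring-mass chain (illustrative values, not from the study):
# spring k[0] connects ground to DOF 1, k[1] DOF 1-2, k[2] DOF 2-3.
k = np.array([2.0, 1.0, 1.0])
M = np.eye(3)

K = np.zeros((3, 3))
K[0, 0] += k[0]
K[0:2, 0:2] += k[1] * np.array([[1.0, -1.0], [-1.0, 1.0]])
K[1:3, 1:3] += k[2] * np.array([[1.0, -1.0], [-1.0, 1.0]])

# Generalized eigenproblem K phi = lam M phi
lam, phi = eigh(K, M)

def strain_energy_fractions(mode):
    """Fraction of the mode's strain energy stored in each spring:
    E_i = 0.5 * k_i * d_i**2, with d_i the elongation of spring i."""
    u = np.concatenate(([0.0], mode))   # prepend the fixed ground DOF
    d = np.diff(u)                      # elongation of each spring
    e = 0.5 * k * d**2
    return e / e.sum()

# One row per mode; a row dominated by a single entry marks a "local" mode
fractions = np.array([strain_energy_fractions(phi[:, j]) for j in range(3)])
```

In a real space truss, K would be the assembled FE stiffness and the split would run over member stiffness matrices, but the indicator is the same ratio.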
Engineering structures have traditionally been designed on the basis of static calculations. With the development of new construction methods and design requirements, accounting for uncertainties in model quality is becoming increasingly important. In addition to traditional force-based approaches, experience and observations of the deformation behavior of components and of the overall structure under different exposure conditions allow novel detection and evaluation criteria to be introduced.
The proceedings at hand are the result of the Bauhaus Summer School course Forecast Engineering held at the Bauhaus-Universität Weimar in 2017. They summarize the results of the conducted project work and provide the abstracts of the contributions by the participants, as well as impressions from the accompanying programme and organized cultural activities.
The special character of this course lies in the combination of basic disciplines of structural engineering with applied research projects in the areas of steel and reinforced concrete structures, earthquake and wind engineering, as well as informatics, and in linking them to mathematical methods and modern tools of visualization. Its innovative character results from the ambitious engineering tasks and advanced modeling demands.
The Institute of Structural Engineering, the Institute of Structural Mechanics, and the Institute for Computing, Mathematics and Physics in Civil Engineering at the Faculty of Civil Engineering of the Bauhaus-Universität Weimar presented special topics of structural engineering to highlight the broad spectrum of civil engineering in the field of modeling and simulation.
The summer course sought to impart knowledge and to combine research with a practical context through a challenging and demanding series of lectures, seminars and project work. Participating students were enabled to work with advanced methods and their practical application.
The extraordinary format of the interdisciplinary summer school offers foreign and domestic students the opportunity to study advanced developments of numerical methods and sophisticated modelling techniques in different disciplines of civil engineering, going far beyond traditional graduate courses.
The proceedings at hand are the result of the Bauhaus Summer School course Forecast Engineering held at the Bauhaus-Universität Weimar in 2018. They summarize the results of the conducted project work and provide the abstracts and papers contributed by the participants, as well as impressions from the accompanying programme and organized cultural activities.
The characteristic values of climatic actions in current structural design codes are based on a specified probability of exceedance during the design working life of a structure. These values are traditionally determined from past observation data under a stationary climate assumption. However, this assumption becomes invalid in the context of climate change, where the frequency and intensity of climatic extremes vary with time. This paper presents a methodology to calculate non-stationary characteristic values using state-of-the-art climate model projections. The non-stationary characteristic values are calculated in compliance with the requirements of structural design codes by forming quasi-stationary windows over the entire bias-corrected climate model data. Three approaches for calculating non-stationary characteristic values considering the design working life of a structure are compared, and their consequences for the exceedance probability are discussed.
The fire resistance of concrete members is controlled by the temperature distribution of the considered cross section. The thermal analysis can be performed with the advanced temperature-dependent physical properties provided by EN 1992-1-2. But the recalculation of laboratory tests on columns from TU Braunschweig shows that there are deviations between the calculated and measured temperatures. It can therefore be assumed that the mathematical formulation of these thermal properties could be improved. A sensitivity analysis is performed to identify the governing parameters of the temperature calculation, and a nonlinear optimization method is used to enhance the formulation of the thermal properties. The proposed simplified properties are partly validated by the recalculation of measured temperatures of concrete columns. These first results show that the scatter of the differences between calculated and measured temperatures can be reduced by the proposed simple model for the thermal analysis of concrete.
Different types of data provide different types of information. The present research analyzes the prediction error obtained under different data-type availability for calibration, evaluating the contribution of different measurement types to model calibration and prognosis. A coupled 2D hydro-mechanical model of a water-retaining dam is taken as an example. Here, the mean effective stress in the porous skeleton is reduced due to an increase in pore water pressure under drawdown conditions. Relevant model parameters are identified by scaled sensitivities. Then, Particle Swarm Optimization is applied to determine the optimal parameter values, and finally the error in prognosis is determined. We compare the predictions of the optimized models with results from a forward run of the reference model to obtain the actual prediction errors. The analyses presented here were performed by calibrating the hydro-mechanical model to 31 data sets of 100 observations of varying data types. The prognosis results improve when diversified information is used for calibration. However, when several types of information are used, the number of observations has to be increased to cover a representative part of the model domain. For an analysis with a constant number of observations, a compromise between data-type availability and domain coverage proves to be the best solution. Which type of calibration information contributes to the best prognoses could not be determined in advance. The error in model prognosis does not depend on the calibration error but on the parameter error, which unfortunately cannot be determined in inverse problems since its real value is unknown. The best prognoses were obtained independently of the calibration fit. However, excellent calibration fits led to an increased variation in the prognosis error; in the case of excellent fits, parameter values more often approached the limits of physically reasonable values.
To improve prognosis reliability, the expected values of the parameters should be used as prior information in the optimization algorithm.
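The Particle Swarm Optimization step used for calibration can be sketched in a few lines. The exponential "forward model", the parameter bounds, and the PSO hyperparameters below are illustrative assumptions standing in for the coupled hydro-mechanical model; only the algorithm itself is standard.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical forward model: an exponential decay p(t) = a * exp(-b t),
# a toy stand-in for the dam's hydro-mechanical response (invented).
t = np.linspace(0.0, 5.0, 50)
true_params = np.array([3.0, 0.7])
obs = true_params[0] * np.exp(-true_params[1] * t)

def misfit(params):
    """Sum-of-squares calibration misfit against the observations."""
    a, b = params
    return np.sum((a * np.exp(-b * t) - obs) ** 2)

def pso(objective, lo, hi, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
    """Minimal global-best PSO within box bounds [lo, hi]."""
    dim = len(lo)
    x = rng.uniform(lo, hi, (n_particles, dim))
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), np.array([objective(p) for p in x])
    g = pbest[np.argmin(pbest_f)].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        f = np.array([objective(p) for p in x])
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        g = pbest[np.argmin(pbest_f)].copy()
    return g, objective(g)

best, best_f = pso(misfit, np.array([0.1, 0.1]), np.array([10.0, 5.0]))
```

On this smooth two-parameter problem the swarm recovers the reference parameters; the study's point is that which *data types* feed `misfit` matters as much as the optimizer.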
As an optimization that starts from a randomly selected structure generally does not guarantee reasonable optimality, the use of a systematic approach, called the ground structure, is widely accepted in the design of steel truss and frame structures. In the case of reinforced concrete (RC) structural optimization, however, because of the orthogonal orientation of structural members, randomly chosen or architect-sketched framing is used. Such a one-time fixed layout, in addition to lacking a systematic approach, does not necessarily guarantee optimality. In this study, an approach for generating a candidate ground structure to be used for cost or weight minimization of 3D RC building structures including slabs is developed. A multiobjective function at the floor optimization stage and a single objective function at the frame optimization stage are considered. A particle swarm optimization (PSO) method is employed for selecting the optimal ground structure. This method enables generating a simple yet viable real-world representation of a topologically preoptimized ground structure while both structural and main architectural requirements are considered. This is supported by a case study for different floor domain sizes.
The construction industry has changed considerably in recent years as a result of market globalization combined with the increased use of modern technologies. The planning and execution of construction projects are becoming increasingly complex and involve greater risks. Financial and time resources are becoming scarcer amid ever-fiercer competition.
Project management provides approaches for bringing construction projects to a successful conclusion even under difficult conditions and increased risks. Systematic risk management, from project development through to project completion, is of decisive importance for project success.
The aim of this work is to enable quantitative risk assessment for project managers acting as professional client representatives, and the simulation of risk effects on the course of a project during the planning and construction phases. An abstract model is intended to allow a differentiated, practice-oriented simulation that reflects the different ways in which work progress and costs arise. In parallel, the description of risks is abstracted so that arbitrary risks can be captured quantitatively and their effects, including possible countermeasures, can be integrated into the model.
Two examples demonstrate the different possible applications of the quantitative assessment of project risks and the subsequent simulation of their effects. The first example, a real, already completed rail infrastructure project, examines the effectiveness of a preventive measure against a project risk. The second example develops a business-game approach for the practice-oriented education and training of project managers. The subject of the game is the planning and construction of a privately financed public representative building, parts of which are let to third parties.
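The kind of quantitative risk capture and simulation described above is commonly realized as a Monte Carlo cost model: each risk has an occurrence probability and a cost-impact distribution, and sampling yields the distribution of total project cost. The base cost, probabilities, and triangular impacts below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 100_000  # number of Monte Carlo scenarios

# Hypothetical project: base cost plus three independent risks, each with
# an occurrence probability and a triangular cost impact (all invented).
base_cost = 10.0  # million EUR
risks = [  # (probability, (min, mode, max) impact in million EUR)
    (0.30, (0.2, 0.5, 1.5)),
    (0.10, (1.0, 2.0, 5.0)),
    (0.50, (0.1, 0.2, 0.4)),
]

total = np.full(n, base_cost)
for p, (lo, mode, hi) in risks:
    occurs = rng.random(n) < p                     # does the risk occur?
    impact = rng.triangular(lo, mode, hi, n)       # cost if it occurs
    total += occurs * impact

mean_cost = total.mean()
p80 = np.quantile(total, 0.80)  # budget covering 80 % of scenarios
```

A countermeasure is then modeled by reducing a risk's probability or impact and re-running the simulation, which is essentially what the rail infrastructure example evaluates.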
This study proposes an efficient Bayesian, frequency-based damage identification approach that detects damage in cantilever structures with an acceptable error rate, even at high noise levels. The catenary poles of electric high-speed train systems were selected as a realistic case study. Compared to other frequency-based damage detection approaches described in the literature, the proposed approach can efficiently detect damage in cantilever structures at higher levels of damage identification, namely identifying both the damage location and severity, using a low-cost structural health monitoring (SHM) system with a limited number of sensors, for example accelerometers. Integrating Bayesian inference as a stochastic framework makes it possible to exploit data fusion in merging the informative data from multiple damage features, which increases the quality and accuracy of the results. The findings provide the decision-maker with the information required to manage maintenance, repair, or replacement procedures.
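A toy version of frequency-based Bayesian damage identification can be sketched with a 3-DOF shear model as a stand-in for the cantilever pole: damage reduces one spring's stiffness, and a grid posterior over location and severity is formed from the measured natural frequencies. All numbers, the noise level, and the grid are illustrative assumptions.

```python
import numpy as np
from scipy.linalg import eigh

def frequencies(damage_elem, severity, k0=1000.0, n=3):
    """Natural frequencies (Hz) of an n-DOF shear cantilever where one
    spring's stiffness is reduced by `severity` (toy model)."""
    k = np.full(n, k0)
    k[damage_elem] *= (1.0 - severity)
    K = np.zeros((n, n))
    for i in range(n):
        K[i, i] += k[i]
        if i + 1 < n:
            K[i, i] += k[i + 1]
            K[i, i + 1] = K[i + 1, i] = -k[i + 1]
    lam = eigh(K, np.eye(n), eigvals_only=True)
    return np.sqrt(lam) / (2.0 * np.pi)

# "Measured" frequencies: damage in element 1, 30 % severity, plus noise
rng = np.random.default_rng(2)
sigma = 0.02  # assumed frequency measurement noise, Hz
f_meas = frequencies(1, 0.30) + rng.normal(0.0, sigma, 3)

# Grid posterior over damage location and severity (uniform prior,
# Gaussian likelihood on the frequency residuals)
severities = np.linspace(0.0, 0.6, 61)
log_post = np.array([[-np.sum((frequencies(e, s) - f_meas) ** 2) / (2 * sigma**2)
                      for s in severities] for e in range(3)])
post = np.exp(log_post - log_post.max())
post /= post.sum()

e_map, s_map = np.unravel_index(post.argmax(), post.shape)  # MAP estimate
```

The full posterior (not just the MAP point) is what enables the data fusion across multiple damage features described in the abstract.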
Polymeric clay nanocomposites are a new class of materials that have recently become the centre of attention due to their superior mechanical and physical properties. Several studies have been performed on the mechanical characterisation of these nanocomposites; however, most of those studies have neglected the effect of the interfacial region between the clays and the matrix, despite its significant influence on the mechanical performance of the nanocomposites.
There are different analytical methods to calculate the overall elastic material properties of composites. In this study we use the Mori-Tanaka method to determine the overall stiffness of the composites for simple cylindrical and spherical inclusion geometries. Furthermore, the effect of the interphase layer on the overall properties of the composites is calculated. Here, we intend to obtain bounds for the effective mechanical properties to compare with the analytical results; hence, we use linear displacement boundary conditions (LD) and uniform traction boundary conditions (UT), respectively. Finally, the analytical results are compared with the numerical results and are found to be in good agreement.
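For spherical inclusions, the Mori-Tanaka estimate has a well-known closed form (Benveniste's formulation) for the effective bulk and shear moduli. The moduli below are illustrative stand-ins for an epoxy matrix with stiff clay-like inclusions, not values from the study.

```python
def mori_tanaka_spheres(Km, Gm, Ki, Gi, f):
    """Effective bulk/shear moduli of a two-phase composite with
    spherical inclusions (Mori-Tanaka estimate, Benveniste form).
    Km, Gm: matrix moduli; Ki, Gi: inclusion moduli; f: volume fraction."""
    # Eshelby-sphere reference terms built from the matrix moduli
    k_star = 4.0 * Gm / 3.0
    g_star = Gm * (9.0 * Km + 8.0 * Gm) / (6.0 * (Km + 2.0 * Gm))
    Keff = Km + f * (Ki - Km) / (1.0 + (1.0 - f) * (Ki - Km) / (Km + k_star))
    Geff = Gm + f * (Gi - Gm) / (1.0 + (1.0 - f) * (Gi - Gm) / (Gm + g_star))
    return Keff, Geff

# Illustrative epoxy matrix with 5 % stiff inclusions (moduli in GPa, assumed)
Keff, Geff = mori_tanaka_spheres(Km=4.0, Gm=1.5, Ki=25.0, Gi=15.0, f=0.05)
```

For stiffer-than-matrix inclusions the estimate always lies between the matrix and inclusion moduli, which is a quick sanity check against the LD/UT numerical bounds mentioned in the abstract.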
The next focus of this dissertation is a computational approach with a hierarchical multiscale method on the mesoscopic level. In other words, in this study we use stochastic analysis and a computational homogenization method to analyse the effect of the thickness and stiffness of the interfacial region on the overall elastic properties of clay/epoxy nanocomposites. The results show that an increase in interphase thickness reduces the stiffness of the clay/epoxy nanocomposites, and this decrease becomes significant at higher clay contents. The results of the sensitivity analysis show that the stiffness of the interphase layer has the more significant effect on the final stiffness of the nanocomposites. We also validate the results against available experimental results from the literature, which show good agreement.
The node-moving and multistage node-enrichment adaptive refinement procedures are extended to the mixed discrete least squares meshless (MDLSM) method for the efficient analysis of elasticity problems. In the MDLSM formulation, a mixed formulation is adopted to avoid second-order differentiation of the shape functions and to obtain displacements and stresses simultaneously. In the refinement procedures, a robust error estimator based on the value of the least-squares residual functional of the governing differential equations and their boundary conditions at nodal points is used; it is inherently available from the MDLSM formulation and can efficiently identify the zones with higher numerical errors. The results are compared with the refinement procedures in the irreducible formulation of the discrete least squares meshless (DLSM) method and show the accuracy and efficiency of the proposed procedures. The comparison of the error norms and convergence rates also shows the fidelity of the proposed adaptive refinement procedures in the MDLSM method.
Within the scope of the literature, the influence of openings within infill walls that are bounded by a reinforced concrete frame and excited by seismic drift forces in both the in-plane and out-of-plane directions is still uncharted. Therefore, a 3D micromodel was developed and subsequently calibrated to gain more insight into the topic. The micromodels were calibrated against their equivalent physical test specimens: in-plane and out-of-plane drift-driven tests on frames with and without infill walls and openings, as well as out-of-plane bending tests of masonry walls. Micromodels were rectified based on their behavior and damage states. As a result of the calibration process, it was found that the micromodels were sensitive to some parameters and insensitive to others with regard to the model's behavior and computational stability. It was found that, even within the same material model, some parameters had a greater effect when assigned to concrete than to masonry. Generally, the in-plane behavior of infilled frames was found to be largely governed by the interface material model. The out-of-plane masonry wall simulations were governed by the tensile strength of both the interface and masonry material models, while the out-of-plane drift-driven test was governed by the concrete material properties.
The present article provides an overview of the consequences of dynamic soil-structure interaction (SSI) for building structures and of the available modelling techniques for resolving SSI problems. The role of SSI has traditionally been considered beneficial to the response of structures. However, contemporary studies and evidence from past earthquakes have shown detrimental effects of SSI under certain conditions. An overview of the related investigations and findings is presented and discussed. Additionally, the main approaches for evaluating seismic soil-structure interaction problems with the commonly used modelling techniques and computational methods are highlighted. The strengths, limitations, and application cases of each model are also discussed and compared. Moreover, the role of SSI in various design codes and global guidelines is summarized. Finally, the advancements and recent findings on the effects of SSI on the seismic response of buildings with different structural systems and foundation types are presented, and, with the aim of helping new researchers improve on previous findings, the research gaps and future research tendencies in the SSI field are pointed out.
Building Information Modeling is a powerful tool for design and for maintaining a consistent set of data in virtual storage. For application in the realization phase and on site, it needs further development. The paper describes the main challenges and main features that will help software developers better serve the needs of construction site managers.
Broadband electromagnetic frequency- or time-domain sensor techniques present high potential for quantitative water content monitoring in porous media. Prior to in situ application, an understanding of how the relationship between the broadband electromagnetic properties of the porous material (clay rock) and the water content affects the frequency- or time-domain sensor response is required. For this purpose, dielectric properties of intact clay rock samples, experimentally determined in the frequency range from 1 MHz to 10 GHz, were used as input data in 3-D numerical frequency-domain finite element field calculations to model the one-port broadband frequency- or time-domain transfer function of a three-rod sensor embedded in the clay rock. The sensor response, in terms of the reflection factor, was analyzed in the time domain with classical travel time analysis in combination with an empirical model according to the Topp equation, as well as the theoretical Lichtenecker and Rother model (LRM), to estimate the volumetric water content. The mixture equation, considering the appropriate porosity of the investigated material, provides a practical and efficient approach for water content estimation based on classical travel time analysis with the onset method. The inflection method is not recommended for water content estimation in electrically dispersive and absorptive material. Moreover, the results clearly indicate that effects due to the coupling of the sensor to the material cannot be neglected: coupling problems caused by an air gap lead to dramatic effects on water content estimation, even for submillimeter gaps. Thus, the quantitative determination of the in situ water content requires careful sensor installation in order to achieve perfect probe-clay rock coupling.
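The travel-time route to water content uses two standard TDR relations: the apparent permittivity from the two-way travel time along the rods, and the empirical Topp et al. (1980) polynomial. The rod length and travel time below are illustrative values, not measurements from the study.

```python
C0 = 299_792_458.0  # speed of light in vacuum, m/s

def apparent_permittivity(travel_time_s, rod_length_m):
    """Apparent relative permittivity from the two-way TDR travel time:
    eps_a = (c * t / (2 L))^2."""
    return (C0 * travel_time_s / (2.0 * rod_length_m)) ** 2

def topp_water_content(eps):
    """Volumetric water content from the Topp et al. (1980) polynomial."""
    return -5.3e-2 + 2.92e-2 * eps - 5.5e-4 * eps**2 + 4.3e-6 * eps**3

# Example: 0.2 m rods, 3 ns two-way travel time (illustrative values)
eps = apparent_permittivity(3e-9, 0.2)
theta = topp_water_content(eps)
```

Note that Topp's polynomial was calibrated for mineral soils; the abstract's point is that in dispersive, absorptive clay rock a porosity-aware mixture model and careful travel-time picking (onset rather than inflection) are needed instead.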
In this work, different fibre optic sensors for the structural health monitoring of civil engineering structures are reported. A fibre optic crack sensor and two different fibre optic moisture sensors have been designed to detect moisture ingress in concrete-based building structures. Moreover, the degradation of the mechanical properties of optical glass fibre sensors, and hence their long-term stability and reliability under the mechanical and chemical impact of the concrete environment, is discussed, and the advantage of applying a fibre optic sensor system for the structural health monitoring of sewerage tunnels is demonstrated.
One major research focus in the materials science and engineering community in the past decade has been to obtain a more fundamental understanding of the phenomenon of material failure. Such an understanding is critical for engineers and scientists developing new materials with higher strength and toughness, developing robust designs against failure, or for those concerned with an accurate estimate of a component's design life. Defects like cracks and dislocations evolve at the nanoscale and influence macroscopic properties such as the strength, toughness and ductility of a material. In engineering applications, the global response of the system is often governed by the behaviour at the smaller length scales. Hence, the sub-scale behaviour must be computed accurately for good predictions of the full-scale behaviour.
Molecular Dynamics (MD) simulations promise to reveal the fundamental mechanics of material failure by modeling the atom-to-atom interactions. Since atomistic dimensions are of the order of Angstroms (Å), approximately 85 billion atoms are required to model a 1 μm³ volume of copper. Pure atomistic models are therefore prohibitively expensive for everyday engineering computations involving macroscopic cracks and shear bands, which are much larger than the atomistic length and time scales. To reduce the computational effort, multiscale methods are required that are able to couple a continuum description of the structure with an atomistic description. In such paradigms, cracks and dislocations are explicitly modeled at the atomistic scale, whilst a self-consistent continuum model is used elsewhere.
Many multiscale methods for fracture are developed for "fictitious" materials based on "simple" potentials such as the Lennard-Jones potential, and multiscale methods for evolving cracks are rare. Efficient methods to coarse-grain the fine scale defects are missing. Moreover, the existing multiscale methods for fracture do not adaptively adjust the fine scale domain as the crack propagates; most methods only "enlarge" the fine scale domain and therefore drastically increase the computational cost. Adaptive adjustment requires the fine scale domain to be refined and coarsened. One of the major difficulties in multiscale methods for fracture is to up-scale fracture-related material information from the fine scale to the coarse scale, in particular for complex crack problems. Most existing approaches have therefore been applied to examples with comparatively few macroscopic cracks.
Key contributions
The bridging scale method is enhanced using the phantom node method so that cracks can be modeled at the coarse scale. To ensure self-consistency in the bulk, a virtual atom cluster is devised that provides the response of the intact material at the coarse scale. A molecular statics model is employed at the fine scale, where crack propagation is modeled by the natural breaking of bonds. The fine scale and coarse scale models are coupled by enforcing the displacement boundary conditions on the ghost atoms. An energy criterion is used to detect the crack tip location. Adaptive refinement and coarsening schemes are developed and applied during crack propagation. The results were observed to be in excellent agreement with pure atomistic simulations. The developed multiscale method is one of the first adaptive multiscale methods for fracture.
A robust and simple three-dimensional coarse-graining technique to convert a given atomistic region into an equivalent coarse region, in the context of multiscale fracture, has been developed; it is the first of its kind. The technique can be applied to identify and upscale defects such as cracks, dislocations and shear bands. The method has been applied to estimate the equivalent coarse scale models of several complex fracture patterns obtained from pure atomistic simulations. The upscaled fracture patterns agree well with the actual fracture patterns, and the error in the potential energy between the pure atomistic and the coarse-grained model was observed to be acceptable.
A novel meshless adaptive multiscale method for fracture, the first of its kind, has also been developed. Here the phantom node method is replaced by a meshless differential reproducing kernel particle method, which is comparatively more expensive but allows a more "natural" coupling between the two scales due to the meshless interpolation functions; the higher-order continuity is also beneficial. The centrosymmetry parameter is used to detect the crack tip location. The developed multiscale method is employed to study complex crack propagation, and its results were observed to be in excellent agreement with pure atomistic simulations.
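The centrosymmetry parameter used here for crack-tip detection (Kelchner et al.) sums, over pairs of opposite neighbour vectors, the squared length of their vector sum; it vanishes in a centrosymmetric lattice and grows near defects. The greedy pairing and the simple-cubic neighbour shell below are simplifications for illustration, not a production neighbour-list implementation.

```python
import numpy as np

def centrosymmetry(neigh):
    """Centrosymmetry parameter: pair each neighbour vector with its most
    nearly opposite partner and sum |r_i + r_j|^2. Greedy pairing; assumes
    an even number of neighbour vectors."""
    vecs = list(map(np.asarray, neigh))
    csp = 0.0
    while vecs:
        r = vecs.pop(0)
        j = min(range(len(vecs)), key=lambda k: np.sum((r + vecs[k]) ** 2))
        csp += np.sum((r + vecs.pop(j)) ** 2)
    return csp

# Perfect simple-cubic neighbour shell: six vectors along +/- x, y, z
perfect = [v for a in np.eye(3) for v in (a, -a)]
csp_perfect = centrosymmetry(perfect)

# Displacing one neighbour breaks the symmetry, so the parameter grows
perturbed = [v.copy() for v in perfect]
perturbed[0] = perturbed[0] + np.array([0.1, 0.0, 0.0])
csp_perturbed = centrosymmetry(perturbed)
```

In the multiscale setting, atoms whose parameter exceeds a threshold flag the crack tip and trigger refinement of the fine scale domain around it.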
The developed multiscale methods are applied to study fracture in practical materials such as graphene and graphene on a silicon surface. Bond stretching and bond reorientation were observed to be the main mechanisms of crack growth in graphene. The influence of the time step on the crack propagation was studied using two different time steps. Pure atomistic simulations of fracture in graphene on a silicon surface are presented, and details of the three-dimensional multiscale method to study fracture in this system are discussed.
Discrete function theory in the higher-dimensional setting has been in active development for many years. However, available results focus on the discrete setting for such canonical domains as the half-space, while the case of bounded domains has generally remained unconsidered. Therefore, this paper presents the extension of the higher-dimensional function theory to the case of arbitrary bounded domains in R^n. In this way, a discrete Stokes formula, a discrete Borel–Pompeiu formula, as well as discrete Hardy spaces for general bounded domains are constructed. Finally, several discrete Hilbert problems are considered.
The current study attempts to identify an adequate classification for semi-rigid beam-to-column connections by investigating strength, stiffness and ductility. For this purpose, an experimental test was carried out to investigate the moment-rotation (M-theta) characteristics of flush end-plate (FEP) connections, with variable parameters including the size and number of bolts, the thickness of the end-plate, and the size of beams and columns. The initial elastic stiffness and ultimate moment capacity of the connections were determined by an extensive analytical procedure following the methods prescribed by ANSI/AISC 360-10 and Eurocode 3 Part 1-8. The behaviour of beams with partially restrained or semi-rigid connections was also studied using classical analysis methods. The results confirmed that the thickness of the column flange and of the end-plate substantially governs the initial rotational stiffness of flush end-plate connections. The results also clearly showed that EC3 provides a more reliable classification index for FEP connections. The findings make a significant contribution to the current literature, as the actual response characteristics of such connections are non-linear; such semi-rigid behaviour should therefore be used in analysis and design.
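The stiffness side of the EC3 classification mentioned above reduces to comparing the connection's initial rotational stiffness with boundaries built from the connected beam (EN 1993-1-8, §5.2.2): rigid above k_b·E·I_b/L_b (k_b = 8 for braced frames, 25 for unbraced), nominally pinned below 0.5·E·I_b/L_b, semi-rigid in between. The section values below are illustrative (an IPE300-like beam), and strength classification is a separate check.

```python
def ec3_stiffness_class(Sj_ini, E, Ib, Lb, braced=True):
    """Classify a joint by initial rotational stiffness per EN 1993-1-8,
    section 5.2.2 (stiffness boundaries only).
    Sj_ini: initial rotational stiffness [N*m/rad], E: Young's modulus [Pa],
    Ib: beam second moment of area [m^4], Lb: beam span [m]."""
    kb = 8.0 if braced else 25.0
    ref = E * Ib / Lb
    if Sj_ini >= kb * ref:
        return "rigid"
    if Sj_ini <= 0.5 * ref:
        return "nominally pinned"
    return "semi-rigid"

# Illustrative case: E = 210 GPa, Ib = 8.356e-5 m^4 (IPE300), 6 m span,
# and an assumed measured Sj_ini of 20 MN*m/rad
cls = ec3_stiffness_class(Sj_ini=20e6, E=210e9, Ib=8.356e-5, Lb=6.0, braced=True)
```

Typical flush end-plate connections land in the semi-rigid band, which is why the study argues their actual M-theta behaviour should enter the analysis rather than an idealized pinned or rigid assumption.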
This paper describes the problems involved in forecasting traffic-induced pollutant immissions. The focus is on the development and construction of a simulation environment for evaluating environmentally oriented traffic management strategies. The simulation environment is developed across the three fields of traffic, emissions and immissions, and is first applied to the evaluation of traffic measures for the Friedberger Landstraße in Frankfurt am Main.
This work presents and evaluates the results of experimental investigations on unreinforced and reinforced modified concretes under monotonically increasing loading up to failure, single short-term loading near the limit of load-bearing capacity, and repeated loading with continuous loading and unloading rates. Two basic approaches were used to modify the concretes: varying the aggregate and modifying the binder phase with thermoplastic polymers. The effects of these modifications on the strength properties and deformation behaviour of the concrete under short-term loading were of particular interest.
The observed changes in the hardened concrete properties, as well as the nonlinear relationship between the elastic and inelastic deformation components, indicate that such modifications significantly influence the deformation and fracture behaviour of concrete and must therefore be taken into account when verifying load-bearing capacity and serviceability. In addition to evaluating the load-dependent deformation behaviour, the established approaches for describing the microstructural state regions under compressive loading are further developed so that the transitions between the regions can be determined exactly and the extent of the regions can be quantified. This enables a more precise comparison of the changes caused by the modifications.
The planning process in civil engineering is highly complex and not manageable in its entirety.
The state of the art decomposes complex tasks into smaller, manageable sub-tasks. Due to the close interrelatedness of the sub-tasks, it is essential to couple them. From a software engineering point of view, however, this is quite challenging because of the numerous incompatible software applications on the market. This study pursues two main objectives. The first is the generic formulation of coupling strategies in order to support engineers in selecting and implementing adequate coupling strategies; this has been achieved by means of a coupling-pattern language combined with a four-layered metamodel architecture, whose applicability has been demonstrated on a real coupling scenario. The second is the quality assessment of coupled software, developed on the basis of evaluating the schema mapping. The approach is described using mathematical expressions derived from set theory and graph theory, taking the various mapping patterns into account. Moreover, the coupling quality is evaluated within the formalization process by considering the uncertainties that arise during mapping, resulting in global quality values that the user can employ to assess the exchange. Finally, the applicability of the proposed approach is shown in an engineering case study.
For application-oriented solution approaches in urban water management, it is important to understand the workings of complex, interlocking systems. A simulation game can help familiarize users with new technologies and possible ways of combining them. Owing to the high demands on model complexity and detail, developing such a game is expensive and laborious. This thesis investigates to what extent the commercial entertainment game SimCity 5 (2013 version) can be used for this purpose and how it must be configured. This is illustrated specifically using the example of near-natural stormwater management in urban areas.
The analysis of SimCity 5 shows that the game is indeed suitable as a decision-support tool. However, the subsystems of urban water management are greatly oversimplified, so there is room for improvement.
To integrate near-natural stormwater management scenarios into the game, the SimRegen model was developed as part of this work. Since no interface to the agent-based simulation engine GlassBox has been released to date, selected aspects of the model were implemented with the agent-based simulation tool NetLogo (version 5.0.4).
When analysing building stock at the district scale, large amounts of image data are produced for documentation purposes. Afterwards, these data often cannot be unambiguously assigned to precise locations and viewing angles on the building. This applies in particular to people unfamiliar with the site, and to detail shots. Capturing thermal bridges or other building details with thermograms poses an additional challenge. In practice, analogue, error-prone solutions are often used here.
Georeferencing can close this gap and ensure unambiguous communication and evaluation. In contrast to conventional cameras, state-of-the-art smartphones are sufficiently equipped to document not only the location but also the orientation angles of a photograph. Based on the information recorded in the so-called Exif data, the georeferenced images can be manually integrated into an existing district model.
The user-friendly realisation is tested on a sample university district and examined for its potential for automation in Python. For this purpose, an existing district model was used as the geometric basis and extended with RGB images and thermograms. The described procedure is examined for its possible use in an energetic district survey as well as in the documentation of structural damage.
This contribution provides users with a tool that enables high-quality documentation of an existing building survey, including at the district scale.
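As a minimal illustration of the Exif-based georeferencing step described above, the following sketch converts a GPS coordinate as stored in Exif tags (degree/minute/second rationals plus a hemisphere reference) to signed decimal degrees. The function name and the sample coordinates are assumptions for illustration, not taken from the paper.

```python
from fractions import Fraction

def dms_to_decimal(degrees, minutes, seconds, ref):
    """Convert an Exif GPS coordinate (degrees, minutes, seconds)
    to signed decimal degrees. ref is 'N', 'S', 'E' or 'W'."""
    value = float(degrees) + float(minutes) / 60.0 + float(seconds) / 3600.0
    return -value if ref in ("S", "W") else value

# Example: coordinates as they typically appear in Exif GPS tags
# (rational numbers); the values below are placeholders.
lat = dms_to_decimal(Fraction(50), Fraction(58), Fraction(2640, 100), "N")
lon = dms_to_decimal(Fraction(11), Fraction(19), Fraction(4080, 100), "E")
```

A library such as Pillow can supply the raw GPS IFD values; the conversion itself is independent of the reading library.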
The main objective of this work was to clarify whether alkali-containing de-icing agents can trigger and/or accelerate an alkali-silica reaction (ASR) and, if so, what the underlying mechanisms are. The investigations showed that the alkali-containing de-icing agents used on traffic areas, based on sodium chloride (road pavements) or on alkali acetates and formates (airfield pavements), can trigger and in some cases strongly accelerate ASR in concretes with alkali-reactive aggregates. The ASR-promoting effect of the de-icers increases considerably in the order sodium chloride - alkali acetates - alkali formates.
It was found that, in the case of the alkali acetates and formates, not only the supply of alkalis matters: OH ions are additionally released from portlandite, so that the pH of the pore solution rises. This intensifies the attack on alkali-reactive SiO2 in aggregates and accelerates ASR. Under external NaCl supply, by contrast, the pH does not rise, which is why NaCl promotes ASR less strongly. Here the supplied Na ions are decisive, together with an apparent direct influence of NaCl on the dissolution behaviour of SiO2. If the pH and the Na concentration in the pore solution are sufficiently high, ASR gel will form for thermodynamic reasons. The formation of Friedel's salt is merely an accompanying phenomenon, not a prerequisite for ASR under external NaCl supply.
It was further shown that the FIB-Klimawechsellagerung (cyclic climate storage test), used as a performance test, reliably assesses the ASR damage potential of concretes for road pavements and airfield pavements. Its advantages lie in testing complete, project-specific concrete compositions under all practically relevant climatic actions and, above all, in accounting for an external alkali supply. Within 36 weeks, the ASR damage potential of a concrete composition can be reliably assessed for a practical service life of 20-30 years.
We present a physics-informed deep learning model for the transient heat transfer analysis of three-dimensional functionally graded materials (FGMs) employing a Runge–Kutta discrete time scheme. Firstly, the governing equation, associated boundary conditions and the initial condition for transient heat transfer analysis of FGMs with exponential material variations are presented. Then, the deep collocation method with the Runge–Kutta integration scheme for transient analysis is introduced. The prior physics that helps to generalize the physics-informed deep learning model is introduced by constraining the temperature variable with the discrete time scheme and initial/boundary conditions. Furthermore, fitted activation functions suitable for dynamic analysis are presented. Finally, we validate our approach through several numerical examples on FGMs with irregular shapes and a variety of boundary conditions. In the numerical experiments, the predictions of the physics-informed deep learning model agree well with analytical solutions and other numerical methods for both temperature and flux distributions, and the model adapts to transient analysis of FGMs with different shapes, making it a promising surrogate model for transient dynamic analysis.
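The discrete time scheme embedded in the model above can be illustrated with its classical counterpart: one explicit fourth-order Runge–Kutta step of the 1D heat equation u_t = alpha * u_xx. This is a sketch of the time-stepping structure only, not the paper's deep collocation method; all parameter values are illustrative.

```python
import numpy as np

# One explicit RK4 step of the 1D heat equation on a uniform grid with
# homogeneous Dirichlet boundaries. Illustrative parameters only.
alpha, dx, dt = 1.0, 0.1, 1e-4
x = np.arange(0.0, 1.0 + dx, dx)
u = np.sin(np.pi * x)          # initial temperature field

def rhs(u):
    """Second-order central difference for alpha * u_xx, zero at the ends."""
    d = np.zeros_like(u)
    d[1:-1] = alpha * (u[2:] - 2.0 * u[1:-1] + u[:-2]) / dx**2
    return d

# Classical RK4 stages; the PIDL model constrains the network output
# with stage equations of this form instead of stepping explicitly.
k1 = rhs(u)
k2 = rhs(u + 0.5 * dt * k1)
k3 = rhs(u + 0.5 * dt * k2)
k4 = rhs(u + dt * k3)
u_next = u + dt / 6.0 * (k1 + 2.0 * k2 + 2.0 * k3 + k4)
```

For this initial condition the exact solution decays as exp(-pi^2 * alpha * t) * sin(pi * x), which the step reproduces closely.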
Quality management (QM), in the sense of smoothly functioning processes, is an indispensable part of every office, regardless of its size and core business and regardless of whether a certification procedure is undertaken. Over the years, engineering and architecture offices typically accumulate numerous individual organisational rules intended to ease day-to-day work. A systematic compilation, introduction and monitoring of these rules, however, is often missing. Those responsible are frequently deterred by the perceived effort of compiling them systematically in a QM manual, and by the supposedly much greater effort of an external review in the course of an external audit with subsequent certification. The benefit that arises from a precisely tailored and actively practised QM manual alone thus goes unrealised.
The QM standard "Planer am Bau" (PaB) is a sector-specific standard developed specifically for engineering and architecture offices and addressing exclusively their needs. Based on clearly defined minimum requirements, the result is lean manuals tailored to the particularities of each office, which can be audited and certified by TÜV Rheinland. This certificate provides proof of an effective QM system, which can be advantageous, among other things, in VgV tender procedures.
Operator Calculus Approach to Comparison of Elasticity Models for Modelling of Masonry Structures
(2022)
The solution of any engineering problem starts with a modelling process aimed at formulating a mathematical model, which must describe the problem under consideration with sufficient precision. Because of the heterogeneity of modern engineering applications, mathematical modelling nowadays ranges from extremely precise micro- and even nano-modelling of materials to macro-modelling, which is more appropriate for practical engineering computations. In the field of masonry structures, a macro-model of the material can be constructed based on various elasticity theories, such as classical elasticity, micropolar elasticity and Cosserat elasticity. Evidently, a different macro-behaviour is expected depending on the specific theory used in the background. Although there have been several theoretical studies of different elasticity theories in recent years, there is still a lack of understanding of how the modelling assumptions of different elasticity theories influence the modelling results for masonry structures. Therefore, a rigorous approach to the comparison of different three-dimensional elasticity models based on quaternionic operator calculus is proposed in this paper. Three elasticity models are described and spatial boundary value problems for these models are discussed. In particular, explicit representation formulae for their solutions are constructed. Using these representation formulae, explicit estimates for the solutions obtained by the different elasticity theories are derived. Finally, several numerical examples are presented, which indicate a practical difference between the solutions.
Polymeric nanocomposites (PNCs) are considered for numerous nanotechnology applications, such as nano-biotechnology, nano-systems, nanoelectronics, and nano-structured materials. Commonly, they are formed by a polymer (epoxy) matrix reinforced with a nanosized filler. The addition of rigid nanofillers to the epoxy matrix has offered great improvements in fracture toughness without sacrificing other important thermo-mechanical properties. The physics of fracture in PNCs is rather complicated and is influenced by different parameters. The presence of uncertainty in the predicted output is expected as a result of stochastic variance in the factors affecting the fracture mechanism. Consequently, evaluating the improved fracture toughness in PNCs is a challenging problem.
Artificial neural network (ANN) and adaptive neuro-fuzzy inference system (ANFIS) have been employed to predict the fracture energy of polymer/particle nanocomposites. The ANN and ANFIS models were constructed, trained, and tested based on a collection of 115 experimental datasets gathered from the literature. The performance evaluation indices of the developed ANN and ANFIS showed relatively small error, with high coefficients of determination (R2), and low root mean square error and mean absolute percentage error.
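The performance evaluation indices named above can be computed as follows. This is a generic sketch of the metrics, with synthetic placeholder values rather than the 115 experimental datasets.

```python
import numpy as np

def regression_metrics(y_true, y_pred):
    """Coefficient of determination (R^2), root mean square error (RMSE)
    and mean absolute percentage error (MAPE) for model predictions."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    residual = y_true - y_pred
    ss_res = np.sum(residual**2)                       # residual sum of squares
    ss_tot = np.sum((y_true - y_true.mean())**2)       # total sum of squares
    r2 = 1.0 - ss_res / ss_tot
    rmse = np.sqrt(np.mean(residual**2))
    mape = 100.0 * np.mean(np.abs(residual / y_true))
    return r2, rmse, mape

# Placeholder measurements vs. predictions (illustrative values only)
r2, rmse, mape = regression_metrics([100, 150, 200, 250], [105, 145, 190, 260])
```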
In the framework for uncertainty quantification of PNCs, a sensitivity analysis (SA) has been conducted to examine the influence of uncertain input parameters on the fracture toughness of polymer/clay nanocomposites (PNCs). The phase-field approach is employed to predict the macroscopic properties of the composite considering six uncertain input parameters. The efficiency, robustness, and repeatability are compared and evaluated comprehensively for five different SA methods.
The Bayesian method is applied to develop a methodology for evaluating the performance of different analytical models used in predicting the fracture toughness of polymer/particle nanocomposites. The developed method considers model and parameter uncertainties based on different reference data (experimental measurements) gathered from the literature. Three analytical models differing in theory and assumptions were examined. The coefficients of variation of the model predictions relative to the measurements are calculated using the approximated optimal parameter sets. Then, the model selection probability is obtained with respect to the different reference data.
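Under the simplifying assumption of equal model priors, the model selection probability reduces to normalised likelihoods. The sketch below uses illustrative log-likelihood values for three hypothetical models, not the study's actual numbers.

```python
import numpy as np

# Illustrative log-likelihoods of three candidate analytical models,
# each evaluated against the same reference measurements.
log_likelihoods = np.array([-12.3, -10.1, -15.8])

# Bayes' theorem with equal priors: posterior probability is the
# normalised likelihood. Subtract the max first for numerical stability.
log_post = log_likelihoods - log_likelihoods.max()
probs = np.exp(log_post) / np.exp(log_post).sum()
```

The model with the highest likelihood receives the highest selection probability; repeating this for each reference dataset yields probabilities "with respect to the different reference data" as described above.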
Stochastic finite element modeling is implemented to predict the fracture toughness of polymer/particle nanocomposites. For this purpose, a 2D finite element model containing an epoxy matrix and rigid nanoparticles surrounded by an interphase zone is generated. The crack propagation is simulated by the cohesive segments method and phantom nodes. Considering the uncertainties in the input parameters, a polynomial chaos expansion (PCE) surrogate model is constructed, followed by a sensitivity analysis.
Encapsulation-based self-healing concrete (SHC) is the most promising technique for providing a self-healing mechanism to concrete. This is due to its capacity to heal fractures effectively without human intervention, extending the operational life and lowering maintenance costs. The healing mechanism is created by embedding capsules containing the healing agent inside the concrete. The healing agent is released once the capsules are fractured, and healing occurs in the vicinity of the damaged part. The healing efficiency of SHC is still not fully understood and depends on several factors; in the case of microcapsule-based SHC, the fracture of the microcapsules is the most important aspect for releasing the healing agents and hence healing the cracks. This study contributes to verifying the healing efficiency of SHC and the fracture mechanism of the microcapsules. The extended finite element method (XFEM) is a flexible and powerful discrete crack method that allows crack propagation without the requirement for re-meshing and has shown high accuracy for modeling fracture in concrete. In this thesis, a computational fracture modeling approach for encapsulation-based SHC is proposed based on XFEM and the cohesive surface technique (CS) to study the healing efficiency and the potential for fracture and debonding of the microcapsules, or of the solidified healing agents, from the concrete matrix. The concrete matrix and the microcapsule shell are both modeled by XFEM and combined by CS. The effects of the healed-crack length, the interfacial fracture properties, and the microcapsule size on the load-carrying capability and fracture pattern of the SHC have been studied. The obtained results are compared to those obtained from the zero-thickness cohesive element approach to demonstrate the accuracy and validity of the proposed simulation.
The present fracture simulation is developed to study the influence of capsular clustering on the fracture mechanism by varying the contact surface area of the CS between the microcapsule shell and the concrete matrix. The proposed fracture simulation is expanded to 3D to validate the 2D computational simulations and to estimate the accuracy difference ratio between 2D and 3D simulations. In addition, a design method is developed to size the microcapsules such that they contain a sufficient volume of healing agent to heal the expected crack width. This method is based on the configuration of the unit cell (UC), the Representative Volume Element (RVE) and Periodic Boundary Conditions (PBC), and relates them to the volume fraction (Vf) and the crack width as variables. The proposed microcapsule design is verified through computational fracture simulations.
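A rough volume-balance sketch of the capsule sizing idea: check whether spherical capsules at a given volume fraction carry enough healing agent to fill a planar crack crossing a unit cell. This is a deliberate simplification for illustration, not the thesis' UC/RVE/PBC procedure, and all numbers are placeholders.

```python
import math

def agent_volume_per_unit_cell(capsule_radius, core_shell_ratio, vf, cell_volume=1.0):
    """Total healing-agent (core) volume supplied by spherical capsules
    occupying volume fraction vf of the cell.
    core_shell_ratio = r_core / r_capsule (larger ratio = thinner shell)."""
    capsule_volume = 4.0 / 3.0 * math.pi * capsule_radius**3
    n_capsules = vf * cell_volume / capsule_volume
    core_volume = capsule_volume * core_shell_ratio**3
    return n_capsules * core_volume

def crack_volume(crack_width, cell_edge=1.0):
    """Volume of a planar crack of uniform width crossing the cell."""
    return crack_width * cell_edge**2

# Placeholder design check (dimensionless unit cell)
supplied = agent_volume_per_unit_cell(capsule_radius=0.02, core_shell_ratio=0.9, vf=0.05)
needed = crack_volume(crack_width=0.0003)
sufficient = supplied >= needed
```

Note that the capsule radius cancels in the supplied volume (it equals vf * ratio^3 * cell volume); in the actual design method the radius matters through the crack/capsule interaction, which this balance does not capture.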
The fracture of microcapsules is an important issue for releasing the healing agent that heals the cracks in encapsulation-based self-healing concrete. Capsular clustering, generated during the concrete mixing process, is considered one of the critical factors in the fracture mechanism. Since there is a lack of studies in the literature regarding this issue, the design of self-healing concrete cannot be made without an appropriate modelling strategy. In this paper, the effects of microcapsule size and clustering on the fractured microcapsules are studied computationally. A simple 2D computational modelling approach is developed based on the eXtended Finite Element Method (XFEM) and the cohesive surface technique. The proposed model shows that microcapsule size and clustering play significant roles in governing the load-carrying capacity and the crack propagation pattern, and determine whether the microcapsule will be fractured or debonded from the concrete matrix. The higher the microcapsule circumferential contact length, the higher the load-carrying capacity. When it is lower than 25% of the microcapsule circumference, the microcapsule is more likely to debond from the concrete. The greater the core/shell ratio (i.e. the smaller the shell thickness), the greater the likelihood of microcapsules being fractured.
Determining the earthquake hazard of any settlement is one of the primary tasks for reducing earthquake damage. Therefore, the earthquake hazard maps used for this purpose must be renewed over time. The Turkey Earthquake Hazard Map has been used instead of the Turkey Earthquake Zones Map since 2019. A probabilistic seismic hazard analysis was performed using these last two maps and different attenuation relationships for Bitlis Province (Eastern Turkey), located in the Lake Van Basin, which has a high seismic risk. The earthquake parameters were determined by considering all districts and neighborhoods in the province. Probabilistic seismic hazard analyses were carried out for these settlements using seismic sources and four different attenuation relationships. The obtained values are compared with the design spectra stated in the last two earthquake maps. Significant differences exist between the design spectra obtained according to the different exceedance probabilities. In this study, adaptive pushover analyses of sample reinforced concrete buildings were performed using the design ground motion level. Structural analyses were carried out using three different design spectra, as given in the last two seismic design codes, and the mean spectrum obtained from the attenuation relationships. Different design spectra significantly change the target displacements predicted for the performance levels of the buildings.
Recently, the demand for housing and urban infrastructure has increased, raising the risk to human life from natural calamities. The occupancy demand has rapidly increased the construction rate, while inadequately designed structures are prone to greater vulnerability. Buildings constructed before the development of seismic codes have an additional susceptibility to earthquake vibrations. Structural collapse causes economic loss as well as setbacks for human lives. Applying different theoretical methods to analyze structural behavior is expensive and time-consuming. Therefore, introducing a rapid vulnerability assessment method to check structural performance is necessary for future developments. This process is known as Rapid Visual Screening (RVS). The technique has been devised to identify, inventory, and screen structures that are potentially hazardous. Sometimes, poor construction quality means that some of the required parameters are not available; in this case, the RVS process becomes tedious. Hence, to tackle such situations, multiple-criteria decision-making (MCDM) methods open a new gateway for seismic vulnerability assessment. The different parameters required by RVS can be taken into account in MCDM, which evaluates multiple conflicting criteria in decision making across several fields. This paper aims to bridge the gap between RVS and MCDM. Furthermore, to define the correlation between these techniques, the methodologies from the Indian, Turkish, and Federal Emergency Management Agency (FEMA) codes have been implemented, and the effects on the seismic vulnerability of structures have been observed and compared.
A Machine Learning Framework for Assessing Seismic Hazard Safety of Reinforced Concrete Buildings
(2020)
Although averting a seismic disturbance and its physical, social, and economic disruption is practically impossible, advancements in computational science and numerical modeling equip humanity to predict its severity, understand the outcomes, and prepare for post-disaster management. Many buildings amidst developed metropolitan areas are aged and still in service. These buildings were also designed before national seismic codes were established or construction regulations introduced. In that case, risk reduction is significant for developing alternatives and designing suitable models to enhance the performance of existing structures. Such models will be able to classify risks and casualties related to possible earthquakes through emergency preparation. Thus, it is crucial to recognize structures that are susceptible to earthquake vibrations and need to be prioritized for retrofitting. However, the behavior of each building under seismic actions cannot be studied by performing a full structural analysis, as this is unrealistic because of the rigorous computations, long duration, and substantial expenditure. Therefore, a simple, reliable, and accurate process known as Rapid Visual Screening (RVS) is called for, which serves as a primary screening platform, including an optimum number of seismic parameters and predetermined performance damage conditions for structures. In this study, the damage classification technique was studied, and the efficacy of the Machine Learning (ML) method in damage prediction via a Support Vector Machine (SVM) model was explored. The ML model is trained and tested separately on damage data from four different earthquakes, namely Ecuador, Haiti, Nepal, and South Korea. Each dataset consists of a varying number of input data and eight performance modifiers.
Based on the study and its results, the SVM model classifies the given input data into the corresponding damage classes and performs well in the seismic hazard safety evaluation of buildings.
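A hedged sketch of the SVM classification step using scikit-learn's `SVC`. The features below are random placeholders standing in for the eight performance modifiers, and the damage rule is synthetic; the actual study trains on survey data from the four earthquakes named above.

```python
import numpy as np
from sklearn.svm import SVC

# Synthetic stand-in data: eight "performance modifiers" per building,
# binary damage class from a toy rule (NOT the study's datasets).
rng = np.random.default_rng(0)
n, n_modifiers = 200, 8
X = rng.normal(size=(n, n_modifiers))
y = (X[:, 0] + X[:, 1] > 0).astype(int)   # toy damage rule

# Train on the first 150 buildings, test on the remaining 50
model = SVC(kernel="rbf", C=1.0)
model.fit(X[:150], y[:150])
accuracy = (model.predict(X[150:]) == y[150:]).mean()
```

In the study each earthquake dataset would be split and evaluated separately in this manner.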
A vast number of existing buildings were constructed before the development and enforcement of seismic design codes and therefore run the risk of being severely damaged under seismic excitations. This poses not only a threat to human life but also affects the socio-economic stability of the affected area. It is therefore necessary to assess the present vulnerability of such buildings in order to make an informed decision regarding risk mitigation by seismic strengthening techniques such as retrofitting. However, it is not feasible, economically or in a timely manner, to inspect, repair, and augment every old building on an urban scale. As a result, reliable rapid screening methods, namely Rapid Visual Screening (RVS), have garnered increasing interest among researchers and decision-makers alike. In this study, the effectiveness of five different Machine Learning (ML) techniques in vulnerability prediction applications has been investigated. The damage data of four different earthquakes, from Ecuador, Haiti, Nepal, and South Korea, have been utilized to train and test the developed models. Eight performance modifiers have been implemented as variables with supervised ML. The investigations in this paper illustrate that the vulnerability classes assessed by the ML techniques were very close to the actual damage levels observed in the buildings.
In this study, an application of evolutionary multi-objective optimization algorithms to the optimization of sandwich structures is presented. The solution strategy is known as the Elitist Non-Dominated Sorting Evolution Strategy (ENSES), wherein Evolution Strategies (ES) serve as the Evolutionary Algorithm (EA) within the elitist Non-dominated Sorting Genetic Algorithm (NSGA-II) procedure. Evolutionary algorithms are a suitable approach for multi-objective optimization problems because they are inspired by natural evolution, which is closely linked to Artificial Intelligence (AI) techniques, and elitism has proven an important factor for improving evolutionary multi-objective search. In order to evaluate the performance of ENSES, well-known study cases of sandwich structures are reconsidered. For Case 1, the goals of the multi-objective optimization are minimization of the deflection and the weight of the sandwich structure; the length and the core and skin thicknesses are the design variables. For Case 2, the objective functions are the fabrication cost, the beam weight and the end deflection of the sandwich structure; there are four design variables, namely the weld height, the weld length, the beam depth and the beam width. Numerical results are presented in terms of Pareto-optimal solutions for both evaluated cases.
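The non-dominated sorting step at the core of NSGA-II-type procedures such as ENSES can be sketched as follows. The candidate designs are illustrative objective tuples, not results from the study.

```python
def pareto_front(points):
    """Return the non-dominated subset for minimisation of all objectives,
    i.e. the first front of NSGA-II-style non-dominated sorting.
    points is a list of objective tuples, e.g. (weight, deflection)."""
    front = []
    for p in points:
        # p is dominated if some q is no worse in every objective
        # and differs from p (tuples compare by value).
        dominated = any(
            all(q[i] <= p[i] for i in range(len(p))) and q != p
            for q in points
        )
        if not dominated:
            front.append(p)
    return front

# Toy candidates: (weight, deflection) of hypothetical sandwich beams
designs = [(2.0, 5.0), (3.0, 3.0), (4.0, 4.0), (5.0, 1.0), (2.5, 5.5)]
front = pareto_front(designs)
```

In the full algorithm the remaining points are sorted into subsequent fronts and a crowding measure preserves diversity along each front.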
The Marmara Region (NW Turkey) has experienced significant earthquakes (M > 7.0) to date, and a destructive earthquake is also expected in the region. To determine the effect of the site-specific design spectrum, eleven provinces located in the region were chosen according to the Turkey Earthquake Building Code updated in 2019. Additionally, the differences between the previous and updated regulations of the country were investigated. Peak Ground Acceleration (PGA) and Peak Ground Velocity (PGV) were obtained for each province by using earthquake ground motion levels with 2%, 10%, 50%, and 68% probability of exceedance in 50-year periods. The PGA values in the region range from 0.16 to 0.7 g for earthquakes with a return period of 475 years. For each province, a sample reinforced-concrete building with two different numbers of stories but the same soil and structural characteristics was chosen. Static adaptive pushover analyses were performed for the sample reinforced-concrete building using each province's design spectrum. The variations in the earthquake and structural parameters were investigated according to the different geographical locations. It was determined that the site-specific design spectrum significantly influences target displacements for performance-based assessments of buildings, owing to the seismicity characteristics of the studied geographic location.
A more careful consideration of food waste is needed when planning the urban environment. The research signals links between the organization of individuals, the built environment and food waste management through a study conducted in Mexico. It recognizes the different scales within which solid waste management operates, explores food waste production at the household level, and investigates the urban circumstances that influence its management. This is based on the idea that sustainable food waste management in cities requires a constellation of processes through which a 'people centered' approach offers added value to technical and biological facts. This distinction addresses how urban systems react to waste and what behavioral and structural factors affect current sanitary practices in Mexico. Food waste is a resource-demanding item, which accounts for a considerable amount of the refuse being disposed of in landfills in developing cities. The existing shortage of data on waste generation at the household level weakens implementation strategies, and there is a need for more contextual knowledge associated with waste. The evidence-based study includes an explorative phase on the culture of waste management and a more in-depth examination of domestic waste composition. Mixed data collection tools, including a household-based survey, a food waste diary and a weighing recording system, were developed to enquire into the daily practices of waste disposal in households. The contrasting urban environment of the Mexico City Metropolitan Area holds indistinct boundaries between the core and the periphery, which hinder the implementation of integrated environmental plans. External determinants are the different modes of urban transformation; internal determinants are building features and their consolidation processes. At the household level, less and more affluent groups responded differently to external environmental stressors. A targeted planning proposition is required for each group.
Local alternative waste management is more likely to be implemented in less affluent contexts. Furthermore, more effective demand-driven service delivery implies better integration between the formal and informal sectors. The results show that efforts toward securing long-term changes in Mexico and in other cities with similar circumstances require creating synergy between education, building consolidation, local infrastructure and social engagement.
Methods based on B-splines for model representation, numerical analysis and image registration
(2015)
The thesis consists of inter-connected parts for modeling and analysis using newly developed isogeometric methods. The main parts are reproducing kernel triangular B-splines, extended isogeometric analysis for solving weakly discontinuous problems, collocation methods using superconvergent points, and B-spline basis in image registration applications.
Each topic is oriented towards application of isogeometric analysis basis functions to ease the process of integrating the modeling and analysis phases of simulation.
First, we develop a reproducing kernel triangular B-spline-based FEM for solving PDEs. We review the triangular B-splines and their properties. By definition, the triangular basis function is very flexible in modeling complicated domains. However, instability results when it is applied for analysis. We modify the triangular B-spline by a reproducing kernel technique, calculating a correction term for the triangular kernel function from the chosen surrounding basis. The improved triangular basis is capable of obtaining results with higher accuracy and almost optimal convergence rates.
Second, we propose an extended isogeometric analysis for dealing with weakly discontinuous problems such as material interfaces. The original IGA is combined with XFEM-like enrichments which are continuous functions themselves but with discontinuous derivatives. Consequently, the resulting solution space can approximate solutions with weak discontinuities. The method is also applied to curved material interfaces, where the inverse mapping and the curved triangular elements are considered.
Third, we develop an IGA collocation method using superconvergent points. Collocation methods are efficient because no numerical integration is needed. In particular, when a higher-order polynomial basis is applied, the method has a lower computational cost than Galerkin methods. However, the positions of the collocation points are crucial for the accuracy of the method, as they affect the convergence rate significantly. The proposed IGA collocation method uses superconvergent points instead of the traditional Greville abscissae. The numerical results show that the proposed method achieves better accuracy and optimal convergence rates, while traditional IGA collocation has optimal convergence only for even polynomial degrees.
Lastly, we propose a novel dynamic multilevel technique for handling image registration, an application of B-spline functions in image processing. The procedure aims to align a target image with a reference image via a spatial transformation. The method starts with an energy function that is the same as in FEM-based image registration. However, we simplify the solving procedure by working on the energy function directly. We dynamically solve for the control points, which are the coefficients of the B-spline basis functions. The new approach is simpler and faster. Moreover, it is enhanced by a multilevel technique in order to prevent instabilities. The numerical tests consist of two artificial images and four real biomedical MRI brain and CT heart images; they show that our registration method is accurate, fast and efficient, especially for large deformation problems.
Matrix-free voxel-based finite element method for materials with heterogeneous microstructures
(2019)
Modern image detection techniques such as micro-computed tomography (μCT), magnetic resonance imaging (MRI) and scanning electron microscopy (SEM) provide high-resolution images of the microstructure of materials in a non-invasive and convenient way. They form the basis for the geometrical models of high-resolution, so-called image-based analysis.
However, especially in 3D, discretizations of these models easily reach 100 million degrees of freedom and require extensive hardware resources in terms of main memory and computing power to solve the numerical model. Consequently, the focus of this work is to combine and adapt numerical solution methods so as to reduce first the memory demand and then the computation time, and thereby enable execution of the image-based analysis on modern desktop computers. Hence, the numerical model is a straightforward grid discretization of the voxel-based geometry (pixels extended to a third dimension), which omits boundary detection algorithms and allows reduced storage of the finite element data structure and a matrix-free solution algorithm.
This in turn reduces the effort of almost all the applied grid-based solution techniques and results in memory-efficient and numerically stable algorithms for the microstructural models. Two variants of the matrix-free algorithm are presented. The efficient iterative conjugate gradient method is used together with preconditioners that can be applied matrix-free, such as the Jacobi method and the especially well-suited multigrid method. The jagged material boundaries of the voxel-based mesh are smoothed through embedded boundary elements, which carry different material information at the integration points and are integrated sub-cell-wise, though without additional boundary detection. The efficiency of the matrix-free methods is thereby retained.
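A minimal sketch of the matrix-free principle (illustrative, not the actual voxel solver): the system matrix is never assembled; only its action on a vector is provided, here as the standard [−1, 2, −1] stencil of a 1D Poisson problem, and the Jacobi preconditioner likewise uses only the analytically known diagonal.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

n = 200

def apply_A(u):
    """Matrix-free stencil application: (A u)_i = 2 u_i - u_{i-1} - u_{i+1}."""
    r = 2.0 * u
    r[1:] -= u[:-1]
    r[:-1] -= u[1:]
    return r

A = LinearOperator((n, n), matvec=apply_A)

# Jacobi preconditioner: the diagonal of A is 2.0 everywhere, so its
# inverse is also available without ever storing a matrix.
M = LinearOperator((n, n), matvec=lambda r: r / 2.0)

b = np.ones(n)
u, info = cg(A, b, M=M)          # preconditioned conjugate gradients
```

Only two vectors of length n and the stencil routine are needed instead of a sparse matrix, which is exactly the memory saving the voxel-based work exploits at the scale of 10⁸ unknowns.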
Small municipalities in rural areas have so far been only sporadically active in the fields of energy efficiency and renewable energy, owing to their often limited personnel and financial capacities. The question therefore keeps arising of how the climate protection strategies of the federal and state governments can be implemented there cost-effectively with the available staff. Against this background, a tool is being developed that makes an active entry into this subject possible with little effort and in a largely barrier-free way.
The focus of the software solution is the development of a process-oriented development and moderation model for testing and implementing affordable options for energy savings and efficient energy use in predominantly rural areas.
With its help, municipalities are enabled to enter the necessary processes of the energy and heating transition. The modular structure is not intended to fully replace the regular steps of the necessary (integrated) planning processes. Rather, concrete proposals for measures can be generated within the online application, largely automatically, providing a solid foundation for the future energy development of the municipalities.
For a targeted validation of the results and the derivation of potential measures, model municipalities in Thuringia, Bavaria and Hesse are involved as real-world laboratories during the trial phase.
So far, the tool is available only to the participating model municipalities. In future, the software solution is to be made available step by step to all interested municipalities, together with various aids and a variety of other practical components.
Bolted connections are commonly used in steel construction. The load-bearing behavior of bolted fittings has been studied extensively in various research activities, and for practical applications the bearing capacity of bolted connections can be assessed well by standard regulations. With regard to tensile loading, the nut does not have a strong influence on resistances, since failure occurs in the bolt due to the higher material strength of the nut. In some applications, so-called “blind holes” are used to connect plated components. In a manner of speaking, the nut is replaced by the “outer” plate with a prefabricated hole and thread, into which the bolt can be screwed and tightened. In such connections, the limit load capacity cannot be assessed solely from the bolt resistance, since the threaded hole in the base material strongly influences the structural behavior. In this context, the available screw-in depth of the blind hole is of fundamental importance. The German National Annex of EN 1993-1-8 provides information on the depth necessary to transfer the full tensile capacity of the bolt. However, some connections do not allow such depths to be fabricated. In these cases, the capacity of the connection is unclear and not specified. In this paper, first experiments on corresponding connections with different screw-in depths are presented and compared to the limit load capacities according to the standard.
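The "full tensile capacity of the bolt" referenced above is given in EN 1993-1-8 (Table 3.4) as F_t,Rd = k2 · f_ub · A_s / γ_M2. The bolt size below (M16, grade 8.8) is an illustrative choice, not taken from the presented experiments:

```python
def bolt_tension_resistance(f_ub, A_s, k2=0.9, gamma_M2=1.25):
    """Design tension resistance F_t,Rd in N per EN 1993-1-8, Table 3.4.

    f_ub -- ultimate tensile strength of the bolt material in MPa
    A_s  -- tensile stress area of the threaded part in mm^2
    """
    return k2 * f_ub * A_s / gamma_M2

# Example: M16 bolt, grade 8.8 (f_ub = 800 MPa, A_s = 157 mm^2)
F_t_Rd = bolt_tension_resistance(800.0, 157.0)   # -> 90432.0 N (about 90.4 kN)
```

For blind-hole connections with insufficient screw-in depth, this bolt-governed value is only an upper bound, which is precisely the gap the experiments address.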
Flow velocity is generally presumed to influence flood damage. However, this influence has hardly been quantified, and virtually no damage models take it into account. Therefore, the influences of flow velocity, water depth and combinations of these two impact parameters on various types of flood damage were investigated in five communities affected by the 2002 flood in the Elbe catchment in Germany. 2-D hydraulic models with high to medium spatial resolution were used to calculate the impact parameters at the sites where damage occurred. A significant influence of flow velocity on structural damage, particularly on roads, could be shown, in contrast to a minor influence on monetary losses and business interruption. Forecasts of structural damage to road infrastructure should be based on flow velocity alone. The energy head is suggested as a suitable flood impact parameter for reliable forecasting of structural damage to residential buildings above a critical impact level of 2 m of energy head or water depth. However, general consideration of flow velocity in flood damage modelling, particularly for estimating monetary loss, cannot be recommended.
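The energy head mentioned above combines water depth and flow velocity as E = h + v²/(2g). A quick check against the suggested 2 m criterion (the depth and velocity values are illustrative, not from the study):

```python
G = 9.81  # gravitational acceleration, m/s^2

def energy_head(depth_m, velocity_ms):
    """Flood energy head E = h + v^2 / (2 g), in metres."""
    return depth_m + velocity_ms ** 2 / (2.0 * G)

E = energy_head(1.5, 3.0)   # 1.5 m water depth, 3 m/s flow velocity
exceeds_critical = E > 2.0  # compare against the 2 m threshold
```

At 1.5 m depth, a flow of 3 m/s adds roughly 0.46 m of velocity head, so this example stays just below the critical level; the same depth at about 3.2 m/s would exceed it.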
The main objective of this thesis is to investigate the characteristics of rice husk ash (RHA) and its behaviour in self-compacting high performance concrete (SCHPC) with respect to rheological properties, hydration and microstructure development, and alkali-silica reaction, in comparison with silica fume (SF). The main results show that RHA is a macro-mesoporous amorphous siliceous material with a very high silica content, comparable with SF. Besides the amorphous silica content, the pore size distribution is the most important parameter of RHA: it affects the pore volume and specific surface area, and thus the water demand and pozzolanic reactivity of RHA and its behaviour in SCHPC. The incorporation of RHA decreases the filling and passing abilities, but significantly increases the plastic viscosity and segregation resistance of SCHPC. Therefore, RHA can be used as a viscosity-modifying admixture for SCHPC. The incorporation of RHA increases the superplasticizer adsorption, the superplasticizer saturation dosage, and the yield stress and plastic viscosity of mortar. Fresh mortar formulated from SCHPC is a shear-thickening material; the incorporation of RHA/SF decreases the degree of shear thickening. The incorporation of RHA/SF increases the degree of cement hydration. SF appears more effective at 3 days, possibly due to a better nucleation-site effect, whereas RHA dominates at later ages, possibly due to an internal water-curing effect. The incorporation of RHA/SF increases the degree of C3S hydration, particularly the C3S hydration rate from 3 to 14 days. The pozzolanic reaction takes place both outside and inside the RHA particles.
The internal pozzolanic reaction products consolidate the pores inside the RHA particles rather than contributing to pore refinement in the cement matrix. In the presence of a high alkali concentration, RHA particles act as micro-reactive aggregates and react with alkali hydroxide to generate expansive alkali-silica reaction products. Increasing the particle size and temperature increases the alkali-silica reactivity of RHA. A mechanism for the successive pozzolanic and alkali-silica reactions of RHA is theorized. Additionally, a new, simple mix design method is proposed for SCHPC containing various supplementary cementitious materials, i.e. RHA, SF, fly ash and limestone powder.
Rice husk ash (RHA) is classified as a highly reactive pozzolan. It has a very high silica content, similar to that of silica fume (SF). Using the less expensive and locally available RHA as a mineral admixture in concrete brings ample benefits to costs, to the technical properties of concrete, and to the environment. An experimental study of the effect of RHA blending on the workability, strength and durability of high performance fine-grained concrete (HPFGC) is presented. The results show that the addition of RHA to HPFGC significantly improved the compressive strength, splitting tensile strength and chloride penetration resistance. Interestingly, the ratio of compressive strength to splitting tensile strength of HPFGC was lower than that of ordinary concrete, especially for the concrete made with 20 % RHA. The compressive strength and splitting tensile strength of HPFGC containing RHA were similar to and slightly higher, respectively, than those of HPFGC containing SF. The chloride penetration resistance of HPFGC containing 10–15 % RHA was comparable with that of HPFGC containing 10 % SF.