620 Engineering Sciences
When predicting sound pressure levels induced by structure-borne sound sources and describing the sound propagation path through the building structure as accurately as possible, it is necessary to characterize the vibration behavior of the structure-borne sound sources. In this investigation, the characterization was performed using the two-stage method (TSM) described in EN 15657. Four different structure-borne sound sources were characterized and subsequently installed in a lightweight test stand, and the resulting sound pressure levels in an adjacent receiving room were measured. In a second step, sound pressure levels were predicted according to EN 12354-5 based on the parameters of the structure-borne sound sources. The predicted and the measured sound pressure levels were then compared to obtain reliable statements on the accuracy achievable when source quantities determined by the TSM are used with this prediction method.
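The band-wise comparison of predicted and measured levels reduces to elementary decibel arithmetic. The following sketch is illustrative only; the function names and the example band levels are not taken from the study:

```python
import math

def energetic_sum(levels_db):
    """Energetic (power) sum of band sound pressure levels in dB."""
    return 10 * math.log10(sum(10 ** (L / 10) for L in levels_db))

def band_deviation(predicted, measured):
    """Per-band deviation, predicted minus measured level, in dB."""
    return [p - m for p, m in zip(predicted, measured)]

# hypothetical third-octave band levels in dB
predicted = [52.0, 49.5, 47.0]
measured = [50.0, 50.5, 46.0]
deviation = band_deviation(predicted, measured)   # per-band prediction error
total_measured = energetic_sum(measured)          # single-number level
```

Summing band levels energetically (not arithmetically) is what makes the single-number comparison meaningful.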
Identification of modal parameters of a space frame structure is a complex assignment due to a large number of degrees of freedom, close natural frequencies, and different vibrating mechanisms. Research has been carried out on the modal identification of rather simple truss structures. So far, less attention has been given to complex three-dimensional truss structures. This work develops a vibration-based methodology for determining modal information of three-dimensional space truss structures. The method uses a relatively complex space truss structure for its verification. Numerical modelling of the system gives modal information about the expected vibration behaviour. The identification process involves closely spaced modes that are characterised by local and global vibration mechanisms. To distinguish between local and global vibrations of the system, modal strain energies are used as an indicator. The experimental validation, which incorporated a modal analysis employing the stochastic subspace identification method, has confirmed that considering relatively high model orders is required to identify specific mode shapes. Especially in the case of the determination of local deformation modes of space truss members, higher model orders have to be taken into account than in the modal identification of most other types of structures.
The characteristic values of climatic actions in current structural design codes are based on a specified probability of exceedance during the design working life of a structure. These values are traditionally determined from past observational data under a stationary climate assumption. However, this assumption becomes invalid in the context of climate change, where the frequency and intensity of climatic extremes vary with respect to time. This paper presents a methodology to calculate non-stationary characteristic values using state-of-the-art climate model projections. The non-stationary characteristic values are calculated in compliance with the requirements of structural design codes by forming quasi-stationary windows over the entire bias-corrected climate model data. Three approaches for the calculation of non-stationary characteristic values considering the design working life of a structure are compared and their consequences for the exceedance probability are discussed.
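The quasi-stationary-window idea can be sketched as follows. This is a toy illustration, not the paper's implementation: a Gumbel distribution is fitted to the annual maxima of each window by the method of moments, and a characteristic value for an assumed 2 % annual exceedance probability is evaluated per window; the trended series is invented for the example.

```python
import math
import statistics

def gumbel_characteristic(sample, p_annual=0.02):
    """Characteristic value (annual exceedance probability p_annual)
    from annual maxima, Gumbel fit by the method of moments."""
    mean = statistics.fmean(sample)
    std = statistics.stdev(sample)
    beta = std * math.sqrt(6) / math.pi    # scale parameter
    u = mean - 0.5772 * beta               # location (Euler-Mascheroni constant)
    return u - beta * math.log(-math.log(1 - p_annual))

def sliding_windows(series, width=30, step=10):
    """Quasi-stationary windows over a long annual-maxima series."""
    return [series[i:i + width] for i in range(0, len(series) - width + 1, step)]

# synthetic annual maxima with an upward trend (hypothetical data)
series = [1.0 + 0.01 * t + 0.2 * math.sin(t) for t in range(90)]
values = [gumbel_characteristic(w) for w in sliding_windows(series)]
```

Under a warming trend the per-window characteristic values drift upward, which is exactly the non-stationarity the windowing is meant to expose.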
As an optimization that starts from a randomly selected structure generally does not guarantee reasonable optimality, the use of a systemic approach, named the ground structure, is widely accepted in steel-made truss and frame structural design. However, in the case of reinforced concrete (RC) structural optimization, because of the orthogonal orientation of structural members, randomly chosen or architect-sketched framing is used. Such a one-time fixed layout trend, in addition to its lack of a systemic approach, does not necessarily guarantee optimality. In this study, an approach for generating a candidate ground structure to be used for cost or weight minimization of 3D RC building structures with included slabs is developed. A multiobjective function at the floor optimization stage and a single objective function at the frame optimization stage are considered. A particle swarm optimization (PSO) method is employed for selecting the optimal ground structure. This method enables generating a simple, yet potential, real-world representation of topologically preoptimized ground structure while both structural and main architectural requirements are considered. This is supported by a case study for different floor domain sizes.
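A minimal particle swarm optimizer of the kind employed for the ground-structure selection might look as follows. This is a generic sketch minimizing a stand-in objective; all structural and architectural constraints of the actual study are omitted, and the parameter values are conventional defaults, not the paper's settings:

```python
import random

def pso(objective, bounds, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    """Minimal particle swarm optimizer for a box-constrained objective."""
    dim = len(bounds)
    rnd = random.Random(42)
    X = [[rnd.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    V = [[0.0] * dim for _ in range(n_particles)]
    pbest = [x[:] for x in X]                       # personal best positions
    pval = [objective(x) for x in X]
    g = min(range(n_particles), key=lambda i: pval[i])
    gbest, gval = pbest[g][:], pval[g]              # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rnd.random(), rnd.random()
                V[i][d] = (w * V[i][d]
                           + c1 * r1 * (pbest[i][d] - X[i][d])
                           + c2 * r2 * (gbest[d] - X[i][d]))
                X[i][d] = min(max(X[i][d] + V[i][d], bounds[d][0]), bounds[d][1])
            f = objective(X[i])
            if f < pval[i]:
                pbest[i], pval[i] = X[i][:], f
                if f < gval:
                    gbest, gval = X[i][:], f
    return gbest, gval

# toy stand-in for a cost/weight objective: sphere function in 3 variables
best, val = pso(lambda x: sum(v * v for v in x), [(-5.0, 5.0)] * 3)
```

In the actual application the objective would encode cost or weight of the candidate ground structure and the decision variables would parameterize the floor and frame layout.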
We present a physics-informed deep learning model for the transient heat transfer analysis of three-dimensional functionally graded materials (FGMs) employing a Runge–Kutta discrete time scheme. First, the governing equation, the associated boundary conditions and the initial condition for transient heat transfer analysis of FGMs with exponential material variations are presented. Then, the deep collocation method with the Runge–Kutta integration scheme for transient analysis is introduced. The prior physics that helps to generalize the physics-informed deep learning model is introduced by constraining the temperature variable with discrete time schemes and initial/boundary conditions. Furthermore, fitted activation functions suitable for dynamic analysis are presented. Finally, we validate our approach through several numerical examples on FGMs with irregular shapes and a variety of boundary conditions. In the numerical experiments, the PIDL predictions agree well with analytical solutions and other numerical methods for both temperature and flux distributions, and the approach adapts to the transient analysis of FGMs with different shapes, making it a promising surrogate model for transient dynamic analysis.
Nonlocal theories concern the interaction of objects that are separated in space. Classical examples are Coulomb's law and Newton's law of universal gravitation. They have had significant impact in physics and engineering. One classical application in mechanics is the failure of quasi-brittle materials: while local models lead to an ill-posed boundary value problem and mesh-dependent results, nonlocal models guarantee well-posedness and are furthermore relatively easy to implement in commercial computational software.
Advancing digitalisation is giving rise to innovative workflows and organisational systems both within construction projects and within companies. In this context, the digital evolution driven by Building Information Modeling (BIM) must be understood as a change process that will permanently reshape organisational structures. BIM is the leading digital working method in construction, capable of serving design-, execution- and project-related concerns. Compared with other sectors, however, the German construction industry must be regarded as digitally lagging. Its market is characterised by a large number of small and medium-sized enterprises (SMEs). Because these SMEs lack application know-how, widespread and consistent BIM use in projects is still missing. Focusing on the construction project as a temporary organisation, this research addresses the creation of a realistic picture of proven BIM use cases in model projects. It identifies the current BIM challenges for first-time users that have so far hindered the consistent application of BIM in Germany.
The research focuses on the evaluation of success-critical factors in BIM use cases by means of a qualitative content analysis. Through the application of BIM, the digital transformation entails structurally relevant determinants of change for organisations, as well as challenges that are examined in the use-case research.
The objective is threefold. A BIM structure model developed here captures current guideline work and standardisation and thereby establishes the framework of BIM structures required in a construction project. From this structure model, a model for examining use-case risks was derived. It is applied to specifically researched BIM model projects in Germany in order to derive a risk matrix of success-critical factors from the BIM use cases practised in them. This yields a supporting BIM application instrument in the form of BPMN workflows for SMEs. Combining the BIM structure model with the use-case analysis, each workflow overview indicates the location of risks for the respective use case. Companies without BIM expertise in construction project organisations thus gain an instrumental, low-threshold entry into BIM, enabling them to exploit the collaborative and economic advantages of this digital working method.
A model for demand-oriented service provision in facility management based on sensor technologies and BIM
(2023)
While digitalisation in the construction industry receives ever greater attention, particularly in the planning and construction phases of buildings, the digital potential of facility management remains far less exploited than it could be. Given that building operation accounts for a substantial share of life-cycle costs, a focus on digital processes in building operation is required. In facility management, services are frequently provided either on a schedule-driven basis, i.e. at static intervals, or on demand. Both modes of service provision have deficits: activities may be performed at defined intervals without any actual need, or existing needs may go unidentified for lack of means to determine them. In particular, defining and determining a demand for service provision is often subjective. Moreover, service providers are often not involved in the early phases of building design and receive the data and information necessary for their services only shortly before the building goes into operation.
Current approaches of Building Information Modeling (BIM) and the growing availability of sensor technologies in buildings offer opportunities to remedy the deficits outlined above.
This thesis therefore develops data models and methods that can trigger building-operation services in an objectified and automated manner, using BIM-based database structures together with evaluation and decision methods. The focus is on the facility service of cleaning and upkeep within infrastructural facility management.
An extensive review of established norms and standards as well as publicly available service tenders forms the basis for defining the information required for service provision. The identified static building and process information is structured in a relational database model, which, after a presentation of measured quantities and a description of the procedure for selecting suitable sensors for capturing demands, is extended by sensor information. To make the readings of diverse sensors already present in buildings usable for triggering services, a normalisation method is implemented in the database model. In this way, the demand for service provision can be determined from threshold values. Linking methods for combining different applications are also integrated into the database model. In addition to the direct triggering of required activities, the developed model enables an opportune triggering of services, i.e. service provision before the actual demand arises. Activities that are similar in kind or spatially close to one another can thus be brought forward sensibly, saving travel distances for the service provider. The thesis also describes the algorithms required for evaluation, decision-making and order monitoring.
The developed model of demand-oriented service provision is validated in a relational database; simulations of different building-operation scenarios show that demands can be determined on the basis of sensor technologies and that services can be triggered, commissioned and documented opportunely.
The present article aims to provide an overview of the consequences of dynamic soil-structure interaction (SSI) on building structures and the available modelling techniques to resolve SSI problems. The role of SSI has been traditionally considered beneficial to the response of structures. However, contemporary studies and evidence from past earthquakes showed detrimental effects of SSI in certain conditions. An overview of the related investigations and findings is presented and discussed in this article. Additionally, the main approaches to evaluate seismic soil-structure interaction problems with the commonly used modelling techniques and computational methods are highlighted. The strength, limitations, and application cases of each model are also discussed and compared. Moreover, the role of SSI in various design codes and global guidelines is summarized. Finally, the advancements and recent findings on the SSI effects on the seismic response of buildings with different structural systems and foundation types are presented. In addition, with the aim of helping new researchers to improve previous findings, the research gaps and future research tendencies in the SSI field are pointed out.
Plastic structural analysis may be applied without any difficulty and with little effort to structural member verifications with regard to lateral torsional buckling of doubly symmetric rolled I-sections. Such analyses can be performed based on the plastic zone theory, specifically using finite beam elements with seven degrees of freedom and 2nd-order theory considering material nonlinearity. The existing Eurocode enables these approaches, and the upcoming generation will provide corresponding regulations in EN 1993-1-14. The investigations allow the determination of computationally accurate limit loads, which are determined in the present paper for selected structural systems with different sets of parameters, such as length, steel grade and cross-section type. For reasons of verification/validation, the results are compared to approximations gained by more sophisticated FEM analyses (commercial software Ansys Workbench applying solid elements). In this course, differences in the results of the numerical models are addressed and discussed. In addition, results are compared to resistances obtained by common design regulations based on reduction factors χLT, including the regulations of EN 1993-1-1 (with the German National Annex) as well as prEN 1993-1-1: 2020-08 (the proposed new Eurocode generation). Concluding, correlations of the results and their advantages as well as disadvantages are discussed.
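For the reduction-factor approach mentioned above, EN 1993-1-1 (general case) gives χLT as a function of the relative slenderness λ̄LT and an imperfection factor αLT. A direct transcription of that formula, for orientation only:

```python
import math

def chi_lt(lambda_bar, alpha_lt=0.34):
    """Lateral torsional buckling reduction factor chi_LT per the general
    case of EN 1993-1-1; alpha_lt is the imperfection factor (0.34 is
    buckling curve b, used here only as an example default)."""
    phi = 0.5 * (1 + alpha_lt * (lambda_bar - 0.2) + lambda_bar ** 2)
    return min(1.0, 1.0 / (phi + math.sqrt(phi ** 2 - lambda_bar ** 2)))
```

The buckling resistance then follows as M_b,Rd = χLT · W_y · f_y / γ_M1; for stocky members (λ̄LT ≤ 0.2) no reduction applies.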
Encapsulation-based self-healing concrete (SHC) is the most promising technique for providing concrete with a self-healing mechanism, owing to its capacity to heal fractures effectively without human intervention, extending the operational life and lowering maintenance costs. The healing mechanism is created by embedding capsules containing the healing agent inside the concrete; the agent is released once the capsules fracture, and healing occurs in the vicinity of the damaged part. The healing efficiency of SHC is still not fully understood and depends on several factors; in microcapsule-based SHC, the fracture of the microcapsules is the decisive step for releasing the healing agents and hence healing the cracks. This study contributes to verifying the healing efficiency of SHC and the fracture mechanism of the microcapsules. The extended finite element method (XFEM) is a flexible and powerful discrete crack method that allows crack propagation without re-meshing and has shown high accuracy in modeling fracture in concrete. In this thesis, a computational fracture modeling approach for encapsulation-based SHC is proposed, based on the XFEM and a cohesive surface (CS) technique, to study the healing efficiency as well as the potential for fracture and debonding of the microcapsules, or of the solidified healing agents, from the concrete matrix. Both the concrete matrix and the microcapsule shell are modeled by the XFEM and coupled by the CS. The effects of the healed-crack length, the interfacial fracture properties, and the microcapsule size on the load-carrying capability and fracture pattern of the SHC are studied. The obtained results are compared to those of the zero-thickness cohesive element approach to demonstrate the accuracy and validity of the proposed simulation.
The present fracture simulation is developed to study the influence of capsular clustering on the fracture mechanism by varying the contact surface area of the CS between the microcapsule shell and the concrete matrix. The proposed fracture simulation is expanded to 3D to validate the 2D computational simulations and to estimate the accuracy difference between 2D and 3D simulations. In addition, a design method is developed that sizes the microcapsules in consideration of a sufficient volume of healing agent to heal the expected crack width. This method is based on the configuration of the unit cell (UC), the Representative Volume Element (RVE) and Periodic Boundary Conditions (PBC), associating them with the volume fraction (Vf) and the crack width as variables. The proposed microcapsule design is verified through computational fracture simulations.
Determining the earthquake hazard of any settlement is one of the primary steps in reducing earthquake damage; the earthquake hazard maps used for this purpose must therefore be renewed over time. The Turkey Earthquake Hazard Map has been used instead of the Turkey Earthquake Zones Map since 2019. Using these last two maps and different attenuation relationships, a probabilistic seismic hazard assessment was performed for Bitlis Province (Eastern Turkey), located in the Lake Van Basin, which has a high seismic risk. The earthquake parameters were determined by considering all districts and neighborhoods in the province. Probabilistic seismic hazard analyses were carried out for these settlements using seismic sources and four different attenuation relationships. The obtained values are compared with the design spectra stated in the last two earthquake maps. Significant differences exist between the design spectra obtained for the different exceedance probabilities. In this study, adaptive pushover analyses of sample reinforced-concrete buildings were performed using the design ground motion level. Structural analyses were carried out using three different design spectra, as given in the last two seismic design codes, and the mean spectrum obtained from the attenuation relationships. Different design spectra significantly change the target displacements predicted for the performance levels of the buildings.
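The exceedance-probability levels underlying such hazard maps follow from a Poisson occurrence model. A small sketch of the standard conversion between exceedance probability, exposure time and return period (the function names are ours):

```python
import math

def return_period(p_exceed, exposure_years):
    """Return period implied by an exceedance probability over an
    exposure time, assuming Poisson-distributed event occurrences."""
    return -exposure_years / math.log(1 - p_exceed)

def exceedance_probability(return_period_years, exposure_years):
    """Inverse relation: probability of at least one exceedance."""
    return 1 - math.exp(-exposure_years / return_period_years)

# the two common design levels: 10 % and 2 % in 50 years
t_dd = return_period(0.10, 50)   # "design earthquake", ~475 years
t_md = return_period(0.02, 50)   # "maximum considered", ~2475 years
```

This is where the familiar 475-year and 2475-year return periods of seismic design codes come from.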
The floods of 2002 and 2013, as well as the recent flood of 2021, caused billions of euros worth of property damage in Germany. The project Innovative Vulnerability and Risk Assessment of Urban Areas against Flood Events (INNOVARU) aimed at developing a practicable flood damage model that enables realistic damage statements for the residential building stock. In addition to the determination of local flood risks, it also takes into account the vulnerability of individual buildings and allows for the prognosis of structural damage. In this paper, we discuss an improved method for the prognosis of structural damage due to flood impact. Detailed correlations between inundation level and flow velocity are considered, depending on the vulnerability of the building types as well as the number of storeys. Because reliable damage data from events with high flow velocities were not available, an innovative approach was adopted to cover a wide range of flow velocities: the proposed approach combines comprehensive damage data collected after the 2002 flood in Germany with damage data from the 2011 Tohoku earthquake tsunami in Japan. The application of the developed methods enables a reliable reinterpretation of the structural damage caused by the August 2002 flood in six study areas in the Free State of Saxony.
The fracture of microcapsules is an important issue for releasing the healing agent that heals cracks in encapsulation-based self-healing concrete. Capsular clustering, generated during the concrete mixing process, is considered one of the critical factors in the fracture mechanism. Since the literature lacks studies on this issue, self-healing concrete cannot be designed without an appropriate modelling strategy. In this paper, the effects of microcapsule size and clustering on the fractured microcapsules are studied computationally. A simple 2D computational modelling approach is developed based on the eXtended Finite Element Method (XFEM) and a cohesive surface technique. The proposed model shows that microcapsule size and clustering play significant roles in governing the load-carrying capacity and the crack propagation pattern, and determine whether a microcapsule will be fractured or debonded from the concrete matrix. The higher the microcapsule circumferential contact length, the higher the load-carrying capacity; when it is lower than 25% of the microcapsule circumference, debonding of the microcapsule from the concrete becomes more likely. The greater the core/shell ratio (i.e., the smaller the shell thickness), the greater the likelihood of microcapsules being fractured.
Operator Calculus Approach to Comparison of Elasticity Models for Modelling of Masonry Structures
(2022)
The solution of any engineering problem starts with a modelling process aimed at formulating a mathematical model, which must describe the problem under consideration with sufficient precision. Because of the heterogeneity of modern engineering applications, mathematical modelling nowadays ranges from incredibly precise micro- and even nano-modelling of materials to macro-modelling, which is more appropriate for practical engineering computations. In the field of masonry structures, a macro-model of the material can be constructed based on various elasticity theories, such as classical elasticity, micropolar elasticity and Cosserat elasticity. Evidently, a different macro-behaviour is expected depending on the specific theory used in the background. Although there have been several theoretical studies of different elasticity theories in recent years, there is still a lack of understanding of how the modelling assumptions of different elasticity theories influence the modelling results for masonry structures. Therefore, a rigorous approach to the comparison of different three-dimensional elasticity models, based on quaternionic operator calculus, is proposed in this paper. Three elasticity models are described and spatial boundary value problems for these models are discussed. In particular, explicit representation formulae for their solutions are constructed. By using these representation formulae, explicit estimates for the solutions obtained by the different elasticity theories are derived. Finally, several numerical examples are presented, which indicate a practical difference in the solutions.
Owing to their often limited human and financial resources, small municipalities in rural areas have so far been only sporadically active in the fields of energy efficiency and renewable energy. The question therefore keeps arising of how the federal and state climate-protection strategies can be implemented there cost-effectively with the available staff. Against this background, a tool is being developed that enables an active, low-effort and largely barrier-free entry into this subject.
The core of the software solution is a process-oriented development and facilitation model for testing and implementing affordable options for energy savings and efficient energy use in predominantly rural areas.
The tool puts municipalities in a position to enter the necessary processes of the energy and heating transition. Its modular structure is not meant to fully replace the regular steps of the necessary (integrated) planning processes. Rather, the online application can generate, largely automatically, concrete proposals for measures that form a solid foundation for the municipalities' future energy development.
For a targeted validation of the results and the derivation of potential measures, model municipalities in Thuringia, Bavaria and Hesse are involved as real-world laboratories.
So far the tool is available only to the participating model municipalities. The software solution is to be made available step by step to all interested municipalities, together with various aids and a range of other practical components.
When analysing the building stock at the district level, large amounts of image data are produced for documentation purposes. Afterwards, these data often cannot be assigned to exact locations and viewing angles on the building, especially for people unfamiliar with the site or for close-up shots. Capturing thermal bridges or other building details with thermograms poses an additional challenge. In practice, analogue, error-prone solutions are often used here.
Georeferencing can close this gap and ensure unambiguous communication and evaluation. Unlike common cameras, state-of-the-art smartphones are sufficiently equipped to record not only location data but also the orientation angles of an image. On the basis of the information recorded in the so-called Exif data, the georeferenced images can be integrated manually into an existing district model.
Using a university model district, the user-friendly realisation is tested by way of example and examined for its automation potential in Python. For this purpose, an existing district model served as the geometric basis and was extended by RGB images and thermograms. The described procedure is examined for its possible use in an energetic district survey and in the documentation of structural damage.
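One step of the Exif-based workflow that lends itself to automation is converting the GPS tags, which Exif stores as degrees/minutes/seconds plus a hemisphere reference, into signed decimal coordinates. A minimal sketch; the example values are hypothetical, and in practice a library such as Pillow would first read the tags from the image file:

```python
def dms_to_decimal(degrees, minutes, seconds, ref):
    """Convert Exif GPS degrees/minutes/seconds into a signed decimal
    degree; ref is 'N', 'S', 'E' or 'W' (GPSLatitudeRef/GPSLongitudeRef)."""
    value = degrees + minutes / 60 + seconds / 3600
    return -value if ref in ('S', 'W') else value

# roughly Weimar (hypothetical Exif values, not from the study)
lat = dms_to_decimal(50, 58, 30.0, 'N')
lon = dms_to_decimal(11, 19, 45.0, 'E')
```

Together with the recorded orientation angles, such decimal coordinates allow each image to be placed automatically in the district model.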
This contribution provides users with a tool that enables high-quality documentation of an as-is survey, including at the district level.
Data acquisition systems and methods to capture high-resolution images or reconstruct 3D point clouds of existing structures are an effective way to document their as-is condition. These methods enable a detailed analysis of building surfaces, providing precise 3D representations. However, for the condition assessment and documentation, damages are mainly annotated in 2D representations, such as images, orthophotos, or technical drawings, which do not allow for the application of a 3D workflow or automated comparisons of multitemporal datasets. In the available software for building heritage data management and analysis, a wide range of annotation and evaluation functions are available, but they also lack integrated post-processing methods and systematic workflows. The article presents novel methods developed to facilitate such automated 3D workflows and validates them on a small historic church building in Thuringia, Germany. Post-processing steps using photogrammetric 3D reconstruction data along with imagery were implemented, which show the possibilities of integrating 2D annotations into 3D documentations. Further, the application of voxel-based methods on the dataset enables the evaluation of geometrical changes of multitemporal annotations in different states and the assignment to elements of scans or building models. The proposed workflow also highlights the potential of these methods for condition assessment and planning of restoration work, as well as the possibility to represent the analysis results in standardised building model formats.
In this paper, we present an open-source code for the first-order and higher-order nonlocal operator method (NOM), including a detailed description of the implementation. The NOM is based on so-called supports, dual-supports, nonlocal operators, and an operator energy functional ensuring stability. The nonlocal operator is a generalization of the conventional differential operators. Combined with the method of weighted residuals and variational principles, the NOM establishes the residual and the tangent stiffness matrix of the operator energy functional through simple matrix operations, without the need for shape functions as in classical computational methods such as FEM. The NOM only requires the definition of the energy, which drastically simplifies its implementation. The implementation in this paper focuses on linear elastic solids for the sake of conciseness, though the NOM can handle more complex nonlinear problems. The NOM is flexible and efficient for solving partial differential equations (PDEs), and it is also quite easy for readers to use the NOM and extend it to other complicated physical phenomena described by one or a set of PDEs. Finally, we present some classical benchmark problems, including the cantilever beam and the plate-with-a-hole problem, and we also extend the method to complicated problems including phase-field fracture modeling and gradient elasticity materials.
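The flavor of a nonlocal operator can be conveyed by a 1D toy: the derivative at a node is recovered from all nodes in its support by a weighted least-squares fit rather than from a local stencil. This sketch is far simpler than the actual NOM (no dual-support, no energy functional, an arbitrary kernel choice) and is meant only to illustrate the support-based construction:

```python
def nonlocal_derivative(x, u, i, horizon):
    """First-order nonlocal operator in 1D: weighted least-squares
    estimate of du/dx at node i from all nodes j in its support,
    i.e. all j with |x[j] - x[i]| <= horizon."""
    num = den = 0.0
    for j in range(len(x)):
        r = x[j] - x[i]
        if j != i and abs(r) <= horizon:
            w = 1.0 / abs(r)          # kernel weight (one common choice)
            num += w * (u[j] - u[i]) * r
            den += w * r * r
    return num / den

n = 101
x = [i / (n - 1) for i in range(n)]
u = [xi ** 2 for xi in x]             # u = x^2, so du/dx = 2x
# horizon slightly above 5 grid spacings to avoid floating-point edge effects
d = nonlocal_derivative(x, u, 50, horizon=0.055)   # exact value: 1.0 at x = 0.5
```

Shrinking the horizon recovers the local differential operator, which is the sense in which nonlocal operators generalize conventional ones.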
Realistic uncertainty description incorporating aleatoric and epistemic uncertainties can be achieved within the framework of polymorphic uncertainty, which is computationally demanding. Utilizing a domain decomposition approach for random-field-based uncertainty models, the proposed level-based sampling method can reduce these computational costs significantly and shows good agreement with a standard sampling technique. While 2-level configurations tend to become unstable with decreasing sampling density, 3-level setups show encouraging results for the investigated reliability analysis of a structural unit square.
Bolted connections are commonly used in steel construction. The load-bearing behavior of bolt fittings has extensively been studied in various research activities and the bearing capacity of bolted connections can be assessed well by standard regulations for practical applications. With regard to tensile loading, the nut does not have strong influence on resistances, since the failure occurs in the bolts due to higher material strengths of the nuts. In some applications, so-called “blind holes” are used to connect plated components. In a manner of speaking, the nut is replaced by the “outer” plate with a prefabricated hole and thread, in which the bolt can be screwed and tightened. In such connections, the limit load capacity cannot solely be assessed by the bolt resistance, since the threaded hole in the base material has strong influence on the structural behavior. In this context, the available screw-in depth of the blind hole is of fundamental importance. The German National Annex of EN 1993-1-8 provides information on a necessary depth in order to transfer the full tensile capacity of the bolt. However, some connections do not allow to fabricate such depths. In these cases, the capacity of the connection is unclear and not specified. In this paper, first experiments on corresponding connections with different screw-in depths are presented and compared to limit load capacities according to the standard.
Marine macroalgae such as Ulva intestinalis have promising properties as feedstock for cosmetics and pharmaceuticals. However, since the quantity and quality of naturally grown algae vary widely, their exploitability is reduced – especially for producers in high-priced markets. Moreover, the expansion of marine or shore-based cultivation systems is unlikely in Europe, since promising sites lie in fishing zones, recreational areas, or nature reserves. The aim was therefore to develop a closed photobioreactor system enabling full control of abiotic environmental parameters and an effective reconditioning of the cultivation medium, in order to produce marine macroalgae at sites distant from the shore. To assess the feasibility and functionality of the chosen technological concept, a prototype plant was implemented in central Germany – a site distant from the sea. Using a newly developed, submersible LED light source, cultivation experiments with Ulva intestinalis led to growth rates of 7.72 ± 0.04 % day−1 in a cultivation cycle of 28 days. Based on the space demand of the production system, this results in a fresh mass productivity of 3.0 kg m−2, respectively, of 1.1 kg m−2 per year. Considering also the ratio of biomass to energy input, amounting to 2.76 g kWh−1, significant future improvements of the developed photobioreactor system should include the optimization of growth parameters and the reduction of the system’s overall energy demand.
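The reported growth rate can be related to a biomass yield via the usual exponential definition of the specific growth rate. A minimal sketch, assuming the ln-based SGR definition (the paper may use a different one) and a purely hypothetical initial stocking mass:

```python
import math

def specific_growth_rate(m_initial, m_final, days):
    """SGR in % per day: ln(m_f / m_i) / t * 100 (assumed definition)."""
    return math.log(m_final / m_initial) / days * 100.0

def final_mass(m_initial, sgr_percent_per_day, days):
    """Project biomass after `days` by inverting the definition above."""
    return m_initial * math.exp(sgr_percent_per_day / 100.0 * days)

# Hypothetical initial stocking of 0.35 kg m-2 fresh mass (not from the paper):
m = final_mass(0.35, 7.72, 28)   # ~3.0 kg m-2 after one 28-day cycle
```

Under these assumptions the projection lands near the reported 3.0 kg m−2, but the stocking mass is invented for illustration only.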
Scaling of concrete due to salt frost attack is an important durability issue in moderate and cold climates. The actual damage mechanism is still not completely understood. Two recent damage theories—the glue spall theory and the cryogenic suction theory—offer plausible, but conflicting explanations for the salt frost scaling mechanism. The present study deals with the cryogenic suction theory, which assumes that freezing concrete can take up unfrozen brine from a partly frozen deicing solution during salt frost attack. According to the model hypothesis, the resulting saturation of the concrete surface layer intensifies the ice formation in this layer and causes salt frost scaling. In this study, an experimental technique was developed that makes it possible to quantify to what extent brine uptake can increase ice formation in hardened cement paste (used as a model material for concrete). The experiments were carried out with low temperature differential scanning calorimetry, where specimens were subjected to freeze–thaw cycles while being in contact with NaCl brine. Results showed that the ice content in the specimens increased with subsequent freeze–thaw cycles due to the brine uptake at temperatures below 0 °C. The ability of the hardened cement paste to bind chlorides from the absorbed brine at the same time affected the freezing/melting behavior of the pore solution and the magnitude of the ice content.
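Ice contents are commonly obtained from DSC data by integrating the baseline-corrected freezing (or melting) peak and dividing by the latent heat of fusion. A sketch of that standard evaluation, not necessarily the exact procedure of the study; the 334 J/g value for pure water is an assumption, and brine lowers it in practice:

```python
def ice_content_from_dsc(heat_flow_mw, time_s, latent_heat_j_per_g=334.0):
    """Estimate frozen water mass (g) by trapezoidal integration of the
    baseline-corrected exothermic heat-flow signal (mW) over time (s),
    divided by the latent heat of fusion of ice (assumed 334 J/g)."""
    energy_mj = 0.0
    for i in range(1, len(time_s)):
        dt = time_s[i] - time_s[i - 1]
        energy_mj += 0.5 * (heat_flow_mw[i] + heat_flow_mw[i - 1]) * dt
    return (energy_mj / 1000.0) / latent_heat_j_per_g
```

For example, a triangular 668 W peak lasting 2 s corresponds to 668 J, i.e. about 2 g of ice at the assumed latent heat.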
The derivation of nonlocal strong forms for many physical problems remains cumbersome in traditional methods. In this paper, we apply the variational principle/weighted residual method based on the nonlocal operator method to derive nonlocal forms for elasticity, thin plates, gradient elasticity, electro-magneto-elasticity, and the phase-field fracture method. The nonlocal governing equations are expressed in integral form on the support and dual-support. The first example shows that nonlocal elasticity has the same form as dual-horizon non-ordinary state-based peridynamics. The derivation is simple and general, and it can efficiently convert many local physical models into their corresponding nonlocal forms. In addition, a criterion based on the instability of the nonlocal gradient is proposed for fracture modelling in linear elasticity. Several numerical examples are presented to validate nonlocal elasticity and the nonlocal thin plate.
Within the scope of the literature, the influence of openings within infill walls that are bounded by a reinforced concrete frame and excited by seismic drift forces in both the in-plane and out-of-plane directions is still uncharted. Therefore, a 3D micromodel was developed and subsequently calibrated to gain more insight into the topic. The micromodels were calibrated against their equivalent physical test specimens: in-plane and out-of-plane drift-driven tests on frames with and without infill walls and openings, as well as out-of-plane bending tests of masonry walls. The micromodels were rectified based on their behavior and damage states. As a result of the calibration process, the sensitivity of the micromodels to various parameters, regarding the model’s behavior and computational stability, was identified. It was found that, even within the same material model, some parameters had more effect when attributed to concrete than to masonry. Generally, the in-plane behavior of infilled frames was found to be largely governed by the interface material model. The out-of-plane masonry wall simulations were governed by the tensile strength of both the interface and the masonry material model, whereas the out-of-plane drift-driven test was governed by the concrete material properties.
Encapsulation-based self-healing concrete has recently received considerable attention in the civil engineering field. Capsules are embedded in the cementitious matrix during concrete mixing. When cracks appear, the embedded capsules placed along the path of an incoming crack fracture and release healing agents in the vicinity of the damage. The capsule materials need to be designed such that they break at small deformations, so the internal fluid can be released to seal the crack. This study focuses on computational modeling of fracture in encapsulation-based self-healing concrete. Numerical models in 2D and 3D with randomly packed aggregates and capsules have been developed to analyze the fracture mechanisms that play a significant role in the fracture probability of capsules and consequently in the self-healing process. The capsules are assumed to be made of poly(methyl methacrylate) (PMMA), and the potential cracks are represented by pre-inserted cohesive elements with tension and shear softening laws along the element boundaries of the mortar matrix, aggregates, and capsules, and at the interfaces between these phases. The effects of volume fraction, core-wall thickness ratio, and mismatched fracture properties of the capsules on the load-carrying capacity of self-healing concrete and the fracture probability of the capsules are investigated. The output of this study is a valuable tool to assist not only experimentalists but also manufacturers in designing an appropriate capsule material for self-healing concrete.
A vast number of existing buildings were constructed before the development and enforcement of seismic design codes, and they run the risk of being severely damaged under seismic excitation. This poses not only a threat to human life but also affects the socio-economic stability of the affected area. It is therefore necessary to assess such buildings’ present vulnerability in order to make an educated decision regarding risk mitigation by seismic strengthening techniques such as retrofitting. However, it is not feasible, economically or in a timely manner, to inspect, repair, and augment every old building on an urban scale. As a result, reliable rapid screening methods, namely Rapid Visual Screening (RVS), have garnered increasing interest among researchers and decision-makers alike. In this study, the effectiveness of five different Machine Learning (ML) techniques in vulnerability prediction applications has been investigated. The damage data of four different earthquakes, from Ecuador, Haiti, Nepal, and South Korea, have been utilized to train and test the developed models. Eight performance modifiers have been implemented as variables with supervised ML. The investigations in this paper illustrate that the vulnerability classes assessed by the ML techniques were very close to the actual damage levels observed in the buildings.
In this paper we present a theoretical background for a coupled analytical–numerical approach to model a crack propagation process in two-dimensional bounded domains. The goal of the coupled analytical–numerical approach is to obtain the correct solution behaviour near the crack tip with the help of an analytical solution constructed using tools of complex function theory, and to couple it continuously with the finite element solution in the region far from the singularity. In this way, crack propagation can be modelled without remeshing. Possible directions of crack growth can be calculated through the minimization of the total energy, composed of the potential energy and the dissipated energy based on the energy release rate. Within this setting, an analytical solution of a mixed boundary value problem based on complex analysis and conformal mapping techniques is presented for a circular region containing an arbitrary crack path. More precisely, the linear elastic problem is transformed into a Riemann–Hilbert problem in the unit disk for holomorphic functions. Utilising the advantages of the analytical solution in the region near the crack tip, the total energy can be evaluated within short computation times for various crack kink angles and lengths, leading to a potentially efficient way of performing the minimization procedure. On this basis, the paper presents a general strategy for the new coupled approach to crack propagation modelling. Additionally, we also discuss obstacles in the way of a practical realisation of this strategy.
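The energy-minimization step over candidate kink angles can be sketched as a simple search over a discretized angle range; the energy function here is a hypothetical placeholder for the analytical evaluation described in the paper:

```python
def minimize_kink_angle(total_energy, angles):
    """Return the candidate kink angle with the lowest total energy.
    `total_energy` stands in for the analytical near-tip evaluation
    described in the paper; any callable angle -> energy will do."""
    return min(angles, key=total_energy)

# Hypothetical smooth energy landscape with its minimum at +30 degrees:
energy = lambda a: (a - 30.0) ** 2 / 1000.0 - 1.0
candidates = list(range(-90, 91, 5))
best_angle = minimize_kink_angle(energy, candidates)   # -> 30
```

In the paper's setting, each energy evaluation would come from the analytical near-tip solution, which is what makes scanning many angles cheap.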
In this work, extensive reactive molecular dynamics simulations are conducted to analyze nanopore creation by nanoparticle impact on single-layer molybdenum disulfide (MoS2) with 1T and 2H phases. We also compare the results with a graphene monolayer. In our simulations, the nanosheets are exposed to a spherical rigid carbon projectile with high initial velocities ranging from 2 to 23 km/s. Results for the three different structures are compared to examine the most critical factors in the perforation and resistance force during the impact. To analyze the perforation and impact resistance, the kinetic energy and displacement time history of the projectile, as well as the perforation resistance force, are investigated.
Interestingly, although the elastic modulus and tensile strength of graphene are almost five times higher than those of MoS2, the results demonstrate that the 1T and 2H MoS2 phases are more resistant to impact loading and perforation than graphene. For the MoS2 nanosheets, we find that the 2H phase is more resistant to impact loading than its 1T counterpart.
Our reactive molecular dynamics results highlight that, in addition to strength and toughness, the atomic structure is another crucial factor that can contribute substantially to the impact resistance of 2D materials. The obtained results can be useful to guide experimental setups for nanopore creation in MoS2 or other 2D lattices.
Discrete function theory in the higher-dimensional setting has been in active development for many years. However, available results focus on the discrete setting for canonical domains such as the half-space, while the case of bounded domains has generally remained unconsidered. Therefore, this paper presents the extension of higher-dimensional function theory to the case of arbitrary bounded domains in Rn. Along this way, a discrete Stokes formula, a discrete Borel–Pompeiu formula, as well as discrete Hardy spaces for general bounded domains are constructed. Finally, several discrete Hilbert problems are considered.
When it comes to the monitoring of large structures, the main issues are limited time, high costs, and how to deal with the large amount of data. To reduce and manage these, methods from the field of optimal design of experiments are useful and supportive. Having optimal experimental designs at hand before conducting any measurements leads to a highly informative measurement concept, in which the sensor positions are optimized with respect to minimal errors in the structure’s models. To reduce computational time, a combined approach using the Fisher Information Matrix and the mean-squared error in a two-step procedure is proposed under consideration of different error types. The error descriptions contain random/aleatoric and systematic/epistemic portions. Applying this combined approach to a finite element model using artificial acceleration time measurement data with artificially added errors leads to optimized sensor positions. These findings are compared to results from laboratory experiments on the modeled structure, a tower-like structure represented by a hollow pipe acting as a cantilever beam. Conclusively, the combined approach leads to a sound experimental design that yields a good estimate of the structure’s behavior and model parameters without the need for preliminary measurements for model updating.
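The Fisher-information step of such a sensor-placement procedure can be sketched as a greedy D-optimal selection over candidate positions. The two-mode shape matrix below is purely illustrative, and the norm-based seeding rule is an implementation choice, not taken from the paper:

```python
def fim_det(rows):
    """det(Phi_s^T Phi_s) for a set of sensor rows; two modes assumed."""
    a = sum(r[0] * r[0] for r in rows)
    b = sum(r[0] * r[1] for r in rows)
    d = sum(r[1] * r[1] for r in rows)
    return a * d - b * b

def greedy_sensor_placement(mode_shapes, n_sensors):
    """Greedy D-optimal design: seed with the largest-norm row (avoids
    the zero-determinant tie of a single sensor), then repeatedly add
    the candidate that maximizes det of the Fisher information matrix."""
    chosen = [max(range(len(mode_shapes)),
                  key=lambda i: mode_shapes[i][0] ** 2 + mode_shapes[i][1] ** 2)]
    remaining = [i for i in range(len(mode_shapes)) if i != chosen[0]]
    while len(chosen) < n_sensors:
        best = max(remaining,
                   key=lambda i: fim_det([mode_shapes[j] for j in chosen + [i]]))
        chosen.append(best)
        remaining.remove(best)
    return sorted(chosen)

# Five candidate positions on a cantilever, two bending modes (illustrative):
phi = [[0.1, 0.3], [0.3, 0.8], [0.5, 0.5], [0.8, -0.4], [1.0, -1.0]]
sensors = greedy_sensor_placement(phi, 2)
```

The greedy heuristic is a common stand-in for the full combinatorial search; the paper's two-step procedure additionally weighs in the mean-squared error and the different error types.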
This study proposes an efficient Bayesian, frequency-based damage identification approach to identify damage in cantilever structures with an acceptable error rate, even at high noise levels. The catenary poles of electric high-speed train systems were selected as a realistic case study to cover the objectives of this study. Compared to other frequency-based damage detection approaches described in the literature, the proposed approach is able to detect damage in cantilever structures efficiently and to a higher level of detail, namely identifying both the damage location and severity, using a low-cost structural health monitoring (SHM) system with a limited number of sensors, for example accelerometers. The integration of Bayesian inference as a stochastic framework in the proposed approach makes it possible to utilize the benefit of data fusion in merging the informative data from multiple damage features, which increases the quality and accuracy of the results. The findings provide the decision-maker with the information required to manage maintenance, repair, or replacement procedures.
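The Bayesian part of such a frequency-based identification can be sketched as a grid posterior over damage location and severity with a Gaussian likelihood on the measured frequencies. The toy frequency model and all numbers below are illustrative stand-ins for the catenary-pole model, not data from the study:

```python
import math

def posterior_grid(measured, predict, locations, severities, sigma=0.5):
    """Uniform prior over (location, severity); Gaussian likelihood on
    the measured natural frequencies. `predict(loc, sev)` stands in for
    the structural model of the pole."""
    weights = {}
    for loc in locations:
        for sev in severities:
            pred = predict(loc, sev)
            log_like = -sum((m - p) ** 2 for m, p in zip(measured, pred)) / (2 * sigma ** 2)
            weights[(loc, sev)] = math.exp(log_like)
    z = sum(weights.values())
    return {k: v / z for k, v in weights.items()}

# Toy frequency model: damage lowers each mode in proportion to severity
# and a location-dependent sensitivity (all numbers are illustrative).
base = [1.8, 11.2, 31.5]   # Hz, undamaged natural frequencies
sens = {0.25: [0.9, 0.4, 0.2], 0.5: [0.5, 0.8, 0.3], 0.75: [0.2, 0.5, 0.9]}
model = lambda loc, sev: [f * (1 - sev * s) for f, s in zip(base, sens[loc])]

measured = model(0.5, 0.30)   # synthetic "measurement", damage at mid-height
post = posterior_grid(measured, model,
                      locations=[0.25, 0.5, 0.75],
                      severities=[i / 100 for i in range(5, 60, 5)])
best = max(post, key=post.get)   # recovers (0.5, 0.3)
```

Fusing several damage features, as the study does, would simply multiply additional likelihood terms into the same posterior.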
"Quality management (QM), in the sense of smoothly functioning processes, is an indispensable part of every office, regardless of its size and core business, and regardless of whether a certification procedure is undergone or not. Over the years, engineering and architecture offices usually put in place numerous individual organizational rules intended to ease everyday work. A systematic compilation, introduction, and review of these rules, however, is often omitted. Those responsible are frequently deterred by the supposed effort of systematically compiling them in a QM manual, and by the supposedly even greater effort of an external review in the course of an external audit with subsequent certification. The benefit that arises from a precisely tailored and actively practised QM manual alone therefore goes unrealized.
The QM standard “Planer am Bau” (PaB) is a sector-specific standard that was developed specifically for engineering and architecture offices and takes only their needs into account. Following clear minimum requirements, the result is a lean manual, adapted to the particularities of each office, which can be audited and certified by TÜV Rheinland. This certificate provides proof of an effective QM system, which can be advantageous, among other things, in VgV tender procedures."
In recent decades, a multitude of concepts and models were developed to understand, assess and predict muscular mechanics in the context of physiological and pathological events.
Most of these models are highly specialized and designed to selectively address fields in, e.g., medicine, sports science, forensics, product design, or CGI; their data are often not transferable to other ranges of application. A single universal model which covers the details of biochemical and neural processes, as well as the development of internal and external force and motion patterns and appearance, would not be practical with regard to the diversity of the questions to be investigated and the task of finding answers efficiently. With reasonable limitations, though, a generalized approach is feasible.
The objective of the work at hand was to develop a model for muscle simulation which covers the phenomenological aspects and thus is universally applicable in domains where, up until now, specialized models were utilized. This includes investigations on active and passive motion, structural interaction of muscles within the body and with external elements, for example in crash scenarios, but also research topics like the verification of in vivo experiments and parameter identification. For this purpose, elements for the simulation of incompressible deformations were studied, adapted, and implemented into the finite element code SLang. Various anisotropic, visco-elastic muscle models were developed or enhanced. The applicability was demonstrated on the basis of several examples, and a general basis for the implementation of further material models was developed and elaborated.
The Marmara Region (NW Turkey) has experienced significant earthquakes (M > 7.0) to date. A destructive earthquake is also expected in the region. To determine the effect of the specific design spectrum, eleven provinces located in the region were chosen according to the Turkey Earthquake Building Code updated in 2019. Additionally, the differences between the previous and updated regulations of the country were investigated. Peak Ground Acceleration (PGA) and Peak Ground Velocity (PGV) were obtained for each province by using earthquake ground motion levels with 2%, 10%, 50%, and 68% probability of exceedance in 50-year periods. The PGA values in the region range from 0.16 to 0.7 g for earthquakes with a return period of 475 years. For each province, a sample of a reinforced-concrete building having two different numbers of stories with the same ground and structural characteristics was chosen. Static adaptive pushover analyses were performed for the sample reinforced-concrete building using each province’s design spectrum. The variations in the earthquake and structural parameters were investigated according to different geographical locations. It was determined that the site-specific design spectrum significantly influences target displacements for performance-based assessments of buildings due to seismicity characteristics of the studied geographic location.
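The exceedance probabilities and return periods cited above are linked by the standard Poisson occurrence assumption, T = -t / ln(1 - P); a quick check of the four code levels:

```python
import math

def return_period(p_exceedance, exposure_years=50):
    """Mean return period (years) for a ground-motion level with
    probability `p_exceedance` of being exceeded within `exposure_years`,
    assuming a Poisson occurrence model (the usual code assumption)."""
    return -exposure_years / math.log(1.0 - p_exceedance)

# The four code levels cited above:
periods = {p: round(return_period(p)) for p in (0.02, 0.10, 0.50, 0.68)}
# 2 % in 50 yr -> ~2475 yr; 10 % in 50 yr -> ~475 yr (the level quoted above)
```

This reproduces the 475-year return period mentioned in the abstract for the 10 %-in-50-years level.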
A Machine Learning Framework for Assessing Seismic Hazard Safety of Reinforced Concrete Buildings
(2020)
Although averting a seismic event and its physical, social, and economic disruption is practically impossible, the advancements in computational science and numerical modeling can equip us to predict its severity, understand the outcomes, and prepare for post-disaster management. Many buildings amidst developed metropolitan areas are aged and still in service. These buildings were designed before the establishment of national seismic codes or without the introduction of construction regulations. In such cases, risk reduction is significant for developing alternatives and designing suitable models to enhance the existing structures’ performance. Such models will be able to classify risks and casualties related to possible earthquakes through emergency preparation. Thus, it is crucial to recognize structures that are susceptible to earthquake vibrations and need to be prioritized for retrofitting. However, each building’s behavior under seismic action cannot be studied by performing a full structural analysis, as this would be unrealistic because of the rigorous computations, long duration, and substantial expenditure. This calls for a simple, reliable, and accurate process known as Rapid Visual Screening (RVS), which serves as a primary screening platform, including an optimum number of seismic parameters and predetermined performance damage conditions for structures. In this study, the damage classification technique was studied, and the efficacy of the Machine Learning (ML) method in damage prediction via a Support Vector Machine (SVM) model was explored. The ML model is trained and tested separately on damage data from four different earthquakes, namely Ecuador, Haiti, Nepal, and South Korea. Each dataset consists of a varying number of input data and eight performance modifiers.
Based on the study and the results, the ML model using SVM classifies the given input data into their respective classes and thus accomplishes the hazard safety evaluation of buildings.
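The supervised classification workflow can be illustrated without any ML library. Note that this sketch swaps the study's SVM for a simple perceptron (a different, simpler linear learner) and uses two toy features instead of the eight performance modifiers:

```python
def train_perceptron(xs, ys, epochs=20):
    """Train a linear classifier with the classic perceptron rule.
    This is a dependency-free stand-in for the study's SVM: the same
    features -> damage-class workflow, with a simpler learner."""
    w = [0.0] * len(xs[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(xs, ys):
            score = sum(wj * xj for wj, xj in zip(w, x)) + b
            if y * score <= 0:            # misclassified: nudge the boundary
                w = [wj + y * xj for wj, xj in zip(w, x)]
                b += y
    return w, b

def predict(w, b, x):
    return 1 if sum(wj * xj for wj, xj in zip(w, x)) + b > 0 else -1

# Toy, linearly separable data standing in for the performance modifiers
# and binary damage classes (all values are illustrative only):
xs = [[0.2, 0.1], [0.4, 0.3], [0.3, 0.2], [1.8, 1.6], [2.0, 1.9], [1.7, 2.1]]
ys = [-1, -1, -1, 1, 1, 1]
w, b = train_perceptron(xs, ys)
accuracy = sum(predict(w, b, x) == y for x, y in zip(xs, ys)) / len(xs)
```

An SVM additionally maximizes the margin of the separating boundary, which is why it generalizes better on the noisy, overlapping damage data the study works with.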
Recently, the demand for residence and usage of urban infrastructure has increased, thereby raising the risk to human lives from natural calamities. The occupancy demand has rapidly increased the construction rate, while inadequate structural design makes buildings more vulnerable. Buildings constructed before the development of seismic codes have an additional susceptibility to earthquake vibrations. Structural collapse causes economic loss as well as setbacks for human lives. The application of different theoretical methods to analyze structural behavior is expensive and time-consuming. Therefore, introducing a rapid vulnerability assessment method to check structural performance is necessary for future developments. The process mentioned earlier is known as Rapid Visual Screening (RVS). This technique has been developed to identify, inventory, and screen structures that are potentially hazardous. Sometimes, poor construction quality does not provide some of the required parameters; in this case, the RVS process turns into a tedious scenario. Hence, to tackle such situations, multiple-criteria decision-making (MCDM) methods for seismic vulnerability assessment open a new gateway. The different parameters required by RVS can be incorporated in MCDM, which evaluates multiple conflicting criteria in decision making in several fields. This paper aims to bridge the gap between RVS and MCDM. Furthermore, to define the correlation between these techniques, the methodologies from the Indian, Turkish, and Federal Emergency Management Agency (FEMA) codes have been implemented, and the effects on the seismic vulnerability of structures have been observed and compared.
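As an illustration of how an MCDM method can rank buildings from RVS-style parameters, here is a TOPSIS sketch. TOPSIS is one common MCDM technique, not necessarily the one applied in the paper, and the criteria, weights, and values are invented:

```python
import math

def topsis(matrix, weights, benefit):
    """Rank alternatives with TOPSIS: normalize, weight, then score each
    alternative by its closeness to the ideal vs. the anti-ideal point.
    matrix[i][j]: score of building i on criterion j;
    benefit[j]: True if larger values of criterion j are better."""
    m, n = len(matrix), len(matrix[0])
    norms = [math.sqrt(sum(matrix[i][j] ** 2 for i in range(m))) for j in range(n)]
    v = [[matrix[i][j] / norms[j] * weights[j] for j in range(n)] for i in range(m)]
    ideal = [max(col) if benefit[j] else min(col) for j, col in enumerate(zip(*v))]
    worst = [min(col) if benefit[j] else max(col) for j, col in enumerate(zip(*v))]
    scores = []
    for row in v:
        d_pos = math.sqrt(sum((x - a) ** 2 for x, a in zip(row, ideal)))
        d_neg = math.sqrt(sum((x - a) ** 2 for x, a in zip(row, worst)))
        scores.append(d_neg / (d_pos + d_neg))
    return scores

# Three buildings; criteria (all invented): structural score (benefit),
# number of storeys (cost), vertical irregularity index (cost).
scores = topsis([[0.8, 4, 0.1], [0.5, 8, 0.4], [0.6, 6, 0.2]],
                weights=[0.5, 0.2, 0.3], benefit=[True, False, False])
best = scores.index(max(scores))   # building 0 ranks safest here
```

A closeness score near 1 marks a building close to the ideal on all weighted criteria; the ranking then prioritizes the others for detailed assessment.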
The gases oxygen and nitrogen are required for a wide range of technical, industrial, biological, and medical purposes. Beyond the classical metalworking and chemical industries, oxygen is used above all in medicine, in the optimization of combustion and wastewater treatment plants, and in fish farming, while nitrogen serves as a protective or inert gas in the plastics industry, aerospace, and fire protection.
Oxygen and nitrogen are supplied almost exclusively by separation from ambient air, which consists of approx. 78 vol% nitrogen, 21 vol% oxygen, and 1 vol% trace gases (Ar, CO2, Ne, He, ...). Air separation processes established on the market are the Linde process, pressure swing adsorption (PSA), and various membrane processes. The required gases are thereby either produced directly on site at the consumer (PSA and polymer membrane processes: low purities) or produced centrally in large plants (Linde process: high purities) and then delivered to the consumer as bottled or tank gases (transport costs).
For smaller consumers with high purity requirements for oxygen or nitrogen, the only option is to purchase the gases as cost-intensive transport gases from central gas suppliers, thereby becoming dependent on them (supply contracts, bottle/tank rentals, ...) and having to maintain their own storage for the required gases (additional effort, storage costs, space requirements).
The aim of this work is to develop ceramic material systems based on high-temperature chemical reactions as reactive oxide ceramics and to investigate them with regard to their possible use for oxygen separation in novel air separation plants.
In principle, such plants are to be modeled on regenerative oxygen separation, with the reactive oxide ceramics serving as fixed-bed material in their reactors, alternately loaded with air and unloaded under vacuum or O2-poor atmospheres.
The use of reactive oxide ceramics, which compared to previous materials would offer higher oxygen exchange quantities and rates combined with a long service life, corrosion resistance, and relatively simple handling, is intended as a step towards an efficient alternative air separation technology.
With reactive oxide ceramics in an air separation plant, it should ideally be possible to produce very pure oxygen and, at the same time, oxygen-free inert gas in small plants, as well as to achieve oxygen enrichment or depletion of air, process gases, or exhaust gases.
Such a technology based on reactive oxide ceramics would therefore have a very wide range of applications and, consequently, enormous economic potential.
The Institute of Structural Engineering, the Institute of Structural Mechanics, and the Institute for Computing, Mathematics and Physics in Civil Engineering at the Faculty of Civil Engineering of the Bauhaus-Universität Weimar presented special topics of structural engineering to highlight the broad spectrum of civil engineering in the field of modeling and simulation.
The summer course sought to impart knowledge and to combine research with a practical context through a challenging and demanding series of lectures, seminars, and project work. Participating students were enabled to deal with advanced methods and their practical application.
The extraordinary format of the interdisciplinary summer school offered foreign and domestic students the opportunity to study advanced developments of numerical methods and sophisticated modelling techniques in different disciplines of civil engineering, going far beyond traditional graduate courses.
The proceedings at hand are the result of the Bauhaus Summer School course "Forecast Engineering", held at the Bauhaus-Universität Weimar in 2018. They summarize the results of the conducted project work and provide the abstracts/papers of the participants' contributions, as well as impressions from the accompanying programme and organized cultural activities.
The design of engineering structures takes place today, as in the past, on the basis of static calculations. The consideration of uncertainties in model quality becomes more and more important with the development of new construction methods and design requirements. In addition to the traditional force-based approaches, experience and observations of the deformation behavior of components and the overall structure under different exposure conditions allow the introduction of novel detection and evaluation criteria.
The proceedings at hand are the result of the Bauhaus Summer School course "Forecast Engineering", held at the Bauhaus-Universität Weimar in 2017. They summarize the results of the conducted project work and provide the abstracts of the participants' contributions, as well as impressions from the accompanying programme and organized cultural activities.
The special character of this course lies in the combination of basic disciplines of structural engineering with applied research projects in the areas of steel and reinforced concrete structures, earthquake and wind engineering, as well as informatics, linking them to mathematical methods and modern tools of visualization. Its innovative character results from the ambitious engineering tasks and advanced modeling demands.
The proceedings at hand are the result of the International Master Course Module: "Nonlinear Analysis of Structures: Wind Induced Vibrations" held at the Faculty of Civil Engineering at Bauhaus-University Weimar, Germany in the summer semester 2019 (April - August). This material summarizes the results of the project work done throughout the semester, provides an overview of the topic, as well as impressions from the accompanying programme.
Wind Engineering is a particular field of Civil Engineering that evaluates the resistance of structures to wind loads. Bridges, high-rise buildings, chimneys, and telecommunication towers may be susceptible to wind-induced vibrations due to their increased flexibility; therefore, a special design is carried out for this aspect. Advances in technology and scientific studies permit research at small scale for more accurate analyses. Scaled models of real structures are therefore built and tested for various construction scenarios. These models are placed in wind tunnels, where experiments are conducted to determine parameters such as critical wind speeds for bridge decks, and static wind coefficients and forces for buildings or bridges. The objective of the course was to offer students insight into the assessment of long-span cable-supported bridges and high-rise buildings under wind excitation. The participating students worked in interdisciplinary teams to deepen their understanding of the behaviour of wind-sensitive structures and the influences on it.
Turbomachinery plays an important role in many cases of energy generation or conversion, and is therefore a promising starting point for optimization in order to increase the efficiency of energy use. In recent years, the use of automated optimization strategies in combination with numerical simulation has become increasingly popular in many fields of engineering. The complex interactions between fluid and solid mechanics encountered in turbomachines on the one hand, and the high computational expense needed to calculate the performance on the other, have, however, prevented a widespread use of these techniques in this field of engineering. The objective of this work was the development of a strategy for efficient metamodel-based optimization of centrifugal compressor impellers. In this context, the main focus is the reduction of the required numerical expense. The central idea followed in this research was the incorporation of preliminary information, acquired from low-fidelity computation methods and empirical correlations, into the sampling process to identify promising regions of the parameter space. This information was then used to concentrate the numerically expensive high-fidelity computations of the fluid dynamic and structural mechanic performance of the impeller in these regions, while still maintaining a good coverage of the whole parameter space. The development of the optimization strategy can be divided into three main tasks. Firstly, the available preliminary information had to be researched and rated. This research identified loss models based on one-dimensional flow physics and empirical correlations as the best-suited method to predict the aerodynamic performance. The loss models were calibrated using available performance data to obtain a high prediction quality.
As no sufficiently exact models for the prediction of the mechanical loading of the impeller could be identified, a metamodel based on finite element computations was chosen for this estimation. The second task was the development of a sampling method which concentrates samples in regions of the parameter space where high quality designs are predicted by the preliminary information, while maintaining a good overall coverage. As available methods like rejection sampling or Markov-chain Monte-Carlo methods did not meet the requirements in terms of sample distribution and input correlation, a new multi-fidelity sampling method called “Filtered Sampling” has been developed. The last task was the development of an automated computational workflow. This workflow encompasses geometry parametrization, geometry generation, grid generation, and computation of the aerodynamic performance and the structural mechanic loading. Special emphasis was put on the development of a geometry parametrization strategy based on fluid mechanic considerations to prevent the generation of physically inexpedient designs. Finally, the optimization strategy, which utilizes the previously developed tools, was successfully employed to carry out three optimization tasks. The efficiency of the method was proven by the first and second test cases, where an existing compressor design was optimized by the presented method. The results were comparable to optimizations which did not take preliminary information into account, while the required computational expense could be halved. In the third test case, the method was applied to generate a new impeller design. In contrast to the previous examples, this optimization featured larger variations of the impeller designs. Therefore, the applicability of the method to parameter spaces with significantly varying designs could be proven, too.
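The idea behind the multi-fidelity sampling step can be sketched as acceptance sampling driven by a low-fidelity score with a uniform floor for coverage. This is only a schematic reconstruction of the stated concept, not the thesis' actual "Filtered Sampling" algorithm, and the score function is a hypothetical stand-in for the calibrated loss models:

```python
import random

def filtered_sample(n, low_fidelity_score, floor=0.2, seed=1):
    """Schematic multi-fidelity sampler: candidates drawn uniformly are
    accepted with a probability blending a normalized low-fidelity
    quality score (in [0, 1]) with a uniform floor, so promising regions
    are densified while the whole space stays covered."""
    rng = random.Random(seed)
    accepted = []
    while len(accepted) < n:
        x = rng.random()                      # 1-D design variable in [0, 1)
        p_accept = floor + (1 - floor) * low_fidelity_score(x)
        if rng.random() < p_accept:
            accepted.append(x)
    return accepted

# Hypothetical loss-model score: designs near x = 0.7 look promising.
score = lambda x: max(0.0, 1.0 - 8.0 * abs(x - 0.7))
samples = filtered_sample(400, score)
near = sum(0.6 < x < 0.8 for x in samples)    # densified around the optimum
```

Plain uniform sampling would place about 20 % of the points in that window; the score-driven acceptance roughly doubles the density there while the floor keeps the rest of the space represented.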
Components of structural glazing have to meet different requirements and resist various impacts, depending on the field of application. Within an international research project of the EU innovation program Horizon 2020, special glass panes with a fluid circulating in capillaries are being developed to exploit solar energy. Major influences on this glazing are UV irradiation and the contact with the fluid, which affect the mechanical and optical durability of the bonding material within the glass setup. With regard to visual requirements, acrylate adhesives and EVA films are analyzed as possible bonding materials by destructive and non-destructive testing methods. Two types of specimens are presented for obtaining the mechanical behavior and the surface appearance of the bonding material.
This paper describes the challenges of forecasting traffic-induced pollutant immissions. Its focus is the development and construction of a simulation environment for the evaluation of environmentally oriented traffic-management strategies. The simulation environment is developed across the three domains of traffic, emission, and immission, and is first applied to the evaluation of traffic measures for the Friedberger Landstraße in Frankfurt am Main.
Identification of flaws in structures is a critical element in the management of maintenance and quality assurance processes in engineering. Nondestructive testing (NDT) techniques based on a wide range of physical principles have been developed and are used in common practice for structural health monitoring. However, basic NDT techniques are usually limited in their ability to provide accurate information on the locations, dimensions, and shapes of flaws. One alternative for extracting additional information from the results of NDT is to combine it with a computational model that provides a detailed analysis of the physical process involved and enables the accurate identification of the flaw parameters. The aim here is to develop strategies to uniquely identify cracks in two-dimensional (2D) structures under dynamic loadings.
A local NDT technique, combining the eXtended Finite Element Method (XFEM) with dynamic loading in order to identify cracks in structures quickly and accurately, is developed in this dissertation. The Newmark-β time integration method with Rayleigh damping is used for the time integration. We apply the Nelder-Mead (NM) and Quasi-Newton (QN) methods for identifying the crack tip in a plate. The inverse problem is solved iteratively, with XFEM used to solve the forward problem in each iteration. For a time-harmonic excitation with a single frequency and a short-duration signal measured along part of the external boundary, the crack is detected through the solution of an inverse time-dependent problem. Compared to static loads, we show that dynamic loads are more effective for crack detection problems. Moreover, we tested different dynamic loads and found that the NM method works more efficiently under harmonic loads than under pounding loads, while the QN method achieves almost the same results for both load types.
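The iterative structure described above (a forward solver wrapped in a derivative-free minimizer) can be sketched as follows. This is a minimal illustration only: the toy `forward_response` function stands in for the actual XFEM forward solver, and the two-parameter crack-tip guess is an assumption for demonstration.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical toy forward model standing in for the XFEM solver:
# it maps a crack-tip guess (x, y) to a synthetic boundary signal.
def forward_response(tip, t):
    x, y = tip
    return np.sin(2 * np.pi * t + x) * np.exp(-y * t)

t = np.linspace(0.0, 1.0, 200)
true_tip = np.array([0.4, 1.3])
measured = forward_response(true_tip, t)  # "measurement" along the boundary

# Misfit functional: L2 difference between measured and simulated signals.
def misfit(tip):
    return np.sum((forward_response(tip, t) - measured) ** 2)

# Nelder-Mead minimization of the misfit, as in the NM variant above.
res = minimize(misfit, x0=[0.0, 0.5], method="Nelder-Mead",
               options={"xatol": 1e-8, "fatol": 1e-12})
estimated_tip = res.x  # estimate of the crack-tip parameters
```

In the dissertation's setting, each `misfit` evaluation would require a full transient XFEM solve, which is why the choice of optimizer and load type matters for efficiency.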
A global strategy, combining Multilevel Coordinate Search (MCS) with XFEM (XFEM-MCS) under dynamic electric loading, is proposed in this dissertation to detect multiple cracks in 2D piezoelectric plates. The Newmark-β method is employed for the time integration, and in each iteration the forward problem is solved by XFEM for the various cracks. The objective functional is minimized using the global search algorithm MCS. The test problems show that the XFEM-MCS algorithm under dynamic electric loading can be effectively employed for multiple-crack detection in piezoelectric materials, and it proves to be robust in identifying defects in piezoelectric structures. Fiber-reinforced composites (FRCs) are extensively applied in practical engineering because of their high stiffness and strength. Experiments reveal a so-called interphase zone, i.e. the space between the outer interface of the fiber and the inner interface of the matrix. The interphase strength between the fiber and the matrix strongly affects the mechanical properties as a result of the large interface-to-volume ratio. For the purpose of understanding the mechanical properties of FRCs with a functionally graded interphase (FGI), a closed-form expression for the interface strength between a fiber and a matrix is obtained in this dissertation using a continuum modeling approach based on van der Waals (vdW) forces. Based on the interatomic potential, we develop a new modified nonlinear cohesive law, which is applied to study the interface delamination of FRCs with FGI under different loadings. The analytical solutions show that the delamination behavior strongly depends on the interphase thickness, the fiber radius, and the Young's moduli and Poisson's ratios of the fiber and the matrix. Thermal conductivity is the property of a material to conduct heat.
With the development and deepening research of 2D materials, especially graphene and molybdenum disulfide (MoS2), the thermal conductivity of 2D materials has attracted wide attention. The thermal conductivity of graphene nanoribbons (GNRs) is found to decrease under tensile strain in classical molecular dynamics (MD) simulations. Hence, strain effects in graphene can play a key role in the continuous tunability and applicability of its thermal conductivity at the nanoscale, while the resulting loss of thermal conductivity is an obstacle for thermal management applications. Up to now, the thermal conductivity of graphene under shear deformation has not been investigated. From a practical point of view, good thermal management of GNRs has significant potential for future GNR-based thermal nanodevices, which can greatly improve the performance of nanosized devices through better heat dissipation. Since graphene is a thin membrane structure, it is also important to understand its wrinkling behavior under shear deformation. MoS2 exists in the stable semiconducting 1H phase (1H-MoS2), while the metallic 1T phase (1T-MoS2) is unstable at ambient conditions. Much attention has been focused on studying the nonlinear optical properties of 1H-MoS2. In very recent research, the 1T-type monolayer crystals of TMDCs, MX2 (MoS2, WS2, ...), were reported to have an intrinsic in-plane negative Poisson's ratio. At nearly the same time, unprecedented long-term (>3 months) air stability of 1T-MoS2 was achieved by using the donor lithium hydride (LiH). It is therefore very important to study the thermal conductivity of 1T-MoS2.
The thermal conductivity of graphene under shear strain is systematically studied in this dissertation by MD simulations. The results show that, in contrast to the dramatic decrease of the thermal conductivity of graphene under uniaxial tension, the thermal conductivity of graphene is not sensitive to shear strain and decreases by only 12–16%. A wrinkle evolves when the shear strain is around 5–10%, but the thermal conductivity barely changes.
The thermal conductivities of single-layer 1H-MoS2 (1H-SLMoS2) and single-layer 1T-MoS2 (1T-SLMoS2) with different sample sizes, temperatures, and strain rates have been studied systematically in this dissertation. We find that the thermal conductivities of 1H-SLMoS2 and 1T-SLMoS2 in both the armchair and zigzag directions increase with increasing sample length, while increasing the sample width has only a minor effect on the thermal conduction of these two structures. With respect to the size effect, the thermal conductivity of 1H-SLMoS2 is smaller than that of 1T-SLMoS2. Furthermore, the temperature results show that the thermal conductivities of both 1H-SLMoS2 and 1T-SLMoS2 decrease with increasing temperature. The thermal conductivities of 1H-SLMoS2 and 1T-SLMoS2 are nearly the same (difference <6%) in both chiral orientations at corresponding temperatures, especially in the armchair direction (difference <2.8%). Moreover, we find that strain affects the thermal conductivities of 1H-SLMoS2 and 1T-SLMoS2 differently: the thermal conductivity decreases with increasing tensile strain for 1T-SLMoS2, while it fluctuates with growing strain for 1H-SLMoS2. Finally, we find that the thermal conductivity of same-sized 1H-SLMoS2 is similar to that of the strained 1H-SLMoS2 structure.
Matrix-free voxel-based finite element method for materials with heterogeneous microstructures
(2019)
Modern image detection techniques such as micro-computed tomography (μCT), magnetic resonance imaging (MRI), and scanning electron microscopy (SEM) provide us with high-resolution images of the microstructure of materials in a non-invasive and convenient way. They form the basis for the geometrical models of high-resolution analysis, so-called image-based analysis.
However, especially in 3D, discretizations of these models easily reach 100 million degrees of freedom and require extensive hardware resources in terms of main memory and computing power to solve the numerical model. Consequently, the focus of this work is to combine and adapt numerical solution methods so as to reduce first the memory demand and then the computation time, thereby enabling the execution of image-based analysis on modern desktop computers. The numerical model is therefore a straightforward grid discretization of the voxel-based (pixels extended by a third dimension) geometry, which omits boundary detection algorithms and allows a reduced storage of the finite element data structures and a matrix-free solution algorithm.
This in turn reduces the effort of almost all applied grid-based solution techniques and results in memory-efficient and numerically stable algorithms for the microstructural models. Two variants of the matrix-free algorithm are presented. The efficient iterative solution method of conjugate gradients is used with matrix-free applicable preconditioners such as the Jacobi method and the especially suited multigrid method. The jagged material boundaries of the voxel-based mesh are smoothed through embedded boundary elements, which carry different material information at the integration points and are integrated sub-cell-wise, though without additional boundary detection. The efficiency of the matrix-free methods is thereby retained.
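The core idea of a matrix-free, Jacobi-preconditioned conjugate gradient solver can be sketched as follows. This is a generic illustration, not the thesis implementation: the operator is passed as a callback (in voxel FEM this would be an element-by-element loop), and a 1D Laplacian stencil stands in for the stiffness operator.

```python
import numpy as np

# Preconditioned conjugate gradients that never forms the system matrix:
# apply_A is a callback computing A @ x; diag_A supplies the Jacobi
# preconditioner M^{-1} = diag(A)^{-1}.
def pcg(apply_A, b, diag_A, tol=1e-10, max_iter=1000):
    x = np.zeros_like(b)
    r = b - apply_A(x)
    z = r / diag_A
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = apply_A(p)
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = r / diag_A
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

# Toy stand-in operator: 1D Laplacian stencil applied without a matrix.
n = 50
def apply_laplacian(v):
    out = 2.0 * v
    out[1:] -= v[:-1]
    out[:-1] -= v[1:]
    return out

b = np.ones(n)
x = pcg(apply_laplacian, b, diag_A=np.full(n, 2.0))
```

The memory saving comes from never storing A: only a handful of vectors of the problem size are kept, which is what makes 100-million-DOF voxel models feasible on desktop hardware.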
The high resource demand of the building sector clearly indicates the need to search for alternative, renewable, and energy-efficient materials. This work presents paper-laminated sandwich elements with a core of corrugated paperboard that can serve as architectural components with load-bearing capacity after a linear folding process. Conventional methods use either paper tubes or glued layers of honeycomb panels. In contrast, the folded components are extremely lightweight, provide material strength exactly where it is statically required, and offer many possibilities for design variants. After removing strips of the paper lamination, the sandwich can be folded linearly at that position: without the resistance of the missing paper, the sandwich core can easily be compressed. The final folding angle correlates with the width of the removed paper strip, so the angle can be described by a simple geometric equation. The geometrical basis for the production of folded sandwich elements was established, and many profile types such as triangular, square, or rectangular shapes were generated. The method allows the easy planning and fast production of components that can be used in the construction sector. A triangular profile was used to create a load-bearing frame as the supporting structure for an experimental building. This first permanent building made completely of corrugated cardboard was evaluated in a two-year test to confirm the efficiency of the developed components. In addition to the frame shown in this paper, large-scale sandwich elements with a core of folded components can be used to fabricate lightweight ceilings and large-scale sandwich components. The method enables the efficient production of linearly folded cardboard elements which can replace conventional wooden components such as beams, pillars, or frames and brings a fully recycled material into the context of architectural construction.
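The paper states only that the folding angle follows a simple geometric relation to the removed strip width, without reproducing the equation. Purely as an illustrative assumption (not the authors' published formula), one plausible model is that a strip of width w removed from one face of a core of thickness t closes when folded, giving w = 2·t·tan(α/2):

```python
import math

# ASSUMED illustrative relation, not taken from the paper: when the fold
# closes the gap left by the removed strip, w = 2 * t * tan(alpha / 2),
# so alpha = 2 * atan(w / (2 * t)).
def folding_angle(strip_width_mm, core_thickness_mm):
    return 2.0 * math.atan(strip_width_mm / (2.0 * core_thickness_mm))

# e.g. a 20 mm strip on a 50 mm core (hypothetical numbers)
angle_deg = math.degrees(folding_angle(20.0, 50.0))
```

Whatever the exact relation, the key design consequence is the same: the fold angle is controlled entirely by the strip width, which is what makes the planning of profiles parametric.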
Occupant needs with regard to residential buildings are not well known due to a lack of representative scientific studies. To address this lack of data, a large-scale study was carried out using a Post Occupancy Evaluation of 1,416 building occupants. Several criteria describing the needs of occupants were evaluated with regard to their subjective level of relevance. Additionally, we investigated the degree to which deficiencies subjectively exist, and the degree to which occupants were able to accept them. From the data obtained, a hierarchy of criteria was created. It was found that building occupants ranked the physiological needs of air quality and thermal comfort the highest. Health hazards such as mould and contaminated building materials were unacceptable for occupants, while other deficiencies were more likely to be tolerated. Occupant satisfaction was also investigated. We found that most occupants can be classified as satisfied, although some differences do exist between different populations. To explain the relationship between the constructs of what we call relevance, acceptance, deficiency, and satisfaction, we then created an explanatory model. Using correlation and regression analysis, the validity of the model was then confirmed by applying the collected data. The results of the study are relevant both in shaping further research and in providing guidance on how to maximize tenant satisfaction in real estate management.
This text describes the intensive investigation of honeycomb panels made of paper materials which, through folding processes, can assume new spatial states and thus extend their original range of applications. The solutions presented operate at the intersection of architecture and structural engineering: the folded components are not only extremely load-bearing but also possess an aesthetic form. The developed processes and constructions are presented at a high architectural level and verified with simple engineering methods. Geometric procedures are applied to find solutions, as are constructive rules of thumb and research from architecture and science.
The focus of the work lies on the investigation of folds in honeycomb panels. In the course of engaging with the topic, however, many further aspects appeared very interesting and worth pursuing. As the theoretical foundation of this work, the historical development and the societal significance of paper and paper materials are therefore analyzed and their production processes examined. This approach makes it possible to assess the potential and significance of paper as a material. It strengthens the context of the work and leads to interesting future research directions.
Intensive investigations are devoted to the geometric determination of folds in honeycomb panels made of paper materials and to their manifestation as structural components. The static properties of the elements and their constructional potential are also explored and documented. Important impulses from research and technology flow into the study and allow the results to be situated in an architectural context. Test series and material studies on prototypes corroborate the results of virtual and computational studies. Concepts for the parametric calculation and visualization of the research results are presented, pointing toward viable planning aids for industry. Numerous test series on a wide range of sealing concepts culminate in the realization of a remarkable experimental building. It permits the long-term investigation of the developed components under realistic conditions and confirms their performance. This enables not only permanent monitoring and evaluation of the performance data, but also provides visible proof that efficient, high-quality architecture can be realized with paper materials, exploiting the enormous design potential of folded honeycomb panels.
As part of this dissertation, an analytical calculation method was developed for determining the capacity of signalized intersection approaches with additional queuing lanes released during the same green time. The method is distinguished by the following properties:
a) simple calculation procedure: it uses a simple linear calculation approach that builds on the fundamental relationships of traffic flow in signalized approaches,
b) broad field of application: the calculation method can be applied to approaches with up to two additional queuing lanes,
c) high accuracy: in a direct comparison it was shown, among other things, that the derived analytical calculation method yields more accurate capacity values than the calculation method of the HBS 2015.
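The dissertation's linear model itself is not reproduced in the abstract. As background only, the elementary capacity relation for a signalized approach on which such methods build (capacity = saturation flow × green-time ratio) can be sketched as:

```python
# Elementary capacity of a signalized approach (standard textbook relation,
# not the dissertation's extended model for additional queuing lanes):
# capacity [veh/h] = saturation flow [veh/h] * effective green / cycle time.
def approach_capacity(saturation_flow_vph, effective_green_s, cycle_s):
    return saturation_flow_vph * effective_green_s / cycle_s

cap = approach_capacity(1800.0, 30.0, 90.0)  # -> 600.0 veh/h
```

The extension investigated in the thesis adds the contribution of short additional lanes that fill and discharge during the same green phase, which this basic relation does not capture.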
This dissertation is devoted to the theoretical development and experimental laboratory verification of a new damage localization method: the state projection estimation error (SP2E). This method is based on the subspace identification of mechanical structures, Krein space based H-infinity estimation, and oblique projections. To explain the SP2E method, several theories are discussed, and laboratory experiments have been conducted and analysed.
A fundamental approach of structural dynamics is outlined first by explaining mechanical systems based on first principles. Following that, a fundamentally different approach, subspace identification, is comprehensively explained. While both theories, first-principles and subspace-identification-based descriptions of mechanical systems, may be seen as widespread methods, the techniques that follow are new and barely known. Therefore, the indefinite quadratic estimation theory is explained. Based on a Popov function approach, this leads to the Krein space based H-infinity theory. Subsequently, a new method for damage identification, namely SP2E, is proposed. Here, the introduction of a difference process, its analysis by the average process power, and the application of oblique projections are discussed in depth.
Finally, the new method is verified in laboratory experiments. To this end, the identification of a laboratory structure at Leipzig University of Applied Sciences is elaborated. Structural alterations were then applied experimentally and subsequently localized by SP2E. In the end, four experimental sensitivity studies are shown and discussed. For each measurement series the structural alteration was increased, which was successfully tracked by SP2E. The experimental results are plausible and in accordance with the developed theories. By repeating these experiments, the applicability of SP2E for damage localization is experimentally proven.
Polymeric nanocomposites (PNCs) are considered for numerous nanotechnology applications such as nano-biotechnology, nano-systems, nano-electronics, and nano-structured materials. Commonly, they are formed by a polymer (epoxy) matrix reinforced with a nano-sized filler. The addition of rigid nanofillers to the epoxy matrix offers great improvements in fracture toughness without sacrificing other important thermo-mechanical properties. The physics of fracture in PNCs is rather complicated and is influenced by different parameters. Uncertainty in the predicted output is expected as a result of stochastic variance in the factors affecting the fracture mechanism. Consequently, evaluating the improved fracture toughness of PNCs is a challenging problem.
Artificial neural network (ANN) and adaptive neuro-fuzzy inference system (ANFIS) models have been employed to predict the fracture energy of polymer/particle nanocomposites. The ANN and ANFIS models were constructed, trained, and tested on a collection of 115 experimental datasets gathered from the literature. The performance evaluation indices of the developed ANN and ANFIS models showed relatively small errors, with high coefficients of determination (R2) and low root mean square error and mean absolute percentage error.
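The three evaluation indices named above can be computed as follows. This sketch only reproduces the metrics, not the network architectures, and the data below are placeholders, not values from the study.

```python
import numpy as np

# Performance evaluation indices used to rate a surrogate model's predictions.
def evaluation_indices(y_true, y_pred):
    resid = y_true - y_pred
    ss_res = np.sum(resid ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    r2 = 1.0 - ss_res / ss_tot                       # coefficient of determination
    rmse = np.sqrt(np.mean(resid ** 2))              # root mean square error
    mape = 100.0 * np.mean(np.abs(resid / y_true))   # mean absolute percentage error
    return r2, rmse, mape

# Toy data standing in for measured vs. predicted fracture energies.
y = np.array([120.0, 150.0, 180.0, 210.0])
yhat = np.array([118.0, 153.0, 176.0, 214.0])
r2, rmse, mape = evaluation_indices(y, yhat)
```

A high R2 together with low RMSE and MAPE is the combination the abstract reports for both the ANN and ANFIS models.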
In the framework of uncertainty quantification of PNCs, a sensitivity analysis (SA) has been conducted to examine the influence of uncertain input parameters on the fracture toughness of polymer/clay nanocomposites. The phase-field approach is employed to predict the macroscopic properties of the composite, considering six uncertain input parameters. The efficiency, robustness, and repeatability of five different SA methods are compared and evaluated comprehensively.
The Bayesian method is applied to develop a methodology for evaluating the performance of different analytical models used to predict the fracture toughness of polymer/particle nanocomposites. The developed method considers model and parameter uncertainties based on different reference data (experimental measurements) taken from the literature. Three analytical models differing in theory and assumptions were examined. The coefficients of variation of the model predictions relative to the measurements are calculated using the approximated optimal parameter sets. Then, the model selection probability is obtained with respect to the different reference data.
Stochastic finite element modeling is implemented to predict the fracture toughness of polymer/particle nanocomposites. For this purpose, a 2D finite element model containing an epoxy matrix and rigid nanoparticles surrounded by an interphase zone is generated. Crack propagation is simulated by the cohesive segments method with phantom nodes. Considering the uncertainties in the input parameters, a polynomial chaos expansion (PCE) surrogate model is constructed, followed by a sensitivity analysis.
Development and investigation of alternative dicalcium silicate binders based on α-C2SH
(2018)
To limit climate change, CO2 emissions must be reduced drastically [100]. By 2050, cement production is to achieve savings of 51–60 %, down to 0.425–0.350 t CO2 per tonne of cement [7]. To reach this goal, alternative binder concepts are necessary [70].
This work is devoted to alternative, highly reactive dicalcium silicate binders produced by the thermal activation of α-dicalcium silicate hydrate (α-C2SH). α-C2SH is a crystalline C-S-H phase that can be produced in a hydrothermal process, for example from quicklime and quartz. Thermal activation can take place at very low temperatures (from about 420 °C) and yields a multi-phase C2S binder. Particularly reactive constituents can include x-C2S and X-ray-amorphous fractions. In addition, β-C2S, γ-C2S, and dellaite (Ca6(SiO4)(Si2O7)(OH)2) can form.
The thesis first summarizes the state of knowledge on the polymorphism and hydration of C2S. Known C2S-based binder concepts are presented and evaluated.
The production of C2S binders is investigated experimentally on a laboratory scale, using different autoclaves and a muffle furnace. The production parameters are optimized with regard to phase composition and reactivity. The binders are characterized by quantitative X-ray phase analysis (QXRD), scanning electron microscopy (SEM), N2 adsorption (BET method), helium pycnometry, thermal analysis (TGA/DSC), and 29Si MAS as well as 29Si-1H CP/MAS NMR spectroscopy. The hydration behavior of the binders is investigated primarily by heat-flow calorimetry. In addition, in situ and ex situ XRD, TGA/DSC, and SEM investigations are carried out. Two binders are used to demonstrate the ability to achieve high strengths. Finally, the energy demand and CO2 emissions of producing the investigated C2S binders are estimated.
The results show that a low calcination temperature and a low water-vapor partial pressure during thermal activation are decisive for a high binder reactivity. Furthermore, the hydrothermally produced α-C2SH must have the highest possible specific surface area. These parameters influence the phase composition and the phase-specific reactivity. Calcination temperatures of about 420–500 °C yield highly reactive binders, referred to in this work as low-temperature C2S binders. Temperatures of about 600–800 °C yield binders of lower reactivity, referred to here as high-temperature C2S. Higher calcination temperatures (1000 °C) yield binders that show no hydraulic activity within the first three days.
The investigated binders can reach very high reaction rates. For some binders, heat-flow calorimetry indicates nearly complete conversion within three days. For one binder, XRD demonstrated the complete consumption of x-C2S within three days. For a binder investigated by in situ XRD and heat-flow calorimetry, it was shown that the phases hydrate primarily in the order X-ray-amorphous > x-C2S > β-C2S > γ-C2S. The hydration products are needle-like C-S-H phases and portlandite.
Production by thermal activation of α-C2SH leads to platy binder particles that partly enclose interstitial spaces and pores between the individual particles. Very high water/binder ratios (e.g. 1.4) are therefore required to obtain a workable binder paste. The water demand can be reduced roughly to the level of cement by grinding.
The compressive strength development was investigated on two low-temperature C2S composite binders containing 40 % limestone powder and 40 % ground granulated blast-furnace slag, respectively. Based on theoretical considerations of porosity as a function of the water/binder ratio, this ratio was set to 0.3. A workable mortar was obtained by adding a PCE superplasticizer. The strength development is very fast. The limestone binder reached 46 N/mm² after two days, with no further strength increase up to day 28. The slag binder reached 62 N/mm² after two days; owing to the slag reaction, the strength increased further to 85 N/mm² after 28 days.
Energy consumption and CO2 emissions were estimated for the production process of low-temperature C2S binders. The estimates indicate that, per unit of binder, no substantial savings compared with Portland cement production are possible. For the actual emissions, however, the performance of the binder must additionally be taken into account. Performance can be understood as the amount of binder required per m³ of concrete to achieve specific strength, durability, and workability properties.
From various publications [94, 201, 206], the hypothesis was derived that the performance of a binder is essentially determined by the amount of C-S-H formed during hydration. An exceptionally high performance is therefore expected for low-temperature C2S binders.
Based on this performance hypothesis, the estimated CO2 emissions of low-temperature C2S binders decrease, yielding a possible savings potential of 42 % compared with Portland cement.
The roll-out of high-speed digital networks is characterized by novel requirements on the planning process. These requirements in turn call for new paradigms that enable an efficient and at the same time accurate planning of area-wide fiber-optic networks. Recurring planning tasks can be executed more efficiently and more accurately through targeted computer-aided automation than is possible with previous planning concepts. This working paper describes the computer-aided execution of a planning process based on five fundamental, iterative planning steps and gives recommendations for the efficient and accurate planning of fiber-optic networks. The approach presented here enables network operators and investors to economically prioritize the roll-out of arbitrary residential and commercial areas on the reliable basis of sound factual knowledge.
A coupled thermo-hydro-mechanical model of jointed hard rock for compressed air energy storage
(2014)
Renewable energy resources such as wind and solar are intermittent, which causes instability when they are connected to the electricity grid. Compressed air energy storage (CAES) provides an economically and technically viable solution to this problem by utilizing subsurface rock caverns to store the electricity generated by renewable energy in the form of compressed air. Though CAES has been used for over three decades, it has been restricted to salt rock or aquifers for reasons of air tightness. In this paper, the technical feasibility of utilizing hard rock for CAES is investigated by means of coupled thermo-hydro-mechanical (THM) modelling of non-isothermal gas flow. Governing equations are derived from the rules of energy balance, mass balance, and static equilibrium. Cyclic volumetric mass source and heat source models are applied to simulate the gas injection and production. The evaluation is carried out for intact rock and for rock with a discrete crack, respectively. In both cases, the heat and pressure losses using air mass control and supplementary air injection are compared.
Tensile and compressive strains can greatly affect the thermal conductivity of graphene nanoribbons (GNRs). However, the effect of shear strain, which is also one of the main strain effects, has not yet been studied systematically. In this work, we employ reverse non-equilibrium molecular dynamics (RNEMD) to systematically study the thermal conductivity of GNRs (with a model size of 4 nm × 15 nm) under shear strain. Our studies show that the thermal conductivity of GNRs is not sensitive to shear strain, decreasing by only 12–16% before the pristine structure breaks. Furthermore, the phonon frequencies and the changes in the microstructure of the GNRs, such as bond angles and bond lengths, are analyzed to explain this tendency. The results show that the main influence of shear strain is on the in-plane phonon density of states (PDOS), whose G band (higher-frequency peaks) moves to lower frequencies, thus decreasing the thermal conductivity. The unique thermal properties of GNRs under shear strain suggest great potential for graphene nanodevices and for thermal management and thermoelectric applications.
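In an RNEMD run, the thermal conductivity is obtained in a post-processing step by Fourier's law from the imposed heat flux and the measured steady-state temperature gradient. A minimal sketch of that step (the numbers below are placeholders, not values from the paper):

```python
# Fourier-law post-processing of an RNEMD simulation: the magnitude of the
# imposed heat flux J divided by the steady-state temperature gradient
# gives the thermal conductivity, kappa = J / (dT/dx).
def thermal_conductivity(heat_flux_w_per_m2, dT_dx_k_per_m):
    return heat_flux_w_per_m2 / dT_dx_k_per_m

kappa = thermal_conductivity(5.0e9, 2.0e7)  # -> 250.0 W/(m K)
```

The reported 12–16% decrease under shear corresponds to a change in this kappa at fixed flux, i.e. a steeper temperature gradient developing across the strained ribbon.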
The Carbon journal is pleased to introduce a themed collection of recent articles in the area of computational carbon nanoscience. This virtual special issue was assembled from previously published Carbon articles by Guest Editors Quan Wang and Behrouz Arash, and can be accessed as a set in the special issue section of the journal website homepage: www.journals.elsevier.com/carbon. The article below by our guest editors serves as an introduction to this virtual special issue, and also a commentary on the growing role of computation as a tool to understand the synthesis and properties of carbon nanoforms and their behavior in composite materials.
Water content is a key parameter to monitor in nuclear waste repositories such as the planned underground repository in Bure, France, in the Callovo-Oxfordian (COx) clay formation. High-frequency electromagnetic (HF-EM) measurement techniques, such as time or frequency domain reflectometry, offer useful tools for the quantitative estimation of water content in porous media. However, despite the efficiency of HF-EM methods, the relationship between water content and the dielectric material properties needs to be characterized. Moreover, the high amount of swelling clay in the COx formation leads to dielectric relaxation effects which induce strong dispersion coupled with high absorption of EM waves. Against this background, the dielectric relaxation behavior of the clay rock was studied at frequencies from 1 MHz to 10 GHz with the network analyzer technique in combination with coaxial transmission line cells. For this purpose, undisturbed and disturbed clay rock samples were conditioned to achieve a water saturation range from 0.16 to near saturation. The relaxation behavior was quantified with a generalized fractional relaxation model, under consideration of an apparent direct-current conductivity, assuming three relaxation processes: a high-frequency water process and two interface processes related to interactions between the aqueous pore solution and the mineral particles (adsorbed/hydrated water relaxation, counter-ion relaxation, and Maxwell-Wagner effects). The frequency-dependent HF-EM properties were further modeled with a novel hydraulic-mechanical-electromagnetic coupling approach developed for soils. The results show the potential of HF-EM techniques for the quantitative monitoring of the hydraulic state in underground repositories in clay formations.
This paper presents a novel numerical procedure for 2D linear elastic fracture mechanics based on the combination of an edge-based smoothed finite element method (ES-FEM) with a phantom-node method. In the standard phantom-node method, cracks are formulated by adding phantom nodes, and the cracked element is replaced by two new superimposed elements. This approach is quite simple to implement into existing explicit finite element programs. The shape functions associated with discontinuous elements are similar to those of standard finite elements, which simplifies implementation in existing codes. The phantom-node method allows discontinuities to be modeled at arbitrary locations in the mesh. The ES-FEM model possesses a close-to-exact stiffness, much softer than that of lower-order finite element methods (FEM). Taking advantage of both the ES-FEM and the phantom-node method, we introduce an edge-based strain smoothing technique for the phantom-node method. Numerical results show that the proposed method achieves high accuracy compared with the extended finite element method (XFEM) and other reference solutions.
A simple multiscale analysis framework for heterogeneous solids based on a computational homogenization technique is presented. The macroscopic strain is linked kinematically to the boundary displacement of a circular or spherical representative volume which contains the microscopic information of the material. The macroscopic stress is obtained from the energy principle between the macroscopic and microscopic scales. The new method is applied to several standard examples to demonstrate the accuracy and consistency of the proposed approach.
Spatial time domain reflectometry (spatial TDR) is a new measurement method for determining water content profiles along elongated probes (transmission lines). The method is based on the inverse modeling of TDR reflectograms using an optimization algorithm. By using flat ribbon cables it is possible to take two independent TDR measurements from both ends of the probe, which are used to improve the spatial information content of the optimization results and to account for effects caused by electrical conductivity. The method has been used for monitoring water content distributions on a full-scale levee model made of well-graded clean sand. Flood simulation tests, irrigation tests, and long-term observations were carried out on the model. The results show that spatial TDR is able to determine water content distributions with a spatial resolution of about ±3 cm compared to pore pressure measurements and an average deviation of ±2 vol % compared to measurements made using another independent TDR measurement system.
A known phenomenon during laser welding of thin sheets is the deformation caused by thermally induced stresses. This deformation can result in a change of the gap width between the welded parts, which leads to an unstable welding process. Inducing displacements by means of a second heat source compensates for the change in gap width, thereby optimizing the welding process. The base material is 1 mm thick austenitic stainless steel 1.4301, which is welded by a CO2 laser; the second heat source is a diode laser. The gap between the welded parts was set between 0.05 mm and 0.1 mm. The influence of the second heat source on the welding process and the welding result is described. The use of a second heat source allows a larger gap width to be set prior to the welding process. The results of the numerical simulation were found to correspond to those of the experiments.
Strain measurement is important in mechanical testing. A wide variety of techniques exists for measuring strain in the tensile test: the strain gauge, the extensometer, stress and strain determined from machine crosshead motion, the geometric Moiré technique, optical strain measurement techniques, and others. Each technique has its own advantages and disadvantages. The purpose of this study is to quantitatively compare strain measurement techniques. To carry out the tensile test experiments for S 235, sixty samples were cut from the web of an I-profile in longitudinal and transverse directions in four different dimensions. The geometry of the samples was analysed by 3D scanner and vernier caliper. In addition, strain values were determined using strain gauge, extensometer and machine crosshead motion. The three strain measurement techniques are compared quantitatively based on the calculated mechanical properties (modulus of elasticity, yield strength, tensile strength, percentage elongation at maximum force) of structural steel, with statistical methods used to evaluate the results. The extensometer and strain gauge provided reliable data; however, the extensometer offers several advantages over the strain gauge and crosshead motion for testing structural steel in tension. Furthermore, an estimation of measurement uncertainty is presented for the basic material parameters extracted through strain measurement.
The Stonecutters and Sutong Bridges have pushed the world record for the main span length of cable-stayed bridges to over 1000 m. The design of these bridges, both located in typhoon-prone regions, is strongly influenced by wind effects during their erection. Rigorous wind tunnel test programmes were devised and executed to determine the aerodynamic behaviour of the structures in the most critical erection conditions. Testing was augmented by analytical and numerical analyses to verify the safety of the structures throughout construction and to ensure that no serviceability problems would affect the erection process. This paper outlines the wind properties assumed for the bridge sites, the experimental test programme with some of its results, the dynamic properties of the bridges during free cantilevering erection and the assessment of their aerodynamic performance. Along the way, it discusses the similarities and some revealing differences between the two bridges in terms of their dynamic response to wind action.
The lack of information technology applications on construction projects leads to a complex flow of data during the project life cycle. Building Information Modeling (BIM) has gained attention in the Architectural, Engineering and Construction (AEC) industry; it envisages the use of virtual n-dimensional (n-D) models to identify potential conflicts in the design, construction or operation of any facility. A questionnaire was designed to investigate perceptions regarding the advantages of BIM, and around 102 valid responses were received from diverse stakeholders. The results showed very low BIM adoption with a low level of 'buzz'. According to the perception of AEC professionals in Pakistan, the top three advantages are that BIM is a faster and more effective method for design and construction management, that it improves the quality of design and construction, and that it reduces rework during construction. BIM was perceived to have the least impact on the reduction of cost, time and human resources. This research is a benchmark study for understanding the adoption and advantages of BIM in the Pakistani construction industry.
Rice husk ash (RHA) is classified as a highly reactive pozzolan. It has a very high silica content, similar to that of silica fume (SF). Using less expensive and locally available RHA as a mineral admixture in concrete brings ample benefits in terms of cost, the technical properties of concrete and the environment. An experimental study of the effect of RHA blending on the workability, strength and durability of high performance fine-grained concrete (HPFGC) is presented. The results show that the addition of RHA to HPFGC significantly improved compressive strength, splitting tensile strength and chloride penetration resistance. Interestingly, the ratio of compressive strength to splitting tensile strength of HPFGC was lower than that of ordinary concrete, especially for the concrete made with 20 % RHA. The compressive strength and splitting tensile strength of HPFGC containing RHA were similar to and slightly higher, respectively, than those of HPFGC containing SF. The chloride penetration resistance of HPFGC containing 10–15 % RHA was comparable with that of HPFGC containing 10 % SF.
Flow velocity is generally presumed to influence flood damage. However, this influence is hardly quantified, and virtually no damage models take it into account. Therefore, the influences of flow velocity, water depth and combinations of these two impact parameters on various types of flood damage were investigated in five communities affected by the Elbe catchment flood in Germany in 2002. 2-D hydraulic models with high to medium spatial resolution were used to calculate the impact parameters at the sites where damage occurred. A significant influence of flow velocity on structural damage, particularly to roads, could be shown, in contrast to a minor influence on monetary losses and business interruption. Forecasts of structural damage to road infrastructure should be based on flow velocity alone. The energy head is suggested as a suitable flood impact parameter for reliable forecasting of structural damage to residential buildings above a critical impact level of 2 m of energy head or water depth. However, the general consideration of flow velocity in flood damage modelling, particularly for estimating monetary loss, cannot be recommended.
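The energy head combines water depth and flow velocity into a single impact parameter via the standard hydraulics relation h_E = h + v²/(2g). A minimal sketch of this relation (illustrative values only, not data from the study):

```python
G = 9.81  # gravitational acceleration [m/s^2]

def energy_head(depth_m: float, velocity_ms: float) -> float:
    """Energy head h_E = h + v^2 / (2g), in metres of water column."""
    return depth_m + velocity_ms ** 2 / (2 * G)

# 1.5 m of water moving at 3 m/s stays just below the 2 m critical level:
print(round(energy_head(1.5, 3.0), 3))  # 1.959
```

The velocity term is what distinguishes the energy head from plain water depth: fast shallow flow can reach the same impact level as slow deep flow.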
In this study, an application of evolutionary multi-objective optimization algorithms to the optimization of sandwich structures is presented. The solution strategy is known as the Elitist Non-Dominated Sorting Evolution Strategy (ENSES), wherein Evolution Strategies (ES) serve as the Evolutionary Algorithm (EA) within the elitist Non-dominated Sorting Genetic Algorithm (NSGA-II) procedure. Evolutionary algorithms are a suitable approach for multi-objective optimization problems because they are inspired by natural evolution, which is closely linked to Artificial Intelligence (AI) techniques, and elitism has proven to be an important factor for improving evolutionary multi-objective search. In order to evaluate the performance of ENSES, the well-known case study of sandwich structures is reconsidered. For Case 1, the goals of the multi-objective optimization are minimization of the deflection and the weight of the sandwich structure; the length and the core and skin thicknesses are the design variables. For Case 2, the objective functions are the fabrication cost, the beam weight and the end deflection of the sandwich structure; there are four design variables, i.e., the weld height, the weld length, the beam depth and the beam width. Numerical results are presented in terms of Pareto-optimal solutions for both cases.
Broadband electromagnetic frequency or time domain sensor techniques have high potential for quantitative water content monitoring in porous media. Prior to in situ application, the impact of the relationship between the broadband electromagnetic properties of the porous material (clay rock) and the water content on the frequency or time domain sensor response must be characterized. For this purpose, dielectric properties of intact clay rock samples, experimentally determined in the frequency range from 1 MHz to 10 GHz, were used as input data in 3-D numerical frequency domain finite element field calculations to model the one-port broadband frequency or time domain transfer function for a three-rod sensor embedded in the clay rock. The sensor response in terms of the reflection factor was analyzed in the time domain with classical travel time analysis in combination with an empirical model according to the Topp equation, as well as the theoretical Lichtenecker and Rother model (LRM), to estimate the volumetric water content. The mixture equation, considering the appropriate porosity of the investigated material, provides a practical and efficient approach for water content estimation based on classical travel time analysis with the onset method. The inflection method is not recommended for water content estimation in electrically dispersive and absorptive material. Moreover, the results clearly indicate that effects due to coupling of the sensor to the material cannot be neglected: coupling problems caused by an air gap lead to dramatic errors in water content estimation, even for submillimeter gaps. Thus, the quantitative determination of the in situ water content requires careful sensor installation in order to achieve perfect probe/clay-rock coupling.
Building Information Modeling is a powerful tool for design and for maintaining a consistent set of data in virtual storage. For application in the realization phase and on site, it needs further development. The paper describes the main challenges and main features that will help software development better serve the needs of construction site managers.
The node moving and multistage node enrichment adaptive refinement procedures are extended to the mixed discrete least squares meshless (MDLSM) method for efficient analysis of elasticity problems. In the MDLSM formulation, a mixed formulation is adopted to avoid second-order differentiation of shape functions and to obtain displacements and stresses simultaneously. In the refinement procedures, a robust error estimator based on the value of the least squares residual functional of the governing differential equations and their boundary conditions at nodal points is used, which is inherently available from the MDLSM formulation and can efficiently identify the zones with higher numerical errors. The results are compared with those of the refinement procedures in the irreducible formulation of the discrete least squares meshless (DLSM) method and show the accuracy and efficiency of the proposed procedures. The comparison of the error norms and convergence rates also shows the fidelity of the proposed adaptive refinement procedures in the MDLSM method.
Different types of data provide different types of information. The present research analyzes the prediction error obtained when different data types are available for calibration, evaluating the contribution of different measurement types to model calibration and prognosis. A coupled 2D hydro-mechanical model of a water-retaining dam is taken as an example, in which the mean effective stress in the porous skeleton is reduced due to an increase in pore water pressure under drawdown conditions. Relevant model parameters are identified by scaled sensitivities; Particle Swarm Optimization is then applied to determine the optimal parameter values, and finally the error in prognosis is determined. We compare the predictions of the optimized models with results from a forward run of the reference model to obtain the actual prediction errors. The analyses presented here were performed by calibrating the hydro-mechanical model to 31 data sets of 100 observations of varying data types. The prognosis results improve when diversified information is used for calibration. However, when several types of information are used, the number of observations has to be increased to cover a representative part of the model domain. For an analysis with a constant number of observations, a compromise between data type availability and domain coverage proves to be the best solution. Which type of calibration information contributes to the best prognoses could not be determined in advance. The error in model prognosis does not depend on the calibration error but on the parameter error, which unfortunately cannot be determined in inverse problems since its real value is unknown. The best prognoses were obtained independently of the calibration fit; however, excellent calibration fits led to increased variation in the prognosis error, and in the case of excellent fits the parameter values more often approached the limits of physically reasonable values.
To improve prognosis reliability, the expected values of the parameters should be provided as prior information to the optimization algorithm.
Previous publications on biochar in anaerobic digestion show encouraging results with regard to increased biogas yields. This work investigates such effects in the solid-state fermentation of bio-waste. Unlike in previous trials, the influence of biochar is tested with a setup that simulates an industrial-scale biogas plant. Both the biogas and the methane yield increased by around 5 % with a biochar addition of 5 % (based on the organic dry matter of biochar relative to bio-waste); an addition of 10 % increased the yields by around 3 %. While scaling effects prohibit a simple transfer of the results to industrial-scale plants, and although the certainty of the results is reduced by the heterogeneity of the bio-waste, further research in this direction seems promising.
The purpose of this study is to evaluate the safety impact of deceleration lanes in the upstream zone of at-grade U-turns on 4-lane divided Thai highways. A substantial speed reduction is required for vehicles diverging and making a U-turn, and deceleration lanes are provided for this purpose. These lanes also provide storage space for U-turning vehicles, avoiding unnecessary blockage of the through lanes and reducing the potential for rear-end collisions. Safety at the U-turn is greatly influenced by the proper or improper use of the deceleration lanes: depending on their length, full or partial speed adjustment can occur within the deceleration lane, which also influences road users' behavior. To assess the safety impact, four groups of U-turns with varying deceleration lane lengths were identified. Owing to the limited availability and reliability of road crash data in Thailand, the widely accepted Traffic Conflict Technique (TCT) was used as an alternative, proactive methodology. Geometric data, traffic conflicts and volume data were recorded in the field at 8 U-turn locations, for 8 hours per location. A Severity Conflict Rate (SCR) was assessed by applying a weighting factor (based on the severity grades of the Czech TCT) to the observed conflicts in relation to the conflicting traffic volumes; a comparatively higher SCR value represents a lower level of safety. According to the results, an increase in the functional length of the deceleration lane yields a lower SCR value and a higher level of road safety.
To assess the safety impact of auxiliary lanes at downstream locations of U-turns, the Traffic Conflict Technique was used. On the basis of the installed components at those locations, four types of U-turns were identified: those without any auxiliary lane, those with an acceleration lane, those with outer widening, and those with both an acceleration lane and outer widening. Because the available crash data are unreliable, Conflict Indexes were formulated to assess the level of road safety, putting more emphasis on severe crashes than on slight ones by means of two types of weighting coefficients: the first based on a subjective assessment of the seriousness of the conflict situation, the second based on the relative speed and angle between the conflicting streams. A comparatively higher Conflict Index value represents a lower level of road safety. According to the results, a lower level of road safety occurs if two components are combined or if a location has no auxiliary lane at all; the highest level of road safety occurs if the layout includes only a single component, either an acceleration lane or outer widening.
The current study attempts to establish an adequate classification for semi-rigid beam-to-column connections by investigating strength, stiffness and ductility. For this purpose, experimental tests were carried out to investigate the moment-rotation (M-θ) characteristics of flush end-plate (FEP) connections, with variable parameters including the size and number of bolts, the thickness of the end-plate and the sizes of beams and columns. The initial elastic stiffness and ultimate moment capacity of the connections were determined by an extensive analytical procedure based on the methods prescribed by the ANSI/AISC 360-10 and Eurocode 3 Part 1-8 specifications. The behaviour of beams with partially restrained (semi-rigid) connections was also studied using classical analysis methods. The results confirmed that the thickness of the column flange and of the end-plate substantially governs the initial rotational stiffness of flush end-plate connections. The results also clearly showed that EC3 provides a more reliable classification index for FEP connections. The findings make a significant contribution to the current literature, as the actual response characteristics of such connections are non-linear; this semi-rigid behaviour should therefore be accounted for in analysis and design methods.
The fire resistance of concrete members is controlled by the temperature distribution of the considered cross section. The thermal analysis can be performed with the advanced temperature-dependent physical properties provided by EN 1992-1-2, but the recalculation of laboratory tests on columns from TU Braunschweig shows that there are deviations between the calculated and measured temperatures. It can therefore be assumed that the mathematical formulation of these thermal properties could be improved. A sensitivity analysis is performed to identify the governing parameters of the temperature calculation, and a nonlinear optimization method is used to enhance the formulation of the thermal properties. The proposed simplified properties are partly validated by the recalculation of measured temperatures of concrete columns. These first results show that the scatter of the differences between calculated and measured temperatures can be reduced by the proposed simple model for the thermal analysis of concrete.
In this work, different fibre optic sensors for the structural health monitoring of civil engineering structures are reported. A fibre optic crack sensor and two different fibre optic moisture sensors have been designed to detect moisture ingress in concrete-based building structures. Moreover, the degradation of the mechanical properties of optical glass fibre sensors, and hence their long-term stability and reliability, due to the mechanical and chemical impact of the concrete environment is discussed, and the advantage of applying a fibre optic sensor system to the structural health monitoring of sewerage tunnels is demonstrated.
In this study, the behavior of a widely graded soil prone to suffusion, and the necessity of quantifying homogeneity for such a soil in internal stability considerations, are discussed. With the help of suffusion tests, the dependency of particle washout on the homogeneity of the sample is shown, and the strong influence of homogeneity on suffusion processes is established through arguments and evidence. It is emphasized that the internal stability of a widely graded soil cannot be directly correlated to common geotechnical parameters such as dry density or permeability. The initiation and propagation of suffusion processes are clearly particle-scale phenomena, so the homogeneity of the particle assembly (micro-scale) has a decisive effect on particle rearrangement and washout processes. It is noted that the guidelines for assessing internal stability lack a fundamental, scientific basis for the quantification of homogeneity. The observation of segregation processes within the sample in an ascending layered order (for downward flow) inspired the author to propose a new packing model for granular materials which are prone to internal instability.
It is shown that the particle arrangement, especially the arrangement of the soil skeleton particles or so-called primary fabric, plays the main role in suffusive processes. Therefore, an experimental approach for identification of the skeleton in the soil matrix is proposed. 3D models of Sequential Fill Tests using the Discrete Element Method (DEM) and 3D models of granular packings for relatively, stochastically and ideally homogeneous particle assemblies were generated, and simulations were carried out.
Based on the numerical investigations and on the soil skeleton behavior, an approach for measuring the relevant scale, the so-called Representative Elementary Volume (REV), for homogeneity investigation is proposed. A new in-situ testing method for the quantification of homogeneity is introduced, along with an approach for quantifying homogeneity in numerically or experimentally generated packings (samples) based on MATLAB image processing. A generalized experimental method for assessing the internal stability of widely graded soils with a dominant coarse matrix is developed, and a new suffusion criterion based on an ideally homogeneous, internally stable granular packing is designed.
My research emphasizes that in widely graded soils with a dominant coarse matrix, the soil fractions with diameters larger than D60 essentially build the soil skeleton. The mass and spatial distribution of these fractions govern the internal stability, while the mass and distribution of the fill fractions are a secondary matter. For such a soil, the homogeneity of the skeleton must be carefully measured and verified.
Nanostructured materials are extensively applied in many fields of materials science for new industrial applications, particularly in the automotive and aerospace industries, due to their exceptional physical and mechanical properties. Experimental testing of nanomaterials is expensive, time-consuming, challenging and sometimes unfeasible. Therefore, computational simulations have been employed as an alternative method to predict macroscopic material properties. The behavior of polymeric nanocomposites (PNCs) is highly complex.
The origins of macroscopic material properties reside in the properties and interactions taking place on finer scales. It is therefore essential to use a multiscale modeling strategy to properly account for all the length and time scales associated with these material systems, which span many orders of magnitude. Numerous multiscale models of PNCs have been established; however, most of them connect only two scales, and only a few multiscale models for PNCs bridge four length scales (nano-, micro-, meso- and macro-scales). In addition, nanomaterials are stochastic in nature, and the prediction of macroscopic mechanical properties is influenced by many factors, such as fine-scale features. The mechanical properties predicted by traditional approaches deviate significantly from the values measured in experiments because the uncertainty of material features is neglected. This discrepancy indicates that the effective macroscopic properties of materials are highly sensitive to various sources of uncertainty, such as loading and boundary conditions and material characteristics, while very few stochastic multiscale models for PNCs have been developed. Therefore, it is essential to construct PNC models within the framework of stochastic modeling and to quantify the effect of the stochastic input parameters on the macroscopic mechanical properties of these materials.
This study aims to develop computational models at four length scales (nano-, micro-, meso- and macro-scales) and hierarchical upscaling approaches bridging length scales from nano to macro. A framework for uncertainty quantification (UQ) is studied and applied to predict the mechanical properties of the PNCs as a function of material features at different scales. Sensitivity and uncertainty analyses are of great help in quantifying the effect of the input parameters, considering both main and interaction effects, on the mechanical properties of the PNCs. To achieve this major goal, the following tasks are carried out:
At the nano-scale, molecular dynamics (MD) simulations were used to investigate the deformation mechanism of glassy amorphous polyethylene (PE) as a function of temperature and strain rate. Steered molecular dynamics (SMD) simulations were also employed to investigate the interfacial characteristics of the PNCs.
At the micro-scale, we developed an atomistic-based continuum model represented by a representative volume element (RVE), in which the SWNT's properties and the SWNT/polymer interphase are modeled at the nano-scale while the surrounding polymer matrix is modeled by solid elements. A two-parameter model was then employed at the meso-scale. A hierarchical multiscale approach has been developed to obtain the structure-property relations at one length scale and transfer their effect to the higher length scales. In particular, we homogenized the RVE into an equivalent fiber, which was then employed in a micromechanical analysis (i.e., the Mori-Tanaka model) to predict the effective macroscopic properties of the PNC. Furthermore, an averaging homogenization process was also used to obtain the effective stiffness of the PNC at the meso-scale.
Stochastic modeling and uncertainty quantification consist of the following ingredients:
- Simple random sampling, Latin hypercube sampling, Sobol' quasi-random sequences, and Iman and Conover's method (inducing correlation in Latin hypercube sampling) are employed to generate independent and correlated sample data, respectively.
- Surrogate models, such as polynomial regression, moving least squares (MLS), a hybrid method combining polynomial regression and MLS, Kriging regression, and penalized spline regression, are employed as approximations of the mechanical model. The advantages of the surrogate models are their high computational efficiency and robustness, as they can be constructed from a limited amount of available data.
- Global sensitivity analysis (SA) methods, such as variance-based methods for models with independent and dependent input parameters, Fourier-based techniques for performing variance-based analyses, and partial derivatives (elementary effects) in the context of local SA, are used to quantify the effects of the input parameters and their interactions on the mechanical properties of the PNCs. A bootstrap technique is used to assess the robustness of the global SA methods with respect to their performance.
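As a concrete illustration of the sampling step listed above, a basic Latin hypercube sampler on the unit hypercube can be sketched in a few lines (a generic textbook construction, not code from the thesis):

```python
import random

def latin_hypercube(n_samples: int, n_dims: int, seed: int = 0) -> list:
    """Latin hypercube sample on [0, 1)^d: each dimension's range is split
    into n_samples equal strata, and each stratum is hit exactly once."""
    rng = random.Random(seed)
    sample = [[0.0] * n_dims for _ in range(n_samples)]
    for d in range(n_dims):
        # one uniform point per stratum, then shuffle the strata across samples
        points = [(i + rng.random()) / n_samples for i in range(n_samples)]
        rng.shuffle(points)
        for i, p in enumerate(points):
            sample[i][d] = p
    return sample

pts = latin_hypercube(10, 3)
# In every dimension, each of the 10 strata contains exactly one point.
```

Compared with simple random sampling, this stratification guarantees coverage of each input's full range even for small sample sizes, which is why it is a standard choice for building surrogate models from few expensive model runs.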
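A minimal Monte Carlo sketch of the variance-based (Sobol') method listed above, using the pick-freeze design and Jansen's estimator; this is a generic textbook construction for independent uniform inputs, not the thesis code:

```python
import random

def sobol_first_order(model, n_dims, n=20000, seed=1):
    """First-order Sobol' indices S_i = V(E[Y|X_i]) / V(Y) for a model with
    independent U(0,1) inputs, via the pick-freeze design and Jansen's
    estimator. Production codes would use quasi-random sequences instead."""
    rng = random.Random(seed)
    A = [[rng.random() for _ in range(n_dims)] for _ in range(n)]
    B = [[rng.random() for _ in range(n_dims)] for _ in range(n)]
    yA = [model(x) for x in A]
    yB = [model(x) for x in B]
    mean = sum(yA) / n
    var = sum((y - mean) ** 2 for y in yA) / n
    indices = []
    for i in range(n_dims):
        # A_B^i: rows of A with column i replaced by the corresponding B value
        yABi = [model(a[:i] + [b[i]] + a[i + 1:]) for a, b in zip(A, B)]
        # Jansen: (1/2n) * sum (f(B) - f(A_B^i))^2 estimates V(Y) - V_i
        Vi = var - sum((yb - yi) ** 2 for yb, yi in zip(yB, yABi)) / (2 * n)
        indices.append(Vi / var)
    return indices

# Additive test model Y = X1 + 2*X2 has exact indices S1 = 0.2, S2 = 0.8.
S = sobol_first_order(lambda x: x[0] + 2 * x[1], 2)
```

For the additive test model the indices sum to one; an interaction term would leave a gap between the sum of first-order indices and unity, which is exactly the effect such analyses quantify.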
In addition, the probability distributions of the mechanical properties are determined using the probability plot method, and upper and lower bounds of the predicted Young's modulus according to 95 % prediction intervals are provided.
The above-mentioned methods address the behaviour of intact materials. In addition, novel numerical methods, namely a node-based smoothed extended finite element method (NS-XFEM) and an edge-based smoothed phantom-node method (ES-Phantom node), were developed for fracture problems; these methods can be used to account for cracks at the macro-scale in future work. The predicted mechanical properties were validated and verified, and show good agreement with previous experimental and simulation results.
Methods based on B-splines for model representation, numerical analysis and image registration
(2015)
The thesis consists of inter-connected parts for modeling and analysis using newly developed isogeometric methods. The main parts are reproducing kernel triangular B-splines, extended isogeometric analysis for solving weakly discontinuous problems, collocation methods using superconvergent points, and B-spline basis in image registration applications.
Each topic is oriented towards application of isogeometric analysis basis functions to ease the process of integrating the modeling and analysis phases of simulation.
First, we develop a reproducing kernel triangular B-spline-based FEM for solving PDEs. We review triangular B-splines and their properties. By definition, the triangular basis function is very flexible in modeling complicated domains; however, instability results when it is applied for analysis. We modify the triangular B-spline with a reproducing kernel technique, calculating a correction term for the triangular kernel function from the chosen surrounding basis. The improved triangular basis is capable of obtaining results with higher accuracy and almost optimal convergence rates.
Second, we propose an extended isogeometric analysis for dealing with weakly discontinuous problems such as material interfaces. The original IGA is combined with XFEM-like enrichments which are continuous functions themselves but with discontinuous derivatives. Consequently, the resulting solution space can approximate solutions with weak discontinuities. The method is also applied to curved material interfaces, where the inverse mapping and the curved triangular elements are considered.
Third, we develop an IGA collocation method using superconvergent points. Collocation methods are efficient because no numerical integration is needed. In particular, when a higher-order polynomial basis is applied, the method has a lower computational cost than Galerkin methods. However, the positions of the collocation points are crucial for the accuracy of the method, as they affect the convergence rate significantly. The proposed IGA collocation method uses superconvergent points instead of the traditional Greville abscissae. The numerical results show that the proposed method achieves better accuracy and optimal convergence rates, while traditional IGA collocation has optimal convergence only for even polynomial degrees.
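For context, the Greville abscissae mentioned above are simply averages of consecutive knots, one per basis function. The sketch below computes them for a clamped cubic knot vector; the knot values are an illustrative assumption, not taken from the thesis.

```python
def greville_abscissae(knots, degree):
    """Greville abscissae of a B-spline basis: the average of
    `degree` consecutive knots for each basis function."""
    p = degree
    n_basis = len(knots) - p - 1
    return [sum(knots[i + 1:i + p + 1]) / p for i in range(n_basis)]

# Open (clamped) cubic knot vector on [0, 1].
knots = [0, 0, 0, 0, 0.5, 1, 1, 1, 1]
print(greville_abscissae(knots, 3))
```

For a clamped knot vector the first and last abscissae coincide with the interval endpoints, which is why they are the classical collocation choice that the superconvergent points replace.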
Lastly, we propose a novel dynamic multilevel technique for image registration, an application of B-spline functions in image processing. The procedure aims to align a target image to a reference image by a spatial transformation. The method starts with an energy function that is the same as in FEM-based image registration. However, we simplify the solution procedure by working on the energy function directly: we dynamically solve for the control points, which are the coefficients of the B-spline basis functions. The new approach is simpler and faster. Moreover, it is enhanced by a multilevel technique in order to prevent instabilities. The numerical tests comprise two artificial images and four real biomedical MRI brain and CT heart images; they show that our registration method is accurate, fast and efficient, especially for large-deformation problems.
Reinforced concrete walls are commonly selected as the lateral resisting systems in the seismic design of buildings. The design procedure requires reliable, robust models to predict the wall response. Many researchers have therefore focused on using the available experimental data to comment on the quality of the models at hand. What is missing, though, is an uncertainty-aware treatment of the experimental data, since such data can be affected by different sources of uncertainty. In this paper, we introduce a database created for model quality evaluation purposes that considers the uncertainties in the experimental data. This is the first step of a larger study on experience-based model quality evaluation of reinforced concrete walls. Here, we briefly present the database as well as six sample validations of the developed numerical model (the quality of which is to be assessed). The database contains information on nearly 300 wall specimens from about 50 sources. Both the database and the numerical model, built for uncertainty and sensitivity analysis purposes, are mainly based on ten parameters, covering geometry, material, reinforcement layout and loading properties. The validation results show that the model is able to predict the wall response satisfactorily. Consequently, the validated numerical model can be used in further quality evaluation studies.
Polymeric clay nanocomposites are a new class of materials which has recently become the centre of attention due to its superior mechanical and physical properties. Several studies have been performed on the mechanical characterisation of these nanocomposites; however, most of those studies have neglected the effect of the interfacial region between the clays and the matrix, despite its significant influence on the mechanical performance of the nanocomposites.
There are different analytical methods to calculate the overall elastic material properties of composites. In this study we use the Mori-Tanaka method to determine the overall stiffness of the composites for the simple inclusion geometries of a cylinder and a sphere. Furthermore, the effect of the interphase layer on the overall properties of the composites is calculated. Here, we intend to obtain bounds for the effective mechanical properties to compare with the analytical results. Hence, we use linear displacement boundary conditions (LD) and uniform traction boundary conditions (UT) accordingly. Finally, the analytical results are compared with the numerical results, and they are in good agreement.
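The bounding idea can be illustrated with the elementary Voigt and Reuss bounds, which bracket any homogenized stiffness between the iso-strain and iso-stress estimates. This is a simpler construction than the LD/UT boundary-condition bounds used in the study, and the phase moduli and volume fraction below are hypothetical.

```python
def voigt_reuss_bounds(E_matrix, E_inclusion, vf):
    """Elementary bounds on the effective Young's modulus of a
    two-phase composite with inclusion volume fraction `vf`:
    Reuss (iso-stress, lower) and Voigt (iso-strain, upper)."""
    E_voigt = (1 - vf) * E_matrix + vf * E_inclusion
    E_reuss = 1.0 / ((1 - vf) / E_matrix + vf / E_inclusion)
    return E_reuss, E_voigt

# Hypothetical epoxy matrix (3 GPa) with 5 vol% stiff clay (170 GPa).
lower, upper = voigt_reuss_bounds(3.0, 170.0, 0.05)
print(lower, upper)
```

Any admissible homogenization result, including the Mori-Tanaka estimate, must fall between these two values.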
The next focus of this dissertation is a computational approach with a hierarchical multiscale method on the mesoscopic level. In other words, we use stochastic analysis and a computational homogenization method to analyse the effect of the thickness and stiffness of the interfacial region on the overall elastic properties of the clay/epoxy nanocomposites. The results show that an increase in interphase thickness reduces the stiffness of the clay/epoxy nanocomposites, and this decrease becomes significant at higher clay contents. The results of the sensitivity analysis show that the stiffness of the interphase layer has a more significant effect on the final stiffness of the nanocomposites. We also validate the results against the available experimental results from the literature, which show good agreement.
One major research focus in the materials science and engineering community in the past decade has been to obtain a more fundamental understanding of the phenomenon of 'material failure'. Such an understanding is critical for engineers and scientists developing new materials with higher strength and toughness, developing robust designs against failure, or for those concerned with an accurate estimate of a component's design life. Defects like cracks and dislocations evolve at nano scales and influence macroscopic properties such as the strength, toughness and ductility of a material. In engineering applications, the global response of the system is often governed by the behaviour at the smaller length scales. Hence, the sub-scale behaviour must be computed accurately for good predictions of the full-scale behaviour.
Molecular Dynamics (MD) simulations promise to reveal the fundamental mechanics of material failure by modeling the atom-to-atom interactions. Since atomistic dimensions are of the order of Angstroms (Å), approximately 85 billion atoms are required to model a 1 µm³ volume of copper. Therefore, pure atomistic models are prohibitively expensive for everyday engineering computations involving macroscopic cracks and shear bands, which are much larger than the atomistic length and time scales. To reduce the computational effort, multiscale methods are required which are able to couple a continuum description of the structure with an atomistic description. In such paradigms, cracks and dislocations are explicitly modeled at the atomistic scale, whilst a self-consistent continuum model is used elsewhere.
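The quoted atom count can be checked with a back-of-the-envelope calculation, assuming an FCC copper lattice with the standard textbook lattice constant (reference data, not values taken from the thesis):

```python
# Sanity check of the "85 billion atoms per cubic micron of copper" figure.
a = 3.615e-10          # Cu lattice constant in metres (~3.615 Angstrom)
atoms_per_cell = 4     # FCC unit cell contains 4 atoms
atoms_per_m3 = atoms_per_cell / a**3
atoms_per_um3 = atoms_per_m3 * 1e-18  # 1 micron^3 = 1e-18 m^3
print(atoms_per_um3)   # ~8.5e10, i.e. about 85 billion atoms
```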
Many multiscale methods for fracture have been developed for "fictitious" materials based on "simple" potentials such as the Lennard-Jones potential. Moreover, multiscale methods for evolving cracks are rare, and efficient methods to coarse-grain the fine-scale defects are missing. In particular, the existing multiscale methods for fracture do not adaptively adjust the fine-scale domain as the crack propagates; most methods only "enlarge" the fine-scale domain and thereby drastically increase the computational cost. Adaptive adjustment requires the fine-scale domain to be refined and coarsened. One of the major difficulties in multiscale methods for fracture is to up-scale fracture-related material information from the fine scale to the coarse scale, in particular for complex crack problems. Most of the existing approaches have therefore been applied to examples with comparatively few macroscopic cracks.
Key contributions
The bridging scale method is enhanced using the phantom node method so that cracks can be modeled at the coarse scale. To ensure self-consistency in the bulk, a virtual atom cluster is devised to provide the response of the intact material at the coarse scale. A molecular statics model is employed at the fine scale, where crack propagation is modeled by naturally breaking the bonds. The fine-scale and coarse-scale models are coupled by enforcing the displacement boundary conditions on the ghost atoms. An energy criterion is used to detect the crack tip location. Adaptive refinement and coarsening schemes are developed and applied during crack propagation. The results were observed to be in excellent agreement with pure atomistic simulations. The developed multiscale method is one of the first adaptive multiscale methods for fracture.
A robust and simple three-dimensional coarse-graining technique to convert a given atomistic region into an equivalent coarse region, in the context of multiscale fracture, has been developed; it is the first of its kind. The technique can be applied to identify and upscale defects like cracks, dislocations and shear bands. It has been applied to estimate equivalent coarse-scale models of several complex fracture patterns obtained from pure atomistic simulations. The upscaled fracture patterns agree well with the actual fracture patterns, and the error in the potential energy between the pure atomistic and the coarse-grained models was observed to be acceptable.
A novel meshless adaptive multiscale method for fracture, also the first of its kind, has been developed. The phantom node method is replaced by a meshless differential reproducing kernel particle method, which is comparatively more expensive but allows a more "natural" coupling between the two scales due to the meshless interpolation functions; the higher-order continuity is also beneficial. The centrosymmetry parameter is used to detect the crack tip location. The developed multiscale method is employed to study complex crack propagation. Results based on the meshless adaptive multiscale method were observed to be in excellent agreement with pure atomistic simulations.
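The centrosymmetry parameter used for crack-tip detection can be sketched as follows. This is a generic brute-force version of the Kelchner et al. definition, not the thesis implementation, and the neighbour sets are invented: a perfectly centrosymmetric environment gives zero, while broken bonds near a crack tip give a large value.

```python
def centrosymmetry(neighbor_vecs):
    """Centrosymmetry parameter of one atom: pair its neighbour
    vectors so that sum |r_i + r_j|^2 is minimal.  Zero for an
    undistorted centrosymmetric lattice, large near defects.
    Brute-force pairing; fine for the 6-12 neighbours used in practice."""
    def pairings(idx):
        # Enumerate all perfect matchings of the index list.
        if not idx:
            yield []
            return
        first, rest = idx[0], idx[1:]
        for k, j in enumerate(rest):
            for tail in pairings(rest[:k] + rest[k + 1:]):
                yield [(first, j)] + tail

    best = float("inf")
    for pairing in pairings(list(range(len(neighbor_vecs)))):
        s = 0.0
        for i, j in pairing:
            s += sum((a + b) ** 2 for a, b in
                     zip(neighbor_vecs[i], neighbor_vecs[j]))
        best = min(best, s)
    return best

# Perfect simple-cubic-like environment: six neighbours along +/- x, y, z.
perfect = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
print(centrosymmetry(perfect))  # 0.0 for the undistorted environment
```

Production MD codes compute this per atom over the whole system and flag atoms above a threshold as belonging to a defect or crack surface.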
The developed multiscale methods are applied to study fracture in practical materials such as Graphene and Graphene on a Silicon surface. Bond stretching and bond reorientation were observed to be the main mechanisms of crack growth in Graphene. The influence of the time step on the crack propagation was studied using two different time steps. Pure atomistic simulations of fracture in Graphene on a Silicon surface are presented, and details of the three-dimensional multiscale method to study fracture in Graphene on a Silicon surface are discussed.
The main objective of this thesis is to investigate the characteristics of rice husk ash (RHA) and its behaviour in self-compacting high performance concrete (SCHPC) with respect to rheological properties, hydration and microstructure development, and alkali-silica reaction, in comparison with silica fume (SF). The main results show that RHA is a macro-mesoporous amorphous siliceous material with a very high silica content comparable with SF. The pore size distribution is the most important parameter of RHA besides the amorphous silica content; it affects the pore volume and specific surface area, and thus the water demand and pozzolanic reactivity of RHA and its behaviour in SCHPC. The incorporation of RHA decreases the filling and passing abilities, but significantly increases the plastic viscosity and segregation resistance of SCHPC. Therefore, RHA can be used as a viscosity-modifying admixture for SCHPC. The incorporation of RHA increases the superplasticizer adsorption, the superplasticizer saturation dosage, the yield stress and the plastic viscosity of mortar. Fresh mortar formulated from SCHPC is a shear-thickening material; the incorporation of RHA/SF decreases the shear-thickening degree. The incorporation of RHA/SF increases the degree of cement hydration: SF appears more effective at 3 days, possibly due to a better nucleation site effect, whereas RHA dominates at later ages, possibly due to an internal water curing effect. The incorporation of RHA/SF increases the degree of C3S hydration, particularly the C3S hydration rate from 3 to 14 days. The pozzolanic reaction takes place outside and inside the RHA particles.
The internal pozzolanic reaction products consolidate the pores inside the RHA particles rather than contribute to the pore refinement in the cement matrix. In the presence of a high alkali concentration, RHA particles act as micro-reactive aggregates and react with alkali hydroxide to generate expansive alkali-silica reaction products. Increasing the particle size and temperature increases the alkali-silica reactivity of RHA. The mechanism for the successive pozzolanic and alkali-silica reactions of RHA is theorized. Additionally, a new simple mix design method is proposed for SCHPC containing various supplementary cementitious materials, i.e. RHA, SF, fly ash and limestone powder.
The main aim of this work was to clarify whether alkali-containing de-icing agents can trigger and/or accelerate an alkali-silica reaction (ASR) and, if so, which mechanisms underlie this. The investigations showed that the alkali-containing de-icing agents used on traffic areas, based on sodium chloride (road pavements) or on alkali acetates and formates (airfield pavements), can trigger and at times strongly accelerate an ASR in concretes with alkali-reactive aggregates. The ASR-promoting effect of the de-icing agents increases considerably in the order sodium chloride, alkali acetates, alkali formates.
It was shown that, in the case of the alkali acetates and formates, not only the supply of alkalis is relevant; in addition, OH ions are released from the portlandite, and consequently the pH of the pore solution rises. This intensifies the attack on alkali-reactive SiO2 in the aggregates and accelerates the ASR. Under external NaCl supply, by contrast, the pH does not rise, which is the reason for the weaker ASR-promoting effect of NaCl. What matters here are the supplied Na ions and, apparently, an emerging direct influence of NaCl on the dissolution behaviour of SiO2. If the pH and the Na concentration in the pore solution are sufficiently high, ASR gel will form for thermodynamic reasons. The formation of Friedel's salt is merely a side effect, not a prerequisite for an ASR under external NaCl supply.
It was further shown that the FIB-Klimawechsellagerung (cyclic climate storage test), used as a performance test, allows the ASR damage potential of concretes for road pavements and airfield pavements to be assessed reliably. Its advantages lie in testing complete, project-specific concrete compositions under all practically relevant climatic actions and, above all, in taking an external alkali supply into account. Within 36 weeks, the ASR damage potential of a concrete composition can be reliably assessed for a service life of 20-30 years in practice.
For application-oriented solutions in urban water management, it is important to understand the content of complex, interlocking systems. A simulation game can help familiarize users with new technologies and ways of combining them. Because of the high demands on model complexity and detail, developing such a game is expensive and laborious. This work examines to what extent the game SimCity 5 (2013 version), developed commercially for entertainment purposes, can be used for this and how it has to be configured. This is illustrated using the example of near-natural stormwater management in urban areas.
The analysis of SimCity 5 shows that the game is indeed suitable as a decision-support tool. However, the subsystems of urban water management are too strongly simplified, so there is room for improvement.
To integrate scenarios of near-natural stormwater management into the game, the model SimRegen was developed as part of this work. Since no interface to the agent-based simulation engine GlassBox is currently available, selected aspects of the model were implemented using the agent-based simulation tool NetLogo (version 5.0.4).
A more careful consideration of food waste is needed when planning the urban environment. The research signals links between the organization of individuals, the built environment and food waste management through a study conducted in Mexico. It recognizes the different scales within which solid waste management operates, explores food waste production at the household level, and investigates the urban circumstances that influence its management. This is based on the idea that sustainable food waste management in cities requires a constellation of processes through which a 'people centered' approach offers added value to technical and biological facts. This distinction addresses how urban systems react to waste and what behavioral and structural factors affect current sanitary practices in Mexico. Food waste is a resource-demanding item, which accounts for a considerable amount of refuse being disposed of in landfills in developing cities. The existing shortage of data on waste generation at the household level weakens implementation strategies, and there is a need for more contextual knowledge associated with waste. The evidence-based study includes an explorative phase on the culture of waste management and a more in-depth examination of domestic waste composition. Mixed data collection tools, including a household-based survey, a food waste diary and a weighing recording system, were developed to enquire into the daily practices of waste disposal in households. The contrasting urban environment of the Mexico City Metropolitan Area holds indistinct boundaries between the core and the periphery, which hinder the implementation of integrated environmental plans. External determinants are the different modes of urban transformation; internal determinants are building features and their consolidation processes. At the household level, less and more affluent groups responded differently to external environmental stressors, and a targeted planning proposition is required for each group.
Local alternative waste management is more likely to be implemented in less affluent contexts. Furthermore, more effective demand-driven service delivery implies better integration between the formal and informal sectors. The results show that efforts toward securing long-term changes in Mexico and other cities with similar circumstances require creating synergy between education, building consolidation, local infrastructure and social engagement.
The construction industry has changed considerably in recent years as a result of market globalization combined with the increased use of modern technologies. The planning and execution of construction projects are becoming increasingly complex and are associated with higher risks. In an ever fiercer competitive environment, money and time resources are becoming scarcer.
Project management provides approaches for bringing construction projects to a successful conclusion even under difficult conditions and increased risks. Systematic risk management, from project development through to project completion, is of decisive importance for project success.
The aim of this work is to enable a quantitative risk assessment for project managers acting as professional client representatives, and the simulation of risk effects on the course of a project during the planning and construction phases. An abstract model is intended to allow a differentiated, practice-oriented simulation that reflects the different ways in which performance and costs arise. In parallel, the description of risks is abstracted so that arbitrary risks can be captured quantitatively and their effects, including possible countermeasures, can then be integrated into the model.
Two examples demonstrate the different possible uses of the quantitative capture of project risks and the subsequent simulation of their effects. In the first example, a real, already completed rail infrastructure project, the effectiveness of a preventive measure against a project risk is examined. In the second example, a business-game approach for the practice-oriented education and training of project managers is developed. The subject of the business game is the planning and construction of a privately financed, public representative building with partial third-party use.
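The kind of quantitative risk simulation described above can be sketched as a minimal Monte Carlo model; the risk events, occurrence probabilities and cost impacts below are invented for illustration and are not taken from the thesis.

```python
import random

random.seed(42)

# Hypothetical project risks: (probability of occurrence, cost impact in kEUR).
risks = [
    (0.30, 250.0),   # e.g. delayed permits
    (0.10, 900.0),   # e.g. ground conditions worse than expected
    (0.50, 120.0),   # e.g. minor design changes
]
base_cost = 10_000.0  # planned project cost in kEUR

def simulate(n_runs):
    """Monte Carlo: in each run, sample which risks occur and add their cost."""
    totals = []
    for _ in range(n_runs):
        cost = base_cost
        for p, impact in risks:
            if random.random() < p:
                cost += impact
        totals.append(cost)
    return totals

totals = simulate(10_000)
mean_cost = sum(totals) / len(totals)
print(mean_cost)  # expected value: base + sum(p * impact) = 10225 kEUR
```

A real model would additionally distribute the impacts over the project schedule and include countermeasures that modify probabilities or impacts.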
User-oriented building refurbishment means orienting the planning and refurbishment process much more strongly toward the requirements and needs of a building's future users than the conventional approach does. On the one hand this yields a higher-quality product; on the other hand it requires new methods and building materials as well as networked cooperation among all parties involved in the construction process. The publication focuses on areas of high relevance for user-oriented refurbishment, in particular: computer-aided building surveying and digital building modeling (BIM), building-physics methods for optimizing energy efficiency and comfort in the refurbishment of existing buildings, non-destructive testing methods in the context of a substance-preserving building condition analysis, and the development of supplementary building materials.
The nuBau project is a cooperation between the Faculties of Civil Engineering and Architecture at the Bauhaus-Universität Weimar. The participating chairs are: Building Physics, Computing in Architecture, Polymer Materials, and Building Materials.
This work presents and evaluates the results of experimental investigations on unreinforced and reinforced modified concretes under monotonically increasing loading up to failure, simple short-term loading near the limit of load-bearing capacity, and repeated loading with continuous loading and unloading rates. Two basic approaches were used to modify the concretes: varying the aggregate and modifying the binder phase with thermoplastic polymers. Of particular interest were the effects of these modifications on the strength properties and deformation behaviour of the concrete under short-term loading.
The observed changes in the hardened concrete properties, as well as the nonlinear relationship between the elastic and inelastic deformation components, indicate that such modifications significantly influence the deformation and fracture behaviour of concrete and must therefore be taken into account when verifying load-bearing capacity and serviceability. In addition to evaluating the load-dependent deformation behaviour, the established approaches for describing the structural-state stages under compressive loading are further developed so that the transitions between the stages can be determined exactly and the extent of the stages quantified. This allows a more precise comparison of the changes caused by the modifications.
The present thesis studies the effects of rice husk ash (RHA) as a pozzolanic admixture, and of the combination of RHA and ground granulated blast-furnace slag (GGBS), on the properties of ultra-high performance concrete (UHPC). The ultimate purpose of this study is to completely replace silica fume (SF), and partially replace Portland cement, by RHA and GGBS to achieve sustainable UHPC. To reach this aim, the characteristics of RHA in dependence on the grinding period, especially its pozzolanic reactivity in saturated Ca(OH)2 solution and in a cementitious system at a very low water-binder ratio (w/b), were assessed. The influences of RHA on the compatibility between superplasticizer and binder, workability, compressive strength, shrinkage, internal relative humidity, microstructure and durability of UHPC were also evaluated. Furthermore, synergic effects of RHA and GGBS on the properties of UHPC were investigated to produce more sustainable UHPC. Finally, various heat treatments were applied to study the properties of UHPC under these conditions. All the characteristics of the UHPCs containing RHA were compared to those of mixtures containing SF.
This work identifies the main reasons why low-fired gypsum binders rich in beta-hemihydrate (known industrially as stucco plaster) often exhibit very different properties.
The fraction of hemihydrate formed from the strongly hygroscopic anhydrite III (A III) by reaction with atmospheric moisture represents a considerable influence that has so far gone entirely unnoticed. This hemihydrate formed from A III shows different surface properties and a reaction behaviour that deviates from that of freshly calcined beta-hemihydrate.
The work shows how far-reaching the influence of physico-chemical surface processes such as adsorption and condensation is. These not only reduce the surface energy of the particles but also cause a reduction of the heat of hydration; physical processes thus have thermodynamic effects. The acting and resulting parameters of ageing interact in a highly complex manner, as follows:
The dominant binder properties, setting behaviour and water demand, change upon ageing both because of the phase transformations and because of changes to the crystallites. The change in surface characteristics is equally influential. The effect of ageing on reactivity goes well beyond the consumption of anhydrite III, the reduction of settable material and the accelerating effect of ageing dihydrate. The growth of the hemihydrate crystallites and the reduction of the internal energy, as well as the energetically favourable spontaneous loading of the crystal-lattice channels of the smallest anhydrite III crystallites with water vapour, must be identified as the decisive causes of the decrease in reactivity due to ageing. The decrease in specific surface area and surface energy also affects the dissolution and hydration processes. Anhydrite II crystallized on the surface of anhydrite III inhibits dissolution even after the conversion of A III into hemihydrate. This effect is removed or reduced by the ageing-related dihydrate formation that sets in under prolonged exposure to moisture. Although dihydrate is known for its accelerating effect, ageing dihydrate develops only a weak nucleating effect because of its particular formation within the condensed-water layer, which is only a few molecular layers thick.
An essential finding concerns the bonding character of the overstoichiometric water, for which a purely physical bond can be demonstrated. The water described in this work as more strongly adsorptively bound occurs, besides the free moisture, exclusively in the presence of hemihydrate. This relationship is established for the first time and is explained by the higher surface energy of hemihydrate, which results from its crystal chemistry.
The planning process in civil engineering is highly complex and not manageable in its entirety.
The state of the art decomposes complex tasks into smaller, manageable sub-tasks. Because the sub-tasks are closely interrelated, it is essential to couple them. From a software engineering point of view, however, this is quite challenging because of the numerous incompatible software applications on the market. This study pursues two main objectives. The first is the generic formulation of coupling strategies in order to support engineers in the selection and implementation of adequate coupling strategies. This has been achieved by the use of a coupling pattern language combined with a four-layered metamodel architecture, whose applicability has been demonstrated on a real coupling scenario. The second is the quality assessment of coupled software, developed on the basis of an evaluation of the schema mapping. This approach is described using mathematical expressions derived from set theory and graph theory, taking the various mapping patterns into account. Moreover, the coupling quality is evaluated within the formalization process by considering the uncertainties that arise during mapping, resulting in global quality values that the user can employ to assess the exchange. Finally, the applicability of the proposed approach is shown using an engineering case study.
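A toy illustration of a set-based mapping quality value is sketched below. The weighting, the attribute names and the penalty for lossy n:1 mappings are ad-hoc assumptions for illustration and are not the formalization developed in the study.

```python
def mapping_quality(source_attrs, target_attrs, mapping):
    """Toy quality value for a schema mapping: the share of source
    attributes that reach the target, discounted by lossy (n:1) merges."""
    mapped = {s for s, t in mapping if t in target_attrs}
    coverage = len(mapped) / len(source_attrs)
    # Count n:1 patterns: several source attributes mapped onto one target.
    targets = [t for _, t in mapping]
    lossy = sum(1 for t in set(targets) if targets.count(t) > 1)
    penalty = 1.0 - lossy / max(len(set(targets)), 1)
    return coverage * (0.5 + 0.5 * penalty)  # ad-hoc weighting

# Hypothetical exchange of a beam element between two tools.
src = {"beam_id", "length", "width", "height", "material"}
tgt = {"id", "len", "cross_section", "mat"}
m = [("beam_id", "id"), ("length", "len"),
     ("width", "cross_section"), ("height", "cross_section"),
     ("material", "mat")]
print(mapping_quality(src, tgt, m))
```

Merging `width` and `height` into one `cross_section` attribute is an n:1 pattern: the exchange succeeds, but information is lost, which the quality value reflects.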