Bauwesen (56)
Document Type
- Doctoral Thesis (46)
- Article (31)
- Master's Thesis (17)
- Conference Proceeding (11)
- Preprint (9)
- Bachelor Thesis (6)
- Report (4)
- Book (2)
- Periodical (2)
- Study Thesis (1)
Institute
- Institut für Strukturmechanik (ISM) (28)
- Junior-Professur Computational Architecture (16)
- Professur Baubetrieb und Bauverfahren (12)
- Professur Informatik in der Architektur (12)
- Professur Modellierung und Simulation - Konstruktion (8)
- F. A. Finger-Institut für Baustoffkunde (FIB) (6)
- Professur Stahl- und Hybridbau (4)
- Institut für Konstruktiven Ingenieurbau (IKI) (3)
- Professur Bauphysik (3)
- Professur Betriebswirtschaftslehre im Bauwesen (3)
Keywords
- Architektur (8)
- OA-Publikationsfonds2022 (7)
- Aerodynamik (5)
- BIM (5)
- Bridge (5)
- OA-Publikationsfonds2020 (5)
- Beton (4)
- CAD (4)
- Erdbeben (4)
- Ingenieurwissenschaften (4)
Abstract:
In 2011, the Free State of Thuringia and the Bauhaus-Universität Weimar entered into a cooperation on the "Nachwuchsförderung Gebäude-Energieeffizienz in Thüringen (NaGET)" (promotion of early-career research on building energy efficiency in Thuringia). The aim of the collaboration was to investigate the energy performance of the state-owned buildings in order to derive recommendations for prioritizing energy retrofit measures. As a result of the investigations, decision-makers are provided with the energy potential analysis, an instrument that specifically supports them in the pre-selection of buildings for energy retrofitting.
The subject of the study are the roughly 1,700 state-owned buildings of the Free State of Thuringia, 938 of which can be classified as energetically relevant. Initially narrowed down to 270, ultimately 218 buildings were selected for the energy potential analysis, all of which meet the data quality requirements. The resulting data set goes well beyond the initial state in terms of scope and reliability.
The investigations centre on the evaluation of heat and electricity consumption data. Using various analysis methods, it is determined both computationally and graphically which buildings stand out as high consumers. It turns out that the evaluation of buildings of the same type is particularly suitable for identifying conspicuous high consumers. This is illustrated using institute buildings for research and teaching (BWZK category 2200) and library buildings (BWZK category 9130) as examples. The evaluation of the buildings of a single institution is demonstrated using the Universität Erfurt as an example.
It is shown that, in addition to absolute consumption, further analysis criteria and the comparison with benchmarks provide additional insights. With the energy efficiency potential, a characteristic value is introduced that allows a meaningful comparison among the buildings. Building on this, a ranking of buildings can be derived that can be used to prioritize energy retrofit measures.
For carrying out an energy potential analysis, a step-by-step procedure is presented that proceeds with increasing detail from a preliminary analysis through a coarse analysis to a fine analysis. It is shown that a portfolio of public buildings, such as that of the Free State of Thuringia, can thus be screened for energetically conspicuous buildings in a targeted and cost-efficient manner. Using the Universitätsbibliothek Erfurt as an example, it is demonstrated how, for an energetically conspicuous building, a detailed investigation can verify the preliminary results, identify the causes of increased energy consumption, and develop proposals for improving the energy performance.
In an extrapolation based on strong simplifications, it was estimated that for buildings with increased heating energy consumption an average saving of 52 kWh/m²a is possible. The savings potential for electricity consumption averages 44 kWh/m²a per building of the Free State of Thuringia. Notably, the scatter of the energy saving potentials is very high; individual buildings deviate considerably from the average values in both directions. It is further assumed that, in the ideal case, 28 % of the Free State's annual energy costs of around 35 million euros can be saved if the buildings under consideration are retrofitted such that they meet the reference values for consumption.
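A quick sanity check of the absolute saving implied by the last figure (a back-of-the-envelope calculation, assuming the 28 % applies to the full 35 million € annual cost base stated above):

\[ 0.28 \times 35\ \text{Mio.\ EUR/a} \approx 9.8\ \text{Mio.\ EUR/a} \]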
Many listed buildings face the conflict between the structural refurbishment required for contemporary use and the potential threat this poses to the historic fabric. Reasons include rising energy costs for building operation, contemporary requirements for comfort and occupational safety, and the prevention of damage to the fabric caused by structural deficiencies in thermal and moisture protection. At the same time, many buildings depend on regular use and management to secure their preservation at all. In this field of tension, the energy retrofit of listed buildings often fails because of the seemingly irresolvable conflict between preserving the original fabric on the one hand and the necessary energetic optimization of the building envelope on the other. The objective of this case study is the exemplary development of a building-climate-oriented and heritage-compatible retrofit strategy, using a post-war modernist administrative building as an example, as a contribution to resolving this conflict.
It is not uncommon that analysis and simulation methods are used mainly to evaluate finished designs and to prove their quality, whereas the potential of such methods is to guide or control a design process from the very beginning. We therefore introduce a design method that moves away from a “what-if” forecasting philosophy and increases the focus on backcasting approaches. We use the power of computation by combining sophisticated design generation methods with analysis methods to close the gap between analysis and synthesis of designs. For the development of a future-oriented computational design support we need to be aware of the human designer’s role. A productive combination of the excellence of human cognition with the power of modern computing technology is needed. We call this approach “cognitive design computing”. The computational part aims to mimic the way a designer’s brain works by combining state-of-the-art optimization and machine learning approaches with available simulation methods. The cognition part respects the complex nature of design problems by providing models for human-computer interaction, which means that a design problem is distributed between computer and designer. In the context of the conference slogan “back to command”, we ask how we may imagine the command over a cognitive design computing system. We expect that designers will need to cede control of some parts of the design process to machines, but in exchange they will gain new, powerful command over complex computing processes. This means that designers have to explore their potential role as commanders of partially automated design processes. In this contribution we describe an approach for the development of a future cognitive design computing system with a focus on urban design issues. The aim of this system is to enable an urban planner to treat a planning problem as a backcasting problem by defining what performance a design solution should achieve and to automatically query or generate a set of best possible solutions. This kind of computational planning process offers proof that the designer meets the original, explicitly defined design requirements. A key way in which digital tools can support designers is by generating design proposals. Evolutionary multi-criteria optimization methods allow us to explore a multi-dimensional design space and provide a basis for the designer to evaluate contradicting requirements: a task urban planners face frequently. We also reflect on why designers will cede more and more control to machines. To this end, we investigate first approaches to learning how designers use computational design support systems in combination with manual design strategies to deal with urban design problems, employing machine learning methods. By observing how designers work, it is possible to derive more complex artificial solution strategies that can help computers make better suggestions in the future.
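As a minimal illustration of the evolutionary multi-criteria idea mentioned above (a generic Pareto filter, not the authors' system; the objective names and data are invented), the following sketch extracts the non-dominated set from a population of candidate designs scored on two conflicting criteria:

```python
import numpy as np

def pareto_front(objectives: np.ndarray) -> np.ndarray:
    """Boolean mask of non-dominated rows (all objectives minimized)."""
    n = objectives.shape[0]
    mask = np.ones(n, dtype=bool)
    for i in range(n):
        if not mask[i]:
            continue
        # i is dominated if some candidate is <= in all objectives
        # and strictly < in at least one.
        dominated = (np.all(objectives <= objectives[i], axis=1) &
                     np.any(objectives < objectives[i], axis=1))
        if dominated.any():
            mask[i] = False
    return mask

# Hypothetical urban-design candidates scored on two conflicting criteria,
# e.g. (energy demand, construction cost) -- both to be minimized.
scores = np.random.default_rng(0).random((100, 2))
front = scores[pareto_front(scores)]
print(f"{len(front)} non-dominated candidates out of {len(scores)}")
```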
This working paper describes how, starting from an existing street network, development areas can be automatically subdivided into parcels using subdivision algorithms and subsequently built up on the basis of various urban design types. The subdivision of development areas and the generation of building structures are subject to specific urban planning constraints, specifications and parameters. The goal is to develop, from the investigations presented, a suggestion system for urban design drafts, which is further discussed on the basis of the implementation of a first software prototype for generating urban structures.
The increasing success of BIM (Building Information Modeling) and the emergence of its implementation in 3D construction models have paved the way for improving the scheduling process. Recent research on the application of BIM in scheduling has focused on quantity take-off, duration estimation for individual trades, schedule visualization, and clash detection.
Several experiments indicated that the lack of detailed planning causes about 30 % non-productive time and stacking of trades. Nevertheless, detailed planning has still not been implemented in practice, despite receiving a lot of interest from researchers. The reason lies in the huge amount and complexity of the input data: creating a detailed plan requires time-consuming manual decomposition of activities and the collection and calculation of the relevant detailed information. Moreover, coordinating the detailed activities requires considerable effort to deal with their complex constraints.
This dissertation aims to support the generation of detailed schedules from a rough schedule. It proposes a model for automated detailing of 4D schedules by integrating BIM, simulation and Pareto-based optimization.
As machine-vision-based inspection methods in the field of Structural Health Monitoring (SHM) continue to advance, the need to integrate the resulting inspection and maintenance data into a centralised building information model for structures grows notably. Consequently, modelling detected damages from such images in a streamlined, automated manner becomes increasingly important. It saves the time and money spent on updating the model with the latest information gathered in each inspection, and it makes it easy to visualise damages, provide all stakeholders with a comprehensive digital representation of the structure's current condition, keep track of progressing deterioration, estimate the reduced load-bearing capacity of a damaged element, or simulate the propagation of cracks, so that well-informed decisions can be made interactively and maintenance actions can be facilitated that optimally extend the service life of the structure. Although significant progress has recently been made in the information modelling of damages, the methods devised so far for geometrical modelling are cumbersome and time-consuming to implement in a full-scale model. For crack damages, an approach for feasible automated image-based modelling is proposed that utilises neural networks, classical computer vision and computational geometry techniques, with the aim of creating valid shapes to be introduced into the information model, including related semantic properties and attributes from inspection data (e.g., width, depth, length, date). The creation of such models opens the door to further uses, ranging from more accurate structural analysis to the simulation of damage propagation in model elements and the estimation of deterioration rates, and it allows for better documentation, data sharing, and realistic visualisation of damages in a 3D model.
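A minimal sketch of the computer-vision step of such a pipeline (assuming a binary crack mask has already been produced by a segmentation network; OpenCV is used here as a stand-in, not necessarily the authors' toolchain, and the mask is drawn synthetically so the example runs on its own): it extracts crack outlines as polygons and derives simple attributes that could be attached to a damage object in the information model.

```python
import cv2
import numpy as np

# Synthetic stand-in for a network-predicted binary crack mask (255 = crack).
mask = np.zeros((200, 200), np.uint8)
cv2.line(mask, (20, 30), (180, 160), 255, 2)

_, binary = cv2.threshold(mask, 127, 255, cv2.THRESH_BINARY)

# Extract crack outlines as polygons (candidate shapes for the model).
contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                               cv2.CHAIN_APPROX_SIMPLE)

damages = []
for c in contours:
    if cv2.contourArea(c) < 20:                 # drop segmentation noise
        continue
    polygon = cv2.approxPolyDP(c, 1.5, True)    # simplified outline
    x, y, w, h = cv2.boundingRect(c)
    damages.append({
        "polygon": polygon.reshape(-1, 2).tolist(),
        "length_px": max(w, h),                 # crude length estimate
        "area_px": float(cv2.contourArea(c)),
    })

print(f"extracted {len(damages)} crack shapes")
```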
Augmented Urban Model: A Tangible User Interface for Supporting Urban Planning Processes
(2011)
In the architectural and urban design context, physical and digital models fulfil different, unconnected tasks and functions in the design and planning process owing to their largely complementary properties and qualities. While physical models are used above all as a means of representation and communication but also as a working tool, digital models additionally support the evaluation of a design through computer-aided analysis and simulation techniques.
The work presented in this working paper analysed, in addition to the use of the model as an analogue and digital design tool, the significance of the model for the working process as well as precedents from the field of tangible user interfaces related to architecture and urban design. Based on these considerations, a prototype was developed, the Augmented Urban Model, which builds among other things on early projects and research approaches from the field of tangible user interfaces, such as the metaDESK by Ullmer and Ishii and the urban planning tool Urp by Underkoffler and Ishii.
The Augmented Urban Model aims to build the bridge between real and digital model worlds that is missing in the current design and planning process, and at the same time to create a new tangible user interface that enables the manipulation of and interaction with digital data in real space.
During the Nazi era, Bauhausstraße 11 housed numerous institutions of health policy. The building has now become the subject of a research project, and in future its involvement in National Socialist crimes will also be commemorated on site. This book documents and reflects on the remembrance work on the campus of the Bauhaus-Universität Weimar and beyond. Through interdisciplinary contributions, the building at today's Bauhausstraße 11 is situated spatially in Weimar and Thuringia, and in terms of the politics of memory in a landscape of remembrance of National Socialist crimes that has been fought for over decades.
This paper presents the development of an assessment scheme for the visual qualitative evaluation of nailed connections in existing structures, such as board trusses. With regard to further use and preservation, a quick visual inspection helps to evaluate the quality of a structure in terms of its load-bearing capacity and deformation behaviour. Tests of old and new nailed joints, in combination with a rating scheme, reveal the correlation between the load-bearing capacity and the condition of a joint. Old joints in comparatively good condition tend to exhibit better results than those in poor condition. Moreover, aged joints generally exhibit a higher load-bearing capacity than newly assembled ones.
The focus of the thesis is to process measurements acquired from a continuous monitoring system at a railway bridge. Temperature, strain and ambient vibration records are analysed, and two main directions of investigation are pursued.
The first and most demanding task is to develop processing routines able to extract modal parameters from ambient vibration measurements. For this purpose, reliable experimental models are achieved on the basis of a stochastic system identification (SSI) procedure. A fully automated algorithm based on a three-stage clustering is implemented to perform a modal parameter estimation for every single measurement. After selecting a baseline of modal parameters, the evolution of eigenfrequencies is studied and correlated to environmental and operational factors.
The second aspect deals with the structural response to passing trains. Corresponding triggered records of strain and temperature are processed, and their assessment is accomplished using the average strains induced by each train as the reference parameter. Three influences, due to speed, temperature and loads, are distinguished and treated individually. An attempt to estimate the maximum response variation due to each factor is also carried out.
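A minimal sketch of the kind of clustering step used to automate modal parameter estimation (a generic frequency/damping pole-grouping with SciPy, not the thesis' exact three-stage algorithm; the pole values are made up):

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Hypothetical pole estimates from SSI runs at several model orders:
# columns = (natural frequency [Hz], damping ratio [-]).
poles = np.array([
    [2.51, 0.012], [2.49, 0.011], [2.52, 0.013],   # physical mode ~2.5 Hz
    [6.03, 0.021], [5.98, 0.019], [6.01, 0.020],   # physical mode ~6.0 Hz
    [4.10, 0.150],                                  # spurious (high damping)
])

# Normalise features so frequency dominates but damping still contributes.
features = np.column_stack([poles[:, 0] / poles[:, 0].max(),
                            poles[:, 1] / 0.05])

# Hierarchical clustering; cut the dendrogram at a distance threshold.
labels = fcluster(linkage(features, method="average"), t=0.05,
                  criterion="distance")

# Keep only clusters with enough members -> candidate physical modes.
for lab in np.unique(labels):
    members = poles[labels == lab]
    if len(members) >= 3:
        print(f"mode at {members[:, 0].mean():.2f} Hz, "
              f"damping {members[:, 1].mean():.3f}")
```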
One of the most important subjects in hydraulic engineering is the reliable estimation of the transverse distribution of bed and wall shear stresses in rectangular channels. This study makes use of the Tsallis entropy, genetic programming (GP) and adaptive neuro-fuzzy inference system (ANFIS) methods to assess the shear stress distribution (SSD) in rectangular channels.
To evaluate the results of the Tsallis entropy, GP and ANFIS models, laboratory observations were used in which shear stress was measured with an optimized Preston tube; this was used to measure the SSD for various aspect ratios in rectangular channels. To investigate the shear stress percentage, 10 data series with a total of 112 different data points were used. The results of the sensitivity analysis show that the most influential parameter for the SSD in smooth rectangular channels is the dimensionless parameter B/H, where B is the channel width and H the flow depth. With the parameters (b/B) and (B/H) for the bed and (z/H) and (B/H) for the wall as inputs, the GP model performed better than the others. Based on the analysis, it can be concluded that the use of the GP and ANFIS algorithms is more effective in estimating shear stress in smooth rectangular channels than the Tsallis entropy-based equations.
In the first part of this research, the utilization of tuned mass dampers in the vibration control of tall buildings during earthquake excitations is studied. The main issues, such as optimizing the parameters of the dampers and studying the effects of the frequency content of the target earthquakes, are addressed.
The non-dominated sorting genetic algorithm method is improved by upgrading genetic operators, and is utilized to develop a framework for determining the optimum placement and parameters of dampers in tall buildings. A case study is presented in which the optimal placement and properties of dampers are determined for a model of a tall building under different earthquake excitations through computer simulations.
In the second part, a novel framework for the brain-learning-based intelligent seismic control of smart structures is developed. In this approach, a deep neural network learns how to improve structural responses during earthquake excitations using feedback control.
The reinforcement learning method is improved and utilized to develop a framework for training the deep neural network as an intelligent controller. The efficiency of the developed framework is examined through two case studies, including a single-degree-of-freedom system and a high-rise building under different earthquake excitation records.
The results show that the controller gradually develops an optimum control policy to reduce the vibrations of a structure under earthquake excitation through a cyclical process of actions and observations.
It is shown that the controller efficiently improves the structural responses under new earthquake excitations for which it was not trained. Moreover, it is shown that the controller has a stable performance under uncertainties.
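A minimal sketch of the action-observation cycle such a learned controller runs at deployment time (a generic SDOF simulation with a stand-in linear policy, not the dissertation's deep-network framework; all parameters and the excitation are made up):

```python
import numpy as np

# Hypothetical SDOF structure: m*x'' + c*x' + k*x = -m*ag + f_control
m, c, k, dt = 1.0e5, 8.0e3, 4.0e6, 0.01   # kg, N s/m, N/m, s

def step(state, ag, force):
    """One semi-implicit (symplectic) Euler step of the equation of motion."""
    x, v = state
    a = (-m * ag + force - c * v - k * x) / m
    v = v + a * dt
    return np.array([x + v * dt, v])

# Stand-in for the trained policy network: observation -> control force.
# A real implementation would evaluate a deep network trained by RL here.
gains = np.array([-2.0e5, -5.0e4])          # made-up state-feedback gains

state = np.zeros(2)
rng = np.random.default_rng(1)
for t in range(1000):
    ag = 0.5 * np.sin(2 * np.pi * 1.5 * t * dt) + 0.1 * rng.standard_normal()
    force = float(gains @ state)            # action from observation
    state = step(state, ag, force)          # environment responds
print(f"final displacement: {state[0]:.4e} m")
```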
Lara Schrijver is an assistant professor at the Faculty of Architecture of the TU Delft. She is one of three program leaders for a new research program in the department of architecture, ‘The Architectural Project and its Foundations’. Schrijver holds degrees in architecture from Princeton University and the TU Delft. She received her Ph.D. from the TU Eindhoven in 2005. Schrijver has taught design and theory courses, and contributed to conferences in the Netherlands as well as abroad. She was an editor for OASE, journal for architecture, for ten years, and was co-organizer of the 2006 conference ‘The Projective Landscape’. Her current work revolves around the role of architecture in the city, and its responsibility in defining the public domain. Her first book, Radical Games, on the influence of the 1960s on contemporary discourse, is forthcoming in the spring of 2009.
Self-healing materials have recently become more popular due to their capability to autonomously and autogenously repair damage in cementitious materials. The concept of self-healing gives the damaged material the ability to recover its stiffness. This sets it apart from a material that does not undergo healing: once such a material is damaged, it cannot sustain loading due to the stiffness degradation. Numerical modeling of self-healing materials is still in its infancy. Numerous experimental studies in the literature describe the self-healing behavior of cementitious materials; however, few numerical investigations have been undertaken.
The thesis presents an analytical framework for self-healing and super healing materials based on continuum damage-healing mechanics. Through this framework, we aim to describe the recovery and strengthening of material stiffness and strength. A simple damage healing law is proposed and applied to concrete. The proposed damage-healing law is based on a new time-dependent healing variable. The damage-healing model is applied to isotropic concrete at the macroscale under tensile load. Both autonomous and autogenous self-healing mechanisms are simulated under different loading conditions. These two mechanisms are denoted in the present work as coupled and uncoupled self-healing mechanisms, respectively. In the coupled self-healing we assume that healing occurs simultaneously with damage evolution, while in the uncoupled self-healing we assume that healing occurs when the material is deformed and subjected to a rest period (damage is constant). In order to describe both coupled and uncoupled healing mechanisms, a one-dimensional element is subjected to different types of loading history.
In the same context, a derivation of the nonlinear self-healing theory is given, and a comparison of linear and nonlinear damage-healing models is carried out using both coupled and uncoupled self-healing mechanisms. The nonlinear healing theory includes generalized nonlinear and quadratic healing models. The healing efficiency is studied by varying the values of the healing rest period and the parameter describing the material characteristics. In addition, a theoretical formulation of different self-healing variables is presented for both isotropic and anisotropic materials. The healing variables are defined based on the recovery in elastic modulus, shear modulus, Poisson's ratio, and bulk modulus. The evolution of the healing variable calculated based on cross-section, as a function of the healing variable calculated based on elastic stiffness, is presented under both the hypothesis of elastic strain equivalence and that of elastic energy equivalence. The components of the fourth-rank healing tensor are also obtained for the cases of isotropic elasticity, plane stress and plane strain.
Recent research revealed that self-healing also presents a crucial solution for the strengthening of materials. This new concept has been termed "super healing": once the stiffness of the material is recovered, further healing can act as a strengthening mechanism. In the present thesis, a new theory of super healing materials is defined for the isotropic and anisotropic cases using sound mathematical and mechanical principles, which are applied in linear and nonlinear super healing theories. Additionally, the link between the proposed theory and the theory of undamageable materials is outlined. In order to describe the super healing efficiency in the linear and nonlinear theories, the ratio of effective stress to nominal stress is calculated as a function of the super healing variable. In addition, the hypotheses of elastic strain and elastic energy equivalence are applied. In the same context, a new super healing matrix in plane strain is proposed based on continuum damage-healing mechanics.
In the present work, we also focus on the numerical modeling of the impact behavior of reinforced concrete slabs using the commercial finite element package Abaqus/Explicit. Plain and reinforced concrete slabs with an unconfined compressive strength of 41 MPa are simulated under the impact of an ogive-nosed hard projectile. The constitutive modeling of the concrete and of the steel reinforcement bars is performed using the Johnson-Holmquist-2 damage model and the Johnson-Cook plasticity model, respectively. Damage diameters and residual velocities obtained with the numerical model are compared with the experimental results, and the effects of steel reinforcement and projectile diameter are studied.
The initial shear modulus, Gmax, of soil is an important parameter for a variety of geotechnical design applications. This modulus is typically associated with shear strain levels of about 5·10⁻³ % and below. The critical role of soil stiffness at small strains in the design and analysis of geotechnical infrastructure is now widely accepted.
Gmax is a key parameter in small-strain dynamic analyses, such as those predicting soil behavior or soil-structure interaction during earthquakes, explosions, or machine and traffic vibration, where it is necessary to know how the shear modulus degrades from its small-strain value as the level of shear strain increases. Gmax can be equally important for small-strain cyclic situations, such as those caused by wind or wave loading, as well as for small-strain static situations. Gmax may also be used as an indirect indicator of various soil parameters, as it in many cases correlates well with other soil properties such as density and sample disturbance. In recent years, a technique using bender elements was developed to investigate the small-strain shear modulus Gmax.
The objective of this thesis is to study the initial shear stiffness of various sands with different void ratios, densities and grain size distributions under dry and saturated conditions, and then to compare empirical equations for predicting Gmax, as well as results from other testing devices, with the bender element results of this study.
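For context, the standard relation behind the bender element technique (textbook form, not quoted from the thesis): the shear wave velocity V_s follows from the tip-to-tip travel distance L_tt and the travel time t, and the small-strain shear modulus from the soil density ρ:

\[ V_s = \frac{L_{tt}}{t}, \qquad G_{max} = \rho\, V_s^{2} \]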
Action Spaces in Dresden
(2012)
This study examines the action spaces of respondents in Dresden by means of a standardized survey (n=360). The activities underlying the action spaces are differentiated into shopping for daily needs, going out (e.g. to a café, pub or restaurant), outdoor recreation (e.g. going for a walk, using green spaces) and private sociability (e.g. celebrations, visiting relatives/friends). The action radius is differentiated into the respondent's own neighbourhood, the adjacent neighbourhood and the rest of the city. In order to form a comprehensive characteristic value for a respondent's average action radius from the four activities considered, a model for this characteristic value is developed. The study concludes that the age of the respondents has a significant, albeit small, influence on the action radius. Net household income has a conditionally significant, likewise small, influence on the everyday activities of the respondents.
The accurate representation of aerodynamic forces is essential for a safe, yet reasonable design of long-span bridges subjected to wind effects. In this paper, a novel extension of the Pseudo-three-dimensional Vortex Particle Method (Pseudo-3D VPM) is presented for Computational Fluid Dynamics (CFD) buffeting analysis of line-like structures. This extension entails an introduction of free-stream turbulent fluctuations, based on the velocity-based turbulence generation. The aerodynamic response of a long-span bridge is obtained by subjecting the 3D dynamic representation of the structure to correlated free-stream turbulence in two-dimensional (2D) fluid planes, which are positioned along the bridge deck. The span-wise correlation of the free-stream turbulence between the 2D fluid planes is established based on Taylor's hypothesis of frozen turbulence. Moreover, the application of the laminar Pseudo-3D VPM is extended to a multimode flutter analysis. Finally, the structural response from the Pseudo-3D flutter and buffeting analyses is verified with the response, computed using the semi-analytical linear unsteady model in the time-domain. Meaningful merits of the turbulent Pseudo-3D VPM with respect to the linear unsteady model are the consideration of the 2D aerodynamic nonlinearity, nonlinear fluid memory, vortex shedding and local non-stationary turbulence effects in the aerodynamic forces. The good agreement of the responses for the two models in the 3D analyses demonstrates the applicability of the Pseudo-3D VPM for aeroelastic analyses of line-like structures under turbulent and laminar free-stream conditions.
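For reference, Taylor's hypothesis of frozen turbulence, on which the span-wise correlation between the 2D fluid planes is based, states that the fluctuation field u' is advected unchanged with the mean wind speed U over a time lag τ (standard textbook form, quoted here for context):

\[ u'(x,\ t + \tau) = u'(x - U\tau,\ t) \]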
A vast number of existing buildings were constructed before the development and enforcement of seismic design codes and therefore run the risk of being severely damaged under seismic excitations. This poses a threat not only to human life but also to the socio-economic stability of the affected area. It is therefore necessary to assess the present vulnerability of such buildings in order to make educated decisions about risk mitigation through seismic strengthening techniques such as retrofitting. However, it is not feasible, economically or in terms of time, to inspect, repair, and augment every old building on an urban scale. As a result, reliable rapid screening methods, namely Rapid Visual Screening (RVS), have garnered increasing interest among researchers and decision-makers alike. In this study, the effectiveness of five different Machine Learning (ML) techniques in vulnerability prediction applications is investigated. Damage data from four different earthquakes, in Ecuador, Haiti, Nepal, and South Korea, were utilized to train and test the developed models. Eight performance modifiers were implemented as variables with supervised ML. The investigations in this paper illustrate that the vulnerability classes assessed by the ML techniques were very close to the actual damage levels observed in the buildings.
Long-span bridges are prone to wind-induced vibrations. Therefore, a reliable representation of the aerodynamic forces acting on a bridge deck is of major significance for the design of such structures. This paper presents a systematic study of the two-dimensional (2D) fluid-structure interaction of a bridge deck under smooth and turbulent wind conditions. Aerodynamic forces are modeled by two approaches: a computational fluid dynamics (CFD) model and six semi-analytical models. The vortex particle method is utilized for the CFD model, and the free-stream turbulence is introduced by seeding vortex particles upstream of the deck with prescribed spectral characteristics. The employed semi-analytical models are based on the quasi-steady and linear unsteady assumptions and on aerodynamic coefficients obtained from CFD analyses.
The underlying assumptions of the semi-analytical aerodynamic models are used to interpret the results for buffeting forces and aeroelastic response due to free-stream turbulence in comparison with the CFD model. Extensive discussions are provided to analyze the effects of linear fluid memory and quasi-steady nonlinearity from a CFD perspective. The outcome of the analyses indicates that fluid memory is a governing effect in the buffeting forces and aeroelastic response, while the effect of the nonlinearity is overestimated by the quasi-steady models. Finally, flutter analyses are performed and the obtained critical velocities are compared with wind tunnel results, followed by a brief examination of the post-flutter behavior. The results of this study provide a deeper understanding of the extent to which the applied models are able to replicate the physical processes of fluid-structure interaction phenomena in bridge aerodynamics and aeroelasticity.
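For orientation, the linearized quasi-steady buffeting lift has the classical form below (a textbook expression underlying models of this family, not necessarily the exact formulation employed in the paper), with air density ρ, mean wind speed U, deck width B, static lift and drag coefficients C_L and C_D, lift slope dC_L/dα, and longitudinal and vertical gust components u(t) and w(t):

\[ L_b(t) = \frac{1}{2}\rho U^2 B \left( 2 C_L \frac{u(t)}{U} + \left( \frac{dC_L}{d\alpha} + C_D \right) \frac{w(t)}{U} \right) \]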
Superplasticizers are utilized both to improve the fluidity of concrete during placement and to reduce its water content. Both effects also have an impact on the properties of the hardened concrete. As a side effect, the presence of superplasticizers strongly retards the strength development of concrete, which may be an economic drawback in concrete manufacturing. The present work is aimed at gaining insights into the causes of the retarding effect of superplasticizers on the hydration of Portland cement. To simplify the complex interactions occurring during the hydration of Portland cement, the majority of the work focuses on the interaction of superplasticizer and tricalcium silicate (Ca3SiO5 or C3S, the main compound of Portland cement clinker). The tests are performed in three main parts, accompanied by methods such as isothermal conduction calorimetry, electrical conductivity, electron microscopy, ICP-OES, TOC, and analytical ultracentrifugation.
In the first main part, and based on the interaction of cations and the anionic charges of polymers, the interactions between calcium ions and superplasticizers are investigated. As a main effect, calcium ions are complexed by the functional groups of the polymers (carboxy, sulfonic). Calcium ions may be both dissolved in the aqueous phase and a constituent of particle interfaces. It is furthermore shown that superplasticizers induce the formation of nanoscaled particles which are dispersed in the aqueous phase (cluster formation). Analogous to recent findings in the field of biomineralization, it is reasonable to assume that these nanoparticles influence crystal growth through their assembly process.
Based on the assumption that superplasticizers hinder dissolution, precipitation, or both, and thereby retard cement hydration, their impact on the separate reactions is investigated. In experiments addressing the solubility of C-S-H phases and portlandite, it is shown that the complexation of calcium ions in the aqueous phase by the functional groups of the polymers increases the solubility of portlandite. In contrast, in the case of C-S-H solubility, the complexation of calcium ions in solution leads to a decrease of the calcium ion concentration in the aqueous phase. These effects are explained by differences in the adsorption of polymers on C-S-H phases and portlandite. It is proposed that adsorption is stronger on C-S-H phases than on portlandite due to the larger specific surface area of the C-S-H phases. Following this, it is argued that before polymers are able to adsorb on C-S-H phases, their functional groups must be screened by calcium ions in the aqueous phase. It is further shown that data regarding the impact of superplasticizers on the unconstrained dissolution rate of C3S do not provide a clear relation to the overall retarding effect occurring during the hydration of C3S: both increased and decreased dissolution rates with respect to the reference sample are detected. If the complexation capability of the superplasticizers is taken into account, a reduced dissolution rate of C3S is also determined. Despite the fact that the global hydration process is accelerated, the addition of calcite leads to a slower dissolution rate. Thus, a hindered unconstrained dissolution of C3S as a possible cause of the retarding effect remains open for discussion. In the last section of this part, the pure crystallization of the hydrate phases (C-S-H phases, portlandite) is examined. The results clearly show that superplasticizers prolong the induction time and modify the rate of crystal growth during pure crystallization, in particular due to the complexation of ions in solution. But this effect is insufficient to account for the overall retarding effect. Further important factors are the blocking of crystal growth faces by adsorbed polymers and the dispersion of nanoscaled particles, which hinders their agglomeration into crystals.
In the last main part of the work, the previously gathered results are utilized to investigate the hydration kinetics. During hydration, dissolution and precipitation occur in parallel. Special attention is paid to the ion composition of the aqueous phase of C3S pastes and suspensions in order to determine the rate-limiting step. All in all, it is concluded that the retarding effect of superplasticizers on the hydration of tricalcium silicate is based on the retardation of the crystallization of the hydrate phases (C-S-H phases and portlandite). Here, the two effects of complexation of calcium ions at surfaces and stabilization of nanoscaled particles are of major importance. These mechanisms may be partly compensated by template performance and by the increase in solubility due to the complexation of ions in solution. The decreased dissolution rate of C3S in the presence of superplasticizers during the hydration process occurring in parallel can only be assessed indirectly, by means of the development of the ion concentrations in the aqueous phase (reaction path). Whether this observation is the cause or the consequence within the dissolution-precipitation process, and therefore accounts for the retarding effect, remains a topic for further investigation.
Besides these results, it is shown that superplasticizers can be associated chemically with inhibitors, because they reduce the frequency factor governing the end of the induction period. Since the activation energy is largely unaffected, the basic reaction mechanism is shown to be preserved. Furthermore, a method was developed which for the first time permits the in-situ determination of ion concentrations in the aqueous phase of C3S pastes. It is shown that during C3S hydration the ion concentration in the aqueous phase evolves in correspondence with the heat release rate (calorimetry). The method permits the differentiation of the acceleration period into three stages. It is emphasized that the crystallization of the product phases of C3S hydration, namely C-S-H phases and portlandite, is responsible for the end of the induction period.
In recent years, the discussion of digitalization has arrived in the media, at conferences, and in committees of the construction and real estate industry. While some areas are producing innovations and some contributors can be described as pioneers, other topics still show deficits with regard to digital transformation. The building permit process can also be counted in this category. However much architects and engineers in planning offices rely on innovative methods, building documents too often remain in paper form, or are printed out after electronic submission to the authority. Existing resources, for example in the form of a building information model, which could provide support in the building permit process, are not being taken advantage of. In order to use digital tools to support decision-making by the building permit authorities, it is necessary to understand the current situation and to question conditions before pursuing the overall automation of internal authority processes as the sole solution.
With a substantive-organizational consideration of the relevant areas that influence the determination of building permit eligibility, an improvement of the building permit procedure within the authorities is proposed. Complex areas, such as the legal situation, the use of technology, and subjective alternatives of action, are identified and structured. The development of a model for determining building permit eligibility both conveys an understanding of the influencing factors and increases transparency for all parties involved.
In addition to an international literature review, an empirical study served as the research method. The empirical study was conducted in the form of qualitative expert interviews in order to determine the current state in the field of building permit procedures. The collected data material was processed and subsequently subjected to a software-supported content analysis. The results were processed, in combination with findings from the literature review, in various analyses to form the basis for a proposed model.
The result of the study is a decision model that closes the gap between the current processes within the building authorities and an overall automation of the building permit review process. The model offers support to examiners and applicants in determining building permit eligibility, through its process-oriented structuring of decision-relevant facts. The theoretical model could be transferred into practice in the form of a web application.
The capitalization of the 'certified' sustainable building sector is investigated through the power theory of value approach of Jonathan Nitzan and Shimshon Bichler. The study begins by questioning why environmental problems are among the first items on the agenda and by presenting the ideas of scholars who approach the subject skeptically, because the predominant literature underlying the necessity and prominence of the topic is already well known and adopted by the majority. Based on the theory developed by Nitzan and Bichler, the concepts of capitalization, strategic sabotage, power, legitimacy, and obedience are discussed. The hypothesis that "the absentee owners of the construction sector, holding the whip hand and capitalizing the ecology, control the growth and the creativity of green building production and make it carbon-dependent, in order to increase their profit margin" is questioned. To strengthen the arguments of the hypothesis, the factors, institutional arrangements, and value measurement methods that directly affect net present value are investigated in detail at both the corporation and the building scale, because net present value, or capitalization, is asserted by Nitzan and Bichler to be the most important criterion for investment decisions in the capitalist economic system. To trace the implications of power and of the strategic sabotage that power causes, as the empirical dimension of this dissertation, an interface exploring the correlational ties between climate-responsive architecture, the ever-changing political, economic, and social contexts, and building economics praxis by decade is developed, and expert interviews are conducted with design teams and appraisers.
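For reference, the textbook definition of the net present value on which the capitalization argument rests (the standard formula, quoted for context rather than taken from the dissertation), with expected cash flows CF_t discounted at rate r over T periods:

\[ NPV = \sum_{t=0}^{T} \frac{CF_t}{(1+r)^{t}} \]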
A parametric method for building design optimization based on Life Cycle Assessment - Appendix
(2016)
The building sector is responsible for a large share of human environmental impacts, over which architects and planners have a major influence. The main objective of this thesis is to develop a method for environmental building design optimization based on Life Cycle Assessment (LCA) that is applicable as part of the design process. The research approach includes a thorough analysis of LCA for buildings in relation to the architectural design stages and the establishment of a requirement catalogue. The key concept of the novel method, called Parametric Life Cycle Assessment (PLCA), is to combine LCA with parametric design. The application of this method to three examples shows that building designs can be optimized time-efficiently and holistically from the beginning of the most influential early design stages, an achievement which has not been possible until now.
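A toy illustration of the parametric idea (not the PLCA method itself; the build-up, masses and impact factors are invented placeholders, not real LCA data): embodied global warming potential is recomputed automatically as a design parameter changes.

```python
# Toy parametric LCA: embodied GWP as a function of a design parameter.
# Impact factors are invented placeholder values, not real LCA data.
GWP_FACTORS = {"concrete": 0.11, "timber": -0.45, "insulation": 1.2}  # kg CO2e/kg

def embodied_gwp(wall_area_m2: float, insulation_thickness_m: float) -> float:
    """Sum mass * factor over the materials of a simple wall build-up."""
    masses = {
        "concrete": wall_area_m2 * 0.20 * 2400,                    # 20 cm shell
        "insulation": wall_area_m2 * insulation_thickness_m * 30,  # EPS-like
        "timber": wall_area_m2 * 0.02 * 500,                       # cladding
    }
    return sum(masses[m] * GWP_FACTORS[m] for m in masses)

# Sweep a design parameter and report the impact -- the kind of loop a
# parametric design tool can drive automatically in early design stages.
for t in (0.10, 0.20, 0.30):
    print(f"insulation {t:.2f} m -> {embodied_gwp(120.0, t):,.0f} kg CO2e")
```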
The aim of my research is to observe the variation in energy efficiency of a typical multi-story office building under exposure to different climatic conditions. Energy efficiency requirements in building codes or energy standards are among the most important single measures for buildings' energy efficiency. This study is therefore set up for a better understanding of how the energy efficiency of a building changes under the effect of adverse to moderate climatic conditions, which have a notable effect on the operation of a building.
This thesis is structured in three balanced, conceptual steps. Following the aim of the project, a virtual building model is analyzed under the effect of seven distinct climatic conditions, namely those of New Delhi, Mumbai, Berlin, Lisbon, Copenhagen, Dubai and Montreal. The first task is a complete literature review covering the scope of similar research, a detailed study of the problems, and the theoretical background of all the concepts implemented to obtain the numerical results. This chapter also comprises a detailed study of the climatic conditions of the above-mentioned cities. Climatic traits such as temperature variation, the number of heating and cooling degree days, relative humidity, temperature range and comfort zone charts for the specified cities are studied in detail. This study helps in understanding the effect of these adverse to moderate climates on the operation of the building. In the second step, the virtual building model is prepared on the software platform Revit Structures. This virtual building model is not necessarily a complete building, but it has the relevant functionalities of a real building. Energy analysis as well as heating and cooling analysis are performed on this virtual building model to study its operational behavior under the different climatic conditions in detail. Upon completion of these two tasks, two strands of results are available: the literature review on the one hand and the numerical results on the other. Finally, a comparative assessment of the energy performance of the building under these varying climatic conditions is presented. This is followed by a prediction of the thermal comfort level inside the building based on Fanger's PMV model. A detailed understanding of the literature and the numerical values makes it possible to predict the thermal comfort index inside the building.
The conclusion of this master's thesis focuses mainly on possible improvements of the energy efficiency requirements in energy codes, differentiated according to specific locations. The initial aim of the hypothesis, to study the impact of climatic variation on the energy performance of a building, is fulfilled; but since such topics have very deep and broad roots, there remains considerable scope for further work.
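A minimal sketch of the heating/cooling degree-day counts mentioned above (the standard textbook definition with an assumed 18 °C base temperature; the daily means are made up for two contrasting climates, not data from the thesis):

```python
# Heating/cooling degree days from daily mean temperatures (base 18 C).
BASE = 18.0  # assumed base temperature in degrees Celsius

def degree_days(daily_means):
    hdd = sum(max(0.0, BASE - t) for t in daily_means)
    cdd = sum(max(0.0, t - BASE) for t in daily_means)
    return hdd, cdd

# Made-up week of daily mean temperatures for two contrasting climates.
berlin_week = [2.0, 3.5, 1.0, 4.2, 6.0, 5.5, 3.0]
dubai_week = [28.0, 29.5, 31.0, 30.2, 32.5, 33.0, 31.5]

for name, week in [("Berlin", berlin_week), ("Dubai", dubai_week)]:
    hdd, cdd = degree_days(week)
    print(f"{name}: HDD={hdd:.1f}, CDD={cdd:.1f}")
```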
A Machine Learning Framework for Assessing Seismic Hazard Safety of Reinforced Concrete Buildings
(2020)
Although averting a seismic disturbance and its physical, social, and economic disruption is practically impossible, advancements in computational science and numerical modeling equip humanity to predict its severity, understand the outcomes, and prepare for post-disaster management. Many aged buildings that are still in service exist amidst developed metropolitan areas. These buildings were designed before national seismic codes were established or without the introduction of construction regulations. In that case, risk reduction is significant for developing alternatives and designing suitable models to enhance the performance of the existing structures. Such models will be able to classify risks and casualties related to possible earthquakes through emergency preparation. It is thus crucial to recognize structures that are susceptible to earthquake vibrations and need to be prioritized for retrofitting. However, the behavior of each building under seismic actions cannot be studied through full structural analysis, as this would be unrealistic because of the rigorous computations, long duration, and substantial expenditure. This calls for a simple, reliable, and accurate process known as Rapid Visual Screening (RVS), which serves as a primary screening platform comprising an optimum number of seismic parameters and predetermined performance damage conditions for structures. In this study, the damage classification technique was studied, and the efficacy of the Machine Learning (ML) method in damage prediction via a Support Vector Machine (SVM) model was explored. The ML model is trained and tested separately on damage data from four different earthquakes, namely those in Ecuador, Haiti, Nepal, and South Korea. Each dataset consists of varying numbers of input data and eight performance modifiers. Based on the study and the results, the ML model using SVM classifies the given input data into the corresponding classes and performs the hazard safety evaluation of buildings.
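A minimal sketch of such an SVM damage classifier (scikit-learn; the features and damage grades are synthetic stand-ins, not the Ecuador/Haiti/Nepal/South Korea survey data):

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-in for an RVS dataset: 8 performance modifiers per
# building and a damage-grade label derived from them.
rng = np.random.default_rng(42)
X = rng.random((500, 8))                       # 8 performance modifiers
s = X.sum(axis=1)
y = np.digitize(s, np.quantile(s, [0.25, 0.5, 0.75]))  # 4 damage grades

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y)

# Scale features, then fit an RBF-kernel SVM classifier.
model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0))
model.fit(X_train, y_train)
print(f"test accuracy: {model.score(X_test, y_test):.2f}")
```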
Tropical coral reefs, one of the world's oldest ecosystems, supporting some of the highest levels of biodiversity on the planet, are currently facing an unprecedented ecological crisis during this massive human-activity-induced period of extinction. Hence, tropical reefs symbolically stand for the destructive effects of human activities on nature [4], [5]. Artificial reefs are excellent examples of how architectural design can be combined with ecosystem regeneration [6], [7], [8]. However, working at the interface between the artificial and the complex and temporal nature of natural systems presents a challenge, i.a. with respect to the B-rep modelling legacy of computational modelling.
The presented doctorate investigates strategies for applying digital practice to realise what is an essential bulwark to retain reefs in impossibly challenging times. Beyond the main question of integrating computational modelling and high precision monitoring strategies in artificial coral reef design, this doctorate explores techniques, methods, and linking frameworks to support future research and practice in ecology-led design contexts.
Considering the many existing approaches to artificial coral reef design, one finds they often fall short of precisely understanding the relationships between architectural and ecological aspects (e.g. how surface design and material composition can foster coral larvae settlement, or how structural three-dimensionality enhances biodiversity) and lack an integrated underwater (UW) monitoring process. Such a process is necessary in order to gather knowledge about the ecosystem, make it available for design, and learn whether artificial structures contribute to reef regeneration or rather harm the coral reef ecosystem.
For the research, empirical experimental methods were applied: algorithmic coral reef design, high precision UW monitoring, computational modelling and simulation, validated through parallel real-world physical experimentation with two Artificial Reef Prototypes (ARPs) in Gili Trawangan, Indonesia (2012–today). Multiple discrete methods and sub-techniques were developed in seventeen computational experiments and applied in such a way that many are cross-validating and integrated into an overall framework, which is offered as a significant contribution to the field. Other main contributions include the ecosystem-aware design approach, Key Performance Indicators (KPIs) for coral reef design, algorithmic design and fabrication of Biorock cathodes, new high precision UW monitoring strategies, long-term real-world constructed experiments, new digital analysis methods and two new front-end web-based tools for reef design and monitoring. The methodological framework is a finding of the research, with many technical components that were tested and combined in this way for the very first time.
In summary, the thesis responds to the urgency and relevance in preserving marine species in tropical reefs during this massive extinction period by offering a differentiated approach towards artificial coral reefs – demonstrating the feasibility of digitally designing such ‘living architecture’ according to multiple context and performance parameters. It also provides an in-depth critical discussion of computational design and architecture in the context of ecosystem regeneration and Planetary Thinking. In that respect, the thesis functions as both theoretical and practical background for computational design, ecology and marine conservation – not only to foster the design of artificial coral reefs technically but also to provide essential criteria and techniques for conceiving them.
Keywords: Artificial coral reefs, computational modelling, high precision underwater monitoring, ecology in design.
For the safe and efficient operation of dams, frequent monitoring and maintenance are required. These are usually expensive, time-consuming, and cumbersome. To alleviate these issues, we propose applying a wave-based scheme for the localization and quantification of damage in dams.
To obtain high-resolution, "interpretable" images of the damaged regions, we drew inspiration from non-linear full-multigrid methods for inverse problems and applied a new cyclic multi-stage full-waveform inversion (FWI) scheme. Our approach is less susceptible to the stability issues faced by the standard FWI scheme when dealing with ill-posed problems. In this paper, we first selected an optimal acquisition setup and then applied synthetic data to demonstrate the capability of our approach in identifying a series of anomalies in dams by a mixture of reflection and transmission tomography. The results were sufficiently robust, showing the prospects of application in the field of non-destructive testing of dams.
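A structural sketch of the multi-stage (frequency-continuation) idea behind such FWI schemes (generic and toy-sized: the "forward model" is a stand-in function rather than a wave solver, and the staging is illustrative, not the authors' scheme):

```python
import numpy as np
from scipy.optimize import minimize

# Toy stand-in for a forward wave solver: predicted data depend on the
# model vector m and on the frequency band used.
def forward(m, freqs):
    return np.concatenate([np.sin(f * m) for f in freqs])

true_m = np.array([0.8, 1.7, 2.4])
bands = [(0.5, 1.0), (0.5, 1.0, 2.0), (0.5, 1.0, 2.0, 4.0)]  # low -> high

m = np.zeros(3)                    # initial model
for cycle in range(2):             # cyclic: revisit all stages
    for freqs in bands:            # multi-stage: widen the band gradually
        d_obs = forward(true_m, freqs)
        misfit = lambda m_, f=freqs, d=d_obs: np.sum((forward(m_, f) - d) ** 2)
        m = minimize(misfit, m, method="L-BFGS-B").x
print("recovered model:", np.round(m, 3))
```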
Complex vortex flow patterns around bridge piers, especially during floods, cause a scour process that can result in the failure of foundations. Abutment scour is a complex three-dimensional phenomenon that is difficult to predict, especially with traditional formulas obtained using empirical approaches such as regression. This paper presents a test of a standalone Kstar model together with five novel hybrid algorithms: bagging (BA-Kstar), dagging (DA-Kstar), random committee (RC-Kstar), random subspace (RS-Kstar), and weighted instance handler wrapper (WIHW-Kstar), to predict scour depth (ds) under clear water conditions. The dataset consists of 99 scour depth data points from flume experiments (Dey and Barbhuiya, 2005) using abutment shapes such as vertical, semicircular and 45° wing. Four dimensionless parameters, relative flow depth (h/l), excess abutment Froude number (Fe), relative sediment size (d50/l) and relative submergence (d50/h), were considered for the prediction of relative scour depth (ds/l). A portion of the dataset was used for calibration (70 %), and the remainder for model validation. Pearson correlation coefficients helped decide the relevance of the input parameter combinations, and finally four different combinations of input parameters were used. The performance of the models was assessed visually and with quantitative metrics. Overall, the best input combination for the vertical abutment shape is the combination of Fe, d50/l and h/l, while for the semicircular and 45° wing shapes the combination of Fe and d50/l is the most effective. Our results show that incorporating Fe, d50/l and h/l leads to higher performance, while involving d50/h reduces the models' predictive power for the vertical abutment shape; for the semicircular and 45° wing shapes, involving h/l and d50/h leads to larger errors. WIHW-Kstar provided the highest performance in scour depth prediction around the vertical abutment shape, while the RC-Kstar model outperformed the other models for scour depth prediction around the semicircular and 45° wing shapes.
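A minimal sketch of the bagging-style hybridisation applied in studies of this kind (scikit-learn with a k-nearest-neighbours base learner as a stand-in, since KStar is a Weka instance-based learner without a direct scikit-learn equivalent; the data are synthetic, not the Dey and Barbhuiya flume measurements):

```python
import numpy as np
from sklearn.ensemble import BaggingRegressor
from sklearn.neighbors import KNeighborsRegressor

# Synthetic stand-in data: columns mimic Fe, d50/l, h/l; target mimics ds/l.
rng = np.random.default_rng(7)
X = rng.random((99, 3))
y = 1.5 * X[:, 0] + 0.5 * X[:, 1] - 0.3 * X[:, 2] \
    + 0.05 * rng.standard_normal(99)

# Bagging ensemble around an instance-based learner (the BA-* hybrid
# pattern); `estimator=` follows current scikit-learn (>= 1.2) naming.
model = BaggingRegressor(
    estimator=KNeighborsRegressor(n_neighbors=5),
    n_estimators=30, random_state=0)
model.fit(X[:70], y[:70])                   # ~70 % calibration split
print(f"validation R^2: {model.score(X[70:], y[70:]):.2f}")
```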
Recently, the demand for housing and the usage of urban infrastructure have increased, elevating the risk to human lives from natural calamities. Occupancy demand has rapidly increased the construction rate, while inadequately designed structures remain all the more vulnerable. Buildings constructed before the development of seismic codes have an additional susceptibility to earthquake vibrations. Structural collapse causes economic losses as well as losses of human life. Applying different theoretical methods to analyze structural behavior is expensive and time-consuming. Therefore, introducing a rapid vulnerability assessment method to check structural performance is necessary for future developments. This process, as mentioned earlier, is known as Rapid Visual Screening (RVS). The technique was developed to identify, inventory, and screen structures that are potentially hazardous. Sometimes, poor construction quality means that some of the required parameters are not available; in this case, the RVS process becomes tedious. Hence, to tackle such situations, multiple-criteria decision-making (MCDM) methods open a new gateway for seismic vulnerability assessment. The different parameters required by RVS can be incorporated into MCDM, which evaluates multiple conflicting criteria in decision-making in several fields. This paper aims to bridge the gap between RVS and MCDM. Furthermore, to define the correlation between these techniques, the methodologies from the Indian, Turkish, and Federal Emergency Management Agency (FEMA) codes have been implemented. The assessed seismic vulnerabilities of the structures have been observed and compared.
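A minimal sketch of a weighted-sum MCDM ranking of buildings (generic scoring, not the specific code-based methodologies compared in the paper; the criteria, weights and scores are invented):

```python
import numpy as np

# Rows = buildings, columns = hypothetical RVS-style criteria
# (e.g. soft storey, vertical irregularity, age, soil class), 0..1 scaled.
scores = np.array([
    [0.8, 0.2, 0.6, 0.4],
    [0.3, 0.7, 0.5, 0.9],
    [0.5, 0.5, 0.9, 0.1],
])
weights = np.array([0.4, 0.2, 0.3, 0.1])    # invented criterion weights

vulnerability = scores @ weights             # weighted-sum aggregation
ranking = np.argsort(vulnerability)[::-1]    # most vulnerable first
print("priority order (building indices):", ranking.tolist())
```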