56 Bauwesen
Recently, the demand for residence and usage of urban infrastructure has increased, elevating the risk that natural calamities pose to human lives. Rising occupancy demand has rapidly accelerated the construction rate, while inadequately designed structures remain highly vulnerable. Buildings constructed before the development of seismic codes are additionally susceptible to earthquake vibrations. Structural collapse causes economic loss as well as loss of human life. Applying different theoretical methods to analyze structural behavior is expensive and time-consuming. Therefore, a rapid vulnerability assessment method for checking structural performance is necessary for future developments. The process mentioned above is known as Rapid Visual Screening (RVS). This technique was created to identify, inventory, and screen structures that are potentially hazardous. Sometimes, poor construction quality means that some of the required parameters are unavailable; in such cases, the RVS process becomes tedious. To tackle this situation, multiple-criteria decision-making (MCDM) methods open a new gateway for seismic vulnerability assessment. The various parameters required by RVS can be treated as criteria in MCDM, which evaluates multiple conflicting criteria in decision making across several fields. This paper aims to bridge the gap between RVS and MCDM. Furthermore, to define the correlation between these techniques, the methodologies from the Indian, Turkish, and Federal Emergency Management Agency (FEMA) codes have been implemented, and their assessments of the seismic vulnerability of structures have been observed and compared.
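As a hedged sketch of the MCDM idea the abstract refers to, simple additive weighting (SAW) is one common technique for combining several conflicting criteria into a single score. The parameter names and weights below are invented for illustration and are not taken from the paper:

```python
def saw_score(values, weights):
    """Weighted sum of normalized criterion values (all criteria scaled 0..1)."""
    assert len(values) == len(weights)
    total_w = sum(weights)
    return sum(v * w for v, w in zip(values, weights)) / total_w

# Hypothetical RVS-style criteria, each normalized so that 1.0 = most vulnerable.
building = {"soft_storey": 1.0, "plan_irregularity": 0.5, "age_factor": 0.8}
weights  = {"soft_storey": 0.5, "plan_irregularity": 0.2, "age_factor": 0.3}

score = saw_score(list(building.values()), list(weights.values()))
```

A higher score would flag the building for closer inspection; real MCDM applications would calibrate the weights from code provisions or expert judgment.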
Complex vortex flow patterns around bridge piers, especially during floods, cause a scour process that can result in the failure of foundations. Abutment scour is a complex three-dimensional phenomenon that is difficult to predict, especially with traditional formulas obtained through empirical approaches such as regression. This paper tests a standalone Kstar model against five novel hybrid algorithms: bagging (BA-Kstar), dagging (DA-Kstar), random committee (RC-Kstar), random subspace (RS-Kstar), and weighted instance handler wrapper (WIHW-Kstar), to predict scour depth (ds) under clear-water conditions. The dataset consists of 99 scour depth measurements from flume experiments (Dey and Barbhuiya, 2005) using vertical, semicircular, and 45° wing abutment shapes. Four dimensionless parameters, relative flow depth (h/l), excess abutment Froude number (Fe), relative sediment size (d50/l), and relative submergence (d50/h), were considered for the prediction of relative scour depth (ds/l). A portion of the dataset was used for calibration (70%), and the remainder for model validation. Pearson correlation coefficients helped decide the relevance of the input parameter combinations, and finally four different combinations of input parameters were used. The performance of the models was assessed visually and with quantitative metrics. Overall, the best input combination for the vertical abutment shape is Fe, d50/l, and h/l, while for the semicircular and 45° wing shapes the combination of Fe and d50/l is most effective. Our results show that incorporating Fe, d50/l, and h/l leads to higher performance, whereas involving d50/h reduces the models' predictive power for the vertical abutment shape, and involving h/l and d50/h leads to more error for the semicircular and 45° wing shapes.
WIHW-Kstar provided the highest performance in predicting scour depth around the vertical abutment shape, while the RC-Kstar model outperformed the other models for scour depth prediction around the semicircular and 45° wing shapes.
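The abstract mentions that Pearson correlation coefficients guided the choice of input parameter combinations. A minimal sketch of that screening step, using invented stand-in values rather than the 99 flume measurements, might look like:

```python
import math

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Toy stand-ins for one dimensionless input (Fe) and the target (ds/l).
Fe        = [0.8, 1.1, 1.4, 1.9, 2.3]
ds_over_l = [0.5, 0.7, 0.9, 1.2, 1.5]

r = pearson(Fe, ds_over_l)  # strong positive correlation for this toy data
```

Inputs with high |r| against ds/l would be retained for the candidate combinations; the study's actual selections came from its own dataset.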
For the safe and efficient operation of dams, frequent monitoring and maintenance are required. These are usually expensive, time-consuming, and cumbersome. To alleviate these issues, we propose applying a wave-based scheme for the location and quantification of damage in dams.
To obtain high-resolution, interpretable images of the damaged regions, we drew inspiration from non-linear full-multigrid methods for inverse problems and applied a new cyclic multi-stage full-waveform inversion (FWI) scheme. Our approach is less susceptible to the stability issues faced by the standard FWI scheme when dealing with ill-posed problems. In this paper, we first selected an optimal acquisition setup and then used synthetic data to demonstrate the capability of our approach to identify a series of anomalies in dams through a mixture of reflection and transmission tomography. The results were sufficiently robust, showing the prospects for application in the non-destructive testing of dams.
Tropical coral reefs, one of the world's oldest ecosystems, which support some of the highest levels of biodiversity on the planet, are currently facing an unprecedented ecological crisis during this massive human-activity-induced period of extinction. Hence, tropical reefs stand symbolically for the destructive effects of human activities on nature [4], [5]. Artificial reefs are excellent examples of how architectural design can be combined with ecosystem regeneration [6], [7], [8]. However, working at the interface between the artificial and the complex, temporal nature of natural systems presents a challenge, inter alia with respect to the B-rep modelling legacy of computational modelling.
The presented doctorate investigates strategies for applying digital practice to realise what is an essential bulwark for retaining reefs in impossibly challenging times. Beyond the main question of integrating computational modelling and high-precision monitoring strategies in artificial coral reef design, this doctorate explores techniques, methods, and linking frameworks to support future research and practice in ecology-led design contexts.
Considering the many existing approaches to artificial coral reef design, one finds they often fall short in precisely understanding the relationships between architectural and ecological aspects (e.g. how surface design and material composition can foster coral larvae settlement, or how structural three-dimensionality enhances biodiversity) and lack an integrated underwater (UW) monitoring process. Such a process is necessary in order to gather knowledge about the ecosystem and make it available for design, and to learn whether artificial structures contribute to reef regeneration or instead harm the coral reef ecosystem.
For the research, empirical experimental methods were applied: algorithmic coral reef design, high-precision UW monitoring, and computational modelling and simulation, validated through parallel real-world physical experimentation with two Artificial Reef Prototypes (ARPs) in Gili Trawangan, Indonesia (2012–today). Multiple discrete methods and sub-techniques were developed in seventeen computational experiments and applied in such a way that many cross-validate each other and integrate into an overall framework, which is offered as a significant contribution to the field. Other main contributions include the ecosystem-aware design approach, Key Performance Indicators (KPIs) for coral reef design, algorithmic design and fabrication of Biorock cathodes, new high-precision UW monitoring strategies, long-term real-world constructed experiments, new digital analysis methods, and two new front-end web-based tools for reef design and reef monitoring. The methodological framework is itself a finding of the research, with many technical components that were tested and combined in this way for the first time.
In summary, the thesis responds to the urgency of preserving marine species in tropical reefs during this massive extinction period by offering a differentiated approach towards artificial coral reefs, demonstrating the feasibility of digitally designing such 'living architecture' according to multiple context and performance parameters. It also provides an in-depth critical discussion of computational design and architecture in the context of ecosystem regeneration and Planetary Thinking. In that respect, the thesis serves as both theoretical and practical background for computational design, ecology, and marine conservation: not only to foster the technical design of artificial coral reefs but also to provide essential criteria and techniques for conceiving them.
Keywords: Artificial coral reefs, computational modelling, high precision underwater monitoring, ecology in design.
A Machine Learning Framework for Assessing Seismic Hazard Safety of Reinforced Concrete Buildings
(2020)
Although averting a seismic disturbance and its physical, social, and economic disruption is practically impossible, advancements in computational science and numerical modeling can equip humanity to predict its severity, understand the outcomes, and prepare for post-disaster management. Many buildings in developed metropolitan areas are aging yet still in service. These buildings were also designed before national seismic codes were established or construction regulations introduced. In that case, risk reduction is significant for developing alternatives and designing suitable models to enhance the existing structures' performance. Such models will be able to classify risks and casualties related to possible earthquakes for emergency preparation. Thus, it is crucial to recognize structures that are susceptible to earthquake vibrations and need to be prioritized for retrofitting. However, each building's behavior under seismic actions cannot be studied through full structural analysis, as this would be unrealistic because of the rigorous computations, long duration, and substantial expenditure involved. This calls for a simple, reliable, and accurate process known as Rapid Visual Screening (RVS), which serves as a primary screening platform comprising an optimum number of seismic parameters and predetermined performance damage conditions for structures. In this study, the damage classification technique was studied, and the efficacy of Machine Learning (ML) in damage prediction was explored via a Support Vector Machine (SVM) model. The ML model is trained and tested separately on damage data from four different earthquakes, namely in Ecuador, Haiti, Nepal, and South Korea. Each dataset consists of a varying number of input data and eight performance modifiers.
Based on the study and its results, the SVM-based ML model classifies the given input data into the corresponding classes and accomplishes the hazard safety evaluation of buildings.
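A hedged sketch of the supervised train-and-predict workflow described above, using scikit-learn's SVC as the Support Vector Machine. The two-feature vectors stand in for the eight performance modifiers, and all values and labels are invented:

```python
from sklearn.svm import SVC

# Toy "performance modifier" vectors (2 features for brevity) and
# damage-class labels: 0 = light damage, 1 = severe damage.
X_train = [[0.10, 0.20], [0.20, 0.10], [0.15, 0.25],
           [0.90, 0.80], [0.80, 0.90], [0.85, 0.75]]
y_train = [0, 0, 0, 1, 1, 1]

# Fit a linear-kernel SVM on the labeled damage data.
model = SVC(kernel="linear", C=1.0)
model.fit(X_train, y_train)

# Classify two unseen buildings by their modifier vectors.
predictions = model.predict([[0.12, 0.18], [0.88, 0.82]]).tolist()
```

In the study itself, each earthquake's dataset would be split into training and test portions and evaluated with classification metrics rather than inspected by hand.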
The aim of my research is to observe the variation in energy efficiency of a typical multi-story office building under exposure to different climatic conditions. Energy efficiency requirements in building codes or energy standards are among the most important single measures for buildings' energy efficiency. Therefore, this study is set up for a better understanding of how the energy efficiency of a building changes under the effect of adverse to moderate climatic conditions, which have a notable effect on the operation of a building.
This thesis is structured in three balanced, conceptual steps. Following the aim of the project, the virtual building model is analyzed under the effect of seven distinct climatic conditions, namely the environments of New Delhi, Mumbai, Berlin, Lisbon, Copenhagen, Dubai, and Montreal. The first task is a complete literature review based on the scope of similar studies, examining the problems in detail along with the theoretical background of all the concepts implemented to obtain the numerical results. This chapter also comprises a detailed study of the climatic conditions of the above-mentioned cities. Climatic traits such as temperature variation, counts of heating and cooling degree days, relative humidity, temperature range, and comfort-zone charts for the specified cities are studied in detail. This study helps in understanding the effect of these adverse to moderate climates on the operation of the building. In the second step, the virtual building model is prepared on the software platform Revit Structures. This virtual building model is not necessarily a complete building, but it has the relevant functionalities of a real building. We perform the energy analysis and the heating and cooling analysis on this virtual building model to study the operational outcome of the building under the different climatic conditions in detail. By the end of these two tasks, two outcomes are available: on one hand, the literature review, and on the other, the numerical results. Finally, we therefore present a comparative scenario based on the energy performance of the building under these varying climatic conditions. This is followed by the prediction of the thermal comfort level inside the building, based on Fanger's PMV model. Understanding the literature and the numerical values in detail helps us to predict the thermal comfort index inside the building.
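The heating and cooling degree days mentioned above can be tallied in a few lines. This is a minimal illustration with an assumed base temperature of 18 °C and invented daily means, not the thesis's actual workflow:

```python
def degree_days(daily_mean_temps, base=18.0):
    """Return (heating_dd, cooling_dd) summed over the given daily mean temps (°C)."""
    hdd = sum(max(base - t, 0.0) for t in daily_mean_temps)  # heating demand proxy
    cdd = sum(max(t - base, 0.0) for t in daily_mean_temps)  # cooling demand proxy
    return hdd, cdd

# One illustrative week: cold start, warm end.
week = [5.0, 8.0, 12.0, 18.0, 22.0, 25.0, 27.0]
hdd, cdd = degree_days(week)
```

Comparing such tallies across the seven cities gives a first quantitative handle on how adverse each climate is for heating versus cooling loads.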
The conclusion of this master's thesis focuses mainly on the scope for improving energy efficiency requirements in energy codes, if any, differentiated according to specific locations. The initial aim of my hypothesis, to study the impacts of climatic variations on the energy performance of a building, is fulfilled, but as such topics have very deep and broad roots, there is always considerable scope for further work.
A parametric method for building design optimization based on Life Cycle Assessment - Appendix
(2016)
The building sector is responsible for a large share of human environmental impacts, over which architects and planners have a major influence. The main objective of this thesis is to develop a method for environmental building design optimization based on Life Cycle Assessment (LCA) that is applicable as part of the design process. The research approach includes a thorough analysis of LCA for buildings in relation to the architectural design stages and the establishment of a requirement catalogue. The key concept of the novel method, called Parametric Life Cycle Assessment (PLCA), is to combine LCA with parametric design. Applying this method to three examples shows that building designs can be optimized time-efficiently and holistically from the beginning of the most influential early design stages, an achievement that has not been possible until now.
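A hedged sketch of the parametric idea behind PLCA: design parameters drive the quantity take-off, so each variant's embodied impact is recomputed automatically. The geometry, impact factors, and function names below are invented for illustration:

```python
# Hypothetical global warming potential factors, kg CO2-eq per kg of material.
GWP = {"concrete": 0.11, "steel": 1.9, "timber": 0.06}

def embodied_gwp(length, width, wall_thickness, material, density):
    """GWP of the perimeter walls of a simple box, per storey (kg CO2-eq)."""
    perimeter = 2 * (length + width)
    volume = perimeter * wall_thickness * 3.0   # 3 m storey height assumed
    mass = volume * density
    return mass * GWP[material]

# Sweep one design parameter (wall thickness), as a parametric workflow would.
variants = [embodied_gwp(10.0, 8.0, t, "concrete", 2400.0)
            for t in (0.20, 0.25, 0.30)]
```

In a real PLCA setup the parametric model would also regenerate operational energy results per variant, so designers see life-cycle feedback while still exploring the early design space.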
The capitalization of the 'certified' sustainable building sector is investigated through the power theory of value developed by Jonathan Nitzan and Shimshon Bichler. The study begins by questioning why environmental problems rank among the first items on the agenda, and by sharing the views of scholars who approach the subject skeptically, since the predominant literature underlining the necessity and prominence of the topic is already well known and accepted by the majority. Building on Nitzan and Bichler's theory, the concepts of capitalization, strategic sabotage, power, legitimacy, and obedience are discussed. The hypothesis that 'the absentee owners of the construction sector, holding the whip hand and capitalizing on ecology, control the growth and creativity of green building production and make it carbon-dependent in order to increase their profit margin' is examined. To strengthen the arguments of the hypothesis, the factors, institutional arrangements, and value measurement methods that directly affect net present value are investigated in detail at both the corporation and the building scale, because net present value, i.e. capitalization, is asserted by Nitzan and Bichler as the most important criterion for investment decisions in the capitalist economic system. To trace the implications of power and the strategic sabotage it causes, as the empirical dimension of this dissertation, an interface is developed that explores, decade by decade, the correlational ties between climate-responsive architecture and the ever-changing political, economic, and social contexts and building-economics praxis, and expert interviews are conducted with the design teams and the appraisers.
In recent years, the discussion of digitalization has reached the media, conferences, and committees of the construction and real estate industry. While some areas are producing innovations and some contributors can be described as pioneers, other topics still show deficits with regard to digital transformation. The building permit process falls into this category. However much architects and engineers in planning offices rely on innovative methods, building documents have so far remained in paper form in too many cases, or are printed out after electronic submission to the authority. Existing resources, for example a building information model that could provide support in the building permit process, are not being taken advantage of. In order to use digital tools to support decision-making by the building permit authorities, it is necessary to understand the current situation and to question its conditions before pursuing the overall automation of internal authority processes as the sole solution.
With a substantive-organizational consideration of the relevant areas that influence building permit determination, an improvement of the building permit procedure within the authorities is proposed. Complex areas, such as the legal situation, the use of technology, and subjective alternative courses of action, are identified and structured. The development of a model for determining building permit eligibility both conveys an understanding of the influencing factors and increases transparency for all parties involved.
In addition to an international literature review, an empirical study served as the research method. The empirical study was conducted in the form of qualitative expert interviews in order to determine the current state in the field of building permit procedures. The collected data material was processed and subsequently subjected to a software-supported content analysis. The results were processed, in combination with findings from the literature review, in various analyses to form the basis for a proposed model.
The result of the study is a decision model that closes the gap between the current processes within the building authorities and an overall automation of the building permit review process. The model offers support to examiners and applicants in determining building permit eligibility, through its process-oriented structuring of decision-relevant facts. The theoretical model could be transferred into practice in the form of a web application.
Superplasticizers are used both to improve fluidity during placement and to reduce the water content of concrete. Both effects also influence the properties of the hardened concrete. As a side effect, the presence of superplasticizers strongly retards the strength development of concrete, which can be an economic drawback in concrete manufacturing. The present work aims to gain insight into the causes of the retarding effect of superplasticizers on the hydration of Portland cement. To simplify the complex interactions occurring during Portland cement hydration, the majority of the work focuses on the interaction of superplasticizers and tricalcium silicate (Ca3SiO5 or C3S, the main compound of Portland cement clinker). The tests are performed in three main parts, supported by methods such as isothermal conduction calorimetry, electrical conductivity measurements, electron microscopy, ICP-OES, TOC, and analytical ultracentrifugation.
In the first main part, based on the interaction between cations and the anionic charges of polymers, the interactions between calcium ions and superplasticizers are investigated. The main effect is that calcium ions are complexed by the functional groups of the polymers (carboxy, sulfonic). Calcium ions may be both dissolved in the aqueous phase and a constituent of particle interfaces. Besides these effects, it is further shown that superplasticizers induce the formation of nanoscaled particles that are dispersed in the aqueous phase (cluster formation). Analogous to recent findings in the field of biomineralization, it is reasonable to assume that these nanoparticles influence crystal growth through their assembly process.
Based on the assumption that superplasticizers hinder dissolution, precipitation, or both, and thereby retard cement hydration, their impact on the separate reactions is investigated. Experiments addressing the solubility of C-S-H phases and portlandite show that complexation of calcium ions in the aqueous phase by the functional groups of polymers increases the solubility of portlandite. In contrast, in the case of C-S-H solubility, the complexation of calcium ions in solution leads to a decrease of the calcium ion concentration in the aqueous phase. These effects are explained by differences in the adsorption of polymers on C-S-H phases and portlandite. It is proposed that adsorption is stronger on C-S-H phases than on portlandite due to the larger specific surface area of C-S-H phases. It follows that before polymers can adsorb on C-S-H phases, their functional groups must be screened by calcium ions in the aqueous phase. It is further shown that data regarding the impact of superplasticizers on the unconstrained dissolution rate of C3S do not provide a clear relation to the overall retarding effect occurring during C3S hydration. Both increased and decreased dissolution rates with respect to the reference sample are detected. If the complexation capability of the superplasticizers is considered, then a reduced dissolution rate of C3S is also determined. Despite the fact that the global hydration process is accelerated, the addition of calcite leads to a slower dissolution rate. Thus, hindered unconstrained dissolution of C3S as a possible cause of the retarding effect remains open for discussion. In the last section of this part, the pure crystallization of hydrate phases (C-S-H phases, portlandite) is explored. The results clearly show that superplasticizers prolong the induction time and modify the rate of crystal growth during pure crystallization, in particular due to the complexation of ions in solution.
However, this effect is insufficient to account for the overall retarding effect. Further important factors are the blocking of crystal growth faces by adsorbed polymers and the dispersion of nanoscaled particles, which hinders their agglomeration into crystals.
In the last main part of the work, the previously gathered results are used to investigate hydration kinetics. During hydration, dissolution and precipitation occur in parallel. Special attention is therefore paid to the ion composition of the aqueous phase of C3S pastes and suspensions in order to determine the rate-limiting step. All in all, it is concluded that the retarding effect of superplasticizers on the hydration of tricalcium silicate is based on the retardation of the crystallization of hydrate phases (C-S-H phases and portlandite). Here the two effects, complexation of calcium ions on surfaces and stabilization of nanoscaled particles, are of major importance. These mechanisms may be partly compensated by template performance and by the increase in solubility due to complexation of ions in solution. The decreased dissolution rate of C3S in the presence of superplasticizers during the concurrently occurring hydration process can only be assessed indirectly by means of the development of the ion concentrations in the aqueous phase (reaction path). Whether this observation is the cause or the consequence within the dissolution-precipitation process, and therefore accounts for the retarding effect, remains a topic for further investigation.
Besides these results, it is shown that superplasticizers can be chemically classed with inhibitors because they reduce the frequency factor governing the end of the induction period. Because the activation energy is largely unaffected, the basic reaction mechanism evidently persists. Furthermore, a method was developed that permits, for the first time, the in-situ determination of ion concentrations in the aqueous phase of C3S pastes. It is shown that during C3S hydration the ion concentration in the aqueous phase develops in correspondence with the heat release rate (calorimetry). The method permits the acceleration period to be differentiated into three stages. It is emphasized that the crystallization of the product phases of C3S hydration, namely C-S-H phases and portlandite, is responsible for the end of the induction period.
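The inhibitor analogy above rests on the Arrhenius relation k = A·exp(−Ea/RT): reducing the frequency factor A lowers the rate at every temperature while leaving the activation energy untouched. A small illustrative check with invented values:

```python
import math

R = 8.314  # J/(mol*K), universal gas constant

def rate_constant(A, Ea, T):
    """Arrhenius rate constant k = A * exp(-Ea / (R*T))."""
    return A * math.exp(-Ea / (R * T))

Ea = 40_000.0  # J/mol, illustrative activation energy (unchanged by the admixture)
k_ref   = rate_constant(1.0e7, Ea, 293.15)
k_inhib = rate_constant(0.5e7, Ea, 293.15)  # frequency factor halved

ratio = k_inhib / k_ref  # 0.5: the rate scales with A, independent of T
```

Because Ea is the same in both cases, an Arrhenius plot of ln k versus 1/T would show two parallel lines, which is the experimental signature described in the abstract.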
Long-span bridges are prone to wind-induced vibrations. Therefore, a reliable representation of the aerodynamic forces acting on a bridge deck is of major significance for the design of such structures. This paper presents a systematic study of the two-dimensional (2D) fluid-structure interaction of a bridge deck under smooth and turbulent wind conditions. Aerodynamic forces are modeled by two approaches: a computational fluid dynamics (CFD) model and six semi-analytical models. The vortex particle method is used for the CFD model, and free-stream turbulence is introduced by seeding vortex particles upstream of the deck with prescribed spectral characteristics. The employed semi-analytical models are based on the quasi-steady and linear unsteady assumptions, with aerodynamic coefficients obtained from CFD analyses.
The underlying assumptions of the semi-analytical aerodynamic models are used to interpret the results for buffeting forces and aeroelastic response due to free-stream turbulence in comparison with the CFD model. Extensive discussions are provided to analyze the effects of linear fluid memory and quasi-steady nonlinearity from a CFD perspective. The outcome of the analyses indicates that fluid memory is a governing effect in the buffeting forces and aeroelastic response, while the effect of the nonlinearity is overestimated by the quasi-steady models. Finally, flutter analyses are performed and the obtained critical velocities are compared with wind tunnel results, followed by a brief examination of the post-flutter behavior. The results of this study provide a deeper understanding of the extent to which the applied models are able to replicate the physical processes of fluid-structure interaction phenomena in bridge aerodynamics and aeroelasticity.
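One member of the quasi-steady model family discussed above expresses the linearized buffeting lift per unit span in terms of static force coefficients and gust components. The sketch below uses invented coefficients and gust values, not data from the paper:

```python
RHO = 1.225  # kg/m^3, air density

def qs_buffeting_lift(U, B, CL, dCL_dalpha, CD, u_gust, w_gust):
    """Linearized quasi-steady buffeting lift per unit span (N/m).

    U: mean wind speed, B: deck width, CL/CD: static coefficients at the
    mean angle, dCL_dalpha: lift-slope, u_gust/w_gust: gust components.
    """
    q = 0.5 * RHO * U**2 * B
    return q * (2.0 * CL * u_gust / U + (dCL_dalpha + CD) * w_gust / U)

# Illustrative deck and gust state.
L_b = qs_buffeting_lift(U=30.0, B=20.0, CL=0.1, dCL_dalpha=5.0,
                        CD=0.08, u_gust=1.5, w_gust=0.8)
```

The unsteady models discussed in the paper additionally filter these gust terms through frequency-dependent admittance (the fluid-memory effect), which this quasi-steady form deliberately omits.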
A vast number of existing buildings were constructed before the development and enforcement of seismic design codes, and they run the risk of being severely damaged under seismic excitation. This poses not only a threat to people's lives but also affects the socio-economic stability of the affected area. Therefore, it is necessary to assess such buildings' present vulnerability in order to make an educated decision regarding risk mitigation by seismic strengthening techniques such as retrofitting. However, it is not feasible, in terms of either cost or time, to inspect, repair, and augment every old building on an urban scale. As a result, reliable rapid screening methods, namely Rapid Visual Screening (RVS), have garnered increasing interest among researchers and decision-makers alike. In this study, the effectiveness of five different Machine Learning (ML) techniques in vulnerability prediction applications is investigated. The damage data of four different earthquakes, from Ecuador, Haiti, Nepal, and South Korea, have been used to train and test the developed models. Eight performance modifiers have been implemented as variables with supervised ML. The investigations in this paper illustrate that the vulnerability classes assessed by the ML techniques were very close to the actual damage levels observed in the buildings.
The accurate representation of aerodynamic forces is essential for a safe, yet reasonable design of long-span bridges subjected to wind effects. In this paper, a novel extension of the Pseudo-three-dimensional Vortex Particle Method (Pseudo-3D VPM) is presented for Computational Fluid Dynamics (CFD) buffeting analysis of line-like structures. This extension entails the introduction of free-stream turbulent fluctuations, based on velocity-based turbulence generation. The aerodynamic response of a long-span bridge is obtained by subjecting the 3D dynamic representation of the structure to correlated free-stream turbulence in two-dimensional (2D) fluid planes positioned along the bridge deck. The span-wise correlation of the free-stream turbulence between the 2D fluid planes is established based on Taylor's hypothesis of frozen turbulence. Moreover, the application of the laminar Pseudo-3D VPM is extended to multimode flutter analysis. Finally, the structural response from the Pseudo-3D flutter and buffeting analyses is verified against the response computed using the semi-analytical linear unsteady model in the time domain. Notable merits of the turbulent Pseudo-3D VPM with respect to the linear unsteady model are the consideration of 2D aerodynamic nonlinearity, nonlinear fluid memory, vortex shedding, and local non-stationary turbulence effects in the aerodynamic forces. The good agreement of the responses of the two models in the 3D analyses demonstrates the applicability of the Pseudo-3D VPM for aeroelastic analyses of line-like structures under turbulent and laminar free-stream conditions.
Aktionsräume in Dresden
(2012)
This study examines the activity spaces of respondents in Dresden by means of a standardized survey (n=360). The activities underlying the activity spaces are differentiated into shopping for daily needs, going out (e.g. to a café, pub, or restaurant), outdoor recreation (e.g. going for walks, using green spaces), and private socializing (e.g. celebrations, visiting relatives/friends). The radius of action is differentiated into the respondent's own neighborhood, the adjacent neighborhood, and the rest of the city. To form a comprehensive indicator of a respondent's average radius of action from the four activities considered, a model for an activity-radius index is developed. The study concludes that the age of the respondents has a significant, albeit small, influence on the radius of action. Net household income has a conditionally significant, likewise small, influence on the respondents' everyday activities.
The initial shear modulus, Gmax, of soil is an important parameter for a variety of geotechnical design applications. This modulus is typically associated with shear strain levels of about 5×10^-3 % and below. The critical role of soil stiffness at small strains in the design and analysis of geotechnical infrastructure is now widely accepted.
Gmax is a key parameter in small-strain dynamic analyses such as those to predict soil behavior or soil-structure interaction during earthquake, explosions, machine or traffic vibration where it is necessary to know how the shear modulus degrades from its small-strain value as the level of shear strain increases. Gmax can be equally important for small-strain cyclic situations such as those caused by wind or wave loading and for small-strain static situations as well. Gmax may also be used as an indirect indication of various soil parameters, as it, in many cases, correlates well to other soil properties such as density and sample disturbance. In recent years, a technique using bender elements was developed to investigate the small-strain shear modulus Gmax.
The objective of this thesis is to study the initial shear stiffness of various sands with different void ratios, densities, and grain size distributions under dry and saturated conditions, and then to compare the bender element results of this study with predictions from empirical equations for Gmax and with results from other testing devices.
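The two quantities compared in the thesis can be sketched in a few lines: Gmax from a bender element test follows from the shear wave velocity via Gmax = ρ·Vs², and a Hardin-type empirical correlation predicts it from void ratio and effective stress. The coefficients in `gmax_hardin` are the classic round-grained-sand values and are given here only as an illustrative assumption; the thesis compares its own set of empirical equations.

```python
def gmax_from_bender(rho, travel_length, travel_time):
    """Small-strain shear modulus from a bender element test:
    Gmax = rho * Vs^2, with Vs = tip-to-tip travel length / travel time."""
    vs = travel_length / travel_time
    return rho * vs ** 2

def gmax_hardin(void_ratio, p_eff_kpa, A=6900.0, n=0.5):
    """Hardin-type empirical correlation (coefficients A, n are assumptions):
    Gmax [kPa] = A * (2.17 - e)^2 / (1 + e) * p'^n."""
    f_e = (2.17 - void_ratio) ** 2 / (1.0 + void_ratio)
    return A * f_e * p_eff_kpa ** n

# usage: dry sand, rho = 1700 kg/m^3, 0.1 m specimen, 0.5 ms travel time
g_bender = gmax_from_bender(1700.0, 0.1, 0.0005)        # in Pa
g_empirical = gmax_hardin(void_ratio=0.6, p_eff_kpa=100.0)  # in kPa
```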
Self-healing materials have recently become more popular due to their capability to repair damage in cementitious materials autonomously or autogenously. The concept of self-healing gives a damaged material the ability to recover its stiffness, in contrast to a material without healing, which, once damaged, cannot sustain loading due to stiffness degradation. Numerical modeling of self-healing materials is still in its infancy: numerous experimental studies in the literature describe the self-healing behavior of cementitious materials, but few numerical investigations have been undertaken.
The thesis presents an analytical framework of self-healing and super healing materials based on continuum damage-healing mechanics. Through this framework, we aim to describe the recovery and strengthening of material stiffness and strength. A simple damage healing law is proposed and applied on concrete material. The proposed damage-healing law is based on a new time-dependent healing variable. The damage-healing model is applied on isotropic concrete material at the macroscale under tensile load. Both autonomous and autogenous self-healing mechanisms are simulated under different loading conditions. These two mechanisms are denoted in the present work by coupled and uncoupled self-healing mechanisms, respectively. We assume in the coupled self-healing that the healing occurs at the same time with damage evolution, while we assume in the uncoupled self-healing that the healing occurs when the material is deformed and subjected to a rest period (damage is constant). In order to describe both coupled and uncoupled healing mechanisms, a one-dimensional element is subjected to different types of loading history.
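The uncoupled mechanism described above can be sketched in one dimension: damage d is frozen during the rest period while a time-dependent healing variable h grows and partially restores the effective stiffness. The exponential saturation law and all parameter values below are illustrative assumptions, not the specific healing law proposed in the thesis.

```python
import math

E0 = 30e9      # virgin Young's modulus of concrete [Pa] (assumed)
d = 0.4        # damage accumulated before the rest period (held constant)
tau = 5.0      # assumed healing time constant [days]

def healing_variable(t_rest, tau=tau):
    """Illustrative time-dependent healing law: h = 1 - exp(-t/tau), h in [0, 1)."""
    return 1.0 - math.exp(-t_rest / tau)

def effective_stiffness(t_rest):
    """Effective stiffness during an uncoupled rest period:
    E_eff = E0 * (1 - d * (1 - h)); healing partially cancels the damage."""
    h = healing_variable(t_rest)
    return E0 * (1.0 - d * (1.0 - h))

# usage: stiffness recovery over a 10-day rest period
recovered = [effective_stiffness(t) for t in (0.0, 5.0, 10.0)]
```

At t = 0 the stiffness equals the damaged value E0·(1 − d); as the rest period grows it approaches, but never exceeds, the virgin stiffness E0.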
In the same context, a nonlinear self-healing theory is derived, and linear and nonlinear damage-healing models are compared using both coupled and uncoupled self-healing mechanisms. The nonlinear healing theory includes generalized nonlinear and quadratic healing models. The healing efficiency is studied by varying the healing rest period and the parameter describing the material characteristics. In addition, a theoretical formulation of different self-healing variables is presented for both isotropic and anisotropic materials. The healing variables are defined based on the recovery in elastic modulus, shear modulus, Poisson's ratio, and bulk modulus. The evolution of the healing variable calculated from the cross-section as a function of the healing variable calculated from the elastic stiffness is presented under both the elastic strain equivalence and the elastic energy equivalence hypotheses. The components of the fourth-rank healing tensor are also obtained for isotropic elasticity, plane stress, and plane strain.
Recent research has revealed that self-healing also presents a crucial solution for strengthening materials. This new concept has been termed "super healing": once the stiffness of the material is recovered, further healing strengthens the material. In the present thesis, a new theory of super healing materials is defined for the isotropic and anisotropic cases using sound mathematical and mechanical principles, applied in linear and nonlinear super healing theories. Additionally, the link between the proposed theory and the theory of undamageable materials is outlined. To describe the super healing efficiency in the linear and nonlinear theories, the ratio of effective stress to nominal stress is calculated as a function of the super healing variable, and the hypotheses of elastic strain and elastic energy equivalence are applied. In the same context, a new super healing matrix in plane strain is proposed based on continuum damage-healing mechanics.
In the present work, we also focus on numerical modeling of the impact behavior of reinforced concrete slabs using the commercial finite element package Abaqus/Explicit. Plain and reinforced concrete slabs with an unconfined compressive strength of 41 MPa are simulated under impact of an ogive-nosed hard projectile. The constitutive modeling of the concrete and the steel reinforcement bars is performed using the Johnson-Holmquist-2 damage model and the Johnson-Cook plasticity model, respectively. Damage diameters and residual velocities obtained by the numerical model are compared with experimental results, and the effects of steel reinforcement and projectile diameter are studied.
Lara Schrijver is an assistant professor at the Faculty of Architecture of the TU Delft. She is one of three program leaders for a new research program in the department of architecture, ‘The Architectural Project and its Foundations’. Schrijver holds degrees in architecture from Princeton University and the TU Delft. She received her Ph.D. from the TU Eindhoven in 2005. Schrijver has taught design and theory courses, and contributed to conferences in the Netherlands as well as abroad. She was an editor for OASE, journal for architecture, for ten years, and was co-organizer of the 2006 conference ‘The Projective Landscape’. Her current work revolves around the role of architecture in the city, and its responsibility in defining the public domain. Her first book, Radical Games, on the influence of the 1960s on contemporary discourse, is forthcoming in the spring of 2009.
Abstract
In the first part of this research, the utilization of tuned mass dampers for the vibration control of tall buildings during earthquake excitations is studied. The main issues, such as optimizing the parameters of the dampers and studying the effects of the frequency content of the target earthquakes, are addressed.
The non-dominated sorting genetic algorithm method is improved by upgrading its genetic operators and is utilized to develop a framework for determining the optimum placement and parameters of dampers in tall buildings. A case study is presented in which the optimal placement and properties of dampers are determined for a model of a tall building under different earthquake excitations through computer simulations.
In the second part, a novel framework for the brain-learning-based intelligent seismic control of smart structures is developed. In this approach, a deep neural network learns how to improve structural responses during earthquake excitations using feedback control.
A reinforcement learning method is improved and utilized to develop a framework for training the deep neural network as an intelligent controller. The efficiency of the developed framework is examined through two case studies: a single-degree-of-freedom system and a high-rise building under different earthquake excitation records.
The results show that the controller gradually develops an optimal control policy to reduce the vibrations of a structure under earthquake excitation through a cyclical process of actions and observations.
It is shown that the controller efficiently improves the structural responses under new earthquake excitations for which it was not trained. Moreover, the controller exhibits stable performance under uncertainties.
One of the most important subjects in hydraulic engineering is the reliable estimation of the transverse distribution of bed and wall shear stresses in rectangular channels. This study makes use of the Tsallis entropy, genetic programming (GP), and adaptive neuro-fuzzy inference system (ANFIS) methods to assess the shear stress distribution (SSD) in rectangular channels.
To evaluate the results of the Tsallis entropy, GP, and ANFIS models, laboratory observations were used in which shear stress was measured with an optimized Preston tube; the SSD was measured at various aspect ratios in the rectangular channel. To investigate the shear stress percentage, 10 data series with a total of 112 data points were used. The sensitivity analysis shows that the most influential parameter for the SSD in a smooth rectangular channel is the dimensionless parameter B/H, where B is the transverse coordinate and H is the flow depth. With the parameters (b/B) and (B/H) for the bed and (z/H) and (B/H) for the wall as inputs, the GP model performed better than the others. Based on the analysis, it can be concluded that the GP and ANFIS algorithms are more effective in estimating shear stress in smooth rectangular channels than the Tsallis entropy-based equations.
The focus of the thesis is to process measurements acquired from a continuous monitoring system at a railway bridge. Temperature, strain, and ambient vibration records are analysed, and two main directions of investigation are pursued.
The first and most demanding task is to develop processing routines able to extract modal parameters from ambient vibration measurements. For this purpose, reliable experimental models are achieved on the basis of a stochastic system identification (SSI) procedure. A fully automated algorithm based on a three-stage clustering is implemented to perform a modal parameter estimation for every single measurement. After selecting a baseline of modal parameters, the evolution of the eigenfrequencies is studied and correlated to environmental and operational factors.
The second aspect deals with the structural response to passing trains. The corresponding triggered records of strain and temperature are processed, and their assessment is accomplished using the average strains induced by each train as the reference parameter. Three influences, due to speed, temperature, and loads, are distinguished and treated individually. An attempt to estimate the maximum response variation due to each factor is also carried out.
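One clustering stage of the kind used to automate such an identification can be sketched as follows: poles identified at increasing model orders are grouped by closeness in frequency and damping, and clusters that recur across model orders are taken as physical modes. The thresholds, names, and greedy grouping below are illustrative assumptions, not the thesis' actual three-stage algorithm.

```python
def cluster_poles(poles, df=0.05, dz=0.02):
    """poles: list of (frequency_hz, damping_ratio) from several model orders.
    Greedy clustering: a pole joins the first cluster whose mean frequency it
    matches within relative tolerance df and whose mean damping within dz."""
    clusters = []
    for f, z in poles:
        for c in clusters:
            fm = sum(p[0] for p in c) / len(c)
            zm = sum(p[1] for p in c) / len(c)
            if abs(f - fm) / fm < df and abs(z - zm) < dz:
                c.append((f, z))
                break
        else:  # no existing cluster matched: start a new one
            clusters.append([(f, z)])
    return clusters

# usage: two physical modes recur across model orders; one spurious pole does not
poles = [(2.01, 0.010), (2.02, 0.012), (2.00, 0.011),
         (5.50, 0.015), (5.52, 0.014), (9.90, 0.300)]
modes = [c for c in cluster_poles(poles) if len(c) >= 2]  # keep stable clusters
```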
This paper presents the development of an assessment scheme for a visual qualitative evaluation of nailed connections in existing structures, such as board trusses. In terms of further use and preservation, a quick visual inspection will help to evaluate the quality of a structure regarding its load-bearing capacity and deformation behaviour. Tests of old and new nailed joints in combination with a rating scheme point out the correlation between the load-bearing capacity and condition of a joint. Old joints of comparatively good condition tend to exhibit better results than those of poor condition. Moreover, aged joints are generally more load-bearing than newly assembled ones.
During the Nazi era, Bauhausstraße 11 housed numerous institutions of health policy. The building has now become the subject of a research project, and in the future its involvement in National Socialist crimes will also be commemorated on site. This book documents and reflects on the memory work on the campus of the Bauhaus-Universität Weimar and beyond. Through interdisciplinary contributions, the building at today's Bauhausstraße 11 is situated spatially within Weimar and Thuringia, and, in terms of memory politics, within a landscape of remembrance of National Socialist crimes that has been fought for over decades.
Augmented Urban Model: Ein Tangible User Interface zur Unterstützung von Stadtplanungsprozessen
(2011)
In the architectural and urban design context, physical and digital models fulfill different, unconnected tasks and functions in the design and planning process owing to their largely complementary properties and qualities. While physical models are used above all as means of representation and communication but also as working tools, digital models additionally support the evaluation of a design through computer-aided analysis and simulation techniques.
The work presented in this working paper analyzed, in addition to the use of the model as an analog and digital design tool, the significance of the model for the working process as well as precedents from the field of tangible user interfaces related to architecture and urban design. Based on these considerations, a prototype was developed, the Augmented Urban Model, which builds among other things on early projects and research approaches from the field of tangible user interfaces, such as the metaDESK by Ullmer and Ishii and the urban planning tool Urp by Underkoffler and Ishii.
The Augmented Urban Model aims to bridge the gap between physical and digital model worlds that is missing in current design and planning processes, and at the same time to create a new tangible user interface that enables the manipulation of and interaction with digital data in real space.
As machine vision-based inspection methods in the field of Structural Health Monitoring (SHM) continue to advance, the need to integrate the resulting inspection and maintenance data into a centralised building information model notably grows. Consequently, modelling the detected damages from those images in a streamlined, automated manner becomes increasingly important. It saves the time and money spent on updating the model with the latest information after each inspection, and it makes the damages easy to visualise; it provides all stakeholders with a comprehensive digital representation of the structure's current condition; and it makes it possible to track progressing deterioration, estimate the reduced load-bearing capacity of a damaged element in the model, or simulate the propagation of cracks, so that well-informed decisions can be made interactively and maintenance actions can be facilitated that optimally extend the service life of the structure. Although significant progress has recently been made in information modelling of damages, the currently devised geometrical modelling approaches are cumbersome and time-consuming to apply to a full-scale model. For crack damages, an approach for feasible automated image-based modelling is proposed, utilising neural networks, classical computer vision, and computational geometry techniques, with the aim of creating valid shapes to be introduced into the information model together with related semantic properties and attributes from inspection data (e.g., width, depth, length, date). The creation of such models opens the door to further uses, ranging from more accurate structural analysis to simulation of damage propagation in model elements and estimation of deterioration rates, and it allows for better documentation, data sharing, and realistic visualisation of damages in a 3D model.
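One step of such a pipeline can be sketched without any vision library: a binary crack mask (as produced by a segmentation network) is reduced to simple geometric attributes that could be attached to a damage object in the information model. The pixel size, function name, and per-row width measure are hypothetical simplifications for illustration only.

```python
PIXEL_MM = 0.5  # assumed image resolution: 0.5 mm per pixel

def crack_attributes(mask):
    """mask: list of rows of 0/1 pixels for a roughly vertical crack.
    Returns (length_mm, max_width_mm) from per-row crack pixel counts."""
    widths = [sum(row) for row in mask if sum(row) > 0]
    length_mm = len(widths) * PIXEL_MM           # number of rows with crack pixels
    max_width_mm = max(widths, default=0) * PIXEL_MM
    return length_mm, max_width_mm

# usage: a tiny 5x6 mask with a crack widening towards the bottom
mask = [
    [0, 0, 1, 0, 0, 0],
    [0, 0, 1, 1, 0, 0],
    [0, 1, 1, 1, 0, 0],
    [0, 0, 0, 0, 0, 0],
    [0, 0, 0, 0, 0, 0],
]
length, width = crack_attributes(mask)
```

In a real pipeline the mask would instead be vectorised into a valid polygonal shape before being written, with these attributes, into the information model.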
The increasing success of BIM (Building Information Modeling) and its emerging implementation in 3D construction models have paved the way for improving the scheduling process. Recent research on the application of BIM in scheduling has focused on quantity take-off, duration estimation for individual trades, schedule visualization, and clash detection.
Several experiments have indicated that the lack of detailed planning causes about 30% non-productive time and stacking of trades. Nevertheless, detailed planning has still not been implemented in practice, despite receiving a lot of interest from researchers. The reason lies in the huge amount and complexity of the input data: creating a detailed plan requires time-consuming manual decomposition of activities and collection and calculation of the relevant detailed information. Moreover, coordinating the detailed activities requires much effort to deal with their complex constraints.
This dissertation aims to support the generation of detailed schedules from a rough schedule. It proposes a model for automated detailing of 4D schedules by integrating BIM, simulation and Pareto-based optimization.
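The Pareto-based selection step of such an optimization can be sketched as follows: among candidate detailed schedules, each evaluated by simulation against several objectives (here duration and cost), only the non-dominated ones are kept. The data values and function name are invented for illustration and are not from the dissertation.

```python
def pareto_front(candidates):
    """candidates: list of (duration_days, cost) tuples. A candidate is kept
    if no other candidate is at least as good in both objectives and strictly
    better in at least one (i.e., it is not Pareto-dominated)."""
    front = []
    for i, (d1, c1) in enumerate(candidates):
        dominated = any(
            (d2 <= d1 and c2 <= c1) and (d2 < d1 or c2 < c1)
            for j, (d2, c2) in enumerate(candidates) if j != i
        )
        if not dominated:
            front.append((d1, c1))
    return front

# usage: four simulated schedule variants; the last is dominated by the first
variants = [(120, 1.00e6), (110, 1.20e6), (130, 0.95e6), (125, 1.30e6)]
best = pareto_front(variants)
```

The surviving set is the trade-off curve from which a planner then picks a detailed schedule.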
This working paper describes how, starting from an existing street network, development areas can be automatically parceled, i.e., subdivided into lots, using subdivision algorithms, and subsequently built up on the basis of various urban typologies. The subdivision of development areas and the generation of building structures are subject to certain urban planning constraints, specifications, and parameters. The goal is to develop, from the investigations presented, a suggestion system for urban design, which is further discussed through the implementation of a first software prototype for generating urban structures.
It is not uncommon for analysis and simulation methods to be used mainly to evaluate finished designs and to prove their quality, whereas their real potential is to lead or control a design process from the beginning. We therefore introduce a design method that moves away from a "what-if" forecasting philosophy and increases the focus on backcasting approaches. We use the power of computation by combining sophisticated methods for generating designs with analysis methods, closing the gap between analysis and synthesis of designs. For the development of a future-oriented computational design support we need to be aware of the human designer's role: a productive combination of the excellence of human cognition with the power of modern computing technology is needed. We call this approach "cognitive design computing". The computational part aims to mimic the way a designer's brain works by combining state-of-the-art optimization and machine learning approaches with available simulation methods. The cognition part respects the complex nature of design problems through the provision of models for human-computer interaction; this means that a design problem is distributed between computer and designer. In the context of the conference slogan "back to command", we ask how we may imagine command over a cognitive design computing system. We expect that designers will need to cede control of some parts of the design process to machines, but in exchange they will gain new, powerful command over complex computing processes. This means that designers have to explore the potential of their role as commanders of partially automated design processes. In this contribution we describe an approach for the development of a future cognitive design computing system, focusing on urban design issues.
The aim of this system is to enable an urban planner to treat a planning problem as a backcasting problem by defining what performance a design solution should achieve, and to automatically query or generate a set of best possible solutions. This kind of computational planning process offers proof that the designer meets the originally, explicitly defined design requirements. A key way in which digital tools can support designers is by generating design proposals. Evolutionary multi-criteria optimization methods allow us to explore a multi-dimensional design space and provide a basis for the designer to evaluate contradicting requirements, a task urban planners face frequently. We also reflect on why designers will give more and more control to machines. To this end, we investigate first approaches to learning how designers use computational design support systems in combination with manual design strategies to deal with urban design problems, employing machine learning methods. By observing how designers work, it is possible to derive more complex artificial solution strategies that can help computers make better suggestions in the future.
Many listed buildings face a conflict between the structural refurbishment required for contemporary use and the resulting potential threat to the historic fabric. Reasons include rising energy costs for building operation, contemporary requirements for comfort and occupational safety, and the prevention of damage to the fabric caused by structural deficiencies in thermal and moisture protection. At the same time, many buildings can only be preserved at all through regular use and management. In this field of tension, the energy retrofitting of listed buildings often fails because of the seemingly unsolvable conflict between preserving the original fabric on the one hand and the necessary energy optimization of the building envelope on the other. The objective of this case study is the exemplary development of a building-climate and preservation-compatible retrofitting strategy, using a post-war modernist administration building as an example, as a contribution to resolving this conflict.
Abstract:
In 2011, the Free State of Thuringia and the Bauhaus-Universität Weimar entered into a cooperation on the "Promotion of Young Researchers in Building Energy Efficiency in Thuringia (NaGET)". The aim of the cooperation was to investigate the energy quality of the state-owned buildings in order to derive recommendations for prioritizing energy refurbishment measures. As a result of the investigations, decision-makers are provided with the energy potential analysis, an instrument that specifically supports them in the preselection of buildings to be refurbished.
The study covers the roughly 1,700 state-owned buildings of the Free State of Thuringia, of which 938 are assessed as energetically relevant. Initially narrowed down to 270, ultimately 218 buildings, all of which meet the data quality requirements, were selected for the energy potential analysis. The data set that was compiled goes well beyond the initial state in terms of scope and reliability.
The investigations focus on the evaluation of consumption figures for heat and electricity. Using various analysis methods, it is determined both computationally and graphically which buildings stand out as high consumers. It turns out that the evaluation of buildings of the same type is particularly suitable for identifying conspicuous high consumers. This is illustrated using institute buildings for research and teaching (BWZK 2200) and library buildings (BWZK category 9130) as examples. The evaluation of the buildings of a single institution is carried out exemplarily for the University of Erfurt.
It is shown that, in addition to absolute consumption, further analysis criteria and comparison with benchmarks provide additional insights. With the determination of the energy efficiency potential, an indicator is introduced that allows a meaningful comparison among the buildings. Building on this, a ranking of buildings can be formed that can be used to prioritize energy refurbishment measures.
For carrying out an energy potential analysis, a stepwise procedure is presented that provides increasing detail from a preliminary analysis through a rough analysis to a detailed analysis. It is shown that a real estate portfolio of public buildings, such as that of the Free State of Thuringia, can thus be screened for energetically conspicuous buildings in a targeted and cost-efficient manner. Using the university library in Erfurt as an example, it is demonstrated how, for an energetically conspicuous building, a detailed investigation can verify the preliminary results, determine the causes of the increased energy consumption, and develop proposals for improving the energy quality.
In a projection based on strong simplifications, it was estimated that buildings with increased heating energy consumption allow an average saving of 52 kWh/m²a. The savings potential for electricity consumption averages 44 kWh/m²a per building of the Free State of Thuringia. It should be noted that the scatter of the energy saving potentials is very high; individual buildings deviate considerably from the average values in both directions. It is further assumed that, in the ideal case, 28% of the state's annual energy costs of around 35 million euros could be saved if the buildings considered were refurbished to meet the benchmark consumption values.
BIM-basierte Digitalisierung von Bestandsgebäuden aus Sicht des FM am Beispiel von Heizungsanlagen
(2022)
The aim of this thesis is to define the information relevant to facility management for the Building Information Modeling-based creation of models of existing buildings, using a heating system as an example. Based on this, the necessary steps of the building survey are derived. To define these steps, the basic procedure of a building survey and the legal requirements for operating a heating system are set out. In addition, this thesis analyzes the advantages and challenges of the interplay between Building Information Modeling and facility management. The defined steps were applied in a sample project, in which the decisive operating data for each system component are defined as information requirements according to DIN 17412. The building model is supplemented with parameters containing the information relevant to facility management. The results of the sample project are presented with meaningful sections, plans, and 3D visualizations, and are finally validated with respect to facility management. From the steps and results, a guideline for the digitization process of existing buildings for facility management has been produced.
Components of structural glazing have to meet different requirements and resist various impacts, depending on the field of application. Within an international research project of the EU innovation programme Horizon 2020, special glass panes with a fluid circulating in capillaries are being developed to exploit solar energy. The major influences on this glazing are UV irradiation and the fluidic contact, which affect the mechanical and optical durability of the bonding material within the glass setup. With regard to the visual requirements, acrylate adhesives and EVA films are analyzed as possible bonding materials by destructive and non-destructive testing methods. Two types of specimens are presented for obtaining the mechanical behavior and the surface appearance of the bonding material.
Those who ask how social entities relate to the past enter a field defined by competing interpretations and contested practices of a collectively shared heritage. Dissent and conflict among heritage communities represent productive moments in the negotiation of these varying constructs of the past, identities, and heritage. At the same time, they lead to omissions and to the overwriting and amendment of existing constructs. A closer look at all that is suppressed, excluded, or rejected opens up new perspectives: it reveals how social groups are formed through public disputes over the material foundations of heritage constructs.
Taking up the concept of censorship, the volume engages with the exclusionary and inclusionary mechanisms that underlie the construction of heritage and thus of social identities. Censorship is understood here as a discursive strategy in public debates. In current debates, allegations of censorship surface primarily where the handling of certain heritage constructs is subjected to critical evaluation or, on the contrary, needs to be protected from criticism or even destruction. The authors trace the connection between heritage and identity and show that identity constructs are not only manifested within heritage but are actively negotiated through it.
The fracture of microcapsules is an important issue for releasing the healing agent that heals cracks in encapsulation-based self-healing concrete. The capsular clustering generated during the concrete mixing process is considered one of the critical factors in the fracture mechanism. Since the literature lacks studies on this issue, self-healing concrete cannot be designed without an appropriate modelling strategy. In this paper, the effects of microcapsule size and clustering on fractured microcapsules are studied computationally. A simple 2D computational modelling approach is developed based on the eXtended Finite Element Method (XFEM) and a cohesive surface technique. The proposed model shows that microcapsule size and clustering play significant roles in governing the load-carrying capacity and the crack propagation pattern, and determine whether the microcapsule fractures or debonds from the concrete matrix. The larger the microcapsule's circumferential contact length, the higher the load-carrying capacity; when it is less than 25% of the microcapsule circumference, debonding of the microcapsule from the concrete becomes more likely. The greater the core/shell ratio (i.e., the thinner the shell), the greater the likelihood of microcapsule fracture.
The key objective of this research is to study fracture with a meshfree method, local maximum entropy (LME) approximations, and to model fracture in thin shell structures with complex geometry and topology. This topic is of high relevance for real-world applications, for example in the automotive industry and in aerospace engineering. The shell structure can be described efficiently by meshless methods, which are capable of describing complex shapes as a collection of points instead of a structured mesh. In order to find the appropriate numerical method to achieve this goal, the first part of the work was the development of a method based on local maximum entropy (LME) shape functions together with enrichment functions used in partition-of-unity methods to discretize problems in linear elastic fracture mechanics. We obtain improved accuracy relative to the standard extended finite element method (XFEM) at a comparable computational cost, while keeping the advantages of the LME shape functions, such as smoothness and non-negativity. We show numerically that optimal convergence (the same as in FEM) for the energy norm and the stress intensity factors can be obtained through the use of geometric (fixed-area) enrichment, with no special treatment of the nodes near the crack such as blending or shifting.
As the extension of this method to three-dimensional problems and to complex thin shell structures with arbitrary crack growth is cumbersome, we developed a phase field model for fracture using LME. Phase field models provide a powerful tool for tackling moving-interface problems and have been used extensively in physics and materials science; they are gaining popularity in a wide range of applications in applied science and engineering. Recently, a second-order phase field approximation for brittle fracture has gathered significant interest in computational fracture mechanics, in which sharp crack discontinuities are modeled by a diffusive crack. By minimizing the system energy with respect to the mechanical displacements and the phase field, subject to an irreversibility condition that prevents crack healing, this model can describe crack nucleation, propagation, branching, and merging. One of the main advantages of phase field modeling of fracture is the unified treatment of interface tracking and mechanics, which potentially leads to simple, robust, scalable computer codes applicable to complex systems. In other words, this approximation considerably reduces the implementation complexity, because numerical tracking of the fracture is not needed, at the expense of a higher computational cost. We present a fourth-order phase field model for fracture based on local maximum entropy (LME) approximations. The higher-order continuity of the meshfree LME approximation allows the fourth-order phase field equations to be solved directly, without splitting the fourth-order differential equation into two second-order differential equations. Notably, in contrast to previous discretizations that use at least a quadratic basis, only linear completeness is needed in the LME approximation. We show that the crack surface can be captured more accurately with the fourth-order model than with the second-order model.
Furthermore, fewer nodes are needed for the fourth-order model to resolve the crack path. Finally, we demonstrate the performance of the proposed meshfree fourth-order phase-field formulation on five representative numerical examples. The computational results are compared to analytical solutions from linear elastic fracture mechanics and to experimental data for three-dimensional crack propagation.
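The energy minimization described above can be made concrete with the regularized fracture energy. In one common form (conventions vary between authors; this is a generic sketch rather than the thesis's exact functional), the second- and fourth-order crack-surface densities read:

```latex
% Regularized energy: displacements u, phase field d (d = 1 on the crack),
% degradation g(d) = (1 - d)^2, fracture toughness G_c, length scale \ell
\Psi(u, d) = \int_\Omega g(d)\, \psi_e\big(\varepsilon(u)\big)\, \mathrm{d}\Omega
           + G_c \int_\Omega \gamma(d)\, \mathrm{d}\Omega

% Second-order crack-surface density (needs only C^0 approximants):
\gamma_2(d) = \frac{1}{2\ell}\, d^2 + \frac{\ell}{2}\, |\nabla d|^2

% Fourth-order density (the \Delta d term requires the C^1 continuity
% that LME approximants provide):
\gamma_4(d) = \frac{1}{4\ell} \left( d^2 + 2\ell^2\, |\nabla d|^2
            + \ell^4\, (\Delta d)^2 \right)
```

The Laplacian term in the fourth-order density is the reason a plain C⁰ finite element basis cannot be used directly, and why such equations are usually split into two second-order ones; the smooth LME basis sidesteps that splitting.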
In the last part of this research, we present a phase-field model for fracture in Kirchhoff-Love thin shells using the local maximum-entropy (LME) meshfree method. Since the crack is a natural outcome of the analysis, it does not require an explicit representation and tracking, which is advantageous over techniques such as the extended finite element method that require tracking of the crack paths. The geometric description of the shell is based on statistical learning techniques that allow dealing with general point-set surfaces while avoiding a global parametrization, and can therefore be applied to surfaces of complex geometry and topology. We show the flexibility and robustness of the present methodology for two examples: a plate in tension and a set of open connected pipes.
Urban planning involves many aspects and various disciplines, demanding an asynchronous planning approach. The level of complexity rises with each aspect to be considered and makes it difficult to find universally satisfactory solutions. To improve this situation we propose a new approach, which complements traditional design methods with a computational urban planning method that can fulfill formalizable design requirements automatically. Based on this approach we present a design space exploration framework for complex urban planning projects. For a better understanding of the idea of design space exploration, we introduce the concept of a digital scout, which guides planners through the design space and assists them in their creative explorations. The scout can support planners during manual design by informing them about potential impacts or by suggesting different solutions that fulfill predefined quality requirements. The planner can switch flexibly between a manually controlled and a completely automated design process. The developed system is presented using an exemplary urban planning scenario on two levels, from the street layout to the placement of building volumes. Based on Self-Organizing Maps, we implemented a method that makes it possible to visualize the multi-dimensional solution space in an easily analysable and comprehensible form.
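As an illustration of the Self-Organizing Map idea mentioned above, the following minimal sketch (not the framework's actual implementation; all names are hypothetical) maps n-dimensional design solutions onto a 2D grid so that similar solutions land on nearby nodes:

```python
import numpy as np

def train_som(data, grid=(8, 8), iters=500, lr0=0.5, sigma0=3.0, seed=0):
    """Minimal Self-Organizing Map: maps n-D samples onto a 2-D grid."""
    rng = np.random.default_rng(seed)
    gx, gy = grid
    # Grid node coordinates and randomly initialised weight vectors.
    coords = np.array([(i, j) for i in range(gx) for j in range(gy)], float)
    w = rng.random((gx * gy, data.shape[1]))
    for t in range(iters):
        x = data[rng.integers(len(data))]
        bmu = np.argmin(((w - x) ** 2).sum(axis=1))   # best matching unit
        frac = t / iters
        lr = lr0 * (1 - frac)                         # decaying learning rate
        sigma = sigma0 * (1 - frac) + 0.5             # shrinking neighbourhood
        d2 = ((coords - coords[bmu]) ** 2).sum(axis=1)
        h = np.exp(-d2 / (2 * sigma ** 2))            # neighbourhood kernel
        w += lr * h[:, None] * (x - w)                # pull nodes towards x
    return w, coords

def project(data, w):
    """Assign each sample to its best-matching grid node."""
    return np.argmin(((data[:, None, :] - w[None, :, :]) ** 2).sum(-1), axis=1)
```

Projecting the solution set onto the trained grid gives the kind of 2D, visually inspectable layout of a multi-dimensional solution space that the abstract describes.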
How do particular structure formations in cities come about, and which forces play a role in this process? To which elements can these phenomena be reduced in order to find the respective combination rules? How must general principles be formulated to describe urban processes in such a way that different structural qualities can be produced? With the aid of mathematical methods, models based on four basic levels are generated in the computer, through which the connections between the elements and the rules of their interaction can be examined. Conclusions about how the development processes function and about further urban growth can be derived.
At the end of the 1960s, architects at various universities worldwide began to explore the potential of computer technology for their profession. With the decline in prices for PCs in the 1990s and the development of various computer-aided architectural design (CAAD) systems, the use of such systems in architectural and planning offices grew continuously. Because today no architectural office manages without a costly CAAD system, and because intensive software training has become an integral part of a university education, the question arises as to what influence the various computer systems have had on the design process that forms the core of architectural practice. The text at hand develops ten theses about why, to this day, computers have not been introduced in a way that results in new qualitative possibilities for design.
In recent decades, the finite element method has become the main method for static and dynamic analysis in engineering practice. For current problems, this method provides a faster, more flexible solution than the analytic approach. Prognoses for complex engineering problems that used to be almost impossible to obtain are now feasible.
Although the finite element method is a robust tool, it leads to new questions about engineering solutions. These new problems can be divided into two major groups: the first group concerns computer performance; the second relates to understanding the digital solution.
Simultaneously with the development of the finite element method for numerical solutions, a theory between beam theory and shell theory was developed: the Generalized Beam Theory (GBT). This theory offers not only a systematic and analytically clear presentation of complicated structural problems, but also a compact and elegant calculation approach that can improve computer performance.
Regrettably, GBT was not internationally known for a long time, since most publications on this theory were written in German, especially in the early years. Only in recent years has GBT gradually become a fertile research topic, with developments from linear to non-linear analysis.
Another reason for the limited use of GBT is the isolated application of the theory. Although recent research applies the finite element method to solve GBT problems numerically, the coupling between finite elements of GBT and of other theories (shell, solid, etc.) has not been the subject of previous research. Thus, the main goal of this dissertation is the coupling between GBT and shell/membrane elements. Consequently, one achieves the benefits of both sides: the versatility of shell elements and the high performance of GBT elements.
Based on the assumptions of GBT, this dissertation presents how the separation of variables leads to two calculation domains for a beam structure: a cross-section modal analysis and the longitudinal amplification axis. Therefore, there is the possibility of applying the finite element method not only in the cross-section analysis, but also of developing an exact GBT finite element in the longitudinal direction.
For the cross-section analysis, this dissertation presents the solution of the quadratic eigenvalue problem with an original separation between plate and membrane mechanisms. Subsequently, one obtains a clearer representation of the deformation modes, as well as a reduced quadratic eigenvalue problem.
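A quadratic eigenvalue problem of the kind mentioned above, (λ²M + λC + K)x = 0, is commonly solved by companion linearization into an ordinary eigenvalue problem of twice the size. The following is a generic numerical sketch of that standard technique, not the dissertation's code:

```python
import numpy as np

def solve_qep(K, C, M):
    """Solve (lam**2 * M + lam * C + K) @ x = 0 by companion linearisation.

    Uses the first companion form with z = [x, lam*x], assuming M is
    invertible. Returns eigenvalues lam and eigenvectors x (the upper
    block of the linearised eigenvectors)."""
    n = K.shape[0]
    # A z = lam * B z  with  A = [[0, I], [-K, -C]],  B = [[I, 0], [0, M]]
    A = np.block([[np.zeros((n, n)), np.eye(n)],
                  [-K, -C]])
    B = np.block([[np.eye(n), np.zeros((n, n))],
                  [np.zeros((n, n)), M]])
    lam, Z = np.linalg.eig(np.linalg.solve(B, A))
    return lam, Z[:n, :]
```

Substituting z = [x, λx] back shows row two reproduces λ²Mx + λCx + Kx = 0, so every eigenpair of the linearised problem is an eigenpair of the original quadratic one.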
Concerning the longitudinal direction, this dissertation develops novel exact elements based on hyperbolic and trigonometric shape functions. Although these functions do not have trivial expressions, their periodic derivatives provide a recursive procedure that systematises the development of the stiffness matrices. Moreover, these shape functions enable a single-element discretisation of the beam structure and ensure a smooth stress field.
From these developments, this dissertation achieves the formulation of its primary objective: the connection of GBT and shell elements in a mixed model. Based on the displacement field, it is possible to define the coupling equations applied in the master-slave method. Therefore, one can model the structural connections and joints with finite shell elements and the structural beams and columns with GBT finite elements.
As a side effect, the coupling equations limit the displacement field of the shell elements under the assumptions of GBT, in particular in the neighbourhood of the coupling cross-section.
Although these side effects are almost unnoticeable in linear analysis, they lead to cumulative errors in non-linear analysis. Therefore, this thesis finishes with the evaluation of the mixed GBT-shell models in non-linear analysis.
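The master-slave method referred to above eliminates the slave degrees of freedom through a constraint matrix T with u = T u_m, so that only the master DOFs remain unknowns. A minimal generic sketch of this static condensation (illustrative only, not the dissertation's implementation):

```python
import numpy as np

def master_slave_reduce(K, f, T):
    """Condense slave DOFs out of K u = f using the constraint u = T @ u_m.

    T maps the master DOFs u_m to the full displacement vector u; rows of
    T encode the coupling equations (a slave DOF's row repeats the
    coefficients of the masters it is tied to)."""
    K_red = T.T @ K @ T          # reduced stiffness on the master DOFs
    f_red = T.T @ f              # consistently reduced load vector
    u_m = np.linalg.solve(K_red, f_red)
    return T @ u_m               # recover the full displacement field
```

For example, tying a shell node's DOF to a GBT node's DOF means the slave row of T simply copies the master's column, and the reduction above enforces that equality exactly.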
Some CAAD packages offer additional support for the optimization of spatial configurations, but the possibilities for applying optimization are usually limited either by the complexity of the data model or by the constraints of the underlying CAAD system. Since we lacked a system that allows experimenting with optimization techniques for the synthesis of spatial configurations, we have developed a collection of methods over the past years. This collection is now combined in the presented open-source library for computational planning synthesis, called CPlan. The aim of the library is to provide an easy-to-use programming framework with a flat learning curve for people with basic programming knowledge. It offers an extensible structure that allows adding new customized parts for various purposes. In this paper the existing functionality of the CPlan library is described.
This book documents the results of a student design project that took place in the winter semester of 2012/13 at the Bauhaus-Universität Weimar at the Chair Informatik in der Architektur (InfAR). The project was conceived and carried out in cooperation with psychologists, cognitive scientists and computer scientists from the DFG-funded research project SFB/TR8 "Spatial Cognition" Bremen/Freiburg.
Design-related reassessment of structures integrating Bayesian updating of model safety factors
(2022)
In the semi-probabilistic approach to structural design, the partial safety factors are defined by assigning some degree of uncertainty to actions and resistances, associated with the stochastic nature of the parameters. However, the uncertainties for individual structures can be examined more closely by incorporating measurement data provided by sensors of an installed health monitoring scheme. In this context, the current study proposes an approach to revise the partial safety factor on the action side, γE, for existing structures by integrating Bayesian model updating. A simple numerical example of a beam-like structure with artificially generated measurement data is used so that the influence of different sensor setups and data uncertainties on the revised safety factors can be investigated. It is shown that the health monitoring system can reassess the current capacity reserve of the structure by updating the design safety factors, resulting in a better life-cycle assessment of structures. The outcome is furthermore verified by analysing a real-life small steel railway bridge, ensuring the applicability of the proposed method to practical applications.
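As a toy illustration of the kind of updating involved (a conjugate normal sketch under strongly simplified assumptions, not the paper's actual formulation; all parameter names are hypothetical), monitoring data can shift the posterior of the action effect and hence the fractile from which a revised γE is derived:

```python
from statistics import NormalDist

def update_action_factor(prior_mu, prior_sigma, obs, obs_sigma,
                         char_value, p_design=0.999):
    """Conjugate normal update of the mean action effect from monitoring
    data, then a revised partial factor gamma_E as the ratio of the
    posterior design fractile to the characteristic value.

    Simplifications: normal prior on the mean, known measurement noise
    obs_sigma, and the sensor noise standing in for the action
    variability in the predictive distribution."""
    n = len(obs)
    obs_mean = sum(obs) / n
    # Posterior of the mean (precision-weighted combination).
    prec = 1 / prior_sigma**2 + n / obs_sigma**2
    post_var = 1 / prec
    post_mu = post_var * (prior_mu / prior_sigma**2
                          + n * obs_mean / obs_sigma**2)
    # Predictive design value at the target fractile.
    pred_sigma = (post_var + obs_sigma**2) ** 0.5
    e_d = NormalDist(post_mu, pred_sigma).inv_cdf(p_design)
    return e_d / char_value
```

With informative monitoring data, the posterior spread shrinks and the observed mean pulls the design fractile down, which is the mechanism by which a monitored structure can justify a lower factor than the generic code value.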
As part of an international research project funded by the European Union, capillary glasses for facades are being developed that exploit energy storage by means of fluids flowing through the capillaries. To meet the highest visual demands, acrylate adhesives and EVA films are tested as possible bonding materials for the glass setup. In particular, non-destructive methods (visual analysis, analysis of birefringent properties and computed tomographic data) are applied to evaluate failure patterns as well as the long-term behavior under climatic influences. The experimental investigations are presented after different loading periods, providing information on the development of failures. In addition, detailed information and scientific findings on the application of computed tomographic analyses are presented.
Long-span bridges are prone to wind-induced vibrations. Therefore, a reliable representation of the aerodynamic forces acting on a bridge deck is of major significance for the design of such structures. This paper presents a systematic study of the two-dimensional (2D) fluid-structure interaction of a bridge deck under smooth and turbulent wind conditions. The aerodynamic forces are modeled by two approaches: a computational fluid dynamics (CFD) model and six semi-analytical models. The vortex particle method is utilized for the CFD model, and free-stream turbulence is introduced by seeding vortex particles upstream of the deck with prescribed spectral characteristics. The employed semi-analytical models are based on the quasi-steady and linear unsteady assumptions, with aerodynamic coefficients obtained from CFD analyses.
The underlying assumptions of the semi-analytical aerodynamic models are used to interpret the results for the buffeting forces and the aeroelastic response due to free-stream turbulence in comparison with the CFD model. Extensive discussions are provided to analyze the effects of linear fluid memory and quasi-steady nonlinearity from a CFD perspective. The outcome of the analyses indicates that fluid memory is a governing effect in the buffeting forces and the aeroelastic response, while the effect of the nonlinearity is overestimated by the quasi-steady models. Finally, flutter analyses are performed and the obtained critical velocities are compared with wind tunnel results, followed by a brief examination of the post-flutter behavior. The results of this study provide a deeper understanding of the extent to which the applied models are able to replicate the physical processes in fluid-structure interaction phenomena in bridge aerodynamics and aeroelasticity.
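For reference, the linear quasi-steady buffeting lift on a deck of width B in a mean wind U with gust components u(t) and w(t) is commonly written as follows (a textbook form, not necessarily the exact model set used in the paper):

```latex
% rho: air density; C_L, C_D: static lift/drag coefficients at the mean
% angle of attack; C_L': slope of the lift coefficient curve
L_b(t) = \frac{1}{2}\,\rho\, U^2 B \left[ 2\, C_L\, \frac{u(t)}{U}
       + \left( C_L' + C_D \right) \frac{w(t)}{U} \right]
```

Linear unsteady models multiply these gust terms by frequency-dependent aerodynamic admittance functions; that frequency dependence is precisely the fluid-memory effect the study finds to be governing.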
The world society faces a huge challenge in implementing the human right of "access to sanitation". It is increasingly accepted that the conventional approach to providing sanitation services is not suitable to solve this problem. This dissertation examines the possibility of enhancing "access to sanitation" for people who live in areas with underdeveloped water and wastewater infrastructure systems. The idea is to follow an integrated approach to sanitation, which allows for a mutual completion of existing infrastructure with resource-based sanitation systems.
The notion of an "integrated sanitation system (iSaS)" is defined in this work, and guiding principles for iSaS are formulated. Furthermore, the implementation of iSaS is assessed using the example of a case study in the city of Darkhan in Mongolia. More than half of Mongolia's population lives in settlements where yurts (the tents of nomadic people) are predominant. In these settlements (or "ger areas") sanitation systems are non-existent and the hygienic situation is precarious.
An iSaS has been developed for the ger areas in Darkhan and tested over more than two years. Furthermore, a software-based model has been developed with the goal of describing and assessing different variations of the iSaS. The results of the assessment of material flows, monetary flows and communication flows within the iSaS are presented in this dissertation. The iSaS model is adaptable and transferable to the socio-economic conditions of other regions and climate zones.
The attitudes of the architect Luigi Snozzi, examined using the example of the Monte Carasso project
(2021)
What attitude speaks from the works of architects? Can values and instructions for action be read off from walls and plans? In this thesis, Luigi Snozzi's designs for Monte Carasso are examined in this respect as an example. They testify to the responsibility that every architect bears for the environment in which she or he builds.
PLANNING SUPPORT THROUGH THE ANALYSIS OF SPATIAL PROCESSES BY MEANS OF COMPUTER SIMULATIONS. Meaningful urban planning is only possible once one understands, at least in principle, how a city with its complex, interwoven processes essentially functions. Every planning measure constitutes an intervention in the complex organism of a city. If this intervention takes place without knowledge of how the organism works, its effects cannot be estimated either. This contribution presents how urban processes can be understood by means of computer simulations with the aid of so-called multi-agent systems and cellular automata.
The application of the concept of information entropy, together with the principle of maximum entropy, to open-channel flow is essentially based on physical considerations of the problem at hand. This paper is a discussion of the paper by Yeganeh and Heidari (2020), who proposed a new approach for measuring the vertical distribution of streamwise velocity in open channels. The discussers argue that this approach is conceptually incorrect and thus leads to a physically unrealistic situation. In addition, the discussers found some incorrect mathematical expressions (assumed to be typos) in the paper, and also point out that the authors did not cite some of the original papers on the topic.
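For context, the classical entropy-based velocity profile (Chiu's form, shown here as background to the discussion rather than as the discussed paper's equation) follows from maximizing the Shannon entropy of the velocity distribution subject to mass conservation:

```latex
% u_max: maximum velocity; M: entropy parameter;
% xi in [0, 1]: normalized vertical coordinate
\frac{u}{u_{\max}} = \frac{1}{M}\,
    \ln \left[ 1 + \left( e^{M} - 1 \right) \xi \right]
```

The physical grounding the discussers insist on lies in how ξ and M are defined: ξ must reflect the actual geometry of the isovels, and M the ratio of mean to maximum velocity, rather than being chosen freely.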
In recent years, the discussion of digitalisation has reached the media, conferences and the committees of the construction and real-estate industry. While some areas produce innovations and some actors can be described as pioneers, other topics still show deficits with regard to the digital transformation. The building permit procedure can be counted in this category. Regardless of how much architects and engineers in planning offices rely on innovative methods, the building application documents still remain largely in paper form or are printed out at the authority after electronic submission. Existing resources, for example in the form of a building information model, which could support the building permit determination, are not exploited. In order to develop a decision aid for the building permit authorities using digital tools, it is necessary to understand the current state and to question the circumstances before pursuing full automation of the internal administrative processes as the sole solution.
Through a content-related and organisational examination of the relevant areas that influence the building permit determination, an optimisation of the building permit procedure within the authorities is pursued. The complex areas, such as the legal situation and the use of technology, but also the subjective alternatives for action, are identified and structured. The development of a model for determining eligibility for a building permit both conveys an understanding of the influencing factors and increases transparency for all parties involved.
In addition to an international literature review, an empirical study served as the method of investigation. The empirical study was conducted in the form of qualitative expert interviews in order to determine the current state of building permit procedures. The collected data were processed and then subjected to a software-supported content analysis. In combination with the findings of the literature review, the results were worked up in various analyses as the basis for the model.
The result of the investigation is a decision model that closes the gap between the current workflows in the building authorities and a full automation of the building permit examination. The process-oriented structuring of decision-relevant facts in the model provides support in the building permit determination for both examiners and applicants. The theoretical model was transferred into practice in the form of a web application.
This thesis investigates the load-bearing behaviour and the safety level of axially loaded large-diameter bored piles in the Pleistocene calcarenite of the coastal region of Dubai. First, the load-bearing behaviour is described in detail on the basis of results from extensive ground investigations and pile load tests. Subsequently, a structural model based on the finite element method is developed to simulate the load-settlement behaviour of large-diameter bored piles in the sense of a numerical test stand. To account for construction-induced changes in the ground properties in the pile-soil contact zone, which usually cannot be captured by soil- and rock-mechanical element tests, the values of the relevant constitutive parameters are determined iteratively by means of inverse optimisation strategies. Finally, a methodical procedure is presented for reliably estimating the safety level of axially loaded large-diameter bored piles, taking into account the spatial variability of the ground properties.
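Inverse identification of this kind can be illustrated with a deliberately simple example: fitting Chin's hyperbolic load-settlement model to measured pile-test data. Chin's method is a standard interpretation technique used here only as a sketch; the thesis's inverse optimisation of constitutive FE parameters is far more involved:

```python
import numpy as np

def fit_hyperbolic(s, q):
    """Identify the parameters of Chin's hyperbolic load-settlement model
    Q(s) = s / (a + b*s) from measured settlements s and loads q.

    Dividing gives s/Q = a + b*s, so a and b follow from a linear
    regression of s/q against s; the asymptotic (ultimate) load is 1/b."""
    s, q = np.asarray(s, float), np.asarray(q, float)
    b, a = np.polyfit(s, s / q, 1)   # slope b, intercept a
    return a, b
```

The same inverse logic, minimising the misfit between measured and simulated load-settlement curves, underlies the far richer FE-based parameter identification described in the abstract.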