56 Bauwesen
Refine
Has Fulltext
- yes (128)
Document Type
- Doctoral Thesis (46)
- Article (30)
- Master's Thesis (17)
- Conference Proceeding (11)
- Preprint (9)
- Bachelor Thesis (6)
- Report (4)
- Book (2)
- Periodical (2)
- Study Thesis (1)
Institute
- Institut für Strukturmechanik (ISM) (28)
- Junior-Professur Computational Architecture (15)
- Professur Baubetrieb und Bauverfahren (12)
- Professur Informatik in der Architektur (12)
- Professur Modellierung und Simulation - Konstruktion (8)
- F. A. Finger-Institut für Baustoffkunde (FIB) (6)
- Professur Stahl- und Hybridbau (4)
- Institut für Konstruktiven Ingenieurbau (IKI) (3)
- Professur Bauphysik (3)
- Professur Betriebswirtschaftslehre im Bauwesen (3)
Keywords
- Architektur (8)
- OA-Publikationsfonds2022 (7)
- Aerodynamik (5)
- BIM (5)
- Bridge (5)
- OA-Publikationsfonds2020 (5)
- Beton (4)
- CAD (4)
- Erdbeben (4)
- Ingenieurwissenschaften (4)
- Maschinelles Lernen (4)
- Simulation (4)
- Buffeting (3)
- Computational Urban Design (3)
- Concrete (3)
- Denkmalpflege (3)
- Finite-Elemente-Methode (3)
- Gips (3)
- Projektmanagement (3)
- damaged buildings (3)
- machine learning (3)
- simulation (3)
- Bauablauf (2)
- Bauantrag (2)
- Baugenehmigung (2)
- Bauphysik (2)
- Bender elements (2)
- Brückenkappe (2)
- Building Information Modeling (2)
- Building application (2)
- Cement (2)
- Computational Design (2)
- Decision-making (2)
- Energieeffizienz (2)
- Entscheidungsmodell (2)
- Erdbebensicherheit (2)
- Finite Elemente Methode (2)
- Flutter (2)
- Frost (2)
- Fuzzy-Logik (2)
- IFC (2)
- Lebenszyklus (2)
- Lehm (2)
- Lehmbau (2)
- Machine learning (2)
- Modellbildung (2)
- Nanoverbundstruktur (2)
- Optimierung (2)
- Polymere (2)
- Process management (2)
- Project management (2)
- Prozessmanagement (2)
- Quantitative Sozialforschung (2)
- Rissausbreitung (2)
- Shear modulus (2)
- Soil (2)
- Tricalcium silicate (2)
- Umweltbilanz (2)
- Uncertainty (2)
- Vortex particle method (2)
- Vulnerability assessment (2)
- Zement (2)
- earthquake (2)
- earthquake safety assessment (2)
- multiscale (2)
- nanocomposite (2)
- rapid visual screening (2)
- soft computing techniques (2)
- stochastic (2)
- vulnerability assessment (2)
- Ökobilanz (2)
- 0947 (1)
- 0948 (1)
- 0949 (1)
- 2D/3D Adaptive Mesh Refinement (1)
- Abbruch (1)
- Abfall (1)
- Abwasser (1)
- Acceleration (1)
- Adalbert (1)
- Adalbert / Der Nachsommer (1)
- Aerodynamic admittance (1)
- Aerodynamic derivatives (1)
- Aerodynamic nonlinearity (1)
- Aerodynamics (1)
- Aeroelasticity (1)
- Aeroelastizität (1)
- Aktionsraumforschung (1)
- Algorithmus (1)
- Alkali-Kieselsäure-Reaktion (1)
- Alker (1)
- Alterung (1)
- Analytische Lösung (1)
- Arbeitsablauf (1)
- Arbeitsplanung (1)
- Architekturausbildung (1)
- Architekturentwicklung (1)
- Architekturgeschichte Shanghai (1)
- Architekturtheorie (1)
- Artificial coral reefs (1)
- Artificial neural network (1)
- Ausbau (1)
- Autogenous (1)
- Autokorrelationslänge (1)
- Automatisierung (1)
- Autonomous (1)
- BIM <CAD> (1)
- Bau-Ist (1)
- Bauchemie (1)
- Bauhaus (1)
- Bauklimatik (1)
- Baulogistik (1)
- Bauprozess (1)
- Bausoll (1)
- Bausteinbibliothek (1)
- Baustoff (1)
- Bautechnik (1)
- Bauwerkszuordnungskatalog (BWZK) (1)
- Bauwesen (1)
- Bayesian Inference, Uncertainty Quantification (1)
- Bayesian parameter update (1)
- Bayes’schen Inferenz (1)
- Bemessung (1)
- Bending Stiffness of cable elements (1)
- Benutzung (1)
- Berechnung (1)
- Bergbau (1)
- Bergeteich (1)
- Berlin / Ausstellung Exil (1)
- Beschädigung (1)
- Betongelenk (1)
- Betonverflüssiger (1)
- Biegesteifigkeit (1)
- Biegetheorie (1)
- Bogenstaumauer (1)
- Bonding Methods (1)
- Brandschutz (1)
- Bridge aerodynamics (1)
- Bridges (1)
- Brücke (1)
- Brückenbau (1)
- Budgetierung (1)
- Building Permit (1)
- Building performance (1)
- Building permit (1)
- Building safety assessment (1)
- Bühnenbild (1)
- CAAD (1)
- Calciumsulfat (1)
- Calciumsulfatfließestrich (1)
- Cancel Culture (1)
- Category Theory (1)
- Cellular automata (1)
- Censorship (1)
- Climatic conditions (1)
- Cognitive design computing (1)
- Complexation (1)
- Computational Bridge Aerodynamics (1)
- Computational Fluid Dynamics (1)
- Computational fluid dynamics (1)
- Computational modelling (1)
- Computational urban design (1)
- Computer (1)
- Concrete catenary pole (1)
- Construction (1)
- Corrosion (1)
- Corruption (1)
- Coupled-Eulerian–Lagrangian (1)
- Crack (1)
- Cracks 3D Modelling (1)
- Cracks Segmentation (1)
- Cross-Section Distortion (1)
- Cross-Section Warping (1)
- Cryogenic Suction (1)
- Curved thin-walled circular pipes (1)
- Damage (1)
- Damage Identification (1)
- Damage Information Modelling (1)
- Damage accumulation (1)
- Damage mechanism (1)
- Damm (1)
- Data Mining (1)
- Data-driven (1)
- Deal ii C++ code (1)
- Deck cross-sections (1)
- Defekt (1)
- Deformationsverhalten (1)
- Demolition (1)
- Denkmalschutz (1)
- Digital Games Based Learning (1)
- Dresden (1)
- Druckglied (1)
- Dynamic Analysis (1)
- Dynamische Analyse (1)
- EDA <Optimierung> (1)
- Earthquake (1)
- Ecology in design (1)
- Emigration to China (1)
- Emotion (1)
- Energieverbrauch (1)
- Entropie (1)
- Entscheidungsfindung (1)
- Entwicklungsländer (1)
- Entwurf , Psychologie , Orientierung , Methode , Architektur (1)
- Entwurfsmethodik (1)
- Entwurfsprojekt (1)
- Entwurfssystem, computerbasiertes Planen, evolutionäre Optimierung, Grundrisse, generative Methoden (1)
- Ergebnisorientierte Steuerung (1)
- Ergänzungsbaustoffe (1)
- Erinnerungskultur (1)
- Ertüchtigung (1)
- Ethiopia (1)
- Euthanasie <Nationalsozialismus> (1)
- Evolutionäre Algorithmen (1)
- Evolutionärer Algorithmus (1)
- Experiment (1)
- Explicit finite element method (1)
- Extended Finite-Elemente-Methode (1)
- Extreme value distribution (1)
- Facility Management (1)
- Faseroptischer Sensor (1)
- Fatigue life (1)
- Feldversuch (1)
- Feuer (1)
- Finite Element (1)
- Finite Element Method (1)
- First Order Reliability Method (1)
- Fließestrich (1)
- Fließmittel (1)
- Fliplife (1)
- Flucht und Emigration Europäischer Künstler 1933-1945 <1998> (1)
- Fluid memory (1)
- Flächen zweiter Ordnung (1)
- Forschungseinrichtung (1)
- Fracture mechanics (1)
- Frost-Tausalz-Angriff (1)
- Frost-Tausalz-Widerstand (1)
- Fuzzy Logic (1)
- Fuzzy logic (1)
- Fuzzylogik (1)
- GIS (1)
- GaBi Software (1)
- Gebäude (1)
- Gebäudebestand (1)
- Geheimdienst (1)
- Generalized Beam Theory (GBT) (1)
- Generalized Beam Theory (1)
- Geo-Statistical Analysis (1)
- Geografie (1)
- Geometrically nonlinear analysis (1)
- Geotechnik (1)
- Geschwindigkeit (1)
- Gestaltoptimierung (1)
- Gesundheitspolitik (1)
- Gesundheitsverwaltung im Nationalsozialismus (1)
- Glue Spall (1)
- Goal-oriented A Posteriori Error Estimation (1)
- Gravel-bed rivers (1)
- Greater Shanghai (1)
- Großbaustelle (1)
- Grundriss (1)
- Grundrissgenerierung (1)
- Grundstücksumlegung (1)
- Guyed antenna masts (1)
- Hans Poelzig . Architekturgeschichte der DDR (1)
- Hauptachsentransformation (1)
- Healing (1)
- Heritage Studies (1)
- High precision underwater monitoring (1)
- Hitze (1)
- Hochbau (1)
- Hochschulbau (1)
- Hochschule (1)
- Hochschulgebäude (1)
- Holzbau (1)
- Homogeneity (1)
- Homogenität (1)
- Hydraulic geometry (1)
- Hydro-mechanically coupled (1)
- Hydrologie (1)
- Hypoplasticity (1)
- Identifikation (1)
- Immobilienlebenszyklus (1)
- Immobilienportfolio (1)
- Industry Foundation Classes (IFC) (1)
- Informatik (1)
- Infrastruktur (1)
- Interaktion (1)
- Interaktion, Layouts, generierung (1)
- Interaktive numerische Simulation (1)
- Inverse Probleme (1)
- Inverse Problems (1)
- Investition (1)
- Isogeometrische Analyse (1)
- Isovist (1)
- Jahresarbeitsplanung (1)
- Jena (1)
- Judenvernichtung (1)
- K-d Trees (1)
- Kabel (1)
- Kalk (1)
- Kalkulation (1)
- Kapitalisierung (1)
- Kegelschnitt (1)
- Kirchhoff–Love theory (1)
- Klebstoff-Faser-Verbundwerkstoff; Alu-Carbon-Hybridelement; Glas-Kunststoff-Hybridelement; ANSYS; CFK; Klebverbindungen (1)
- Klebtechnik (1)
- Klima (1)
- Klimawandel (1)
- Klimaänderung (1)
- Koelnbrein (1)
- Kollektives Gedächtnis (1)
- Kopplungsmethode (1)
- Korallenriff (1)
- Korruption (1)
- Kosten- und Leistungsentwicklung (1)
- Kulturerbe (1)
- Kulturgeschichte (1)
- Kölnbrein (1)
- Künstliche Intelligenz (1)
- Künstliche Korallenriffe (1)
- Landesgebäude (1)
- Large deformation (1)
- Layout, generative Methode, evolutionäres Verfahren (1)
- Lebenszyklusorientiertes Management (1)
- Lernspiele (1)
- Life cycle assessment (1)
- Loading sequence (1)
- Local maximum entropy approximants (1)
- Long-span Bridges (1)
- Long-span bridges (1)
- Luci (1)
- Machine Learning (1)
- Machttheorie (1)
- Management (1)
- Marmara Region (1)
- Meeresökologie (1)
- Mesh Refinement (1)
- Messtechnik (1)
- Mikrokapsel (1)
- Mikrostruktur (1)
- Milieuforschung (1)
- Mongolia (1)
- Monte-Carlo-Simulation (1)
- Motion-induced forces (1)
- Multi-criteria decision making (1)
- Multikriterielle Optimierung (1)
- Museum (1)
- Musterjahresganglinien (1)
- NS-Architektur (1)
- NURBS (1)
- Nachbehandlung (1)
- Nachbehandlungsmittel (1)
- Nachkriegsmoderne (1)
- Nachsommer (1)
- Nationalsozialismus (1)
- Netscape Internet Foundation Classes (1)
- Neuronales Netz (1)
- Nichtwohngebäude (1)
- Non-Destructive Testing (1)
- Nonlinear Cable Analysis (1)
- Numerische Berechnung (1)
- Nutzerorientierte Bausanierung (1)
- OA-Publikationsfonds2018 (1)
- OA-Publikationsfonds2021 (1)
- Oberleitungsmasten (1)
- Operante Konditionierung (1)
- Optimization (1)
- PU Enrichment method (1)
- Parameteridentif (1)
- Parameteridentifikation (1)
- Parameter Identification (1)
- Pareto Front (1)
- Pareto optimization (1)
- Partikel-Schwarm-Optimierung (1)
- Paulick (1)
- Pfahl (1)
- Pfahlgründung (1)
- Pfahlprobebelastung (1)
- Phase-field model (1)
- Pile Foundation (1)
- Pile Load Test (1)
- Planspiel (1)
- Plastische Deformation (1)
- Plastizität (1)
- Portfoliomanagement (1)
- Post Occupancy Evaluation (1)
- Potenzialanalyse (1)
- Power Ultrasound (1)
- Preservation (1)
- Priorisierung (1)
- Probabilistik (1)
- Prozessoptimierung (1)
- Public Private Partnership (1)
- Punktwolke (1)
- Quadrik (1)
- Quarz (1)
- RC Buildings (1)
- Rainflow counting algorithm (1)
- Rapid Visual Assessment (1)
- Rapid Visual Screening (1)
- Re-use (1)
- ReCiPe (1)
- Relative acceleration (1)
- Reliability (1)
- Reliability Analysis (1)
- Reliability-based design optimization (1)
- Reliabilität (1)
- Residentielle Mobilität (1)
- Reststoff (1)
- Retardation (1)
- Ric (1)
- Risiko (1)
- Risikoerfassung (1)
- Risikomanagement (1)
- Risk management (1)
- Riss (1)
- Rosenhaus (1)
- Rudolf Hamburger (1)
- SHM (1)
- Safety factor (1)
- Salt frost attack (1)
- Salt-Frost-Attack (1)
- Sanierung (1)
- Schaden (1)
- Schadenerkennung (1)
- Schadensakkumulation (1)
- Schadensanalyse (1)
- Schadensmechanismus (1)
- Schloss (1)
- Schwingung (1)
- Schwingungsanalyse (1)
- Schwingungsdämpfer (1)
- Sediment (1)
- Seismic Vulnerability (1)
- Selbstheilung (1)
- Shape optimization (1)
- Shear wave (1)
- Sicherheitsfaktor (1)
- Simulationsmodell (1)
- Simulationsprozess (1)
- Social Game (1)
- Software (1)
- Soil dynamics (1)
- Sowjetunion / GRU (1)
- St. John's University (1)
- Stabilisierung (1)
- Stadtentwicklung (1)
- Stadtgestaltung (1)
- Stadtklima (1)
- Stahlbeton (1)
- Stampflehm (1)
- Staumauer (1)
- Stifter (1)
- Straßenbeton (1)
- Straßenbetriebsdienst (1)
- Structural health monitoring (1)
- Strukturanalyse (1)
- Strukturdynamik (1)
- Strukturoptimierung (1)
- Studentenarbeiten (1)
- Stuttgart / Sonderforschungsbereich Rechnergestützte Modellierung und Simulation zur Analyse (1)
- Stärkefließmittel (1)
- Suffosion (1)
- Super Healing (1)
- Superplasticizer (1)
- Synthese (1)
- Talsperre (1)
- Termincontrolling (1)
- Thermoelasticity (1)
- Thermoelastizität (1)
- Thin shell (1)
- Thin-walled Structures (1)
- Thüringen (1)
- Thüringen im Nationalsozialismus (1)
- Tragfähigkeit (1)
- Träger (1)
- Tsallis entropy (1)
- UDDT (1)
- UML (1)
- Umweltbelastungspunkte (1)
- Unbewegliche Sache (1)
- Ungewissheit (1)
- Unschärfequantifizierung (1)
- Unterteilungsalgorithmen (1)
- Unterteilungsalgorithmus (1)
- Unterwasserarchitektur (1)
- Verallgemeinerte Technische Biegetheorie (1)
- Verbrauchszähler (1)
- Vergleichswerte (1)
- Vibration (1)
- Vibratory pile driving (1)
- Virtuelle Realität (1)
- Volumenstabilität (1)
- Vortex Particle Method (1)
- Vortex-induced vibration (1)
- Vulnerability (1)
- Wechselwirkung (1)
- Weimar / Bauhaus-Universität (1)
- Wind Energy (1)
- Wind Turbines (1)
- Wind load (1)
- Windenergie (1)
- Windlast (1)
- Windturbine (1)
- Wissenschaftliche Einrichtung (1)
- Wohnstandortentscheidungen (1)
- Wohnstandortpräferenzen (1)
- Zementbeton (1)
- Zementhydratation (1)
- Zensur (1)
- Zusatzmittel (1)
- Zuverlässigkeit (1)
- Zuverlässigkeitsanalyse (1)
- adaptive pushover (1)
- alkali-silica reaction (1)
- architecture and urban planning history of Shanghai (1)
- artificial intelligence (1)
- artificial neural networks (1)
- automatic modal analysis (1)
- backcasting (1)
- bauphysikalische Methoden (1)
- big data (1)
- bim; cad; citygml; gbxml; thermal design (1)
- bridge curb (1)
- bridge inspection (1)
- building information modelling (1)
- capitalization (1)
- capsular clustering (1)
- cement (1)
- circumferential contact length (1)
- climate change (1)
- climate loads (1)
- climatic loading (1)
- cluster analysis (1)
- clustering (1)
- complex data analysis (1)
- composite cement (1)
- computational design (1)
- computational hydraulics (1)
- concrete pavements (1)
- construction progress (1)
- cost estimation (1)
- cplan (1)
- critical path monitoring (1)
- curing compounds (1)
- damage identification (1)
- damage information model (1)
- dams (1)
- depthmapx (1)
- design synthesis (1)
- destructive testing (1)
- distributed computing (1)
- durability (1)
- effects of architecture (1)
- ereignisbasiert (1)
- evolutionary optimization (1)
- experiment (1)
- fiber optical Sensor (1)
- finite element method (1)
- fire (1)
- full-waveform inversion (1)
- genetic algorithm (1)
- genetic programming (1)
- grasshopper (1)
- gypsum (1)
- housing (1)
- iPiT® (1)
- iconic architecture (1)
- influence of architecture (1)
- integrated sanitation system (1)
- interactive (1)
- interaktiv (1)
- inverse analysis (1)
- isogeometric analysis (1)
- job description (1)
- long-term examination (1)
- maschinelles Lernen (1)
- material aging (1)
- metakaolin (1)
- microcapsule (1)
- microstructure (1)
- middleware (1)
- modal parameter estimation (1)
- modal tracking (1)
- modern architecture (1)
- mortar method (1)
- multi-objective optimization (1)
- nailed constructions (1)
- nailed trusses (1)
- new alternative sanitation systems NASS (1)
- non-destructive testing (1)
- nonstationarity (1)
- occupant requirements (1)
- occupant satisfaction (1)
- optimization (1)
- passive control (1)
- plasterboard (1)
- power theory of value (1)
- process optimisation (1)
- property sector (1)
- python (1)
- questionnaire (1)
- rapid assessment (1)
- rapid classification (1)
- recovery-based and residual-based error estimators (1)
- reinforcement learning (1)
- residential buildings (1)
- scheduling (1)
- secret service Soviet Union (1)
- seismic control (1)
- seismic hazard analysis (1)
- seismic risk estimation (1)
- self-healing concrete (1)
- semi-probabilistic concept (1)
- sequence (1)
- singular value decomposition (1)
- site-specific spectrum (1)
- slow-late-aggregate (1)
- smooth rectangular channel (1)
- space syntax (1)
- stabilisierter Lehm (1)
- stochastic subspace identification (1)
- strain (1)
- strategic limitation (1)
- stress (1)
- structural analysis (1)
- structural control (1)
- städtische Strukturen (1)
- supervised learning (1)
- tailings dam (1)
- tailings facility (1)
- tall buildings (1)
- tangible user interface (1)
- tuned mass damper (1)
- tuned mass dampers (1)
- urban planning (1)
- urban reagent (1)
- urban regeneration (1)
- urban research-quantitative (1)
- urban simulation (1)
- urban transformation (1)
- urban virus (1)
- wave propagation (1)
- zerstörungsfreie Prüfung (1)
- Äthiopien (1)
- Öffentliche Gebäude (1)
- Öffentliche Liegenschaften (1)
- Öffentlicher Sektor (1)
During the National Socialist era, Bauhausstraße 11 housed numerous institutions of health policy. The building has now become the subject of a research project, and in future its entanglement in National Socialist crimes will also be commemorated on site. This book documents and reflects on the work of remembrance on the campus of the Bauhaus-Universität Weimar and beyond. Through interdisciplinary contributions, the building at today's Bauhausstraße 11 is situated spatially in Weimar and Thuringia, and, in terms of the politics of memory, within a landscape of remembrance of National Socialist crimes that has been fought for over decades.
Those who ask how social entities relate to the past, enter a field defined by competing interpretations and contested practices of a collectively shared heritage. Dissent and conflict among heritage communities represent productive moments in the negotiation of these varying constructs of the past, identities, and heritage. At the same time, they lead to omissions, the overwriting and amendment of existing constructs. A closer look at all that is suppressed, excluded or rejected opens up new perspectives: It reveals how social groups are formed through public disputes upon the material foundations of heritage constructs.
Taking the concept of censorship, the volume engages with the exclusionary and inclusionary mechanisms that underlie the construction of heritage and thus social identities. Censorship is understood here as a discursive strategy in public debates. In current debates, allegations of censorship surface primarily in cases where the handling of a certain heritage constructs is subjected to critical evaluation, or on the contrary, needs to be protected from criticism or even destruction. The authors trace the connection between heritage and identity and show that identity constructs are not only manifested within heritage but are actively negotiated through it.
As machine vision-based inspection methods in the field of Structural Health Monitoring (SHM) continue to advance, the need to integrate the resulting inspection and maintenance data into a centralised building information model for structures grows notably. Modelling the detected damages from those images in a streamlined, automated manner therefore becomes increasingly important. It saves the time and money otherwise spent on updating the model with the latest information gathered through each inspection, and it makes the damages easy to visualise. It also provides all stakeholders with a comprehensive digital representation containing all the information necessary to fully understand the structure's current condition, allowing them to keep track of progressing deterioration, estimate the reduced load-bearing capacity of a damaged element in the model, simulate the propagation of cracks, make well-informed decisions interactively, and facilitate maintenance actions that optimally extend the structure's service life. Although significant progress has recently been made in the information modelling of damages, the currently devised geometrical modelling methods are cumbersome and time-consuming to apply in a full-scale model. For crack damages, an approach for feasible, automated image-based modelling is proposed that combines neural networks, classical computer vision, and computational geometry techniques to create valid shapes for the information model, together with related semantic properties and attributes from inspection data (e.g., width, depth, length, date, etc.). Such models open the door to further uses, ranging from more accurate structural analysis to the simulation of damage propagation in model elements and the estimation of deterioration rates, and they allow for better documentation, data sharing, and realistic visualisation of damages in a 3D model.
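The attribute-extraction step can be illustrated with a deliberately simplified sketch. The function and the hand-written binary mask below are hypothetical stand-ins for the output of a segmentation network; simple pixel counting of a roughly horizontal crack already yields the kind of semantic properties (length, width) the abstract describes attaching to a damage information model:

```python
def crack_attributes(mask, mm_per_px):
    """Derive simple geometric attributes of a roughly horizontal crack
    from a binary segmentation mask (list of rows of 0/1 pixels).

    Returns a dict of semantic properties that could be attached to a
    damage information model element.
    """
    n_cols = len(mask[0])
    widths = []
    for c in range(n_cols):
        w = sum(row[c] for row in mask)  # crack pixels in this column
        if w:
            widths.append(w * mm_per_px)
    if not widths:
        return {"length_mm": 0.0, "max_width_mm": 0.0, "mean_width_mm": 0.0}
    return {
        "length_mm": len(widths) * mm_per_px,   # horizontal extent of crack
        "max_width_mm": max(widths),
        "mean_width_mm": sum(widths) / len(widths),
    }

# Hypothetical 4x8 mask of a horizontal crack, 0.5 mm per pixel
mask = [
    [0, 0, 0, 0, 0, 0, 0, 0],
    [0, 1, 1, 1, 1, 1, 1, 0],
    [0, 0, 1, 1, 1, 0, 0, 0],
    [0, 0, 0, 0, 0, 0, 0, 0],
]
attrs = crack_attributes(mask, mm_per_px=0.5)
```

A production pipeline would of course work on real network output and use skeletonization rather than column counting, but the resulting attribute dictionary is the shape of data the information model consumes.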
This paper presents the development of an assessment scheme for the visual, qualitative evaluation of nailed connections in existing structures, such as board trusses. With regard to further use and preservation, a quick visual inspection helps to evaluate the quality of a structure in terms of its load-bearing capacity and deformation behaviour. Tests of old and new nailed joints, combined with a rating scheme, point out the correlation between the load-bearing capacity and the condition of a joint. Old joints in comparatively good condition tend to exhibit better results than those in poor condition. Moreover, aged joints generally bear more load than newly assembled ones.
The debates in politics and society about climate change, global warming and sustainability, which have been going on for a long time, will never end as long as the problems underlying them remain unsolved. Proposed solutions are usually not properly implemented. In connection with these problems, however, the sense of responsibility for better strategies for the future keeps growing. The environmental disasters of recent years, such as those in the Gulf of Mexico (April 2010) and in Fukushima (March 2011), whose consequences are still felt, show that primary energy use and transport problems are no longer a concern of developing countries alone, but of industrialised countries as well. With its considerable energy demand, the construction sector plays a major role in defining these future strategies.
Above all, research into environmentally friendly materials, the recyclability of the building materials used, and the sensible use of natural resources are the most important priorities. In this respect, earth as a building material offers many advantages. In an article, the earth-building expert Martin Rauch says: "In our time and culture, in which building land and working time cause our greatest costs, traditional earth building, with its large amount of human labour, struggles to find its place. The choice of construction method also decides how and where value is created and whether the budget spent yields a benefit to society. Compared with an exposed-concrete house, a rammed-earth house can save 40% of the primary energy and instead bind more local labour resources. Local craftsmen and medium-sized businesses benefit most from this." Anatolia is the place where the deepest roots of the building culture of human history can still be found. This building culture, which has been almost lost over the past decades, is the culture of earth building. In this respect, this design project aims to restore the dignity of earth in Anatolia and thereby bring back its credibility.
Due to the development of new technologies and materials, optimized bridge design has recently gained more attention. The aim is to reduce the material in bridge components and the CO2 emissions from the cement manufacturing process. Most long-span bridges are therefore designed with high flexibility, low structural damping, and longer, more slender spans. Such designs, however, lead to aeroelastic challenges. Moreover, considering both the structural and the aeroelastic behaviour of bridges leads to contradictory solutions: the structural constraints favour deck prototypes of high depth, which provide high inertia-to-material-volume ratios, whereas purely aerodynamic requirements recommend slender, airfoil-shaped box girders, since they prevent vortex shedding and exhibit minimum drag. This study therefore provides approaches for finding optimal bridge deck cross-sections while accounting for aerodynamic effects. Shape optimization of a deck cross-section is usually formulated to minimize the amount of material by finding adequate parameters, such as the depth, height, and thickness, while ensuring the overall stability of the structure through constraints. Codes and studies analysing wind phenomena and the structural response of bridge deck cross-sections have adopted simplifications, owing to the complexity and uniqueness of such components and the difficulty of obtaining a final model of the aerodynamic behaviour. This thesis studies two main perspectives. The first is fully deterministic and presents a novel framework for generating optimal aerodynamic shapes for streamlined and trapezoidal cross-sections based on a meta-modelling approach. Single- and multi-objective optimizations were carried out, a Pareto front was generated, and the performance of the optimal designs was checked afterwards.
In the second part, a new strategy based on Reliability-Based Design Optimization (RBDO) is proposed to mitigate vortex-induced vibration (VIV) on the Trans-Tokyo Bay bridge. Small changes in the leading and trailing edges are introduced, and uncertainties in the structural system are considered. Probabilistic constraints based on polynomial regression are evaluated, and the problem is solved using both the Reliability Index Approach (RIA) and the Performance Measure Approach (PMA). The results of the first part show that the aspect ratio has a significant effect on the aerodynamic behaviour: deeper cross-sections have lower resistance against flutter and should be avoided. In the second part, the adopted RBDO approach succeeded in mitigating the VIV, and it is shown that designs with a narrow or prolonged bottom-base length featuring an abrupt surface change at the leading and trailing edges can lead to high vertical vibration amplitudes. This research is expected to help engineers select adequate deck cross-section layouts and to encourage researchers to apply optimization concepts in this field and to develop the presented approaches further.
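The multi-objective step can be illustrated with a minimal sketch of non-dominated sorting. The objective scores below are invented pairs of (material volume, flutter penalty) for four hypothetical deck variants; the thesis' actual objectives and meta-models are not reproduced here:

```python
def pareto_front(designs, objectives):
    """Return the non-dominated subset of candidate designs.

    `objectives` maps a design to a tuple of values to minimise, e.g.
    (material volume, flutter penalty). A design is dominated if another
    design is no worse in every objective and strictly better in at
    least one (here: a different objective vector that is <= elementwise).
    """
    scored = [(d, objectives(d)) for d in designs]
    front = []
    for d, f in scored:
        dominated = any(
            g != f and all(g[i] <= f[i] for i in range(len(f)))
            for _, g in scored
        )
        if not dominated:
            front.append(d)
    return front

# Hypothetical (volume, flutter-penalty) scores for four deck variants;
# "D" is dominated by "B" in both objectives.
scores = {"A": (1.0, 3.0), "B": (2.0, 2.0), "C": (3.0, 1.0), "D": (3.0, 3.0)}
front = pareto_front(list(scores), scores.__getitem__)
```

Variants A, B and C trade volume against flutter penalty and all remain on the front; only the dominated variant D is filtered out. A real deck study would evaluate the objectives through a surrogate model rather than a lookup table.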
Reducing the cement clinker content is an important prerequisite for improving the CO2 footprint of concrete. Nevertheless, the durability of such concretes must be sufficient to guarantee a satisfactory service life of structures. Salt frost scaling resistance is critical in this regard, as it is often diminished at increased clinker substitution rates; furthermore, long-term experience with such concretes is still insufficient. A high salt frost scaling resistance thus cannot be achieved by applying only descriptive criteria, such as the concrete composition. It is therefore to be expected that, in the long term, a performance-based service life prediction will replace the descriptive concept.
To achieve the important goal of clinker reduction for concretes also in cold and temperate climates, it is important to understand the mechanisms underlying salt frost scaling. However, conflicting damage theories dominate the current state of the art. The goal of this thesis was consequently to evaluate existing damage theories and to examine them experimentally. It was found that only two theories have the potential to describe the salt frost attack satisfactorily: the glue spall theory and the cryogenic suction theory.
The glue spall theory attributes the surface scaling to the interaction of an external ice layer with the concrete surface. Only when moderate amounts of deicing salt are present in the test solution can the mechanical properties of the resulting ice cause scaling. However, the results in this thesis indicate that severe scaling also occurs at deicing salt levels at which the ice is much too soft to damage concrete. Thus, the glue spall theory cannot account for all aspects of salt frost scaling.
The cryogenic suction theory is based on the eutectic behavior of salt solutions, which consist of two phases, water ice and liquid brine, between the freezing point and the eutectic temperature. The liquid brine acts as an additional moisture reservoir, which facilitates the growth of ice lenses in the surface layer of the concrete. The experiments in this thesis confirmed that ice formation in hardened cement paste increases due to the suction of brine at sub-zero temperatures. The extent of additional ice formation was influenced mainly by the porosity and the chloride binding capacity of the hardened cement paste.
Consequently, the cryogenic suction theory plausibly describes the actual generation of scaling, but it has to be expanded by some crucial aspects to represent the salt frost scaling attack completely. The most important aspect is the intensive saturation process, which is ascribed to the so-called micro ice lens pump. A combined damage theory was therefore proposed, which considers multiple saturation processes; important aspects of this combined theory were confirmed experimentally.
As a result, the combined damage theory constitutes a good basis for understanding the salt frost scaling attack on concrete at a fundamental level. Furthermore, a new approach was identified to account for the reduced salt frost scaling resistance of concretes with reduced clinker content.
For the safe and efficient operation of dams, frequent monitoring and maintenance are required. These are usually expensive, time-consuming, and cumbersome. To alleviate these issues, we propose applying a wave-based scheme for the location and quantification of damages in dams.
To obtain high-resolution “interpretable” images of the damaged regions, we drew inspiration from non-linear full-multigrid methods for inverse problems and applied a new cyclic multi-stage full-waveform inversion (FWI) scheme. Our approach is less susceptible to the stability issues faced by the standard FWI scheme when dealing with ill-posed problems. In this paper, we first selected an optimal acquisition setup and then used synthetic data to demonstrate the capability of our approach to identify a series of anomalies in dams by a mixture of reflection and transmission tomography. The results were sufficiently robust, demonstrating the prospect of applying the scheme to the non-destructive testing of dams.
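The coarse-to-fine idea behind a multi-stage FWI scheme can be sketched on a one-parameter toy model: low frequencies are inverted first, each stage warm-starting the next over a narrower search window, which is what keeps higher-frequency stages away from cycle-skipped minima. The model, frequencies, and grid-search strategy below are illustrative assumptions, not the paper's actual scheme:

```python
import math

def misfit(m, freq, data, ts):
    """L2 waveform misfit for a one-parameter toy model u(t) = sin(2*pi*f*m*t)."""
    return sum((math.sin(2 * math.pi * freq * m * t) - d) ** 2
               for t, d in zip(ts, data))

def multistage_fwi(freqs, data_by_freq, ts, m0=1.0, half_width=0.5):
    """Coarse-to-fine inversion: each stage inverts a higher frequency
    band, warm-started from the previous stage's estimate and searching
    a progressively narrower window around it."""
    m = m0
    for stage, f in enumerate(freqs):
        w = half_width / (2 ** stage)       # shrink search window each stage
        grid = [m - w + 2 * w * k / 200 for k in range(201)]
        m = min(grid, key=lambda x: misfit(x, f, data_by_freq[f], ts))
    return m

# Synthetic "observed" data for a hypothetical true parameter m = 1.3
ts = [i / 50 for i in range(51)]
freqs = [1.0, 2.0, 4.0]
data = {f: [math.sin(2 * math.pi * f * 1.3 * t) for t in ts] for f in freqs}
m_est = multistage_fwi(freqs, data, ts)
```

Inverting the 4 Hz band directly from the initial guess would face a multimodal misfit; starting it from the 1 Hz and 2 Hz results keeps the search inside the correct basin, which is the stabilising principle the cyclic multi-stage scheme exploits.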
A safe and economic structural design based on the semi-probabilistic concept requires statistically representative safety elements, such as characteristic values, design values, and partial safety factors. Regarding climate loads, the safety levels of current design codes strongly reflect experience from former measurements and investigations assuming stationary conditions, i.e. constant frequencies and intensities. Due to climate change, however, the occurrence of the corresponding extreme weather events is expected to change in the future, influencing the reliability and safety of structures and their components. Based on established approaches, a systematically refined, data-driven methodology is therefore proposed for determining design parameters that accounts for nonstationarity as well as standardized targets of structural reliability and safety. The presented procedure picks up the fundamentals of European standardization and extends them with respect to nonstationarity by applying a shifting time window method. The application of the method is demonstrated by example for projected snow loads, and various influencing parameters are discussed.
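A minimal sketch of the shifting time window idea, assuming a Gumbel model fitted by the method of moments and the 98% fractile (roughly the 50-year return value used for climatic loads in the Eurocodes) as the characteristic value; the synthetic series and its trend are invented for illustration:

```python
import math
import statistics

EULER = 0.5772156649  # Euler-Mascheroni constant

def gumbel_characteristic(maxima, p=0.98):
    """Method-of-moments Gumbel fit to a sample of annual maxima,
    returning the p-quantile as the characteristic value."""
    beta = statistics.stdev(maxima) * math.sqrt(6) / math.pi
    mu = statistics.mean(maxima) - EULER * beta
    return mu - beta * math.log(-math.log(p))

def shifting_window_characteristics(maxima, window=30):
    """Re-estimate the characteristic value over a window shifted through
    a nonstationary series of annual maxima, exposing the trend."""
    return [gumbel_characteristic(maxima[i:i + window])
            for i in range(len(maxima) - window + 1)]

# Hypothetical series: annual snow-load maxima with a slight downward trend
maxima = [2.0 - 0.01 * year + 0.4 * math.sin(3.7 * year) for year in range(60)]
s_k = shifting_window_characteristics(maxima, window=30)
```

In a stationary setting the window estimates would scatter around one value; the systematic drift across windows is exactly what a nonstationarity-aware design parameter has to capture.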
Design-related reassessment of structures integrating Bayesian updating of model safety factors
(2022)
In the semi-probabilistic approach to structural design, the partial safety factors are defined by allowing for uncertainties in actions and resistance associated with the parameters' stochastic nature. However, the uncertainties for an individual structure can be examined more closely by incorporating measurement data provided by the sensors of an installed health monitoring scheme. In this context, the current study proposes an approach to revise the partial safety factor on the action side, γE, for existing structures by integrating Bayesian model updating. A simple numerical example of a beam-like structure with artificially generated measurement data is used so that the influence of different sensor setups and data uncertainties on the revised safety factors can be investigated. It is revealed that the health monitoring system can reassess the current capacity reserve of the structure by updating the design safety factors, resulting in a better life-cycle assessment of structures. The outcome is further verified by analysing a real-life small steel railway bridge, confirming the applicability of the proposed method to practical applications.
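The core idea can be sketched with a conjugate normal-normal update of the load-effect distribution. The prior, the monitoring data, and the use of the FORM design-value format with sensitivity factor α = 0.7 and target reliability index β_t = 3.8 (the values recommended in EN 1990) are illustrative assumptions, not the paper's actual model:

```python
import math

def posterior_normal(prior_mu, prior_sd, data, noise_sd):
    """Conjugate normal-normal update of the load-effect mean from
    monitoring data with known measurement noise."""
    n = len(data)
    prec = 1 / prior_sd**2 + n / noise_sd**2
    mu = (prior_mu / prior_sd**2 + sum(data) / noise_sd**2) / prec
    return mu, math.sqrt(1 / prec)

def gamma_E(mu, sd, e_k, alpha=0.7, beta_t=3.8):
    """Partial safety factor on the action side: ratio of the FORM design
    value mu + alpha*beta_t*sd to the characteristic value e_k."""
    return (mu + alpha * beta_t * sd) / e_k

# Hypothetical design-stage assumptions and monitoring data
e_k = 100.0                               # characteristic load effect
g_prior = gamma_E(80.0, 15.0, e_k)        # factor from design assumptions
mu_p, sd_p = posterior_normal(80.0, 15.0, [72.0, 75.0, 70.0, 74.0], 5.0)
g_updated = gamma_E(mu_p, sd_p, e_k)
```

Because the monitoring data both lower the estimated mean load effect and shrink its uncertainty, the updated factor drops below the design-stage value, which is precisely the capacity reserve the abstract refers to.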
Bolted connections are widely employed in structures like transmission poles, wind turbines, and television (TV) towers. The behaviour of bolted connections is often complex and plays a significant role in the overall dynamic characteristics of the structure. The goal of this work is to conduct a fatigue lifecycle assessment of such a bolted connection block of a 193 m tall TV tower, for which 205 days of real measurement data were obtained from the installed monitoring devices. Based on the recorded data, the best-fit stochastic wind distribution for 50 years, the decisive wind action, and the locations for the fatigue analysis were determined. A 3D beam model of the entire tower is developed to extract the nodal forces at the connection block location under various mean wind speeds; it is then coupled with a detailed finite element model of the connection block, with over three million degrees of freedom, to acquire stress histories on pre-selected bolts. The random stress histories are analysed using the rainflow counting algorithm (RCA), and the damage is estimated using Palmgren-Miner's damage accumulation law. A modification is proposed to integrate the loading-sequence effect, which is otherwise ignored, into the RCA, and the differences between the two RCAs are investigated in terms of the accumulated damage.
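The RCA plus Palmgren-Miner chain described above can be illustrated with a compact, self-contained sketch (a simplified ASTM E1049-style rainflow; the S-N constants are placeholders, and the loading-sequence modification proposed in the paper is not included):

```python
def turning_points(series):
    # Reduce the stress history to its sequence of peaks and valleys.
    tp = [series[0]]
    for x in series[1:]:
        if x == tp[-1]:
            continue
        if len(tp) >= 2 and (tp[-1] - tp[-2]) * (x - tp[-1]) > 0:
            tp[-1] = x          # still rising/falling: extend the run
        else:
            tp.append(x)
    return tp

def rainflow(series):
    # ASTM E1049-style rainflow counting -> list of (range, count) pairs.
    stack, cycles = [], []
    for p in turning_points(series):
        stack.append(p)
        while len(stack) >= 3:
            x = abs(stack[-1] - stack[-2])
            y = abs(stack[-2] - stack[-3])
            if y > x:
                break
            if len(stack) == 3:
                cycles.append((y, 0.5))   # half cycle at the history start
                stack.pop(0)
            else:
                cycles.append((y, 1.0))   # closed full cycle
                del stack[-3:-1]
    cycles.extend((abs(a - b), 0.5)       # residual half cycles
                  for a, b in zip(stack, stack[1:]))
    return cycles

def miner_damage(cycles, C, m):
    # Palmgren-Miner sum D = sum(n_i / N_i) with S-N curve N = C * S^-m.
    return sum(n * s ** m / C for s, n in cycles if s > 0)
```

Running `rainflow` on the classic history [-2, 1, -3, 5, -1, 3, -4, 4, -2] reproduces the textbook cycle count; failure is predicted when the Miner sum reaches 1.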
Revisiting vernacular technique: Engineering a low environmental impact earth stabilisation method
(2022)
The major drawbacks of earth as a construction material, such as its low water stability and moderate strength, have led mankind to stabilize earth. Different civilizations developed vernacular techniques mainly focussing on lime, pozzolan or gypsum stabilization. Recently, cement has become the most commonly used additive in earth stabilization, as it improves the strength and durability of plain earth and is a familiar, globally available construction material. However, using cement as an additive reduces the environmental advantages of earth and runs counter to global targets for the reduction of CO2 emissions. Current alternatives to cement stabilization are neither efficient enough to reduce its environmental impact nor able to achieve better results than cement. This thesis therefore deals with the rediscovery of a reverse engineering approach to a low environmental impact earth stabilization technique, aiming to replace cement in earth stabilization.
The first step of the method consists of a comprehensive review of earth stabilization with regard to earthen building standards and soil classification, which allows the research gap to be identified. The review showed that there is great potential in other additives which yield improvements similar to those achieved by cement. However, the studies conducted so far either use expansive soils, which are not suitable for earth construction, or artificial pozzolans, which indirectly contribute to CO2 emissions. This is the main research gap.
The key concept for the development in the second step of the method is to combine vernacular additives to both improve the strength and durability of plain earth and to reduce the CO2 emissions. Various earth-mixtures were prepared and both development and performance tests were done to investigate the performance of this technique. The laboratory analyses on mix-design have proven a high durability and the results show a remarkable increase in strength performance. Furthermore, a significant reduction in CO2 emissions in comparison to cement stabilization could be shown.
The third step of the method discusses the results drawn from the experimental programme. In addition, the potential of the new earth mixture with regards to its usability in the field of building construction and architectural design is further elaborated on.
The method used in this study is the first of its kind that allows investors to avoid very time-consuming processes such as finding a suitable source for soil excavation and soil classification. The developed mixture shows significant workability and is suitable for the production of stabilized earthen panels, the very first of their kind. Such a panel is practically feasible and reasonable, and could be integrated into earthen building standards in general and in particular into DIN 18948, the standard for earthen boards published in 2018.
The fracture of microcapsules is an important issue for releasing the healing agent that heals cracks in encapsulation-based self-healing concrete. The capsular clustering generated during the concrete mixing process is considered one of the critical factors in the fracture mechanism. Since there is a lack of studies in the literature regarding this issue, the design of self-healing concrete cannot be made without an appropriate modelling strategy. In this paper, the effects of microcapsule size and clustering on the fractured microcapsules are studied computationally. A simple 2D computational modelling approach is developed based on the eXtended Finite Element Method (XFEM) and a cohesive surface technique. The proposed model shows that microcapsule size and clustering play significant roles in governing the load-carrying capacity and the crack propagation pattern, and determine whether the microcapsule will be fractured or debonded from the concrete matrix. The higher the microcapsule circumferential contact length, the higher the load-carrying capacity. When the contact length is below 25 % of the microcapsule circumference, debonding of the microcapsule from the concrete becomes more likely. The greater the core/shell ratio (i.e. the smaller the shell thickness), the greater the likelihood of microcapsules being fractured.
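The reported 25 % contact-length threshold can be paraphrased as a simple rule of thumb; the function below is only an illustrative restatement of the 2D XFEM finding, not the model itself:

```python
import math

def likely_failure_mode(contact_length, radius, debond_fraction=0.25):
    # Rule of thumb distilled from the 2D XFEM results: a capsule whose
    # circumferential contact length with the matrix is below ~25 % of
    # its circumference tends to debond instead of fracturing.
    return ("debond"
            if contact_length / (2 * math.pi * radius) < debond_fraction
            else "fracture")
```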
BIM-based digitalisation of existing buildings from the FM perspective, using heating systems as an example
(2022)
The aim of this thesis is to define the information relevant to facility management for the Building Information Modeling-based creation of models of existing buildings, using a heating system as an example. Based on this, the necessary work steps of the building survey are derived. To define these steps, the basic procedure of a building survey and the legal requirements for operating a heating system are set out. Furthermore, the thesis analyses the benefits and challenges of the interplay between Building Information Modeling and facility management. The defined work steps were applied in an example project, in which the decisive operating data for each system component were defined as information requirements in accordance with DIN 17412. The building model was supplemented with parameters carrying the information relevant to facility management. The results of the example project are presented in meaningful sections, plans and 3D visualisations, and were finally validated with respect to facility management. From the work steps and results, a guideline for the digitalisation process of existing buildings for facility management was derived.
Data acquisition systems and methods to capture high-resolution images or reconstruct 3D point clouds of existing structures are an effective way to document their as-is condition. These methods enable a detailed analysis of building surfaces, providing precise 3D representations. However, for condition assessment and documentation, damage is mainly annotated in 2D representations, such as images, orthophotos, or technical drawings, which do not allow for a 3D workflow or automated comparisons of multitemporal datasets. Available software for building heritage data management and analysis offers a wide range of annotation and evaluation functions but lacks integrated post-processing methods and systematic workflows. The article presents novel methods developed to facilitate such automated 3D workflows and validates them on a small historic church building in Thuringia, Germany. Post-processing steps using photogrammetric 3D reconstruction data along with imagery were implemented, which show the possibilities of integrating 2D annotations into 3D documentation. Further, applying voxel-based methods to the dataset enables the evaluation of geometric changes of multitemporal annotations in different states and their assignment to elements of scans or building models. The proposed workflow also highlights the potential of these methods for condition assessment and the planning of restoration work, as well as the possibility of representing the analysis results in standardised building model formats.
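A voxel-based comparison of multitemporal annotations can be sketched as follows; this is illustrative only, and the voxel size and the point-cloud input format (lists of metric x, y, z tuples) are assumptions:

```python
def voxelize(points, voxel=0.05):
    # Map 3D points (metres) to integer voxel indices at the given edge length.
    return {(int(x // voxel), int(y // voxel), int(z // voxel))
            for x, y, z in points}

def annotation_change(points_t0, points_t1, voxel=0.05):
    # Voxel-wise comparison of one annotated damage region in two epochs:
    # voxels only in the later epoch have grown, voxels only in the earlier
    # epoch have receded, shared voxels are stable.
    v0, v1 = voxelize(points_t0, voxel), voxelize(points_t1, voxel)
    return {"grown": len(v1 - v0), "shrunk": len(v0 - v1),
            "stable": len(v0 & v1)}
```

The same voxel grid can also carry element IDs from a scan or building model, so that each changed voxel is assigned to a component for reporting.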
The seismic vulnerability assessment of existing reinforced concrete (RC) buildings is a significant input to disaster mitigation plans and rescue services. Different countries have evolved various Rapid Visual Screening (RVS) techniques and methodologies to deal with the devastating consequences of earthquakes on the structural characteristics of buildings and on human casualties. Artificial intelligence (AI) methods, such as machine learning (ML) algorithm-based methods, are increasingly used in various scientific and technical applications. Investigations into using these techniques in civil engineering applications have shown encouraging results and reduced human intervention, including uncertainties and biased judgment. In this study, several known non-parametric algorithms are investigated for RVS using a dataset covering different earthquakes. Moreover, the methodology enables the buildings' vulnerability to be examined based on factors related to the buildings' importance and exposure. In addition, a web-based application built on Django is introduced; the interface is designed to ease seismic vulnerability investigation in real time. The concept was validated using two case studies, and the achieved results showed the potential efficiency of the proposed approach.
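As an illustration of a non-parametric RVS classifier, a minimal k-nearest-neighbour vote over building feature vectors might look like the sketch below; the features and damage grades are invented for the example and do not reflect the study's dataset or algorithms:

```python
from collections import Counter

def knn_predict(train, query, k=3):
    # Majority vote among the k nearest labelled feature vectors
    # (non-parametric: no model is fitted, the data itself is the model).
    dist = lambda a, b: sum((u - v) ** 2 for u, v in zip(a, b))
    nearest = sorted(train, key=lambda item: dist(item[0], query))[:k]
    return Counter(label for _, label in nearest).most_common(1)[0][0]

# Hypothetical screening data: (storeys, irregularity score) -> damage grade.
survey = [((1, 1), "low"), ((1, 2), "low"),
          ((5, 5), "high"), ((6, 5), "high"), ((5, 6), "high")]
```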
The aim of this thesis was to synthesise novel superplasticizers based on starch as a renewable raw material and to characterise their interaction with Portland cement. The need to research alternatives to synthetic admixtures arises from the quantities required to process approx. 4.1 Gt of cement per year, with superplasticizers accounting for approx. 85 % of all admixtures.
To synthesise superplasticizers from starch, three base starches of different origin were used: a cassava starch with a low molecular mass, a wheat starch with a high molecular mass, and a potato starch with a medium molecular mass, the last being a waste product of the potato-processing industry. The starch superplasticizers were synthesised by chemical modification in a two-step process. In the first step, the molecular weight of the wheat and potato starches was reduced by acid-hydrolytic degradation; for the short-chain cassava starch, no degradation of the molecular mass was necessary. In the second step, anionic charges were introduced into the starch molecules by treating the degraded starches and the cassava starch with sodium vinylsulfonate.
Assessment of the synthesis method for producing starch superplasticizers
In this context, molecular parameters of the starch superplasticizers were to be adjusted in a targeted manner to obtain a plasticising effect in Portland cement. In particular, the molecular mass and the amount of anionic charge were varied in order to identify their relation to the dispersing performance.
1. GPC measurements showed that the molecular mass of the long-chain wheat starch could be reduced under the chosen acid-hydrolytic degradation conditions. By varying these conditions, four degraded wheat starches were produced whose molecular mass was reduced by 27.5–43 %. The molecular mass of the potato starch could be reduced by approx. 26 % through acid-hydrolytic degradation.
2. PCD measurements showed that anionic charges could be introduced into the degraded starches by sulfoethylation of the free hydroxyl groups. By varying the duration of the sulfoethylation, the amount of anionic charge could be controlled and deliberately varied, so that starch superplasticizers with increasing charge were synthesised in the following order:
W-3 < W-2 < K-1 < W-4 < W-1 < M-1
As a result of the chemical modification, six starch superplasticizers with varied molecular masses and anionic charges were produced. It was shown that the origin of the starch is irrelevant for the chemical modification. Owing to the synthesis route, the superplasticizers were obtained as basic aqueous suspensions with active-substance contents in the range of 23.5–50 %.
Assessment of the dispersing performance of the synthesised starch superplasticizers
The dispersing performance was measured by rheological experiments with a rotational viscometer, considering the influence on the flow curves and the viscosity curves. Comparison with a polycondensate and a PCE superplasticizer allowed the starch superplasticizers to be classified and evaluated.
3. The rheological experiments showed that the starch superplasticizers achieve a dispersing performance comparable to that of the PCE superplasticizer used for comparison. In addition, the plasticising effect of all six starch superplasticizers was clearly higher than that of the polycondensate superplasticizer. The collapse of the dispersing performance of polycondensate superplasticizers at w/c ratios < 0.4, known from the literature, was confirmed.
4. All six starch superplasticizers reduced the yield stress and the dynamic viscosity of the cement paste at a w/c ratio of 0.35.
5. Comparing the starch superplasticizers with one another showed that the amount of anionic charge is a key parameter. The superplasticizers M-1, K-1, W-1 and W-4, with anionic charges > 6 C/g, showed the highest dispersing performance; the classical superplasticizers used for comparison had anionic charges of 1.2 C/g (polycondensate) and 1.6 C/g (PCE). At first, the molecular mass appeared to be irrelevant for the dispersing performance. For this reason, the base wheat starch was chemically modified again by introducing anionic charges without reducing the molecular mass. The resulting starch derivative acted as a thickener in the cement paste. From this it was concluded that a defined limiting molecular mass (150,000 Da) exists which must be undershot to obtain superplasticizers from starch. Furthermore, the results show that chemical modification can produce both superplasticizers and thickeners from starch.
Assessment of the influence on the hydration and the pore solution of Portland cement
It is known from the literature that superplasticizers can significantly influence the hydration of Portland cement. Therefore, calorimetric and conductometric investigations were carried out on cement suspensions to which the synthesised starch superplasticizers had been added, complemented by pore-solution analyses at different stages of hydration.
6. The calorimetric investigations showed that the dormant period is, in part, considerably prolonged by the addition of the starch superplasticizers: the higher the anionic charge of the superplasticizer, the longer the dormant period lasts. In addition, a low molecular mass of the superplasticizer favours the prolongation of the dormant period.
7. The conductometric investigations showed that all starch superplasticizers slow down the free and diffusion-controlled growth of the C-S-H phases. In particular, the precipitation of portlandite, which correlates with the initial setting, occurs at considerably later times. Furthermore, the conductometric results correlated with the temporal evolution of the calcium concentration in the pore solutions. Comparing the starch superplasticizers with one another showed that the molecular mass is a key parameter: the superplasticizer M-1 with the lowest molecular mass, which contains small amounts of short-chain anhydroglucose units, retards hydrate phase formation most strongly; this effect is comparable to that of sugars. Moreover, the results indicated that the starch superplasticizers adsorb on the first hydration products, thereby slowing hydrate phase formation.
The calorimetric and conductometric data and the results of the pore-solution analysis of the cement called for a closer look at how the starch superplasticizers influence the hydration of the clinker phases C3A and C3S. The investigations were therefore carried out on the clinker phases C3A and C3S in analogy to those on Portland cement.
Assessment of the influence on the hydration and the pore solution of C3A
While the calorimetric investigations of C3A hydration indicated a tendency towards slower hydrate phase formation in the presence of the starch superplasticizers, the conductometric results provided fundamental insights into the influence on C3A hydration. Stage I of C3A hydration is characterised by a drop in electrical conductivity, which correlates with the decrease of the calcium ion concentration and the increase of the aluminium ion concentration in the pore solution of the C3A suspensions. Following stage I, a plateau forms in the electrical conductivity curves.
8. It was shown that the starch superplasticizers slow down stage I of C3A hydration, i.e. the dissolution and the formation of the first calcium aluminate hydrates. In particular, the superplasticizers with higher molecular mass increased the duration of stage I. Stage II is prolonged most strongly in the order M-1 > W-3 > K-1 > W-2 ≥ W-4, showing that no dependence on the anionic charge could be identified. The results showed that especially the short-chain starch M-1 sustains stage II for longer.
9. Stages III and IV of C3A hydration are prolonged in particular by the starch superplasticizers with higher molecular mass.
The results of the pore-solution analysis correlate with the electrical conductivity results. In particular, the temporal courses of the calcium ion concentration reproduced the conductivity curves of C3A hydration with close agreement.
Assessment of the influence on the hydration and the pore solution of C3S
The calorimetric investigations show that C3S hydration is substantially slowed by the starch superplasticizers: the maximum of the main hydration peak is shifted to later times and its height is clearly reduced. The conductometric experiments clarified which stages of C3S hydration were affected.
10. It was shown that both the amount of introduced anionic charge and the presence of very small starch molecules (sugars) are key parameters of the retarded hydration kinetics of C3S. The underlying retardation mechanism is a combination of reduced C-S-H nucleation and adsorption processes on the first C-S-H phases formed on the C3S particles.
Assessment of the adsorption behaviour on cement, C3A and C3S
The adsorption behaviour of the starch superplasticizers was determined with the phenol-sulfuric acid method on cement, C3A and C3S suspensions. By comparing adsorption rates and adsorbed amounts as a function of the molecular parameters of the superplasticizers, an interaction model was identified.
11. The high dispersing performance of the starch superplasticizers originates from adsorption processes on the first hydrate phases formed on the cement. The molecular mass of the superplasticizer is a key parameter governing the adsorption mechanism: while anionic long-chain starches adsorb on several cement particles simultaneously and cross-link the cement particles with one another (thickening effect), short-chain anionic starches adsorb only on the first hydrate phases of the individual cement particles and cause electrostatic repulsion (plasticising effect).
12. It was shown that the starch superplasticizers with lower molecular weight adsorb on the hydrate phases of the cement at higher concentrations. The superplasticizers with higher molecular mass reach a plateau at a dosage of 0.7 %. It is concluded that the larger superplasticizer molecules require more space and saturate the hydrating surfaces at lower dosages. Moreover, the superplasticizers with higher anionic charge adsorb in larger amounts on the cement, C3A and C3S particles.
13. The adsorption processes take place on the first hydrate phases formed on the C3A particles, slowing both the dissolution of the C3A and the formation of the calcium aluminate hydrates. Furthermore, the slowdown of the free and diffusion-controlled hydrate phase growth of the C3S was found to be caused by adsorption of the starch superplasticizers on the first C-S-H phases formed. In addition, very small sugar-like molecules in the short-chain cassava starch were found to be able to suppress the formation of the first C-S-H nuclei, which explains the long-lasting plateau phase in the electrical conductivity of C3S hydration.
Assessment of pore structure and strength development
The quality of the microstructure was assessed by determining the bulk density and the pore size distribution by mercury intrusion porosimetry. By adding the starch superplasticizers to the cement pastes, hardened cement paste specimens with a w/c ratio of 0.35, i.e. reduced by 17.5 %, could be produced at unchanged workability. Lowering the w/c ratio increases the bulk density of the hardened cement paste.
14. The addition of the starch superplasticizers and the reduced w/c ratio refine the pore structure of the hardened cement paste compared to the reference, as the total porosity decreases: the capillary pore fraction in particular is reduced and the gel pore fraction is increased. In contrast to PCE superplasticizers, the addition of the starch superplasticizers does not entrain additional air voids, which means that defoamers can be dispensed with when using them.
15. Consistent with the denser hardened cement paste matrix, increased flexural and compressive strengths were determined after 7 d and 28 d for the pastes containing the starch superplasticizers. In particular, the 28 d compressive strength was increased by factors of 3.5–6.6 due to the reduced w/c ratio.
In recent years, lightweight materials such as polymer nanocomposites (PNCs) have been studied and developed due to their excellent physical and chemical properties. Structures composed of these composite materials are widely used in aerospace engineering, automotive components, and electrical devices. The outstanding mechanical, thermal, and electrical properties of the carbon nanotube (CNT) make it an ideal filler for enhancing the corresponding properties of polymer materials. The heat transfer behaviour of composite materials has very promising engineering applications in many fields, especially in electronic devices and energy storage equipment, and is essential in high-energy-density systems since electronic components need heat dissipation functionality. In other words, the heat generated in electronic devices should ideally be dissipated by light and small heat sinks.
Polymeric composites consist of fillers embedded in a polymer matrix, and the fillers significantly affect the overall (macroscopic) performance of the material. Common carbon-based fillers include single-walled carbon nanotubes (SWCNT), multi-walled carbon nanotubes (MWCNT), carbon nanobuds (CNB), fullerene, and graphene; such additives inside the matrix have become a popular subject for researchers. Extraordinary characteristics, such as high load-bearing performance, lightweight design, excellent chemical resistance, easy processing, and good heat transfer, make the design of polymeric nanotube composites (PNCs) flexible. Because different fillers reinforce the composite in different ways, the material offers a high degree of design freedom, and structures can be tailored to the needs of specific applications. As already stated, our research focus is on SWCNT-enhanced PNCs. Since experiments are time-consuming, sometimes expensive, and cannot shed light on phenomena taking place, for instance, at the interfaces/interphases of composites, they are often complemented by theoretical and computational analysis.
While most studies are based on deterministic approaches, comparatively few stochastic methods account for uncertainties in the input parameters. In deterministic models, the output is fully determined by the parameter values and the initial conditions. However, uncertainties in input parameters such as aspect ratio, volume fraction, and the thermal properties of fiber and matrix need to be taken into account for reliable predictions. In this research, a stochastic multiscale method is provided to study the influence of numerous uncertain input parameters on the thermal conductivity of the composite. A hierarchical multi-scale method based on computational homogenization is presented to predict the macroscopic thermal conductivity from the fine-scale structure. To study the inner mechanism, we use the finite element method and employ surrogate models to conduct a Global Sensitivity Analysis (GSA). The GSA is performed in order to quantify the influence of the conductivity of the fiber, the conductivity of the matrix, the Kapitza resistance, the volume fraction and the aspect ratio on the macroscopic conductivity; first-order and total-effect sensitivity indices are computed with different surrogate models.
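A crude first-order Sobol estimate can illustrate how such a GSA ranks input parameters. The sketch below uses a toy rule-of-mixtures conductivity model rather than the paper's homogenization scheme, and the parameter bounds are invented for the example:

```python
import random
import statistics

def first_order_sobol(model, bounds, idx, n=2000, outer=100, inner=100, seed=0):
    # Crude first-order index S_i = Var(E[Y | X_i]) / Var(Y): the outer
    # loop fixes X_i, the inner Monte Carlo loop averages over the rest.
    rng = random.Random(seed)
    draw = lambda: [rng.uniform(a, b) for a, b in bounds]
    var_y = statistics.pvariance([model(draw()) for _ in range(n)])
    cond_means = []
    for _ in range(outer):
        xi = rng.uniform(*bounds[idx])
        samples = []
        for _ in range(inner):
            x = draw()
            x[idx] = xi
            samples.append(model(x))
        cond_means.append(statistics.mean(samples))
    return statistics.pvariance(cond_means) / var_y

# Toy effective conductivity: rule of mixtures over (k_fiber, k_matrix, v_f).
bounds = [(10.0, 100.0), (0.1, 0.5), (0.01, 0.1)]
k_eff = lambda x: x[2] * x[0] + (1 - x[2]) * x[1]
```

For this toy model the fiber conductivity and the volume fraction dominate, while the matrix conductivity is negligible; a surrogate-based GSA as in the study produces the same kind of ranking far more cheaply than brute-force sampling of the homogenization model.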
As stochastic multiscale models are computationally expensive, surrogate approaches are commonly exploited. With the emergence of high-performance computing and artificial intelligence, machine learning has become a popular modeling tool for numerous applications. Machine learning (ML) is commonly used in regression and maps data through specific rules and algorithms to build input-output models; it is particularly useful for nonlinear input-output relationships when sufficient data is available. ML has also been used in the design of new materials and in multiscale analysis. For instance, artificial neural networks and integrated learning seem ideally suited for such a task: they can theoretically approximate any nonlinear relationship through the connection of neurons. These mapping relationships are employed to carry out data-driven simulations of inputs and outputs in stochastic modeling.
This research aims to develop a stochastic multi-scale computational model of PNCs in heat transfer. The multi-scale stochastic modeling with uncertainty analysis and machine learning methods consists of the following components:
-Uncertainty Analysis. A surrogate-based global sensitivity analysis is coupled with a hierarchical multi-scale method employing computational homogenization. The effect of the conductivity of the fibers and the matrix, the Kapitza resistance, the volume fraction and the aspect ratio on the 'macroscopic' conductivity of the composite is systematically studied. All selected surrogate models consistently yield the conclusion that the most influential input parameter is the aspect ratio, followed by the volume fraction; the Kapitza resistance has no significant effect on the thermal conductivity of the PNCs. The most accurate surrogate model in terms of the R2 value is the moving least squares (MLS) model.
-Hybrid Machine Learning Algorithms. A combination of an artificial neural network (ANN) and particle swarm optimization (PSO) is applied to estimate the relationship between the input and output parameters. The ANN models the composite, while PSO improves the prediction performance through an optimized global minimum search. The thermal conductivities of the fibers and the matrix, the Kapitza resistance, the volume fraction and the aspect ratio are selected as input parameters; the output is the macroscopic (homogenized) thermal conductivity of the composite. The results show that the PSO significantly improves the predictive ability of this hybrid intelligent algorithm, which outperforms traditional neural networks.
-Stochastic Integrated Machine Learning. A stochastic integrated machine learning based multiscale approach for the prediction of the macroscopic thermal conductivity in PNCs is developed. Seven types of machine learning models are exploited in this research, namely Multivariate Adaptive Regression Splines (MARS), Support Vector Machine (SVM), Regression Tree (RT), Bagging Tree (Bag), Random Forest (RF), Gradient Boosting Machine (GBM) and Cubist. They are used as components of stochastic modeling to construct the relationship between the variable of the inputs’ uncertainty and the macroscopic thermal conductivity of PNCs. Particle Swarm Optimization (PSO) is used for hyper-parameter tuning to find the global optimal values leading to a significant reduction in the computational cost. The advantages and disadvantages of various methods are also analyzed in terms of computing time and model complexity to finally give a recommendation for the applicability of different models.
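The PSO component used in the hybrid and integrated approaches above can be sketched in a few lines. In the sketch, a two-parameter least-squares fit stands in for the ANN weight vector (or hyper-parameter set) being optimized; all settings are illustrative:

```python
import random

def pso(loss, dim, particles=20, iters=150, w=0.7, c1=1.5, c2=1.5,
        span=5.0, seed=0):
    # Standard global-best particle swarm over a real-valued vector:
    # each particle is pulled towards its personal best and the swarm best.
    rng = random.Random(seed)
    pos = [[rng.uniform(-span, span) for _ in range(dim)]
           for _ in range(particles)]
    vel = [[0.0] * dim for _ in range(particles)]
    pbest = [p[:] for p in pos]
    pbest_f = [loss(p) for p in pos]
    g = min(range(particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]
    for _ in range(iters):
        for i in range(particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            f = loss(pos[i])
            if f < pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], f
                if f < gbest_f:
                    gbest, gbest_f = pos[i][:], f
    return gbest, gbest_f

# Stand-in for ANN training: fit y = 2x + 1 with weights (a, b) by MSE.
data = [(x / 10, 2 * x / 10 + 1) for x in range(11)]
mse = lambda wts: sum((wts[0] * x + wts[1] - y) ** 2 for x, y in data) / len(data)
weights, err = pso(mse, dim=2)
```

Because no gradients are needed, the same loop can tune ML hyper-parameters (as in the stochastic integrated approach) simply by swapping in a cross-validation score as the loss.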
Heat in the City of Jena
(2022)
This thesis deals with the specific factors and interactions of the urban climate and with strategies for preventing and compensating local climate change. The problematic characteristics of the urban climate will become more pronounced as a consequence of climate change. In particular, heat stress will increase and adversely affect living conditions in the city. Higher temperatures in cities and a larger temperature difference to the surrounding countryside alter wind flows and the water balance. Strategies are needed to reduce pollutant emissions, land consumption, waste production and the consumption of water, energy and resources, in order to limit both climate change in the long term and its already unavoidable effects on cities.
As a case study, the thesis examines the urban climate of the city of Jena, its future changes as a result of climate change, and the city's structural measures and adaptation strategies. Jena is the second-largest city in the federal state of Thuringia and is today among the warmest and driest major cities in Germany.
The results of the thesis are then applied in an urban design concept. The Bachstraße site lies in the city centre, the district most affected by heat. As the former main site of the Jena University Hospital, it is to be converted into a sustainable life-science campus, with the majority of the listed former hospital buildings being retained. The focus is on implementing the previously formulated sustainable strategies for improving the local urban climate and mitigating the effects of climate change on Jena's particularly heat-affected city centre.
Tropical coral reefs, one of the world's oldest ecosystems and home to some of the highest levels of biodiversity on the planet, are currently facing an unprecedented ecological crisis during this massive human-induced period of extinction. Hence, tropical reefs symbolically stand for the destructive effects of human activities on nature [4], [5]. Artificial reefs are excellent examples of how architectural design can be combined with ecosystem regeneration [6], [7], [8]. However, working at the interface between the artificial and the complex, temporal nature of natural systems presents a challenge, inter alia with respect to the B-rep modelling legacy of computational modelling.
The presented doctorate investigates strategies for applying digital practice to realise what is an essential bulwark for retaining reefs in extremely challenging times. Beyond the main question of integrating computational modelling and high-precision monitoring strategies in artificial coral reef design, this doctorate explores techniques, methods and linking frameworks to support future research and practice in ecology-led design contexts.
Considering the many existing approaches to artificial coral reef design, one finds that they often fall short in precisely understanding the relationships between architectural and ecological aspects (e.g. how surface design and material composition can foster coral larvae settlement, or how structural three-dimensionality can enhance biodiversity) and lack an integrated underwater (UW) monitoring process. Such a process is necessary in order to gather knowledge about the ecosystem and make it available for design, and to learn whether artificial structures contribute to reef regeneration or rather harm the coral reef ecosystem.
For the research, empirical experimental methods were applied: algorithmic coral reef design, high-precision UW monitoring, and computational modelling and simulation, validated through parallel real-world physical experimentation with two Artificial Reef Prototypes (ARPs) in Gili Trawangan, Indonesia (2012–today). Multiple discrete methods and sub-techniques were developed in seventeen computational experiments and applied in such a way that many are cross-validated and integrated into an overall framework that is offered as a significant contribution to the field. Other main contributions include the ecosystem-aware design approach, Key Performance Indicators (KPIs) for coral reef design, algorithmic design and fabrication of Biorock cathodes, new high-precision UW monitoring strategies, long-term real-world constructed experiments, new digital analysis methods and two new front-end web-based tools for designing and monitoring reefs. The methodological framework is a finding of the research, with many technical components that were tested and combined in this way for the first time.
In summary, the thesis responds to the urgency and relevance of preserving marine species in tropical reefs during this massive extinction period by offering a differentiated approach to artificial coral reefs, demonstrating the feasibility of digitally designing such 'living architecture' according to multiple context and performance parameters. It also provides an in-depth critical discussion of computational design and architecture in the context of ecosystem regeneration and Planetary Thinking. In that respect, the thesis serves as both a theoretical and practical background for computational design, ecology and marine conservation – not only to advance the design of artificial coral reefs technically but also to provide essential criteria and techniques for conceiving them.
Keywords: Artificial coral reefs, computational modelling, high precision underwater monitoring, ecology in design.
In recent years, the discussion of digitalization has arrived in the media, at conferences, and in committees of the construction and real estate industry. While some areas are producing innovations and some contributors can be described as pioneers, other topics still show deficits with regard to the digital transformation. The building permit process can be counted in this category. However much architects and engineers in planning offices rely on innovative methods, building documents have so far remained in paper form in too many cases, or are printed out after electronic submission to the authority. Existing resources - for example in the form of a building information model, which could provide support in the building permit process - are not being taken advantage of. In order to use digital tools to support decision-making by the building permit authorities, it is necessary to understand the current situation and to question existing conditions before pursuing the overall automation of internal authority processes as the sole solution.
With a substantive and organizational consideration of the relevant areas that influence the determination of building permitability, an improvement of the building permit procedure within the authorities is proposed. Complex areas - such as the legal situation, the use of technology, and the subjective alternatives for action - are identified and structured. The development of a model for determining building permitability both conveys an understanding of the influencing factors and creates greater transparency for all parties involved.
In addition to an international literature review, an empirical study served as the research method. The empirical study was conducted in the form of qualitative expert interviews in order to determine the current state in the field of building permit procedures. The collected data material was processed and subsequently subjected to a software-supported content analysis. The results were processed, in combination with findings from the literature review, in various analyses to form the basis for a proposed model.
The result of the study is a decision model that closes the gap between the current processes within the building authorities and an overall automation of the building permit review process. Through its process-oriented structuring of decision-relevant facts, the model supports examiners and applicants in determining building permitability. The theoretical model could be transferred into practice in the form of a web application.
The detailed structural analysis of thin-walled circular pipe members often requires the use of shell- or solid-based finite element methods. Although these methods provide a very good approximation of the deformations, they require a high degree of discretization, which causes high computational costs. The analysis of thin-walled circular pipe members based on classical beam theories, on the other hand, is easy to implement and needs much less computation time; however, these theories are limited in their ability to approximate the deformations, as they cannot consider the deformation of the cross-section.
This dissertation focuses on the study of the Generalized Beam Theory (GBT) which is both accurate and efficient in analyzing thin-walled members. This theory is based on the separation of variables in which the displacement field is expressed as a combination of predetermined deformation modes related to the cross-section, and unknown amplitude functions defined on the beam's longitudinal axis. Although the GBT was initially developed for long straight members, through the consideration of complementary deformation modes, which amend the null transverse and shear membrane strain assumptions of the classical GBT, problems involving short members, pipe bends, and geometrical nonlinearity can also be analyzed using GBT. In this dissertation, the GBT formulation for the analysis of these problems is developed and the application and capabilities of the method are illustrated using several numerical examples. Furthermore, the displacement and stress field results of these examples are verified using an equivalent refined shell-based finite element model.
The developed static and dynamic GBT formulations for curved thin-walled circular pipes are based on the linear kinematic description of the curved shell theory. In these formulations, the complex problem in pipe bends due to the strong coupling effect of the longitudinal bending, warping and the cross-sectional ovalization is handled precisely through the derivation of the coupling tensors between the considered GBT deformation modes. Similarly, the geometrically nonlinear GBT analysis is formulated for thin-walled circular pipes based on the nonlinear membrane kinematic equations. Here, the initial linear and quadratic stress and displacement tangent stiffness matrices are built using the third and fourth-order GBT deformation mode coupling tensors.
Longitudinally, the formulation of the coupled GBT element stiffness and mass matrices is presented using a beam-based finite element formulation. Furthermore, the formulated GBT elements are tested for shear and membrane locking, and the limitations of the formulations regarding the membrane locking problem are discussed.
Mitigating Risks of Corruption in Construction: A theoretical rationale for BIM adoption in Ethiopia
(2021)
This PhD thesis sets out to investigate the potentials of Building Information Modeling (BIM) to mitigate risks of corruption in the Ethiopian public construction sector. The wide-ranging capabilities and promises of BIM have led to the strong perception among researchers and practitioners that it is an indispensable technology. Consequently, it has become the frequent subject of science and research. Meanwhile, many countries, especially the developed ones, have committed themselves to applying the technology extensively. Increasing productivity is the most common and frequently cited reason for that.
However, both technology developers and adopters have been oblivious to the potential of BIM for addressing critical challenges in the construction sector, such as corruption. This is particularly significant in developing countries like Ethiopia, where the problems and effects of corruption are acute. Studies reveal that bribery and corruption have long pervaded the construction industry worldwide. The complex and fragmented nature of the sector provides an environment for corruption. The Ethiopian construction sector is not immune from this epidemic reality. In fact, it is regarded as one of the most vulnerable sectors owing to varying socio-economic and political factors. Since 2015, Ethiopia has been adopting BIM, yet without clear goals and strategies. As a result, the potential of BIM for combating concrete problems of the sector remains untapped. To this end, this dissertation does pioneering work by showing how the collaboration and coordination features of the technology contribute to minimizing the opportunities for corruption; tracing loopholes otherwise remains complex and ineffective in traditional documentation processes.
Proceeding from this anticipation, the thesis raises two primary questions: what are the areas and risks of corruption in Ethiopian public construction projects, and how could BIM be leveraged to mitigate these risks? To tackle these and other secondary questions, the research employs a mixed-method approach. The selected main research strategies are Survey, Grounded Theory (GT) and Archival Study. First, the author disseminates an online questionnaire among Ethiopian construction engineering professionals to pinpoint areas of vulnerability to corruption; 155 responses are compiled and scrutinized quantitatively. Then, semi-structured in-depth interviews are conducted with 20 senior professionals, primarily to comprehend the opportunities for and risks of corruption in the identified highly vulnerable project stages and decision points. At the same time, open interviews (consultations) are held with 14 informants to capture the state of construction documentation, BIM and loopholes for corruption in the country. These qualitative data are then analyzed utilizing the principles of GT, heat/risk mapping and Social Network Analysis (SNA). The risk mapping assists the researcher in prioritizing corruption risks, whilst SNA makes it methodically feasible to identify key actors/stakeholders in the corruption network. Based on the generated research data, the author constructs a [substantive] grounded theory around the elements of corruption in the Ethiopian public construction sector. This theory later guides the subsequent strategic proposition of BIM. Finally, 85 public-construction-related cases are analyzed systematically to substantiate and confirm the previous findings.
By way of these multiple research endeavors, based first and foremost on the triangulation of qualitative and quantitative data analysis, the author conveys a number of key findings. First, estimation, tender document preparation and evaluation, construction material and quality control, and additional work orders are found to be the most vulnerable stages in the design, tendering and construction phases respectively. Second, middle management personnel of contractors and clients, aided by brokers, play the most critical roles in corrupt transactions within the prevalent corruption network. Third, grand corruption persists in the sector, attributed to the fact that top management and higher officials exercise their overriding power, supported by the lack of project audits and accountability. Conversely, individuals at the operational level utilize intentional and unintentional 'errors' as an opportunity for corruption.
In light of these findings, two conceptual BIM-based risk mitigation strategies are prescribed: active and passive automation of project audits, and the monitoring of project information throughout the projects' value chain. These propositions rely on BIM's present dimensional capabilities and the promises of Integrated Project Delivery (IPD). Moreover, BIM's synergies with other technologies, such as Information and Communication Technology (ICT) and Radio Frequency technologies, are also treated. All these arguments form the basis for the main thesis of this dissertation: that BIM is able to mitigate corruption risks in the Ethiopian public construction sector. The skepticism about BIM that stems from the complex nature of corruption and from the strategic and technological limitations of BIM is also discussed and addressed by this work. The thesis thereby uncovers possible research gaps and lays the foundation for further studies.
Accurate prediction of stable alluvial hydraulic geometry, in which erosion and sedimentation are in equilibrium, is one of the most difficult but critical topics in the field of river engineering. Data mining algorithms have been gaining attention in this field due to their high performance and flexibility. However, an understanding of the potential of these algorithms to provide fast, cheap and accurate predictions of hydraulic geometry has been lacking. This study provides the first quantification of this potential. Using at-a-station field data, predictions of flow depth, water-surface width and longitudinal water-surface slope are made using three standalone data mining techniques - Instance-based Learning (IBK), KStar and Locally Weighted Learning (LWL) - along with four types of novel hybrid algorithms in which the standalone models are trained with Vote, Attribute Selected Classifier (ASC), Regression by Discretization (RBD) and Cross-validation Parameter Selection (CVPS) algorithms (Vote-IBK, Vote-KStar, Vote-LWL, ASC-IBK, ASC-KStar, ASC-LWL, RBD-IBK, RBD-KStar, RBD-LWL, CVPS-IBK, CVPS-KStar, CVPS-LWL). Through a comparison of their predictive performance and a sensitivity analysis of the driving variables, the results reveal that: (1) Shields stress was the most effective parameter in the prediction of all geometry dimensions; (2) hybrid models had a higher prediction power than standalone data mining models, empirical equations and traditional machine learning algorithms; (3) the Vote-KStar model had the highest performance in predicting depth and width, and ASC-KStar in estimating slope, each providing very good prediction performance. Through these algorithms, the hydraulic geometry of any river can potentially be predicted accurately and with ease using just a few readily available flow and channel parameters. The results thus show that these models have great potential for use in stable channel design in data-poor catchments, especially in developing nations where technical modelling skills and understanding of the hydraulic and sediment processes occurring in the river system may be lacking.
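As an illustration of how the standalone and hybrid schemes combine, the sketch below pairs an IBK-style k-nearest-neighbour regressor with a CVPS-style leave-one-out selection of k. The data and candidate grid are invented for illustration; the study itself used Weka-style implementations on field data:

```python
def knn_predict(train_x, train_y, query, k):
    """Instance-based (IBK-style) prediction: mean of the k nearest neighbours."""
    order = sorted(range(len(train_x)), key=lambda i: abs(train_x[i] - query))
    nearest = order[:k]
    return sum(train_y[i] for i in nearest) / k

def cvps_select_k(xs, ys, candidates=(1, 2, 3, 5)):
    """Cross-validation parameter selection (CVPS): leave-one-out error per k."""
    best_k, best_err = None, float("inf")
    for k in candidates:
        err = 0.0
        for i in range(len(xs)):
            tx = xs[:i] + xs[i + 1:]       # hold out observation i
            ty = ys[:i] + ys[i + 1:]
            err += (knn_predict(tx, ty, xs[i], k) - ys[i]) ** 2
        if err < best_err:
            best_k, best_err = k, err
    return best_k

# Invented at-a-station data: discharge (m^3/s) -> flow depth (m)
discharge = [5.0, 8.0, 12.0, 20.0, 33.0, 50.0, 75.0]
depth     = [0.4, 0.5, 0.65, 0.9, 1.2, 1.5, 1.9]
k = cvps_select_k(discharge, depth)
pred = knn_predict(discharge, depth, 30.0, k)
```

The real models work on several predictors at once (e.g. Shields stress, discharge), but the hybridization pattern - a wrapper algorithm steering a standalone learner - is the same.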
In recent years, the discussion of digitalization has arrived in the media, at conferences and in committees of the construction and real estate industry. While some areas produce innovations and some actors can be described as pioneers, other topics still show deficits with regard to the digital transformation. The building permit procedure can also be counted in this category. However much architects and engineers in planning offices rely on innovative methods, building documents have so far largely remained in paper form, or are printed out in the authority after electronic submission. Existing resources, for example in the form of a building information model, which could support the determination of building permitability, are not being exploited. In order to develop a digital decision aid for the building permit authorities, it is necessary to understand the current state and to question existing conditions before pursuing a complete automation of internal authority processes as the sole solution.
With a substantive and organizational consideration of the relevant areas that influence the determination of building permitability, an optimization of the building permit procedure within the authorities is sought. The complex areas, such as the legal situation, the use of technology, but also the subjective alternatives for action, are identified and structured. The development of a model for determining building permitability both conveys an understanding of the influencing factors and creates greater transparency for all parties involved.
In addition to an international literature review, an empirical study served as the research method. The empirical study was conducted in the form of qualitative expert interviews in order to determine the current state in the field of building permit procedures. The collected data material was processed and then subjected to a software-supported content analysis. The results were combined with the findings of the literature review and worked up in various analyses to form the basis of the model.
The result of the study is a decision model that closes the gap between the current processes in the building authorities and a complete automation of the building permit review. Through its process-oriented structuring of decision-relevant facts, the model supports examiners and applicants in determining building permitability. The theoretical model could be transferred into practice in the form of a web application.
To reduce concrete-specific CO2 emissions, an increased use of clinker-reduced cements and concretes is sought. However, the reduction of the clinker content must not impair the durability of the concrete in a way that is relevant to its service life. In this context, the freeze-thaw resistance in the presence of de-icing salt is a critical quantity, since it is often negatively affected at higher clinker substitution rates. Matters are complicated by the fact that only limited practical experience is available for clinker-reduced concretes. A high freeze-thaw and de-icing salt resistance therefore cannot be ensured by descriptive specifications alone. Accordingly, a performance-based service-life assessment should in future also be carried out for components exposed to freeze-thaw attack with de-icing salt.
An indispensable basis for achieving these goals is an understanding of the damage processes during freeze-thaw attack with de-icing salt. The state of research, however, is characterized by contradictory damage theories. The objective of this work was therefore to evaluate the existing damage theories in light of the current state of knowledge and to examine and classify them with the author's own investigations. The review of the state of research showed that only two theories have the potential to describe the freeze-thaw attack with de-icing salt comprehensively: the glue spall theory and the cryogenic suction theory.
The glue spall theory attributes the formation of scaling to mechanical damage of the concrete surface by an adhering ice layer. According to this theory, critical stress states in the ice layer that can damage the concrete surface occur only at moderate de-icing salt concentrations in the acting solution. In this work, however, it could be shown that severe scaling also occurs at de-icing salt concentrations at which mechanical damage of the concrete by the ice can be ruled out. This demonstrated that the glue spall theory is not suitable.
The cryogenic suction theory is based on the eutectic properties of de-icing salt solutions, which in the frozen state always consist of a mixture of solid water ice and liquid, highly concentrated salt solution as long as the temperature does not fall below the eutectic point. The liquid phase in the salt-containing ice represents a previously unconsidered liquid reservoir for frozen concrete which, despite the high salt concentration, is assumed to intensify ice formation in the concrete surface zone and thus cause scaling. In this work it was confirmed that ice formation in the hardened cement paste is indeed intensified during freezing in highly concentrated de-icing salt solution. The extent of the additional ice formation was also influenced by the ability of the hardened cement paste to bind chloride ions from the de-icing salt solution.
In summary, it was found that the cryogenic suction theory provides a good description of the freeze-thaw attack with de-icing salt, but needs to be supplemented by further aspects. The most important extension is the consideration of the intensive saturation of concrete by the micro-ice-lens pump. Based on this consideration, a combined damage theory was formulated, and important assumptions of this theory could be confirmed experimentally. As a result, the basis for a deeper understanding of the freeze-thaw attack with de-icing salt was created. In addition, a new approach was identified to explain the (potential) reduction of the freeze-thaw and de-icing salt resistance of clinker-reduced concretes.
In recent decades, road maintenance services have undergone profound changes. These changes also include the operational steering philosophy, which is intended to support a rationally planned and economical organization of road maintenance. The contents and scope of the services are specified in a binding way, which enables budgeting for the planned annual work program.
The aim of the study is the development of a model for determining service-related pattern annual curves to support annual work planning. For this purpose, 260 individual annual curves were available for each service of the service area "green maintenance".
As a result of the study, the service-related pattern annual curve is determined in four steps: in the first step, the data quality is checked; in the second step, a correlation analysis is performed; in the third step, the service characteristics are reviewed from a technical point of view; and in the fourth step, the service-related pattern annual curve is determined from the remaining service-related annual curves.
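Steps two to four of this procedure can be sketched as follows. The monthly profiles, the correlation threshold and the median-based filtering rule are illustrative assumptions, not the study's actual data or criteria:

```python
def pearson(a, b):
    """Pearson correlation coefficient of two equally long sequences."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a) ** 0.5
    vb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (va * vb)

def pattern_profile(profiles, r_min=0.5):
    """Keep profiles well correlated with the element-wise median profile,
    then average the remaining ones month by month."""
    months = len(profiles[0])
    median = [sorted(p[m] for p in profiles)[len(profiles) // 2] for m in range(months)]
    kept = [p for p in profiles if pearson(p, median) >= r_min]
    return [sum(p[m] for p in kept) / len(kept) for m in range(months)]

# Three invented mowing-effort profiles (share of annual work per month);
# the third is implausibly winter-peaked and should be filtered out.
profiles = [
    [0, 0, 2, 8, 15, 20, 20, 18, 10, 5, 2, 0],
    [0, 1, 3, 9, 14, 19, 21, 17, 9, 4, 3, 0],
    [20, 18, 10, 5, 2, 0, 0, 0, 2, 8, 15, 20],
]
master = pattern_profile(profiles)
```

The real model additionally includes the preceding data-quality check and an expert review of the retained curves.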
A vast number of existing buildings were constructed before the development and enforcement of seismic design codes and therefore run the risk of being severely damaged by seismic excitations. This poses not only a threat to human life but also affects the socio-economic stability of the affected area. It is therefore necessary to assess the present vulnerability of such buildings in order to make an educated decision regarding risk mitigation by seismic strengthening techniques such as retrofitting. However, it is not feasible, economically or in terms of time, to inspect, repair and augment every old building on an urban scale. As a result, reliable rapid screening methods, namely Rapid Visual Screening (RVS), have garnered increasing interest among researchers and decision-makers alike. In this study, the effectiveness of five different Machine Learning (ML) techniques in vulnerability prediction applications has been investigated. Damage data from four different earthquakes - in Ecuador, Haiti, Nepal and South Korea - have been utilized to train and test the developed models. Eight performance modifiers have been implemented as variables in supervised ML. The investigations in this paper illustrate that the vulnerability classes assessed by the ML techniques were very close to the actual damage levels observed in the buildings.
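The supervised-classification idea can be sketched with a deliberately simple learner. A nearest-centroid classifier stands in for the five (unnamed) ML techniques, and the screening attributes and damage grades below are invented:

```python
def fit_centroids(X, y):
    """Nearest-centroid classifier: one centroid per damage grade."""
    cents = {}
    for c in sorted(set(y)):
        rows = [x for x, label in zip(X, y) if label == c]
        cents[c] = [sum(col) / len(rows) for col in zip(*rows)]
    return cents

def predict(cents, x):
    """Assign the grade whose centroid is closest in feature space."""
    def sqdist(a, b):
        return sum((u - v) ** 2 for u, v in zip(a, b))
    return min(cents, key=lambda c: sqdist(cents[c], x))

# Toy screening data: [storeys, soft storey (0/1), plan irregularity (0/1)]
X = [[2, 0, 0], [3, 0, 1], [6, 1, 1], [8, 1, 1], [1, 0, 0], [7, 1, 0]]
y = ["slight", "slight", "severe", "severe", "slight", "severe"]
model = fit_centroids(X, y)
grade = predict(model, [5, 1, 1])
```

In the study, eight performance modifiers per building and observed damage grades from the four earthquakes take the place of these toy features and labels.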
The main purpose of the thesis is to ensure the safe demolition of old guyed antenna masts located in different parts of Germany. The major problem in the demolition of these masts is that they can fall in an unexpected direction because of buckling. The objective of this thesis is the development of numerical models using the finite element method (FEM) and the assurance of a controlled collapse by devising different timing setups for the detonation of the explosives that cut the guy cables. The results of this thesis will help avoid unexpected outcomes during the demolition process and prevent the risk of a mast collapsing onto nearby structures.
The Attitudes of the Architect Luigi Snozzi: Examined Using the Example of the Monte Carasso Project
(2021)
What attitude speaks from the works of architects? Can values and instructions for action be read off walls and plans? In this thesis, Luigi Snozzi's designs for Monte Carasso are examined as an example. They testify to the responsibility that every architect bears for the environment in which she or he builds.
This work presents a robust status monitoring approach for detecting damage in cantilever structures based on logistic functions. In addition, a stochastic damage identification approach based on changes in eigenfrequencies is proposed. The proposed algorithms are verified using catenary poles of electrified railway tracks. The proposed damage features overcome the limitation of frequency-based damage identification methods available in the literature, which can detect damage in structures at Level 1 only. Changes in the eigenfrequencies of cantilever structures are sufficient to identify possible local damage at Level 3, i.e., to cover damage detection, localization and quantification. The proposed algorithms identified the damage with relatively small errors, even at high noise levels.
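The idea of a logistic status feature on eigenfrequency shifts can be sketched as follows; the frequency values, threshold and steepness are illustrative assumptions, not the calibrated values of the study:

```python
import math

def logistic_indicator(freq_healthy, freq_current, alpha=200.0, threshold=0.02):
    """Logistic status function on the relative eigenfrequency drop:
    close to 0 for an intact structure, approaching 1 once the drop
    exceeds the assumed damage threshold."""
    drop = (freq_healthy - freq_current) / freq_healthy
    return 1.0 / (1.0 + math.exp(-alpha * (drop - threshold)))

# Hypothetical first eigenfrequency of a catenary pole (Hz)
f0 = 2.40
healthy = logistic_indicator(f0, 2.39)   # ~0.4 % drop: measurement noise
damaged = logistic_indicator(f0, 2.20)   # ~8.3 % drop: likely damage
```

The smooth transition is what makes the feature robust to noise: small frequency fluctuations stay near zero instead of toggling a hard threshold.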
This work makes a scientific contribution to exploring the possible applications of real estate portfolio management for public palace administrations operating museums in Germany. In particular, a model for steering investments that is specific to their organization is developed, and its applicability in practice is discussed with experts.
One of the most important subjects of hydraulic engineering is the reliable estimation of the transverse distribution of bed and wall shear stresses in rectangular channels. This study makes use of the Tsallis entropy, genetic programming (GP) and adaptive neuro-fuzzy inference system (ANFIS) methods to assess the shear stress distribution (SSD) in rectangular channels.
To evaluate the results of the Tsallis entropy, GP and ANFIS models, laboratory observations were used in which shear stress was measured with an optimized Preston tube, and the SSD was determined for various aspect ratios in a rectangular channel. To investigate the shear stress percentage, 10 data series with a total of 112 different data points were used. The results of the sensitivity analysis show that the most influential parameter for the SSD in a smooth rectangular channel is the dimensionless ratio B/H, where B is the transverse coordinate and H the flow depth. With the parameters (b/B) and (B/H) for the bed and (z/H) and (B/H) for the wall as inputs, the GP model performed better than the others. Based on the analysis, it can be concluded that the GP and ANFIS algorithms are more effective in estimating shear stress in smooth rectangular channels than the Tsallis entropy-based equations.
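For reference, the Tsallis entropy on which the first model family rests has the standard form (the study's specific SSD constraints are not reproduced here):

```latex
S_q = \frac{1}{q-1}\left(1 - \int_0^{\tau_{\max}} f(\tau)^{\,q}\,\mathrm{d}\tau\right),
\qquad
\lim_{q \to 1} S_q = -\int_0^{\tau_{\max}} f(\tau)\,\ln f(\tau)\,\mathrm{d}\tau ,
```

where $f(\tau)$ is the probability density of the shear stress $\tau$ and $q$ is the entropy index; the Shannon entropy is recovered in the limit $q \to 1$. Maximizing $S_q$ subject to normalization and mean-stress constraints yields the assumed form of the SSD.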
Although it is impractical to avert subsequent natural disasters, advances in simulation science and seismological studies make it possible to lessen the catastrophic damage. Many urban areas currently contain a large number of structures prone to damage by earthquakes, constructed without the guidance of a national seismic code, either before it existed or before it was enforced. For instance, in Istanbul, Turkey, a high-seismicity area, around 90% of buildings are substandard, a figure that can be generalized to other earthquake-prone regions in Turkey. The reliability of this building stock with respect to earthquake-induced collapse is currently uncertain. Nonetheless, it is not feasible to perform a detailed seismic vulnerability analysis on each building, as this would be too complicated and expensive. This indicates the necessity of a reliable, rapid and computationally easy method for seismic vulnerability assessment, commonly known as Rapid Visual Screening (RVS). In the RVS methodology, an observational survey of buildings is performed, and from the data collected during the visual inspection a structural score is calculated, without performing any structural calculations, to determine the expected damage of a building and whether the building needs detailed assessment. Although this method saves time and resources, the evaluation process is dominated by vagueness and uncertainties owing to the subjective, qualitative judgments of the inspecting experts. The vagueness can be handled adequately through fuzzy set theory, but ordinary (type-1) fuzzy sets do not cover all sorts of uncertainties due to their crisp membership functions. In this study, a novel method for the rapid visual hazard safety assessment of buildings against earthquakes is introduced, in which an interval type-2 fuzzy logic system (IT2FLS) is used to cover these uncertainties.
In addition, the proposed method makes it possible to evaluate the earthquake risk of a building by considering factors related to its importance and exposure. A smartphone app prototype of the method has been introduced. For validation of the proposed method, two case studies have been selected; the results of the analysis demonstrate the robustness and efficiency of the proposed method.
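The core difference between type-1 and interval type-2 fuzzy sets can be sketched briefly: instead of a single membership value, an IT2 set returns an interval bounded by a lower and an upper membership function. The linguistic term, its triangular bounds and the input score below are invented for illustration, not taken from the study:

```python
def trimf(x, a, b, c):
    """Triangular membership function with feet a, c and peak b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def it2_membership(x, upper, lower):
    """Interval type-2 fuzzy set: membership is an interval [mu_lo, mu_up];
    the gap between the bounds encodes the inspector's uncertainty."""
    return (trimf(x, *lower), trimf(x, *upper))

# Hypothetical linguistic term "high vulnerability" over a 0-10 visual score
upper_mf = (3.0, 7.0, 10.0)   # upper bound of the footprint of uncertainty
lower_mf = (4.0, 7.0, 9.0)    # lower bound
lo, up = it2_membership(6.0, upper_mf, lower_mf)
```

A full IT2FLS would aggregate such firing intervals over a rule base and type-reduce them to a crisp structural score; this fragment only shows where the extra uncertainty band enters.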
In recent decades, the Finite Element Method has become the main method for statics and dynamics analysis in engineering practice. For current problems, this method provides a faster, more flexible solution than the analytic approach. Prognoses of complex engineering problems that used to be almost impossible to obtain are now feasible.
Although the finite element method is a robust tool, it leads to new questions about engineering solutions. These new problems can be divided into two major groups: the first concerns computer performance; the second relates to understanding the digital solution.
Simultaneously with the development of the finite element method for numerical solutions, a theory between beam theory and shell theory was developed: the Generalized Beam Theory (GBT). This theory offers not only a systematic and analytically clear presentation of complicated structural problems, but also a compact and elegant calculation approach that can improve computational performance.
Regrettably, GBT long remained little known internationally, since most publications on the theory, especially in the early years, were written in German. Only in recent years has GBT gradually become a fertile research topic, with developments from linear to non-linear analysis.
Another reason for the limited use of GBT is the isolated application of the theory. Although recent research applies the finite element method to solve GBT problems numerically, the coupling between GBT finite elements and other element types (shell, solid, etc.) has not been the subject of previous research. Thus, the main goal of this dissertation is the coupling between GBT and shell/membrane elements. Consequently, one achieves the benefits of both sides: the versatility of shell elements with the high performance of GBT elements.
Based on the assumptions of GBT, this dissertation shows how the separation of variables leads to two calculation domains of a beam structure: a cross-section modal analysis and the longitudinal amplification axis. Therefore, the finite element method can be applied not only in the cross-section analysis, but also in the development of an exact GBT finite element in the longitudinal direction.
For the cross-section analysis, this dissertation presents the solution of the quadratic eigenvalue problem with an original separation between plate and membrane mechanisms. This yields a clearer representation of the deformation modes, as well as a reduced quadratic eigenvalue problem.
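The quadratic eigenvalue problem mentioned above is commonly solved by linearisation to an eigenproblem of twice the size; the sketch below illustrates that standard technique on a toy 2-DOF system (the matrices are illustrative, not an actual GBT cross-section):

```python
import numpy as np

def solve_qep(M, C, K):
    """Solve the quadratic eigenvalue problem (lam^2*M + lam*C + K) x = 0
    by linearisation: with z = [x, lam*x], A z = lam z for the companion matrix A."""
    n = M.shape[0]
    Minv = np.linalg.inv(M)
    A = np.block([
        [np.zeros((n, n)), np.eye(n)],
        [-Minv @ K,        -Minv @ C],
    ])
    eigvals, eigvecs = np.linalg.eig(A)
    return eigvals, eigvecs[:n, :]   # eigenvalues and the x-part of the eigenvectors

# Toy undamped 2-DOF example: eigenvalues come in +/- i*omega pairs
M = np.eye(2)
C = np.zeros((2, 2))
K = np.array([[2.0, -1.0], [-1.0, 2.0]])
lams, modes = solve_qep(M, C, K)   # omega^2 in {1, 3} for this K
```

The same linearisation applies unchanged when damping-like coupling terms make the middle matrix nonzero, which is the situation the reduced GBT eigenproblem addresses.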
Concerning the longitudinal direction, this dissertation develops novel exact elements based on hyperbolic and trigonometric shape functions. Although these functions do not have trivial expressions, their periodic derivatives allow a recursive procedure that systematises the development of the stiffness matrices. These shape functions also enable a single-element discretisation of the beam structure and ensure a smooth stress field.
From these developments, this dissertation achieves its primary objective: the connection of GBT and shell elements in a mixed model. Based on the displacement field, it is possible to define the coupling equations applied in the master-slave method. Therefore, one can model structural connections and joints with finite shell elements and the structural beams and columns with GBT finite elements.
As a side effect, the coupling equations limit the displacement field of the shell elements under the assumptions of GBT, in particular in the neighbourhood of the coupling cross-section.
Although these side effects are almost unnoticeable in linear analysis, they lead to cumulative errors in non-linear analysis. Therefore, this thesis finishes with the evaluation of the mixed GBT-shell models in non-linear analysis.
Complex vortex flow patterns around bridge piers, especially during floods, cause scour processes that can result in the failure of foundations. Abutment scour is a complex three-dimensional phenomenon that is difficult to predict, especially with traditional formulas obtained using empirical approaches such as regression. This paper tests a standalone Kstar model against five novel hybrid algorithms: bagging (BA-Kstar), dagging (DA-Kstar), random committee (RC-Kstar), random subspace (RS-Kstar), and weighted instance handler wrapper (WIHW-Kstar), to predict scour depth (ds) under clear-water conditions. The dataset consists of 99 scour depth measurements from flume experiments (Dey and Barbhuiya, 2005) using vertical, semicircular and 45° wing abutment shapes. Four dimensionless parameters, relative flow depth (h/l), excess abutment Froude number (Fe), relative sediment size (d50/l) and relative submergence (d50/h), were considered for the prediction of relative scour depth (ds/l). A portion of the dataset was used for calibration (70%), and the remainder for model validation. Pearson correlation coefficients helped decide the relevance of the input parameters, and finally four different combinations of input parameters were used. The performance of the models was assessed visually and with quantitative metrics. Overall, the best input combination for the vertical abutment shape is Fe, d50/l and h/l, while for the semicircular and 45° wing shapes the combination of Fe and d50/l is the most effective. Our results show that incorporating Fe, d50/l and h/l leads to higher performance, while involving d50/h reduces the models' predictive power for the vertical abutment shape; for the semicircular and 45° wing shapes, involving h/l and d50/h leads to more error.
WIHW-Kstar provided the highest performance in scour depth prediction around the vertical abutment shape, while the RC-Kstar model outperformed the other models for scour depth prediction around the semicircular and 45° wing shapes.
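To illustrate the hybrid-ensemble idea, the sketch below wraps a simple instance-based learner in a bagging loop. A plain k-nearest-neighbour regressor stands in for Kstar, and the data are synthetic stand-ins for the flume measurements (all names and values here are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)

def knn_predict(Xtr, ytr, Xte, k=3):
    """Instance-based base learner (a plain kNN stands in for Kstar)."""
    preds = []
    for x in Xte:
        d = np.linalg.norm(Xtr - x, axis=1)
        idx = np.argsort(d)[:k]
        preds.append(ytr[idx].mean())
    return np.array(preds)

def bagging_predict(Xtr, ytr, Xte, n_estimators=25, k=3):
    """Bagging: average base-learner predictions over bootstrap resamples."""
    n = len(Xtr)
    preds = []
    for _ in range(n_estimators):
        idx = rng.integers(0, n, size=n)   # bootstrap sample with replacement
        preds.append(knn_predict(Xtr[idx], ytr[idx], Xte, k))
    return np.mean(preds, axis=0)

# Toy stand-in data: "ds/l" as a noisy function of (Fe, d50/l, h/l)
X = rng.uniform(size=(99, 3))
y = 1.5 * X[:, 0] + 0.5 * X[:, 1] - 0.3 * X[:, 2] + rng.normal(0.0, 0.05, 99)
Xtr, ytr, Xte, yte = X[:70], y[:70], X[70:], y[70:]   # ~70/30 split as in the paper
yhat = bagging_predict(Xtr, ytr, Xte)
rmse = float(np.sqrt(np.mean((yhat - yte) ** 2)))
```

Dagging, random subspace and the other wrappers differ only in how the training data or feature set is resampled before each base learner is fit.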
The application of the concept of information entropy, together with the principle of maximum entropy, to open-channel flow is essentially based on physical considerations of the problem at hand. This paper is a discussion of the paper by Yeganeh and Heidari (2020), who proposed a new approach for measuring the vertical distribution of streamwise velocity in open channels. The discussers argue that their approach is conceptually incorrect and thus leads to a physically unrealistic situation. In addition, the discussers found some wrong mathematical expressions (which are assumed to be typos) in the paper, and also point out that the authors did not cite some of the original papers on the topic.
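For context, the entropy-based velocity profile at issue is usually written in the form proposed by Chiu (1987); the notation below follows that common convention and may differ from the discussed paper's symbols:

```latex
\frac{u}{u_{\max}} \;=\; \frac{1}{M}\,
\ln\!\left[\,1 + \left(e^{M}-1\right)\frac{\xi-\xi_{0}}{\xi_{\max}-\xi_{0}}\,\right]
```

where M is the entropy parameter and ξ is a normalised coordinate along which u increases monotonically; in this framework the mean-to-maximum velocity ratio is a function of M alone.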
The development of a hydro-mechanically coupled Coupled-Eulerian–Lagrangian (CEL) method and its application to the back-analysis of vibratory pile driving model tests in water-saturated sand is presented. The predicted pile penetration using this approach is in good agreement with the results of the model tests as well as with fully Lagrangian simulations. In terms of pore water pressure, however, the results of the CEL simulation show a slightly worse accordance with the model tests compared to the Lagrangian simulation. Some shortcomings of the hydro-mechanically coupled CEL method in the case of frictional contact problems and pore fluids with high bulk modulus are discussed. Lastly, the CEL method is applied to the simulation of vibratory driving of open-profile piles under partially drained conditions to study installation-induced changes in the soil state. It is concluded that the proposed method is capable of realistically reproducing the most important mechanisms in the soil during the driving process despite its addressed shortcomings.
Transformation of the Environment: Influence of “Urban Reagents.” German and Russian Case Studies
(2021)
Urban regeneration manifests itself through urban objects operating as change agents. The entailed diverse effects on the surroundings demonstrate an experimental origin: an experiment as a preplanned but unpredictable method. An understanding of the influences and features of urban objects requires scrutiny due to the high potential of these elements to force alterations and reactions. This dissertation explores the transformation of the milieu and the mechanisms of this transformation.
In recent years, substantial attention has been devoted to thermoelastic multifield problems and their numerical analysis. Thermoelasticity is one of the important categories of multifield problems which deals with the effect of mechanical and thermal disturbances on an elastic body. In other words, thermoelasticity encompasses the phenomena that describe the elastic and thermal behavior of solids and their interactions under thermo-mechanical loadings. Since providing an analytical solution for general coupled thermoelasticity problems is mathematically complicated, the development of alternative numerical solution techniques seems essential.
Due to the nature of numerical analysis methods, the presence of error in the results is inevitable; therefore, in any numerical simulation, the main concern is the accuracy of the approximation. There are different error estimation (EE) methods to assess the overall quality of a numerical approximation. In many real-life numerical simulations, not only the overall error, but also the local error, or the error in a particular quantity of interest, is of main interest. The error estimation techniques developed to evaluate the error in the quantity of interest are known as “goal-oriented” error estimation (GOEE) methods.
This project, for the first time, investigates the classical a posteriori error estimation and goal-oriented a posteriori error estimation in 2D/3D thermoelasticity problems. Generally, the a posteriori error estimation techniques can be categorized into two major branches of recovery-based and residual-based error estimators. In this research, application of both recovery- and residual-based error estimators in thermoelasticity are studied. Moreover, in order to reduce the error in the quantity of interest efficiently and optimally in 2D and 3D thermoelastic problems, goal-oriented adaptive mesh refinement is performed.
As the first application category, the error estimation in classical Thermoelasticity (CTE) is investigated. In the first step, a rh-adaptive thermo-mechanical formulation based on goal-oriented error estimation is proposed. The developed goal-oriented error estimation relies on different stress recovery techniques, i.e., the superconvergent patch recovery (SPR), L2-projection patch recovery (L2-PR), and weighted superconvergent patch recovery (WSPR). Moreover, a new adaptive refinement strategy (ARS) is presented that minimizes the error in a quantity of interest and refines the discretization such that the error is equally distributed in the refined mesh. The method is validated by numerous numerical examples where an analytical solution or reference solution is available.
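As a flavour of how SPR works, the minimal one-dimensional sketch below fits a least-squares polynomial to element-midpoint (superconvergent) stresses over each nodal patch; real SPR operates on 2D/3D element patches, so this is only an illustration of the principle:

```python
import numpy as np

def spr_recover(nodes, stress_mid):
    """1D sketch of Superconvergent Patch Recovery: for each node, fit a linear
    polynomial by least squares to the element-midpoint stresses of the
    surrounding patch and evaluate it at the node."""
    n = len(nodes)
    mids = 0.5 * (nodes[:-1] + nodes[1:])          # superconvergent sampling points
    recovered = np.empty(n)
    for i in range(n):
        patch = [e for e in (i - 1, i) if 0 <= e < n - 1]   # elements touching node i
        if len(patch) < 2:                          # boundary node: borrow nearest patch
            patch = [0, 1] if i == 0 else [n - 3, n - 2]
        A = np.vstack([np.ones(len(patch)), mids[patch]]).T
        coef, *_ = np.linalg.lstsq(A, stress_mid[patch], rcond=None)
        recovered[i] = coef[0] + coef[1] * nodes[i]
    return recovered

# Check on a linearly varying stress field, where SPR is exact at the nodes
nodes = np.linspace(0.0, 1.0, 6)
mids = 0.5 * (nodes[:-1] + nodes[1:])
stress_mid = 2.0 + 3.0 * mids                      # midpoint values of sigma(x) = 2 + 3x
sigma_star = spr_recover(nodes, stress_mid)
```

The recovered field sigma_star is smoother than the raw (element-wise) finite element stress, and the difference between the two serves as the recovery-based error estimate.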
After investigating error estimation in classical thermoelasticity and evaluating the quality of the presented error estimators, we extend the application of the developed goal-oriented error estimation and the associated adaptive refinement technique to classical fully coupled dynamic thermoelasticity. In this part, we present an adaptive method for coupled dynamic thermoelasticity problems based on goal-oriented error estimation. We use dimensionless variables in the finite element formulation, and for the time integration we employ the acceleration-based Newmark-β method. In this part, the SPR, L2-PR, and WSPR recovery methods are exploited to estimate the error in the quantity of interest (QoI). By using
adaptive refinement in space, the error in the quantity of interest is minimized. Therefore, the discretization is refined such that the error is equally distributed in the refined mesh. We demonstrate the efficiency of this method by numerous numerical examples.
After studying the recovery-based error estimators, we investigated residual-based error estimation in thermoelasticity. In the last part of this research, we present a 3D adaptive method for thermoelastic problems based on goal-oriented error estimation, where the error is measured with respect to a pointwise quantity of interest. We developed a method for a posteriori error estimation and mesh adaptation based on the dual weighted residual (DWR) method, which relies on duality principles and involves the solution of an adjoint problem. Here, we consider the application of the derived estimator and mesh refinement to two-/three-dimensional (2D/3D) thermo-mechanical multifield problems. In this study, the goal is considered to be given by singular pointwise functions, such as the point value or point-value derivative at a specific point of interest (PoI). An adaptive algorithm has been adopted to refine the mesh so as to minimize the error in the goal quantity of interest.
The mesh adaptivity procedure based on the DWR method is performed by adaptive local h-refinement/coarsening with allowed hanging nodes. According to the proposed DWR method, the error contribution of each element is evaluated. In the refinement process, the contribution of each element to the goal error is considered as the mesh refinement criterion.
In this study, we substantiate the accuracy and performance of this method by several numerical examples with available analytical solutions. Here, 2D and 3D problems under thermo-mechanical loadings are considered as benchmark problems. To show how accurately the derived estimator captures the exact error in the evaluation of the pointwise quantity of interest, in all examples, considering the analytical solutions, the goal error effectivity index as a standard measure of the quality of an estimator is calculated. Moreover, in order to demonstrate the efficiency of the proposed method and show the optimal behavior of the employed refinement method, the results of different conventional error estimators and refinement techniques (e.g., global uniform refinement, Kelly, and weighted Kelly techniques) are used for comparison.
The Marmara Region (NW Turkey) has experienced significant earthquakes (M > 7.0) to date. A destructive earthquake is also expected in the region. To determine the effect of the site-specific design spectrum, eleven provinces located in the region were chosen according to the Turkish Earthquake Building Code updated in 2019. Additionally, the differences between the previous and updated regulations of the country were investigated. Peak Ground Acceleration (PGA) and Peak Ground Velocity (PGV) were obtained for each province by using earthquake ground motion levels with 2%, 10%, 50%, and 68% probability of exceedance in 50-year periods. The PGA values in the region range from 0.16 to 0.7 g for earthquakes with a return period of 475 years. For each province, a sample reinforced-concrete building with two different numbers of stories and the same ground and structural characteristics was chosen. Static adaptive pushover analyses were performed for the sample reinforced-concrete building using each province's design spectrum. The variations in the earthquake and structural parameters were investigated according to the different geographical locations. It was determined that the site-specific design spectrum significantly influences target displacements for performance-based assessments of buildings due to the seismicity characteristics of the studied geographic location.
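The correspondence between the quoted probabilities of exceedance and return periods follows from the standard Poisson occurrence assumption; a quick check (10% in 50 years gives roughly a 475-year return period, 2% roughly 2475 years):

```python
import math

def return_period(p_exceed, exposure_years=50.0):
    """Mean return period T implied by a probability of exceedance p within an
    exposure time t, assuming Poisson earthquake occurrence: p = 1 - exp(-t/T)."""
    return -exposure_years / math.log(1.0 - p_exceed)

# The four hazard levels used in the study (probability of exceedance in 50 years)
periods = {p: return_period(p) for p in (0.02, 0.10, 0.50, 0.68)}
```

The 475-year value quoted in the abstract corresponds to the 10%-in-50-years hazard level.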
A Machine Learning Framework for Assessing Seismic Hazard Safety of Reinforced Concrete Buildings
(2020)
Although averting a seismic disturbance and its physical, social, and economic disruption is practically impossible, advancements in computational science and numerical modeling can equip humanity to predict its severity, understand the outcomes, and prepare for post-disaster management. Many aging buildings still in service exist amidst developed metropolitan areas. These buildings were designed before national seismic codes were established or without the introduction of construction regulations. In that case, risk reduction is significant for developing alternatives and designing suitable models to enhance the existing structures' performance. Such models will be able to classify risks and casualties related to possible earthquakes through emergency preparation. Thus, it is crucial to recognize structures that are susceptible to earthquake vibrations and need to be prioritized for retrofitting. However, the behavior of each building under seismic actions cannot be studied through detailed structural analysis, as this would be unrealistic because of the rigorous computations, long duration, and substantial expenditure involved. Therefore, a simple, reliable, and accurate process known as Rapid Visual Screening (RVS) is called for, which serves as a primary screening platform, including an optimum number of seismic parameters and predetermined performance damage conditions for structures. In this study, the damage classification technique was studied, and the efficacy of the Machine Learning (ML) approach to damage prediction via a Support Vector Machine (SVM) model was explored. The ML model is trained and tested separately on damage data from four different earthquakes, namely Ecuador, Haiti, Nepal, and South Korea. Each dataset consists of varying numbers of input data and eight performance modifiers.
Based on the study and the results, the ML model using SVM classifies the given input data into the corresponding damage classes and performs well in the hazard safety evaluation of buildings.
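A minimal sketch of the classification idea, assuming synthetic stand-in data: a linear SVM trained by sub-gradient descent on the hinge loss separates two hypothetical damage classes described by eight modifiers (the real model and survey data may differ, e.g. in kernel choice):

```python
import numpy as np

rng = np.random.default_rng(1)

def train_linear_svm(X, y, lam=0.01, lr=0.05, epochs=300):
    """Minimal linear SVM: full-batch sub-gradient descent on the regularised
    hinge loss  lam/2*||w||^2 + mean(max(0, 1 - y*(X@w + b))),  labels y in {-1,+1}."""
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    for _ in range(epochs):
        margins = y * (X @ w + b)
        active = margins < 1.0                                 # margin violators
        grad_w = lam * w - (X[active] * y[active, None]).sum(axis=0) / n
        grad_b = -y[active].sum() / n
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

# Toy stand-in data: eight "performance modifiers" per building, two damage classes
n_per = 60
X = np.vstack([rng.normal(0.0, 1.0, (n_per, 8)),
               rng.normal(2.0, 1.0, (n_per, 8))])
y = np.hstack([-np.ones(n_per), np.ones(n_per)])
w, b = train_linear_svm(X, y)
accuracy = float(np.mean(np.sign(X @ w + b) == y))
```

In practice a kernelised SVM and cross-validation over the four earthquake datasets would replace this toy setup.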
Recently, the demand for residence and usage of urban infrastructure has increased, thereby raising the risk levels to human lives from natural calamities. The occupancy demand has rapidly increased the construction rate, while inadequately designed structures are more vulnerable. Buildings constructed before the development of seismic codes have an additional susceptibility to earthquake vibrations. Structural collapse causes economic losses as well as loss of human life. Applying different theoretical methods to analyze structural behavior is expensive and time-consuming. Therefore, introducing a rapid vulnerability assessment method to check structural performance is necessary for future developments. The process, as mentioned earlier, is known as Rapid Visual Screening (RVS). This technique has been developed to identify, inventory, and screen structures that are potentially hazardous. Sometimes, poor construction quality does not provide some of the required parameters; in this case, the RVS process becomes tedious. Hence, to tackle such a situation, multiple-criteria decision-making (MCDM) methods for seismic vulnerability assessment open a new gateway. The different parameters required by RVS can be taken into MCDM. MCDM evaluates multiple conflicting criteria in decision making in several fields. This paper aims to bridge the gap between RVS and MCDM. Furthermore, to define the correlation between these techniques, the methodologies from the Indian, Turkish, and Federal Emergency Management Agency (FEMA) codes have been implemented. The effects on the seismic vulnerability of structures have been observed and compared.
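One widely used MCDM technique that could rank buildings from RVS-style parameters is TOPSIS; the sketch below is a generic implementation on hypothetical screening data, not the scoring scheme of any of the cited codes:

```python
import numpy as np

def topsis(D, w, benefit):
    """TOPSIS: rank alternatives by relative closeness to the ideal solution.
    D: (alternatives x criteria) decision matrix, w: criteria weights,
    benefit: boolean mask (True = higher is better)."""
    R = D / np.linalg.norm(D, axis=0)              # vector-normalised matrix
    V = R * w                                      # weighted normalised matrix
    ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
    anti = np.where(benefit, V.min(axis=0), V.max(axis=0))
    d_pos = np.linalg.norm(V - ideal, axis=1)
    d_neg = np.linalg.norm(V - anti, axis=1)
    return d_neg / (d_pos + d_neg)                 # closeness coefficient in [0, 1]

# Hypothetical RVS-style screening: 3 buildings x 3 criteria
# (stories, vertical irregularity score, construction quality score)
D = np.array([[8.0, 2.0, 0.6],
              [3.0, 0.0, 0.9],
              [5.0, 1.0, 0.4]])
w = np.array([0.3, 0.4, 0.3])
benefit = np.array([False, False, True])           # fewer stories/irregularity is better
scores = topsis(D, w, benefit)
ranking = np.argsort(scores)[::-1]                 # best (least vulnerable) first
```

Other MCDM families (AHP for deriving the weights, for instance) slot into the same pipeline upstream of the ranking step.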
Self-healing materials have recently become more popular due to their capability to autonomously and autogenously repair damage in cementitious materials. The concept of self-healing gives the damaged material the ability to recover its stiffness. This distinguishes it from a material that is not subject to healing: once such a material is damaged, it cannot sustain loading due to stiffness degradation. Numerical modeling of self-healing materials is still in its infancy. Numerous experimental studies in the literature describe the self-healing behavior of cementitious materials; however, few numerical investigations have been undertaken.
The thesis presents an analytical framework for self-healing and super healing materials based on continuum damage-healing mechanics. Through this framework, we aim to describe the recovery and strengthening of material stiffness and strength. A simple damage-healing law is proposed and applied to concrete. The proposed damage-healing law is based on a new time-dependent healing variable. The damage-healing model is applied to isotropic concrete at the macroscale under tensile load. Both autonomous and autogenous self-healing mechanisms are simulated under different loading conditions. These two mechanisms are denoted in the present work as coupled and uncoupled self-healing mechanisms, respectively. In the coupled case we assume that healing occurs simultaneously with damage evolution, while in the uncoupled case we assume that healing occurs when the material is deformed and then subjected to a rest period (damage is constant). In order to describe both coupled and uncoupled healing mechanisms, a one-dimensional element is subjected to different types of loading history.
In the same context, a derivation of the nonlinear self-healing theory is given, and a comparison of linear and nonlinear damage-healing models is carried out using both coupled and uncoupled self-healing mechanisms. The nonlinear healing theory includes generalized nonlinear and quadratic healing models. The healing efficiency is studied by varying the values of the healing rest period and the parameter describing the material characteristics. In addition, a theoretical formulation of different self-healing variables is presented for both isotropic and anisotropic materials. The healing variables are defined based on the recovery in elastic modulus, shear modulus, Poisson's ratio, and bulk modulus. The evolution of the healing variable calculated based on cross-section, as a function of the healing variable calculated based on elastic stiffness, is presented under both hypotheses of elastic strain equivalence and elastic energy equivalence. The components of the fourth-rank healing tensor are also obtained in the cases of isotropic elasticity, plane stress and plane strain.
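As an illustration of a time-dependent healing variable in the uncoupled setting, the sketch below assumes a saturating exponential healing law and the common effective-modulus form E_eff = E0*(1 - d*(1 - h)); both the law and the numerical values are assumptions for illustration, not the thesis's calibrated model:

```python
import numpy as np

E0 = 30e9          # undamaged Young's modulus of concrete [Pa] (illustrative)
d = 0.4            # fixed damage level during the rest period (uncoupled case)
tau = 10.0         # healing time constant [days] (assumed)

def healing_variable(t_rest):
    """Assumed saturating healing law over the rest period: h(0)=0, h(inf)=1."""
    return 1.0 - np.exp(-t_rest / tau)

def effective_modulus(t_rest):
    """Stiffness recovery E_eff = E0*(1 - d*(1 - h)): h=0 gives the damaged
    modulus E0*(1-d), h=1 gives full recovery of the undamaged stiffness."""
    h = healing_variable(t_rest)
    return E0 * (1.0 - d * (1.0 - h))

t = np.linspace(0.0, 60.0, 7)      # rest period in days
E = effective_modulus(t)           # monotonically recovering stiffness
```

In the coupled case, d would evolve with loading at the same time as h, so the two rate equations would have to be integrated together.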
Recent research has revealed that self-healing also presents a crucial solution for the strengthening of materials. This new concept has been termed "Super Healing". Once the stiffness of the material is recovered, further healing results in a strengthening of the material. In the present thesis, a new theory of super healing materials is defined for the isotropic and anisotropic cases using sound mathematical and mechanical principles, which are applied in linear and nonlinear super healing theories. Additionally, the link between the proposed theory and the theory of undamageable materials is outlined. In order to describe the super healing efficiency in the linear and nonlinear theories, the ratio of effective stress to nominal stress is calculated as a function of the super healing variable. In addition, the hypotheses of elastic strain and elastic energy equivalence are applied. In the same context, a new super healing matrix in plane strain is proposed based on continuum damage-healing mechanics.
In the present work, we also focus on numerical modeling of the impact behavior of reinforced concrete slabs using the commercial finite element package Abaqus/Explicit. Plain and reinforced concrete slabs of unconfined compressive strength 41 MPa are simulated under the impact of an ogive-nosed hard projectile. The constitutive material modeling of the concrete and steel reinforcement bars is performed using the Johnson-Holmquist-2 damage and the Johnson-Cook plasticity material models, respectively. Damage diameters and residual velocities obtained by the numerical model are compared with the experimental results, and the effects of steel reinforcement and projectile diameter are studied.
In the first part of this research, the utilization of tuned mass dampers in the vibration control of tall buildings during earthquake excitations is studied. The main issues, such as optimizing the parameters of the dampers and studying the effects of the frequency content of the target earthquakes, are addressed.
The non-dominated sorting genetic algorithm method is improved by upgrading its genetic operators, and is utilized to develop a framework for determining the optimum placement and parameters of dampers in tall buildings. A case study is presented in which the optimal placement and properties of dampers are determined for a model of a tall building under different earthquake excitations through computer simulations.
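For context, a classical closed-form baseline against which such optimised damper parameters are often compared is Den Hartog's tuning for harmonic excitation (not the genetic-algorithm result of this work):

```python
import math

def den_hartog_tmd(mu):
    """Classical Den Hartog tuning of a tuned mass damper under harmonic load:
    optimal frequency ratio and damping ratio as functions of the mass ratio
    mu = m_damper / m_structure."""
    f_opt = 1.0 / (1.0 + mu)
    zeta_opt = math.sqrt(3.0 * mu / (8.0 * (1.0 + mu) ** 3))
    return f_opt, zeta_opt

f_opt, zeta_opt = den_hartog_tmd(0.05)   # e.g. a 5% mass ratio
```

Because earthquake input is broadband and non-stationary rather than harmonic, metaheuristic optimisation over recorded ground motions, as pursued here, generally departs from these closed-form values.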
In the second part, a novel framework for the brain learning-based intelligent seismic control of smart structures is developed. In this approach, a deep neural network learns how to improve structural responses during earthquake excitations using feedback control.
The reinforcement learning method is improved and utilized to develop a framework for training the deep neural network as an intelligent controller. The efficiency of the developed framework is examined through two case studies, including a single-degree-of-freedom system and a high-rise building under different earthquake excitation records.
The results show that the controller gradually develops an optimum control policy to reduce the vibrations of a structure under earthquake excitation through a cyclical process of actions and observations.
It is shown that the controller efficiently improves the structural responses under new earthquake excitations for which it was not trained. Moreover, it is shown that the controller has a stable performance under uncertainties.
In conjunction with improved methods of monitoring damage and degradation processes, interest in the reliability assessment of reinforced concrete bridges has been increasing in recent years. Automated image-based inspections of the structural surface provide valuable data for extracting quantitative information about deteriorations, such as crack patterns. However, the knowledge gain results from processing this information in a structural context, i.e. relating the damage artifacts to building components. This way, the transfer to structural analysis is enabled. This approach sets two further requirements: the availability of structural bridge information and a standardized storage format for interoperability with subsequent analysis tools. Since the large datasets involved can only be processed efficiently in an automated manner, the implementation of the complete workflow from damage and building data to structural analysis is targeted in this work. First, domain concepts are derived from the back-end tasks: structural analysis, damage modeling, and life-cycle assessment. The common interoperability format, the Industry Foundation Classes (IFC), and the processes in these domains are further assessed. The need for user-controlled interpretation steps is identified, and the developed prototype thus allows interaction at subsequent model stages. The latter has the advantage that interpretation steps can be separated individually into either a structural analysis model or a damage information model or a combination of both. This approach to damage information processing from the perspective of structural analysis is then validated in different case studies.
Rapid Visual Screening (RVS) is a procedure that estimates structural scores for buildings and prioritizes their retrofit and upgrade requirements. Despite the speed and simplicity of RVS, many of the collected parameters are non-commensurable and include subjectivity due to visual observations. This might cause uncertainties in the evaluation, which emphasizes the use of a fuzzy-based method. This study aims to propose a novel RVS methodology based on the interval type-2 fuzzy logic system (IT2FLS) to set the priority of vulnerable buildings to undergo detailed assessment while covering uncertainties and minimizing their effects during evaluation. The proposed method estimates the vulnerability of a building in terms of a Damage Index, considering the number of stories, age of the building, plan irregularity, vertical irregularity, building quality, and peak ground velocity as inputs, with a single output variable. The applicability of the proposed method has been investigated using a post-earthquake damage database of reinforced concrete buildings from the Bingöl and Düzce earthquakes in Turkey.
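A minimal sketch of the interval type-2 idea: a type-1 triangular membership function is blurred into upper and lower bounds forming a footprint of uncertainty, and a simple Nie-Tan-style type reduction (averaging the bounds before defuzzification) stands in for the full Karnik-Mendel procedure; all membership parameters below are hypothetical:

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with feet a, c and peak b."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def it2_membership(x, a, b, c, blur=0.1):
    """Interval type-2 set built by blurring a type-1 triangle: the upper and
    lower membership functions bound the footprint of uncertainty (FOU)."""
    upper = tri(x, a - blur, b, c + blur)
    lower = (1.0 - blur) * tri(x, a + blur, b, c - blur)
    return lower, upper

# Illustrative "high PGV" antecedent on a normalised [0, 1] universe
x = np.linspace(0.0, 1.0, 101)
lo, up = it2_membership(x, 0.4, 0.7, 1.0)

# Nie-Tan-style type reduction: defuzzify the average of the two bounds
mf = 0.5 * (lo + up)
damage_index = float(np.sum(x * mf) / np.sum(mf))   # centroid defuzzification
```

In a full IT2FLS, each of the six inputs would carry such an FOU, a rule base would combine them, and the Karnik-Mendel iteration would compute the centroid interval exactly.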
Synergistic Framework for Analysis and Model Assessment in Bridge Aerodynamics and Aeroelasticity
(2020)
Wind-induced vibrations often represent a major design criterion for long-span bridges. This work deals with the assessment and development of models for aerodynamic and aeroelastic analyses of long-span bridges.
Computational Fluid Dynamics (CFD) and semi-analytical aerodynamic models are employed to compute the bridge response due to both turbulent and laminar free-stream. For the assessment of these models, a comparative methodology is developed that consists of two steps, a qualitative and a quantitative one. The first, qualitative, step involves an extension
of an existing approach based on Category Theory and its application to the field of bridge aerodynamics. Initially, the approach is extended to consider model comparability and completeness. Then, the complexities of the CFD model and twelve semi-analytical models are evaluated based on their mathematical constructions, yielding a diagrammatic representation of model quality.
In the second, quantitative, step of the comparative methodology, the discrepancy of a system response quantity for time-dependent aerodynamic models is quantified using comparison metrics for time-histories. Nine metrics are established on a uniform basis to quantify the discrepancies in local and global signal features that are of interest in bridge aerodynamics. These signal features involve quantities such as phase, time-varying frequency and magnitude content, probability density, non-stationarity, and nonlinearity.
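As a simplified illustration of such time-history metrics (not the nine metrics established in this work), the sketch below computes a normalised RMS discrepancy, a relative peak error, and a phase lag from the cross-correlation maximum for two signals:

```python
import numpy as np

def comparison_metrics(ref, sig, dt):
    """Three simple discrepancy metrics for two time histories: normalised RMS
    error, relative peak error, and the phase lag at the cross-correlation peak."""
    nrmse = np.sqrt(np.mean((sig - ref) ** 2)) / (ref.max() - ref.min())
    peak_err = (np.abs(sig).max() - np.abs(ref).max()) / np.abs(ref).max()
    xcorr = np.correlate(sig - sig.mean(), ref - ref.mean(), mode="full")
    lag = (np.argmax(xcorr) - (len(ref) - 1)) * dt   # positive: sig lags ref
    return nrmse, peak_err, lag

# Two unit sinusoids; sig lags ref by 0.1 s
dt = 0.01
t = np.arange(0.0, 10.0, dt)
ref = np.sin(2 * np.pi * 1.0 * t)
sig = np.sin(2 * np.pi * 1.0 * (t - 0.1))
nrmse, peak_err, lag = comparison_metrics(ref, sig, dt)
```

Metrics of this kind deliberately separate magnitude and phase discrepancies, since a pure time shift can produce a large pointwise error even when the signal features agree.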
The two-dimensional (2D) Vortex Particle Method (VPM) is used for the discretization of the Navier-Stokes equations, including a Pseudo-three-dimensional (Pseudo-3D) extension within an existing CFD solver. The Pseudo-3D Vortex Method considers the 3D structural behavior for aeroelastic analyses by positioning 2D fluid strips along a line-like structure. A novel turbulent Pseudo-3D Vortex Method is developed by combining the laminar Pseudo-3D VPM and a previously developed 2D method for the generation of free-stream turbulence. Using analytical derivations, it is shown that the fluid velocity correlation is maintained between the CFD strips.
Furthermore, a new method is presented for the determination of the complex aerodynamic admittance under deterministic sinusoidal gusts using the Vortex Particle Method. The sinusoidal gusts are simulated by modeling the wakes of flapping airfoils in the CFD domain with inflow vortex particles. Positioning a section downstream yields sinusoidal forces that are used for determining all six components of the complex aerodynamic admittance. A closed-form analytical relation is derived, based on an existing analytical model, with which the inflow particles' strength can be related to the target gust amplitudes a priori.
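For reference, the flat-plate benchmark against which identified admittances are often compared is the Sears function; Liepmann's closed-form approximation of its squared magnitude is easy to evaluate (bridge decks generally deviate from this thin-airfoil result, which is one motivation for the numerical identification described above):

```python
import numpy as np

def sears_liepmann(k):
    """Liepmann's approximation of the squared magnitude of the Sears
    aerodynamic admittance for a thin airfoil in a sinusoidal vertical gust,
    with reduced frequency k = omega*b/U:  |chi(k)|^2 ~= 1 / (1 + 2*pi*k)."""
    return 1.0 / (1.0 + 2.0 * np.pi * k)

k = np.array([0.0, 0.1, 1.0])
adm2 = sears_liepmann(k)   # unity at k = 0, decaying with frequency
```

The decay with reduced frequency expresses the loss of gust effectiveness as the gust wavelength becomes short relative to the chord.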
The developed methodologies are combined in a synergistic framework, which is applied to both fundamental examples and practical case studies. Where possible, the results are verified and validated. The outcome of this work is intended to shed some light on the complex wind–bridge interaction and suggest appropriate modeling strategies for an enhanced design.
The assessment of wind-induced vibrations is considered vital for the design of long-span bridges. The aim of this research is to develop a methodological framework for robust and efficient prediction strategies for complex aerodynamic phenomena using hybrid models that employ numerical analyses as well as meta-models. Here, an approach to predict motion-induced aerodynamic forces is developed using an artificial neural network (ANN). The ANN is implemented in the classical formulation and trained with a comprehensive dataset obtained from computational fluid dynamics forced-vibration simulations. The input to the ANN is the response time histories of a bridge section, whereas the output is the motion-induced forces. The developed ANN has been tested on training and test data for different cross-section geometries, providing promising predictions. The prediction is also performed for an ambient response input with multiple frequencies. Moreover, the trained ANN for the aerodynamic forcing is coupled with the structural model to perform fully coupled fluid-structure interaction analyses to determine the aeroelastic instability limit. The sensitivity of the model prediction quality and efficiency to the ANN parameters is also highlighted. The proposed methodology has wide application in the analysis and design of long-span bridges.
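The sketch below illustrates the surrogate idea on a toy problem: a one-hidden-layer network, trained by plain gradient descent, learns a hypothetical mapping from a short response window to a "force"; the real training data come from CFD forced-vibration simulations, and the actual network architecture and features may differ:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy surrogate task: predict a "motion-induced force" from a window of
# response samples (hypothetical linear relation plus noise)
N, window = 400, 6
X = rng.normal(size=(N, window))
true_w = rng.normal(size=window)
y = (X @ true_w + 0.01 * rng.normal(size=N)).reshape(-1, 1)

# One-hidden-layer network trained by full-batch gradient descent on MSE
H = 16
W1 = rng.normal(scale=0.5, size=(window, H)); b1 = np.zeros(H)
W2 = rng.normal(scale=0.5, size=(H, 1));      b2 = np.zeros(1)
lr = 0.01
losses = []
for _ in range(500):
    h = np.tanh(X @ W1 + b1)               # forward pass
    yhat = h @ W2 + b2
    err = yhat - y
    losses.append(float(np.mean(err ** 2)))
    g_yhat = 2.0 * err / N                 # backward pass (MSE gradients)
    g_W2 = h.T @ g_yhat; g_b2 = g_yhat.sum(0)
    g_z = (g_yhat @ W2.T) * (1.0 - h ** 2) # tanh derivative
    g_W1 = X.T @ g_z; g_b1 = g_z.sum(0)
    W1 -= lr * g_W1; b1 -= lr * g_b1
    W2 -= lr * g_W2; b2 -= lr * g_b2
```

Once trained, such a surrogate can be evaluated inside a time-stepping structural solver at a small fraction of the cost of the CFD forcing it replaces, which is what enables the coupled aeroelastic analysis.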
Long-span bridges are prone to wind-induced vibrations. Therefore, a reliable representation of the aerodynamic forces acting on a bridge deck is of major significance for the design of such structures. This paper presents a systematic study of the two-dimensional (2D) fluid-structure interaction of a bridge deck under smooth and turbulent wind conditions. Aerodynamic forces are modeled by two approaches: a computational fluid dynamics (CFD) model and six semi-analytical models. The vortex particle method is utilized for the CFD model, and the free-stream turbulence is introduced by seeding vortex particles upstream of the deck with prescribed spectral characteristics. The employed semi-analytical models are based on the quasi-steady and linear unsteady assumptions and on aerodynamic coefficients obtained from CFD analyses.
The underlying assumptions of the semi-analytical aerodynamic models are used to interpret the results of buffeting forces and aeroelastic response due to free-stream turbulence in comparison with the CFD model. Extensive discussions are provided to analyze the effects of linear fluid memory and quasi-steady nonlinearity from a CFD perspective. The outcome of the analyses indicates that fluid memory is a governing effect in the buffeting forces and aeroelastic response, while the effect of the nonlinearity is overestimated by the quasi-steady models. Finally, flutter analyses are performed and the obtained critical velocities are compared with wind tunnel results, followed by a brief examination of the post-flutter behavior. The results of this study provide a deeper understanding of the extent to which the applied models are able to replicate the physical processes of fluid-structure interaction phenomena in bridge aerodynamics and aeroelasticity.
The accurate representation of aerodynamic forces is essential for a safe, yet reasonable design of long-span bridges subjected to wind effects. In this paper, a novel extension of the Pseudo-three-dimensional Vortex Particle Method (Pseudo-3D VPM) is presented for Computational Fluid Dynamics (CFD) buffeting analysis of line-like structures. This extension entails the introduction of free-stream turbulent fluctuations based on velocity-based turbulence generation. The aerodynamic response of a long-span bridge is obtained by subjecting the 3D dynamic representation of the structure to correlated free-stream turbulence in two-dimensional (2D) fluid planes, which are positioned along the bridge deck. The span-wise correlation of the free-stream turbulence between the 2D fluid planes is established based on Taylor's hypothesis of frozen turbulence. Moreover, the application of the laminar Pseudo-3D VPM is extended to a multimode flutter analysis. Finally, the structural response from the Pseudo-3D flutter and buffeting analyses is verified against the response computed using the semi-analytical linear unsteady model in the time domain. Notable merits of the turbulent Pseudo-3D VPM with respect to the linear unsteady model are that it accounts for 2D aerodynamic nonlinearity, nonlinear fluid memory, vortex shedding, and local non-stationary turbulence effects in the aerodynamic forces. The good agreement of the responses of the two models in the 3D analyses demonstrates the applicability of the Pseudo-3D VPM for aeroelastic analyses of line-like structures under turbulent and laminar free-stream conditions.
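Taylor's hypothesis of frozen turbulence, used above to correlate the 2D fluid planes, states that a turbulent fluctuation advects unchanged at the mean wind speed, so a station a distance dx downstream sees the upstream record delayed by dx/U. A toy demonstration (synthetic signal, not the Pseudo-3D VPM implementation):

```python
import numpy as np

# Sketch of Taylor's frozen-turbulence hypothesis: the fluctuation at a
# point a distance dx downstream equals the upstream fluctuation delayed
# by dx/U. All values are synthetic and purely illustrative.

rng = np.random.default_rng(1)
U, dx, dt = 10.0, 5.0, 0.01          # mean wind speed, separation, step
shift = int(round(dx / U / dt))      # advection delay in samples: 50

# Band-limited random "turbulence" at the upstream station.
u_up = np.convolve(rng.standard_normal(6000), np.ones(20) / 20, mode="same")

# Frozen turbulence: downstream record is the delayed upstream record.
u_dn = np.roll(u_up, shift)

# The cross-correlation of the two records peaks at the delay dx/U.
lags = np.arange(-100, 101)
xcorr = [np.dot(u_up[200:-200], np.roll(u_dn, -l)[200:-200]) for l in lags]
best = lags[int(np.argmax(xcorr))]
print(best * dt, dx / U)             # both 0.5 s
```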
Identifying cable forces with vibration-based methods has become widespread in engineering practice due to its simplicity of application. Taut string theory provides a simple relationship between the natural frequencies and the tension force of a cable. However, this theory assumes a perfectly flexible, non-sagging cable pinned at its ends. These assumptions do not hold in all cases, especially when the cable is short, under low tension, or partially restrained at its supports. Extradosed bridges, which are distinguished from cable-stayed bridges by their low pylon height, have shorter cables. The conventional taut string theory might therefore be inadequate for identifying the forces in extradosed bridge cables.
In this work, an extradosed bridge cable saddled on a circular deviator at the pylon is modelled numerically. The model is validated against the analytical catenary solution, and its static and dynamic behaviour is studied. A saddle support is found to increase the cable stiffness by geometric means; a longer saddle radius increases the cable stiffness by suppressing the deformations near the saddle. Furthermore, accounting for bending stiffness in the numerical model by using beam elements shows considerable deviation from models with truss elements (i.e. zero bending stiffness). This deviation is manifested when comparing the static and dynamic properties, and it motivates a more thorough study of bending stiffness effects on short cables.
Bending stiffness effects are studied using two rods connected by several springs along their length. Under bending moments, the transverse components of the spring forces resist the rods' relative axial displacement. This concept is used to identify bending stiffness values by utilizing the parallel axis theorem to quantify ratios of the second moment of area. These ratios are calculated from the setup of the springs (e.g. number of springs per unit length, transverse stiffness, etc.). The numerical model based on this concept agrees well with the theoretical values computed using the upper and lower bounds of the parallel axis theorem.
The proposed concept of quantifying ratios of the second moment of area, using springs as the connection between cable rods, is applied to an actual extradosed bridge geometry. The model is examined by comparison with the previously validated global numerical model. The two models show good agreement under variation of several parameters. This allows a further study of the effects of stick/slip behaviour between cable rods on an actual bridge geometry.
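The upper and lower bounds mentioned above follow directly from the parallel axis theorem: with no shear connection the rods bend individually (sum of their own second moments of area), while a rigid connection adds the Steiner terms A·d². A minimal sketch for two touching circular rods, with hypothetical dimensions:

```python
import math

# Parallel-axis-theorem bounds on the second moment of area of two
# parallel, touching circular rods, bracketing the bending stiffness
# identified from the spring model above. Dimensions are hypothetical.

d = 0.016                       # rod diameter [m]
A = math.pi * d**2 / 4          # cross-section area of one rod
I_own = math.pi * d**4 / 64     # second moment of one rod about its axis
e = d / 2                       # centroid offset from the pair's axis

# Lower bound: no interaction, rods slide freely (zero shear transfer).
I_min = 2 * I_own
# Upper bound: rigid connection, full Steiner (parallel axis) terms.
I_max = 2 * (I_own + A * e**2)

print(I_min, I_max, I_max / I_min)   # rigid coupling is 5x stiffer here
```

The spring stiffness and spacing place the identified second moment of area somewhere between these two bounds, which is what the spring model quantifies.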
Renewable energy use is on the rise, and these alternative energy resources can help combat climate change. Around 80% of the world's electricity comes from coal and petroleum; however, renewables are the fastest-growing source of energy in the world. Solar, wind, hydro, geothermal and biogas are the most common forms of renewable energy. Among them, wind energy is emerging as a reliable, large-scale source of power production. Recent research and growing confidence in its performance have led to the construction of more and bigger wind turbines around the world. As wind turbines get bigger, concerns regarding their safety are also under discussion. Wind turbines are expensive machinery to construct, and the enormous capital investment is one of the main reasons why many countries are unable to adopt wind energy. Generally, a reliable wind turbine performs better and helps minimize the cost of operation. If a wind turbine fails, the investment is lost and the surrounding habitat can be harmed. This thesis aims at estimating the reliability of an offshore wind turbine. A model of a jacket-type offshore wind turbine is prepared using the finite element software package ABAQUS and compared against the structural failure criteria of the wind turbine tower. UQLab, a general uncertainty quantification framework developed at ETH Zürich, is used for the reliability analysis. Several probabilistic methods are included in the UQLab framework, including Monte Carlo simulation, First Order Reliability Method, and Adaptive Kriging Monte Carlo Simulation. This reliability study is performed only for the structural failure of the wind turbine, but it can be extended to many other failure modes, e.g. reliability of power production or reliability of different component failures.
It is a useful tool for estimating the reliability of future wind turbines, and could lead to safer and better-performing designs.
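The simplest of the probabilistic methods named above, plain Monte Carlo simulation, can be sketched in a few lines: the failure probability is the fraction of random samples for which a limit state g(X) = R − S falls below zero. The distributions below are illustrative stand-ins, not the thesis turbine model.

```python
import numpy as np

# Bare-bones Monte Carlo reliability estimate of the kind a framework
# like UQLab automates: P_f = P[g(X) < 0] for a hypothetical
# resistance-minus-load limit state with illustrative distributions.

rng = np.random.default_rng(42)
n = 1_000_000

R = rng.normal(loc=620.0, scale=50.0, size=n)   # resistance [MPa]
S = rng.normal(loc=450.0, scale=40.0, size=n)   # load effect [MPa]

g = R - S                      # limit state: failure when g < 0
pf = np.mean(g < 0.0)          # Monte Carlo failure probability

# Exact check for this Gaussian case: g ~ N(170, sqrt(50^2 + 40^2)),
# so the reliability index is beta = 170 / 64.03.
beta = 170.0 / np.hypot(50.0, 40.0)
print(pf, beta)                # pf ~ 0.004, beta ~ 2.65
```

FORM and Adaptive Kriging Monte Carlo reduce the sample count needed for such small failure probabilities, which is why they are attractive when each sample is an expensive finite element run.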
In practice, damaging volume expansions were observed in a commercially available calcium sulfate flowing screed. These result from the interaction of the binder compound used with a critical aggregate.
The aim of this work is to formulate a calcium sulfate binder system capable of preventing the volume expansions observed in the mortar. Various binder and additive compositions are investigated which, in combination with the critical aggregate, allow the production of a volume-stable flowing screed. The following question is to be answered: What causes the volume increase, and how can it be minimized or prevented?
For this purpose, different binder formulations of α-hemihydrate, thermal anhydrite and natural anhydrite, as well as various additive compositions, are produced and investigated.
Length-change measurements in a shrinkage channel are used to investigate the influences of the binders, the additive compositions and the water/binder ratios on the length-change behavior. By varying the individual compound constituents, it is found that the stabilizer negatively affects the length change. It binds free water, which is then no longer available for a reaction between binder and aggregate in the plastic state. This reaction can consequently only take place in the hardened state, where it causes the damaging volume expansion.
Finally, a binder compound was formulated which, without added stabilizers, remains volume-stable in combination with the critical aggregate and causes no damage.
Structural optimization has gained considerable attention in the design of engineering structures, especially in the preliminary design phase.
This study introduces an unconventional approach to structural optimization by utilizing the Energy method with Integral Material Behavior (EIM), based on Lagrange's principle of minimum potential energy. An automated two-level optimization search process is proposed, which integrates the EIM, as an alternative method for nonlinear structural analysis, with bilevel optimization. The proposed procedure enforces equilibrium by minimizing the potential energy on one level and, on a higher level, minimizes a design objective function. For this, the most robust strategy of bilevel optimization, the nested method, is used. The function of the potential energy and its instabilities in physically nonlinear analysis are investigated through principal examples, by which the advantages and limitations of the method are reviewed. Furthermore, suitable optimization algorithms are discussed.
A fully functional numerical code is developed for nonlinear cross-section, element and 2D frame analysis, utilizing different finite elements, and is verified against existing EIM programs. As a proof of concept, the method is applied with this code to selected examples on the cross-section and element level. For the former, a comparison is made with the standard procedure of imposing the equilibrium equations as constraints. The element level is validated against a theoretical solution of an arch bridge, and finally a truss bridge is optimized. Most of the principal examples are chosen to be representative of everyday engineering practice, to demonstrate the effectiveness of the proposed method.
This study implies that, with further development, this method could become just as competitive as conventional structural optimization techniques using the Finite Element Method.
Components of structural glazing have to meet different requirements and resist various impacts, depending on the field of application. Within an international research project of the EU innovation program Horizon 2020, special glass panes with a fluid circulating in capillaries are being developed to exploit solar energy. Major influences on this glazing are UV irradiation and the fluid contact, affecting the mechanical and optical durability of the bonding material within the glass setup. With regard to visual requirements, acrylate adhesives and EVA films are analyzed as possible bonding materials by destructive and non-destructive testing methods. Two types of specimens are presented for obtaining the mechanical behavior and the surface appearance of the bonding material.
Vibration control of tall buildings during earthquake excitation is a challenging task due to their complex seismic behavior. This paper investigates the optimum placement and properties of Tuned Mass Dampers (TMDs) in tall buildings, which are employed to control vibrations during earthquakes. An algorithm was developed to spend a limited mass either in a single TMD or in multiple TMDs and distribute them optimally over the height of the building. The Non-dominated Sorting Genetic Algorithm (NSGA-II) was extended with multi-variant genetic operators and utilized to study the optimum design parameters and the optimum placement of the TMDs simultaneously. The results showed that under earthquake excitations with noticeable amplitude in higher modes, distributing TMDs over the height of the building mitigates the vibrations more effectively than a single TMD system. The optimization revealed that the TMD locations correspond to the stories with maximum modal displacements in the lower modes and in the modes highly activated by the earthquake excitation. It was also noted that the frequency content of the earthquake has a significant influence on the optimum location of the TMDs.
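As a baseline for such optimized TMD parameters, the classical Den Hartog formulas give the optimal frequency ratio and damping of a single TMD on an undamped main system under harmonic excitation. This is the textbook result, not the NSGA-II outcome of the paper, and the mass ratio below is an illustrative assumption.

```python
import math

# Classical Den Hartog tuning of a single TMD (harmonic excitation,
# undamped main system) - a textbook baseline against which optimized
# multi-TMD layouts are often compared. Mass ratio is illustrative.

def den_hartog(mu):
    """Optimal frequency ratio and TMD damping ratio for mass ratio mu."""
    f_opt = 1.0 / (1.0 + mu)                          # f_TMD / f_structure
    zeta_opt = math.sqrt(3.0 * mu / (8.0 * (1.0 + mu) ** 3))
    return f_opt, zeta_opt

mu = 0.02                     # TMD mass as a fraction of modal mass
f_opt, zeta_opt = den_hartog(mu)
print(f_opt, zeta_opt)        # ~0.9804 and ~0.0841
```

Under seismic (broadband, multi-mode) excitation these closed-form optima no longer apply directly, which is the motivation for the numerical multi-objective optimization described in the abstract.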
The zone method according to Hertz is a simplified procedure for the fire design of reinforced concrete members. To allow design by hand, various assumptions and simplifications are made. In particular, thermal strains are neglected and the mechanical behavior is described by a reduced cross section with constant material properties.
The aim of the present work is to develop this simplified procedure into a nonlinear method for the fire design of reinforced concrete compression members exposed to the standard temperature-time curve. To this end, the essential assumptions of the zone method are examined and a proposal for its further development is presented, based essentially on the modeling of the reinforcement under compression. This extended zone method is validated by recalculating laboratory tests, and the safety level is determined by a fully probabilistic analysis and by comparison with the general calculation method.
The capitalization of the ‘certified’ sustainable building sector is investigated through the power theory of value of Jonathan Nitzan and Shimshon Bichler. The study begins by questioning why environmental problems rank among the first items on the agenda and by presenting the views of scholars who approach the subject skeptically, since the predominant literature underlining the necessity and prominence of the topic is already well known and widely accepted. Based on the theory developed by Nitzan and Bichler, the concepts of capitalization, strategic sabotage, power, legitimacy, and obedience are discussed. The hypothesis that “the absentee owners of the construction sector, holding the whip hand and capitalizing the ecology, control the growth and the creativity of green building production and make it carbon-dependent, in order to increase their profit margin” is examined. To strengthen the arguments of the hypothesis, the factors, institutional arrangements, and value measurement methods that directly affect net present value are investigated in detail at both the corporation and the building scale, because net present value, i.e. capitalization, is asserted by Nitzan and Bichler to be the most important criterion for investment decisions in the capitalist economic system. To trace the implications of power and of the strategic sabotage it causes, the empirical part of this dissertation develops an interface exploring the correlations between climate-responsive architecture and the ever-changing political, economic, and social contexts and building-economics praxis, decade by decade, and conducts expert interviews with design teams and appraisers.
Occupant needs with regard to residential buildings are not well known due to a lack of representative scientific studies. To address this lack of data, a large-scale study was carried out using a Post Occupancy Evaluation of 1,416 building occupants. Several criteria describing the needs of occupants were evaluated with regard to their subjective level of relevance. Additionally, we investigated the degree to which deficiencies subjectively exist, and the degree to which occupants were able to accept them. From the data obtained, a hierarchy of criteria was created. It was found that building occupants ranked the physiological needs of air quality and thermal comfort the highest. Health hazards such as mould and contaminated building materials were unacceptable to occupants, while other deficiencies were more likely to be tolerated. Occupant satisfaction was also investigated. We found that most occupants can be classified as satisfied, although some differences do exist between different populations. To explain the relationship between the constructs of what we call relevance, acceptance, deficiency and satisfaction, we then created an explanatory model. Using correlation and regression analysis, the validity of the model was confirmed with the collected data. The results of the study are relevant both in shaping further research and in providing guidance on how to maximize tenant satisfaction in real estate management.
As part of an international research project funded by the European Union, capillary glasses for facades are being developed that exploit stored energy by means of fluids flowing through the capillaries. To meet the highest visual demands, acrylate adhesives and EVA films are tested as possible bonding materials for the glass setup. In particular, non-destructive methods (visual analysis, analysis of birefringent properties and computed tomographic data) are applied to evaluate failure patterns as well as the long-term behavior under climatic influences. The experimental investigations are presented after different loading periods, providing information on failure development. In addition, detailed information and scientific findings on the application of computed tomographic analyses are presented.
The strength development of cement concrete is based on the chemical reaction of the cement with the mixing water. Curing measures must ensure that sufficient water is available for the cement to react; otherwise, a concrete of inferior quality results. The present work addresses the fundamental questions of concrete curing for pavement concretes. In particular, the required curing effort for composite cements containing ground granulated blast-furnace slag is considered. The effectiveness of curing is evaluated on the basis of the achieved freeze-thaw and de-icing salt resistance and the microstructure formation in the immediate concrete surface zone. The investigations focused on screeded concrete surfaces. A model for the drying of young concrete was developed. It could be shown that during early drying (capillary phase) no critical drying of the concrete surface zone occurs; rather, the concrete dries almost uniformly over its height. A nomogram was developed with which the duration of the capillary phase for pavement concretes can be estimated as a function of the weather. Critical drying of the important surface zone begins after the end of the capillary phase. For concretes made with cements of slow strength development, the drying of the surface zone after the end of the capillary phase is particularly pronounced. As a result, these concretes then show low freeze-thaw resistance. With cements having a 2-day compressive strength ≥ 23.0 N/mm², a high freeze-thaw resistance was achieved regardless of the cement type (CEM I or CEM II/B-S), even when no curing or poorer curing was applied. For practice, this provides a simple way of preselecting suitable cements for pavement construction.
Concretes produced with cements of slower strength development achieve high freeze-thaw resistance only with suitable curing. Liquid curing compounds (NBM according to TL NBM-StB) achieve an effectiveness similar to 5 days of moist curing. A prerequisite for the effectiveness of the curing compounds is that they are sprayed onto a concrete surface without a visible moisture film (moist sheen). Observing the correct time of application is particularly important in cool weather, since the slowed cement reaction causes the concrete to expel mixing water for longer. Applying the curing compound too early degrades the quality of the concrete surface zone. By providing hydration-dependent transport parameters (moisture transport in concrete), numerical calculations on the interaction between drying, curing and microstructure development could be carried out. Parameter studies were performed with the developed calculation model. The calculations confirm the essential findings of the laboratory investigations. Furthermore, the model shows that, particularly with slowly reacting cements and in cool weather, a very thin surface zone (approx. 500 µm – 1000 µm) with greatly increased capillary porosity develops if no curing is applied.
The aim of my research is to observe the variation in energy efficiency of a typical multi-story office building under exposure to different climatic conditions. Energy efficiency requirements in building codes or energy standards are among the most important single measures for buildings' energy efficiency. Therefore, this study is set up for a better understanding of how the energy efficiency of a building changes under adverse to moderate climatic conditions, which have a notable effect on the operation of a building.
This thesis is structured in three balanced, conceptual steps. Following the aim of the project, the virtual building model is analyzed under the effect of seven distinct climatic conditions, namely the environments of New Delhi, Mumbai, Berlin, Lisbon, Copenhagen, Dubai and Montreal. The first task is a complete literature review based on the scope of similar research, studying the problems in detail along with the theoretical background of all the concepts implemented to obtain the numerical results. This chapter also comprises a detailed study of the climatic conditions of the above-mentioned cities. Climatic traits such as temperature variation, the number of heating and cooling degree days, relative humidity, temperature range and comfort-zone charts for the specified cities are studied in detail. This study helps in understanding the effect of these adverse to moderate climates on the operation of the building. In the second step, the virtual building model is prepared on the software platform Revit Structures. This virtual building model is not necessarily a complete building, but it has the relevant functionality of a real building. We perform the energy analysis and the heating and cooling analysis on this virtual building model to study the operational outcome of the building under different climatic conditions in detail. At the end of these two tasks, two results are available: on the one hand the literature review, and on the other hand the numerical results. Finally, we present a comparative scenario based on the energy performance of the building under these varying climatic conditions. This is followed by the prediction of the thermal comfort level inside the building, based on Fanger's PMV model. Understanding the literature and the numerical values in detail helps us to predict the thermal comfort level inside the building.
The conclusion of this master's thesis focuses mainly on the scope for improving energy efficiency requirements in energy codes, differentiated according to specific locations. The initial aim of the hypothesis, to study the impact of climatic variation on the energy performance of a building, is fulfilled; but as such topics have very deep and broad roots, there remains considerable scope for further work.
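The heating and cooling degree days used above to characterize the seven climates are simple sums of daily temperature deficits and surpluses relative to a base temperature. A minimal sketch, with an assumed base temperature and an invented week of daily means (not the thesis data):

```python
# Heating and cooling degree days, the climate indicators compared
# across the cities above. The base temperature and the daily mean
# series are illustrative assumptions, not the thesis data.

BASE = 18.0   # base temperature [deg C]; codes differ by country

def degree_days(daily_means, base=BASE):
    """Sum daily deficits (HDD) and surpluses (CDD) relative to base."""
    hdd = sum(max(0.0, base - t) for t in daily_means)
    cdd = sum(max(0.0, t - base) for t in daily_means)
    return hdd, cdd

# One hypothetical week of daily mean temperatures [deg C].
week = [2.0, 4.5, 18.0, 21.0, 25.5, 10.0, 17.0]
hdd, cdd = degree_days(week)
print(hdd, cdd)   # 38.5 heating, 10.5 cooling degree days
```

Annual HDD/CDD totals computed this way are what make climates as different as Montreal and Dubai directly comparable in terms of heating and cooling demand.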
Global society faces a huge challenge in implementing the human right of “access to sanitation”. It is increasingly accepted that the conventional approach to providing sanitation services is not suitable for solving this problem. This dissertation examines the possibility of enhancing “access to sanitation” for people living in areas with underdeveloped water and wastewater infrastructure systems. The idea is to follow an integrated approach to sanitation, which allows existing infrastructure to be complemented by resource-based sanitation systems.
The notion of an “integrated sanitation system (iSaS)” is defined in this work, and guiding principles for iSaS are formulated. Furthermore, the implementation of an iSaS is assessed using a case study in the city of Darkhan in Mongolia. More than half of Mongolia's population lives in settlements where yurts (the tents of nomadic people) are predominant. In these settlements (or “ger areas”), sanitation systems are non-existent and the hygienic situation is precarious.
An iSaS has been developed for the ger areas in Darkhan and tested over more than two years. Furthermore, a software-based model has been developed with the goal of describing and assessing different variations of the iSaS. The results of the assessment of material flows, monetary flows and communication flows within the iSaS are presented in this dissertation. The iSaS model is adaptable and transferable to the socio-economic conditions of other regions and climate zones.
Lara Schrijver is an assistant professor at the Faculty of Architecture of the TU Delft. She is one of three program leaders for a new research program in the department of architecture, ‘The Architectural Project and its Foundations’. Schrijver holds degrees in architecture from Princeton University and the TU Delft. She received her Ph.D. from the TU Eindhoven in 2005. Schrijver has taught design and theory courses, and contributed to conferences in the Netherlands as well as abroad. She was an editor for OASE, journal for architecture, for ten years, and was co-organizer of the 2006 conference ‘The Projective Landscape’. Her current work revolves around the role of architecture in the city, and its responsibility in defining the public domain. Her first book, Radical Games, on the influence of the 1960s on contemporary discourse, is forthcoming in the spring of 2009.
This thesis investigates the computer-aided simulation process for operational vibration analysis of complex coupled systems. As part of the internal methods project “Absolute Values” of the BMW Group, it deals with the analysis of structural dynamic interactions and excitation interactions. The overarching aim of the methods project is to predict the operational vibrations of engines.
In industrial development, simulations are usually used to analyze technical aspects (e.g. operational vibrations, strength, ...) of single components. The boundary conditions of submodels are mostly based on experience, so the interactions with neighboring components and systems are neglected. To obtain physically more realistic results while keeping simulations efficient, this work supports the engineer during the preprocessing phase with useful criteria.
First, suitable abstraction levels based on the existing literature are defined to identify structural dynamic interactions and excitation interactions of coupled systems. This makes it possible to separate different effects of the coupled subsystems. On this basis, criteria are derived to assess the influence of interactions between the considered systems. These criteria can be used during the preprocessing phase and help the engineer build efficient models with respect to the interactions with neighboring systems. The method was developed using several models of different complexity levels. Furthermore, its suitability for industrial application is demonstrated using the example of a current combustion engine.
Ein aktuelles Thema in der Forschung der Betonindustrie ist die gezielte Steuerung des Erstarrens und der Entwicklung der (Früh)Festigkeit von Betonen und Mörteln. Aus ökonomischer Sicht sind außerdem die Reduktion der CO2-Emission und die Schonung von Ressourcen und Energie wichtige Forschungsschwerpunkte. Eine Möglichkeit zum Erreichen dieser Ziele ist es, die Reaktivität/Hydratation der silikatischen Klinkerphasen gezielt anzuregen. Neben den bereits bekannten Möglichkeiten der Hydratationsbeschleunigung (u.a. Wärmebehandlung, Zugabe von Salzen) bietet die Anwendung von Power-Ultraschall (PUS) eine weitere Alternative zur Beschleunigung der Zementhydratation. Da bis zum jetzigen Zeitpunkt noch keine Erfahrungen zum Einsatz von PUS in der Zementchemie vorliegen, sollen mit der vorliegenden Arbeit grundlegende Kenntnisse zum Einfluss von PUS auf das Fließ- und Erstarrungsverhalten von Zementsuspensionen erarbeitet werden.
To this end, the work was divided into five main investigation parts.
In the first part, optimal PUS parameters such as amplitude and energy input were determined that allow an efficient acceleration of Portland cement (CEM I) hydration at short sonication times and a limited increase in cement paste temperature. Using independent investigation methods (determination of the initial setting time and of the strength development, non-destructive ultrasonic testing, isothermal heat flow calorimetry, and high-resolution scanning electron microscopy (SEM)), the effect of PUS on the course of hydration of CEM I suspensions was characterized. The results show that treating CEM I suspensions with PUS fundamentally accelerates setting and (early) strength development.
SEM investigations clearly demonstrated that the acceleration of CEM I hydration correlates with an accelerated hydration of the main clinker phase alite. Based on these findings, the causes of the activation of alite hydration were investigated. For this purpose, investigations were carried out on individual systems of CEM I (the silicate clinker phase).
It is known that the hydration of the main clinker phase alite (in its pure form tricalcium silicate, 3CaO·SiO2; C3S) is governed by dissolution/precipitation reactions (formation of calcium silicate hydrate phases, C-S-H phases). Using investigations of dissolution (C3S) and crystal formation (C-S-H phases) in solutions and suspensions (recording of the electrical conductivity, determination of the ion concentrations of the aqueous phase, and SEM characterization of the precipitates), the influence of a PUS treatment on these processes was characterized. The results show that in particle-free solutions (primary nucleation), a PUS treatment has no influence on the kinetics of the crystallization of C-S-H phases. In other words, even the energy introduced by PUS is evidently insufficient to accelerate the formation of C-S-H phases in the absence of surfaces. This indicates that the formation of C-S-H phases is not caused by an acceleration of ions in the solution (increased diffusion due to the application of PUS). An acceleration of the crystallization process (nucleation and growth of C-S-H phases) by PUS is achieved only in the presence of particles in the solution (suspension). This is confirmed by results in which the first C-S-H phases form at low supersaturation (heterogeneous nucleation, in the presence of surfaces). Under these conditions it could be shown that PUS causes an increased precipitation of first C-S-H phases within the first 30 minutes of hydration. These presumably then act as nuclei, or as suitable surfaces, for the accelerated growth of further C-S-H phases during the main hydration. Furthermore, it is conceivable that (in analogy to other areas of sonochemistry) PUS generates shock waves through cavitation, which accelerate the particles and the aqueous medium and thereby induce increased particle motion and collisions.
This in turn causes the C-S-H phases initially formed on the C3S surface to be partially removed again, so that the dissolution of Ca and Si ions from the C3S remains possible. To characterize the exact mechanism further, additional investigations with suitable methods should be carried out.
In the second part of the work, the influence of PUS on the flow behavior of CEM I suspensions was investigated. From the application of PUS in other technical fields, effects such as the deaeration, homogenization, and dispersion of suspensions and emulsions by PUS are known. The influence of PUS on CEM I suspensions was characterized by determining the air void content, by sedimentation tests, and by cryo-SEM investigations. The results demonstrate that PUS improves the homogeneity and dispersion of the CEM I suspension. Accordingly, an improved flowability is observed for CEM I suspensions over a wide range of water/cement (w/c) ratios. Results from spread flow and funnel flow time measurements show that PUS has a direct influence above all on the viscosity of the CEM I suspensions. When superplasticizers (SP) are added to the CEM I suspension, an improved flowability is not observed in every case. Here, under certain conditions (w/c ratio, SP content, PUS), the reaction between the aluminate and sulfate phases of the clinker appears to be disturbed. To clarify this issue conclusively, however, further quantitative investigations of the reaction conversion are required.
In the third part of the work, the findings obtained on CEM I regarding the influence of PUS on hydration were verified on Portland blast furnace slag cement systems. For this purpose, the optimal PUS parameters were again determined first, and the influence on the setting and hardening behavior was documented. Investigation methods included the determination of the initial setting time and of the (early) strength development, temperature recordings, isothermal heat flow calorimetry, and SEM. The results show that the reaction of slag cements is also accelerated by PUS. Further investigations demonstrate that the achieved acceleration is primarily based on the acceleration of the alite component of the CEM I.
The fourth and fifth parts of this work focused on the applicability of the PUS technique under practical conditions. First, the applicability of PUS in ready-mixed mortars was assessed. By comparing important fresh and hardened mortar properties of differently produced mortars (sonicated after conventional mixing, sonicated after suspension mixing with subsequent addition of the aggregate, and not sonicated), it is shown that, for mortars with a high paste content, a PUS-induced accelerated strength development is also possible with conventional mixing procedures (without an elaborate modification of the mixing process).
Finally, it is investigated whether the production process of wall elements in a precast plant can be optimized by the use of PUS and whether the PUS technique can be integrated into the production process without major effort. In a first step, the fresh and hardened concrete properties of a currently used self-compacting concrete were documented at laboratory scale (mortar) as a function of a PUS treatment and compared with those of its unsonicated reference. Owing to the improved flow and strength properties caused by PUS, the sonicated mortar mix can be optimized with regard to superplasticizer content and duration of the heat treatment; in this way, about 30 % of the superplasticizer addition and 40 % of the heat treatment duration are saved. After reviewing the structural conditions of the production facilities, an integration of the PUS technique into the considered precast plant is possible without major effort.
The gradual digitization in the architecture, engineering, and construction industry over the past fifty years has led to an extremely heterogeneous software environment, which today is embodied by the multitude of different digital tools and proprietary data formats used by the many specialists contributing to the design process in a construction project. While these projects become increasingly complex, the demands on financial efficiency and completion within tight schedules grow at the same time. The digital collaboration of project partners has been identified as one key issue in successfully dealing with these challenges. Yet currently, the numerous software applications and their respective individual views on the design process severely impede that collaboration.
An approach to establish a unified basis for digital collaboration, regardless of the existing software heterogeneity, is a comprehensive digital building model contributed to by all project partners. This type of data management, known as building information modeling (BIM), has many benefits, yet its adoption is associated with many difficulties and thus proceeds only slowly. One aspect in the field of conflicting requirements on such a digital model is the cooperation of architects and structural engineers. Traditionally, these two disciplines use different abstractions of reality for their models, which in consequence lead to incompatible digital representations thereof.
The onset of isogeometric analysis (IGA) promised to ease the discrepancy between design and analysis model representations. Yet that initial focus quickly shifted towards using these methods as a more powerful basis for numerical simulations. Furthermore, the isogeometric representation alone is not capable of solving the model abstraction problem. It is thus the intention of this work to contribute to an improved digital collaboration of architects and engineers by exploring an integrated analysis approach on the basis of a unified digital model and solid geometry expressed by splines. In the course of this work, an analysis framework is developed that utilizes such models to automatically conduct numerical simulations commonly required in construction projects. In essence, this allows retrieving structural analysis results from BIM models in a fast and simple manner, thereby facilitating rapid design iterations and profound design feedback.
The BIM implementation Industry Foundation Classes (IFC) is reviewed with regard to its capabilities of representing the unified model. The current IFC schema strongly supports the use of redundant model data, a major pitfall in digital collaboration. Additionally, it does not allow describing the geometry by volumetric splines. As the pursued approach builds upon a unique model for both architectural and structural design, and furthermore requires solid geometry, necessary schema modifications are suggested.
Structural entities are modeled by volumetric NURBS patches, each of which constitutes an individual subdomain that, with regard to the analysis, is incompatible with the remaining full model. The resulting consequences for numerical simulation are elaborated in this work. The individual subdomains have to be weakly coupled, for which the mortar method is used. Different approaches to discretize the interface traction fields are implemented and their respective impact on the analysis results is evaluated. All necessary coupling conditions are automatically derived from the related geometry model.
The weak coupling procedure leads to a linear system of equations in saddle point form, which, owing to the volumetric modeling, is large, and whose coefficient matrix, due to the use of higher-degree basis functions, has a high bandwidth. The peculiarities of the system require adapted solution methods that generally cause higher numerical costs than the standard procedures for symmetric, positive-definite systems do. Different methods to solve the specific system are investigated, and an efficient parallel algorithm is finally proposed.
When the structural analysis model is derived from the unified model in the BIM data, it generally does not initially meet the requirements on the discretization that are necessary to obtain sufficiently accurate analysis results. The consequently necessary patch refinements must be controlled automatically to allow for an entirely automatic analysis procedure. For that purpose, an empirical refinement scheme based on the geometrical and possibly mechanical properties of the specific entities is proposed. The level of refinement may be selectively manipulated by the structural engineer in charge. Furthermore, a Zienkiewicz-Zhu type error estimator is adapted for use with isogeometric analysis results. It is shown that this estimator, too, can be used to steer an adaptive refinement procedure.
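The saddle point structure arising from the weak (mortar) coupling can be illustrated with a toy system. The block sizes, matrices, and Schur-complement solve below are purely illustrative assumptions, not the discretization or the parallel algorithm developed in the thesis:

```python
import numpy as np

# Toy KKT system  K u + B^T lam = f,  B u = g,  as it arises when
# subdomains are weakly coupled via Lagrange multipliers.
rng = np.random.default_rng(0)
n, m = 8, 2                           # primal dofs, multiplier dofs (toy sizes)
A = rng.standard_normal((n, n))
K = A @ A.T + n * np.eye(n)           # s.p.d. stiffness block
B = rng.standard_normal((m, n))       # interface coupling constraints
f = rng.standard_normal(n)
g = rng.standard_normal(m)

# Schur-complement solve: eliminate u, solve a small system for lam first.
Kinv_f = np.linalg.solve(K, f)
Kinv_Bt = np.linalg.solve(K, B.T)
S = B @ Kinv_Bt                       # Schur complement (m x m, s.p.d.)
lam = np.linalg.solve(S, B @ Kinv_f - g)
u = Kinv_f - Kinv_Bt @ lam
```

Because the full coefficient matrix is indefinite, such elimination (or an adapted iterative method) is needed instead of a plain Cholesky factorization, which is the peculiarity the abstract refers to.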
A parametric method for building design optimization based on Life Cycle Assessment - Appendix
(2016)
The building sector is responsible for a large share of human environmental impacts, over which architects and planners have a major influence. The main objective of this thesis is to develop a method for environmental building design optimization based on Life Cycle Assessment (LCA) that is applicable as part of the design process. The research approach includes a thorough analysis of LCA for buildings in relation to the architectural design stages and the establishment of a requirement catalogue. The key concept of the novel method, called Parametric Life Cycle Assessment (PLCA), is to combine LCA with parametric design. The application of this method to three examples shows that building designs can be optimized time-efficiently and holistically from the beginning of the most influential early design stages, an achievement which has not been possible until now.
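The core bookkeeping behind any building LCA can be sketched in a few lines. All component names, masses, and impact factors below are invented placeholder values, not data or methodology from the thesis; the point is only the parametric structure (change a design parameter, re-evaluate the impact):

```python
# Minimal parametric LCA sketch: embodied global warming potential (GWP)
# as the mass-weighted sum of per-material impact factors.
materials = {  # component: (mass in kg, GWP factor in kg CO2-eq per kg)
    "concrete": (12000.0, 0.11),
    "steel":    (800.0, 1.90),
    "timber":   (1500.0, 0.25),
}

def embodied_gwp(materials):
    """Total embodied GWP in kg CO2-eq."""
    return sum(mass * factor for mass, factor in materials.values())

gwp = embodied_gwp(materials)
```

In a parametric design environment, the `materials` dictionary would be driven by the geometry model, so every design variant immediately yields an updated impact value.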
In this study, the behavior of a widely graded soil prone to suffusion, and the necessity of homogeneity quantification for such a soil in internal stability considerations, are discussed. With the help of suffusion tests, the dependency of particle washout on the homogeneity of the sample is shown. The great influence of homogeneity on suffusion processes is established through arguments and evidence. It is emphasized that the internal stability of a widely graded soil cannot be directly correlated with common geotechnical parameters such as dry density or permeability. The initiation and propagation of suffusion processes are clearly particle-scale phenomena, so the homogeneity of particle assemblies (micro-scale) has a decisive effect on particle rearrangement and washout processes. It is pointed out that the guidelines for assessing internal stability lack a fundamental, scientific basis for the quantification of homogeneity. The observation of segregation processes within the sample in an ascending layered order (for downward flow) inspired the author to propose a new packing model for granular materials that are prone to internal instability.
It is shown that the particle arrangement, especially the arrangement of the soil skeleton particles or the so-called primary fabric, plays the main role in suffusive processes. Therefore, an experimental approach for the identification of the skeleton in the soil matrix is proposed. 3D models of Sequential Fill Tests using the Discrete Element Method (DEM) and 3D models of granular packings for relatively, stochastically, and ideally homogeneous particle assemblies were generated, and simulations were carried out.
Based on the numerical investigations, and depending on the soil skeleton behavior, an approach for measuring the relevant scale, the so-called Representative Elementary Volume (REV), for homogeneity investigation is proposed. A new in-situ testing method for the quantification of homogeneity is introduced. An approach for the quantification of homogeneity in numerically or experimentally generated packings (samples), based on MATLAB image processing, is introduced. A generalized experimental method for assessing the internal stability of widely graded soils with a dominant coarse matrix is developed, and a new suffusion criterion based on an ideally homogeneous, internally stable granular packing is designed.
My research emphasizes that in a widely graded soil with a dominant coarse matrix, the soil fractions with diameters larger than D60 essentially build the soil skeleton. The mass and spatial distribution of these fractions govern the internal stability, while the mass and distribution of the fill fractions are a secondary matter. For such a soil, the homogeneity of the skeleton must be carefully measured and verified.
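One plausible way to quantify homogeneity from a binarized packing image, in the spirit of the image-based approach mentioned above, is the coefficient of variation of the local solid fraction over a grid of subwindows. This is an illustrative measure of my own choosing, not the MATLAB procedure developed in the thesis:

```python
import numpy as np

def homogeneity_cv(packing, cells=4):
    """Coefficient of variation of the local solid fraction of a binary
    2-D packing image, evaluated on a cells x cells grid of subwindows.
    0 means perfectly homogeneous at that scale; larger values indicate
    segregation. The grid size plays the role of the evaluation scale (REV)."""
    h, w = packing.shape
    fractions = []
    for i in range(cells):
        for j in range(cells):
            sub = packing[i * h // cells:(i + 1) * h // cells,
                          j * w // cells:(j + 1) * w // cells]
            fractions.append(sub.mean())
    fractions = np.asarray(fractions)
    return fractions.std() / fractions.mean()
```

Sweeping `cells` over several values shows at which window size the measure stabilizes, which mirrors the idea of determining a representative elementary volume.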
Augmented Urban Model: A Tangible User Interface for Supporting Urban Planning Processes
(2011)
In the architectural and urban design context, physical and digital models fulfill different, unconnected tasks and functions in the design and planning process owing to their largely complementary properties and qualities. While physical models are used above all as means of presentation and communication, but also as working tools, digital models additionally support the evaluation of a design through computer-aided analysis and simulation techniques.
Within the work presented in this working paper, the use of the model as an analog and digital design tool was analyzed, as well as the significance of the model for the working process and precedents from the field of tangible user interfaces related to architecture and urban design. Based on these considerations, a prototype was developed, the Augmented Urban Model, which builds among other things on early projects and research approaches from the field of tangible user interfaces, such as the metaDESK by Ullmer and Ishii and the urban planning tool Urp by Underkoffler and Ishii.
The Augmented Urban Model aims to bridge the gap between real and digital model worlds that is missing in the current design and planning process, and at the same time to create a new tangible user interface that enables the manipulation of, and interaction with, digital data in real space.
This working paper describes how, starting from an existing street network, development areas can be automatically parceled, i.e. subdivided into lots, using subdivision algorithms, and subsequently built up on the basis of different urban development types. The subdivision of development areas and the generation of building structures are subject to specific urban planning constraints, requirements, and parameters. The goal is to develop, from the investigations presented, a suggestion system for urban design proposals, which is further discussed on the basis of the implementation of a first software prototype for generating urban structures.
Activity Spaces in Dresden
(2012)
In the present study, the activity spaces of respondents in Dresden are investigated by means of a standardized survey (n=360). The activities underlying the activity spaces are differentiated into shopping for daily needs, going out (e.g. to a café, pub, or restaurant), outdoor recreation (e.g. going for a walk, using green spaces), and private sociability (e.g. celebrations, visiting relatives/friends). The activity radius is differentiated into one's own quarter, neighboring quarters, and the rest of the city. In order to form a comprehensive indicator of a respondent's average activity radius from the four activities considered, a model for an activity radius indicator is developed. The study concludes that the age of the respondents has a significant, albeit small, influence on the activity radius. The net household income has a significant influence (with reservations), likewise small, on the everyday activities of the respondents.
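A composite activity-radius indicator of the kind described can be sketched as a weighted average of ordinal radius codes. The coding and the uniform weights below are hypothetical illustrations; the study's actual model may differ:

```python
# Hypothetical composite indicator: code each activity's radius ordinally
# (1 = own quarter, 2 = neighboring quarter, 3 = rest of city) and average.
RADIUS_CODE = {"own quarter": 1, "neighboring quarter": 2, "rest of city": 3}

def activity_radius(responses, weights=None):
    """Average radius code over one respondent's activity types."""
    codes = [RADIUS_CODE[r] for r in responses.values()]
    if weights is None:
        weights = [1.0] * len(codes)       # equal weighting as a default
    return sum(c * w for c, w in zip(codes, weights)) / sum(weights)

r = activity_radius({
    "shopping":    "own quarter",
    "going out":   "rest of city",
    "recreation":  "own quarter",
    "sociability": "neighboring quarter",
})  # (1 + 3 + 1 + 2) / 4 = 1.75
```

Such an indicator can then be regressed on respondent attributes (age, household income) to test the influences reported in the study.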
In the present study, the residential location preferences of the Sinus milieu groups in Dresden are investigated by means of a standardized survey (n=318). A distinction is made between action-guiding residential location preferences, which, based on evidence at the level of action, should be given greater consideration, and residential location preferences that have a rather orientating character. The residential location preferences are examined using the categories of furnishing/condition of the dwelling and its immediate surroundings, supply infrastructure, social environment, building structure type, attachment to place, and the aspect of a quarter's image. In order to assign the respondents to the Sinus milieu groups, a lifeworld segment model is developed, which aims to approximate the Sinus milieu groups in tendency. The study concludes that the members of the different lifeworld segments show significant differences, albeit partly at a low level, in the evaluation of individual residential location preferences in every category.
Knowing Who Lives Where
(2012)
In cities, people live together in neighbourhoods. Here they can find the infrastructure they need, from shops for daily purposes to life-cycle-based infrastructures like kindergartens or nursing homes. But not all neighbourhoods are identical. The infrastructure mixture varies from neighbourhood to neighbourhood, and different people have different needs, which can change, e.g., based on their life cycle situation or their affiliation to a specific milieu. We can assume that a person or family tries to settle in a specific neighbourhood that satisfies their needs. So, if the residents are happy with a neighbourhood, we can further assume that this neighbourhood satisfies their needs. The socio-economic panel (SOEP) of the German Institute for Economic Research (DIW) is a survey that investigates the economic structure of the German population. Every four years, one part of this survey includes questions about which infrastructures can be found in the respondent's neighbourhood and about the respondent's satisfaction with their neighbourhood. Further, it is possible to add a milieu estimation for each respondent or household. This gives us the possibility to analyse the typical neighbourhoods in German cities as well as the infrastructure profiles of the different milieus. Therefore, we take the environment variables from the dataset and recode them into binary variables – whether an infrastructure is available or not. According to Faust (2005), these sets can also be understood as a network of actors in a neighbourhood who share two, three, or more infrastructures. Like these networks, this neighbourhood network can also be visualized as a bipartite affiliation network and therefore analysed using correspondence analysis. We will show how a neighbourhood analysis benefits from an upstream correspondence analysis and how this can be done. We will also present and discuss the results of such an analysis.
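The binary recoding and affiliation-network step described above can be sketched as follows. The household and infrastructure names are toy placeholders (the SOEP variable names are not reproduced here), and the projection shown is only the co-occurrence matrix that a correspondence analysis would start from:

```python
import numpy as np

# Recode availability answers into a binary household x infrastructure
# affiliation matrix (toy data).
infrastructures = ["shop", "kindergarten", "nursing_home", "park"]
households = {
    "hh1": {"shop": 1, "kindergarten": 1, "nursing_home": 0, "park": 1},
    "hh2": {"shop": 1, "kindergarten": 0, "nursing_home": 1, "park": 0},
    "hh3": {"shop": 0, "kindergarten": 1, "nursing_home": 0, "park": 1},
}

A = np.array([[households[h][i] for i in infrastructures]
              for h in sorted(households)])

# One-mode projection: entry (i, j) counts the households that have both
# infrastructure i and infrastructure j available -- the affiliation
# network underlying the bipartite view.
co_occurrence = A.T @ A
```

The matrix `A` itself (households x infrastructures) is what would be fed into a correspondence analysis to position milieus and infrastructure profiles in a common low-dimensional space.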
K-dimensional trees, abbreviated in English as k-d trees, are binary search and partitioning trees that represent a set of n points in a multidimensional space. K-d tree data structures are used above all in nearest neighbor queries and in further search algorithms, for example in database applications. Within the research project Kremlas, space partitioning by k-d trees was developed as a partial solution for generating layouts during the development of a creative evolutionary design method for layout problems in architecture and urban design. The design and development of layouts, i.e. the arrangement of rooms, building volumes, and building complexes in the architectural and urban context, is a central task in architecture and urban planning. It requires functional as well as creative problem solving from architects and planners. The research project is therefore concerned not only with the optimization of floor plans but also incorporates design aspects. In the developed partial solution, the k-d tree algorithm first serves to subdivide a given area, with the cutting lines corresponding to possible room boundaries. By combining the k-d tree algorithm with genetic algorithms and evolutionary strategies, layouts are optimized with respect to the criteria of room size and adjacencies. Through user interaction, the solutions can be adapted dynamically and modified at runtime according to design criteria. The result is a generative mechanism that represents a promising alternative to already known algorithms for the creative algorithmic solution of layout tasks in architecture and urban design.
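The basic k-d-tree subdivision of an area into candidate rooms can be sketched in a few lines. This is an illustrative recursive partition, not the Kremlas implementation; the minimum room size and the random cut positions are assumptions:

```python
import random

def kdtree_partition(rect, depth=0, min_size=2.0):
    """Recursively split an axis-aligned rectangle (x0, y0, x1, y1) in
    k-d-tree fashion: alternate the split axis per level and stop when a
    rectangle is too small. Returns the leaf rectangles (candidate rooms);
    the cutting lines correspond to possible room boundaries."""
    x0, y0, x1, y1 = rect
    axis = depth % 2                       # 0: vertical cut, 1: horizontal cut
    extent = (x1 - x0) if axis == 0 else (y1 - y0)
    if extent < 2 * min_size:
        return [rect]                      # leaf: splitting would violate min_size
    cut = random.uniform(min_size, extent - min_size)
    if axis == 0:
        left, right = (x0, y0, x0 + cut, y1), (x0 + cut, y0, x1, y1)
    else:
        left, right = (x0, y0, x1, y0 + cut), (x0, y0 + cut, x1, y1)
    return (kdtree_partition(left, depth + 1, min_size)
            + kdtree_partition(right, depth + 1, min_size))

leaves = kdtree_partition((0.0, 0.0, 20.0, 12.0))
```

In an evolutionary setting, the cut positions would form the genome, so that genetic operators can vary the layout while the tree structure guarantees a gap-free, overlap-free partition.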
The key objective of this research is to study fracture with a meshfree method, local maximum-entropy approximations, and to model fracture in thin shell structures with complex geometry and topology. This topic is of high relevance for real-world applications, for example in the automotive industry and in aerospace engineering. The shell structure can be described efficiently by meshless methods, which are capable of describing complex shapes as a collection of points instead of a structured mesh. In order to find the appropriate numerical method to achieve this goal, the first part of the work was the development of a method based on local maximum entropy (LME) shape functions, together with enrichment functions used in partition-of-unity methods, to discretize problems in linear elastic fracture mechanics. We obtain improved accuracy relative to the standard extended finite element method (XFEM) at a comparable computational cost. In addition, we keep the advantages of the LME shape functions, such as smoothness and non-negativity. We show numerically that optimal convergence (the same as in FEM) for the energy norm and stress intensity factors can be obtained through the use of geometric (fixed-area) enrichment with no special treatment of the nodes near the crack, such as blending or shifting.
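The LME shape functions used throughout can be evaluated with a short Newton iteration on the log-partition function, following the standard Arroyo–Ortiz construction (Gaussian prior with locality parameter beta). This is a minimal illustrative sketch, not the code of the thesis:

```python
import numpy as np

def lme_shape_functions(x, nodes, beta, tol=1e-10, maxit=50):
    """Local maximum-entropy shape functions at point x.
    nodes: (n, d) array of node coordinates; beta: locality parameter.
    Returns p (n,) satisfying partition of unity and linear consistency."""
    dx = nodes - x                     # (n, d) offsets x_a - x
    lam = np.zeros(x.shape[0])         # Lagrange multipliers for consistency
    for _ in range(maxit):
        f = -beta * np.sum(dx**2, axis=1) + dx @ lam
        f -= f.max()                   # stabilize the exponentials
        w = np.exp(f)
        p = w / w.sum()                # candidate shape functions
        r = p @ dx                     # gradient of log-partition; must vanish
        if np.linalg.norm(r) < tol:
            break
        J = (dx.T * p) @ dx - np.outer(r, r)   # Hessian of log-partition
        lam -= np.linalg.solve(J, r)   # Newton update
    return p
```

At the converged multipliers, the functions are non-negative, sum to one, and reproduce linear fields exactly, which are the properties the abstract highlights.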
As the extension of this method to three-dimensional problems and complex thin shell structures with arbitrary crack growth is cumbersome, we developed a phase field model for fracture using LME. Phase field models provide a powerful tool to tackle moving-interface problems and have been extensively used in physics and materials science. They are gaining popularity in a wide range of applications in applied science and engineering; recently, a second-order phase field approximation for brittle fracture has gathered significant interest in computational fracture, in which sharp crack discontinuities are modeled by a diffusive crack. By minimizing the system energy with respect to the mechanical displacements and the phase field, subject to an irreversibility condition to avoid crack healing, this model can describe crack nucleation, propagation, branching, and merging. One of the main advantages of phase field modeling of fracture is the unified treatment of interface tracking and mechanics, which potentially leads to simple, robust, scalable computer codes applicable to complex systems. In other words, this approximation considerably reduces the implementation complexity, because numerical tracking of the fracture is not needed, at the expense of a high computational cost. We present a fourth-order phase field model for fracture based on local maximum entropy (LME) approximations. The higher-order continuity of the meshfree LME approximation allows directly solving the fourth-order phase field equations without splitting the fourth-order differential equation into two second-order differential equations. Notably, in contrast to previous discretizations that use at least a quadratic basis, only linear completeness is needed in the LME approximation. We show that the crack surface can be captured more accurately in the fourth-order model than in the second-order model.
Furthermore, fewer nodes are needed for the fourth-order model to resolve the crack path. Finally, we demonstrate the performance of the proposed meshfree fourth-order phase field formulation for five representative numerical examples. Computational results are compared to analytical solutions within linear elastic fracture mechanics and to experimental data for three-dimensional crack propagation.
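The variational structure behind this can be summarized compactly. The following shows the widely used second-order (AT2-type) formulation; the exact normalization constants of the fourth-order density used in the thesis are not reproduced here:

```latex
E(\mathbf{u}, d) \;=\; \int_\Omega (1-d)^2\, \psi\!\big(\varepsilon(\mathbf{u})\big)\,\mathrm{d}\Omega
\;+\; G_c \int_\Omega \gamma(d)\,\mathrm{d}\Omega,
\qquad
\gamma_2(d) \;=\; \frac{d^2}{2\ell} + \frac{\ell}{2}\,\lvert\nabla d\rvert^2 .
```

Minimizing $E$ with respect to $\mathbf{u}$ and $d$, under the irreversibility condition $\dot{d} \ge 0$, yields nucleation, propagation, branching, and merging without explicit crack tracking. Fourth-order variants augment $\gamma_2$ with a term proportional to $\ell^3 (\Delta d)^2$, which sharpens the regularized crack profile but raises the continuity requirement on the discretization from $C^0$ to $C^1$; the smooth LME basis meets this requirement without splitting the equation.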
In the last part of this research, we present a phase field model for fracture in Kirchhoff-Love thin shells using the local maximum-entropy (LME) meshfree method. Since the crack is a natural outcome of the analysis, it does not require an explicit representation and tracking, which is advantageous over techniques such as the extended finite element method, which require tracking of the crack paths. The geometric description of the shell is based on statistical learning techniques that allow dealing with general point set surfaces while avoiding a global parametrization, and that can be applied to tackle surfaces of complex geometry and topology. We show the flexibility and robustness of the present methodology for two examples: a plate in tension and a set of open connected pipes.
Previous models for the explanation of settlement processes pay little attention to the interactions between settlement spreading and road networks. On the basis of a dielectric breakdown model in combination with cellular automata, we present a method to precisely steer the generation of settlement structures with regard to their global and local density as well as the size and number of forming clusters. The resulting structures depend on how the interdependence of the settlements and the road network is implemented in the simulation model. After analysing the state of the art, we begin with a discussion of the mutual dependence of roads and land development. Next, we elaborate a model that permits the precise control of permeability in the developing structure as well as of the settlement density, using the fewest necessary control parameters. On the basis of different characteristic values, possible settlement structures are analysed and compared with each other. Finally, we reflect on the theoretical contribution of the model with regard to the context of urban dynamics.
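The growth mechanism of a dielectric-breakdown-style cellular automaton can be sketched on a toy grid. The neighbourhood rule, the exponent `eta`, and the absence of any road network below are simplifying assumptions of this sketch, not the authors' model:

```python
import random

def grow_settlement(size=40, steps=400, eta=1.5, seed=1):
    """Toy DBM-flavoured cellular automaton: at each step, one empty cell
    adjacent to the cluster is occupied with probability proportional to
    (number of settled neighbours) ** eta. Larger eta concentrates growth
    on strongly connected sites and yields denser, more compact clusters;
    smaller eta yields sparser, more dendritic structures."""
    random.seed(seed)
    settled = {(size // 2, size // 2)}          # seed cell in the grid centre
    for _ in range(steps):
        weights = {}                            # candidate cell -> neighbour count
        for (x, y) in settled:
            for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                c = (x + dx, y + dy)
                if c not in settled and 0 <= c[0] < size and 0 <= c[1] < size:
                    weights[c] = weights.get(c, 0) + 1
        cells = list(weights)
        w = [weights[c] ** eta for c in cells]
        settled.add(random.choices(cells, weights=w)[0])
    return settled
```

In the paper's setting, the growth probability would additionally be conditioned on the road network, which is exactly the coupling whose implementation logic shapes the resulting structures.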
How do particular structure formations come about in cities, and which forces play a role in this process? To which elements can the phenomena be reduced in order to find the respective combination rules? How must general principles be formulated so that urban processes can be described in a way that produces different structural qualities? With the aid of mathematical methods, models based on four basic levels are generated in the computer, through which the connections between the elements and the rules of their interaction can be examined. Conclusions about the functioning of development processes and further urban genesis can be drawn.
PLANNING SUPPORT THROUGH THE ANALYSIS OF SPATIAL PROCESSES BY MEANS OF COMPUTER SIMULATIONS. Meaningful urban planning is possible only once one understands, at least in principle, how a city with its complex, interwoven processes essentially functions, for every planning action constitutes an intervention in the complex organism of a city. If this intervention takes place without knowledge of how the organism functions, its effects cannot be estimated either. This contribution presents how urban processes can be understood by means of computer simulations with the aid of so-called multi-agent systems and cellular automata.
At the end of the 1960s, architects at various universities worldwide began to explore the potential of computer technology for their profession. With the decline in prices for PCs in the 1990s and the development of various computer-aided architectural design (CAAD) systems, the use of such systems in architectural and planning offices grew continuously. Because today no architectural office manages without a costly CAAD system, and because intensive software training has become an integral part of a university education, the question arises as to what influence the various computer systems have had on the design process forming the core of architectural practice. The text at hand develops ten theses about why there has been no success to this day in introducing computers such that new qualitative possibilities for design result.
The structure and development of cities can be seen and evaluated from different points of view. By replicating the growth or shrinkage of a city using historical maps depicting different time states, we can obtain momentary snapshots of the dynamic mechanisms of the city. An examination of how these snapshots change over the course of time, and a comparison of the different static time states, reveals the various interdependencies of population density, technical infrastructure, and the availability of public transport facilities. Urban infrastructure and facilities are not distributed evenly across the city; rather, they are subject to different patterns and speeds of spread over the course of time and follow different spatial and temporal regularities. The reasons and underlying processes that cause the transition from one state to another result from the same recurring but varyingly pronounced hidden forces and their complex interactions. Such forces encompass a variety of economic, social, cultural, and ecological conditions whose respective weighting defines the development of a city in general. Urban development is, however, not solely a product of the different spatial distribution of economic, legal, or social indicators, but also of the distribution of infrastructure. But to what extent is the development of a city affected by the changing provision of infrastructure?
In the Space Syntax community, the standard tool for computing all kinds of spatial graph network measures is depthmapX (Turner, 2004; Varoudis, 2012). Evaluating many design variants of networks is relatively complicated, since they need to be drawn in a separate CAD system, exported, and imported into depthmapX via the DXF file format. This procedure prevents continuous integration into a design process. Furthermore, the standalone character of depthmapX makes it impossible to use its network centrality calculation for optimization processes. To overcome these limitations, we present in this paper the first steps of experimenting with a Grasshopper component (reference omitted until final version) that can access the functions of depthmapX and integrate them into Grasshopper/Rhino3D. The component is implemented so that it can be used directly by an evolutionary algorithm (EA) implemented in a Python scripting component in Grasshopper.
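The evolutionary loop such a scripting component would run can be sketched in plain Python. This is an illustrative sketch, not the authors' implementation; the `objective` function is a hypothetical stand-in for the depthmapX centrality evaluation that the component exposes.

```python
import random

def evolve(fitness, n_genes=8, pop_size=20, generations=100, seed=1):
    """Minimal elitist evolutionary loop: truncation selection,
    uniform crossover and single-gene Gaussian mutation on [0, 1] genomes."""
    rng = random.Random(seed)
    pop = [[rng.uniform(0.0, 1.0) for _ in range(n_genes)] for _ in range(pop_size)]
    for _ in range(generations):
        parents = sorted(pop, key=fitness, reverse=True)[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            child = [rng.choice(pair) for pair in zip(a, b)]   # uniform crossover
            i = rng.randrange(n_genes)                         # mutate one gene
            child[i] = min(1.0, max(0.0, child[i] + rng.gauss(0.0, 0.1)))
            children.append(child)
        pop = parents + children                               # elitism: parents survive
    return max(pop, key=fitness)

# stand-in objective; in the paper this role is played by a network
# centrality score computed through depthmapX
objective = lambda genome: -sum((g - 0.5) ** 2 for g in genome)
best = evolve(objective)
```

In the described setup the genome would encode network design parameters and each fitness call would round-trip through the Grasshopper component instead of evaluating an analytic function.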
The described study aims to find correlations between urban spatial configurations and human emotions. To this end, the authors measured people’s emotions while they walked along a path in an urban area, using an instrument that records skin conductance and skin temperature; the corresponding locations of the test persons were recorded using a GPS tracker (n = 13). The results are interpreted and categorized as measures of positive and negative emotional arousal, which also served to evaluate the technical and methodological process. The test results offer initial evidence that certain spaces or spatial sequences do cause positive or negative emotional arousal while others are relatively neutral. To achieve the goal of the study, this outcome was used as a basis for testing correlations between people’s emotional responses and urban spatial configurations represented by Isovist properties of the urban form. Using their model, the authors can explain negative emotional arousal for certain places, but they could not find a model that predicts emotional responses for individual spatial configurations.
Urban planning involves many aspects and various disciplines, demanding an asynchronous planning approach. The level of complexity rises with each aspect to be considered, making it difficult to find universally satisfactory solutions. To improve this situation we propose a new approach, which complements traditional design methods with a computational urban planning method that can fulfil formalizable design requirements automatically. Based on this approach we present a design space exploration framework for complex urban planning projects. For a better understanding of the idea of design space exploration, we introduce the concept of a digital scout that guides planners through the design space and assists them in their creative explorations. The scout can support planners during manual design by informing them about potential impacts or by suggesting different solutions that fulfil predefined quality requirements. The planner can switch flexibly between a manually controlled and a completely automated design process. The developed system is presented using an exemplary urban planning scenario on two levels, from the street layout to the placement of building volumes. Based on Self-Organizing Maps, we implemented a method that makes it possible to visualize the multi-dimensional solution space in an easily analysable and comprehensible form.
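The Self-Organizing Map underlying such a solution-space visualization can be illustrated with a minimal one-dimensional SOM in plain Python. This is a generic sketch with assumed hyper-parameters, not the implementation from the paper:

```python
import math
import random

def train_som(data, n_units=10, epochs=50, lr0=0.5, radius0=3.0, seed=42):
    """Train a 1-D self-organizing map on a list of equal-length vectors.
    Each unit's weight vector is pulled toward inputs, with a Gaussian
    neighbourhood around the best-matching unit (BMU)."""
    rng = random.Random(seed)
    dim = len(data[0])
    # initialise unit weights uniformly within the per-dimension data range
    lo = [min(v[d] for v in data) for d in range(dim)]
    hi = [max(v[d] for v in data) for d in range(dim)]
    weights = [[rng.uniform(lo[d], hi[d]) for d in range(dim)] for _ in range(n_units)]
    for epoch in range(epochs):
        lr = lr0 * (1 - epoch / epochs)              # decaying learning rate
        radius = radius0 * (1 - epoch / epochs) + 1e-9  # shrinking neighbourhood
        for x in data:
            bmu = min(range(n_units),
                      key=lambda i: sum((weights[i][d] - x[d]) ** 2 for d in range(dim)))
            for i in range(n_units):
                h = math.exp(-((i - bmu) ** 2) / (2 * radius ** 2))
                for d in range(dim):
                    weights[i][d] += lr * h * (x[d] - weights[i][d])
    return weights

# e.g. each vector could hold a design variant's performance indicators
som = train_som([(0.0, 0.0), (0.2, 0.1), (1.0, 1.0), (0.9, 0.8)], n_units=5, epochs=30)
```

In a visualization setting, each trained unit then represents a cluster of similar solutions, giving a low-dimensional, browsable layout of the multi-dimensional solution space.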
It is not uncommon for analysis and simulation methods to be used mainly to evaluate finished designs and to prove their quality, whereas the potential of such methods is to lead or control a design process from the very beginning. We therefore introduce a design method that moves away from a “what-if” forecasting philosophy and increases the focus on backcasting approaches. We use the power of computation by combining sophisticated methods for generating designs with analysis methods to close the gap between analysis and synthesis of designs. For the development of a future-oriented computational design support we need to be aware of the human designer’s role. A productive combination of the excellence of human cognition with the power of modern computing technology is needed. We call this approach “cognitive design computing”. The computational part aims to mimic the way a designer’s brain works by combining state-of-the-art optimization and machine learning approaches with available simulation methods. The cognition part respects the complex nature of design problems by providing models for human–computer interaction, which means that a design problem is distributed between computer and designer. In the context of the conference slogan “back to command”, we ask how we may imagine the command over a cognitive design computing system. We expect that designers will need to cede control of some parts of the design process to machines, but in exchange they will gain powerful new command over complex computing processes. This means that designers have to explore the potentials of their role as commanders of partially automated design processes. In this contribution we describe an approach for the development of a future cognitive design computing system with a focus on urban design issues.
The aim of this system is to enable an urban planner to treat a planning problem as a backcasting problem by defining what performance a design solution should achieve and to automatically query or generate a set of best possible solutions. This kind of computational planning process offers proof that the designer meets the original, explicitly defined design requirements. A key way in which digital tools can support designers is by generating design proposals. Evolutionary multi-criteria optimization methods allow us to explore a multi-dimensional design space and provide a basis for the designer to evaluate contradicting requirements: a task urban planners face frequently. We also reflect on why designers will cede more and more control to machines. To this end, we investigate first approaches to learning how designers use computational design support systems in combination with manual design strategies to deal with urban design problems, employing machine learning methods. By observing how designers work, it is possible to derive more complex artificial solution strategies that can help computers make better suggestions in the future.
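The core of such evolutionary multi-criteria optimization is keeping only the non-dominated design variants. A minimal sketch (objective values assumed to be minimized; the variant generation itself is out of scope here):

```python
def pareto_front(solutions):
    """Filter a list of objective-value tuples down to the non-dominated set.
    All objectives are to be minimized."""
    def dominates(a, b):
        # a dominates b if it is no worse in every objective and better in one
        return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))
    return [s for s in solutions if not any(dominates(o, s) for o in solutions)]

# hypothetical trade-off, e.g. (walking distance, construction cost)
variants = [(1, 5), (2, 2), (5, 1), (3, 3), (4, 4)]
front = pareto_front(variants)  # → [(1, 5), (2, 2), (5, 1)]
```

Presenting the front instead of a single optimum is what lets the planner weigh the contradicting requirements mentioned above.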
In this paper we introduce LUCI, a Lightweight Urban Calculation Interchange system, designed to bring the advantages of a calculation and content coordination system to small planning and design groups by means of open-source middleware. The middleware focuses on problems typical of urban planning and therefore features a geo-data repository as well as a job runtime administration to coordinate simulation models and their multiple views. The described system architecture is accompanied by two exemplary use cases that have been used to test and further develop our concepts and implementations.
Nanostructured materials are extensively applied in many fields of materials science for new industrial applications, particularly in the automotive and aerospace industries, due to their exceptional physical and mechanical properties. Experimental testing of nanomaterials is expensive, time-consuming, challenging and sometimes unfeasible. Therefore, computational simulations have been employed as an alternative method to predict macroscopic material properties. The behavior of polymeric nanocomposites (PNCs) is highly complex.
The origins of macroscopic material properties reside in the properties and interactions taking place on finer scales. It is therefore essential to use a multiscale modeling strategy to properly account for all the length and time scales associated with these material systems, which span many orders of magnitude. Numerous multiscale models of PNCs have been established; however, most of them connect only two scales, and only a few bridge four length scales (nano-, micro-, meso- and macro-scale). In addition, nanomaterials are stochastic in nature, and the prediction of macroscopic mechanical properties is influenced by many factors, such as fine-scale features. Mechanical properties predicted by traditional approaches deviate significantly from the values measured in experiments because the uncertainty of material features is neglected. This discrepancy indicates that the effective macroscopic properties of materials are highly sensitive to various sources of uncertainty, such as loading and boundary conditions and material characteristics, while very few stochastic multiscale models for PNCs have been developed. Therefore, it is essential to construct PNC models within a stochastic modeling framework and to quantify the effect of the uncertain input parameters on the macroscopic mechanical properties of those materials.
This study aims to develop computational models at four length scales (nano-, micro-, meso- and macro-scale) and hierarchical upscaling approaches bridging length scales from nano- to macro-scale. A framework for uncertainty quantification (UQ), applied to predict the mechanical properties of the PNCs as a function of material features at different scales, is studied. Sensitivity and uncertainty analyses are of great help in quantifying the effect of input parameters, considering both main and interaction effects, on the mechanical properties of the PNCs. To achieve this major goal, the following tasks are carried out:
At nano-scale, molecular dynamics (MD) simulations were used to investigate the deformation mechanism of glassy amorphous polyethylene (PE) as a function of temperature and strain rate. Steered molecular dynamics (SMD) simulations were also employed to investigate the interfacial characteristics of the PNCs.
At micro-scale, we developed an atomistic-based continuum model represented by a representative volume element (RVE), in which the SWNT’s properties and the SWNT/polymer interphase are modeled at nano-scale while the surrounding polymer matrix is modeled by solid elements. A two-parameter model was then employed at meso-scale. A hierarchical multiscale approach has been developed to obtain the structure–property relations at one length scale and transfer the effect to the higher length scales. In particular, we homogenized the RVE into an equivalent fiber. The equivalent fiber was then employed in a micromechanical analysis (the Mori–Tanaka model) to predict the effective macroscopic properties of the PNC. Furthermore, an averaging homogenization process was also used to obtain the effective stiffness of the PNC at meso-scale.
Stochastic modeling and uncertainty quantification consist of the following ingredients:
- Simple random sampling, Latin hypercube sampling and Sobol’ quasirandom sequences are employed to generate independent sample data, while Iman and Conover’s method (inducing correlation in Latin hypercube sampling) is employed to generate dependent sample data.
- Surrogate models, such as polynomial regression, moving least squares (MLS), a hybrid method combining polynomial regression and MLS, Kriging regression, and penalized spline regression, are employed as approximations of the mechanical model. The advantage of surrogate models is their high computational efficiency and robustness, as they can be constructed from a limited amount of available data.
- Global sensitivity analysis (SA) methods, such as variance-based methods for models with independent and dependent input parameters, Fourier-based techniques for performing variance-based methods and partial derivatives, elementary effects in the context of local SA, are used to quantify the effects of input parameters and their interactions on the mechanical properties of the PNCs. A bootstrap technique is used to assess the robustness of the global SA methods with respect to their performance.
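Of the sampling schemes listed above, Latin hypercube sampling is the simplest to sketch. A minimal plain-Python version on the unit hypercube (illustrative only, not the implementation used in the thesis):

```python
import random

def latin_hypercube(n_samples, n_dims, seed=0):
    """Latin hypercube sample on [0, 1)^n_dims: each dimension is divided
    into n_samples equal strata and every stratum is hit exactly once."""
    rng = random.Random(seed)
    columns = []
    for _ in range(n_dims):
        # one uniformly placed point per stratum, then shuffle the order
        points = [(i + rng.random()) / n_samples for i in range(n_samples)]
        rng.shuffle(points)
        columns.append(points)
    return [tuple(col[i] for col in columns) for i in range(n_samples)]

X = latin_hypercube(10, 3)
```

Compared with simple random sampling, the stratification guarantees even marginal coverage with the same number of model evaluations, which is why it is a standard choice for feeding surrogate models and sensitivity analyses.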
In addition, the probability distributions of the mechanical properties are determined using the probability plot method, and upper and lower bounds of the predicted Young’s modulus are provided according to 95 % prediction intervals.
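A 95 % prediction interval of the kind reported can be computed as follows, assuming normally distributed predictions and using the large-sample z-approximation (the exact form would use a Student-t quantile; the data below are illustrative):

```python
import math
import statistics

def prediction_interval(data, z=1.959964):
    """Two-sided ~95 % prediction interval for one future observation,
    assuming normally distributed data (large-sample z approximation)."""
    m = statistics.mean(data)
    s = statistics.stdev(data)  # sample standard deviation
    # sqrt(1 + 1/n) widens the interval to cover a *new* observation,
    # not just the mean
    half = z * s * math.sqrt(1.0 + 1.0 / len(data))
    return m - half, m + half

lo, hi = prediction_interval([1, 2, 3, 4, 5])
```

With a t-quantile in place of z the bounds would widen for small samples, which matters when only a handful of stochastic model runs are available.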
The above-mentioned methods address the behaviour of intact materials. In addition, novel numerical methods, namely a node-based smoothed extended finite element method (NS-XFEM) and an edge-based smoothed phantom node method (ES-Phantom node), were developed for fracture problems; these can be used to account for cracks at macro-scale in future work. The predicted mechanical properties were validated and verified and show good agreement with previous experimental and simulation results.
Superplasticizers are utilized both to improve fluidity during placement and to reduce the water content of concretes. Both effects also have an impact on the properties of the hardened concrete. As a side effect, however, the presence of superplasticizers strongly retards the strength development of concretes, which may be an economical drawback for concrete manufacturing. The present work aims at gaining insights into the causes of the retarding effect of superplasticizers on the hydration of Portland cement. In order to simplify the complex interactions occurring during the hydration of Portland cement, the majority of the work focuses on the interaction of superplasticizer and tricalcium silicate (Ca3SiO5 or C3S, the main compound of Portland cement clinker). The tests are performed in three main parts, accompanied by methods such as isothermal conduction calorimetry, electrical conductivity, electron microscopy, ICP-OES, TOC, and analytical ultracentrifugation.
In the first main part, based on the interaction of cations and anionic charges of polymers, the interactions between calcium ions and superplasticizers are investigated. As a main effect, calcium ions are complexed by the functional groups of the polymers (carboxylic, sulfonic). Calcium ions may be both dissolved in the aqueous phase and a constituent of particle interfaces. Besides these effects, it is furthermore shown that superplasticizers induce the formation of nanoscaled particles which are dispersed in the aqueous phase (cluster formation). Analogous to recent findings in the field of biomineralization, it is reasonable to assume that these nanoparticles influence crystal growth through their assembly process.
Based on the assumption that superplasticizers hinder dissolution, precipitation or both, and thereby retard cement hydration, their impact on the separate reactions is investigated. In experiments addressing the solubility of C-S-H phases and portlandite, it is shown that the complexation of calcium ions in the aqueous phase by functional groups of polymers increases the solubility of portlandite. In contrast, in the case of C-S-H solubility the complexation of calcium ions in solution leads to a decrease of the calcium ion concentration in the aqueous phase. These effects are explained by differences in the adsorption of polymers on C-S-H phases and portlandite. It is proposed that adsorption is stronger on C-S-H phases than on portlandite due to the higher specific surface area of C-S-H phases. Following that, it is argued that before polymers are able to adsorb on C-S-H phases, their functional groups must be screened by calcium ions in the aqueous phase. It is further shown that data regarding the impact of superplasticizers on the unconstrained dissolution rate of C3S do not provide a clear relation to the overall retarding effect occurring during the hydration of C3S: both increased and decreased dissolution rates with respect to the reference sample are detected. If the complexation capability of the superplasticizers is considered, a reduced dissolution rate of C3S is also determined. Despite the fact that the global hydration process is accelerated, the addition of calcite leads to a slower dissolution rate. Thus, a hindered unconstrained dissolution of C3S as a possible cause of the retarding effect remains open for discussion. In the last section of this part, the pure crystallization of hydrate phases (C-S-H phases, portlandite) is examined. The results clearly show that superplasticizers prolong the induction time and modify the rate of crystal growth during pure crystallization, in particular due to the complexation of ions in solution.
This effect alone, however, is insufficient to account for the overall retardation. Further important factors are the blocking of crystal growth faces by adsorbed polymers and the dispersion of nanoscaled particles, which hinders their agglomeration into crystals.
In the last main part of the work, the previously gathered results are utilized to investigate hydration kinetics. During hydration, dissolution and precipitation occur in parallel. Special attention is paid to the ion composition of the aqueous phase of C3S pastes and suspensions in order to determine the rate-limiting step. Overall, it is concluded that the retarding effect of superplasticizers on the hydration of tricalcium silicate is based on the retardation of the crystallization of hydrate phases (C-S-H phases and portlandite). Two effects are of major importance here: the complexation of calcium ions on surfaces and the stabilization of nanoscaled particles. These mechanisms may be partly compensated by template performance and by the increase in solubility due to complexation of ions in solution. The decreased dissolution rate of C3S in the presence of superplasticizers during the concurrent hydration process can only be assessed indirectly by means of the development of the ion concentrations in the aqueous phase (reaction path). Whether this observation is the cause or the consequence within the dissolution–precipitation process, and therefore accounts for the retarding effect, remains a topic for further investigations.
Besides these results, it is shown that superplasticizers can be classed chemically as inhibitors because they reduce the frequency factor for ending the induction period. Because the activation energy is largely unaffected, the basic reaction mechanism is shown to persist. Furthermore, a method was developed which for the first time permits the in-situ determination of ion concentrations in the aqueous phase of C3S pastes. It is shown that during C3S hydration the ion concentration in the aqueous phase develops in correspondence with the heat release rate (calorimetry). The method permits the differentiation of the acceleration period into three stages. It is emphasized that the crystallization of the product phases of C3S hydration, namely C-S-H phases and portlandite, is responsible for the end of the induction period.
Some CAAD packages offer additional support for the optimization of spatial configurations, but the possibilities for applying optimization are usually limited either by the complexity of the data model or by the constraints of the underlying CAAD system. Since we lacked a system that allows experimenting with optimization techniques for the synthesis of spatial configurations, we have developed a collection of methods over the past years. This collection is now combined in the presented open-source library for computational planning synthesis, called CPlan. The aim of the library is to provide an easy-to-use programming framework with a flat learning curve for people with basic programming knowledge. It offers an extensible structure that allows new customized parts to be added for various purposes. In this paper the existing functionality of the CPlan library is described.
The increasing success of BIM (Building Information Modeling) and the emergence of its implementation in 3D construction models have paved the way for improving the scheduling process. Recent research on the application of BIM in scheduling has focused on quantity take-off, duration estimation for individual trades, schedule visualization, and clash detection.
Several experiments have indicated that the lack of detailed planning causes about 30 % non-productive time and stacking of trades. Nevertheless, detailed planning has still not been implemented in practice, despite receiving a lot of interest from researchers. The reason is the huge amount and complexity of the input data: creating a detailed plan requires the time-consuming manual decomposition of activities and the collection and calculation of the relevant detailed information. Moreover, the coordination of detailed activities requires much effort to deal with their complex constraints.
This dissertation aims to support the generation of detailed schedules from a rough schedule. It proposes a model for automated detailing of 4D schedules by integrating BIM, simulation and Pareto-based optimization.
Many heritage buildings face a conflict between the structural refurbishment required for contemporary use and the potential threat this poses to the historic fabric. Reasons include rising energy costs for building operation, contemporary requirements for comfort and occupational safety, and the need to avoid damage to the fabric caused by structural deficiencies in thermal and moisture protection. At the same time, many buildings depend on regular use and management to secure their preservation at all. In this field of tension, the energy retrofitting of heritage buildings often fails because of the seemingly unsolvable conflict between preserving the original fabric on the one hand and the necessary energetic optimization of the building envelope on the other. The objective of this case study is the exemplary development of a hygrothermally sound, heritage-compatible retrofitting strategy, using a post-war modernist administration building as an example, as a contribution to resolving this conflict.
On a modern construction site, logistics processes are the key to economical project delivery. This applies not only to the structural shell, where fabrication and logistics processes on site are very closely interlinked, but even more to the fit-out phase, in which supposedly independent fit-out contractors compete with each other in confined space for the best delivery and installation conditions.
Starting from a current large construction site in Jena, various variants of an efficient construction logistics system are developed and their implementation on site is prepared.
The focus of the thesis is to process measurements acquired from a continuous monitoring system at a railway bridge. Temperature, strain and ambient vibration records are analysed, and two main directions of investigation are pursued.
The first and most demanding task is to develop processing routines able to extract modal parameters from ambient vibration measurements. For this purpose, reliable experimental models are achieved on the basis of a stochastic system identification (SSI) procedure. A fully automated algorithm based on three-stage clustering is implemented to perform a modal parameter estimation for every single measurement. After selecting a baseline of modal parameters, the evolution of eigenfrequencies is studied and correlated to environmental and operational factors.
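Correlating eigenfrequencies with an environmental factor such as temperature typically comes down to a least-squares fit. A minimal sketch (the monitoring values below are hypothetical, chosen only to illustrate the typical negative temperature correlation):

```python
def linear_fit(x, y):
    """Ordinary least squares for y ≈ a + b·x,
    e.g. eigenfrequency (Hz) against temperature (°C)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    a = my - b * mx
    return a, b

# hypothetical data: frequency drops slightly as temperature rises
temp = [-5.0, 0.0, 10.0, 20.0, 30.0]
freq = [4.12, 4.10, 4.06, 4.02, 3.98]
a, b = linear_fit(temp, freq)   # b < 0: negative temperature correlation
```

In practice the baseline modal parameters would be regressed on temperature (and operational factors) so that environmental variation can be separated from potential structural change.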
The second aspect deals with the structural response to passing trains. The corresponding triggered records of strain and temperature are processed, and their assessment is accomplished using the average strain induced by each train as the reference parameter.
Three influences due to speed, temperature and loads are distinguished and treated individually. An attempt to estimate the maximum response variation due to each factor is also carried out.