This thesis aims to contribute to the New Public Management model (Neues Steuerungsmodell) of public administration at the state level in Germany. It examines the construction of an economic model for coordinating government-use properties (Dienstliegenschaften) at the state level.
The analysis of government-use properties shows that they constitute a notional internal easement granted by the state as economic agent to the state as sovereign.
The analysis of the property administration shows that it serves primarily as a controlling instrument for the flow of information underlying decisions about government-use properties between the sovereign and the economic agent.
The transaction-cost analysis demonstrates that coordination is achieved most efficiently within the state's own organization, through decentralized task concentration in the form of a shared service center (SSC).
The analysis of rights of action and disposal shows that the rights of disposal over government-use properties remain with the ministries and users; only the right of action, i.e. performing the task itself, has passed to the property administration.
The principal-agent analysis assigns the property administration the role of a vicarious agent of the economic agent. As a supporting organizational unit without decision-making power of its own, the property administration handles the monetary settlement of transactions between the economic agent and the sovereign.
These theses yield monetary courses of action that feed into the construction of the model. The model places the economic agent, as the decision-maker over resource deployment, at the centre of consideration; the sovereign is merely the holder of claims against the economic agent, and the property administration is the latter's real-estate representative.
Taking this constellation into account, and in keeping with the credo of New Public Management (NPM), the model and its institutional arrangement are described following the established models of the private sector. Based on the findings, a property administration is outlined that can satisfy both the actual business requirements of the real-estate industry and the conditions of the budgetary environment of public administration.
The real model thus constructed also serves as a point of comparison with the tenant-landlord model applied in practice as the ideal model. The comparison shows that the analysis of civil-law institutions yields no right in personam, but it does yield a right in rem. This suggests that the tenant-landlord model in the governmental sphere is a case of model Platonism.
Structures under wind action can exhibit various aeroelastic interaction phenomena, which can lead to destructive and catastrophic events. Such unstable interaction can, however, be exploited for small-scale aeroelastic energy harvesting. A proper understanding and prediction of fluid–structure interaction (FSI) phenomena is therefore crucial in many engineering fields. This research develops coupled FSI models that extend the applicability of Vortex Particle Methods (VPM) to the numerical analysis of the complex FSI of thin-walled flexible structures under steady and fluctuating incoming flows. In this context, the flow around deforming thin bodies is analysed using two-dimensional and pseudo-three-dimensional implementations of VPM, while the structural behaviour is modelled and analysed using the Finite Element Method. A partitioned coupling approach is adopted because of the flexibility it offers in using different mathematical procedures for the fluid and the solid mechanics. The developed coupled models are validated against several benchmark FSI problems from the literature. Finally, the models are applied to several fundamental and application-oriented FSI problems of different thin-walled flexible structures, irrespective of their size.
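The partitioned coupling approach can be pictured as a fixed-point iteration between two black-box solvers that exchange interface data. The sketch below is a minimal illustration, not the thesis code: `solve_fluid` and `solve_structure` are hypothetical toy stand-ins for the VPM and FEM solvers, and a constant under-relaxation factor is assumed.

```python
# Minimal sketch of one partitioned coupling step: fluid and structure are
# solved by separate black-box solvers and iterated to convergence at the
# interface (Gauss-Seidel sub-iterations with constant under-relaxation).
# Both solver stubs below are hypothetical toy responses, not VPM/FEM code.

def solve_fluid(displacement):
    # stand-in for a VPM step: interface loads for a given displacement
    return [-2.0 * d for d in displacement]

def solve_structure(loads):
    # stand-in for an FEM step: interface displacements for given loads
    return [0.1 * f for f in loads]

def coupled_step(d0, tol=1e-10, relax=0.5, max_iter=100):
    """Iterate fluid -> structure until the interface displacement settles."""
    d = list(d0)
    for _ in range(max_iter):
        loads = solve_fluid(d)
        d_new = solve_structure(loads)
        residual = max(abs(a - b) for a, b in zip(d_new, d))
        d = [di + relax * (dni - di) for di, dni in zip(d, d_new)]
        if residual < tol:
            break
    return d

d = coupled_step([1.0, 0.5])  # iterates toward the coupled equilibrium
```

The appeal of this structure, as the abstract notes, is that either stub can be swapped for an entirely different discretization without touching the other side of the coupling.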
Biomembranes are selectively permeable barriers that separate the internal components of the cell from its surroundings. They exhibit remarkable mechanical behavior, characterized most noticeably by their fluid-like in-plane and solid-like out-of-plane behavior. Vesicles have been studied with discrete models such as Molecular Dynamics, Monte Carlo methods, Dissipative Particle Dynamics, and Brownian Dynamics. Those methods, however, tend to have high computational costs, which restricts them to studying atomistic details. To broaden the scope of this research, we resort to continuum models, in which the atomistic details of the vesicles are neglected and the focus shifts to the overall morphological evolution. Within the continuum framework, vesicle morphology has been studied extensively; however, most of those studies were limited to the mechanical response of vesicles, considering only the bending energy and seeking the solution by minimizing the total energy of the system. Most of the literature is divided between two geometrical representation methods, the sharp interface methods and the diffusive interface methods, both of which track boundaries and interfaces implicitly. In this research, we focus on solving two non-trivial problems: in the first, we study a constrained Willmore problem coupled with an electrical field, and in the second, we investigate the hydrodynamics of a vesicle doublet suspended in an external viscous fluid flow.
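The diffusive-interface idea referred to above can be illustrated in one dimension: assuming the standard double-well phase-field energy, the equilibrium profile across the membrane is a tanh transition whose width is set by a regularization parameter. The function name and sign convention below are illustrative assumptions, not taken from the thesis.

```python
# Illustrative 1D diffuse-interface profile: for the standard double-well
# phase-field energy, the equilibrium transition across the membrane is
# phi(x) = tanh(x / (sqrt(2) * eps)), where the regularization parameter
# eps sets the interface width.
import math

def phase_field_profile(x, eps):
    """Equilibrium phase field: -1 on one side of the membrane, +1 on the other."""
    return math.tanh(x / (math.sqrt(2.0) * eps))
```

Halving eps halves the transition width, which is the usual trade-off between a sharper membrane representation and a finer mesh-resolution requirement.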
For the first problem, we solve a constrained Willmore problem coupled with an electrical field, using isogeometric analysis to study the morphological evolution of vesicles subjected to static electrical fields. The model comprises two phases, the lipid bilayer and the electrolyte. This two-phase problem is modeled with the phase-field method, a subclass of the diffusive interface methods mentioned earlier. The bending, flexoelectric, and dielectric energies of the model are reformulated in terms of the phase-field parameter. A modified augmented Lagrangian method (ALM) is used to satisfy the constraints while maintaining numerical stability and a relatively large time step; this approach guarantees satisfaction of the constraints at each time step over the entire temporal domain.
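The augmented-Lagrangian strategy can be illustrated on a toy constrained minimization, not the vesicle energy: each outer step minimizes the augmented functional, then updates the multiplier, so the constraint is satisfied increasingly tightly over the iterations. All names and parameter values below are illustrative assumptions.

```python
# Toy illustration of the augmented-Lagrangian idea: minimize x^2 + y^2
# subject to x + y = 1 (exact solution x = y = 0.5). Each outer step
# minimizes the augmented functional by plain gradient descent, then
# updates the multiplier. The thesis applies a modified ALM to the
# vesicle volume/area constraints instead of this toy problem.

def alm_solve(mu=10.0, outer=20, inner=200, lr=0.01):
    x, y, lam = 0.0, 0.0, 0.0
    for _ in range(outer):
        for _ in range(inner):
            g = x + y - 1.0                    # constraint residual
            # gradient of L = x^2 + y^2 + lam*g + (mu/2)*g^2
            gx = 2.0 * x + lam + mu * g
            gy = 2.0 * y + lam + mu * g
            x, y = x - lr * gx, y - lr * gy
        lam += mu * (x + y - 1.0)              # multiplier update
    return x, y

x_opt, y_opt = alm_solve()
```

Unlike a pure penalty method, the multiplier update lets the constraint be met tightly without driving the penalty weight mu to values that would stiffen the problem, which mirrors the stability argument made in the abstract.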
In the second problem, we study the hydrodynamics of a vesicle doublet suspended in an external viscous fluid flow. The vesicles in this part of the research are likewise modeled with the phase-field model. The bending energy and the energies enforcing the global volume and area are considered, and the local inextensibility condition is ensured by adding a further equation to the system. To prevent the vesicles from overlapping numerically, we introduce an interaction energy that maintains a short-range repulsion between them. The fluid flow is modeled with the incompressible Navier-Stokes equations, and the evolution of the vesicles in time is modeled with two advection equations, each describing the advection of one vesicle by the fluid flow. To overcome the velocity-pressure saddle-point system, we apply the Residual-Based Variational MultiScale (RBVMS) method to the Navier-Stokes equations and solve the coupled systems using isogeometric analysis. We study vesicle-doublet hydrodynamics in shear flow, planar extensional flow, and parabolic flow under various configurations and boundary conditions.
The results reveal several interesting points about the electrodynamic and hydrodynamic responses of single vesicles and vesicle doublets. First, isogeometric analysis as a numerical tool is able to model and solve fourth-order PDEs in a primal variational framework with high efficiency and accuracy, owing to the properties of the NURBS basis functions, without the need to reduce the order of the PDE through an intermediate formulation. Refinement, whether by knot insertion, order elevation, or both, is far easier to perform than in traditional mesh-based methods. Given the wide variety of phenomena in the natural sciences and engineering that are mathematically modeled by high-order PDEs, isogeometric analysis is among the most robust methods for addressing such problems, as the basis functions easily attain high global continuity.
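The high inter-element continuity credited to NURBS above comes from their underlying B-spline bases: a degree-p spline with simple interior knots is C^(p-1) across element boundaries. A minimal sketch of the Cox-de Boor recursion (illustrative code, not from the thesis) shows how smooth, non-negative bases forming a partition of unity arise from a knot vector:

```python
# Cox-de Boor recursion for B-spline basis functions, the building block
# of the NURBS bases used in isogeometric analysis.

def bspline_basis(i, p, u, knots):
    """Value of the i-th B-spline basis function of degree p at parameter u."""
    if p == 0:
        return 1.0 if knots[i] <= u < knots[i + 1] else 0.0
    val = 0.0
    if knots[i + p] != knots[i]:
        val += (u - knots[i]) / (knots[i + p] - knots[i]) \
            * bspline_basis(i, p - 1, u, knots)
    if knots[i + p + 1] != knots[i + 1]:
        val += (knots[i + p + 1] - u) / (knots[i + p + 1] - knots[i + 1]) \
            * bspline_basis(i + 1, p - 1, u, knots)
    return val

# quadratic (p = 2) basis on an open knot vector: 5 basis functions,
# non-negative and summing to one at any interior parameter value
knots = [0.0, 0.0, 0.0, 1.0, 2.0, 3.0, 3.0, 3.0]
vals = [bspline_basis(i, 2, 1.5, knots) for i in range(5)]
```

Knot insertion and order elevation mentioned in the abstract act on exactly this knot vector and degree, which is why refinement needs no remeshing of the geometry.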
On the application side, we study the morphological evolution of vesicles based on the electromechanical liquid-crystal model in a 3D setting. This model of vesicle evolution consists of time-dependent, highly nonlinear, high-order PDEs, which are nontrivial to solve and require robust numerical methods such as isogeometric analysis. We conclude that, under increasing magnitudes of the electric field, the vesicle deforms from its original spherical shape toward an oblate-like shape. This evolution is affected by many factors and requires fine-tuning of several parameters, chiefly the regularization parameter controlling the thickness of the diffusive interface, but it is most affected by the method used to enforce the constraints. In the presence of an electrical field, the penalty method tends to lock onto the initial phase field and prevent any evolution, whereas a modified version of the ALM proves sufficiently stable and accurate to let the phase field evolve while satisfying the constraints at every time step. We additionally show the effect of including the flexoelectric nature of biomembranes in the computation, and how it, together with different conductivity ratios, affects the shape evolution. All examples were solved with a staggered scheme, which reduces the computational cost significantly.
For the second part of the research, we consider a vesicle doublet suspended in a shear flow, in a planar extensional flow, and in a parabolic flow. When the doublet is suspended in a shear flow, the vesicles either slip past each other or slide on top of each other, depending on the vertical offset, i.e. the vertical distance between the centers of mass of the two vesicles, and on the applied velocity profile. When the doublet is suspended in a planar extensional flow in a configuration resembling a junction, the time at which the two vesicles separate depends largely on this vertical offset, once as much fluid as possible has been displaced from between them. When the vesicles are suspended in a tubular channel with a parabolic flow, however, they develop a parachute-like shape as they converge toward each other, before exiting the computational domain through the predetermined outlets; this shape is strongly affected by the height of the tubular channel. The essential velocity boundary conditions are imposed both weakly and strongly: the weak implementation is used when the velocity profile is defined on the entire boundary, the strong implementation when it is defined on only part of the boundary. The strong imposition is achieved by selectively applying the boundary conditions to a predetermined set of elements in a parallelized code, which allows us to simulate vesicle hydrodynamics in a computational domain with multiple inlets and outlets. We also investigate the hydrodynamics of oblate-like vesicles in a parabolic flow. This work was carried out in a 2D configuration because of the immense computational load resulting from the large number of degrees of freedom, but we are actively seeking to extend it to 3D settings and to test a broader set of parameters and geometric configurations.
Driven by socio-political demands, many branches of industry are currently rethinking their practices with regard to efficiency and ecology, but also digitalization and Industry 4.0. In this respect the construction industry, compared with industries such as IT, automotive, or mechanical engineering, is still at the beginning.
Yet the potential for savings and optimization is particularly high in construction, owing to the large quantities of materials processed. The international debate on resources and climate is increasingly leading to new concepts being developed and tested in cement and concrete production as well. On the one hand, intensive research and development is devoted to alternative, climate-friendly cements; on the other hand, innovative material-saving concepts are being examined on the concrete-production side, as the current development of 3D printing with concrete shows.
Owing to the high demands on the design, quality, and longevity of structures, precast concrete elements often have advantages over in-situ concrete. High surface quality and durability, but also uniformity and weather-independent production, are characteristics repeatedly mentioned in connection with precast elements. It is therefore essential that the concrete production process in the precast plant is also critically scrutinized, so that a more efficient and more sustainable production of precast concrete elements becomes possible.
In the production of concrete elements in a precast plant, a particular focus lies on optimizing the development of early strength. High early strength is a prerequisite for a high-frequency formwork cycle, which enables work in two or three shifts. To ensure high early strength, highly reactive cements are often used in combination with high cement contents in the concrete and/or a heat treatment. Under this premise, an ecologically sustainable concrete production with a reduced CO2 footprint is not possible.
This thesis introduces a new method for accelerating concrete, in which the constituents cement and water (the cement suspension) are pretreated with ultrasound. The starting point is earlier work on the influence of ultrasound on the hydration of cement and of its main constituent tricalcium silicate (C3S), which is deepened here. In addition, the production of concrete with ultrasound is examined at pilot-plant scale. The experience gained was used to develop the ultrasonic concrete-mixing system further and to use it for industrial concrete production for the first time.
The effects of ultrasound on the hydration of C3S are first investigated in greater depth. This was done by measuring the electrical conductivity, analyzing the ion concentration (ICP-OES), thermal analysis, measuring the BET surface area, and optical evaluation by scanning electron microscopy (SEM). The focus lies on the first hours of hydration, i.e. the period most strongly influenced by the ultrasound treatment.
The investigations show that the accelerating effect of ultrasound in dilute C3S suspensions (water/solid ratio = 50) depends strongly on the portlandite concentration of the solution: the lower the portlandite concentration, the greater the acceleration. Complementary analyses of the ion concentration of the solution and of the hydrated C3S show that the first hydrate phases are present immediately after sonication (after about 15 minutes of hydration). The acceleration initiated by ultrasound is strongest within the first 24 hours and then gradually subsides. The investigations conclude with experiments on C3S pastes (water/solid ratio = 0.50), which confirm the observations on the dilute suspensions and show an earlier appearance and a larger proportion of C-S-H phases as a result of sonication. It is concluded that the C-S-H phases generated immediately by ultrasound serve as nucleation seeds during the subsequent reaction, so that ultrasound can be regarded as an in-situ seeding technique. Optically, the C-S-H phases of the sonicated pastes not only appear much earlier but are also smaller and finely distributed over the surface of the C3S. This effect, too, is regarded as beneficial for the subsequent regular structure formation.
In the next step, the focus of investigation is therefore extended from the C3S model system to Portland cement. The question pursued is how a change in the composition of the cement suspension (w/c ratio, superplasticizer dosage) or a change in the ultrasonic energy input affects the flow properties and the hardening behavior.
To consider the influence of several factors simultaneously, models describing the behavior of the individual factors are built with the aid of statistical design of experiments. The flow properties were characterized by the slump flow and the spread of the cement suspensions; the acceleration of hardening was quantified by determining the time of normal setting of the cement suspension.
The results of these investigations clearly show that the flow properties and the onset of setting do not change linearly with increasing ultrasonic energy input. Particularly for the workability of the Portland cement suspensions, a specific energy input emerges up to which the slump flow and the spread increase; beyond this point, defined as the critical energy input, both decrease again. The occurrence of this point depends strongly on the w/c ratio: with decreasing w/c ratio, the energy input that still improves the flow properties is reduced, and at very low w/c ratios (< 0.35) no improvement can be observed at all.
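The non-monotonic dependence on energy input described above is the kind of behavior a quadratic response-surface model from the design of experiments can capture: the sketch below fits such a model by least squares and reads off its vertex as the critical energy input. The data points are purely hypothetical placeholders, not measured values from the thesis.

```python
# Hedged sketch of the response-surface idea: fit spread ~ b0 + b1*E + b2*E^2
# to (hypothetical) spread measurements at several ultrasonic energy inputs,
# and locate the maximum. Real models in the thesis come from measured data
# and statistically designed experiments.
import numpy as np

energy = np.array([0.0, 25.0, 50.0, 75.0, 100.0])       # energy input (a.u.)
spread = np.array([550.0, 590.0, 610.0, 595.0, 540.0])  # spread in mm (hypothetical)

# design matrix for the quadratic model
X = np.column_stack([np.ones_like(energy), energy, energy ** 2])
b0, b1, b2 = np.linalg.lstsq(X, spread, rcond=None)[0]

# a negative quadratic coefficient gives a maximum at the vertex,
# i.e. the fitted "critical energy input"
critical_energy = -b1 / (2.0 * b2)
```

The same construction extends to several factors at once (e.g. w/c ratio and energy input), which is how a designed experiment can separate their individual and interaction effects.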
If superplasticizer is added to the cement suspension before sonication, the properties of the suspension can be influenced substantially. In sonicated suspensions containing superplasticizer, the superplasticizer-induced retardation of the onset of setting could be clearly reduced, depending on the energy input. Furthermore, the energy input required to shorten the onset of setting by a fixed amount is significantly lower for suspensions containing superplasticizer.
Based on these observations on cement suspensions, the influence of ultrasound is divided into a dispersing and an accelerating effect. At high w/c ratios the dispersing effect of ultrasound dominates and the onset of setting is moderately shortened; at lower w/c ratios the accelerating effect dominates, while no influence, or even a negative one, on the workability can be observed.
In the next step, the investigations are extended to concrete scale using a pilot plant, examining how two-stage mixing with ultrasound (i.e. producing a cement suspension in a first step and mixing it with the aggregate in a second step) influences the fresh and hardened concrete properties. The plant technology required for sonicating larger quantities of cement suspension introduces further factors influencing the suspension (e.g. pumping speed, temperature, pressure). Within these investigations, one concrete mix was produced with and without ultrasound and the fresh and hardened concrete properties were compared. In addition, an extensive test program was carried out to determine key durability parameters. Building on the experience with the pilot plant, the ultrasonic premixing system was developed further in several stages and finally used for concrete production in a concrete plant.
The concrete investigations show a marked increase in the early compressive strength of the Portland cement concrete: the compressive strength of 15 MPa required for demolding concrete elements is reached considerably earlier. The spread of the concretes (w/c ratio = 0.47) is slightly reduced by the sonication, which agrees with the results on pure cement suspensions. If overpressure is applied in the sonication chamber or the suspension is cooled during sonication, the spread can be slightly increased; however, the high early compressive strengths of the uncooled or unpressurized variant are then no longer reached.
The investigations show that the potential of ultrasonic acceleration can be used either to reduce the strength class of the cement without loss of performance (from CEM I 52,5 R to CEM I 42,5 R) or to substitute a four-hour heat treatment entirely. The durability of the concretes is not adversely affected: in the tests on sulfate resistance, carbonation, chloride penetration, and freeze-thaw resistance, neither a positive nor a negative influence of the sonication can be inferred, and an investigation of the alkali-silica reaction likewise shows no negative influence of the ultrasound treatment.
In the investigations building on this, the plant technology is developed further in order to adapt the ultrasound treatment more closely to real concrete production. In the first iteration, plant concept 1 used in the concrete investigations is modified (from in-line to batch sonication) and used as plant concept 2 for further studies. A new concrete mix with a higher w/c ratio (0.52) is used, and here too the compressive strengths can be increased considerably. In contrast to the first concrete, the spread of this mix is increased, which is exploited to reduce the superplasticizer dosage. This likewise agrees with the observations on pure Portland cement suspensions, for which a marked improvement in flowability at higher w/c ratios is described.
For this concrete mix, a comparison is made with a commercially available hardening accelerator (synthetic C-S-H seeds). The accelerating effect of the two technologies proves comparable, and combining them leads to a further marked increase in early strength, so that a synergistic effect can be assumed.
The last iteration, plant concept 3, describes how the mixing system was significantly developed further within a university spin-off and used for the first time for concrete production in a concrete plant. The further development of the ultrasonic mixing system focused on practicability, and it is shown that the ultrasound-assisted mixing system can markedly accelerate the development of compressive strength at plant scale as well. This establishes the prerequisite for the ecologically sustainable optimization of a precast concrete under real production conditions.
This thesis, written within the doctoral program Art and Design at the Bauhaus-Universität, investigates the socio-interactive potential of video telephony in the context of closeness and connectedness, with a focus on self-image, embodiment, and turn-taking (Rederechtswechsel).
Video telephony has established itself as a form of communication in people's everyday lives, as the experience of the Covid-19 pandemic suggests, and will not disappear from them in the near future. Owing to its possibilities and achievements, it is by now an everyday reality that communication in both private and business contexts takes place through a wide variety of channels. Video telephony not only performs a central function here but also plays an outstanding role in the supposed reproduction of face-to-face communication in digital space, and it is used as a matter of course for interpersonal exchange. It is precisely here that this research begins. Its central aim was a dedicated examination of video telephony as a research object from the perspectives of cultural and technological history as well as media, perception, and communication theory, relating analytical and pheno-semiotic perspectives to one another (e.g., conditions of perception, characteristics of interaction, realized communication processes). A related goal was to address a research question that is as timely as it is relevant, one that, in addition to the cultural tendencies toward technization and mediatization in institutional and private milieus, also outlines a conditio sine qua non of pandemic (mass) communication.
The thesis is thus located primarily in the field of product and interaction design. Beyond that, it aimed to present and justify video telephony as an independent form of communication, one characterized by communicative particularities of its own, which manifest themselves in its respective use and in specific conditions of perception, and which consolidate video telephony as a turn-taking medium avant la lettre. The thesis set out to show that video telephony is to be understood not as a diminished form of face-to-face communication but as an independent mediatization and communication event; not as an arbitrary form of audio-visual remote communication developed linearly from the telephone, but as one whose designed (moving-image) technicity offers an independent functional repertoire, which in turn stabilizes an innovative communication milieu in the context of a turn-taking mediality.
The reduction of the cement clinker content is an important prerequisite for improving the CO2 footprint of concrete. Nevertheless, the durability of such concretes must be sufficient to guarantee a satisfactory service life of structures. Salt frost scaling resistance is a critical factor in this regard, as it is often diminished at increased clinker substitution rates. Furthermore, only insufficient long-term experience exists for such concretes. A high salt frost scaling resistance thus cannot be achieved by applying only descriptive criteria, such as the concrete composition. It is therefore to be expected that, in the long term, a performance-based service life prediction will replace the descriptive concept.
To achieve the important goal of clinker reduction for concretes in cold and temperate climates as well, it is important to understand the mechanisms underlying salt frost scaling. However, conflicting damage theories dominate the current state of the art. The goal of this thesis was consequently to evaluate existing damage theories and to examine them experimentally. It was found that only two theories have the potential to describe the salt frost attack satisfactorily: the glue spall theory and the cryogenic suction theory.
The glue spall theory attributes the surface scaling to the interaction of an external ice layer with the concrete surface. Only when moderate amounts of deicing salt are present in the test solution can the resulting mechanical properties of the ice cause scaling. However, the results in this thesis indicate that severe scaling also occurs at deicing salt levels at which the ice is much too soft to damage concrete. Thus, the glue spall theory cannot account for all aspects of salt frost scaling.
The cryogenic suction theory is based on the eutectic behavior of salt solutions, which consist of two phases between the freezing point and the eutectic temperature: water ice and liquid brine. The liquid brine acts as an additional moisture reservoir, which facilitates the growth of ice lenses in the surface layer of the concrete. The experiments in this thesis confirmed that the ice formation in hardened cement paste increases due to the suction of brine at sub-zero temperatures. The extent of additional ice formation was influenced mainly by the porosity and the chloride binding capacity of the hardened cement paste.
Consequently, the cryogenic suction theory plausibly describes the actual generation of scaling, but it has to be expanded by some crucial aspects to represent the salt frost scaling attack completely. The most important aspect is the intensive saturation process, which is ascribed to the so-called micro ice lens pump. Therefore, a combined damage theory was proposed that considers multiple saturation processes. Important aspects of this combined theory were confirmed experimentally.
As a result, the combined damage theory constitutes a good basis for understanding the salt frost scaling attack on concrete at a fundamental level. Furthermore, a new approach was identified to account for the reduced salt frost scaling resistance of concretes with reduced clinker content.
Finite Element Simulations of dynamically excited structures are mainly influenced by the mass, stiffness, and damping properties of the system, as well as external loads. The prediction quality of dynamic simulations of vibration-sensitive components depends significantly on the use of appropriate damping models. Damping phenomena have a decisive influence on the vibration amplitude and the frequencies of the vibrating structure. However, developing realistic damping models is challenging due to the multiple sources that cause energy dissipation, such as material damping, different types of friction, or various interactions with the environment.
This thesis focuses on thermoelastic damping, which is the main cause of material damping in homogeneous materials. The effect is caused by temperature changes due to mechanical strains. In vibrating structures, temperature gradients arise in adjacent tension and compression areas. Depending on the vibration frequency, they result in heat flows, leading to increased entropy and the irreversible transformation of mechanical energy into thermal energy.
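The frequency dependence of this dissipation mechanism can be illustrated with the classical Zener model, which estimates the thermoelastic loss factor of a thin vibrating beam in closed form. This is a textbook sketch, not the finite element approach developed in the thesis; the material values in the example are generic aluminium properties:

```python
import math

def zener_loss_factor(omega, E, alpha, T0, rho, cp, k, h):
    """Classical Zener estimate of the thermoelastic loss factor for a
    thin beam of thickness h vibrating at angular frequency omega."""
    delta_E = E * alpha**2 * T0 / (rho * cp)   # relaxation strength
    tau = h**2 * rho * cp / (math.pi**2 * k)   # thermal relaxation time
    x = omega * tau
    return delta_E * x / (1.0 + x**2)

# Example: aluminium beam, 1 mm thick, at room temperature, 1 kHz
Q_inv = zener_loss_factor(omega=2 * math.pi * 1000, E=70e9, alpha=23e-6,
                          T0=293.0, rho=2700.0, cp=900.0, k=237.0, h=1e-3)
```

The loss factor peaks at the value delta_E/2 when omega * tau = 1, i.e., when the vibration period matches the time scale of thermal diffusion across the beam thickness.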
The central objective of this thesis is the development of efficient simulation methods to incorporate thermoelastic damping in finite element analyses based on modal superposition. The thermoelastic loss factor is derived from the structure's mechanical mode shapes and eigenfrequencies. In subsequent analyses that are performed in the time and frequency domain, it is applied as modal damping.
Two approaches are developed to determine the thermoelastic loss in thin-walled plate structures, as well as three-dimensional solid structures. The realistic representation of the dissipation effects is verified by comparing the simulation results with experimentally determined data. Therefore, an experimental setup is developed to measure material damping, excluding other sources of energy dissipation.
The three-dimensional solid approach is based on the determination of the generated entropy, and therefore the generated heat per vibration cycle, which is a measure of the thermoelastic loss in relation to the total strain energy. For thin plate structures, the amount of bending energy in a modal deformation is calculated and summarized in the so-called Modal Bending Factor (MBF). The highest amount of thermoelastic loss occurs in the state of pure bending; therefore, the MBF enables a quantitative classification of the mode shapes with respect to their thermoelastic damping potential.
The results of the developed simulations are in good agreement with the experimental results and are appropriate to predict thermoelastic loss factors. Both approaches are based on modal superposition with the advantage of a high computational efficiency. Overall, the modeling of thermoelastic damping represents an important component in a comprehensive damping model, which is necessary to perform realistic simulations of vibration processes.
Material failure can be tackled by so-called nonlocal models, which introduce an intrinsic length scale into the formulation and, in the case of material failure, restore the well-posedness of the underlying boundary value problem or initial boundary value problem. Among nonlocal models, peridynamics (PD) has attracted a lot of attention, as it allows the natural transition from continuum to discontinuum and thus permits the modeling of discrete cracks without the need to describe and track the crack topology, which has been a major obstacle in traditional discrete crack approaches. This is achieved by replacing the divergence of the Cauchy stress tensor with an integral over so-called bond forces, which account for the interaction of particles. A quasi-continuum approach is then used to calibrate the material parameters of the bond forces, i.e., equating the PD energy with the energy of a continuum. One major issue for the application of PD to general complex problems is that it is limited to fairly simple material behavior and purely mechanical problems based on explicit time integration. PD has been extended to other applications, but at the cost of losing its simplicity and ease of modeling material failure. Furthermore, conventional PD suffers from instability and hourglass modes that require stabilization. It also requires the use of constant horizon sizes, which drastically reduces its computational efficiency. The latter issue was resolved by the so-called dual-horizon peridynamics (DH-PD) formulation and the introduction of the duality of horizons.
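The bond-force integral that replaces the stress divergence can be illustrated with a minimal one-dimensional, bond-based sketch. This is a simplification for illustration only (constant horizon, a hypothetical constant micromodulus c, no damage law), not the dual-horizon formulation discussed above:

```python
import numpy as np

def pd_internal_force(u, x, horizon, c):
    """Bond-based peridynamic internal force density in 1D: for each
    particle i, sum the pairwise bond forces over all neighbours j
    within the horizon (c is an illustrative constant micromodulus)."""
    n = len(x)
    dx = x[1] - x[0]                       # uniform particle spacing
    f = np.zeros(n)
    for i in range(n):
        for j in range(n):
            xi = x[j] - x[i]               # reference bond vector
            if j == i or abs(xi) > horizon:
                continue
            eta = u[j] - u[i]              # relative displacement
            stretch = (abs(xi + eta) - abs(xi)) / abs(xi)
            f[i] += c * stretch * np.sign(xi + eta) * dx
    return f

# Homogeneous stretch: every bond carries the same force, so the net
# force vanishes on interior particles (as in a continuum).
x = np.linspace(0.0, 1.0, 101)
u = 0.01 * x                               # linear displacement field
f = pd_internal_force(u, x, horizon=0.031, c=1.0)
```

Under the linear displacement field, the left- and right-pointing bond forces cancel for interior particles, while the boundary particles feel a net force, mirroring the traction terms of the local theory.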
Within the nonlocal operator method (NOM), the concept of nonlocality is further extended; NOM can be considered a generalization of DH-PD. Combined with the energy functionals of various physical models, the nonlocal forms based on the dual-support concept can be derived. In addition, the variation of the energy functional allows implicit formulations of the nonlocal theory. While traditional integral equations are formulated in an integral domain, the dual-support approaches are based on dual integral domains. One prominent feature of NOM is its compatibility with variational and weighted residual methods. The NOM yields a direct numerical implementation based on the weighted residual method for many physical problems without the need for shape functions. Only the definition of the energy or of the boundary value problem is needed, which drastically simplifies the implementation. The nonlocal operator plays a role equivalent to that of the derivatives of the shape functions in meshless methods and finite element methods (FEM). Based on the variational principle, the residual and the tangent stiffness matrix can be obtained with ease by a series of matrix multiplications. In addition, NOM can be used to derive many nonlocal models in strong form.
The principal contributions of this dissertation are the implementation and application of NOM, and also the development of approaches for dealing with fractures within the NOM, mostly for dynamic fractures. The primary coverage and results of the dissertation are as follows:
-The first/higher-order implicit NOM and explicit NOM, including a detailed description of the implementation, are presented. The NOM is based on so-called supports, dual-supports, nonlocal operators, and an operator energy functional ensuring stability. The nonlocal operator is a generalization of the conventional differential operators. Combined with the method of weighted residuals and variational principles, NOM establishes the residual and tangent stiffness matrix of the operator energy functional through simple matrix operations, without the need for shape functions as in other classical computational methods such as FEM. NOM requires only the definition of the energy, drastically simplifying its implementation. For the sake of conciseness, the implementation in this chapter is restricted to linear elastic solids, though the NOM can handle more complex nonlinear problems. An explicit nonlocal operator method for the dynamic analysis of elastic solid problems is also presented. The explicit NOM avoids the calculation of the tangent stiffness matrix required in the implicit NOM model; its time stepping is based on the velocity Verlet algorithm. The NOM is flexible and efficient for solving partial differential equations (PDEs), and it is straightforward to extend it to other complicated physical phenomena described by one or a set of PDEs. Several numerical examples are presented to show the capabilities of the method.
-A nonlocal operator method for the dynamic analysis of (thin) Kirchhoff plates is proposed. The nonlocal Hessian operator is derived from a second-order Taylor series expansion. NOM is higher-order continuous, which is exploited for thin plate analysis that requires $C^1$ continuity. The nonlocal dynamic governing formulation and the operator energy functional for Kirchhoff plates are derived from a variational principle. The velocity Verlet algorithm is used for time discretization. After confirming the accuracy of the nonlocal Hessian operator, several numerical examples are simulated with the nonlocal dynamic Kirchhoff plate formulation.
-A nonlocal fracture model is developed and applied to the simulation of quasi-static and dynamic fracture using the NOM. The nonlocal weak form of the phase field and the associated strong form are derived from a variational principle. The NOM requires only the definition of the energy. Both a nonlocal implicit phase field model and a nonlocal explicit phase field model for fracture are presented; the first approach is better suited for quasi-static fracture problems, while the key application of the latter is dynamic fracture. To demonstrate the performance of the approach, several benchmark examples for quasi-static and dynamic fracture are solved.
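The explicit variants above advance the solution with the velocity Verlet scheme. A generic sketch of that integrator, applied here to a single linear oscillator rather than an assembled nonlocal force vector, illustrates the update structure:

```python
import numpy as np

def velocity_verlet(u0, v0, accel, dt, steps):
    """Velocity Verlet time integration for u'' = accel(u).
    Second-order accurate and symplectic; accel stands in for the
    (mass-normalized) force vector of the discretized problem."""
    u, v = np.array(u0, float), np.array(v0, float)
    a = accel(u)
    for _ in range(steps):
        u = u + dt * v + 0.5 * dt**2 * a    # position update
        a_new = accel(u)                    # forces at new positions
        v = v + 0.5 * dt * (a + a_new)      # velocity update
        a = a_new
    return u, v

# Harmonic oscillator u'' = -u: period 2*pi, energy nearly conserved
u, v = velocity_verlet([1.0], [0.0], lambda u: -u, dt=0.01, steps=628)
```

Because the scheme is symplectic, the discrete energy 0.5*(u^2 + v^2) stays close to its initial value over long integration times, which is why it is a common choice for explicit dynamics.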
The aim of this study is to control the spurious oscillations that develop around discontinuous solutions of linear and nonlinear wave equations, i.e., hyperbolic partial differential equations (PDEs). The equations include both first-order and second-order (wave) hyperbolic systems. In these systems, even smooth initial conditions or smoothly varying source (load) terms can lead to discontinuous propagating solutions (fronts). For first-order hyperbolic PDEs, the concept of central high resolution schemes is integrated with multiresolution-based adaptation to properly capture both discontinuous propagating fronts and the effects of fine-scale responses on those of larger scales in a multiscale manner. This integration leads to using central high resolution schemes on non-uniform grids; such a simulation is, however, unstable, as central schemes were originally developed to work on uniform cells/grids. Hence, the main concern is the stable coupling of central schemes with multiresolution-based cell adapters. Regarding central schemes, the considered approaches are: 1) second-order central and central-upwind schemes; 2) third-order central schemes; 3) third- and fourth-order central weighted non-oscillatory schemes (central-WENO or CWENO); 4) piecewise parabolic methods (PPMs) obtained with two different local stencils. For these methods, the corresponding (nonlinear) stability conditions are studied and modified as well. Based on these stability conditions, several limiters are modified or developed: 1) several second-order limiters with the total variation diminishing (TVD) feature; 2) second-order uniformly non-oscillatory (UNO) limiters; 3) two third-order nonlinear scaling limiters; 4) two new limiters for PPMs.
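A concrete instance of a second-order TVD limiter of the kind listed under 1) is the classical minmod limiter; the sketch below illustrates the general idea of limited piecewise-linear reconstruction, not one of the new limiters developed in the thesis:

```python
import numpy as np

def minmod(a, b):
    """Minmod limiter: returns the argument of smaller magnitude when
    a and b share a sign, otherwise 0 (keeps the reconstruction TVD)."""
    return np.where(a * b > 0, np.where(np.abs(a) < np.abs(b), a, b), 0.0)

def limited_slopes(u):
    """Cell-wise limited slopes for a MUSCL-type piecewise-linear
    reconstruction on a uniform grid (zero slope in boundary cells)."""
    s = np.zeros_like(u)
    s[1:-1] = minmod(u[1:-1] - u[:-2], u[2:] - u[1:-1])
    return s

u = np.array([0.0, 0.0, 1.0, 1.0, 1.0])   # a discrete step
s = limited_slopes(u)                      # all slopes are clipped to 0
```

At the discontinuity the forward and backward differences disagree in sign or vanish, so all slopes are set to zero and the reconstruction creates no new extrema; on smooth monotone data the limiter returns the one-sided difference and second-order accuracy is retained.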
Numerical results show that adaptive solvers lead to cost-effective computations: in some 1-D problems, for example, the number of adapted grid points stays below 200 during the simulation, whereas 2049 points are needed on a uniform grid to reach the same accuracy. In some cases it is also confirmed that fine-scale responses have considerable effects on larger scales.
In the numerical simulation of nonlinear first-order hyperbolic systems, the two main concerns are convergence and uniqueness. The former is important due to the development of spurious oscillations, numerical dispersion, and numerical dissipation. Convergence of a numerical solution does not guarantee that it is the physical/real one (the uniqueness feature); indeed, a nonlinear system can converge to several numerical solutions, all of which are mathematically valid. In this work, convergence and uniqueness are studied directly on non-uniform grids/cells using the concepts of the local numerical truncation error and the numerical entropy production, respectively. Both of these concepts are also used for cell/grid adaptation, and their performance is compared with the multiresolution-based method. Several 1-D and 2-D numerical examples are examined to confirm the efficiency of the adaptive solver. The examples involve problems with convex and non-convex fluxes. In the latter case, due to the development of composite waves, properly capturing the physically relevant solution needs more attention. For this purpose, method adaptation appears to be essential (in parallel to the cell/grid adaptation). This new type of adaptation is also performed in the framework of the multiresolution analysis.
Regarding second-order hyperbolic PDEs (mechanical waves), the regularization concept is used to cure artificial (numerical) oscillation effects, especially for high-gradient or discontinuous solutions. Here, oscillations are removed by the regularization concept acting as a post-processor. The simulations are performed directly on the second-order form of the wave equations. It is, of course, possible to rewrite a second-order wave equation as a first-order system and then simulate that system with high resolution schemes; however, this approach increases the number of variables (especially for 3-D problems).
The numerical discretization is performed with compact finite difference (FD) formulations having desired features, e.g., spectral-like resolution or optimized-error properties. Such FD methods are designed to handle high-frequency waves (such as waves near earthquake sources). The performance of several regularization approaches is studied, both theoretically and numerically; finally, a regularization approach that properly controls the Gibbs phenomenon is recommended.
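A minimal example of a compact FD scheme is the classical fourth-order Padé scheme for the first derivative; the thesis uses spectral-like and optimized variants, so the sketch below only illustrates the implicit tridiagonal structure shared by this family of schemes:

```python
import numpy as np

def compact_derivative_periodic(f, h):
    """Fourth-order Pade (compact) first derivative on a periodic grid:
        (1/4) f'_{i-1} + f'_i + (1/4) f'_{i+1}
            = (3/2) (f_{i+1} - f_{i-1}) / (2 h).
    The tridiagonal system (with periodic wrap-around) is solved
    directly with a dense solver for clarity."""
    n = len(f)
    A = np.eye(n)
    for i in range(n):
        A[i, (i - 1) % n] = 0.25
        A[i, (i + 1) % n] = 0.25
    rhs = 1.5 * (np.roll(f, -1) - np.roll(f, 1)) / (2.0 * h)
    return np.linalg.solve(A, rhs)

# Derivative of sin(x) on a coarse periodic grid is cos(x) to ~1e-5
n = 32
x = 2.0 * np.pi * np.arange(n) / n
df = compact_derivative_periodic(np.sin(x), h=2.0 * np.pi / n)
```

Despite using only a three-point stencil on each side of the equation, the implicit coupling yields fourth-order accuracy and much better resolution of high-frequency content than an explicit central difference of the same stencil width.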
Finally, numerical results are provided to confirm the efficiency of the numerical solvers enhanced by the regularization concept. In this part, shock-like responses due to local and abrupt changes of physical properties, as well as stress wave propagation in stochastic-like domains, are studied.
The Finite Element Method (FEM) is widely used in engineering for solving Partial Differential Equations (PDEs) over complex geometries. To this end, the FEM software must be provided with a geometric model, which is typically constructed in Computer-Aided Design (CAD) software. However, FEM and CAD use different approaches for the mathematical description of the geometry, so a mesh suitable for FEM must be generated from the CAD model. This procedure is not a trivial task and can be time-consuming. The issue becomes more significant in shape and topology optimization problems, which evolve the geometry iteratively; the computational cost of mesh generation is then incurred repeatedly throughout the optimization.
The main goal of this work is to investigate the integration of CAD and CAE in shape and topology optimization. To this end, numerical tools that close the gap between design and analysis are presented. The specific objectives of this work are listed below:
• Automate the sensitivity analysis in an isogeometric framework for applications in shape optimization; applications in linear elasticity are considered.
• Develop a methodology that provides a direct link between the CAD model and the analysis mesh, so that the sensitivity analysis can be performed in terms of the design variables located in the design model.
• Develop an isogeometric method for shape and topology optimization that takes advantage of Non-Uniform Rational B-Splines (NURBS) with higher continuity as basis functions.
Isogeometric Analysis (IGA) is a framework designed to integrate design and analysis in engineering problems. The fundamental idea of IGA is to use the same basis functions that describe the geometry, usually NURBS, for the approximation of the solution fields. The advantage of integrating design and analysis is two-fold. First, the analysis stage is more accurate, since the system of PDEs is not solved on an approximated geometry but on the exact CAD model; moreover, providing a direct link between the design and analysis discretizations makes the implementation of efficient sensitivity analysis methods possible. Second, the computational time is significantly reduced because the mesh generation process can be avoided.
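The NURBS basis underlying IGA is built from B-spline basis functions, which can be evaluated with the Cox-de Boor recursion. A compact sketch (rational weights omitted for brevity; the knot vector below is a made-up example):

```python
def bspline_basis(i, p, t, knots):
    """Cox-de Boor recursion: evaluate the i-th B-spline basis function
    of degree p at parameter t on the given knot vector."""
    if p == 0:
        return 1.0 if knots[i] <= t < knots[i + 1] else 0.0
    left = right = 0.0
    d1 = knots[i + p] - knots[i]
    if d1 > 0.0:
        left = (t - knots[i]) / d1 * bspline_basis(i, p - 1, t, knots)
    d2 = knots[i + p + 1] - knots[i + 1]
    if d2 > 0.0:
        right = ((knots[i + p + 1] - t) / d2
                 * bspline_basis(i + 1, p - 1, t, knots))
    return left + right

# Quadratic basis on an open knot vector: 5 basis functions that are
# non-negative and sum to one at every parameter value
knots = [0, 0, 0, 1, 2, 3, 3, 3]
vals = [bspline_basis(i, 2, 1.5, knots) for i in range(5)]
```

The partition-of-unity and non-negativity properties checked here carry over to the NURBS basis, and repeated interior knots would lower the continuity locally, which is exactly the mechanism exploited later to reduce a boundary representation to C^0.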
Sensitivity analysis is essential for solving optimization problems when gradient-based optimization algorithms are employed. Automatic differentiation computes exact gradients by tracking the algebraic operations performed on the design variables. For the automation of the sensitivity analysis, an isogeometric framework is used. Here, the analysis mesh is obtained by successive refinements, while the coarse geometry is retained for the design domain. An automatic differentiation (AD) toolbox is used to perform the sensitivity analysis. The AD toolbox takes the code for computing the objective and constraint functions as input; using a source code transformation approach, it then outputs a code that computes the objective and constraint functions as well as their sensitivities. The sensitivities obtained from the sensitivity propagation method are compared with analytical sensitivities, which are computed using a full isogeometric approach.
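The working principle of such AD tools can be illustrated with a minimal forward-mode sketch based on dual numbers; the toolbox in the thesis uses source code transformation, which is more general, and the objective f below is a made-up example:

```python
class Dual:
    """Minimal forward-mode AD: a value and its derivative travel
    together, so every algebraic operation propagates exact
    sensitivities alongside the function evaluation."""
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val + o.val, self.dot + o.dot)
    __radd__ = __add__
    def __mul__(self, o):                  # product rule
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val * o.val,
                    self.dot * o.val + self.val * o.dot)
    __rmul__ = __mul__

def f(x):                     # example objective: f(x) = 3x^2 + 2x
    return 3 * x * x + 2 * x

x = Dual(2.0, 1.0)            # seed derivative dx/dx = 1
y = f(x)                      # y.val = f(2) = 16, y.dot = f'(2) = 14
```

The derivative is exact (no finite-difference step size is involved), which is the property exploited when propagating sensitivities from the analysis mesh back to the CAD design variables.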
The computational efficiency of AD is comparable to that of analytical sensitivities. However, the memory requirements are larger for AD. Therefore, AD is preferable if the memory requirements are satisfied. Automatic sensitivity analysis demonstrates its practicality since it simplifies the work of engineers and designers.
Complex geometries with sharp edges and/or holes cannot easily be described with NURBS. One solution is the use of unstructured meshes. Simplex-elements (triangles and tetrahedra for two and three dimensions respectively) are particularly useful since they can automatically parameterize a wide variety of domains. In this regard, unstructured Bézier elements, commonly used in CAD, can be employed for the exact modelling of CAD boundary representations. In two dimensions, the domain enclosed by NURBS curves is parameterized with Bézier triangles. To describe exactly the boundary of a two-dimensional CAD model, the continuity of a NURBS boundary representation is reduced to C^0. Then, the control points are used to generate a triangulation such that the boundary of the domain is identical to the initial CAD boundary representation. Thus, a direct link between the design and analysis discretizations is provided and the sensitivities can be propagated to the design domain.
In three dimensions, the initial CAD boundary representation is given as a collection of NURBS surfaces that enclose a volume. Using a mesh generator (Gmsh), a tetrahedral mesh is obtained. The original surface is reconstructed by modifying the location of the control points of the tetrahedral mesh using Bézier tetrahedral elements and a point inversion algorithm. This method offers the possibility of computing the sensitivity analysis using the analysis mesh. Then, the sensitivities can be propagated into the design discretization. To reuse the mesh originally generated, a moving Bézier tetrahedral mesh approach was implemented.
A gradient-based optimization algorithm is employed together with a sensitivity propagation procedure for the shape optimization cases. The proposed shape optimization approaches are used to solve some standard benchmark problems in structural mechanics. The results obtained show that the proposed approach can compute accurate gradients and evolve the geometry towards optimal solutions. In three dimensions, the moving mesh approach results in faster convergence in terms of computational time and avoids remeshing at each optimization step.
For considering topological changes in a CAD-based framework, an isogeometric phase-field based shape and topology optimization is developed. In this case, the diffuse interface of a phase-field variable over a design domain implicitly describes the boundaries of the geometry. The design variables are the local values of the phase-field variable. The descent direction to minimize the objective function is found by using the sensitivities of the objective function with respect to the design variables. The evolution of the phase-field is determined by solving the time dependent Allen-Cahn equation.
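A minimal one-dimensional sketch of this evolution is an explicit Euler step of the Allen-Cahn equation with a standard double-well potential. The parameters are illustrative, and the thesis solves the equation in an isogeometric setting rather than with the finite differences used here:

```python
import numpy as np

def allen_cahn_step(phi, dt, dx, mobility, kappa):
    """One explicit Euler step of the 1-D Allen-Cahn equation
        d(phi)/dt = -M * ( f'(phi) - kappa * laplace(phi) ),
    with the double-well potential f(phi) = (phi^2 - 1)^2 / 4,
    so f'(phi) = phi^3 - phi. Periodic boundaries via np.roll."""
    lap = (np.roll(phi, 1) - 2.0 * phi + np.roll(phi, -1)) / dx**2
    dfdphi = phi**3 - phi
    return phi + dt * (-mobility * (dfdphi - kappa * lap))

# Relax a noisy field toward the two wells phi = -1 and phi = +1,
# which implicitly separates "material" from "void" regions
rng = np.random.default_rng(0)
phi = 0.5 * rng.standard_normal(64)
for _ in range(2000):
    phi = allen_cahn_step(phi, dt=1e-3, dx=1.0, mobility=1.0, kappa=1.0)
```

The two wells act as the solid and void phases, and the gradient term kappa smooths the diffuse interface between them; in the optimization, a sensitivity-based term is added to drive the phase field toward designs with a lower objective.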
Especially for topology optimization problems that require C^1 continuity, such as flexoelectric structures, the isogeometric phase field method is of great advantage: NURBS can achieve the desired continuity more efficiently than the traditionally employed functions. The robustness of the method is demonstrated for different geometries, boundary conditions, and material configurations. The applications illustrate that, compared to piezoelectricity, the electrical performance of flexoelectric microbeams is greater under bending; in contrast, the electrical power of a structure under compression is greater with piezoelectricity.
In recent years, lightweight materials such as polymer composite materials (PNCs) have been studied and developed due to their excellent physical and chemical properties. Structures composed of these composite materials are widely used in aerospace structures, automotive components, and electrical devices. The excellent mechanical, thermal, and electrical properties of carbon nanotubes (CNTs) make them ideal fillers for enhancing the corresponding properties of polymer materials. The heat transfer of composite materials has very promising engineering applications in many fields, especially in electronic devices and energy storage equipment. It is essential in high-energy-density systems, since electronic components need heat dissipation functionality; in other words, in electronic devices the generated heat should ideally be dissipated by light and small heat sinks.
Polymeric composites consist of fillers embedded in a polymer matrix; the fillers significantly affect the overall (macroscopic) performance of the material. Common carbon-based fillers include single-walled carbon nanotubes (SWCNTs), multi-walled carbon nanotubes (MWCNTs), carbon nanobuds (CNBs), fullerene, and graphene. Additives inside the matrix have become a popular subject for researchers. Extraordinary characteristics, such as high load-bearing capacity, lightweight design, excellent chemical resistance, easy processing, and heat transfer, make the design of polymeric nanotube composites (PNCs) flexible. Because different fillers reinforce the composite in different ways, the material offers a high degree of design freedom, and structures can be tailored to the needs of specific applications. As already stated, our research focus is on SWCNT-enhanced PNCs. Since experiments are time-consuming, sometimes expensive, and cannot shed light on phenomena taking place, for instance, at the interfaces/interphases of composites, they are often complemented by theoretical and computational analysis.
While most studies are based on deterministic approaches, there is a comparatively lower number of stochastic methods accounting for uncertainties in the input parameters. In deterministic models, the output is fully determined by the parameter values and the initial conditions. However, uncertainties in input parameters such as the aspect ratio, the volume fraction, and the thermal properties of fiber and matrix need to be taken into account for reliable predictions. In this research, a stochastic multiscale method is provided to study the influence of numerous uncertain input parameters on the thermal conductivity of the composite. To this end, a hierarchical multiscale method based on computational homogenization is presented to predict the macroscopic thermal conductivity from the fine-scale structure. In order to study the inner mechanism, we use the finite element method and employ surrogate models to conduct a Global Sensitivity Analysis (GSA). The GSA quantifies the influence of the conductivity of the fiber, the conductivity of the matrix, the Kapitza resistance, the volume fraction, and the aspect ratio on the macroscopic conductivity. To this end, we compute first-order and total-effect sensitivity indices with different surrogate models.
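First-order Sobol indices of the kind computed here can be estimated with a Saltelli-type Monte Carlo scheme. The sketch below uses a made-up additive test model with known analytical indices rather than the homogenization model of the thesis:

```python
import numpy as np

def sobol_first_order(model, dim, n, rng):
    """Monte Carlo estimate of first-order Sobol indices (Saltelli-type
    estimator) for a model with independent U(0,1) inputs."""
    A = rng.random((n, dim))
    B = rng.random((n, dim))
    fA, fB = model(A), model(B)
    var = np.var(np.concatenate([fA, fB]))
    S = np.empty(dim)
    for i in range(dim):
        ABi = A.copy()
        ABi[:, i] = B[:, i]            # A with column i taken from B
        S[i] = np.mean(fB * (model(ABi) - fA)) / var
    return S

# Toy additive model Y = 4*X1 + 2*X2 + X3: the analytical first-order
# indices are 16/21, 4/21, and 1/21
model = lambda X: 4.0 * X[:, 0] + 2.0 * X[:, 1] + X[:, 2]
S = sobol_first_order(model, dim=3, n=20000, rng=np.random.default_rng(1))
```

In the thesis the expensive homogenization model is replaced by a surrogate before such sampling, since each index requires on the order of n*(dim+2) model evaluations.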
As stochastic multiscale models are computationally expensive, surrogate approaches are commonly exploited. With the emergence of high performance computing and artificial intelligence, machine learning has become a popular modeling tool for numerous applications. Machine learning (ML) is commonly used for regression, building input-output models by mapping data through rules learned by algorithms. ML methods are particularly useful for nonlinear input-output relationships when sufficient data is available, and they have also been used in the design of new materials and in multiscale analysis. Artificial neural networks and integrated (ensemble) learning, for instance, seem ideally suited for such a task: they can theoretically approximate any nonlinear relationship through the connection of neurons. These mapping relationships are employed to carry out data-driven simulations of inputs and outputs in stochastic modeling.
This research aims to develop stochastic multiscale computational models of PNCs in heat transfer. The multiscale stochastic modeling with uncertainty analysis and machine learning methods consists of the following components:
-Uncertainty Analysis. A surrogate-based global sensitivity analysis is coupled with a hierarchical multiscale method employing computational homogenization. The effect of the conductivity of the fibers and the matrix, the Kapitza resistance, the volume fraction, and the aspect ratio on the 'macroscopic' conductivity of the composite is systematically studied. All selected surrogate models consistently yield the conclusion that the most influential input parameter is the aspect ratio, followed by the volume fraction; the Kapitza resistance has no significant effect on the thermal conductivity of the PNCs. The most accurate surrogate model in terms of the R2 value is the moving least squares (MLS) model.
-Hybrid Machine Learning Algorithms. A combination of an artificial neural network (ANN) and particle swarm optimization (PSO) is applied to estimate the relationship between the uncertain input parameters and the output. The ANN is used for modeling the composite, while PSO improves the prediction performance through an optimized search for the global minimum. The thermal conductivity of the fibers and the matrix, the Kapitza resistance, the volume fraction and the aspect ratio are selected as input parameters; the output is the macroscopic (homogenized) thermal conductivity of the composite. The results show that PSO significantly improves the predictive ability of this hybrid intelligent algorithm, which outperforms traditional neural networks.
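A minimal sketch of the ANN–PSO coupling: a global-best PSO searches the flattened weights of a small tanh network. The dataset, network size and PSO coefficients below are illustrative placeholders, not the settings used in the thesis:

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny synthetic dataset standing in for the five RVE input parameters and
# the homogenized conductivity (all values here are made up).
X = rng.random((64, 5))
y = X @ np.array([0.8, 0.5, 0.05, 1.2, -0.7]) + 0.3

def net(params, X, hidden=4):
    """One-hidden-layer tanh network; 'params' is a flat weight vector."""
    n_in = X.shape[1]
    W1 = params[: n_in * hidden].reshape(n_in, hidden)
    b1 = params[n_in * hidden : n_in * hidden + hidden]
    W2 = params[n_in * hidden + hidden : -1]
    b2 = params[-1]
    return np.tanh(X @ W1 + b1) @ W2 + b2

def mse(params):
    return np.mean((net(params, X) - y) ** 2)

# Plain global-best PSO over the flattened network weights.
dim = 5 * 4 + 4 + 4 + 1
pos = rng.uniform(-1, 1, (30, dim))
vel = np.zeros_like(pos)
pbest, pbest_f = pos.copy(), np.array([mse(p) for p in pos])
gbest = pbest[pbest_f.argmin()].copy()
for _ in range(300):
    r1, r2 = rng.random((2, 30, dim))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = pos + vel
    f = np.array([mse(p) for p in pos])
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = pos[improved], f[improved]
    gbest = pbest[pbest_f.argmin()].copy()
```

PSO replaces gradient-based weight training with a population search, which is the mechanism the abstract credits for the improved global-minimum search.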
-Stochastic Integrated Machine Learning. A stochastic, integrated-machine-learning-based multiscale approach for predicting the macroscopic thermal conductivity of PNCs is developed. Seven types of machine learning models are exploited in this research, namely Multivariate Adaptive Regression Splines (MARS), Support Vector Machine (SVM), Regression Tree (RT), Bagging Tree (Bag), Random Forest (RF), Gradient Boosting Machine (GBM) and Cubist. They are used as components of the stochastic modeling to construct the relationship between the uncertain input parameters and the macroscopic thermal conductivity of PNCs. Particle Swarm Optimization (PSO) is used for hyper-parameter tuning to find globally optimal values, leading to a significant reduction in computational cost. The advantages and disadvantages of the various methods are also analyzed in terms of computing time and model complexity in order to give a final recommendation on the applicability of the different models.
Tropical coral reefs, one of the world’s oldest ecosystems which support some of the highest levels of biodiversity on the planet, are currently facing an unprecedented ecological crisis during this massive human-activity-induced period of extinction. Hence, tropical reefs symbolically stand for the destructive effects of human activities on nature [4], [5]. Artificial reefs are excellent examples of how architectural design can be combined with ecosystem regeneration [6], [7], [8]. However, working at the interface between the artificial and the complex, temporal nature of natural systems presents a challenge, inter alia with respect to the B-rep modelling legacy of computational modelling.
The presented doctorate investigates strategies on how to apply digital practice to realise what is an essential bulwark to retain reefs in impossibly challenging times. Beyond the main question of integrating computational modelling and high precision monitoring strategies in artificial coral reef design, this doctorate explores techniques, methods, and linking frameworks to support future research and practice in ecology led design contexts.
Considering the many existing approaches to artificial coral reef design, one finds that they often fall short of precisely understanding the relationships between architectural and ecological aspects (e.g. how surface design and material composition can foster coral larvae settlement, or how structural three-dimensionality enhances biodiversity) and lack an integrated underwater (UW) monitoring process. Such a process is necessary in order to gather knowledge about the ecosystem and make it available for design, and to learn whether artificial structures contribute to reef regeneration or rather harm the coral reef ecosystem.
For the research, empirical experimental methods were applied: algorithmic coral reef design, high-precision UW monitoring, and computational modelling and simulation, validated through parallel real-world physical experimentation with two Artificial Reef Prototypes (ARPs) in Gili Trawangan, Indonesia (2012–today). Multiple discrete methods and sub-techniques were developed in seventeen computational experiments and applied such that many are cross-validated and integrated into an overall framework that is offered as a significant contribution to the field. Other main contributions include the ecosystem-aware design approach, Key Performance Indicators (KPIs) for coral reef design, algorithmic design and fabrication of Biorock cathodes, new high-precision UW monitoring strategies, long-term real-world constructed experiments, new digital analysis methods and two new front-end web-based tools for designing and monitoring reefs. The methodological framework itself is a finding of the research: its many technical components were tested and combined in this way for the first time.
In summary, the thesis responds to the urgency and relevance in preserving marine species in tropical reefs during this massive extinction period by offering a differentiated approach towards artificial coral reefs – demonstrating the feasibility of digitally designing such ‘living architecture’ according to multiple context and performance parameters. It also provides an in-depth critical discussion of computational design and architecture in the context of ecosystem regeneration and Planetary Thinking. In that respect, the thesis functions as both theoretical and practical background for computational design, ecology and marine conservation – not only to foster the design of artificial coral reefs technically but also to provide essential criteria and techniques for conceiving them.
Keywords: Artificial coral reefs, computational modelling, high precision underwater monitoring, ecology in design.
In recent years, the discussion of digitalization has arrived in the media, at conferences, and in committees of the construction and real estate industry. While some areas are producing innovations and some contributors can be described as pioneers, other topics still show deficits with regard to digital transformation. The building permit process can also be counted in this category. However much architects and engineers in planning offices rely on innovative methods, building documents too often remain in paper form, or are printed out after electronic submission to the authority. Existing resources – for example in the form of a building information model, which could provide support in the building permit process – are not being taken advantage of. In order to use digital tools to support decision-making by the building permit authorities, it is necessary to understand the current situation and to question conditions before pursuing the overall automation of internal authority processes as the sole solution.
With a substantive and organizational consideration of the relevant areas that influence the determination of building permits, an improvement of the building permit procedure within the authorities is proposed. Complex areas – such as the legal situation, the use of technology, as well as subjective alternative courses of action – are identified and structured. The development of a model for determining building permit eligibility both conveys an understanding of the influencing factors and increases transparency for all parties involved.
In addition to an international literature review, an empirical study served as the research method. The empirical study was conducted in the form of qualitative expert interviews in order to determine the current state in the field of building permit procedures. The collected data material was processed and subsequently subjected to a software-supported content analysis. The results were processed, in combination with findings from the literature review, in various analyses to form the basis for a proposed model.
The result of the study is a decision model that closes the gap between the current processes within the building authorities and an overall automation of the building permit review process. The model offers support to examiners and applicants in determining building permit eligibility, through its process-oriented structuring of decision-relevant facts. The theoretical model could be transferred into practice in the form of a web application.
The detailed structural analysis of thin-walled circular pipe members often requires the use of a shell- or solid-based finite element method. Although these methods provide a very good approximation of the deformations, they require a higher degree of discretization, which causes high computational costs. On the other hand, the analysis of thin-walled circular pipe members based on classical beam theories is easy to implement and needs much less computation time; however, such theories are limited in their ability to approximate the deformations, as they cannot capture the deformation of the cross-section.
This dissertation focuses on the study of the Generalized Beam Theory (GBT) which is both accurate and efficient in analyzing thin-walled members. This theory is based on the separation of variables in which the displacement field is expressed as a combination of predetermined deformation modes related to the cross-section, and unknown amplitude functions defined on the beam's longitudinal axis. Although the GBT was initially developed for long straight members, through the consideration of complementary deformation modes, which amend the null transverse and shear membrane strain assumptions of the classical GBT, problems involving short members, pipe bends, and geometrical nonlinearity can also be analyzed using GBT. In this dissertation, the GBT formulation for the analysis of these problems is developed and the application and capabilities of the method are illustrated using several numerical examples. Furthermore, the displacement and stress field results of these examples are verified using an equivalent refined shell-based finite element model.
The developed static and dynamic GBT formulations for curved thin-walled circular pipes are based on the linear kinematic description of the curved shell theory. In these formulations, the complex problem in pipe bends due to the strong coupling effect of the longitudinal bending, warping and the cross-sectional ovalization is handled precisely through the derivation of the coupling tensors between the considered GBT deformation modes. Similarly, the geometrically nonlinear GBT analysis is formulated for thin-walled circular pipes based on the nonlinear membrane kinematic equations. Here, the initial linear and quadratic stress and displacement tangent stiffness matrices are built using the third and fourth-order GBT deformation mode coupling tensors.
Longitudinally, the formulation of the coupled GBT element stiffness and mass matrices is presented using a beam-based finite element formulation. Furthermore, the formulated GBT elements are tested for shear and membrane locking, and the limitations of the formulations regarding membrane locking are discussed.
Isogeometric analysis (IGA) is a numerical method for solving partial differential equations (PDEs), which was introduced with the aim of integrating finite element analysis with computer-aided design systems. The main idea of the method is to use the same spline basis functions which describe the geometry in CAD systems for the approximation of solution fields in the finite element method (FEM). Originally, NURBS, the standard technology employed in CAD systems, were adopted as basis functions in IGA, but several variants of IGA have been developed using other technologies such as T-splines, PHT-splines, and subdivision surfaces as basis functions. In general, IGA offers two key advantages over classical FEM: (i) by describing the CAD geometry exactly using smooth, high-order spline functions, the mesh generation process is simplified and the interoperability between CAD and FEM is improved; (ii) IGA can be viewed as a high-order finite element method which offers basis functions with high inter-element continuity and therefore can provide a primal variational formulation of high-order PDEs in a straightforward fashion. The main goal of this thesis is to further advance isogeometric analysis by exploiting these major advantages, namely precise geometric modeling and the use of smooth high-order splines as basis functions, and to develop robust computational methods for problems with complex geometry and/or complex multi-physics.
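The spline basis functions underlying IGA can be evaluated with the Cox–de Boor recursion. The sketch below is a generic textbook construction (not code from the thesis) for a quadratic basis on an open knot vector, where the basis is C^1 across interior knots, the inter-element continuity IGA exploits:

```python
import numpy as np

def bspline_basis(i, p, u, knots):
    """Cox–de Boor recursion: value of the i-th B-spline basis function
    of degree p at parameter u (half-open interval convention)."""
    if p == 0:
        return 1.0 if knots[i] <= u < knots[i + 1] else 0.0
    left = right = 0.0
    d1 = knots[i + p] - knots[i]
    if d1 > 0:
        left = (u - knots[i]) / d1 * bspline_basis(i, p - 1, u, knots)
    d2 = knots[i + p + 1] - knots[i + 1]
    if d2 > 0:
        right = (knots[i + p + 1] - u) / d2 * bspline_basis(i + 1, p - 1, u, knots)
    return left + right

# Quadratic basis on an open knot vector with one interior knot.
knots = np.array([0, 0, 0, 0.5, 1, 1, 1], dtype=float)
p = 2
n_funcs = len(knots) - p - 1  # four basis functions
vals = [sum(bspline_basis(i, p, u, knots) for i in range(n_funcs))
        for u in (0.1, 0.3, 0.6, 0.9)]
```

The sums in `vals` illustrate the partition-of-unity property of the basis on the parametric interval.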
As the first contribution of this thesis, we leverage the precise geometric modeling of isogeometric analysis and propose a new method for its coupling with meshfree discretizations. We exploit the strengths of both methods by using IGA to provide a smooth, geometrically-exact surface discretization of the problem domain boundary, while the Reproducing Kernel Particle Method (RKPM) discretization is used to provide the volumetric discretization of the domain interior. The coupling strategy is based upon the higher-order consistency or reproducing conditions that are directly imposed in the physical domain. The resulting coupled method enjoys several favorable features: (i) it preserves the geometric exactness of IGA, (ii) it circumvents the need for global volumetric parameterization of the problem domain, (iii) it achieves arbitrary-order approximation accuracy while preserving higher-order smoothness of the discretization. Several numerical examples are solved to show the optimal convergence properties of the coupled IGA–RKPM formulation, and to demonstrate its effectiveness in constructing volumetric discretizations for complex-geometry objects.
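The RKPM side of the coupling builds shape functions from a window function corrected to satisfy reproducing conditions. A minimal 1D sketch with a linear basis follows; the window function and support size are illustrative assumptions, not those of the thesis:

```python
import numpy as np

def rk_shape(x, nodes, a=0.35):
    """Reproducing kernel shape functions with a linear basis H(z) = [1, z]
    and a simple bump window of radius a; the correction enforces exact
    reproduction of constant and linear fields."""
    z = nodes - x
    w = np.maximum(1.0 - (np.abs(z) / a) ** 2, 0.0) ** 2  # bump window
    H = np.column_stack([np.ones_like(z), z])
    M = H.T @ (w[:, None] * H)                  # moment matrix at x
    c = np.linalg.solve(M, np.array([1.0, 0.0]))  # correction for H(0)
    return (H @ c) * w                          # shape function values

nodes = np.linspace(0.0, 1.0, 11)
psi = rk_shape(0.43, nodes)
```

By construction the shape functions satisfy the zeroth- and first-order reproducing conditions (they sum to one and reproduce the coordinate), which is exactly the consistency requirement the coupling strategy imposes in the physical domain.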
As for the next contribution, we exploit the use of smooth, high-order spline basis functions in IGA to solve high-order surface PDEs governing the morphological evolution of vesicles. These governing equations often consist of geometric PDEs, high-order PDEs on stationary or evolving surfaces, or a combination of the two. We propose an isogeometric formulation for solving these PDEs. In the context of geometric PDEs, we consider phase-field approximations of mean curvature flow and Willmore flow problems and numerically study the convergence behavior of isogeometric analysis for these problems. As a model problem for high-order PDEs on stationary surfaces, we consider the Cahn–Hilliard equation on a sphere, where the surface is modeled using a phase-field approach. As for the high-order PDEs on evolving surfaces, a phase-field model of a deforming multi-component vesicle, which consists of two fourth-order nonlinear PDEs, is solved using isogeometric analysis in a primal variational framework. Through several numerical examples in 2D, 3D and axisymmetric 3D settings, we show the robustness of IGA for solving the considered phase-field models.
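As a much-reduced illustration of the phase-field idea, the sketch below relaxes a 1D Allen–Cahn equation, the phase-field approximation of mean curvature flow, with explicit finite differences. This is only a schematic analogue: the thesis uses smooth IGA discretizations and treats fourth-order models such as Cahn–Hilliard and Willmore flow, and the parameters here are arbitrary:

```python
import numpy as np

# Minimal 1D Allen–Cahn relaxation u_t = u_xx - W'(u)/eps^2 with the
# double-well W(u) = (u^2 - 1)^2 / 4, solved by explicit finite differences.
n, eps = 101, 0.05
x = np.linspace(0.0, 1.0, n)
h = x[1] - x[0]
u = np.tanh((x - 0.5) / (np.sqrt(2) * eps))  # equilibrium-profile initial data
dt = 0.2 * h * h                             # respects explicit stability limit
for _ in range(200):
    lap = np.zeros_like(u)
    lap[1:-1] = (u[2:] - 2 * u[1:-1] + u[:-2]) / (h * h)
    u = u + dt * (lap - (u ** 3 - u) / eps ** 2)
    u[0], u[-1] = -1.0, 1.0                  # clamp Dirichlet ends
```

The diffuse interface stays centered and the field remains within the wells at ±1, the qualitative behavior the phase-field approximation is designed to preserve.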
Finally, we present a monolithic, implicit formulation based on isogeometric analysis and generalized-alpha time integration for simulating the hydrodynamics of vesicles according to a phase-field model. Compared to earlier works, the number of equations of the phase-field model that need to be solved is reduced by leveraging the high continuity of NURBS functions, and the algorithm is extended to 3D settings. We use the residual-based variational multiscale (RBVMS) method for solving the Navier–Stokes equations, while the remaining PDEs of the phase-field model are treated using standard Galerkin-based IGA. We introduce the resistive immersed surface (RIS) method into the formulation, which can be employed for an implicit description of complex geometries using a diffuse-interface approach. The implementation highlights the robustness of the RBVMS method for the Navier–Stokes equations of incompressible flows with non-trivial localized forcing terms, including the bending and tension forces of the vesicle. The potential of the phase-field model and isogeometric analysis for the accurate simulation of a variety of fluid-vesicle interaction problems in 2D and 3D is demonstrated.
The aim of this work is to improve the quality of fatigue-life prediction for cast iron with spheroidal graphite, taking into account the casting processes of different manufacturers.
In a first step, specimens of GJS500 and GJS600 were cast by several casting suppliers and fatigue test specimens were produced from them.
In total, fatigue strength values of the individual cast specimens as well as of specimens taken from the component were determined for various casting manufacturers worldwide, either by direct fatigue tests or from a collection of operational fatigue tests.
Through metallographic work and correlation analysis, three essential parameters for determining the local endurance limit could be identified: 1. static strength, 2. ferrite and pearlite fractions of the microstructure, and 3. spheroidal graphite nodule count per unit area.
Based on these findings, a new strength-ratio diagram (the so-called Sd/Rm-SG diagram) was developed.
Above all, this new methodology is intended to enable a better prediction of the component endurance limit on the basis of local tensile strength values and microstructures that are either measured or predicted by casting simulation.
With the aid of the tests and the casting simulation, different fatigue-life prediction methods could be further developed while taking the manufacturing processes into account.
This thesis addresses engineers and scientists in building services engineering. It takes up an emerging need for change in the environmental and sustainability assessment of buildings and heating systems. The non-renewable primary energy demand currently in use will not suffice as the sole assessment quantity, particularly in view of future climate and environmental policy targets. The eco-efficiency assessment method presented in this thesis can serve as a suitable instrument to address these problems. It enables systematic, holistic assessments and reproducible comparisons of heating systems with respect to their ecological and economic sustainability. The most important new developments are the specific environmental performance, extending the primary energy factor in use, and the eco-efficiency indicator UWI.
To reduce concrete-specific CO2 emissions, increased use of clinker-reduced cements and concretes is being pursued. However, reducing the clinker content must not impair concrete durability in a way that affects service life. In this context, the freeze-thaw resistance in the presence of de-icing salt is a critical quantity, since it is frequently affected adversely at higher clinker substitution rates. Matters are complicated by the fact that only limited practical experience is available for clinker-reduced concretes. A high freeze-thaw and de-icing salt resistance can therefore not be ensured by descriptive specifications alone. Accordingly, a performance-based service-life assessment should in future also be carried out for components exposed to freeze-thaw attack with de-icing salts.
An indispensable basis for achieving these goals is an understanding of the damage mechanisms during freeze-thaw attack with de-icing salts. The state of research, however, is characterized by contradictory damage theories. The objective of this thesis was therefore to evaluate the existing damage theories in light of the current state of knowledge and to examine and classify them through the author's own investigations. The review of the state of research showed that only two theories have the potential to describe the freeze-thaw attack with de-icing salts comprehensively: the glue spall theory and the cryogenic suction theory.
The glue spall theory attributes surface scaling to mechanical damage of the concrete surface caused by an adhering ice layer. Critical stress states in the ice layer that can damage the concrete surface are supposed to occur only at moderate de-icing salt concentrations in the attacking solution. In this thesis, however, it could be shown that severe scaling also occurs at de-icing salt concentrations at which mechanical damage of the concrete by the ice can be ruled out. This demonstrated that the glue spall theory is not suitable.
The cryogenic suction theory is based on the eutectic properties of de-icing salt solutions, which in the frozen state always consist of a mixture of solid water ice and a liquid, highly concentrated salt solution as long as the temperature does not fall below their eutectic temperature. For frozen concrete, the liquid phase in the saline ice represents a previously unconsidered liquid reservoir which, despite the high salt concentration, is supposed to intensify ice formation in the concrete surface zone and thus cause scaling. In this thesis it was confirmed that ice formation in the hardened cement paste is indeed intensified during freezing in highly concentrated de-icing salt solution. The extent of the additional ice formation was also influenced by the ability of the hardened cement paste to bind chloride ions from the de-icing salt solution.
In summary, it was found that the cryogenic suction theory describes the freeze-thaw attack with de-icing salts well, but needs to be supplemented by further aspects. The most important extension is accounting for the intensive saturation of concrete by the micro ice lens pump. Based on this consideration, a combined damage theory was formulated. Important assumptions of this theory could be confirmed experimentally. As a result, the basis for a deeper understanding of the freeze-thaw attack with de-icing salts was established. In addition, a new approach was identified to explain the (potential) reduction of the freeze-thaw and de-icing salt resistance of clinker-reduced concretes.
In recent decades, road maintenance services have undergone profound changes. These changes also include the operational management philosophy, which is intended to support a rationally planned and economical organization of road maintenance. Service contents and scopes are specified bindingly, enabling budgeting for the planned annual work program.
The aim of the study is to develop a model for deriving activity-specific model annual profiles (Musterjahresganglinien) to support annual work planning. For this purpose, 260 individual annual profiles were available for each activity in the service area of green maintenance (Grünpflege).
As a result of the study, the activity-specific model annual profile is determined in four steps: first, the data quality is checked; second, a correlation analysis is performed; third, the activity characteristics are reviewed by domain experts; and fourth, the model annual profile is determined from the remaining activity-specific annual profiles.
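The correlation-analysis and averaging steps of such a procedure might be sketched as follows on synthetic data; the profiles, the reference choice and the threshold here are illustrative assumptions, not the study's actual data or criteria:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic stand-in for per-activity annual profiles (weekly resolution):
# most follow a common seasonal shape, a few are outliers.
weeks = np.arange(52)
shape = np.clip(np.sin((weeks - 10) * np.pi / 30), 0, None)  # seasonal bump
profiles = shape + 0.1 * rng.random((20, 52))
profiles[18:] = rng.random((2, 52))                          # two outliers

# Correlation screening against the median profile.
ref = np.median(profiles, axis=0)
r = np.array([np.corrcoef(p, ref)[0, 1] for p in profiles])
kept = profiles[r > 0.7]

# The model annual profile is the mean of the retained profiles.
model_profile = kept.mean(axis=0)
```

The screening discards profiles whose seasonal course deviates from the bulk, so the averaged model profile reflects the typical annual pattern of the activity.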
“How to understand the interaction between urban space and social processes” is a significant question in urban studies. To answer that, the city needs to be recognized as both a physical and a social entity and urban theory and practice need to connect these (Hillier 2007). The present research aims to re-examine the complex correlation between spatial and social inequality manifestations in the city of Tehran regarding the concept of segregation.
It examines the causes and consequences of segregation in Tehran, provides insight into the concepts of socio-spatial segregation and neighborhood effects, and creates a link between them. First, I argue when, where, and for whom spatial location affects the chances of forming social networks in Tehran. Then, I discuss how neighborhood effects can emerge via social network mechanisms and thus affect the perceptions of residents in the neighborhoods.
This work presents a robust status-monitoring approach for detecting damage in cantilever structures based on logistic functions. In addition, a stochastic damage identification approach based on changes in eigenfrequencies is proposed. The proposed algorithms are verified using catenary poles of electrified railway tracks. The proposed damage features overcome a limitation of the frequency-based damage identification methods available in the literature, which can detect damage in structures only at Level 1. With the proposed approach, changes in the eigenfrequencies of cantilever structures are sufficient to identify possible local damage up to Level 3, i.e., to cover damage detection, localization, and quantification. The proposed algorithms identified the damage with relatively small errors, even at a high noise level.
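A logistic damage feature of the kind mentioned above might, schematically, map the relative eigenfrequency drop to a score in (0, 1); the midpoint and steepness below are illustrative values, not those calibrated in the thesis:

```python
import math

def damage_score(f_healthy, f_measured, midpoint=0.02, steepness=300.0):
    """Map the relative eigenfrequency drop to a (0, 1) damage score with
    a logistic function; 'midpoint' is the drop at which the score is 0.5."""
    drop = (f_healthy - f_measured) / f_healthy
    return 1.0 / (1.0 + math.exp(-steepness * (drop - midpoint)))

healthy = damage_score(12.0, 11.99)  # tiny drop within measurement noise
damaged = damage_score(12.0, 11.40)  # 5 % drop, well past the midpoint
```

The logistic shape suppresses scores for drops within the noise band while saturating quickly once the drop exceeds the chosen midpoint, which is what makes such features robust at high noise levels.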
This thesis makes a scientific contribution to exploring the potential applications of real estate portfolio management for public administrations of museum palaces in Germany. In particular, a model for investment management specific to their organization is developed and its practical applicability is discussed with experts.
In the last two decades, Peridynamics (PD) has attracted much attention in the field of fracture mechanics. One key feature of PD is its nonlocality, which differs markedly from the ideas underlying conventional methods such as FEM and meshless methods. However, conventional PD suffers from problems such as the constant-horizon restriction, explicit-only algorithms, and hourglass modes. In this thesis, by examining nonlocality with scrutiny, we propose several new concepts such as the dual-horizon (DH) in PD, the dual-support (DS) in smoothed particle hydrodynamics (SPH), nonlocal operators and the operator energy functional. Conventional PD (SPH) is incorporated in DH-PD (DS-SPH), which can adopt an inhomogeneous discretization and inhomogeneous support domains; DH-PD (DS-SPH) can be viewed as a fundamental improvement on conventional PD (SPH). The dual formulation of PD and SPH allows h-adaptivity while satisfying the conservation of linear momentum, angular momentum and energy. Developing the concept of nonlocality further, we introduce the nonlocal operator method as a generalization of DH-PD. Combined with the energy functionals of various physical models, the nonlocal forms based on the dual-support concept are derived. In addition, the variation of the energy functional allows an implicit formulation of the nonlocal theory. Finally, we develop the higher-order nonlocal operator method, which is capable of solving higher-order partial differential equations on arbitrary domains in higher-dimensional space. Since the concepts were developed gradually, we describe our findings chronologically.
In chapter 2, we developed a DH-PD formulation that allows varying horizon sizes and solves the "ghost force" issue. The concept of the dual-horizon accounts for the unbalanced interactions between particles with different horizon sizes. The present formulation fulfills both the balance of linear momentum and the balance of angular momentum exactly with arbitrary particle discretizations. All three peridynamic formulations, namely bond-based, ordinary state-based and non-ordinary state-based peridynamics, can be implemented within the DH-PD framework. A simple adaptive refinement procedure (h-adaptivity) is proposed, reducing the computational cost. Both two- and three-dimensional examples, including the Kalthoff-Winkler experiment and a plate with branching cracks, are tested to demonstrate the capability of the method.
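The momentum-balance property of the dual-horizon idea can be illustrated with a toy 1D bond-based sketch: whenever particle j lies in the horizon of particle i, the bond force acts on i and its reaction on j, so the total force vanishes even with unequal horizons. The micro-modulus, discretization and displacements below are arbitrary illustrative choices, not the constitutive model of the thesis:

```python
import numpy as np

# 1D particles with inhomogeneous horizons and small random displacements.
rng = np.random.default_rng(1)
x = np.linspace(0.0, 1.0, 21)
horizon = np.where(x < 0.5, 0.12, 0.22)      # two zones with different horizons
u = 0.01 * rng.random(x.size)
c = 1.0                                      # toy micro-modulus

force = np.zeros_like(x)
for i in range(x.size):
    for j in range(x.size):
        if i == j:
            continue
        xi = x[j] - x[i]
        if abs(xi) <= horizon[i]:            # j lies in the horizon of i
            f = c * (u[j] - u[i]) / abs(xi)  # toy linear bond force density
            force[i] += f                    # direct term on i ...
            force[j] -= f                    # ... and its reaction on j
```

Because every bond contribution appears in an action-reaction pair, linear momentum is balanced by construction, which is the property a constant-horizon treatment of unequal horizons would violate ("ghost forces").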
In chapter 3, a nonlocal operator method (NOM) based on the variational principle is proposed for the solution of the waveguide problem in computational electromagnetics. Common differential operators as well as the variational forms are defined within the context of nonlocal operators. The present nonlocal formulation allows assembling the tangent stiffness matrix with ease, which is necessary for the eigenvalue analysis of the waveguide problem. The formulation is applied to solve the 1D Schrödinger equation, a 2D electrostatic problem, and the differential electromagnetic vector wave equations based on electric fields.
In chapter 4, a general nonlocal operator method is proposed which is applicable to solving partial differential equations (PDEs) of mechanical problems. The nonlocal operator can be regarded as the integral form "equivalent" to the differential form in the sense of a nonlocal interaction model. The variation of a nonlocal operator plays the same role as the derivatives of the shape functions in meshless methods or in the finite element method. Based on the variational principle, the residual and the tangent stiffness matrix can be obtained with ease. The nonlocal operator method is enhanced here with an operator energy functional to satisfy the linear consistency of the field. A highlight of the present method is that the functional derived from the nonlocal operators converts the construction of the residual and the stiffness matrix into a series of matrix multiplications using the predefined nonlocal operators. The nonlocal strong forms of different functionals can be obtained easily via the concepts of support and dual-support. Several numerical examples for different types of PDEs are presented.
In chapter 5, we extended the NOM to a higher-order scheme by using a higher-order Taylor series expansion of the unknown field. Such a higher-order scheme improves the original NOM of chapters 3 and 4, which only achieves first-order convergence. The higher-order NOM obtains all partial derivatives up to a specified maximal order simultaneously, without resorting to shape functions. The functional based on the nonlocal operators converts the construction of the residual and the stiffness matrix into a series of matrix multiplications on the nonlocal operator matrix. Several numerical examples solved in strong or weak form are presented to show the capabilities of this method.
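The idea of obtaining all partial derivatives up to a given order simultaneously, without shape functions, can be mimicked in 1D at second order by least-squares fitting a Taylor expansion over the nodes in a support. This is a schematic analogue of the higher-order construction, not the NOM implementation of the thesis:

```python
import numpy as np

def derivatives_2nd(x_nodes, u_nodes, x0):
    """Recover u_x and u_xx at x0 simultaneously by least-squares fitting
    the second-order Taylor expansion u(x) ~ u0 + u_x dx + u_xx dx^2/2
    over the support nodes (no shape functions involved)."""
    dx = x_nodes - x0
    P = np.column_stack([np.ones_like(dx), dx, dx ** 2 / 2.0])
    coeff, *_ = np.linalg.lstsq(P, u_nodes, rcond=None)
    return coeff[1], coeff[2]  # (first derivative, second derivative)

# For u = x^2 around x0 = 1 the exact derivatives are u_x = 2, u_xx = 2.
xs = np.linspace(-0.2, 0.2, 9) + 1.0
ux, uxx = derivatives_2nd(xs, xs ** 2, 1.0)
```

Because the quadratic Taylor basis reproduces the quadratic field exactly, both derivatives are recovered to round-off, illustrating why such a scheme can reach higher-order convergence.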
In chapter 6, we address the difficulty of the particle-based NOM of chapters 3 to 5 in accurately imposing boundary conditions of various orders. We convert the particle-based NOM into a scheme with the interpolation property. The new scheme describes partial derivatives of various orders at a point by the nodes in the support and takes advantage of a background mesh for numerical integration. The boundary conditions are enforced via the modified variational principle. The particle-based NOM can be viewed as a special case of the NOM with interpolation property when nodal integration is used. The scheme based on numerical integration greatly improves the stability of the method; as a consequence, the operator energy functional of the particle-based NOM is not required. We demonstrate the capabilities of the method by solving gradient solid problems and comparing the numerical results with available exact solutions.
In chapter 7, we derive the dual-support smoothed particle hydrodynamics (DS-SPH) for solids within the framework of the variational principle. The tangent stiffness matrix of SPH can be obtained with ease and serves as the basis for the present implicit SPH. We propose an hourglass energy functional, which allows the direct derivation of the hourglass force and hourglass tangent stiffness matrix. The dual-support is involved in all derivations based on variational principles and is automatically satisfied in the assembly of the stiffness matrix. The implementation of the stiffness matrix comprises two steps: the nodal assembly based on the deformation gradient and the global assembly over all nodes. Several numerical examples are presented to validate the method.
This dissertation investigates the interactions between urban form, allocation of activities, and pedestrian movement in the context of urban planning. The ability to assess the long-term impact of urban planning decisions on what people do and how they get there is of central importance, with various disciplines addressing this topic. This study focuses on approaches proposed by urban morphologists, urban economists, and transportation planners, each directing attention to a different part of the form-activity-movement interaction. Even though there is no doubt about the advantages of these highly focused approaches, it remains unclear what the cost is of ignoring the effect of some interactions while considering others. The general aim of this dissertation is to empirically test the validity of the individual models and quantify the impact of this isolationist approach on their precision and bias.
For this purpose, we propose a joint form-activity-movement interaction model and conduct an empirical study in Weimar, Germany. We estimate how the urban form and activities affect movement as well as how movement and urban form affect activities. By estimating these effects in isolation and simultaneously, we assess the bias of the individual models.
On the one hand, the empirical study results confirm the significance of all interactions suggested by the individual models. On the other hand, we were able to show that when these interactions are estimated in isolation, the resulting predictions are biased. To conclude, we do not question the knowledge brought by transportation planners, urban morphologists, and urban economists. However, we argue that it might be of little use on its own.
We see the relevance of this study as being twofold. On the one hand, we proposed a novel methodological framework for the simultaneous estimation of the form-activity-movement interactions. On the other hand, we provide empirical evidence about the strengths and limitations of current approaches.
In the last decades, the Finite Element Method has become the predominant method for static and dynamic analysis in engineering practice. For many current problems, this method provides a faster, more flexible solution than the analytical approach. Predictions for complex engineering problems that used to be almost impossible to make are now feasible.
Although the finite element method is a robust tool, it raises new questions about engineering solutions. These new problems can be divided into two major groups: the first concerns computer performance; the second concerns the interpretation of the numerical solution.
Simultaneously with the development of the finite element method for numerical solutions, a theory between beam theory and shell theory was developed: the Generalized Beam Theory (GBT). This theory offers not only a systematic and analytically clear presentation of complicated structural problems, but also a compact and elegant calculation approach that can improve computer performance.
Regrettably, GBT long remained internationally unknown, since most publications on this theory were written in German, especially in the early years. Only in recent years has GBT gradually become a fertile research topic, with developments from linear to non-linear analysis.
Another reason for the limited use of GBT is the isolated application of the theory. Although recent studies apply the finite element method to solve GBT problems numerically, the coupling between finite elements of GBT and other theories (shell, solid, etc.) has not been the subject of previous research. Thus, the main goal of this dissertation is the coupling between GBT and shell/membrane elements. Consequently, one achieves the benefits of both sides: the versatility of shell elements and the high performance of GBT elements.
Based on the assumptions of GBT, this dissertation presents how the separation of variables leads to two calculation domains of a beam structure: a cross-section modal analysis and the longitudinal amplification axis. Therefore, there is the possibility of applying the finite element method not only in the cross-section analysis, but also of developing an exact GBT finite element in the longitudinal direction.
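As a hedged schematic of this separation of variables (notation varies across GBT references), each displacement component is written as a sum of products of cross-section deformation modes and longitudinal amplification functions:

```latex
f(x,s) \;=\; \sum_{k} \bar{f}_{k}(s)\, V_{k}(x)
```

where $\bar{f}_{k}(s)$ is the $k$-th cross-section deformation mode obtained from the modal analysis and $V_{k}(x)$ is its amplification function along the beam axis, which is the unknown of the longitudinal problem.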
For the cross-section analysis, this dissertation presents the solution of the quadratic eigenvalue problem with an original separation between plate and membrane mechanisms. Subsequently, one obtains a clearer representation of the deformation modes, as well as a reduced quadratic eigenvalue problem.
Concerning the longitudinal direction, this dissertation develops novel exact elements based on hyperbolic and trigonometric shape functions. Although these functions do not have trivial expressions, their periodic derivatives provide a recursive procedure that systematises the development of the stiffness matrices. Moreover, these shape functions enable a single-element discretisation of the beam structure and ensure a smooth stress field.
From these developments, this dissertation achieves the formulation of its primary objective: the connection of GBT and shell elements in a mixed model. Based on the displacement field, it is possible to define the coupling equations applied in the master-slave method. Therefore, one can model structural connections and joints with finite shell elements and structural beams and columns with GBT finite elements.
As a side effect, the coupling equations limit the displacement field of the shell elements under the assumptions of GBT, in particular in the neighbourhood of the coupling cross-section.
Although these side effects are almost unnoticeable in linear analysis, they lead to cumulative errors in non-linear analysis. Therefore, this thesis finishes with the evaluation of the mixed GBT-shell models in non-linear analysis.
This thesis presents the advances and applications of phase field modeling in fracture analysis. In this approach, the sharp crack surface topology in a solid is approximated by a diffusive crack zone governed by a scalar auxiliary variable. The uniqueness of phase field modeling is that the crack paths are automatically determined as part of the solution and no interface tracking is required. The damage parameter varies continuously over the domain. But this flexibility comes with associated difficulties: (1) a very fine spatial discretization is required to represent sharp local gradients correctly; (2) this fine discretization results in high computational cost; (3) higher-order derivatives must be computed to obtain improved convergence rates; and (4) conventional numerical integration techniques suffer from the curse of dimensionality. As a consequence, the practical applicability of phase field models is severely limited.
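As a hedged illustration of the diffusive approximation (the common second-order model, in one of several conventions found in the literature), the sharp crack surface $\Gamma$ is regularized by the functional

```latex
\Gamma_{\ell}(\phi) \;\approx\; \int_{\Omega}
  \left( \frac{\phi^{2}}{2\ell} + \frac{\ell}{2}\,\lvert\nabla\phi\rvert^{2} \right)
  \mathrm{d}\Omega
```

where the scalar phase field $\phi$ equals one on the crack and decays to zero away from it, and the length scale $\ell$ controls the width of the diffusive crack zone; as $\ell \to 0$ the sharp crack topology is recovered.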
The research presented in this thesis addresses the difficulties of conventional numerical integration techniques for phase field modeling in quasi-static brittle fracture analysis. The first method relies on polynomial splines over hierarchical T-meshes (PHT-splines) in the framework of isogeometric analysis (IGA). An adaptive h-refinement scheme is developed based on the variational energy formulation of phase field modeling. The fourth-order phase field model provides increased regularity in the exact solution of the phase field equation and improved convergence rates for numerical solutions on a coarser discretization, compared to the second-order model. However, second-order derivatives of the phase field are required in the fourth-order model. Hence, at least C1-continuous basis functions are essential, which is achieved using hierarchical cubic B-splines in IGA. PHT-splines enable the refinement to remain local at singularities and high gradients, consequently reducing the computational cost greatly. Unfortunately, when modeling complex geometries, multiple parameter spaces (patches) are joined together to describe the physical domain and there is typically a loss of continuity at the patch boundaries. This decrease of smoothness is dictated by the geometry description, where C0 parameterizations are normally used to deal with kinks and corners in the domain. Hence, the application of the fourth-order model is severely restricted. To overcome the high computational cost of the second-order model, we develop a dual-mesh adaptive h-refinement approach. This approach uses a coarser discretization for the elastic field and a finer discretization for the phase field, with independent refinement strategies for each field.
The next contribution is based on physics-informed deep neural networks. The network is trained by minimizing the variational energy of the system described by general non-linear partial differential equations while respecting any given law of physics; hence the name physics-informed neural network (PINN). The developed approach needs only a set of points to define the geometry, in contrast to conventional mesh-based discretization techniques. The concept of `transfer learning' is integrated with the developed PINN approach to improve the computational efficiency of the network at each displacement step. This approach allows numerically stable crack growth even with larger displacement steps. An adaptive h-refinement scheme based on the generation of more quadrature points in the damage zone is developed in this framework. For all the developed methods, displacement-controlled loading is considered. The accuracy and efficiency of both methods are studied numerically, showing that they are powerful and computationally efficient tools for accurately predicting fractures.
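As a hedged sketch of the variational-energy training idea behind this approach (a single Ritz parameter stands in for the neural network; the problem, ansatz, and step size here are illustrative assumptions, not the thesis' setup), minimizing the energy functional recovers the solution of a simple boundary value problem:

```python
# One-parameter sketch of energy-minimization training: for -u'' = f with
# f = 2 and u(0) = u(1) = 0, use the ansatz u(x) = a*x*(1-x) and minimize
# the variational energy E(a) = ∫ (u'^2/2 - f*u) dx by gradient descent.
# The exact solution is u = x*(1-x), i.e. a = 1.

def energy(a, n=1000):
    f = 2.0
    h = 1.0 / n
    total = 0.0
    for k in range(n):            # midpoint quadrature over [0, 1]
        x = (k + 0.5) * h
        du = a * (1.0 - 2.0 * x)  # u'(x)
        u = a * x * (1.0 - x)
        total += (0.5 * du * du - f * u) * h
    return total

# Gradient descent with a central-difference gradient (a network optimizer
# would use backpropagation instead):
a, lr, eps = 0.0, 0.5, 1e-6
for _ in range(200):
    g = (energy(a + eps) - energy(a - eps)) / (2 * eps)
    a -= lr * g
```

The descent converges to a close to 1, the exact minimizer; in the PINN setting the scalar parameter is replaced by the network weights and the quadrature points double as the geometry description.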
In recent years, substantial attention has been devoted to thermoelastic multifield problems and their numerical analysis. Thermoelasticity is one of the important categories of multifield problems which deals with the effect of mechanical and thermal disturbances on an elastic body. In other words, thermoelasticity encompasses the phenomena that describe the elastic and thermal behavior of solids and their interactions under thermo-mechanical loadings. Since providing an analytical solution for general coupled thermoelasticity problems is mathematically complicated, the development of alternative numerical solution techniques seems essential.
Due to the nature of numerical analysis methods, the presence of error in the results is inevitable; therefore, in any numerical simulation, the main concern is the accuracy of the approximation. There are different error estimation (EE) methods to assess the overall quality of a numerical approximation. In many real-life numerical simulations, not only the overall error but also the local error, or the error in a particular quantity of interest, is of main interest. The error estimation techniques developed to evaluate the error in a quantity of interest are known as “goal-oriented” error estimation (GOEE) methods.
This project, for the first time, investigates classical a posteriori error estimation and goal-oriented a posteriori error estimation in 2D/3D thermoelasticity problems. Generally, a posteriori error estimation techniques can be categorized into two major branches: recovery-based and residual-based error estimators. In this research, the application of both recovery- and residual-based error estimators in thermoelasticity is studied. Moreover, in order to reduce the error in the quantity of interest efficiently and optimally in 2D and 3D thermoelastic problems, goal-oriented adaptive mesh refinement is performed.
As the first application category, error estimation in classical thermoelasticity (CTE) is investigated. In the first step, an rh-adaptive thermo-mechanical formulation based on goal-oriented error estimation is proposed. The developed goal-oriented error estimation relies on different stress recovery techniques, i.e., the superconvergent patch recovery (SPR), L2-projection patch recovery (L2-PR), and weighted superconvergent patch recovery (WSPR). Moreover, a new adaptive refinement strategy (ARS) is presented that minimizes the error in a quantity of interest and refines the discretization such that the error is equally distributed in the refined mesh. The method is validated by numerous numerical examples for which an analytical or reference solution is available.
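As a hedged 1D illustration of the patch-recovery idea underlying SPR (the actual thesis works with 2D/3D stress tensors and weighted variants; the sampling points and fields here are assumptions), stresses sampled at superconvergent points are fitted over a nodal patch and evaluated at the node:

```python
# 1D sketch of superconvergent patch recovery: stresses sampled at element
# midpoints (superconvergent points for linear elements) are fitted with a
# least-squares polynomial over the patch, and the fit is evaluated at the
# node to give the recovered (smoothed) stress.

def spr_recover(xs, sigmas, x_node):
    """Linear least-squares fit of (xs, sigmas), evaluated at x_node."""
    n = len(xs)
    mx = sum(xs) / n
    ms = sum(sigmas) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxs = sum((x - mx) * (s - ms) for x, s in zip(xs, sigmas))
    slope = sxs / sxx
    return ms + slope * (x_node - mx)

# An exact linear stress field is reproduced exactly by the recovery:
mids = [0.25, 0.75, 1.25, 1.75]          # element midpoints in the patch
sig = [2.0 * x + 1.0 for x in mids]      # sampled stresses
recovered = spr_recover(mids, sig, 1.0)  # node at x = 1.0
```

The difference between the recovered and the raw finite element stress then serves as the error indicator driving the adaptive refinement.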
After investigating error estimation in classical thermoelasticity and evaluating the quality of the presented error estimators, we extend the application of the developed goal-oriented error estimation and the associated adaptive refinement technique to classical fully coupled dynamic thermoelasticity. In this part, we present an adaptive method for coupled dynamic thermoelasticity problems based on goal-oriented error estimation. We use dimensionless variables in the finite element formulation, and for the time integration we employ the acceleration-based Newmark-β method. The SPR, L2-PR, and WSPR recovery methods are exploited to estimate the error in the quantity of interest (QoI). By using adaptive refinement in space, the error in the quantity of interest is minimized, and the discretization is refined such that the error is equally distributed in the refined mesh. We demonstrate the efficiency of this method by numerous numerical examples.
After studying the recovery-based error estimators, we investigate residual-based error estimation in thermoelasticity. In the last part of this research, we present a 3D adaptive method for thermoelastic problems based on goal-oriented error estimation, where the error is measured with respect to a pointwise quantity of interest. We develop a method for a posteriori error estimation and mesh adaptation based on the dual-weighted residual (DWR) method, which relies on duality principles and involves the solution of an adjoint problem. Here, we consider the application of the derived estimator and mesh refinement to two- and three-dimensional (2D/3D) thermo-mechanical multifield problems. In this study, the goal is given by singular pointwise functions, such as the point value or the point-value derivative at a specific point of interest (PoI). An adaptive algorithm is adopted to refine the mesh so as to minimize the error in the quantity of interest.
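As a hedged sketch of the DWR principle in its standard form (the thesis' exact residuals and weights may differ), the goal error is represented by element residuals weighted with the adjoint (dual) solution $z$:

```latex
J(u) - J(u_h) \;\approx\; \sum_{K} \big( R(u_h),\, z - i_h z \big)_{K}
```

where $R(u_h)$ is the residual on element $K$, $i_h z$ an interpolation of the dual solution, and the elementwise contributions serve as the refinement indicators mentioned below.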
The mesh adaptivity procedure based on the DWR method is performed by adaptive local h-refinement/coarsening with allowed hanging nodes. According to the proposed DWR method, the error contribution of each element is evaluated. In the refinement process, the contribution of each element to the goal error is considered as the mesh refinement criterion.
In this study, we substantiate the accuracy and performance of this method by several numerical examples with available analytical solutions. Here, 2D and 3D problems under thermo-mechanical loadings are considered as benchmark problems. To show how accurately the derived estimator captures the exact error in the evaluation of the pointwise quantity of interest, in all examples, considering the analytical solutions, the goal error effectivity index as a standard measure of the quality of an estimator is calculated. Moreover, in order to demonstrate the efficiency of the proposed method and show the optimal behavior of the employed refinement method, the results of different conventional error estimators and refinement techniques (e.g., global uniform refinement, Kelly, and weighted Kelly techniques) are used for comparison.
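For reference, the goal-error effectivity index used as the quality measure above is conventionally defined as the ratio of the estimated to the exact goal error:

```latex
\theta_{\mathrm{eff}} \;=\; \frac{\eta}{J(u) - J(u_h)}
```

with values close to one indicating that the estimator accurately captures the exact error in the quantity of interest.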
Material properties play a critical role in the manufacturing of durable products. Estimating precise characteristics at different scales requires complex and expensive experimental measurements. Potentially, computational methods can provide a platform to determine fundamental properties before the final experiment. Multi-scale computational modeling covers various time and length scales, including the nano, micro, meso, and macro scales. These scales can be modeled separately or in correlation with coarser scales. Depending on the scales of interest, the right selection of multi-scale methods leads to reliable results at affordable computational cost. The present dissertation deals with problems at various length and time scales using computational methods, including density functional theory (DFT), molecular mechanics (MM), molecular dynamics (MD), and finite element (FE) methods.
Physical and chemical interactions at lower scales determine the properties at coarser scales. Modeling particle interactions and exploring fundamental properties are significant challenges of computational science. Fine-scale models require more computational effort due to the large number of interacting atoms/particles. To deal with this problem and treat a fine-scale (nano) problem as a coarse-scale (macro) one, we extended an atomic-continuum framework. The discrete atomic models are solved as a continuum problem using the computationally efficient FE method. The MM or force-field method approximates a solution on the atomic scale based on a set of assumptions: atoms and bonds are modeled as harmonic oscillators, i.e., a system of masses and springs, and the negative gradient of the potential energy equals the force on each atom. In this way, each bond's total potential energy, including bonded and non-bonded contributions, is simulated as an equivalent structural strain energy. Finally, the chemical nature of the atomic bond is modeled as a piezoelectric beam element that is solved by the FE method.
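As a hedged, minimal sketch of the harmonic-bond idea just described (the stiffness and equilibrium length below are illustrative placeholders, not material-specific force-field parameters), the bond energy and its negative gradient give the restoring force:

```python
# Molecular-mechanics sketch: a covalent bond modeled as a harmonic spring,
# E(r) = k/2 * (r - r0)^2, with the force on the atoms given by the
# negative gradient of the potential energy, F = -dE/dr.

def bond_energy(r, k, r0):
    """Harmonic bond potential energy at bond length r."""
    return 0.5 * k * (r - r0) ** 2

def bond_force(r, k, r0):
    """Scalar restoring force along the bond axis: F = -dE/dr."""
    return -k * (r - r0)

# Illustrative (assumed) stiffness and equilibrium length:
k, r0 = 650.0, 0.142
e = bond_energy(0.152, k, r0)   # energy of a stretched bond
f = bond_force(0.152, k, r0)    # negative force pulls the bond back to r0
```

In the atomic-continuum framework this strain energy is mapped onto an equivalent structural (beam) element, so the assembled FE system mimics the atomic lattice.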
Exploring novel materials with unique properties is in demand for various industrial applications. During the last decade, many two-dimensional (2D) materials have been synthesized and have shown outstanding properties. Investigating the probable defects arising during the formation/fabrication process and studying their strength under severe service conditions are critical tasks for exploring performance prospects. We studied various defects, including nano-crack, notch, and point-vacancy (Stone-Wales) defects, employing MD analysis. Classical MD has been used to simulate a considerable number of molecules at the micro- and meso-scales. Pristine and defective nanosheet structures were considered under uniaxial tensile loading at various temperatures using the open-source LAMMPS code. The results were visualized with the open-source software OVITO and VMD.
Quantum-based first-principles calculations are conducted at the electronic scale and are known as the most accurate ab initio methods. However, they are computationally too expensive to apply to large systems. We used density functional theory (DFT) to estimate the mechanical and electrochemical response of 2D materials. The many-body Schrödinger equation describes the motion and interactions of solid-state particles. A solid is described as a system of positive nuclei and negative electrons, all interacting electromagnetically, and wave function theory describes the quantum state of this set of particles. However, dealing with the 3N spatial coordinates of the electrons and nuclei plus the N spin coordinates of the electrons makes the governing equation unsolvable for more than a few interacting atoms. Assumptions and theories such as the Born-Oppenheimer approximation, the Hartree-Fock mean field, and the Hohenberg-Kohn theorems are needed to treat this equation. First, the Born-Oppenheimer approximation reduces it to the electronic coordinates only. Then Kohn and Sham, building on the Hartree-Fock and Hohenberg-Kohn theories, assumed an equivalent fictitious system of non-interacting electrons, expressed as a functional of the electron density, whose ground-state energy equals that of the interacting electrons. Exchange-correlation energy functionals are responsible for the equivalence between the two systems. The exact form of the exchange-correlation functional is not known; however, there are widely used methods to derive functionals, such as the local density approximation (LDA), the generalized gradient approximation (GGA), and hybrid functionals (e.g., B3LYP). In our study, the DFT calculations were performed using the VASP code within the GGA/PBE approximation, and visualization/post-processing of the results was realized via the open-source software VESTA.
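The fictitious non-interacting system described above is governed by the Kohn-Sham equations, shown here in their standard form (the exchange-correlation potential $v_{\mathrm{xc}}$ carries the approximation, e.g. LDA or GGA/PBE):

```latex
\left[ -\frac{\hbar^{2}}{2m}\nabla^{2}
  + v_{\mathrm{ext}}(\mathbf{r})
  + v_{\mathrm{H}}[n](\mathbf{r})
  + v_{\mathrm{xc}}[n](\mathbf{r}) \right] \psi_{i}(\mathbf{r})
  = \varepsilon_{i}\,\psi_{i}(\mathbf{r}),
\qquad
n(\mathbf{r}) = \sum_{i}^{\mathrm{occ}} \lvert\psi_{i}(\mathbf{r})\rvert^{2}
```

where $v_{\mathrm{ext}}$ is the external (nuclear) potential, $v_{\mathrm{H}}$ the Hartree potential, and the orbitals $\psi_{i}$ reproduce the interacting ground-state density $n(\mathbf{r})$ self-consistently.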
Extensive DFT calculations are conducted to assess the prospects of 2D nanomaterials as anode/cathode electrode materials for batteries. The performance of metal-ion batteries strongly depends on the design of novel electrode materials. Two-dimensional (2D) materials have attracted remarkable interest for use as electrodes in battery cells due to their excellent properties. Desirable battery energy storage systems (BESS) must offer high energy density, safe operation, and efficient production costs. Batteries are used in electronic devices and provide a solution to environmental issues by storing the intermittent energy generated from renewable wind or solar power plants. Therefore, exploring optimal electrode materials can improve storage capacity and charging/discharging rates, leading to the design of advanced batteries.
Our results across multiple scales highlight not only the efficiency of the proposed and employed methods but also the promising prospects of recently synthesized nanomaterials and their applications as anode materials. First, a novel approach was developed for modeling a 1D nanotube as a continuum piezoelectric beam element; the results converged and matched closely with those from experiments and other, more complex models. Then, the mechanical properties of nanosheets were estimated, and the failure-mechanism results provide a useful guide for further use in prospective applications. Our results give a comprehensive and useful view of the mechanical properties of nanosheets with and without defects. Finally, the mechanical and electrochemical properties of several 2D nanomaterials are explored for the first time; their performance as anode materials indicates high potential for manufacturing super-stretchable, ultrahigh-capacity battery energy storage systems (BESS), exhibiting better performance than available commercial anode materials.
Marine macroalgae possess promising properties and constituents for use as an energy carrier, as food, or as a feedstock for pharmaceuticals. However, the fluctuating quantity and quality of macroalgae growing in their natural environment reduce their usability and hamper access to high-priced market segments. Moreover, expanding cultivation in marine and coastal aquacultures in Europe currently holds little promise, since suitable areas are already designated for fishing or as recreational or nature conservation areas. In this work, a closed photobioreactor system for macroalgae cultivation is therefore developed, providing comprehensive control of the abiotic cultivation parameters and effective treatment of the culture medium to enable site-independent algae production. To assess the overall concept of cultivation and utilization (material or energetic), the specific growth rates and methane formation potentials of the algae species Ulva intestinalis, Fucus vesiculosus, and Palmaria palmata are determined in practical experiments.
As a result, for the current stage of development of the cultivation plant, a positive balance is obtained for the material utilization of Ulva intestinalis and a negative balance for the energetic utilization of all investigated algae species. Even in an optimal scenario, in which the stocking densities and growth rates of the algae in cultivation are increased, the energy balance remains negative. However, the financial revenues from selling the algae as a product amount to €460,869 per year for Ulva intestinalis, €4,010 for Fucus vesiculosus, and €16,913 for Palmaria palmata. In conclusion, material utilization of the cultivated green alga Ulva intestinalis in particular should be pursued, and the productivity of the cultivation plant should be increased in line with the optimal scenario.
Abstract In the first part of this research, the utilization of tuned mass dampers in the vibration control of tall buildings during earthquake excitations is studied. The main issues such as optimizing the parameters of the dampers and studying the effects of frequency content of the target earthquakes are addressed.
The non-dominated sorting genetic algorithm method is improved by upgrading its genetic operators and is utilized to develop a framework for determining the optimum placement and parameters of dampers in tall buildings. A case study is presented in which the optimal placement and properties of dampers are determined for a model of a tall building under different earthquake excitations through computer simulations.
In the second part, a novel framework for the brain learning-based intelligent seismic control of smart structures is developed. In this approach, a deep neural network learns how to improve structural responses during earthquake excitations using feedback control.
The reinforcement learning method is improved and utilized to develop a framework for training the deep neural network as an intelligent controller. The efficiency of the developed framework is examined through two case studies: a single-degree-of-freedom system and a high-rise building under different earthquake excitation records.
The results show that the controller gradually develops an optimum control policy to reduce the vibrations of a structure under an earthquake excitation through a cyclical process of actions and observations.
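As a hedged toy sketch of this action-observation learning cycle (a two-action bandit with tabular Q-learning, not the dissertation's deep controller; the response amplitudes are assumed numbers), the agent repeatedly applies a control action, observes the resulting response, and shifts its value estimates toward the action producing the smaller vibration:

```python
# Toy Q-learning loop: the "controller" chooses between two control actions,
# receives a reward equal to the negative vibration amplitude it observes,
# and incrementally updates its value estimate for the chosen action.

import random

random.seed(0)
q = {0: 0.0, 1: 0.0}          # value estimate per control action
amplitude = {0: 1.0, 1: 0.2}  # assumed response amplitude per action

for step in range(500):
    # epsilon-greedy action selection: mostly exploit, sometimes explore
    if random.random() < 0.1:
        a = random.choice([0, 1])
    else:
        a = max(q, key=q.get)
    reward = -amplitude[a]          # smaller amplitude -> larger reward
    q[a] += 0.1 * (reward - q[a])   # incremental value update

best = max(q, key=q.get)            # learned policy prefers action 1
```

The same cycle, with the table replaced by a deep network and the bandit replaced by the structural dynamics, is the essence of the framework described above.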
It is shown that the controller efficiently improves the structural responses under new earthquake excitations for which it was not trained. Moreover, it is shown that the controller has a stable performance under uncertainties.
Synergistic Framework for Analysis and Model Assessment in Bridge Aerodynamics and Aeroelasticity
(2020)
Wind-induced vibrations often represent a major design criterion for long-span bridges. This work deals with the assessment and development of models for aerodynamic and aeroelastic analyses of long-span bridges.
Computational Fluid Dynamics (CFD) and semi-analytical aerodynamic models are employed to compute the bridge response due to both turbulent and laminar free-stream. For the assessment of these models, a comparative methodology is developed that consists of two steps, a qualitative and a quantitative one. The first, qualitative, step involves an extension of an existing approach based on Category Theory and its application to the field of bridge aerodynamics. Initially, the approach is extended to consider model comparability and completeness. Then, the complexity of the CFD model and of twelve semi-analytical models is evaluated based on their mathematical constructions, yielding a diagrammatic representation of model quality.
In the second, quantitative, step of the comparative methodology, the discrepancy of a system response quantity for time-dependent aerodynamic models is quantified using comparison metrics for time-histories. Nine metrics are established on a uniform basis to quantify the discrepancies in local and global signal features that are of interest in bridge aerodynamics. These signal features involve quantities such as phase, time-varying frequency and magnitude content, probability density, non-stationarity, and nonlinearity.
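As a hedged sketch of one simple metric of this kind (a relative root-mean-square discrepancy; the nine metrics developed in this work are not reproduced here), the overall magnitude difference between a reference time-history and a model prediction can be quantified as follows:

```python
# Relative RMS discrepancy between two equally sampled time-histories:
# zero for identical signals, and equal to the relative amplitude error
# for a purely scaled copy of the reference signal.

import math

def rel_rms_error(ref, test):
    """Relative RMS discrepancy between two equally sampled signals."""
    num = sum((r - t) ** 2 for r, t in zip(ref, test))
    den = sum(r ** 2 for r in ref)
    return math.sqrt(num / den)

# A copy scaled by 1.1 yields a discrepancy of 0.1 (a 10% amplitude error):
ref = [math.sin(0.1 * k) for k in range(200)]
scaled = [1.1 * r for r in ref]
m = rel_rms_error(ref, scaled)
```

Metrics sensitive to phase, frequency content, or non-stationarity require more elaborate constructions (e.g., time-frequency transforms) than this magnitude-only example.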
The two-dimensional (2D) Vortex Particle Method (VPM) is used for the discretization of the Navier-Stokes equations, including a pseudo-three-dimensional (Pseudo-3D) extension within an existing CFD solver. The Pseudo-3D Vortex Method considers the 3D structural behavior in aeroelastic analyses by positioning 2D fluid strips along a line-like structure. A novel turbulent Pseudo-3D Vortex Method is developed by combining the laminar Pseudo-3D VPM with a previously developed 2D method for the generation of free-stream turbulence. Using analytical derivations, it is shown that the fluid velocity correlation is maintained between the CFD strips.
Furthermore, a new method is presented for the determination of the complex aerodynamic admittance under deterministic sinusoidal gusts using the Vortex Particle Method. The sinusoidal gusts are simulated by modeling the wakes of flapping airfoils in the CFD domain with inflow vortex particles. Positioning a section downstream yields sinusoidal forces that are used for determining all six components of the complex aerodynamic admittance. A closed-form analytical relation is derived, based on an existing analytical model. With this relation, the inflow particles’ strength can be related to the target gust amplitudes a priori.
The developed methodologies are combined in a synergistic framework, which is applied to both fundamental examples and practical case studies. Where possible, the results are verified and validated. The outcome of this work is intended to shed some light on the complex wind–bridge interaction and suggest appropriate modeling strategies for an enhanced design.
Rechargeable lithium-ion batteries (LIBs) play a very significant role in power supply and storage. In recent decades, LIBs have attracted tremendous attention in mobile communication, portable electronics, and electric vehicles. Furthermore, global warming has become a worldwide issue due to the ongoing production of greenhouse gases, motivating solutions such as renewable sources of energy. Solar and wind energy are the most important renewable energy sources. As these technologies progress, they will require batteries to store the produced power and balance power generation and consumption. Nowadays, rechargeable batteries such as LIBs are considered one of the best solutions: they provide high specific energy and high rate performance, while their rate of self-discharge is low.
The performance of LIBs can be improved through the modification of battery characteristics. The size of the solid particles in the electrodes can impact the specific energy and cyclability of batteries: it affects the lithium content of the electrode, which is a vital parameter for the capacity and capability of a battery. There exist different sources of heat generation in LIBs, such as the heat produced during the electrochemical reactions and Joule heating due to the internal resistance of the battery. The size of the electrode's electroactive particles can directly affect the heat produced in the battery; it will be shown that smaller solid particles enhance the thermal characteristics of LIBs.
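As a hedged note, the heat sources mentioned above are often lumped together in the commonly cited Bernardi-type estimate (sign conventions vary across the literature):

```latex
\dot{Q} \;=\; I\,\big(U_{\mathrm{oc}} - V\big)
  \;-\; I\,T\,\frac{\partial U_{\mathrm{oc}}}{\partial T}
```

where $I$ is the applied current, $U_{\mathrm{oc}}$ the open-circuit potential, $V$ the cell voltage, and $T$ the temperature; the first term is the irreversible heat from overpotential (including ohmic losses) and the second the reversible entropic heat.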
Thermal issues such as overheating, temperature maldistribution in the battery, and thermal runaway have limited the applications of LIBs. Such thermal challenges reduce the life cycle of LIBs and may lead to dangerous conditions such as fire or even explosion. However, recent advances in the fabrication of advanced materials such as graphene and carbon nanotubes, with extraordinary thermal conductivity and electrical properties, offer new opportunities to enhance LIB performance. Since experimental work is expensive, our objective is to use computational methods to investigate the thermal issues in LIBs. Dissipating the heat produced in the battery can improve the cyclability and specific capacity of LIBs. In real applications, LIB packs consisting of several battery cells are used as the power source, so it is worth investigating the thermal characteristics of battery packs over their charge/discharge cycles at different applied current rates. To remove the heat produced in batteries, they can be surrounded by materials with high thermal conductivity. Paraffin wax absorbs a large amount of energy because of its high latent heat; this absorption occurs at a nearly constant temperature during the phase change. Moreover, the thermal conductivity of paraffin can be magnified with nano-materials such as graphene, CNTs, and fullerene to form a nano-composite medium. Improving the thermal conductivity around LIBs increases the heat dissipation from the batteries, which is a vital issue in battery thermal management systems. The application of two-dimensional (2D) materials has been on the rise since the exfoliation of graphene from bulk graphite. 2D materials are single layers of nanometer-scale thickness which show superior thermal, mechanical, and optoelectronic properties. They are potential candidates for energy storage and supply, particularly as electrode materials in lithium-ion batteries.
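The balance between heat generation and dissipation described above can be illustrated with a lumped-capacitance sketch of a single cell during discharge: Joule heating and reversible entropic heat are generated, while convection removes heat to the surroundings. All parameter values below are illustrative assumptions, not data from the thesis.

```python
def simulate_cell(I=2.0, R=0.05, dUdT=-1e-4, h=5.0, A=0.004,
                  m=0.045, cp=900.0, T_amb=298.15, t_end=3600.0, dt=1.0):
    """Lumped thermal model of one cell: explicit Euler integration of
    m*cp*dT/dt = Q_gen - Q_loss. Q_gen = I^2*R (Joule) + I*T*dU/dT (entropic);
    Q_loss = h*A*(T - T_amb) (convection). Values are illustrative only."""
    T = T_amb
    for _ in range(int(t_end / dt)):
        q_gen = I**2 * R + I * T * dUdT      # W, generated in the cell
        q_loss = h * A * (T - T_amb)         # W, removed by convection
        T += dt * (q_gen - q_loss) / (m * cp)
    return T

rise = simulate_cell() - 298.15
print(round(rise, 2), "K temperature rise after 1 h of discharge")
```

Raising the effective `h` (e.g. by embedding the cell in a paraffin/graphene composite, as discussed above) reduces the steady-state temperature rise proportionally.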
The high thermal conductivity of graphene and graphene-like materials can play a significant role in the thermal management of batteries. However, defects always exist in nano-materials, since there is no ideal fabrication process. Among the most important defects are nano-cracks, which can dramatically weaken the mechanical properties of a material. Newly synthesized crystalline carbon nitride with the stoichiometry C3N has attracted much attention due to its extraordinary mechanical and thermal properties. Another nano-material is phagraphene, which exhibits anisotropic mechanical characteristics that are ideal for the production of nanocomposites.
It shows ductile fracture behavior when subjected to uniaxial loading. It is therefore worth investigating the thermo-mechanical properties of these materials in both their pristine and defective states. We hope that the findings of this work will not only be useful for experimental and theoretical research, but will also help in the design of advanced electrodes for LIBs.
This thesis addresses an adaptive higher-order method based on a Geometry Independent Field approximaTion (GIFT) of polynomial/rational splines over hierarchical T-meshes (PHT/RHT-splines).
In isogeometric analysis, basis functions used for constructing geometric models in computer-aided design (CAD) are also employed to discretize the partial differential equations (PDEs) for numerical analysis. Non-Uniform Rational B-Splines (NURBS) are the most commonly used basis functions in CAD. However, they may not be ideal for numerical analysis where local refinement is required.
The alternative method GIFT deploys different splines for geometry and numerical analysis. NURBS are utilized for the geometry representation, while for the field solution, PHT/RHT-splines are used. PHT-splines not only inherit the useful properties of B-splines and NURBS, but also possess the capabilities of local refinement and hierarchical structure. The smooth basis function properties of PHT-splines make them suitable for analysis purposes. While most problems considered in isogeometric analysis can be solved efficiently when the solution is smooth, many non-trivial problems have rough solutions. For example, this can be caused by the presence of re-entrant corners in the domain. For such problems, a tensor-product basis (as in the case of NURBS) is less suitable for resolving the singularities that appear since refinement propagates throughout the computational domain. Hierarchical bases and local refinement (as in the case of PHT-splines) allow for a more efficient way to resolve these singularities by adding more degrees of freedom where they are necessary. In order to drive the adaptive refinement, an efficient recovery-based error estimator is proposed in this thesis. The estimator produces a recovery solution which is a more accurate approximation than the computed numerical solution. Several two- and three-dimensional numerical investigations with PHT-splines of higher order and continuity prove that the proposed method is capable of obtaining results with higher accuracy, better convergence, fewer degrees of freedom and less computational cost than NURBS for smooth solution problems. The adaptive GIFT method utilizing PHT-splines with the recovery-based error estimator is used for solutions with discontinuities or singularities where adaptive local refinement in particular domains of interest achieves higher accuracy with fewer degrees of freedom. 
This method also proves that it can handle complicated multi-patch domains for two- and three-dimensional problems outperforming uniform refinement in terms of degrees of freedom and computational cost.
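The recovery-based error estimation that drives the adaptive refinement can be illustrated in its simplest form: a smoothed ("recovered") gradient is compared element by element with the raw, discontinuous gradient of the numerical solution, and elements with the largest mismatch are marked for refinement. This 1D piecewise-linear sketch is only an analogy for the PHT-spline estimator of the thesis; the marking threshold and example function are assumptions.

```python
import numpy as np

def recovery_indicator(x, u):
    """Per-element ZZ-type indicator: L2 distance between the raw
    (discontinuous) gradient and a smoothed, recovered nodal gradient."""
    h = np.diff(x)
    g = np.diff(u) / h                        # raw element gradients
    gn = np.empty(len(x))                     # recovered nodal gradients
    gn[0], gn[-1] = g[0], g[-1]
    gn[1:-1] = 0.5 * (g[:-1] + g[1:])         # average of adjacent elements
    e0, e1 = gn[:-1] - g, gn[1:] - g          # gradient error at element ends
    # exact L2 norm of the linearly varying gradient error on each element
    return np.sqrt(h / 3.0 * (e0**2 + e0 * e1 + e1**2))

x = np.linspace(0.0, 1.0, 11)
u = np.sqrt(x)                                # rough solution: singular gradient at x = 0
eta = recovery_indicator(x, u)
marked = eta > 0.5 * eta.max()                # simple marking rule for local refinement
print(int(np.argmax(eta)))                    # 0: the element at the singularity
```

With a hierarchical basis such as PHT-splines, only the marked elements would be subdivided, which is exactly the localization that a tensor-product NURBS basis cannot provide.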
Railway systems are highly competitive compared with other means of transportation because of their distinct advantages in speed, convenience and safety. Therefore, the demand for railway transportation is increasing around the world. Constructing railway tracks and related engineering structures in areas with loose or soft cohesive subgrade usually leads to problems, such as excessive settlement, deformation and instability. Several remedies have been proposed to avoid or reduce such problems, including the replacement of soft soil and the construction of piles or stone columns.
This thesis aims to expand the geotechnical knowledge of how to improve subgrade ballasted railway tracks, using stone columns and numerical modeling for the railway infrastructure. Three aspects are considered: i) railway track dynamics modeling and validation by field measurements, ii) modeling and parametric studies on stone columns, and iii) studies on the linear and non-linear behavior of stone columns under the dynamic load of trains.
The first step of this research was to develop a reliable numerical model of a railway track. The finite element method in the time domain was used for either a 2D plane-strain or a 3D analysis. Individual methods for modeling a train load in 2D and 3D were implemented and are discussed in this thesis. The developed loading method was validated against vibration measurements obtained at three different railway tracks. Later, these numerical models were used to analyze the influence of stone column length and train speed on the stress field.
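A basic ingredient of such time-domain train load modeling is converting a moving axle load into time-dependent nodal forces: at each time step the point load is split between the two nodes of the element it occupies, using linear shape functions. The mesh, speed, and load magnitude below are illustrative assumptions, not the thesis' calibrated models.

```python
import numpy as np

def nodal_forces(load_pos, node_x, P=100e3):
    """Split a moving point load between the two nodes of the element it
    occupies, using linear shape functions. Illustrative values only."""
    f = np.zeros(len(node_x))
    i = np.searchsorted(node_x, load_pos) - 1     # element containing the load
    i = min(max(i, 0), len(node_x) - 2)
    L = node_x[i + 1] - node_x[i]
    xi = (load_pos - node_x[i]) / L               # local coordinate in [0, 1]
    f[i] = P * (1.0 - xi)                         # linear shape functions
    f[i + 1] = P * xi
    return f

nodes = np.linspace(0.0, 20.0, 21)                # track mesh, 1 m node spacing
v = 40.0                                          # assumed train speed in m/s
for t in (0.0, 0.05, 0.1125):                     # load position x = v * t
    f = nodal_forces(v * t, nodes)
    assert abs(f.sum() - 100e3) < 1e-6            # total load preserved at all times
print(np.nonzero(f)[0].tolist())  # [4, 5]: load at x = 4.5 m straddles nodes 4 and 5
```

In a full model, one such force history per axle (offset by the axle spacing) is applied to the rail nodes and the dynamic response is integrated in time.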
The performance of the treated ground depends on various parameters, such as the strength of the stone columns and their spacing, length and diameter. Therefore, the second step was devoted to a parameter study of stone columns as a unit cell under an axisymmetric condition. The results showed that even short stone columns were effective for settlement reduction, and that the area replacement ratio was the most influential parameter for their performance.
The third part of this thesis focuses on a hypothetical railway-track response to the passage of various train speeds and the influence of stone-column length. The stress-strain response of subgrade is analyzed under either an elastic–perfectly plastic or advanced constitutive model. The non-linear soil response in the finite element method and the impact of train speed and stone column length on railway tracks are also evaluated. Moreover, the reductions of induced vibration – in both a horizontal and a vertical direction – after improvement are investigated.
Steel high-pressure gas pipelines are designed using a deterministic safety concept. In the unaltered design state and under intended operation, the static load-bearing capacity of high-pressure gas pipelines is ensured.
Over the years, steel high-pressure gas pipelines undergo geometric changes, frequently caused by corrosion. The assessment of the static load-bearing capacity must then take these geometric changes into account.
The deterministic safety factors used in the design of new high-pressure gas pipelines cannot be applied to the assessment of existing corrosion-damaged pipelines, since they presuppose a defined loading and geometric state that, owing to the geometric influence of corrosion, no longer exists.
This thesis addresses the derivation of deterministic safety factors for the design of corrosion-damaged high-pressure gas pipelines on the basis of failure probabilities and presents a concept for their practical application.
The underlying goal of this work is to reduce the uncertainty in thermally induced stress prediction. This is accomplished by considering non-linear material behavior, notably path-dependent thermal hysteresis of the elastic properties.
The primary novel contributions of this work center on two aspects.
1. Broad material characterization and mechanistic material understanding, giving insight into why this class of material behaves in characteristic manners.
2. Development and implementation of a thermal hysteresis material model and its use to determine impact on overall macroscopic stress predictions.
Results highlight microcracking evolution and behavior as the dominant mechanism behind the material property complexity of this class of materials. Additionally, it was found that for the cases studied, thermal hysteresis behavior impacts the relevant peak stress predictions of a heavy-duty diesel particulate filter undergoing a drop-to-idle regeneration by less than ~15% for all conditions tested. It was also found that path-independent heating curves may be utilized under a linear solution assumption to simplify the analysis.
This work brings forth a newly conceived concept of a three-state, four-path, thermally induced microcrack evolution process; demonstrates experimental behavior that is consistent with the proposed mechanisms; develops a mathematical framework that describes the process; and quantifies the impact in a real-world application space.
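The essence of path-dependent thermal hysteresis in the elastic properties can be sketched as a modulus that follows different temperature branches on heating and on cooling, as microcracks close and re-open. The branch shapes and all numerical values below are illustrative assumptions, not the thesis' characterization data.

```python
def modulus(T, heating, E0=10.0e9, dE=4.0e9, T0=300.0, T1=1200.0):
    """Path-dependent (hysteretic) elastic modulus in Pa: the stiffness
    recovered between T0 and T1 differs between the heating and cooling
    branches. Branch exponents and values are illustrative assumptions."""
    s = min(max((T - T0) / (T1 - T0), 0.0), 1.0)   # normalized temperature
    if heating:
        frac = s**2          # stiffening delayed on heating (cracks close late)
    else:
        frac = s**0.5        # stiffness retained longer on cooling
    return E0 + dE * frac

T = 750.0  # same temperature, different thermal history -> different stiffness
E_heat = modulus(T, heating=True)
E_cool = modulus(T, heating=False)
print(E_cool > E_heat)  # True: a hysteresis loop opens between the two branches
```

A stress solver that looks up the modulus from the current temperature *and* the heating/cooling direction, rather than from temperature alone, captures exactly this path dependence.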
With roughly 9,300 plants and a 5.3% share of electricity generation, agricultural biogas plants contribute to renewable energy production in Germany. The optimization of these plants promotes the sustainable supply of electricity, heat and biomethane.
The result of this research is the development of a multi-method assessment approach for describing the quality of the input substrates as part of a holistic process optimization. This is achieved through the combined use of classical analytical methods, of organoleptic parameters — human sensory testing — and of the integration of process- and substrate-specific experiential knowledge. Based on semi-technical (pilot-scale) experiments, correlations and causalities between chemical-physical, biological, organoleptic and experience-based parameters are investigated. The development of a case base using case-based reasoning, a form of artificial intelligence, demonstrates the development and integration potential of automation, particularly with regard to new approaches such as Industrie 4.0. First solutions for overcoming the identified challenges of multi-method process assessment are presented.
Finally, an outlook on further research needs is given, and the transfer of the multi-method assessment approach to other fields of application, e.g. biowaste treatment or wastewater treatment plants, is suggested.
The zone method according to Hertz is a simplified method for the fire design of reinforced concrete members. To enable design by hand calculation, various assumptions and simplifications are made. In particular, thermal strains are neglected and the mechanical behavior is described by a reduced cross-section with constant material properties.
The aim of the present work is to transfer this simplified method into a non-linear method for the fire design of reinforced concrete compression members exposed to the standard temperature-time curve. To this end, the essential assumptions of the zone method are examined and a proposal for its further development is presented, based essentially on the modeling of the reinforcement under compression. This extended zone method is validated by recalculating laboratory tests, and its safety level is determined by a fully probabilistic analysis and by comparison with the advanced calculation method.
Living heritage sites are strongly connected to their historical, geographical, socio-political and cultural context. A descriptive narrative of the evolutionary process of the living heritage site of a Sufi shrine is undertaken in this research. It focuses on the changing relationship between the spatial and socio-cultural aspects over time. The larger or macro regional context is interrelated to the micro architectural context. The tangible heritage is defined by and intimately tied to the intangible aspects of the heritage. It is these constituting macro and micro elements and their interrelationships particularly through space and architecture that the research thesis explores in its documentation and analysis.
The Sufi shrine in the South Asian Pakistani context is representative of a larger culture of the precolonial era. It is an expression of an indigenous modernity belonging to a certain time period, place and community. The Sufi shrine as a building type has evolved from the precolonial period, starting in the golden age of the Muslim empires (9th–12th century), through the colonial age, when western modernity arrived, up to the current neoliberal paradigm of the post-independence period. The continued and evolving use of space, the ritual performances, and the multiple social groups using the site are elements whose documentation and analysis can establish the essential correlations that sustain its continuity as living heritage. The physical and social relation of the historic site to its immediate settlement context is also a significant element that preserves the socio-cultural context.
The chosen case of the Shrine of Shah Abdul Latif Bhitai, situated in the small town of Bhitshah in the province of Sindh, Pakistan, forms a unique example where the particular physical and socio-cultural environment forms the context within which the Sufi heritage lives and survives. It is well integrated within its context at multiple levels. What these levels are, and how their constituting elements integrate, is a major subject of this research. These questions form the background to some of the basic issues addressed in this doctoral thesis.
Given that living heritage sites are unique due to their particular association to the context, the case study method was used to gain deeper insight and understanding on the topic.
The construction and operation of a sanitary landfill (SLF) in the Philippines raises concerns about the regulation of the activities of the informal sector in the area. In anticipation of these directives, an association of informal waste reclaimers called the Uswag Calajunan Livelihood Association, Inc. (UCLA) was formed in May 2009. One option identified was a waste-to-energy activity through the production of fuel briquettes. With raw materials available in the area, what was lacking was an appropriate technology that would cater to their needs. This study therefore presents the case of UCLA and shows how socio-economic and technical aspects were integrated in the development and improvement of a briquetting technology needed for the production of quality briquettes as part of their income-generating activities. A non-experimental posttest-only design was utilized for the collection of descriptive information. The enhancement of the briquetting machine, from the first hand-press molder developed to the finalized design, is also described and discussed.
Results revealed that the improved briquetting technology withstood the wear and tear of operation, showing a significant (P<0.01) increase in the production rate (220 pcs/hr; 4 kg/hr) and bulk density (444.83 kg/m3) of the briquettes produced. The quality of the cylindrical briquettes in terms of bulk density, heating value (15.13 MJ/kg), moisture (6.2%), N and S closely met or fully met the requirements of DIN 51731. Based on the operating expenses, the briquettes may be marked up to Php0.25/pc (USD0.006) or Php15.00/kg (USD0.34) for profit generation. The potential daily earnings of Php130.00 (USD2.95) to Php288.56 (USD6.56) from producing briquettes are higher than the daily income of the majority of waste reclaimers, Php124.00 (USD2.82). The highly positive response (93%) on the usability of the briquettes and the willingness of the respondents (81%) to buy them when sold in the market indicate their promising potential as fuel in the nearby communities. The briquette production results from the case of UCLA can be considered a potential source of income, given the social, technical, economic and environmental feasibility demonstrated. This method of utilizing wastes in an urban setting of a developing country may also be recommended for testing or replication in places with similar socio-economic and physical set-ups.
Global society faces a huge challenge in implementing the human right of “access to sanitation”. It is increasingly accepted that the conventional approach to providing sanitation services is not suitable to solve this problem. This dissertation examines the possibility of enhancing “access to sanitation” for people living in areas with underdeveloped water and wastewater infrastructure systems. The idea is to follow an integrated approach to sanitation, which allows existing infrastructure to be mutually complemented by resource-based sanitation systems.
The notion of an “integrated sanitation system (iSaS)” is defined in this work, and guiding principles for iSaS are formulated. Furthermore, the implementation of iSaS is assessed using the example of a case study in the city of Darkhan in Mongolia. More than half of Mongolia’s population lives in settlements where yurts (the tents of nomadic people) are predominant. In these settlements (or “ger areas”), sanitation systems are non-existent and the hygienic situation is precarious.
An iSaS has been developed for the ger areas in Darkhan and tested over more than two years. Furthermore, a software-based model has been developed with the goal of describing and assessing different variations of the iSaS. The results of the assessment of material flows, monetary flows and communication flows within the iSaS are presented in this dissertation. The iSaS model is adaptable and transferable to the socio-economic conditions of other regions and climate zones.
In contemporary society, data representation is an important and essential part of many aspects of our daily lives. This thesis aims to contribute to our understanding on how people experience data and what role representational modality plays in the process of perception and interpretation. This research is grounded in phenomenology - I align my theoretical exploration to ideas and concepts from philosophical phenomenology, while also respecting the essence of a phenomenological approach in the choice and application of methods. Alongside offering a rich description of people’s experience of data representation, the key contributions I claim transcend four areas: theory, methods, design, and empirical findings. From a theoretical perspective, besides describing a phenomenology of human-data relations, I define, for the first time, multisensory data representation and establish a design space for the study of this class of representation. In relation to methodologies, I describe and deploy two methods to investigate different aspects of data experience. I blend the Repertory Grid technique with a focus group session and show how this adaption can be used to elicit rich design relevant insight. I also introduce the Elicitation Interview technique as a method for gathering detailed and precise accounts of human experience. Furthermore, I describe for the first time, how this technique can be used to elicit accounts of experience with data. My contribution to design relates to the creation of a series of bespoke data-driven artefacts, as well as describing an approach to design that I call Design Probes, which allows researchers to focus their enquiry on specific design features. To answer the research questions I set out in this thesis, I report on a series of empirical studies that used the aforementioned methods. 
The findings of these studies show, for instance, how certain representational modalities cause us to have heightened awareness of our body, how some are more difficult to interpret than others, how some rely heavily on instinct, and how each of them solicits us to reference external events during the process of interpretation. I conclude that a phenomenology of human-data relations shows how representational modality affects the way we experience data; it also shows how this experience unfolds, and it offers insight into particular moments such as the formation of meaning.
Piezoelectric materials are used in several applications as sensors and actuators, where they experience high stress and electric field concentrations, as a result of which they may fail due to fracture. Although there are many analytical and experimental works on piezoelectric fracture mechanics, there are very few studies on damage detection, which is a promising way to prevent the failure of these ceramics.
An iterative method to treat the inverse problem of detecting cracks and voids in piezoelectric structures is proposed. The extended finite element method (XFEM) is employed for solving the inverse problem, as it allows the use of a single regular mesh for a large number of iterations with different flaw geometries.
Firstly, the minimization of the cost function is performed by the Multilevel Coordinate Search (MCS) method. The XFEM-MCS methodology is applied to two-dimensional electromechanical problems where the flaws considered are straight cracks and elliptical voids. Then a numerical method based on the combination of the classical shape derivative and the level-set method for front propagation, as used in structural optimization, is utilized to minimize the cost function. The results obtained show that the XFEM-level set methodology is able to effectively determine the number of voids in a piezoelectric structure and their corresponding locations.
The XFEM-level set methodology is improved to solve the inverse problem of detecting inclusion interfaces in a piezoelectric structure. The material interfaces are implicitly represented by level sets which are identified by applying regularisation using total variation penalty terms. The formulation is presented for three dimensional structures and inclusions made of different materials are detected by using multiple level sets. The results obtained prove that the iterative procedure proposed can determine the location and approximate shape of material subdomains in the presence of higher noise levels.
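The iterative inverse-problem loop underlying these methods can be sketched in miniature: a cost function measures the mismatch between "measured" sensor data and the response predicted by a forward model for a trial flaw, and a derivative-free search minimizes it. The forward model below is a toy stand-in for the XFEM solve, and the coordinate search is a crude stand-in for MCS; all names and values are illustrative assumptions.

```python
import numpy as np

def forward(flaw_x, sensors):
    # toy forward model: a flaw perturbs the measured field in its neighborhood
    return np.exp(-((sensors - flaw_x) / 0.1) ** 2)

sensors = np.linspace(0.0, 1.0, 50)
measured = forward(0.37, sensors)            # synthetic "measurement"

def cost(flaw_x):
    # least-squares mismatch between trial prediction and measurement
    return np.sum((forward(flaw_x, sensors) - measured) ** 2)

# derivative-free coordinate search: probe left/right, halve the step on no improvement
best, step = 0.5, 0.25
while step > 1e-4:
    candidates = (best - step, best, best + step)
    best = min(candidates, key=cost)
    if best == candidates[1]:
        step *= 0.5
print(round(best, 3))  # recovers the flaw location near 0.37
```

In the actual methodology, each cost evaluation is an XFEM solve on the same regular mesh with a different flaw geometry, which is what makes the large number of iterations affordable.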
Piezoelectric nanostructures exhibit size-dependent properties because of surface elasticity and surface piezoelectricity. Initially, a study is performed to understand the influence of surface elasticity on the optimization of nano elastic beams. The boundary of the nanostructure is implicitly represented by a level set function, which is considered as the design variable in the optimization process. Two objective functions are chosen for the numerical examples: minimizing the total potential energy of a nanostructure subjected to a material volume constraint, and minimizing the least-square error with respect to a target displacement. The numerical examples demonstrate the importance of size and aspect ratio in determining how surface effects impact the optimized topology of nanobeams.
Finally, a conventional cantilever energy harvester with a piezoelectric nano layer is analysed. The presence of surface piezoelectricity in nano beams and nano plates leads to an increase in the electromechanical coupling coefficient. Topology optimization of these piezoelectric structures in an energy harvesting device, to further increase the energy conversion, is performed using an appropriately modified XFEM-level set algorithm.
Briefly, the two basic questions that this research is supposed to answer are:
1. How much fiber is needed, and how should fibers be distributed through a fiber-reinforced composite (FRC) structure, in order to obtain the optimal and reliable structural response?
2. How do uncertainties influence the optimization results and reliability of the structure?
To answer the above questions, a two-stage sequential optimization algorithm for finding the optimal content of short fiber reinforcements and their distribution in the composite structure, considering uncertain design parameters, is presented. In the first stage, the optimal amount of short fibers in an FRC structure with uniformly distributed fibers is determined in the framework of a Reliability Based Design Optimization (RBDO) problem. The presented model considers material, structural and modeling uncertainties. In the second stage, the fiber distribution optimization (with the aim of further increasing structural reliability) is performed by defining a fiber distribution function through a Non-Uniform Rational B-Spline (NURBS) surface. The advantages of using a NURBS surface as the fiber distribution function include: using the same data set for optimization and analysis; a high convergence rate due to the smoothness of the NURBS; mesh independency of the optimal layout; no need for any post-processing technique; and its non-heuristic nature. The output of stage 1 (the optimal fiber content for homogeneously distributed fibers) is the input of stage 2, whose output is the reliability index (β) of the structure with the optimal fiber content and distribution.
The first-order reliability method (used to approximate the limit state function) as well as different material models, including the rule of mixtures, Mori-Tanaka, an energy-based approach and stochastic multi-scale models, are implemented in different examples. The proposed combined model is able to capture the role of the available uncertainties in FRC structures through a computationally efficient algorithm using sequential, NURBS-based and sensitivity-based techniques. The methodology is successfully applied to the optimization of the interfacial shear stress in sandwich beams and of the internal cooling channels in a ceramic matrix composite.
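The reliability index computed by the first-order reliability method can be illustrated with the classical HL-RF iteration. For a linear limit state g(u) = a·u + b in standard normal space, β has the closed form |b|/‖a‖, which lets the sketch check itself; the limit state and its coefficients are illustrative, not from the thesis.

```python
import numpy as np

def form_beta(g, grad_g, n, tol=1e-10, max_iter=100):
    """HL-RF iteration: find the most probable failure point in standard
    normal space; beta is its distance from the origin."""
    u = np.zeros(n)
    for _ in range(max_iter):
        gv, gr = g(u), grad_g(u)
        # project onto the linearized limit state surface g(u) ~ gv + gr.(u_new - u) = 0
        u_new = (gr @ u - gv) * gr / (gr @ gr)
        if np.linalg.norm(u_new - u) < tol:
            u = u_new
            break
        u = u_new
    return np.linalg.norm(u)  # reliability index beta

a, b = np.array([3.0, 4.0]), 10.0
beta = form_beta(lambda u: a @ u + b, lambda u: a, 2)
print(round(beta, 6))  # 2.0, matching the closed form |b| / ||a|| = 10 / 5
```

For nonlinear limit states (such as those arising from the FRC material models), the same iteration is run with the numerical gradient of g, and the failure probability is approximated as Φ(−β).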
Finally, after some changes and modifications, combining isogeometric analysis, level set and point-wise density mapping techniques, the computational framework is extended to the topology optimization of piezoelectric/flexoelectric materials.
The present work deals with city hotels in Germany between energy efficiency and economic viability — a study based on the requirements of the German Energy Saving Ordinance (EnEV). It presents a qualitative and quantitative analysis of the energy efficiency of city hotels in Germany, based on the EnEV requirements, and of its economic viability. The analysis is built on various investigations of hotels, comprising empirical, energetic and economic studies. These investigations lead to clear results on several levels. The results show that optimizing the building services as well as improving the energetic quality of the hotels' building envelope are significant factors for increasing energy efficiency. It should be noted, however, that optimizing the building services of the hotels, particularly the ventilation and air-conditioning systems, proved especially effective. The effectiveness of these measures was demonstrated both with regard to increasing energy efficiency and with regard to economic viability.
The increasing success of BIM (Building Information Model) and the emergence of its implementation in 3D construction models have paved the way for improving the scheduling process. Recent research on the application of BIM in scheduling has focused on quantity take-off, duration estimation for individual trades, schedule visualization, and clash detection.
Several experiments have indicated that the lack of detailed planning causes about 30% non-productive time and stacking of trades. However, detailed planning has still not been implemented in practice, despite receiving a lot of interest from researchers. The reason lies in the huge amount and complexity of the input data. Creating a detailed plan is time-consuming: activities must be decomposed manually, and the relevant detailed information collected and calculated. Moreover, the coordination of detailed activities requires much effort to deal with their complex constraints.
This dissertation aims to support the generation of detailed schedules from a rough schedule. It proposes a model for the automated detailing of 4D schedules by integrating BIM, simulation and Pareto-based optimization.
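At the core of Pareto-based schedule optimization is a dominance filter over competing objectives (e.g. makespan vs. cost, both minimized): a schedule alternative survives only if no other alternative is at least as good in every objective and strictly better in one. The alternatives below are illustrative tuples, not thesis data.

```python
def pareto_front(alternatives):
    """Return the non-dominated subset of (objective1, objective2, ...) tuples,
    all objectives to be minimized."""
    def dominates(a, b):
        # a dominates b: no worse in every objective, strictly better in at least one
        return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))
    return [a for a in alternatives
            if not any(dominates(b, a) for b in alternatives)]

# (makespan in days, cost in arbitrary units) for candidate detailed schedules
schedules = [(30, 120), (28, 150), (35, 100), (30, 130), (40, 95)]
print(sorted(pareto_front(schedules)))
# [(28, 150), (30, 120), (35, 100), (40, 95)] -- (30, 130) is dominated by (30, 120)
```

The planner then chooses among the surviving trade-offs; in the proposed model, each candidate's objectives would be evaluated by simulating the detailed 4D schedule.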
Development of a Sustainability-based Sanitation Planning Tool (SusTA) for Developing Countries
(2014)
Background and Research Goal
Despite all the efforts in the sanitation sector, it is acknowledged that the world is not on track to meet the MDG sanitation target of reducing the number of people without access to sanitation by 2015. Furthermore, a large number of existing sanitation facilities in developing countries are out of order. This leads to the conclusion that, besides technical failures, the planning process in the sanitation sector has been ineffective. This ineffectiveness may be attributed to sanitation planners' lack of knowledge about the local conditions of a sanitation project. In addition, the sustainability of a technology is often approached from a fragmented perspective, which often leads to an unsustainable solution.
The dissertation is conducted within the framework of the Integrated Water Resources Management (IWRM) Indonesia project. The goal of this work is to contribute to the development of a methodology of a planning tool for sustainable sanitation technology. The tool is designed for sanitation planners in developing countries, where a top-down planning approach is common practice. The proposed tool enables comprehensive sustainability assessments (using the Helmholtz Concept of Sustainability as reference), taking into account local conditions.
State of the Science
In planning practice, many sanitation planning tools focus on technology selection. However, it has become evident that selection criteria for sustainable technologies are not always considered in the tools' frameworks. In other cases, when criteria are provided by a tool, there is no clear indication of the conditions to be fulfilled in order to meet them. Specifically, there is no reference to what is meant by a sustainable technology in a particular context, nor to how the sustainability of different technology options can be comprehensively assessed.
Research Methodology
Developing a planning tool is an empirical process combining theory and practical experience. Hence, the development of such a tool requires extensive observation, particularly of the interaction between stakeholders in the sanitation sector as well as between a technology and its environment. For this purpose, a case study within the project area was carried out. Pucanganom, a village representing strategic problems common in developing countries (e.g. top-down planning approaches, lack of involvement of beneficiaries in the planning process, lack of sustainability assessments), was selected as the case study area. After the in-depth case study, an analytical generalisation was developed to enable the tool's application in a broader context.
Results
The result of this research is a new tool – the Sustainability-based Sanitation Planning Tool (SusTA). SusTA enables comprehensive sustainability assessment in its five generic steps, namely: (1) analysis of stakeholders and sanitation policy in the region, (2) distance-to-target analysis on sanitation conditions in the region, (3) examination of physical and socio-economic conditions in the project area, (4) contextualisation of the technology assessment process in the project area, and (5) sustainability-oriented technology assessment at the project level. These steps are conducted at two levels of planning – the region and the project area – in order to identify the specific problems and interests which influence the selection of a sanitation system. Each planning step is equipped with tool elements (e.g. set of indicators, household questionnaires, technology assessment matrices) to support the analysis.
From the development of SusTA, it can be concluded that four elements are required for an effective and widely applicable sanitation planning tool: a sustainability concept, a participatory approach, a contextualisation framework and a modification framework. SusTA provides both a theoretical and a practical basis for assessing the sustainability of sanitation technologies in developing countries. Its main advantages for decision makers in these countries are that it is simple and transparent in its steps, does not require vast amounts of data, and needs no sophisticated computer program.
The development of hybrid technologies leads to many novel and efficient applications. Hybrid technologies are used whenever the exclusive use of a single technology or material does not yield the desired result. In such cases, combining different materials or technologies can create a system whose configuration represents an optimum of properties.
In civil engineering, development has always trended towards ever more slender, architecturally appealing structures. Currently, high-technology polymers and fibre materials, such as carbon fibres, enable very slender, lightweight and yet highly load-bearing structures. The economic aspect of developing load-bearing systems and structures almost always demands a cost-effective, efficient design and the optimization of load-bearing properties and cost factors. Hence there is often a need for a composite system in which different materials are combined in such a way that each material is placed where it resists a specific loading and exploits its load-bearing potential optimally. In this thesis, concrete examples demonstrate ways of using high-technology materials efficiently.
The polymer-fibre composite offers one way of extending the range of applications of adhesives, which on their own are usable only for thin-layer bonded joints. The fibres effectively counteract the adhesive's mechanical weak point, its low tensile strength. With fibre-reinforced adhesives, applications can be realized in which the adhesive is also used to transfer tensile forces. In addition, fillers offer a means of increasing the stiffness of the adhesive, which is advantageous under many mechanical loads. The combination of a particle-filled and additionally fibre-reinforced adhesive yields a composite material suitable for many different applications. Practical applications include the fabrication of façade elements, where the fibre-reinforced adhesive is used to join hollow aluminium profiles. Further fields of application extend to the tensile reinforcement of concrete members, where the fibre-reinforced adhesive takes on the role of tensile reinforcement on the concrete surface.
Aluminium-CFRP hybrid elements enable the fabrication of very efficient load-bearing systems that reduce both the weight of the structure and the operating costs of the building. The CFRP laminates are placed in the regions of an aluminium member subjected to the highest axial forces, which significantly increases the bending capacity of the resulting hybrid member. As a consequence, weight reductions compared with conventional aluminium members can be achieved. Furthermore, the outer cross-sectional dimensions of aluminium-CFRP hybrid elements can be reduced considerably. This simplifies the transport and assembly of such structures, a major advantage particularly for temporary, mobile structures.
The use of glass-polymer hybrid elements enables the construction of transparent load-bearing structures of unique visual quality. The design of a glass-polymer hybrid element provides redundant load-bearing behaviour, in which the stiffness and optical quality of the glass can be exploited optimally within the structural system. The polymer acts as a kind of safety element and takes over the load-bearing function of the glass in the event of glass breakage. This ability to announce system failure in advance forms the basis for the practical structural application of glass-polymer hybrid elements as load-bearing systems. Owing to the redundancy of their load-bearing behaviour, the failure of glass-polymer hybrid structures is recognizable from visual or structural signs, which makes structural design possible.
For the mechanical analysis of fundamental relationships in hybrid systems, engineering, analytical and numerical approaches can be employed. The engineering approaches are well suited to making estimates, which were often confirmed in subsequent experimental component tests. For detailed investigations, such as the analysis of the nonlinear stress distribution in mechanically loaded adhesive joints, a numerical treatment using the FEM offers advantages, since it permits very detailed evaluation in regions with high stress gradients. The FEM makes it possible to analyse structures at different scales and to include regions that are very difficult to access experimentally. Precise knowledge of the material behaviour of the substances to be analysed is an essential basis for building high-quality computational models.
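The nonlinear shear stress distribution in a loaded adhesive joint can be illustrated with the classical Volkersen shear-lag model; this closed-form model is a standard textbook example, not taken from the thesis, and all parameter values below are hypothetical:

```python
import math

# Illustrative sketch: Volkersen shear-lag model for the adhesive shear
# stress in a single-lap joint with identical adherends. It shows the
# characteristic nonlinear (cosh-shaped) distribution with peaks at the
# overlap ends; all values are hypothetical.

def volkersen_shear(x, P, L, G_a, t_a, E, t):
    """Adhesive shear stress at position x (x = 0 at the overlap centre).

    P   : load per unit joint width [N/mm]
    L   : overlap length [mm]
    G_a : adhesive shear modulus [MPa]
    t_a : adhesive layer thickness [mm]
    E, t: adherend Young's modulus [MPa] and thickness [mm]
    """
    c = L / 2.0
    # shear-lag parameter for identical adherends: omega^2 = 2*G_a/(t_a*E*t)
    omega = math.sqrt(2.0 * G_a / (t_a * E * t))
    return (P * omega / 2.0) * math.cosh(omega * x) / math.sinh(omega * c)

# Hypothetical aluminium lap joint
P, L, G_a, t_a, E, t = 200.0, 20.0, 1000.0, 0.2, 70000.0, 2.0
centre = volkersen_shear(0.0, P, L, G_a, t_a, E, t)
edge = volkersen_shear(L / 2.0, P, L, G_a, t_a, E, t)
print(centre, edge)  # the edge stress is markedly higher than the centre stress
```

Exactly this stress concentration at the overlap ends is what makes a detailed FEM evaluation in regions of high stress gradients worthwhile.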
This thesis presents two new methods for structural analysis, one in finite elements and one in isogeometric analysis. The first is an alternative alpha finite element method using triangular elements. In this method, the piecewise-constant strain field of the linear triangular finite element is enriched with additional strain terms governed by an adjustable parameter α, which yields an effectively softer stiffness formulation than the standard linear triangular element. To avoid transverse shear locking in the analysis of Reissner-Mindlin plates, the alpha finite element method is coupled with a discrete shear gap technique for triangular elements, which significantly improves the accuracy of standard triangular finite elements.
The basic idea behind this element formulation is to approximate displacements and rotations as in the standard finite element method, but to construct the bending, geometric and shear strains using node-based smoothing domains. Several numerical examples show that the alpha FEM agrees well with several other methods from the literature.
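The node-based smoothing idea can be sketched in a simplified one-dimensional analogue (not the thesis formulation, which works on triangular meshes): each node's smoothing domain covers half of every adjacent element, and the smoothed strain is the length-weighted average of the compatible element strains.

```python
# Illustrative 1D sketch of node-based strain smoothing: smoothed nodal
# strains are length-weighted averages of adjacent element strains.

def element_strains(x, u):
    """Compatible (piecewise-constant) strains of a 1D bar mesh."""
    return [(u[i + 1] - u[i]) / (x[i + 1] - x[i]) for i in range(len(x) - 1)]

def node_smoothed_strains(x, u):
    eps = element_strains(x, u)
    h = [x[i + 1] - x[i] for i in range(len(x) - 1)]
    smoothed = []
    for n in range(len(x)):
        # elements adjacent to node n (one at a boundary node, two inside)
        adj = [e for e in (n - 1, n) if 0 <= e < len(eps)]
        w = sum(h[e] for e in adj)
        smoothed.append(sum(h[e] * eps[e] for e in adj) / w)
    return smoothed

x = [0.0, 1.0, 3.0]  # node coordinates of a non-uniform two-element mesh
u = [0.0, 0.1, 0.5]  # nodal displacements
# the interior node receives the length-weighted average of both element strains
print(node_smoothed_strains(x, u))
```

In the actual αFEM, such smoothed strain contributions are blended with the compatible element strains via the parameter α, softening the stiffness of the otherwise overly stiff linear triangle.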
The second method is an isogeometric analysis based on rational splines over hierarchical T-meshes (RHT-splines). RHT-splines generalize Non-Uniform Rational B-splines (NURBS) to hierarchical T-meshes; each basis function is a piecewise bicubic polynomial over a hierarchical T-mesh. The RHT-spline basis functions not only inherit all the properties of NURBS, such as non-negativity, local support and partition of unity, but, more importantly, also allow geometric objects to be joined without gaps, preserve higher-order continuity everywhere, and permit local refinement and adaptivity. To drive the adaptive refinement, an efficient recovery-based error estimator is employed. For this purpose, an imaginary surface is defined, constructed from the RHT-spline basis functions, which serve as approximation and interpolation functions as well as for constructing the recovered stress components. Numerical investigations show that the proposed method achieves higher accuracy and convergence rates than NURBS-based results.
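The recovery-based error estimation that drives the refinement can be sketched in a simplified one-dimensional analogue (a Zienkiewicz-Zhu-type indicator with plain nodal averaging, not the thesis's RHT-spline construction): the discontinuous element stresses are recovered into a smoother nodal field, and the gap between raw and recovered stress flags the elements to refine.

```python
# Illustrative 1D sketch of a recovery-based error indicator: the elements
# where the recovered (smoothed) stress deviates most from the raw finite
# element stress are the candidates for local refinement.

def recover_nodal(values, h):
    """Length-weighted averaging of piecewise-constant element values to nodes."""
    nodal = []
    for n in range(len(values) + 1):
        adj = [e for e in (n - 1, n) if 0 <= e < len(values)]
        w = sum(h[e] for e in adj)
        nodal.append(sum(h[e] * values[e] for e in adj) / w)
    return nodal

def element_error_indicators(sigma_h, h):
    """Per-element gap between raw and recovered stress (midpoint rule)."""
    sigma_star = recover_nodal(sigma_h, h)
    indicators = []
    for e, s in enumerate(sigma_h):
        s_mid = 0.5 * (sigma_star[e] + sigma_star[e + 1])  # recovered stress at element midpoint
        indicators.append(abs(s_mid - s) * h[e] ** 0.5)
    return indicators

sigma_h = [1.0, 1.2, 2.0, 2.1]  # hypothetical element stresses
h = [1.0, 1.0, 1.0, 1.0]        # element sizes
eta = element_error_indicators(sigma_h, h)
print(max(range(len(eta)), key=lambda e: eta[e]))  # → 2 (element with the largest jump)
```

In the thesis, the recovered stress components are built from the RHT-spline basis itself on the imaginary surface, so the same local-refinement capability of the basis is reused for the estimator.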