Organization in the Sociotechnical Mixture: Media Rearrangements through the Introduction of SAP
(2017)
Everyday life in organizations consists above all of the media and technologies through which coordination between individual work processes is established.
This ethnographic study accompanies the introduction of an SAP system in a medium-sized company and shows how the existing fabric of practices and technologies undergoes a rearrangement. In the process, the complex sociotechnical mixture on which coordination and organization rest comes to light. It involves hardware as well as software, mechanical and electronic media, paper, printers, files, interfaces, and keyboards, but also the routines established over decades and the experiential knowledge of the employees.
Turbomachinery plays an important role in many forms of energy generation and conversion. It is therefore a promising target for optimization aimed at increasing the efficiency of energy use. In recent years, the use of automated optimization strategies in combination with numerical simulation has become increasingly popular in many fields of engineering. However, the complex interactions between fluid and solid mechanics encountered in turbomachines on the one hand, and the high computational expense needed to calculate the performance on the other, have prevented a widespread use of these techniques in this field. The objective of this work was the development of a strategy for efficient metamodel-based optimization of centrifugal compressor impellers, with the main focus on reducing the required numerical expense. The central idea was to incorporate preliminary information acquired from low-fidelity computation methods and empirical correlations into the sampling process, in order to identify promising regions of the parameter space. This information was then used to concentrate the numerically expensive high-fidelity computations of the fluid-dynamic and structural-mechanical performance of the impeller in these regions, while still maintaining good coverage of the whole parameter space.
The development of the optimization strategy can be divided into three main tasks. Firstly, the available preliminary information had to be researched and rated. This research identified loss models based on one-dimensional flow physics and empirical correlations as the best-suited method to predict the aerodynamic performance. The loss models were calibrated against available performance data to obtain a high prediction quality. As no sufficiently exact models for the prediction of the mechanical loading of the impeller could be identified, a metamodel based on finite element computations was chosen for this estimation. The second task was the development of a sampling method that concentrates samples in regions of the parameter space where the preliminary information predicts high-quality designs, while maintaining good overall coverage. As available methods such as rejection sampling or Markov chain Monte Carlo did not meet the requirements in terms of sample distribution and input correlation, a new multi-fidelity sampling method called "Filtered Sampling" has been developed. The last task was the development of an automated computational workflow encompassing geometry parametrization, geometry generation, grid generation, and computation of the aerodynamic performance and the structural-mechanical loading. Special emphasis was put on a geometry parametrization strategy based on fluid-mechanic considerations to prevent the generation of physically inexpedient designs. Finally, the optimization strategy, which utilizes the previously developed tools, was successfully employed to carry out three optimization tasks. The efficiency of the method was proven by the first and second test cases, in which an existing compressor design was optimized by the presented method. The results were comparable to optimizations that did not take preliminary information into account, while the required computational expense could be halved. In the third test case, the method was applied to generate a new impeller design. In contrast to the previous examples, this optimization featured larger variations of the impeller designs. Therefore, the applicability of the method to parameter spaces with significantly varying designs could be proven as well.
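The filtering idea described above can be sketched in a few lines: candidates are drawn from the parameter space and accepted with a probability that grows with a cheap low-fidelity quality prediction, while a floor acceptance probability preserves global coverage. This is only an illustrative sketch under assumed names and an assumed acceptance rule, not the actual "Filtered Sampling" algorithm of the thesis.

```python
import random

def filtered_sampling(n_samples, low_fidelity_score, dim, floor=0.2, seed=0):
    """Concentrate samples where a cheap low-fidelity model predicts good
    designs, while a floor acceptance probability keeps global coverage.
    (Illustrative sketch only; function names and the acceptance rule are
    assumptions, not the thesis's method.)"""
    rng = random.Random(seed)
    samples = []
    while len(samples) < n_samples:
        x = [rng.random() for _ in range(dim)]             # candidate in unit cube
        p = floor + (1.0 - floor) * low_fidelity_score(x)  # score mapped to [floor, 1]
        if rng.random() < p:                               # filter step
            samples.append(x)
    return samples

# Toy low-fidelity "loss model": designs near the centre are predicted better.
score = lambda x: 1.0 - min(1.0, 4.0 * sum((xi - 0.5) ** 2 for xi in x))
designs = filtered_sampling(200, score, dim=2)
```

The accepted designs cluster around the region the cheap model favours, yet the floor probability guarantees that samples still land everywhere in the cube.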
Open innovation in small and medium-sized enterprises (SMEs) has become strongly differentiated. Empirical evidence shows that SMEs take different paths in the open development of innovations. To extend the existing literature, this dissertation pursued two goals: 1) to uncover and describe in detail open innovation activities in SMEs from a process perspective, and 2) to explain why the opening of innovation processes differs among SMEs. A multiple case study analysis was used for this purpose. The objects of investigation were small, established high-tech companies from the new German states. The results reveal six process models of open innovation development, described as open innovation patterns. Descriptions of these patterns, taking into account the shaping innovation activities, the knowledge exchanged, the external actors involved, and the reasons for and against open innovation, convey an understanding of open innovation in SMEs that goes beyond the current state of research. Furthermore, the results show that entrepreneurial orientation explains why SMEs proceed differently in designing open innovation processes. The dissertation lays out in detail which open innovation patterns appear along the entrepreneurial orientation of SMEs (from non-entrepreneurial to entrepreneurial). The results provide both scientific implications and practical recommendations for companies.
In this thesis we study some complex and hypercomplex function spaces and classes, such as hypercomplex Besov spaces, the Bloch space, and Op spaces, as well as the class of basic sets of polynomials in several complex variables. It is shown that hyperholomorphic Besov spaces can be applied to characterize the hyperholomorphic Bloch space. Moreover, we consider BMOM and VMOM spaces.
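For orientation, the classical one-variable prototypes of the Bloch and Besov norms mentioned here read as follows (standard definitions from complex function theory, not notation taken from this thesis):

```latex
\|f\|_{\mathcal{B}} = |f(0)| + \sup_{z \in \mathbb{D}} \left(1-|z|^{2}\right) |f'(z)|,
\qquad
\|f\|_{B_p}^{p} = \int_{\mathbb{D}} \left(1-|z|^{2}\right)^{p-2} |f'(z)|^{p}\, dA(z),
\quad 1 < p < \infty .
```

The hypercomplex versions studied in the thesis replace holomorphic functions on the unit disc by hyperholomorphic (quaternion-valued) functions, with the derivative replaced by a suitable hypercomplex operator.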
On the mechanisms of shrinkage reducing admixtures in self-consolidating mortars and concretes
(2010)
Self Consolidating Concrete – a dream has come true!(?) Self Consolidating Concrete (SCC) is mainly characterised by its special rheological properties. Without any vibration this concrete can be placed and compacted under its own weight, without segregation or bleeding. The use of such concrete can increase productivity on construction sites and enable the use of a higher degree of well-distributed reinforcement for thin-walled structural members. This technology also reduces health risks since, in contrast to the traditional handling of concrete, the emission of noise and vibration is substantially decreased.
The specific mix design for self consolidating concretes was introduced around the 1980s in Japan. In comparison to normal vibrated concrete, an increased paste volume enables a good distribution of aggregates within the paste matrix, minimising the influence of aggregate friction on the concrete flow properties. The introduction of inert and/or pozzolanic additives as part of the paste provides the required excess paste volume without using disproportionately high amounts of plain cement. Due to further developments of concrete admixtures such as superplasticizers, the cement paste can gain self-levelling properties without causing segregation of aggregates.
Whereas SCC differs from normal vibrated concrete in its fresh attributes, it should reach similar properties in the hardened state. Due to the increased paste volume it usually shows higher shrinkage. Furthermore, owing to strength requirements, SCC is often produced at low water-to-cement ratios and hence may additionally suffer from autogenous shrinkage. This means that cracking caused by drying or autogenous shrinkage is a real risk for SCC and can compromise its durability, as cracks may serve as ingression paths for gases and salts or might permit leaching.
For the time being, SCC still exhibits increased shrinkage and cracking probability and hence may be discarded in many practical applications. This can be overcome by a better understanding of these mechanisms and of the ways to mitigate them. It is a target of this thesis to contribute to this.
How to cope with the increased shrinkage of SCC? In general, engineers are facing severe problems related to shrinkage and cracking. Even for normal and high performance concrete, containing moderate amounts of binder, a lot of effort was put into counteracting shrinkage and avoiding cracking. So far these efforts resulted in the knowledge of how to distribute cracks rather than how to avoid them. The most efficient way to decrease shrinkage turned out to be reducing the cement content of concrete down to a minimum but still sufficient amount. For SCC this obviously seems contradictory to the requirement of a high paste volume. Indeed, the potential for shrinkage reduction is limited to small-range modifications of the mix design following two major concepts. The first is the reduction of the required paste volume by optimising the aggregate grading curve. The second involves high-volume substitution of cement, preferentially using inert mineral additives. The optimization of grading curves is limited by several severe practical issues. Problems start with the availability of sufficiently fractionated aggregates. Usually attempts fail because of the enormous effort of composing application-optimized grading curves or mix designs. For durability reasons, the substitution rate for cement is limited depending on the application purpose and on the environmental exposure of the hardened concrete. In the early 1980s, Shrinkage Reducing Admixtures (SRA) were introduced to counteract drying shrinkage of concrete. The first publications explicitly dealing with SRA go back to Goto and Sato (Japan).
They were published in 1983, which is also the time when the SCC concept was introduced. SRA-modified concretes showed a substantial reduction of free drying shrinkage, contributing to crack prevention or at least a significant decrease of crack width in situations of restrained drying shrinkage.
Will shrinkage reducing admixtures contribute to a broader application of SCC? Within the last three decades, performance tests on several types of concrete proved the efficiency of shrinkage reducing admixtures. So, at least in terms of shrinkage and cracking, concretes in general and SCC in particular can benefit from SRA application. But "one man's meat is another man's poison", and with respect to the long-term performance of SRA-modified concretes there are still several issues to be clarified. One of these concerns the impact of SRAs on cement hydration: it is important to know whether changes in the hydrated phase composition induced by SRA result in undesired properties or decreased durability. Another issue is that the long-term shrinkage reduction has to be evaluated. For example, one can wonder whether SRA leaching may diminish or even eliminate the long-term shrinkage reduction, and whether the release of admixtures could be a severe environmental issue. It should also be noted that the basic mechanism or physical impact of SRA, as well as its implementation in recent models for shrinkage of concrete, is still being discussed. The present thesis tries to shed light on the role of SRA in self consolidating concrete, focusing on the three questions outlined above: basic mechanisms of cement hydration, physical impact on shrinkage, and the sustainability of SRA application.
Which contributions result from this study? Based on an extensive patent search, commercial SRAs could be identified as synergistic mixtures of non-ionic surfactants and glycols. This turns out to be the most important piece of information for more than one reason and is the subject of chapter 4.
An abundant literature focuses on the properties of these non-ionic surfactants. From this rich pool of information, the behaviour of SRAs and their interactions in cementitious systems were better understood through this thesis. For example, it could be anticipated how SRAs behave in strong electrolytes and how surface activity, i.e. surface tension, and interparticle forces might be affected. The synergy effect regarding enhanced performance induced by the presence of additional glycol in SRAs could be derived from the literature on the co-surfactant nature of glycols. Generally, it can now be said that glycols ensure that the non-ionic surfactant is properly distributed onto the paste interfaces to efficiently reduce surface tension. In the literature, the impact of organic matter on cement hydration was extensively studied for other admixtures such as superplasticizers. From there, the main impact factors related to the nature of these molecules could be identified. In addition, here again, the literature on non-ionic surfactants provides sufficient information to anticipate possible interactions of SRA with cement hydration. All in all, the extensive study on the nature of non-ionic surfactants presented in chapter 4 provides a fundamental understanding of the behaviour of SRAs in cement paste. Taking a step further to relate this to the impact on drying and shrinkage required reviewing recent models for drying and shrinkage of cement paste, as presented in chapter 3. There, it is shown that macroscopic thermodynamics of open pore systems can be successfully applied to predict drying-induced deformation, but that the surface activity of SRA still has to be implemented to explain the shrinkage reduction it causes.
Because of severe issues concerning the importance of capillary pressure on shrinkage, a new macroscopic thermodynamic model was derived in a way that meets the requirements to properly incorporate the surface activity of SRA. This is the subject of chapter 5. Based on theoretical considerations, the broader impact of SRA on drying cementitious matter could be outlined there. In a next step, cement paste was treated as a deformable, open, drying pore system. Thereby, the drying phenomena of SRA-modified mortars and concretes observed by other authors could be retrieved. This phenomenological consistency of the model constitutes an important contribution towards the understanding of SRA mechanisms. Another main contribution of this work came from introducing an artificial pore system, denominated the normcube. Using this model system, it could be shown how the evolution of the interfacial area and its properties interact in the presence of SRAs and how this impacts drying characteristics. In chapter 7, the surface activity of commercial SRAs in aqueous solution and synthetic pore solution was investigated. This shows how the electrolyte concentration of synthetic pore solution impacts the phase behaviour of SRA and, conversely, how the presence of SRA impacts the aqueous electrolyte solution. Whilst electrolytes enhance the self-aggregation of SRAs into micelles and liquid crystals, the presence of SRAs leads to the precipitation of minerals such as syngenite and mirabilite. Moreover, electrolyte solutions containing SRAs show miscibility gaps, where the liquid separates into isotropic micellar solutions and surfactant-rich reverse micellar solutions. The investigation of the surface activity and phase behaviour of SRA unravelled another important contribution: from macroscopic surface tension measurements, a relationship between the excess surface concentration of SRA, the bulk concentration of SRA, and the exposed interfacial area could be derived.
Based on this, it is now possible to predict the actual surface tension of the pore fluid in the course of drying once the evolution of the internal interfacial area is known. This is used later in this thesis to describe the specific drying and shrinkage behaviour of SRA-modified pastes and mortars. Calorimetric studies on normal Portland cement and composite binders revealed that SRAs alone show only a minor impact on hydration kinetics. In the presence of superplasticizer, however, cement hydration can be significantly decelerated. The delaying impact of SRA could be related to a selective deceleration of silicate phase hydration. Moreover, it could be shown that portlandite precipitation in the presence of SRA is changed, turning the compact habit into more or less layered structures. Thereby the specific surface increases, causing the amount of physically bound water to increase, which in turn reduces the maximum degree of hydration achievable in sealed systems. Extensive phase analysis shows that the hydrated phase composition of SRA-modified binders remains almost unaffected. The appearance of a temporary mineral phase could be detected by environmental scanning electron microscopy. As shown for synthetic pore solutions, syngenite precipitates during early hydration stages and is later consumed in the course of aluminate hydration, i.e. when sulphates are depleted. Moreover, for some SRAs, the salting-out phenomena supposed to be enhanced in strong electrolytes could also be shown to take place. The resulting organic precipitates could be identified by SEM-EDX in cement paste and by X-ray diffraction on solid residues of synthetic pore solution. The presence of SRAs was also found to impact the microstructure of well-cured cement paste: based on nitrogen adsorption measurements and mercury intrusion porosimetry, the amount of small pores is seen to increase with SRA dosage, whilst the overall porosity remains unchanged.
The question regarding the sustainability of SRA application is the subject of chapter 10. By means of leaching studies it could be shown that SRA can be leached significantly. The mechanism could be identified as a diffusion process, and a range of effective diffusion coefficients could be estimated. Thereby, the leaching of SRA can now be estimated for real structural members. However, while the admixture can be leached to a high extent in tank tests, the leaching rates in practical applications can be assumed to be low because of the much reduced contact with water. This could be proven by quantifying the admixture loss during long-term drying and rewetting cycles. Despite the loss of admixture, the shrinkage reduction is hardly affected. Moreover, the cyclic tests revealed that the total deformations in the presence of SRA remain low due to a lower extent of irreversible shrinkage deformations. Another important contribution towards a better understanding of the working mechanism of SRA in drying and shrinkage came from the same leaching tests. A significant fraction of SRA is found to be immobile and does not diffuse during leaching. This fraction of SRA is probably strongly associated with cement phases such as the calcium silicate hydrates or portlandite. Based on these findings, it is now also possible to quantify the amount of admixture active at the interfaces. This means that the evolution of surface tension in the course of drying can be approximated, which is a fundamental requirement for modelling shrinkage in the presence of SRA. The last experimental chapter of this study focuses on the working mechanism and impact of SRA on drying and shrinkage. Based on the thermodynamics of the open deformable pore system introduced in chapter 5, energy balances are set up using desorption and shrinkage isotherms of actual samples. Information on the distribution of SRA in the hydrated paste is used to estimate the actual surface tensions of the pore solution.
In other words, this is the first time that the surface activity of the SRA in the course of drying is fully accounted for. From the energy balances, the evolution and properties of the internal interface are then obtained. This made it possible to explain why SRAs impact drying and shrinkage and in which specific range of relative humidity they are active. Summarising the findings of this thesis, it can be said that the understanding of the impact of SRAs on hydration, drying, and shrinkage was advanced. Many of the new insights came from a careful investigation of the theory of non-ionic surfactants, something that the cement community had largely overlooked until now.
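The link between surface tension and the shrinkage driving force discussed in this abstract can be illustrated with the Young-Laplace equation for the capillary pressure in a drying pore: the pressure is proportional to the surface tension, so a surfactant that lowers the surface tension of the pore fluid lowers the stress driving shrinkage. The surface tension values and pore radius below are illustrative assumptions, not data from the thesis.

```python
import math

def capillary_pressure(gamma, r_pore, contact_angle_deg=0.0):
    """Young-Laplace capillary pressure (Pa) in a cylindrical pore of radius
    r_pore (m) for a liquid of surface tension gamma (N/m)."""
    return 2.0 * gamma * math.cos(math.radians(contact_angle_deg)) / r_pore

# Pure water vs. an SRA-bearing pore solution in a 10 nm pore
# (surface tension values are assumed, roughly: SRAs can halve gamma).
p_water = capillary_pressure(0.072, 10e-9)   # ~72 mN/m
p_sra   = capillary_pressure(0.035, 10e-9)   # reduced gamma
```

With these assumed values, the capillary pressure drops from roughly 14 MPa to 7 MPa, illustrating why a reduced surface tension translates directly into reduced drying shrinkage.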
Polymeric nanocomposites (PNCs) are considered for numerous nanotechnology applications such as nano-biotechnology, nano-systems, nanoelectronics, and nano-structured materials. Commonly, they are formed by a polymer (epoxy) matrix reinforced with a nanosized filler. The addition of rigid nanofillers to the epoxy matrix offers great improvements in fracture toughness without sacrificing other important thermo-mechanical properties. The physics of fracture in PNCs is rather complicated and is influenced by different parameters. Uncertainty in the predicted output is expected as a result of stochastic variance in the factors affecting the fracture mechanism. Consequently, evaluating the improved fracture toughness of PNCs is a challenging problem.
Artificial neural networks (ANN) and the adaptive neuro-fuzzy inference system (ANFIS) have been employed to predict the fracture energy of polymer/particle nanocomposites. The ANN and ANFIS models were constructed, trained, and tested based on a collection of 115 experimental datasets gathered from the literature. The performance evaluation indices of the developed ANN and ANFIS models showed relatively small errors, with high coefficients of determination (R²) and low root mean square error and mean absolute percentage error.
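The evaluation indices named here (R², RMSE, MAPE) are standard and easy to state. The sketch below implements them directly; the fracture-energy values are hypothetical placeholders, not the 115 experimental datasets used in the work.

```python
import math

def r2(y, yhat):
    """Coefficient of determination."""
    ybar = sum(y) / len(y)
    ss_res = sum((a - b) ** 2 for a, b in zip(y, yhat))
    ss_tot = sum((a - ybar) ** 2 for a in y)
    return 1.0 - ss_res / ss_tot

def rmse(y, yhat):
    """Root mean square error."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(y, yhat)) / len(y))

def mape(y, yhat):
    """Mean absolute percentage error (%)."""
    return 100.0 * sum(abs((a - b) / a) for a, b in zip(y, yhat)) / len(y)

# Hypothetical fracture-energy values (J/m^2) and model predictions.
y_true = [100.0, 150.0, 200.0, 250.0]
y_pred = [110.0, 145.0, 195.0, 260.0]
```

A prediction model is judged good when R² approaches 1 while RMSE and MAPE stay small relative to the measurement scatter.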
In the framework of uncertainty quantification of PNCs, a sensitivity analysis (SA) has been conducted to examine the influence of uncertain input parameters on the fracture toughness of polymer/clay nanocomposites. The phase-field approach is employed to predict the macroscopic properties of the composite considering six uncertain input parameters. The efficiency, robustness, and repeatability of five different SA methods are compared and evaluated comprehensively.
The Bayesian method is applied to develop a methodology for evaluating the performance of different analytical models used in predicting the fracture toughness of polymeric particle nanocomposites. The developed method considers the model and parameter uncertainties based on different reference data (experimental measurements) gained from the literature. Three analytical models differing in theory and assumptions were examined. The coefficients of variation of the model predictions with respect to the measurements are calculated using the approximated optimal parameter sets. Then, the model selection probability is obtained with respect to the different reference data.
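The model-selection step can be sketched as follows: under an assumed i.i.d. Gaussian error model, each analytical model's posterior probability is proportional to its likelihood times its prior. This is a minimal sketch of the Bayesian idea; the thesis's methodology also propagates parameter uncertainty, which is omitted here.

```python
import math

def model_probabilities(sq_errors, sigma2, priors=None):
    """Posterior model probabilities P(M_k | data) ~ P(data | M_k) * P(M_k)
    for an i.i.d. Gaussian error model with variance sigma2, where
    sq_errors[k] is the summed squared prediction error of model k.
    (Sketch only; constants common to all models cancel.)"""
    m = len(sq_errors)
    priors = priors or [1.0 / m] * m
    log_like = [-0.5 * sse / sigma2 for sse in sq_errors]   # up to a constant
    weights = [math.exp(l) * p for l, p in zip(log_like, priors)]
    total = sum(weights)
    return [w / total for w in weights]

# Three hypothetical analytical models; a smaller summed squared error
# against the reference data yields a higher selection probability.
probs = model_probabilities([2.0, 5.0, 9.0], sigma2=4.0)
```

The normalization makes the probabilities comparable across reference datasets, which is how a "model selection probability with respect to the different reference data" can be reported.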
Stochastic finite element modeling is implemented to predict the fracture toughness of polymer/particle nanocomposites. For this purpose, a 2D finite element model containing an epoxy matrix and rigid nanoparticles surrounded by an interphase zone is generated. The crack propagation is simulated by the cohesive segments method and phantom nodes. Considering the uncertainties in the input parameters, a polynomial chaos expansion (PCE) surrogate model is constructed, followed by a sensitivity analysis.
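A polynomial chaos expansion for a single standard-normal input can be sketched by projecting the model response onto probabilists' Hermite polynomials. Here a toy analytic response stands in for the finite element model; the thesis builds its PCE on FE outputs, so everything below is illustrative.

```python
import random

def hermite(n, x):
    """Probabilists' Hermite polynomial He_n(x) via the recurrence
    He_{k+1} = x*He_k - k*He_{k-1}."""
    if n == 0:
        return 1.0
    h_prev, h = 1.0, x
    for k in range(1, n):
        h_prev, h = h, x * h - k * h_prev
    return h

def pce_coefficients(f, order, n_mc=200_000, seed=1):
    """Estimate PCE coefficients c_k = E[f(X) He_k(X)] / k! for X ~ N(0,1)
    by Monte Carlo projection (orthogonality of the He_k makes the
    projection diagonal)."""
    rng = random.Random(seed)
    xs = [rng.gauss(0.0, 1.0) for _ in range(n_mc)]
    coeffs, fact = [], 1
    for k in range(order + 1):
        if k > 0:
            fact *= k
        ck = sum(f(x) * hermite(k, x) for x in xs) / (n_mc * fact)
        coeffs.append(ck)
    return coeffs

# Toy model response: f(x) = x^2 has the exact PCE  1*He_0 + 0*He_1 + 1*He_2.
c = pce_coefficients(lambda x: x * x, order=2)
```

Once the coefficients are known, variance-based sensitivity indices follow directly from them, which is why a PCE surrogate pairs naturally with the sensitivity analysis mentioned above.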
Increasing structural robustness is a goal of interest for the structural engineering community. The partial collapse of RC buildings is the subject of this dissertation. Understanding the robustness of RC buildings will guide the development of safer structures against abnormal loading scenarios such as explosions, earthquakes, fire, and/or long-term accumulation effects leading to deterioration or fatigue. Any of these may result in local immediate structural damage that can propagate to the rest of the structure, causing what is known as disproportionate collapse.
This work handles collapse propagation through various analytical approaches that simplify the mechanical description of reinforced concrete structures damaged by extreme accidental events.
Modern office architecture with lightweight rooms and large transparent façade areas, combined with high internal loads, aggravates the problem of summer overheating in buildings. Phase change materials (PCMs) offer an interesting possibility to reduce summer overheating in buildings without elaborate building services such as air conditioning. Thermal comfort in rooms finished with a PCM plaster can be increased significantly. This thesis investigates application possibilities and optimization potential of a PCM plaster experimentally and numerically. Material testing, experimental, numerical, and numerical-analytical methods were employed. Knowledge of the thermal parameters of the PCM plaster is indispensable for calculating the achievable temperature reductions. Measurements with a differential scanning calorimeter (DSC) were carried out to determine the latent heat, the qualitative melting and solidification behaviour, and the temperature interval in which the phase change takes place. For the experimental investigation of the PCM plaster, two identical lightweight test rooms were built. The rooms were measured in the verification building "Eiermannbau" of the collaborative research centre SFB 524 of the Bauhaus-Universität Weimar. After verifying that both rooms behave thermally identically, one room was plastered with the PCM plaster and the second with a comparable interior plaster without PCM. Thermocouples for temperature measurement inside the component, at the surface, and for the room air temperature were installed and connected to a data acquisition system. The outdoor air temperature and the global radiation at the site of the test rooms were recorded to create a climate dataset.
No closed-form analytical solution exists for the temperature distribution in a PCM component with a continuous phase change. Therefore a numerical approach was chosen in which the phase change in the temperature range T1 to T2 is represented by a temperature-dependent heat capacity c(T) within the extended Fourier heat conduction equation. The function c(T) is determined from the DSC measurements. The modelling was carried out with a finite-difference method based on the Fourier heat conduction equation. Within this work, a PCM module was developed and implemented in a building simulation program. With the new module, both the temperature profiles in a PCM component and its interaction with the room climate can be represented. The PCM module was successfully validated against numerous experimental data from the test rooms. Summer overheating hours can be reduced considerably by PCM in wall and ceiling elements. The PCM plaster is particularly suitable for lightweight buildings such as modern offices. In rooms that already possess sufficient thermal mass, the temperature reduction by PCM is only small. If the PCM cannot solidify during the night, its capacity for latent heat storage becomes exhausted. Increased night ventilation at correspondingly low outdoor temperatures leads to higher heat transfer and can thus contribute to a better discharge of the PCM. Within the dissertation, statements on the ideal phase change temperature as a function of the material used and the layer thickness could be made. The reduction of surface temperatures achieved with a PCM plaster under suitable boundary conditions amounts to 2.0 - 3.5 K for a 1 cm plaster layer and 3.0 - 5.0 K for a 3 cm plaster layer.
These values were determined both numerically and experimentally. The reduction of air temperatures due to conditioning of the room with PCM plaster amounts to approx. 1.0 - 2.5 K for a 1 cm plaster layer and 2.0 - 3.0 K for a 3 cm plaster layer under suitable thermal conditions. The operative temperature, an important comfort parameter, can be lowered by up to 4 K by using the PCM plaster. Thus thermal comfort in a room can be increased considerably with the help of a PCM plaster.
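The apparent-heat-capacity approach described in this abstract (the phase change folded into a temperature-dependent c(T) inside the heat conduction equation, solved by finite differences) can be sketched as follows. All material values are assumed for illustration and do not come from the thesis.

```python
def step_heat_1d(T, dt, dx, k, rho, c_of_T):
    """One explicit finite-difference step of the 1D heat equation
    rho * c(T) * dT/dt = k * d2T/dx2, with the phase change hidden in the
    temperature-dependent apparent heat capacity c(T).  Boundary nodes are
    held fixed (Dirichlet).  Illustrative sketch, not the thesis's PCM module."""
    T_new = T[:]
    for i in range(1, len(T) - 1):
        lap = (T[i - 1] - 2.0 * T[i] + T[i + 1]) / dx ** 2
        T_new[i] = T[i] + dt * k * lap / (rho * c_of_T(T[i]))
    return T_new

def c_pcm(T, c_base=1000.0, latent=110_000.0, T1=24.0, T2=27.0):
    """Apparent heat capacity (J/kg/K): a bump over the melting range T1..T2
    adds the latent heat on top of the sensible capacity (values assumed)."""
    return c_base + (latent / (T2 - T1) if T1 <= T <= T2 else 0.0)

T = [20.0] * 20 + [30.0] * 20          # initial temperature profile, degC
T = step_heat_1d(T, dt=1.0, dx=0.01, k=0.7, rho=1500.0, c_of_T=c_pcm)
```

Inside the phase-change range the apparent capacity is large, so the temperature rises slowly there while latent heat is absorbed, which is exactly the buffering effect the plaster exploits.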
The complexity of the welding process and the behaviour of the materials due to the energy input require a comprehensive approach. The development of numerical models and methods over the last 50 years enables the simulation, analysis, and optimization of welded joints with respect to temperature, microstructure, and residual stresses. Differentiating welding simulation into process, material, and structural simulation permits a targeted investigation of individual aspects. This subdivision partly requires a strong abstraction and idealization of reality through suitable assumptions and boundary conditions, which depend on the question under investigation. This makes calibration and verification of the models against experimental results necessary. The investigations carried out in this work address important questions regarding the numerical simulation and experimental investigation of the temperature field as well as the microstructural and residual stress state of MAG welding processes on fine-grained structural steel and duplex steel, CO2 laser beam welding of fused silica, cutting processes for specimens, and TIG post-weld treatment. With regard to weld and joint types, the work focused on welded joints relevant to construction practice as well as on particularities arising from the welding processes and different materials. Interpretation of the numerically and experimentally determined results allows the derivation of generally valid findings on the development of the temperature field and the formation of microstructures and residual stresses.
Prerequisites for a realistic welding simulation to determine temperature, phase fractions, and residual stresses are, besides the geometry models, suitable numerical models for the input of energy from the welding process, for the release of energy through convection and radiation to the surroundings, and for describing the thermal and mechanical material behaviour from room temperature up to the melting temperature.
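A common concrete choice for the energy-input model mentioned above is Goldak's double-ellipsoid heat source for arc welding. The sketch below evaluates its front quadrant with assumed parameters; it illustrates the kind of model meant here, not necessarily the exact model used in this work.

```python
import math

def goldak_front(x, y, z, Q, a, b, c, f=0.6):
    """Power density (W/m^3) of the front quadrant of Goldak's
    double-ellipsoid heat source, a standard model for arc welding energy
    input.  x is the welding direction, Q the effective arc power; a, b, c
    are the ellipsoid semi-axes and f the front heat fraction (all values
    used below are illustrative assumptions)."""
    coeff = 6.0 * math.sqrt(3.0) * f * Q / (a * b * c * math.pi * math.sqrt(math.pi))
    return coeff * math.exp(-3.0 * (x / a) ** 2
                            - 3.0 * (y / b) ** 2
                            - 3.0 * (z / c) ** 2)

# Peak power density at the arc centre for an assumed 4 kW effective power.
q0 = goldak_front(0.0, 0.0, 0.0, Q=4000.0, a=0.004, b=0.003, c=0.002)
```

In a finite element model, this density is evaluated at the integration points of elements near the moving arc and applied as a volumetric heat load; its parameters are typically calibrated against measured weld-pool cross-sections.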
Numerical analysis of masonry structures using homogeneous and discrete modeling strategies
(2004)
The focus of this work is the development, verification, implementation, and performance of numerical models for masonry within continuum and discontinuum mechanics. Macro-models describe masonry as a smeared equivalent continuum. Micro-models account for the structure of the masonry bond by modelling the individual units and joints. If, in addition, the heterogeneous stress state in the masonry caused by the lateral-strain interaction between unit and mortar is to be captured, a detailed micro-model that represents units and joints in their exact geometric dimensions is required. In contrast, simplified micro-modelling describes the joints with contact algorithms. Within the macro-modelling, new three-dimensional material models for various single-leaf and multi-leaf masonry types are derived. The presented models account for the anisotropy of stiffness and strength as well as of the hardening and softening behaviour. The numerical implementation uses modern elastoplastic algorithms within the implicit finite element method in the program ANSYS. Within the detailed micro-modelling, a new nonlinear model consisting of material descriptions for unit, mortar, and their bond is developed and implemented in ANSYS. The discontinuum-mechanical description of masonry within the simplified micro-modelling uses the explicit distinct element method with the programs UDEC and 3DEC. Using practical examples, problems of the load-bearing assessment of masonry arch bridges, possibilities for evaluating existing cracking and damage in historic masonry structures, and limit load analyses of masonry columns are evaluated and analysed.
Alkali-silica reaction (ASR) causes major problems in concrete structures because its rapid expansion can bring a structure to its serviceability limit well before the end of its design life. The factors that affect ASR vary greatly, including alkali and silica content, relative humidity, temperature, and the porosity of the cementitious matrix, which makes it a very complex phenomenon to model explicitly. With this in mind, the finite element method was used to build models that generate expansive pressures and propagate damage due to ASR under thermo-chemo-hygro-mechanical loading. Since ASR initiates in mesoscopic regions of the concrete, the accumulated effects of its expansion escalate to the macroscale, with map cracking developing on the concrete surface; hence, the damage model was solved and the ASR phenomenon simulated at both the macroscale and the mesoscale. The macroscale model captures the effects of ASR expansion as a whole and shows how the expansion develops under moisture, thermal, and mechanical loading. The results of the macroscale modeling are smeared throughout the structure and are sufficient to show how damage due to ASR expansion is oriented. The mesoscale model, by contrast, is heterogeneous and shows how the difference in material properties between aggregates and the cementitious matrix facilitates ASR expansion. With both models, the ASR phenomenon under thermo-chemo-hygro-mechanical loading can be better understood.
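The chemical advancement of the reaction in such thermo-chemo-hygro-mechanical models is often described by Larive-type kinetics. The abstract does not name the specific law used, so the following Python sketch only illustrates this common choice; the parameter values are assumed for illustration.

```python
import math

def asr_extent(t, tau_c, tau_l):
    """Larive-type reaction extent xi(t) in [0, 1]:
    xi(t) = (1 - exp(-t/tau_c)) / (1 + exp(-(t - tau_l)/tau_c)),
    with characteristic time tau_c and latency time tau_l (same unit as t)."""
    return (1.0 - math.exp(-t / tau_c)) / (1.0 + math.exp(-(t - tau_l) / tau_c))

def asr_strain(t, eps_inf, tau_c, tau_l):
    """Free ASR expansion strain, eps_inf being the ultimate expansion."""
    return eps_inf * asr_extent(t, tau_c, tau_l)

# Illustrative values: 0.3 % ultimate expansion, tau_c = 30 d, tau_l = 60 d
for day in (0, 60, 120, 300):
    print(day, f"{asr_strain(day, 0.003, 30.0, 60.0):.5f}")
```

The latency time shifts the sigmoidal onset of the expansion, while the characteristic time controls how quickly the reaction saturates; in practice both depend strongly on temperature and humidity, which is where the thermo-hygral coupling enters.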
Finite element simulations of dynamically excited structures are mainly influenced by the mass, stiffness, and damping properties of the system, as well as by external loads. The prediction quality of dynamic simulations of vibration-sensitive components depends significantly on the use of appropriate damping models. Damping phenomena have a decisive influence on the vibration amplitude and the frequencies of the vibrating structure. However, developing realistic damping models is challenging due to the multiple sources of energy dissipation, such as material damping, different types of friction, or various interactions with the environment.
This thesis focuses on thermoelastic damping, which is the main cause of material damping in homogeneous materials. The effect is caused by temperature changes due to mechanical strains. In vibrating structures, temperature gradients arise in adjacent tension and compression areas. Depending on the vibration frequency, they result in heat flows, leading to increased entropy and the irreversible transformation of mechanical energy into thermal energy.
The central objective of this thesis is the development of efficient simulation methods to incorporate thermoelastic damping in finite element analyses based on modal superposition. The thermoelastic loss factor is derived from the structure's mechanical mode shapes and eigenfrequencies. In subsequent analyses that are performed in the time and frequency domain, it is applied as modal damping.
Two approaches are developed to determine the thermoelastic loss in thin-walled plate structures as well as in three-dimensional solid structures. The realistic representation of the dissipation effects is verified by comparing the simulation results with experimentally determined data. To this end, an experimental setup is developed that measures material damping while excluding other sources of energy dissipation.
The three-dimensional solid approach is based on determining the entropy, and thus the heat, generated per vibration cycle, which, related to the total strain energy, is a measure of the thermoelastic loss. For thin plate structures, the share of bending energy in a modal deformation is calculated and summarized in the so-called Modal Bending Factor (MBF). Since thermoelastic loss is highest in a state of pure bending, the MBF enables a quantitative classification of the mode shapes with respect to their thermoelastic damping potential.
The results of the developed simulations are in good agreement with the experimental results and are appropriate to predict thermoelastic loss factors. Both approaches are based on modal superposition with the advantage of a high computational efficiency. Overall, the modeling of thermoelastic damping represents an important component in a comprehensive damping model, which is necessary to perform realistic simulations of vibration processes.
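For orientation, the classical Zener model gives a closed-form approximation of the thermoelastic loss factor for a thin beam in bending; the modal approaches developed in the thesis generalize beyond this case. The material values below are typical handbook numbers for aluminium, not data from the thesis.

```python
import math

def zener_loss_factor(freq_hz, h, E, alpha, rho, cp, k, T0=293.0):
    """Classical Zener approximation of the thermoelastic loss factor Q^-1
    for a thin beam of thickness h vibrating in bending.
    tau is the thermal relaxation time across the beam thickness."""
    omega = 2.0 * math.pi * freq_hz
    tau = h ** 2 * rho * cp / (math.pi ** 2 * k)   # relaxation time [s]
    delta = E * alpha ** 2 * T0 / (rho * cp)       # relaxation strength [-]
    return delta * omega * tau / (1.0 + (omega * tau) ** 2)

# Aluminium beam, 1 mm thick (assumed handbook values)
q_inv = zener_loss_factor(freq_hz=1000.0, h=1e-3, E=70e9, alpha=23e-6,
                          rho=2700.0, cp=900.0, k=237.0)
print(f"Q^-1 = {q_inv:.2e}")
```

The loss factor peaks where the vibration period matches the thermal relaxation time (omega * tau = 1); far above or below that frequency the process is nearly adiabatic or isothermal, and the dissipation drops.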
The main categories of wind effects on long-span bridge decks are buffeting, flutter, and vortex-induced vibrations (VIV), which are often critical for the safety and serviceability of the structure. With the rapid increase of bridge spans, controlling wind-induced vibrations of long-span bridges has become a problem of great concern. Developments in vibration control theory have led to the wide use of tuned mass dampers (TMDs), which have proven effective, both analytically and experimentally, at suppressing these vibrations. Fire incidents are also of special interest for the stability and safety of long-span bridges because of the complex triple interaction between the deck, the incoming wind flow, and the thermal boundary layer of the surrounding air.
This work begins by analyzing the buffeting response and flutter instability of three-dimensional computational structural dynamics (CSD) models of a cable-stayed bridge under strong wind excitation, using the ABAQUS finite element software. Optimization and global sensitivity analysis target the vertical and torsional vibrations of the segmental deck through three aerodynamic parameters: wind attack angle, deck streamlined length, and viscous damping of the stay cables. The numerical simulation results, in conjunction with the frequency analysis, confirmed the existence of these vibrations and showed that further theoretical studies are possible with a high level of accuracy. The model was validated by comparing the lift and moment coefficients of the CSD models against two benchmarks from the literature, flat plate theory and the flat plate results of Xavier and co-authors, with very good agreement. Optimum values of the parameters were identified. Global sensitivity analysis based on Monte Carlo sampling was used to formulate the surrogate models and calculate the sensitivity indices. The relative effect and role of each parameter on the aerodynamic stability of the structure were quantified, providing useful insight into the stability of the long-span bridge.
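A variance-based (Sobol) global sensitivity analysis of the kind described above can be sketched with a Monte Carlo pick-freeze estimator. The toy linear response below merely stands in for the aerodynamic model, so its first-order indices are known analytically (a_i^2 / sum of a_j^2) and can be checked against the estimate.

```python
import random

random.seed(0)

def model(x1, x2, x3):
    """Toy linear response standing in for the aerodynamic output;
    the analytic first-order index of x_i is a_i^2 / sum(a_j^2)."""
    return 4.0 * x1 + 2.0 * x2 + 1.0 * x3

N = 200_000
A = [[random.random() for _ in range(3)] for _ in range(N)]
B = [[random.random() for _ in range(3)] for _ in range(N)]

fA = [model(*x) for x in A]
fB = [model(*x) for x in B]
mean = sum(fA) / N
var = sum((y - mean) ** 2 for y in fA) / N

S = []
for i in range(3):
    # A_B^i: sample matrix A with column i taken from B (Saltelli-type estimator)
    fABi = [model(*[b[j] if j == i else a[j] for j in range(3)])
            for a, b in zip(A, B)]
    Si = sum(yb * (yabi - ya)
             for yb, yabi, ya in zip(fB, fABi, fA)) / N / var
    S.append(Si)

print("first-order Sobol indices:", [round(s, 2) for s in S])
```

For the coefficients chosen here the analytic indices are 16/21, 4/21, and 1/21; a real application replaces `model` with the (expensive) simulation or its surrogate.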
2D computational fluid dynamics (CFD) models of the decks were created, supported by MATLAB codes, to simulate and analyze vortex shedding and VIV of the deck. Three aerodynamic parameters (wind speed, deck streamlined length, and dynamic viscosity of the air) were studied for their effects on the kinetic energy of the system and on the shapes and patterns of the vortices. Two benchmarks from the literature, von Kármán and Dyrbye and Hansen, were used to validate the numerical simulations of vortex shedding for the CFD models, with good agreement between the results. The Latin hypercube experimental method was used to generate the surrogate models for the kinetic energy of the system and the generated lift forces. Variance-based sensitivity analysis was used to calculate the main sensitivity indices and the interaction orders of each parameter. The kinetic energy approach performed very well in revealing the relative effect and role of each parameter in the generation of vortex shedding and in predicting early VIV and the critical wind speed.
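The critical wind speed for the onset of vortex-induced vibration is commonly estimated from the Strouhal relation f_s = St · U / D, with lock-in expected where f_s approaches a natural frequency of the deck. The deck depth, mode frequency, and Strouhal number below are illustrative assumptions, not values from the thesis.

```python
def shedding_frequency(U, D, St=0.2):
    """Vortex shedding frequency from the Strouhal relation f_s = St * U / D.
    U: mean wind speed [m/s], D: deck depth [m], St: Strouhal number [-]."""
    return St * U / D

def critical_wind_speed(f_n, D, St=0.2):
    """Wind speed at which f_s matches a natural frequency f_n of the deck,
    i.e. the expected onset of lock-in: U_cr = f_n * D / St."""
    return f_n * D / St

# Assumed example: 3 m deep deck, 0.35 Hz vertical mode, St = 0.11 (bluff deck)
U_cr = critical_wind_speed(f_n=0.35, D=3.0, St=0.11)
print(f"U_cr = {U_cr:.1f} m/s")
```

The Strouhal number itself depends on the cross-section shape (and, as the two-way FSI results below L74 indicate, on the structural motion), which is why it is treated as a parameter rather than a constant.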
Both one-way and two-way fluid-structure interaction (FSI) simulations of the 2D deck models were executed to calculate the shedding frequencies for the associated wind speeds in the lock-in region, in addition to the lift and drag coefficients. Validation against the results of Simiu and Scanlan and the flat plate theory compiled by Munson and co-authors, respectively, showed a high level of agreement in all cases. Compared with one-way FSI, two-way FSI yielded lower critical wind speeds and shedding frequencies. The two-way FSI approach also predicted an appreciable decrease in the lift and drag forces, as well as earlier VIV at lower critical wind speeds, with lock-in regions occurring at lower natural frequencies of the system. These conclusions help designers to plan efficiently for the design and safety of the long-span bridge before and after construction.
A multiple tuned mass damper (MTMD) system was applied to the three-dimensional CSD models of the cable-stayed bridge to analyze its efficiency in suppressing both wind-induced vertical and torsional vibrations of the deck by optimizing three design parameters of the TMDs (mass ratio, frequency ratio, and damping ratio), supported by actual field data, a minimax optimization technique, MATLAB codes, and the Fast Fourier Transform. The optimum values of each parameter were identified and validated against two benchmarks from the literature, first Wang and co-authors and then Lin and co-authors, with good agreement. The Box-Behnken experimental method was used to formulate the surrogate models representing the control efficiency for the vertical and torsional vibrations, and Sobol sensitivity indices were calculated for the design parameters along with their interaction orders. The optimization results revealed better performance of the MTMDs in controlling both the vertical and the torsional vibrations for higher mode shapes. Furthermore, the calculated relative effect of each design parameter helps increase the control efficiency of the MTMDs, while the surrogate models greatly simplify the analysis for vibration control.
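Classical starting values for the TMD design parameters mentioned above (frequency ratio and damping ratio as functions of the mass ratio) are given by Den Hartog's optimum for an undamped main system; optimization as performed in the thesis refines such values. A minimal sketch:

```python
import math

def den_hartog_tmd(mu):
    """Den Hartog's classical optimum for a TMD attached to an undamped
    main mass, with mu = m_tmd / m_modal:
    frequency ratio f_opt = 1 / (1 + mu),
    damping ratio  z_opt = sqrt(3 * mu / (8 * (1 + mu)**3))."""
    f_opt = 1.0 / (1.0 + mu)
    z_opt = math.sqrt(3.0 * mu / (8.0 * (1.0 + mu) ** 3))
    return f_opt, z_opt

# 2 % mass ratio, a common starting point for bridge deck TMDs (assumed)
f_opt, z_opt = den_hartog_tmd(0.02)
print(f"f_opt = {f_opt:.4f}, zeta_opt = {z_opt:.4f}")
```

Larger mass ratios detune the damper further below the structural frequency and call for more damping, which is one reason the mass ratio appears as a key variable in the surrogate models.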
A novel structural modification approach was adopted to eliminate the early coupling between the bending and torsional mode shapes of the cable-stayed bridge: two lateral steel beams were added to the middle span of the structure. Frequency analysis was used to obtain the natural frequencies of the first eight mode shapes before and after the structural modification. Numerical simulations of wind excitation were conducted for the 3D model of the cable-stayed bridge, and both vertical and torsional displacements were calculated at the mid-span of the deck to analyze the bending and torsional stiffness of the system before and after the modification. The frequency analysis after adding the lateral steel beams showed that the coupling between the vertical and torsional mode shapes was shifted to higher natural frequencies, and thus to higher, rarely occurring critical wind speeds, with a high factor of safety.
Finally, thermal fluid-structure interaction (TFSI) and coupled thermal-stress analyses were used to identify the effects of transient and steady-state heat transfer on the VIV and fatigue of the deck due to fire incidents. Numerical simulations of TFSI models of the deck calculated the lift and drag forces and determined the lock-in regions, first using FSI models and then using TFSI models. The vorticity and thermal fields of three fire scenarios were simulated and analyzed. The benchmark of Simiu and Scanlan was used to validate the TFSI models, with good agreement between the results. The extended finite element method (XFEM) was adopted to create 3D models of the cable-stayed bridge and simulate the fatigue of the deck under the three fire scenarios. The benchmark of Choi and Shin was used to validate the damaged models of the deck, again with good agreement. The results revealed that the TFSI models and the coupled thermal-stress models are significant for detecting earlier vortex-induced vibrations and lock-in regions, for predicting damage and fatigue of the deck, and for identifying the role of wind-induced vibrations in accelerating damage growth and the collapse of the structure in critical situations.
Railway systems are highly competitive compared with other means of transportation because of their distinct advantages in speed, convenience and safety. Therefore, the demand for railway transportation is increasing around the world. Constructing railway tracks and related engineering structures in areas with loose or soft cohesive subgrade usually leads to problems, such as excessive settlement, deformation and instability. Several remedies have been proposed to avoid or reduce such problems, including the replacement of soft soil and the construction of piles or stone columns.
This thesis aims to expand the geotechnical knowledge of how to improve the subgrade of ballasted railway tracks using stone columns, based on numerical modeling of the railway infrastructure. Three aspects are considered: i) railway track dynamics, modeled and validated against field measurements; ii) modeling and parametric studies of stone columns; and iii) studies of the linear and non-linear behavior of stone columns under dynamic train loads.
The first step of this research was to develop a reliable numerical model of a railway track. The finite element method in the time domain was used for both 2D plane-strain and 3D analyses. Individual methods for modeling a train load in 2D and 3D were implemented and are discussed in this thesis. The developed loading method was validated against vibration measurements obtained from three different railway tracks. These numerical models were later used to analyze the influence of stone column length and train speed on the stress field.
The performance of the treated ground depends on various parameters, such as the strength of the stone columns and their spacing, length, and diameter. Therefore, the second step was devoted to a parameter study of stone columns as a unit cell under axisymmetric conditions. The results showed that even short stone columns were effective for settlement reduction, and that the area replacement ratio was the most influential parameter in their performance.
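In the axisymmetric unit-cell idealization, the area replacement ratio follows from the column diameter and the grid spacing via the equivalent cell diameter; the 1.05 and 1.13 factors are the usual values for triangular and square grids. The column layout below is an assumed example, not data from the thesis.

```python
def equivalent_diameter(s, pattern="triangular"):
    """Equivalent unit-cell diameter d_e for a stone column grid with
    spacing s: 1.05*s for a triangular grid, 1.13*s for a square grid."""
    factor = {"triangular": 1.05, "square": 1.13}[pattern]
    return factor * s

def area_replacement_ratio(d_col, s, pattern="triangular"):
    """a_s = A_column / A_cell for the axisymmetric unit-cell model."""
    d_e = equivalent_diameter(s, pattern)
    return (d_col / d_e) ** 2

# Assumed layout: 0.8 m columns on a 2.0 m triangular grid
print(f"a_s = {area_replacement_ratio(0.8, 2.0):.3f}")
```

Settlement-reduction charts for stone-column-improved ground are typically entered with exactly this ratio, which is why it dominates the parameter study.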
The third part of this thesis focuses on a hypothetical railway-track response to the passage of trains at various speeds and the influence of stone-column length. The stress-strain response of the subgrade is analyzed under either an elastic-perfectly plastic or an advanced constitutive model. The non-linear soil response in the finite element method and the impact of train speed and stone column length on railway tracks are also evaluated. Moreover, the reductions of induced vibration after improvement, in both the horizontal and the vertical direction, are investigated.
The current thesis presents research on new methods of citizen participation based on digital technologies. The focus of the research lies on decentralized methods of participation in which citizens take the role of co-creators. The research first reviewed the literature on citizen participation, its origins, and the different paradigms that have emerged over the years. The literature review also examined the influence of technologies on participation processes and the theoretical frameworks that have emerged to understand the introduction of technologies in the context of urban development. The literature review generated the conceptual basis for the further development of the thesis.
The empirical research begins with a survey of technology-enabled participation applications, examining the roles and structures that emerge from the introduction of technology. The results showed that cities use technology mostly to control and monitor urban infrastructure and are rather reluctant to give citizens the role of co-creators. Based on these findings, three case studies were developed. Digital tools for citizen participation were conceived and introduced for each case study, and the citizens' adoption of and reaction to the tools were observed using three data collection methods.
The results of the case studies consistently showed that previous participation and engagement with informal citizen participation are a determining factor in the potential adoption of digital tools for decentralized engagement. Based on these results, the case studies proposed methods and frameworks that can be used for the conception and introduction of technologies for decentralized citizen participation.
From a macroscopic point of view, failure within concrete structures is characterized by the initiation and propagation of cracks. In the first part of the thesis, a methodology for macroscopic crack growth simulations in concrete structures is introduced, using a cohesive discrete crack approach based on the extended finite element method. Particular attention is paid to the criteria for crack initiation and crack growth. A drawback of the macroscopic simulation is that the real physical phenomena leading to the nonlinear behavior are modeled only phenomenologically. For concrete, the nonlinear behavior is characterized by the initiation of microcracks that coalesce into macroscopic cracks. In order to obtain a higher resolution of these failure zones, a mesoscale model for concrete is developed that explicitly models the particles, the mortar matrix, and the interfacial transition zone (ITZ). Its essential features are a representation of the particles using a prescribed grading curve, a cohesive material formulation for the ITZ, and a combined damage-plasticity model for the mortar matrix. In contrast to numerical simulations, the response of real structures exhibits stochastic scatter, due, for example, to the intrinsic heterogeneities of the structure. In mesoscale models, these intrinsic heterogeneities are simulated by a random distribution of particles and by spatially variable material parameters modeled as random fields. Two major problems are related to numerical simulations on the mesoscale. First, the material parameters for the constitutive description of the materials are often difficult to measure directly. In order to estimate material parameters from macroscopic experiments, a parameter identification procedure based on Bayesian neural networks is developed, which is universally applicable to any parameter identification problem that fits a numerical simulation to experimental results.
This approach provides the most probable set of material parameters given the experimental data, together with information about the accuracy of the estimate. Consequently, it can be used a priori to determine the set of experiments to be carried out in order to fit the parameters of a numerical model to experimental data. The second problem is the computational effort required for mesoscale simulations of a full macroscopic structure. For this purpose, a coupling between the mesoscale and macroscale models is developed: representative mesoscale simulations are used to train a metamodel that is then used as a constitutive model in the macroscopic simulation. Special focus is placed on the ability to appropriately simulate unloading.
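The meso-macro coupling can be illustrated schematically: an expensive "mesoscale" response is sampled offline and replaced by a cheap surrogate in the macroscopic simulation. The thesis trains a neural-network metamodel that also handles unloading; the sketch below substitutes a simple table-lookup surrogate for the loading branch of an assumed damage law.

```python
import bisect
import math

# Stand-in "mesoscale" law: linear elasticity up to a threshold strain,
# then exponential softening (assumed for illustration, not the thesis model).
E, eps0 = 30e3, 1e-4  # Young's modulus [MPa], damage threshold strain [-]

def mesoscale_stress(eps):
    if eps <= eps0:
        return E * eps
    return E * eps0 * math.exp(-(eps - eps0) / 5e-4)  # softening branch

# Offline stage: sample representative "mesoscale simulations"
strains = [i * 1e-5 for i in range(101)]
stresses = [mesoscale_stress(e) for e in strains]

def metamodel_stress(eps):
    """Cheap surrogate used in the macroscopic simulation: piecewise-linear
    interpolation of the sampled mesoscale response (loading branch only)."""
    i = min(max(bisect.bisect_right(strains, eps) - 1, 0), len(strains) - 2)
    t = (eps - strains[i]) / (strains[i + 1] - strains[i])
    return stresses[i] + t * (stresses[i + 1] - stresses[i])

print(f"{metamodel_stress(3.35e-4):.3f} MPa")
```

A history-dependent metamodel (as needed for unloading) additionally has to carry internal variables between calls, which is precisely what makes the neural-network formulation in the thesis nontrivial.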
Large housing estates are not only a legacy of modernism but have been the subject of urban renewal for more than three decades. This book discusses what a "normal" large housing estate needs today in terms of urban planning, and which resources, steered by urban development policy as well as housing policy, should be pooled in an integrated planning process. In doing so, it updates the fundamental planning instrument of neighborhood management (Quartiersmanagement), beyond the subject of large housing estates.
The present study analyzes the environmental benefits of urban vegetation within the municipal boundary of a megacity through multi-scale integrated modeling in order to estimate these benefits approximately. The advantages (and challenges) that nature, inserted into cities, offers the population are observed from different viewpoints. As a geographical reference, the profile of megacities located at low (tropical) latitudes was observed in a case study of the city of São Paulo, Brazil. Urban vegetation is commonly overlooked by local people, governments, and economic structures; although sparse vegetation exists, it is hardly recognized. Over the brief history of rapid urbanization, accompanied by massive environmental degradation, urban green has become, in the dispute for space, a true luxury in cities like São Paulo. This is not regression but progress insofar as it demonstrates that the integration of nature and city would be desirable. The approximate quantification of the variations between the actual scenario and greened scenarios shows the need to rethink the urban biome as a human-dominated ecosystem. The benefits of urban vegetation are diverse. This work details plants as agents of climatic and ecosystem balance and performance. It also addresses current issues such as climate change, energy efficiency, and thermal comfort, as well as the purification of natural resources through the treatment of water, soil, and air, especially because no efficient technical solutions currently exist that could substitute for the environmental services of vegetation. These benefits contribute to quality of life and increase socio-environmental equity, which is especially important in high-contrast megacities. Vegetation assumes two important roles in cities: its functional dimension brings concrete, measurable benefits to the environment, while symbolically it represents nature in cities, bringing humans closer to their origins.
In conclusion, the study argues for the valorization of nature and for united efforts toward literally green cities, because it shows that financial investment in urban vegetation directly reduces the costs allocated to health and infrastructure. In 2008, the City of São Paulo invested about US$ 180 million in urban green (and the environment), which tends to save US$ 980 million in expenses annually. In other words, for each US$ 1 invested in planting and maintaining urban green, society saves at least US$ 5 in expenses for health, construction of French drains, energy, etc.
Natural Urban Resilience: Understanding general urban resilience through Addis Ababa’s inner city
(2021)
This dissertation describes the urban actors and spatial practices that contribute to natural urban resilience in Addis Ababa's inner city. Natural urban resilience is a non-strategic, bottom-up, everyday form of general urban resilience, understood as an urban system's ability to maintain its essential characteristics under any change. The study gains significance by exposing conceptual gaps in the current understanding of general urban resilience and highlighting its unconvincing applicability to African cities. It attains further relevance by highlighting the danger of the ongoing large-scale redevelopment of the inner city. The inner city has formed naturally, and its urban memory, spaces, and social cohesion contribute to the resilience of its primarily low-income population. This thesis argues that the inner city's demolition poses an incalculable risk of maladaptation to future stresses and shocks for Addis Ababa. The city needs a balanced urban discourse that highlights the inner city's qualities and suggests feasible urban transformation measures. "Natural Urban Resilience" contributes an empirical study to this debate by identifying the aspects of the inner city that contribute to general resilience and the feasible areas for action. The study develops a qualitative research design for a single case study in Addis Ababa. The data are obtained through expert interviews, interviews with residents, and the analysis of street scene photos, abstracted using Grounded Theory. In this way, the thesis provides first-time knowledge about who and what generates urban resilience in the inner city of Addis Ababa, and how. Furthermore, the study complements existing theories on general urban resilience: it provides a detailed understanding of the change mechanisms in resilience, of which it identifies four: adaptation, upgrading, mitigation, and resistance.
It also conceptually adapts the adaptive cycle, a widely used concept in resilience thinking, to urban environments. The study concludes that the inner city's continued redevelopment poses an incalculable threat to the entire city. Therefore, "Natural Urban Resilience" recommends carefully weighing any intervention in the inner city to promote Addis Ababa's overall resilience. This dissertation proposes a pattern language for natural urban resilience to support these efforts and to translate the model of natural urban resilience into practice.
Advances in nanotechnology have led to the development of nano-electro-mechanical systems (NEMS) such as nanomechanical resonators with ultra-high resonant frequencies. Ultra-high-frequency resonators have recently received significant attention for wide-ranging applications such as molecular separation, molecular transportation, ultra-sensitive sensing, high-frequency signal processing, and biological imaging. It is well known that at the micrometer length scale, first-principles techniques, while the most accurate, pose serious limitations for comparison with experimental studies. For such larger sizes, classical molecular dynamics (MD) simulations, which require interatomic potentials, are desirable. In addition, mesoscale approaches such as coarse-grained (CG) methods are useful for simulating even larger systems.
Furthermore, quasi-two-dimensional (Q2D) materials have attracted intensive research interest over the past decades due to their many novel properties. However, the energy dissipation mechanisms of nanomechanical resonators based on several Q2D materials are still unknown. The main issues addressed in this work are the development of CG models for molybdenum disulphide (MoS2), the investigation of mechanism effects in black phosphorus (BP) nanoresonators, and an application of graphene nanoresonators. The primary coverage and results of the dissertation are as follows:
Method development. Firstly, a two-dimensional (2D) CG model for single-layer MoS2 (SLMoS2) is developed analytically. The Stillinger-Weber (SW) potential for this 2D CG model is then parametrized: all SW geometrical parameters are determined analytically from the equilibrium condition of each individual potential term, while the SW energy parameters are derived analytically from the valence force field model. Next, the 2D CG model is further simplified to a one-dimensional (1D) CG model, which describes the 2D SLMoS2 structure as a 1D chain. This 1D CG model is applied to investigate the relaxed configuration and the resonant oscillation of folded SLMoS2. Owing to the simplicity of the 1D CG model, both the relaxed configuration of the folded SLMoS2 and its resonant oscillation frequency are derived analytically. Given the increasing interest in other 2D layered materials, in particular semiconducting transition metal dichalcogenides like MoS2, the CG models proposed in this work provide valuable simulation approaches.
Mechanism understanding. Two energy dissipation mechanisms of BP nanoresonators are examined: mechanical strain effects and defect effects (including vacancies and oxidation). Vacancy defects are an intrinsic damping factor for the quality (Q)-factor, while mechanical strain and oxidation are extrinsic damping factors. Intrinsic dissipation (induced by thermal vibrations) in BP resonators (BPRs) is investigated first. Specifically, classical MD simulations are performed to examine the temperature dependence of the Q-factor of single-layer BPRs (SLBPRs) along the armchair and zigzag directions, where a two-step fitting procedure is used to extract the frequency and Q-factor from the kinetic energy time history. The Q-factors of BPRs are evaluated through comparison with those of graphene and MoS2 nanoresonators. Next, the effects of mechanical strain, vacancies, and oxidation on BP nanoresonators are investigated in turn. Given the increasing interest in BP and, in particular, the lack of theoretical studies of BPRs, the results of this work provide a useful reference.
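The two-step fitting procedure for extracting the Q-factor from a kinetic-energy time history can be illustrated on synthetic data: first locate the oscillation peaks, then fit the logarithm of the peak energies over time, whose slope equals -omega/Q. The signal below is a stand-in for the MD output, not thesis data, and the exact fitting functions used in the thesis may differ.

```python
import math

# Synthetic kinetic-energy history of a damped resonator:
# E_k(t) = exp(-omega*t/Q) * sin^2(omega*t), with Q_true = 500
omega, Q_true = 2.0 * math.pi * 50.0, 500.0
dt = 1e-5
t = [i * dt for i in range(40000)]
Ek = [math.exp(-omega * ti / Q_true) * math.sin(omega * ti) ** 2 for ti in t]

# Step 1: locate the local maxima (oscillation peaks) of the kinetic energy
peaks = [(t[i], Ek[i]) for i in range(1, len(Ek) - 1)
         if Ek[i] > Ek[i - 1] and Ek[i] > Ek[i + 1]]

# Step 2: least-squares line through ln(E_peak) vs t; the slope is -omega/Q
xs = [p[0] for p in peaks]
ys = [math.log(p[1]) for p in peaks]
n = len(xs)
sx, sy = sum(xs), sum(ys)
slope = (n * sum(x * y for x, y in zip(xs, ys)) - sx * sy) / \
        (n * sum(x * x for x in xs) - sx ** 2)
Q_est = -omega / slope
print(f"Q = {Q_est:.1f}")
```

For real MD data the peak energies are noisy, so the envelope fit averages over many cycles; that is exactly why the energy decay rate, rather than a single-cycle amplitude ratio, is the robust quantity to fit.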
Application. A novel application of graphene nanoresonators is proposed: using them to self-assemble small nanostructures such as water chains. The underlying physics enabling this phenomenon is elucidated. In particular, drawing inspiration from macroscale self-assembly using the higher-order resonant modes of Chladni plates, classical MD simulations are used to investigate the self-assembly of water molecules by graphene nanoresonators. An analytic formula for the critical resonant frequency, based on the interaction between water molecules and graphene, is provided. Furthermore, the properties of the water chains assembled by the graphene nanoresonators are studied.
This thesis concerns the physical and mechanical interactions between carbon nanotubes (CNTs) and polymers, studied by multiscale modeling. CNTs have attracted considerable interest in view of their unique mechanical, electronic, thermal, optical, and structural properties, which enable many potential applications.
Carbon nanotubes exist in several structural forms, from individual single-walled carbon nanotubes (SWCNTs) and multi-walled carbon nanotubes (MWCNTs) to CNT bundles and networks. The mechanical properties of SWCNTs and MWCNTs have been extensively studied over the past decade by continuum modeling and molecular dynamics (MD) simulations, since these properties are important for CNT-based devices. CNT bundles and networks feature outstanding mechanical performance, hierarchical structures, and network topologies, and have been regarded as a potential energy-saving material. In the synthesis of nanocomposites, it remains a challenge to understand how to measure and predict the properties of systems as large as CNT bundles and networks. Therefore, a mesoscale method such as a coarse-grained (CG) method should be developed to study the nanomechanical characterization of the formation of CNT bundles and networks.
In this part of the thesis, the main contributions are as follows: (1) explicit solutions for the cohesive energy between carbon nanotubes, graphene, and substrates are obtained through continuum modeling of the van der Waals interaction between them; (2) the CG potentials of SWCNTs are established by a molecular mechanics model; (3) the binding energy between two parallel and crossing SWCNTs and MWCNTs is obtained by continuum modeling of the van der Waals interaction between them. Crystalline and amorphous polymers are increasingly used in modern industry as structural materials because of their important mechanical and physical properties. For crystalline polyethylene (PE), despite its importance and the available MD simulations and continuum models, the link between molecular and continuum descriptions of its mechanical properties is still not well established. For amorphous polymers, the effect of chain length and temperature on their elastic and elastic-plastic properties has been reported, based on united-atom (UA) and CG MD simulations, in our previous work. However, the effect of chain length and temperature on the failure behavior is not yet well understood; in particular, the failure behavior under shear has scarcely been reported. Understanding the molecular origins of macroscopic fracture behavior, such as the fracture energy, therefore remains a fundamental scientific challenge.
In this thesis, the main contributions can be written as follows: (1) An analytical molecular mechanics model is developed to obtain the size-dependent elastic properties of crystalline PE.
(2) We show that two molecular mechanics models, the stick-spiral and the beam models, predict considerably different mechanical properties of materials based on energy equivalence. The difference between the two models is independent of the material. (3) The dependence of the tensile and shear failure behavior on chain length and temperature in amorphous polymers is scrutinized using molecular dynamics simulations. Finally, the influence of the dispersion of polymer wrapped around two neighbouring SWNTs on their load transfer is investigated by MD simulations, in which the effects of the SWNTs' positions, the polymer chain length and the temperature on the interaction force are systematically studied.
Material properties play a critical role in the manufacturing of durable products. Estimating precise characteristics at different scales requires complex and expensive experimental measurements. Computational methods can potentially provide a platform to determine fundamental properties before the final experiment. Multi-scale computational modeling addresses various time and length scales, including the nano, micro, meso, and macro scales. These scales can be modeled separately or in correlation with coarser scales. Depending on the scales of interest, the right selection of multi-scale methods leads to reliable results at affordable computational cost. The present dissertation deals with problems at various length and time scales using computational methods including density functional theory (DFT), molecular mechanics (MM), molecular dynamics (MD), and finite element (FE) methods.
Physical and chemical interactions at finer scales determine the properties at coarser scales. Modeling particle interactions and exploring fundamental properties are significant challenges of computational science. Fine-scale models require more computational effort due to the large number of interacting atoms/particles. To deal with this problem and treat a fine-scale (nano) problem as a coarse-scale (macro) one, we extended an atomic-continuum framework. The discrete atomic models are solved as a continuum problem using the computationally efficient FE method. The MM or force-field method approximates a solution on the atomic scale based on a set of assumptions. In this method, atoms and bonds are modeled as harmonic oscillators, i.e. a system of masses and springs. The negative gradient of the potential energy equals the force on each atom. In this way, each bond's total potential energy, including bonded and non-bonded contributions, is simulated as an equivalent structural strain energy. Finally, the chemical nature of the atomic bond is modeled as a piezoelectric beam element that is solved by the FE method.
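The mass-spring picture described above can be made concrete with a minimal sketch: a single harmonic bond whose force is the negative gradient of its potential energy. The stiffness and equilibrium length below are arbitrary illustration values, not parameters from the dissertation.

```python
import numpy as np

def harmonic_bond(pos_i, pos_j, k, r0):
    """Energy and forces of a harmonic bond (the mass-spring model of a
    covalent bond used in molecular mechanics force fields).
    The force is the negative gradient of the potential energy."""
    rij = pos_j - pos_i
    r = np.linalg.norm(rij)
    energy = 0.5 * k * (r - r0) ** 2
    # dE/dr = k (r - r0), projected along the bond direction
    f_on_j = -k * (r - r0) * (rij / r)   # force acting on atom j
    return energy, -f_on_j, f_on_j       # (energy, force_i, force_j)

# Stretched bond: r = 1.2, equilibrium r0 = 1.0, stiffness k = 100
e, fi, fj = harmonic_bond(np.array([0.0, 0.0, 0.0]),
                          np.array([1.2, 0.0, 0.0]), k=100.0, r0=1.0)
print(e)       # 0.5 * 100 * 0.2**2 = 2.0
print(fj[0])   # -20.0: the stretched bond pulls atom j back
```

The same energy-equivalence idea underlies mapping such bond energies onto structural (beam) strain energies.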
Exploring novel materials with unique properties is in demand for various industrial applications. During the last decade, many two-dimensional (2D) materials have been synthesized and have shown outstanding properties. Investigating the probable defects arising during the formation/fabrication process and studying their effect on strength under severe service conditions are critical tasks for exploring performance prospects. We studied various defects, including nano-crack, notch, point vacancy, and Stone-Wales defects, employing MD analysis. Classical MD has been used to simulate a considerable number of molecules at the micro- and meso-scales. Pristine and defective nanosheet structures were considered under uniaxial tensile loading at various temperatures using the open-source LAMMPS code. The results were visualized with the open-source software OVITO and VMD.
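The classical MD runs mentioned above rest on a time integrator; a minimal sketch of the standard velocity Verlet scheme (the integrator family used by codes such as LAMMPS) on a 1D harmonic bond illustrates the idea. All parameter values here are illustrative, not simulation settings from the dissertation.

```python
def force(x, k=1.0):
    """Restoring force of a 1D harmonic bond with stiffness k."""
    return -k * x

def velocity_verlet(x, v, dt, m=1.0, steps=1000):
    """Advance position x and velocity v with the velocity Verlet scheme."""
    f = force(x)
    for _ in range(steps):
        x += v * dt + 0.5 * f / m * dt ** 2   # position update
        f_new = force(x)
        v += 0.5 * (f + f_new) / m * dt       # velocity half-steps combined
        f = f_new
    return x, v

x, v = velocity_verlet(1.0, 0.0, dt=0.01)
energy = 0.5 * v ** 2 + 0.5 * x ** 2   # kinetic + potential, m = k = 1
print(energy)   # ~0.5: total energy is (nearly) conserved
```

Velocity Verlet is the usual choice because it is time-reversible and conserves energy well over long trajectories.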
Quantum-based first-principles calculations are conducted at the electronic scale and are known as the most accurate ab initio methods. However, they are computationally too expensive to apply to large systems. We used density functional theory (DFT) to estimate the mechanical and electrochemical response of 2D materials. The many-body Schrödinger equation describes the motion and interactions of solid-state particles. The solid is described as a system of positive nuclei and negative electrons, all interacting electromagnetically with each other, and wave function theory describes the quantum state of this set of particles. However, dealing with the 3N spatial coordinates of the electrons and nuclei plus the N spin components of the electrons makes the governing equation unsolvable for more than a few interacting atoms. Assumptions and theories such as the Born-Oppenheimer approximation, the Hartree-Fock mean field, and the Hohenberg-Kohn theorems are needed to treat this equation. First, the Born-Oppenheimer approximation reduces it to the electronic coordinates only. Then Kohn and Sham, building on the Hartree-Fock and Hohenberg-Kohn theories, assumed an equivalent fictitious system of non-interacting electrons, expressed as a functional of the electron density, such that its ground-state energy equals that of the interacting electrons. Exchange-correlation energy functionals are responsible for satisfying the equivalence between both systems. The exact form of the exchange-correlation functional is not known; however, there are widely used methods to derive functionals, such as the local density approximation (LDA), the generalized gradient approximation (GGA), and hybrid functionals (e.g., B3LYP). In our study, DFT was performed using the VASP code within the GGA/PBE approximation, and visualization/post-processing of the results was realized via the open-source software VESTA.
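The Kohn-Sham construction sketched above replaces the interacting many-electron problem by effective single-particle equations; in standard notation (not taken verbatim from the dissertation):

```latex
\left[-\frac{\hbar^{2}}{2m}\nabla^{2}
      + v_{\mathrm{ext}}(\mathbf{r})
      + v_{\mathrm{H}}[n](\mathbf{r})
      + v_{\mathrm{xc}}[n](\mathbf{r})\right]\varphi_{i}(\mathbf{r})
  = \varepsilon_{i}\,\varphi_{i}(\mathbf{r}),
\qquad
n(\mathbf{r}) = \sum_{i}^{\mathrm{occ.}}\lvert\varphi_{i}(\mathbf{r})\rvert^{2}
```

Here \(v_{\mathrm{ext}}\) is the nuclear potential, \(v_{\mathrm{H}}\) the Hartree potential, and \(v_{\mathrm{xc}}\) the exchange-correlation potential, the only term that must be approximated (LDA, GGA/PBE, hybrids).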
Extensive DFT calculations are conducted on the prospects of 2D nanomaterials as anode/cathode electrode materials for batteries. The performance of metal-ion batteries strongly depends on the design of novel electrode materials. Two-dimensional (2D) materials have attracted remarkable interest for use as electrodes in battery cells due to their excellent properties. Desirable battery energy storage systems (BESS) must offer high energy density, safe operation, and efficient production costs. Batteries are used in electronic devices, provide a solution to environmental issues, and store the intermittent energy generated from renewable wind or solar power plants. Therefore, exploring optimal electrode materials can improve storage capacity and charging/discharging rates, leading to the design of advanced batteries.
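The storage-capacity comparison invoked above rests on the standard theoretical gravimetric capacity formula derived from Faraday's law; the sketch below applies it to the graphite anode (LiC6) as a familiar reference point.

```python
F = 96485.0  # Faraday constant, C/mol

def specific_capacity_mAh_per_g(n_electrons, molar_mass_g):
    """Theoretical gravimetric capacity of an electrode material:
    C = n * F / (3.6 * M)  [mAh/g], with M in g/mol."""
    return n_electrons * F / (3.6 * molar_mass_g)

# Graphite anode, LiC6: one Li (one electron) per six carbons
capacity = specific_capacity_mAh_per_g(1, 6 * 12.011)
print(round(capacity))  # 372 mAh/g, the well-known graphite limit
```

Candidate 2D anode materials are judged against this baseline: a higher n per formula unit or a lower molar mass raises the theoretical capacity.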
Our results at multiple scales highlight not only the efficiency of the proposed and employed methods but also the promising prospects of recently synthesized nanomaterials and their application as anode materials. First, a novel approach was developed for modeling a 1D nanotube as a continuum piezoelectric beam element. The results converged and matched closely those from experiments and other, more complex models. Then the mechanical properties of nanosheets were estimated, and the failure-mechanism results provide a useful guide for further use in prospective applications. Our results give a comprehensive and useful picture of the mechanical properties of nanosheets with and without defects. Finally, the mechanical and electrochemical properties of several 2D nanomaterials are explored for the first time; their performance as anode materials shows high potential for manufacturing super-stretchable and ultrahigh-capacity battery energy storage systems (BESS). Our results exhibited better performance in comparison to the available commercial anode materials.
Rechargeable lithium-ion batteries (LIBs) play a very significant role in power supply and storage. In recent decades, LIBs have attracted tremendous attention in mobile communication, portable electronics, and electric vehicles. Furthermore, global warming has become a worldwide issue due to the ongoing production of greenhouse gases, motivating solutions such as renewable sources of energy. Solar and wind energy are the most important renewable energy sources, and as the technology progresses they will require batteries to store the produced power and balance power generation and consumption. Nowadays, rechargeable batteries such as LIBs are considered one of the best solutions. They provide high specific energy and high rate performance, while their rate of self-discharge is low.
The performance of LIBs can be improved through the modification of battery characteristics. The size of the solid particles in the electrodes affects the specific energy and the cyclability of batteries, and can improve the lithium content of the electrode, which is a vital parameter for the capacity and capability of a battery. There are different sources of heat generation in LIBs, such as the heat produced during electrochemical reactions and the internal resistance of the battery. The size of the electrode's electroactive particles directly affects the heat produced in the battery. It will be shown that smaller solid particles enhance the thermal characteristics of LIBs.
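The link between heat generation and cell temperature can be illustrated with a lumped single-node energy balance; this is a generic sketch with assumed parameter values, not the simulation model of the dissertation.

```python
# Lumped thermal model of a battery cell: ohmic (internal-resistance)
# heat generation balanced against convective cooling.
# m*cp*dT/dt = I^2*R_int - h*A*(T - T_amb); all values are illustrative.
def cell_temperature(I=2.0, R_int=0.05, h=10.0, A=0.01,
                     m=0.045, cp=900.0, T_amb=298.15,
                     dt=1.0, t_end=3600.0):
    """Explicit Euler integration; returns the temperature in K at t_end."""
    T = T_amb
    for _ in range(int(t_end / dt)):
        q_gen = I ** 2 * R_int          # ohmic heat, W
        q_out = h * A * (T - T_amb)     # convective loss, W
        T += (q_gen - q_out) * dt / (m * cp)
    return T

rise = cell_temperature() - 298.15
print(rise)  # approaches the steady state I^2*R_int/(h*A) = 2.0 K
```

The steady-state rise scales with the square of the current, which is why thermal behavior at different applied current rates matters.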
Thermal issues such as overheating, temperature maldistribution in the battery, and thermal runaway have limited the applications of LIBs. Such thermal challenges reduce the life cycle of LIBs and may lead to dangerous conditions such as fire or even explosion. However, recent advances in the fabrication of advanced materials such as graphene and carbon nanotubes, with extraordinary thermal conductivity and electrical properties, offer new opportunities to enhance battery performance. Since experimental work is expensive, our objective is to use computational methods to investigate the thermal issues in LIBs. Dissipation of the heat produced in the battery can improve the cyclability and specific capacity of LIBs. In real applications, LIB packs consisting of several battery cells are used as the power source. Therefore, it is worth investigating the thermal characteristics of battery packs under charging/discharging cycles at different applied current rates. To remove the heat produced in batteries, they can be surrounded by materials with high thermal conductivity. Paraffin wax absorbs large amounts of energy, since it has a high latent heat; this absorption occurs at a nearly constant temperature during the phase change. Furthermore, the thermal conductivity of paraffin can be increased with nano-materials such as graphene, CNTs, and fullerenes to form a nano-composite medium. Improving the thermal conductivity of LIBs increases the heat dissipation from batteries, which is a vital issue in battery thermal management systems. The application of two-dimensional (2D) materials has been on the rise since the exfoliation of graphene from bulk graphite. 2D materials are single layers of nanometer-scale thickness that show superior thermal, mechanical, and optoelectronic properties. They are potential candidates for energy storage and supply, particularly as electrode materials in lithium-ion batteries.
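The dominance of the latent-heat term can be checked with a simple energy balance over sensible and latent contributions; the property values below are typical literature figures for paraffin, assumed here for illustration only.

```python
# Heat absorbed by a paraffin phase-change material warmed through its
# melting point: sensible heat (solid), latent heat of fusion, and
# sensible heat (liquid). All property values are illustrative.
def pcm_heat_absorbed(m=0.5, cp_solid=2100.0, cp_liquid=2200.0,
                      latent=200e3, T_start=20.0, T_melt=45.0, T_end=60.0):
    """Energy in J to heat mass m (kg) from T_start through melting to T_end."""
    q_solid = m * cp_solid * (T_melt - T_start)
    q_latent = m * latent                  # absorbed at ~constant temperature
    q_liquid = m * cp_liquid * (T_end - T_melt)
    return q_solid + q_latent + q_liquid

total = pcm_heat_absorbed()
print(total / 1e3)  # ~142.8 kJ; the latent term (100 kJ) dominates
```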
The high thermal conductivity of graphene and graphene-like materials can play a significant role in the thermal management of batteries. However, defects always exist in nano-materials, since there is no ideal fabrication process. One of the most important defects is the nano-crack, which can dramatically weaken the mechanical properties of a material. The newly synthesized crystalline carbon nitride with the stoichiometry C3N has attracted much attention due to its extraordinary mechanical and thermal properties. Another nano-material is phagraphene, which shows anisotropic mechanical characteristics that are ideal for producing nanocomposites.
It shows ductile fracture behavior when subjected to uniaxial loading. It is worth investigating the thermo-mechanical properties of these materials in their pristine and defective states. We hope that the findings of our work will not only be useful for both experimental and theoretical research but also help in designing advanced electrodes for LIBs.
Multi-Frame Rate Rendering
(2008)
Multi-frame rate rendering is a parallel rendering technique that renders interactive parts of a scene on one graphics card while the rest of the scene is rendered asynchronously on a second graphics card. The resulting color and depth images of both render processes are composited, by optical superposition or digital composition, and displayed. The results of a user study confirm that multi-frame rate rendering can significantly improve interaction performance. Multi-frame rate rendering is naturally implemented on a graphics cluster. With the recent availability of multiple graphics cards in standalone systems, the method can also be implemented on a single computer system, where memory bandwidth is much higher than that of off-the-shelf networking technology. This decreases overall latency and further improves interactivity. Multi-frame rate rendering was also investigated on a single graphics processor by interleaving the rendering streams for the interactive elements and the rest of the scene. This approach enables the use of multi-frame rate rendering on low-end graphics systems such as laptops, mobile phones, and PDAs. Advanced multi-frame rate rendering techniques reduce the limitations of the basic approach. The interactive manipulation of light sources and their parameters affects the entire scene. A multi-GPU deferred shading method is presented that splits the rendering task into a rasterization pass and a lighting pass and assigns the passes to the appropriate image generators, such that light manipulation at high frame rates becomes possible. A parallel volume rendering technique allows the manipulation of objects inside a translucent volume at high frame rates. This approach is useful, for example, in medical applications where small probes need to be positioned inside a computed-tomography image.
Due to the asynchronous nature of multi-frame rate rendering, artifacts may occur during the migration of objects from the slow to the fast graphics card, and vice versa. Proper state management almost completely avoids these artifacts. Multi-frame rate rendering significantly improves the interactive manipulation of objects and lighting effects. This leads to a considerable increase in the size of 3D scenes that can be manipulated compared to conventional methods.
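The digital composition step described above amounts to a per-pixel depth test across the two rendered image pairs; a minimal sketch with toy 1x2 images (not code from the thesis) is:

```python
import numpy as np

# Composite two (color, depth) image pairs: per pixel, keep the color of
# whichever render stream produced the smaller (nearer) depth value.
def depth_composite(color_a, depth_a, color_b, depth_b):
    nearer_a = (depth_a <= depth_b)[..., None]   # broadcast over RGB channels
    return np.where(nearer_a, color_a, color_b)

# 1x2-pixel test images: left pixel nearer in stream A, right pixel in B
ca = np.array([[[255, 0, 0], [255, 0, 0]]], dtype=np.uint8)   # A: red
cb = np.array([[[0, 0, 255], [0, 0, 255]]], dtype=np.uint8)   # B: blue
da = np.array([[0.2, 0.9]])
db = np.array([[0.5, 0.3]])
out = depth_composite(ca, da, cb, db)
print(out[0, 0], out[0, 1])   # red from A, blue from B
```

Because each stream carries its own depth buffer, the fast and slow renderers can run at independent frame rates and still composite correctly.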
Both analytical and numerical methods for calculating the heat losses of glazing units are presented, taking correct and complete account of all processes involved in the energy transport: heat conduction, thermally driven convection flows, and infrared radiative exchange. Using numerical flow simulation, glazing units are systematically investigated and compared with respect to the fill gas, infrared coating, installation orientation and pane spacing, as well as the number of gas cavities (double, triple, and quadruple glazing). The dependence of the k-value on the temperatures of the adjacent climates (atmosphere and interior) is presented.
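The series-resistance view underlying such k-value (U-value) calculations can be sketched as follows; the gap resistance here is a fixed assumed value, whereas a full calculation (e.g., per EN 673) would derive it from conduction, convection, and radiation in the cavity, as the thesis does numerically.

```python
# Center-of-glazing U-value from thermal resistances in series:
# interior surface film + panes + gas gap(s) + exterior surface film.
# Surface resistances 0.13/0.04 m^2K/W are common standard assumptions.
def u_value(pane_thicknesses, lambda_glass=1.0, r_gaps=(0.17,),
            r_si=0.13, r_se=0.04):
    """U in W/(m^2 K); thicknesses in m, resistances in m^2 K/W."""
    r_panes = sum(d / lambda_glass for d in pane_thicknesses)
    return 1.0 / (r_si + r_panes + sum(r_gaps) + r_se)

# Double glazing: two 4 mm panes, one gas cavity
u = u_value([0.004, 0.004])
print(round(u, 2))  # ~2.87 W/(m^2 K) for this uncoated, air-filled case
```

Low-emissivity coatings and heavier fill gases raise the gap resistance and thus lower U, which is the comparison the simulations quantify.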
Model building and computer-aided modeling in the early phases of architectural design
(1997)
This work deals with the conception and exemplary realization of CAD tools to support the conceptual design of buildings. The specific nature of building design makes an analysis of the field of "early architectural design" from the perspective of information processing necessary. Based on an investigation of the design process from pragmatic and cognitive viewpoints, a generic process model of "design" is derived and illustrated with examples. Starting from the view that designing is the process of anticipating buildings in models, the "object-oriented modeling technology" is subjected to a thorough evaluation. Its methodological suitability for the model-based representation of buildings is demonstrated in principle. With regard to the specific conditions of the early design phases, such as vagueness and information that cannot be represented in formalized form, several extensions are proposed and tested experimentally. The aim of the work is to create a building model that can be evaluated in multiple ways and over long periods, in order to support planning processes consistently across the entire building life cycle. The basis of designing with the object-oriented methodology is generalized domain models. It is shown that, for an interpretation of concrete building models, the generalized domain models must either be standardized or be explicitly available. Object orientation offers the possibility of describing both concrete and generalized models in a unified metamodel...
A model of demand-oriented service provision in FM based on sensor technologies and BIM
(2023)
While digitalization in the construction industry receives ever greater attention, particularly in the planning and construction phases of buildings, the digital potential in facility management is far less exploited than it could be. Given that the operation of buildings accounts for a substantial share of life-cycle costs, a focus on digital processes in building operation is necessary. In facility management, services are frequently provided either on an activity basis, i.e. at static intervals, or on a demand basis. Both modes of service provision show deficits: for example, activities are performed at defined intervals without any actual need, or existing needs are not identified for lack of means of determining demand. The definition and determination of a demand for service provision in particular is often subjective. Moreover, service providers are often not involved in the early phases of building planning and receive the data and information necessary for their services only shortly before the building they are to operate goes into service.
Current approaches of Building Information Modeling (BIM) and the increasing availability of sensor technologies in buildings offer opportunities to remedy the deficits described above.
In this thesis, data models and methods are therefore developed that can trigger building management services in an objectified and automated manner, using BIM-based database structures together with evaluation and decision methodologies. The focus of the work is on the facility service of cleaning and care within infrastructural facility management.
An extensive review of established norms and standards as well as publicly accessible service tenders forms the basis for defining the information required for service provision. The identified static building and process information is structured in a relational database model which, after a presentation of measured quantities and a description of the procedure for selecting suitable sensors for capturing demand, is extended with sensor information. To be able to use readings from various sensors already present in buildings for triggering services, a normalization methodology is implemented in the database model. In this way, the demand for service provision can be determined on the basis of limit values. Linking methods for combining different applications are also integrated into the database model. In addition to the direct triggering of required activities, the developed model enables an opportune triggering of services, i.e. service provision before the actual demand arises. In this way, similar activities or activities in spatial proximity can usefully be performed early, allowing the service provider to save travel distances. The thesis also describes the algorithms required for evaluation, decision-making, and order monitoring.
The developed model of demand-oriented service provision is validated in a relational database and shows, by simulation for different building operation scenarios, that demand can be determined on the basis of sensor technologies and that services can be triggered, commissioned, and documented opportunely.
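The triggering logic described above (normalization of heterogeneous sensor readings to a common scale, limit values, and opportune early execution) can be sketched in simplified form; room names, scales, and thresholds below are hypothetical illustration values, not the thesis's database model.

```python
# Simplified demand-oriented triggering: readings are normalized to a
# common 0-100 scale; a service order is raised when a room exceeds its
# limit value, and rooms already close to their limit are triggered
# opportunistically alongside it to save the provider travel distance.
def normalize(value, v_min, v_max):
    return 100.0 * (value - v_min) / (v_max - v_min)

def trigger_services(rooms, limit=80.0, opportune=60.0):
    """rooms: dict room -> (raw_value, v_min, v_max). Returns room IDs
    for which a service order is raised."""
    norm = {r: normalize(*v) for r, v in rooms.items()}
    due = {r for r, n in norm.items() if n >= limit}
    if due:  # piggyback rooms that are close to their own limit
        due |= {r for r, n in norm.items() if n >= opportune}
    return sorted(due)

rooms = {"R101": (45, 0, 50),   # normalized 90 -> demand exceeds limit
         "R102": (33, 0, 50),   # normalized 66 -> opportune trigger
         "R103": (10, 0, 50)}   # normalized 20 -> no action
print(trigger_services(rooms))  # ['R101', 'R102']
```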
Text classification deals with discovering knowledge in texts and is used for extracting, filtering, or retrieving information in streams and collections. The discovery of knowledge is operationalized by modeling text classification tasks, which is mainly a human-driven engineering process. The outcome of this process, a text classification model, is used to inductively learn a text classification solution from a priori classified examples. The building blocks of modeling text classification tasks cover four aspects: (1) the way examples are represented, (2) the way examples are selected, (3) the way classifiers learn from examples, and (4) the way models are selected.
This thesis proposes methods that improve the prediction quality of text classification solutions for unseen examples, especially for non-standard tasks where standard models do not fit. The original contributions are related to the aforementioned building blocks: (1) Several topic-orthogonal text representations are studied in the context of non-standard tasks and a new representation, namely co-stems, is introduced. (2) A new active learning strategy that goes beyond standard sampling is examined. (3) A new one-class ensemble for improving the effectiveness of one-class classification is proposed. (4) A new model selection framework to cope with subclass distribution shifts that occur in dynamic environments is introduced.
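The idea behind a one-class ensemble, item (3) above, can be illustrated with a simplified stand-in (not the thesis's method): several centroid-distance detectors, each fitted on a bootstrap sample of the target class, combined by majority vote.

```python
import numpy as np

# Illustrative one-class ensemble: each member accepts an example if it
# lies within a radius (the 95% quantile of training distances) of the
# centroid of its bootstrap sample; the ensemble takes a majority vote.
rng = np.random.default_rng(0)

def fit_member(X):
    c = X.mean(axis=0)
    d = np.linalg.norm(X - c, axis=1)
    return c, np.quantile(d, 0.95)

def fit_ensemble(X, n_members=15):
    n = len(X)
    return [fit_member(X[rng.integers(0, n, n)]) for _ in range(n_members)]

def predict(ensemble, x):
    votes = sum(np.linalg.norm(x - c) <= r for c, r in ensemble)
    return votes > len(ensemble) / 2   # True = accepted as target class

X = rng.normal(0.0, 1.0, size=(500, 5))   # target-class training data only
ens = fit_ensemble(X)
print(predict(ens, np.zeros(5)))          # True: inlier near the centroid
print(predict(ens, np.full(5, 10.0)))     # False: obvious outlier
```

The ensemble reduces the variance of any single one-class decision boundary, which is the effect such ensembles aim to exploit.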
This work deals with the modification of computer games (modding). Modding is approached from two different directions: on the one hand, the field is examined with an analytical eye, combining existing research with the author's own explorations; on the other hand, the perspective of practice is adopted, which manifests itself in the resistance of the material, the tools, and the game technology. At the center of the investigation are modding as a practice, mods as derivatives, and the study of the computer game through the practices and derivatives of modification. Modding thus becomes an epistemic practice of the computer game.
The reflections on modding formulated here, as a research practice of the computer game, present an approach that unites aesthetic, resistant, and stabilizing aspects. It serves the study of the computer game along its discussions, materials, technologies, and practices, focusing on the marginal, which is understood as an integral component of the computer game. With this view of the limits of computer gaming, things become visible that are part of the synthetic game worlds but are concealed by their stagings and atmospheres. The approach developed here enables a change of perspective within these worlds and the study of the computer game with particular attention to its inscribed norms and power relations. Modding serves here as a critical practice for decoding these medially transmitted constellations.
Mitigating Risks of Corruption in Construction: A theoretical rationale for BIM adoption in Ethiopia
(2021)
This PhD thesis sets out to investigate the potential of Building Information Modeling (BIM) to mitigate risks of corruption in the Ethiopian public construction sector. The wide-ranging capabilities and promises of BIM have led to the strong perception among researchers and practitioners that it is an indispensable technology. Consequently, it has become a frequent subject of science and research. Meanwhile, many countries, especially developed ones, have committed themselves to applying the technology extensively. Increasing productivity is the most common and frequently cited reason for that.
However, both technology developers and adopters remain oblivious to the potential of BIM to address critical challenges in the construction sector, such as corruption. This would be particularly significant in developing countries like Ethiopia, where the problems and effects of corruption are acute. Studies reveal that bribery and corruption have long pervaded the construction industry worldwide. The complex and fragmented nature of the sector provides an environment for corruption. The Ethiopian construction sector is not immune to this epidemic reality. In fact, it is regarded as one of the most vulnerable sectors owing to varying socio-economic and political factors. Since 2015, Ethiopia has been adopting BIM, yet without clear goals and strategies. As a result, the potential of BIM for combating concrete problems of the sector remains untapped. To this end, this dissertation does pioneering work by showing how the collaboration and coordination features of the technology contribute to minimizing the opportunities for corruption. Tracing loopholes would otherwise remain complex and ineffective in the traditional documentation processes.
Proceeding from this anticipation, this thesis raises two primary questions: what are the areas and risks of corruption in Ethiopian public construction projects, and how could BIM be leveraged to mitigate these risks? To tackle these and other secondary questions, the research employs a mixed-method approach. The selected main research strategies are Survey, Grounded Theory (GT) and Archival Study. First, the author disseminates an online questionnaire among Ethiopian construction engineering professionals to pinpoint areas of vulnerability to corruption; 155 responses are compiled and scrutinized quantitatively. Then, semi-structured in-depth interviews are conducted with 20 senior professionals, primarily to comprehend the opportunities for and risks of corruption at the identified highly vulnerable project stages and decision points. At the same time, open interviews (consultations) are held with 14 informants to ascertain the state of construction documentation, BIM, and loopholes for corruption in the country. These qualitative data are then analyzed utilizing the principles of GT, heat/risk mapping, and Social Network Analysis (SNA). The risk mapping assists the researcher in prioritizing corruption risks, whilst through SNA it is methodically feasible to identify key actors/stakeholders in the corruption venture. Based on the generated research data, the author constructs a substantive grounded theory around the elements of corruption in the Ethiopian public construction sector. This theory later guides the subsequent strategic proposition of BIM. Finally, 85 public construction-related cases are analyzed systematically to substantiate and confirm previous findings.
By way of these multiple research endeavors, based first and foremost on the triangulation of qualitative and quantitative data analysis, the author conveys a number of key findings. First, estimation, tender document preparation and evaluation, and construction material and quality control together with additional work orders are found to be the most vulnerable stages in the design, tendering and construction phases, respectively. Second, middle-management personnel of contractors and clients, aided by brokers, play the most critical roles in corrupt transactions within the prevalent corruption network. Third, grand corruption persists in the sector, attributable to the fact that top management and higher officials exercise their overriding power, supported by the lack of project audits and accountability. Conversely, individuals at the operational level use intentional and unintentional 'errors' as an opportunity for corruption.
In light of these findings, two conceptual BIM-based risk mitigation strategies are prescribed: active and passive automation of project audits, and the monitoring of project information throughout the projects' value chain. These propositions rely on BIM's present dimensional capabilities and the promises of Integrated Project Delivery (IPD). Moreover, BIM's synergies with other technologies, such as Information and Communication Technology (ICT) and radio-frequency technologies, are also treated. All these arguments form the basis for the main thesis of this dissertation: that BIM is able to mitigate corruption risks in the Ethiopian public construction sector. The discourse on skepticism about BIM, stemming from the complex nature of corruption and the strategic as well as technological limitations of BIM, is also illuminated and complemented by this work. Thus, the thesis uncovers possible research gaps and lays the foundation for further studies.
Contemporary European architecture, exemplified in works by Herzog & de Meuron, Adolf Krischanitz, or Rem Koolhaas, maintains a relationship to the ordinary, the everyday, the banal. This relationship is unclear and problematic, since architecture on the one side and banality on the other usually denote mutually exclusive concepts. This turn of architecture toward the ordinary developed historically as a model that establishes ordinariness as an ethical principle inseparably bound to the development of culture. Especially in the twentieth century, influential alternatives of this kind were presented to an architecture of the best, the biggest, and the most beautiful. The usual mode of assessing the relations between architecture and banality, however, is the critique of banality. Two main patterns of critique can be distinguished: the critique of banality as meaninglessness and the critique of banality as worthlessness. A critique of banality becomes problematic when it argues in an object-oriented way instead of examining models of interpretation, since objects do not contain meanings, only differences that allow the observer to ascribe meaning to them. Just as an object-oriented critique of banality cannot be productive, attempts to found an aesthetics of the banal are also questionable. A work of art, or a work of architecture, defines itself not through its neutrality but through its difference from the everyday, the ordinary, the banal. An interaction between architecture and banality can therefore only be examined through the effects of the ordinary, since the ordinary itself must remain uninterpreted and invisible. The decisive question, finally, is in what way architecture uses banality for its transformations.
One explanatory model is the Kantian concept of the parergon, which shows how architecture constitutes itself through margins, the secondary, the ordinary. The parergon is subversive and sinister because it destroys the conventional unity of architecture. Ultimately, then, the approach of contemporary architecture to banality is a subversive strategy of differentiation through which the greatest possible distance is achieved by apparent proximity.
Between 1920 and 1930, structural damage occurred on bridges and roadways along the Californian coast, manifesting above all in pronounced cracking. Since then, damage has repeatedly been reported whose cause lies in the reaction of aggregates containing "reactive" silica with the pore solution of the concrete. This reaction is known as the alkali-silica reaction (ASR). Since Stanton's first publication on the "alkali-aggregate reaction" of opal-bearing aggregates, hundreds of research studies on this topic have been carried out and their results published. Despite intensive research for more than 80 years, neither is the mechanism of ASR fully understood, nor is a reliable prediction possible of the risk to structures or components containing potentially ASR-sensitive aggregates. This is mainly because ASR is a reaction consisting of a complex sequence of chemical and physical processes which, in their entirety, can lead to damage of concrete and of concrete components and structures. A closed description and treatment of this reaction is not possible as long as no satisfactory knowledge of the individual steps exists. This requires fundamental investigations of the individual chemical and physical reaction steps as well as an assessment, as quantitative as possible, of the various influencing factors. Worldwide, a number of guidelines and standards exist that are intended to help prevent ASR damage to structures. In Germany, the currently valid regulation is the so-called Alkali Guideline of the German Committee for Structural Concrete (DAfStb). It serves to assess aggregates according to DIN 4226 [6, 7, 8] with alkali-sensitive constituents.
Part 2 of the guideline covers aggregates containing opaline sandstone, siliceous chalk and flint from certain extraction areas; here a pure aggregate test is required. Part 3 covers Precambrian greywackes and other alkali-susceptible rocks; it requires tests of the aggregates themselves as well as tests on concrete beams and 30 cm cubes in the fog chamber. For most of the aggregates named in the guideline, these tests and provisions provide sufficient safety against ASR. Nevertheless, damage keeps occurring with aggregates that would have to be classified as insensitive according to the Alkali Guideline. These are typically cases of damage that appear only after several years with slowly reacting ASR-susceptible aggregates. Such aggregates, which in particular show no significant expansion (<0.6 mm/m) in the fog chamber test within nine months, include stressed quartz, siliceous limestone, granite, porphyry, siliceous shale and greywacke. The present work serves specifically to assess and classify different crystalline quartz modifications and to identify suitable investigation methods for assessing the ASR susceptibility of quartz.
Methods based on B-splines for model representation, numerical analysis and image registration
(2015)
The thesis consists of interconnected parts for modeling and analysis using newly developed isogeometric methods. The main parts are reproducing kernel triangular B-splines, extended isogeometric analysis for solving weakly discontinuous problems, collocation methods using superconvergent points, and B-spline bases in image registration applications.
Each topic is oriented towards applying isogeometric analysis basis functions to ease the integration of the modeling and analysis phases of simulation.
First, we develop a reproducing kernel triangular B-spline-based FEM for solving PDEs. We review the triangular B-splines and their properties. By definition, the triangular basis function is very flexible for modeling complicated domains. However, instability arises when it is applied in analysis. We modify the triangular B-spline by a reproducing kernel technique, calculating a correction term for the triangular kernel function from the chosen surrounding basis. The improved triangular basis obtains results with higher accuracy and almost optimal convergence rates.
Second, we propose an extended isogeometric analysis for dealing with weakly discontinuous problems such as material interfaces. The original IGA is combined with XFEM-like enrichments, which are themselves continuous functions but have discontinuous derivatives. Consequently, the resulting solution space can approximate solutions with weak discontinuities. The method is also applied to curved material interfaces, for which the inverse mapping and curved triangular elements are considered.
Third, we develop an IGA collocation method using superconvergent points. Collocation methods are efficient because no numerical integration is needed; in particular, when a higher polynomial basis is applied, the method has a lower computational cost than Galerkin methods. However, the positions of the collocation points are crucial for the accuracy of the method, as they affect the convergence rate significantly. The proposed IGA collocation method uses superconvergent points instead of the traditional Greville abscissae. The numerical results show that the proposed method achieves better accuracy and optimal convergence rates, while traditional IGA collocation has optimal convergence only for even polynomial degrees.
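The Greville abscissae that serve as the traditional collocation baseline are simply averages of consecutive knots. A minimal sketch (a clamped cubic knot vector chosen purely for illustration; the superconvergent points proposed in the thesis are different and not reproduced here):

```python
def greville_abscissae(knots, p):
    """Greville point xi_i = mean of the p knots t_{i+1}, ..., t_{i+p}."""
    n = len(knots) - p - 1  # number of B-spline basis functions
    return [sum(knots[i + 1:i + p + 1]) / p for i in range(n)]

# Clamped (open) cubic knot vector on [0, 1] with one interior knot
knots = [0, 0, 0, 0, 0.5, 1, 1, 1, 1]
print(greville_abscissae(knots, 3))
```

For this knot vector the five Greville points are 0, 1/6, 1/2, 5/6 and 1, one collocation point per basis function.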
Lastly, we propose a novel dynamic multilevel technique for handling image registration; it is an application of B-spline functions in image processing. The procedure aims to align a target image with a reference image by a spatial transformation. The method starts with an energy function that is the same as in FEM-based image registration. However, we simplify the solving procedure by working on the energy function directly, dynamically solving for the control points, which are the coefficients of the B-spline basis functions. The new approach is simpler and faster. It is also enhanced by a multilevel technique to prevent instabilities. The numerical tests consist of two artificial images and four real biomedical MRI brain and CT heart images; they show that our registration method is accurate, fast and efficient, especially for large-deformation problems.
Construction schedules play a central role in the realization of building projects. They coordinate interfaces and form the basis for the individual planning of each project participant. Reliable scheduling is therefore of great importance, yet construction schedules in particular are notorious for their unreliability.
Because of the long lead times in planning construction projects, much of the information is known only as estimates at the time of planning. On the basis of these estimated, and therefore uncertain, data, deterministic schedules are drawn up in the construction industry. If discrepancies between estimates and reality arise during execution, the plans must be adapted. Owing to the numerous dependencies between the planned activities, individual plan changes can trigger a multitude of further changes and adaptations and thus jeopardize a smooth project flow.
This work develops a procedure that generates construction schedules which, within the dependencies and boundary conditions defined by the project, are able to absorb changes as well as possible. Schedules that require comparatively small adaptations when changes occur are here called robust.
Starting from project planning procedures and methods for handling uncertainty, deterministic schedules are examined with respect to their behavior under incoming changes. First, possible uncertainties are identified as causes of changes and modeled mathematically. The behavior of schedules under possible changes can then be studied by simulating the adapted schedules forced by those changes. For these Monte Carlo simulations of the adapted schedules, it is ensured that each adapted schedule is a logical evolution of the deterministic schedule. On the basis of these investigations, a stochastic measure quantifying robustness is derived, describing a schedule's ability to absorb changes. This makes it possible to compare schedules with respect to their robustness.
The developed robustness quantification is then applied within an optimization procedure based on genetic algorithms to generate robust schedules in a targeted way. The methods are demonstrated on examples and their effectiveness is verified.
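The Monte Carlo idea can be illustrated with a toy precedence network: sample duration changes, recompute the forced schedule, and score how far start times drift. The four-activity network and the mean-shift score below are illustrative assumptions, not the thesis's actual model or robustness measure:

```python
import random

# Toy activity network: activity -> (planned duration, predecessors),
# listed in topological order. Purely illustrative.
ACTIVITIES = {
    "A": (4, []), "B": (3, ["A"]), "C": (6, ["A"]), "D": (2, ["B", "C"]),
}

def earliest_starts(durations):
    """Forward pass: earliest start time of each activity."""
    start = {}
    for act, (_, preds) in ACTIVITIES.items():
        start[act] = max((start[p] + durations[p] for p in preds), default=0)
    return start

def robustness(n_runs=1000, spread=0.3, seed=42):
    """Mean absolute start-time shift under random duration changes;
    smaller values mean the plan absorbs changes better."""
    rng = random.Random(seed)
    planned = {a: d for a, (d, _) in ACTIVITIES.items()}
    base = earliest_starts(planned)
    total = 0.0
    for _ in range(n_runs):
        sampled = {a: d * rng.uniform(1 - spread, 1 + spread)
                   for a, d in planned.items()}
        sim = earliest_starts(sampled)
        total += sum(abs(sim[a] - base[a]) for a in ACTIVITIES)
    return total / n_runs

print(round(robustness(), 2))
```

A genetic algorithm, as in the thesis, would then search over alternative feasible orderings for the schedule minimizing such a score.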
This work addresses engineers and scientists in building services engineering. It takes up an emerging need for change in the environmental and sustainability assessment of buildings and heating systems. The non-renewable primary energy demand currently in use will not suffice as the sole assessment quantity, particularly in view of future climate and environmental policy goals. The eco-efficiency assessment method presented in this work can serve as a suitable instrument for solving these problems. It enables systematic, holistic assessments and reproducible comparisons of heating systems with respect to their ecological and economic sustainability. The most important new developments are the specific environmental performance, extending the primary energy factor used so far, and the eco-efficiency indicator UWI.
Metaphosphat – modifizierte Silikatbinder als Basis säurebeständiger Beschichtungsmaterialien
(2008)
Mortars based on hardened waterglass as a binder show outstanding resistance in contact with strongly acidic media; their chemical resistance in contact with alkaline to weakly acidic media, however, is insufficient. The aim of the investigations is to improve the water resistance of sodium silicate binders through targeted chemical modification with various metaphosphates. A systematic characterization of the composition and structure of the binders clarifies the causes of the binder-typical properties. Modifying the sodium silicate solution with sodium trimetaphosphate increases the degree of condensation and improves the mechanical resistance of the hardened sodium silicate binder. Through the reactive binding of the basicity of the sodium silicate solution during the breakdown of the metaphosphate structure, the water resistance of sodium silicate mortars modified with sodium trimetaphosphate is increased; the good resistance in contact with highly concentrated sulfuric acid is retained almost unchanged. Modifying the sodium silicate solution with aluminium tetrametaphosphate leads, through the reaction of the two components, to the formation of an aluminosilicate network. The aluminosilicate network of the binder modified with aluminium tetrametaphosphate is stable even in a strongly alkaline sodium hydroxide solution, and the binder's good resistance in contact with highly concentrated sulfuric acid is retained despite the aluminate in the binder network.
Metamorphosen des Organizismus : zur Formensprache der Lebendigen Architektur von Imre Makovecz
(1999)
The present work began as a study of the Hungarian "organic" architect Imre Makovecz and developed into an investigation of the metamorphoses of organicism in twentieth-century art and architectural theory. The essence of organic architecture is often seen in the concept of organic form. Architectural historians frequently construct genealogies connecting Frank Lloyd Wright with Emerson, or Makovecz with Rudolf Steiner, and regard organicism in architecture as an indicator of the break with the classical tradition. If organicism is understood as a concept that takes nature as its model, there is no idea in Western art and architectural theory that is more fundamental or more widespread. The study examines how the various variants of ars imitatur naturam relate to the work of Imre Makovecz and to the tradition of organic architecture in the twentieth century. From the perspective of architectural theory, the discourse of organicist theory is sketched philosophically, with the person of Imre Makovecz chosen as the starting point of the investigation.
Metakaolin made from kaolin is used around the world but rarely in Vietnam, where abundant deposits of kaolin are found. The first studies on producing metakaolin were conducted with high-quality Vietnamese kaolins. The results showed the potential to produce metakaolin and its effect on the strength development of mortars and concretes. However, the utilisation of low-quality kaolin for producing Vietnamese metakaolin has not been studied so far.
The objectives of this study were to produce a good-quality metakaolin from low-quality Vietnamese kaolin and to facilitate the utilisation of Vietnamese metakaolin in composite cements.
To reach these goals, the optimal thermal conversion of Vietnamese kaolin into metakaolin was determined through numerous investigations, using the analysis results of DSC/TGA, XRD and CSI. During calcination in the range of 500–800 °C for 1–5 hours, the calcined kaolin was also characterised for mass loss, BET surface area, particle size distribution (PSD) and density, as well as for residual water. A good correlation was found between residual water and BET surface area.
The pozzolanic activity of the metakaolin was tested by various methods: the saturated lime method, the mCh method and the TGA-CaO method. The results show which method is most suitable for characterising the real activity of metakaolin and agrees best with concrete performance. Furthermore, the pozzolanic activity results of the different methods were analysed and compared with each other with respect to BET surface area.
The properties of Vietnamese metakaolin were established through investigations of water demand, setting time, spread flowability and strength. It is concluded that, depending on the intended use of the composite cement and the curing conditions, each Vietnamese metakaolin can be used appropriately to produce (1) a composite cement with low water demand, (2) a high-strength composite cement, (3) a composite cement that reduces CO2 emissions and improves the economics of cement products, or (4) a high-performance mortar.
The durability of metakaolin mortar was tested successfully to determine the metakaolin content needed to resist ASR, sulfate and sulfuric acid attack.
In 1921 the mathematician Hermann Weyl (1885–1955) called the continuum a "medium of free becoming". The dissertation takes this formulation as the occasion to examine the media-theoretical significance of Hermann Weyl's philosophical writings. The constructive continuum, in which, according to Weyl, physics locates prepared events, is to be sharply distinguished from intuitive reality. In discussions of Hilbert's and Brouwer's positions in the foundational crisis of mathematics, Weyl explains the continuum as a product of human consciousness. Weyl's theory of the continuum deviates from the account in Ernst Cassirer's Philosophy of Symbolic Forms: in his main work, Cassirer presupposes that mathematical symbolism and the constructive continuum form a bridge between consciousness and reality. The reciprocal relationship between the mathematician Weyl and the philosopher Cassirer exemplifies forms of mediation between philosophy and modern natural science. Weyl's writings are interpreted as paradigmatic for media theory.
Compactly, this thesis encompasses two major parts examining the mechanical response of polymer compounds and two-dimensional materials:
1- A molecular dynamics approach is used to study the transverse impact behavior of polymers, polymer compounds and two-dimensional materials.
2- The large deflection of circular and rectangular membranes is examined using a continuum mechanics approach.
Two-dimensional (2D) materials, including graphene and molybdenum disulfide (MoS2), exhibit new and promising physical and chemical properties, opening opportunities for them to be used alone or to enhance the performance of conventional materials. These 2D materials have attracted tremendous attention owing to their outstanding physical properties, especially under transverse impact loading.
Polymers, whether with a carbon backbone (organic polymers) or without carbon atoms in the backbone (inorganic polymers), such as polydimethylsiloxane (PDMS), have extraordinary characteristics; in particular, their flexibility allows many simple forming and casting processes. This simple shape processing makes polymers an excellent material, often used as the matrix in composites (polymer compounds).
In this PhD work, classical molecular dynamics (MD) is used to simulate transverse impact loading of 2D materials as well as polymer compounds reinforced with graphene sheets. In particular, MD was adopted to investigate perforation of the target and the impact resistance force. Using the MD approach, the minimum projectile velocity that creates perforation and passes through the target is obtained. The largest investigation focused on how graphene could enhance the impact properties of the compound. A further purpose of this work was to discover the effect of the atomic arrangement of 2D materials on the impact problem; to this end, the impact properties of two different 2D materials, graphene and MoS2, are studied. The simulation of chemical functionalization was carried out systematically, either with covalently bonded molecules or with non-bonded ones, with subsequent efforts focused on the covalently bonded species, revealed to be the most efficient linkers.
To study transverse impact behavior with the classical MD approach, the Large-scale Atomic/Molecular Massively Parallel Simulator (LAMMPS), well known among researchers, is employed. The simulation is set up through predefined commands in LAMMPS; generally, these commands (atom_style, pair_style, angle_style, dihedral_style, improper_style, kspace_style, read_data, fix, run, compute and so on) are used to build and run the model for the desired outputs. Depending on the particles and model types, suitable interatomic potentials (force fields) are chosen, and ensembles, constraints and boundary conditions are applied according to the problem definition. To do so, the atomic configuration must first be created: Python codes were developed to generate the particles describing the atomic arrangement of each model, and each arrangement was introduced separately to LAMMPS for simulation. After applying constraints and boundary conditions, LAMMPS integrates the equations of motion with, for example, the velocity-Verlet integrator or Brownian dynamics, and the outputs are produced. The outputs are inspected carefully to understand the natural behavior of the problem; understanding the natural properties of the materials assists in designing new applicable materials.
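A minimal sketch of a particle generator of the kind described above: it builds a small graphene-like honeycomb lattice and writes it in the LAMMPS data-file layout for atom_style atomic. The bond length, sheet size and box margins are illustrative assumptions, not the thesis's actual setup:

```python
A_CC = 1.42  # assumed carbon-carbon bond length in Angstrom

def graphene_coords(nx, ny):
    """(x, y) positions for an nx-by-ny array of 4-atom rectangular cells."""
    lx, ly = 3.0 * A_CC, 3.0 ** 0.5 * A_CC       # rectangular unit cell
    basis = [(0.0, 0.0), (A_CC / 2, ly / 2),
             (1.5 * A_CC, ly / 2), (2.0 * A_CC, 0.0)]
    return [(i * lx + bx, j * ly + by)
            for i in range(nx) for j in range(ny) for bx, by in basis]

def write_lammps_data(path, coords):
    """Write coordinates as a minimal LAMMPS data file (atom_style atomic)."""
    with open(path, "w") as f:
        f.write("graphene sheet\n\n%d atoms\n1 atom types\n\n" % len(coords))
        xs = [c[0] for c in coords]; ys = [c[1] for c in coords]
        f.write("0.0 %.3f xlo xhi\n0.0 %.3f ylo yhi\n-5.0 5.0 zlo zhi\n\n"
                % (max(xs) + A_CC, max(ys) + A_CC))
        f.write("Atoms\n\n")
        for n, (x, y) in enumerate(coords, 1):
            f.write("%d 1 %.4f %.4f 0.0\n" % (n, x, y))

coords = graphene_coords(4, 4)
print(len(coords))  # 4 x 4 cells with 4 atoms each
```

The resulting file could then be loaded in a LAMMPS script via read_data before assigning a force field and running the impact simulation.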
For the investigation of the large deflection of circular and rectangular membranes, which constitutes the second part of this thesis, a continuum mechanics approach is used. Nonlinear Föppl membrane theory, which yields the nonlinear governing equations of motion, is used to establish the nonlinear partial differential equilibrium equations of the membranes under distributed and central point loads. The Galerkin and energy methods are used to solve these equations for circular and rectangular membranes, respectively. The maximum deflection as well as the stress over the film region, both of which matter in many industrial applications, are obtained.
One of the main criteria determining the thermal comfort of occupants is the air temperature. To monitor this parameter, a thermostat is traditionally mounted in the indoor environment, for instance in office rooms, directly on the radiator, or at another location in a room. One drawback of this conventional method is that it measures at a single location instead of capturing the temperature distribution in the entire room, including the occupant zone. As a result, the climatic conditions measured at the thermostat may differ from those at the user's location. This not only impairs the thermal comfort assessment but also wastes energy through unnecessary heating and cooling. Moreover, measuring the air temperature distribution under laboratory conditions requires multiple thermal sensors installed in the area under investigation, which demands high effort in both installation and expense.
To overcome the shortcomings of traditional sensors, Acoustic travel-time TOMography (ATOM) offers an alternative based on measuring sound propagation. The basis of the ATOM technique is the first-order dependency of the sound velocity on the medium's temperature. The average sound velocity along a propagation path can be determined by estimating the travel times of a defined acoustic signal between transducers. After the travel times are collected, the room is divided into several volumetric grid cells (voxels), whose sizes depend on the dimensions of the room and the number of sound paths. The spatial air temperature in each voxel can then be determined using a suitable tomographic algorithm. Recent studies indicate that despite the great potential of this technique for monitoring room climate, few experiments have been conducted.
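The temperature dependency mentioned above can be sketched with the standard dry-air approximation c ≈ 20.05·√T (T in kelvin); inverting it turns a measured travel time on a known path into a path-averaged temperature. Path length and travel time below are made-up example values:

```python
import math

def path_temperature(path_length_m, travel_time_s):
    """Mean air temperature in deg C along one path, from the standard
    dry-air relation c = 20.05 * sqrt(T[K])."""
    c = path_length_m / travel_time_s          # mean sound speed on the path
    return (c / 20.05) ** 2 - 273.15

# A 5 m direct path traversed in about 14.5 ms corresponds to roughly
# room temperature:
print(round(path_temperature(5.0, 0.0145), 1))
```

Tomography then combines many such path averages, one per direct or reflected path, into a voxel-wise temperature field.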
This thesis aims to develop the ATOM technique for indoor climate applications while coupling the analysis methods of tomography and room acoustics. The method developed here uses high-energy early reflections, in addition to the direct paths between transducers, for travel-time estimation. In this way, reflections provide multiple sound paths that maintain room coverage even when few transducers, or even only one transmitter and one receiver, are used.
In developing the ATOM measurement system, several approaches were employed, including numerical methods, simulations and experimental measurements, each of which contributed to improving the system's accuracy. To effectively separate the early reflections and ensure adequate coverage of the room with sound paths, a numerical method was developed based on optimizing the coordinates of the sound transducers in the test room. Validation of this optimal positioning shows that the reconstructed temperatures improve significantly when the transducers are placed at the optimal coordinates derived from the numerical method. The second numerical method concerns the selection of the travel times of the early reflections: detection was improved by adjusting the lengths of the multiple analysis time windows according to the individual travel times in the reflectogram of the room impulse response, which reduces the probability of trapping faulty travel times in the analysis windows.
The simulation model used in this thesis is based on the image source method (ISM) for computing the theoretical travel times of early-reflection sound paths; it was developed to simulate travel times up to third-order reflections.
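The image source idea can be sketched for a shoebox room: mirroring the source across a wall yields a virtual source whose straight-line distance to the receiver equals the reflected path length. The room and transducer positions below are hypothetical, and only first-order reflections are shown (the thesis goes up to third order):

```python
import math

def first_order_paths(src, rcv, room):
    """Path lengths of the six first-order wall reflections in a shoebox
    room; src, rcv are (x, y, z), room is (Lx, Ly, Lz)."""
    paths = []
    for axis in range(3):
        for wall in (0.0, room[axis]):
            image = list(src)
            image[axis] = 2 * wall - src[axis]   # mirror across the wall
            paths.append(math.dist(image, rcv))
    return sorted(paths)

# Hypothetical 5 x 4 x 3 m room:
room, src, rcv = (5.0, 4.0, 3.0), (1.0, 1.0, 1.0), (4.0, 3.0, 1.5)
for L in first_order_paths(src, rcv, room):
    print(round(L, 3))
```

Dividing each path length by the sound speed gives the theoretical travel time used to label peaks in the measured reflectogram.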
The empirical measurements were carried out in the climate lab of the Chair of Building Physics under different boundary conditions, i.e., combinations of different room air temperatures under both steady-state and transient conditions, and different measurement setups. With the measurements under controllable conditions in the climate lab, the validity of the developed numerical methods was confirmed.
In this thesis, the performance of the ATOM measurement system was evaluated using two measurement setups. The setup for the initial investigations consists of an omnidirectional receiver and a nearly omnidirectional sound source, keeping the number of transducers as small as possible. This made it possible to accurately identify the sources of error that could occur in each part of the measuring system. The second setup consists of two directional sound sources and one omnidirectional receiver. This arrangement allowed a higher number of well-detected travel times for the tomographic reconstruction, better travel-time estimation due to the directivity of the sound sources, and better use of the space. Furthermore, this new setup was used to determine an optimal excitation signal. The results showed that, for this setup, a linear chirp signal with a frequency range of 200–4000 Hz and a signal duration of t = 1 s is optimal with respect to the reliability of the measured travel times and a higher signal-to-noise ratio (SNR).
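A linear chirp of the kind selected above is straightforward to generate: its instantaneous frequency sweeps linearly from f0 to f1 over the duration T. The sampling rate below is an assumption for illustration, not a parameter stated in the abstract:

```python
import math

def linear_chirp(f0=200.0, f1=4000.0, T=1.0, fs=16000):
    """Sampled linear chirp s(t) = sin(2*pi*(f0*t + (f1 - f0)*t^2 / (2*T)))."""
    k = (f1 - f0) / T                      # sweep rate in Hz per second
    return [math.sin(2 * math.pi * (f0 * t + 0.5 * k * t * t))
            for t in (n / fs for n in range(int(T * fs)))]

s = linear_chirp()
print(len(s))  # one second at the assumed fs = 16 kHz
```

Cross-correlating the received signal with this known excitation is the usual way the travel time of each path is then estimated.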
To evaluate the performance of the measuring setups, the ATOM temperatures were always compared with the temperatures of high-resolution NTC thermistors with an accuracy of ±0.2 K. The entire measurement program, including acoustic measurements, simulation, signal processing, and visualization of the results, is implemented in MATLAB.
In addition, to reduce the uncertainty in the positioning of the transducers, the acoustic centre of the loudspeaker was determined experimentally for three types of excitation signals: MLS (maximum length sequence) signals of different lengths and durations, and linear and logarithmic chirp signals with different defined frequency ranges. For this purpose, the climate lab was converted into a fully anechoic chamber by covering all room surfaces with absorption panels. The measurements showed that determining the acoustic centre of the sound source significantly reduces the displacement error of the transducer position.
Moreover, to measure the air temperature in an occupied room, an algorithm was developed that converts distorted signals into pure reference signals using an adaptive filter. The measurement results confirm the validity of the approach for a temperature interval of 4 K inside the climate lab.
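The abstract does not name the specific adaptive filter; the widely used LMS (least mean squares) algorithm sketched here is one standard choice for making a filtered, distorted measurement track a reference signal. The toy distortion (a simple 0.5x attenuation) and all parameters are illustrative assumptions:

```python
import math

def lms_filter(distorted, reference, n_taps=8, mu=0.05):
    """Adapt FIR weights so the filtered distorted signal tracks the
    reference; returns (filter output, final weights)."""
    w = [0.0] * n_taps
    out = []
    for n in range(len(distorted)):
        x = [distorted[n - k] if n - k >= 0 else 0.0 for k in range(n_taps)]
        y = sum(wk * xk for wk, xk in zip(w, x))
        e = reference[n] - y                     # instantaneous error
        w = [wk + mu * e * xk for wk, xk in zip(w, x)]
        out.append(y)
    return out, w

# Toy check: the filter should learn to undo a 0.5x attenuation.
ref = [math.sin(0.1 * n) for n in range(2000)]
dist = [0.5 * v for v in ref]
out, w = lms_filter(dist, ref)
err = sum(abs(r - o) for r, o in zip(ref[-100:], out[-100:])) / 100
print(round(err, 4))  # mean absolute tracking error, last 100 samples
```

After convergence the tracking error becomes small, which is the property needed to recover clean travel times from signals distorted by occupants.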
Accordingly, the accuracy of the reconstructed temperatures indicated that ATOM is very suitable for measuring the air temperature distribution in rooms.
Matrix-free voxel-based finite element method for materials with heterogeneous microstructures
(2019)
Modern image detection techniques such as micro-computed tomography (μCT), magnetic resonance imaging (MRI) and scanning electron microscopy (SEM) provide high-resolution images of the microstructure of materials in a non-invasive and convenient way. They form the basis for the geometrical models of high-resolution, so-called image-based analysis.
However, especially in 3D, discretizations of these models easily reach 100 million degrees of freedom and require extensive hardware resources in terms of main memory and computing power to solve the numerical model. Consequently, the focus of this work is to combine and adapt numerical solution methods to reduce first the memory demand and then the computation time, and thereby enable image-based analysis on modern desktop computers. The numerical model is a straightforward grid discretization of the voxel-based (pixels with a third dimension) geometry, which omits boundary detection algorithms and allows a reduced-storage finite element data structure and a matrix-free solution algorithm.
This in turn reduces the effort of almost all applied grid-based solution techniques and results in memory-efficient and numerically stable algorithms for the microstructural models. Two variants of the matrix-free algorithm are presented. The efficient iterative method of conjugate gradients is used with matrix-free preconditioners such as Jacobi and the especially well-suited multigrid method. The jagged material boundaries of the voxel mesh are smoothed through embedded boundary elements, which carry different material information at the integration points and are integrated sub-cell-wise, though without additional boundary detection. The efficiency of the matrix-free methods is retained.
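The matrix-free principle can be sketched compactly: conjugate gradients only ever needs the action v → A·v, so the operator can be applied on the fly instead of storing a matrix. The 1D Poisson stencil below is an illustrative stand-in for the voxel-FEM operator, not the thesis's implementation:

```python
def apply_laplacian(v):
    """Matrix-free action of the 1D Poisson stencil [-1, 2, -1]."""
    n = len(v)
    return [2 * v[i] - (v[i - 1] if i > 0 else 0.0)
                     - (v[i + 1] if i < n - 1 else 0.0) for i in range(n)]

def cg(apply_A, b, tol=1e-10, max_iter=500):
    """Conjugate gradient solve of A x = b, given only the operator."""
    x = [0.0] * len(b)
    r = list(b); p = list(r)
    rs = sum(ri * ri for ri in r)
    for _ in range(max_iter):
        Ap = apply_A(p)
        alpha = rs / sum(pi * api for pi, api in zip(p, Ap))
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        rs_new = sum(ri * ri for ri in r)
        if rs_new < tol:
            break
        p = [ri + (rs_new / rs) * pi for ri, pi in zip(r, p)]
        rs = rs_new
    return x

x = cg(apply_laplacian, [1.0] * 7)
print([round(v, 6) for v in x])  # -> [3.5, 6.0, 7.5, 8.0, 7.5, 6.0, 3.5]
```

Only a few vectors are kept in memory, which is exactly what makes 100-million-DOF voxel models tractable on a desktop.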
For optimizing an existing process, e.g. with respect to the maximum possible throughput at constant quality of the pyrolysis products, or for setting the operating parameters for an unknown feedstock, a mathematical model can give a first estimate of operating parameters such as the temperature profiles in the gas and the solid. Furthermore, a model allows constructive parameters of newly designed plants to be determined or checked. In the simplified model approach presented here, the conversion processes for a particle collective are determined, among other things, by means of lumped parameters from investigations in a thermobalance and, complementarily, in the rotary kiln. The process model is based on a reactor model describing the residence-time behavior of the feedstock in the reactor, and a base model consisting of mass and energy balances for solid and gas as well as approaches for drying and conversion. In view of the limited availability of material-specific data for wastes, simplifying approaches using lumped parameters are helpful, in particular for calculating the residence-time behavior and the conversion in hot operation. The process model was validated step by step: first, in cold tests, a lumped parameter accounting among other things for the unknown friction conditions in the rotary kiln was determined for sand by comparing experiment and calculation. For heterogeneous waste mixtures this material factor can be determined in cold tests (as far as this is possible for wastes); in hot operation, however, all essential material parameters, such as particle diameter, bulk density and angle of repose, as well as the friction conditions, change. In this case the material factor is set to one and the essential material properties are modeled as functions of conversion.
This requires knowledge of the bulk densities, static angles of repose and mean particle diameters of the waste and of the coke produced from it. The residence time calculated with these material data was matched in a hot test of the pyrolysis of refuse-derived fuel (BRAM) pellets with an error of about 20%. The base model was first fitted, without conversion, to measurements with sand in the rotary kiln under variation of temperatures and mass flow, before the pyrolysis of a homogeneous feedstock (polyethylene with sand) in the rotary kiln was calculated with this model. Here it could already be shown that this simplified model approach yields good agreement between model and experiment. In the next step the sand was moistened in order to validate the sub-models for drying below and at boiling temperature; measurement and modeling results agree well. For a waste mixture of BRAM pellets, the evolution of the solid temperatures could be reproduced well, taking into account variable material properties of the solid and a fouling factor that accounts for the coating of the rotary kiln with adhering pellets up to coking. The gas temperatures can, to a first approximation, be described with sufficient accuracy by the mathematical model. With this simplified mathematical model, a tool is now available for the design and optimization of indirectly heated rotary kilns, with which, for a new feedstock, the temperature profiles in the solid and gas as well as the gas composition can be estimated as functions of the essential influencing variables, using data from basic experimental investigations.
Stacking problems occur in practice in many forms: they appear in a great variety of variations in logistics, but also in construction. First, the classical Tower of Hanoi problem is briefly presented and formulated as a stacking problem. Branched stacking problems are then examined: a given stack, consisting of the elements v of the set V, is to be rebuilt elsewhere in a prescribed, altered structure, with auxiliary stacking places available. Optimizing this problem with respect to the number of auxiliary stacking places required is NP-complete. Experience with a branch-and-bound algorithm for solving the problem is presented, and a heuristic algorithm is discussed. Finally, branched stacking problems are considered in which a one-to-one assignment of elements of the initial stack to available positions in the target stack no longer exists; here, even determining the most favorable assignment with respect to the number of auxiliary stacking places required is NP-hard.
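The classical Tower of Hanoi scheme mentioned above can be stated in a few lines: moving n discs takes 2^n − 1 moves, and each move can be recorded as (disc, from_peg, to_peg). A minimal recursive sketch:

```python
def hanoi(n, src="A", aux="B", dst="C"):
    """Return the move list that transfers n discs from src to dst."""
    if n == 0:
        return []
    return (hanoi(n - 1, src, dst, aux)          # clear the way
            + [(n, src, dst)]                    # move the largest disc
            + hanoi(n - 1, aux, src, dst))       # restack on top of it

moves = hanoi(3)
print(len(moves))  # -> 7
```

The branched stacking problems studied in the thesis generalize this setting: the target structure differs from the source and the scarce resource is the number of auxiliary stacking places rather than the move count.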
Markennetze
(2003)
The thesis shows that, from an economic perspective, brand alliances and brand networks can be viewed and analyzed analogously to alliances in other areas of a firm or to strategic partnerships between entire companies. Brands, too, face the challenges arising from the globalization and deregulation of markets and from changing consumer behaviour, with the result that going to market alone often no longer appears promising. As objects of inter-organizational cooperation, brands exhibit peculiarities compared with other assets and business areas. In the economy and in society, brands perform functions that reach considerably further than merely guaranteeing product quality. A brand can be defined as a symbolically generalized communication medium in the sense of LUHMANN, or as a social interaction medium in the sense of PARSONS. This view makes it possible to present the different functions of the brand for different users or owners, across different markets and spheres of life, within a coherent overall framework. Furthermore, the holistic view of the brand as a medium makes it possible to integrate perspectives and definitions of brands that have so far stood side by side. Presenting the brand as a medium and analyzing its medial levels leads to a new dispositive of brand management. Brands must leave consumers more room to charge them with meaning of their own; their symbolic meaning cannot be planned directly and comprehensively by the brand-owning company. Moreover, brands with different symbolic functions should be managed differently; existing brand-management concepts, however, do not differentiate their recommendations systematically according to the brand's various symbolic functions. The same applies to the management of brand networks: depending on the characteristics of the brands involved, different coordination mechanisms must be applied. For the management of brand networks, recommendations can be derived from research on the coordination of inter-firm networks; these not only give a theoretical foundation to existing guidelines for designing brand alliances and substantially extend them, but above all enable a forward-looking analysis instead of a mere description of the various manifestations. The thesis conceptualizes brand networks as a network of multi-layered principal-agent relationships. Designing brand networks is accordingly a complex process, and because the brands involved pursue different goals from different starting positions, no identical, universally valid recommendations exist for all forms of brand networks. To develop the framework for action, an approach from network theory was adopted which divides the coordination of networks into four steps, the "four basic functions of management": the selection of partners, the allocation of resources, the regulation of the relationships and, finally, the evaluation of the cooperation. Starting from these four functions, the thesis develops a complete frame of reference for the design of brand networks.
This first comprehensive monograph on the architect, professor and museum specialist Manfred Lehmbruck (Paris 1913 – Stuttgart 1992) focuses, alongside Lehmbruck's biography and his theoretical work on museum design, on his three realized museum buildings: the Reuchlinhaus in Pforzheim, the Wilhelm-Lehmbruck-Museum in Duisburg and the Federseemuseum in Bad Buchau. The three other significant building ensembles, the Pausa AG in Mössingen, the vocational school and public baths in Stuttgart-Feuerbach and the free-standing secondary school in Mössingen, all completed in the 1960s, are also discussed in detail. The comprehensive catalogue of works in the appendix clearly shows the influence that his teachers, above all Ludwig Mies van der Rohe, but also Heinrich Tessenow, Paul Bonatz and Auguste Perret, had on Lehmbruck's architectural language.
In the last two decades, many cities have faced changes in their economic basis and have therefore adopted an entrepreneurial approach in their municipal administration, accompanied by city marketing strategies. Brazilian cities have adopted this approach as well, as in the case of Florianópolis. Florianópolis has promoted advertising campaigns about the natural resources of the Island of Santa Catarina as well as about its quality of life in comparison to other cities. However, partly because of such campaigns, it has experienced great demographic growth and, consequently, infrastructural and social problems. Nevertheless, it appears to enjoy a good image within the national urban scenario and has commonly been considered an "urban consumption dream" for many Brazilians. This paradoxical situation is the reason why it was chosen as the research object of this dissertation. Thus, the questions of this research are: Is there a gap between the promise and the performance of the city of Florianópolis? If so, can tourists and residents recognize it? And finally, how can this gap be demonstrated? Accordingly, the main objective of this research is to propose a conformity assessment approach applicable to cities, by which the content of city advertising campaigns can be compared with the city's performance indicators and the satisfaction of its consumers. This approach is composed of different methods: literature and legislation reviews, semi-structured and structured interviews with experts and inhabitants, an analysis of urban centrality development, a qualitative discourse analysis of advertising material (including images), a qualitative content analysis of newspaper reports, and a questionnaire survey.
Finally, the theses are: yes, there is a gap between the promise and the performance of Florianópolis. This promise is the result of city marketing campaigns that advertise the city's natural features while hiding its urban aspects, supported by political and private actors mainly interested in the development of tourism and the real estate market. The gap has already been recognized by tourists and, more intensively, by residents. The selected methods worked as a kind of conformity assessment for cities and tourist destinations. And last but not least, since there is a gap, it designates the practice of "make-up urbanism". The limitations of the research are the short time frame covered by the analysis and the small, non-representative samples. Its relevance, however, lies in the attempt to fill two disciplinary lacunae: on the one hand, a conformity assessment approach for cities and the creation of knowledge about Florianópolis and its presentation at an international level; on the other hand, the transfer of this approach to other cities, which would help explain a common contemporary urban phenomenon and appeals for more ethical conduct and transparency in the practices of city marketing.
A fundamental characteristic of human beings is the desire to learn, starting at the moment of birth. The rather formal learning process that learners face in school, in vocational training or at university is currently subject to fundamental changes. Increasing technologization, omnipresent mobile devices, ubiquitous access to digital information, and students who are early adopters of all these technological innovations require reactions on the part of the educational system.
This study examines such a reaction: The use of mobile learning in higher education.
Examining the subject of m-learning first requires an investigation of the educational model of e-learning. Many universities have already established e-learning as one of their educational segments and provide a wide range of methods to support this kind of teaching.
This study includes an empirical acceptance analysis regarding the general learning behavior of students and their approval of e-learning methods. A survey on the approval of m-learning supplements the results.
Mobile learning is characterized by the mobility of both the communication devices and their users. Together, these factors create new correlations, demonstrate the potential of today's mobile devices, and offer the prospect of increased learning performance.
The dissertation addresses these correlations and the use of mobile devices in the context of m-learning. M-learning and the use of mobile devices require reflection not only from a technological point of view: in addition to the technical features of mobile devices, the usability of their applications plays an important role, especially with regard to the limited display size.
For the purpose of evaluating mobile apps and browser-based applications, various analytical methods are suitable.
The concluding heuristic evaluation points out the weaknesses of an established m-learning application, reveals the need for improvement, and shows an approach to rectifying the shortcomings.
This thesis presents a force-transmitting connection technique for modular, shell-like fibre composite components. The connection is based on adhesive bonding with locally limited steel plates. Starting from this connection concept, the adhesive bond between steel and fibre-reinforced plastic is examined in depth. The objectives are the choice of technological boundary conditions, the development of a proposal for numerical analysis and design, and the formulation of constructive recommendations for the design of adhesive bonds. Mechanical parameters are determined in tensile tests and transferred directly to the non-linear calculations. Technological influences and the scatter of real adhesive bonds are integrated into the design via the recalculation of tensile shear tests. It is shown that the bonds exhibit sufficient strength and satisfactory fracture behaviour. The combination of shop-applied bonding and site-compatible assembly enables material-appropriate and efficient connections for fibre composite structures under the boundary conditions of the construction industry.
As human thought developed, so did the technology used for illumination. A journey through history, reviewing and analyzing its pages, inevitably raises old and new questions: Is it possible to negatively alter the image of historic buildings and monuments through inadequate lighting, to the point of distorting people's perception of the work? If so, what are the causes? Do lighting designers take into consideration criteria that protect not only historic buildings and monuments, but also the environment? What consequences may the inadequate lighting of urban heritage have for the environment? What factors must be considered for the proper illumination of urban heritage? The answers to these questions help lay the foundations for the proper illumination of urban heritage, avoiding light pollution and its effects as far as possible and seeking a balanced, harmonious reconciliation between technology, urban heritage and the environment. The framework and case study is the urban heritage of a colonial-era city in southern Mexico with pre-Hispanic roots, where an atmosphere of mysticism reflecting its folklore and traditions can still be seen in its streets and buildings: Chiapa de Corzo, Chiapas.