The growing need for energy savings has led, in various countries, to the development of prediction models for determining energy demand in the residential sector. Although prediction models in principle offer a way to determine energy demand and to assess the effects of future energy-saving measures, the existing models suffer from modelling uncertainties and from shortcomings in the data and methods used.
In this work, the transferability, accuracy and stochastic uncertainty of twelve prediction models (MAED-2, FfE-Gebäudemodell, CDEM, REM, CREEM, ECCABS, REEPS, BREHOMES, LEAP, DECM, CHM, BSM) are analysed, using Germany as a case study. Adaptations are proposed to improve the transferability of the existing models. In addition, for each model the parameters with the greatest influence on the simulated final energy demand are determined by means of a sensitivity analysis. It could be shown that models with a high level of detail do not necessarily guarantee more accurate results for the final energy demand. Nevertheless, models with a low level of detail were found to deliver results with larger uncertainties than models with a higher level of detail. It was further found that the most influential parameters for determining final energy demand in the residential sector are indoor temperature, outdoor temperature (degree days), population development, and the number of buildings/dwellings.
Based on the findings from the evaluation of the existing models and the identification of the most influential parameters, an optimised prediction model (Transferable Residential Energy Model, TREM) was developed. It was used to forecast the development of final energy demand in the German residential sector as well as in other countries (United Kingdom and Chile), and these results were then compared with statistical data. The TREM model determines final energy demand from the most probable variations of the most influential input parameters by means of a Monte Carlo simulation. In contrast to existing modelling approaches, the model thus also provides a range with probability bands for the future final energy demand. The results of the TREM model show that it can deliver more accurate results than current models, with a mean percentage difference below 5% and a correlation coefficient r above 0.35, and that it is moreover suitable for forecasting the development of future final energy demand in the residential sector for different countries without adaptation.
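A minimal sketch of the Monte-Carlo step described above: sampling the most influential input parameters and reporting percentile bands of demand rather than a point forecast. The distributions, the toy demand formula and all numbers are illustrative assumptions, not TREM's calibrated equations.

```python
import numpy as np

rng = np.random.default_rng(42)
N = 10_000  # Monte Carlo draws

# Illustrative input distributions (assumed, not TREM's calibrated values):
indoor_temp = rng.normal(20.0, 1.0, N)      # indoor set-point [deg C]
degree_days = rng.normal(3500.0, 300.0, N)  # heating degree days [K*d/a]
dwellings   = rng.normal(41e6, 1e6, N)      # dwelling stock
demand_per_dd = 7.0e-6                      # GWh per dwelling and degree day (assumed)

# Toy demand model: heating demand scales with degree days and dwelling stock,
# shifted by the deviation of the indoor set-point from 20 deg C.
energy = dwellings * degree_days * demand_per_dd * (1 + 0.05 * (indoor_temp - 20.0))

# Probability bands instead of a single point forecast:
for q in (5, 50, 95):
    print(f"{q:2d}th percentile: {np.percentile(energy, q):,.0f} GWh/a")
```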
This thesis investigates the shear behaviour of reinforced components made of autoclaved aerated concrete. The prevailing description of the internal state of forces is based on the model of a truss or a strut frame with steel tension members and concrete compression struts. The aim is to develop an alternative method for determining the internal state of forces.
Starting from the principle of the minimum of the total elastic potential, an extremum problem is formulated for the mechanical problem. The numerical implementation is based on transforming the extremum problem into a nonlinear optimisation problem, which can be solved with standard software. The advantage of this approach is that the underlying method is independent of the material model used. Nonlinear stress-strain relations or the consideration of cracking require no changes to the computational algorithm.
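A minimal sketch of the core idea above: equilibrium as the minimizer of the total elastic potential, handed to a standard nonlinear optimizer. The two-spring toy system, the softening term standing in for a nonlinear stress-strain relation, and all numbers are illustrative assumptions, not the thesis model.

```python
import numpy as np
from scipy.optimize import minimize

# Toy system: two nonlinear springs in series, external force F at the free end.
# Unknowns u = (u1, u2) are the nodal displacements.
F = 10.0    # external load (assumed)
k = 100.0   # reference stiffness (assumed)

def total_potential(u):
    u1, u2 = u
    e1, e2 = u1, u2 - u1  # spring elongations
    # Internal strain energy with a cubic softening term (a stand-in for a
    # nonlinear stress-strain relation) minus the work of the external load.
    strain_energy = 0.5 * k * (e1**2 + e2**2) - (k / 30.0) * (abs(e1)**3 + abs(e2)**3)
    return strain_energy - F * u2

res = minimize(total_potential, x0=np.zeros(2), method="BFGS")
print("equilibrium displacements:", res.x)
```

Note that the optimizer never sees the material model directly; changing the stress-strain relation only changes `total_potential`, not the solution algorithm, which is exactly the advantage claimed above.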
Reinforced aerated concrete members exhibit several peculiarities with respect to their load-bearing and deformation behaviour. Design approaches for reinforced concrete elements cannot be transferred without appropriate modifications. The reinforcement is made of smooth bars, so that after production only adhesive bond is effective. This bond can fail partially or completely over the service life. The transfer of forces between the bonded components must therefore be ensured by appropriate coupling elements (e.g. transverse bars, stirrups, end anchors).
The reinforcement cage is embedded in the aerated concrete. Owing to the relatively low strength and stiffness of aerated concrete and the partially ineffective bond, relative displacements occur between the two bonded materials. This is why the loading of the shear reinforcement is much lower than in comparable reinforced concrete beams. The shear reinforcement ratio therefore allows no conclusions about the shear resistance.
The central concern of the thesis is the implementation of nonlinear material models, of the cracking of the aerated concrete, and of the peculiarities specific to aerated concrete (flexible bond, discrete anchorage of the reinforcement, and relative displacements between aerated concrete and reinforcement) in the computational model.
The capability of the developed computational model is demonstrated by means of examples. The forces in the reinforcement as well as the structural behaviour are determined realistically.
The aim of this thesis is to analyse the breadth of the field of building within existing structures and to present it graphically. In a register of responsibilities, the tasks arising during a project in existing buildings are assigned to the individual parties involved, following the HOAI. The building survey, a decisive part of any such project, is explained and described in more detail. Several peculiarities in the preparation and supervision of a project within existing structures are summarised in the beginnings of a guideline for engineering practice.
The history of prestressed concrete in bridge construction is presented. Among other things, a comparison of standards on prestressed concrete is compiled. Furthermore, the principles governing the reassessment of existing bridges are explained. For the bridge over the Saale near Jena-Kunitz, a sampling concept with destructive tests is drawn up. The samples obtained are to be examined further in laboratory tests. The damage caused by the sampling is to be used for further measurements on the bridge. The damage states are verified statically. Finally, recommendations for the planned demolition of the bridge are given.
With the introduction of the semi-probabilistic safety concept in construction, the calculation and verification principles for bridges were also redefined. For practical application at the national level, and to render the Eurocode provisions more precise, an ARS (general circular on road construction) with consistent rules for determining forces and deformations of bridge bearings was issued. The focus of this thesis is the determination of forces and deformations of steel-reinforced elastomeric bearings according to DIN 1072 and DIN-Fachbericht 101 (as extended by the draft ARS of 25 July 2005) for a complex prestressed concrete bridge structure, in order to compare the safety levels of the two sets of standards. The calculations were carried out with the FE program system ANSYS on the nonlinear, complex overall system. Beforehand, a fundamental comparison of the safety concepts, load assumptions and governing load case combinations was performed. For the wind load assumptions, the current provisions of DIN 1055-4 (2005-03) and EN 1991-1-4 (2005-07) were applied. For the preliminary design and verification of the prestressed concrete superstructure, an InfoCAD FE model was used. Based on the resulting bearing dimensions, a comparative analysis of the attainable safety level was carried out. For the design situation considered, the safety level achieved under the new standards is comparable to that of the established practice of DIN 1072.
Using the example of the 'Vijzelgracht' underground station in Amsterdam, currently under construction, process concepts are examined and checked by analysing their logic. The focus of the study is the development of possible concepts combining excavation with the installation of bracing elements. To this end, the essential, relevant work steps are identified and described. On the basis of productivity figures, machines and equipment are dimensioned. The quantity take-off of the excavated soil volume likewise forms a basis for developing the process concepts. On these foundations, several process concepts are presented. The proof of logical consistency is carried out for one process variant as an example: the work steps, translated into logical language, are presented and checked for truth.
A precondition for extensive use of the nutrients contained in domestic wastewater, and for realising the recycling principle in municipal wastewater systems as well, is the separation of wastewater streams, as is already widely practised in industrial production. The separate collection as well as the use or treatment of the individual substreams deviates from the usual end-of-pipe systems practised so far, and alternative processes must be sought in order to realise the goals of a substream-oriented wastewater concept. In this thesis, an anaerobic mesophilic treatment process for the brownwater substream is investigated. For this purpose, various experiments were carried out with faeces of different water contents and in different mixtures, from the use of the raw substrate to the addition of urine, water and/or seeding sludge. During the experiments in temperature-controlled reactors, the gas yields of the individual batches, the pH values of selected batches, the room temperature and the substrate temperature in the reactor were recorded. The different batches are characterised by their composition, and the results obtained are compared and evaluated against each other and against values given in the initial literature. The aim of the anaerobic treatment was to obtain a stabilised substrate from the anaerobic stage. From the results obtained, a process proposal for anaerobic stabilisation is made.
The thesis »Anachronismen: Historiografie und Kino« (Anachronisms: Historiography and Cinema) starts from an initially simple observation: almost whenever historians engage with historical films, one encounters loud complaints about the films' numerous and avoidable anachronisms, which are said to discredit them as serious historiographic contributions.
From here, the thesis pursues a threefold project: first, through a critical analysis of texts on the theory of history, to gain some indications of the status of anachronisms for modern Western historiography; second, to examine what role anachronisms play for the historical film; and third, from there, to explore the epistemic potential of anachronistic historical cinema.
One of the main theses guiding the view of both the films and the theoretical texts is that anachronisms are precisely those points at which the media of any historiography become observable. The thesis undertakes the observation and description of these media of cinematographic historiography with the help of theoretical considerations from Actor-Network Theory (ANT).
The thesis is organised into four chapters, each centred on the discussion of an ANT concept and the analysis of a historical film. The films examined include Shutter Island (Martin Scorsese, 2010), Chronik der Anna Magdalena Bach (Jean-Marie Straub/Danièle Huillet, 1968), Cleopatra (Joseph L. Mankiewicz, 1963) and Caravaggio (Derek Jarman, 1986). The thesis also comments on theoretical texts on historiography and anachronisms by Walter Benjamin, Leo Bersani, Georges Didi-Huberman, Siegfried Kracauer, Friedrich Meinecke, Friedrich Nietzsche, Jacques Rancière, Leopold Ranke, Paul Ricœur, Georg Simmel, Hayden White and others.
The concept of isogeometric analysis, where functions that are used to describe geometry in CAD software are used to approximate the unknown fields in numerical simulations, has received great attention in recent years. The method has the potential to have profound impact on engineering design, since the task of meshing, which in some cases can add significant overhead, has been circumvented. Much of the research effort has been focused on finite element implementations of the isogeometric concept, but at present, little has been seen on the application to the Boundary Element Method. The current paper proposes an Isogeometric Boundary Element Method (BEM), which we term IGABEM, applied to two-dimensional elastostatic problems using Non-Uniform Rational B-Splines (NURBS). We find it is a natural fit with the isogeometric concept since both the NURBS approximation and BEM deal with quantities entirely on the boundary. The method is verified against analytical solutions where it is seen that superior accuracies are achieved over a conventional quadratic isoparametric BEM implementation.
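To make the NURBS machinery concrete, a minimal evaluation of a point on a NURBS curve via the Cox-de Boor recursion; the quadratic quarter-circle data are a standard textbook example, not taken from the paper.

```python
import numpy as np

def bspline_basis(i, p, u, U):
    """Cox-de Boor recursion for the i-th B-spline basis function of degree p
    (half-open knot spans; evaluate for u in [0, 1))."""
    if p == 0:
        return 1.0 if U[i] <= u < U[i + 1] else 0.0
    left = right = 0.0
    if U[i + p] > U[i]:
        left = (u - U[i]) / (U[i + p] - U[i]) * bspline_basis(i, p - 1, u, U)
    if U[i + p + 1] > U[i + 1]:
        right = (U[i + p + 1] - u) / (U[i + p + 1] - U[i + 1]) * bspline_basis(i + 1, p - 1, u, U)
    return left + right

def nurbs_point(u, U, P, w, p):
    """Point on a NURBS curve: weighted rational combination of control points."""
    N = np.array([bspline_basis(i, p, u, U) for i in range(len(P))])
    return (N * w) @ P / (N @ w)

# Quarter circle as a quadratic NURBS curve (standard example).
U = [0, 0, 0, 1, 1, 1]                              # knot vector
P = np.array([[1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])  # control points
w = np.array([1.0, 1.0 / np.sqrt(2.0), 1.0])        # weights
print(nurbs_point(0.5, U, P, w, p=2))  # lies exactly on the unit circle
```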
Biodiesel, the main alternative to diesel fuel, is produced from renewable and available resources and improves engine emissions during combustion in diesel engines. In this study, biodiesel is first produced from waste cooking oil (WCO). The fuel samples are applied in a diesel engine, and the engine performance is considered from the viewpoints of exergy and energy. Engine tests are performed at a constant speed of 1500 rpm with various loads and fuel samples. The experimental data obtained are also used to develop an artificial neural network (ANN) model. Response surface methodology (RSM) is employed to optimize the exergy and energy efficiencies. Based on the results of the energy analysis, optimal engine performance is obtained at 80% of full load with the B10 and B20 fuels. However, based on the exergy analysis results, optimal engine performance is obtained at 80% of full load with the B90 and B100 fuels. The optimum values of exergy and energy efficiencies are in the range of 25-30% of full load, which is the same as the calculated range obtained from mathematical modeling.
The general motivation of this research is to develop software to support the handling of the increased complexity of architectural design. In this paper we describe a system providing general support during the whole process. Instead of only developing design tools, we also address the problem of the operating environment of these tools. We conclude that design tools have to be integrated in an open, modular, distributed, user-friendly and efficient environment. Two major fields have to be addressed: the development of design tools and the realisation of an integrated system as their operating environment. We briefly focus on the latter by discussing known technologies from the field of information technology and other design disciplines that can be used to realise such an environment. Regarding the former, we state the need for a detailed tool specification. As a solution we suggest a strategy in which the tool functions are specified on the basis of a transformation, where a hierarchical process model is mapped into specifications of different design tools, realising appropriate support for all sub-processes of architectural design. Using this strategy, the main steps to develop such a support system are:
- implement a framework as the basis for the integrated design system;
- decide whether the tool specifications are already implemented in available tools;
- if so, integrate these tools using known methods for tool coupling;
- otherwise, develop new design tools according to the framework.
This paper describes a research project that addresses the difficulties in dealing with regulatory documents such as national and regional codes. These documents tend to be voluminous, heavily cross-referenced, possibly ambiguous and even conflicting at times. There are often multiple documents that need to be consulted and satisfied; however it is a difficult task to locate all of the relevant provisions. In addition, sections dealing with the same or similar conceptual ideas sometimes lay down conflicting requirements. We propose a framework for regulation representation, analysis and comparison with emphasis on the extraction of similarities between provisions. We focus on accessibility regulations, whose intent is to provide the same or equivalent access to a building and its facilities for disabled persons. An XML regulatory repository is developed to extract structural as well as non-structural features from government regulations to help user understanding and computational analysis. A similarity analysis is performed between different sources of regulations. In order to achieve a better comparison between provisions, we employ a combination of feature matching and structural analysis. Results are shown on comparisons between American and European codes, as well as on the domain of electronic-rulemaking.
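A minimal sketch of the feature-matching step described above, assuming provisions are reduced to bag-of-words feature vectors compared by cosine similarity; the two provisions are invented examples, and the paper's actual analysis additionally uses structural features.

```python
import math
from collections import Counter

def cosine_similarity(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words feature vectors."""
    dot = sum(a[t] * b[t] for t in a.keys() & b.keys())
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# Two hypothetical accessibility provisions from different codes:
us_provision = Counter("ramp slope shall not exceed one in twelve".split())
eu_provision = Counter("the slope of a ramp shall not exceed eight percent".split())
print(f"{cosine_similarity(us_provision, eu_provision):.2f}")  # ~0.56
```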
The modeling of crack propagation in plain and reinforced concrete structures is still a field of active research. If a macroscopic description of the cohesive cracking process of concrete is applied, generally the Fictitious Crack Model is utilized, where a force transmission over micro cracks is assumed. In most applications of this concept, the cohesive model represents the relation between the normal crack opening and the normal stress, which is mostly defined as an exponential softening function, independently of the shear stresses in the tangential direction. The cohesive forces are then calculated only from the normal stresses. An improved model was developed by Carol et al. (1997) using a coupled relation between the normal and shear damage based on an elasto-plastic constitutive formulation. This model is based on a hyperbolic yield surface depending on the normal and shear stresses and on the tensile and shear strength; it also represents the effect of shear-traction-induced crack opening. Due to the elasto-plastic formulation, where the inelastic crack opening is represented by plastic strains, this model is limited to applications with monotonic loading. In order to enable application to cases with unloading and reloading, the existing model is extended in this study using a combined plastic-damage formulation, which enables the modeling of crack opening and crack closure. Furthermore, the corresponding algorithmic implementation using a return mapping approach is presented, and the model is verified by means of several numerical examples. Finally, an investigation concerning the identification of the model parameters by means of neural networks is presented. In this analysis, an inverse approximation of the model parameters is performed by using a given set of points of the load-displacement curves as input values and the model parameters as output terms. It will be shown that the elasto-plastic model parameters can be identified well with this approach, but this requires a huge number of simulations.
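For orientation, the hyperbolic yield surface of the Carol et al. (1997) type mentioned above is commonly written in the following generic form; parameter naming varies between publications, so this should be read as the textbook shape rather than this paper's exact notation.

```latex
% sigma_N: normal stress on the interface, tau: shear stress,
% chi: tensile strength, c: cohesion, phi: friction angle.
% The surface meets tau = 0 at sigma_N = chi and approaches the
% Mohr-Coulomb asymptote tau = c - sigma_N tan(phi) in compression.
F(\sigma_N,\tau) = \tau^2 - \left(c - \sigma_N\tan\varphi\right)^2
                 + \left(c - \chi\tan\varphi\right)^2 = 0
```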
The initial shear modulus, Gmax, of soil is an important parameter for a variety of geotechnical design applications. This modulus is typically associated with shear strain levels of about 5*10^-3 % and below. The critical role of soil stiffness at small strains in the design and analysis of geotechnical infrastructure is now widely accepted.
Gmax is a key parameter in small-strain dynamic analyses, such as those used to predict soil behavior or soil-structure interaction during earthquakes, explosions, or machine and traffic vibrations, where it is necessary to know how the shear modulus degrades from its small-strain value as the level of shear strain increases. Gmax can be equally important for small-strain cyclic situations, such as those caused by wind or wave loading, and for small-strain static situations as well. Gmax may also serve as an indirect indication of various soil parameters, as it in many cases correlates well with other soil properties such as density and sample disturbance. In recent years, a technique using bender elements was developed to investigate the small-strain shear modulus Gmax.
The objective of this thesis is to study the initial shear stiffness of various sands with different void ratios, densities and grain size distributions under dry and saturated conditions, and then to compare empirical equations for predicting Gmax, as well as results from other testing devices, with the bender element results of this study.
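A minimal example of how Gmax is typically evaluated from a bender element test: the shear wave velocity Vs follows from the tip-to-tip travel distance and the measured travel time, and Gmax = rho * Vs^2. All numbers are illustrative assumptions, not data from this thesis.

```python
# Small-strain shear modulus from a bender element measurement.
rho = 1850.0   # bulk density of the sand specimen [kg/m^3] (assumed)
L_tt = 0.085   # tip-to-tip distance between bender elements [m] (assumed)
t = 4.2e-4     # measured shear wave travel time [s] (assumed)

Vs = L_tt / t           # shear wave velocity [m/s]
G_max = rho * Vs ** 2   # initial shear modulus [Pa]
print(f"Vs = {Vs:.1f} m/s, Gmax = {G_max / 1e6:.1f} MPa")
```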
An Experimental Study on Hydro-Mechanical Characteristics of Compacted Bentonite-Sand Mixtures
(2005)
Nuclear and hazardous waste disposal has become a universal issue, and problems related to the final disposal of these wastes, including finding a suitable site, the natural and engineered barriers used, construction of the repository, and long-term performance assessment, have gained increasing attention all over the world. High-level radioactive and hazardous wastes are required to be buried in deep geological repositories. In Germany, ongoing research has assessed the suitability of the country's rock salt formations as host rock candidates for its nuclear waste repository. Bentonite-based materials have been proposed for use as sealing and buffer elements in the nuclear waste repository. Several hydro-mechanical processes will take place in the field and influence the behaviour of the sealing and buffer elements of the repository. In this dissertation, a study on the hydro-mechanical characterisation of bentonite-sand mixtures is presented. Mixtures of a calcium-type bentonite, named Calcigel, and quartz sand were used in the investigation. Series of experiments including basic and physico-chemical characterisation, microstructure and fabric studies, suction and swelling pressure measurements, wetting and drying tests, one-dimensional compression-rebound tests, one-dimensional cyclic wetting-drying tests under constant vertical stress, and saturated permeability tests were conducted. The experimental data obtained are analysed, and several characteristics of the material are brought out in this dissertation. Conclusions regarding the basic behaviour of the materials are drawn based on the results of the microstructure and fabric studies. Factors influencing the magnitude of suction and swelling pressure of the materials are outlined and discussed. The suction-induced compression and rebound characteristics of the material are described. The wetting and drying behaviour as influenced by the material boundary conditions is discussed. Permeability characteristics of the materials are examined based on several available permeability models. Several mechanical and hydraulic parameters that can be used in modelling with available constitutive modelling approaches are derived from the experimental data. At the end, conclusions regarding the hydro-mechanical characteristics of the materials are drawn and suggestions for future studies are made.
This paper presents an evaluation system for steel structures of hydroelectric power stations, including hydraulic gates and penstocks, based on Fault Tree Analysis (FTA) and performance maps. The system consists of fault tree diagrams, performance maps, design and analysis systems, and engineering databases. These four modules are integrated by appropriate hyperlinks so that the user can work with the system easily and seamlessly. The developed system was applied to some illustrative example cases, which showed that the developed methodology and system worked well; the users found the system useful and effective for their maintenance tasks at power stations.
Since the Industrial Revolution in the 1700s, the high emission of gaseous wastes into the atmosphere from the use of fossil fuels has caused a general increase in global temperatures. To combat this environmental imbalance, there is an increasing demand for renewable energy sources. Dams play a major role in the generation of "green" energy. However, these structures require frequent and strict monitoring to ensure safe and efficient operation. To tackle the challenges faced in the application of conventional dam monitoring techniques, this work proposes the inverse analysis of numerical models to identify damaged regions in a dam. Using a dynamic coupled hydro-mechanical Extended Finite Element Method (XFEM) model and a global optimization strategy, damage (a crack) in the dam is identified. By employing seismic waves to probe the dam structure, more detailed information on the distribution of heterogeneous materials and damaged regions is obtained through the application of the Full Waveform Inversion (FWI) method. FWI is based on a local optimization strategy and is thus highly dependent on the starting model. A variety of data acquisition setups are investigated, and an optimal setup is proposed. The effect of different starting models and of noise in the measured data on the damage identification is considered. Combining the starting-model independence of the global-optimization-based dynamic coupled hydro-mechanical XFEM method with the detailed output of the local-optimization-based FWI method, an enhanced Full Waveform Inversion is proposed for the structural analysis of dams.
Classical Internet-of-Things routing and wireless sensor networks can provide more precise monitoring of the covered area owing to the large number of deployed nodes. Because of the limitations of the shared transfer medium, many nodes in the network are prone to collisions during simultaneous transmissions. Medium access control protocols are usually more practical in networks with low traffic that are not subject to external noise from adjacent frequencies. There are preventive, detection and control solutions for congestion management in the network, all of which are the focus of this study. In the congestion prevention phase, the proposed method chooses the next hop of the path using a fuzzy decision-making system to distribute network traffic via optimal paths. In the congestion detection phase, a dynamic approach to queue management was designed to detect congestion in the least amount of time and prevent collisions. In the congestion control phase, a back-pressure method based on queue quality was used to decrease the probability of linking into the path through a pre-congested node. The main goals of this study are to balance energy consumption across network nodes, reduce the rate of lost packets and increase the quality of service in routing. Simulation results showed that the proposed Congestion Control Fuzzy Decision Making (CCFDM) method improves routing parameters compared to recent algorithms.
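A minimal sketch of the fuzzy next-hop idea in the congestion-prevention phase, assuming triangular membership functions over a candidate's queue occupancy and residual energy; the single rule and all thresholds are invented for illustration and are not the CCFDM rule base.

```python
def tri(x, a, b, c):
    """Triangular membership function on [a, c] peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def next_hop_score(queue_occupancy, residual_energy):
    """Fuzzy score of a candidate next hop (higher = better)."""
    low_queue   = tri(queue_occupancy, -0.5, 0.0, 0.6)
    high_energy = tri(residual_energy, 0.4, 1.0, 1.5)
    # Single illustrative rule: prefer nodes with low queue AND high energy.
    return min(low_queue, high_energy)

# Candidates as (queue occupancy, residual energy), both normalized to [0, 1]:
candidates = {"n1": (0.2, 0.9), "n2": (0.7, 0.8), "n3": (0.1, 0.5)}
best = max(candidates, key=lambda n: next_hop_score(*candidates[n]))
print(best)  # n1: lightly loaded and well charged
```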
This paper examines the impact of information technology (IT) utilization on construction firm performance. Based on empirical data collected from 74 US construction firms, the analyses provide evidence that IT has a positive impact on overall firm performance, schedule performance, and cost performance. Firm performance is a composite score of several metrics: schedule performance, cost performance, customer satisfaction, safety performance, and profit. No relationship is found between IT utilization and customer satisfaction, safety, or profit, although this may be due to limitations of the study, given the strong correlations between IT utilization and cost and schedule performance. The empirical evidence of a positive association between performance and IT use provided by this research is significant to both construction practice and the research literature. This evidence should encourage firms to adopt and invest in IT tools.
The computational costs of newly developed numerical simulation methods play a critical role in their acceptance in both academic use and industrial employment. Normally, the refinement of a method in the area of interest reduces the computational cost. This is unfortunately not true for most nonlocal simulations, since refinement typically increases the size of the material point neighborhood. Reducing the discretization size while keeping the neighborhood size will often require extra consideration. Peridynamics (PD) is a newly developed numerical method of nonlocal nature. Its straightforward integral-form equation of motion allows simulating dynamic problems without any extra consideration. The formation of a crack and its propagation are natural to peridynamics: discontinuity is a result of the simulation and does not demand any post-processing. As with other nonlocal methods, PD is considered an expensive method. Refining the nodal spacing while keeping the neighborhood size (i.e., the horizon radius) constant gives rise to several nonphysical phenomena.
This research aims to reduce the peridynamic computational and implementation costs. A novel refinement approach is introduced. The proposed approach takes advantage of the PD flexibility in choosing the shape of the horizon by introducing multiple domains (with no intersections) to the nodes of the refinement zone. It will be shown that no ghost forces are created when changing the horizon sizes in the two subdomains. The approach is applied to both bond-based and state-based peridynamics and verified for a simple wave propagation refinement problem illustrating the efficiency of the method. Further development of the method for higher dimensions proves to have a direct relationship with the mesh sensitivity of PD. A method for resolving the mesh sensitivity of PD is introduced. The application of the method is examined by solving a crack propagation problem similar to those reported in the literature.
A new software architecture is proposed considering both academic and industrial use. The available simulation tools for employing PD are surveyed, and their advantages and drawbacks are addressed. The challenges of implementing any node-based nonlocal method while maximizing the software's flexibility for further development and modification are discussed and addressed. A software package named Relation-Based Simulator (RBS) is developed for examining the proposed architecture. The capabilities of RBS are explored by simulating three distinct models. RBS is publicly available and open to further development. The industrial acceptance of RBS is tested by targeting its performance on one Mac and two Linux distributions.
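A minimal sketch of a bond-based peridynamic internal-force evaluation in 1D, to make the horizon/neighborhood notion above concrete; the micromodulus value and discretization are illustrative assumptions, not taken from the thesis.

```python
import numpy as np

# 1D bond-based peridynamics: every node interacts with all nodes inside
# its horizon, and the pairwise force is proportional to the bond stretch.
n, dx = 101, 0.01     # number of nodes and grid spacing (assumed)
horizon = 3.015 * dx  # horizon radius, a common choice of about 3*dx
c = 1.0e6             # micromodulus [N/m^3] (illustrative value)

X = np.arange(n) * dx  # reference positions
u = 1e-4 * X           # imposed homogeneous stretch of 1e-4

f = np.zeros(n)        # internal force density per node
for i in range(n):
    for j in range(n):
        xi = X[j] - X[i]                             # reference bond
        if j != i and abs(xi) <= horizon:
            eta = u[j] - u[i]                        # relative displacement
            s = (abs(xi + eta) - abs(xi)) / abs(xi)  # bond stretch
            f[i] += c * s * np.sign(xi + eta) * dx   # bond force times volume

# For a homogeneous stretch the bond forces cancel in the interior; the
# nonzero values near the ends illustrate the peridynamic surface effect.
print(f[:4], f[n // 2])
```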
In the context of finite element model updating using vibration test data, natural frequencies and mode shapes are used as validation criteria. Consequently, the order of natural frequencies and mode shapes is important. As only limited spatial information is available and noise is present in the measurements, the automatic selection of the most likely numerical mode shape corresponding to a measured mode shape is a difficult task. The most common criterion to indicate corresponding mode shapes is the modal assurance criterion. Unfortunately, this criterion fails in certain cases. In this paper, the pure mathematical modal assurance criterion will be enhanced by additional physical information of the numerical model in terms of modal strain energies. A numerical example and a benchmark study with real measured data are presented to show the advantages of the enhanced energy based criterion in comparison to the traditional modal assurance criterion.
In the context of finite element model updating using output-only vibration test data, natural frequencies and mode shapes are used as validation criteria. Consequently, the correct pairing of experimentally obtained and numerically derived natural frequencies and mode shapes is important. In many cases, only limited spatial information is available and noise is present in the measurements. Therefore, the automatic selection of the most likely numerical mode shape corresponding to a particular experimentally identified mode shape can be a difficult task. The most common criterion for indicating corresponding mode shapes is the modal assurance criterion. Unfortunately, this criterion fails in certain cases and is not reliable for automatic approaches. In this paper, the purely mathematical modal assurance criterion will be enhanced by additional physical information from the numerical model in terms of modal strain energies. A numerical example and a benchmark study with experimental data are presented to show the advantages of the proposed energy-based criterion in comparison to the traditional modal assurance criterion.
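A minimal implementation of the modal assurance criterion (MAC) referenced in the two abstracts above; the two mode shape vectors are illustrative.

```python
import numpy as np

def mac(phi_a, phi_b):
    """Modal assurance criterion between two mode shape vectors:
    MAC = |phi_a^H phi_b|^2 / ((phi_a^H phi_a)(phi_b^H phi_b))."""
    num = abs(np.vdot(phi_a, phi_b)) ** 2
    return num / (np.vdot(phi_a, phi_a).real * np.vdot(phi_b, phi_b).real)

phi_1 = np.array([0.0, 0.5, 1.0, 0.5, 0.0])   # e.g. a first bending shape
phi_2 = np.array([0.0, 1.0, 0.0, -1.0, 0.0])  # e.g. a second bending shape
print(mac(phi_1, phi_1))  # 1.0: identical shapes
print(mac(phi_1, phi_2))  # 0.0: orthogonal shapes
```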
There are many construction projects in China, and large volumes of documents are exchanged among the multiple parties involved, including the owner, the contractor and the engineer. Based on previous studies, an approach to the utilization of the exchanged documents is established using data warehouse technology, and a prototype system called EXPLYZER is developed. The approach and the prototype system are verified through their application in a construction project. It is concluded that the approach can support decision-making in project management.
Recent research shows that current learning strategies in the construction industry have not been effective in implementing lean principles in construction. With that in mind, the researchers set out to investigate an alternative learning strategy in order to promote learning at the international level. A web-based environment was developed for this project with the intent of promoting learning and knowledge exchange on the theory and practice of "process transparency" across different countries.
The promise of lower costs for sensors that can be used for construction inspection means that inspectors will continue to have new choices to consider in creating inspection plans. However, these emerging inspection methods can require different activities, resources, and decisions such that it can be difficult to compare the emerging methods with other methods that satisfy the same inspection needs. Furthermore, the context in which inspection is performed can significantly influence how well certain inspection methods are suited for a given set of goals for inspection. Context information, such as weather, security, and the regulatory environment, can be used to understand what information about a component should be collected and how an inspection should be performed. The research described in this paper is aimed at developing an approach for comparing and selecting inspection plans. This approach consists of (1) refinement of given goals for inspection, if necessary, in order to address any additional information needs due to a given context and in order to reach a level of detail that can be addressed by an inspection activity; (2) development of constraints to describe how an inspection should be achieved; (3) matching of goals to available inspection methods, and generation of activities and resource plans in order to address the goals; and (4) selection of an inspection plan from among the possible plans that have been identified. The authors illustrate this approach with observations made at a local construction site.
The conceptual structure of an application that can support the structural analysis task in a distributed collaboratory is described in (van Rooyen and Olivier 2004). The application described there has a standalone component for executing the finite element method on a local workstation in the absence of network access. This application is comparable to current, local workstation-based finite element packages. However, it differs fundamentally from standard packages since the application itself, and its objects, are adapted to support distributed execution of the analysis task. Basic aspects of an object-oriented framework for the development of applications which can be used in similar distributed collaboratories are described in this paper. An important feature of this framework is its application-centred design. This means that an application can contain any number of engineering models, where the models are formed by collecting objects according to semantic views within the application. This is achieved through the very flexible classes Application and Model, which are described in detail. The advantages of the application-centred design approach are demonstrated with reference to the design of steel structures, where the finite element analysis model, member design model and connection design model interact to provide the required functionality.
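A minimal sketch of the application-centred idea described above: one Application holding a shared object pool and any number of Models acting as semantic views over it. Class and method names here are illustrative assumptions, not the actual interfaces of the framework.

```python
class Model:
    """A semantic view: a named selection of objects within an application."""
    def __init__(self, name):
        self.name = name
        self.objects = set()

    def add(self, obj):
        self.objects.add(obj)

class Application:
    """Holds the shared object pool and any number of engineering models."""
    def __init__(self):
        self.objects = set()
        self.models = {}

    def model(self, name):
        return self.models.setdefault(name, Model(name))

    def register(self, obj, *model_names):
        self.objects.add(obj)
        for name in model_names:
            self.model(name).add(obj)

app = Application()
beam = ("beam", "IPE300")  # a shared engineering object (illustrative)
# The same object participates in several semantic views at once:
app.register(beam, "fe_analysis", "member_design")
print(len(app.model("fe_analysis").objects))  # 1
```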
The application of a recent method using formal power series is proposed. It is based on a new representation for solutions of Sturm-Liouville equations. This method is used to calculate the transmittance and reflectance coefficients of finite inhomogeneous layers with high accuracy and efficiency. By tailoring the refractive index profile that defines the inhomogeneous medium, it is possible to develop important applications such as optical filters. A number of profiles were evaluated, and some of them were then selected in order to improve their characteristics via modification of their profiles.
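As a point of reference for the transmittance/reflectance quantities above, here is the classical characteristic-matrix (transfer-matrix) approach, which slices the inhomogeneous layer into thin homogeneous sublayers. This is a standard baseline technique, not the power-series method of the paper, and the index profile and wavelength are illustrative.

```python
import numpy as np

def reflect_transmit(n_profile, d_total, lam, n_in=1.0, n_out=1.0):
    """Characteristic-matrix method at normal incidence for a lossless
    inhomogeneous layer sampled as thin homogeneous sublayers."""
    k0 = 2 * np.pi / lam
    d = d_total / len(n_profile)
    M = np.eye(2, dtype=complex)
    for n in n_profile:  # multiply sublayer matrices from the entry side
        delta = k0 * n * d
        M = M @ np.array([[np.cos(delta), 1j * np.sin(delta) / n],
                          [1j * n * np.sin(delta), np.cos(delta)]])
    B, C = M @ np.array([1.0, n_out])
    r = (n_in * B - C) / (n_in * B + C)
    R = abs(r) ** 2                                # reflectance
    T = 4 * n_in * n_out / abs(n_in * B + C) ** 2  # transmittance (R + T = 1)
    return R, T

# Linearly graded index from 1.5 to 2.5 over 1 um, at lambda = 633 nm:
n_profile = np.linspace(1.5, 2.5, 200)
print(reflect_transmit(n_profile, d_total=1.0e-6, lam=633e-9))
```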
An analytical molecular mechanics model for the elastic properties of crystalline polyethylene
(2012)
We present an analytical model for the elastic properties of crystalline polyethylene based on a molecular mechanics approach. Along the polymer chain direction, the united-atom (UA) CH2-CH2 bond stretching and angle bending potentials are replaced with equivalent Euler-Bernoulli beams. Between any two polymer chains, explicit formulae are derived for the van der Waals interaction, represented by linear springs of different stiffness. The nine independent elastic constants are then evaluated systematically using these formulae. The analytical model is finally validated by our united-atom molecular dynamics (MD) simulations and against available all-atom molecular dynamics results from the literature. The established analytical model provides an efficient route for the mechanical characterization of crystalline polymers and related materials.
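The stick-to-beam equivalence such molecular mechanics models rely on is commonly stated as follows, matching the harmonic bond stretching and angle bending constants to an equivalent Euler-Bernoulli beam of length L (as in molecular structural mechanics in the manner of Li and Chou); the paper's own calibration for the united-atom CH2-CH2 bonds may differ in detail.

```latex
% Harmonic bond potentials:
%   U_r = (1/2) k_r (\Delta r)^2 ,   U_\theta = (1/2) k_\theta (\Delta\theta)^2
% Matching strain energies with an Euler-Bernoulli beam of length L gives
\frac{EA}{L} = k_r , \qquad \frac{EI}{L} = k_\theta
```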
In a historical perspective, the relationship between digital media and the museum environment is marked by the role of museums as example use cases for the application of digital media. Today, this exceptional, often technology-oriented use has changed; digital media have instead become an integral part of mediation strategies in the museum environment. Alongside this shift, not only an increasing professionalization of application development but also a growing demand for new content can be observed. Comparable to its role as the main cost factor in the media industry, the production of content is becoming a challenge for museums. In particular, small and medium-scale European museums with limited funding and often low staffing levels face this new demand and therefore look for alternative production resources. While productive user contributions can be seen as such an alternative resource, user contributions are at the same time a manifestation of a different mode of interacting with content. In contrast to the dominantly passive role of audiences as receivers of information, productive contributions emerge as a mode of content exploration and in this regard become influential for museum mediation strategies. As applications of user contributions in museums and cultural heritage are currently rather rare, a broader perspective on user contributions becomes necessary to understand their specific challenges, opportunities and limitations. Productive user contributions can be found in a growing number of applications on the Internet, where they either complement or fully substitute corporate content production processes. While Wikipedia, an online encyclopedia written entirely by a group of users and open to contributions by all its users, is one of the most prominent examples of this practice, several more applications have emerged or are being developed. In consequence, user contributions are about to become a powerful source for the production of content in digital media environments.
The authors' own research in applied unicriterial and multicriterial optimisation of bar structures, together with an analysis of the accessible literature on structural synthesis, allows us to present an attempt to define a general algorithm for formulating a structural optimisation problem. The practical value of such an algorithm lies, in the authors' opinion, in enabling a designer to correctly create a mathematical model of a synthesis problem, independently of the known mathematical methods employed for seeking an unconditional extremum of a function of several variables. The proposed algorithm is not a ready-for-use tool for solving all optimisation problems, but it constitutes an easy-to-expand theoretical basis. This basis should allow a designer to create a proper set of compromises on the way to constructing a mathematical model of a specific optimisation problem. The algorithm presented in the paper is constructed as a sequence of successive questions, which the designer answers yes or no, and a set of selections from a knowledge base consisting of the components of an optimisation problem. The order of the questions adopted by the authors in the algorithm is subjective, but it is supported by their experience both in applied optimisation and in the design of structures such as trusses and frames.
In the field of engineering, surrogate models are commonly used for approximating the behavior of a physical phenomenon in order to reduce the computational costs. Generally, a surrogate model is created based on a set of training data, where a typical method for the statistical design is Latin hypercube sampling (LHS). Even though a space-filling distribution of the training data is reached, the sampling process takes no information on the underlying behavior of the physical phenomenon into account, and new data cannot be sampled in the same distribution if the approximation quality is not sufficient. Therefore, in this study we present a novel adaptive sampling method based on a specific surrogate model, the least-squares support vector regression. The adaptive sampling method generates training data based on the uncertainty in the local prognosis capabilities of the surrogate model: areas of higher uncertainty require more sample data. The approach offers a cost-efficient calculation due to the properties of the least-squares support vector regression. The capabilities of the adaptive sampling method are demonstrated in comparison with LHS on different analytical examples. Furthermore, the adaptive sampling method is applied to the calculation of global sensitivity values according to Sobol, where it shows faster convergence than the LHS method. With the applications in this paper it is shown that the presented adaptive sampling method improves the estimation of global sensitivity values, hence reducing the overall computational costs noticeably.
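For reference, a minimal Latin hypercube sampler of the kind the adaptive method is compared against; a sketch, not the paper's implementation.

```python
import numpy as np

def latin_hypercube(n_samples, n_dims, rng=None):
    """Latin hypercube sample on [0, 1]^d: exactly one point falls in each
    of the n equal-probability strata per dimension."""
    if rng is None:
        rng = np.random.default_rng()
    # A random offset inside each stratum...
    u = (np.arange(n_samples)[:, None] + rng.random((n_samples, n_dims))) / n_samples
    # ...then an independent random permutation of the strata per dimension.
    for d in range(n_dims):
        u[:, d] = u[rng.permutation(n_samples), d]
    return u

X = latin_hypercube(8, 2, np.random.default_rng(0))
print(X)  # 8 points; every stratum of width 1/8 in each dimension holds one
```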
This paper proposes an adaptive atomistic-continuum numerical method for quasi-static crack growth. The phantom node method is used to model the crack in the continuum region and a molecular statics model is used near the crack tip. To ensure self-consistency in the bulk, a virtual atom cluster is used to model the material of the coarse scale. The coupling between the coarse scale and fine scale is realized through ghost atoms. The ghost atom positions are interpolated from the coarse scale solution and enforced as boundary conditions on the fine scale. The fine scale region is adaptively enlarged as the crack propagates and the region behind the crack tip is adaptively coarsened. An energy criterion is used to detect the crack tip location. The triangular lattice in the fine scale region corresponds to the lattice structure of the (111) plane of an FCC crystal. The Lennard-Jones potential is used to model the atom–atom interactions. The method is implemented in two dimensions. The results are compared to pure atomistic simulations; they show excellent agreement.
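A minimal implementation of the Lennard-Jones pair potential used for the atom-atom interactions above; the epsilon and sigma values are illustrative reduced units, not those of the paper.

```python
import numpy as np

def lennard_jones(r, eps=1.0, sigma=1.0):
    """Lennard-Jones pair potential V(r) = 4*eps*((sigma/r)**12 - (sigma/r)**6)."""
    sr6 = (sigma / r) ** 6
    return 4.0 * eps * (sr6 ** 2 - sr6)

r = np.linspace(0.9, 2.5, 5)
print(lennard_jones(r))
# The minimum V = -eps lies at r = 2**(1/6) * sigma:
print(lennard_jones(2 ** (1 / 6)))  # -1.0
```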
Numerical simulation of physical phenomena, such as electromagnetics, structural mechanics and fluid mechanics, is essential for the cost- and time-efficient development of high-quality mechanical products. It makes it possible to investigate the behavior of a product or a system long before the first prototype is manufactured.
This thesis addresses the simulation of contact mechanics. Mechanical contacts appear in nearly every product of mechanical engineering; gearboxes, roller bearings, valves and pumps are only some examples. Simulating these systems, not only for the maximal/minimal stresses and strains but also for the stress distribution in the case of tribo-contacts, is a challenging task from a numerical point of view.
Classical procedures like the Finite Element Method suffer from the nonsmooth representation of contact surfaces with discrete Lagrange elements. On the one hand, an error due to the approximate description of the surface is introduced. On the other hand, it is difficult to attain a robust contact search because surface normals cannot be described uniquely at element edges.
This thesis therefore introduces a novel approach, the adaptive isogeometric contact formulation based on polynomial splines over hierarchical T-meshes (PHT-Splines), for the approximate solution of the nonlinear contact problem. It provides a more accurate, robust and efficient solution compared to conventional methods. During the development of this method, the focus was laid on the solution of static frictionless contact problems in 2D and 3D in which the structures undergo small deformations.
The mathematical description of the problem entails a system of partial differential equations and boundary conditions which model the linear elastic behaviour of continua. Additionally, it comprises side conditions, the Karush-Kuhn-Tucker conditions, to prevent the contacting structures from non-physical penetration. The mathematical model must be transformed into its integral form for approximation of the solution. Employing a penalty method, contact constraints are incorporated by adding the resulting equations in weak form to the overall set of equations. For an efficient spatial discretization of the bulk, and especially of the contact boundary of the structures, the principle of Isogeometric Analysis (IGA) is applied. Isogeometric finite element methods provide several advantages over conventional finite element discretizations. Surface approximation with Non-Uniform Rational B-Splines (NURBS) allows a robust numerical solution of the contact problem with high accuracy in terms of an exact geometry description including surface smoothness.
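The penalty treatment of the non-penetration constraint sketched above is commonly written as an added contact energy over the contact boundary; the notation here is the generic textbook form, not necessarily the thesis's symbols.

```latex
% Penalty contact energy added to the total potential:
% eps_N > 0: penalty parameter, g_N: normal gap (negative on penetration),
% <.>: Macaulay bracket, so only penetrating points contribute.
\Pi_c = \frac{\varepsilon_N}{2} \int_{\Gamma_c} \langle -g_N \rangle^2 \,\mathrm{d}\Gamma
```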
The numerical evaluation of the contact integral is challenging due to the generally non-conforming meshes of the contacting structures. In this work, the highly accurate Mortar method is applied in the isogeometric setting for the evaluation of the contact contributions. This leads to an algebraic system of equations that is linearized and solved in sequential steps, a procedure known as the Newton-Raphson method. Based on numerical examples, the advantages of the isogeometric approach with classical refinement strategies, like p- and h-refinement, are shown, and the influence of relevant algorithmic parameters on the approximate solution of the contact problem is verified. One drawback of the spline approximation of stresses, though, is that it lacks accuracy at the contact edge, where the structures change their boundary from contact to no contact and where the solution features a kink. The approximation with smooth spline functions yields numerical artefacts in the form of non-physical oscillations.
This property of the numerical solution is not only a drawback for the simulation of, e.g., tribological contacts; it also negatively influences the convergence of iterative solution procedures. Hence, the NURBS-discretized geometries are transformed to polynomial splines over hierarchical T-meshes (PHT-Splines) for local refinement along contact edges, in order to reduce the artefact of pressure oscillations. NURBS have a tensor product structure which does not allow refining only certain parts of the geometric domain while leaving other parts unchanged. Due to the Bézier extraction underlying the transformation from NURBS to PHT-Splines, the connected mesh structure is broken up into separate elements. This allows an efficient local refinement along the contact edge.
Before single elements are refined in a hierarchical form with cross-insertion, existing basis functions must be modified or eliminated. This process of truncation ensures local and global linear independence of the refined basis, which is needed for a unique approximate solution. The contact boundary is a priori unknown, so local refinement along the contact edge, especially for 3D problems, is not straightforward. In this work, the use of an a posteriori error estimation procedure, the super-convergent recovery solution based error estimation scheme, together with the Dörfler marking method, is suggested for the spatial search of the contact edge.
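A minimal sketch of Dörfler (bulk) marking as used in such adaptive loops: mark the smallest set of elements whose estimated errors account for a fixed fraction theta of the total. The error indicators here are made-up numbers, not output of the thesis's error estimator.

```python
import numpy as np

def doerfler_mark(eta, theta=0.5):
    """Return indices of a minimal element set with
    sum(eta[marked]**2) >= theta * sum(eta**2)."""
    order = np.argsort(eta)[::-1]            # largest indicators first
    cumulative = np.cumsum(eta[order] ** 2)
    n_marked = np.searchsorted(cumulative, theta * cumulative[-1]) + 1
    return order[:n_marked]

eta = np.array([0.9, 0.1, 0.4, 0.05, 0.7])  # per-element error indicators
print(doerfler_mark(eta))                   # refine these elements next
```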
Numerical examples show that the developed method significantly improves the quality of solutions along the contact edge compared to NURBS-based approximate solutions. Also, the error in maximum contact pressures, which correlates with the pressure artefacts, is minimized by the adaptive local refinement.
In a final step the practicability of the developed solution algorithm is verified by an industrial application: The highly loaded mechanical contact between roller and cam in the drive train of a high-pressure fuel pump is considered.
This cumulative dissertation discusses - by the example of four subsequent publications - the various layers of a tangible interaction framework, which has been developed in conjunction with an electronic musical instrument with a tabletop tangible user interface. Based on the experiences that have been collected during the design and implementation of that particular musical application, this research mainly concentrates on the definition of a general-purpose abstraction model for the encapsulation of physical interface components that are commonly employed in the context of an interactive surface environment. Along with a detailed description of the underlying abstraction model, this dissertation also describes an actual implementation in the form of a detailed protocol syntax, which constitutes the common element of a distributed architecture for the construction of surface-based tangible user interfaces. The initial implementation of the presented abstraction model within an actual application toolkit is comprised of the TUIO protocol and the related computer-vision based object and multi-touch tracking software reacTIVision, along with its principal application within the Reactable synthesizer. The dissertation concludes with an evaluation and extension of the initial TUIO model, by presenting TUIO2 - a next generation abstraction model designed for a more comprehensive range of tangible interaction platforms and related application scenarios.
American images of Utopia
(1997)
Scientific colloquium held 27-30 June 1996 at the Bauhaus-Universität Weimar on the topic 'Techno-Fiction. Zur Kritik der technologischen Utopien' (Techno-Fiction: On the Critique of Technological Utopias).
Low-skilled labor makes up a significant part of the construction sector, performing daily production tasks that do not require specific technical knowledge or certified skills. Today, the construction market demands increasing skill levels. Many jobs that were once considered suitable for low- or unskilled labor now demand some kind of formal skills. The jobs that require low-skilled labor are continually decreasing due to technological advancement and globalization; jobs that previously required little or no training now require skilled people to perform the tasks appropriately. The study aims at improving the employability of less-skilled manpower by finding ways to instruct workers in performing construction tasks. A review of existing task instruction methodologies in construction, and of the gaps within them, warrants an appropriate way to train and instruct low-skilled workers for construction tasks. The idea is to ensure the required quality of construction with technological and didactic aids that seem particularly purposeful for preparing potential workers for construction tasks without exposing them to existing communication barriers. A BIM-based technology is considered promising, along with the integration of visual directives/animations to explain the construction tasks scheduled to be carried out on site.
The dissertation "Ambiguity in Contemporary Film - Flight Attempts" traces a popular narrative tendency in cinema - namely ambiguity - and maps out its dramaturgical potential as well as its ethical (or micropolitical) implications. To describe typical patterns in the perception of ambiguous film narratives, which are already effective at the pre-conscious level of affect, I draw on concepts from Alfred North Whitehead's process philosophy and on their more recent reformulations by Gilles Deleuze and Brian Massumi. Starting from Alejandro González Iñárritu's "Babel" (2006), the first part takes the reader on a virtual sightseeing flight through selected film examples with a cultural anchor point in present-day Japan. In the second part, I describe and reflect on my methodological approach in the first phases of developing the material for a suggestive feature film project, and contextualize it with interviews with contemporary auteur filmmakers who are developing similar narrative modes.
Microelectronics and microsystems technology, combined with information and communication technology, now make it possible to bring computing power and communication capability into our private and professional environments in the smallest formats, with minimal energy consumption and at low cost. Examples are notebook PCs, PDAs, mobile phones and car navigation systems. Embedded electronics in components, devices and systems has likewise become a matter of course. Familiar examples from building services are microprocessors in heating and alarm systems as well as in components such as fire and motion detectors. We are approaching what only a few years ago was called a vision: computing power available everywhere (ubiquitous computing) and a daily environment permeated by information processing (pervasive computing). If building services components, just like larger computer components (e.g. PCs, servers), are linked via data interfaces into spatially distributed networks (e.g. Internet, intranet) and programmed with adequate cross-system intelligence (software), novel functionalities can emerge in the respective application environment (ambient intelligence, AmI [1]). For buildings and rooms in particular, this offers a great opportunity to overcome the separation of trades that has so far stood in the way of a holistic system design encompassing architecture, building physics, building services (TGA) and building automation (GA). Systemically integrated 'smart areas' (after Prof. Becker, FH Biberach) emerge for various application purposes. Examples of AmI solutions in the real-estate sector discussed in this contribution are room systems for the automatic and reliable detection of emergencies, e.g. in nursing homes; room systems in offices or hotels that automatically adapt climate and lighting to use and user; and electronic assistance for the construction and operation of buildings. At the Fraunhofer inHaus innovation centre for intelligent room and building systems in Duisburg, first solutions based on this novel approach have been conceived, developed and tested in recent years. After a brief outline of the ambient intelligence approach, the contribution uses examples to describe possibilities for transferring this new technology to rooms and buildings. It closes with a summary and an assessment of the future potential of ambient intelligence in rooms and buildings.
Money is a topic that leaves none of us indifferent, affecting our everyday practical lives as well as society as a whole. Starting from various perspectives, this publication of lectures held at the University of Erfurt in cooperation with the Fachhochschule Erfurt illuminates the full spectrum of money. Economists have their say as much as ecologists, historians provide insight into the history of the monetary economy, and sociologists show how money shapes lifestyles. What role does money play in mathematics education, and what is the idea behind a citizens' basic income? Hans Tietmeyer closes the volume with an insight into the history and political context of the European monetary union, offering the reader interesting and surprising perspectives on a field that is by no means merely pecuniary.
This thesis identifies the main reasons why low-fired gypsum binders rich in beta-hemihydrate (known industrially as stucco plaster) often exhibit very different properties.
The fraction of hemihydrate that forms from the strongly hygroscopic anhydrite III (A III) through reaction with atmospheric moisture constitutes a considerable influence that has so far gone entirely unnoticed. Hemihydrate formed from A III exhibits different surface properties and a reaction behaviour that deviates from that of freshly calcined beta-hemihydrate.
The work shows how far-reaching the influence of physico-chemical surface processes such as adsorption and condensation is. These processes not only reduce the surface energy of the particles but also lower the heat of hydration; physical processes thus have thermodynamic consequences. The acting and resulting parameters of ageing interact in a highly complex manner, as follows:
The dominant binder properties, setting behaviour and water demand, change with ageing both because of phase transformations and because of changes in the crystallites. The change in surface characteristics is equally influential. The effect of ageing on reactivity goes well beyond the decomposition of anhydrite III, the depletion of settable material and the accelerating effect of ageing dihydrate. The growth of hemihydrate crystallites and the reduction of internal energy, as well as the energetically favourable spontaneous loading of the crystal-lattice channels of the smallest anhydrite III crystallites with water vapour, must be highlighted as the principal causes of the ageing-induced decrease in reactivity. The decreases in specific surface area and surface energy additionally affect the dissolution and hydration processes. Anhydrite II crystallized on the surface of anhydrite III continues to inhibit dissolution even after the conversion of A III to hemihydrate; this effect is cancelled or reduced by the ageing-induced formation of dihydrate, which sets in under sustained exposure to moisture. Although dihydrate is known for its accelerating effect, ageing dihydrate develops only a weak nucleating effect owing to its particular formation within the condensed-water layer, which is only a few molecular layers thick.
A key finding concerns the bonding character of the superstoichiometric water, for which a purely physical bond can be demonstrated. The water described in this work as more strongly adsorptively bound occurs, besides the free moisture, exclusively in the presence of hemihydrate. This relationship is established here for the first time and is explained by the higher surface energy of hemihydrate, which is due to its crystal chemistry.
This thesis deals with the transport-planning analysis of the road access to the Klinikum Bad Hersfeld. On the theoretical basis of the transport planning process, the planning procedure is described methodically. A condition analysis of the specific case, the road access to the Klinikum Bad Hersfeld, was then carried out in close cooperation with the residents of the adjacent neighbourhood, the city of Bad Hersfeld, the police and the hospital. Within this analysis, a system of objectives was established and a deficiency analysis performed. Based on the findings, various proposed solutions were discussed. Finally, a preferred option was identified by means of an evaluation system and a simple ranking procedure, and its advantages and disadvantages were described.
Introduction:
Art and the art world have changed profoundly in recent decades and will in all likelihood change even more rapidly and thoroughly in the future. My dissertation analyses the current state of the art world and the consequences to be drawn from it for the development to be expected, in particular with regard to the education of artists at art academies. There, in my view, the professional aspects of the artistic field (inside and outside the academy) should be explained and taught to a greater extent.
The thesis focuses on the following four aspects and their interrelations: the artist, the working world, education, and the net and networking.
These findings are based on my research into the four main topics in the course of my teaching and my own artistic practice in recent years; they reflect this work, are intended to serve as an example of its application, and offer an overview of how it plays out in practice.
Note
The files presented here (in 5 parts) are the digital publication of my dissertation within the doctoral programme "Kunst und Design" at the Bauhaus-Universität Weimar.
This publication is open source and will be developed further in an open, collaborative process. The current version can be found at http://phd.nts.is, together with further download formats and the complete (markdown-formatted) source text.
(For copyright and licensing reasons, some images have been omitted from this version of the plates. The printed edition contains all plates and is available in the library of the Bauhaus-Universität.)
Parts:
- Position paper
- PhD dissertation
- Plates
- The 5-Year Plan
- KIOSK09 catalogue
Overall, the research project confirmed, particularly through the interviews with university representatives, that high-quality teaching and research require high-quality space in sufficient quantity.
One aim of the research is the development of models for allocating and managing space resources at universities. Starting from accounts and experience of space management in companies, other areas of public administration and research institutions, possible management procedures for universities were examined. A management model for universities was developed that responds to both internal and externally effective framework conditions.
Internal space allocation at universities is decisively influenced by external framework conditions on the one hand and by internal processes, procedures and structures on the other. Knowledge of these conditions is assumed to be a prerequisite for identifying success factors for the implementation of new management models. The analysis therefore covered the real-estate policy and organizational framework conditions as well as the management-relevant properties of the spaces themselves.
Alles Heritage?
(2016)
The broadening of the concept of the monument has led to an expansion of remembering, protecting, preserving and handing down into all areas of life. Today not only barns, petrol stations and large housing estates are listed as part of the historical heritage; cultural practices and customs are also declared "intangible" world heritage. The consequence of this development, criticized as "monument inflation", is intensified competition for attention and funding. The latter is reflected not least in an increasingly crowd-pleasing staging of heritage, promoted largely by the tourism industry.
In the age of the "heritage industry" (Robert Hewison, 1987), cultural assets are not only an important locational factor; "heritage" itself is increasingly constructed by means of international charters, declarations, plaques and social media campaigns. This happens predominantly within an anglophone discourse which, strictly speaking, cannot be connected to the German-language discussions shaped by the history of concepts and ideas. There, a comparably comprehensive concept of heritage can be identified as early as the Heimatschutz movement, yet a professionally differentiated monument preservation as we know it today struggles to integrate such a universal concept. While "heritagization" driven by international organizations shifts the focus from built monuments to the preservation of cultural heritage in general (including the intangible, see for instance the Burra Charter), the monument and heritage discourse in the German-speaking countries has so far remained clearly concentrated on built monuments and urban ensembles. The latter is also evident in the run-up to the European Cultural Heritage Year 2018, which in Germany, unlike in other European countries, is carried largely by monument protection organizations.
The turn towards heritage can equally be observed in new research fields and educational paths in monument preservation. Today "heritage tourism" and "dark heritage" are studied as specific forms of "monument use", and "heritage management" and "heritage studies" have become undergraduate programmes alongside the classical disciplines of art history, architecture and planning, now also in the German-speaking countries. The path thus leads away from specialized connoisseurship towards the all-rounder, with new emphases on marketing, administration and communication. With regard to socio-cultural developments, the concept of heritage proves to be used largely affirmatively, above all in economic and political discourse. Heritage is thus accompanied by a certain moral and missionary impetus, tied to a (cultural) politics of "identity formation". At a time when "identity" once again functions as a political catchword in public discourse, it seems all the more important to reflect critically on the scholarly engagement with heritage and on its underlying conceptual frameworks and prescriptive programmes.
It is an image from times past: an inquisitive student, searching for sound scholarly information, makes his way to the most sacred place of books, the university library. For some time now, however, students have been found not only in libraries but increasingly on the Internet, where they search for and find digital books, so-called e-books.
How can the change brought about by the arrival of the e-book in the established research system be described, what consequences follow from it, and will everything eventually become digital, even the library? An eleven-member team of experts from Germany and Switzerland explores these questions during the two-day conference.
The Weimar E-DOC-Tage address the transformation of the institutional fabric surrounding the digital book. Traditionally, publishers and libraries are key components of the supply of knowledge in study and teaching. With the rise of the e-book, however, research is shifting more and more to the Internet. The Google search engine has emerged as a new competitor to classical library research, and publishers too must respond to the new challenges of a digital book market.
In cooperation with the university library and the master's programme Medienmanagement, students, researchers, librarians and publishers discuss how the e-book is changing our engagement with literature. The conference proceedings bring together all perspectives and results.
Development of an algorithm for a nonlinear material law for fully automatic crack-propagation simulation using the meshfree methods developed at the Institute of Structural Mechanics. Following continuum plasticity, the crack-path quantities (crack opening width and crack sliding) are determined iteratively using a work-based formulation that combines the mode I and mode IIa fracture energies for sensitive structures with a non-associated flow rule. This makes it possible to represent the dilatancy effect as well as the interlocked contact surface and the resulting increased shear resistance. The implementation uses the very efficient implicit closest point projection iteration scheme based on a 3D contact formulation (contact elements). A 2D implementation was integrated into the research software SLang of the Institute of Structural Mechanics at the Bauhaus-Universität Weimar. The model characteristics were verified with significant loading states, the implemented material law was applied in two crack-propagation examples, and studies of the material parameters were carried out.
We present an algebraically extended 2D image representation in this paper. In order to obtain more degrees of freedom, a 2D image is embedded into a certain geometric algebra. Combining methods of differential geometry, tensor algebra, monogenic signal and quadrature filter, the novel 2D image representation can be derived as the monogenic extension of a curvature tensor. The 2D spherical harmonics are employed as basis functions to construct the algebraically extended 2D image representation. From this representation, the monogenic signal and the monogenic curvature signal for modeling intrinsically one and two dimensional (i1D/i2D) structures are obtained as special cases. Local features of amplitude, phase and orientation can be extracted at the same time in this unique framework. Compared with the related work, our approach has the advantage of simultaneous estimation of local phase and orientation. The main contribution is the rotationally invariant phase estimation, which enables phase-based processing in many computer vision tasks.
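As a rough illustration of the underlying signal model (the scalar monogenic signal; the paper's tensor-valued curvature extension is not reproduced here), amplitude, phase and orientation can be extracted via the Riesz transform evaluated in the frequency domain. The bandpass prefiltering presupposed by the quadrature-filter setting is omitted for brevity.

```python
import numpy as np

def monogenic_features(img):
    """Scalar monogenic signal of a 2D image via the frequency-domain
    Riesz transform; returns local amplitude, phase and orientation."""
    F = np.fft.fft2(img)
    u = np.fft.fftfreq(img.shape[0])[:, None]
    v = np.fft.fftfreq(img.shape[1])[None, :]
    r = np.hypot(u, v)
    r[0, 0] = 1.0                                   # avoid division by zero at DC
    r1 = np.real(np.fft.ifft2(F * (-1j * u / r)))   # first Riesz component
    r2 = np.real(np.fft.ifft2(F * (-1j * v / r)))   # second Riesz component
    amplitude = np.sqrt(img**2 + r1**2 + r2**2)
    phase = np.arctan2(np.hypot(r1, r2), img)       # local phase
    orientation = np.arctan2(r2, r1)                # local orientation
    return amplitude, phase, orientation
```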
Within Collaborative Research Centre 524 'Materials and Structures for the Revitalization of Buildings', the primary concern of subproject D2 'Digital building survey and information system for construction planning' is the development of methods and techniques for capturing as-built data on site, or by evaluating existing documentation, and integrating them directly into a building model. [15] The project establishes foundations for the professional use and scientific evaluation of methodical survey procedures, incorporating software-based methods. It considers the structuring of the essential information and data sets, the derivation of systematics for them, the derivation of methods for non-destructive capture, and the representation of planning-relevant building information in digital systems. In building surveying, geodetic techniques such as tacheometry, photogrammetry and hand-held laser distance measurement have long been used alongside traditional methods. In current surveying practice, tacheometry is the geodetic measuring technique most frequently used for the interior and exterior survey of buildings. [9] [3] Starting from the present situation in building surveying, the paper shows to what extent, given the current state of the art, the tacheometers used in geodesy can be employed directly for building surveys. A further focus is the concept of a computer-aided building survey system based on reflectorless tacheometric instruments. The concept covers not only the measurement itself but adequately supports the entire surveying process, from the first walk-through to the structural decomposition. Finally, future possibilities in building surveying are discussed.
Aktionsräume in Dresden
(2012)
This study examines the activity spaces of respondents in Dresden by means of a standardized survey (n=360). The activities underlying the activity spaces are differentiated into shopping for daily needs, going out (e.g. to a café, pub or restaurant), outdoor recreation (e.g. walking, use of green spaces) and private sociability (e.g. celebrations, visiting relatives/friends). The activity radius is differentiated into the respondent's own neighbourhood, adjacent neighbourhoods and the rest of the city. To derive a comprehensive indicator of a respondent's average activity radius from the four activities considered, a model for such an indicator is developed. The study concludes that the respondents' age has a significant, albeit small, influence on the activity radius. Net household income has a marginally significant, likewise small, influence on the everyday activities of the respondents.
Structural design today is characterized by the cooperation of a large number of specialists from different disciplines (architecture, structural engineering, etc.) in temporary project consortia. Coordinating the resulting complex, dynamic and networked planning processes frequently leads to planning deficiencies and losses in quality. This article shows how agent technology can provide approaches to improving this planning situation. An agent model for networked, cooperative structural design is presented and demonstrated using the design of a pedestrian arch bridge. The agent model captures (1) the specialist planners and organizations involved, (2) the structure-specific planning processes, (3) the associated (partial) product models and (4) the engineering software used. From this, three submodels are derived: (1) an agent-based cooperation model, (2) agent-based product model integration and (3) a model for agent-based software integration. The article focuses on the agent-based cooperation model.
Scientific colloquium, 27 to 30 June 1996, at the Bauhaus-Universität in Weimar on the topic 'Techno-Fiction. On the Critique of Technological Utopias'
In this work, practice-based research is conducted to rethink the understanding of aesthetics, especially in relation to current media art. Granted, we live in times when technologies merge with living organisms, but we also live in times that provide unlimited resources of knowledge and maker tools. I raise the question: In what way does the hybridization of living organisms and non-living technologies affect art audiences in the culture that may be defined as Maker culture? My hypothesis is that active participation of an audience in an artwork is inevitable for experiencing the artwork itself, while also suggesting that the impact of the umwelt changes the perception of an artwork. I emphasize artistic projects that unfold through mutual interaction among diverse peers, including humans, non-human organisms, and machines. In my thesis, I pursue collaborative scenarios that lead to the realization of artistic ideas: (1) the development of ideas by others influenced by me and (2) the materialization of my own ideas influenced by others. By developing the scenarios of collaborative work as an artistic experience, I conclude that the role of an artist in Maker culture is to mediate different types of knowledge and different positions, whereas the role of the audience is to actively engage in the artwork itself. At the same time, aesthetics as experience is triggered by the other, including living and non-living actors. It is intended that the developed methodologies could be further adapted in artistic practices, philosophy, anthropology, and environmental studies.
The accurate representation of aerodynamic forces is essential for a safe, yet reasonable design of long-span bridges subjected to wind effects. In this paper, a novel extension of the Pseudo-three-dimensional Vortex Particle Method (Pseudo-3D VPM) is presented for Computational Fluid Dynamics (CFD) buffeting analysis of line-like structures. This extension entails an introduction of free-stream turbulent fluctuations, based on the velocity-based turbulence generation. The aerodynamic response of a long-span bridge is obtained by subjecting the 3D dynamic representation of the structure to correlated free-stream turbulence in two-dimensional (2D) fluid planes, which are positioned along the bridge deck. The span-wise correlation of the free-stream turbulence between the 2D fluid planes is established based on Taylor's hypothesis of frozen turbulence. Moreover, the application of the laminar Pseudo-3D VPM is extended to a multimode flutter analysis. Finally, the structural response from the Pseudo-3D flutter and buffeting analyses is verified with the response, computed using the semi-analytical linear unsteady model in the time-domain. Meaningful merits of the turbulent Pseudo-3D VPM with respect to the linear unsteady model are the consideration of the 2D aerodynamic nonlinearity, nonlinear fluid memory, vortex shedding and local non-stationary turbulence effects in the aerodynamic forces. The good agreement of the responses for the two models in the 3D analyses demonstrates the applicability of the Pseudo-3D VPM for aeroelastic analyses of line-like structures under turbulent and laminar free-stream conditions.
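The span-wise correlation step can be pictured with a toy sketch of Taylor's frozen-turbulence hypothesis: a fluctuation field generated once is advected unchanged at the mean wind speed, so each 2D fluid plane along the deck samples the same frozen field with a time shift. This only illustrates the hypothesis; the paper's velocity-based turbulence generation, which produces a properly correlated, spectrum-matched field, is not reproduced.

```python
import numpy as np

# u_frozen[j, i]: fluctuation at span plane j, streamwise station i.
# Placeholder white noise stands in for a spectrum-matched field.
rng = np.random.default_rng(0)
n_planes, n_stations = 8, 4096
u_frozen = rng.standard_normal((n_planes, n_stations))

def inflow(plane_j, step):
    # Taylor's hypothesis: u'(x, t) = u'(x - U*t, 0); with grid spacing
    # dx = U*dt the frozen field shifts one station per time step.
    return u_frozen[plane_j, step % n_stations]
```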
Stonecutters and Sutong Bridge have pushed the world record for main span length of cable-stayed bridges to over 1000m. The design of these bridges, both located in typhoon prone regions, is strongly influenced by wind effects during their erection. Rigorous wind tunnel test programmes have been devised and executed to determine the aerodynamic behaviour of the structures in the most critical erection conditions. Testing was augmented by analytical and numerical analyses to verify the safety of the structures throughout construction and to ensure that no serviceability problems would affect the erection process. This paper outlines the wind properties assumed for the bridge sites, the experimental test programme with some of its results, the dynamic properties of the bridges during free cantilevering erection and the assessment of their aerodynamic performance. Along the way, it discusses the similarities and some revealing differences between the two bridges in terms of their dynamic response to wind action.
Aerodynamic Analysis of Slender Vertical Structure and Response Control with Tuned Mass Damper
(2015)
Analysis of vortex-induced vibration has gained increasing interest in the practical field of civil engineering. The phenomenon often occurs in long, slender vertical structures such as high-rise buildings, towers, chimneys or bridge pylons, where it causes unfavourable responses and may lead to the collapse of the structure. It appears when the frequency of the vortex shedding produced in the wake of the body meets the natural frequency of the structure. Even though the phenomenon does not necessarily generate a divergent-amplitude response, the structure may still fail due to fatigue damage.
To reduce the effect of vortex-induced vibration, engineers widely use passive vibration control systems; this thesis studies the effect of a tuned mass damper. The objective is to simulate the effect of a tuned mass damper in reducing unfavourable responses due to vortex-induced vibration, starting with a validation of the numerical model against a wind tunnel test report. The reference structure used in the thesis is the Stonecutters Bridge, Hong Kong.
A numerical solver for computational fluid dynamics named VXflow, developed by Morgenthal [6], is utilized for the wind and structure simulation. The comparison between the numerical model and the wind tunnel results shows a maximum tip displacement difference of 10% for the model of the fully erected freestanding tower. The tuned mass damper (TMD) model itself is built separately in the finite element software SOFiSTiK, and the effective damping obtained from this model is then applied in the modal input data of the VXflow simulation. With a single TMD with a mass ratio of 0.5% relative to the modal mass of the first bending mode, the maximum tip displacement is reduced by 67% on average.
Considering construction limitations and the robustness of TMDs, the effects of multiple TMDs in a structure are also studied. An uncoupled procedure, in which the aeroelastic loads obtained from VXflow are applied in the finite element software SOFiSTiK, is used to determine the optimum distribution and optimum mass ratio of multiple tuned mass dampers; the remaining TMD properties are calculated with Den Hartog's formulas. The results are as follows: the peak displacement for multiple TMDs distributed with polynomial spacing is reduced 7.8% more than for TMDs distributed with equal spacing, and the optimum damper mass corresponds to a ratio of 1.25% of the modal mass of the first bending mode in the across-wind direction.
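For reference, the Den Hartog formulas mentioned above give closed-form optima for the remaining TMD properties once a mass ratio is chosen. The sketch below assumes the target mode's natural frequency and modal mass are known; the numbers in the example call are purely illustrative.

```python
import math

def den_hartog_tmd(mu, f_n, m_modal):
    """Den Hartog optima for a TMD tuned to a mode with natural frequency
    f_n [Hz] and modal mass m_modal [kg]; mu is the TMD-to-modal mass ratio."""
    f_opt = f_n / (1.0 + mu)                                  # optimal tuned frequency
    zeta_opt = math.sqrt(3.0 * mu / (8.0 * (1.0 + mu) ** 3))  # optimal damping ratio
    m_d = mu * m_modal                                        # damper mass
    k_d = m_d * (2.0 * math.pi * f_opt) ** 2                  # spring stiffness
    c_d = 2.0 * zeta_opt * m_d * 2.0 * math.pi * f_opt        # dashpot constant
    return m_d, k_d, c_d

print(den_hartog_tmd(mu=0.0125, f_n=0.5, m_modal=1.0e6))  # illustrative values
```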
This ethnographic study reports on emerging work processes and practices observed in the AEC (Architecture/Engineering/Construction) Global Teamwork program, i.e., what people experience when interacting with and through collaboration technologies, why people practice the way they do, and how the practice fits into the environment and changes work patterns. It presents the experience of two high-performing, typical yet extreme AEC teamwork cases adopting and adapting to collaboration technologies, and how these technologies impact their work processes in practice. The findings illustrate the importance of collaboration technologies in cross-disciplinary, global teamwork. Observations indicate that high-performance teams that use collaboration technologies effectively exhibit collaboration readiness at an early stage and manage to define a "third way" to meet the demands of the cross-disciplinary, multicultural and geographically distributed AEC workspace. The observations and implications represent the blueprint for yearly innovations and improvements to the design of the AEC Global Teamwork program.
For the dynamic behavior of lightweight structures like thin shells and membranes exposed to fluid flow the interaction between the two fields is often essential. Computational fluid-structure interaction provides a tool to predict this interaction and complement or eventually replace expensive experiments. Partitioned analyses techniques enjoy great popularity for the numerical simulation of these interactions. This is due to their computational superiority over simultaneous, i.e. fully coupled monolithic approaches, as they allow the independent use of suitable discretization methods and modular analysis software. We use, for the fluid, GLS stabilized finite elements on a moving domain based on the incompressible instationary Navier-Stokes equations, where the formulation guarantees geometric conservation on the deforming domain. The structure is discretized by nonlinear, three-dimensional shell elements.
Commonly used sequential staggered coupling schemes may exhibit instabilities due to the so-called artificial added mass effect. As best remedy to this problem subiterations should be invoked to guarantee kinematic and dynamic continuity across the fluid-structure interface. Since iterative coupling algorithms are computationally very costly, their convergence rate is very decisive for their usability. To ensure and accelerate the convergence of this iteration the updates of the interface position are relaxed. The time dependent, 'optimal' relaxation parameter is determined automatically without any user-input via exploiting a gradient method or applying an Aitken iteration scheme.
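A minimal sketch of the Aitken scheme named above: within each time step the interface displacement is relaxed with a scalar factor recomputed from the last two interface residuals. The residual function is a placeholder for one fluid solve plus one structure solve returning the interface mismatch.

```python
import numpy as np

def aitken_subiterations(residual, d0, omega0=0.5, tol=1e-8, max_it=50):
    """Relaxed fixed-point iteration d <- d + omega * r(d), with Aitken's
    Delta^2 update of the relaxation factor omega."""
    d, omega = d0.copy(), omega0
    r_old = residual(d)
    for _ in range(max_it):
        d = d + omega * r_old                 # relaxed interface update
        r_new = residual(d)
        if np.linalg.norm(r_new) < tol:
            break
        dr = r_new - r_old
        omega = -omega * np.dot(r_old, dr) / np.dot(dr, dr)  # Aitken factor
        r_old = r_new
    return d
```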
This thesis presents new interactive visualization techniques and systems intended to support users with real-world decisions such as selecting a product from a large variety of similar offerings, finding appropriate wording as a non-native speaker, and assessing an alleged case of plagiarism.
The Product Explorer is a significantly improved interactive Parallel Coordinates display for facilitating the product selection process in cases where many attributes and numerous alternatives have to be considered. A novel visual representation for categorical and ordered data with only few occurring values, the so-called extended areas, in combination with cubic curves for connecting the parallel axes, are crucial for providing an effective overview of the entire dataset and to facilitate the tracing of individual products. The visual query interface supports users in quickly narrowing down the product search to a small subset or even a single product. The scalability of the approach towards a large number of attributes and products is enhanced by the possibility of setting some constraints on final attributes and, therefore, reducing the number of considered attributes and data items. Furthermore, an attribute repository allows users to focus on the most important attributes at first and to bring in additional criteria for product selection later in the decision process. A user study confirmed that the Product Explorer is indeed an excellent tool for its intended purpose for casual users.
The Wordgraph is a layered graph visualization for the interactive exploration of search results for complex keywords-in-context queries. The system relies on the Netspeak web service and is designed to support non-native speakers in finding customary phrases. Uncertainties about the commonness of phrases are expressed with the help of wildcard-based queries. The visualization presents the alternatives for the wildcards in a multi-column layout: one column per wildcard with the other query fragments in between. The Wordgraph visualization displays the sorted results for all wildcards at once by appropriately arranging the words of each column. A user study confirmed that this is a significant advantage over simple textual result lists. Furthermore, visual interfaces to filter, navigate, and expand the graph allow interactive refinement and expansion of wildcard-containing queries.
Furthermore, this thesis presents an advanced visual analysis tool for assessing and presenting alleged cases of plagiarism and provides a three-level approach for exploring the so-called finding spots in their context. The overview shows the relationship of the entire suspicious document to the set of source documents. An intermediate glyph-based view reveals the structural and textual differences and similarities of a set of finding spots and their corresponding source text fragments. Eventually, the actual fragments of the finding spot can be shown in a side-by-side view with a novel structured wrapping of both the source, as well as the suspicious text. The three different levels of detail are tied together by versatile navigation and selection operations. Reviews with plagiarism experts confirm that this tool can effectively support their workflow and provides a significant improvement over existing static visualizations for assessing and presenting plagiarism cases.
The three main contributions of this research have a lot in common aside from being carefully designed and scientifically grounded solutions to real-world decision problems. The first two visualizations facilitate the decision for a single possibility out of many alternatives, whereas the latter ones deal with text at varying levels of detail. All visual representations are clearly structured based on horizontal and vertical layers contained in a single view and they all employ edges for depicting the most important relationships between attributes, words, or different levels of detail. A detailed analysis considering the context of the established decision-making literature reveals that important steps of common decision models are well-supported by the three visualization systems presented in this thesis.
Ever since data processing turned, in all its complexity, to the theme of computer-integrated manufacturing, production planning and control has been among the areas in which computer support appeared most urgent. Later, comprehensive business solutions emerged that are (to this day rather imprecisely) called enterprise resource planning (ERP) systems and that also cover production planning functions in their logistics modules. All known MRP, PPS and ERP systems rely on successive planning. Advanced planning and scheduling (APS) systems have attracted growing interest since about 1995. Besides demand planning, production planning and scheduling, distribution planning, transportation planning and supply chain planning, APS systems are expected to provide solutions for the number and location of production sites and distribution warehouses, the assignment to production sites, capacity planning for personnel and equipment per site, inventory per part and warehouse, the determination of the required means of transport and the frequency of their use, the assignment of warehouses to production sites and of markets to warehouses, and more. APS systems thus complement ERP solutions, use the data already available in the ERP system, and require novel algorithms and (meta-)heuristics. The lecture presents and discusses models and real-time algorithms for optimizing logistics for processes with short-term requirements, geographically distributed production, warehousing of raw, intermediate and end products, and changing transport conditions, from the perspective of practical implementation and application in the form of an ASP solution.
The laser beam is a small, flexible and fast polishing tool. With laser radiation, many contours and geometries on quartz glass surfaces can be finished in a very short time. The temperature that develops during polishing determines the achievable surface smoothing and, as a negative side effect, induces material stresses. To identify the parameters that govern the laser polishing process and the resulting surface roughness, and to estimate the material stresses, temperature simulations and extensive polishing experiments were carried out. In these experiments the initial and machining parameters were varied and temperatures were measured contact-free. The accuracy of the thermal and mechanical simulation was improved through advanced FE analysis.
Search engines are very good at answering queries that look for facts. Still, information needs that concern forming opinions on a controversial topic or making a decision remain a challenge for search engines. Since they are optimized to retrieve satisfying answers, search engines might emphasize a specific stance on a controversial topic in their ranking, amplifying bias in society in an undesired way. Argument retrieval systems support users in forming opinions about controversial topics by retrieving arguments for a given query. In this thesis, we address challenges in argument retrieval systems that concern integrating them in search engines, developing generalizable argument mining approaches, and enabling frame-guided delivery of arguments.
Adapting argument retrieval systems to search engines should start by identifying and analyzing information needs that look for arguments. To identify questions that look for arguments we develop a two-step annotation scheme that first identifies whether the context of a question is controversial, and if so, assigns it one of several question types: factual, method, and argumentative. Using this annotation scheme, we create a question dataset from the logs of a major search engine and use it to analyze the characteristics of argumentative questions. The analysis shows that the proportion of argumentative questions on controversial topics is substantial and that they mainly ask for reasons and predictions. The dataset is further used to develop a classifier to uniquely map questions to the question types, reaching a convincing F1-score of 0.78.
While the web offers an invaluable source of argumentative content to respond to argumentative questions, it is characterized by multiple genres (e.g., news articles and social fora). Exploiting the web as a source of arguments relies on developing argument mining approaches that generalize over genre. To this end, we approach the problem of how to extract argument units in a genre-robust way. Our experiments on argument unit segmentation show that transfer across genres is rather hard to achieve using existing sequence-to-sequence models.
Another property of text which argument mining approaches should generalize over is topic. Since new topics appear daily on which argument mining approaches are not trained, argument mining approaches should be developed in a topic-generalizable way. Towards this goal, we analyze the coverage of 31 argument corpora across topics using three topic ontologies. The analysis shows that the topics covered by existing argument corpora are biased toward a small subset of easily accessible controversial topics, hinting at the inability of existing approaches to generalize across topics. In addition to corpus construction standards, fostering topic generalizability requires a careful formulation of argument mining tasks. Same side stance classification is a reformulation of stance classification that makes it less dependent on the topic. First experiments on this task show promising results in generalizing across topics.
To be effective at persuading their audience, users of an argument retrieval system should select arguments from the retrieved results based on what frame they emphasize of a controversial topic. An open challenge is to develop an approach to identify the frames of an argument. To this end, we define a frame as a subset of arguments that share an aspect. We operationalize this model via an approach that identifies and removes the topic of arguments before clustering them into frames. We evaluate the approach on a dataset that covers 12,326 frames and show that identifying the topic of an argument and removing it helps to identify its frames.
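A rough sketch of this topic-then-cluster idea (not the paper's exact pipeline): strip the topic vocabulary from the argument representations before clustering, so that clusters form around shared aspects, i.e. frames. The scikit-learn components and all names below are assumptions.

```python
# Hypothetical illustration of frame identification by topic removal + clustering.
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

def cluster_into_frames(arguments, topic_terms, n_frames=5):
    """arguments: list of argument texts; topic_terms: words naming the topic.
    Removing the topic terms lets TF-IDF emphasize aspect vocabulary."""
    vectorizer = TfidfVectorizer(stop_words=list(topic_terms))
    X = vectorizer.fit_transform(arguments)
    return KMeans(n_clusters=n_frames, n_init=10, random_state=0).fit_predict(X)
```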
It is widely accepted that most people spend the majority of their lives indoors. Most individuals do not realize that while indoors, roughly half of heat exchange affecting their thermal comfort is in the form of thermal infrared radiation. We show that while researchers have been aware of its thermal comfort significance over the past century, systemic error has crept into the most common evaluation techniques, preventing adequate characterization of the radiant environment. Measuring and characterizing radiant heat transfer is a critical component of both building energy efficiency and occupant thermal comfort and productivity. Globe thermometers are typically used to measure mean radiant temperature (MRT), a commonly used metric for accounting for the radiant effects of an environment at a point in space. In this paper we extend previous field work to a controlled laboratory setting to (1) rigorously demonstrate that existing correction factors used in the American Society of Heating Ventilation and Air-conditioning Engineers (ASHRAE) Standard 55 or ISO7726 for using globe thermometers to quantify MRT are not sufficient; (2) develop a correction to improve the use of globe thermometers to address problems in the current standards; and (3) show that mean radiant temperature measured with ping-pong ball-sized globe thermometers is not reliable due to a stochastic convective bias. We also provide an analysis of the maximum precision of globe sensors themselves, a piece missing from the domain in contemporary literature.
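For orientation, the forced-convection correction from ISO 7726 that the paper scrutinizes converts a globe reading into MRT as sketched below; the default diameter and emissivity are typical values for a standard globe, not the paper's laboratory settings.

```python
def mrt_from_globe(t_g, t_a, v_a, d=0.15, eps=0.95):
    """Mean radiant temperature [C] from globe temperature t_g [C], air
    temperature t_a [C] and air speed v_a [m/s], using the ISO 7726
    forced-convection correction for a globe of diameter d [m] and
    emissivity eps."""
    return ((t_g + 273.0) ** 4
            + 1.1e8 * v_a ** 0.6 / (eps * d ** 0.4) * (t_g - t_a)) ** 0.25 - 273.0

print(mrt_from_globe(t_g=26.0, t_a=24.0, v_a=0.3))  # illustrative reading
```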
A framed-tube system with multiple internal tubes is analysed using an orthotropic box beam analogy in which each tube is individually modelled by a box beam that accounts for flexural and shear deformations as well as shear-lag effects. A simple numerical modelling technique is proposed for estimating the shear-lag phenomenon in tube structures with multiple internal tubes. The proposed method idealizes the framed-tube structure as equivalent multiple tubes, each composed of four equivalent orthotropic plate panels. The numerical analysis is based on the minimum potential energy principle in conjunction with a variational approach. The shear-lag phenomenon of such structures is studied taking into account the additional bending moments in the tubes, and a detailed numerical analysis of the additional bending moment is carried out. A moment factor is introduced to characterize the shear-lag phenomenon along with the additional moment.
This work focuses on the optimization of free-form adaptive fibre-composite shell structures on the basis of a design method developed here, built on an integrated parametric model. The transfer of adaptive, naturally inspired processes is a far-reaching source of inspiration. Using smart materials, adaptive structures can be realized as material-saving, filigree structures. Compliance with the ultimate and serviceability limit states is then not ensured by the cross-sectional dimensions alone; the necessary component stiffness can instead be achieved by introducing activation energy (operational energy). In this way the energy bound in the component dimensions (embodied energy) can be minimized. The developed design method enables the design and optimization of material-minimized shell structures in a multi-stage process covering, from a structural engineering perspective, numerical form finding, structural analysis, and actuator and sensor positioning. In addition, sustainability analyses based on a life cycle assessment are performed. Because the criteria differ but influence one another, an optimization has to be carried out. This work presents an approach for defining admissible life-cycle-assessment indicators for smart materials based on the energy difference between a passive and an adaptive structure. These indicators can guide the development of future smart materials from the standpoint of holistic sustainability. The generality of the design method and its transferability to other structural systems in construction, and specifically to other material configurations, are demonstrated with several examples.
We present an enhancement towards adaptive video training for PhoneGuide, a digital museum guidance system for ordinary camera–equipped mobile phones. It enables museum visitors to identify exhibits by capturing photos of them. In this article, a combined solution of object recognition and pervasive tracking is extended to a client–server–system for improving data acquisition and for supporting scale–invariant object recognition.
In this paper an adaptive heterogeneous multiscale model, which couples two substructures with different length scales into one numerical model, is introduced for the simulation of damage in concrete. In the presented approach the initiation, propagation and coalescence of microcracks is simulated using a mesoscale model which explicitly represents the heterogeneous material structure of concrete. The mesoscale model is restricted to the damaged parts of the structure, whereas the undamaged regions are simulated on the macroscale. As a result, an adaptive enlargement of the mesoscale model during the simulation is necessary. In the first part of the paper, the generation of the heterogeneous mesoscopic structure of concrete, the finite element discretization of the mesoscale model, the applied isotropic damage model and the cohesive zone model are briefly introduced. Furthermore, the mesoscale simulation of a uniaxial tension test of a concrete prism is presented and the obtained numerical results are compared to experimental results. The second part focuses on the adaptive heterogeneous multiscale approach. Indicators for the model adaptation and for the coupling between the different numerical models are introduced. The transfer from the macroscale to the mesoscale and the adaptive enlargement of the mesoscale substructure are presented in detail. A nonlinear simulation of a realistic structure using an adaptive heterogeneous multiscale model is presented at the end of the paper to show the applicability of the proposed approach to large-scale structures.
In engineering science, the modeling and numerical analysis of complex systems and relations plays an important role. In order to carry out such an investigation, for example a stochastic analysis, in a reasonable computational time, approximation procedures have been developed. A well-known approach is the response surface method, where the relation between input and output quantities is represented, for example, by global polynomials or by local interpolation schemes such as Moving Least Squares (MLS). In recent years, artificial neural networks (ANN) have been applied for such purposes as well. Recently, an adaptive response surface approach for reliability analyses was proposed which is very efficient with respect to the number of expensive limit state function evaluations; due to the simplex interpolation used, however, the procedure is limited to small dimensions. In this paper this approach is extended to larger dimensions by using combined ANN and MLS response surfaces to evaluate the adaptation criterion with only one set of joined limit state points. As adaptation criterion, a combination of the maximum difference in the conditional probabilities of failure and the maximum difference in the approximated radii is applied. Compared to response surfaces on directional samples or to plain directional sampling, the failure probability can be estimated with a much smaller number of limit state points.
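As a sketch of the MLS building block (linear basis, Gaussian weights; neither the paper's particular weight functions nor the ANN component are reproduced here):

```python
import numpy as np

def mls_predict(x, X, y, h=1.0):
    """Moving Least Squares prediction at point x from samples (X, y),
    using a linear basis and Gaussian weights of bandwidth h."""
    P = np.hstack([np.ones((len(X), 1)), X])            # basis at samples
    w = np.exp(-np.sum((X - x) ** 2, axis=1) / h ** 2)  # weights centred at x
    coeff = np.linalg.solve(P.T @ (w[:, None] * P), P.T @ (w * y))
    return np.concatenate(([1.0], x)) @ coeff           # evaluate fit at x

# Example: approximate f(x) = sin(x1) + x2 from scattered samples.
rng = np.random.default_rng(1)
X = rng.uniform(-2.0, 2.0, size=(200, 2))
y = np.sin(X[:, 0]) + X[:, 1]
print(mls_predict(np.array([0.5, 0.5]), X, y, h=0.8))
```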
One major research focus in the Materials Science and Engineering community in the past decade has been to obtain a more fundamental understanding of the phenomenon of 'material failure'. Such an understanding is critical for engineers and scientists developing new materials with higher strength and toughness, developing robust designs against failure, or for those concerned with an accurate estimate of a component's design life. Defects like cracks and dislocations evolve at nano scales and influence macroscopic properties such as the strength, toughness and ductility of a material. In engineering applications, the global response of the system is often governed by the behaviour at the smaller length scales. Hence, the sub-scale behaviour must be computed accurately for good predictions of the full-scale behaviour.
Molecular Dynamics (MD) simulations promise to reveal the fundamental mechanics of material failure by modeling the atom-to-atom interactions. Since atomistic dimensions are of the order of angstroms (Å), approximately 85 billion atoms are required to model a volume of 1 μm³ of copper. Pure atomistic models are therefore prohibitively expensive for everyday engineering computations involving macroscopic cracks and shear bands, which are much larger than the atomistic length and time scales. To reduce the computational effort, multiscale methods are required, which are able to couple a continuum description of the structure with an atomistic description. In such paradigms, cracks and dislocations are explicitly modeled at the atomistic scale, whilst a self-consistent continuum model is used elsewhere.
Many multiscale methods for fracture are developed for "fictitious" materials based on "simple" potentials such as the Lennard-Jones potential, and multiscale methods for evolving cracks are rare. Efficient methods to coarse-grain the fine scale defects are missing, and the existing multiscale methods for fracture do not adaptively adjust the fine scale domain as the crack propagates. Most methods therefore only "enlarge" the fine scale domain, which drastically increases the computational cost. Adaptive adjustment requires the fine scale domain to be refined and coarsened. One of the major difficulties in multiscale methods for fracture is to upscale fracture-related material information from the fine scale to the coarse scale, in particular for complex crack problems. Most of the existing approaches have therefore been applied to examples with comparatively few macroscopic cracks.
Key contributions
The bridging scale method is enhanced using the phantom node method so that cracks can be modeled at the coarse scale. To ensure self-consistency in the bulk, a virtual atom cluster is devised providing the response of the intact material at the coarse scale. A molecular statics model is employed in the fine scale where crack propagation is modeled by naturally breaking the bonds. The fine scale and coarse scale models are coupled by enforcing the displacement boundary conditions on the ghost atoms. An energy criterion is used to detect the crack tip location. Adaptive refinement and coarsening schemes are developed and implemented during the crack propagation. The results were observed to be in excellent agreement with the pure atomistic simulations. The developed multiscale method is one of the first adaptive multiscale methods for fracture.
A robust and simple three-dimensional coarse-graining technique to convert a given atomistic region into an equivalent coarse region in the context of multiscale fracture has been developed; it is the first of its kind. The technique can be applied to identify and upscale defects such as cracks, dislocations and shear bands. It has been applied to estimate equivalent coarse scale models of several complex fracture patterns obtained from pure atomistic simulations. The upscaled fracture patterns agree well with the actual fracture patterns, and the error in the potential energy between the pure atomistic and the coarse-grained model was observed to be acceptable.
A first novel meshless adaptive multiscale method for fracture has been developed. The phantom node method is replaced by a meshless differential reproducing kernel particle method. The differential reproducing kernel particle method is comparatively more expensive but allows for a more "natural" coupling between the two scales due to the meshless interpolation functions. The higher order continuity is also beneficial. The centro symmetry parameter is used to detect the crack tip location. The developed multiscale method is employed to study the complex crack propagation. Results based on the meshless adaptive multiscale method were observed to be in excellent agreement with the pure atomistic simulations.
The developed multiscale methods are applied to study fracture in practical materials such as graphene and graphene on a silicon surface. Bond stretching and bond reorientation were observed to be the main mechanisms of crack growth in graphene. The influence of the time step on the crack propagation was studied using two different time steps. Pure atomistic simulations of fracture in graphene on a silicon surface are presented, and details of the three-dimensional multiscale method to study fracture in graphene on a silicon surface are discussed.
The advent of high-performance mobile phones has opened up the opportunity to develop new context-aware applications for everyday life. In particular, applications for context-aware information retrieval in conjunction with image-based object recognition have become a focal area of recent research. In this thesis we introduce an adaptive mobile museum guidance system that allows visitors in a museum to identify exhibits by taking a picture with their mobile phone. Besides approaches to object recognition, we present different adaptation techniques that improve classification performance. After providing a comprehensive background of context-aware mobile information systems in general, we present an on-device object recognition algorithm and show how its classification performance can be improved by capturing multiple images of a single exhibit. To accomplish this, we combine the classification results of the individual pictures and consider the perspective relations among the retrieved database images. In order to identify multiple exhibits in pictures we present an approach that uses the spatial relationships among the objects in images. They make it possible to infer and validate the locations of undetected objects relative to the detected ones and additionally improve classification performance. To cope with environmental influences, we introduce an adaptation technique that establishes ad-hoc wireless networks among the visitors’ mobile devices to exchange classification data. This ensures constant classification rates under varying illumination levels and changing object placement. Finally, in addition to localization using RF-technology, we present an adaptation technique that uses user-generated spatio-temporal pathway data for person movement prediction. Based on the history of previously visited exhibits, the algorithm determines possible future locations and incorporates these predictions into the object classification process. This increases classification performance and offers benefits comparable to traditional localization approaches but without the need for additional hardware. Through multiple field studies and laboratory experiments we demonstrate the benefits of each approach and show how they influence the overall classification rate.
The nonlinear behavior of concrete can be attributed to the propagation of microcracks within the heterogeneous internal material structure. In this thesis, a mesoscale model is developed which allows for the explicit simulation of these microcracks. Consequently, the actual physical phenomena causing the complex nonlinear macroscopic behavior of concrete can be represented using rather simple material formulations. On the mesoscale, the numerical model explicitly resolves the components of the internal material structure. For concrete, a three-phase model consisting of aggregates, mortar matrix and interfacial transition zone is proposed. Based on prescribed grading curves, an efficient algorithm for the generation of three-dimensional aggregate distributions using ellipsoids is presented. In the numerical model, tensile failure of the mortar matrix is described using a continuum damage approach. In order to reduce spurious mesh sensitivities, introduced by the softening behavior of the matrix material, nonlocal integral-type material formulations are applied. The propagation of cracks at the interface between aggregates and mortar matrix is represented in a discrete way using a cohesive crack approach. The iterative solution procedure is stabilized using a new path-following constraint within the framework of load-displacement-constraint methods, which allows for an efficient representation of snap-back phenomena. In several examples, the influence of the randomly generated heterogeneous material structure on the stochastic scatter of the results is analyzed. Furthermore, the ability of mesoscale models to represent size effects is investigated. Mesoscale simulations require the discretization of the internal material structure. Compared to simulations on the macroscale, the numerical effort and the memory demand increase dramatically. Due to the complexity of the numerical model, mesoscale simulations are, in general, limited to small specimens. In this thesis, an adaptive heterogeneous multiscale approach is presented which allows for the incorporation of mesoscale models within nonlinear simulations of concrete structures. In heterogeneous multiscale models, only critical regions, i.e. regions in which damage develops, are resolved on the mesoscale, whereas undamaged or sparsely damaged regions are modeled on the macroscale. A crucial point in simulations with heterogeneous multiscale models is the coupling of sub-domains discretized on different length scales. The sub-domains differ not only in the size of the finite elements but also in the constitutive description. In this thesis, different methods for the coupling of non-matching discretizations - constraint equations, the mortar method and the Arlequin method - are investigated and their application to heterogeneous multiscale models is presented. Another important point is the detection of critical regions. An adaptive solution procedure allowing the transfer of macroscale sub-domains to the mesoscale is proposed. In this context, several indicators which trigger the model adaptation are introduced. Finally, the application of the proposed adaptive heterogeneous multiscale approach in nonlinear simulations of concrete structures is presented.
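The aggregate generation can be pictured as a take-and-place procedure: sample ellipsoids sieve class by sieve class from the grading curve, largest class first, and accept only placements that do not overlap previously placed particles. A simplified Python sketch, in which a conservative bounding-sphere test stands in for an exact ellipsoid intersection check and all names and parameters are illustrative:

    import math, random

    def place_aggregates(L, size_classes, max_tries=5000):
        """L: edge length of a cubic box. size_classes: list of
        (d_min, d_max, volume_fraction) per sieve class. Returns a list
        of (center, bounding_radius, semi_axes) for accepted ellipsoids."""
        placed = []
        for d_min, d_max, frac in sorted(size_classes, reverse=True):
            volume, tries = 0.0, 0
            while volume < frac * L**3 and tries < max_tries:
                tries += 1
                d = random.uniform(d_min, d_max)
                a = d / 2                        # largest semi-axis
                b = random.uniform(0.5, 1.0) * a
                c = random.uniform(0.5, 1.0) * a
                center = [random.uniform(a, L - a) for _ in range(3)]
                # conservative check: bounding spheres must not overlap
                if all(math.dist(center, p) >= a + r for p, r, _ in placed):
                    placed.append((center, a, (a, b, c)))
                    volume += 4.0 / 3.0 * math.pi * a * b * c
        return placed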
Major problems in applying selective sensitivity to system identification are the requirement of precise knowledge of the system parameters and the realization of the required system of forces. This work presents a procedure which is able to derive selectively sensitive excitation through iterative experiments. The first step is to determine the selectively sensitive displacement and force patterns. These are obtained by introducing prior information on the system parameters into an optimization which minimizes the sensitivities of the structural response with respect to the unselected parameters while keeping the sensitivities with respect to the selected parameters constant. In a second step, the force pattern is used to derive dynamic loads on the tested structure and measurements are carried out. An automatic control ensures the required excitation forces. In a third step, the measured outputs are employed to update the prior information. The strategy is to minimize the difference between a predicted displacement response, formulated as a function of the unknown parameters and the measured displacements, and the selectively sensitive displacement calculated in the first step. With the updated parameter values, a re-analysis of the selective sensitivity is performed and the experiment is repeated until the displacement responses of the model and the actual structure agree. As an illustration, a simply supported steel beam under harmonic excitation is investigated, demonstrating that the adaptive excitation can be obtained efficiently.
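Read as an optimization, the first step is an equality-constrained least-squares problem: minimize the response sensitivities with respect to the unselected parameters while pinning the sensitivity with respect to the selected parameter to a constant. A minimal Python sketch of that reading via the KKT system (the formulation actually used in the work may differ in detail; all names are illustrative):

    import numpy as np

    def selective_pattern(s_sel, S_unsel, c=1.0):
        """Solve  min ||S_unsel d||^2  s.t.  s_sel . d = c.
        s_sel: (n,) sensitivity row of the selected parameter;
        S_unsel: (m, n) sensitivity rows of the unselected parameters."""
        n = s_sel.size
        A = S_unsel.T @ S_unsel + 1e-12 * np.eye(n)  # tiny regularization
        K = np.block([[A, s_sel[:, None]],
                      [s_sel[None, :], np.zeros((1, 1))]])
        rhs = np.append(np.zeros(n), c)
        return np.linalg.solve(K, rhs)[:n]  # selectively sensitive pattern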
We present recent developments of adaptive wavelet solvers for elliptic eigenvalue problems. We describe the underlying abstract iteration scheme of the preconditioned perturbed iteration. We apply the iteration to a simple model problem in order to identify the main ideas which a numerical realization of the abstract scheme is based upon. This indicates how these concepts carry over to wavelet discretizations. Finally we present numerical results for the Poisson eigenvalue problem on an L-shaped domain.
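To make the abstract scheme concrete in finite dimensions, a preconditioned inverse iteration on a discretized 1D Poisson operator reproduces the basic update; the wavelet discretization, the adaptivity and the perturbation analysis are precisely what this toy Python sketch leaves out:

    import numpy as np

    def pinvit(A, B_inv, x, steps=100):
        """Preconditioned inverse iteration for the smallest eigenpair
        of an SPD matrix A: x <- x - B_inv (A x - mu(x) x), normalized,
        with mu(x) the Rayleigh quotient."""
        x = x / np.linalg.norm(x)
        for _ in range(steps):
            mu = x @ A @ x
            x = x - B_inv @ (A @ x - mu * x)
            x = x / np.linalg.norm(x)
        return x @ A @ x, x

    # 1D Poisson stand-in: eigenvalues are 4 sin^2(k pi / (2(n+1)))
    n = 100
    A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
    lam, _ = pinvit(A, np.linalg.inv(A), np.ones(n))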
The numerical simulation of damage using phenomenological models on the macroscale was state of the art for many decades. However, such models are not able to capture the complex nature of damage, which simultaneously proceeds on multiple length scales. Furthermore, these phenomenological models usually contain damage parameters which are not physically interpretable. Consequently, a reasonable experimental determination of these parameters is often impossible. In the last twenty years, the ongoing advance in computational capacities has provided new opportunities for more and more detailed studies of the microstructural damage behavior. Today, multiphase models with several million degrees of freedom enable the numerical simulation of micro-damage phenomena in naturally heterogeneous materials. Therewith, the application of multiscale concepts for the numerical investigation of the complex nature of damage can be realized. The presented thesis contributes to a hierarchical multiscale strategy for the simulation of brittle intergranular damage in polycrystalline materials, for example aluminum. The aim is the numerical investigation of physical damage phenomena on the atomistic microscale and the integration of this physically based information into damage models on the continuum meso- and macroscale. Therefore, numerical methods for the damage analysis on the micro- and mesoscale, including the scale transfer, are presented, and the transition to the macroscale is discussed. The investigation of brittle intergranular damage on the microscale is realized by applying the nonlocal Quasicontinuum method, which fully describes the material behavior by atomistic potential functions but reduces the number of atomic degrees of freedom by introducing kinematic couplings. Since this promising method has so far been applied only by a limited group of researchers to special problems, necessary improvements have been realized in our own parallelized implementation of the 3D nonlocal Quasicontinuum method. The aim of this implementation was to develop and combine robust and efficient algorithms for a general use of the Quasicontinuum method, and therewith to allow for atomistic damage analysis in arbitrary grain boundary configurations. The implementation is applied in analyses of brittle intergranular damage in ideal and non-ideal grain boundary models of FCC aluminum, considering arbitrary misorientations. From the microscale simulations, traction-separation laws are derived which describe grain boundary decohesion on the mesoscale. Traction-separation laws are part of cohesive zone models used to simulate brittle interface decohesion in heterogeneous polycrystal structures. 2D and 3D mesoscale models are presented which are able to reproduce crack initiation and propagation along cohesive interfaces in polycrystals. An improved Voronoi algorithm is developed in 2D to generate polycrystal material structures based on arbitrary distribution functions of grain size; the new model is more flexible in representing realistic grain size distributions. Further improvements of the 2D model are realized by implementing and applying an orthotropic material model with a Hill plasticity criterion to the grains. The 2D and 3D polycrystal models are applied to analyze crack initiation and propagation in statically loaded aluminum samples on the mesoscale without the necessity of defining initial damage.
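For the 2D grain generation step, the standard way to obtain bounded Voronoi cells inside a box is to mirror the seed points across the walls; the thesis's improved algorithm additionally controls seed placement so that the resulting grain sizes follow an arbitrary distribution function, which the uniform sampling in this Python sketch does not attempt:

    import numpy as np
    from scipy.spatial import Voronoi

    def polycrystal_2d(n_grains, box=1.0, seed=0):
        """Returns one polygon (array of vertices) per grain, clipped to
        the box by construction via mirrored seed points."""
        rng = np.random.default_rng(seed)
        pts = rng.uniform(0.0, box, size=(n_grains, 2))
        mirrors = [pts * [-1, 1],                 # across x = 0
                   [2 * box, 0] + pts * [-1, 1],  # across x = box
                   pts * [1, -1],                 # across y = 0
                   [0, 2 * box] + pts * [1, -1]]  # across y = box
        vor = Voronoi(np.vstack([pts] + mirrors))
        # the cells of the first n_grains sites are the grain polygons
        return [vor.vertices[vor.regions[vor.point_region[i]]]
                for i in range(n_grains)]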
The uncertainty in the construction industry is greater than in other industries. Consequently, most construction projects do not go entirely as planned. The project management plan therefore needs to be adapted repeatedly within the project lifecycle to suit the actual project conditions. Generally, the risks of change in the project management plan are difficult to identify in advance, especially if these risks are caused by unexpected events such as human errors or changes in client preferences. Knowledge acquired from different resources is essential to identify probable deviations as well as to find proper solutions to the change risks faced. Hence, it is necessary to have a knowledge base that contains known solutions for the common exceptional cases that may cause changes in each construction domain. The ongoing research work presented in this paper uses the process modeling technique of Event-driven Process Chains to describe different patterns of structural change in schedule networks. This results in several so-called "change templates". Under each template, different types of change risk/response pairs can be categorized and stored in a knowledge base. This knowledge base is described as an ontology model populated with reference construction process data. The implementation of the developed approach can be seen as an iterative scheduling cycle that is repeated within the project lifecycle as new change risks surface. This makes it possible to check the availability of ready solutions in the knowledge base for the situation at hand. Moreover, if a solution is adopted, CPSP ("Change Project Schedule Plan"), a prototype developed for the purpose of this research work, is used to make the needed structural changes to the schedule network automatically, based on the change template. What-if scenarios can be implemented using the CPSP prototype in the planning phase to study the effect of specific situations without endangering the success of the project objectives. Hence, better designed and more maintainable project schedules can be achieved.
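Illustratively, a change template with its associated risk/response pairs could be held in a structure like the following Python sketch; the knowledge base in the paper is an ontology model populated with reference construction process data, so the flat dictionary and all identifiers here are invented purely for illustration:

    from dataclasses import dataclass, field

    @dataclass
    class ChangeTemplate:
        """One pattern of structural change in the schedule network,
        with known risk/response pairs filed under it."""
        name: str
        structure_change: str
        risk_responses: dict = field(default_factory=dict)

    kb = {
        "insert_activity": ChangeTemplate(
            name="insert_activity",
            structure_change="insert activity between A and B, relink links",
            risk_responses={"late material delivery":
                            "insert procurement buffer before successor"}),
    }

    def find_response(template, risk):
        t = kb.get(template)
        return t.risk_responses.get(risk) if t else None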
Expert systems integrating fuzzy reasoning techniques represent a powerful tool to support practicing engineers during the early stages of structural design. In this context, fuzzy models have proved to be very suitable for the representation of complex design knowledge. However, their definition is a laborious task. This paper introduces an approach for the design and optimization of fuzzy systems based upon Genetic Programming. To keep the emerging fuzzy systems transparent, a new framework for the definition of linguistic variables is also introduced.
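To illustrate the optimization loop only (not the paper's actual method), the following Python sketch evolves the parameters of a tiny Sugeno-style rule base by mutation and selection; Genetic Programming as used in the paper instead operates on tree-structured fuzzy systems with genetic operators acting on the trees:

    import random

    def tri(x, a, b, c):
        """Triangular membership function with support [a, c], peak b."""
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x < b else (c - x) / (c - b)

    def error(rules, samples):
        """Squared error of weighted-average (Sugeno-style) inference."""
        err = 0.0
        for x, target in samples:
            w = [(tri(x, *mf), out) for mf, out in rules]
            s = sum(m for m, _ in w)
            y = sum(m * o for m, o in w) / s if s else 0.0
            err += (y - target) ** 2
        return err

    def evolve(samples, n_rules=3, gens=300):
        rules = [(sorted(random.random() for _ in range(3)), random.random())
                 for _ in range(n_rules)]
        for _ in range(gens):  # mutate; keep the fitter rule base
            mutant = [(sorted(v + random.gauss(0, 0.05) for v in mf),
                       out + random.gauss(0, 0.05)) for mf, out in rules]
            if error(mutant, samples) < error(rules, samples):
                rules = mutant
        return rules

    # e.g. approximate y = x^2 on [0, 1]:
    # rules = evolve([(i / 10, (i / 10) ** 2) for i in range(11)])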
Acoustic travel-time tomography (ATOM) determines the temperature distribution in a propagation medium by measuring the travel times of acoustic signals between transmitters and receivers. To employ ATOM for indoor climate measurements, impulse responses were measured in the climate chamber lab of the Bauhaus-University Weimar and compared with the theoretical results of its image source model (ISM). A challenging task is distinguishing the reflections of interest in the reflectogram when the sound rays have similar travel times. This paper presents a numerical method to address this problem by finding optimal positions of transmitter and receiver, since these have a direct impact on the distribution of travel times. The optimal positions exhibit the minimum number of simultaneous arrival times within a threshold level. Moreover, in the tomographic reconstruction, voxels that are not crossed by any sound ray lead to an inaccurate determination of the air temperature within those voxels. Based on the presented numerical method, the number of empty tomographic voxels is minimized to ensure the best sound-ray coverage of the room. Subsequently, the spatial temperature distribution is estimated by the simultaneous iterative reconstruction technique (SIRT). The experimental set-up in the climate chamber verifies the simulation results.
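The SIRT update itself is compact: with a ray matrix A whose entry (i, j) is the length of ray i inside voxel j and measured travel times t, the reconstructed slowness field follows from repeated normalized back-projection of the residuals. A minimal Python sketch (the conversion from slowness to air temperature, and any refinements used in the paper, are omitted):

    import numpy as np

    def sirt(A, t, n_iter=200, relax=0.5):
        """x holds the slowness (reciprocal sound speed) per voxel.
        Rows/columns with zero sum -- e.g. voxels crossed by no ray --
        receive no update."""
        row, col = A.sum(axis=1), A.sum(axis=0)
        R = np.divide(1.0, row, out=np.zeros_like(row), where=row > 0)
        C = np.divide(1.0, col, out=np.zeros_like(col), where=col > 0)
        x = np.zeros(A.shape[1])
        for _ in range(n_iter):
            x += relax * C * (A.T @ (R * (t - A @ x)))
        return x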
The technique of Acoustic travel-time TOMography (ATOM) allows for measuring the distribution of air temperature throughout an entire room based on the determined sound travel times of early reflections, currently up to second-order reflections. The number of early reflections detected in the room impulse response (RIR), which correspond to the desired sound paths inside the room, has a significant impact on the resolution of the reconstructed temperatures. This study investigates the possibility of utilizing an array of directional sound sources for ATOM measurements instead of the single omnidirectional loudspeaker used in previous studies [1–3]. The developed measurement setup consists of two directional sound sources placed near the edge of the floor in the climate chamber of the Bauhaus-University Weimar and one omnidirectional receiver at the center of the room near the ceiling. In order to compensate for the reduced number of sound paths when using directional sound sources, it is proposed to take high-energy early reflections up to third order into account. For this purpose, the simulated travel times of image sources up to third order were implemented in the image source model (ISM) algorithm, by which these early reflections can be detected effectively for air temperature reconstruction. To minimize the uncertainties of travel-time estimation due to the positioning of the sound transducers inside the room, measurements were conducted to determine the exact emitting point of the utilized sound source, i.e. its acoustic center (AC). For these measurements, three types of excitation signals (MLS, linear and logarithmic chirp signals) with various frequency ranges were used, considering that the acoustic center of a sound source is a frequency-dependent parameter [4]. Furthermore, measurements were conducted to determine an optimum excitation signal for the given conditions of the ATOM measurement set-up, which correspondingly defines an optimum method for the RIR estimation. Finally, the uncertainty of the measuring system utilizing an array of directional sound sources was analyzed.
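For a shoebox room, the image source enumeration that yields the early-reflection travel times admits a closed form (the Allen-Berkley parametrization): along each axis with walls at 0 and L, images lie at 2qL + x with |2q| reflections and at 2qL - x with |2q - 1| reflections. The Python sketch below lists (reflection order, travel time) pairs up to a chosen order, assuming a constant speed of sound, even though the temperature dependence of that speed is precisely what ATOM inverts for:

    import itertools, math

    def image_source_delays(src, rec, room, max_order, c=343.0):
        """src, rec: (x, y, z) positions; room: (Lx, Ly, Lz). Returns
        (reflection order, travel time) pairs for all image sources up
        to max_order, sorted by arrival time."""
        def axis_images(x, L):
            for q in range(-max_order, max_order + 1):
                yield 2 * q * L + x, abs(2 * q)
                yield 2 * q * L - x, abs(2 * q - 1)
        delays = []
        for (ix, nx), (iy, ny), (iz, nz) in itertools.product(
                axis_images(src[0], room[0]),
                axis_images(src[1], room[1]),
                axis_images(src[2], room[2])):
            if nx + ny + nz <= max_order:
                delays.append((nx + ny + nz,
                               math.dist((ix, iy, iz), rec) / c))
        return sorted(delays, key=lambda p: p[1])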
The lattice dynamics properties of twisted bilayer graphene are investigated. There are large jumps in the inter-layer potential at twisting angles θ=0° and 60°, implying the stability of the Bernal-stacking and the instability of the AA-stacking structures, while a long plateau in [8°, 55°] indicates the ease of twisting bilayer graphene in this wide angle range. Significant frequency shifts are observed for the z-breathing mode around θ=0° and 60°, while the frequency is constant in the wide range [8°, 55°]. Using the z-breathing mode, a mechanical nanoresonator is proposed that operates at a robust resonant frequency in the terahertz range.
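As a back-of-the-envelope check that the z-breathing mode indeed lies in the terahertz range, one can model the two layers as rigid sheets of areal mass density ρ coupled by an interlayer stiffness g per unit area; with literature-typical values for graphene (not taken from the paper), ρ ≈ 7.6×10⁻⁷ kg/m² and g ≈ 1.1×10²⁰ N/m³:

    \frac{\rho}{2}\,\ddot{u} = -g\,u
    \quad\Longrightarrow\quad
    f = \frac{1}{2\pi}\sqrt{\frac{2g}{\rho}}
      \approx \frac{1}{2\pi}\sqrt{\frac{2\cdot 1.1\times 10^{20}\,\mathrm{N\,m^{-3}}}{7.6\times 10^{-7}\,\mathrm{kg\,m^{-2}}}}
      \approx 2.7\,\mathrm{THz},

where u is the relative interlayer displacement and ρ/2 the reduced areal mass of the two-layer oscillator, consistent with the proposed terahertz-range nanoresonator.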