In conjunction with improved methods for monitoring damage and degradation processes, interest in the reliability assessment of reinforced concrete bridges has grown in recent years. Automated image-based inspections of the structural surface provide valuable data from which quantitative information about deterioration, such as crack patterns, can be extracted. However, the knowledge gain results from processing this information in a structural context, i.e. relating the damage artifacts to building components; this enables the transfer to structural analysis. This approach sets two further requirements: the availability of structural bridge information and standardized storage for interoperability with subsequent analysis tools. Since the large datasets involved can only be processed efficiently in an automated manner, this work targets the implementation of the complete workflow from damage and building data to structural analysis. First, domain concepts are derived from the back-end tasks: structural analysis, damage modeling, and life-cycle assessment. The common interoperability format, the Industry Foundation Classes (IFC), and the processes in these domains are then assessed. The need for user-controlled interpretation steps is identified, and the developed prototype therefore allows interaction at subsequent model stages. This has the advantage that interpretation steps can be separated individually into a structural analysis model, a damage information model, or a combination of both. This approach to damage information processing from the perspective of structural analysis is then validated in different case studies.
The main objective of the present work was to realize a continuous coupling between the analytical and numerical solutions of boundary value problems with singularities. With the interpolation-based coupling method, global C0 continuity can be achieved. For this purpose, a special finite element (coupling element) is used, which guarantees the continuity of the solution both with the analytical element and with the regular CST elements.
Although the interpolation-based coupling method is applicable for an arbitrary number of nodes on the interface ΓAD, the examination of the interpolation matrix and numerical simulations showed that it is ill-conditioned. To overcome the problem of numerical instabilities, an approximation-based coupling method was developed and investigated. The stability of this method was then assessed by examining the Gram matrix of the basis system used on the two intervals [−π,π] and [−2π,2π]. The Gram matrix on the interval [−2π,2π] exhibited a more favorable condition number as a function of the number of coupling nodes on the interface. To rule out the associated numerical instabilities, the basis system is orthogonalized on both intervals using the Gram-Schmidt orthogonalization procedure. On the interval [−2π,2π], the orthogonal basis system can be written with explicit formulas. The method of consistent sampling, which is frequently used in communications engineering, was employed to realize the approximation-based coupling. One limitation of this method is that the number of sampling basis functions must equal the number of reconstruction basis functions. As a consequence, the introduced basis system (with 2n basis functions) can only be used with n basis functions.
To solve this problem, an alternative basis system (variant 2) was presented. Using this basis system, however, requires a transformation matrix M, and when the basis system is orthogonalized on the interval [−π,π], the derivation of this matrix can be complicated and laborious. The shape functions were subsequently derived for both variants and plotted (for n = 5), and it was shown that these functions fulfill the requirements imposed on shape functions and can thus be used for the FE approximation.
Based on numerical simulations carried out with variant 1 (with orthogonalization on the interval [−2π,2π]), the fundamental questions (for example: continuity of the deformations on the interface ΓAD, stresses in the analytical domain) were examined.
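The Gram-Schmidt step used above to stabilize an ill-conditioned basis can be sketched numerically. The following is a minimal, hypothetical illustration with a discrete midpoint-rule inner product and a tiny polynomial basis; it does not reproduce the thesis' explicit formulas or its trigonometric basis system.

```python
import math

def inner(f, g, a, b, n=2000):
    # midpoint-rule approximation of the L2 inner product on [a, b]
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) * g(a + (i + 0.5) * h) for i in range(n)) * h

def gram_schmidt(basis, a, b):
    # orthonormalize a list of functions with respect to the inner product above
    ortho = []
    for f in basis:
        coeffs = [(q, inner(f, q, a, b)) for q in ortho]
        def g(x, f=f, coeffs=tuple(coeffs)):
            return f(x) - sum(c * q(x) for q, c in coeffs)
        norm = math.sqrt(inner(g, g, a, b))
        ortho.append(lambda x, g=g, norm=norm: g(x) / norm)
    return ortho

# orthonormalize {1, x} on [-pi, pi]
qs = gram_schmidt([lambda x: 1.0, lambda x: x], -math.pi, math.pi)
```

By construction, the Gram matrix of the orthonormalized system is the identity, so its condition number is 1 — which is the motivation for the orthogonalization step discussed above.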
Contemporary planning practice is often criticized as too design-driven, lacking both quantitative evaluation criteria and models that anticipate the self-organizational forces shaping cities, which results in significant gaps between plan and reality.
This study introduces a modular toolbox prototype for spatial analysis in data-poor environments. It proposes to integrate the design, evaluation, and monitoring of urban development into one framework, thus supporting data-driven, on-demand urban design and planning processes.
The proposed framework's value is tested exemplarily, focusing on the analysis and simulation of spatiotemporal growth trajectories and taking the Lanzhou New Area as a case study: a large-scale new town project that struggles to attract residents and businesses.
The conducted analysis suggests that more attention should be given to spatiotemporal development paths to ensure that cities work more efficiently throughout every stage of development. Finally, early hints at general design strategies to achieve this goal are discussed with the assistance of the proposed toolbox.
Scalarization methods are a category of multiobjective optimization (MOO) methods. They allow the use of conventional single-objective optimization algorithms, since scalarization methods reformulate the MOO problem as a single-objective optimization problem. The scalarization methods analysed in this thesis are the Weighted Sum (WS), the Epsilon-Constraint (EC), and the MinMax (MM) method. After the approach of each method is explained, the WS, EC, and MM methods are applied, a posteriori, to three different examples: the Kursawe function; the ten-bar truss, a common benchmark problem in structural optimization; and the metamodel of an aero engine exit module.
The aim is to evaluate and compare the performance of each scalarization method examined in this thesis. The evaluation is conducted using performance metrics, such as the hypervolume and the generational distance, as well as visual comparison.
The application to the three examples gives insight into the advantages and disadvantages of each method and provides further understanding of how to apply the methods adequately to high-dimensional optimization problems.
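As a minimal illustration of the reformulation idea, a Weighted Sum sweep can be sketched as follows; the two objectives are hypothetical, not the thesis examples:

```python
# Weighted Sum scalarization on an illustrative two-objective problem
def f1(x):
    return x ** 2

def f2(x):
    return (x - 2) ** 2

def weighted_sum(w, candidates):
    # reformulate the MOO problem as a single objective: w*f1 + (1-w)*f2
    return min(candidates, key=lambda x: w * f1(x) + (1 - w) * f2(x))

xs = [i / 100 for i in range(201)]    # crude discretized design space 0..2
weights = [i / 10 for i in range(11)] # sweep of weights
front = [(f1(weighted_sum(w, xs)), f2(weighted_sum(w, xs))) for w in weights]
```

Sweeping the weight recovers points on a convex Pareto front; non-convex front regions, as they occur for the Kursawe function, cannot be reached by WS, which is one of its known disadvantages.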
Identifying cable forces with vibration-based methods has become widespread in engineering practice due to its simplicity of application. Taut string theory provides a simple relationship between the natural frequencies and the tension force of a cable. However, this theory assumes a perfectly flexible, non-sagging cable pinned at its ends. These assumptions do not hold in all cases, especially when the cable is short, under low tension, or partially restrained by flexible supports. Extradosed bridges, which are distinguished from cable-stayed bridges by their low pylon height, have shorter cables. The application of conventional taut string theory to extradosed bridge cables might therefore be inadequate for identifying cable forces.
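Taut string theory states f_n = (n / 2L) · √(T/m); a minimal sketch of inverting this relation for the tension, with hypothetical cable values, could look like:

```python
# Taut string theory: f_n = (n / (2*L)) * sqrt(T / m),
# solved for the tension T given a measured natural frequency f_n.
def tension_from_frequency(f_n, n, length, mass_per_length):
    # f_n: n-th natural frequency [Hz], length [m], mass_per_length [kg/m]
    return mass_per_length * (2.0 * length * f_n / n) ** 2

# hypothetical cable: 100 m long, 50 kg/m, first natural frequency 1.0 Hz
T = tension_from_frequency(1.0, 1, 100.0, 50.0)  # -> 2.0e6 N
```

For short, stiff, or sagging cables this estimate deviates from the true force, which is exactly the inadequacy examined in this work.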
In this work, a numerical model of an extradosed bridge cable saddled on a circular deviator at the pylon is developed. The model is validated against the analytical catenary solution, and its static and dynamic behaviour is studied. The saddle support is found to increase the cable stiffness by geometric means: a longer saddle radius raises the cable stiffness by suppressing the deformations near the saddle. Furthermore, accounting for bending stiffness in the numerical model by using beam elements shows considerable deviation from models with truss elements (i.e. zero bending stiffness). This deviation is manifested in both the static and dynamic properties and motivates a more thorough study of bending stiffness effects on short cables.
Bending stiffness effects are studied using two rods connected by several springs along their length. Under bending moments, the springs resist the rods' relative axial displacement through their transverse component. This concept is used to identify bending stiffness values by applying the parallel axis theorem to quantify ratios of the second moment of area. These ratios are calculated from the setup of the springs (e.g. number of springs per unit length, transverse stiffness, etc.). The numerical model based on this concept agrees well with the theoretical values computed from the upper and lower bounds of the parallel axis theorem.
The proposed concept of quantifying ratios of the second moment of area, using springs as the connection between cable rods, is then applied to an actual extradosed bridge geometry. The model is examined by comparison with the previously validated global numerical model. The two models show good agreement under various parameter changes, which allows a further study of the effects of stick/slip behaviour between cable rods on an actual bridge geometry.
Renewable energy use is on the rise, and these alternative energy resources can help combat climate change. Around 80% of the world's electricity still comes from coal and petroleum; however, renewables are the fastest-growing source of energy in the world. Solar, wind, hydro, geothermal, and biogas are the most common forms of renewable energy. Among them, wind energy is emerging as a reliable, large-scale source of power production. Recent research and growing confidence in their performance have led to the construction of more and bigger wind turbines around the world. As wind turbines grow bigger, concerns about their safety are also being discussed. Wind turbines are expensive machines to construct, and the enormous capital investment is one of the main reasons why many countries are unable to adopt wind energy. Generally, a reliable wind turbine delivers better performance and helps minimize the cost of operation. If a wind turbine fails, the investment is lost, and the failure can be harmful to the surrounding habitat. This thesis aims at estimating the reliability of an offshore wind turbine. A model of a jacket-type offshore wind turbine is prepared using the finite element software package ABAQUS and is compared against the structural failure criteria of the wind turbine tower. UQLab, a general uncertainty quantification framework developed at ETH Zürich, is used for the reliability analysis. Several probabilistic methods are included in the UQLab framework, among them Monte Carlo simulation, the First-Order Reliability Method, and Adaptive Kriging Monte Carlo Simulation. This reliability study is performed only for the structural failure of the wind turbine, but it can be extended to many other forms of failure, e.g. the reliability of power production or the reliability of individual components. It is a useful tool for estimating the reliability of future wind turbines, which could result in safer and better-performing machines.
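The Monte Carlo approach mentioned above can be sketched in a few lines. This is a minimal illustration of estimating a failure probability P(g(X) < 0) for a hypothetical limit state g = R − S (resistance minus load); the distributions are assumed for illustration and are not the thesis' wind turbine data.

```python
import random

def mc_failure_probability(n_samples, seed=42):
    # crude Monte Carlo estimate of P(resistance - load < 0)
    rng = random.Random(seed)
    failures = 0
    for _ in range(n_samples):
        resistance = rng.gauss(10.0, 1.0)  # assumed capacity, N(10, 1)
        load = rng.gauss(6.0, 1.5)         # assumed load effect, N(6, 1.5)
        if resistance - load < 0.0:
            failures += 1
    return failures / n_samples

pf = mc_failure_probability(100_000)
```

Plain Monte Carlo needs many samples for small failure probabilities, which is why variance-reducing methods such as FORM or adaptive Kriging are attractive when each sample requires a full finite element run.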
In practice, damaging volume expansions were observed in a commercially available calcium sulfate flowing screed. These result from the interaction of the binder compound used with a critical aggregate.
The aim of this work is to formulate a calcium sulfate binder system that is able to prevent the volume expansions observed in the mortar. Various binder and additive compositions are investigated which, in combination with the critical aggregate, enable the production of a volume-stable flowing screed. To this end, the following question is to be answered: What causes the volume increase, and how can it be minimized or prevented?
Different binder formulations made of α-hemihydrate, thermal anhydrite, and natural anhydrite, as well as various additive compositions, are produced and investigated.
Length-change measurements in the shrinkage channel are used to investigate the influences of the binders, the additive compositions, and the water/binder ratios on the length-change behaviour. By varying the individual compound constituents, it can be established that the stabilizer adversely affects the length change. It binds free water, which is then no longer available for a reaction between binder and aggregate in the plastic state. Consequently, this reaction can only take place in the hardened state, causing the damaging volume expansion.
Finally, a binder compound was formulated which, without the addition of stabilizers, remains volume-stable in combination with the critical aggregate and causes no damage.
Structural optimization has gained considerable attention in the design of engineering structures, especially in the preliminary design phase.
This study introduces an unconventional approach to structural optimization that utilizes the Energy method with Integral Material Behavior (EIM), based on Lagrange's principle of minimum potential energy. An automated two-level optimization search process is proposed, which integrates the EIM, as an alternative method for nonlinear structural analysis, with bilevel optimization. The proposed procedure secures equilibrium by minimizing the potential energy on one level and, on a higher level, minimizes a design objective function. For this, the most robust strategy of bilevel optimization, the nested method, is used. The potential energy function and its instabilities in physically nonlinear analysis are investigated through illustrative examples, from which the advantages and limitations of the method are reviewed. Furthermore, suitable optimization algorithms are discussed.
A fully functional numerical code is developed for nonlinear cross-section, element, and 2D frame analysis, utilizing different finite elements, and is verified against existing EIM programs. As a proof of concept, the method is applied with this code to selected examples on the cross-section and element levels. For the former, a comparison is made with the standard procedure, which employs the equilibrium equations within the constraints. The element level is validated against a theoretical solution of an arch bridge, and finally a truss bridge is optimized. Most of the illustrative examples are chosen to be adequate for everyday engineering practice, to demonstrate the effectiveness of the proposed method.
This study implies that, with further development, this method could become just as competitive as conventional structural optimization techniques using the Finite Element Method.
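The nested two-level structure can be sketched on a hypothetical single-degree-of-freedom spring problem: the inner level enforces equilibrium by minimizing the potential energy Π(u) = ½ku² − Fu, while the outer level searches the stiffness k that minimizes a weight-like cost under a displacement constraint. All numbers below are illustrative assumptions, not the thesis examples.

```python
F = 10.0        # applied load (assumed)
U_LIMIT = 0.05  # admissible displacement (assumed)

def equilibrium_displacement(k, steps=20000, lr=1e-3):
    # inner level: gradient descent on Pi(u) = 0.5*k*u**2 - F*u,
    # which converges to the equilibrium state u = F/k
    u = 0.0
    for _ in range(steps):
        u -= lr * (k * u - F)  # dPi/du = k*u - F
    return u

def design_cost(k):
    # outer level: stiffness as a proxy for material cost, plus a
    # quadratic penalty when the displacement constraint is violated
    u = equilibrium_displacement(k)
    return k + 1e8 * max(0.0, u - U_LIMIT) ** 2

# nested strategy: every outer candidate triggers a full inner equilibrium solve
best_k = min((100.0 + 10.0 * i for i in range(100)), key=design_cost)
```

The nested arrangement mirrors the procedure described above: equilibrium is never imposed as an algebraic constraint but recovered by the inner energy minimization for every design candidate.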
The need to find persuasive arguments can arise in a variety of domains such as politics, finance, marketing, or personal entertainment. In these domains, there is a demand to reach decisions oneself or to convince somebody about a specific topic. To reach a conclusion, one has to search various sources in the literature and on the web thoroughly and compare the arguments found. Voice interfaces, in the form of smartphone applications or smart speakers, offer the user natural conversations as a comfortable way to make search requests, in contrast to a traditional search interface with keyboard and display. The benefits and obstacles of such a new interface are analyzed by conducting two studies. The first is a survey to analyze the target group, with questions about situations, motivations, and desirable features. The second is a Wizard-of-Oz experiment to investigate how users formulate queries to such a novel system. The results indicate that a search interface with conversational abilities can make a helpful assistant, but that additional information retrieval and visualization features need to be implemented to satisfy the demands of a broader audience.
The aim of this research is to observe the variation in the energy efficiency of a typical multi-storey office building when exposed to different climatic conditions. Energy efficiency requirements in building codes and energy standards are among the most important single measures for buildings' energy efficiency. This study is therefore set up to better understand how the energy efficiency of a building changes under adverse to moderate climatic conditions, which have a notable effect on the operation of a building.
This thesis is structured in three balanced conceptual steps. Following the aim of the project, the virtual building model is analyzed under seven distinct climatic conditions, namely those of New Delhi, Mumbai, Berlin, Lisbon, Copenhagen, Dubai, and Montreal. The first step is a complete literature review based on the scope of similar studies, examining the problems in detail along with the theoretical background of all the concepts implemented to obtain the numerical results. This chapter also comprises a detailed study of the climatic conditions of the above-mentioned cities: climatic traits such as temperature variations, the count of heating and cooling degree days, relative humidity, temperature range, and comfort-zone charts for the specified cities are studied in detail. This study helps to understand the effect of these adverse to moderate climates on the operation of the building. In the second step, the virtual building model is prepared on the software platform Revit Structures. This virtual building model is not necessarily a complete building, but it has the relevant functionalities of a real building. Energy analysis as well as heating and cooling analysis are performed on this virtual model to study the operational outcome of the building under the different climatic conditions in detail. At the end of these two tasks, two results are available: the literature review on one hand and the numerical results on the other. Finally, therefore, a comparative scenario is presented, based on the energy-efficiency performance of the building under the various climatic conditions. This is followed by a prediction of the thermal comfort level inside the building based on Fanger's PMV model. Understanding the literature and the numerical values in detail makes it possible to predict the thermal comfort index inside the building.
The conclusion of this master's thesis focuses mainly on the scope for improving energy efficiency requirements in energy codes, where applicable, differentiated according to specific locations. The initial aim of the hypothesis, to study the impact of climatic variations on the energy-efficiency performance of a building, is fulfilled; but as such topics have very deep and broad roots, there remains considerable scope for further work.