We present a stochastic deep collocation method (DCM) based on neural architecture search (NAS) and transfer learning for heterogeneous porous media. We first carry out a sensitivity analysis to identify the key hyper-parameters of the network and reduce the search space, and subsequently employ hyper-parameter optimization to obtain their values. The presented NAS-based DCM also saves the weights and biases of the most favorable architectures, which are then used in the fine-tuning process. We further employ transfer learning techniques to drastically reduce the computational cost. The presented DCM is then applied to the stochastic analysis of heterogeneous porous material. To this end, a three-dimensional stochastic flow model is built, providing a benchmark for the simulation of groundwater flow in highly heterogeneous aquifers. The performance of the presented NAS-based DCM is verified in different dimensions using the method of manufactured solutions. We show that it significantly outperforms finite difference methods in both accuracy and computational cost.
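The method of manufactured solutions mentioned in the abstract can be illustrated independently of the network details: one postulates an exact solution, derives the matching source term analytically, and checks that a solver reproduces the postulated field at the expected rate. The sketch below uses a plain 5-point finite-difference Poisson solver as the stand-in solver; all names and the 2D model problem are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def manufactured_poisson_error(n=30):
    """Verify a 5-point finite-difference Poisson solver with the manufactured
    solution u(x, y) = sin(pi x) sin(pi y) on the unit square. The source term
    f = -laplace(u) = 2 pi^2 sin(pi x) sin(pi y) is derived analytically."""
    h = 1.0 / (n + 1)
    x = np.linspace(h, 1 - h, n)          # interior grid points
    X, Y = np.meshgrid(x, x, indexing="ij")
    u_exact = np.sin(np.pi * X) * np.sin(np.pi * Y)
    f = 2 * np.pi**2 * u_exact            # -laplace(u) of the manufactured field

    # Assemble the 5-point Laplacian via Kronecker products (Dirichlet BCs).
    I = np.eye(n)
    T = (np.diag(2 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
         - np.diag(np.ones(n - 1), -1)) / h**2
    A = np.kron(T, I) + np.kron(I, T)

    u_h = np.linalg.solve(A, f.ravel()).reshape(n, n)
    return np.max(np.abs(u_h - u_exact))  # discretization error, O(h^2)
```

Halving the mesh size should reduce the error by roughly a factor of four, which is exactly the kind of convergence check the manufactured-solution technique enables.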
This paper presents a numerical analysis of the discrete fundamental solution of the discrete Laplace operator on a rectangular lattice. Additionally, to provide estimates in interior and exterior domains, two different regularisations of the discrete fundamental solution are considered. Estimates for the absolute difference and l^p-estimates are constructed for both regularisations. Thus, this work extends the classical results of discrete potential theory to the case of a rectangular lattice and serves as a basis for future convergence analysis of the method of discrete potentials on rectangular lattices.
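The defining property of the discrete fundamental solution — that the discrete Laplacian applied to it yields the lattice delta — can be checked numerically from its standard Fourier representation over the Brillouin zone of a rectangular lattice. The following sketch is a generic textbook construction, not code from the paper; mesh sizes and quadrature resolution are illustrative.

```python
import numpy as np

def discrete_fundamental_solution(pts, h1=1.0, h2=0.5, n=400):
    """Fourier representation of the fundamental solution of the discrete
    Laplacian on a rectangular lattice with mesh sizes h1, h2, evaluated by
    midpoint quadrature over the Brillouin zone [-pi/h1, pi/h1] x [-pi/h2, pi/h2]."""
    t1 = (np.arange(n) + 0.5) / n * 2 * np.pi / h1 - np.pi / h1
    t2 = (np.arange(n) + 0.5) / n * 2 * np.pi / h2 - np.pi / h2
    X1, X2 = np.meshgrid(t1, t2, indexing="ij")
    # Symbol of the discrete Laplacian on the rectangular lattice.
    d = (4 / h1**2) * np.sin(h1 * X1 / 2) ** 2 + (4 / h2**2) * np.sin(h2 * X2 / 2) ** 2
    w = (2 * np.pi / h1 / n) * (2 * np.pi / h2 / n)   # midpoint quadrature weight
    out = {}
    for (m1, m2) in pts:
        x1, x2 = m1 * h1, m2 * h2
        # The "-1" regularisation keeps the integrand bounded near xi = 0.
        out[(m1, m2)] = w * np.sum((1 - np.cos(x1 * X1 + x2 * X2)) / d) / (2 * np.pi) ** 2
    return out

def discrete_laplacian(E, m1, m2, h1=1.0, h2=0.5):
    """5-point discrete Laplacian acting on tabulated values of E."""
    return ((E[(m1 + 1, m2)] + E[(m1 - 1, m2)] - 2 * E[(m1, m2)]) / h1**2
            + (E[(m1, m2 + 1)] + E[(m1, m2 - 1)] - 2 * E[(m1, m2)]) / h2**2)
```

Applying the discrete Laplacian to the tabulated values reproduces the discrete delta: 1/(h1·h2) at the origin and zero at every other lattice point.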
The spread of breathing air when playing wind instruments and singing was investigated and visualized using two methods: (1) schlieren imaging with a schlieren mirror and (2) background-oriented schlieren (BOS). These methods visualize airflow by revealing density gradients in transparent media. The playing of professional woodwind and brass instrument players, as well as of professional classically trained singers, was investigated to estimate the spread distances of the breathing air. For a better comparison and consistent measurement series, a single high note, a single low note, and an extract of a musical piece were investigated. Additionally, anemometry was used to determine the velocity of the spreading breathing air and the extent to which it was quantifiable. The results showed that the ejected airflow from the examined instruments and singers did not exceed a spreading range of 1.2 m into the room. However, differences between the various instruments have to be considered to properly assess the spread of the breathing air. The findings discussed below help to estimate the risk of cross-infection for wind instrument players and singers and to develop efficacious safety precautions, which is essential during critical health periods such as the current COVID-19 pandemic.
Construction schedules play a central role in the realization of construction projects. They serve to coordinate interfaces and provide the basis for the individual planning of all project participants. Reliable scheduling is therefore of great importance; in practice, however, construction schedules are notorious for their unreliability.
Due to the long lead times in the planning of construction projects, much of the information is known only as estimates at the time of planning. In construction, deterministic schedules are created on the basis of these estimated, and thus uncertain, data. If discrepancies between estimates and reality arise during realization, the plans must be adapted. Because of the numerous dependencies between the planned activities, individual plan changes can trigger a multitude of further changes and adaptations and thus jeopardize a smooth project flow.
This thesis develops a procedure that generates construction schedules which, within the dependencies and boundary conditions defined by the project, are able to absorb changes as well as possible. Such plans, which require comparatively small schedule adaptations when changes occur, are referred to here as robust.
Starting from project planning procedures and methods for taking uncertainties into account, deterministic schedules are examined with respect to their behavior under incoming changes. For this purpose, possible uncertainties are first identified as causes of changes and modeled mathematically. The behavior of schedules under possible changes can then be studied by simulating the adapted schedules forced by those changes. For these Monte Carlo simulations of the adapted schedules, it is ensured that the adapted schedules represent logical evolutions of the deterministic schedule. On the basis of these investigations, a stochastic measure for quantifying robustness is developed which describes the ability of a plan to absorb changes. This makes it possible to compare schedules with respect to their robustness.
The developed procedure for quantifying robustness is applied within an optimization procedure based on genetic algorithms in order to generate robust schedules in a targeted manner. The methods are demonstrated on examples and their effectiveness is verified.
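The core idea of the thesis — simulating adapted schedules under sampled duration changes and scoring a plan by how little its activity start times shift — can be sketched generically. The activity network, the duration perturbation model, and the robustness score below are illustrative assumptions, not the measure developed in the thesis.

```python
import random

def earliest_starts(durations, predecessors):
    """Forward pass of the critical path method: earliest start per activity.
    Assumes the activities are given in topological order."""
    start = {}
    for act in durations:
        start[act] = max((start[p] + durations[p] for p in predecessors[act]),
                         default=0.0)
    return start

def robustness(base_durations, predecessors, spread=0.3, n_samples=2000, seed=0):
    """Monte Carlo robustness score: mean absolute shift of the earliest start
    times when activity durations are perturbed. Smaller values = more robust."""
    rng = random.Random(seed)
    base = earliest_starts(base_durations, predecessors)
    total = 0.0
    for _ in range(n_samples):
        sampled = {a: d * rng.uniform(1 - spread, 1 + spread)
                   for a, d in base_durations.items()}
        sim = earliest_starts(sampled, predecessors)
        total += sum(abs(sim[a] - base[a]) for a in base) / len(base)
    return total / n_samples
```

A genetic algorithm, as used in the thesis, would then search over feasible activity orderings and buffers to minimize such a score subject to the project's precedence constraints.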
Strategien der Sichtbarkeit und Sichtbarmachung von ‚Wearable Enhancement‘ im Bereich Smart Health
(2022)
This research deals with the development and design of body-worn, wearable artifacts for the digitalized health sector. Under the newly coined term wearable enhancement, the various terms from smart textiles, fashion technologies, wearable technologies, and electronic textiles are brought together, and two central research questions are investigated. How can wearable enhancement in the smart health domain be developed and designed in an ethically, socially, and ecologically sound way? To what extent can textile interfaces change the perception and perceptibility of the body? The first research question primarily aims to discuss approaches and strategies of visibility for development and design, generating insights for design practice, for the design and design-research process, and for the design solutions themselves. The second research question aims to investigate forms of making wearable enhancement visible, as well as forms of visualization for wearable enhancement.
Based on three concrete case studies, essential aspects of the reception, perception, construction, configuration, and conception of sociotechnical artifacts for enhancing the functions of the human body are examined, and various forms of visibility and visualization are developed. The thesis develops and practices a dual transdisciplinary design-research approach that considers both the human needs of users and the further development of technologies. On this basis, it seeks to offer impulses for a sustainable and at the same time responsibility-oriented design.
The detailed structural analysis of thin-walled circular pipe members often requires the use of shell- or solid-based finite element methods. Although these methods provide a very good approximation of the deformations, they require a high degree of discretization, which causes high computational costs. On the other hand, the analysis of thin-walled circular pipe members based on classical beam theories is easy to implement and needs much less computation time; however, such theories are limited in their ability to approximate the deformations, as they cannot capture the deformation of the cross-section.
This dissertation focuses on the study of the Generalized Beam Theory (GBT) which is both accurate and efficient in analyzing thin-walled members. This theory is based on the separation of variables in which the displacement field is expressed as a combination of predetermined deformation modes related to the cross-section, and unknown amplitude functions defined on the beam's longitudinal axis. Although the GBT was initially developed for long straight members, through the consideration of complementary deformation modes, which amend the null transverse and shear membrane strain assumptions of the classical GBT, problems involving short members, pipe bends, and geometrical nonlinearity can also be analyzed using GBT. In this dissertation, the GBT formulation for the analysis of these problems is developed and the application and capabilities of the method are illustrated using several numerical examples. Furthermore, the displacement and stress field results of these examples are verified using an equivalent refined shell-based finite element model.
The developed static and dynamic GBT formulations for curved thin-walled circular pipes are based on the linear kinematic description of the curved shell theory. In these formulations, the complex problem in pipe bends due to the strong coupling effect of the longitudinal bending, warping and the cross-sectional ovalization is handled precisely through the derivation of the coupling tensors between the considered GBT deformation modes. Similarly, the geometrically nonlinear GBT analysis is formulated for thin-walled circular pipes based on the nonlinear membrane kinematic equations. Here, the initial linear and quadratic stress and displacement tangent stiffness matrices are built using the third and fourth-order GBT deformation mode coupling tensors.
Longitudinally, the formulation of the coupled GBT element stiffness and mass matrices is presented using a beam-based finite element formulation. Furthermore, the formulated GBT elements are tested for shear and membrane locking, and the limitations of the formulations regarding the membrane locking problem are discussed.
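The separation of variables underlying GBT can be stated compactly. The notation below is generic, not the thesis's exact symbols: the wall displacement components are expanded into predetermined cross-section deformation modes multiplied by unknown longitudinal amplitude functions.

```latex
% GBT displacement field: cross-section modes times amplitude functions
%   u, v, w : axial, mid-line tangential, and normal wall displacements
%   x       : longitudinal coordinate,  s : cross-section mid-line coordinate
%   k       : deformation-mode index,   V_k : amplitude function
u(x,s) = \sum_{k} \phi_k^{u}(s)\,\frac{\mathrm{d}V_k(x)}{\mathrm{d}x}, \qquad
v(x,s) = \sum_{k} \phi_k^{v}(s)\,V_k(x), \qquad
w(x,s) = \sum_{k} \phi_k^{w}(s)\,V_k(x)
```

The mode shapes φ_k are computed once per cross-section; the finite element discretization described above then only has to resolve the one-dimensional amplitudes V_k(x), which is the source of GBT's efficiency compared with shell models.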
In the wake of the news industry’s digitization, novel organizations are emerging that differ considerably from traditional media firms in terms of their functional roles and organizational practices of media work. One new type is the field repair organization, which is characterized by supporting high-quality media work to compensate for the deficits (such as those resulting from cost savings and layoffs) that have become apparent in legacy media today. From a practice-theoretical research perspective and based on semi-structured interviews, virtual field observations, and document analysis, we have conducted a single case study on Science Media Center Germany (SMC), a unique non-profit news start-up launched in 2016 in Cologne, Germany. Our findings show that, in addition to field repair activities, SMC aims to facilitate progress and innovation in the field, which we refer to as field advancement. This helps to uncover emerging needs and to anticipate problems before they intensify or even occur, proactively providing products and tools for future journalism. This article contributes to our understanding of novel media organizations with distinct functions in the news industry, allowing for advancements in theory on media work and the organization of journalism in times of digital upheaval.
The objective of this thesis was to understand the 20th-century history of informal urbanisation in Europe and its origins in Madrid and Paris. The concept of informal urbanisation was employed to refer to the process of developing shacks and precarious single-family housing areas that were not planned by the public powers and were considered substandard because of their below-average materials and social characteristics. Our main hypothesis was that, despite being a phenomenon with ancient roots, informal urbanisation emerged as a public problem and was subsequently prohibited in connection with another historical process: the birth of contemporary urban planning. Its transformation into a deviant and illegal urban growth mechanism would therefore have been a pan-European process occurring at the same pace at which urban planning developed during the first decades of the 20th century.
Analysing the 20th-century history of informal urbanisation in Europe was an ambitious task that required using a large number of sources. To contend with this issue, this thesis combined two main methods: historiographical research about informal urbanisation in Europe and archival research of two case studies, Madrid and Paris, to make the account more precise by analysing primary sources of the subject.
Our research of these informal areas, which were produced mainly through poor private allotments and housing developed on land squats, revealed two key moments of explosive growth across Europe: the 1920s and 1960s. The near disappearance of informal urbanisation throughout the continent seemed to be a consequence not of the historical development of urban planning—which was commonly transgressed and bypassed—but of the exacerbation of global economic inequalities, permitting the development of a geography of privilege in Europe.
Concerning the cases of Paris and Madrid, the origins of informal urbanisation—that is, the moment the issue started to be problematised—seemed to occur in the second half of the 19th century, when a number of hygienic norms and surveillance devices began to control housing characteristics. From that moment onwards, informal urbanisation areas formed peripheral belts in both cities. This growth became the object of an illegalisation process of which we have identified three phases: (i) the unregulated development of the phenomenon during the second half of the 19th century, (ii) the institutional production of “exception regulations” to permit a controlled development of substandard housing in the peripheral fringes of both cities, and (iii) the synchronic prohibition of informal urbanisation in the 1920s and its subsequent illegal reproduction.
Isogeometric analysis (IGA) is a numerical method for solving partial differential equations (PDEs), which was introduced with the aim of integrating finite element analysis with computer-aided design systems. The main idea of the method is to use the same spline basis functions which describe the geometry in CAD systems for the approximation of solution fields in the finite element method (FEM). Originally, NURBS, the standard technology employed in CAD systems, was adopted as the basis functions in IGA, but several variants of IGA using other technologies, such as T-splines, PHT-splines, and subdivision surfaces, have since been developed. In general, IGA offers two key advantages over classical FEM: (i) by describing the CAD geometry exactly using smooth, high-order spline functions, the mesh generation process is simplified and the interoperability between CAD and FEM is improved; (ii) IGA can be viewed as a high-order finite element method which offers basis functions with high inter-element continuity and can therefore provide a primal variational formulation of high-order PDEs in a straightforward fashion. The main goal of this thesis is to further advance isogeometric analysis by exploiting these major advantages, namely precise geometric modeling and the use of smooth high-order splines as basis functions, and develop robust computational methods for problems with complex geometry and/or complex multi-physics.
As the first contribution of this thesis, we leverage the precise geometric modeling of isogeometric analysis and propose a new method for its coupling with meshfree discretizations. We exploit the strengths of both methods by using IGA to provide a smooth, geometrically-exact surface discretization of the problem domain boundary, while the Reproducing Kernel Particle Method (RKPM) discretization is used to provide the volumetric discretization of the domain interior. The coupling strategy is based upon the higher-order consistency or reproducing conditions that are directly imposed in the physical domain. The resulting coupled method enjoys several favorable features: (i) it preserves the geometric exactness of IGA, (ii) it circumvents the need for global volumetric parameterization of the problem domain, (iii) it achieves arbitrary-order approximation accuracy while preserving higher-order smoothness of the discretization. Several numerical examples are solved to show the optimal convergence properties of the coupled IGA–RKPM formulation, and to demonstrate its effectiveness in constructing volumetric discretizations for complex-geometry objects.
As for the next contribution, we exploit the use of smooth, high-order spline basis functions in IGA to solve high-order surface PDEs governing the morphological evolution of vesicles. These governing equations often consist of geometric PDEs, high-order PDEs on stationary or evolving surfaces, or combinations thereof. We propose an isogeometric formulation for solving these PDEs. In the context of geometric PDEs, we consider phase-field approximations of the mean curvature flow and Willmore flow problems and numerically study the convergence behavior of isogeometric analysis for these problems. As a model problem for high-order PDEs on stationary surfaces, we consider the Cahn–Hilliard equation on a sphere, where the surface is modeled using a phase-field approach. As for high-order PDEs on evolving surfaces, a phase-field model of a deforming multi-component vesicle, which consists of two fourth-order nonlinear PDEs, is solved using isogeometric analysis in a primal variational framework. Through several numerical examples in 2D, 3D and axisymmetric 3D settings, we show the robustness of IGA for solving the considered phase-field models.
Finally, we present a monolithic, implicit formulation based on isogeometric analysis and generalized-alpha time integration for simulating hydrodynamics of vesicles according to a phase-field model. Compared to earlier works, the number of equations of the phase-field model which need to be solved is reduced by leveraging high continuity of NURBS functions, and the algorithm is extended to 3D settings. We use residual-based variational multi-scale method (RBVMS) for solving Navier–Stokes equations, while the rest of PDEs in the phase-field model are treated using a standard Galerkin-based IGA. We introduce the resistive immersed surface (RIS) method into the formulation which can be employed for an implicit description of complex geometries using a diffuse-interface approach. The implementation highlights the robustness of the RBVMS method for Navier–Stokes equations of incompressible flows with non-trivial localized forcing terms including bending and tension forces of the vesicle. The potential of the phase-field model and isogeometric analysis for accurate simulation of a variety of fluid-vesicle interaction problems in 2D and 3D is demonstrated.
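The smooth, high-order basis functions at the heart of IGA can be illustrated with a plain Cox–de Boor evaluation of B-spline basis functions. This is a generic sketch, independent of the specific discretizations in the thesis; the knot vector in the usage example is an arbitrary open quadratic one.

```python
import numpy as np

def bspline_basis(knots, degree, x):
    """All B-spline basis functions of the given degree at point x, evaluated
    with the Cox-de Boor recursion. Returns len(knots) - degree - 1 values."""
    m = len(knots) - 1
    N = np.zeros(m)
    for i in range(m):                         # degree 0: knot-span indicators
        if knots[i] <= x < knots[i + 1]:
            N[i] = 1.0
    if x == knots[-1]:                         # right-closed convention at the end
        for i in range(m - 1, -1, -1):
            if knots[i] < knots[i + 1]:
                N[i] = 1.0
                break
    for p in range(1, degree + 1):             # triangular recursion up to 'degree'
        for i in range(m - p):
            left = right = 0.0
            if knots[i + p] > knots[i]:
                left = (x - knots[i]) / (knots[i + p] - knots[i]) * N[i]
            if knots[i + p + 1] > knots[i + 1]:
                right = (knots[i + p + 1] - x) / (knots[i + p + 1] - knots[i + 1]) * N[i + 1]
            N[i] = left + right
    return N[: len(knots) - degree - 1]
```

The resulting functions form a partition of unity and are C^(p-1)-continuous across interior knots — the inter-element continuity that lets IGA treat fourth-order PDEs, such as the Cahn–Hilliard terms above, in a primal variational form.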
According to Eurocode, the computation of the bending strength of steel cantilever beams is a straightforward process. The approach is based on an Ayrton-Perry-type adaptation of the buckling curves for steel members in compression, which involves computing an elastic critical buckling load to account for the instability. NCCI documents offer a simplified formula to determine the critical bending moment for cantilever beams with symmetric cross-sections. Besides the NCCI recommendations, other approaches, e.g. research literature or finite element analysis, may be employed to determine critical buckling loads. However, in certain cases they yield different results. The present paper summarizes and compares the abovementioned analytical and numerical approaches for determining critical loads, and it exemplarily analyses corresponding cantilever beam capacities using numerical approaches based on plastic-zones theory (GMNIA).
Global structural analyses in civil engineering are usually performed assuming linear-elastic material behavior. For steel structures, however, a certain degree of plasticization may be considered, depending on the member classification. Corresponding plastic analyses taking material nonlinearities into account are effectively realized using numerical methods. Frequently applied finite elements of two- and three-dimensional models evaluate the plasticity at defined nodes using a yield surface, i.e. by a yield condition, hardening rule, and flow rule. Such calculations entail considerable numerical effort and computation time, and they do not rely on the theoretical background of beam theory, to which the regulations of standards mainly correspond. For that reason, methods using one-dimensional beam elements combined with cross-sectional analyses are commonly applied to steel members in terms of plastic-zones theories. In these approaches, plasticization is generally assessed by means of the axial stress only. In this paper, a more precise numerical representation of combined stress states, i.e. axial and shear stresses, is presented, and the results of the proposed approach are validated and discussed.
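For beam stress states, the combined assessment of axial and shear stresses typically rests on the von Mises yield condition. A minimal sketch of such a check follows; the yield strength and stress values in the test are illustrative, not values from the paper.

```python
import math

def von_mises_utilization(sigma_x, tau, f_y):
    """Equivalent von Mises stress for a beam stress state with one axial
    component sigma_x and one shear component tau, returned as the
    utilization ratio sigma_eq / f_y (>= 1.0 means the fiber has yielded)."""
    sigma_eq = math.sqrt(sigma_x**2 + 3.0 * tau**2)
    return sigma_eq / f_y
```

Under pure shear this reproduces the familiar shear yield limit tau_y = f_y / sqrt(3); assessing plasticization by the axial stress alone, as in the simpler approaches described above, would miss this interaction.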
In critical urban studies, the thesis of the post-democratic city is currently taken up again and again and closely linked to processes of neoliberalization. Starting from a critical discussion of the conceptual approaches of Colin Crouch and Jacques Rancière, this contribution examines the substance of the two conceptualizations through a concrete historical analysis of the history of municipal self-government in Frankfurt am Main. In doing so, it points to the different analytical depths of the two concepts. Contrary to the assumption prevailing in Crouch's work that a democratic form of urban governance existed before the neoliberal city, it is argued, drawing on Rancière's reasoning on democracy, that Fordism can by no means be characterized as more egalitarian, inclusive, or democratic. Rather, we argue that the Fordist city, albeit for different reasons, was in principle no less post-democratic than the neoliberal city of the present, and that democratic moments are most likely to be found in the ruptures and fissures of the social conflicts of the 1970s and 1980s.
In essayistic form, the text follows a walk through the political center of Brasília, Brazil. The focus is on the design of the ground. How is the planned capital "from the drawing board" designed in the horizontal plane? What do the representative squares of a city look like that was built above all for cars? The investigative gaze rests on the experienced current state and reflects it associatively against findings of the author's research in Germany. "Mächtiger Boden" ("Powerful Ground") was created as a satellite to the author's current research during a stay in Brazil.
Campus Bockenheim Revisited
(2019)
Jürgen Schardt presents a comprehensive study of the architecture of the municipal and university buildings in Frankfurt am Main from 1906 to 1956. Particular emphasis is placed on the social and political conditions of architectural production and on a critical revision of the established historiography of the Goethe University. In three parts, the author addresses the German Empire and the founding of the university in 1914, the Weimar Republic, and the years of reconstruction after the Second World War. Schardt examines the resulting buildings against coherent criteria of bourgeois architecture, but also sheds light on other conditions relevant to the university's history. The study, well worth reading, combines aspects of social, academic, and architectural history.
Die Zukunft war jetzt
(2021)
With Building Socialism, the US-American cultural anthropologist Christina Schwenkel presents a source-saturated ethnographic study of the destruction, reconstruction, and future use of the Vietnamese city of Vinh. Particular attention is paid to the agencies of those involved. At the center of the study is a neighborhood whose housing blocks were built with material and ideational support from the GDR. Not only are the methodological approaches of the study promising and rewarding; in view of the impending urban redevelopment, which would mean demolition and displacement for the residents of the Quang Trung quarter, her ethnography of urban planning history also gains political relevance.
A realistic uncertainty description incorporating aleatoric and epistemic uncertainties can be formulated within the framework of polymorphic uncertainty, which is computationally demanding. Utilizing a domain decomposition approach for random-field-based uncertainty models, the proposed level-based sampling method can reduce these computational costs significantly and shows good agreement with a standard sampling technique. While 2-level configurations tend to become unstable with decreasing sampling density, 3-level setups show encouraging results for the investigated reliability analysis of a structural unit square.
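The level-based idea — an outer loop over epistemic parameters with an inner aleatoric Monte Carlo loop per epistemic sample, yielding an interval of failure probabilities — can be sketched generically. The limit state, distributions, and sample sizes below are illustrative assumptions, not the paper's model.

```python
import random

def failure_probability_bounds(limit_state, epistemic_samples, n_aleatory=2000, seed=0):
    """Nested (2-level) sampling for polymorphic uncertainty: for each epistemic
    parameter value, estimate the aleatoric failure probability by Monte Carlo;
    report the min/max over the epistemic loop as an interval-valued result."""
    rng = random.Random(seed)
    pfs = []
    for theta in epistemic_samples:           # outer, epistemic level
        fails = 0
        for _ in range(n_aleatory):           # inner, aleatoric level
            x = rng.gauss(theta, 1.0)         # aleatoric variability around theta
            if limit_state(x) <= 0.0:         # g(x) <= 0 defines failure
                fails += 1
        pfs.append(fails / n_aleatory)
    return min(pfs), max(pfs)
```

The cost of such nested loops is what motivates the reductions discussed in the abstract: the inner sample size multiplies the outer one, so thinning either level too aggressively destabilizes the estimate.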
Bolted connections are commonly used in steel construction. The load-bearing behavior of bolted connections has been studied extensively in various research activities, and the bearing capacity of bolted connections can be assessed well by standard regulations for practical applications. With regard to tensile loading, the nut does not have a strong influence on the resistance, since failure occurs in the bolt due to the higher material strength of the nut. In some applications, so-called “blind holes” are used to connect plated components. In a manner of speaking, the nut is replaced by the “outer” plate with a prefabricated hole and thread, into which the bolt can be screwed and tightened. In such connections, the limit load capacity cannot be assessed solely from the bolt resistance, since the threaded hole in the base material strongly influences the structural behavior. In this context, the available screw-in depth of the blind hole is of fundamental importance. The German National Annex of EN 1993-1-8 provides information on the depth necessary to transfer the full tensile capacity of the bolt. However, some connections do not allow such depths to be fabricated. In these cases, the capacity of the connection is unclear and not specified. In this paper, first experiments on corresponding connections with different screw-in depths are presented and compared to limit load capacities according to the standard.
The aim of this work is to improve the quality of fatigue-life prediction for nodular (spheroidal graphite) cast iron, taking into account the casting processes of different manufacturers.
In a first step, specimens of GJS500 and GJS600 were cast by several casting suppliers and fatigue specimens were produced from them.
In total, fatigue strength values of the individually cast specimens as well as of component specimens from various casting manufacturers worldwide were determined, either by direct fatigue tests or from a collection of operational strength tests.
Through metallographic work and correlation analysis, three essential parameters for determining the local fatigue strength were identified: (1) static strength, (2) ferrite and pearlite fractions of the microstructure, and (3) graphite nodule count per unit area.
Based on these findings, a new strength-ratio diagram (the so-called Sd/Rm-SG diagram) was developed.
Above all, this new methodology should make it possible to better predict the component fatigue strength on the basis of measured, or casting-simulation-predicted, local tensile strength values and microstructures.
With the aid of the tests and the casting simulation, it was possible to further develop different fatigue-life prediction methods that take the manufacturing processes into account.
Polylactic acid (PLA) is a widely applicable material used in 3D printers owing to significant features such as its deformation behavior and affordable cost. To improve end-use quality, it is important to enhance the quality of fused filament fabrication (FFF)-printed objects in PLA. The purpose of this investigation was to increase the toughness and to reduce the production cost of FFF-printed tensile test samples with the desired part thickness. To avoid printing numerous redundant samples, the response surface method (RSM) was used. A statistical analysis was performed considering extruder temperature (ET), infill percentage (IP), and layer thickness (LT) as controlled factors. An artificial neural network (ANN) and a hybrid ANN-genetic algorithm (ANN-GA) were further developed to estimate the toughness-, part-thickness-, and production-cost-dependent variables. Results were evaluated by correlation coefficient and RMSE values. According to the modeling results, ANN-GA as a hybrid machine learning (ML) technique could enhance the accuracy of modeling by about 7.5, 11.5, and 4.5% for toughness, part thickness, and production cost, respectively, in comparison with the single ANN method. The optimization results confirm that the optimized specimen is cost-effective and able to undergo comparatively large deformation, which enables the usability of printed PLA objects.
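The response-surface step can be illustrated with a full quadratic surrogate fitted by ordinary least squares. The sketch and the data in the test are synthetic; the three-factor layout mirrors the ET/IP/LT setup of the paper but none of the values are the authors'.

```python
import numpy as np

def fit_quadratic_rsm(X, y):
    """Fit a full quadratic response surface
        y ~ b0 + sum_i bi*xi + sum_{i<j} bij*xi*xj + sum_i bii*xi^2
    by ordinary least squares. X has shape (n_samples, n_factors)."""
    def design(Xm):
        Xm = np.asarray(Xm, dtype=float)
        n, k = Xm.shape
        cols = [np.ones(n)]
        cols += [Xm[:, i] for i in range(k)]                                   # linear
        cols += [Xm[:, i] * Xm[:, j] for i in range(k) for j in range(i + 1, k)]  # interactions
        cols += [Xm[:, i] ** 2 for i in range(k)]                              # quadratic
        return np.column_stack(cols)

    beta, *_ = np.linalg.lstsq(design(X), np.asarray(y, dtype=float), rcond=None)
    predict = lambda Xn: design(Xn) @ beta
    return beta, predict
```

A genetic algorithm (as in the paper's ANN-GA stage) would then optimize over this cheap surrogate instead of printing and testing each candidate setting.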
This dataset presents the numerical analysis of heat and moisture transport through a facade equipped with a living wall system designed for greywater treatment. While such greening systems provide many environmental benefits, they involve pumping large quantities of water onto the wall assembly, which can increase the risk of moisture in the wall as well as impair energetic performance, since the thermal conductivity of building materials increases with their moisture content. The dataset was acquired through numerical simulation using a coupling of two simulation tools, Envi-Met and Delphin. This coupling was used to include the complex role the plants play in shaping the near-wall environmental parameters in the hygrothermal simulations. Four different wall assemblies were investigated; each assembly was assessed twice, with and without the living wall. The presented data include the input and output parameters of the simulations, which are presented in the co-submitted article [1].
This study demonstrates the application and combination of multiple imaging techniques [light microscopy, micro-X-ray computer tomography (μ-CT), scanning electron microscopy (SEM) and focussed ion beam – nano-tomography (FIB-nT)] to the analysis of the microstructure of hydrated alite across multiple scales. However, by comparing findings with mercury intrusion porosimetry (MIP), it becomes obvious that the imaged 3D volumes and 2D images do not sufficiently overlap at certain scales to allow a continuous quantification of the pore size distribution (PSD). This can be overcome by improving the resolution and increasing the measured volume. Furthermore, results show that the fibrous morphology of calcium-silicate-hydrates (C-S-H) phases is preserved during FIB-nT. This is a requirement for characterisation of nano-scale porosity. Finally, it was proven that the combination of FIB-nT with energy-dispersive X-ray spectroscopy (EDX) data facilitates the phase segmentation of a 11 × 11 × 7.7 μm³ volume of hydrated alite.
Clinker burning is the production step with the greatest influence on cement quality. Appropriate characterisation for quality control and decision-making is therefore critical, both for maintaining stable production and for the development of alternative cements. Scanning electron microscopy (SEM) in combination with energy-dispersive X-ray spectroscopy (EDX) delivers spatially resolved phase and chemical information for cement clinker. These data can be used to quantify phase fractions and the chemical composition of the identified phases.
The contribution aims to provide an overview of phase fraction quantification by semi-automatic phase segmentation using high-resolution backscattered electron (BSE) images and lower-resolution EDX element maps. To this end, a tool for image analysis was developed that combines state-of-the-art algorithms for pixel-wise image segmentation and labelling with a decision tree that allows searching for specific clinker phases. Results show that this tool can be applied to segment sub-micron clinker phases and to quantify all phase fractions. In addition, statistical evaluation of the data is implemented within the tool to reveal whether the imaged area is representative of all clinker phases.
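A pixel-wise decision tree of the kind described can be sketched as follows. The thresholds, element fractions, and the function name are purely illustrative assumptions, not the published calibration of the tool:

```python
def classify_pixel(bse_grey, ca, si, al, fe):
    """Assign a clinker phase label to one pixel from its BSE grey value
    and EDX element fractions (toy thresholds for illustration only)."""
    if bse_grey < 60:                 # dark pixels: pores or embedding resin
        return "pore"
    if fe > 0.08:                     # iron-rich interstitial phase
        return "ferrite"
    if al > 0.08:                     # aluminium-rich interstitial phase
        return "aluminate"
    if si > 0 and ca / si > 2.5:      # Ca/Si ratio separates alite from belite
        return "alite"
    return "belite"
```

In practice, such a rule set would be applied to every pixel of the co-registered BSE/EDX maps, and the resulting label counts yield the phase fractions.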
Nearly 30,000 wind turbines between the North Sea and the Alps make it unmistakably clear that our energy system is undergoing a comprehensive transformation. This development is widely and controversially received, and in heritage conservation, too, wind turbines are still predominantly perceived as a disturbance because of their at times considerable impact on the landscape. This thesis, by contrast, takes the historical development into view and argues for understanding wind turbines as significant cultural heritage. Given the progress of the energy transition, it is assumed that older models in particular, as built witnesses of the extensive energy-policy changes since the 1970s, can be ascribed high significance. The aim is therefore to identify wind turbines that can be assessed as outstanding testimonies to the development of wind energy use in Germany.
As a first step, wind turbines are delimited typologically as the object of investigation. A key peculiarity of wind turbines is that, relative to the ground area they actually seal, their vertical structure gives them a considerable visual impact on the landscape. The development of wind energy use since the 1970s is then examined more closely; overall, it did not proceed linearly and was marked by many conflicts. It must be understood in the context of a growing environmental awareness that brought about extensive changes in energy policy. On this basis, a heritage survey finally identifies wind turbines that bear outstanding witness to this development. With six objects out of a total stock of nearly 30,000 turbines, however, the selection remains comparatively small, because the established understanding of monuments, geared to singling out special features, reaches its limits when confronted with such a temporally dense stock of similar structures.
Finally, possible preservation perspectives as well as conclusions for heritage theory and practice are discussed. Preservation at the original site should be pursued as a matter of principle, with case-by-case decisions on whether preservation of function or of substance should be given greater weight. The selection problems outlined also prompt discussion of additional conceivable assessment categories, with public perception and ecological values suggesting themselves in particular. Moreover, a stronger consideration of functional contexts in the treatment of technical infrastructure in heritage conservation can be advocated. Overall, the heritage study of wind turbines thus leads far beyond the identification of individual objects and draws striking attention to current challenges of heritage conservation and beyond.
On 30 June 2021, Bauhaus visiting professor Mirjam Wenzel lectured in the Audimax of the Bauhaus-Universität Weimar on the genesis and conception of Jewish museums. She addressed the extent to which these museums are particularly relevant to current social and political questions. Prof. Wenzel's second public lecture at the Bauhaus-Universität Weimar outlined the potential of cultural institutions in times of socio-political change in general, and the significance of Jewish museums in the face of verbal and physical violence against Jews in particular.
In this article, I show why it is necessary to abolish the use of predictive algorithms in the US criminal justice system at sentencing. After presenting the functioning of these algorithms in their context of emergence, I offer three arguments to demonstrate why their abolition is imperative. First, I show that sentencing based on predictive algorithms induces a process of rewriting the temporality of the judged individual, flattening their life into a present inescapably doomed by its past. Second, I demonstrate that recursive processes, comprising predictive algorithms and the decisions based on their predictions, systematically suppress outliers and progressively transform reality to match predictions. In my third and final argument, I show that decisions made on the basis of predictive algorithms actively perform a biopolitical understanding of justice as management and modulation of risks. In such a framework, justice becomes a means to maintain a perverse social homeostasis that systematically exposes disenfranchised Black and Brown populations to risk.
Marine macroalgae such as Ulva intestinalis have promising properties as feedstock for cosmetics and pharmaceuticals. However, since the quantity and quality of naturally grown algae vary widely, their exploitability is reduced, especially for producers in high-priced markets. Moreover, the expansion of marine or shore-based cultivation systems is unlikely in Europe, since promising sites lie in fishing zones, recreational areas, or nature reserves. The aim was therefore to develop a closed photobioreactor system that enables full control of the abiotic environmental parameters and effective reconditioning of the cultivation medium, in order to produce marine macroalgae at sites distant from the shore. To assess the feasibility and functionality of the chosen technological concept, a prototype plant was implemented in central Germany, a site far from the sea. Using a newly developed submersible LED light source, cultivation experiments with Ulva intestinalis led to growth rates of 7.72 ± 0.04 % day−1 over a cultivation cycle of 28 days. Relative to the space demand of the production system, this results in a fresh-mass productivity of 3.0 kg m−2, or 1.1 kg m−2 per year. Considering also the ratio of biomass to energy input of 2.76 g kWh−1, significant future improvements of the developed photobioreactor system should include the optimization of growth parameters and a reduction of the system's overall energy demand.
Scaling of concrete due to salt frost attack is an important durability issue in moderate and cold climates. The actual damage mechanism is still not completely understood. Two recent damage theories—the glue spall theory and the cryogenic suction theory—offer plausible but conflicting explanations for the salt frost scaling mechanism. The present study deals with the cryogenic suction theory, which assumes that freezing concrete can take up unfrozen brine from a partly frozen deicing solution during salt frost attack. According to the model hypothesis, the resulting saturation of the concrete surface layer intensifies the ice formation in this layer and causes salt frost scaling. In this study, an experimental technique was developed that makes it possible to quantify to what extent brine uptake can increase ice formation in hardened cement paste (used as a model material for concrete). The experiments were carried out with low-temperature differential scanning calorimetry, where specimens were subjected to freeze–thaw cycles while in contact with NaCl brine. Results showed that the ice content in the specimens increased over subsequent freeze–thaw cycles due to brine uptake at temperatures below 0 °C. At the same time, the ability of the hardened cement paste to bind chlorides from the absorbed brine affected the freezing/melting behavior of the pore solution and the magnitude of the ice content.
The derivation of nonlocal strong forms for many physical problems remains cumbersome with traditional methods. In this paper, we apply the variational principle/weighted residual method based on the nonlocal operator method to derive nonlocal forms for elasticity, thin plates, gradient elasticity, electro-magneto-elasticity and the phase-field fracture method. The nonlocal governing equations are expressed in integral form on the support and dual-support. The first example shows that nonlocal elasticity has the same form as dual-horizon non-ordinary state-based peridynamics. The derivation is simple and general, and it can efficiently convert many local physical models into their corresponding nonlocal forms. In addition, a criterion based on the instability of the nonlocal gradient is proposed for fracture modelling in linear elasticity. Several numerical examples are presented to validate the nonlocal elasticity and thin plate formulations.
In the literature, the influence of openings in infill walls that are bounded by a reinforced concrete frame and excited by seismic drift forces in both the in-plane and out-of-plane directions is still uncharted. Therefore, a 3D micromodel was developed and subsequently calibrated to gain more insight into the topic. The micromodels were calibrated against their equivalent physical test specimens: in-plane and out-of-plane drift-driven tests on frames with and without infill walls and openings, as well as out-of-plane bending tests of masonry walls. The micromodels were rectified based on their behavior and damage states. The calibration process showed that the micromodels were sensitive to some parameters and insensitive to others with regard to behavior and computational stability. Even within the same material model, some parameters had a greater effect when attributed to concrete than to masonry. Generally, the in-plane behavior of infilled frames was found to be largely governed by the interface material model. The out-of-plane masonry wall simulations were governed by the tensile strength of both the interface and the masonry material model, whereas the out-of-plane drift-driven test was governed by the concrete material properties.
Antimicrobial resistance (AMR) is identified by the World Health Organization (WHO) as one of the top ten threats to public health worldwide. In addition to public health, AMR also poses a major threat to food security and economic development. Current sanitation systems contribute to the emergence and spread of AMR and lack effective AMR mitigation measures. This study assesses source separation of blackwater as a mitigation measure against AMR. A source-separation-modified combined sanitation system with separate collection of blackwater and graywater is conceptually described. Measures taken at the source, such as the separate collection and discharge of material flows, have so far not been considered on a load-balance basis, i.e., they have not yet been evaluated for their effectiveness. The sanitation system described is compared by simulation with a combined system and a separate system regarding AMR emissions. AMR is represented in the simulation model by one proxy parameter each for antibiotics (sulfamethoxazole), antibiotic-resistant bacteria (extended-spectrum beta-lactamase E. coli), and antibiotic resistance genes (blaTEM). The simulation results suggest that the source-separation-based sanitation system reduces emissions of antibiotic-resistant bacteria and antibiotic resistance genes into the aquatic environment by more than six logarithm steps compared to combined systems. Sulfamethoxazole emissions can be reduced by 75.5% by keeping blackwater separate from graywater and treating it sufficiently. In summary, sanitation systems incorporating source separation are, to date, among the most effective means of preventing the emission of AMR into the aquatic environment.
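To make the two kinds of reduction figures in such studies concrete: a reduction of "six logarithm steps" means the effluent load is a factor of at least 10⁶ below the influent load, while the sulfamethoxazole figure is a linear percentage. A minimal sketch of the conversion (function and variable names are illustrative, not from the study):

```python
import math

def log_reduction(load_in, load_out):
    """Reduction between influent and effluent loads, in log10 steps."""
    return math.log10(load_in / load_out)

def percent_reduction(load_in, load_out):
    """Linear percentage reduction of a pollutant load."""
    return 100.0 * (1.0 - load_out / load_in)

# A six-log reduction leaves one millionth of the original load,
# i.e. 99.9999 % removal; a 75.5 % reduction is only ~0.6 log steps.
```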
Cultural Heritage on Mobile Devices: Building Guidelines for UNESCO World Heritage Sites' Apps
(2021)
Technological improvements and access provide a fertile scenario for creating and developing mobile applications (apps). This results in a myriad of apps providing information about touristic destinations, including those with a cultural profile, such as apps dedicated to UNESCO World Heritage Sites (WHS). However, not all of these apps are equally effective. A successful app must consider usability aspects and features aligned with reliable content. Although guidelines for mobile usability are broadly available, they are generic, and none of them focuses specifically on cultural heritage places, especially those in open-air settings. This research aims to fill this literature gap and discusses how to adapt and develop specific guidelines for a better outdoor WHS experience. It uses an empirical approach applied to an open-air WHS city: Weimar and its Bauhaus and Classical Weimar sites. To build a new set of guidelines for open-air WHS, this research used a systematic approach to compare literature-based guidelines with industry-based ones (based on affordances) extracted from the available apps dedicated to WHS in Germany. The instructions compiled from both sources were comparatively tested using two prototypes built from the respective guidelines, resulting in a set of recommendations that collects the best approaches from both sources and suggests new ones derived from the evaluation.
Multi-user virtual reality systems enable collocated as well as distributed users to perform collaborative activities in immersive virtual environments. A common activity in this context is to move from one location to the next as a group to explore the environment together. The simplest solution to realize these multi-user navigation processes is to provide each participant with a technique for individual navigation. However, this approach entails some potentially undesirable consequences such as the execution of a similar navigation sequence by each participant, a regular need for coordination within the group, and, related to this, the risk of losing each other during the navigation process.
To overcome these issues, this thesis performs research on group navigation techniques that move group members together through a virtual environment. The presented work was guided by four overarching research questions that address the quality requirements for group navigation techniques, the differences between collocated and distributed settings, the scalability of group navigation, and the suitability of individual and group navigation for various scenarios. This thesis approaches these questions by introducing a general conceptual framework as well as the specification of central requirements for the design of group navigation techniques. The design, implementation, and evaluation of corresponding group navigation techniques demonstrate the applicability of the proposed framework.
As a first step, this thesis presents ideas for the extension of the short-range teleportation metaphor, also termed jumping, for multiple users. It derives general quality requirements for the comprehensibility of the group jumping process and introduces a corresponding technique for two collocated users. The results of two user studies indicate that sickness symptoms are not affected by user roles during group jumping and confirm improved planning accuracy for the navigator, increased spatial awareness for the passenger, and reduced cognitive load for both user roles.
Next, this thesis explores the design space of group navigation techniques in distributed virtual environments. It presents a conceptual framework to systematize the design decisions for group navigation techniques based on Tuckman's model of small-group development and introduces the idea of virtual formation adjustments as part of the navigation process. A quantitative user study demonstrates that the corresponding extension of Multi-Ray Jumping for distributed dyads leads to more efficient travel sequences and reduced workload. The results of a qualitative expert review confirm these findings and provide further insights regarding the complementarity of individual and group navigation in distributed virtual environments.
Then, this thesis investigates the navigation of larger groups of distributed users in the context of guided museum tours and establishes three central requirements for (scalable) group navigation techniques. These should foster the awareness of ongoing navigation activities as well as facilitate the predictability of their consequences for all group members (Comprehensibility), assist the group with avoiding collisions in the virtual environment (Obstacle Avoidance), and support placing the group in a meaningful spatial formation for the joint observation and discussion of objects (View Optimization). The work suggests a new technique to address these requirements and reports on its evaluation in an initial usability study with groups of five to ten (partially simulated) users. The results indicate easy learnability for navigators and high comprehensibility for passengers. Moreover, they also provide valuable insights for the development of group navigation techniques for even larger groups.
Finally, this thesis embeds the previous contributions in a comprehensive literature overview and emphasizes the need to study larger, more heterogeneous, and more diverse group compositions including the related social factors that affect group dynamics.
In summary, the four major research contributions of this thesis are as follows:
- the framing of group navigation as a specific instance of Tuckman's model of small-group development
- the derivation of central requirements for effective group navigation techniques beyond common quality factors known from single-user navigation
- the introduction of virtual formation adjustments during group navigation and their integration into concrete group navigation techniques
- evidence that appropriate pre-travel information and virtual formation adjustments lead to more efficient travel sequences for groups and lower workloads for both navigators and passengers
Overall, the research of this thesis confirms that group navigation techniques are a valuable addition to the portfolio of interaction techniques in multi-user virtual reality systems. The conceptual framework, the derived quality requirements, and the development of novel group navigation techniques provide effective guidance for application developers and inform future research in this area.
This work addresses engineers and researchers in building services engineering. It takes up an emerging need for change in the environmental and sustainability assessment of buildings and heating systems. The non-renewable primary energy demand currently in use will not suffice as the sole assessment metric, particularly in view of future climate and environmental policy targets. The eco-efficiency assessment method presented in this work can contribute as a suitable instrument for solving these problems. It enables systematic, holistic assessments and reproducible comparisons of heating systems with respect to their ecological and economic sustainability. The most important new developments are the specific environmental performance, which extends the primary energy factor in use today, and the eco-efficiency indicator UWI.
Modern cryptography has become a ubiquitous and essential part of our daily lives. Protocols for secure authentication and encryption protect our communication with various digital services, from private messaging and online shopping to bank transactions and the exchange of sensitive information. Those high-level protocols can naturally be only as secure as the authentication or encryption schemes underneath. Moreover, on a more detailed level, those schemes can at best inherit the security of their underlying primitives. While widespread standards in modern symmetric-key cryptography, such as the Advanced Encryption Standard (AES), have resisted analysis until now, closer analysis and design of related primitives can deepen our understanding.
The present thesis consists of two parts comprising six contributions: the first part considers block-cipher cryptanalysis of the round-reduced AES, the AES-based tweakable block cipher Kiasu-BC, and TNT. The second part studies the design, analysis, and implementation of provably secure authenticated encryption schemes.
In general, cryptanalysis aims at finding distinguishable properties in the output distribution of a primitive. Block ciphers are a core primitive of symmetric-key cryptography and are useful for the construction of various higher-level schemes, ranging from authentication, encryption, and authenticated encryption to integrity protection. Their analysis is therefore crucial for securing cryptographic schemes at their lowest level. With rare exceptions, block-cipher cryptanalysis employs a systematic strategy of investigating known attack techniques, and modern proposals are expected to be evaluated against them. The considerable effort of this evaluation, however, demands contributions not only from the designers but also from external sources.
The Advanced Encryption Standard (AES) is one of the most widespread block ciphers nowadays and is therefore a natural target for further analysis. Tweakable block ciphers augment the usual inputs of a secret key and a public plaintext by an additional public input called the tweak. Among various proposals of the previous decade, this thesis identifies Kiasu-BC as a noteworthy attempt to construct a tweakable block cipher that is very close to the AES. Hence, its analysis intertwines closely with that of the AES and best illustrates the impact of the tweak on security. Moreover, the thesis revisits the generic tweakable block cipher Tweak-and-Tweak (TNT) and its instantiation based on the round-reduced AES.
The first part investigates the security of the AES against several forms of differential cryptanalysis, developing distinguishers on four to six (out of ten) rounds of AES. For Kiasu-BC, it exploits the additional freedom in the tweak to develop two forms of differential-based attacks: rectangles and impossible differentials. The results on Kiasu-BC cover one round more than comparable attacks on the (untweaked) AES. The authors of TNT had provided an initial security analysis that left a gap between provable guarantees and attacks; our analysis takes a considerable step towards closing this gap. For TNT-AES, an instantiation of TNT built upon the AES round function, this thesis further shows how to transform our distinguisher into a key-recovery attack.
Many applications require the simultaneous authentication and encryption of transmitted data. Authenticated encryption (AE) schemes provide both properties. Modern AE schemes usually demand a unique public input called a nonce that must not repeat. However, this requirement cannot always be guaranteed in practice. As a remedy, misuse-resistant and robust AE aims to reduce the impact of occasional misuse. Robust AE, moreover, considers more than the potential reuse of nonces: common authenticated encryption also demands that the entire ciphertext be buffered until the authentication tag has been verified successfully, which is difficult to ensure in practice when the setting lacks the resources for buffering the messages. Robustness guarantees in the case of such misuse are therefore valuable features.
The second part of this thesis proposes three authenticated encryption schemes: RIV, SIV-x, and DCT. RIV is robust against nonce misuse and the release of unverified plaintexts. Both SIV-x and DCT provide high security independent of nonce repetitions. As the core of SIV-x, this thesis revisits the proof of a highly secure parallel MAC, PMAC-x, revises its details, and proposes SIV-x as a highly secure authenticated encryption scheme. Finally, DCT is a generic approach to n-bit-secure deterministic AE that expands the ciphertext-tag string by no more than n bits beyond the plaintext length.
From its first part, this thesis aims to extend the understanding of the (1) cryptanalysis of round-reduced AES, as well as the understanding of (2) AES-like tweakable block ciphers. From its second part, it demonstrates how to simply extend known approaches for (3) robust nonce-based as well as (4) highly secure deterministic authenticated encryption.
Owing to the visco-elastoplastic material behavior of concrete, energy is introduced into specimens and structural members under cyclic loading. The corresponding energy quantities are described by the hysteresis areas of the stress-strain curves. The literature offers differing views on what this energy is used for. Initial investigations show that at least part of this dissipated energy is converted into thermal energy. The methodology described in this contribution allows these energy quantities to be determined quickly and reliably for every load cycle of a fatigue test. The implemented algorithm was then used to evaluate the dissipated energies of a total of 27 cyclic tests. Analogous to the strain development and the stiffness degradation, the curves of dissipated energy over the number of load cycles also exhibit a three-phase course. The evaluation further shows a correlation between the number of cycles to failure and the dissipated energy. The relationship between specimen heating and dissipated energy was also confirmed.
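The dissipated energy per load cycle described in this abstract is the area enclosed by one stress-strain hysteresis loop, E = ∮ σ dε. A minimal sketch of how such an area can be computed from measured loop points via the shoelace formula; the variable names and the synthetic elliptical loop are illustrative assumptions, not the published algorithm:

```python
import math

def loop_area(strain, stress):
    """Area enclosed by one closed stress-strain loop (shoelace formula);
    for a fatigue test this is the energy dissipated per unit volume."""
    n = len(strain)
    area = 0.0
    for i in range(n):
        j = (i + 1) % n              # wrap around to close the loop
        area += strain[i] * stress[j] - strain[j] * stress[i]
    return abs(area) / 2.0

# Synthetic elliptical loop: strain amplitude 1e-3, stress amplitude 5 MPa.
theta = [2.0 * math.pi * k / 2000 for k in range(2000)]
strain = [1e-3 * math.cos(t) for t in theta]
stress = [5.0 * math.sin(t) for t in theta]
# The enclosed area approaches pi * 1e-3 * 5 ~ 0.0157 (MJ/m^3 in these units).
```

On measured data, the same function can be applied to the sampled (strain, stress) pairs of each individual cycle.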
Electric trains are considered one of the most eco-friendly and safest means of transportation. Catenary poles are used worldwide to support overhead power lines for electric trains. The performance of catenary poles has an extensive influence on the integrity of train systems and, consequently, on the connected human services. It has become essential to develop structural health monitoring (SHM) systems that provide the instantaneous in-service status of catenary poles, making decisions on whether to keep or repair damaged poles more feasible. This study develops a data-driven, model-free approach for status monitoring of cantilever structures, focusing on pre-stressed, spun-cast ultrahigh-strength concrete catenary poles installed along high-speed train tracks. The proposed approach evaluates multiple damage features in a unified damage index, which leads to straightforward interpretation and comparison of the output. Besides, it distinguishes between multiple damage scenarios of the poles, whether caused by material degradation of the concrete or by cracks that may propagate during the life span of the structure. Moreover, using a logistic function to classify the integrity of the structure avoids the expensive learning step of existing damage detection approaches, namely those using modern machine and deep learning methods. The findings of this study look very promising for application to other types of cantilever structures, such as power transmission line poles, antenna masts, chimneys, and wind turbines.
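The final classification step, a logistic function mapping a combined damage index onto [0, 1], can be sketched as follows. The feature weights, bias, and threshold here are illustrative assumptions, not the calibrated values of the study:

```python
import math

def damage_score(features, weights, bias=0.0):
    """Fuse several damage features into one index and squash it to [0, 1]
    with a logistic function; scores near 1 indicate a damaged pole."""
    index = sum(w * f for w, f in zip(weights, features)) + bias
    return 1.0 / (1.0 + math.exp(-index))

def classify(features, weights, bias=0.0, threshold=0.5):
    """Binary decision directly from the score: no training step required."""
    return "damaged" if damage_score(features, weights, bias) >= threshold else "intact"
```

The appeal of this scheme is that the decision rule is fixed analytically, so no labeled training data from damaged poles is needed.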
Personalized ventilation (PV) is a means of delivering conditioned outdoor air into the breathing zone of the occupants. This study aims to qualitatively investigate the personalized flows using two methods of visualization: (1) schlieren imaging using a large schlieren mirror and (2) thermography using an infrared camera. While schlieren imaging was used to render the velocity and mass transport of the supplied flow, thermography was used to visualize the air temperature distribution induced by the PV. Both studies were conducted using a thermal manikin to simulate an occupant facing a PV outlet. As a reference, the flows supplied by an axial fan and a cased axial fan were also visualized with the schlieren system and compared to the flow supplied by the PV. The schlieren visualization results indicate that the steady, low-turbulence flow supplied by the PV was able to penetrate the thermal convective boundary layer encasing the manikin's body, providing clean air for inhalation. In contrast, the axial fan diffused the supplied air over a large target area with high turbulence intensity; it only disturbed the convective boundary layer rather than destroying it. The cased fan supplied a flow with a reduced target area, which allowed more air to reach the breathing zone compared to the bare fan. The thermography results showed that the cool air supplied by the PV penetrated the corona-shaped thermal boundary layer. Furthermore, the supplied air cooled the surface temperature of the face, which indicates the large impact of PV on local thermal sensation and comfort.
Encapsulation-based self-healing concrete has recently received much attention in the civil engineering field. Capsules are embedded in the cementitious matrix during concrete mixing. When cracks appear, the embedded capsules lying in the path of an incoming crack fracture and release healing agents in the vicinity of the damage. The capsule materials need to be designed so that they break at small deformations, allowing the internal fluid to be released to seal the crack. This study focuses on computational modeling of fracture in encapsulation-based self-healing concrete. 2D and 3D numerical models with randomly packed aggregates and capsules have been developed to analyze the fracture mechanisms that play a significant role in the fracture probability of the capsules and, consequently, in the self-healing process. The capsules are assumed to be made of poly(methyl methacrylate) (PMMA), and the potential cracks are represented by pre-inserted cohesive elements with tension and shear softening laws along the element boundaries of the mortar matrix, aggregates, and capsules, and at the interfaces between these phases. The effects of volume fraction, core-wall thickness ratio, and mismatched fracture properties of the capsules on the load-carrying capacity of self-healing concrete and on the fracture probability of the capsules are investigated. The output of this study will become a valuable tool to assist not only experimentalists but also manufacturers in designing appropriate capsule materials for self-healing concrete.
Mitigating Risks of Corruption in Construction: A theoretical rationale for BIM adoption in Ethiopia
(2021)
This PhD thesis sets out to investigate the potentials of Building Information Modeling (BIM) to mitigate risks of corruption in the Ethiopian public construction sector. The wide-ranging capabilities and promises of BIM have led to the strong perception among researchers and practitioners that it is an indispensable technology. Consequently, it has become the frequent subject of science and research. Meanwhile, many countries, especially the developed ones, have committed themselves to applying the technology extensively. Increasing productivity is the most common and frequently cited reason for that.
However, both technology developers and adopters have been oblivious to the potential of BIM to address critical challenges in the construction sector, such as corruption. This would be particularly significant in developing countries like Ethiopia, where the problems and effects of corruption are acute. Studies reveal that bribery and corruption have long pervaded the construction industry worldwide. The complex and fragmented nature of the sector provides a fertile environment for corruption, and the Ethiopian construction sector is not immune to this epidemic. In fact, it is regarded as one of the most vulnerable sectors owing to various socio-economic and political factors. Since 2015, Ethiopia has been adopting BIM, yet without clear goals and strategies. As a result, the potential of BIM for combating concrete problems of the sector remains untapped. To this end, this dissertation does pioneering work by showing how the collaboration and coordination features of the technology contribute to minimizing opportunities for corruption; tracing such loopholes would otherwise remain complex and ineffective in traditional documentation processes.
Proceeding from this anticipation, the thesis raises two primary questions: what are the areas and risks of corruption in Ethiopian public construction projects, and how could BIM be leveraged to mitigate these risks? To tackle these and other secondary questions, the research employs a mixed-method approach. The main research strategies selected are survey, Grounded Theory (GT), and archival study. First, the author disseminated an online questionnaire among Ethiopian construction engineering professionals to pinpoint areas of vulnerability to corruption; 155 responses were compiled and scrutinized quantitatively. Then, semi-structured in-depth interviews were conducted with 20 senior professionals, primarily to understand the opportunities for and risks of corruption in the identified highly vulnerable project stages and decision points. At the same time, open interviews (consultations) were held with 14 informants to gauge the state of construction documentation, BIM, and loopholes for corruption in the country. These qualitative data were then analyzed using the principles of GT, heat/risk mapping, and Social Network Analysis (SNA). The risk mapping assisted the researcher in prioritizing corruption risks, while SNA made it feasible to methodically identify key actors/stakeholders in the corruption venture. Based on the research data generated, the author constructs a [substantive] grounded theory around the elements of corruption in the Ethiopian public construction sector. This theory later guides the strategic proposition of BIM. Finally, 85 public-construction-related cases are analyzed systematically to substantiate and confirm the previous findings.
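To illustrate how SNA singles out "key actors" in a network, a minimal degree-centrality sketch is given below. The edge list is invented purely for illustration and is not the thesis's data:

```python
# Degree centrality: the fraction of other nodes each actor is
# directly connected to. Actors with the highest value are the
# candidates for "key actor" status in the network.
def degree_centrality(edges):
    nodes = {n for edge in edges for n in edge}
    degree = {n: 0 for n in nodes}
    for a, b in edges:
        degree[a] += 1
        degree[b] += 1
    denom = max(len(nodes) - 1, 1)   # normalize by possible contacts
    return {n: d / denom for n, d in degree.items()}

# Hypothetical corruption-network edges (illustrative only):
edges = [
    ("broker", "contractor_mgmt"), ("broker", "client_mgmt"),
    ("contractor_mgmt", "client_mgmt"), ("broker", "supplier"),
]
centrality = degree_centrality(edges)
key_actor = max(centrality, key=centrality.get)  # -> "broker"
```

In this toy network the broker is connected to all other actors, matching the thesis's finding that brokers occupy central positions.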
By way of these multiple research endeavors, based first and foremost on the triangulation of qualitative and quantitative data analysis, the author conveys a number of key findings. First, estimation, tender document preparation and evaluation, and construction material and quality control along with additional work orders are found to be the most vulnerable stages in the design, tendering, and construction phases, respectively. Second, middle-management personnel of contractors and clients, aided by brokers, play the most critical roles in corrupt transactions within the prevalent corruption network. Third, grand corruption persists in the sector, attributable to the fact that top management and higher officials exercise their overriding power, aided by the lack of project audits and accountability. Conversely, individuals at the operational level use intentional and unintentional 'errors' as an opportunity for corruption.
In light of these findings, two conceptual BIM-based risk mitigation strategies are prescribed: active and passive automation of project audits, and the monitoring of project information throughout the project value chain. These propositions rely on BIM's present dimensional capabilities and the promises of Integrated Project Delivery (IPD). Moreover, BIM's synergies with other technologies, such as Information and Communication Technology (ICT) and radio-frequency technologies, also receive treatment. All these arguments form the basis for the main thesis of this dissertation: that BIM is able to mitigate corruption risks in the Ethiopian public construction sector. The discussion of skepticism about BIM, stemming from the complex nature of corruption and from the strategic and technological limitations of BIM, is also illuminated and complemented by this work. The thesis thus uncovers possible research gaps and lays the foundation for further studies.
Accurate prediction of stable alluvial hydraulic geometry, in which erosion and sedimentation are in equilibrium, is one of the most difficult yet critical topics in river engineering. Data mining algorithms have been gaining attention in this field due to their high performance and flexibility. However, an understanding of the potential of these algorithms to provide fast, cheap, and accurate predictions of hydraulic geometry has been lacking. This study provides the first quantification of this potential. Using at-a-station field data, predictions of flow depth, water-surface width, and longitudinal water-surface slope are made using three standalone data mining techniques, namely Instance-Based Learning (IBK), KStar, and Locally Weighted Learning (LWL), along with four types of novel hybrid algorithms in which the standalone models are trained with Vote, Attribute Selected Classifier (ASC), Regression by Discretization (RBD), and Cross-Validation Parameter Selection (CVPS) algorithms (Vote-IBK, Vote-KStar, Vote-LWL, ASC-IBK, ASC-KStar, ASC-LWL, RBD-IBK, RBD-KStar, RBD-LWL, CVPS-IBK, CVPS-KStar, CVPS-LWL). Through a comparison of their predictive performance and a sensitivity analysis of the driving variables, the results reveal that: (1) Shields stress was the most effective parameter in the prediction of all geometry dimensions; (2) hybrid models had higher predictive power than standalone data mining models, empirical equations, and traditional machine learning algorithms; (3) the Vote-KStar model had the highest performance in predicting depth and width, and ASC-KStar in estimating slope, each providing very good prediction performance. Through these algorithms, the hydraulic geometry of any river can potentially be predicted accurately and with ease using just a few readily available flow and channel parameters. The results thus show that these models have great potential for use in stable channel design in data-poor catchments, especially in developing nations where technical modelling skills and understanding of the hydraulic and sediment processes occurring in the river system may be lacking.
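The Vote-style hybrids combine standalone learners by pooling their numeric predictions. A minimal sketch of that idea follows (not the paper's actual Weka pipeline; the three base models are invented stand-ins for IBK, KStar, and LWL):

```python
# "Vote" ensemble for regression: average the predictions of
# several base models to obtain the combined prediction.
def vote_predict(models, x):
    preds = [m(x) for m in models]
    return sum(preds) / len(preds)

# Toy base models mapping a driving variable (e.g. Shields stress)
# to flow depth; the coefficients are invented for illustration.
ibk_like   = lambda x: 2.0 * x + 0.1
kstar_like = lambda x: 1.8 * x + 0.3
lwl_like   = lambda x: 2.2 * x - 0.1

depth = vote_predict([ibk_like, kstar_like, lwl_like], 1.0)  # -> 2.1
```

Averaging tends to cancel the individual models' errors, which is one plausible reason the hybrids outperformed the standalone learners.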
For the creation of decorative sculptures, mortars were to be developed that exhibit high flexural strength and cover a broad range of consistencies for different application methods such as casting, troweling, or tamping. A low-viscosity epoxy resin system, whose amine hardener contains 44 % water, was chosen as the basis for the formulations. This made it possible to adjust different viscosities by adding water. To bind this water in more massive components, cement was used as a filler in addition to sand.
After 56 days, the formulations showed high compressive strengths of over 50 N/mm². Increasing epoxy resin content yielded higher flexural strengths, but also larger length changes under laboratory storage. These could be reduced by using a PCE superplasticizer, short PVA fibers, and an optimized grading curve. However, the superplasticizer extended the hardening times to up to 1.5 days.
To determine the durability of the material, it was exposed for three weeks to temperatures from -20 to +60 °C, artificial solar irradiation, and artificial rain. Compared to laboratory storage, shrinkage decreased with increasing epoxy resin content, while the flexural strength of the specimens decreased only slightly.
Scanning electron microscopy showed that disturbances of cement hydration occurred even at lower epoxy resin additions. Furthermore, at low epoxy resin additions, spherical inclusions originating from dispersed epoxy resin particles appear in the matrix.
Objective surface assessment of (P)SCC architectural concrete by means of automated analysis of image data
(2019)
Owing to its versatility in shaping, architectural concrete is one of the most widespread design elements of modern architecture and is ideally suited to new construction methods and the growing demands on the appearance of public buildings. Producing high-quality architectural concrete surfaces depends to a large extent on the interactions between concrete and release agent, between release agent and formwork material, and on the type and amount of release agent applied. In laboratory tests, these influences on the architectural concrete surfaces of a polymer-modified self-compacting concrete (PSCC) were investigated in comparison to a conventional self-compacting concrete (SCC). Within this work, a method for assessing architectural concrete quality was developed with which exclusion criteria, such as maximum porosity and uniformity, can be determined objectively and automatically. Changes in these values due to weathering also allowed first conclusions to be drawn about the durability of the architectural concrete surfaces.
In recent years, the debate on digitalization has reached the media, conferences, and committees of the construction and real estate industry. While some areas are producing innovations and some actors can be called pioneers, other topics still show deficits with regard to the digital transformation. The building permit process can be counted in this category. Regardless of how much architects and engineers in planning offices rely on innovative methods, building applications are still frequently submitted on paper, or are printed out at the authority after electronic submission. Existing resources, for example in the form of a building information model, that could support the building permit decision are left untapped. In order to develop a decision aid for the building permit authorities with digital tools, it is necessary to understand the current state and to question existing circumstances before pursuing full automation of internal administrative processes as the sole solution.
By examining the content and organization of the areas that influence the building permit decision, an optimization of the building permit process within the authorities is pursued. The complex areas involved, such as the legal situation, the use of technology, and also the subjective alternatives for action, are identified and structured. The development of a model for determining eligibility for a building permit both conveys an understanding of the influencing factors and increases transparency for all parties involved.
In addition to an international literature review, an empirical study served as the research method. The empirical study was conducted in the form of qualitative expert interviews in order to determine the current state of building permit procedures. The collected data were prepared and then subjected to a software-supported content analysis. The results, combined with the findings of the literature review, were processed in several analyses as the basis for the model.
The result of the investigation is a decision model that closes the gap between the current processes in the building authorities and full automation of the building permit review. The process-oriented structuring of decision-relevant issues in the model supports both reviewers and applicants in the building permit decision. The theoretical model was transferred into practice in the form of a web application.
As an early form of tourism, the Sommerfrische (summer retreat) was, from the mid-19th century onward, closely linked to the widespread urbanization of Central Europe and the emergence of an urban bourgeoisie. After the term faded from common usage in the second half of the 20th century, it is currently experiencing a kind of renaissance, partly in the course of a new positive connotation of the rural. In the Schwarzatal, a historical destination of this travel practice, the local association Zukunftswerkstatt Schwarzatal, together with the IBA Thüringen, is working on a contemporary form of the Sommerfrische that is also meant to provide answers to the profound structural change after 1989.
The thesis addresses the programmatic renaissance of the Sommerfrische in the Schwarzatal and contrasts it with the historical phenomenon of the Sommerfrische of the 19th and 20th centuries along three research questions. These questions refer to the social, material, and symbolic dimensions of the constitution of space of both forms of the Sommerfrische, asking about the respective actors, the rules of physical transformation, and the hegemonic ideas about the destinations. The thesis accordingly situates itself within a social-constructivist geography of the rural. Finally, it explains why the Sommerfrische Schwarzatal constitutes a regionally self-determined form of the Sommerfrische.
To reduce concrete-specific CO2 emissions, greater use of clinker-reduced cements and concretes is being pursued. However, reducing the clinker content must not impair concrete durability in a manner relevant to service life. In this context, the freeze-deicing salt resistance is a critical quantity, since it is often negatively affected at higher clinker substitution rates. Matters are complicated by the fact that only limited experience is available for clinker-reduced concretes. A high freeze-deicing salt resistance therefore cannot be ensured solely on the basis of descriptive specifications. Accordingly, a performance-based service life assessment should in future also be carried out for components exposed to freeze-deicing salt attack.
An indispensable basis for achieving these goals is an understanding of the damage processes during freeze-deicing salt attack. The state of research, however, is marked by contradictory damage theories. The objective of this thesis was therefore to evaluate the existing damage theories in light of the current state of knowledge and to test and classify them with the author's own investigations. A review of the state of research showed that only two theories have the potential to describe freeze-deicing salt attack comprehensively: the glue spall theory and the cryogenic suction theory.
The glue spall theory attributes surface scaling to mechanical damage of the concrete surface by an adhering ice layer. Critical stress states in the ice layer, capable of damaging the concrete surface, are supposed to occur only at moderate de-icing salt concentrations in the acting solution. In this thesis, however, it was demonstrated that severe scaling also occurs at de-icing salt concentrations at which mechanical damage to the concrete by the ice can be ruled out. This revealed that the glue spall theory is not suitable.
The cryogenic suction theory is based on the eutectic properties of de-icing salt solutions, which in the frozen state always consist of a mixture of solid water ice and liquid, highly concentrated salt solution as long as the temperature remains above the eutectic temperature. The liquid phase in the saline ice constitutes a previously unconsidered liquid reservoir for frozen concrete, which, despite the high salt concentration, is supposed to intensify ice formation in the concrete surface zone and thus cause scaling. In this thesis it was confirmed that ice formation in hardened cement paste is indeed intensified when freezing in highly concentrated de-icing salt solution. The extent of the additional ice formation was also influenced by the ability of the hardened cement paste to bind chloride ions from the de-icing salt solution.
In summary, it was found that the cryogenic suction theory provides a good description of freeze-deicing salt attack but must be supplemented by further aspects. The most important extension is accounting for the intensive saturation of concrete through the micro-ice-lens pump. Based on this consideration, a combined damage theory was formulated, and important assumptions of this theory were confirmed experimentally. As a result, the foundation for a deeper understanding of freeze-deicing salt attack was laid. In addition, a new approach was identified to explain the (potential) reduction of the freeze-deicing salt resistance of clinker-reduced concretes.
With the steadily rising share of renewable energies, the use of storage systems is becoming ever more important. Besides the storage of electrical energy, the storage of solar or industrial heat is a major challenge. Owing to its high energy storage density, thermochemical heat storage plays a decisive role here. One class of these storage materials is formed by composites consisting of an open-pore matrix and a salt hydrate embedded in it.
Decisive for a high storage density in this material class is the rapid removal of the heat generated by water vapor sorption. The crucial criterion for an application as a storage material is therefore the thermal conductivity of the material. In this work, the thermal conductivities of selected salts (NaCl, MgSO4, and ZnSO4) with different contents of water of crystallization, of carrier materials such as activated carbon (pellets and powder) and zeolite powder, and of the composite materials produced from them were therefore investigated.
A further aim was to identify favorable combinations of open-pore carrier material and salt hydrate as well as a suitable degree of pore filling, and to provide approaches for modelling the thermal conductivity of the composites.
The core topic of this thesis is an engagement with the consequences of uranium mining in the area around the former mining region of the Wismut SAG/SDAG in Ronneburg (eastern Thuringia). This topic is considered from historical, social, cultural-anthropological, and artistic perspectives and placed in the context of the global preconditions of the nuclear industry and the impacts of uranium mining and its aftermath. The thesis sets out how a post-uranium-mining landscape comes into being and what knowledge is important for an adequate understanding of the phenomenon. It examines whether art can make a relevant contribution with regard to the post-uranium-mining landscape and in what form this has been attempted, and it presents works that have dealt with related themes. Combining these two main aspects, the thesis pursues the question of which factors shape the post-uranium-mining landscape, whether there are meaningful fields of participation for artistic research or action, and which conditions would have to be met for this. The central thesis is that artistic works in the field of uranium mining can, under certain conditions, make relevant contributions.
In recent decades, road maintenance services have undergone profound changes. These changes also include the operational management philosophy, which is meant to support a rationally planned and economical organization of road maintenance. Service contents and scopes are specified in a binding manner, enabling budgeting for the planned annual work program.
The aim of the investigation is to develop a model for determining service-related standard annual profiles (Musterjahresganglinien) to support annual work planning. For this purpose, 260 individual annual profiles were available for each service in the service area of green maintenance.
As a result of the investigation, the service-related standard annual profile is determined in four steps: first, checking the data quality; second, a correlation analysis; third, a technical review of the service characteristics; and fourth, determining the service-related standard annual profile from the remaining service-related annual profiles.
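The correlation-analysis step can be sketched as a Pearson correlation between two annual profiles; a profile that correlates strongly with the others is kept for the standard profile. The two monthly profiles below are invented for illustration:

```python
# Pearson correlation coefficient between two annual work profiles,
# computed from scratch (no external libraries needed).
def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Two invented mowing profiles (hours per month, Jan to Dec):
profile_a = [0, 0, 2, 6, 10, 12, 12, 10, 6, 2, 0, 0]
profile_b = [0, 1, 3, 7, 11, 13, 12, 9, 5, 1, 0, 0]

r = pearson(profile_a, profile_b)   # close to 1 -> similar pattern
```

A high r indicates that the two profiles follow the same seasonal pattern; profiles with low correlation would be flagged for the technical review in step three.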
What is Afghan music, and what is its contemporary state? This question seems important to ask considering the country's conflict-ridden recent past, which has particularly affected cultural expressions such as music. These four articles explore diverse perspectives on Afghan music. From the traditional music of the Afghan rubab and its masters to the popular music of the Afghan-German hip-hop producer "Farhot", they offer various insights into phenomena barely covered in academic work so far. The collection provides glimpses into the variety of the music of Afghanistan and the Afghan diaspora and helps shape Western views of the country's music into more diverse perspectives. We move further away from the bias of the majority of media representations, which predominantly show the conflict-ridden sides of Afghanistan, while at the same time avoiding a limiting and narrow view of Afghanistan as having solely a musical tradition located in the past. These essays go beyond this and outline that, apart from a rich tradition, there are also present-day forms of musical expression. We move from "Tradition to Television" and beyond, exploring views on the future of music connected to Afghanistan.
A vast number of existing buildings were constructed before the development and enforcement of seismic design codes, and they run the risk of being severely damaged under seismic excitation. This poses not only a threat to human life but also affects the socio-economic stability of the affected area. It is therefore necessary to assess the present vulnerability of such buildings in order to make an informed decision about risk mitigation through seismic strengthening techniques such as retrofitting. However, it is not feasible, economically or in a timely manner, to inspect, repair, and augment every old building on an urban scale. As a result, reliable rapid screening methods, notably Rapid Visual Screening (RVS), have garnered increasing interest among researchers and decision-makers alike. In this study, the effectiveness of five different machine learning (ML) techniques in vulnerability prediction applications has been investigated. Damage data from four different earthquakes, in Ecuador, Haiti, Nepal, and South Korea, have been utilized to train and test the developed models. Eight performance modifiers have been implemented as variables in a supervised ML setting. The investigations in this paper illustrate that the vulnerability classes assessed by the ML techniques were very close to the actual damage levels observed in the buildings.
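The supervised setup can be sketched as follows, with invented data: each building is an 8-component vector of performance modifiers together with an observed damage grade, and a simple nearest-neighbour rule stands in for the five ML techniques compared in the study:

```python
# 1-nearest-neighbour classification of a building's damage grade
# from its performance-modifier vector (all data invented).
def nearest_neighbour_class(train, x):
    """train: list of (features, label); x: feature vector."""
    def dist2(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    _, label = min(train, key=lambda fx: dist2(fx[0], x))
    return label

train = [  # 8 binary modifiers per building, damage grade 0-3
    ((1, 0, 0, 1, 0, 1, 0, 0), 0),
    ((1, 1, 0, 1, 1, 1, 0, 1), 2),
    ((0, 1, 1, 1, 1, 1, 1, 1), 3),
]
grade = nearest_neighbour_class(train, (1, 1, 0, 1, 1, 1, 0, 0))
```

The query building matches the second training example most closely and inherits its damage grade; the real study replaces this rule with trained ML models and field damage data.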
The growing complexity of modern practical problems puts high demands on mathematical modelling. Given that various models can be used to model one physical phenomenon, model comparison and model choice are becoming particularly important. The methods for model comparison and model choice typically used in practical applications today are computation-based, and thus time-consuming and computationally costly. It is therefore necessary to develop approaches for working abstractly, i.e. without computations, with mathematical models. An abstract description of mathematical models can be achieved with the help of abstract mathematics, implying a formalisation of models and the relations between them. In this paper, a category-theory-based approach to mathematical modelling is proposed: mathematical models are formalised in the language of categories, relations between the models are formally defined, and several practically relevant properties are introduced at the level of categories. Finally, an illustrative example is presented, underlining how the category-theory-based approach can be used in practice. Furthermore, all constructions presented in the paper are discussed from a modelling point of view by making the link to concrete modelling scenarios explicit.
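As a toy sketch of the categorical viewpoint (not the paper's formalism): if models are objects and model refinements are morphisms, then composition of refinements must be associative and admit identities, the two axioms any category must satisfy. The "models" below are invented placeholders:

```python
# Morphisms as plain functions; composition and identity as the
# categorical operations they must support.
compose = lambda g, f: (lambda x: g(f(x)))
identity = lambda x: x

# Hypothetical refinement maps between model state spaces:
coarse_to_mid = lambda s: s * 2      # refine a coarse-model state
mid_to_fine   = lambda s: s + 1      # refine a mid-model state

h = compose(mid_to_fine, coarse_to_mid)    # coarse -> fine directly
assert h(3) == 7                           # (3 * 2) + 1
assert compose(identity, h)(3) == h(3)     # left identity law
assert compose(h, identity)(3) == h(3)     # right identity law
```

Reasoning at this level, comparing models via the existence and properties of such morphisms, is what allows model choice without running any simulations.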
The contribution explores the migratory situation in the Balkans, and more specifically in the so-called Refugee District in Belgrade, from a spatial perspective. By visualizing the areas of tension in the Refugee District, the city of Belgrade, Serbia, and Europe, it aims to disentangle the political and socio-spatial levels that lead to the stuck situation of in-betweenness at the gates of the European Union.
Buzzwords such as cooperation, co-production, and collaboration are very much in vogue in the planning sciences and architecture. Rarely, however, is it set out what distinguishes these kinds of collaboration between civil-society actors on the one hand and state actors from politics and administration on the other from ordinary forms of participation and citizen involvement. Michael Ziehl's book is different: using a case study, he succeeds in tracing in detail the intensive collaboration between activists around Hamburg's Gängeviertel and various municipal institutions, thereby making the qualitative difference between cooperation and participation comprehensible.
Through international refugee movements along the so-called Balkan route, a so-called Refugee District has emerged in Serbia's capital Belgrade in recent years. In the context of migration and flight, numerous fields of tension become visible at different spatial and political levels. For refugees, these create a situation marked by standstill, hopelessness, control, danger, and displacement. However, the multi-layeredness and diversity of the various actors who hold power over the situation of refugees on the Balkan route also give rise to niches, forms of resistance, and the possibility of (new) alliances. In this way, a collective practice of non-movement emerges, in resistance against oppression and for global freedom of movement.
In this paper we present the theoretical background for a coupled analytical-numerical approach to modelling crack propagation in two-dimensional bounded domains. The goal of the coupled analytical-numerical approach is to obtain the correct solution behaviour near the crack tip with the help of an analytical solution constructed using tools of complex function theory, and to couple it continuously with the finite element solution in the region far from the singularity. In this way, crack propagation can be modelled without remeshing. Possible directions of crack growth can be calculated through the minimization of the total energy, composed of the potential energy and the dissipated energy based on the energy release rate. Within this setting, an analytical solution of a mixed boundary value problem based on complex analysis and conformal mapping techniques is presented for a circular region containing an arbitrary crack path. More precisely, the linear elastic problem is transformed into a Riemann-Hilbert problem in the unit disk for holomorphic functions. Utilising the advantages of the analytical solution in the region near the crack tip, the total energy can be evaluated within short computation times for various crack kink angles and lengths, leading to a potentially efficient way of performing the minimization procedure. To this end, the paper presents the general strategy of the new coupled approach to crack propagation modelling. Additionally, we discuss obstacles in the way of a practical realisation of this strategy.
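The direction-selection step can be sketched as a minimization of total energy over candidate kink angles. The energy functional below is a made-up surrogate, standing in for the fast analytical evaluation the paper describes:

```python
import math

# Invented surrogate for the total energy as a function of the crack
# kink angle: potential energy plus dissipated energy. In the paper
# this quantity comes from the analytical near-tip solution.
def total_energy(kink_angle):
    potential = 1.0 - 0.5 * math.cos(kink_angle)   # fake potential term
    dissipated = 0.1 * kink_angle ** 2             # fake dissipation term
    return potential + dissipated

# Scan candidate kink angles and pick the energy-minimizing one.
angles = [math.radians(a) for a in range(-60, 61, 5)]
best = min(angles, key=total_energy)
```

Because the analytical solution makes each `total_energy` evaluation cheap, such a scan (or a proper 1-D optimizer) over angles and crack-increment lengths becomes computationally viable at every propagation step.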
In this work, extensive reactive molecular dynamics simulations are conducted to analyze nanopore creation by nanoparticle impact on single-layer molybdenum disulfide (MoS2) in the 1T and 2H phases. We also compare the results with a graphene monolayer. In our simulations, the nanosheets are exposed to a spherical rigid carbon projectile with high initial velocities ranging from 2 to 23 km/s. Results for the three different structures are compared to identify the most critical factors in the perforation and the resistance force during impact. To analyze perforation and impact resistance, the kinetic energy and displacement time history of the projectile, as well as the perforation resistance force on the projectile, are investigated.
Interestingly, although the elastic modulus and tensile strength of graphene are almost five times higher than those of MoS2, the results demonstrate that the 1T and 2H MoS2 phases are more resistant to impact loading and perforation than graphene. For the MoS2 nanosheets, we find that the 2H phase is more resistant to impact loading than its 1T counterpart.
Our reactive molecular dynamics results highlight that, in addition to strength and toughness, atomic structure is another crucial factor that can contribute substantially to the impact resistance of 2D materials. The results obtained can help guide experimental setups for nanopore creation in MoS2 or other 2D lattices.
Discrete function theory in the higher-dimensional setting has been in active development for many years. However, the available results focus on the discrete setting for such canonical domains as the half-space, while the case of bounded domains has generally remained unconsidered. This paper therefore presents the extension of higher-dimensional discrete function theory to arbitrary bounded domains in Rn. Along the way, a discrete Stokes formula, a discrete Borel-Pompeiu formula, and discrete Hardy spaces for general bounded domains are constructed. Finally, several discrete Hilbert problems are considered.
Conventional superplasticizers based on polycarboxylate ether (PCE) show an intolerance to clay minerals due to the intercalation of their polyethylene glycol (PEG) side chains into the interlayers of the clay minerals. An intolerance to strongly basic media is also known. This makes PCE an unsuitable choice as a superplasticizer for geopolymers. Bio-based superplasticizers derived from starch have shown effects comparable to PCE in cementitious systems. The aim of the present study was to determine whether starch superplasticizers (SSPs) could be a suitable additive for geopolymers by carrying out basic investigations of slump, hardening, compressive and flexural strength, shrinkage, and porosity. Four SSPs were synthesized, differing in charge polarity and specific charge density. Two conventional PCE superplasticizers, differing in molecular structure, were also included in the study. The results revealed that the SSPs improved the slump of a metakaolin-based geopolymer (MK-geopolymer) mortar, while the PCEs investigated showed no improvement. The impact of the superplasticizers on early hardening (up to 72 h) was negligible. All samples showed less linear shrinkage over the course of 56 days than the reference. Compressive strengths of the SSP specimens tested after 7 and 28 days of curing were comparable to the reference, while PCE led to a decline. The SSPs had a small impact on porosity, with a shift toward the formation of more gel pores, while PCE caused an increase in porosity. Throughout this research, SSPs were identified as promising superplasticizers for MK-geopolymer mortar and concrete.
The amount of styrene acrylate copolymer (SA) particles adsorbed on cementitious surfaces at the early stage of hydration was quantitatively determined using three different methodological approaches: the depletion method, visible spectrophotometry (VIS), and thermogravimetry coupled with mass spectrometry (TG–MS). Considering the advantages and disadvantages of each method, including the sample preparation each requires, the results for four polymer-modified cement pastes, varying in polymer content and cement fineness, were evaluated.
Significant discrepancies in the adsorption degrees were observed. Significantly lower amounts of adsorbed polymer tended to be identified using TG–MS than with the depletion method, with spectrophotometrically determined values lying between these extremes. This tendency was found for three of the four cement pastes examined and originates in sample preparation and methodological limitations.
The main influencing factor is the distortion of the polymer concentration in the liquid phase during centrifugation, caused by interactions at the interface between sediment and supernatant. The newly developed method, using TG–MS for the quantification of SA particles, proved suitable for dealing with these issues: instead of the fluid phase, the sediment is examined for its polymer content, on which the influence of centrifugation is considerably lower.
Hans Ruin: Being with the Dead—Burial, ancestral politics, and the roots of historical consciousness
(2020)
How can society be thought of as something in which the living and the dead interact throughout history? In Being with the Dead: Burial, Ancestral Politics, and the Roots of Historical Consciousness, Hans Ruin turns to the relationship between the living and the dead as well as to ‘historical consciousness’. His point of departure is the expression ‘being with the dead’ (Mitsein mit dem Toten), an existential-ontological term that Martin Heidegger (1962: 282) coined rather en passant and that has so far received hardly any consideration. For Ruin, it now forms the starting point for his “expanded phenomenological social ontology” (p. XI). By illuminating history and historical consciousness through the category ‘being with the dead,’ he gains remarkable insights into the meaning of ancestrality. Concerning ‘necropolitics,’ Ruin shows that the political space includes the living as well as the dead, and how both constitute it. The foci of his considerations are the human sciences, above all sociology, anthropology, archaeology, philology, and history. Ruin’s book aims at a “metacritical thanatology,” which he elaborates as “an exploration of the social ontology of being with the dead mediated through critical analyses of the human-historical sciences themselves” (p. XII). Across a total of seven chapters, he succeeds impressively in emphasizing the political and ethical importance of a scientific gaze that cultivates the interaction of the living and the dead.
In the monitoring of large structures, the main issues are limited time, high costs, and handling the large amounts of data involved. Methods from the field of optimal design of experiments are useful for reducing and managing these. Having optimal experimental designs at hand before conducting any measurements leads to a highly informative measurement concept in which the sensor positions are optimized for minimal errors in the structure's model. To reduce computational time, a combined two-step approach using the Fisher information matrix and the mean-squared error is proposed, taking different error types into account. The error descriptions contain random/aleatoric and systematic/epistemic portions. Applying this combined approach to a finite element model, using artificial acceleration time measurement data with artificially added errors, yields the optimized sensor positions. These findings are compared to results from laboratory experiments on the modeled structure, a tower-like structure represented by a hollow pipe acting as a cantilever beam. In conclusion, the combined approach leads to a sound experimental design that gives a good estimate of the structure's behavior and model parameters without the need for preliminary measurements for model updating.
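The core of such a Fisher-information-based sensor placement can be sketched in a few lines; the greedy D-optimality criterion and the polynomial mode shapes below are illustrative assumptions, not the procedure from the paper:

```python
import numpy as np

def greedy_d_optimal(Phi, n_sensors):
    """Greedily select sensor rows of the mode-shape matrix Phi that
    maximize log det(Phi_s^T Phi_s), a D-optimality criterion on the
    Fisher information matrix of the modal coordinates."""
    candidates = list(range(Phi.shape[0]))
    chosen = []
    for _ in range(n_sensors):
        best, best_val = None, -np.inf
        for c in candidates:
            rows = chosen + [c]
            F = Phi[rows].T @ Phi[rows]  # Fisher information matrix
            _, logdet = np.linalg.slogdet(F + 1e-12 * np.eye(F.shape[0]))
            if logdet > best_val:
                best, best_val = c, logdet
        chosen.append(best)
        candidates.remove(best)
    return sorted(chosen)

# 20 candidate positions along a cantilever, 3 crude polynomial mode shapes
x = np.linspace(0.05, 1.0, 20)[:, None]
Phi = np.hstack([x**2, x**3, x**4])
print(greedy_d_optimal(Phi, 4))
```

A real application would use mode shapes from the finite element model and weight the information matrix according to the aleatoric/epistemic error model.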
Few studies have investigated how search behavior affects complex writing tasks. We analyze a dataset of 150 long essays whose authors searched the ClueWeb09 corpus for source material, while all querying, clicking, and writing activity was meticulously recorded. We model the effect of search and writing behavior on essay quality using path analysis. Since the boil-down and build-up writing strategies identified in previous research have been found to affect search behavior, we model each writing strategy separately. Our analysis shows that the search process contributes significantly to essay quality through both direct and mediated effects, while the author's writing strategy moderates this relationship. Our models explain 25–35% of the variation in essay quality through rather simple search and writing process characteristics alone, which has implications for how search engines could personalize result pages for writing tasks. Authors' writing strategies and associated searching patterns differ, producing differences in essay quality. In a nutshell: essay quality improves if search and writing strategies harmonize—build-up writers benefit from focused, in-depth querying, while boil-down writers fare better with a broader and shallower querying strategy.
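The direct-versus-mediated decomposition that path analysis yields can be illustrated on synthetic data; the variable names and coefficients below are invented for illustration and are not the study's model:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Synthetic stand-ins: querying depth X, use of source material M,
# essay quality Y, with M mediating part of X's effect on Y.
X = rng.normal(size=n)
M = 0.6 * X + rng.normal(scale=0.8, size=n)            # path a: X -> M
Y = 0.3 * X + 0.5 * M + rng.normal(scale=0.7, size=n)  # paths c' and b

def ols_coefs(y, *cols):
    """Least-squares slopes (intercept dropped)."""
    A = np.column_stack([np.ones_like(y)] + list(cols))
    return np.linalg.lstsq(A, y, rcond=None)[0][1:]

(a,) = ols_coefs(M, X)            # X -> M
c_prime, b = ols_coefs(Y, X, M)   # direct effect and M -> Y given X

print("direct effect  :", round(c_prime, 2))
print("indirect effect:", round(a * b, 2))
print("total effect   :", round(c_prime + a * b, 2))
```

The indirect effect is the product of the two mediated paths; full path analysis generalizes this to several simultaneous regression equations.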
This article focuses on the research and development of new cellulose ether derivatives as innovative superplasticizers for mortar systems. Several synthetic strategies were pursued to obtain new compounds and to study their properties in cementitious systems as new bio-based additives. The new water-soluble admixtures were synthesized from a complex carboxymethylcellulose-based backbone that was first hydrolyzed and then sulfo-ethylated in the presence of sodium vinyl sulphonate. Starting from a complex biopolymer widely known as a thickening agent was very challenging; the intended goal could be achieved only by varying the hydrolysis times and temperatures of the reactions. The obtained derivatives showed different molecular weights (Mw) and anionic charges on their backbones. An improvement in the shear stress and dynamic viscosity values of CEM II 42.5R cement was observed for the samples obtained with longer, higher-temperature hydrolysis and sulfo-ethylation. Investigations into the chemical nature of the pore solution, calorimetric studies, and adsorption experiments clearly showed the ability of the carboxymethyl cellulose superplasticizer (CMC SP) to interact with cement grains and to influence hydration processes within a 48-h time window, causing a delay in the hydration reactions of the samples. The fluidity of the cementitious matrices was ascertained through slump tests, and preliminary studies of the mechanical and flexural strength of the hardened mortar formulated with the new ecological additives yielded promising values. Finally, computed tomography (CT) images completed the investigation of the pore network structure of the hardened specimens, highlighting their promising pore structure.
Compiling and disseminating information about incidents and disasters are key to disaster management and relief. But due to inherent limitations of the acquisition process, the required information is often incomplete or missing altogether. To fill these gaps, citizen observations spread through social media are widely considered to be a promising source of relevant information, and many studies propose new methods to tap this resource. Yet, the overarching question of whether and under which circumstances social media can supply relevant information (both qualitatively and quantitatively) still remains unanswered. To shed some light on this question, we review 37 disaster and incident databases covering 27 incident types, compile a unified overview of the contained data and their collection processes, and identify the missing or incomplete information. The resulting data collection reveals six major use cases for social media analysis in incident data collection: (1) impact assessment and verification of model predictions, (2) narrative generation, (3) recruiting citizen volunteers, (4) supporting weakly institutionalized areas, (5) narrowing surveillance areas, and (6) reporting triggers for periodical surveillance. Furthermore, we discuss the benefits and shortcomings of using social media data for closing information gaps related to incidents and disasters.
“How to understand the interaction between urban space and social processes” is a significant question in urban studies. To answer that, the city needs to be recognized as both a physical and a social entity and urban theory and practice need to connect these (Hillier 2007). The present research aims to re-examine the complex correlation between spatial and social inequality manifestations in the city of Tehran regarding the concept of segregation.
It examines the causes and consequences of segregation in Tehran, provides insight into the concepts of socio-spatial segregation and neighborhood effects, and creates a link between them. First, I argue when, where, and for whom spatial location affects the chances of social networks in Tehran. Then, I discuss how neighborhood effects can emerge via social network mechanisms and thus affect the perceptions of residents in the neighborhoods.
Entrepreneurship and start-up activities are seen as a key response to recent upheavals in the media industry: Newly founded ventures can act as important drivers for industry transformation and renewal, pioneering new products, business models, and organizational designs (e.g. Achtenhagen, 2017; Buschow & Laugemann, 2020).
In principle, media students represent a crucial population of nascent entrepreneurs: individuals who will likely become founders of start-ups (Casero-Ripollés et al., 2016). However, their willingness to start a new business is generally considered to be rather low (Goyanes, 2015), and for journalism students, the idea of innovation tends to be conservative, following traditional norms and professional standards (Singer & Broersma, 2020). In a sample of Spanish journalism students, López-Meri et al. (2020) found that one of the main barriers to entrepreneurial intentions is that students feel they lack knowledge and training in entrepreneurship.
In the last 10 years, a wide variety of entrepreneurship education courses have been set up in media departments of colleges and universities worldwide.
These programs have been designed to sensitize and prepare communications, media, and journalism students to think and act entrepreneurially (e.g. Caplan et al., 2020; Ferrier, 2013; Ferrier & Mays, 2017; Hunter & Nel, 2011). Entrepreneurial competencies and practices not only play a crucial role for start-ups but, in times of digital transformation, are increasingly sought after by legacy media companies as well (Küng, 2015).
At the Department of Journalism and Communication Research, Hanover University of Music, Drama and Media, Germany, we have been addressing these developments with the “Media Entrepreneurship” program. The course, established in 2013, aims to provide fundamental knowledge of entrepreneurship as well as to promote students' entrepreneurial thinking and behavior. This article presents the pedagogical approach of the program and investigates learning outcomes. By outlining and evaluating the Media Entrepreneurship program, this article aims to promote good practices of entrepreneurship education in communications, media, and journalism, and to reflect on the limitations of such programs.
The main purpose of this thesis is to ensure the safe demolition of old guyed antenna masts located in different parts of Germany. The major problem in demolishing these masts is that they can fall in an unexpected direction due to buckling. The objective of this thesis is the development of numerical models using the finite element method (FEM) and the assurance of a controlled collapse by devising different timing setups for the detonation of the explosives that cut the guy cables. The results of this thesis will help avoid unexpected outcomes during demolition and prevent the risk of a mast collapsing onto nearby structures.
The computational analysis of argumentation strategies is substantial for many downstream applications. It is required for nearly all kinds of text synthesis, writing assistance, and dialogue-management tools. While various tasks have been tackled in the area of computational argumentation, such as argumentation mining and quality assessment, the task of the computational analysis of argumentation strategies in texts has so far been overlooked.
This thesis approaches the analysis of the strategies manifested in persuasive argumentative discourse, which aims at persuasion, as well as in deliberative argumentative discourse, which aims at consensus. To this end, the thesis presents a novel view of argumentation strategies for these two goals. Based on this view, new models of pragmatic and stylistic argument attributes are proposed, new methods for the identification of the modelled attributes are developed, and a new set of strategy principles in texts according to the identified attributes is presented and explored.
Overall, the thesis contributes to the theory, data, method, and evaluation aspects of the analysis of argumentation strategies. The models, methods, and principles developed and explored in this thesis can be regarded as essential for promoting the applications mentioned above, among others.
Surveillance practices and technologies are omnipresent in today's world, and it has arguably become impossible to imagine it without them. Whether CCTV systems, biometrics, or data mining: our society finds itself in a constant mode of surveillance that extends far beyond any bounded space or time frame. Surveillance happens everywhere (in private, at the workplace, in cyberspace) and covers everything: interactions, utterances, behavior. Vast amounts of data are collected, structured, combined, bought, and sold.
This mode represents more than a mere reissue of the Bentham–Foucauldian panopticon. The current mode of surveillance, while retaining informational asymmetry as its supporting pillar, serves not only discipline but, far more, control, which acts not primarily by negative sanction but by positive performance enhancement: the aim is not to punish individuals and forbid certain behavior, but to bring them, through reward, interaction, and playful elements, to behave in the desired way and ultimately to monitor themselves. Control thus becomes the central arena of the exercise of power, which proceeds through observing, storing, evaluating, and sorting. These processes leave no free space or room for ambiguity; they realize the dictatorship of the hard edge, of classification and categorization without shades. Power itself is in continuous flux; it is ubiquitous, yet hard to localize. It no longer operates under the sign of a pseudo-sacral central authority but is carried by diverse actors and assemblages. The practices of self-monitoring it implies, which in cultural history likewise carry religious or at least philosophical connotations, are the new rituals of seeing and being seen.
In the age of electronic data technologies there are diverse agents of surveillance. Of particular interest are wearables, because they work intimately, affectively, and haptically and thus, beyond seeing and being seen, bring touching and being touched, and with them the re-regulation of closeness and distance, into play. They inscribe themselves into a tradition of measurement whose origins reach back at least to the 19th century, but differ from it in their intensity and sensuousness.
Utilizing Modern FIB/SEM Technology and EDS for 3D Imaging of Hydrated Alite and its Pore Space
(2021)
The exploration of cementitious materials using scanning electron microscopes (SEM) is mainly done on fractured or polished surfaces. This yields high-resolution 2D images that can be combined with EDX and EBSD to unveil details of the microstructure and composition of materials. Nevertheless, this does not provide quantitative insight into the three-dimensional fine structure of, for example, C-S-H phases.
Focused ion beam (FIB) technology can cut a block of material into thin layers of less than 10 nm. This yields a volume of 1000 μm³ with a voxel resolution of down to 4 × 4 × 10 nm³. The results can be combined with simultaneously acquired EDX data to improve image segmentation. The results of the investigation demonstrate that it is possible to obtain a close-to-native 3D visualisation of the spatial distribution of unreacted C3S, C-S-H, and CH. Additionally, an optimized preparation method allows us to quantify the fine structure of C-S-H phases (length, aspect ratio, …) and the pore space.
Chemical glass frosting processes are widely used to create visually attractive glass surfaces. A commonly used frosting bath mainly contains ammonium bifluoride (NH4HF2) mixed with hydrochloric acid (HCl). The frosting process consists of several baths: first, a preliminary bath to clean the object; second, the frosting bath, which etches the rough, light-scattering structure into the glass surface; and finally, the washing baths to clean the frosted object. This is where the constituents of the preceding steps accumulate and have to be filtered from the sewage. In the present contribution, phosphoric acid (H3PO4) was used as a substitute for HCl to reduce the amount of ammonium (NH4+) and chloride (Cl−) dissolved in the waste water. In combination with magnesium carbonate (MgCO3), it allows the ammonium in the sewage to be precipitated as ammonium magnesium phosphate (MgNH4PO4). However, a trivial replacement of HCl by H3PO4 within the frosting process causes extensive frosting errors, such as inhomogeneous size distributions of the structures or domains that are not fully covered by these structures. By modifying the composition of the preliminary bath, it was possible to improve the frosting result considerably. To determine the optimal composition of the preliminary bath, a semi-automatic evaluation method was developed that makes an objective comparison of the resulting surface quality possible.
This interdisciplinary dissertation is situated within international research on heritage values, new approaches to cultural and knowledge mediation around built monuments, and artistic-ethnographic research on and with monuments.
The first part of the thesis is devoted to monuments and heritage conservation in the context of artistic and social-scientific alliances. Its starting point is the observation that heritage conservation knows a great deal about monuments but hardly anything about their reception by the broad public. The central question is how practices from the fine arts and working methods from cultural anthropology can, or even must, enrich the discipline of heritage conservation.
The second part is an empirical study in which the popular perception of monuments is explored qualitatively. The palace and manor of Schloss Bedheim in rural southern Thuringia serves as the concrete study site. Visitors' reactions are prompted by three artistic interventions and then documented and evaluated in an ethnographically open manner.
On this basis, modes of access to the monument are identified. While most visitors perceive the monument as "work", some fall into "dreaming" or "remembering"; the ensemble is "enjoyed" as an authentic and aesthetic resource, or access is found through the spontaneous "explaining" of constructional or structural situations. For others, the visit means "participating" in a process. Schloss Bedheim is valued as a place of constant change. In visitors' perception, aspects of admiration intermingle with aspects of distancing, with one's own everyday world and home serving as points of reference. Schloss Bedheim thus becomes a space of imagination, a recharging station, and a gladly visited world of problems.
The results of the thesis lie in two fields of insight. On a methodological level, it shows how deep disciplinary expertise in heritage conservation can be combined with actual contact with the public, and how the social fabric around built monuments can thus be investigated qualitatively. It also becomes clear that artistic interventions trigger conversations, serve as contact surfaces to the everyday world, and so lead to a many-sided engagement with monuments.
On a substantive level, the thesis provides insights into how monuments are perceived. Besides the modes of access mentioned above, it reveals the existence and significance of a regionally networked perception of monuments. Furthermore, the results show that opening built monuments as, and within, a process holds untapped potential, and it is suggested that this be given a greater role in future conservation concepts. The vision of a "compendium of accesses" is developed, with whose help an enormous body of knowledge could be gathered about the roles and meanings that built monuments hold in our society.
The effects of environmental destruction fueled by capitalism are becoming ever more clearly visible. Our society is now confronted with the fact that its cultural identity, and also its prosperity, are closely tied both to consumption and economic growth and to the health of nature. It seems a fitting moment to change perspective and give a new form of growth a chance. Fungi are a naturally occurring resource, independent of region, that can be cultivated and processed locally without burdening the environment. Fungi are climate-friendly, waste-avoiding, and can be integrated into existing natural cycles. In short, fungi are cool, but not many people know that. That should change. With mycelial growth against the growth paradigm.
Corviale en dérive
(2021)
Die Haltungen des Architekten Luigi Snozzi. Untersucht am Beispiel des Projektes Monte Carasso
(2021)
What attitude speaks from the works of architects? Can values and instructions for action be read from walls and plans? Luigi Snozzi's designs for Monte Carasso are examined in this thesis as an exemplary case. They testify to the responsibility that every architect bears for the environment in which he or she builds.
In the southwest of Weimar stands a vacant building complex known in the city today as the Funkhaus, above all as a venue for student-organized parties. Yet the building did not originate as a radio station: it was built between 1937 and 1944 as a prestige project of the Nietzsche cult in its National Socialist radicalization. Drawing on archival materials and expert interviews, this project work examines the history of use of the former "Nietzsche-Gedächtnishalle" and raises the question of whether, and how, such a Nazi building can be used as a party venue.
This thesis examines urban partition in Nicosia, the capital of Cyprus, and how its changing roles and shifting perceptions in a post-conflict setting reflect power relations and their constant renegotiation. Nicosia was officially divided in 1974 in the aftermath of an eighteen-year conflict between the island’s Turkish- and Greek-Cypriot communities. As a result, a heavily militarized Buffer Zone, established as an emergency measure against the perpetuation of intercommunal violence, has been cutting through its historic centre ever since.
This thesis departs from a genuine interest in the material and ideational dimensions of urban partition. How is it constructed, not merely in physical terms but in the minds of the societies affected by conflict? How is it established in official and everyday discourses? What mechanisms have been developed to maintain it and make it an inseparable part of the urban experience? Moreover, taking into account the consensus in the relevant literature on the imperative of its removal, this thesis inquires into the relevance of peace agreements to overcoming urban partition. For this purpose, it also looks at narratives and practices that have attempted to contest it.
The examples examined in this thesis offer rich analytical moments for understanding Nicosia’s Buffer Zone as a dynamic social construct accommodating multiple visions of and for the city. Its space ‘in between’ facilitates encounters between various actors and accommodates new meanings, socio-spatial practices, and diverse imaginaries. In this sense, urban partition is explored as a phenomenon that transcends scales as well as temporalities, entwining past, present, and future.
This work presents a robust status monitoring approach for detecting damage in cantilever structures based on logistic functions. A stochastic damage identification approach based on changes of eigenfrequencies is also proposed. The proposed algorithms are verified using catenary poles of electrified railway tracks. The proposed damage features overcome the limitation of frequency-based damage identification methods available in the literature, which can detect damage in structures only to Level 1. Changes in the eigenfrequencies of cantilever structures are sufficient to identify possible local damage at Level 3, i.e., to cover damage detection, localization, and quantification. The proposed algorithms identified the damage with relatively small errors, even at a high noise level.
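A logistic damage-status function driven by an eigenfrequency drop can be sketched minimally; the steepness and threshold values below are illustrative assumptions, not the calibrated values from the paper:

```python
import numpy as np

def damage_status(f_ref, f_now, k=200.0, threshold=0.02):
    """Map the relative eigenfrequency drop to a damage score in (0, 1)
    via a logistic function. k sets the steepness; threshold is the
    relative drop (here 2 %) at which the score is 0.5."""
    drop = (f_ref - f_now) / f_ref
    return 1.0 / (1.0 + np.exp(-k * (drop - threshold)))

print(damage_status(3.20, 3.19))  # tiny drop: score near 0 (healthy)
print(damage_status(3.20, 3.00))  # large drop: score near 1 (damaged)
```

The smooth transition between the healthy and damaged states is what makes such an indicator robust to measurement noise near the threshold.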
In this first working paper of the research project "Städtische Ko-Produktion von Teilhabe und Gemeinwohl. Aushandlungsprozesse zwischen zivilgesellschaftlichen Akteuren und kommunalen Verwaltungen" (urban co-production of participation and the common good: negotiation processes between civil-society actors and municipal administrations), we define the central terms we use and explain some basic assumptions. Following definitions of the terms welfare regime, participation, common good, governance, civil society, and social movements, we analyze today's crisis of participation, which we take as the starting point for examining our case studies.
The working paper serves both internal self-clarification within the project and exchange with other researchers in the funding line "Teilhabe und Gemeinwohl" of the German Federal Ministry of Education and Research (BMBF), as well as, beyond that, with projects devoted to similar topics.
This thesis makes a scholarly contribution to exploring the possible uses of real-estate portfolio management for the public administrations of museum palaces in Germany. In particular, a model for steering investments specific to their organization is developed, and its applicability in practice is discussed with experts.
Biofeedback constitutes a well-established, non-invasive method for voluntarily intervening in emotional processing by means of cognitive strategies. However, treatment durations exhibit strong inter-individual variation, and first successes can often be achieved only after a large number of sessions. Sham feedback, i.e., feedback that does not correspond to the participant's actual state, constitutes a rather untapped approach. The current study aims to gain insights into the mechanisms of sham feedback processing in order to support new techniques in biofeedback therapy. We carried out two experiments and applied different types of sham feedback based on skin conductance responses and pupil size changes during affective processing. The results indicate that standardized but context-sensitive sham signals based on skin conductance responses exert a stronger influence on emotional regulation than individual sham feedback from ongoing pupil dynamics. Also, sham feedback should forego unnatural signal behavior to avoid irritation and skepticism among participants. Altogether, a reasonable combination of stimulus features and sham feedback characteristics makes it possible to considerably reduce actual bodily responsiveness within a single session.
This study proposes an efficient Bayesian, frequency-based damage identification approach to identify damage in cantilever structures with an acceptable error rate, even at high noise levels. The catenary poles of electric high-speed train systems were selected as a realistic case study covering the objectives of this study. Compared to other frequency-based damage detection approaches described in the literature, the proposed approach is able to detect damage in cantilever structures at higher levels of damage identification, namely identifying both the damage location and severity, using a low-cost structural health monitoring (SHM) system with a limited number of sensors, for example accelerometers. The integration of Bayesian inference as a stochastic framework makes it possible to exploit data fusion in merging the informative data from multiple damage features, which increases the quality and accuracy of the results. The findings provide the decision-maker with the information required to manage maintenance, repair, or replacement procedures.
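The Bayesian step can be illustrated with a one-parameter grid sketch; the forward model, noise level, and measured value below are invented for illustration and do not come from the study:

```python
import numpy as np

def freq_model(s):
    """Hypothetical forward model: first eigenfrequency (Hz) as a
    function of damage severity s in [0, 1] at a fixed location."""
    return 3.20 * (1.0 - 0.15 * s)

severities = np.linspace(0.0, 1.0, 101)                   # hypothesis grid
prior = np.full(severities.size, 1.0 / severities.size)   # flat prior

measured, sigma = 3.05, 0.02   # measured frequency and assumed noise (Hz)

# Gaussian likelihood of the measurement under each severity hypothesis
likelihood = np.exp(-0.5 * ((measured - freq_model(severities)) / sigma) ** 2)
posterior = prior * likelihood
posterior /= posterior.sum()   # normalize to a probability distribution

map_severity = severities[np.argmax(posterior)]  # maximum a posteriori
print(round(map_severity, 2))
```

With several damage features, the per-feature likelihoods would simply multiply before normalization, which is the data-fusion step the abstract refers to.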
One of the most important subjects of hydraulic engineering is the reliable estimation of the transverse distribution of bed and wall shear stresses in rectangular channels. This study uses Tsallis entropy, genetic programming (GP), and adaptive neuro-fuzzy inference system (ANFIS) methods to assess the shear stress distribution (SSD) in rectangular channels.
To evaluate the results of the Tsallis entropy, GP, and ANFIS models, laboratory observations were used in which shear stress was measured with an optimized Preston tube; this was used to measure the SSD at various aspect ratios in the rectangular channel. To investigate the shear stress percentage, 10 data series with a total of 112 different data points were used. The results of the sensitivity analysis show that the most influential parameter for the SSD in a smooth rectangular channel is the dimensionless parameter B/H, where B is the channel width and H the flow depth. With (b/B) and (B/H) as inputs for the bed, and (z/H) and (B/H) for the wall (b and z denoting the transverse and vertical coordinates), the GP model performed better than the others. Based on the analysis, it can be concluded that the GP and ANFIS algorithms are more effective than the Tsallis entropy-based equations in estimating shear stress in smooth rectangular channels.
Natural Urban Resilience: Understanding general urban resilience through Addis Ababa’s inner city
(2021)
This dissertation describes the urban actors and spatial practices that contribute to natural urban resilience in Addis Ababa’s inner city. Natural urban resilience is a non-strategic, bottom-up, everyday form of general urban resilience: an urban system’s ability to maintain its essential characteristics under any change. This study gains significance by exposing conceptual gaps in the current understanding of general urban resilience and highlighting its unconvincing applicability to African cities. It attains further relevance by highlighting the danger of the ongoing large-scale redevelopment of the inner city. The inner city has formed naturally, and its urban memory, spaces, and social cohesion contribute to the resilience of its primarily low-income population. This thesis argues that the inner city’s demolition poses an incalculable risk of maladaptation to future stresses and shocks for Addis Ababa. The city needs a balanced urban discourse that highlights the inner city’s qualities and suggests feasible urban transformation measures. “Natural Urban Resilience” contributes an empirical study to this debate by identifying those aspects of the inner city that contribute to general resilience and by identifying feasible action areas. The study develops a qualitative research design for a single case study in Addis Ababa. The data are obtained through expert interviews, interviews with residents, and the analysis of street scene photos, which are abstracted using Grounded Theory. In this way, the thesis provides first-time knowledge about who and what generates urban resilience in the inner city of Addis Ababa, and how. Furthermore, the study complements existing theories on general urban resilience. It provides a detailed understanding of the change mechanisms in resilience, of which it identifies four: adaptation, upgrading, mitigation, and resistance.
It also adapts the adaptive cycle, a widely used concept in resilience thinking, conceptually for urban environments. The study concludes that the inner city’s continued redevelopment poses an incalculable threat to the entire city. Therefore, “Natural urban resilience” recommends carefully weighing any intervention in the inner city to promote Addis Ababa’s overall resilience. This dissertation proposes a pattern language for natural urban resilience to support these efforts and to translate the model of natural urban resilience into practice.
While Public-Private Partnership (PPP) is widely adopted across various sectors, its meagre utilisation in the housing sector raises questions. This paper therefore gauges the perspective of stakeholders in the building industry on the application of PPP in various building sectors, including housing. It assesses the performance reliability of PPP for housing by drawing possible take-aways from other sectors. The key stakeholders in the industry bear a high responsibility for informed understanding and decision-making. To this end, a two-tier investigation comprising surveys and expert interviews was conducted with several stakeholders in the PPP industry in Europe, involving the public sector, the private sector, consultants, as well as community/user representatives.
The survey results demonstrated the success rate of PPPs, the major factors important for PPPs such as profitability or end-user acceptability, the prevalent practices and trends in the PPP world, and the majority support expressed for the suitability of PPP for housing. The interviews added more detailed dimensions to the understanding of the PPP industry and its functioning, enabling the formation of a comprehensive outlook. The results present the perspectives, approaches, and experiences of stakeholders with PPP practices, current trends and scenarios, and their take on PPP in housing. This should aid in understanding the challenges the PPP approach faces in housing and enable policymakers and industry stakeholders to make provisions for higher uptake to accelerate housing provision.
In the last two decades, Peridynamics (PD) has attracted much attention in the field of fracture mechanics. One key feature of PD is its nonlocality, which differs fundamentally from the ideas underlying conventional methods such as FEM and meshless methods. However, conventional PD suffers from problems such as the constant horizon, explicit-only algorithms, and hourglass modes. In this thesis, by examining nonlocality closely, we propose several new concepts: the dual-horizon (DH) in PD, the dual-support (DS) in smoothed particle hydrodynamics (SPH), nonlocal operators, and the operator energy functional. Conventional PD (SPH) is incorporated in DH-PD (DS-SPH), which can adopt an inhomogeneous discretization and inhomogeneous support domains; DH-PD (DS-SPH) can thus be viewed as a fundamental improvement on conventional PD (SPH). The dual formulation of PD and SPH allows h-adaptivity while satisfying the conservation of linear momentum, angular momentum, and energy. Developing the concept of nonlocality further, we introduce the nonlocal operator method as a generalization of DH-PD. Combined with the energy functionals of various physical models, the nonlocal forms based on the dual-support concept are derived. In addition, the variation of the energy functional allows an implicit formulation of the nonlocal theory. Finally, we develop the higher order nonlocal operator method, which is capable of solving higher order partial differential equations on arbitrary domains in higher dimensional spaces. Since the concepts were developed gradually, we describe our findings chronologically.
In chapter 2, we develop a DH-PD formulation that admits varying horizon sizes and solves the "ghost force" issue. The concept of the dual-horizon accounts for the unbalanced interactions between particles with different horizon sizes. The formulation fulfills both the balance of linear momentum and the balance of angular momentum exactly for arbitrary particle discretizations. All three peridynamic formulations, namely bond based, ordinary state based, and non-ordinary state based peridynamics, can be implemented within the DH-PD framework. A simple adaptive refinement procedure (h-adaptivity) is proposed, reducing the computational cost. Both two- and three-dimensional examples, including the Kalthoff-Winkler experiment and a plate with branching cracks, are tested to demonstrate the capability of the method.
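For orientation, the bond-based peridynamic equation of motion that the dual-horizon formulation generalizes can be written as follows (standard notation; a schematic sketch rather than the thesis's exact DH-PD equations):

```latex
% Bond-based PD: particle x interacts with neighbours x' inside a
% horizon H_x of radius \delta. In DH-PD each particle additionally
% receives the reaction forces of the particles whose (possibly
% different-sized) horizons contain it.
\rho(\mathbf{x})\,\ddot{\mathbf{u}}(\mathbf{x},t)
  = \int_{H_{\mathbf{x}}}
      \mathbf{f}\!\left(\mathbf{u}'-\mathbf{u},\ \mathbf{x}'-\mathbf{x}\right)
    \,\mathrm{d}V'
  + \mathbf{b}(\mathbf{x},t)
```

With equal horizons, the action and reaction contributions cancel pairwise; with unequal horizons, only the dual-horizon bookkeeping restores the balance of linear and angular momentum, which is what removes the spurious "ghost forces".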
In chapter 3, a nonlocal operator method (NOM) based on the variational principle is proposed for the solution of the waveguide problem in computational electromagnetics. Common differential operators as well as the variational forms are defined within the context of nonlocal operators. The nonlocal formulation allows the tangent stiffness matrix to be assembled with ease, which is necessary for the eigenvalue analysis of the waveguide problem. The formulation is applied to solve the 1D Schrödinger equation, a 2D electrostatic problem, and the differential electromagnetic vector wave equations based on electric fields.
In chapter 4, a general nonlocal operator method is proposed which is applicable to solving partial differential equations (PDEs) of mechanical problems. The nonlocal operator can be regarded as an integral form "equivalent" to the differential form in the sense of a nonlocal interaction model. The variation of a nonlocal operator plays a role equivalent to the derivatives of the shape functions in meshless methods or in the finite element method. Based on the variational principle, the residual and the tangent stiffness matrix can be obtained with ease. The nonlocal operator method is further enhanced with an operator energy functional to satisfy the linear consistency of the field. A highlight of the method is that the functional derived from the nonlocal operators converts the construction of the residual and stiffness matrix into a series of matrix multiplications using the predefined nonlocal operators. The nonlocal strong forms of different functionals can be obtained easily via the concept of support and dual-support. Several numerical examples of different types of PDEs are presented.
In chapter 5, we extend the NOM to a higher order scheme by using a higher order Taylor series expansion of the unknown field. Such a higher order scheme improves on the original NOM of chapters 3 and 4, which can only achieve first-order convergence. The higher order NOM obtains all partial derivatives up to a specified maximal order simultaneously, without resorting to shape functions. The functional based on the nonlocal operators converts the construction of the residual and stiffness matrix into a series of matrix multiplications on the nonlocal operator matrix. Several numerical examples solved in strong form or weak form are presented to show the capabilities of the method.
In chapter 6, we address the fact that the NOM proposed as a particle-based method in chapters 3 to 5 has difficulty in accurately imposing boundary conditions of various orders. In this chapter, we convert the particle-based NOM into a scheme with the interpolation property. The new scheme describes partial derivatives of various orders at a point by the nodes in its support and takes advantage of a background mesh for numerical integration. The boundary conditions are enforced via the modified variational principle. The particle-based NOM can be viewed as a special case of the NOM with interpolation property when nodal integration is used. The scheme based on numerical integration greatly improves the stability of the method; as a consequence, the operator energy functional of the particle-based NOM is not required. We demonstrate the capabilities of the method by solving gradient solid problems and comparing the numerical results with available exact solutions.
In chapter 7, we derive the DS-SPH for solids within the framework of the variational principle. The tangent stiffness matrix of SPH can be obtained with ease and serves as the basis for the present implicit SPH. We propose an hourglass energy functional, which allows the direct derivation of the hourglass force and the hourglass tangent stiffness matrix. The dual-support is involved in all derivations based on variational principles and is automatically satisfied in the assembly of the stiffness matrix. The implementation of the stiffness matrix comprises two steps: the nodal assembly based on the deformation gradient and the global assembly over all nodes. Several numerical examples are presented to validate the method.
Broadband dielectric measurement methods based on a vector network analyzer coupled with a coaxial transmission line cell (CC) and an open-ended coaxial probe (OC) are briefly reviewed and used to investigate the dielectric behavior of two practical geomaterials in the frequency range of 1 MHz to 3 GHz. Kaolin after modified compaction with different water contents is measured using the CC. The results are consistent with previous studies on standardized compacted kaolin and suggest that the dielectric properties at frequencies below 100 MHz are not only a function of water content but also of other soil state parameters, including dry density. The hydration process of a commercial grout is monitored in real time using the OC. It is found that the time-dependent dielectric properties can accurately reveal the different stages of the hydration process. These measurement results demonstrate the practicability of the introduced methods for determining the dielectric properties of soft geomaterials.
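As background, the quantity characterized by both the CC and OC methods is the frequency-dependent complex relative permittivity (standard definitions, not specific to this paper):

```latex
% Complex relative permittivity and loss tangent
\varepsilon_r^{*}(\omega) = \varepsilon_r'(\omega) - j\,\varepsilon_r''(\omega),
\qquad
\tan\delta(\omega) = \frac{\varepsilon_r''(\omega)}{\varepsilon_r'(\omega)}
```

The real part tracks stored energy (and hence, in soils, water content), while the loss factor reflects conduction and relaxation processes, which is why the spectra respond to soil state and to hydration stages.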
This dissertation investigates the interactions between urban form, the allocation of activities, and pedestrian movement in the context of urban planning. The ability to assess the long-term impact of urban planning decisions on what people do and how they get there is of central importance, and various disciplines address this topic. This study focuses on approaches proposed by urban morphologists, urban economists, and transportation planners, each directing attention at a different part of the form-activity-movement interaction. Even though there is no doubt about the advantages of these highly focused approaches, it remains unclear what the cost is of ignoring the effect of some interactions while considering others. The general aim of this dissertation is to empirically test the validity of the individual models and to quantify the impact of this isolationist approach on their precision and bias.
For this purpose, we propose a joint form-activity-movement interaction model and conduct an empirical study in Weimar, Germany. We estimate how urban form and activities affect movement, as well as how movement and urban form affect activities. By estimating these effects both in isolation and simultaneously, we assess the bias of the individual models.
On the one hand, the empirical study results confirm the significance of all interactions suggested by the individual models. On the other hand, we were able to show that when these interactions are estimated in isolation, the resulting predictions are biased. To conclude, we do not question the knowledge brought by transportation planners, urban morphologists, and urban economists. However, we argue that it might be of little use on its own.
We see the relevance of this study as being twofold. On the one hand, we proposed a novel methodological framework for the simultaneous estimation of the form-activity-movement interactions. On the other hand, we provide empirical evidence about the strengths and limitations of current approaches.
Although it is impractical to avert natural disasters, advances in simulation science and seismological studies make it possible to lessen their catastrophic damage. Many urban areas currently contain a large number of structures that are prone to damage by earthquakes, constructed without the guidance of a national seismic code, either before such a code existed or before it was enforced. For instance, in Istanbul, Turkey, a high-seismicity area, around 90% of buildings are substandard, a situation that can be generalized to other earthquake-prone regions of Turkey. The reliability of this building stock with respect to earthquake-induced collapse is currently uncertain. Nonetheless, it is not feasible to perform a detailed seismic vulnerability analysis on each building, as this would be too complicated and expensive. This indicates the necessity of a reliable, rapid, and computationally simple method for seismic vulnerability assessment, commonly known as Rapid Visual Screening (RVS). In the RVS methodology, an observational survey of buildings is performed, and from the data collected during the visual inspection a structural score is calculated, without performing any structural calculations, to determine the expected damage of a building and whether the building needs detailed assessment. Although this method saves time and resources, the subjective, qualitative judgments of the experts who perform the inspection leave the evaluation process dominated by vagueness and uncertainties. The vagueness can be handled adequately through classical fuzzy set theory, but its crisp membership functions do not cover all sorts of uncertainty. In this study, a novel method for the rapid visual hazard safety assessment of buildings against earthquakes is introduced, in which an interval type-2 fuzzy logic system (IT2FLS) is used to cover these uncertainties.
In addition, the proposed method makes it possible to evaluate the earthquake risk of a building by considering factors related to the building's importance and exposure. A smartphone app prototype of the method is introduced. For validation, two case studies were selected, and the results of the analysis demonstrate the robustness and efficiency of the proposed method.
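The key ingredient distinguishing an IT2FLS from a classical (type-1) fuzzy system is that membership at a point is an interval bounded by lower and upper membership functions, the "footprint of uncertainty". A minimal sketch, assuming Gaussian membership functions with uncertain standard deviation (the paper's actual membership functions and rule base are not specified here):

```python
import numpy as np

def gauss(x, c, s):
    # Gaussian membership function with center c and spread s
    return np.exp(-0.5 * ((x - c) / s) ** 2)

def it2_membership(x, c, s_lower, s_upper):
    # Interval type-2 membership: a pair [lower, upper] at input x,
    # here produced by an uncertain spread s in [s_lower, s_upper]
    return gauss(x, c, s_lower), gauss(x, c, s_upper)

# membership interval of a hypothetical "high vulnerability" set at x = 6.0
lo, hi = it2_membership(x=6.0, c=5.0, s_lower=1.0, s_upper=2.0)
```

A full IT2FLS would propagate such intervals through the rule base and reduce the type-2 output to a crisp structural score, e.g. via Karnik-Mendel type reduction.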
Stadtpolitik für alle (Urban Politics for All)
(2021)
The corona crisis has laid bare the erosion of urban solidarity. In response, Anton Brokow-Loga and Frank Eckardt distill in this volume the practical utopia of a solidary post-growth city.
From commoning and the redistribution of urban land to a socio-ecological transport transition: a progressive urban politics for all overcomes conventional compartmentalized thinking. It relies instead on heterogeneous coalitions and unusual alliances. The project outlined here also includes linking a grassroots-democratic urban politics with the goal of a comprehensive transformation of city and society.
How can a look at the municipal level help to confront global injustices? What path do municipalist platforms and forms of communal ownership beyond private or state property point to?
In many lightweight engineering applications, the limiting factor is the components' susceptibility to vibration. One way to limit vibration amplitudes is the targeted use of friction damping in lightweight structures. This thesis investigates the influence of this kind of energy dissipation on light-metal structures as well as on topology-optimized components. The positioning and dimensioning of the dissipative elements, as well as their friction properties, are considered.
Housing estates were fundamentally conceived upon state socialist utopian ideas to provide standard housing for citizens. While former state socialist housing estates have been extensively researched in architecture, urban studies, and sociology, there is still a gap in identifying how production processes affected morphological changes during the post-socialist era. This thesis compares the processes in the production of the largest housing estates, Marzahn in the GDR and Petržalka in Czechoslovakia, from 1970 to 1989 through contextual analysis of primary and secondary sources, which include visual maps, diagrams from professional architecture and planning journals, government documents and textbooks, as well as academic journals, books, and newspaper articles. It then discusses how these processes inadvertently created conducive conditions affecting the estates' development in the market economy after 1989. It interprets the results through the application of Actor-Network Theory and Historical Institutionalism, while conceptualising them through David Harvey’s theory of dialectical utopianism. Harvey (2000) delineates two types of utopia, one of spatial form and one of process: the former refers to materialised ideals in physical forms, whereas the latter refers to the ongoing process of spatializing. The thesis aims to show how the production of Marzahn in the GDR was more path dependent on policies established in the 1950s and 1960s, whereas Petržalka was a product of new Czechoslovakian policies of the 1970s that changed aspects of the urban planning process, the manifestation of a more emphatic technocratic thinking on a wider scale. This ultimately influenced the trajectories of development after 1989, with stronger effects in Petržalka.
In the last decades, the Finite Element Method has become the main method for statics and dynamics analysis in engineering practice. For current problems, this method provides a faster, more flexible solution than the analytic approach. Prognoses of complex engineering problems that used to be almost impossible to solve are now feasible.
Although the finite element method is a robust tool, it leads to new questions about engineering solutions. These new problems can be divided into two major groups: the first group concerns computer performance; the second is related to understanding the digital solution.
Simultaneously with the development of the finite element method for numerical solutions, a theory between beam theory and shell theory was developed: Generalized Beam Theory (GBT). This theory offers not only a systematic and analytically clear presentation of complicated structural problems, but also a compact and elegant calculation approach that can improve computer performance.
Regrettably, GBT was long not internationally known, since most publications on this theory were written in German, especially in the early years. Only in recent years has GBT gradually become a fertile research topic, with developments from linear to non-linear analysis.
Another reason for the limited use of GBT is the isolated application of the theory. Although recent research applies the finite element method to solve GBT's problems numerically, the coupling between finite elements of GBT and other theories (shell, solid, etc.) has not been the subject of previous research. Thus, the main goal of this dissertation is the coupling between GBT and shell/membrane elements. Consequently, one obtains the benefits of both sides: the versatility of shell elements together with the high performance of GBT elements.
Based on the assumptions of GBT, this dissertation presents how the separation of variables leads to two calculation domains of a beam structure: a cross-section modal analysis and the longitudinal amplification axis. This opens the possibility of applying the finite element method not only in the cross-section analysis, but also of developing an exact GBT finite element in the longitudinal direction.
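The separation of variables underlying GBT can be sketched as follows (schematic notation; symbol conventions vary between GBT texts and may differ from the thesis):

```latex
% GBT displacement field: cross-section deformation modes
% \bar{u}_k, \bar{v}_k, \bar{w}_k over the section coordinate s,
% amplified along the member axis x by the functions V_k(x)
u(x,s) = \sum_k \bar{u}_k(s)\, V_k'(x), \qquad
v(x,s) = \sum_k \bar{v}_k(s)\, V_k(x), \qquad
w(x,s) = \sum_k \bar{w}_k(s)\, V_k(x)
```

The cross-section analysis determines the mode shapes; the longitudinal analysis determines the amplification functions V_k(x), which is where the exact elements developed below apply.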
For the cross-section analysis, this dissertation presents the solution of the quadratic eigenvalue problem with an original separation between plate and membrane mechanisms. Subsequently, one obtains a clearer representation of the deformation modes, as well as a reduced quadratic eigenvalue problem.
Concerning the longitudinal direction, this dissertation develops novel exact elements based on hyperbolic and trigonometric shape functions. Although these functions do not have trivial expressions, their periodic derivatives provide a recursive procedure that systematises the development of the stiffness matrices. These shape functions also enable a single-element discretisation of the beam structure and ensure a smooth stress field.
From these developments, this dissertation achieves the formulation of its primary objective: the connection of GBT and shell elements in a mixed model. Based on the displacement field, it is possible to define the coupling equations applied in the master-slave method. Therefore, one can model the structural connections and joints with finite shell elements and the structural beams and columns with GBT finite elements.
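The master-slave method mentioned above can be sketched on a toy system: the coupling equations express the slave (shell) DOFs in terms of the master (GBT) DOFs and condense the stiffness matrix accordingly (a generic illustration with invented numbers, not the thesis's coupling equations):

```python
import numpy as np

def condense(K, f, C):
    # DOF ordering: u = [u_m; u_s] with slaves tied to masters, u_s = C @ u_m.
    # Then u = T @ u_m with T = [I; C], and the condensed system is
    # (T^T K T) u_m = T^T f.
    n_m = C.shape[1]
    T = np.vstack([np.eye(n_m), C])
    return T.T @ K @ T, T.T @ f

# toy 3-DOF spring system; DOFs ordered masters (u0, u1) then slave (u2)
K = np.array([[ 2., -1.,  0.],
              [-1.,  2., -1.],
              [ 0., -1.,  1.]])
f = np.array([0., 0., 1.])
# coupling equation: slave u2 rigidly tied to master u0, i.e. u2 = 1*u0 + 0*u1
Kr, fr = condense(K, f, C=np.array([[1., 0.]]))
u_m = np.linalg.solve(Kr, fr)    # master displacements
```

Here the slave DOF is rigidly tied to one master; in the GBT-shell case, C would instead encode the GBT displacement field evaluated at the shell nodes of the coupling cross-section.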
As a side effect, the coupling equations limit the displacement field of the shell elements under the assumptions of GBT, in particular in the neighbourhood of the coupling cross-section.
Although these side effects are almost unnoticeable in linear analysis, they lead to cumulative errors in non-linear analysis. Therefore, this thesis finishes with the evaluation of the mixed GBT-shell models in non-linear analysis.
Complex vortex flow patterns around bridge piers, especially during floods, cause scour processes that can result in the failure of foundations. Abutment scour is a complex three-dimensional phenomenon that is difficult to predict, especially with traditional formulas obtained using empirical approaches such as regressions. This paper tests a standalone Kstar model against five novel hybrid algorithms: bagging (BA-Kstar), dagging (DA-Kstar), random committee (RC-Kstar), random subspace (RS-Kstar), and weighted instance handler wrapper (WIHW-Kstar), to predict scour depth (ds) under clear water conditions. The dataset consists of 99 scour depth data points from flume experiments (Dey and Barbhuiya, 2005) using abutment shapes such as vertical, semicircular, and 45° wing. Four dimensionless parameters, relative flow depth (h/l), excess abutment Froude number (Fe), relative sediment size (d50/l), and relative submergence (d50/h), were considered for the prediction of relative scour depth (ds/l). A portion of the dataset was used for calibration (70%), and the remainder for model validation. Pearson correlation coefficients helped decide the relevance of the input parameter combinations, and finally four different combinations of input parameters were used. The performance of the models was assessed visually and with quantitative metrics. Overall, the best input combination for the vertical abutment shape is Fe, d50/l and h/l, while for the semicircular and 45° wing shapes the combination of Fe and d50/l is the most effective. Our results show that incorporating Fe, d50/l and h/l leads to higher performance, while involving d50/h reduces the models' predictive power for the vertical abutment shape, and for the semicircular and 45° wing shapes involving h/l and d50/h leads to more error.
WIHW-Kstar provided the highest performance in scour depth prediction around the vertical abutment shape, while RC-Kstar outperformed the other models for scour depth prediction around the semicircular and 45° wing shapes.
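The Pearson screening step used to rank candidate input combinations can be sketched as follows (synthetic data with an invented linear relationship, purely illustrative of the screening mechanics):

```python
import numpy as np

def pearson(x, y):
    # Pearson correlation coefficient between a predictor and the target
    x, y = x - x.mean(), y - y.mean()
    return (x @ y) / np.sqrt((x @ x) * (y @ y))

# synthetic sample: excess abutment Froude number Fe vs. relative scour
# depth ds/l, with an assumed linear dependence plus measurement noise
rng = np.random.default_rng(1)
Fe = rng.uniform(0.2, 1.5, 200)
ds_l = 0.8 * Fe + rng.normal(0.0, 0.05, 200)
r = pearson(Fe, ds_l)
```

Predictors with |r| near zero against the target (as reported for d50/h in some configurations) would be dropped from the input combination before model calibration.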
Without the almost eighty years of activity of the Allgemeiner Deutscher Musikverein (ADMV), German musical life would not exist in its present form. The documentation of the programmes of its music festivals, held almost annually at changing locations from 1859 to 1937, opens up fundamental sources on this subject for the first time. For a period spanning four German political systems, this data base deepens the discourse on problems of repertoire formation, institutionalization, commercialization, and the mediation of music.
Constituted in 1861 as the first supra-regional German music association, with the official aim of integrating musically opposed contemporary currents and promoting young artists, the ADMV on the one hand acted cosmopolitically, and its music festivals developed into a forum for international contemporary music as well as a platform for the rediscovery of older music. Works by Richard Strauss, Gustav Mahler, and Arnold Schönberg received early, widely noted performances here. On the other hand, proto-nationalist tendencies matured in parallel into a nationalism that allowed the music festivals to be transformed seamlessly into the National Socialist Reichsmusiktage in 1938.
After numerous relocations, the now restored materials of the former association library and the association records are accessible again in the Hochschularchiv / Thüringisches Landesmusikarchiv Weimar and the Goethe- und Schiller-Archiv Weimar. A first critical edition of the festival programmes is now offered, in which the programme sequence of every performance, with details of the works and evidence of the performers, has been reconstructed as a process from planning through specification to addition or modification. In addition, all scholarly lectures, general and other assemblies, and consultations, as well as the complete personnel of the association and of the respective local committees, are documented, and composers and performers can be accessed via indexes.
This publication originated in the project Der Allgemeine Deutsche Musikverein (ADMV, 1861–1937) – ein internationales Forum der Musik in Deutschlands Mitte, funded by the Deutsche Forschungsgemeinschaft, at the Gemeinsames Institut für Musikwissenschaft Weimar-Jena of the Hochschule für Musik FRANZ LISZT and the Friedrich-Schiller-Universität.
Without the almost eighty years of activity of the Allgemeiner Deutscher Musikverein (ADMV), German musical life would not exist in its present form. The critical edition Die Musikfeste des Allgemeinen Deutschen Musikvereins von 1859 bis 1937 (Eine Dokumentation der Veranstaltungen), edited by Jan Neubauer and Thomas Radecke, opens up for the first time fundamental sources on the programmes of its music festivals, held almost annually at changing locations, and is likewise available online here. For a period spanning four German political systems, this data base deepens the discourse on problems of repertoire formation, institutionalization, commercialization, and the mediation of music.
With the partisan dispute that broke out in the mid-nineteenth century between the conservatives around Brahms and the New German School of Liszt and Wagner, the institutionalization of the latter in the ADMV brought onto the scene a national music-festival concept that remains singular to this day and was media-supported from the outset: the Tonkünstler-Versammlungen. On site, New German authors reported pro domo for the association's organ, the Neue Zeitschrift für Musik, which was sharply countered by reviewers of the conservative music press, together yielding a comprehensive, many-sided picture of these fairs of musical novelties in their constant transformation.
This thesis presents advances and applications of phase field modeling in fracture analysis. In this approach, the sharp crack surface topology in a solid is approximated by a diffusive crack zone governed by a scalar auxiliary variable. The uniqueness of phase field modeling is that the crack paths are determined automatically as part of the solution; no interface tracking is required, and the damage parameter varies continuously over the domain. But this flexibility comes with associated difficulties: (1) a very fine spatial discretization is required to represent sharp local gradients correctly; (2) this fine discretization results in high computational cost; (3) higher-order derivatives must be computed for improved convergence rates; and (4) conventional numerical integration techniques suffer from the curse of dimensionality. As a consequence, the practical applicability of phase field models is severely limited.
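For context, the second-order phase field model referred to throughout minimizes a regularized energy of the following standard (AT2-type) form; the fourth-order model adds a term in the Laplacian of the phase field:

```latex
% Degraded elastic energy plus regularized crack surface energy, with
% phase field \phi (0 intact, 1 fully broken), length scale \ell,
% elastic energy density \psi_e, and fracture toughness G_c
E(\mathbf{u},\phi) =
  \int_\Omega (1-\phi)^2\, \psi_e\!\big(\boldsymbol{\varepsilon}(\mathbf{u})\big)\,\mathrm{d}\Omega
  + G_c \int_\Omega \left( \frac{\phi^2}{2\ell} + \frac{\ell}{2}\,|\nabla\phi|^2 \right) \mathrm{d}\Omega
```

The length scale governs the width of the diffusive crack zone, which is why a very fine discretization is needed wherever the phase field localizes.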
The research presented in this thesis addresses the difficulties of the conventional numerical integration techniques for phase field modeling in quasi-static brittle fracture analysis. The first method relies on polynomial splines over hierarchical T-meshes (PHT-splines) in the framework of isogeometric analysis (IGA). An adaptive h-refinement scheme is developed based on the variational energy formulation of phase field modeling. The fourth-order phase field model provides increased regularity in the exact solution of the phase field equation and improved convergence rates for numerical solutions on a coarser discretization, compared to the second-order model. However, second-order derivatives of the phase field are required in the fourth-order model. Hence, at least C1-continuous basis functions are essential, which is achieved using hierarchical cubic B-splines in IGA. PHT-splines enable the refinement to remain local at singularities and high gradients, consequently reducing the computational cost greatly. Unfortunately, when modeling complex geometries, multiple parameter spaces (patches) are joined together to describe the physical domain, and there is typically a loss of continuity at the patch boundaries. This decrease of smoothness is dictated by the geometry description, where C0 parameterizations are normally used to deal with kinks and corners in the domain. Hence, the application of the fourth-order model is severely restricted. To overcome the high computational cost of the second-order model, we develop a dual-mesh adaptive h-refinement approach. This approach uses a coarser discretization for the elastic field and a finer discretization for the phase field, with independent refinement strategies for each field.
The next contribution is based on physics informed deep neural networks. The network is trained by minimizing the variational energy of the system described by general non-linear partial differential equations while respecting any given law of physics, hence the name physics informed neural network (PINN). The developed approach needs only a set of points to define the geometry, in contrast to conventional mesh-based discretization techniques. The concept of 'transfer learning' is integrated with the developed PINN approach to improve the computational efficiency of the network at each displacement step. This approach allows numerically stable crack growth even with larger displacement steps. An adaptive h-refinement scheme based on the generation of more quadrature points in the damage zone is developed in this framework. For all the developed methods, displacement-controlled loading is considered. The accuracy and the efficiency of both methods are studied numerically, showing that the developed methods are powerful and computationally efficient tools for accurately predicting fractures.
Klangwelten gestalten. Zur Aktualität des Bauhauses in Sound Design und auditiver Stadtplanung (Designing Sound Worlds: On the Contemporary Relevance of the Bauhaus in Sound Design and Auditory Urban Planning)
(2021)
In recent decades, the design of sound worlds has moved into the focus of urban planners and architects, product designers and music producers, but also of historical and cultural-studies research. This conference volume is intended as a contribution to this new field of practice and research. At the same time, it seeks to point out links to concepts of the Bauhaus as a historical precursor.