TY - JOUR A1 - Zhang, Yongzheng A1 - Ren, Huilong T1 - Implicit implementation of the nonlocal operator method: an open source code JF - Engineering with Computers N2 - In this paper, we present an open-source code for the first-order and higher-order nonlocal operator method (NOM), including a detailed description of the implementation. The NOM is based on so-called support, dual-support, nonlocal operators, and an operator energy functional ensuring stability. The nonlocal operator is a generalization of the conventional differential operators. Combined with the method of weighted residuals and variational principles, the NOM establishes the residual and tangent stiffness matrix of the operator energy functional through simple matrix operations, without the need for shape functions as in other classical computational methods such as FEM. The NOM only requires the definition of the energy, drastically simplifying its implementation. For the sake of conciseness, the implementation in this paper focuses on linear elastic solids, though the NOM can handle more complex nonlinear problems. The NOM is flexible and efficient for solving partial differential equations (PDEs), and it is straightforward for readers to use the NOM and extend it to solve other complicated physical phenomena described by one or a set of PDEs. Finally, we present some classical benchmark problems, including the classical cantilever beam and plate-with-a-hole problem, and we also extend the method to solve complicated problems including phase-field fracture modeling and gradient elasticity materials. 
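As an illustration of the abstract's core idea (nonlocal operators built from a Taylor expansion over a particle's support, replacing shape-function derivatives), a minimal first-order sketch follows. The Gaussian weight, support points, and weighted least-squares formulation are illustrative assumptions, not the paper's released code:

```python
import numpy as np

def nonlocal_gradient(x_i, neighbors, f_i, f_neighbors, weight):
    """Estimate the gradient at particle x_i from field values at the particles
    in its support, via a weighted least-squares fit of the first-order
    Taylor expansion f_j - f_i ~ r_j . grad(f)."""
    R = neighbors - x_i                         # relative positions r_j, shape (m, d)
    df = f_neighbors - f_i                      # field differences, shape (m,)
    w = np.array([weight(np.linalg.norm(r)) for r in R])
    K = (R * w[:, None]).T @ R                  # weighted moment matrix
    b = (R * w[:, None]).T @ df
    return np.linalg.solve(K, b)

# For a linear field the operator reproduces the exact gradient (2, 3).
rng = np.random.default_rng(0)
x_i = np.zeros(2)
nbrs = rng.uniform(-0.1, 0.1, size=(12, 2))     # support of x_i
f = lambda p: 2.0 * p[..., 0] + 3.0 * p[..., 1]
grad = nonlocal_gradient(x_i, nbrs, f(x_i), f(nbrs),
                         weight=lambda r: np.exp(-(r / 0.05) ** 2))
print(grad)  # ~ [2., 3.]
```

Because the fit is exact for linear fields regardless of the weight function, the sketch recovers the analytical gradient to machine precision, which mirrors the consistency property the paper exploits.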
KW - Strukturmechanik KW - Nonlocal operator method KW - Operator energy functional KW - Implicit KW - Dual-support KW - Variational principle KW - Taylor series expansion KW - Stiffness matrix Y1 - 2022 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20220216-45930 UR - https://link.springer.com/article/10.1007/s00366-021-01537-x VL - 2022 SP - 1 EP - 35 PB - Springer CY - London ER - TY - JOUR A1 - Zhang, Yongzheng T1 - Nonlocal dynamic Kirchhoff plate formulation based on nonlocal operator method JF - Engineering with Computers N2 - In this study, we propose a nonlocal operator method (NOM) for the dynamic analysis of (thin) Kirchhoff plates. The nonlocal Hessian operator is derived based on a second-order Taylor series expansion. The NOM does not require any shape functions or their derivatives, as 'classical' approaches such as FEM do, which drastically facilitates the implementation. Furthermore, the NOM is higher-order continuous, which is exploited for thin plate analysis that requires C1 continuity. The nonlocal dynamic governing formulation and the operator energy functional for Kirchhoff plates are derived from a variational principle. The Verlet-velocity algorithm is used for the time discretization. After confirming the accuracy of the nonlocal Hessian operator, several numerical examples are simulated with the nonlocal dynamic Kirchhoff plate formulation. 
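The Verlet-velocity time stepping named in this abstract can be sketched generically; the single-degree-of-freedom oscillator, step size, and interface below are assumptions for the illustration, not the plate formulation itself:

```python
import numpy as np

def velocity_verlet(u0, v0, accel, dt, steps):
    """One common form of the Verlet-velocity scheme:
    u_{n+1} = u_n + dt*v_n + (dt^2/2)*a_n
    v_{n+1} = v_n + (dt/2)*(a_n + a_{n+1})"""
    u, v = float(u0), float(v0)
    a = accel(u)
    for _ in range(steps):
        u += dt * v + 0.5 * dt * dt * a
        a_new = accel(u)               # acceleration at the new displacement
        v += 0.5 * dt * (a + a_new)    # average of old and new accelerations
        a = a_new
    return u, v

# Harmonic oscillator a = -omega^2 * u with period T = 1: after integrating
# exactly one period the state should return (almost) to (1, 0).
omega = 2.0 * np.pi
u_T, v_T = velocity_verlet(1.0, 0.0, lambda u: -(omega ** 2) * u, dt=1e-3, steps=1000)
print(u_T, v_T)  # ~ 1.0, 0.0
```

The scheme is explicit and second-order accurate, which is why it is a natural fit for the explicit dynamic analyses described in these records.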
KW - Angewandte Mathematik KW - nonlocal operator method KW - nonlocal Hessian operator KW - operator energy functional KW - dual-support KW - variational principle Y1 - 2022 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20220209-45849 UR - https://link.springer.com/article/10.1007/s00366-021-01587-1 VL - 2022 SP - 1 EP - 35 PB - Springer CY - London ER - TY - THES A1 - Zhang, Yongzheng T1 - A Nonlocal Operator Method for Quasi-static and Dynamic Fracture Modeling N2 - Material failure can be tackled by so-called nonlocal models, which introduce an intrinsic length scale into the formulation and, in the case of material failure, restore the well-posedness of the underlying boundary value problem or initial boundary value problem. Among nonlocal models, peridynamics (PD) has attracted a lot of attention, as it allows a natural transition from continuum to discontinuum and thus the modeling of discrete cracks without the need to describe and track the crack topology, which has been a major obstacle in traditional discrete crack approaches. This is achieved by replacing the divergence of the Cauchy stress tensor with an integral over so-called bond forces, which account for the interaction of particles. A quasi-continuum approach is then used to calibrate the material parameters of the bond forces, i.e., by equating the PD energy with the energy of a continuum. One major issue for the application of PD to general complex problems is that it is limited to fairly simple material behavior and purely mechanical problems based on explicit time integration. PD has been extended to other applications, but simultaneously loses its simplicity and ease of modeling material failure. Furthermore, conventional PD suffers from instability and hourglass modes that require stabilization. It also requires the use of constant horizon sizes, which drastically reduces its computational efficiency. 
The latter issue was resolved by the so-called dual-horizon peridynamics (DH-PD) formulation and the introduction of the duality of horizons. Within the nonlocal operator method (NOM), the concept of nonlocality is further extended, and the NOM can be considered a generalization of DH-PD. Combined with the energy functionals of various physical models, the nonlocal forms based on the dual-support concept can be derived. In addition, the variation of the energy functional allows implicit formulations of the nonlocal theory. While traditional integral equations are formulated in a single integral domain, the dual-support approaches are based on dual integral domains. One prominent feature of the NOM is its compatibility with variational and weighted residual methods. The NOM yields a direct numerical implementation based on the weighted residual method for many physical problems without the need for shape functions. Only the definition of the energy or of the boundary value problem is needed, which drastically facilitates the implementation. The nonlocal operator plays a role equivalent to that of the derivatives of the shape functions in meshless methods and finite element methods (FEM). Based on the variational principle, the residual and the tangent stiffness matrix can be obtained with ease by a series of matrix multiplications. In addition, the NOM can be used to derive many nonlocal models in strong form. The principal contributions of this dissertation are the implementation and application of the NOM, as well as the development of approaches for dealing with fracture within the NOM, mostly for dynamic fracture. The primary coverage and results of the dissertation are as follows: - The first/higher-order implicit NOM and explicit NOM, including a detailed description of the implementation, are presented. The NOM is based on so-called support, dual-support, nonlocal operators, and an operator energy functional ensuring stability. 
The nonlocal operator is a generalization of the conventional differential operators. Combined with the method of weighted residuals and variational principles, the NOM establishes the residual and tangent stiffness matrix of the operator energy functional through simple matrix operations, without the need for shape functions as in other classical computational methods such as FEM. The NOM only requires the definition of the energy, drastically simplifying its implementation. For the sake of conciseness, the implementation in this chapter is focused on linear elastic solids only, though the NOM can handle more complex nonlinear problems. An explicit nonlocal operator method for the dynamic analysis of elastic solid problems is also presented. The explicit NOM avoids the calculation of the tangent stiffness matrix required in the implicit NOM model. The explicit scheme employs the Verlet-velocity algorithm. The NOM is flexible and efficient for solving partial differential equations (PDEs), and it is straightforward for readers to use the NOM and extend it to solve other complicated physical phenomena described by one or a set of PDEs. Several numerical examples are presented to show the capabilities of the method. - A nonlocal operator method for the dynamic analysis of (thin) Kirchhoff plates is proposed. The nonlocal Hessian operator is derived from a second-order Taylor series expansion. The NOM is higher-order continuous, which is exploited for thin plate analysis that requires $C^1$ continuity. The nonlocal dynamic governing formulation and operator energy functional for Kirchhoff plates are derived from a variational principle. The Verlet-velocity algorithm is used for the time discretization. After confirming the accuracy of the nonlocal Hessian operator, several numerical examples are simulated with the nonlocal dynamic Kirchhoff plate formulation. - A nonlocal fracture model is developed and applied to the simulation of quasi-static and dynamic fracture using the NOM. 
The nonlocal weak form of the phase field model and the associated strong form are derived from a variational principle. The NOM requires only the definition of the energy. We present both a nonlocal implicit phase field model and a nonlocal explicit phase field model for fracture; the former is better suited for quasi-static fracture problems, while the key application of the latter is dynamic fracture. To demonstrate the performance of the underlying approach, several benchmark examples for quasi-static and dynamic fracture are solved. T3 - ISM-Bericht // Institut für Strukturmechanik, Bauhaus-Universität Weimar - 2022,9 KW - Variationsprinzip KW - Partial Differential Equations KW - Taylor Series Expansion KW - Peridynamics KW - Variational principle KW - Phase field method KW - Peridynamik KW - Phasenfeldmodell KW - Partielle Differentialgleichung KW - Nichtlokale Operatormethode Y1 - 2022 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20221026-47321 ER - TY - GEN A1 - Zanders, Theresa A1 - Bein, Laura Eleana ED - Calbet i Elias, Laura ED - Vollmer, Lisa ED - Zanders, Theresa T1 - Der anonyme Behandlungsschein – von der Idee zur Umsetzung. Ein Handlungsleitfaden N2 - This practical guide supports civil-society organizations and public institutions in setting up an anonymous treatment or health certificate ("anonymer Behandlungs- oder Krankenschein") for people without health insurance. It brings together the experience of various initiatives from across Germany. 
T3 - KoopWohl - Working Paper - 4 KW - Gesundheitsversorgung KW - Migration KW - Anonymer Krankenschein KW - Anonymer Behandlungsschein KW - public private partnership KW - zivilgesellschaftliches Engagement Y1 - 2022 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20220928-47161 ER - TY - RPRT A1 - Zanders, Theresa T1 - Teilhabe an Gesundheitsversorgung von aufenthaltsrechtlich illegalisierten Menschen in Deutschland N2 - Since the Bismarckian social reforms, health care in Germany has been an increasingly institutionalized part of state services of general interest within the welfare-state structure. Health care is institutionalized in a corporatist logic, that is, in cooperative relationships with the private and civil-society sectors and with powers of self-administration. In addition, the health system rests on an insurance system with wage-dependent contributions. However, state provision is also institutionalized in its exclusions: people without citizenship rights are excluded from many social rights, such as health care, even though this exclusion contradicts other constitutive elements of the nation state. This working paper presents the basic structures of the German health system and the logics of producing participation inherent in them. Finally, following Kronauer, the various dimensions of participation in health care are laid out in their logic of production and exclusion within the welfare regime, with a focus on the group of people illegalized under residence law, who are denied social participation in many areas of life, including, to a large extent, health care. At the same time, the paper shows how civil-society actors (re-)establish participation, even against state requirements or incentives. 
T3 - KoopWohl - Working Paper - 3 KW - Gesundheit KW - Migration KW - Anonymer Krankenschein Y1 - 2022 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20230530-63968 ER - TY - THES A1 - Zacharias, Christin T1 - Numerical Simulation Models for Thermoelastic Damping Effects N2 - Finite Element Simulations of dynamically excited structures are mainly influenced by the mass, stiffness, and damping properties of the system, as well as external loads. The prediction quality of dynamic simulations of vibration-sensitive components depends significantly on the use of appropriate damping models. Damping phenomena have a decisive influence on the vibration amplitude and the frequencies of the vibrating structure. However, developing realistic damping models is challenging due to the multiple sources that cause energy dissipation, such as material damping, different types of friction, or various interactions with the environment. This thesis focuses on thermoelastic damping, which is the main cause of material damping in homogeneous materials. The effect is caused by temperature changes due to mechanical strains. In vibrating structures, temperature gradients arise in adjacent tension and compression areas. Depending on the vibration frequency, they result in heat flows, leading to increased entropy and the irreversible transformation of mechanical energy into thermal energy. The central objective of this thesis is the development of efficient simulation methods to incorporate thermoelastic damping in finite element analyses based on modal superposition. The thermoelastic loss factor is derived from the structure's mechanical mode shapes and eigenfrequencies. In subsequent analyses that are performed in the time and frequency domain, it is applied as modal damping. Two approaches are developed to determine the thermoelastic loss in thin-walled plate structures, as well as three-dimensional solid structures. 
The realistic representation of the dissipation effects is verified by comparing the simulation results with experimentally determined data. To this end, an experimental setup is developed to measure material damping while excluding other sources of energy dissipation. The three-dimensional solid approach is based on the determination of the generated entropy and therefore the generated heat per vibration cycle, which is a measure of the thermoelastic loss in relation to the total strain energy. For thin plate structures, the amount of bending energy in a modal deformation is calculated and summarized in the so-called Modal Bending Factor (MBF). The highest amount of thermoelastic loss occurs in the state of pure bending. Therefore, the MBF enables a quantitative classification of the mode shapes with respect to their thermoelastic damping potential. The results of the developed simulations are in good agreement with the experimental results and are suitable for predicting thermoelastic loss factors. Both approaches are based on modal superposition, with the advantage of a high computational efficiency. Overall, the modeling of thermoelastic damping represents an important component in a comprehensive damping model, which is necessary to perform realistic simulations of vibration processes. N2 - Die Finite-Elemente Simulation von dynamisch angeregten Strukturen wird im Wesentlichen durch die Steifigkeits-, Massen- und Dämpfungseigenschaften des Systems sowie durch die äußere Belastung bestimmt. Die Vorhersagequalität von dynamischen Simulationen schwingungsanfälliger Bauteile hängt wesentlich von der Verwendung geeigneter Dämpfungsmodelle ab. Dämpfungsphänomene haben einen wesentlichen Einfluss auf die Schwingungsamplitude, die Frequenz und teilweise sogar die Existenz von Vibrationen. Allerdings ist die Entwicklung von realitätsnahen Dämpfungsmodellen oft schwierig, da eine Vielzahl von physikalischen Effekten zur Energiedissipation während eines Schwingungsvorgangs führt. 
Beispiele hierfür sind die Materialdämpfung, verschiedene Formen der Reibung sowie vielfältige Wechselwirkungen mit dem umgebenden Medium. Diese Dissertation befasst sich mit thermoelastischer Dämpfung, die in homogenen Materialien die dominante Ursache der Materialdämpfung darstellt. Der thermoelastische Effekt wird ausgelöst durch eine Temperaturänderung aufgrund mechanischer Spannungen. In der schwingenden Struktur entstehen während der Deformation Temperaturgradienten zwischen benachbarten Regionen unter Zug- und Druckbelastung. In Abhängigkeit von der Vibrationsfrequenz führen diese zu Wärmeströmen und irreversibler Umwandlung mechanischer in thermische Energie. Die Zielstellung dieser Arbeit besteht in der Entwicklung recheneffizienter Simulationsmethoden, um thermoelastische Dämpfung in zeitabhängigen Finite-Elemente Analysen, die auf modaler Superposition beruhen, zu integrieren. Der thermoelastische Verlustfaktor wird auf der Grundlage der mechanischen Eigenformen und -frequenzen bestimmt. In nachfolgenden Analysen im Zeit- und Frequenzbereich wird er als modaler Dämpfungsgrad verwendet. Zwei Ansätze werden entwickelt, um den thermoelastischen Verlustfaktor in dünnwandigen Plattenstrukturen sowie in dreidimensionalen Volumenbauteilen zu simulieren. Die realitätsnahe Vorhersage der Energiedissipation wird durch die Verifizierung an experimentellen Daten bestätigt. Dafür wird ein Versuchsaufbau entwickelt, der eine Messung von Materialdämpfung unter Ausschluss anderer Dissipationsquellen ermöglicht. Für den Fall der Volumenbauteile wird ein Ansatz verwendet, der auf der Berechnung der Entropieänderung und damit der erzeugten Wärmeenergie während eines Schwingungszyklus beruht. Im Verhältnis zur Formänderungsenergie ist dies ein Maß für die thermoelastische Dämpfung. Für dünne Plattenstrukturen wird der Anteil an Biegeenergie in der Eigenform bestimmt und im sogenannten modalen Biegefaktor (MBF) zusammengefasst. 
Der maximale Grad an thermoelastischer Dämpfung kann im Zustand reiner Biegung auftreten, sodass der MBF eine quantitative Klassifikation der Eigenformen hinsichtlich ihres thermoelastischen Dämpfungspotentials zulässt. Die Ergebnisse der entwickelten Simulationsmethoden stimmen sehr gut mit den experimentellen Daten überein und sind geeignet, um thermoelastische Dämpfungsgrade vorherzusagen. Beide Ansätze basieren auf modaler Superposition und ermöglichen damit zeitabhängige Simulationen mit einer hohen Recheneffizienz. Insgesamt stellt die Modellierung der thermoelastischen Dämpfung einen Baustein in einem umfassenden Dämpfungsmodell dar, welches zur realitätsnahen Simulation von Schwingungsvorgängen notwendig ist. T3 - ISM-Bericht // Institut für Strukturmechanik, Bauhaus-Universität Weimar - 2022,8 KW - Werkstoffdämpfung KW - Finite-Elemente-Methode KW - Strukturdynamik KW - Thermoelastic damping KW - modal damping KW - decay experiments KW - energy dissipation Y1 - 2022 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20221116-47352 ER - TY - THES A1 - Yousefi, Hassan T1 - Discontinuous propagating fronts: linear and nonlinear systems N2 - The aim of this study is to control the spurious oscillations that develop around discontinuous solutions of both linear and nonlinear wave equations, i.e., hyperbolic partial differential equations (PDEs). The equations include both first-order and second-order (wave) hyperbolic systems. In these systems, even smooth initial conditions or smoothly varying source (load) terms can lead to discontinuous propagating solutions (fronts). For the first-order hyperbolic PDEs, the concept of central high resolution schemes is integrated with multiresolution-based adaptation to properly capture both the discontinuous propagating fronts and the effects of fine-scale responses on those of larger scales, in a multiscale manner. 
This integration leads to the use of central high resolution schemes on non-uniform grids; however, such a simulation is unstable, as the central schemes were originally developed to work properly on uniform cells/grids. Hence, the main concern is the stable coupling of central schemes and multiresolution-based cell adapters. Regarding central schemes, the considered approaches are: 1) second-order central and central-upwind schemes; 2) third-order central schemes; 3) third- and fourth-order central weighted non-oscillatory schemes (central-WENO or CWENO); 4) piecewise parabolic methods (PPMs) obtained with two different local stencils. For these methods, the corresponding (nonlinear) stability conditions are studied and modified as well. Based on these stability conditions, several limiters are modified/developed as follows: 1) several second-order limiters with the total variation diminishing (TVD) feature; 2) second-order uniformly high order accurate non-oscillatory (UNO) limiters; 3) two third-order nonlinear scaling limiters; 4) two new limiters for PPMs. Numerical results show that the adaptive solvers lead to cost-effective computations (e.g., in some 1-D problems, the number of adapted grid points stays below 200 during the simulations, while in the uniform-grid case 2049 points are needed for the same accuracy). Also, in some cases, it is confirmed that fine-scale responses have considerable effects on those of higher scales. In the numerical simulation of nonlinear first-order hyperbolic systems, the two main concerns are convergence and uniqueness. The former is important due to the development of spurious oscillations, numerical dispersion, and numerical dissipation. Convergence of a numerical solution does not guarantee that it is the physical/real one (the uniqueness feature). Indeed, a nonlinear system can converge to several numerical results (all of which are mathematically valid). 
In this work, convergence and uniqueness are studied directly on non-uniform grids/cells through the concepts of the local numerical truncation error and the numerical entropy production, respectively. Both of these concepts are also used for cell/grid adaptation, and their performance is compared with that of the multiresolution-based method. Several 1-D and 2-D numerical examples are examined to confirm the efficiency of the adaptive solver. The examples involve problems with convex and non-convex fluxes. In the latter case, due to the development of complex waves, properly capturing the physical solution needs more attention. For this purpose, method adaptation seems to be essential (in parallel with the cell/grid adaptation). This new type of adaptation is also performed in the framework of the multiresolution analysis. Regarding second-order hyperbolic PDEs (mechanical waves), the regularization concept is used to cure artificial (numerical) oscillation effects, especially for high-gradient or discontinuous solutions. There, oscillations are removed by the regularization concept acting as a post-processor. Simulations are performed directly on the second-order form of the wave equations. It should be mentioned that it is possible to rewrite second-order wave equations as a system of first-order equations and then simulate the new system with high resolution schemes; however, this approach increases the number of variables (especially for 3-D problems). The numerical discretization is performed by compact finite difference (FD) formulations with desired features, e.g., methods with spectral-like or optimized-error properties. These FD methods are developed to handle high-frequency waves (such as waves near earthquake sources). The performance of several regularization approaches is studied (both theoretically and numerically); finally, a proper regularization approach that controls the Gibbs phenomenon is recommended. 
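As a minimal illustration of the TVD limiter building block mentioned in this abstract, the classical minmod limiter can be sketched as follows; this is a textbook second-order ingredient under a uniform-grid assumption, not one of the specific modified limiters developed in the thesis:

```python
import numpy as np

def minmod(a, b):
    """Classical minmod limiter: picks the smaller-magnitude slope when the
    signs agree and returns zero otherwise -- the basic building block of
    second-order TVD reconstructions."""
    return np.where(a * b > 0.0, np.where(np.abs(a) < np.abs(b), a, b), 0.0)

def limited_slopes(u):
    """Cell-wise limited slopes from one-sided differences (uniform grid,
    zero slope assigned to the boundary cells)."""
    s = np.zeros_like(u)
    s[1:-1] = minmod(u[1:-1] - u[:-2], u[2:] - u[1:-1])
    return s

# Near a discontinuity the limited slope drops to zero, suppressing the
# over/undershoots (spurious oscillations) a naive reconstruction would create.
u_step = np.array([0.0, 0.0, 0.0, 1.0, 1.0, 1.0])
print(limited_slopes(u_step))   # all zeros: the reconstruction stays monotone

# On smooth data the limiter leaves the second-order slopes untouched.
u_smooth = np.linspace(0.0, 1.0, 6)
print(limited_slopes(u_smooth))  # interior slopes equal the grid increment 0.2
```

The step example shows the TVD mechanism in one line: where neighboring one-sided differences disagree in sign, the slope is clipped to zero, so no new extrema can be created.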
Finally, some numerical results are provided to confirm the efficiency of the numerical solvers enhanced by the regularization concept. In this part, shock-like responses due to local and abrupt changes of physical properties, as well as stress wave propagation in stochastic-like domains, are studied. KW - Partielle Differentialgleichung KW - Adaptives System KW - Wavelet KW - Tichonov-Regularisierung KW - Hyperbolic PDEs KW - Adaptive central high resolution schemes KW - Wavelet based adaptation KW - Tikhonov regularization Y1 - 2022 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20220922-47178 ER - TY - JOUR A1 - Welch Guerra, Max T1 - Fach, Gesellschaft und Wissenschaft. Beitrag zur Debatte „Was ist Stadt? Was ist Kritik?“ JF - sub\urban. zeitschrift für kritische stadtforschung N2 - The call to place the concepts of city and critique at the center of a debate offers a great opportunity to reach an understanding, far beyond conceptual clarifications of our common subject of work (which can be very fruitful in themselves), about the function we exercise in society when we practice, research, and teach spatial planning. Since in the Federal Republic there is not only a great need for public planning but also considerable demand for it, and since the planning-related sciences enjoy an overall stable institutional position, we run the risk of neglecting the socio-political legitimation of the professional field and of the discipline, of treating it as given. After all, we hardly have to justify ourselves. KW - Stadt KW - Kritik KW - Wissenschaftskritik KW - Kritikbegriff KW - Herrschaftskritik KW - Alltagskritik KW - OA-Publikationsfonds2022 Y1 - 2022 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20220810-46855 UR - https://zeitschrift-suburban.de/sys/index.php/suburban/article/view/779 VL - 2022 IS - Band 10, Nr. 1 SP - 188 EP - 190 PB - Sub\urban e.V. 
CY - Leipzig ER - TY - THES A1 - Wang, Jiasheng T1 - Lebensdauerabschätzung von Bauteilen aus globularem Grauguss auf der Grundlage der lokalen gießprozessabhängigen Werkstoffzustände N2 - The aim of this work is to achieve a possible improvement in the quality of fatigue life prediction for cast iron materials with spheroidal graphite, taking into account the casting processes of different manufacturers. In a first step, specimens of GJS500 and GJS600 were cast by several casting suppliers and fatigue specimens were produced from them. In total, fatigue strength values of the individual cast specimens as well as of specimens taken from the component were determined for various casting manufacturers worldwide, either through direct fatigue tests or through a collection of durability tests. Thanks to metallographic work and correlation analysis, three essential parameters for determining the local fatigue strength could be identified: 1. static strength, 2. ferrite and pearlite content of the microstructure, and 3. number of graphite nodules per unit area. Based on these findings, a new strength-ratio diagram (the so-called Sd/Rm-SG diagram) was developed. Above all, this new methodology is intended to enable a better prediction of the component fatigue strength on the basis of local tensile strength values and microstructures that are either measured or predicted by a casting simulation. With the help of the experiments and the casting simulation, different methods of fatigue life prediction could be further developed under consideration of the manufacturing processes. KW - Grauguss KW - Lebensdauerabschätzung KW - Werkstoffprüfung Y1 - 2022 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20220111-45542 ER - TY - JOUR A1 - Vollmer, Lisa T1 - Aber das sind doch die Guten – oder? Wohnungsgenossenschaften in Hamburg. Rezension zu Joscha Metzger (2021): Genossenschaften und die Wohnungsfrage. 
Konflikte im Feld der Sozialen Wohnungswirtschaft. Münster: Westfälisches Dampfboot JF - sub\urban. zeitschrift für kritische stadtforschung N2 - Why are housing cooperatives repeatedly named in current debates as central actors of public-interest-oriented housing provision, even though they contribute little to the creation of new affordable housing? Why do the majority of housing cooperatives resist tooth and nail the reintroduction of a law on housing non-profit status (Wohnungsgemeinnützigkeit), even though it was precisely this law that let them grow, over the twentieth century, into companies that are large by international standards? Are housing cooperatives clientelistic, barely democratic, and only half-decommodified market participants, or an important part of housing provision for the lower middle class? Whoever seeks answers to these and other questions, and can bear differentiated answers, should read Joscha Metzger's dissertation „Genossenschaften und die Wohnungsfrage. KW - Gentrifizierung KW - Wohnungsforschung KW - Genossenschaften KW - OA-Publikationsfonds2022 Y1 - 2022 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20220713-46691 UR - https://zeitschrift-suburban.de/sys/index.php/suburban/article/view/795 VL - 2022 IS - Band 10, Nr. 1 SP - 261 EP - 267 PB - sub\urban e. V. CY - Berlin ER - TY - THES A1 - Vogler, Verena T1 - A framework for artificial coral reef design: Integrating computational modelling and high precision monitoring strategies for artificial coral reefs – an Ecosystem-aware design approach in times of climate change N2 - Tropical coral reefs, one of the world’s oldest ecosystems, which support some of the highest levels of biodiversity on the planet, are currently facing an unprecedented ecological crisis during this massive human-activity-induced period of extinction. Hence, tropical reefs symbolically stand for the destructive effects of human activities on nature [4], [5]. 
Artificial reefs are excellent examples of how architectural design can be combined with ecosystem regeneration [6], [7], [8]. However, working at the interface between the artificial and the complex, temporal nature of natural systems presents a challenge, inter alia with respect to the B-rep modelling legacy of computational modelling. This doctorate investigates strategies for applying digital practice to realise what is an essential bulwark for retaining reefs in impossibly challenging times. Beyond the main question of integrating computational modelling and high precision monitoring strategies in artificial coral reef design, this doctorate explores techniques, methods, and linking frameworks to support future research and practice in ecology-led design contexts. Considering the many existing approaches to artificial coral reef design, one finds that they often fall short of precisely understanding the relationships between architectural and ecological aspects (e.g. how surface design and material composition can foster coral larvae settlement, or how structural three-dimensionality can enhance biodiversity) and lack an integrated underwater (UW) monitoring process. Such a process is necessary in order to gather knowledge about the ecosystem and make it available for design, and to learn whether artificial structures contribute to reef regeneration or rather harm the coral reef ecosystem. For the research, empirical experimental methods were applied – algorithmic coral reef design, high precision UW monitoring, and computational modelling and simulation – and validated through parallel real-world physical experimentation: two Artificial Reef Prototypes (ARPs) in Gili Trawangan, Indonesia (2012–today). Multiple discrete methods and sub-techniques were developed in seventeen computational experiments and applied in such a way that many are cross-validated and integrated in an overall framework that is offered as a significant contribution to the field. 
Other main contributions include the Ecosystem-aware design approach, Key Performance Indicators (KPIs) for coral reef design, algorithmic design and fabrication of Biorock cathodes, new high precision UW monitoring strategies, long-term real-world constructed experiments, new digital analysis methods, and two new front-end web-based tools for reef design and reef monitoring. The methodological framework is a key finding of the research, with many technical components that were tested and combined in this way for the very first time. In summary, the thesis responds to the urgency and relevance of preserving marine species in tropical reefs during this massive extinction period by offering a differentiated approach towards artificial coral reefs – demonstrating the feasibility of digitally designing such ‘living architecture’ according to multiple context and performance parameters. It also provides an in-depth critical discussion of computational design and architecture in the context of ecosystem regeneration and Planetary Thinking. In that respect, the thesis functions as both theoretical and practical background for computational design, ecology and marine conservation – not only to foster the design of artificial coral reefs technically but also to provide essential criteria and techniques for conceiving them. Keywords: Artificial coral reefs, computational modelling, high precision underwater monitoring, ecology in design. N2 - Charakteristisch für das Zeitalter des Klimawandels sind die durch den Menschen verursachte Meeresverschmutzung sowie ein massiver Rückgang der Artenvielfalt in den Weltmeeren. Tropische Korallenriffe sind als eines der ältesten und artenreichsten Ökosysteme der Erde besonders stark gefährdet und stehen somit symbolisch für die zerstörerischen Auswirkungen menschlicher Aktivitäten auf die Natur [4], [5]. Um dem massiven Rückgang der Korallenriffe entgegenzuwirken, wurden von Menschen künstliche Riffsysteme entwickelt [6], [7]. 
Sie sind Beispiele dafür, wie Architektur und die Regenerierung von Ökosystemen miteinander verbunden werden können [8]. Eine Verknüpfung von einerseits künstlichen und andererseits komplexen, sich verändernden natürlichen Systemen stellt jedoch eine Herausforderung dar, u.a. in Bezug auf die Computermodellierung (B-Rep-Modellierung). Zum Erhalt der Korallenriffe werden in der vorliegenden Doktorarbeit Strategien aus der digitalen Praxis neuartig auf das Entwerfen von künstlichen Korallenriffen angewendet. Die Hauptfrage befasst sich damit, wie der Entwurfsprozess von künstlichen Korallenriffen unter Einbeziehung von Computermodellierung und hochpräzisen Überwachungsstrategien optimiert werden kann. In diesem Zusammenhang werden Techniken, Methoden sowie ein übergeordnetes Framework erforscht, welche zukünftige Forschung und Praxis in Bezug auf Ökologie-geleitete Entwurfsprozesse fördern sollen. In Anbetracht der vielen vorhandenen künstlichen Riffsysteme kann man feststellen, dass die Zusammenhänge zwischen Architektur- und Ökosystem-Anforderungen nicht genau untersucht und dadurch bei der Umsetzung nicht entsprechend berücksichtigt werden – zum Beispiel, wie Oberflächenbeschaffenheit und Materialität eine Ansiedlung von Korallenlarven begünstigen oder wie eine räumlich vielseitige Struktur die Artenvielfalt verbessern kann. Zudem fehlt ein integrierter Unterwasser-Überwachungsprozess, welcher Informationen über das Ökosystem liefert und diese dem Entwurf bereitstellt. Zusätzlich ist eine Unterwasser-Überwachung notwendig, um herauszufinden, ob die künstlichen Riffstrukturen zur Regenerierung beitragen oder dem Ökosystem gänzlich schaden. In dieser Forschungsarbeit werden empirische und experimentelle Methoden angewendet: Algorithmisches Entwerfen für Korallenriffe, hochpräzise Unterwasser-Überwachung, Computermodellierung und -simulation. 
Die Forschung wird seit 2012 durch zwei Riffprototypen (Artificial Reef Prototypes – ARPs) in Gili Trawangan, Indonesien, validiert. Zusätzlich wurden weitere separate Methoden und Techniken in insgesamt siebzehn computergestützten Experimenten entwickelt und so angewendet, dass viele kreuzvalidiert und in ein Framework integriert sind, welches dann als bedeutender Beitrag dem Forschungsgebiet zur Verfügung steht. Weitere Hauptbeiträge sind der Ökosystem-bewusste Entwurfsansatz (Ecosystem-aware design approach), Key Performance Indicators (KPIs) für das Gestalten von Korallenriffen, algorithmisches Entwerfen und die Herstellung von Biorock-Kathoden, neue hochpräzise Unterwasser-Überwachungsstrategien, reale Langzeitexperimente, neue digitale Analysemethoden sowie zwei webbasierte Softwareanwendungen für die Gestaltung und die Überwachung von künstlichen Korallenriffen. Das methodische Framework ist das Hauptergebnis der Forschung, da die vielen technischen Komponenten in dieser Weise zum ersten Mal getestet und kombiniert wurden. Zusammenfassend reagiert die vorliegende Doktorarbeit sowohl auf die Dringlichkeit als auch auf die Relevanz der Erhaltung von Artenvielfalt in tropischen Korallenriffen in Zeiten eines massiven Aussterbens, indem sie einen differenzierten Entwurfsansatz für künstliche Korallenriffe offeriert. Die Arbeit zeigt auf, dass ein digitales Entwerfen einer solchen „lebendigen Architektur“ unter Berücksichtigung vielfältiger Anforderungen und Leistungsparameter machbar ist. Zusätzlich bietet sie eine ausführliche kritische Diskussion über die Rolle von computergestütztem Entwerfen und Architektur im Zusammenhang mit der Regenerierung von Ökosystemen und “Planetary Thinking”. In dieser Hinsicht fungiert die Doktorarbeit als theoretischer und praktischer Hintergrund für computergestütztes Entwerfen, Ökologie und Meeresschutz. 
Eine Verbesserung des Entwerfens von künstlichen Korallenriffen wird nicht nur auf technischer Ebene aufgezeigt, sondern es werden auch die wesentlichen Kriterien und Techniken für deren Umsetzung benannt. Schlüsselwörter: Künstliche Korallenriffe, Computermodellierung, hochpräzise Unterwasser-Überwachung, Ökologie im Architekturentwurf. KW - Korallenriff KW - Algorithmus KW - Architektur KW - Meeresökologie KW - Software KW - Artificial coral reefs KW - Computational modelling KW - High precision underwater monitoring KW - Ecology in design KW - Künstliche Korallenriffe KW - Unterwasserarchitektur Y1 - 2022 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20220322-46115 UR - https://artificialreefdesign.com/ SN - 978-3-00-074495-2 N1 - Die URL führt zu 3D-Modellen echter Korallenriffe. ER - TY - THES A1 - Valizadeh, Navid T1 - Developments in Isogeometric Analysis and Application to High-Order Phase-Field Models of Biomembranes N2 - Isogeometric analysis (IGA) is a numerical method for solving partial differential equations (PDEs), which was introduced with the aim of integrating finite element analysis with computer-aided design systems. The main idea of the method is to use the same spline basis functions which describe the geometry in CAD systems for the approximation of solution fields in the finite element method (FEM). Originally, NURBS, a standard technology employed in CAD systems, were adopted as basis functions in IGA, but several variants of IGA use other technologies, such as T-splines, PHT-splines, and subdivision surfaces, as basis functions. 
In general, IGA offers two key advantages over classical FEM: (i) by describing the CAD geometry exactly using smooth, high-order spline functions, the mesh generation process is simplified and the interoperability between CAD and FEM is improved, (ii) IGA can be viewed as a high-order finite element method which offers basis functions with high inter-element continuity and therefore can provide a primal variational formulation of high-order PDEs in a straightforward fashion. The main goal of this thesis is to further advance isogeometric analysis by exploiting these major advantages, namely precise geometric modeling and the use of smooth high-order splines as basis functions, and develop robust computational methods for problems with complex geometry and/or complex multi-physics. As the first contribution of this thesis, we leverage the precise geometric modeling of isogeometric analysis and propose a new method for its coupling with meshfree discretizations. We exploit the strengths of both methods by using IGA to provide a smooth, geometrically-exact surface discretization of the problem domain boundary, while the Reproducing Kernel Particle Method (RKPM) discretization is used to provide the volumetric discretization of the domain interior. The coupling strategy is based upon the higher-order consistency or reproducing conditions that are directly imposed in the physical domain. The resulting coupled method enjoys several favorable features: (i) it preserves the geometric exactness of IGA, (ii) it circumvents the need for global volumetric parameterization of the problem domain, (iii) it achieves arbitrary-order approximation accuracy while preserving higher-order smoothness of the discretization. Several numerical examples are solved to show the optimal convergence properties of the coupled IGA–RKPM formulation, and to demonstrate its effectiveness in constructing volumetric discretizations for complex-geometry objects. 
As for the next contribution, we exploit the use of smooth, high-order spline basis functions in IGA to solve high-order surface PDEs governing the morphological evolution of vesicles. These governing equations often consist of geometric PDEs, high-order PDEs on stationary or evolving surfaces, or a combination of the two. We propose an isogeometric formulation for solving these PDEs. In the context of geometric PDEs, we consider phase-field approximations of mean curvature flow and Willmore flow problems and numerically study the convergence behavior of isogeometric analysis for these problems. As a model problem for high-order PDEs on stationary surfaces, we consider the Cahn–Hilliard equation on a sphere, where the surface is modeled using a phase-field approach. As for high-order PDEs on evolving surfaces, a phase-field model of a deforming multi-component vesicle, which consists of two fourth-order nonlinear PDEs, is solved using isogeometric analysis in a primal variational framework. Through several numerical examples in 2D, 3D and axisymmetric 3D settings, we show the robustness of IGA for solving the considered phase-field models. Finally, we present a monolithic, implicit formulation based on isogeometric analysis and generalized-alpha time integration for simulating the hydrodynamics of vesicles according to a phase-field model. Compared to earlier works, the number of equations of the phase-field model which need to be solved is reduced by leveraging the high continuity of NURBS functions, and the algorithm is extended to 3D settings. We use the residual-based variational multiscale method (RBVMS) for solving the Navier–Stokes equations, while the remaining PDEs in the phase-field model are treated using standard Galerkin-based IGA. We introduce the resistive immersed surface (RIS) method into the formulation, which can be employed for an implicit description of complex geometries using a diffuse-interface approach. 
The implementation highlights the robustness of the RBVMS method for Navier–Stokes equations of incompressible flows with non-trivial localized forcing terms, including bending and tension forces of the vesicle. The potential of the phase-field model and isogeometric analysis for accurate simulation of a variety of fluid-vesicle interaction problems in 2D and 3D is demonstrated. T3 - ISM-Bericht // Institut für Strukturmechanik, Bauhaus-Universität Weimar - 2022,1 KW - Phasenfeldmodell KW - Vesikel KW - Hydrodynamik KW - Multiphysics KW - Isogeometrische Analyse KW - Isogeometric Analysis KW - Vesicle dynamics KW - Phase-field modeling KW - Geometric Partial Differential Equations KW - Residual-based variational multiscale method Y1 - 2022 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20220114-45658 ER - TY - THES A1 - Torres, César T1 - El paisaje de la Cuenca Lechera Central Argentina: la huella de la producción sobre el territorio N2 - In recent times, the study of landscape heritage has acquired value as an alternative for rethinking regional development, especially from the point of view of territorial planning. In this sense, the Central Argentine Dairy Basin (CADB) is presented as a space where the traces of different human projects have accumulated over centuries of occupation, traces which can be read as heritage. The impact of dairy farming and other productive activities has shaped the configuration of its landscape. The main hypothesis assumed that a cultural landscape had formed in the CADB, whose configuration depended to a great extent on the history of productive activities and their deployment over the territory, and that this same history would hold the keys to its alternatives. The thesis approached the object of study through descriptive and cartographic methods that placed the narration of the history of the territory and the resources of the landscape as a discursive axis. 
A series of intentional readings of the territory and its constituent parts weighed the layers of data that have accumulated on it in the form of landscape traces, with the help of an approach from complementary dimensions (natural, sociocultural, productive, planning). Furthermore, historical sources were cross-referenced in order to allow the construction of the territorial story and the detection of the origin of the landscape components. Meticulous cartographic work also helped to spatialise the set of phenomena and elements studied, and was reflected in a multiscalar survey. KW - Landschaft KW - PAISAJE KW - TERRITORIO KW - ORDENAMIENTO TERRITORIAL KW - Raumordnung KW - Territorium KW - Landscape KW - Territory KW - Territorial planning Y1 - 2022 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20220803-46835 ER - TY - JOUR A1 - Teitelbaum, Eric A1 - Alsaad, Hayder A1 - Aviv, Dorit A1 - Kim, Alexander A1 - Völker, Conrad A1 - Meggers, Forrest A1 - Pantelic, Jovan T1 - Addressing a systematic error correcting for free and mixed convection when measuring mean radiant temperature with globe thermometers JF - Scientific reports N2 - It is widely accepted that most people spend the majority of their lives indoors. Most individuals do not realize that while indoors, roughly half of the heat exchange affecting their thermal comfort is in the form of thermal infrared radiation. We show that while researchers have been aware of its thermal comfort significance over the past century, systematic error has crept into the most common evaluation techniques, preventing adequate characterization of the radiant environment. Measuring and characterizing radiant heat transfer is a critical component of both building energy efficiency and occupant thermal comfort and productivity. 
Globe thermometers are typically used to measure mean radiant temperature (MRT), a commonly used metric accounting for the radiant effects of an environment at a point in space. In this paper we extend previous field work to a controlled laboratory setting to (1) rigorously demonstrate that the existing correction factors in the American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE) Standard 55 and ISO 7726 for using globe thermometers to quantify MRT are not sufficient; (2) develop a correction to improve the use of globe thermometers to address problems in the current standards; and (3) show that mean radiant temperature measured with ping-pong-ball-sized globe thermometers is not reliable due to a stochastic convective bias. We also provide an analysis of the maximum precision of globe sensors themselves, an analysis missing from the contemporary literature. KW - Strahlungstemperatur KW - Mean radiant temperature KW - Globe thermometers KW - Indoor environment KW - Thermal comfort KW - Measurements KW - OA-Publikationsfonds2022 Y1 - 2022 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20220509-46363 UR - https://www.nature.com/articles/s41598-022-10172-5#citeas VL - 2022 IS - Volume 12, article 6473 PB - Springer Nature CY - London ER - TY - THES A1 - Tatarin, René T1 - Charakterisieren struktureller Veränderungen in zementgebundenen Baustoffen durch akustische zerstörungsfreie Prüfverfahren N2 - Im Rahmen dieser Arbeit wird das Charakterisieren struktureller Veränderungen zementgebundener Baustoffe durch zwei auf dem Ultraschall-Transmissionsverfahren beruhende Methoden der zerstörungsfreien Prüfung (ZfP) mit mechanischen Wellen vorgenommen. 
Zur kontinuierlichen Charakterisierung der Erstarrung und Erhärtung frischer zementgebundener Systeme wird ein auf Ultraschallsensoren für Longitudinal- und Scherwellen basierendes Messsystem in Kombination mit zugehörigen Verfahrensweisen zur Datenauswertung konzipiert, charakterisiert und angewandt. Gegenüber der bislang üblichen alleinigen Bewertung der Verfestigung anhand indirekter Ultraschallparameter wie Ausbreitungsgeschwindigkeit, Signalenergie oder Frequenzgehalt der Longitudinalwelle lässt sich damit eine direkte, sensible Erfassung der sich während der Strukturbildung entwickelnden dynamischen elastischen Eigenschaften auf der Basis primärer physikalischer Werkstoffparameter erreichen. Insbesondere Scherwellen und der dynamische Schubmodul sind geeignet, den graduellen Übergang zum Festkörper mit Überschreiten der Perkolationsschwelle sensibel und unabhängig vom Luftgehalt zu erfassen. Die zeitliche Entwicklung der dynamischen elastischen Eigenschaften, die Strukturbildungsraten sowie die daraus extrahierten diskreten Ergebnisparameter ermöglichen eine vergleichende quantitative Charakterisierung der Strukturbildung zementgebundener Baustoffe aus mechanischer Sicht. Dabei lassen sich typische, oft unvermeidbare Unterschiede in der Zusammensetzung der Versuchsmischungen berücksichtigen. Der Einsatz laserbasierter Methoden zur Anregung und Erfassung von mechanischen Wellen und deren Kombination zu Laser-Ultraschall zielt darauf ab, die mit der Anwendung des konventionellen Ultraschall-Transmissionsverfahrens verbundenen Nachteile zu eliminieren. Diese resultieren aus der Sensorgeometrie, der mechanischen Ankopplung und bei einer Vielzahl von Oberflächenpunkten aus einem hohen prüftechnischen Aufwand. Die laserbasierte, interferometrische Erfassung mechanischer Wellen ist gegenüber Ultraschallsensoren rauschbehaftet und vergleichsweise unsensibel. 
Als wesentliche Voraussetzung der scannenden Anwendung von Laser-Ultraschall auf zementgebundene Baustoffe erfolgen systematische experimentelle Untersuchungen zur laserinduzierten ablativen Anregung. Diese sollen zum Verständnis des Anregungsmechanismus unmittelbar auf den Oberflächen von zementgebundenen Baustoffen, Gesteinskörnungen und metallischen Werkstoffen beitragen, relevante Einflussfaktoren aus den charakteristischen Materialeigenschaften identifizieren, geeignete Prozessparameter gewinnen und die Verfahrensgrenzen aufzeigen. Unter Einsatz von Longitudinalwellen erfolgt die Anwendung von Laser-Ultraschall zur zeit- und ortsaufgelösten Charakterisierung der Strukturbildung und Homogenität frischer sowie erhärteter Proben zementgebundener Baustoffe. Während der Strukturbildung wird erstmals eine simultane berührungslose Erfassung von Longitudinal- und Scherwellen vorgenommen. Unter Anwendung von tomographischen Methoden (2D-Laufzeittomographie) werden überlagerungsfreie Informationen zur räumlichen Verteilung struktureller Gefügeveränderungen anhand der longitudinalen Ausbreitungsgeschwindigkeit bzw. des relativen dynamischen Elastizitätsmoduls innerhalb von virtuellen Schnittebenen geschädigter Probekörper gewonnen. Als betonschädigende Mechanismen werden exemplarisch der kombinierte Frost-Tausalz-Angriff sowie die Alkali-Kieselsäure-Reaktion (AKR) herangezogen. Die im Rahmen dieser Arbeit entwickelten Verfahren der zerstörungsfreien Prüfung bieten erweiterte Möglichkeiten zur Charakterisierung zementgebundener Baustoffe und deren struktureller Veränderungen und lassen sich zielgerichtet in der Werkstoffentwicklung, bei der Qualitätssicherung sowie zur Analyse von Schadensprozessen und -ursachen einsetzen. N2 - In this research, structural changes of cement-based building materials are characterized using two ultrasonic transmission-based methods of non-destructive testing (NDT) with mechanical waves. 
For continuous characterization of the setting and hardening of fresh cementitious materials, a measurement system based on ultrasonic compressional and shear wave transducers, in combination with associated data evaluation procedures, is designed, characterized and applied. In contrast to common non-destructive testing of setting and hardening by means of solely indirect ultrasonic parameters such as pulse velocity, signal energy or the frequency content of compressional waves, a direct, sensitive recording of the dynamic elastic properties developing during structure formation can be achieved on the basis of primary physical material parameters. In particular, shear waves and the dynamic shear modulus are suitable for capturing the gradual transition to a solid, once the percolation threshold is exceeded, in a sensitive manner and independently of air content. The development of the dynamic elastic properties, the structure formation rates and the extracted discrete result parameters enable a comparative and quantitative analysis of the structure formation of fresh cementitious materials from a mechanical point of view. As an advantage, often unavoidable differences in the composition of test blends can be taken into account. The application of laser-based techniques for the generation and detection of mechanical waves, and their combination into laser-ultrasonics, eliminates the disadvantages associated with the application of conventional ultrasonic through-transmission techniques. These result from the sensor geometry, the mechanical coupling and, in the case of numerous surface points, from high inspection time and effort. Furthermore, the laser-based interferometric detection of mechanical waves is noisy and relatively insensitive compared to ultrasonic sensors. As an essential prerequisite, systematic experimental investigations of laser-induced ablative generation are carried out for the scanning application of laser-ultrasonics on cement-based building materials. 
These investigations contribute to the understanding of the excitation mechanism directly on the surfaces of concrete, natural aggregates and metallic targets and to the identification of relevant influencing factors from the characteristic material properties. By gathering optimized process parameters, the limitations of applying laser-ultrasonics to concrete are shown. Laser-ultrasonics is applied using compressional waves for the time- and space-resolved characterization of the structure formation and homogeneity of fresh and hardened specimens of cement-based building materials. During the structure formation process, the simultaneous contactless acquisition of compressional and shear waves is carried out for the first time. With the implementation of tomographic methods (2D travel-time tomography) it is possible to obtain superposition-free information on the spatial distribution of microstructural changes by means of the longitudinal ultrasonic pulse velocity or the relative dynamic modulus of elasticity within virtual cross-sections of damaged specimens. The combined freeze-thaw de-icing salt attack as well as the alkali-silica reaction (ASR) are investigated as mechanisms of concrete damage. The methods of non-destructive testing developed within the scope of this study offer extended possibilities for the characterization of cement-based building materials and their structural changes and can be applied in a targeted manner in materials development, quality control and in the analysis of damage processes and causes. 
KW - Beton KW - Hydratation KW - Ultraschall KW - Zerstörungsfreie Werkstoffprüfung KW - Lasertechnologie KW - Laser-Ultraschall KW - elastische Parameter KW - Tomographie KW - Strukturbildung KW - Dauerhaftigkeit Y1 - 2022 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20220215-45920 SN - 978-3-7369-7575-0 PB - Cuvillier Verlag CY - Göttingen ER - TY - JOUR A1 - Taraben, Jakob A1 - Morgenthal, Guido T1 - Integration and Comparison Methods for Multitemporal Image-Based 2D Annotations in Linked 3D Building Documentation JF - Remote Sensing N2 - Data acquisition systems and methods to capture high-resolution images or reconstruct 3D point clouds of existing structures are an effective way to document their as-is condition. These methods enable a detailed analysis of building surfaces, providing precise 3D representations. However, for condition assessment and documentation, damages are mainly annotated in 2D representations, such as images, orthophotos, or technical drawings, which do not allow for the application of a 3D workflow or automated comparisons of multitemporal datasets. The available software for building heritage data management and analysis offers a wide range of annotation and evaluation functions, but lacks integrated post-processing methods and systematic workflows. The article presents novel methods developed to facilitate such automated 3D workflows and validates them on a small historic church building in Thuringia, Germany. Post-processing steps using photogrammetric 3D reconstruction data along with imagery were implemented, which show the possibilities of integrating 2D annotations into 3D documentations. Further, the application of voxel-based methods to the dataset enables the evaluation of geometrical changes of multitemporal annotations in different states and their assignment to elements of scans or building models. 
The proposed workflow also highlights the potential of these methods for condition assessment and planning of restoration work, as well as the possibility of representing the analysis results in standardised building model formats. KW - Bauwesen KW - Punktwolke KW - Denkmalpflege KW - OA-Publikationsfonds2022 Y1 - 2022 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20220513-46488 UR - https://www.mdpi.com/2072-4292/14/9/2286 VL - 2022 IS - Volume 14, issue 9, article 2286 SP - 1 EP - 20 PB - MDPI CY - Basel ER - TY - JOUR A1 - Söbke, Heinrich A1 - Lück, Andrea T1 - Framing Algorithm-Driven Development of Sets of Objectives Using Elementary Interactions JF - Applied System Innovation N2 - Multi-criteria decision analysis (MCDA) is an established methodology to support the decision-making of multi-objective problems. For conducting an MCDA, in most cases, a set of objectives (SOO) is required, which consists of a hierarchical structure composed of objectives, criteria, and indicators. The development of an SOO is usually based on moderated development processes requiring high organizational and cognitive effort from all stakeholders involved. This article proposes elementary interactions as a key paradigm of an algorithm-driven development process for an SOO that requires little moderation effort. Elementary interactions are self-contained information requests that may be answered with little cognitive effort. The pairwise comparison of elements in the well-known analytic hierarchy process (AHP) is an example of an elementary interaction. Each elementary interaction in the development process presented contributes to the stepwise development of an SOO. Based on the hypothesis that an SOO may be developed exclusively using elementary interactions (EIs), a concept for a multi-user platform is proposed. 
Essential components of the platform are a Model Aggregator, an Elementary Interaction Stream Generator, a Participant Manager, and a Discussion Forum. While the latter component serves the professional exchange of the participants, the first three components are intended to be automatable by algorithms. The platform concept proposed has been partly evaluated in an explorative validation study demonstrating the general functionality of the algorithms outlined. In summary, the platform concept suggested demonstrates the potential to ease SOO development processes: it does not restrict the application domain, it is intended to work with little administrative and moderation effort, and it supports the further development of an existing SOO in the event of changes in external conditions. The algorithm-driven development of SOOs proposed in this article may ease the development of MCDA applications and, thus, may have a positive effect on the spread of MCDA applications. KW - Multikriteria-Entscheidung KW - Multikriterielle Entscheidungsanalyse KW - multi-criteria decision analysis KW - set of objectives KW - crowdsourcing KW - platform KW - elementary interaction KW - OA-Publikationsfonds2022 Y1 - 2022 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20220713-46624 UR - https://www.mdpi.com/2571-5577/5/3/49 VL - 2022 IS - Volume 5, issue 3, article 49 SP - 1 EP - 20 PB - MDPI CY - Basel ER - TY - JOUR A1 - Söbke, Heinrich A1 - Lück, Andrea T1 - Framing Algorithm-Driven Development of Sets of Objectives Using Elementary Interactions JF - Applied System Innovation N2 - Multi-criteria decision analysis (MCDA) is an established methodology to support the decision-making of multi-objective problems. For conducting an MCDA, in most cases, a set of objectives (SOO) is required, which consists of a hierarchical structure composed of objectives, criteria, and indicators. 
The development of an SOO is usually based on moderated development processes requiring high organizational and cognitive effort from all stakeholders involved. This article proposes elementary interactions as a key paradigm of an algorithm-driven development process for an SOO that requires little moderation effort. Elementary interactions are self-contained information requests that may be answered with little cognitive effort. The pairwise comparison of elements in the well-known analytic hierarchy process (AHP) is an example of an elementary interaction. Each elementary interaction in the development process presented contributes to the stepwise development of an SOO. Based on the hypothesis that an SOO may be developed exclusively using elementary interactions (EIs), a concept for a multi-user platform is proposed. Essential components of the platform are a Model Aggregator, an Elementary Interaction Stream Generator, a Participant Manager, and a Discussion Forum. While the latter component serves the professional exchange of the participants, the first three components are intended to be automatable by algorithms. The platform concept proposed has been partly evaluated in an explorative validation study demonstrating the general functionality of the algorithms outlined. In summary, the platform concept suggested demonstrates the potential to ease SOO development processes: it does not restrict the application domain, it is intended to work with little administrative and moderation effort, and it supports the further development of an existing SOO in the event of changes in external conditions. The algorithm-driven development of SOOs proposed in this article may ease the development of MCDA applications and, thus, may have a positive effect on the spread of MCDA applications. 
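The pairwise comparison in the AHP, named above as the archetypal elementary interaction, can be made concrete with a short sketch. The following Python snippet is illustrative only (it is not code from the article): it derives priority weights for three objectives from a reciprocal pairwise comparison matrix using the common row-geometric-mean approximation, where each matrix entry is the answer to one elementary interaction ("how much more important is objective i than objective j?").

```python
import math

def ahp_weights(matrix):
    """Approximate AHP priority weights from a reciprocal pairwise
    comparison matrix using the row geometric-mean method."""
    n = len(matrix)
    # Geometric mean of each row summarizes one objective's judgements.
    gms = [math.prod(row) ** (1.0 / n) for row in matrix]
    total = sum(gms)
    # Normalize so the weights sum to 1.
    return [g / total for g in gms]

# Hypothetical example: three objectives, where entry a[i][j] states how
# much more important objective i is than objective j (reciprocal matrix).
pairwise = [
    [1.0,  2.0, 4.0],
    [0.5,  1.0, 2.0],
    [0.25, 0.5, 1.0],
]
weights = ahp_weights(pairwise)  # -> approximately [0.571, 0.286, 0.143]
```

For a perfectly consistent matrix like this one, the geometric-mean weights coincide with the principal-eigenvector weights of the classical AHP; for mildly inconsistent judgements they remain a standard, cheap approximation.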
KW - Operations Research KW - multi-criteria decision analysis KW - elementary interaction KW - set of objectives KW - crowdsourcing KW - platform Y1 - 2022 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20220524-46469 UR - https://www.mdpi.com/2571-5577/5/3/49 VL - 2022 IS - Volume 5, issue 3, article 49 SP - 1 EP - 21 PB - MDPI CY - Basel ER - TY - JOUR A1 - Stadler, Max T1 - Gründerzeit. Hightech und Alternativen der Wissenschaft in West-Berlin JF - NTM Zeitschrift für Geschichte der Wissenschaften, Technik und Medizin N2 - Zu den diversen Unternehmungen sozialbewegter „Gegenwissenschaft“, die um 1980 auf der Bildfläche der BRD erschienen, zählte der 1982 gegründete Berliner Wissenschaftsladen e. V., kurz WILAB – eine Art „alternatives“ Spin-off der Technischen Universität Berlin. Der vorliegende Beitrag situiert die Ausgründung des „Ladens“ im Kontext zeitgenössischer Fortschritte der (regionalen) Forschungs- und Technologiepolitik. Gezeigt wird, wie der deindustrialisierenden Inselstadt, qua „innovationspolitischer“ Gegensteuerung, dabei sogar eine gewisse Vorreiterrolle zukam: über die Stadtgrenzen hinaus sichtbare Neuerungen wie die Gründermesse BIG TECH oder das 1983 eröffnete Berliner Innovations- und Gründerzentrum (BIG), der erste „Incubator“ [sic] der BRD, etwa gingen auf das Konto der 1977/78 lancierten Technologie-Transferstelle der TU Berlin, TU-transfer. Anders gesagt: tendenziell bekam man es hier nun mit Verhältnissen zu tun, die immer weniger mit den Träumen einer „kritischen“, nicht-fremdbestimmten (Gegen‑)Wissenschaft kompatibel waren. Latent konträr zur historiographischen Prominenz des wissenschaftskritischen Zeitgeists fristeten „alternativen“ Zielsetzungen verpflichtete Unternehmungen wie „WILAB“ ein relativ marginalisiertes Nischendasein. 
Dennoch wirft das am WILAB verfolgte, so gesehen wenig aussichtsreiche Anliegen, eine andere, nämlich „humanere“ Informationstechnologie in die Wege zu leiten, ein instruktives Licht auf die Aufbrüche „unternehmerischer“ Wissenschaft in der BRD um 1980. KW - Berlin KW - Wissenschaftspolitik KW - Gegenwissenschaft KW - Gegenwissen KW - Informatik KW - Strukturkrise Y1 - 2022 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20230124-48800 UR - https://link.springer.com/article/10.1007/s00048-022-00352-9 VL - 2022 IS - 30 (2022) SP - 599 EP - 632 PB - Birkhäuser CY - Basel ER - TY - JOUR A1 - Simon-Ritz, Frank T1 - Zwischen Residenzkultur und Bratwursttradition: Thüringer UNESCO-Initiativen JF - Palmbaum: Literarisches Journal aus Thüringen N2 - Der Essay, der in H. 1/2022 des "Palmbaum: literarisches Journal aus Thüringen" erschienen ist, beschäftigt sich mit dem Begriff des "kulturellen Erbes", der verschiedenen UNESCO-Programmen zugrundeliegt. KW - Kultur KW - UNESCO Y1 - 2022 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20220413-46282 VL - 2022 IS - Heft 1 PB - Quartus Verlag CY - Bucha bei Jena ER - TY - THES A1 - Shaaban Mohamed, Ahmed Mostafa T1 - Isogeometric boundary element analysis and structural shape optimization for Helmholtz acoustic problems N2 - In this thesis, a new approach is developed for shape optimization of time-harmonic wave propagation (the Helmholtz equation) in acoustic problems. This approach is introduced for problems of different dimensionality: 2D, 3D axisymmetric and fully 3D problems. The boundary element method (BEM) is coupled with isogeometric analysis (IGA), forming the so-called IGABEM, which speeds up meshing and gives higher accuracy in comparison with standard BEM. 
BEM is superior for handling unbounded domains, since it models only the inner boundaries and avoids the truncation error present in the finite element method (FEM), as BEM solutions satisfy the Sommerfeld radiation condition automatically. Moreover, BEM reduces the spatial dimension by one: from a volumetric three-dimensional problem to a surface two-dimensional problem, or from a surface two-dimensional problem to a perimeter one-dimensional problem. Non-uniform rational B-spline (NURBS) basis functions are used in an isogeometric setting to describe both the CAD geometry and the physical fields. IGABEM is coupled with a gradient-free optimization method, Particle Swarm Optimization (PSO), for structural shape optimization problems. PSO is a straightforward method since it does not require any sensitivity analysis, but it has some trade-offs with regard to computational cost. Coupling IGA with the optimization problem enables the NURBS basis functions to represent all three models (the shape design, analysis and optimization models) by defining a set of control points that serve both as control variables and as optimization parameters, which enables an easy transition between the three models. Acoustic shape optimization for various frequencies in different media is performed with PSO, and the results are compared with benchmark solutions from the literature for problems of different dimensionality, proving the efficiency of the proposed approach with the following remarks: - In 2D problems, two BEM methods are used: the conventional isogeometric boundary element method (IGABEM) and the eXtended IGABEM (XIBEM), enriched with a partition-of-unity expansion using a set of plane waves. The results are generally in good agreement with the literature, with some computational advantage to XIBEM, which allows coarser meshes. 
- In 3D axi-symmetric problems, the three-dimensional problem is simplified in BEM from a surface integral to a combination of two 1D integrals. The first is a line integral similar to that of a two-dimensional BEM problem; the second is performed over the angle of revolution. The discretization is applied only to the former integral. This leads to significant computational savings and, consequently, better treatment of higher frequencies than full three-dimensional models. - In fully 3D problems, a detailed comparison between two BEM formulations, the conventional boundary integral equation (CBIE) and the Burton-Miller (BM) formulation, is provided, including the computational cost. The proposed models are enhanced with a modified collocation scheme with offsets to the Greville abscissae to avoid placing collocation points at the corners. Placing collocation points on a smooth surface enables accurate evaluation of normals for the BM formulation, allows straightforward prediction of jump terms, and avoids singularities in $\mathcal{O}(1/r)$ integrals, eliminating the need for polar integration. Furthermore, no additional special treatment is required for the hyper-singular integral when collocating on highly distorted elements, such as those containing sphere poles. The obtained results indicate that CBIE with PSO is a feasible alternative (except for a small number of fictitious frequencies) that is easier to implement. Furthermore, BM provides an outstanding treatment of the complicated geometry of mufflers with internal extended inlet/outlet tubes as an interior 3D Helmholtz acoustic problem, instead of using mixed or dual BEM. 
T3 - ISM-Bericht // Institut für Strukturmechanik, Bauhaus-Universität Weimar - 2022,6 KW - Randelemente-Methode KW - Isogeometrische Analyse KW - Gestaltoptimierung KW - Boundary Element Method KW - Isogeometric Analysis KW - Helmholtz Acoustic Problems KW - Shape Optimization Y1 - 2022 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20220816-47030 ER - TY - THES A1 - Schnös, Christian Emanuel T1 - Handlungsressourcen von zivilgesellschaftlichen Akteuren in Planungsprozessen BT - Untersucht an den Beispielen der Berliner Mauerparkserweiterung und des Baugebietes "So Berlin!", Berlin N2 - This dissertation examines the action resources of civil society actors in inner-city planning processes. The theoretical framework is provided by Pierre Bourdieu's forms of capital, which were merged with Dieter Läpple's Matrixraum and operationalized into a new field concept, the 'Raumfeld' (spatial field). It is a qualitative study situated between urban sociology and urban studies. The expansion of Berlin's Mauerpark and the development area 'So! Berlin' in Berlin were chosen as case studies. KW - Zivilgesellschaft KW - Stadtforschung KW - Empirische Sozialforschung KW - Ressourcen KW - Urbanistik KW - Beteiligungsforschung KW - Relationale Raummodelle Y1 - 2022 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20220505-46346 ER - TY - THES A1 - Schlaffke, Markus T1 - Die Rekonstruktion des Menaka-Archivs: Navigationen durch die Tanz-Moderne zwischen Kolkata, Mumbai und Berlin 1936-38 N2 - The 1936-38 European tour of the Indian Menaka Ballet is the starting point of this archivological navigation along the traces of Indian artists in Europe. 
In a broad archival survey, documents, found objects, oral recollections and ethnographic observations from the context of the Menaka tour through National Socialist Germany were brought together. The book describes the process of reconstructing a significant project of Indian dance modernism. In doing so, it pursues a method by which the fragmented documents of this media event can be read as a trace, and it draws on an artistic-research involvement in present-day politics of memory, into which the entangled structures of the artistic avant-garde between Kolkata, Mumbai and Berlin extend. The trace of the Menaka Ballet proves to be part of far-reaching ideological, choreographic, musical, cinematic and literary currents that continue to operate in present-day cultural self-understandings. Photographs, newspaper reports, film and sound recordings, letters and personal mementos tell how, against the background of India's cultural reform movement amid the anticolonial awakening and of National Socialist völkisch cultural policy in Germany, the dancers and musicians of the Indian ballet troupe and the German public regarded one another as in a mutual mirror while the omens of the coming war grew ever clearer. KW - Menaka KW - Menaka-Archiv KW - Leila Roy-Sokhey Y1 - 2022 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20220823-47069 ER - TY - JOUR A1 - Roskamm, Nikolai A1 - Vollmer, Lisa T1 - Was ist Stadt? Was ist Kritik? Einführung in die Debatte zum Jubiläumsheft von sub\urban JF - sub\urban. zeitschrift für kritische stadtforschung N2 - In the issue marking the tenth anniversary of sub\urban, with the thematic focus "sub\x: Verortungen, Entortungen", we publish a debate that departs from the textual discussions previously conducted in this section of our journal. 
In the run-up to planning our anniversary issue, we asked the current members of our academic advisory board to discuss two fundamental questions of critical urban research in short contributions: What is the city? What is critique? KW - Stadt KW - Stadtforschung KW - Kritik KW - Theorie KW - kritische Stadtforschung KW - Debatte KW - Stadtbegriff KW - Kritikbegriff KW - OA-Publikationsfonds2022 Y1 - 2022 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20220811-46847 UR - https://zeitschrift-suburban.de/sys/index.php/suburban/article/view/798 IS - Band 10, Nr. 1 SP - 127 EP - 130 PB - Sub\urban e.V. CY - Leipzig ER - TY - THES A1 - Riechert, Christin T1 - Hydratation und Eigenschaften von Gips-Zement-Puzzolan-Bindemitteln mit alumosilikatischen Puzzolanen N2 - Pure calcium sulfate binders exhibit high solubility, and exposure to moisture leads to severe losses of strength. For this reason, these binders are used exclusively for building materials and products in interior applications without permanent exposure to moisture. One way to increase the moisture resistance is to admix pozzolanic and cementitious components. These mixed systems are called gypsum-cement-pozzolan binders (GZPB). Mixtures of calcium sulfates and Portland cements alone are not volume-stable, owing to expansive ettringite formation. By adding pozzolanic materials, however, conditions can be created in the hydrating system that permit crack-free hardening. This requires an exact formulation of the GZPB in order to limit the duration of the ettringite-induced expansions typical of GZPB. With aluminosilicate pozzolans in particular, considerably higher expansions occur during hydration than with purely siliceous pozzolans, which increases the risk of potential cracking. 
Designing suitable GZPB compositions therefore requires a methodology for reliably distinguishing systems that harden in a volume-stable manner from destructive mixtures. No standards exist in Germany for either the formulation or the application of GZPB. Moreover, the hydration processes and the resulting products are not consistently described, and the literature deals only inadequately with the particularities of GZPB containing aluminosilicate pozzolans. The aim was therefore to develop a fundamental understanding of the hydration as well as a reliable methodology for formulating GZPB that harden volume-stably and crack-free, particularly with regard to the use of aluminosilicate pozzolans. In addition, the influence of the individual components on the hydration and properties of these binder systems was to be investigated systematically. This is intended to establish GZPB as binders for a broad range of applications, combining advantageous properties of the calcium sulfates (low shrinkage tendency, low CO2 emissions, etc.) with the performance of cements (water resistance, strength, durability, etc.). As starting materials for the investigations of the GZPB, stucco gypsum and alpha-hemihydrate were used as calcium sulfate binders in varying proportions. The pozzolan-cement ratios were likewise varied. An aluminosilicate metakaolin was used as the pozzolan for most of the investigations, and a pure Portland cement served as the lime-supplying component. The experimental programme was divided into four parts. First, a preselection of suitable GZPB formulations was determined on the basis of CaO and pH measurements in suspensions and the length-change behaviour of binder pastes of various compositions. 
Subsequently, also on binder pastes, the properties of the GZPB mixtures judged suitable were investigated. This included long-term observations of crack-free hardening under different ambient conditions as well as the strength development in the dry and moist states. In the next step, using two exemplary GZPB compositions (with siliceous and aluminosilicate pozzolan), the possible phase composition at thermodynamic equilibrium was calculated while varying the pozzolan-cement ratio (P/Z ratio) and the calcium sulfate content, with particular attention to differences between siliceous and aluminosilicate pozzolans. In the last part of the investigations, the hydration kinetics of the GZPB and the development of the microstructure were examined more closely. For this purpose, the pore solutions were analyzed chemically and saturation indices were calculated, and electron microscopy, porosimetry and X-ray diffraction investigations were carried out. Finally, the results were interpreted as a whole, since the results of the individual experimental programmes interact with one another. The main hydration products identified were calcium sulfate dihydrate, ettringite and C-(A)-S-H phases, whose proportions in the GZPB depend not only on the calcium sulfate content and the pozzolan-cement ratio but also markedly on the water supply and the development of the microstructure. When aluminosilicate pozzolans are used, silicon is probably partially substituted by aluminium in the C-S-H phases; this appears likely given the observed foil-like morphology typical of these phases. Portlandite was found in volume-stable GZPB systems only at very early times and in small amounts. The investigations confirmed part of the principal hydration sequences described in the literature. 
When hemihydrate is used as the calcium sulfate component, dihydrate forms first and constitutes the primary structure of the GZPB. Ettringite and the C-(A)-S-H phases then crystallize into this existing basic microstructure. Contrary to descriptions in the literature, it is not solely the C-(A)-S-H phases that improve moisture resistance and raise the strength level in the GZPB, but also the ettringite. In the course of hydration, both phases overgrow the dihydrate crystals in the matrix and, depending on the calcium sulfate content of the GZPB, partially or completely envelop them. This envelopment, together with the strong densification of the microstructure by the C-(A)-S-H phases and the ettringite, means that dissolving attack by water is hindered or even prevented. At the same time, the ability of the dihydrate crystals to slide at their contact points is reduced. Crack-free, volume-stable hardening is essential for the safe application of a GZPB system. The kinetics of ettringite formation are of elementary importance here; the amount of ettringite formed plays only a subordinate role. Even pronounced ettringite-induced expansions and large amounts formed at early times, when the dihydrate crystals can still easily slide past one another, cause no damage. However, if the supersaturation with respect to ettringite, and thus the crystallization pressure, remains high over a long period, even small amounts of ettringite suffice to severely damage the steadily solidifying microstructure. The rapid decrease in ettringite supersaturation required for volume-stable hardening of the GZPB is influenced mainly by the reactivity of the pozzolan. The pozzolanic reaction binds the calcium hydroxide originating from the cement through the formation of C-(A)-S-H phases and ettringite. 
As a result, the calcium and hydroxide ion concentrations in the pore solution decrease in the course of hydration, which also reduces the supersaturation with respect to ettringite. The higher the reactivity of the pozzolan, the faster the saturation index of ettringite, and thus the crystallization pressure, decreases. Once the supersaturation falls below a limit value that remains to be clarified more precisely, the expansions stagnate: the ettringite now crystallizes or grows preferentially in the pores, without causing any further external volume increase. To ensure damage-free hardening of the GZPB, a sufficient water supply must be guaranteed especially in the early phase of hydration, so that ettringite formation can proceed as completely as possible. Otherwise, rewetting can reactivate ettringite formation, which can cause damage in the installed state. Ensuring a sufficient water supply is not unproblematic in the GZPB system. Depending on the GZPB composition, large amounts of ettringite with a very high water demand can form. Depending on the water-binder ratio used, a water deficit can therefore arise in the binder paste, slowing or completely preventing further hydration. In addition, GZPB systems can develop very dense microstructures, which further impedes water transport to the site of ettringite formation. The design of volume-stable GZPB systems must be based on several successive investigations. Measurements of the CaO concentration and the pH value in suspensions are suitable for preselecting appropriate pozzolan-cement ratios, but they are not sufficient as the sole basis for assessment; the length-change behaviour must also be evaluated. Volume-stable mixtures with aluminosilicate pozzolans show strong expansions at early times, which then stagnate abruptly. 
Continuous expansions, even small ones, indicate a destructive composition. With this multi-stage procedure, volume-stable GZPB systems can be designed, so that the objective of the work was achieved and safe practical use of this type of binder can be ensured. KW - Gips KW - Zement KW - Hydratation KW - Gips-Zement-Puzzolan-Bindemittel Y1 - 2022 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20220825-47076 SN - 978-3-00-073003-0 ER - TY - JOUR A1 - Rabczuk, Timon A1 - Guo, Hongwei A1 - Zhuang, Xiaoying A1 - Chen, Pengwan A1 - Alajlan, Naif T1 - Stochastic deep collocation method based on neural architecture search and transfer learning for heterogeneous porous media JF - Engineering with Computers N2 - We present a stochastic deep collocation method (DCM) based on neural architecture search (NAS) and transfer learning for heterogeneous porous media. We first carry out a sensitivity analysis to determine the key hyper-parameters of the network to reduce the search space, and subsequently employ hyper-parameter optimization to obtain the final parameter values. The presented NAS-based DCM also saves the weights and biases of the most favorable architectures, which are then used in the fine-tuning process. We also employ transfer learning techniques to drastically reduce the computational cost. The presented DCM is then applied to the stochastic analysis of heterogeneous porous material. To this end, a three-dimensional stochastic flow model is built, providing a benchmark for the simulation of groundwater flow in highly heterogeneous aquifers. The performance of the presented NAS-based DCM is verified in different dimensions using the method of manufactured solutions. We show that it significantly outperforms finite difference methods in both accuracy and computational cost. 
KW - Maschinelles Lernen KW - Neuronales Lernen KW - Fehlerabschätzung KW - deep learning KW - neural architecture search KW - randomized spectral representation Y1 - 2022 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20220209-45835 UR - https://link.springer.com/article/10.1007/s00366-021-01586-2 VL - 2022 SP - 1 EP - 26 PB - Springer CY - London ER - TY - GEN A1 - Raab, Susanna A1 - Müller, Hannah T1 - LebensMittelPunkte schaffen in Kooperation! Ein Handlungsleitfaden für die Zusammenarbeit von bezirklicher Verwaltung und ernährungspolitischen Initiativen BT - LebensMittelPunkte in Kooperation zwischen bezirklicher Verwaltung und ernährungspolitischen Initiativen etablieren. Ein Handlungsleitfaden mit Erfahrungen aus Berlin. N2 - Access to healthy and sustainable food cannot be taken for granted by everyone in Berlin. Ensuring food for all requires a transformation of Berlin's food system that enables ecological, climate-just and socially just food production and distribution for all people in the city. A so-called LebensMittelPunkt (LMP, a neighbourhood food hub) can contribute to making food in the city fairer and more sustainable. LebensMittelPunkte usually emerge from volunteer initiatives, but they can also be established in cooperation with municipal administrations. Such cooperation between civil society organizations and administrations can unlock potential and resources. This guide is intended to give food-policy initiatives and civil society associations as well as municipal or district administrations in Berlin, and beyond, recommendations on how to set up a LebensMittelPunkt in joint cooperation. 
T3 - KoopWohl - Working Paper - 5 KW - Ernährung KW - Kooperation KW - Zivilgesellschaft KW - Verwaltung KW - Ko-Produktion KW - Ernährungswende KW - Lebensmittelpunkt KW - Leitfaden Y1 - 2022 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20221109-47347 ER - TY - RPRT A1 - Raab, Susanna T1 - Ernährungsgerechtigkeit im deutschen Wohlfahrtsregime. Teilhabe und Ausschlüsse N2 - Food shapes our daily lives. It primarily fulfils the physiological necessity of keeping our bodies alive, and it is at the same time an everyday practice through which society-wide structures become visible. Within these everyday practices, food serves above all an important function in producing social participation, but it can also give rise to structural exclusion and social inequality. The welfare regime is thus assigned an important task in the basic provision of the population and, as part of public services, must address the exclusion of individual social strata from food-related participation and counteract social inequality. This working paper pursues the question of how participation in, and structural exclusion from, food are produced within the German welfare regime, and through which political practices and demands from civil society and social movements food-mediated participation is (re)established. T3 - KoopWohl - Working Paper - 2 KW - Ernährung KW - Wohlfahrtsregime KW - Ernährungsgerechtigkeit Y1 - 2022 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20230530-63952 ER - TY - JOUR A1 - Patzelt, Max A1 - Erfurt, Doreen A1 - Ludwig, Horst-Michael T1 - Quantification of cracks in concrete thin sections considering current methods of image analysis JF - Journal of Microscopy N2 - Image analysis is used in this work to quantify cracks in concrete thin sections via modern image processing. 
Thin sections were impregnated with a yellow epoxy resin to increase the contrast between voids and the other phases of the concrete. By means of several steps of pre-processing, machine learning and Python scripts, cracks can be quantified over an area of up to 40 cm2. As a result, the crack area, lengths and widths were estimated automatically within a single workflow. Crack patterns caused by freeze-thaw damage were investigated. The crack density was used to compare the internal degradation of the investigated thin sections. For validation of the automatically determined results, cracks in the thin sections were measured manually in two different ways. On the one hand, the presented work shows that crack widths can be determined pixelwise, thus providing a width distribution. On the other hand, the automatically measured crack lengths differ from the manually measured ones. KW - Beton KW - Rissbildung KW - Bildanalyse KW - Maschinelles Lernen KW - Mikroskopie KW - concrete KW - crack KW - degradation KW - transmitted light microscopy Y1 - 2022 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20220811-46754 UR - https://onlinelibrary.wiley.com/doi/full/10.1111/jmi.13091 VL - 2022 IS - Volume 286, Issue 2 SP - 154 EP - 159 ER - TY - THES A1 - Partschefeld, Stephan T1 - Synthese von Fließmitteln aus Stärke und Untersuchung der Wechselwirkung mit Portlandzement N2 - The aim of this work was to synthesize novel superplasticizers based on starch, a renewable raw material, and to characterize their interaction with Portland cement. The need to explore alternatives to synthetic admixtures arises from the quantity required for processing approximately 4.1 Gt/a, with superplasticizers accounting for about 85 % of the admixtures. To synthesize superplasticizers from starch, three base starches of different origins were used. 
A cassava starch with a low molecular mass and a wheat starch with a high molecular mass were used, together with a potato starch of medium molecular mass, a waste product of the potato-processing industry. The starch superplasticizers were synthesized by chemical modification in a two-step process. In the first step, the molecular weight of the wheat and potato starches was reduced by acid-hydrolytic degradation; for the short-chain cassava starch, degradation of the molecular mass was not necessary. In the second synthesis step, anionic charges were introduced into the starch molecules by treating the degraded starches and the cassava starch with sodium vinyl sulfonate. Assessment of the synthesis method for producing starch superplasticizers: In this context, molecular parameters of the starch superplasticizers were to be adjusted specifically in order to obtain a plasticizing effect in Portland cement. In particular, the molecular mass and the amount of anionic charges were to be varied in order to identify relationships with the dispersing performance. 1. GPC measurements showed that the molecular mass of the long-chain wheat starch could be reduced under the chosen acid-hydrolysis conditions. By varying the acid-hydrolysis conditions, four degraded wheat starches were produced, exhibiting reductions in molecular mass of 27.5-43 %. The molecular mass of the potato starch could be reduced by about 26 % by acid-hydrolytic degradation. 2. PCD measurements showed that anionic charges could be introduced into the degraded starches by sulfoethylation of the free hydroxyl groups. 
By varying the duration of the sulfoethylation, the amount of anionic charges could be controlled and deliberately varied, so that starch superplasticizers with increasing charge amounts were synthesized in the following order: W-3 < W-2 < K-1 < W-4 < W-1 < M-1. As a result of the chemical modification, six starch superplasticizers with varied molecular masses and anionic charges were produced. It was shown that the origin of the starch is irrelevant for the chemical modification. Owing to the synthesis route, the superplasticizers were obtained as basic aqueous suspensions with active-substance contents in the range of 23.5-50 %. Assessment of the dispersing performance of the synthesized starch superplasticizers: The dispersing performance was determined by rheological experiments with a rotational viscometer, considering the influence on the flow curves and the viscosity curves. Comparison of the dispersing performance with that of a polycondensate and a PCE superplasticizer allowed the superplasticizers to be classified and evaluated. 3. The rheological experiments showed that the starch superplasticizers exhibit a dispersing performance comparable to that of the PCE superplasticizer used for comparison. Moreover, the plasticizing effect of the six starch superplasticizers was markedly higher than that of the polycondensate superplasticizer. The collapse of the dispersing performance of polycondensate superplasticizers at w/c ratios < 0.4, known from the literature, was confirmed. 4. All six starch superplasticizers reduced the yield stress and the dynamic viscosity of the cement paste at a w/c ratio of 0.35. 5. Comparison of the dispersing performance of the starch superplasticizers with one another showed that the anionic charge amount is a key parameter. The starch superplasticizers M-1, K-1, W-1 and W-4, with anionic charge amounts > 6 C/g, showed the highest dispersing performance. 
The classical superplasticizers used for comparison had anionic charge amounts of 1.2 C/g (polycondensate) and 1.6 C/g (PCE). The molecular mass initially appeared to be irrelevant to the dispersing performance. For this reason, the base wheat starch was chemically modified again by introducing anionic charges without, however, reducing the molecular mass. This starch derivative exhibited thickening properties in the cement paste. It was concluded that a defined limiting molecular mass (150,000 Da) exists, below which starch must be degraded in order to produce superplasticizers from it. Furthermore, the results show that chemical modification can produce both superplasticizers and thickeners from starch. Assessment of the influence on the hydration and pore solution of Portland cement: It is known from the literature that superplasticizers can significantly influence the hydration of Portland cement. For this reason, calorimetric and conductometric investigations were carried out on cement suspensions to which the synthesized starch superplasticizers had been added, supplemented by pore-solution analyses at various stages of hydration. 6. The calorimetric investigations of the influence on the hydration of Portland cement showed that the addition of the starch superplasticizers in some cases considerably prolongs the dormant period: the higher the anionic charge amount of the starch superplasticizer, the longer the dormant period lasts. Moreover, a low molecular mass of the starch superplasticizer favours the prolongation of the dormant period. 7. The conductometric investigations showed that all starch superplasticizers slow the free and diffusion-controlled growth of the C-S-H phases. 
In particular, the precipitation of portlandite, which correlates with the onset of setting, occurs at significantly later times. Furthermore, the conductometric investigations correlated with the temporal development of the calcium concentration of the pore solutions. Comparison of the starch superplasticizers with one another showed that the molecular mass is a key parameter. The starch superplasticizer M-1 with the lowest molecular mass, which contains small amounts of short-chain anhydroglucose units, retards hydrate phase formation the most; this effect is comparable to that of sugars. Moreover, the results indicated that the starch superplasticizers adsorb on the first hydration products, thereby slowing hydrate phase formation. The calorimetric and conductometric data, together with the results of the pore-solution analysis of the cement, required a closer look at how the starch superplasticizers influence the hydration of the clinker phases C3A and C3S. Accordingly, the investigations were carried out on the clinker phases C3A and C3S in analogy to those on Portland cement. Assessment of the influence on the hydration and pore solution of C3A: While the calorimetric investigations of C3A hydration indicated a tendency towards slowed hydrate phase formation by the starch superplasticizers, the conductometric results provided fundamental insights into the influence on C3A hydration. Stage I of C3A hydration is characterized by a drop in electrical conductivity, which correlates with the decrease in the calcium ion concentration and the increase in the aluminium ion concentration in the pore solution of the C3A suspensions. Following stage I, a plateau forms in the electrical conductivity curves. 8. It was shown that the starch superplasticizers slow stage I of C3A hydration, i.e. the dissolution and the formation of the first calcium aluminate hydrates. In particular, the starch superplasticizers with higher molecular mass extended the duration of stage I. Stage II is prolonged most strongly by the starch superplasticizers in the following order: M-1 > W-3 > K-1 > W-2 ≥ W-4, which makes clear that no dependence on the anionic charge amount could be identified. The results showed that especially the short-chain starch M-1 maintains stage II for longer. 9. Stages III and IV of C3A hydration are prolonged in particular by the starch superplasticizers with higher molecular mass. The results of the pore-solution analysis correlate with the electrical conductivity results; in particular, the temporal courses of the calcium ion concentration reproduce the courses of the conductivity curves of C3A hydration with close agreement. Assessment of the influence on the hydration and pore solution of C3S: The results of the calorimetric investigations show that C3S hydration is significantly slowed by the starch superplasticizers. The maximum of the main hydration peak is shifted to later times, and its height is also markedly reduced. The conductometric experiments clarified which stages of C3S hydration were affected. 10. It was shown that both the amount of introduced anionic charges and the presence of very small starch superplasticizer molecules (sugars) are key parameters of the retarded hydration kinetics of C3S. The basic mechanism of hydration retardation rests on a combination of reduced C-S-H nucleation and adsorption processes on the first C-S-H phases formed on the C3S particles. 
Beurteilung des Adsorptionsverhaltens am Zement, C3A und C3S Die Bestimmung des Adsorptionsverhaltens der Stärkefließmittel erfolgte mit der Phenol-Schwefelsäure-Methode an Zement,- C3A- und C3S-Suspensionen. Durch den Vergleich der Adsorptionsraten und Adsorptionsmengen in Abhängigkeit von den molekularen Parametern der Stärkefließmittel wurde ein Wechselwirkungsmodell identifiziert. 11. Die Ursache für die hohe Dispergierleistung der Stärkefließmittel liegt in Adsorptionsprozessen an den ersten gebildeten Hydratphasen des Zementes begründet. Die Molekularmasse der Stärkefließmittel ist ein Schlüsselparameter der entscheidend für den Mechanismus der Adsorption ist. Während anionische, langkettige Stärken an mehreren Zementpartikeln gleichzeitig adsorbieren und für eine Vernetzung der Zementpartikel untereinander sorgen (Verdickerwirkung), adsorbieren kurzkettige anionische Stärken lediglich an den ersten gebildeten Hydratphasen der einzelnen Zementpartikel und führen zu elektrostatischer Abstoßung (Fließmittelwirkung). 12. Es konnte gezeigt werden, dass die Stärkefließmittel mit geringerem Molekulargewicht bei höheren Konzentrationen an den Hydratphasen des Zementes adsorbieren. Die Stärkefließmittel mit höherer Molekularmasse erreichen bei einer Zugabemenge von 0,7 % ein Plateau. Daraus wird geschlussfolgert, dass die größeren Fließmittelmoleküle einen erhöhten Platzbedarf erfordern und zur Absättigung der hydratisierenden Oberflächen bei geringeren Zugabemengen führen. Darüber hinaus konnte gezeigt werden, dass die Stärkefließmittel mit höherer anionischer Ladungsmenge zu höheren Adsorptionsmengen auf den Zement-, C3A- und C3S-Partikeln führen. 13. Die Adsorptionsprozesse finden an den ersten gebildeten Hydratphasen der C3A-Partikel statt, wodurch sowohl die Auflösung des C3A als auch die Bildung der Calciumhydroaluminate verlangsamt wird. 
Darüber hinaus wurde festgestellt, dass die Verlangsamung des freien- und diffusionskontrollierten Hydratphasenwachstums des C3S, durch die Adsorption der Stärkefließmittel auf den ersten gebildeten CSH-Phasen hervorgerufen wird. Des Weiteren wurde festgestellt, dass sehr kleine zuckerähnliche Moleküle in der kurzkettigen Maniokstärke in der Lage sind, die Bildung der ersten CSH-Keime zu unterdrücken. Dadurch kann die langanhaltende Plateauphase der elektrischen Leitfähigkeit der C3S-Hydratation erklärt werden. Beurteilung der Porenstruktur- und Festigkeitsausbildung Die Beurteilung der Qualität der Mikrostruktur erfolgte durch die Bestimmung der Rohdichte und der Porenradienverteilung mit Hilfe der Quecksilberhochdruckporosimetrie. Durch das Versetzen der Zementleime mit den Stärkefließmitteln konnten bei gleichbleibender Verarbeitbarkeit Zementsteinprobekörper mit einem um 17,5 % geringeren w/z-Wert von 0,35 hergestellt werden. Die Absenkung des w/z-Wertes führt zu einem Anstieg der Rohdichte des Zementsteins. 14. Durch die Zugabe der Stärkefließmittel und den verringerten w/z-Wert wird die Porenstruktur der Zementsteinproben im Vergleich zum Referenzzementstein verfeinert, da die Gesamtporosität sinkt. Insbesondere der Kapillarporenanteil wird verringert und der Gelporenanteil erhöht. Im Unterschied zu den PCE-Fließmitteln führt die Zugabe der Stärkefließmittel zu keinem erhöhten Eintrag von Luftporen. Dies wiederum hat zur Folge, dass bei der Verwendung der Stärkefließmittel auf Entschäumer verzichtet werden kann. 15. Entsprechend der dichteren Zementsteinmatrix wurden für die Zementsteine mit den Stärkefließmitteln nach 7 d und 28 d, erhöhte Biegezug- und Druckfestigkeiten ermittelt. Insbesondere die 28 d Druckfestigkeit wurde durch den verringerten w/z-Wert um die Faktoren 3,5 - 6,6 erhöht. 
KW - Zusatzmittel KW - Fließmittel KW - Stärkefließmittel KW - Zementhydratation KW - Synthese KW - Bauchemie KW - Betonverflüssiger Y1 - 2022 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20220505-46402 ER - TY - THES A1 - Pacheco Alvim, Sarah T1 - Falsa Seringueira: Rubber Trees and the Materiality of the Unseen N2 - The theme of this project is the colonial history of the natural rubber industry. It focuses on two species of tropical plants: Ficus elastica and Hevea brasiliensis. Geographically, their native habitats are very distant from each other, but they are connected by European influence through the exploitation of latex. The many forms and outcomes of this work manifest the artist's attempt to create an association between a common household plant, the origin of its name, and the source of rubber. As a ghostly connective tissue, the latex surrounds reconstructed history, old prints, live plants, and drawings, accepting the material's capacity to both erase and preserve the past. KW - Kautschuk KW - Gummi KW - Kolonialismus KW - Kunst KW - Geschichte KW - art installation KW - colonialism KW - history KW - natural latex KW - rubber industry Y1 - 2022 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20230105-48467 ER - TY - THES A1 - Ortiz Alvis, Alfredo T1 - Urban Agoraphobia: The pursuit of security within confined community ties. Urban-ethnographic analysis on gated housing developments of Guadalajara, Mexico. N2 - The Gated Community (GC) phenomenon in Latin American cities has become an inherent element of their urban development: despite academic debate, their approach thrives within the housing market; not surprisingly, as some of the premises on which GCs are based, namely safety, control and supervision, intersperse seamlessly with the insecure conditions of the contexts from which they arise. 
The current security crisis in Mexico, triggered in 2006 by the so-called war on drugs, has reached its peak with the highest insecurity rates in decades, representing a unique chance to study these interactions. Although the leading term of this research, Urban Agoraphobia, implies a causal dichotomy between the rise in the sense of fear amongst citizens and housing confinement as its linear consequence, I acknowledge that GCs represent a complex phenomenon, a hub of diverse factors and multidimensional processes held on four fundamental levels: global, social, individual and state-related. The focus of this dissertation is set on the individual plane and contributes, from the analysis of the GC residents' perspectives, experiences and perceptions, to a debate that has usually been limited to the scrutiny of other drivers, disregarding the role of dwellers' underlying fears, motivations and concerns. Assuming that the current ruling security model in Mexico tends to empower its commodification rather than its collective quality, this research draws upon a methodological triangulation, alongside conceptual and contextual analyses, to test the hypothesis that insecurity plays an increasingly major role, leading citizens into the belief that acquiring a household in a controlled and surveilled community represents a counterweight against the feared environment of the open city. The focus of the analysis lies on the internal hatch of community ties as a potential palliative for the provision of a sense of security, aiming to transcend the unidimensional discourse of GCs as defined mainly by their defensive apparatus. Residents' perspectives acquired through ethnographic analyses may provide the chance to gain an essential view into a phenomenon that further consolidates without a critical study of its actual implications, not only for Mexican cities, but also for the Latin American and global contexts. 
KW - Agoraphobie KW - Geschlossene Gesellschaft KW - Volkskunde KW - Kommunität KW - Segregation KW - Urban Agoraphobia KW - Gated Communities KW - Ethnography KW - Community KW - Segregation Y1 - 2022 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20221005-47234 ER - TY - THES A1 - Nouri, Hamidreza T1 - Mechanical Behavior of two dimensional sheets and polymer compounds based on molecular dynamics and continuum mechanics approach N2 - In brief, this thesis encompasses two major parts examining the mechanical responses of polymer compounds and two-dimensional materials: 1- A molecular dynamics approach is used to study the transverse impact behavior of polymers, polymer compounds and two-dimensional materials. 2- The large deflection of circular and rectangular membranes is examined by employing a continuum mechanics approach. Two-dimensional (2D) materials, including graphene and molybdenum disulfide (MoS2), exhibit new and promising physical and chemical properties, opening new opportunities to be utilized alone or to enhance the performance of conventional materials. These 2D materials have attracted tremendous attention owing to their outstanding physical properties, especially concerning transverse impact loading. Polymers, whether with a carbon backbone (organic polymers) or without carbon atoms in the backbone (inorganic polymers), like polydimethylsiloxane (PDMS), have extraordinary characteristics; in particular, their flexibility allows various easy ways of forming and casting. This simple shape processing makes polymers an excellent material, often used as a matrix in composites (polymer compounds). In this PhD work, Classical Molecular Dynamics (MD) is employed to simulate the transverse impact loading of 2D materials as well as polymer compounds reinforced with graphene sheets. In particular, MD was adopted to investigate perforation of the target and the impact resistance force. 
Using the MD approach, the minimum velocity of the projectile that can perforate and pass through the target is obtained. The largest part of the investigation focused on how graphene can enhance the impact properties of the compound. A further purpose of this work was to discover the effect of the atomic arrangement of 2D materials on the impact problem. To this aim, the impact properties of two different 2D materials, graphene and MoS2, are studied. The simulation of chemical functionalization was carried out systematically, either with covalently bonded molecules or with non-bonded ones, focusing the subsequent efforts on the covalently bonded species, revealed as the most efficient linkers. To study the transverse impact behavior with the classical MD approach, the well-known Large-scale Atomic/Molecular Massively Parallel Simulator (LAMMPS) software is employed. The simulation is set up through predefined commands in LAMMPS. Generally, these commands (atom_style, pair_style, angle_style, dihedral_style, improper_style, kspace_style, read_data, fix, run, compute and so on) are used to build and run the model for the desired outputs. Depending on the particle and model types, suitable inter-atomic potentials (force fields) are chosen. The ensembles, constraints and boundary conditions are applied depending on the problem definition. To do so, the atomic structures must first be created. Python codes are developed to generate the particles that define the atomic arrangement of each model. Each atomic arrangement is introduced separately to LAMMPS for simulation. After applying constraints and boundary conditions, LAMMPS uses integrators such as the velocity-Verlet integrator or Brownian dynamics to run the simulation, and finally the outputs are produced. The outputs are inspected carefully to understand the natural behavior of the problem. 
Understanding the natural properties of materials assists us in designing new, applicable materials. In the investigation of the large deflection of circular and rectangular membranes, which forms the second part of this thesis, a continuum mechanics approach is implemented. Nonlinear Föppl membrane theory, which retains the nonlinear terms in the governing equations of motion, is used to establish the non-linear partial differential equilibrium equations of the membranes under distributed and central point loads. The Galerkin and energy methods are utilized to solve the non-linear partial differential equilibrium equations of circular and rectangular plates, respectively. The maximum deflection as well as the stress across the film region, which are key concerns in many industrial applications, are obtained. T2 - Mechanisches Verhalten von zweidimensionalen Schichten und Polymerverbindungen basierend auf molekulardynamischer und kontinuumsmechanischem Ansatz KW - Molekulardynamik KW - Polymerverbindung KW - Auswirkung KW - Molecular Dynamics Simulation KW - Continuum Mechanics KW - Polymer compound KW - Impact Y1 - 2022 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20220713-46700 ER - TY - THES A1 - Müller, Matthias T1 - Salt-frost Attack on Concrete - New Findings regarding the Damage Mechanism N2 - The reduction of the cement clinker content is an important prerequisite for the improvement of the CO2-footprint of concrete. Nevertheless, the durability of such concretes must be sufficient to guarantee a satisfactory service life of structures. Salt frost scaling resistance is a critical factor in this regard, as it is often diminished at increased clinker substitution rates. Furthermore, only insufficient long-term experience for such concretes exists. A high salt frost scaling resistance thus cannot be achieved by applying only descriptive criteria, such as the concrete composition. 
It is therefore to be expected that in the long term a performance-based service life prediction will replace the descriptive concept. To achieve the important goal of clinker reduction for concretes in cold and temperate climates as well, it is important to understand the underlying mechanisms of salt frost scaling. However, conflicting damage theories dominate the current state of the art. The goal of this thesis was consequently to evaluate existing damage theories and to examine them experimentally. It was found that only two theories have the potential to describe the salt frost attack satisfactorily – the glue spall theory and the cryogenic suction theory. The glue spall theory attributes the surface scaling to the interaction of an external ice layer with the concrete surface. Only when moderate amounts of deicing salt are present in the test solution can the resulting mechanical properties of the ice cause scaling. However, the results in this thesis indicate that severe scaling also occurs at deicing salt levels at which the ice is much too soft to damage concrete. Thus, the inability of the glue spall theory to account for all aspects of salt frost scaling was shown. The cryogenic suction theory is based on the eutectic behavior of salt solutions, which consist of two phases – water ice and liquid brine – between the freezing point and the eutectic temperature. The liquid brine acts as an additional moisture reservoir, which facilitates the growth of ice lenses in the surface layer of the concrete. The experiments in this thesis confirmed that the ice formation in hardened cement paste increases due to the suction of brine at sub-zero temperatures. The extent of additional ice formation was influenced mainly by the porosity and by the chloride binding capacity of the hardened cement paste. 
Consequently, the cryogenic suction theory plausibly describes the actual generation of scaling, but it has to be expanded by some crucial aspects to represent the salt frost scaling attack completely. The most important aspect is the intensive saturation process, which is ascribed to the so-called micro ice lens pump. Therefore, a combined damage theory was proposed, which considers multiple saturation processes. Important aspects of this combined theory were confirmed experimentally. As a result, the combined damage theory constitutes a good basis for understanding the salt frost scaling attack on concrete on a fundamental level. Furthermore, a new approach was identified to account for the reduced salt frost scaling resistance of concretes with reduced clinker content. T2 - Frost-Tausalz-Angriff auf Beton - Neue Erkenntnisse zum Schadensmechanismus KW - Beton KW - Frost KW - Concrete KW - Salt frost attack KW - Damage mechanism KW - Glue Spall KW - Cryogenic Suction Y1 - 2022 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20230103-48681 UR - https://e-pub.uni-weimar.de/opus4/frontdoor/index/index/docId/4502 N1 - English version of my German-language dissertation entitled "Frost-Tausalz-Angriff auf Beton - Neue Erkenntnisse zum Schadensmechanismus". ER - TY - JOUR A1 - Moscoso, Caridad A1 - Kraus, Matthias T1 - On the Verification of Beams Subjected to Lateral Torsional Buckling by Simplified Plastic Structural Analysis JF - ce/papers N2 - Plastic structural analysis may be applied without any difficulty and with little effort for structural member verifications with regard to lateral torsional buckling of doubly symmetric rolled I sections. Such analyses can be performed based on the plastic zone theory, specifically using finite beam elements with seven degrees of freedom and 2nd order theory considering material nonlinearity. 
The existing Eurocode enables these approaches, and the upcoming generation will provide corresponding regulations in EN 1993-1-14. The investigations allow the determination of computationally accurate limit loads, which are determined in the present paper for selected structural systems with different sets of parameters, such as length, steel grade and cross section types. The results are compared to approximations gained by more sophisticated FEM analyses (commercial software Ansys Workbench applying solid elements) for reasons of verification/validation. In this context, differences in the results of the numerical models are addressed and discussed. In addition, results are compared to resistances obtained by common design regulations based on reduction factors χlt, including regulations of EN 1993-1-1 (including the German National Annex) as well as prEN 1993-1-1: 2020-08 (proposed new Eurocode generation). In conclusion, correlations of the results and their advantages as well as disadvantages are discussed. KW - Stahl KW - Träger KW - Robustheit KW - steel KW - stability KW - flexural-torsional-buckling Y1 - 2022 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20230124-48782 UR - https://onlinelibrary.wiley.com/doi/full/10.1002/cepa.1835 VL - 2022 IS - Volume 5, Issue 4 SP - 914 EP - 923 PB - Ernst & Sohn CY - Berlin ER - TY - JOUR A1 - Morawski, Tommaso T1 - La tavola e la mappa. Paradigmi per una metaforologia mediale dell’immaginazione cartografica in Kant JF - Philosophy Kitchen N2 - Immanuel Kant’s thought is a central historical and theoretical reference in Hans Blumenberg’s metaphorological project. This is demonstrated by the fact that in the Paradigms the author outlines the concept of absolute metaphor by explicitly referring to §59 of the Critique of the Power of Judgment and recognizing in the Kantian symbol a model for his own metaphorics. 
However, Kant’s name also appears in the chapter on the metaphor of the “terra incognita”: not only did he theorize the presence of symbolic hypotyposis in our language [...], but he also made extensive use of metaphors linked to “determinate historical experiences”. In particular: geographical metaphors. In my essay, I would like to start from the analysis of Kant’s geographical metaphors in order to try to rethink Blumenberg’s archaeological method as an archaeology of media that grounds the study of metaphors in the materiality of communication and the combination of tools, agents and media. KW - Kant, Immanuel KW - Blumenberg, Hans KW - Metapher KW - Medienwissenschaft KW - Metaphorik KW - geografische Metapher KW - Medienarchäologie Y1 - 2022 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20230124-48766 UR - https://www.ojs.unito.it/index.php/philosophykitchen/article/view/7191 VL - 2022 IS - 17, II/2022 SP - 137 EP - 152 PB - Università degli Studi di Torino CY - Torino ER - TY - THES A1 - Moosbrugger, Jennifer T1 - Design Intelligence - Human-Centered-Design for the development of industrial AI/ML agents N2 - This study deals with design for AI/ML systems, more precisely in the industrial AI context, based on case studies from the factory automation field. It therefore touches on core concepts from Human-Centered-Design (HCD), User Experience (UX) and Human Computer Interaction (HCI) on one hand, as well as concepts from Artificial Intelligence (AI), Machine Learning (ML) and the impact of technology on the other. The case studies the research is based on are within the industrial AI domain. However, the final outcomes, the findings, solutions, artifacts and so forth, should be transferable to a wider spectrum of domains. 
The study’s aim is to examine the role of designers in the age of AI and the factors which are relevant, based on the hypothesis that current AI/ML development lacks the human perspective, which means that there are pitfalls and challenges that design can help resolve. The initial literature review revealed that AI/ML are perceived as a new design material that calls for a new design paradigm. Additional research based on qualitative case study research was conducted to gain an overview of the relevant issues and challenges. From this, 17 themes emerged, which, together with explorative expert interviews and a structured literature review, were further analyzed to produce the relevant HCD, UX and HCI themes. It became clear that designers need new processes, methods, and tools in the age of AI/ML, in combination with not only design, but also data science and business expertise, which is why the proposed solution in this PhD features process modules for design, data science and business collaboration. There are seven process modules with related activities and dependencies that serve as guidelines for practitioners who want to design intelligence. A unified framework for collecting use case exemplars was created, based on a workshop with different practitioners and researchers from the area of AI/ML, to support and enrich the process modules with concrete project examples. 
KW - Künstliche Intelligenz KW - Benutzererlebnis KW - Human-centered Design KW - Datenkompetenz KW - Prozessmodell KW - AI, computational thinking KW - Design, UX, Human-Centered-Design KW - process, tools, methods KW - collaboration KW - Artificial Intelligence Y1 - 2022 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20230719-64098 ER - TY - THES A1 - Mojahedin, Arvin T1 - Analysis of Functionally Graded Porous Materials Using Deep Energy Method and Analytical Solution N2 - Porous materials are an emerging branch of engineering materials that are composed of two elements: one element is a solid (matrix), and the other element is either a liquid or a gas. Pores can be distributed within the solid matrix of porous materials with different shapes and sizes. In addition, porous materials are lightweight and flexible, and have a higher resistance to crack propagation as well as specific thermal, mechanical, and magnetic properties. These properties are needed for manufacturing engineering structures such as beams. These materials are widely used in solid mechanics and have recently been considered by many researchers as a good replacement for classical materials. The production of lightweight materials has advanced because of the possibility of exploiting the properties of these materials. Various types of porous material are generated naturally or artificially for specific applications, such as bone and foams. As in functionally graded materials, the pore distribution patterns can be uniform or non-uniform. Biot’s theory is a well-developed theory for studying the behavior of poroelastic materials; it investigates the interaction between the fluid and solid phases of a fluid-saturated porous medium. Functionally graded porous materials (FGPM) are widely used in modern industries, such as aerospace, automotive, and biomechanics. These advanced materials have some specific properties compared to materials with a classical structure. 
They are extremely light, while they have specific strength in mechanical and high-temperature environments. FGPMs are characterized by a gradual variation of material parameters over the volume. Although these materials occur naturally, it is also possible to design and manufacture them for a specific application. Therefore, many studies have been carried out to analyze the mechanical and thermal properties of FGPM structures, especially beams. Biot was the pioneer in formulating the linear elasticity and thermoelasticity equations of porous materials. Since then, Biot's formulation has been developed within continuum mechanics into what is known as poroelasticity. There are obstacles to analyzing the behavior of these materials accurately, such as the shape of the pores, the distribution of pores in the material, and the behavior of the fluid (or gas) that saturates the pores. Indeed, most engineering structures made of FGPM have nonlinear governing equations. Therefore, it is difficult to study engineering structures by solving these complicated equations. The main purpose of this dissertation is to analyze porous materials in engineering structures. For this purpose, the complex equations of porous materials have been simplified and applied to engineering problems so that the effect of all parameters of porous materials on the behavior of the engineering structure can be investigated. The effects of important parameters of porous materials on beam behavior, including pore compressibility, porosity distribution, thermal expansion of the fluid within the pores, the interaction of stresses between pores and the material matrix due to temperature increase, pore size, material thickness, and fluid-saturated versus unsaturated pore conditions, are investigated. Two methods, the deep energy method and the exact solution, have been used to reduce the problem hypotheses, increase accuracy, increase processing speed, and apply these to engineering structures. 
Both methods analyze the nonlinear and complex equations of porous materials. To increase the accuracy of the analysis and to study the effect of shear forces, the Timoshenko and Reddy beam theories have been used. In addition, neural networks such as residual and fully connected networks are designed to achieve high accuracy and less processing time than other computational methods. T3 - ISM-Bericht // Institut für Strukturmechanik, Bauhaus-Universität Weimar - 2022,12 KW - Poröser Stoff KW - Analytische Lösung KW - Porous Materials KW - Deep Energy Method KW - Analytical Solution KW - Functionally Graded Materials Y1 - 2022 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20221220-48674 ER - TY - THES A1 - Miller, Martin T1 - BIM-basierte Digitalisierung von Bestandsgebäuden aus Sicht des FM am Beispiel von Heizungsanlagen N2 - The aim of this thesis is to define the information relevant to facility management for the Building Information Modeling based modeling of existing buildings, using a heating system as an example. Based on this, the necessary work steps of the building survey are derived. To define these work steps, the basic procedure of a building survey and the legal requirements for operating a heating system are presented. In addition, this work analyzes the advantages and challenges of the interplay between Building Information Modeling and facility management. The defined work steps were applied in an example project. Within this example project, the decisive operating data for each system component are defined in the form of information requirements according to DIN 17412. The building model is supplemented with parameters carrying the information relevant to facility management. The results of the example project are presented with informative sections, plans and 3D visualizations. 
Finally, the results are validated with respect to facility management. From the work steps and results, a guideline has been created for the digitalization process of existing buildings for facility management. KW - Facility Management KW - BIM Y1 - 2022 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20220623-46616 ER - TY - JOUR A1 - Mehling, Simon A1 - Schnabel, Tobias A1 - Londong, Jörg T1 - Investigation on Energetic Efficiency of Reactor Systems for Oxidation of Micro-Pollutants by Immobilized Active Titanium Dioxide Photocatalysis JF - Water N2 - In this work, the degradation performance for the photocatalytic oxidation of eight micro-pollutants (amisulpride, benzotriazole, candesartan, carbamazepine, diclofenac, gabapentin, methylbenzotriazole, and metoprolol) within real secondary effluent was investigated using three different reactor designs. For all reactor types, the influence of irradiation power on the reaction rate and energetic efficiency was investigated. The flat cell and batch reactors showed almost identical substance-specific degradation behavior. Within the immersion rotary body reactor, benzotriazole and methylbenzotriazole showed a significantly lower degradation affinity. The flat cell reactor achieved the highest mean degradation rate, with half-life values ranging from 5 to 64 min and a mean of 18 min, due to its high catalyst surface to hydraulic volume ratio. The EE/O values were calculated for all micro-pollutants as well as the mean degradation rate constant of each experimental step. The lowest substance-specific energy per order (EE/O) values of 5 kWh/m3 were measured for benzotriazole within the batch reactor. The batch reactor also reached the lowest mean values (11.8–15.9 kWh/m3), followed by the flat cell reactor (21.0–37.0 kWh/m3) and the immersion rotary body reactor (23.9–41.0 kWh/m3). Catalyst arrangement and irradiation power were identified as major influences on the energetic performance of the reactors. 
Low radiation intensities as well as the use of a submerged catalyst arrangement allowed a reduction in energy demand by a factor of 3–4. A treatment according to existing treatment goals of wastewater treatment plants (80% total degradation) was achieved using the batch reactor with a calculated energy demand of 7000 Wh/m3. KW - Fotokatalyse KW - Abwasserreinigung KW - photocatalysis KW - micro-pollutant treatment KW - titanium dioxide KW - reactor design KW - energy per order KW - OA-Publikationsfonds2022 Y1 - 2022 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20220912-47130 UR - https://www.mdpi.com/2073-4441/14/17/2681 VL - 2022 IS - Volume 14, issue 17, article 2681 SP - 1 EP - 15 PB - MDPI CY - Basel ER - TY - THES A1 - Manzano Gómez, Noel A. T1 - The reverse of urban planning. Towards a 20th century history of informal urbanization in Europe and its origins in Madrid and Paris (1850-1940) N2 - The objective of this thesis was to understand the 20th-century history of informal urbanisation in Europe and its origins in Madrid and Paris. The concept of informal urbanisation was employed to refer to the process of developing shacks and precarious single-family housing areas that were not planned by the public powers and were considered to be substandard because of their below-average materials and social characteristics. Our main hypothesis was that, despite being a phenomenon with ancient roots, informal urbanisation emerged as a public problem and was subsequently prohibited in connection with another historical process: the birth of contemporary urban planning. Therefore, its transformation into a deviant and illegal urban growth mechanism would have been a pan-European process occurring at the same pace that urban planning developed during the first decades of the 20th century. Analysing the 20th-century history of informal urbanisation in Europe was an ambitious task that required using a large number of sources. 
To contend with this issue, this thesis combined two main methods: historiographical research on informal urbanisation in Europe and archival research on two case studies, Madrid and Paris, to make the account more precise by analysing primary sources on the subject. Our research on these informal areas, which were produced mainly through poor private allotments and housing developed on land squats, revealed two key moments of explosive growth across Europe: the 1920s and 1960s. The near disappearance of informal urbanisation throughout the continent seemed to be a consequence not of the historical development of urban planning (which was commonly transgressed and bypassed) but of the exacerbation of global economic inequalities, permitting the development of a geography of privilege in Europe. Concerning the cases of Paris and Madrid, the origins of informal urbanisation, that is, the moment the issue started to be problematised, seemed to lie in the second half of the 19th century, when a number of hygienic norms and surveillance devices began to control housing characteristics. From that moment onwards, informal urbanisation areas formed peripheral belts in both cities. This growth became the object of an illegalisation process of which we have identified three phases: (i) the unregulated development of the phenomenon during the second half of the 19th century, (ii) the institutional production of “exception regulations” to permit a controlled development of substandard housing in the peripheral fringes of both cities, and (iii) the synchronic prohibition of informal urbanisation in the 1920s and its illegal reproduction. 
KW - Stadtplanung KW - Verstädterung KW - Historische Soziologie KW - Urban Planning KW - Informal Urbanization KW - Comparative History KW - Historical Sociology KW - European Urban Studies Y1 - 2022 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20220119-45693 ER - TY - THES A1 - Malik, Irfan T1 - An adaptive contact formulation for Isogeometric Finite Element Analysis N2 - Numerical simulation of physical phenomena, such as electro-magnetics, structural mechanics and fluid mechanics, is essential for the cost- and time-efficient development of high-quality mechanical products. It allows investigating the behavior of a product or a system long before the first prototype is manufactured. This thesis addresses the simulation of contact mechanics. Mechanical contacts appear in nearly every product of mechanical engineering: gearboxes, roller bearings, valves and pumps are only some examples. Simulating these systems, not only for the maximal/minimal stresses and strains but also for the stress distribution in the case of tribo-contacts, is a challenging task from a numerical point of view. Classical procedures like the Finite Element Method suffer from the non-smooth representation of contact surfaces with discrete Lagrange elements. On the one hand, an error due to the approximate description of the surface is introduced. On the other hand, it is difficult to attain a robust contact search because surface normals cannot be described uniquely at element edges. This thesis therefore introduces a novel approach, the adaptive isogeometric contact formulation based on polynomial Splines over hierarchical T-meshes (PHT-Splines), for the approximate solution of the non-linear contact problem. It provides a more accurate, robust and efficient solution compared to conventional methods. 
During the development of this method, the focus was laid on the solution of static, frictionless contact problems in 2D and 3D in which the structures undergo small deformations. The mathematical description of the problem entails a system of partial differential equations and boundary conditions which model the linear elastic behaviour of continua. Additionally, it comprises side conditions, the Karush-Kuhn-Tucker conditions, to prevent the contacting structures from non-physical penetration. The mathematical model must be transformed into its integral form for the approximation of the solution. Employing a penalty method, contact constraints are incorporated by adding the resulting equations in weak form to the overall set of equations. For an efficient spatial discretization of the bulk and especially the contact boundary of the structures, the principle of Isogeometric Analysis (IGA) is applied. Isogeometric Finite Element Methods provide several advantages over conventional Finite Element discretizations. Surface approximation with Non-Uniform Rational B-Splines (NURBS) allows a robust numerical solution of the contact problem with high accuracy in terms of an exact geometry description, including surface smoothness. The numerical evaluation of the contact integral is challenging due to the generally non-conforming meshes of the contacting structures. In this work, the highly accurate Mortar Method is applied in the isogeometric setting for the evaluation of contact contributions. This leads to an algebraic system of equations that is linearized and solved in sequential steps, a procedure known as the Newton-Raphson method. Based on numerical examples, the advantages of the isogeometric approach with classical refinement strategies, like p- and h-refinement, are shown, and the influence of relevant algorithmic parameters on the approximate solution of the contact problem is verified. 
One drawback of the Spline approximation of stresses, though, is its lack of accuracy at the contact edge, where the structures change their boundary from contact to no contact and where the solution features a kink. The approximation with smooth Spline functions yields numerical artefacts in the form of non-physical oscillations. This property of the numerical solution is not only a drawback for the simulation of, e.g., tribological contacts; it also negatively influences the convergence of iterative solution procedures. Hence, the NURBS-discretized geometries are transformed to Polynomial Splines over Hierarchical T-meshes (PHT-Splines) for local refinement along contact edges, which reduces the artefact of pressure oscillations. NURBS have a tensor-product structure, which does not allow refining only certain parts of the geometrical domain while leaving other parts unchanged. Due to the Bézier extraction underlying the transformation from NURBS to PHT-Splines, the connected mesh structure is broken up into separate elements. This allows an efficient local refinement along the contact edge. Before single elements are refined in a hierarchical form with cross-insertion, existing basis functions must be modified or eliminated. This process of truncation ensures local and global linear independence of the refined basis, which is needed for a unique approximate solution. The contact boundary is a priori unknown; local refinement along the contact edge, especially for 3D problems, is for this reason not straightforward. In this work, the use of an a posteriori error estimation procedure, the Super-Convergent Recovery Solution-Based Error Estimation Scheme, together with the Dörfler marking method, is suggested for the spatial search of the contact edge. Numerical examples show that the developed method significantly improves the quality of the solution along the contact edge compared to NURBS-based approximate solutions. 
Also, the error in the maximum contact pressure, which correlates with the pressure artefacts, is minimized by the adaptive local refinement. In a final step, the practicability of the developed solution algorithm is verified by an industrial application: the highly loaded mechanical contact between roller and cam in the drive train of a high-pressure fuel pump. KW - Isogeometrische Analyse KW - Elementanalyse KW - Isogeometric Analysis KW - Error estimation KW - adaptive contact KW - Finite Element Analysis KW - Computational contact mechanics Y1 - 2022 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20220324-46129 ER - TY - JOUR A1 - Maiwald, Holger A1 - Schwarz, Jochen A1 - Kaufmann, Christian A1 - Langhammer, Tobias A1 - Golz, Sebastian A1 - Wehner, Theresa T1 - Innovative Vulnerability and Risk Assessment of Urban Areas against Flood Events: Prognosis of Structural Damage with a New Approach Considering Flow Velocity JF - Water N2 - The floods in 2002 and 2013, as well as the recent flood of 2021, caused billions of euros worth of property damage in Germany. The project Innovative Vulnerability and Risk Assessment of Urban Areas against Flood Events (INNOVARU) aimed at the development of a practicable flood damage model that enables realistic damage statements for the residential building stock. In addition to the determination of local flood risks, it also takes into account the vulnerability of individual buildings and allows for the prognosis of structural damage. In this paper, we discuss an improved method for the prognosis of structural damage due to flood impact. Detailed correlations between inundation level and flow velocity, depending on the vulnerability of the building types as well as the number of storeys, are considered. Because reliable damage data from events with high flow velocities were not available, an innovative approach was adopted to cover a wide range of flow velocities. 
The proposed approach combines comprehensive damage data collected after the 2002 flood in Germany with damage data from the tsunami of the 2011 Tohoku earthquake in Japan. The application of the developed methods enables a reliable reinterpretation of the structural damage caused by the August 2002 flood in six study areas in the Free State of Saxony. KW - Bauschaden KW - Hochwasser KW - Hochwasserschadensmodell KW - Strukturschaden KW - Strömungsgeschwindigkeit KW - Schadensprognose KW - OA-Publikationsfonds2022 Y1 - 2022 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20221012-47254 UR - https://www.mdpi.com/2073-4441/14/18/2793 VL - 2022 IS - Volume 14, issue 18, article 2793 SP - 1 EP - 28 PB - MDPI CY - Basel ER - TY - THES A1 - López Zermeño, Jorge Alberto T1 - Isogeometric and CAD-based methods for shape and topology optimization: Sensitivity analysis, Bézier elements and phase-field approaches N2 - The Finite Element Method (FEM) is widely used in engineering for solving Partial Differential Equations (PDEs) over complex geometries. To this end, the FEM software must be provided with a geometric model, which is typically constructed in Computer-Aided Design (CAD) software. However, FEM and CAD use different approaches for the mathematical description of the geometry. Thus, a mesh suitable for FEM must be generated from the CAD model. Nonetheless, this procedure is not a trivial task and can be time-consuming. This issue becomes more significant for shape and topology optimization problems, which involve evolving the geometry iteratively, so the computational cost associated with mesh generation increases substantially for this type of application. The main goal of this work is to investigate the integration of CAD and CAE in shape and topology optimization. To this end, numerical tools that close the gap between design and analysis are presented. 
The specific objectives of this work are listed below: • Automate the sensitivity analysis in an isogeometric framework for applications in shape optimization; applications in linear elasticity are considered. • Develop a methodology that provides a direct link between the CAD model and the analysis mesh, so that the sensitivity analysis can be performed in terms of the design variables located in the design model. • Develop an isogeometric method for shape and topology optimization that takes advantage of Non-Uniform Rational B-Splines (NURBS) with higher continuity as basis functions. Isogeometric Analysis (IGA) is a framework designed to integrate design and analysis in engineering problems. The fundamental idea of IGA is to use the same basis functions that model the geometry, usually NURBS, for the approximation of the solution fields. The advantage of integrating design and analysis is twofold. First, the analysis stage is more accurate since the system of PDEs is solved not on an approximated geometry but on the exact CAD model. Moreover, providing a direct link between the design and analysis discretizations makes the implementation of efficient sensitivity analysis methods possible. Second, the computational time is significantly reduced because the mesh generation process can be avoided. Sensitivity analysis is essential for solving optimization problems when gradient-based optimization algorithms are employed. Automatic differentiation can compute exact gradients automatically by tracking the algebraic operations performed on the design variables. For the automation of the sensitivity analysis, an isogeometric framework is used. Here, the analysis mesh is obtained by carrying out successive refinements, while the coarse geometry is retained for the design domain. An automatic differentiation (AD) toolbox is used to perform the sensitivity analysis. 
The AD toolbox takes the code for computing the objective and constraint functions as input. Then, using a source-code transformation approach, it outputs code that computes the objective and constraint functions as well as their sensitivities. The sensitivities obtained from this sensitivity propagation method are compared with analytical sensitivities, which are computed using a full isogeometric approach. The computational efficiency of AD is comparable to that of analytical sensitivities; however, the memory requirements are larger for AD. Therefore, AD is preferable as long as its memory requirements can be satisfied. Automatic sensitivity analysis demonstrates its practicality, since it simplifies the work of engineers and designers. Complex geometries with sharp edges and/or holes cannot easily be described with NURBS. One solution is the use of unstructured meshes. Simplex elements (triangles and tetrahedra in two and three dimensions, respectively) are particularly useful since they can automatically parameterize a wide variety of domains. In this regard, unstructured Bézier elements, commonly used in CAD, can be employed for the exact modelling of CAD boundary representations. In two dimensions, the domain enclosed by NURBS curves is parameterized with Bézier triangles. To describe the boundary of a two-dimensional CAD model exactly, the continuity of the NURBS boundary representation is reduced to C^0. Then, the control points are used to generate a triangulation such that the boundary of the domain is identical to the initial CAD boundary representation. Thus, a direct link between the design and analysis discretizations is provided, and the sensitivities can be propagated to the design domain. In three dimensions, the initial CAD boundary representation is given as a collection of NURBS surfaces that enclose a volume. Using a mesh generator (Gmsh), a tetrahedral mesh is obtained. 
The original surface is reconstructed by modifying the location of the control points of the tetrahedral mesh using Bézier tetrahedral elements and a point-inversion algorithm. This method offers the possibility of computing the sensitivity analysis on the analysis mesh; the sensitivities can then be propagated to the design discretization. To reuse the originally generated mesh, a moving Bézier tetrahedral mesh approach was implemented. A gradient-based optimization algorithm is employed together with a sensitivity propagation procedure for the shape optimization cases. The proposed shape optimization approaches are used to solve standard benchmark problems in structural mechanics. The results show that the proposed approach computes accurate gradients and evolves the geometry towards optimal solutions. In three dimensions, the moving mesh approach results in faster convergence in terms of computational time and avoids remeshing at each optimization step. To allow for topological changes in a CAD-based framework, an isogeometric phase-field based shape and topology optimization is developed. In this case, the diffuse interface of a phase-field variable over a design domain implicitly describes the boundaries of the geometry. The design variables are the local values of the phase-field variable. The descent direction that minimizes the objective function is found by using the sensitivities of the objective function with respect to the design variables. The evolution of the phase field is determined by solving the time-dependent Allen-Cahn equation. The isogeometric phase-field method is of particular advantage for topology optimization problems that require C^1 continuity, such as flexoelectric structures, since NURBS can achieve the desired continuity more efficiently than the traditionally employed functions. 
The robustness of the method is demonstrated by applying it to different geometries, boundary conditions, and material configurations. The applications illustrate that, compared to piezoelectricity, the electrical performance of flexoelectric microbeams is higher under bending. In contrast, the electrical power of a structure under compression is higher with piezoelectricity. T3 - ISM-Bericht // Institut für Strukturmechanik, Bauhaus-Universität Weimar - 2022,4 KW - CAD KW - Gestaltoptimierung KW - Topologieoptimierung KW - Isogeometrische Analyse KW - Finite-Elemente-Methode KW - Computer-Aided Design KW - Shape Optimization KW - Topology Optimization KW - Isogeometric Analysis KW - Finite Element Method Y1 - 2022 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20220831-47102 ER - TY - JOUR A1 - Lutolli, Blerim T1 - A Review of Domed Cities and Architecture: Past, Present and Future JF - Future Cities and Environment N2 - The goal of architecture is changing in response to the expanding role of cities, rapid urbanization, and transformation under changing economic, environmental, social, and demographic factors. As cities grew in the early modern era, overcrowding, urbanization, and pollution led reformers to consider the future shape of cities. One of the most critical topics in contemporary architecture is the subject of future concepts of living. Domed cities, as one such future concept of living, are rarely considered and are used chiefly as “utopian” visions in the discourse on future ways of living. This paper reviews domed cities to deepen the understanding of the idea in practice, including its architectural approach. The main aim of this paper is to provide a broad overview of domed cities in the face of pollution, one of the main concerns in many European cities. As a result, the significance of the review of existing projects is focused on their conceptual quality. 
This review will pave the way for further studies of future developments in the realm of domed cities. In this paper, the city of Celje, one of the most polluted cities in Slovenia, is taken as a case study for considering the incorporation of the dome concept, owing to the lack of accessible literature on the topic. This review’s primary contribution is to allow architects to explore a broad spectrum of innovation by comparing today’s achievable statuses against the possibilities generated by domed cities. As a result of this study, the concept of living under a dome remains to be developed in theory and practice. The current challenging climatic situation will accelerate the evolution of these concepts, resulting in the formation of new typologies, which are a requirement for humanity. KW - Architektur KW - Wohnform KW - Umweltverschmutzung KW - Domed Cities KW - Architecture KW - Concept of Living KW - Architecture Pollution KW - Kuppelstadt KW - OA-Publikationsfonds2022 Y1 - 2022 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20221103-47335 UR - https://futurecitiesandenvironment.com/articles/10.5334/fce.154/ VL - 2022 IS - Volume 8, issue 1 SP - 1 EP - 9 PB - Ubiquity Press Limited CY - London ER - TY - THES A1 - Liu, Bokai T1 - Stochastic multiscale modeling of polymeric nanocomposites using Data-driven techniques N2 - In recent years, lightweight materials such as polymeric nanocomposites (PNCs) have been studied and developed due to their excellent physical and chemical properties. Structures composed of these composite materials are widely used in aerospace engineering structures, automotive components, and electrical devices. The excellent and outstanding mechanical, thermal, and electrical properties of carbon nanotubes (CNTs) make them an ideal filler for strengthening the corresponding properties of polymer materials. 
The heat transfer of composite materials has very promising engineering applications in many fields, especially in electronic devices and energy-storage equipment. It is essential in high-energy-density systems, since electronic components need heat-dissipation functionality; in other words, in electronic devices the generated heat should ideally be dissipated by light and small heat sinks. Polymeric composites consist of fillers embedded in a polymer matrix; the fillers significantly affect the overall (macroscopic) performance of the material. Common carbon-based fillers include single-walled carbon nanotubes (SWCNTs), multi-walled carbon nanotubes (MWCNTs), carbon nanobuds (CNBs), fullerene, and graphene. Additives inside the matrix have become a popular subject for researchers. Some extraordinary characteristics, such as high load-bearing performance, lightweight design, excellent chemical resistance, easy processing, and heat transfer, make the design of polymeric nanotube composites (PNCs) flexible. Due to the reinforcing effects of different fillers on composite materials, there is a high degree of freedom, and the structure can be designed according to the needs of specific applications. As already stated, our research focus is on SWCNT-enhanced PNCs. Since experiments are time-consuming, sometimes expensive, and cannot shed light on phenomena taking place, for instance, at the interfaces/interphases of composites, they are often complemented by theoretical and computational analysis. While most studies are based on deterministic approaches, there is a comparatively lower number of stochastic methods accounting for uncertainties in the input parameters. In deterministic models, the output of the model is fully determined by the parameter values and the initial conditions. 
However, uncertainties in input parameters such as the aspect ratio, the volume fraction, and the thermal properties of fiber and matrix need to be taken into account for reliable predictions. In this research, a stochastic multiscale method is provided to study the influence of numerous uncertain input parameters on the thermal conductivity of the composite. To this end, a hierarchical multiscale method based on computational homogenization is presented to predict the macroscopic thermal conductivity from the fine-scale structure. In order to study the inner mechanism, we use the finite element method and employ surrogate models to conduct a Global Sensitivity Analysis (GSA). The GSA is performed in order to quantify the influence of the conductivity of the fiber, the conductivity of the matrix, the Kapitza resistance, the volume fraction, and the aspect ratio on the macroscopic conductivity. For this purpose, we compute first-order and total-effect sensitivity indices with different surrogate models. As stochastic multiscale models are computationally expensive, surrogate approaches are commonly exploited. With the emergence of high-performance computing and artificial intelligence, machine learning has become a popular modeling tool for numerous applications. Machine learning (ML) is commonly used in regression and maps data through specific rules with algorithms to build input-output models. ML methods are particularly useful for nonlinear input-output relationships when sufficient data are available. ML has also been used in the design of new materials and in multiscale analysis. For instance, artificial neural networks and integrated learning seem ideally suited for such a task; they can theoretically approximate any nonlinear relationship through the connection of neurons. Mapping relationships are employed to carry out data-driven simulations of inputs and outputs in stochastic modeling. This research aims to develop stochastic multiscale computational models of PNCs in heat transfer. 
Multiscale stochastic modeling with uncertainty analysis and machine learning methods consists of the following components: - Uncertainty analysis: A surrogate-based global sensitivity analysis is coupled with a hierarchical multiscale method employing computational homogenization. The effect of the conductivity of the fibers and the matrix, the Kapitza resistance, the volume fraction, and the aspect ratio on the ’macroscopic’ conductivity of the composite is systematically studied. All selected surrogate models consistently yield the conclusion that the most influential input parameter is the aspect ratio, followed by the volume fraction; the Kapitza resistance has no significant effect on the thermal conductivity of the PNCs. The most accurate surrogate model in terms of the R2 value is the moving least squares (MLS) model. - Hybrid machine learning algorithms: A combination of artificial neural network (ANN) and particle swarm optimization (PSO) is applied to estimate the relationship between the variable input and output parameters. The ANN is used for modeling the composite, while PSO improves the prediction performance through an optimized global-minimum search. The thermal conductivity of the fibers and the matrix, the Kapitza resistance, the volume fraction, and the aspect ratio are selected as input parameters; the output is the macroscopic (homogenized) thermal conductivity of the composite. The results show that PSO significantly improves the predictive ability of this hybrid intelligent algorithm, which outperforms traditional neural networks. - Stochastic integrated machine learning: A stochastic integrated machine learning based multiscale approach for the prediction of the macroscopic thermal conductivity of PNCs is developed. 
Seven types of machine learning models are exploited in this research, namely Multivariate Adaptive Regression Splines (MARS), Support Vector Machine (SVM), Regression Tree (RT), Bagging Tree (Bag), Random Forest (RF), Gradient Boosting Machine (GBM), and Cubist. They are used as components of the stochastic modeling to construct the relationship between the uncertain input variables and the macroscopic thermal conductivity of the PNCs. Particle Swarm Optimization (PSO) is used for hyper-parameter tuning to find the globally optimal values, leading to a significant reduction in computational cost. The advantages and disadvantages of the various methods are also analyzed in terms of computing time and model complexity, in order to finally give a recommendation on the applicability of the different models. T3 - ISM-Bericht // Institut für Strukturmechanik, Bauhaus-Universität Weimar - 2022,3 KW - Polymere KW - Nanoverbundstruktur KW - multiscale KW - nanocomposite KW - stochastic KW - Data-driven Y1 - 2022 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20220503-46379 ER - TY - THES A1 - Legatiuk, Anastasiia T1 - Discrete potential and function theories on a rectangular lattice and their applications N2 - The growing complexity of modern engineering problems necessitates the development of advanced numerical methods. In particular, methods that work directly with discrete structures, and thus represent exactly some important properties of the solution on a lattice rather than merely approximating the continuous properties, are becoming more and more popular nowadays. Among others, discrete potential theory and discrete function theory provide a variety of methods, which are discrete counterparts of the classical continuous methods for solving boundary value problems. Many results related to the discrete potential and function theories have been presented in recent years. 
However, these results concern discrete theories constructed on square lattices, thus limiting their practical applicability and potentially leading to higher computational costs when discretising realistic domains. This thesis presents an extension of the discrete potential theory and discrete function theory to rectangular lattices. As usual in the discrete theories, the construction of discrete operators is strongly influenced by the definition of the discrete geometric setting. To provide consistent constructions throughout the whole thesis, a detailed discussion of the discrete geometric setting is presented at the beginning. After that, the discrete fundamental solution of the discrete Laplace operator on a rectangular lattice, which is the core of the discrete potential theory, its numerical analysis, and practical calculations are presented. By using the discrete fundamental solution of the discrete Laplace operator on a rectangular lattice, the discrete potential theory is then constructed for interior and exterior settings. Several discrete interior and exterior boundary value problems are then solved. Moreover, discrete transmission problems are introduced, and several numerical examples of these problems are discussed. Finally, a discrete fundamental solution of the discrete Cauchy-Riemann operator on a rectangular lattice is constructed, and the basics of the discrete function theory on a rectangular lattice are provided. This work indicates that the discrete theories provide solution methods with very good numerical properties for tackling various boundary value problems, as well as transmission problems coupling interior and exterior problems. The results presented in this thesis provide a basis for the further development of discrete theories on irregular lattices. 
KW - Diskrete Funktionentheorie KW - Diskrete Potentialtheorie KW - Diskrete Fundamentallösung KW - Transmissionsaufgabe KW - Discrete potential theory KW - Discrete function theory KW - Transmission problem KW - Discrete fundamental solution Y1 - 2022 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20221220-48654 ER - TY - JOUR A1 - Kumari, Vandana A1 - Harirchian, Ehsan A1 - Lahmer, Tom A1 - Rasulzade, Shahla T1 - Evaluation of Machine Learning and Web-Based Process for Damage Score Estimation of Existing Buildings JF - Buildings N2 - The seismic vulnerability assessment of existing reinforced concrete (RC) buildings is a significant input to disaster mitigation plans and rescue services. Different countries have developed various Rapid Visual Screening (RVS) techniques and methodologies to deal with the devastating consequences of earthquakes for the structural characteristics of buildings and for human casualties. Artificial intelligence (AI) methods, such as machine learning (ML) algorithm-based methods, are increasingly used in various scientific and technical applications. Investigations into using these techniques in civil engineering applications have shown encouraging results and reduced human intervention, including uncertainties and biased judgment. In this study, several known non-parametric algorithms are investigated for RVS using a dataset comprising different earthquakes. Moreover, the methodology enables examining the buildings’ vulnerability based on factors related to the buildings’ importance and exposure. In addition, a web-based application built on Django is introduced. The interface is designed with the idea of easing the seismic vulnerability investigation in real time. 
The concept was validated using two case studies, and the achieved results showed the potential efficiency of the proposed approach. KW - Maschinelles Lernen KW - rapid assessment KW - Machine learning KW - Vulnerability assessment KW - OA-Publikationsfonds2022 Y1 - 2022 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20220509-46387 UR - https://www.mdpi.com/2075-5309/12/5/578 VL - 2022 IS - Volume 12, issue 5, article 578 SP - 1 EP - 23 PB - MDPI CY - Basel ER - TY - JOUR A1 - Kraaz, Luise A1 - Koop, Maria A1 - Wunsch, Maximilian A1 - Plank-Wiedenbeck, Uwe T1 - The Scaling Potential of Experimental Knowledge in the Case of the Bauhaus.MobilityLab, Erfurt (Germany) JF - Urban Planning N2 - Real-world labs hold the potential to catalyse rapid urban transformations through real-world experimentation. Characterised by a rather radical, responsive, and location-specific nature, real-world labs face constraints in the scaling of experimental knowledge. To make a significant contribution to urban transformation, the produced knowledge must go beyond the level of the building, street, or small district where the real-world experiments are conducted. Thus, a conflict arises between experimental boundaries and the stimulation of broader implications. The challenges of scaling experimental knowledge have been recognised as a problem but remain largely unexplained. Against this background, the article discusses the applicability of the “typology of amplification processes” by Lam et al. (2020) to explore and evaluate the potential of scaling experimental knowledge from real-world labs. The application of the typology is exemplified in the case of the Bauhaus.MobilityLab. The Bauhaus.MobilityLab takes a unique approach by testing and developing cross-sectoral mobility, energy, and logistics solutions with a distinct focus on scaling knowledge and innovation. 
For this case study, different qualitative research techniques are combined according to “within-method triangulation” and synthesised in a strengths, weaknesses, opportunities, and threats (SWOT) analysis. The analysis of the Bauhaus.MobilityLab proves that the “typology of amplification processes” is useful as a systematic approach to identifying and evaluating the potential of scaling experimental knowledge. KW - Stadtplanung KW - Infrastrukturplanung KW - Transformation KW - Reallabor KW - Amplifikationsprozesse KW - Bauhaus.MobilityLab KW - experimentelles Wissen KW - Realexperimente KW - OA-Publikationsfonds2022 Y1 - 2022 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20230509-63633 UR - https://www.cogitatiopress.com/urbanplanning/article/view/5329 VL - 2022 IS - Volume 7, Issue 3 SP - 274 EP - 284 ER - TY - THES A1 - Kiesel, Johannes T1 - Harnessing Web Archives to Tackle Selected Societal Challenges N2 - With the growing importance of the World Wide Web, the major challenges our society faces are also increasingly affecting the digital areas of our lives. Some of the associated problems can be addressed by computer science, and some of these specifically by data-driven research. To do so, however, requires to solve open issues related to archive quality and the large volume and variety of the data contained. This dissertation contributes data, algorithms, and concepts towards leveraging the big data and temporal provenance capabilities of web archives to tackle societal challenges. We selected three such challenges that highlight the central issues of archive quality, data volume, and data variety, respectively: (1) For the preservation of digital culture, this thesis investigates and improves the automatic quality assurance of the web page archiving process, as well as the further processing of the resulting archive data for automatic analysis. 
(2) For the critical assessment of information, this thesis examines large datasets of Wikipedia and news articles and presents new methods for automatically determining quality and bias. (3) For digital security and privacy, this thesis exploits the variety of content on the web to quantify the security of mnemonic passwords and analyzes the privacy-aware re-finding of the variety of previously seen content through private web archives. N2 - With the growing importance of the World Wide Web, the major challenges our society faces increasingly affect the digital areas of our lives as well. Some of the associated problems can be addressed by computer science, and some of these specifically by data-driven research. To do so, however, open questions related to the quality of the archives and the large volume and variety of the contained data must be resolved. This dissertation contributes data, algorithms, and concepts towards leveraging the large data volume and temporal provenance of web archives to tackle societal challenges. We have selected three such challenges that highlight the central issues of archive quality, data volume, and data variety: (1) For the preservation of digital culture, this thesis investigates and improves the automatic quality assessment of web page archiving, as well as the further processing of the resulting archive data for automatic analyses. (2) For the critical assessment of information, this thesis examines large datasets of Wikipedia and news articles and presents new methods for determining quality and bias. 
(3) For digital security and privacy, this thesis uses the variety of content on the web to quantify the security of mnemonic passwords and analyzes the privacy-aware re-finding of the variety of previously seen content with the help of private web archives. KW - Informatik KW - Internet KW - Web archive Y1 - 2022 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20220622-46602 ER -