We study the Weinstein equation on the upper half-space R^3_+. The Weinstein equation is connected to axially symmetric potentials. We compute solutions of the Weinstein equation that depend on the hyperbolic distance and on x_2. These results yield explicit mean value properties. We also compute the fundamental solution. The main tools are the hyperbolic metric and its invariance properties.
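The equation itself was evidently lost in extraction; a form commonly studied in this setting (sign conventions for the parameter k vary across the literature, so this is an assumption rather than the paper's exact statement) is

```latex
\Delta u \;-\; \frac{k}{x_2}\,\frac{\partial u}{\partial x_2} \;=\; 0
\qquad \text{on } \mathbb{R}^3_+ = \{(x_0, x_1, x_2) \in \mathbb{R}^3 : x_2 > 0\},
```

where k is a real parameter; k = 0 recovers the Laplace equation, and nonzero k ties the equation to axially symmetric potential theory.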
Non-destructive techniques for damage detection have become a focus of engineering interest in the last few years. However, applying these techniques to large, complex structures such as civil engineering buildings still has limitations, since these structures are unique and the methodologies often need a large number of specimens for reliable results. Cost and time can therefore greatly influence the final results.
Model-Assisted Probability Of Detection (MAPOD) has taken its place among damage identification techniques, especially with advances in computing capacity and modelling tools. Nevertheless, the essential condition for a successful MAPOD is having a reliable model in advance, which opens the door to model assessment and model quality problems. In this work, an approach is proposed that uses Partial Models (PM) to compute the Probability Of damage Detection (POD). A simply supported beam, which can be structurally modified and tested under laboratory conditions, is taken as an example. The study includes both experimental and numerical investigations, the application of vibration-based damage detection approaches, and a comparison of the results obtained from tests and simulations. Finally, a methodology to assess the reliability and robustness of the models is proposed.
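To make the POD computation concrete, here is a minimal hit/miss sketch: a model-predicted damage indicator with measurement noise is compared against a detection threshold over many simulated inspections. The indicator model, threshold and noise level are hypothetical stand-ins, not values from the study.

```python
import numpy as np

rng = np.random.default_rng(0)

def pod_curve(damage_sizes, n_trials=2000, threshold=0.5, noise_std=0.2):
    """Hit/miss Probability Of Detection over simulated inspections."""
    pod = []
    for a in damage_sizes:
        signal = 1.2 * a                      # hypothetical model: indicator grows with damage
        hits = (signal + rng.normal(0, noise_std, n_trials)) > threshold
        pod.append(hits.mean())               # fraction of detections at this damage level
    return np.array(pod)

sizes = np.linspace(0.0, 1.0, 11)             # normalised damage severity
for a, p in zip(sizes, pod_curve(sizes)):
    print(f"damage {a:4.2f} -> POD {p:5.3f}")
```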
The present research analyses the prediction error obtained under different data availability scenarios to determine which measurements contribute to an improvement of model prognosis and which do not. A fully coupled 2D hydromechanical model of a water-retaining dam is taken as an example. Here, the mean effective stress in the porous skeleton is reduced due to an increase in pore water pressure under drawdown conditions. Relevant model parameters are ranked by scaled sensitivities, Particle Swarm Optimization is applied to determine the optimal parameter values, and model validation is performed to determine the magnitude of the forecast error. We compare the predictions of the optimized models with results from a forward run of the reference model to obtain actual prediction errors.
The analyses presented here were applied to 31 data sets of 100 observations of varying data types. Calibrating with multiple information types instead of only one yields better calibration results and an improvement in model prognosis. However, when using several types of information, the number of observations has to be increased to cover a representative part of the model domain; otherwise a compromise between data availability and domain coverage proves best. Which type of calibration information contributes to the best prognoses could not be determined in advance, since the error in model prognosis does not depend on the error in calibration but on the parameter error, which unfortunately cannot be determined in reality because the true parameter values are unknown. Excellent calibration fits with parameter values near the limits of physically reasonable values produced the highest prognosis errors, whereas models that included excess pore pressure values for calibration provided the best prognoses, independent of the calibration fit.
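As an illustration of the Particle Swarm Optimization step, here is a minimal, generic PSO sketch for parameter calibration. The objective function, bounds and all constants are hypothetical stand-ins for the dam model's misfit function, not the study's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def pso(objective, bounds, n_particles=30, n_iter=100, w=0.7, c1=1.5, c2=1.5):
    """Minimal particle swarm optimizer: minimize objective over box bounds."""
    lo, hi = np.asarray(bounds, dtype=float).T
    dim = len(lo)
    x = rng.uniform(lo, hi, (n_particles, dim))      # positions = candidate parameter sets
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), np.array([objective(p) for p in x])
    g = pbest[pbest_f.argmin()]                      # global best position
    for _ in range(n_iter):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w*v + c1*r1*(pbest - x) + c2*r2*(g - x)  # inertia + cognitive + social terms
        x = np.clip(x + v, lo, hi)
        f = np.array([objective(p) for p in x])
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], f[improved]
        g = pbest[pbest_f.argmin()]
    return g, pbest_f.min()

def misfit(theta):
    """Toy sum-of-squares misfit against synthetic observations (hypothetical)."""
    obs = np.array([1.0, 2.0])
    sim = np.array([theta[0]**2, theta[0] + theta[1]])   # stand-in forward model
    return float(np.sum((sim - obs)**2))

best, fbest = pso(misfit, bounds=[(0, 3), (0, 3)])
print(best, fbest)
```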
Electromagnetic wave propagation is present in the vast majority of everyday situations, whether in mobile communications, DTV, satellite tracking, broadcasting, etc. The study of increasingly complex propagation media for electromagnetic waves has therefore become necessary in order to optimize resources and increase the capabilities of devices, as required by the growing demand for such services.
Within electromagnetic wave propagation, different parameters are considered that characterize it under various circumstances; of particular importance are the reflectance and transmittance. There are several methods for the analysis of reflectance and transmittance, such as the boundary-condition approximation method, the plane wave expansion method (PWE), etc., but this work focuses on the WKB and SPPS methods.
The implementation of the WKB method is relatively simple, but it is found to be efficient only when working at high frequencies. The SPPS (Spectral Parameter Power Series) method, based on the theory of pseudoanalytic functions, solves this problem through a new representation of the solutions of Sturm-Liouville equations and has recently proven to be a powerful tool for solving different boundary value and eigenvalue problems. Moreover, it has a structure very well suited to numerical implementation, which in this case was carried out in Matlab for the evaluation of both conventional and turning-point profiles.
The comparison between the two methods yields valuable information about their performance, which is useful for determining the validity and propriety of their application to problems where these parameters are calculated in real-life applications.
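As a minimal illustration of the WKB idea (not the thesis' Matlab implementation), the following sketch estimates the tunnelling transmittance of a wave through a hypothetical permittivity profile with a sub-cutoff region, using the standard WKB attenuation integral; all parameters are invented.

```python
import numpy as np

c0 = 3.0e8                                   # speed of light [m/s]
omega = 2 * np.pi * 10e9                     # angular frequency, 10 GHz (assumed)
x = np.linspace(0.0, 0.05, 2001)             # 5 cm inhomogeneous layer
eps = 1.0 - 1.5 * np.exp(-((x - 0.025) / 0.01) ** 2)   # hypothetical permittivity profile

k2 = (omega / c0) ** 2 * eps                 # local squared wavenumber k^2(x)
kappa = np.sqrt(np.clip(-k2, 0.0, None))     # decay rate where the wave is evanescent
integral = np.sum(0.5 * (kappa[1:] + kappa[:-1]) * np.diff(x))   # trapezoidal rule
T = np.exp(-2.0 * integral)                  # WKB tunnelling transmittance
print(f"WKB transmittance ~ {T:.3e}, reflectance ~ {1 - T:.3e}")  # lossless layer
```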
In this paper, a wavelet energy damage indicator is used within response surface methodology to identify damage in a simulated filler-beam railway bridge. The approximate model is designed to include operational and environmental conditions in the assessment. The procedure is split into two stages, a training and a detection phase. During the training phase, a so-called response surface is built from training data using polynomial regression and radial basis function approximation. This response surface is then used to detect damage in the structure during the detection phase. The results show that the response surface model is able to detect moderate damage in one of the bridge supports while the temperatures and train velocities vary.
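A minimal sketch of the radial-basis-function variant of such a response surface, using SciPy's RBFInterpolator; the input variables, the stand-in damage indicator and all thresholds are hypothetical, not the paper's data.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(1)

# training data: operational/environmental inputs -> indicator of the healthy state
X_train = rng.uniform([-5.0, 40.0], [35.0, 120.0], size=(200, 2))  # temp [degC], speed [km/h]

def indicator(X):
    """Stand-in for the wavelet energy damage indicator (hypothetical)."""
    T, v = X[:, 0], X[:, 1]
    return 0.02 * T + 0.001 * v + rng.normal(0, 0.05, len(X))

y_train = indicator(X_train)

# training phase: fit the response surface of the healthy structure
surface = RBFInterpolator(X_train, y_train,
                          kernel="thin_plate_spline", smoothing=1e-3)

# detection phase: flag measurements that deviate from the surface prediction
X_new = np.array([[20.0, 80.0]])
measured = 0.75                                # hypothetical measured indicator
residual = measured - surface(X_new)[0]
print("possible damage" if abs(residual) > 3 * 0.05 else "healthy")  # 3-sigma rule
```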
Long-span cable-supported bridges are prone to wind-induced aerodynamic instabilities, and this phenomenon is usually a major design criterion. If the wind speed exceeds the critical flutter speed of the bridge, this constitutes an Ultimate Limit State. The prediction of the flutter boundary therefore requires accurate and robust models. This paper aims at studying various combinations of models to predict the flutter phenomenon.
Since flutter is a coupling of aerodynamic forcing with a structural dynamics problem, different types and classes of models can be combined to study the interaction. Here, both numerical approaches and analytical models are utilised and coupled in different ways to assess the prediction quality of the hybrid model. The aerodynamic force models employed are the analytical Theodorsen expressions for the motion-induced aerodynamic forces of a flat plate and Scanlan derivatives as a meta-model. Further, Computational Fluid Dynamics (CFD) simulations using the Vortex Particle Method (VPM) were used to cover numerical models.
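For reference, the Theodorsen expressions mentioned above are built around the circulation function C(k), whose standard form in terms of Hankel functions of the second kind is (with reduced frequency k = omega*b/U, half-chord b and wind speed U):

```latex
C(k) \;=\; F(k) + i\,G(k) \;=\; \frac{H_1^{(2)}(k)}{H_1^{(2)}(k) + i\,H_0^{(2)}(k)}
```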
The structural representations were dimensionally reduced to two-degree-of-freedom section models calibrated from global models, as well as a fully three-dimensional Finite Element (FE) model. A two-degree-of-freedom system was analysed both analytically and numerically.
Generally, all models were able to predict the flutter phenomenon and relatively close agreement was found for the particular bridge. In conclusion, the model choice for a given practical analysis scenario will be discussed in the context of the analysis findings.
This working paper describes how, starting from an existing street network, development areas can be automatically parcelled out, i.e. subdivided into lots, using subdivision algorithms, and subsequently built up on the basis of various urban-design types. The subdivision of development areas and the generation of building structures are subject to specific urban-planning constraints, requirements and parameters. The goal is to develop, from the investigations presented, a suggestion system for urban-design drafts, which is discussed further on the basis of a first software prototype for the generation of urban structures.
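As a toy illustration of such subdivision algorithms (not the prototype's actual method), the following sketch recursively splits an axis-aligned block into lots; a real parceller must additionally respect street access, setbacks and the urban-planning parameters mentioned above.

```python
import random

random.seed(42)

def subdivide(rect, min_area):
    """Recursively split a block (x, y, w, h) into lots.

    Splits perpendicular to the longer side, with a slightly irregular
    split position, until lots fall below a minimum area.
    """
    x, y, w, h = rect
    if w * h <= min_area:
        return [rect]
    t = random.uniform(0.4, 0.6)                  # split position along the longer side
    if w >= h:
        return (subdivide((x, y, w * t, h), min_area)
                + subdivide((x + w * t, y, w * (1 - t), h), min_area))
    return (subdivide((x, y, w, h * t), min_area)
            + subdivide((x, y + h * t, w, h * (1 - t)), min_area))

block = (0.0, 0.0, 120.0, 60.0)                   # block bounded by streets, in metres
lots = subdivide(block, min_area=500.0)
print(f"{len(lots)} lots generated")
```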
Aktionsräume in Dresden
(2012)
In the present study, the activity spaces of respondents in Dresden are examined by means of a standardised survey (n=360). The activities underlying the activity spaces are differentiated into shopping for daily needs, going out (e.g. to a café, pub or restaurant), outdoor recreation (e.g. going for a walk, use of green spaces) and private sociability (e.g. parties, visiting relatives/friends). The activity radius is differentiated into the home quarter, neighbouring quarters and the rest of the city. In order to form a comprehensive indicator for a respondent's average activity radius from the four activities considered, a model for an activity-radius score is developed. The study concludes that the age of the respondents has a significant, albeit small, influence on the activity radius. Net household income has a conditionally significant, likewise small, influence on the everyday activities of the respondents.
Volume rendering is a visualisation technique for presenting various spatial measurement and simulation data in a descriptive, interactive, graphical way. The following contribution presents a method for overlaying several volume data sets with an architectural surface model. This complex rendering computation takes place on the graphics card using hardware-accelerated shaders. The contribution presents the implemented software prototype "VolumeRendering". In addition to the interactive computation method, emphasis was placed on user-friendly operation; the aim was to enable specialist planners to evaluate the volume data easily. Overlaying, for example, different measurement methods with a surface model yields synergies and new evaluation possibilities. Finally, the application of the software prototype is illustrated with examples from an interdisciplinary research project.
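As a rough CPU analogue of the hardware-accelerated shader pass described above (the prototype itself runs on the GPU), the following sketch overlays two hypothetical scalar volumes by front-to-back alpha compositing along one grid axis; the transfer functions and data are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
vol_a = rng.random((64, 64, 64))   # e.g. one measurement data set on a voxel grid
vol_b = rng.random((64, 64, 64))   # e.g. a second, co-registered data set

def transfer(v, colour):
    """Crude transfer function: map scalar values to colour and opacity."""
    alpha = np.clip(v - 0.8, 0.0, None) * 2.0     # only high values become visible
    return np.asarray(colour) * v[..., None], alpha

def composite(volumes_and_colours):
    """Front-to-back alpha compositing of several overlaid volumes."""
    img = np.zeros((64, 64, 3))
    acc = np.zeros((64, 64))                      # accumulated opacity per pixel
    for z in range(64):                           # march through the slices
        for vol, colour in volumes_and_colours:
            col, alpha = transfer(vol[:, :, z], colour)
            w = (1.0 - acc) * alpha               # remaining transparency
            img += w[..., None] * col
            acc = np.clip(acc + w, 0.0, 1.0)
    return img

image = composite([(vol_a, (1.0, 0.2, 0.2)), (vol_b, (0.2, 0.4, 1.0))])
print(image.shape)
```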
In the present study, the residential location preferences of the Sinus milieu groups in Dresden are examined by means of a standardised survey (n=318). A distinction is made between action-guiding residential location preferences, which, based on indications at the level of action, should be given greater consideration, and residential location preferences of a more orientating character. The residential location preferences are examined using the categories furnishing/condition of the dwelling and its immediate surroundings, supply infrastructure, social environment, building structure type, attachment to place, and the image of a city quarter. In order to assign respondents to the Sinus milieu groups, a lifeworld-segment model is developed which aims to approximate the Sinus milieu groups. The study concludes that the members of the different lifeworld segments show significant differences, albeit partly at a low level, in the evaluation of individual residential location preferences in every category.
Wissen wer wo wohnt
(2012)
In cities people live together in neighbourhoods. Here they can find the infrastructure they need, from shops for daily needs to life-cycle-based infrastructures such as kindergartens or nursing homes. But not all neighbourhoods are identical. The infrastructure mixture varies from neighbourhood to neighbourhood, and different people have different needs, which can change, e.g. with their life-cycle situation or their affiliation to a specific milieu. We can assume that a person or family tries to settle in a specific neighbourhood that satisfies their needs. So, if the residents are happy with a neighbourhood, we can further assume that this neighbourhood satisfies their needs. The Socio-Economic Panel (SOEP) of the German Institute for Economic Research (DIW) is a survey that investigates the economic structure of the German population. Every four years, one part of this survey includes questions about which infrastructures can be found in the respondent's neighbourhood and about the respondent's satisfaction with their neighbourhood. Further, it is possible to add a milieu estimation for each respondent or household. This gives us the possibility to analyse the typical neighbourhoods in German cities as well as the infrastructure profiles of the different milieus. Therefore, we take the environment variables from the dataset and recode them into binary variables, i.e. whether an infrastructure is available or not. According to Faust (2005), these sets can also be understood as a network of actors in a neighbourhood which share two, three or more infrastructures. Like these networks, this neighbourhood network can also be visualized as a bipartite affiliation network and therefore analysed using correspondence analysis. We will show how a neighbourhood analysis benefits from an upstream correspondence analysis and how this could be done. We will also present and discuss the results of such an analysis.
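A minimal sketch of the upstream correspondence analysis on recoded binary infrastructure data; the data here are random stand-ins for the SOEP variables, and a real analysis would use the actual survey items and a dedicated CA package.

```python
import numpy as np

rng = np.random.default_rng(3)

# binary affiliation matrix: rows = respondents, columns = infrastructures
infra = ["shop", "kindergarten", "park", "nursing_home", "pub"]
N = (rng.random((200, len(infra))) < 0.5).astype(float)   # hypothetical recoded data
N = N[N.sum(axis=1) > 0]                  # drop respondents reporting no infrastructure

# simple correspondence analysis via SVD of the standardised residuals
P = N / N.sum()
r, c = P.sum(axis=1), P.sum(axis=0)                 # row and column masses
S = (P - np.outer(r, c)) / np.sqrt(np.outer(r, c))  # standardised residuals
U, sv, Vt = np.linalg.svd(S, full_matrices=False)

# principal coordinates of the infrastructure columns (first two dimensions)
col_coords = (Vt.T * sv) / np.sqrt(c)[:, None]
for name, xy in zip(infra, col_coords[:, :2]):
    print(f"{name:12s} {xy[0]: .3f} {xy[1]: .3f}")
```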
The 19th International Conference on the Applications of Computer Science and Mathematics in Architecture and Civil Engineering will be held at the Bauhaus-Universität Weimar from 4 to 6 July 2012. Architects, computer scientists, mathematicians and engineers from all over the world will meet in Weimar for an interdisciplinary exchange of experiences, to report on their results in research, development and practice, and to discuss. The conference covers a broad range of research areas: numerical analysis, function theoretic methods, partial differential equations, continuum mechanics, engineering applications, coupled problems, computer science and related topics. Several plenary lectures in the aforementioned areas will take place during the conference.
We invite architects, engineers, designers, computer scientists, mathematicians, planners, project managers, and software developers from business, science and research to participate in the conference!
Towards the end of the 19th century, documentation fell into a crisis: how could cultural knowledge be organised more sustainably?
Paul Otlet (1868-1944), a Belgian industrialist's heir and trained lawyer, together with Henri La Fontaine developed from 1895 onward a system of order and classification intended to document the millions of published items of "world knowledge". Otlet's ambition was the creation of an "instrument d'ubiquité" that would lead to "hyper-intelligence". Decades before the web and wikis, these ideas point towards a global networking of knowledge.
The present title commemorates the pioneer Paul Otlet, with an extensive introduction by Frank Hartmann (Bauhaus-Universität Weimar) and contributions by W. Boyd Rayward (University of Illinois), Charles van den Heuvel (Royal Netherlands Academy of Arts and Sciences) and Wouter Van Acker (Ghent University).
Modern digital material approaches for the visualization and simulation of heterogeneous materials allow the behavior of complex multiphase materials, with their physically nonlinear material response, to be investigated at various scales. However, these computational techniques require extensive hardware resources in terms of computing power and main memory to numerically solve large-scale discretized models in 3D. Since the number of degrees of freedom may rapidly grow into the two-digit million range, the limited hardware resources must be utilized as efficiently as possible so that the numerical algorithms execute in minimal computation time. In the field of computational mechanics, various methods and algorithms can lead to an optimized runtime behavior of nonlinear simulation models; several such approaches are proposed and investigated in this thesis.
Today, the numerical simulation of damage effects in heterogeneous materials is performed by adapting multiscale methods. A consistent model in three-dimensional space with an appropriate discretization resolution on each scale (based on a hierarchical or concurrent multiscale model), however, still poses computational challenges with respect to the convergence behavior, the scale transition and the solver performance of the weakly coupled problems. The computational efficiency and the distribution among the available hardware resources (often based on a parallel hardware architecture) can be improved significantly. In the past years, high-performance computing (HPC) and graphics processing unit (GPU) based computation techniques were established for the investigation of scientific problems. Their application leads to the modification of existing and the development of new computational methods, which makes it possible to exploit massively clustered computer hardware. In numerical simulation in materials science, e.g. in the investigation of damage effects in multiphase composites, the suitability of such models is often restricted by the number of degrees of freedom (d.o.f.s) of the three-dimensional spatial discretization. This complicates the choice of implementation method for the nonlinear simulation procedure and, at the same time, has a great influence on memory demand and computation time.
In this thesis, a hybrid discretization technique has been developed for the three-dimensional discretization of a three-phase material which respects the numerical efficiency of nonlinear (damage) simulations of these materials. The increase in computational efficiency is enabled by the improved scalability of the numerical algorithms. Consequently, substructuring methods for partitioning the hybrid mesh were implemented, tested and adapted to the HPC computing framework, using several hundred CPU (central processing unit) nodes for building the finite element assembly. A memory-efficient, iterative, parallelized equation solver, combined with a special preconditioning technique for the underlying equation system, was modified and adapted to enable combined CPU- and GPU-based computations.
The author therefore recommends applying the substructuring method to hybrid meshes, which respects the different material phases and their mechanical behavior and makes it possible to split the structure into elastic and inelastic parts. The consideration of the nonlinear material behavior, specified for the corresponding phase, is then limited to the inelastic domains only, which reduces the computing time of the nonlinear procedure. Due to the high numerical effort of such simulations, an alternative approach for the nonlinear finite element analysis, based on sequential linear analysis, was implemented with respect to scalable HPC: the incremental-iterative procedure of a nonlinear finite element analysis (FEA) step is replaced by a sequence of linear FE analyses whenever damage occurs in critical regions, known in the literature as the saw-tooth approach. As a result, qualitative (smeared) crack initiation in 3D multiphase specimens has been simulated efficiently.
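To illustrate the saw-tooth idea on a deliberately tiny example (a parallel bar bundle instead of a 3D multiphase mesh; all values are invented), each "nonlinear" step is replaced by a linear analysis followed by a stiffness and strength reduction of the critical element:

```python
import numpy as np

rng = np.random.default_rng(4)

n = 50
E = np.full(n, 30e9)                      # Young's moduli [Pa]
A, L = 1e-4, 0.1                          # bar cross-section [m^2] and length [m]
f_t = rng.uniform(2e6, 3e6, n)            # scattered tensile strengths [Pa]

curve = []                                # (load factor, displacement) saw-tooth points
for step in range(600):
    if (E <= 1.0).all():                  # all bars softened away
        break
    k = E * A / L                         # bar stiffnesses
    u_unit = 1.0 / k.sum()                # displacement under unit total force
    sigma_unit = E * u_unit / L           # bar stresses under unit total force
    ratio = f_t / sigma_unit
    lam, crit = ratio.min(), ratio.argmin()   # load factor at first bar failure
    curve.append((lam, lam * u_unit))     # one linear analysis = one saw-tooth point
    E[crit] *= 0.5                        # reduce stiffness of the critical bar ...
    f_t[crit] *= 0.5                      # ... and its strength (saw-tooth softening)
    if E[crit] < 1e-3 * 30e9:
        E[crit] = 1.0                     # bar effectively removed (smeared crack)

print(f"{len(curve)} linear analyses performed")
```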
Metakaolin made from kaolin is used around the world but rarely in Vietnam, where abundant deposits of kaolin are found. The first studies on producing metakaolin were conducted with high-quality Vietnamese kaolins. The results showed the potential to produce metakaolin and its effect on the strength development of mortars and concretes. However, the utilisation of low-quality kaolin for producing Vietnamese metakaolin has not been studied so far.
The objectives of this study were to produce a good-quality metakaolin from low-quality Vietnamese kaolin and to facilitate the utilisation of Vietnamese metakaolin in composite cements.
In order to reach these goals, the optimal thermal conversion of Vietnamese kaolin into metakaolin was determined through numerous investigations, using the analysis results of DSC/TGA, XRD and CSI. During calcination in the range of 500-800 °C for 1-5 hours, the calcined kaolin was also characterised in terms of mass loss, BET surface area, PSD, density and residual water content. A good correlation between residual water and BET surface area was found.
The pozzolanic activity of the metakaolin was tested by various methods, namely the saturated lime method, mCh and the TGA-CaO method. The results show which method is most suitable to characterise the real activity of metakaolin and reaches the best agreement with concrete performance. Furthermore, the pozzolanic activity results obtained with these methods were analysed and compared with each other with respect to the BET surface area.
The properties of Vietnamese metakaolin were established through investigations of water demand, setting time, spread flowability and strength. It is concluded that, depending on the intended use of the composite cement and the curing conditions, each Vietnamese metakaolin can be used appropriately to produce (1) a composite cement with a low water demand, (2) a high-strength composite cement, (3) a composite cement that reduces CO2 emissions and improves the economics of cement products, or (4) a high-performance mortar.
The durability of metakaolin mortar was tested successfully to determine the metakaolin content needed to resist ASR, sulfate and sulfuric acid attack.
Increasingly powerful hardware and software allow for the numerical simulation of complex physical phenomena with high levels of detail. In light of this development, the definition of numerical models for the Finite Element Method (FEM) has become the bottleneck of the simulation process. Model generation is characterised by large manual effort and a decoupling of the geometric and the numerical model. In the highly probable case of design revisions, all steps of model preprocessing and mesh generation have to be repeated, including the idealization and approximation of the geometric model as well as the definition of boundary conditions and model parameters. Design variants leading to more resource-efficient structures might hence be disregarded due to limited budgets and constrained time frames.
A potential solution to the above problem is given by the concept of Isogeometric Analysis (IGA). The core idea of this method is to directly employ a geometric model for numerical simulations, which makes it possible to circumvent model transformations and the accompanying data losses. The basis for this method are geometric models described in terms of Non-Uniform Rational B-Splines (NURBS). This class of piecewise continuous rational polynomial functions is ubiquitous in computer graphics and Computer-Aided Design (CAD). It allows a wide range of geometries to be described using a compact mathematical representation. The shape of an object results from the interpolation of a set of control points by means of the NURBS functions, yielding efficient representations for curves, surfaces and solid bodies alike. Existing software applications, however, only support the modeling and manipulation of the former two. The description of three-dimensional solid bodies consequently requires significant manual effort, essentially forbidding the setup of complex models.
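As a small illustration of the NURBS machinery mentioned above, here is a sketch of curve evaluation via the Cox-de Boor recursion; the knot vector, weights and control points are arbitrary examples, and production code would use an optimised, non-recursive evaluation.

```python
import numpy as np

def bspline_basis(i, p, t, knots):
    """Cox-de Boor recursion for the i-th B-spline basis function of degree p."""
    if p == 0:
        # half-open convention; the curve's end parameter needs special handling
        return 1.0 if knots[i] <= t < knots[i + 1] else 0.0
    val = 0.0
    d = knots[i + p] - knots[i]
    if d > 0.0:
        val += (t - knots[i]) / d * bspline_basis(i, p - 1, t, knots)
    d = knots[i + p + 1] - knots[i + 1]
    if d > 0.0:
        val += (knots[i + p + 1] - t) / d * bspline_basis(i + 1, p - 1, t, knots)
    return val

def nurbs_point(t, ctrl, w, p, knots):
    """Evaluate a NURBS curve point as a weighted rational combination."""
    num, den = np.zeros(ctrl.shape[1]), 0.0
    for i in range(len(ctrl)):
        b = bspline_basis(i, p, t, knots) * w[i]
        num, den = num + b * ctrl[i], den + b
    return num / den

ctrl = np.array([[0., 0.], [1., 2.], [3., 2.], [4., 0.], [5., 1.]])
w = np.array([1.0, 0.8, 1.0, 1.2, 1.0])          # weights: the rational part of NURBS
knots = [0, 0, 0, 1, 2, 3, 3, 3]                 # clamped knot vector, degree 2
print(nurbs_point(1.5, ctrl, w, 2, knots))
```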
This thesis proposes a procedural approach for the generation of volumetric NURBS models. That is, a model is described not in terms of its data structures but as a sequence of modeling operations applied to a simple initial shape; in a sense, this describes the "evolution" of the geometric model under the sequence of operations. In order to adapt this concept to NURBS geometries, only a compact set of commands is necessary, which, in turn, can be adapted from existing algorithms. A model can then be treated in terms of interpretable model parameters. This leads to an abstraction from its data structures, and model variants can be set up by varying the governing parameters.
The proposed concept complements existing template modeling approaches: templates can not only be defined in terms of modeling commands but can also serve as input geometry for said operations. Such templates, arranged in a nested hierarchy, provide an elegant model representation. They offer adaptivity on each tier of the model hierarchy and allow complex models to be created from only a few model parameters. This is demonstrated for volumetric fluid domains used in the simulation of vertical-axis wind turbines. Starting from a template representation of airfoil cross-sections, the complete "negative space" around the rotor blades can be described by a small set of model parameters, and model variants can be set up in a fraction of a second.
NURBS models offer high geometric flexibility, allowing a given shape to be represented in different ways, and different model instances can exhibit varying suitability for numerical analyses. For their assessment, Finite Element mesh quality metrics are considered. These metrics are based on purely geometric criteria and make it possible to identify model degenerations commonly used to achieve certain geometric features. They can be used to decide upon model adaptions and provide a measure of their efficacy. They do not, however, reveal a relation between mesh distortion and the ill-conditioning of the equation systems resulting from the numerical model.
Thin-walled cylindrical composite shell structures are often applied in aerospace for lighter and cheaper launcher transport systems. These structures are sensitive to geometric imperfections and prone to buckling under axial compression. Today, the design is based on NASA guidelines from the 1960s [1], using a conservative lower-bound curve embodying many experimental results of that time. It is well known that the advantages and particular characteristics of composites, as well as the evolution of manufacturing standards, are not considered appropriately in this outdated approach. The DESICOS project was initiated to provide new design guidelines that exploit the advantages of composites and allow further weight reduction of space structures by guaranteeing a more precise and robust design.
It is therefore necessary, among other things, to understand how a cutout of varying dimensions affects the buckling load of a thin-walled cylindrical shell structure in combination with initial geometric imperfections. This work aims to identify a ratio between the characteristic dimension of the cutout (here, the cutout diameter) and the characteristic dimension of the structure (here, the cylinder radius) that can be used to tell whether the buckling behaviour is dominated by the initial imperfections or by the cutout.
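Expressed as a formula (the symbols here are illustrative, not necessarily the thesis' notation), the sought characteristic is the dimensionless ratio

```latex
\rho \;=\; \frac{d}{R},
```

with cutout diameter d and cylinder radius R; for small rho the buckling load is expected to be governed by the initial geometric imperfections, for large rho by the cutout, with a transition value to be identified numerically and experimentally.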
The thesis "Anachronisms: Historiography and Cinema" starts from an initially simple observation: almost whenever historians engage with historical films, one finds loud complaints about the films' numerous and avoidable anachronisms, which disavow them as historiographical contributions to be taken seriously.
From here, the thesis pursues a threefold project: first, to gain, through a critical analysis of texts on the theory of history, some indications of the status of anachronisms in modern Western historiography; second, to examine the role anachronisms play in the historical film; and third, to explore from there the epistemic potential of anachronistic historical cinema.
One of the main theses guiding the view of both the films and the theoretical texts is that anachronisms are precisely those points at which the media of any historiography become observable. The thesis undertakes the observation and description of these media of cinematographic historiography with the aid of theoretical considerations from Actor-Network Theory (ANT).
The thesis is organised in four chapters, each centred on the discussion of an ANT concept and the analysis of a historical film. The films examined include Shutter Island (Martin Scorsese, 2010), Chronik der Anna Magdalena Bach (Jean-Marie Straub/Danièle Huillet, 1968), Cleopatra (Joseph L. Mankiewicz, 1963) and Caravaggio (Derek Jarman, 1986). The thesis also comments on theoretical texts on historiography and anachronism by Walter Benjamin, Leo Bersani, Georges Didi-Huberman, Siegfried Kracauer, Friedrich Meinecke, Friedrich Nietzsche, Jacques Rancière, Leopold Ranke, Paul Ricœur, Georg Simmel, Hayden White and others.
This thesis explores how architecture aids in the performance of open-ended narratives by engaging both actively and passively with memory, i.e. remembering and forgetting. I argue that architecture, old and new, stems from specific cultural and social forms and is dictated by processes of remembering and forgetting. It is through interaction (between inhabitant and object) that architecture is given innate meanings within an urban environment, which makes its role in this interplay one of investigative interest.
To enable the study of this performance, I develop a framework based on various theoretical paradigms to investigate three broad questions: 1) How does one study the performance of memory and forgetting through architecture in dynamic urban landscapes? 2) Is there a way to identify markers and elements within the urban environment that enable such a study? 3) What is the role that urban form plays within this framework and does the transformation of urban form imply the transformation of memory and forgetting?
The developed framework is applied to a macro (an urban level study of Bangalore, India) and micro level study (a singular or object level study of Stari Most/ Old Bridge, Mostar, BiH), to analyse the performance of remembering and forgetting in various urban spheres through interaction with architecture and form. By means of observations, archival research, qualitative mapping, drawings and narrative interviews, the study demonstrates that certain sites and characteristics of architecture enable the performance of remembering and the questioning of forgetting by embodying features that support this act.
Combining theory and empirical studies, this thesis attempts to elucidate the processes through which remembering and forgetting are initiated and experienced through architectural forms. The thesis argues for recognising the potential of architecture as something that embodies and supports the performance of memory and forgetting by acting as an auratic contact zone.
The quality of the planking elements has a marked effect on the fire resistance of metal-stud wall constructions. This work therefore investigated the influence of additives in gypsum boards with regard to a possible improvement of this property.
For this purpose, special test specimens adapted to the respective test conditions were produced using a wide variety of additives. Their effects were assessed in particular by means of the following five criteria:
1) the time of the temperature rise after specimen dehydration,
2) the maximum value of the rear-face board temperature,
3) the size and number of cracks,
4) the board stability after heat exposure,
5) the shortening of prismatic specimens.
Of particular importance was the characterisation of the effects of a simulated fire exposure of 970 °C over 90 minutes on laboratory gypsum boards. The temperature change on the rear face of the board was recorded continuously over the entire test period. The cohesion of the boards after thermal exposure was for the first time evaluated quantitatively via the number and size of the cracks that formed on the specimens. The cause of the cracking is the reduction of the specimen volume as a result of the expelled water of crystallisation. Since this parameter cannot be determined in the board test, the length-change behaviour of prisms after 90 minutes of heating at 1000 °C in a muffle furnace was additionally determined.
The addition of 80 g/m² of glass fibres and 7.75 % limestone powder proved particularly beneficial for the behaviour of gypsum boards under fire exposure. This improvement is due in particular to the higher stability and lower shrinkage of the gypsum board.
Based on the results obtained at laboratory scale, formulation proposals for improving the fire-resistance behaviour of gypsum boards under practical conditions were developed. The required large-format boards were produced on the production line of Knauf Gips KG. These boards were successfully subjected to a full-scale test as a wall construction with double-layer planking. A smaller deflection of the wall construction, a reduced volume loss of the boards and an increased board stability demonstrate the improved properties of this modified fire-protection board.
Further investigations showed that it is irrelevant whether the boards were produced on the basis of natural or FGD (REA) gypsum, or with a high or low weight per unit area. The clearly best result, with a fire-resistance duration of 118 minutes, was achieved by a wall construction made of fire-protection boards based on a stucco plaster of 100 % FGD gypsum with 83.9 g/m² of glass fibres and 1 % vermiculite, a weight per unit area of 10.77 kg/m² and a board thickness of 12.5 mm.
The target fire-resistance duration of 120 minutes with double-layer planking and without insulation material could be reached in the future if the volume reduction can be compensated even better and the board stability increased further. One possibility is to substitute the cardboard liners on both faces with a glass-fibre fleece wrapping; a W112 wall construction without insulation material then achieves a fire-resistance duration of well over 120 minutes, the gypsum core being reinforced with glass fibres.