Wissen wer wo wohnt
(2012)
In cities, people live together in neighbourhoods. Here they find the infrastructure they need, ranging from shops for daily needs to life-cycle-based infrastructures such as kindergartens or nursing homes. But not all neighbourhoods are identical: the infrastructure mix varies from neighbourhood to neighbourhood, and different people have different needs, which can change, e.g. depending on their life-cycle situation or their affiliation with a specific milieu. We can assume that a person or family tries to settle in a neighbourhood that satisfies their needs; if the residents are happy with a neighbourhood, we can further assume that it satisfies their needs. The Socio-Economic Panel (SOEP) of the German Institute for Economic Research (DIW) is a survey that investigates the economic structure of the German population. Every four years, one part of this survey includes questions about what infrastructures can be found in the respondent's neighbourhood and about the respondent's satisfaction with it. Furthermore, it is possible to add a milieu estimate for each respondent or household. This gives us the possibility to analyse the typical neighbourhoods of German cities as well as the infrastructure profiles of the different milieus. To this end, we take the environment variables from the dataset and recode them into binary variables indicating whether an infrastructure is available or not. According to Faust (2005), such sets can also be understood as a network of actors in a neighbourhood who share two, three or more infrastructures. This neighbourhood network can likewise be visualized as a bipartite affiliation network and therefore analysed using correspondence analysis. We show how a neighbourhood analysis benefits from an upstream correspondence analysis and how this can be done, and we present and discuss the results of such an analysis.
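The recoding and correspondence-analysis step described above can be sketched as follows. This is a minimal illustration only: the tiny binary table of infrastructure indicators is invented and is not SOEP data, and classical correspondence analysis is implemented directly via the SVD of the standardized residual matrix.

```python
import numpy as np

# Hypothetical binary indicators (rows: neighbourhoods, columns: e.g.
# shop, kindergarten, nursing home); 1 = infrastructure is available.
X = np.array([
    [1, 1, 0],
    [1, 0, 1],
    [0, 1, 1],
    [1, 1, 1],
], dtype=float)

def correspondence_analysis(table):
    """Classical CA via SVD of the standardized residual matrix."""
    P = table / table.sum()                  # correspondence matrix
    r = P.sum(axis=1)                        # row masses
    c = P.sum(axis=0)                        # column masses
    S = (P - np.outer(r, c)) / np.sqrt(np.outer(r, c))  # std. residuals
    U, s, Vt = np.linalg.svd(S, full_matrices=False)
    row_coords = (U * s) / np.sqrt(r)[:, None]   # principal row coordinates
    col_coords = (Vt.T * s) / np.sqrt(c)[:, None]
    return row_coords, col_coords, s ** 2        # coordinates and inertias

rows, cols, inertia = correspondence_analysis(X)
```

Plotting the first two row and column coordinates in one map would then show which neighbourhoods cluster around which infrastructure profiles.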
The 19th International Conference on the Applications of Computer Science and Mathematics in Architecture and Civil Engineering will be held at the Bauhaus-Universität Weimar from 4 to 6 July 2012. Architects, computer scientists, mathematicians, and engineers from all over the world will meet in Weimar for an interdisciplinary exchange of experiences, to report on their results in research, development and practice, and to discuss open questions. The conference covers a broad range of research areas: numerical analysis, function-theoretic methods, partial differential equations, continuum mechanics, engineering applications, coupled problems, computer science, and related topics. Several plenary lectures in the aforementioned areas will take place during the conference.
We invite architects, engineers, designers, computer scientists, mathematicians, planners, project managers, and software developers from business, science and research to participate in the conference!
Toward the end of the 19th century, the field of documentation entered a crisis: how could cultural knowledge be organized more sustainably?
Paul Otlet (1868–1944), a Belgian industrialist's heir and trained lawyer, developed together with Henri La Fontaine, from 1895 onward, an ordering and classification system intended to document the "world knowledge" published in millions of items. Otlet's ambition was to create an "instrument d'ubiquité" that would lead to "hyper-intelligence". Decades before the Web and wikis, these ideas point toward a global networking of knowledge.
The present volume commemorates the pioneer Paul Otlet with an extensive introduction by Frank Hartmann (Bauhaus-Universität Weimar) and contributions by W. Boyd Rayward (University of Illinois), Charles van den Heuvel (Royal Netherlands Academy of Arts and Sciences) and Wouter Van Acker (Ghent University).
Modern digital material approaches for the visualization and simulation of heterogeneous materials make it possible to investigate the behavior of complex multiphase materials, including their physically nonlinear material response, at various scales. However, these computational techniques require extensive hardware resources with respect to computing power and main memory to solve large-scale discretized models in 3D numerically. Due to a very high number of degrees of freedom, which can quickly grow into the two-digit million range, the limited hardware resources must be used as efficiently as possible to execute the numerical algorithms in minimal computation time. In the field of computational mechanics, various methods and algorithms can lead to an optimized runtime behavior of nonlinear simulation models; several such approaches are proposed and investigated in this thesis.
Today, the numerical simulation of damage effects in heterogeneous materials is performed by adapting multiscale methods. A consistent modeling in three-dimensional space with an appropriate discretization resolution on each scale (based on a hierarchical or concurrent multiscale model), however, still poses computational challenges with respect to the convergence behavior, the scale transition, and the solver performance of the weakly coupled problems. The computational efficiency and the distribution among available hardware resources (often based on a parallel hardware architecture) can be improved significantly. In recent years, high-performance computing (HPC) and graphics processing unit (GPU) based computation techniques have been established for the investigation of scientific objectives. Their application leads to the modification of existing and the development of new computational methods for the numerical implementation, which makes it possible to take advantage of massively clustered computer hardware resources. In the field of numerical simulation in material science, e.g. within the investigation of damage effects in multiphase composites, the suitability of such models is often restricted by the number of degrees of freedom (d.o.f.s) in the three-dimensional spatial discretization. This proves difficult for the type of implementation method used for the nonlinear simulation procedure and, at the same time, has a great influence on memory demand and computational time.
In this thesis, a hybrid discretization technique has been developed for the three-dimensional discretization of a three-phase material, which respects the numerical efficiency of nonlinear (damage) simulations of these materials. The gain in computational efficiency results from the improved scalability of the numerical algorithms. Consequently, substructuring methods for partitioning the hybrid mesh were implemented, tested and adapted to the HPC computing framework, using several hundred CPU (central processing unit) nodes for building the finite element assembly. A memory-efficient, iterative and parallelized equation solver, combined with a special preconditioning technique for solving the underlying equation system, was modified and adapted to enable combined CPU and GPU based computations.
The author therefore recommends applying the substructuring method for hybrid meshes, which respects the different material phases and their mechanical behavior and which makes it possible to split the structure into elastic and inelastic parts. The consideration of the nonlinear material behavior, specified for the corresponding phase, is then limited to the inelastic domains only, thereby reducing the computing time of the nonlinear procedure. Due to the high numerical effort of such simulations, an alternative approach for the nonlinear finite element analysis, based on sequentially linear analysis, was implemented with respect to scalable HPC. The incremental-iterative procedure in finite element analysis (FEA) during the nonlinear step was then replaced by a sequence of linear FE analyses whenever damage occurred in critical regions, known in the literature as the saw-tooth approach. As a result, qualitative (smeared) crack initiation in 3D multiphase specimens has been simulated efficiently.
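The saw-tooth idea can be illustrated with a deliberately reduced sketch: instead of a 3D multiphase FE model, a serial chain of springs is loaded, and each "tooth" replaces the nonlinear iteration by one linear solve followed by a stepwise reduction of the critical element's stiffness and strength. All stiffness, strength and reduction values below are invented for illustration and are not taken from the thesis.

```python
import numpy as np

# Minimal 1D saw-tooth sketch: springs in series under a growing load.
k = np.array([10.0, 8.0, 12.0])   # spring stiffnesses (illustrative)
f_t = np.array([5.0, 4.0, 6.0])   # current strengths (illustrative)
reduction = 0.5                    # stiffness/strength drop per tooth

history = []                       # (load, displacement) per tooth
for _ in range(6):
    k_eff = 1.0 / np.sum(1.0 / k)  # effective stiffness of the serial chain
    # Springs in series all carry the same force, so the minimum strength
    # governs: apply exactly the load that brings that spring to its limit.
    crit = np.argmin(f_t)
    F = f_t[crit]
    u = F / k_eff                  # one LINEAR solve replaces the iteration
    history.append((F, u))
    k[crit] *= reduction           # "damage": drop stiffness ...
    f_t[crit] *= reduction         # ... and strength of the critical spring
```

Plotting `history` gives the characteristic saw-tooth load-displacement curve: the sustainable load drops after each damage event while the compliance grows.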
Metakaolin made from kaolin is used around the world, but rarely in Vietnam, where abundant deposits of kaolin are found. The first studies on producing metakaolin were conducted with high-quality Vietnamese kaolins. The results showed the potential to produce metakaolin and the effect it has on the strength development of mortars and concretes. However, the utilisation of low-quality kaolin for producing Vietnamese metakaolin has not been studied so far.
The objectives of this study were to produce a good-quality metakaolin from low-quality Vietnamese kaolin and to facilitate the use of Vietnamese metakaolin in composite cements.
To reach these goals, the optimal thermal conversion of Vietnamese kaolin into metakaolin was determined through numerous investigations, using the analysis results of DSC/TGA, XRD and CSI. During calcination in the range of 500–800 °C for 1–5 hours, the calcined kaolin was also characterised with regard to mass loss, BET surface area, PSD, density, and the presence of residual water. A good correlation between residual water and BET surface area was found.
The pozzolanic activity of the metakaolin was tested by various methods, namely the saturated-lime method, the mCh method and the TGA-CaO method. The results of the study show which method is the most suitable for characterising the real activity of metakaolin and agrees best with concrete performance. Furthermore, the pozzolanic activity results obtained with these methods were analysed and compared with each other with respect to the BET surface area.
The properties of Vietnamese metakaolin were established through investigations of water demand, setting time, spread flowability, and strength. It is concluded that, depending on the intended use of the composite cement and the curing conditions, each Vietnamese metakaolin can be used appropriately to produce (1) a composite cement with a low water demand, (2) a high-strength composite cement, (3) a composite cement that aims to reduce CO2 emissions and improve the economics of cement products, or (4) a high-performance mortar.
The durability of metakaolin mortar was tested, successfully determining the metakaolin content needed to resist ASR, sulfate, and sulfuric acid attack.
Increasingly powerful hardware and software allows for the numerical simulation of complex physical phenomena with high levels of detail. In light of this development, the definition of numerical models for the Finite Element Method (FEM) has become the bottleneck of the simulation process. Characteristic features of model generation are the large manual effort and the de-coupling of the geometric and the numerical model. In the highly probable case of design revisions, all steps of model preprocessing and mesh generation have to be repeated. This includes the idealization and approximation of the geometric model as well as the definition of boundary conditions and model parameters. Design variants leading to more resource-efficient structures might hence be disregarded due to limited budgets and constrained time frames.
A potential solution to the above problem is given by the concept of Isogeometric Analysis (IGA). The core idea of this method is to directly employ a geometric model for numerical simulations, which makes it possible to circumvent model transformations and the accompanying data losses. The basis for this method are geometric models described in terms of Non-Uniform Rational B-Splines (NURBS). This class of piecewise continuous rational polynomial functions is ubiquitous in computer graphics and Computer-Aided Design (CAD). It allows the description of a wide range of geometries using a compact mathematical representation. The shape of an object thereby results from the interpolation of a set of control points by means of the NURBS functions, allowing efficient representations for curves, surfaces and solid bodies alike. Existing software applications, however, only support the modeling and manipulation of the former two. The description of three-dimensional solid bodies consequently requires significant manual effort, essentially forbidding the setup of complex models.
This thesis proposes a procedural approach for the generation of volumetric NURBS models. That is, a model is not described in terms of its data structures but as a sequence of modeling operations applied to a simple initial shape. In a sense, this describes the "evolution" of the geometric model under the sequence of operations. To adapt this concept to NURBS geometries, only a compact set of commands is necessary, which, in turn, can be adapted from existing algorithms. A model can then be treated in terms of interpretable model parameters. This leads to an abstraction from its data structures, and model variants can be set up by varying the governing parameters.
The proposed concept complements existing template modeling approaches: templates can not only be defined in terms of modeling commands but can also serve as input geometry for said operations. Such templates, arranged in a nested hierarchy, provide an elegant model representation. They offer adaptivity on each tier of the model hierarchy and allow to create complex models from only few model parameters. This is demonstrated for volumetric fluid domains used in the simulation of vertical-axis wind turbines. Starting from a template representation of airfoil cross-sections, the complete "negative space" around the rotor blades can be described by a small set of model parameters, and model variants can be set up in a fraction of a second.
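The procedural, parameter-driven idea above can be illustrated with a deliberately simplified sketch: a model is stored as an initial shape plus an ordered list of parametric operations that are replayed on evaluation, so a variant is just a new parameter set over the same history. The `ProceduralModel` class, the operations and the 2D profile are hypothetical stand-ins for the NURBS commands and templates described in the thesis.

```python
from dataclasses import dataclass, field
from typing import Callable, List, Tuple

Point = Tuple[float, float]  # control points of a simple 2D profile

@dataclass
class ProceduralModel:
    """A shape described by its modeling history, not its final data."""
    shape: List[Point]
    ops: List[Callable[[List[Point]], List[Point]]] = field(default_factory=list)

    def apply(self, op):
        self.ops.append(op)          # record one modeling operation
        return self                  # allow chaining

    def evaluate(self) -> List[Point]:
        pts = self.shape
        for op in self.ops:          # replay the modeling history in order
            pts = op(pts)
        return pts

def scale(s):                        # parametric operation factories
    return lambda pts: [(s * x, s * y) for x, y in pts]

def translate(dx, dy):
    return lambda pts: [(x + dx, y + dy) for x, y in pts]

# A variant is obtained by changing the parameters of the recorded operations.
unit_square = [(0, 0), (1, 0), (1, 1), (0, 1)]
model = ProceduralModel(unit_square).apply(scale(2.0)).apply(translate(1.0, 0.0))
```

The same pattern nests naturally: an evaluated model can serve as the initial shape of another `ProceduralModel`, mirroring the template hierarchy described above.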
NURBS models offer a high geometric flexibility, allowing a given shape to be represented in different ways. Different model instances can exhibit varying suitability for numerical analyses. For their assessment, Finite Element mesh quality metrics are considered. These metrics are based on purely geometric criteria and make it possible to identify model degenerations commonly used to achieve certain geometric features. They can be used to decide upon model adaptions and provide a measure of their efficacy. Unfortunately, they do not reveal a relation between mesh distortion and the ill-conditioning of the equation systems resulting from the numerical model.
Thin-walled cylindrical composite shell structures are often applied in aerospace for lighter and cheaper launcher transport systems. These structures are sensitive to geometrical imperfections and prone to buckling under axial compression. Today, the design is based on NASA guidelines from the 1960s [1], which use a conservative lower-bound curve embodying many experimental results of that time. It is well known that the advantages and particular characteristics of composites, as well as the evolution of manufacturing standards, are not considered appropriately in this outdated approach. The DESICOS project was initiated to provide new design guidelines that account for the advantages of composites and allow further weight reduction of space structures by guaranteeing a more precise and robust design.
It is therefore necessary, among other things, to understand how cutouts of different dimensions affect the buckling load of a thin-walled cylindrical shell structure in combination with initial geometric imperfections. This work aims to identify a ratio between the characteristic dimension of the cutout (in this case the cutout diameter) and the characteristic dimension of the structure (in this case the cylinder radius) that can be used to tell whether the buckling behaviour is dominated by the initial imperfections or by the cutout.
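The sought dominance criterion reduces to comparing a dimensionless ratio against a limit value, which can be expressed as a tiny helper. Note that the threshold used here is a placeholder assumption for illustration, not a value identified in this work.

```python
def dominance(cutout_diameter: float, cylinder_radius: float,
              threshold: float = 0.5) -> str:
    """Classify which effect is expected to govern buckling.

    threshold is a hypothetical placeholder; the actual limit ratio is
    what the study described above sets out to determine.
    """
    ratio = cutout_diameter / cylinder_radius
    return "cutout-dominated" if ratio > threshold else "imperfection-dominated"
```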
The dissertation "Anachronismen: Historiografie und Kino" (Anachronisms: Historiography and Cinema) starts from an initially simple observation: almost whenever historians engage with historical films, one finds loud complaints about the films' numerous and avoidable anachronisms, which discredit them as serious historiographical contributions.
From there, the thesis pursues a threefold project: first, to gain, through a critical analysis of texts on the theory of history, some indications of the status of anachronisms in modern Western historiography; second, to examine what role anachronisms play for the historical film; and third, to explore from there the epistemic potential of anachronistic historical cinema.
One of the main theses guiding the view of both the films and the theoretical texts holds that anachronisms are precisely the points at which the media of any historiography become observable. The thesis observes and describes these media of cinematographic historiography with the aid of theoretical considerations from Actor-Network Theory (ANT).
The dissertation is organized into four chapters, each centered on the discussion of one ANT concept and the analysis of one historical film. The films examined include Shutter Island (Martin Scorsese, 2010), Chronik der Anna Magdalena Bach (Jean-Marie Straub/Danièle Huillet, 1968), Cleopatra (Joseph L. Mankiewicz, 1963) and Caravaggio (Derek Jarman, 1986). The thesis also comments on theoretical texts on historiography and anachronisms by Walter Benjamin, Leo Bersani, Georges Didi-Huberman, Siegfried Kracauer, Friedrich Meinecke, Friedrich Nietzsche, Jacques Rancière, Leopold Ranke, Paul Ricœur, Georg Simmel, Hayden White and others.
This thesis explores how architecture aids in the performance of open-ended narratives by engaging both actively and passively with memory, i.e. with remembering and forgetting. I argue that architecture, old and new, stems from specific cultural and social forms and is shaped by processes of remembering and forgetting. It is through interaction between inhabitant and object that architecture is given innate meanings within an urban environment, making its role in this interplay one of investigative interest.
To enable the study of this performance, I develop a framework based on various theoretical paradigms to investigate three broad questions: 1) How does one study the performance of memory and forgetting through architecture in dynamic urban landscapes? 2) Is there a way to identify markers and elements within the urban environment that enable such a study? 3) What is the role that urban form plays within this framework and does the transformation of urban form imply the transformation of memory and forgetting?
The developed framework is applied to a macro (an urban level study of Bangalore, India) and micro level study (a singular or object level study of Stari Most/ Old Bridge, Mostar, BiH), to analyse the performance of remembering and forgetting in various urban spheres through interaction with architecture and form. By means of observations, archival research, qualitative mapping, drawings and narrative interviews, the study demonstrates that certain sites and characteristics of architecture enable the performance of remembering and the questioning of forgetting by embodying features that support this act.
Combining theory and empirical studies, this thesis attempts to elucidate the processes through which remembering and forgetting are initiated and experienced through architectural forms. It argues for recognising the potential of architecture as something that embodies and supports the performance of memory and forgetting by acting as an auratic contact zone.
The quality of the lining boards has a marked effect on the fire resistance of metal-stud wall constructions. This work therefore investigated the influence of additives in gypsum boards with regard to a possible improvement of this property.
For this purpose, special test specimens adapted to the respective test conditions were produced using a wide range of additives. Their effects were assessed in particular by means of the following five criteria:
1) the onset of the temperature rise after dehydration of the specimen,
2) the maximum temperature on the unexposed face of the board,
3) the size and number of cracks,
4) the stability of the board after the thermal exposure,
5) the shortening of prismatic specimens.
Particularly important was the characterisation of the effects of a simulated fire exposure of 970 °C over 90 minutes on laboratory gypsum boards. The temperature change on the unexposed face of the board was recorded continuously over the entire test period. The cohesion of the boards after thermal exposure was, for the first time, evaluated quantitatively via the number and size of the cracks formed on the specimens. The cause of the cracking is the reduction of the specimen volume resulting from the expelled water of crystallisation. Since this parameter cannot be determined in the board test, the change in length of prisms after a 90-minute heat treatment at 1000 °C in a muffle furnace was determined in addition.
The addition of 80 g/m² of glass fibres and 7.75 % limestone powder proved particularly beneficial to the behaviour of gypsum boards under fire exposure. This improvement is attributable in particular to the higher stability and lower shrinkage of the gypsum board.
Based on the laboratory-scale results, formulation proposals for improving the fire-resistance behaviour of gypsum boards under practical conditions were developed. The required large-format boards were produced on the production line of Knauf Gips KG. Used as a wall construction with double-layer lining, these boards successfully passed a full-scale fire test. A lower deflection of the wall construction, a reduced volume loss of the boards and an increased board stability demonstrate the improved properties of this modified fire-protection board.
Further investigations showed that it is irrelevant whether the boards were produced from natural or FGD (REA) gypsum, or with a high or low weight per unit area. The clearly best result, with a fire resistance of 118 minutes, was achieved by a wall construction of fire-protection boards based on a stucco made from 100 % FGD gypsum with 83.9 g/m² of glass fibres, 1 % vermiculite and a weight per unit area of 10.77 kg/m², at a board thickness of 12.5 mm.
The target fire resistance of 120 minutes with double-layer lining and without insulation could be reached in future if the volume reduction can be compensated even better and the board stability increased further. One possibility is to replace the cardboard facings on both sides with a glass-fibre fleece sheathing. With the gypsum core reinforced with glass fibres, the W112 wall construction without insulation then achieves a fire resistance well above 120 minutes.