Modern digital material approaches for the visualization and simulation of heterogeneous materials make it possible to investigate the behavior of complex multiphase materials, with their physically nonlinear material response, at various scales. However, these computational techniques require extensive hardware resources in terms of computing power and main memory to solve large-scale discretized models in 3D numerically. Because the number of degrees of freedom can rapidly grow into the two-digit million range, the limited hardware resources must be utilized as efficiently as possible to execute the numerical algorithms in minimal computation time. In the field of computational mechanics, various methods and algorithms can lead to an optimized runtime behavior of nonlinear simulation models; several such approaches are proposed and investigated in this thesis.
Today, the numerical simulation of damage effects in heterogeneous materials is performed by adopting multiscale methods. Consistent modeling in three-dimensional space with an appropriate discretization resolution on each scale (based on a hierarchical or concurrent multiscale model), however, still poses computational challenges with respect to the convergence behavior, the scale transition, and the solver performance of the weakly coupled problems. The computational efficiency and the distribution of work among the available hardware resources (often based on a parallel hardware architecture) can be improved significantly. In recent years, high-performance computing (HPC) and graphics processing unit (GPU) based computation techniques have been established for the investigation of scientific problems. Their application leads to the modification of existing computational methods and the development of new ones, making it possible to exploit massively clustered computer hardware resources. In the field of numerical simulation in materials science, e.g. in the investigation of damage effects in multiphase composites, the suitability of such models is often restricted by the number of degrees of freedom (d.o.f.s) in the three-dimensional spatial discretization. This is demanding for the implementation method used in the nonlinear simulation procedure and, at the same time, has a great influence on memory demand and computation time.
In this thesis, a hybrid discretization technique has been developed for the three-dimensional discretization of a three-phase material, designed for the numerical efficiency of nonlinear (damage) simulations of such materials. Computational efficiency is increased through the improved scalability of the numerical algorithms. Consequently, substructuring methods for partitioning the hybrid mesh were implemented, tested, and adapted to the HPC computing framework, using several hundred CPU (central processing unit) nodes to build the finite element assembly. A memory-efficient, iterative, parallelized equation solver, combined with a special preconditioning technique for solving the underlying equation system, was modified and adapted to enable combined CPU- and GPU-based computations.
Hence, the author recommends applying the substructuring method for hybrid meshes, which respects the different material phases and their mechanical behavior and makes it possible to split the structure into elastic and inelastic parts. The consideration of the nonlinear material behavior, specified for the corresponding phase, is then limited to the inelastic domains only, which reduces the computing time of the nonlinear procedure. Due to the high numerical effort of such simulations, an alternative approach to nonlinear finite element analysis, based on sequential linear analysis, was implemented with respect to scalable HPC. The incremental-iterative procedure of the nonlinear step in finite element analysis (FEA) was replaced by a sequence of linear FE analyses triggered when damage occurred in critical regions, known in the literature as the saw-tooth approach. As a result, qualitative (smeared) crack initiation in 3D multiphase specimens has been simulated efficiently.
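The saw-tooth idea can be sketched in a few lines: instead of an incremental-iterative nonlinear solve, each step is a purely linear analysis followed by a stiffness reduction in the single most over-stressed element. The following toy model, a one-d.o.f. system of parallel springs with illustrative numbers, is an assumption for demonstration only, not the thesis implementation:

```python
# Toy sketch of the saw-tooth (sequential linear analysis) approach:
# the nonlinear incremental-iterative solve is replaced by a sequence
# of purely linear analyses. After each linear solve, the single most
# over-stressed element has its stiffness reduced, and the analysis is
# repeated until every element is admissible.

def saw_tooth(stiffness, strengths, total_force, reduction=0.5, max_steps=100):
    k = list(stiffness)
    damaged = set()
    u = 0.0
    for _ in range(max_steps):
        u = total_force / sum(k)                 # linear solve (single d.o.f.)
        ratios = [k[i] * u / strengths[i] for i in range(len(k))]
        crit = max(range(len(k)), key=ratios.__getitem__)
        if ratios[crit] <= 1.0:
            break                                # all element forces admissible
        k[crit] *= reduction                     # saw-tooth stiffness reduction
        damaged.add(crit)
    return u, damaged
```

Each stiffness reduction redistributes the load among the remaining springs through another purely linear solve, which is what makes the procedure attractive for scalable HPC: every step reuses an ordinary linear solver.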
The construction industry has changed considerably in recent years as a result of market globalization combined with the increased use of modern technologies. The planning and execution of construction projects are becoming ever more complex and involve greater risks. Financial and time resources are becoming scarcer amid ever-fiercer competition.
Project management provides approaches for bringing construction projects to a successful conclusion even under difficult conditions and increased risks. Systematic risk management, from project development through to project completion, is of decisive importance for project success.
The aim of this work is to enable quantitative risk assessment for project managers acting as professional client representatives, and to simulate the effects of risks on the course of a project during the planning and construction phases. An abstract model is intended to allow a differentiated, practice-oriented simulation that reflects the various ways in which work progress and costs arise. In parallel, the description of risks is abstracted so that arbitrary risks can be recorded quantitatively and their effects, including possible countermeasures, can subsequently be integrated into the model.
Two examples illustrate the different applications of the quantitative assessment of project risks and the subsequent simulation of their effects. In the first example, a real, already completed rail infrastructure project, the effectiveness of a preventive measure against a project risk is examined. In the second example, a simulation-game approach for the practice-oriented education and training of project managers is developed. The subject of the simulation game is the planning and construction of a privately financed, public representative building with partial third-party use.
Abstract
The research question guiding this work is: what is the logic of urban reproduction of an intermediate city in the context of post-socialist territories, and why do we refer to that logic in terms of post-Fordist urban development? This investigation demonstrates how the particular interweaving of territorialities (the spatialized strategies of agents) and the synergies derived from them, with their respective expressions in space, characterize the urban development of Weimar.
The general objective of this work is to interpret the type of post-Fordist urban development achieved by this intermediate city, which is configured in relation to its immediate hinterland as a city-region within a region of cities. The specific objectives are: to contextualize the Weimar city-region geohistorically, to analyze the factors that enable and condition its urban development, to identify the constellation of social agents participating in the city's governance process along with their strategies, and to investigate the territorial synergies arising from the intersection and convergence of those strategies turned into territorialities.
The work applied a qualitative research methodology based on: a bibliographic survey, the reading of local and regional newspapers, in-situ observations, the capturing of atmosphere, interviews complemented with official statistics, and the collection of informational brochures.
One of the main contributions of this work is the definition of the concept of post-Fordist urban development as a mode of urban reproduction in the context of capitalist relations of production characterized by flexible production, the outsourcing of public services, the creation of new territorial levels of regulation, the flexibilization of urban labor markets, and the precarization of living conditions. Recent decades have seen profound changes in cities. It is thus possible to speak of cities in transition that try out a diversity of responses to the challenges these changes imply. Indeed, many urban spaces have undergone processes of rapid deindustrialization and/or reindustrialization, whereby cities have partially lost their economic base or have had to restructure themselves as places tied to the service economy. Even cities that retained mass-production industries typical of the so-called Fordist economy, such as the automotive industry, required significant changes in their spatial planning. The changes in modes of production and their socio-territorial impacts in the context of glocalization thus reveal the transition from a Fordist to a post-Fordist society.
This work presents and evaluates the results of experimental investigations on unreinforced and reinforced modified concretes under monotonically increasing loading up to failure, single short-term loading near the limit of load-bearing capacity, and repeated loading with continuous loading and unloading rates. Two basic approaches were used to modify the concretes: variation of the aggregate and modification of the binder phase with thermoplastic polymers. Of particular interest were the effects of the modifications on the strength properties and the deformation behavior of the concrete under short-term loading.
The observed changes in the hardened concrete properties, as well as the nonlinear relationship between the elastic and inelastic deformation components, indicate that such modifications significantly influence the deformation and fracture behavior of concrete and must therefore be taken into account in the verification of load-bearing capacity and serviceability. In addition to evaluating the load-dependent deformation behavior, the established approaches for describing the microstructural state regions under compressive loading are further developed, so that the transitions between the regions can be determined precisely and the extent of the regions can be quantified. This permits a more accurate comparison of the changes induced by the modifications.
Interactive scientific visualizations are widely used for the visual exploration and examination of physical data resulting from measurements or simulations. Driven by technical advancements of data acquisition and simulation technologies, especially in the geo-scientific domain, large amounts of highly detailed subsurface data are generated. The oil and gas industry is particularly pushing such developments as hydrocarbon reservoirs are increasingly difficult to discover and exploit. Suitable visualization techniques are vital for the discovery of the reservoirs as well as their development and production. However, the ever-growing scale and complexity of geo-scientific data sets result in an expanding disparity between the size of the data and the capabilities of current computer systems with regard to limited memory and computing resources.
In this thesis we present a unified out-of-core data-virtualization system supporting geo-scientific data sets consisting of multiple large seismic volumes and height-field surfaces, wherein each data set may exceed the size of the graphics memory or possibly even the main memory. Current data sets fall within the range of hundreds of gigabytes up to terabytes in size. Through the mutual utilization of memory and bandwidth resources by multiple data sets, our data-management system is able to share and balance limited system resources among different data sets. We employ multi-resolution methods based on hierarchical octree and quadtree data structures to generate level-of-detail working sets of the data stored in main memory and graphics memory for rendering. The working set generation in our system is based on a common feedback mechanism with inherent support for translucent geometric and volumetric data sets. This feedback mechanism collects information about required levels of detail during the rendering process and is capable of directly resolving data visibility without the application of any costly occlusion culling approaches. A central goal of the proposed out-of-core data management system is an effective virtualization of large data sets. Through an abstraction of the level-of-detail working sets, our system allows developers to work with extremely large data sets independent of their complex internal data representations and physical memory layouts.
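The level-of-detail working-set idea can be illustrated with a minimal quadtree refinement sketch: a node is refined while it is large relative to its distance from the camera, otherwise it enters the working set. The distance-based error metric and all names here are simplified assumptions; the actual system derives refinement from its rendering feedback mechanism rather than a pure distance test:

```python
# Minimal quadtree level-of-detail working-set selection sketch.
# A node (cx, cy, size, level) is refined into its four children while
# its size is large relative to its distance from the camera position
# `cam`; otherwise it is emitted as part of the rendering working set.

def select_working_set(cx, cy, size, level, max_level, cam, tau=0.5):
    dist = max(((cx - cam[0]) ** 2 + (cy - cam[1]) ** 2) ** 0.5, 1e-6)
    if level == max_level or size / dist < tau:
        return [(cx, cy, size, level)]           # coarse enough: keep node
    h = size / 2.0
    nodes = []
    for dx in (-h / 2, h / 2):                   # recurse into four children
        for dy in (-h / 2, h / 2):
            nodes += select_working_set(cx + dx, cy + dy, h,
                                        level + 1, max_level, cam, tau)
    return nodes
```

A distant camera keeps the whole domain as a single coarse node, while a nearby camera refines down to the finest level, which is the essential behavior a multi-resolution working set must provide.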
Based on this out-of-core data virtualization infrastructure, we present distinct rendering approaches for specific visualization problems of large geo-scientific data sets. We demonstrate the application of our data virtualization system and show how multi-resolution data can be treated exactly the same way as regular data sets during the rendering process. An efficient volume ray casting system is presented for the rendering of multiple arbitrarily overlapping multi-resolution volume data sets. Binary space-partitioning volume decomposition of the bounding boxes of the cube-shaped volumes is used to identify the overlapping and non-overlapping volume regions in order to optimize the rendering process. We further propose a ray casting-based rendering system for the visualization of geological subsurface models consisting of multiple very detailed height fields. The rendering of an entire stack of height-field surfaces is accomplished in a single rendering pass using a two-level acceleration structure, which combines a minimum-maximum quadtree for empty-space skipping and sorted lists of depth intervals to restrict ray intersection searches to relevant height fields and depth ranges. Ultimately, we present a unified rendering system for the visualization of entire geological models consisting of highly detailed stacked horizon surfaces and massive volume data. We demonstrate a single-pass ray casting approach facilitating correct visual interaction between distinct translucent model components, while increasing the rendering efficiency by reducing processing overhead of potentially invisible parts of the model. The combination of image-order rendering approaches and the level-of-detail feedback mechanism used by our out-of-core data-management system inherently accounts for occlusions of different data types without the application of costly culling techniques.
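The minimum-maximum quadtree used for empty-space skipping can be sketched as a simple bottom-up construction over a square height field: each inner node stores the (min, max) height of its four children, so a ray whose current height interval lies entirely outside a node's range can skip the whole node without testing individual cells. The following is an illustrative sketch, not the thesis code:

```python
# Bottom-up min-max quadtree over a square 2^n x 2^n height field.
# Returns a list of levels, root level first; each node is a
# (min_height, max_height) pair covering its quadrant, which enables
# empty-space skipping during height-field ray casting.

def build_minmax_quadtree(grid):
    n = len(grid)
    level = [[(grid[y][x], grid[y][x]) for x in range(n)] for y in range(n)]
    levels = [level]
    while len(level) > 1:
        m = len(level) // 2
        nxt = []
        for y in range(m):
            row = []
            for x in range(m):
                cells = (level[2 * y][2 * x], level[2 * y][2 * x + 1],
                         level[2 * y + 1][2 * x], level[2 * y + 1][2 * x + 1])
                row.append((min(c[0] for c in cells),
                            max(c[1] for c in cells)))
            nxt.append(row)
        levels.append(nxt)
        level = nxt
    return levels[::-1]   # coarsest (root) level first
```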
The unified out-of-core data-management and virtualization infrastructure considerably facilitates the implementation of complex visualization systems. We demonstrate its applicability for the visualization of large geo-scientific data sets using output-sensitive rendering techniques. As a result, the magnitude and multitude of data sets that can be interactively visualized is significantly increased compared to existing approaches.
This thesis focuses on the analysis and design of blockcipher-based hash functions and authenticated encryption schemes. We give an introduction to these fields of research, from a blockcipher-based point of view, with special emphasis on double-length, double-call blockcipher-based compression functions. The first main topic (thesis parts I-III) is the analysis and design of hash functions. We start with a collision security analysis of some well-known double-length blockcipher-based compression functions and hash functions: Abreast-DM, Tandem-DM, and MDC-4. We also propose new double-length compression functions with elevated collision security guarantees. We complement the collision analysis with a preimage analysis, stating (near-)optimal security results for Abreast-DM, Tandem-DM, and Hirose-DM; some generalizations are also discussed. These are the first preimage security results for blockcipher-based double-length hash functions that go beyond the birthday barrier.
We then raise the abstraction level and analyze the notion of 'hash function indifferentiability from a random oracle'. We no longer focus on how to obtain a good compression function but, instead, on how to obtain a good hash function using (other) cryptographic primitives. In particular, we give examples where this strong notion of hash function security may give questionable advice for building a practical hash function. In the second main topic (thesis part IV), on authenticated encryption schemes, we present an on-line authenticated encryption scheme, McOEx, that simultaneously achieves privacy and authenticity and is secure against nonce misuse. It is the first dedicated scheme that achieves high standards of security and, at the same time, is on-line computable.
Introduction:
Art and the art world have changed considerably in recent decades and will, in all likelihood, change even more rapidly and thoroughly in the future. My dissertation analyzes the current state of the art world and the consequences to be drawn from it for the expected development, particularly with regard to the education of artists at art academies. There, in my view, the professional aspects of the artistic field (inside and outside the academy) should be explained and taught more thoroughly.
The work focuses on the following four aspects and their interrelations: the artist, the working world, education, and the net and networking.
These findings are based on my research into the four main topics in the course of my teaching work and my own artistic practice in recent years; they reflect this research, serve at the same time as an example of its application, and provide an overview of its execution in practice.
Note
The files presented here (in 5 parts) are the digital publication of my dissertation within the doctoral program "Kunst und Design" at the Bauhaus-Universität Weimar.
This publication is open source and will continue to be developed in an open, collaborative process. The current version can always be found at: http://phd.nts.is Further formats for download are available there, as well as the complete (markdown-formatted) source text.
(For copyright and licensing reasons, some images have been omitted from this version of the plates. The printed edition, which contains all plates, is available in the library of the Bauhaus-Universität.)
Parts:
- Thesis paper (Thesenpapier)
- PhD dissertation
- Image plates (Bildtafeln)
- Der 5-Jahres-Plan
- KIOSK09 catalogue
Bentonites are swellable clays frequently used in environmental engineering (in sealing structures or in soil remediation). The aim of this work was to clarify how different cation populations with Cu2+ and NH4+ influence the properties of bentonites at room temperature and after moderate thermal treatment (300-450 °C). The focus was, in particular, on the simultaneous presence of copper and ammonium ions, which were selected as representatives of common constituents of waters in the vicinity of technical bentonites.
The investigations of Cu2+ sorption at room temperature and after moderate thermal treatment (300-450 °C) were carried out on powder samples of two technical bentonites that differ in their original cation population, in the type and proportion of accessory minerals, and in the layer charge distribution of their montmorillonites. Before thermal treatment, the bentonites were loaded by contact with copper and ammonium solutions of various concentrations containing different amounts of the cations Cu2+, NH4+, Na+, Ca2+, and Mg2+.
As expected, the uptake of copper ions into the bentonites by cation exchange at room temperature was influenced by the accessory minerals present (such as carbonate), so that copper ions were additionally adsorbed specifically and enriched in solid phases.
The fixation of Cu2+ as a result of thermal treatment was influenced by the total Cu2+ content of the bentonites, the presence of accessory minerals, and the layer charge distribution of the montmorillonites. Treatment temperatures above 400 °C were generally required to achieve Cu2+ fixation rates above 95%.
If NH4+ ions were present in the bentonites alongside Cu2+ ions during thermal treatment, the Cu2+ fixation temperature could be lowered. The deammoniation (NH4+ --> NH3 + H+) of the NH4+-loaded bentonites took place largely below the dehydroxylation temperature of the bentonites.
Investigations (XRD, FTIR, NMR, ESR) of the mechanism of Cu2+ incorporation into the bentonites, performed on specially prepared samples (carbonate-free, < 2 µm), demonstrated that in the Cu2+-loaded montmorillonites the Cu2+ ions do not penetrate as far as the octahedral sheet of the clay minerals as a result of thermal treatment, but migrate only into the tetrahedral sheet. In the NH4+-loaded montmorillonites, no additional structural changes (such as dissolution of the octahedral sheet) occur in connection with deammoniation as a result of the thermal treatment.
Web applications that are based on user-generated content are often criticized for containing low-quality information; a popular example is the online encyclopedia Wikipedia. The major points of criticism pertain to the accuracy, neutrality, and reliability of information. The identification of low-quality information is an important task, since for a huge number of people around the world it has become a habit to visit Wikipedia first whenever information is needed. Existing research on quality assessment in Wikipedia either investigates only small samples of articles, or else deals with the classification of content into high-quality or low-quality. This thesis goes further: it targets the investigation of quality flaws, thus providing specific indications of the respects in which low-quality content needs improvement. The original contributions of this thesis, which relate to the fields of user-generated content analysis, data mining, and machine learning, can be summarized as follows:
(1) We propose the investigation of quality flaws in Wikipedia based on user-defined cleanup tags. Cleanup tags are commonly used in the Wikipedia community to tag content that has some shortcomings. Our approach is based on the hypothesis that each cleanup tag defines a particular quality flaw.
(2) We provide the first comprehensive breakdown of Wikipedia's quality flaw structure. We present a flaw organization schema, and we conduct an extensive exploratory data analysis which reveals (a) the flaws that actually exist, (b) the distribution of flaws in Wikipedia, and (c) the extent of flawed content.
(3) We present the first breakdown of Wikipedia's quality flaw evolution. We consider the entire history of the English Wikipedia from 2001 to 2012, which comprises more than 508 million page revisions, summing up to 7.9 TB. Our analysis reveals (a) how the incidence and the extent of flaws have evolved, and (b) how the handling and the perception of flaws have changed over time.
(4) We are the first to operationalize an algorithmic prediction of quality flaws in Wikipedia. We cast quality flaw prediction as a one-class classification problem, develop a tailored quality flaw model, and employ a dedicated one-class machine learning approach. A comprehensive evaluation based on human-labeled Wikipedia articles underlines the practical applicability of our approach.
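The one-class setting can be illustrated with a deliberately simple stand-in model: only positive (flawed) examples are available for training, and a new article is predicted as flawed when it is sufficiently similar to them. The centroid-plus-threshold rule below is a hypothetical toy, not the dedicated one-class learner used in the thesis:

```python
# Toy one-class classifier: fit only on positive (flawed) feature
# vectors, then accept a new vector as 'flawed' when its distance to
# the training centroid is within the q-quantile of training distances.

def fit_one_class(train_vectors, quantile=0.95):
    dim = len(train_vectors[0])
    centroid = [sum(v[i] for v in train_vectors) / len(train_vectors)
                for i in range(dim)]

    def dist(v):
        return sum((v[i] - centroid[i]) ** 2 for i in range(dim)) ** 0.5

    dists = sorted(dist(v) for v in train_vectors)
    threshold = dists[min(int(quantile * len(dists)), len(dists) - 1)]
    return lambda v: dist(v) <= threshold        # True = predicted flawed
```

The defining property of the one-class formulation is visible here: no negative (flaw-free) examples are needed to fit the decision boundary, which matters in Wikipedia because untagged articles are not guaranteed to be flaw-free.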
Metakaolin made from kaolin is used around the world, but rarely in Vietnam, where abundant deposits of kaolin are found. The first studies on producing metakaolin were conducted with high-quality Vietnamese kaolins. The results showed the potential to produce metakaolin and the effect it has on the strength development of mortars and concretes. However, the utilisation of low-quality kaolin for producing Vietnamese metakaolin has not been studied so far.
The objectives of this study were to produce a good-quality metakaolin from low-quality Vietnamese kaolin and to facilitate the utilisation of Vietnamese metakaolin in composite cements.
To reach these goals, the optimal thermal conversion of Vietnamese kaolin into metakaolin was determined through numerous investigations, using the analysis results of DSC/TGA, XRD, and CSI. During calcination in the range of 500-800 °C for 1-5 hours, the calcined kaolin was also characterised with respect to mass loss, BET surface, PSD, and density, as well as the presence of residual water. A good correlation was found between residual water and BET surface.
The pozzolanic activity of the metakaolin was tested by various methods: the saturated lime method, mCh, and the TGA-CaO method. The results of the study show which method is the most suitable for characterising the real activity of metakaolin and achieves the best agreement with concrete performance. Furthermore, the pozzolanic activity results obtained with these methods were analysed and compared with each other with respect to BET surface.
The properties of Vietnamese metakaolin were established through investigations of water demand, setting time, spread flowability, and strength. It is concluded that, depending on the intended use of the composite cement and the curing conditions, each Vietnamese metakaolin can be used appropriately to produce (1) a composite cement with a low water demand, (2) a high-strength composite cement, (3) a composite cement that aims to reduce CO2 emissions and to improve the economics of cement products, or (4) a high-performance mortar.
The durability of metakaolin mortar was also tested, successfully determining the metakaolin content needed to resist ASR, sulfate, and sulfuric acid attacks.
The present thesis studies the effects of rice husk ash (RHA) as a pozzolanic admixture, and of the combination of RHA and ground granulated blast-furnace slag (GGBS), on the properties of ultra-high performance concrete (UHPC). The ultimate purpose of this study is to completely replace silica fume (SF) and partially replace Portland cement with RHA and GGBS to achieve a sustainable UHPC. To this end, the characteristics of RHA as a function of the grinding period, especially its pozzolanic reactivity in saturated Ca(OH)2 solution and in a cementitious system at a very low water-binder ratio (w/b), were assessed. The influence of RHA on the compatibility between superplasticizer and binder, and on the workability, compressive strength, shrinkage, internal relative humidity, microstructure, and durability of UHPC, was also evaluated. Furthermore, synergetic effects of RHA and GGBS on the properties of UHPC were investigated to produce an even more sustainable UHPC. Finally, various heat treatments were applied to study the properties of UHPC under these conditions. All characteristics of the UHPCs containing RHA were compared with those of mixtures containing SF.
The historical development of early modern residential buildings (1450 to 1637) in the town of Meißen is examined here primarily with the methods of building archaeology and building history, supplemented by paint-layer analysis, dendrochronological dating, and archival sources. A differentiation of the dwelling forms of different social groups (craftsmen, the early bourgeoisie, and the higher and lower clergy) by building and spatial structure appears to be possible. The investigation rests on the partly very detailed evaluation of findings obtained in twenty residential buildings.
The aim of this doctoral thesis was to investigate whether the German term "shrinking city" is applicable to depopulating Polish cities. To this end, an attempt was made to define the currently still vague notion of "shrinking city". The urban development of Eastern Germany was thoroughly examined both in a short-term perspective and in a wide historical as well as international context, with Polish urban development used as a reference. Twenty-five cities (kreisfreie Städte) in Eastern Germany and the depopulating Polish cities of Łódź and the Metropolis Silesia were chosen as case studies.
On the basis of the gathered information, a "shrinking city" in Eastern Germany was defined as a city with a long-lasting population decrease coupled with over-dimensioned, growth-oriented development policies carried out for decades. Such a development path triggers negative consequences in the spatial, economic, and demographic dimensions, which tend to intensify one another.
The thesis argues that the definition of the "shrinking city in Eastern Germany" is not applicable to depopulating cities in Poland. Polish cities are characterized by a short-lasting population decrease, and this trend is not triggering negative spatial and economic consequences. Oversized, growth-oriented development policies were never present in these cities, which still suffer from great deficiencies in housing and other basic infrastructure dating from the socialist period. Furthermore, the radical de-economization known from Eastern German cities did not occur in the Polish cities. Both Łódź and the Metropolis Silesia remain major production centers of the country.
This doctoral thesis presents a view that contradicts contemporary publications on "shrinking cities", in which the phenomenon is regarded as having occurred suddenly after the collapse of socialism. It shows that "shrinking cities" in Eastern Germany are not the outcome of short-lived processes but are deeply rooted in the past. Moreover, they represent a very distinct development pattern that differs markedly both from the one found in Central Eastern Europe and from the one in Western Europe. In this way, the doctoral thesis provides a new, critical approach to the discourse on "shrinking cities" in Germany. It also draws attention to the importance of historical analysis in research on urban development, particularly in cross-border studies. In times of European integration, peculiarities resulting from centuries of different spatial, economic, and social development paths should not be underestimated.
Die Arbeit zeigt die wesentlichen Gründe auf, warum betahalbhydratreiche Niederbranntgipsbinder (industriell als Stuckgips bezeichnet) oft sehr unterschiedliche Eigenschaften aufweisen.
Der Anteil an Halbhydrat, welches aus dem stark hygroskopischen Anhydrit III (A III) durch die Reaktion mit Luftfeuchtigkeit entsteht, stellt einen erheblichen, bislang vollkommen unbeachteten Einfluss dar. Dieses Halbhydrat aus A III zeigt andere Oberflächeneigenschaften und ein Reaktionsverhalten, das von frisch gebranntem Betahalbhydrat abweicht.
Es zeigt sich, wie weitreichend der Einfluss physiko-chemischer Oberflächenprozesse wie Adsorption und Kondensation ist. Hierdurch wird nicht nur die Oberflächenenergie der Partikel abgebaut, sondern auch eine Verminderung der Hydratationswärme verursacht. Somit wirken sich physikalische Vorgänge thermodynamisch aus. Einwirkende und resultierende Parameter einer Alterung wirken wie folgt äußerst komplex zusammen:
The dominant binder properties, setting behavior and water demand, change upon ageing both as a result of the phase transformations and as a consequence of the changes in the crystallites. The change in surface characteristics is equally influential. The effect of ageing on reactivity goes well beyond the decomposition of anhydrite III, the depletion of settable material and the accelerating effect of ageing dihydrate. The growth of the hemihydrate crystallites and the reduction of the internal energy, as well as the energetically favorable spontaneous loading of the crystal-lattice channels of the smallest anhydrite III crystallites with water vapor, must be highlighted as the decisive causes of the decrease in reactivity due to ageing. The decrease in specific surface area and surface energy also affects the dissolution and hydration processes. The anhydrite II crystallized on the surface of anhydrite III continues to inhibit dissolution even after the conversion of A III into hemihydrate. This effect is cancelled or diminished by the ageing-induced formation of dihydrate, which sets in under persistent exposure to moisture. Although dihydrate is known for its accelerating effect, ageing dihydrate develops only a weak nucleating effect owing to its particular formation within the condensed-water layer, which is only a few molecular layers thick.
A key finding concerns the bonding character of the superstoichiometric water, for which a purely physical bonding can be demonstrated. The water referred to in this thesis as more strongly adsorptively bound occurs, besides the free moisture, exclusively in the presence of hemihydrate. This relationship is established for the first time and is explained by the higher surface energy of hemihydrate resulting from its crystal chemistry.
Text classification deals with discovering knowledge in texts and is used for extracting, filtering, or retrieving information in streams and collections. The discovery of knowledge is operationalized by modeling text classification tasks, which is mainly a human-driven engineering process. The outcome of this process, a text classification model, is used to inductively learn a text classification solution from a priori classified examples. The building blocks of modeling text classification tasks cover four aspects: (1) the way examples are represented, (2) the way examples are selected, (3) the way classifiers learn from examples, and (4) the way models are selected.
This thesis proposes methods that improve the prediction quality of text classification solutions for unseen examples, especially for non-standard tasks where standard models do not fit. The original contributions are related to the aforementioned building blocks: (1) Several topic-orthogonal text representations are studied in the context of non-standard tasks and a new representation, namely co-stems, is introduced. (2) A new active learning strategy that goes beyond standard sampling is examined. (3) A new one-class ensemble for improving the effectiveness of one-class classification is proposed. (4) A new model selection framework to cope with subclass distribution shifts that occur in dynamic environments is introduced.
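The co-stem representation mentioned in (1) can be illustrated with a minimal sketch: a co-stem is the residue of a word once its stem has been removed. The function below is a hypothetical illustration, assuming stems are word prefixes as produced by common suffix-stripping stemmers; it is not the thesis' actual implementation.

```python
def costem(word, stem):
    """Return the co-stem: the residue left after removing the stem.

    Illustrative assumption: the stem is a prefix of the word, as
    produced by common suffix-stripping stemmers.
    """
    if word.startswith(stem):
        return word[len(stem):]
    return word  # stem is not a prefix: keep the whole word as residue

# A document can then be represented by its co-stems instead of its
# stems, capturing topic-orthogonal (e.g. stylistic) information.
pairs = [("running", "run"), ("stylistic", "stylist"), ("quickly", "quick")]
costems = [costem(w, s) for w, s in pairs]
print(costems)  # ['ning', 'ic', 'ly']
```

Whereas stems carry the topical content of a word, the discarded affixes tend to carry style, which is why such a representation is interesting for non-standard (non-topical) classification tasks.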
The automotive industry requires realistic virtual reality applications more than other domains in order to increase the efficiency of product development. Currently, the visual quality of virtual environments resembles reality, but interaction within these environments is usually far from what is known in everyday life. Several realistic research approaches exist; however, they are still not all-encompassing enough to be usable in industrial processes. This thesis realizes lifelike direct multi-hand and multi-finger interaction with arbitrary objects, and proposes algorithmic and technical improvements that also approach lifelike usability. In addition, the thesis proposes methods to measure the effectiveness and usability of such interaction techniques, and discusses different types of grasping feedback that support the user during interaction. Realistic and reliable interaction is achieved through the combination of robust grasping heuristics and plausible pseudo-physical object reactions. The easy-to-compute grasping rules use the objects’ surface normals and mimic human grasping behavior. The novel concept of Normal Proxies increases grasping stability and diminishes challenges induced by adverse normals. The intricate act of picking up thin and tiny objects remains challenging for some users. These cases are further supported by the consideration of finger pinches, which are measured with a specialized finger tracking device. With regard to typical object constraints, realistic object motion is geometrically calculated as a plausible reaction to user input. The resulting direct finger-based
interaction technique enables realistic and intuitive manipulation of arbitrary objects. The thesis proposes two methods that prove and compare effectiveness and usability. An expert review indicates that experienced users quickly familiarize themselves with the technique. A quantitative and qualitative user study shows that direct finger-based interaction is preferred over indirect interaction in the context of functional car assessments. While controller-based interaction is more robust, direct finger-based interaction provides greater realism and becomes nearly as reliable when the pinch-sensitive mechanism is used. At present, the haptic channel is not used in industrial virtual reality applications; it is therefore available for grasping feedback, which improves the user’s understanding of the grasping situation. This thesis realizes novel pressure-based tactile feedback at the fingertips. As alternatives, vibro-tactile feedback at the same location is realized, as well as visual feedback through the coloring of grasp-involved finger segments. The feedback approaches are also compared within the user study, which reveals that grasping feedback is a requirement for judging grasp status and that tactile feedback improves interaction independently of the display system used. The considerably stronger vibro-tactile feedback can, however, quickly become annoying during interaction. The interaction improvements and hardware enhancements make it possible to interact with virtual objects in a realistic and reliable manner. By addressing realism and reliability, this thesis paves the way for the virtual evaluation of human-object interaction, which is necessary for a broader application of virtual environments in the automotive industry and other domains.
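The idea of normal-based grasping rules can be sketched with a minimal, hypothetical example: two finger contacts are classified as a stable grasp when the object surface normals at the contact points sufficiently oppose each other. The function name and the angle threshold are illustrative assumptions, not the heuristics actually used in the thesis.

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def is_stable_grasp(normal_a, normal_b, min_angle_deg=150.0):
    """Hypothetical normal-based grasping rule: two finger contacts
    form a stable grasp when the object surface normals at the contact
    points oppose each other sufficiently (angle above a threshold).
    Unit-length normals are assumed."""
    cos_angle = max(-1.0, min(1.0, dot(normal_a, normal_b)))
    angle = math.degrees(math.acos(cos_angle))
    return angle >= min_angle_deg

# Thumb and index finger pressing on opposite sides of a thin plate:
print(is_stable_grasp((0.0, 1.0, 0.0), (0.0, -1.0, 0.0)))  # True
# Two fingers on the same face cannot hold the object:
print(is_stable_grasp((0.0, 1.0, 0.0), (0.0, 1.0, 0.0)))   # False
```

A rule of this kind is cheap to evaluate per frame, which is why geometric heuristics on surface normals are attractive compared to a full physics simulation of the hand-object contact.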
In this research work, an energy approach is employed for assessing the quality of dynamic soil-structure interaction (SSI) models, and energy measures are introduced and investigated as general indicators of structural response.
Dynamic SSI models with various abstraction levels are then investigated according to different coupling scenarios for soil and structure models.
The hypothesis of increasing model uncertainty with decreasing complexity is investigated, and a mathematical framework is provided for the treatment of model uncertainty. This framework is applied to a case study involving alternative models for incorporating dynamic SSI effects. In the evaluation process, energy measures are used within the framework of the “adjustment factor” approach in order to quantitatively assess the uncertainty associated with SSI models. Two primary types of uncertainty are considered, namely the uncertainty in the model framework and the uncertainty in the model input parameters.
Investigations on model framework uncertainty show that the more complex three-dimensional FE model has the best quality of the models investigated, whereas the Wolf SSI model produces the lowest model uncertainty of the simpler models. The fixed-base model produces the highest estimated uncertainty and accordingly the worst quality of all models investigated.
These results confirm the hypothesis of increasing model uncertainty with decreasing complexity only when the assessment is based on the ratio of structural hysteretic energy to input energy as a response indicator.
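As a rough illustration of this indicator, the ratio of hysteretic to input energy can be computed from sampled force-displacement histories. The trapezoidal integration and the toy data below are illustrative assumptions, not the models or records used in the study.

```python
def trapz(y, x):
    """Trapezoidal integration of samples y over abscissa x."""
    return sum((y[i] + y[i + 1]) * (x[i + 1] - x[i]) / 2.0
               for i in range(len(y) - 1))

def energy_ratio(force, displacement, base_shear, ground_disp):
    """Sketch of the energy-based response indicator: the ratio of
    structural hysteretic energy (restoring force integrated over
    structural displacement) to input energy (base shear integrated
    over ground displacement), assuming sampled time histories."""
    e_hyst = trapz(force, displacement)
    e_input = trapz(base_shear, ground_disp)
    return e_hyst / e_input

# Illustrative toy histories (consistent units assumed):
f = [0.0, 10.0, 0.0]     # restoring force [kN]
u = [0.0, 0.01, 0.02]    # structural displacement [m]
fg = [0.0, 20.0, 0.0]    # base shear [kN]
ug = [0.0, 0.01, 0.02]   # ground displacement [m]
print(energy_ratio(f, u, fg, ug))  # 0.5
```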
Increasingly powerful hard- and software allows for the numerical simulation of complex physical phenomena with high levels of detail. In light of this development, the definition of numerical models for the Finite Element Method (FEM) has become the bottleneck in the simulation process. Model generation is characterized by large manual effort and by a de-coupling of the geometric and the numerical model. In the highly probable case of design revisions, all steps of model preprocessing and mesh generation have to be repeated. This includes the idealization and approximation of a geometric model as well as the definition of boundary conditions and model parameters. Design variants leading to more resource-efficient structures might hence be disregarded due to limited budgets and constrained time frames.
A potential solution to the above problem is given by the concept of Isogeometric Analysis (IGA). The core idea of this method is to directly employ a geometric model for numerical simulations, which makes it possible to circumvent model transformations and the accompanying data losses. The basis for this method are geometric models described in terms of Non-Uniform Rational B-Splines (NURBS). This class of piecewise continuous rational polynomial functions is ubiquitous in computer graphics and Computer-Aided Design (CAD). It allows the description of a wide range of geometries using a compact mathematical representation. The shape of an object thereby results from the interpolation of a set of control points by means of the NURBS functions, allowing efficient representations for curves, surfaces and solid bodies alike. Existing software applications, however, only support the modeling and manipulation of the former two. The description of three-dimensional solid bodies consequently requires significant manual effort, essentially forbidding the setup of complex models.
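As a brief illustration of the NURBS machinery underlying IGA, the sketch below evaluates a curve point via the Cox-de Boor recursion for the B-spline basis functions. The knot vector, control points and function names are illustrative choices, not taken from the thesis.

```python
def bspline_basis(i, p, u, knots):
    """Cox-de Boor recursion for the i-th B-spline basis of degree p."""
    if p == 0:
        return 1.0 if knots[i] <= u < knots[i + 1] else 0.0
    left, right = 0.0, 0.0
    if knots[i + p] != knots[i]:
        left = ((u - knots[i]) / (knots[i + p] - knots[i])
                * bspline_basis(i, p - 1, u, knots))
    if knots[i + p + 1] != knots[i + 1]:
        right = ((knots[i + p + 1] - u) / (knots[i + p + 1] - knots[i + 1])
                 * bspline_basis(i + 1, p - 1, u, knots))
    return left + right

def nurbs_point(u, degree, knots, ctrl_pts, weights):
    """Evaluate a NURBS curve point: a weighted rational combination of
    the control points (illustrative sketch)."""
    num = [0.0] * len(ctrl_pts[0])
    den = 0.0
    for i, (pt, w) in enumerate(zip(ctrl_pts, weights)):
        b = bspline_basis(i, degree, u, knots) * w
        den += b
        for d in range(len(pt)):
            num[d] += b * pt[d]
    return [c / den for c in num]

# Quadratic curve with an open knot vector; u = 0.5 lies mid-span.
knots = [0, 0, 0, 1, 1, 1]
ctrl = [[0.0, 0.0], [1.0, 1.0], [2.0, 0.0]]
print(nurbs_point(0.5, 2, knots, ctrl, [1.0, 1.0, 1.0]))  # [1.0, 0.5]
```

With unit weights the curve reduces to a plain B-spline; non-unit weights are what allow NURBS to represent conics such as circular arcs exactly.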
This thesis proposes a procedural approach for the generation of volumetric NURBS models. That is, a model is not described in terms of its data structures but as a sequence of modeling operations applied to a simple initial shape. In a sense, this describes the "evolution" of the geometric model under the sequence of operations. In order to adapt this concept to NURBS geometries, only a compact set of commands is necessary, which, in turn, can be adapted from existing algorithms. A model can then be treated in terms of interpretable model parameters. This leads to an abstraction from its data structures, and model variants can be set up by varying the governing parameters.
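The procedural idea, a model as a recorded sequence of operations replayed on a simple initial shape, can be sketched in a few lines. The two-dimensional shapes and operation names below are deliberately simplistic stand-ins for the NURBS modeling commands described in the thesis.

```python
# Minimal sketch of procedural geometry: the model IS its history of
# operations, parameterized by interpretable values.
def translate(points, dx, dy):
    return [(x + dx, y + dy) for x, y in points]

def scale(points, s):
    return [(x * s, y * s) for x, y in points]

def build(initial, operations):
    """Replay the recorded modeling history on the initial shape."""
    shape = initial
    for op, kwargs in operations:
        shape = op(shape, **kwargs)
    return shape

unit_square = [(0, 0), (1, 0), (1, 1), (0, 1)]
# A model variant only requires changing the parameters and replaying:
history = [(scale, {"s": 2.0}), (translate, {"dx": 1.0, "dy": 0.0})]
print(build(unit_square, history))
# [(1.0, 0.0), (3.0, 0.0), (3.0, 2.0), (1.0, 2.0)]
```

The design choice is that the data structures of the final geometry never need to be edited directly; a design revision is a parameter change followed by a cheap replay of the history.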
The proposed concept complements existing template modeling approaches: templates can not only be defined in terms of modeling commands but can also serve as input geometry for said operations. Such templates, arranged in a nested hierarchy, provide an elegant model representation. They offer adaptivity on each tier of the model hierarchy and allow the creation of complex models from only a few model parameters. This is demonstrated for volumetric fluid domains used in the simulation of vertical-axis wind turbines. Starting from a template representation of airfoil cross-sections, the complete "negative space" around the rotor blades can be described by a small set of model parameters, and model variants can be set up in a fraction of a second.
NURBS models offer high geometric flexibility, allowing a given shape to be represented in different ways. Different model instances can exhibit varying suitability for numerical analyses. For their assessment, Finite Element mesh quality metrics are considered. These metrics are based on purely geometric criteria and make it possible to identify model degenerations commonly used to achieve certain geometric features. They can be used to decide upon model adaptions and provide a measure of their efficacy. They do not, however, reveal a relation between mesh distortion and the ill-conditioning of the equation systems resulting from the numerical model.
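A purely geometric quality metric of the kind referred to above can be sketched for a quadrilateral element: the scaled Jacobian, i.e. the minimal normalized corner cross product, which equals 1 for a square and drops towards 0 (or below) for degenerate or inverted elements. The implementation is an illustrative sketch, not one of the metrics evaluated in the thesis.

```python
import math

def scaled_jacobian(quad):
    """Geometric quality of a 2D quadrilateral (counter-clockwise
    corners): the minimal cross product of unit edge vectors over all
    corners. 1.0 for a square; <= 0 for degenerate/inverted elements."""
    n = len(quad)
    worst = 1.0
    for i in range(n):
        p_prev, p, p_next = quad[i - 1], quad[i], quad[(i + 1) % n]
        e1 = (p_next[0] - p[0], p_next[1] - p[1])  # edge to next corner
        e2 = (p_prev[0] - p[0], p_prev[1] - p[1])  # edge to previous
        cross = ((e1[0] * e2[1] - e1[1] * e2[0])
                 / (math.hypot(*e1) * math.hypot(*e2)))
        worst = min(worst, cross)
    return worst

print(scaled_jacobian([(0, 0), (1, 0), (1, 1), (0, 1)]))  # 1.0 (square)
print(scaled_jacobian([(0, 0), (1, 0), (2, 1), (0, 1)]))  # < 1 (sheared)
```

Such a measure flags distorted elements by geometry alone; as noted above, it does not by itself predict the conditioning of the resulting equation system.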
The film BERLIN – DIE SINFONIE DER GROSSTADT (1927, Walter Ruttmann) is regarded to this day as the symphonic documentary film par excellence. It presents not only the lines of tradition rooted in New Objectivity (Neue Sachlichkeit) and the first theories of documentary film, but also Ruttmann's artistic and cinematic vision. In order to achieve the highest degree of objectivity in the depiction of reality, the film on the one hand faithfully rendered all theoretical and aesthetic ideas of New Objectivist art cinematically for the first time in a DOCUMENTARY FILM. Moreover, the Berlin film expresses its cultural contribution exclusively through cinematic means, that is, through purely visual means, or, in other words, through FILM LANGUAGE.
Contrary to the film's original announcement and to the film literature, which regard it as a documentary, it is in reality one of the first innovative music films in its historical context, attempting to interpret autonomous music through the montage of images of reality. Its objective, documentary reportage images are subordinated to the abstract structure and the meter of the music. The result, according to Ruttmann, is a VISUAL SYMPHONY, which in fact represented a vision of Berlin as a future model industrial city rather than the city itself. In other words, Berlin in this film is a realistic illusion, but by no means a real image. The film constitutes a new genre, the ART DOCUMENTARY FILM (KUNSTDOKUMENTARFILM), which does not follow the basic principles of documentary film.
This, however, also distinguishes it from the ARTISTIC DOCUMENTARY FILMS, which do resemble BERLIN – DIE SINFONIE DER GROSSTADT in formal style and aesthetics but, in contrast to it, fulfill the elementary requirements of a documentary film. Examples are DER MANN MIT DER KAMERA (1929, Dziga Vertov), KOYAANISQATSI (1982, Godfrey Reggio) and the remake BERLIN, SINFONIE EINER GROSSSTADT (2002, Thomas Schadt).
BERLIN – DIE SINFONIE DER GROSSTADT by Walter Ruttmann not only represents a new film genre; it is also a singular phenomenon in film history. In this sense, according to the film critic Helmut Korte, it played a decisive role in the further development of absolute film "as a model example of the possibilities of the artistic documentary film" (Korte 1991: 76).
Post-conflict reconstruction is a very complex undertaking, whether carried out by local actors or by the international community. The post-conflict development process is very hard to investigate, primarily because of the combination of socio-cultural phenomena, war and political instability; the difficulty of conducting solid empirical analysis (obtaining reliable data); and the challenge of dealing with war-torn communities. The multifaceted reconstruction process touches many of a country's vital sectors, each of which requires a different approach, coordination with the others, and unification towards their common aim.
The urgency of the assistance programs is not equal, nor are their priority and weight when compared with one another; consequently, there are occasionally programs for whose success other, less important actions are violated or neglected.
This is the case with the international community presence (the set-up): aside from its missions and projects, it is considered to play a very important role in the urban development of a post-conflict city. Yet this presence was never planned or considered in a holistic manner; the IC establishment was instead set up ad hoc and guided by concerns that contributed little to the urban development of the city, and even less to the citizens who were most in need.
The study focuses on urban development because the greatest concentration of the international community is likely to be found in urban centers, where the changes experienced are of considerable magnitude. The reconstruction phase is likely to last about ten years or more; consequently, the international community tends for that period to be recognized as temporary citizens of the city and will inevitably have an impact on its urban development. On that basis, it is considered significant that the international community establishment/set-up be included in the international organizations' mission and assist in the overall mission of reconstruction.