Increasing structural robustness is a goal of central interest to the structural engineering community. The partial collapse of reinforced concrete (RC) buildings is the subject of this dissertation. Understanding the robustness of RC buildings will guide the development of structures that are safer against abnormal loading scenarios such as explosions, earthquakes, fire, and long-term accumulation effects leading to deterioration or fatigue. Any of these may cause immediate local structural damage that can propagate through the rest of the structure, producing what is known as disproportionate collapse.
This work treats collapse propagation through various analytical approaches that simplify the mechanical description of reinforced concrete structures damaged by an extreme accidental event.
Environmental and operational variables and their impact on structural responses are acknowledged as one of the most important challenges for the application of ambient-vibration-based damage identification in structures. Damage detection procedures may yield poor results if the effects of loading and environmental conditions on the structure are not considered.
The reference-surface-based method proposed in this thesis is intended to overcome this problem. In the proposed method, meta-models are used to account for the significant effects of the environmental and operational variables. The use of approximation models allows the proposed method to handle multiple non-damage-related variables simultaneously in a simple way, which appears to be very complex for other methods. The inputs of the meta-model are the multiple non-damage-related variables, while the output is a damage indicator.
The reference-surface-based method diminishes the effect of the non-damage-related variables on the vibration-based damage detection results. Hence, the structural condition assessed from ambient vibration data at any time becomes more reliable. Immediate, reliable information on the structural condition is required to respond quickly to an event, that is, to take the necessary actions concerning the future use or further investigation of the structure, for instance shortly after extreme events such as earthquakes.
The critical part of the proposed damage detection method is the learning phase, where the meta-models are trained using the input-output relations of observation data. Significant problems that may be encountered during the learning phase are outlined, and some remedies to overcome them are suggested.
The proposed damage identification method is applied to numerical and experimental models. In addition to the natural frequencies, wavelet energy and stochastic subspace damage indicators are used.
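To illustrate the idea, the following is a minimal sketch (not the author's implementation) of a meta-model-based reference surface: a surrogate is trained to predict a baseline damage indicator, here a natural frequency, from non-damage-related variables such as temperature and operational load, and the residual between measurement and prediction then serves as an environment-corrected damage index. The variable names, synthetic data and the choice of a Gaussian-process surrogate are illustrative assumptions.

```python
# Minimal sketch of a reference-surface (meta-model) damage indicator.
# Assumptions: scikit-learn is available; the surrogate type (Gaussian
# process) and all numbers are illustrative, not the thesis's choices.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

rng = np.random.default_rng(0)

# Training data from the healthy (reference) state:
# columns = non-damage-related variables (temperature, operational load).
X_ref = rng.uniform([-5.0, 0.0], [35.0, 1.0], size=(200, 2))
# Measured natural frequency depends on the environment plus noise (synthetic).
f_ref = 2.0 - 0.003 * X_ref[:, 0] - 0.05 * X_ref[:, 1] + rng.normal(0, 0.002, 200)

# Learning phase: fit the reference surface over the environmental domain.
surrogate = GaussianProcessRegressor().fit(X_ref, f_ref)

def damage_index(env, f_measured):
    """Residual between measurement and environment-predicted baseline."""
    f_predicted = surrogate.predict(np.atleast_2d(env))[0]
    return f_predicted - f_measured  # positive residual: frequency drop

# Monitoring phase: a frequency drop not explained by the environment
# (here an artificial 2% reduction) shows up as a nonzero damage index.
print(damage_index([20.0, 0.5], 0.98 * (2.0 - 0.003 * 20.0 - 0.05 * 0.5)))
```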
The construction and operation of a sanitary landfill (SLF) in the Philippines raises concerns about the regulation of the informal sector's activities in the area. In anticipation of these directives, an association of informal waste reclaimers called the Uswag Calajunan Livelihood Association, Inc. (UCLA) was formed in May 2009. One option identified was a waste-to-energy activity: the production of fuel briquettes. With raw materials available in the area, what was lacking was an appropriate technology that would cater to the group's needs. This study therefore presents the case of UCLA and shows how socio-economic and technical aspects were integrated in the development and improvement of a briquetting technology for producing quality briquettes as part of the association's income-generating activities. A non-experimental, posttest-only design was used to collect descriptive information. The enhancement of the briquetting machine is also described and discussed, from the first hand-press molder developed to the finalized design.
Results revealed that the improved briquetting technology withstood the wear and tear of operation, showing a significant (P<0.01) increase in the production rate (220 pcs/hr; 4 kg/hr) and bulk density (444.83 kg/m³) of the briquettes produced. The quality of the cylindrical briquettes in terms of bulk density, heating value (15.13 MJ/kg), moisture (6.2%), N and S closely met or met the requirements of DIN 51731. Based on the operating expenses, the briquettes may be marked up to Php0.25/pc (USD0.006) or Php15.00/kg (USD0.34) for profit generation. The potential daily earnings of Php130.00 (USD2.95) to Php288.56 (USD6.56) from producing briquettes are higher than the majority of waste reclaimers' daily income of Php124.00 (USD2.82). The highly positive response (93%) on the usability of the briquettes and the respondents' willingness (81%) to buy them when sold in the market indicate their promising potential as a fuel in the nearby communities. The case of UCLA shows that briquette production can be considered a potential source of income, given the social, technical, economic and environmental feasibility demonstrated in the experiment. This method of utilizing wastes in an urban setting of a developing country may also be recommended for testing or replication in areas with similar socio-economic and physical set-ups.
This research represents an effort to contribute to critical thinking through an analysis of the hegemonic neoliberal ideology, which supports the idea of the end of history and a technocratic universalism that in turn implies the imposition of a single model of life, denying, in the name of realism and the end of utopias, any alternative possibility.
This makes it necessary to recover critical thinking in order to analyze and understand reality, thus overcoming the ideological barrier and claiming that things can be otherwise.
It is clear from this research that the discourse of sustainable development has unquestionably transformed the context and content of political activity in Europe. This discourse has exercised an obvious influence on governance processes, mainly because it has contributed to the introduction of a new political field, which was then promoted, explicitly or implicitly, by policy-makers, researchers in the field and practitioners over the last three decades. Though it may be bold to affirm that the discourse of sustainable development is the sole driver of this whole set of changes, there is no doubt that it has played a key part in the way governance priorities have been handled on the European continent.
The underlying goal of this work is to reduce the uncertainty related to thermally induced stress prediction. This is accomplished by considering non-linear material behavior, notably path-dependent thermal hysteresis in the elastic properties.
The primary novel contributions of this work center on two aspects:
1. Broad material characterization and mechanistic material understanding, giving insight into why this class of material behaves in its characteristic manner.
2. Development and implementation of a thermal hysteresis material model, and its use to determine the impact on overall macroscopic stress predictions.
Results highlight microcracking evolution and behavior as the dominant mechanism behind the material property complexity in this class of materials. Additionally, it was found that for the cases studied, thermal hysteresis impacts the relevant peak stress predictions of a heavy-duty diesel particulate filter undergoing a drop-to-idle regeneration by less than ~15% for all conditions tested. It was also found that path-independent heating curves may be used under a linear solution assumption to simplify analysis.
This work brings forth a newly conceived concept of a three-state, four-path, thermally induced microcrack evolution process; demonstrates experimental behavior that is consistent with the proposed mechanisms; develops a mathematical framework that describes the process; and quantifies the impact in a real-world application space.
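The following is a minimal sketch, under purely illustrative assumptions, of what a path-dependent elastic property looks like in code: the modulus returned for a given temperature depends on whether the material point is on the heating or the cooling branch, so a closed thermal cycle traces a hysteresis loop rather than a single curve. The branch shapes and all constants are invented for illustration and are not the dissertation's model.

```python
# Illustrative sketch of a path-dependent (hysteretic) elastic modulus.
# Branch functions and constants are assumptions for demonstration only;
# they are not the material model developed in the dissertation.

def modulus(temperature_c: float, heating: bool) -> float:
    """Return an elastic modulus (GPa) that depends on the thermal path."""
    if heating:
        # Assumed heating branch: microcracks progressively close, stiffening.
        return 10.0 + 0.012 * temperature_c
    # Assumed cooling branch: cracks reopen, so the modulus differs from
    # the heating branch at the same temperature.
    return 12.0 + 0.010 * temperature_c

# A closed thermal cycle does not retrace itself: at 400 degC the two
# branches disagree, which is exactly the hysteresis a path-independent
# (linear) model cannot represent.
for temp, heating in [(400.0, True), (400.0, False)]:
    branch = "heating" if heating else "cooling"
    print(f"{branch} at {temp:.0f} degC: E = {modulus(temp, heating):.2f} GPa")
```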
This study applies reliability analysis to the mechanical behaviour issues existing in the current structural design of fabric structures. Purely predictive material models are highly desirable to facilitate an optimized design scheme and to significantly reduce the time and cost of the design stage, for instance by replacing experimental characterization.
The present study examines three major tasks: (a) single-objective optimization, (b) sensitivity analysis and (c) multi-objective optimization of proposed weave structures for woven fabric composites. For the single-objective optimization task, the first goal is to optimize the elastic properties of the proposed complex weave structure on a unit-cell basis using periodic boundary conditions.
We predict the geometric characteristics governing skewness of woven fabric composites via an Evolutionary Algorithm (EA) and a parametric study. We also demonstrate the effect of complex weave structures on the fray tendency of woven fabric composites via a tightness evaluation. We utilize a procedure that does not require a numerical averaging process for evaluating the elastic properties of woven fabric composites. The fray tendency and skewness of woven fabrics depend upon the behaviour of the floats, which is related to the weave factor. The results of this study may suggest a broader view for further research into the effects of complex weave structures, or may provide an alternative to the fray and skewness problems of current weave structures in woven fabric composites.
A comprehensive study is then developed on the complex weave structure model, adopting the dry woven fabric of the most promising pattern from the single-objective optimization and incorporating the uncertain parameters of woven fabric composites. The study covers regression-based and variance-based sensitivity analyses. The goal of this second task is to introduce the uncertain fabric parameters and elaborate how they can be incorporated into finite element models of macroscopic material parameters, such as the elastic modulus and shear modulus of dry woven fabric subjected to uniaxial and biaxial deformations. Significant correlations in the study would indicate the need for a thorough investigation of woven fabric composites under uncertain parameters. The study described here could serve as an alternative for identifying effective material properties without prolonged time consumption and expensive experimental tests.
The last part focuses on a hierarchical stochastic multi-scale optimization approach (fine-scale and coarse-scale optimization) under geometrical uncertainty for hybrid composites with complex weave structures. The fine-scale optimization determines the lamina pattern that maximizes the macroscopic elastic properties; it is conducted by the EA under the following uncertain mesoscopic parameters: yarn spacing, yarn height, yarn width and misalignment of the yarn angle. The coarse-scale optimization optimizes the stacking sequence of a symmetric hybrid laminated composite plate with uncertain mesoscopic parameters by employing an Ant Colony Optimization (ACO) algorithm. Its objectives are to minimize the cost (C) and weight (W) of the hybrid laminated composite plate, with the fundamental frequency and the buckling load factor as design constraints.
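To make the coarse-scale formulation concrete, here is a minimal sketch of how such a constrained bi-objective fitness might be scored for a candidate stacking sequence. The material numbers, the equal-weight scalarization and the penalty handling are illustrative assumptions, not the dissertation's implementation; the toy surrogates stand in for the structural models used in the actual work.

```python
# Illustrative sketch of the coarse-scale objective: minimize cost and
# weight of a symmetric hybrid laminate subject to frequency and buckling
# constraints. All numbers and surrogate formulas are invented stand-ins.

# Per-ply properties of two hypothetical materials: (cost, weight, stiffness)
PLY = {"carbon": (8.0, 1.0, 3.0), "glass": (2.0, 1.6, 1.0)}

FREQ_MIN, BUCKLING_MIN = 30.0, 1.0   # assumed design constraints

def evaluate(stacking):
    """Return the penalized weighted objective for a half-laminate sequence."""
    c = sum(PLY[m][0] for m in stacking) * 2      # symmetric: double it
    w = sum(PLY[m][1] for m in stacking) * 2
    stiffness = sum(PLY[m][2] for m in stacking) * 2
    # Toy surrogates: frequency and buckling grow with stiffness.
    freq = 25.0 * (stiffness / w) ** 0.5
    buckling = 0.1 * stiffness
    penalty = 1e3 * (max(0.0, FREQ_MIN - freq) + max(0.0, BUCKLING_MIN - buckling))
    return 0.5 * c + 0.5 * w + penalty            # equal-weight scalarization

# An ACO or EA would search over candidate sequences like this one:
print(evaluate(["carbon", "glass", "glass", "carbon"]))
```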
Based on the uncertainty criteria of the design parameters, the variation appropriate for the structural design standards can be evaluated with the reliability tool, and an optimized design decision that takes cost into consideration can subsequently be made.
As human thought developed, so too did the technology used for illumination. But a journey through history, reviewing its pages and analyzing them, inevitably raises old and new questions: Is it possible to negatively alter the image of historic buildings and monuments through inadequate lighting, to the degree of distorting people's perception of the work, and if so, what causes this? Do lighting designers take into consideration criteria to protect not only historic buildings and monuments but also the environment? What consequences may the inadequate lighting of urban heritage have for the environment? What factors must be considered for a proper illumination of urban heritage? The answers to these questions will help lay the foundations for a proper illumination of urban heritage, avoiding light pollution and its effects as far as possible, and seeking a balanced, harmonious reconciliation between technology, urban heritage and the environment. The framework and case study is the urban heritage of a city from the colonial era in southern Mexico, with pre-Hispanic roots, whose streets and buildings still convey an atmosphere of mysticism reflecting its folklore and traditions: Chiapa de Corzo, Chiapas.
The influences of polymeric additives on the formation of the microstructure in the early stage of hardening, and on the properties, in particular the durability, of the modified mortars were investigated. The question to be answered was whether such modification can improve the durability of mortars more than is possible through conventional concrete technology measures. The formation of the microstructure during the first 24 hours of hardening was examined with various methods, including ESEM. Conceptual models for the formation of the organic and the inorganic matrix were developed: the interactions are adsorption reactions, agglomerations and hindrance of hydration. Investigations of fresh and hardened mortars are described and interpreted. Various durability investigations were carried out and evaluated. The microstructure of the hardened mortars was examined with respect to its influence on durability.
This dissertation is devoted to the theoretical development and experimental laboratory verification of a new damage localization method: the state projection estimation error (SP2E). This method is based on the subspace identification of mechanical structures, Krein-space-based H-infinity estimation and oblique projections. To explain the SP2E method, several theories are discussed, and laboratory experiments have been conducted and analysed.
A fundamental approach to structural dynamics is outlined first by explaining mechanical systems based on first principles. Following that, a fundamentally different approach, subspace identification, is comprehensively explained. While both theories, first-principles and subspace-identification-based descriptions of mechanical systems, may be seen as widespread methods, barely known and new techniques follow. The theory of indefinite quadratic estimation is therefore explained; based on a Popov function approach, this leads to Krein-space-based H-infinity theory. Subsequently, a new method for damage identification, namely SP2E, is proposed. Here, the introduction of a difference process, its analysis via the average process power, and the application of oblique projections are discussed in depth.
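Schematically, and with assumed notation rather than the dissertation's exact formulation, the difference process and its average power can be written as follows:

```latex
% Schematic only; the notation is assumed, not taken from the dissertation.
% d_k compares the measured output y_k with its estimate \hat{y}_k obtained
% from a model identified in the reference (undamaged) state:
\begin{align}
  d_k &= y_k - \hat{y}_k , &
  \bar{P} &= \frac{1}{N} \sum_{k=1}^{N} d_k^{\top} d_k .
\end{align}
% \bar{P} is the average power of the difference process; a value clearly
% above its reference level indicates a structural alteration, which the
% oblique projections then help attribute to a location.
```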
Finally, the new method is verified in laboratory experiments. To this end, the identification of a laboratory structure at Leipzig University of Applied Sciences is elaborated. Structural alterations are then applied experimentally, which are afterwards localized by SP2E. In the end, four experimental sensitivity studies are shown and discussed. For each measurement series the structural alteration was increased, and this was successfully tracked by SP2E. The experimental results are plausible and in accordance with the developed theories. By repeating these experiments, the applicability of SP2E for damage localization is experimentally proven.
The task-based view of web search implies that retrieval should take the user perspective into account. Going beyond merely retrieving the most relevant result set for the current query, the retrieval system should aim to surface results that are actually useful to the task that motivated the query.
This dissertation explores how retrieval systems can better understand and support their users’ tasks from three main angles: First, we study and quantify search engine user behavior during complex writing tasks, and how task success and behavior are associated in such settings. Second, we investigate search engine queries formulated as questions, and explore patterns in a large query log that may help search engines to better support this increasingly prevalent interaction pattern. Third, we propose a novel approach to reranking the search result lists produced by web search engines, taking into account retrieval axioms that formally specify properties of a good ranking.
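As a rough illustration of the third idea, axiomatic reranking can be sketched as follows: each retrieval axiom votes on pairs of documents, and documents are reordered by their aggregated preference score. The axiom set, weights and scoring below are illustrative assumptions, not the dissertation's actual formulation.

```python
# Illustrative sketch of axiom-based reranking: each axiom expresses a
# pairwise preference between documents, and the aggregated preferences
# reorder the original result list. Axioms and weights are assumptions.

def axiom_term_frequency(query, a, b):
    """Prefer the document containing more query term occurrences (+1/-1/0)."""
    ca = sum(a["text"].count(t) for t in query.split())
    cb = sum(b["text"].count(t) for t in query.split())
    return (ca > cb) - (ca < cb)

def axiom_brevity(query, a, b):
    """All else being equal, prefer the shorter document."""
    return (len(a["text"]) < len(b["text"])) - (len(a["text"]) > len(b["text"]))

AXIOMS = [(axiom_term_frequency, 1.0), (axiom_brevity, 0.25)]

def rerank(query, docs):
    """Score each doc by weighted pairwise preferences over all others."""
    def score(d):
        return sum(w * ax(query, d, other)
                   for ax, w in AXIOMS for other in docs if other is not d)
    return sorted(docs, key=score, reverse=True)

results = [{"id": 1, "text": "web search tasks and user behavior"},
           {"id": 2, "text": "search"},
           {"id": 3, "text": "web web search search engines"}]
print([d["id"] for d in rerank("web search", results)])  # -> [3, 1, 2]
```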
In this work, practice-based research is conducted to rethink the understanding of aesthetics, especially in relation to current media art. Granted, we live in times when technologies merge with living organisms, but we also live in times that provide unlimited resources of knowledge and maker tools. I raise the question: In what way does the hybridization of living organisms and non-living technologies affect art audiences in a culture that may be defined as Maker culture? My hypothesis is that the active participation of an audience in an artwork is essential for experiencing the artwork itself, while also suggesting that the impact of the umwelt changes the perception of an artwork. I emphasize artistic projects that unfold through mutual interaction among diverse peers, including humans, non-human organisms and machines. In my thesis, I pursue collaborative scenarios that lead to the realization of artistic ideas: (1) the development of ideas by others influenced by me, and (2) the materialization of my own ideas influenced by others. By developing these scenarios of collaborative work as an artistic experience, I conclude that the role of the artist in Maker culture is to mediate different types of knowledge and different positions, whereas the role of the audience is to actively engage in the artwork itself. At the same time, aesthetics as experience is triggered by the other, including living and non-living actors. The developed methodologies could be further adapted in artistic practices, philosophy, anthropology and environmental studies.
Using the example of the development of a modern geriatric center, this dissertation shows that architecture can make an independent contribution to accepting and coping with the problems of old age in today's society. The work starts, on the one hand, from the urban planning deficits of past decades and shows how a needs-oriented, inner-city geriatric center can correspond to the guiding model of the "humane city", thereby making the city once again a multifunctional space of experience for all population groups. On the other hand, it takes up the current health policy debate and demonstrates that such a center, as an integrated network solution offering all care structures under one roof, is ideally suited to meet the demands of our time in the fields of geriatrics and nursing care. The requirements for such a facility are extensive and differentiated. They are derived from current research in urban sociology, psychology, gerontology and social ecology, and translated into practical architectural and constructional guidelines. Beyond optimal medical and nursing care, the central, overarching requirements identified are: 1. increasing the residents' life satisfaction; 2. strengthening the autonomy and independence of older people; 3. following the principle of "prevention before rehabilitation, rehabilitation before care"; 4. promoting a self-determined life in familiar surroundings into old age; 5. providing feelings of security and belonging; 6. community orientation and close ties to the structures of the neighborhood; 7. maintaining and strengthening the social integration of older and ill people; 8. promoting a high level of activity and fulfilling leisure activities; 9. providing a stimulating as well as safe environment that supports orientation. The geriatric center presented is the architectural response to the structural transformation of old age in society and to the developments in health and care policy of our time. It thus makes an independent contribution to alleviating the health-related and social problems of old people in our society by creating user-oriented building structures that give room to an integrative network of housing, therapy and care. The geriatric center thereby exemplifies an architecture that always proceeds from people's needs and responds with built solutions to the social challenges of our time.
The Gated Community (GC) phenomenon in Latin American cities has become an inherent element of their urban development. Despite academic debate, the model thrives within the housing market; not surprisingly, as some of the premises on which GCs are based, namely safety, control and supervision, intersperse seamlessly with the insecure conditions of the contexts from which they arise. The current security crisis in Mexico, triggered in 2006 by the so-called war on drugs, has reached its peak with the highest insecurity rates in decades, representing a unique chance to study these interactions. Although the leading term of this research, Urban Agoraphobia, implies a causal dichotomy between the rise in the sense of fear amongst citizens and housing confinement as its linear consequence, I acknowledge that GCs represent a complex phenomenon, a hub of diverse factors and multidimensional processes operating on four fundamental levels: global, social, individual and state-related. The focus of this dissertation is set on the individual plane and contributes, through the analysis of GC residents' perspectives, experiences and perceptions, to a debate that has usually been limited to the scrutiny of other drivers, disregarding the role of dwellers' underlying fears, motivations and concerns. Assuming that the current ruling security model in Mexico tends to promote the commodification of security rather than its collective quality, this research draws upon a methodological triangulation, along with conceptual and contextual analyses, to test the hypothesis that insecurity plays an increasingly major role, leading citizens to believe that acquiring a household in a controlled and surveilled community represents a counterweight to the feared environment of the open city. The analysis focuses on the internal emergence of community ties as a potential palliative providing a sense of security, aiming to transcend the one-dimensional discourse that defines GCs mainly by their defensive apparatus. Residents' perspectives acquired through ethnographic analysis may provide the chance to gain an essential view into a phenomenon that continues to consolidate without a critical study of its actual implications, not only for Mexican cities but also for the Latin American and global contexts.
Audiovisual Cut-Up
(2023)
This research names and analyzes an aesthetic phenomenon of audiovisual works with very rapid cuts. The term "audiovisual cut-up" is proposed to designate this phenomenon. A wide range of audiovisual works from different contexts that meet the formal criterion of extremely short shots is analyzed, including the author's own research contributions. The tools and technologies that made this novel aesthetic possible are presented; they were often created by the artists themselves, since ready-made solutions were not available. Audiovisual cut-ups are systematized and situated by context and medium. This is followed by observations on how audio and video differ in character, what the smallest perceptible unit is, and what role latencies and anticipation play in the perception of audiovisual media.
Three main theses are put forward: 1. The audiovisual cut-up has the power to make tiny, previously hidden details apparent; it can thereby condense the source material but also manipulate its content. 2. Technical developments brought the audiovisual cut-up into being. 3. Today the aesthetic is established as a stylistic device in the toolbox of audiovisual design.
Construction schedules play a central role in the realization of construction projects. They serve to coordinate interfaces and form the basis for the individual planning of the project participants. Reliable scheduling is therefore of great importance; in practice, however, construction schedules are notorious for their unreliability.
Because of the long lead times in the planning of construction projects, much of the information is known only as estimates at the time of planning. In construction practice, deterministic schedules are created on the basis of these estimated, and therefore uncertain, data. If discrepancies between estimates and reality arise during execution, the schedules must be adapted. Owing to the numerous dependencies between the planned activities, individual plan changes can entail a multitude of further changes and adaptations, thereby endangering the smooth progress of the project.
In this work, a procedure is developed that generates construction schedules which, within the dependencies and boundary conditions defined by the project, are able to absorb changes as well as possible. Such schedules, which require comparatively small adaptations when changes occur, are referred to here as robust.
Starting from project planning techniques and methods for taking uncertainty into account, deterministic schedules are examined with respect to their behavior under incoming changes. To this end, possible uncertainties are first identified as causes of changes and modeled mathematically. The behavior of schedules under possible changes can then be studied by simulating the adapted schedules forced by those changes. For these Monte Carlo simulations of the adapted schedules it is ensured that the adapted schedules represent logical evolutions of the deterministic schedule. On the basis of these investigations, a stochastic measure for quantifying robustness is developed, which describes the ability of a schedule to absorb changes. This makes it possible to compare schedules with respect to their robustness.
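To illustrate the idea, with invented numbers and a deliberately simplified schedule model rather than the procedure developed in the thesis: robustness can be estimated by sampling duration disturbances, repairing the schedule under its precedence constraints, and measuring how little the plan has to move.

```python
# Minimal sketch: Monte Carlo estimate of schedule robustness.
# Model, sampling and the robustness measure are illustrative assumptions.
import random

# Activities: planned duration and predecessors (a tiny precedence network).
DURATION = {"A": 4.0, "B": 3.0, "C": 5.0, "D": 2.0}
PRED = {"A": [], "B": ["A"], "C": ["A"], "D": ["B", "C"]}
ORDER = ["A", "B", "C", "D"]  # topological order

def start_times(durations):
    """Earliest start times respecting the precedence constraints."""
    start = {}
    for act in ORDER:
        start[act] = max((start[p] + durations[p] for p in PRED[act]),
                         default=0.0)
    return start

def robustness(n_samples=10_000, seed=1):
    """Mean total start-time shift under random duration disturbances
    (a smaller shift means a more robust plan)."""
    rng = random.Random(seed)
    baseline = start_times(DURATION)
    total = 0.0
    for _ in range(n_samples):
        disturbed = {a: d * rng.uniform(0.8, 1.5) for a, d in DURATION.items()}
        adapted = start_times(disturbed)
        total += sum(abs(adapted[a] - baseline[a]) for a in ORDER)
    return total / n_samples

print(f"expected start-time shift: {robustness():.2f} time units")
```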
The developed procedure for quantifying robustness is then applied within an optimization method based on genetic algorithms in order to generate robust schedules in a targeted manner. The methods are demonstrated on examples, and their effectiveness is verified.
Environmental problems in the big cities of today's world are continuously worsening, diminishing the quality of life in them. Of particular concern is the fact that today's megacities are evolving in the developing world without corresponding growth in the economy, infrastructure and other human development indices. As the urban population continues to grow in these cities of the Global South, governing institutions are usually unable to keep pace with their social responsibilities, making the issue of urban governance critical: effective and efficient urban governance is essential for creating, strengthening and sustaining governing institutions.
Lagos, a megacity of over 15.45 million people and the most populous metropolitan area on the African continent, epitomizes the grave fundamental characteristics of the emerging megacities of the Global South, and thus constitutes an apt choice for understanding the emerging megacities of the next generation. Two out of every three Lagos residents live in slums, in dehumanizing physical and social conditions. Many of them sleep, work, eat and cook under highway bridges, at the mercy of the elements.
This research therefore evaluated urban governance through housing administration in Africa's largest megacity. It examines the extent of housing problems in the city, the causal factors, and the culpability of the government agencies statutorily responsible for the provision, control and management of housing development in Lagos, the tenth largest city in the world. A representative geographic part of the city which manifests classic characteristics of slum life, listed by Mike Davis as the largest slum in Africa and the sixth largest in the world, Ajegunle, was adopted as the case study. The research design combined a rigorous literature search (desk research) with quantitative and, especially, qualitative approaches to data collection. The qualitative approach was adopted more intensively because government officials often respond to enquiries with "official answers and data" which may not be reliable, so the study had to rely on keen observation of physical traces, social interaction and personal investigation. A cross-sectional research method was adopted. Information was solicited from house-owners, building industry professionals, sociologists and officials of relevant government agencies, using research tools such as questionnaires, interviews, focus group discussions and personal observation.
The analysis and discussion of these field data, in conjunction with the information from the desk research, gave a better understanding of the status quo, which informed the recommendations proposed in the dissertation for mitigating the problems. The research found that many of the statutory housing agencies have the capacity to discharge their responsibilities effectively. However, it also showed that corruption and the abdication of responsibilities by the staff of these agencies are primary causes of the chasm between the lofty outcomes anticipated from the laudable building regulations and bye-laws and the appalling reality. It further found that a lack of political will and the apathy of successive Governments of Lagos State towards improving the housing conditions of the poor masses are major causes of the housing debacle in Lagos.
Several germane and realistic recommendations for redressing the situation were subsequently proffered. These include, among others, conducting an accurate census of Lagos, in conjunction with credible international agencies, as a requisite basis for effective planning of any sort. The process of obtaining legal title to land should also be made less cumbersome, and the housing administration process should be computerized in order to reduce inter-personal contact between applicants and government officials to the barest minimum, as a means of curbing the widespread corruption in the system.
Piezoelectric materials are used in several applications as sensors and actuators, where they experience high stress and electric field concentrations, as a result of which they may fail due to fracture. While there are many analytical and experimental works on piezoelectric fracture mechanics, there are very few studies on damage detection, which is an interesting way to prevent the failure of these ceramics.
An iterative method to treat the inverse problem of detecting cracks and voids in piezoelectric structures is proposed. The extended finite element method (XFEM) is employed for solving the inverse problem, as it allows the use of a single regular mesh for a large number of iterations with different flaw geometries.
Firstly, the minimization of the cost function is performed by the Multilevel Coordinate Search (MCS) method. The XFEM-MCS methodology is applied to two-dimensional electromechanical problems where the flaws considered are straight cracks and elliptical voids. Then a numerical method, based on the combination of the classical shape derivative and the level set method for front propagation as used in structural optimization, is utilized to minimize the cost function. The results obtained show that the XFEM-level set methodology can effectively determine the number of voids in a piezoelectric structure and their corresponding locations.
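A minimal sketch of the outer loop of such an iterative flaw identification follows, with an off-the-shelf optimizer standing in for MCS and a toy misfit function in place of the XFEM forward solve; the parametrization (center and semi-axes of one elliptical void) and all numbers are assumptions for illustration.

```python
# Illustrative sketch of the inverse-problem loop: find flaw parameters
# that minimize the misfit between measured and simulated responses.
# scipy's Nelder-Mead stands in for Multilevel Coordinate Search (MCS),
# and forward_model is a toy stand-in for the XFEM solve on a fixed mesh.
import numpy as np
from scipy.optimize import minimize

TRUE_FLAW = np.array([0.4, 0.6, 0.1, 0.05])  # x, y, semi-axes of a void

def forward_model(flaw_params):
    """Toy response at a few 'sensor' locations (stand-in for XFEM)."""
    sensors = np.linspace(0.0, 1.0, 8)
    x, y, a, b = flaw_params
    return np.exp(-((sensors - x) ** 2) / (a + b)) * y

measured = forward_model(TRUE_FLAW)  # synthetic measurement data

def cost(flaw_params):
    """Least-squares misfit between simulation and measurement."""
    return float(np.sum((forward_model(flaw_params) - measured) ** 2))

result = minimize(cost, x0=np.array([0.5, 0.5, 0.2, 0.1]),
                  method="Nelder-Mead")
print("identified flaw parameters:", np.round(result.x, 3))
```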
The XFEM-level set methodology is then improved to solve the inverse problem of detecting inclusion interfaces in a piezoelectric structure. The material interfaces are implicitly represented by level sets, which are identified by applying regularisation with total variation penalty terms. The formulation is presented for three-dimensional structures, and inclusions made of different materials are detected using multiple level sets. The results obtained prove that the proposed iterative procedure can determine the location and approximate shape of material subdomains even in the presence of higher noise levels.
Piezoelectric nanostructures exhibit size-dependent properties because of surface elasticity and surface piezoelectricity. Initially, a study is performed to understand the influence of surface elasticity on the optimization of nano-scale elastic beams. The boundary of the nanostructure is implicitly represented by a level set function, which is considered as the design variable in the optimization process. Two objective functions are chosen for the numerical examples: minimizing the total potential energy of a nanostructure subject to a material volume constraint, and minimizing the least-square error with respect to a target displacement. The numerical examples demonstrate the importance of size and aspect ratio in determining how surface effects impact the optimized topology of nanobeams.
Finally, a conventional cantilever energy harvester with a piezoelectric nano-layer is analysed. The presence of surface piezoelectricity in nano-beams and nano-plates leads to an increase in the electromechanical coupling coefficient. Topology optimization of these piezoelectric structures in an energy harvesting device, using an appropriately modified XFEM-level set algorithm, is performed to further increase the energy conversion.
This thesis addresses an adaptive higher-order method based on a Geometry Independent Field approximaTion (GIFT) of polynomial/rational splines over hierarchical T-meshes (PHT/RHT-splines).
In isogeometric analysis, the basis functions used for constructing geometric models in computer-aided design (CAD) are also employed to discretize the partial differential equations (PDEs) for numerical analysis. Non-Uniform Rational B-Splines (NURBS) are the most commonly used basis functions in CAD. However, they may not be ideal for numerical analysis where local refinement is required.
The alternative method, GIFT, deploys different splines for geometry and numerical analysis. NURBS are utilized for the geometry representation, while PHT/RHT-splines are used for the field solution. PHT-splines not only inherit the useful properties of B-splines and NURBS, but also possess the capability of local refinement and a hierarchical structure. The smoothness properties of PHT-spline basis functions make them suitable for analysis purposes. While most problems considered in isogeometric analysis can be solved efficiently when the solution is smooth, many non-trivial problems have rough solutions, caused for example by the presence of re-entrant corners in the domain. For such problems, a tensor-product basis (as in the case of NURBS) is less suitable for resolving the singularities that appear, since refinement propagates throughout the computational domain. Hierarchical bases and local refinement (as in the case of PHT-splines) allow these singularities to be resolved more efficiently by adding degrees of freedom only where they are necessary. In order to drive the adaptive refinement, an efficient recovery-based error estimator is proposed in this thesis. The estimator produces a recovered solution which is a more accurate approximation than the computed numerical solution. Several two- and three-dimensional numerical investigations with PHT-splines of higher order and continuity prove that the proposed method is capable of obtaining results with higher accuracy, better convergence, fewer degrees of freedom and lower computational cost than NURBS for problems with smooth solutions. The adaptive GIFT method with PHT-splines and the recovery-based error estimator is then used for solutions with discontinuities or singularities, where adaptive local refinement in the domains of interest achieves higher accuracy with fewer degrees of freedom. The method also handles complicated multi-patch domains for two- and three-dimensional problems, outperforming uniform refinement in terms of degrees of freedom and computational cost.
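For reference, a recovery-based (Zienkiewicz-Zhu type) error estimator generally takes the following form; the notation here is the standard one and is given as an illustration, not necessarily the exact estimator derived in the thesis.

```latex
% Standard recovery-based error indicator (illustrative notation):
% u^h is the computed numerical solution, u^r the smoother recovered
% solution; the local indicator on element K and the global estimate are
\begin{align}
  \eta_K &= \left\lVert \nabla u^{r} - \nabla u^{h} \right\rVert_{L^2(K)} , &
  \eta &= \Big( \sum_{K} \eta_K^{2} \Big)^{1/2} ,
\end{align}
% and the elements with the largest \eta_K are marked for local refinement.
```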
Containers are not only by far the most important means of transport for the vast majority of the goods we deal with every day. Containers have also become, perhaps because of their plain, clear expressiveness, the symbol of globalization and of many phenomena associated with this development. It is a thoroughly ambivalent symbol: containers stand as much for the impressive dynamism of modern capitalism and the optimism underlying it despite all crises as for the fears of and objections to it; against the indifference of a logistical mode of organization geared purely to optimization, and against the forced convergence and assimilation of formerly distant parts of the world through the exponential multiplication of transport and communication processes. The focus of the work lies in the 20th century. It examines the (pre)history and theory of the container as a modern cultural technique and a central component of a worldwide logistical system. And it shows the container as an element of a way of thinking and organizing in modular, movable spatial units that can be transferred to many other areas beyond the transport of goods. To this end, it describes and analyzes "container situations" in fields as diverse as trade and transport, architecture, the sciences, art, and the social realities of migrants and seafarers.
Gaze-based human-computer interaction has been a research topic for over a quarter of a century. Since its beginnings, the main scenario for gaze interaction has been helping handicapped people to communicate and interact with their environment. With the rapid development of mobile and wearable display technologies, a new application field for gaze interaction has appeared, opening new research questions.
This thesis investigates the feasibility of mobile gaze-based interaction, studying in depth the use of pie menus as a generic and robust widget for gaze interaction, as well as visual and perceptual issues on head-mounted (wearable) optical see-through displays.
It reviews conventional gaze-based selection methods and investigates in detail the use of pie menus for gaze control. It studies and discusses layout issues, selection methods and applications. Results show that pie menus can accommodate up to six items in width and multiple depth layers, allowing fast and accurate navigation through hierarchical levels by using or combining multiple selection methods. Based on these results, several text entry methods based on pie menus are proposed. Character-by-character text entry, text entry with bigrams, and text entry with bigrams derived from word prediction, as well as the possible selection methods, are examined in a longitudinal study. The data showed large advantages of the bigram entry methods over single-character text entry in speed and accuracy. Participants preferred the novel saccade-based selection method (selection by borders) over the conventional and well-established dwell-time method.
On the one hand, pie menus proved to be a feasible and robust widget, which may enable the efficient use of mobile eye tracking systems that are not accurate enough for controlling elements on conventional interfaces. On the other hand, visual perception on mobile display technologies needs to be examined in order to determine whether these results can be transferred to mobile devices.
Optical see-through devices enable observers to see additional information embedded in real environments. There is already some evidence of increased visual load on such systems. We investigated participants' visual performance with visual search tasks and dual tasks, presenting visual stimuli on the optical see-through device only, on a computer screen only, and simultaneously on both devices. Results showed that switching between the presentation devices (i.e. perceiving information simultaneously from both devices) produced costs in visual performance. The implications of these costs and of further perceptual and technical factors for mobile gaze-based interaction are discussed, and solutions are proposed.
Superplasticizers are utilized both to improve fluidity during placement and to reduce the water content of concrete. Both effects also have an impact on the properties of the hardened concrete. As a side effect, the presence of superplasticizers strongly retards the strength development of concrete, which can be an economical drawback in concrete manufacturing. The present work aims at gaining insights into the causes of the retarding effect of superplasticizers on the hydration of Portland cement. To simplify the complex interactions occurring during the hydration of Portland cement, the majority of the work focuses on the interaction of superplasticizers and tricalcium silicate (Ca3SiO5 or C3S, the main compound of Portland cement clinker). The tests were performed in three main parts, accompanied by methods such as isothermal conduction calorimetry, electrical conductivity, electron microscopy, ICP-OES, TOC, and analytical ultracentrifugation.
In the first main part, and based on the interaction of cations with the anionic charges of polymers, the interactions between calcium ions and superplasticizers are investigated. The main effect is that calcium ions are complexed by the functional groups of the polymers (carboxy, sulfonic). Calcium ions may be both dissolved in the aqueous phase and constituents of particle interfaces. Besides these effects, it is furthermore shown that superplasticizers induce the formation of nanoscale particles which are dispersed in the aqueous phase (cluster formation). Analogous to recent findings in the field of biomineralization, it is reasonable to assume that these nanoparticles influence crystal growth through their assembly process.
Based on the assumption that superplasticizers hinder dissolution, precipitation, or both, and thereby retard cement hydration, the impact on the separate reactions is investigated. In experiments addressing the solubility of C-S-H phases and portlandite, it is shown that the complexation of calcium ions in the aqueous phase by the functional groups of the polymers increases the solubility of portlandite. In the case of C-S-H solubility, by contrast, the complexation of calcium ions in solution leads to a decrease of the calcium ion concentration in the aqueous phase. These effects are explained by differences in the adsorption of polymers on C-S-H phases and portlandite. It is proposed that adsorption is stronger on C-S-H phases than on portlandite due to the higher specific surface area of C-S-H phases. Following from that, it is argued that before polymers can adsorb on C-S-H phases, their functional groups must be screened by calcium ions in the aqueous phase. It is further shown that the data on the impact of superplasticizers on the unconstrained dissolution rate of C3S do not provide a clear relation to the overall retarding effect occurring during the hydration of C3S: both increased and decreased dissolution rates with respect to the reference sample are detected. If the complexation capability of the superplasticizers is taken into account, a reduced dissolution rate of C3S is also found. Despite the fact that the global hydration process is accelerated, the addition of calcite leads to a slower dissolution rate. Thus, hindered unconstrained dissolution of C3S as a possible cause of the retarding effect remains open for discussion. In the last section of this part, the pure crystallization of the hydrate phases (C-S-H phases, portlandite) is examined. The results clearly show that superplasticizers prolong the induction time and modify the rate of crystal growth during pure crystallization, in particular owing to the complexation of ions in solution. But this effect is insufficient to account for the overall retarding effect. Further important factors are the blocking of crystal growth faces by adsorbed polymers and the dispersion of nanoscale particles, which hinders their agglomeration into crystals.
In the last main part of the work, the previously gathered results are used to investigate hydration kinetics. During hydration, dissolution and precipitation occur in parallel. Special attention is paid to the ion composition of the aqueous phase of C3S pastes and suspensions in order to determine the rate-limiting step. All in all, it is concluded that the retarding effect of superplasticizers on the hydration of tricalcium silicate is based on the retardation of the crystallization of the hydrate phases (C-S-H phases and portlandite). Two effects are of major importance here: the complexation of calcium ions at surfaces and the stabilization of nanoscale particles. These mechanisms may be partly compensated by template effects and by the increase in solubility due to the complexation of ions in solution. The decreased dissolution rate of C3S in the presence of superplasticizers during the concurrent hydration process can only be assessed indirectly, by means of the development of the ion concentrations in the aqueous phase (reaction path). Whether this observation is the cause or the consequence within the dissolution-precipitation process, and therefore accounts for the retarding effect, remains a topic for further investigation.
Besides these results, it is shown that superplasticizers can be classified chemically as inhibitors, because they reduce the frequency factor for ending the induction period. Because the activation energy is largely unaffected, it is shown that the basic reaction mechanism is preserved. Furthermore, a method was developed which for the first time permits the in-situ determination of ion concentrations in the aqueous phase of C3S pastes. It is shown that during C3S hydration the ion concentration in the aqueous phase develops in correspondence with the heat release rate (calorimetry). The method permits the differentiation of the acceleration period into three stages. It is emphasized that the crystallization of the product phases of C3S hydration, namely C-S-H phases and portlandite, is responsible for the end of the induction period.
Texts from the web can be reused individually or in large quantities. The former is called text reuse and the latter language reuse. We first present a comprehensive overview of the different ways in which text and language are reused today, and how exactly information retrieval technologies can be applied in this respect. The remainder of the thesis then deals with specific retrieval tasks. In general, our contributions consist of models and algorithms, their evaluation, and, for that purpose, large-scale corpus construction.
The thesis divides into two parts. The first part introduces technologies for text reuse detection, and our contributions are as follows: (1) A unified view of projecting-based and embedding-based fingerprinting for near-duplicate detection, and the first evaluation of fingerprint algorithms on Wikipedia revision histories as a new, large-scale corpus of near-duplicates. (2) A new retrieval model for the quantification of cross-language text similarity which gets by without parallel corpora; we have evaluated the model against other models on many different pairs of languages. (3) An evaluation framework for text reuse and particularly plagiarism detectors, which consists of tailored detection performance measures and a large-scale corpus of automatically generated and manually written plagiarism cases, the latter obtained via crowdsourcing. This framework has been successfully applied to evaluate many different state-of-the-art plagiarism detection approaches within three international evaluation competitions.
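To give a flavor of fingerprinting for near-duplicate detection, here is a generic bottom-k sketch over word n-grams; it illustrates the idea only and is not one of the specific projecting- or embedding-based algorithms evaluated in the thesis.

```python
# Generic fingerprinting sketch for near-duplicate detection: reduce each
# document to the k smallest hash values of its word n-grams and compare
# the overlap of the resulting sketches. Illustration only.
import hashlib

def shingles(text: str, n: int = 3):
    """All word n-grams (shingles) of a document."""
    words = text.lower().split()
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

def fingerprint(text: str, k: int = 16):
    """Keep the k smallest shingle hashes as a compact document sketch."""
    hashes = sorted(int(hashlib.sha1(s.encode()).hexdigest(), 16)
                    for s in shingles(text))
    return set(hashes[:k])

def resemblance(a: str, b: str) -> float:
    """Jaccard overlap of fingerprints approximates document similarity."""
    fa, fb = fingerprint(a), fingerprint(b)
    return len(fa & fb) / len(fa | fb)

original = "the quick brown fox jumps over the lazy dog near the river bank"
revision = "the quick brown fox jumps over the lazy dog near the river shore"
print(f"estimated resemblance: {resemblance(original, revision):.2f}")
```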
The second part introduces technologies that solve three retrieval tasks based on language reuse, and our contributions are as follows: (4) A new model for the comparison of textual and non-textual web items across media, which exploits web comments as a source of information about the topic of an item. In this connection, we identify web comments as a largely neglected information source and introduce the rationale of comment retrieval. (5) Two new algorithms for query segmentation, which exploit web n-grams and Wikipedia as a means of discerning the user intent of a keyword query. Moreover, we crowdsource a new corpus for the evaluation of query segmentation which surpasses existing corpora by two orders of magnitude. (6) A new writing assistance tool called Netspeak, which is a search engine for commonly used language. Netspeak indexes the web in the form of web n-grams as a source of writing examples and implements a wildcard query processor on top of it.
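As an illustration of n-gram-based query segmentation (a simple frequency-scoring sketch with invented counts, not necessarily the scoring the thesis proposes): candidate segmentations are enumerated and scored by the web frequency of their multi-word segments, weighted to favor longer segments.

```python
# Illustrative n-gram-based query segmentation: score each segmentation
# by the (assumed) web frequency of its multi-word segments, weighting
# longer segments more heavily. The counts below are invented for demo.
from functools import lru_cache

NGRAM_COUNT = {           # hypothetical web n-gram counts
    "new york": 450_000, "new york times": 50_000,
    "times square": 120_000, "york times": 30_000,
}

def score(segmentation):
    """Weight each |s|-word segment by |s|**|s| times its web count."""
    total = 0
    for seg in segmentation:
        n = len(seg.split())
        if n > 1:
            total += n ** n * NGRAM_COUNT.get(seg, 0)
    return total

@lru_cache(maxsize=None)
def segmentations(words):
    """Enumerate all ways to split a word tuple into segments."""
    if not words:
        return [[]]
    result = []
    for i in range(1, len(words) + 1):
        head = " ".join(words[:i])
        result += [[head] + tail for tail in segmentations(words[i:])]
    return result

query = "new york times square"
best = max(segmentations(tuple(query.split())), key=score)
print(best)  # ['new york', 'times square'] under these counts
```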
Development of a Sustainability-based Sanitation Planning Tool (SusTA) for Developing Countries
(2014)
Background and Research Goal
Despite all the efforts in the sanitation sector, it is acknowledged that the world is not on track to meet the MDG sanitation target to reduce the number of people without access to sanitation by 2015. Furthermore, a large number of existing sanitation facilities in developing countries are out of order. This leads to the conclusion that, besides technical failures, the planning process in the sanitation sector has been ineffective. This ineffectiveness may be attributed to sanitation planners' lack of knowledge of the local conditions of a sanitation project. In addition, the sustainability of a technology is often approached from a fragmented perspective, which frequently leads to an unsustainable solution.
The dissertation is conducted within the framework of the Integrated Water Resources Management (IWRM) Indonesia project. The goal of this work is to contribute to the development of a methodology for a planning tool for sustainable sanitation technology. The tool is designed for sanitation planners in developing countries, where a top-down planning approach is common practice. The proposed tool enables comprehensive sustainability assessments (using the Helmholtz Concept of Sustainability as a reference), taking local conditions into account.
State of the Science
In planning practice, many sanitation planning tools focus on technology selection. However, it has become evident that the selection criteria for sustainable technologies are not always considered in the tools' frameworks. In other cases, when the criteria are provided by the tool, there is no clear indication of the conditions to be fulfilled in order to meet them. Specifically, there is no reference to what is meant by sustainable technology in a particular context, nor to how the sustainability of different technology options can be comprehensively assessed.
Research Methodology
Developing a planning tool is an empirical process combining theory and practical experience. Hence, the development of such a tool requires extensive observation, particularly of the interaction between stakeholders in the sanitation sector as well as between technology and its environment. For this purpose, a case study within the project area was carried out. Pucanganom, a village representing strategic problems common in developing countries (e.g. top-down planning approaches, lack of involvement of beneficiaries in the planning process, lack of sustainability assessments), was selected as the case study area. After the in-depth case study, an analytical generalisation was developed to enable the tool's application in a broader context.
Results
The result of this research is a new tool, the Sustainability-based Sanitation Planning Tool (SusTA). SusTA enables a comprehensive sustainability assessment in five generic steps: (1) analysis of stakeholders and sanitation policy in the region; (2) distance-to-target analysis of sanitation conditions in the region; (3) examination of physical and socio-economic conditions in the project area; (4) contextualisation of the technology assessment process in the project area; and (5) sustainability-oriented technology assessment at the project level. These steps are conducted at two levels of planning, the region and the project area, in order to identify the specific problems and interests which influence the selection of a sanitation system. Each planning step is equipped with tool elements (e.g. sets of indicators, household questionnaires, technology assessment matrices) to support the analysis.
From the development of SusTA, it can be concluded that four elements are required for an effective and widely applicable sanitation planning tool: a sustainability concept, a participatory approach, a contextualisation framework and a modification framework. SusTA provides both a theoretical and a practical basis for assessing the sustainability of sanitation technologies in developing countries. The tool's main advantages for decision makers in these countries are that it is simple and transparent in its steps, does not require vast amounts of data and does not need a sophisticated computer program.
Object-oriented building models are currently the subject of extensive research activities on the computer-internal management of building-related information. One approach discussed in this context is the realization of a virtual computer-internal building in the form of a variable network of domain-specific object-oriented models. On the one hand, this form of organizing the computer-internal representation is, owing to its flexibility, very well suited for being maintained across life phases as a digital building record. Such a building record forms an important information basis for planning maintenance, modernization, conversion or extension measures in later life phases of the building. On the other hand, the decentralized organization of the model network complicates information retrieval, primarily because of the existence of multiple representations of individual real-world objects and the complexity of the model schemas participating in the network. The subject of the present work is the development of a generic system core as the basis for a project-specific configurable information environment that can provide manageable access to the information managed in the model network, particularly in early project phases. The proposed solution extends the model network by an access structure in which the individual elements of the specific constructional-spatial structure are represented by unique identifiers. The domain-specific representations of an object are linked to its identifier and are thus reachable from a single central entry point. The generic system core defines an object-oriented data structure for managing the access structure, which is instantiated for each project. Interaction with the indexed information space requires a corresponding user interface that is able to serve unforeseeable, spontaneously arising information needs. Beyond that, it should be configurable with regard to the supported search strategies. The proposed solution provides a hierarchical organization of the user interface that allows modular extension. A corresponding core of the user interface is specified as an object-oriented framework. The access structure and the user interface are developed using the object-oriented paradigm and described at an implementation-independent level with the help of a formal notation. The fundamental feasibility of the described system is demonstrated by exemplary implementations of critical system parts.
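A minimal sketch of the central idea of such an access structure, in illustrative Python rather than the formal notation of the thesis: each real-world building element receives a unique identifier, and the domain-specific representations in the different partial models are linked to that identifier so they can all be reached from one entry point. Class and attribute names are assumptions.

```python
# Illustrative sketch of an identifier-based access structure linking the
# multiple domain-specific representations of one real-world element.
# Names and structure are assumptions for demonstration purposes.
from dataclasses import dataclass, field
from typing import Any

@dataclass
class ElementEntry:
    """Central entry point for one real-world building element."""
    identifier: str
    # domain name -> that domain model's representation of the element
    representations: dict[str, Any] = field(default_factory=dict)

class AccessStructure:
    """Maps unique identifiers to all domain-specific representations."""
    def __init__(self):
        self._entries: dict[str, ElementEntry] = {}

    def register(self, identifier: str, domain: str, representation: Any):
        entry = self._entries.setdefault(identifier, ElementEntry(identifier))
        entry.representations[domain] = representation

    def lookup(self, identifier: str) -> ElementEntry:
        return self._entries[identifier]

# One wall, three partial models, one entry point:
index = AccessStructure()
index.register("wall-017", "architecture", {"layer": "exterior", "u_value": 0.24})
index.register("wall-017", "structure", {"load_bearing": True})
index.register("wall-017", "hvac", {"thermal_zone": "Z2"})
print(sorted(index.lookup("wall-017").representations))
```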
A settlement is a place where humans live and carry out various activities (Finch, 1980). The concept of a settlement layout is closely associated with people and with a set of thoughts and behaviors. In this case, the pattern of activities in a society, which forms the core of a culture, becomes the main factor in the formation of houses and their environment in a settlement. The factors affecting the physical form of architecture in a settlement environment are socio-cultural, economic, and religious determinants that manifest themselves in the architectural realization (Rapoport, 1969).
Yogyakarta, as the continuation of the royal cities on the island of Java, still exists today as an Islamic kingdom. One consequence of this is the emergence of various Moslem settlements that support the typical character of an Islamic kingdom.
Mlangi, one of the oldest Moslem settlements in Yogyakarta, has not yet been explored in detail, especially with regard to its physical development. The way in which groups and individuals arrange their houses and residences starts from the spatial concept they hold. Although such a concept is very abstract and hard to explain in detail, its existence can be detected in how people create their physical environment.
This research was guided by two research questions: (1) What spatial concepts are held by the people of Mlangi, and (2) how do these spatial concepts affect the settlement pattern?
The search for the spatial concepts held by the people of this Moslem settlement, with Mlangi as the study area, was approached with a phenomenological research method. The researcher was directly involved in unstructured interviews, while remaining within a guiding framework to keep the research process effective. First, the researcher interviewed the key persons, the heads of the Mlangi and Sawahan administrations (pak Dukuh). These key persons then advised which people would be capable of describing the spatial concept, had many stories to tell, and knew the history of the settlement. The interviews proceeded step by step from one informant to the next until the information began to be told repeatedly; the next informant was chosen on the advice of the previous one or through a close relationship with the last theme that emerged. To complete the narration and record the results of the interviews, the researcher added photographs and descriptive drawings depicting the settlement empirically.
In the process, 17 information units found in the field were consistent with the sequence of interview events and with the flow from theme to theme associated with the Moslem settlement. The interviews finally succeeded in abstracting 16 themes that may be classified into historic, socio-cultural, and spatial-concept dimensions in Mlangi. The analysis to find the spatial concepts held by the people of this Moslem settlement was carried out through a dialogue of themes in order to find the substantive relationships among them.
Four concepts were successfully analyzed: the concept of personage, the concept of religious implementation, the concept of Jero-Jaba, and the concept of interest. The four concepts are closely associated with one another in explaining how the spatial concepts held by the people affect the settlement they occupy. The concept of Jero-Jaba underlies all the concepts of the people in Mlangi. It can serve as the common thread for how they use communal spaces in the settlement and lay out the rooms of their individual houses. This concept also perpetuates the settlement pattern existing in Mlangi today, a pattern that has changed little since the settlement's beginnings (as far as can be detected from the generations currently still living), namely a pattern concentrated on the orientation towards the Masjid Pathok Negoro of Mlangi.
This research opens up potential research areas, at least for sociological, anthropological, and demographic research interests. Mlangi shows many unique characteristics in how its people maintain their spatial concepts and manifest them in their daily activities, concerning themselves primarily with religious activities and treating economic activity merely as a matter of subsistence.
Keywords: spatial concept, Moslem settlements, phenomenology method, Indonesia
On the mechanisms of shrinkage reducing admixtures in self consolidating mortars and concretes
(2010)
Self Consolidating Concrete – a dream has come true!(?) Self Consolidating Concrete (SCC) is mainly characterised by its special rheological properties. Without any vibration this concrete can be placed and compacted under its own weight, without segregation or bleeding. The use of such concrete can increase the productivity on construction sites and enable the use of a higher degree of well distributed reinforcement for thin walled structural members. This new technology also reduces health risks since, in contrast to the traditional handling of concrete, the emission of noise and vibration is substantially decreased. The specific mix design for self consolidating concretes was introduced around the 1980s in Japan. In comparison to normal vibrated concrete, an increased paste volume enables a good distribution of aggregates within the paste matrix, minimising the influence of aggregate friction on the concrete flow property. The introduction of inert and/or pozzolanic additives as part of the paste provides the required excess paste volume without using disproportionately high amounts of plain cement. Due to further developments of concrete admixtures such as superplasticizers, the cement paste can gain self-levelling properties without causing segregation of aggregates. Whereas SCC differs from normal vibrated concrete in its fresh attributes, it should reach similar properties in the hardened state. Due to the increased paste volume it usually shows higher shrinkage. Furthermore, owing to strength requirements, SCC is often produced at low water to cement ratios and hence may additionally suffer from autogenous shrinkage. This means that cracking caused by drying or autogenous shrinkage is a real risk for SCC and can compromise its durability, as cracks may serve as ingression paths for gases and salts or might permit leaching. For the time being SCC still exhibits increased shrinkage and cracking probability and hence may be discarded in many practical applications. This can be overcome by a better understanding of those mechanisms and the ways to mitigate them. It is a target of this thesis to contribute to this. How to cope with the increased shrinkage of SCC? In general, engineers are facing severe problems related to shrinkage and cracking. Even for normal and high performance concrete, containing moderate amounts of binder, a lot of effort was put into counteracting shrinkage and avoiding cracking. For the time being these efforts resulted in the knowledge of how to distribute cracks rather than how to avoid them. The most efficient way to decrease shrinkage turned out to be to decrease the cement content of concrete down to a minimum but still sufficient amount. For SCC this obviously seems to contradict the requirement of a high paste volume. Indeed, the potential for shrinkage reduction is limited to some small-range modifications in the mix design following two major concepts. The first one is the reduction of the required paste volume by optimising the aggregate grading curve. The second one involves high volume substitution of cement, preferentially using inert mineral additives. The optimization of grading curves is limited by several severe practical issues. Problems start with the availability of sufficiently fractionated aggregates. Usually attempts fail because of the enormous effort in composing application-optimized grading curves or mix designs.
For durability reasons, the substitution rate for cement is limited depending on the application purpose and on the environmental exposure of the hardened concrete. In the early 1980s Shrinkage Reducing Admixtures (SRA) were introduced to counteract drying shrinkage of concrete. The first publications explicitly dealing with SRA go back to Goto and Sato (Japan). They were published in 1983, which is also the time when the SCC concept was introduced. SRA modified concretes showed a substantial reduction of free drying shrinkage, contributing to crack prevention or at least to a significant decrease of crack width in situations of restrained drying shrinkage. Will shrinkage reducing admixtures contribute to a broader application of SCC? Within the last three decades performance tests on several types of concrete proved the efficiency of shrinkage reducing admixtures. So, at least in terms of shrinkage and cracking, concretes in general and SCC in particular can benefit from SRA application. But "one man's meat is another man's poison", and with respect to the long term performance of SRA modified concretes there are still several issues to be clarified. One of these concerns the impact of SRAs on cement hydration. It is therefore important to know whether changes in the hydrated phase composition, induced by SRA, result in undesired properties or decreased durability. Another issue is that the long term shrinkage reduction has to be evaluated. For example, one can wonder whether SRA leaching may diminish or even eliminate the long term shrinkage reduction, and whether the release of admixtures could be a severe environmental issue. It should also be noted that the basic mechanism or physical impact of SRA, as well as its implementation in recent models for shrinkage of concrete, is still being discussed. The present thesis tries to shed light on the role of SRA in self consolidating concrete, focusing on the three questions outlined above: basic mechanisms of cement hydration, physical impact on shrinkage, and the sustainability of SRA application. Which contributions result from this study? Based on an extensive patent search, commercial SRAs could be identified as synergistic mixtures of non-ionic surfactants and glycols. This turns out to be important information for more than one reason and is the subject of chapter 4. An abundant literature focuses on the properties of these non-ionic surfactants. From this rich pool of information, the behaviour of SRAs and their interactions in cementitious systems could be better understood in this thesis. For example, it could be anticipated how SRAs behave in strong electrolytes and how surface activity, i.e. surface tension, and interparticle forces might be affected. The synergy effect regarding enhanced performance induced by the presence of additional glycol in SRAs could be derived from the literature on the co-surfactant nature of glycols. Generally, it can now be said that glycols ensure that the non-ionic surfactant is properly distributed onto the paste interfaces to efficiently reduce surface tension. In the literature, the impact of organic matter on cement hydration was extensively studied for other admixtures such as superplasticizers. From there, the main impact factors related to the nature of these molecules could be identified. In addition, here again, the literature on non-ionic surfactants provides sufficient information to anticipate possible interactions of SRA with cement hydration based on the nature of non-ionic surfactants.
All in all, the extensive study on the nature of non-ionic surfactants, presented in chapter 4, provides a fundamental understanding of the behaviour of SRAs in cement paste. Taking a step further to relate this to the impact on drying and shrinkage required a review of recent models for drying and shrinkage of cement paste, as presented in chapter 3. There, it is shown that the macroscopic thermodynamics of open pore systems can be successfully applied to predict drying induced deformation, but that the surface activity of SRA still has to be implemented to explain the shrinkage reduction it causes. Because of severe issues concerning the importance of capillary pressure on shrinkage, a new macroscopic thermodynamic model was derived in a way that meets the requirements to properly incorporate the surface activity of SRA. This is the subject of chapter 5. Based on theoretical considerations, in chapter 5 the broader impact of SRA on drying cementitious matter could be outlined. In a next step, cement paste was treated as a deformable, open drying pore system. Thereby, the drying phenomena of SRA modified mortars and concrete observed by other authors could be retrieved. This phenomenological consistency of the model constitutes an important contribution towards the understanding of SRA mechanisms. Another main contribution of this work came from introducing an artificial pore system, denominated the normcube. Using this model system, it could be shown how the evolution of the interfacial area and its properties interact in the presence of SRAs and how this impacts drying characteristics. In chapter 7, the surface activity of commercial SRAs in aqueous solution and synthetic pore solution was investigated. This shows how the electrolyte concentration of synthetic pore solution impacts the phase behaviour of SRA and, conversely, how the presence of SRA impacts the aqueous electrolyte solution. Whilst electrolytes enhance the self-aggregation of SRAs into micelles and liquid crystals, the presence of SRAs leads to the precipitation of minerals such as syngenite and mirabilite. Moreover, electrolyte solutions containing SRAs show limited miscibility, or rather miscibility gaps, where the liquid separates into isotropic micellar solutions and surfactant-rich reverse micellar solutions. The investigation of the surface activity and phase behaviour of SRA yielded another important contribution. From macroscopic surface tension measurements, a relationship between the excess surface concentration of SRA, the bulk concentration of SRA and the exposed interfacial area could be derived. Based on this, it is now possible to predict the actual surface tension of the pore fluid in the course of drying once the evolution of the internal interfacial area is known. This is used later in this thesis to describe the specific drying and shrinkage behaviour of SRA modified pastes and mortars. Calorimetric studies on normal Portland cement and composite binders revealed that SRAs alone show only a minor impact on hydration kinetics. In the presence of superplasticizer, however, cement hydration can be significantly decelerated. The delaying impact of SRA could be related to a selective deceleration of silicate phase hydration. Moreover, it could be shown that portlandite precipitation in the presence of SRA is changed, turning the compact habitus into more or less layered structures. Thereby, the specific surface increases, causing the amount of physically bound water to increase, which in turn reduces the maximum degree of hydration achievable for sealed systems.
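The relationship between surface tension, bulk concentration and exposed interfacial area mentioned above is, in classical surface chemistry, captured by the Gibbs adsorption isotherm; as background, this is the standard textbook relation for a dilute non-ionic surfactant, not necessarily the exact form derived in the thesis:

    \Gamma = -\frac{1}{RT}\,\frac{\partial \gamma}{\partial \ln c}

Here \Gamma is the surface excess concentration of the surfactant, \gamma the surface tension, c the bulk concentration, R the gas constant and T the temperature. Combined with an estimate of the internal interfacial area A, a mass balance of the type n_{total} = c\,V_{liquid} + \Gamma A links bulk depletion to the exposed area in the course of drying.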
Extensive phase analysis shows that the hydrated phase composition of SRA modified binders remains almost unaffected. The appearance of a temporary mineral phase could be detected by environmental scanning electron microscopy. As shown for synthetic pore solutions, syngenite precipitates during early hydration stages and is later consumed in the course of aluminate hydration, i.e. when sulphates are depleted. Moreover, for some SRAs, the salting-out phenomena expected to be enhanced in strong electrolytes could also be shown to take place. The resulting organic precipitates could be identified by SEM-EDX in cement paste and by X-ray diffraction on solid residues of synthetic pore solution. The presence of SRAs was also found to impact the microstructure of well cured cement paste. Based on nitrogen adsorption measurements and mercury intrusion porosimetry, the amount of small pores is seen to increase with SRA dosage, whilst the overall porosity remains unchanged. The question regarding the sustainability of SRA application is the subject of chapter 10. By means of leaching studies it could be shown that SRA can be leached significantly. The mechanism could be identified as a diffusion process and a range of effective diffusion coefficients could be estimated. Thereby, the leaching of SRA can now be estimated for real structural members. However, while the admixture can be leached to high extents in tank tests, the leaching rates in practical applications can be assumed to be low because of the much reduced contact with water. This could be proven by quantifying admixture loss during long term drying and rewetting cycles. Despite the loss of admixture, shrinkage reduction is hardly impacted. Moreover, the cyclic tests revealed that the total deformations in the presence of SRA remain low due to a lower extent of irreversible shrinkage deformations. Another important contribution towards the better understanding of the working mechanism of SRA for drying and shrinkage came from the same leaching tests. A significant fraction of SRA is found to be immobile and does not diffuse during leaching. This fraction of SRA is probably strongly associated with cement phases such as the calcium-silicate-hydrates or portlandite. Based on these findings, it is now also possible to quantify the amount of admixture active at the interfaces. This means that the evolution of surface tension in the course of drying can be approximated, which is a fundamental requirement for modeling shrinkage in the presence of SRA. The last experimental chapter of this study focuses on the working mechanism and impact of SRA on drying and shrinkage. Based on the thermodynamics of the open deformable pore system introduced in chapter 5, energy balances are set up using desorption and shrinkage isotherms of actual samples. Information on the distribution of SRA in the hydrated paste is used to estimate the actual surface tensions of the pore solution. In other words, this is the first time that the surface activity of the SRA in the course of drying is fully accounted for. From the energy balances the evolution and properties of the internal interface are then obtained. This made it possible to explain why SRAs impact drying and shrinkage and in what specific range of relative humidity they are active. Summarising the findings of this thesis, it can be said that the understanding of the impact of SRAs on hydration, drying and shrinkage was advanced.
Many of the new insights came from the careful investigation of the theory of non-ionic surfactants, something that the cement community had generally overlooked up to now.
Modern digital material approaches for the visualization and simulation of heterogeneous materials make it possible to investigate the behavior of complex multiphase materials, with their physically nonlinear material response, at various scales. However, these computational techniques require extensive hardware resources with respect to computing power and main memory to numerically solve large-scale discretized models in 3D. Due to a very high number of degrees of freedom, which may rapidly increase into the two-digit million range, the limited hardware resources have to be utilized in the most efficient way to enable the execution of the numerical algorithms in minimal computation time. Hence, in the field of computational mechanics, various methods and algorithms can lead to an optimized runtime behavior of nonlinear simulation models; several such approaches are proposed and investigated in this thesis.
Today, the numerical simulation of damage effects in heterogeneous materials is performed by adapting multiscale methods. A consistent modeling in three-dimensional space with an appropriate discretization resolution on each scale (based on a hierarchical or concurrent multiscale model), however, still poses computational challenges with respect to the convergence behavior, the scale transition and the solver performance of the weakly coupled problems. The computational efficiency and the distribution among available hardware resources (often based on a parallel hardware architecture) can be significantly improved. In the past years, high-performance computing (HPC) and graphics processing unit (GPU) based computation techniques were established for the investigation of scientific objectives. Their application results in the modification of existing and the development of new computational methods for the numerical implementation, which makes it possible to take advantage of massively clustered computer hardware resources. In the field of numerical simulation in material science, e.g. within the investigation of damage effects in multiphase composites, the suitability of such models is often restricted by the number of degrees of freedom (d.o.f.s) in the three-dimensional spatial discretization. This proves to be difficult for the type of implementation method used for the nonlinear simulation procedure and, simultaneously, has a great influence on memory demand and computational time.
In this thesis, a hybrid discretization technique has been developed for the three-dimensional discretization of a three-phase material, which respects the numerical efficiency of nonlinear (damage) simulations of these materials. The increase in computational efficiency is enabled by the improved scalability of the numerical algorithms. Consequently, substructuring methods for partitioning the hybrid mesh were implemented, tested and adapted to the HPC computing framework, using several hundred CPU (central processing unit) nodes for building the finite element assembly. A memory-efficient, iterative and parallelized equation solver, combined with a special preconditioning technique for solving the underlying equation system, was modified and adapted to enable combined CPU and GPU based computations.
Hence, the author recommends applying the substructuring method for hybrid meshes, which respects the different material phases and their mechanical behavior and makes it possible to split the structure into elastic and inelastic parts. The consideration of the nonlinear material behavior, specified for the corresponding phase, is then limited to the inelastic domains only and thereby decreases the computing time of the nonlinear procedure. Due to the high numerical effort of such simulations, an alternative approach for the nonlinear finite element analysis, based on sequentially linear analysis, was implemented with respect to scalable HPC. The incremental-iterative procedure of the nonlinear step in finite element analysis (FEA) was then replaced by a sequence of linear FE analyses whenever damage occurred in critical regions, known in the literature as the saw-tooth approach. As a result, qualitative (smeared) crack initiation in 3D multiphase specimens has been simulated efficiently.
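As an illustration of the saw-tooth idea, here is a toy sequentially linear analysis on a serial spring chain (a minimal sketch with invented stiffness and strength values; the thesis operates on large 3D multiphase meshes): each damage event requires only one linear solve, after which stiffness and strength of the critical element are reduced and the next linear analysis follows.

    import numpy as np

    def assemble_K(k):
        # Stiffness matrix of a serial spring chain; node 0 is fixed,
        # spring i connects dof i-1 (ground for i = 0) to dof i.
        n = len(k)
        K = np.zeros((n, n))
        for i in range(n):
            K[i, i] += k[i]
            if i > 0:
                K[i - 1, i - 1] += k[i]
                K[i - 1, i] -= k[i]
                K[i, i - 1] -= k[i]
        return K

    def spring_forces(k, u):
        # Axial force in each spring from its elongation.
        return k * np.diff(np.concatenate(([0.0], u)))

    k = np.full(5, 100.0)                             # element stiffnesses
    strength = np.array([8.0, 9.0, 10.0, 11.0, 12.0]) # element capacities
    f_unit = np.zeros(5); f_unit[-1] = 1.0            # unit load at the free end
    curve = []                                        # saw-tooth load-displacement curve

    for event in range(20):
        u = np.linalg.solve(assemble_K(k), f_unit)    # one *linear* solve per damage event
        ratio = spring_forces(k, u) / strength
        worst = int(np.argmax(ratio))
        lam = 1.0 / ratio[worst]                      # load factor at which 'worst' fails
        curve.append((lam * u[-1], lam))              # record one point of the curve
        k[worst] *= 0.5                               # degrade stiffness ...
        strength[worst] *= 0.5                        # ... and strength (saw-tooth law)

    print(curve[:3])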
The gradual digitization of the architecture, engineering, and construction industry over the past fifty years has led to an extremely heterogeneous software environment, which today is embodied by the multitude of different digital tools and proprietary data formats used by the many specialists contributing to the design process in a construction project. While these projects become increasingly complex, the demands on financial efficiency and on completion within a tight schedule grow at the same time. The digital collaboration of project partners has been identified as one key issue in successfully dealing with these challenges. Yet currently, the numerous software applications and their respective individual views on the design process severely impede that collaboration.
An approach to establish a unified basis for digital collaboration, regardless of the existing software heterogeneity, is a comprehensive digital building model contributed to by all project partners. This type of data management, known as building information modeling (BIM), has many benefits, yet its adoption is associated with many difficulties and thus proceeds only slowly. One aspect in the field of conflicting requirements on such a digital model is the cooperation of architects and structural engineers. Traditionally, these two disciplines use different abstractions of reality for their models, which in consequence lead to incompatible digital representations thereof.
The onset of isogeometric analysis (IGA) promised to ease the discrepancy between design and analysis model representations. Yet that initial focus quickly shifted towards using these methods as a more powerful basis for numerical simulations. Furthermore, the isogeometric representation alone is not capable of solving the model abstraction problem. It is thus the intention of this work to contribute to an improved digital collaboration of architects and engineers by exploring an integrated analysis approach on the basis of a unified digital model and solid geometry expressed by splines. In the course of this work, an analysis framework is developed that utilizes such models to automatically conduct numerical simulations commonly required in construction projects. In essence, this makes it possible to retrieve structural analysis results from BIM models in a fast and simple manner, thereby facilitating rapid design iterations and profound design feedback.
The BIM implementation Industry Foundation Classes (IFC) is reviewed with regard to its capabilities of representing the unified model. The current IFC schema strongly supports the use of redundant model data, a major pitfall in digital collaboration. Additionally, it does not allow the geometry to be described by volumetric splines. As the pursued approach builds upon a unique model for both architectural and structural design, and furthermore requires solid geometry, necessary schema modifications are suggested.
Structural entities are modeled by volumetric NURBS patches, each of which constitutes an individual subdomain that, with regard to the analysis, is incompatible with the remaining full model. The resulting consequences for numerical simulation are elaborated in this work. The individual subdomains have to be weakly coupled, for which the mortar method is used. Different approaches to discretize the interface traction fields are implemented and their respective impact on the analysis results is evaluated. All necessary coupling conditions are automatically derived from the related geometry model.
The weak coupling procedure leads to a linear system of equations in saddle point form which, owing to the volumetric modeling, is large in size; due to the use of higher-degree basis functions, the associated coefficient matrix also has a high bandwidth. The peculiarities of the system require adapted solution methods, which generally cause higher numerical costs than the standard procedures for symmetric, positive-definite systems do. Different methods to solve the specific system are investigated and an efficient parallel algorithm is finally proposed.
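For reference, in standard mortar notation (not necessarily the symbols used in the thesis), the weakly coupled problem takes the saddle point form

    \begin{pmatrix} K & B^{\mathsf{T}} \\ B & 0 \end{pmatrix}
    \begin{pmatrix} u \\ \lambda \end{pmatrix}
    =
    \begin{pmatrix} f \\ 0 \end{pmatrix}

where K collects the stiffness matrices of the individual patches, B encodes the mortar coupling constraints at the interfaces, and \lambda contains the Lagrange multipliers discretizing the interface traction field. The zero diagonal block makes the matrix indefinite, which is why standard solvers for symmetric positive-definite systems do not apply directly.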
When the structural analysis model is derived from the unified model in the BIM data, it initially does not, in general, meet the requirements on the discretization that are necessary to obtain sufficiently accurate analysis results. The consequently necessary patch refinements must be controlled automatically to allow for an entirely automatic analysis procedure. For that purpose, an empirical refinement scheme based on the geometrical and possibly mechanical properties of the specific entities is proposed. The level of refinement may be selectively manipulated by the structural engineer in charge. Furthermore, a Zienkiewicz-Zhu type error estimator is adapted for use with isogeometric analysis results. It is shown that this estimator, too, can be used to steer an adaptive refinement procedure.
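As background, a Zienkiewicz-Zhu type estimator in its common generic form (the adaptation to IGA in the thesis may differ in detail) measures, element- or patch-wise, the deviation between the computed stress field and a smoothed recovery of it:

    \eta_e = \left( \int_{\Omega_e} (\sigma^* - \sigma_h)^{\mathsf{T}}\, D^{-1}\, (\sigma^* - \sigma_h)\, \mathrm{d}\Omega \right)^{1/2}

where \sigma_h is the directly computed stress, \sigma^* a recovered (smoothed) stress field, and D the elasticity matrix; elements or patches with large \eta_e are flagged for refinement.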
Interactive scientific visualizations are widely used for the visual exploration and examination of physical data resulting from measurements or simulations. Driven by technical advancements of data acquisition and simulation technologies, especially in the geo-scientific domain, large amounts of highly detailed subsurface data are generated. The oil and gas industry is particularly pushing such developments as hydrocarbon reservoirs are increasingly difficult to discover and exploit. Suitable visualization techniques are vital for the discovery of the reservoirs as well as their development and production. However, the ever-growing scale and complexity of geo-scientific data sets result in an expanding disparity between the size of the data and the capabilities of current computer systems with regard to limited memory and computing resources.
In this thesis we present a unified out-of-core data-virtualization system supporting geo-scientific data sets consisting of multiple large seismic volumes and height-field surfaces, wherein each data set may exceed the size of the graphics memory or possibly even the main memory. Current data sets fall within the range of hundreds of gigabytes up to terabytes in size. Through the mutual utilization of memory and bandwidth resources by multiple data sets, our data-management system is able to share and balance limited system resources among different data sets. We employ multi-resolution methods based on hierarchical octree and quadtree data structures to generate level-of-detail working sets of the data stored in main memory and graphics memory for rendering. The working set generation in our system is based on a common feedback mechanism with inherent support for translucent geometric and volumetric data sets. This feedback mechanism collects information about required levels of detail during the rendering process and is capable of directly resolving data visibility without the application of any costly occlusion culling approaches. A central goal of the proposed out-of-core data management system is an effective virtualization of large data sets. Through an abstraction of the level-of-detail working sets, our system allows developers to work with extremely large data sets independent of their complex internal data representations and physical memory layouts.
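As a minimal sketch of such a level-of-detail working set (hypothetical structure; the actual system couples this with the rendering feedback mechanism and the memory budgets of the GPU), a greedy cut through an octree can be selected as follows:

    import heapq
    from dataclasses import dataclass, field

    @dataclass(order=True)
    class Node:
        neg_error: float                   # negated screen-space error: heapq is a min-heap
        id: int = field(compare=False)
        children: list = field(compare=False, default_factory=list)

    def select_working_set(root, budget):
        # Greedy LOD cut: repeatedly replace the node with the largest error by its
        # children until the node budget (a stand-in for GPU memory) would be exceeded.
        cut = [root]
        while True:
            worst = cut[0]                 # node with the largest screen-space error
            if not worst.children or len(cut) + len(worst.children) - 1 > budget:
                return list(cut)
            heapq.heappop(cut)
            for child in worst.children:
                heapq.heappush(cut, child)

    root = Node(-10.0, 0, [Node(-4.0, 1),
                           Node(-6.0, 2, [Node(-1.0, 3), Node(-2.0, 4)])])
    print(sorted(n.id for n in select_working_set(root, budget=3)))   # [1, 3, 4]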
Based on this out-of-core data virtualization infrastructure, we present distinct rendering approaches for specific visualization problems of large geo-scientific data sets. We demonstrate the application of our data virtualization system and show how multi-resolution data can be treated exactly the same way as regular data sets during the rendering process. An efficient volume ray casting system is presented for the rendering of multiple arbitrarily overlapping multi-resolution volume data sets. Binary space-partitioning volume decomposition of the bounding boxes of the cube-shaped volumes is used to identify the overlapping and non-overlapping volume regions in order to optimize the rendering process. We further propose a ray casting-based rendering system for the visualization of geological subsurface models consisting of multiple very detailed height fields. The rendering of an entire stack of height-field surfaces is accomplished in a single rendering pass using a two-level acceleration structure, which combines a minimum-maximum quadtree for empty-space skipping and sorted lists of depth intervals to restrict ray intersection searches to relevant height fields and depth ranges. Ultimately, we present a unified rendering system for the visualization of entire geological models consisting of highly detailed stacked horizon surfaces and massive volume data. We demonstrate a single-pass ray casting approach facilitating correct visual interaction between distinct translucent model components, while increasing the rendering efficiency by reducing processing overhead of potentially invisible parts of the model. The combination of image-order rendering approaches and the level-of-detail feedback mechanism used by our out-of-core data-management system inherently accounts for occlusions of different data types without the application of costly culling techniques.
The unified out-of-core data-management and virtualization infrastructure considerably facilitates the implementation of complex visualization systems. We demonstrate its applicability for the visualization of large geo-scientific data sets using output-sensitive rendering techniques. As a result, the magnitude and multitude of data sets that can be interactively visualized is significantly increased compared to existing approaches.
Multi-Frame Rate Rendering
(2008)
Multi-frame rate rendering is a parallel rendering technique that renders interactive parts of a scene on one graphics card while the rest of the scene is rendered asynchronously on a second graphics card. The resulting color and depth images of both render processes are composited, by optical superposition or digital composition, and displayed. The results of a user study confirm that multi-frame rate rendering can significantly improve the interaction performance. Multi-frame rate rendering is naturally implemented on a graphics cluster. With the recent availability of multiple graphics cards in standalone systems, the method can also be implemented on a single computer system, where memory bandwidth is much higher compared to off-the-shelf networking technology. This decreases overall latency and further improves interactivity. Multi-frame rate rendering was also investigated on a single graphics processor by interleaving the rendering streams for the interactive elements and the rest of the scene. This approach enables the use of multi-frame rate rendering on low-end graphics systems such as laptops, mobile phones, and PDAs. Advanced multi-frame rate rendering techniques reduce the limitations of the basic approach. The interactive manipulation of light sources and their parameters affects the entire scene. A multi-GPU deferred shading method is presented that splits the rendering task into a rasterization and a lighting pass and assigns the passes to the appropriate image generators such that light manipulations at high frame rates become possible. A parallel volume rendering technique allows the manipulation of objects inside a translucent volume at high frame rates. This approach is useful, for example, in medical applications where small probes need to be positioned inside a computed-tomography image. Due to the asynchronous nature of multi-frame rate rendering, artifacts may occur during the migration of objects from the slow to the fast graphics card, and vice versa. Proper state management makes it possible to almost completely avoid these artifacts. Multi-frame rate rendering significantly improves the interactive manipulation of objects and lighting effects. This leads to a considerable increase in the size of 3D scenes that can be manipulated compared to conventional methods.
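The digital composition step can be illustrated by a per-pixel depth test between the two independently rendered image pairs (a minimal numpy sketch, assuming both renderers share the same camera and depth range):

    import numpy as np

    def composite(color_a, depth_a, color_b, depth_b):
        # Per-pixel depth test: keep the fragment that is closer to the camera.
        nearer_a = depth_a <= depth_b                       # (H, W) boolean mask
        color = np.where(nearer_a[..., None], color_a, color_b)
        depth = np.minimum(depth_a, depth_b)
        return color, depth

    # Usage with tiny random test images (H x W RGB colors plus depth buffers):
    h, w = 4, 4
    ca, cb = np.random.rand(h, w, 3), np.random.rand(h, w, 3)
    da, db = np.random.rand(h, w), np.random.rand(h, w)
    color, depth = composite(ca, da, cb, db)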
The most fundamental understanding of the hybridization methodology takes the form of stable but dynamic notions, accumulated over time in the memory of individuals. Schematized and abstracted, the representation of the hybrids needs to be reproduced and reused in order to reconstruct and bring back other memories. Reinvented or reused hybrids can support access to the social, traditional, and religious understanding of nations. In this manner, they take the form of the messenger / the mediator, an innate equivalent to the use of mental places in the art of memory: we remember mythology in order to remember other things.
From individual memory perspective, or group collective memory, the act of recollection is assumed to be an individual act, biologically based in the brain, but by definition conditioned by social collectives. Following Halbwachs, this thesis does not recognize a dichotomy between individual and collective memory as two different types of remembering. Conversely, the collective is thought of as inherent to individual thought, questioning perspectives that regard individual recollection as isolated from social settings. The individual places himself in relation to the group and makes use of the collective frameworks of thought when he localizes and reconstructs the past, whether in private or in social settings. The frameworks of social relations, of time, and of space are constructs originating in social interaction and distributed in the memory of the group members. The individual has his own perspective on the collective frameworks of the group, and the group’s collective frameworks can be regarded as a common denominator of the individual outlooks on the framework.
In acts of remembering, the individual may actualize the symbols depicted in memory, but he could also employ percepts from the environment. The latter have been referred to as material or external frameworks of memory, suggesting their similar role as catalysts for processes of remembrance, such as that of the hybrids in my paintings. It is only with reference to the hybrids, which act as messengers / mediators with a dual nature, communicating between the past and the present and between internal and external space, that individual memory and group memory come into focus.
The exhibition at the Egyptian museum in Leipzig is my practical method to create a communicative memory, using hybrids as mediators in cultural transmission: when the act of remembering refers to informal and everyday situations in which group members informally search for the past, it takes place in communicative memory. As explained in chapter one, the exhibition at the Egyptian museum in Leipzig is an act of remembering in search of the past with the support of my paintings, which can then be considered part of cultural memory.
In addition to the theoretical framework summarized above, I have applied my hypothesis practically in the form of the public exhibition, and shared the methodology with public audiences from Cairo / Egypt and Leipzig / Germany in the form of visual art workshops and open discussions. I have also offered an analytical description of the meaning of the hybrids in my artwork as mediators and messengers for the purpose of cultural transmission, as well as in relation to other artists' work and their use of a similar concept.
By using my hybrid creatures in my visual artwork, I am creating a bridge, mediators that represent both the past and the present, what we remember of the past, and how we understand the past. As explained in chapter two, the hybridization methodology, in terms of a double membership represented in different cultures (Cairo / Egypt and Leipzig / Germany), can provide a framework which allows artistic discussions and can be individually interpreted, so that individual cultures / individual memories can become transparent without losing their identities and turn into communicative memory. This transmission through the hybridization theoretical approach was explicitly clarified with the support of Krämer's hypothesis. The practical attempt was examined by creating a relationship between the witness (me as an artist) and the audience (the exhibition visitors), to cross space and time, not to bridge differences, but rather to represent the contrasts transparently.
The kin-making proposition is adopted by many academics and scholars in modern society and theoretical research; the topic was present in the roots of the ancient Egyptian mindset and is supported theoretically by similar understandings such as Haraway's definition of kin-making. The practical implementation of kin-making can be observed in many of my artworks and was analyzed visually and artistically in chapter three.
The outcome of my practical project was tested successfully by using the hybrids in my paintings as mediators; it opened a communicative artistic discussion. This methodology gave a possible path of communication through paintings and visual analyses, and offered relativity through the viewers' own interpretation of the images.
Rechargeable lithium ion batteries (LIBs) play a very significant role in power supply and storage. In recent decades, LIBs have attracted tremendous attention in mobile communication, portable electronics, and electric vehicles. Furthermore, global warming has become a worldwide issue due to the ongoing production of greenhouse gases, which motivates solutions such as renewable sources of energy. Solar and wind energy are the most important renewable energy sources. As the technology progresses, they will require batteries to store the produced power in order to balance power generation and consumption. Nowadays, rechargeable batteries such as LIBs are considered one of the best solutions. They provide high specific energy and high rate performance while their rate of self-discharge is low.
The performance of LIBs can be improved through the modification of battery characteristics. The size of the solid particles in the electrodes can impact the specific energy and the cyclability of batteries. It can improve the lithium content of the electrode, which is a vital parameter for the capacity and capability of a battery. There exist different sources of heat generation in LIBs, such as the heat produced during the electrochemical reactions and by the internal resistance of the battery. The size of the electrode's electroactive particles can directly affect the heat produced in the battery. It will be shown that smaller solid particles enhance the thermal characteristics of LIBs.
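A common lumped description of these heat sources, given here as general background rather than as the model used in this work, is the Bernardi relation

    \dot{Q} = I\,(U_{\mathrm{OCV}} - V) - I\,T\,\frac{\partial U_{\mathrm{OCV}}}{\partial T}

where I is the current, V the cell voltage, U_{\mathrm{OCV}} the open-circuit voltage and T the absolute temperature; the first term represents the irreversible heat from overpotentials and internal resistance, the second the reversible entropic heat (sign conventions for I vary between references).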
Thermal issues such as overheating, temperature maldistribution in the battery, and thermal runaway have confined the applications of LIBs. Such thermal challenges reduce the life cycle of LIBs. They may also lead to dangerous conditions such as fire or even explosion of the batteries. However, recent advances in the fabrication of advanced materials such as graphene and carbon nanotubes, with extraordinary thermal conductivity and electrical properties, offer new opportunities to enhance their performance. Since experimental works are expensive, our objective is to use computational methods to investigate the thermal issues in LIBs. Dissipation of the heat produced in the battery can improve the cyclability and specific capacity of LIBs. In real applications, LIB packs consisting of several battery cells are used as the power source. Therefore, it is worth investigating the thermal characteristics of battery packs under their charging/discharging cycles at different applied current rates. To remove the heat produced in batteries, they can be surrounded by materials with high thermal conductivity. Paraffin wax absorbs large amounts of energy since it has a high latent heat; this absorption occurs at almost constant temperature during the phase change. Moreover, the thermal conductivity of paraffin can be magnified with nano-materials such as graphene, CNTs, and fullerene to form a nano-composite medium. Improving the thermal conductivity increases the heat dissipation from the batteries, which is a vital issue in battery thermal management systems. The application of two-dimensional (2D) materials has been on the rise since the exfoliation of graphene from bulk graphite. 2D materials are single layers with nanoscale thickness which show superior thermal, mechanical, and optoelectronic properties. They are potential candidates for energy storage and supply, particularly as electrode materials in lithium ion batteries. The high thermal conductivity of graphene and graphene-like materials can play a significant role in the thermal management of batteries. However, defects always exist in nano-materials since there is no ideal fabrication process. Among the most important defects in materials are nano-cracks, which can dramatically weaken the mechanical properties of the materials. The newly synthesized crystalline carbon nitride with the stoichiometry C3N has attracted much attention due to its extraordinary mechanical and thermal properties. Another nano-material is phagraphene, which shows anisotropic mechanical characteristics that are ideal for the production of nanocomposites.
It shows ductile fracture behavior when subjected to uniaxial loading. It is worth investigating the thermo-mechanical properties of these materials in their pristine and defective states. We hope that the findings of our work will not only be useful for experimental and theoretical research but also help in designing advanced electrodes for LIBs.
System identification is often associated with the evaluation of damage in existing structures. Usually, dynamic test data are utilized to estimate the parameter values for a given structural model. This requires the solution of an inverse problem. Unfortunately, inverse problems in general are ill-conditioned, particularly with a large number of parameters to be determined. This means that the accuracy of the estimated parameter values is not sufficiently high to enable a damage identification. The goal of this study was to develop an experimental procedure which makes it possible to identify the system parameters of substructures with high reliability. For this purpose, the method of selective sensitivity was employed to define special dynamic excitations, namely selectively sensitive excitations. Two different approaches have been introduced: the quasi-static approach and the iterative experimental procedure. The former approach is appropriate for statically determinate structures and excitation frequencies below the structure's fundamental frequency. The latter method, which uses a-priori information about the parameters to be identified to set up an iterative experiment, can be applied to statically indeterminate structures. The viability of the proposed iterative procedure in detecting small changes of a structure's stiffness was demonstrated by a simple laboratory experiment. The applicability of the strategy, however, depends largely on the experimental capacity. It was also experienced that such a test is associated with expensive equipment and time-consuming work.
Architectural space is understood as a medium of communication in the context of the 'new' media, based on the recognition that it has always been a medium and consists of a complex media structure dependent on other media. In the process of action and communication, architectural space is the medium that intensively enables the spatial proximity of individuals through all the senses and through consciousness simultaneously. As an immersive communication medium, architectural space reaches a new dimension in the age of the 'new' media, in that more and different alternative realities of communication become available. Following N. Luhmann, architecture is considered from the perspective of the form/medium distinction, in systems-theoretical terms, as a structured space of possibilities. Space is the medium for forms of architectural space, in which architecture becomes effective in the first place. Conversely, the forms of architectural space are media for the perception of a multitude of spatial realities. A facade of stone or glass is built form and, as a medium, can communicate information. Media fulfil their purpose the better, the more they withdraw from attention and recede like transparent windows behind the surface of sensory perception. As an 'imperceptible' medium, architectural space is thus a background 'power of effect', a stage for the unfolding of effect, atmosphere, and movement. Its physical reality has always been unbounded by virtual realities, which become perceptible and communicable as artificial worlds through techniques and technologies of simulation. This can be shown by traditional examples such as the Gothic cathedral, the panorama, panoptic spaces, the theatre, the cinema, or the continuous spaces from modernism to the present day. Virtual spaces of Gothic stained glass or of Baroque ceiling and wall paintings in the medium of architectural space are familiar to us. Immersion, the plunge into these virtual spheres of reality, triggers the perception of one's own bodily presence within them. Compared to other virtual realities of text, image, or digital media, the potential of the virtual space of architecture lies in its attachment to the physical, spatial structure of stimuli, to which it owes the intensity and complexity of its effect. Various interactions and joint developments of contemporary examples of architecture with the 'new' media are presented. In the 'sensitive wall', the physical boundary of space becomes, through the integration of new techniques and technologies of digital, electronic media, something extremely flexible and malleable in interaction with the user. The H2O Pavilion (Oosterhuis and NOX, 1997) is an example of this. This markedly polysensory immersive space stands for the unity of digital and architectural simulation. The metaphorical world of cave and spring of the thermal baths in Vals (P. Zumthor, 1996) is the spatial reflection on the metaphorical structure of the virtual spaces of the 'new' media. The simulated reality in the media of water, stone, and architectural space creatively produces the polysensory immersive access to the virtual worlds of an 'authentic' physical environment. The 'visible' in the medium of architectural space cannot be grasped without the 'invisible', just as the sensorially perceptible cannot be grasped without the imperceptible.
Recognizing this relation of form and medium makes it possible to formulate the new concept of the medial space of architecture, which becomes the basis for a media theory of architecture as a way of viewing the unbounding of physical space by virtual space for subjective perception, action, and communication.
The subject of this work is the investigation of the thermal effects occurring during the production of quicklime-soil columns and their influence on the transport of water and water vapour in the soil. The heating is primarily based on a chemical reaction in which the calcium oxide mixed into the soil reacts with soil water to form calcium hydroxide, releasing thermal energy. To this end, the thermal properties of fine-grained soils, and how they are influenced by the in-situ production of the binder-soil mixture, were investigated first. Furthermore, investigations were carried out on the temporal course of the chemical reaction and the magnitude of the heat of reaction released. With the aim of capturing the changes in the temperature field accompanying column production, the initial and boundary thermal conditions of the soil and the ground surface were then investigated and defined. Subsequently, the time-dependent changes in the temperature field were numerically simulated on the basis of heat transfer by conduction, using the finite element program Ansys® 6.1. The finite element model was verified by back-calculating field tests. Within the finite element calculations, the heating of the binder-soil mixture and of the adjacent soil resulting from the hydration of the quicklime was simulated and examined with respect to the relevant influencing variables. The influence of production-related factors such as binder concentration, column diameter, and column arrangement was investigated, as well as the influence of natural factors such as the dry density and degree of saturation of the soil. The time-dependent temperature gradients in the soil determined with the finite element method form the basis for the investigation of the thermally induced water transport processes in the stabilization column and its surroundings. For this purpose, the energetic situation of the soil water, as changed by the altered temperature field, was analysed. Non-thermal effects occurring as a result of column production, such as the local change in saturation caused by the 'tamping effect' and the influence on the osmotic potential, including the resulting water movements, were also taken into account. All thermally induced water and vapour fluxes cause pore water to flow out of the stabilized soil body into the surrounding soil. For construction practice, however, the water transport processes caused by thermal influences remain insignificant due to their small magnitude. In concluding temperature field calculations, the thermal soil parameters were adapted to the water saturation of the soil as it changes over time. The temperature curves obtained showed that the influence of the change in saturation on the calculation results is very small, so that the precondition for the preceding decoupled treatment of heat and mass flow is fulfilled. On the basis of these results, the influence of heating on the evaporation of soil water, repeatedly cited in the literature and also associated with deep soil stabilization, must be viewed critically and called into question. Evaporation presupposes the transport of water to the ground surface; as the calculation results have shown, no appreciable water movements based on temperature influences are to be expected.
Further investigations into the strength development of quicklime-soil columns and its prediction should therefore concentrate on the mechanical effects and on the mineralogical-chemical processes, such as the pozzolanic reactions, and on the possibilities of forecasting them. The calculations have shown that the temperature development in the stabilization column is essentially determined by the binder concentration, and its cooling behaviour primarily by its geometric dimensions. These findings are largely independent of the soil parameters of the soils that come into question for stabilization. Temperature measurements therefore represent a suitable means of quality assurance in the production of quicklime-soil columns, by means of which inhomogeneities in the binder distribution or disturbances in the hydration process (slaking of the quicklime) can be detected. Appropriate tools for this purpose are specified.
One of the main criteria determining the thermal comfort of occupants is the air temperature. To monitor this parameter, a thermostat is traditionally mounted in the indoor environment, for instance in office rooms, directly on the radiator, or at another location in a room. One of the drawbacks of this conventional method is that it measures at a single location instead of capturing the temperature distribution of the entire room, including the occupant zone. As a result, the climatic conditions measured at the thermostat point may differ from those at the user's location. This not only negatively impacts the thermal comfort assessment but also leads to a waste of energy due to unnecessary heating and cooling. Moreover, measuring the distribution of the air temperature under laboratory conditions requires multiple thermal sensors to be installed in the area under investigation, which demands high effort in both installation and expense.
To overcome the shortcomings of traditional sensors, Acoustic travel-time TOMography (ATOM) offers an alternative based on measuring the travel times of transmitted sound signals. The basis of the ATOM technique is the first-order dependency of the sound velocity on the medium's temperature. The average sound velocity along the propagation paths can be determined by estimating the travel times of a defined acoustic signal between transducers. After the travel times have been collected, the room is divided into several volumetric grid cells, i.e. voxels, whose sizes are defined depending on the dimensions of the room and the number of sound paths. Accordingly, the spatial air temperature in each voxel can be determined using a suitable tomographic algorithm. Recent studies indicate that despite the great potential of this technique to capture the room climate, few experiments have been conducted.
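The measurement principle can be summarized in two standard relations (generic tomography notation, given as background rather than as the exact formulation of this thesis). For dry air, the sound velocity depends on the absolute temperature T (in kelvin) approximately as shown on the left, and each measured travel time is the path integral of the slowness:

    c(T) \approx 20.05\,\sqrt{T}\ \mathrm{m/s}, \qquad
    t_i = \int_{L_i} \frac{\mathrm{d}s}{c\!\left(T(\mathbf{r})\right)} \approx \sum_j A_{ij}\, s_j

where L_i is the i-th sound path (direct or reflected), A_{ij} the length of path i inside voxel j, and s_j = 1/c_j the slowness in voxel j. Inverting the linear system t = A s, for instance with an algebraic reconstruction technique, yields the voxel-wise sound velocities and hence the air temperatures.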
This thesis aims to develop the ATOM technique for indoor climatic applications while coupling the analysis methods of tomography and room acoustics. The method developed in this thesis uses high-energy early reflections, in addition to the direct paths between transducers, for travel time estimation. In this way, reflections provide multiple sound paths that allow the room coverage to be maintained even when few transducers, or even only one transmitter and one receiver, are used.
In the development of the ATOM measurement system, several approaches have been employed, including the development of numerical methods and simulations and conducting experimental measurements, each of which has contributed to the improvement of the system's accuracy. In order to effectively separate the early reflections and ensure adequate coverage of the room with sound paths, a numerical method was developed based on the optimization of the coordinates of the sound transducers in the test room. The validation of the optimal positioning method shows that the reconstructed temperatures were significantly improved by placing the transducers at the optimal coordinates derived from the developed numerical method. The other numerical method developed is related to the selection of the travel times of the early reflections. Accordingly, the detection of the travel times has been improved by adjusting the lengths of the multiple analysis time-windows according to the individual travel times in the reflectogram of the room impulse response. This can reduce the probability of trapping faulty travel times in the analysis time-windows.
The simulation model used in this thesis is based on the image source model (ISM) and computes the theoretical travel times of early-reflection sound paths up to third-order reflections.
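For a shoebox room, the ISM idea reduces each wall reflection to a straight path from a source mirrored across that wall; higher orders mirror the image sources recursively. A minimal first-order sketch, with an assumed fixed speed of sound and toy coordinates:

```python
import numpy as np

C = 343.0  # assumed speed of sound (m/s) at roughly 20 deg C

def first_order_travel_times(src, rcv, room):
    """Travel times of the direct path and the six first-order wall
    reflections in a shoebox room via the image source model: each
    reflection is a straight path from a source mirrored across a wall."""
    src, rcv, room = map(np.asarray, (src, rcv, room))
    images = [src]                               # direct path
    for axis in range(3):                        # mirror across both walls per axis
        for wall in (0.0, room[axis]):
            img = src.copy()
            img[axis] = 2.0 * wall - src[axis]
            images.append(img)
    return sorted(np.linalg.norm(img - rcv) / C for img in images)

# Toy example: 5 m x 4 m x 3 m room
print(first_order_travel_times((1.0, 1.0, 1.2), (4.0, 3.0, 1.6), (5.0, 4.0, 3.0)))
```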
The empirical measurements were carried out in the climate lab of the Chair of Building Physics under different boundary conditions, i.e., combinations of different room air temperatures under both steady-state and transient conditions, and different measurement setups. With the measurements under controllable conditions in the climate lab, the validity of the developed numerical methods was confirmed.
In this thesis, the performance of the ATOM measurement system was evaluated using two measurement setups. The setup for the initial investigations consists of an omnidirectional receiver and a near-omnidirectional sound source, keeping the number of transducers as small as possible. This made it possible to accurately identify the sources of error that could occur in each part of the measuring system. The second measurement setup consists of two directional sound sources and one omnidirectional receiver. This arrangement of transducers allowed a higher number of well-detected travel times for the tomographic reconstruction, better travel time estimation due to the directivity of the sound sources, and better space utilization. Furthermore, this new measurement setup was tested to determine an optimal excitation signal. The results showed that, for the utilized setup, a linear chirp signal with a frequency range of 200-4000 Hz and a signal duration of t = 1 s is an optimal choice with respect to the reliability of the measured travel times and a higher signal-to-noise ratio (SNR).
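How a travel time is obtained from such a chirp can be sketched via matched filtering; the sampling rate, delay, and noise level below are illustrative stand-ins, not the thesis's measurement parameters:

```python
import numpy as np
from scipy.signal import chirp, correlate

fs = 48_000                                    # assumed sampling rate (Hz)
t = np.arange(0, 1.0, 1 / fs)                  # 1 s excitation, as in the thesis
ref = chirp(t, f0=200, f1=4000, t1=1.0, method='linear')

# Simulate a received signal: the chirp delayed by 14.4 ms plus noise.
delay_samples = int(0.0144 * fs)
rx = np.concatenate([np.zeros(delay_samples), ref])
rx = rx + 0.05 * np.random.default_rng(0).standard_normal(rx.size)

# Matched filtering: the lag of the cross-correlation peak is the travel time.
lag = np.argmax(correlate(rx, ref, mode='full')) - (ref.size - 1)
print(lag / fs)                                # ~0.0144 s
```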
To evaluate the performance of the measuring setups, the ATOM temperatures were always compared with the temperatures of high-resolution NTC thermistors with an accuracy of ±0.2 K. The entire measurement program, including acoustic measurements, simulation, signal processing, and visualization of the measurement results, is implemented in MATLAB.
In addition, to reduce the uncertainty in the positioning of the transducers, the acoustic centre of the loudspeaker was determined experimentally for three types of excitation signals, namely MLS (maximum length sequence) signals of different lengths and durations, and linear and logarithmic chirp signals with different defined frequency ranges. For this purpose, the climate lab was converted into a fully anechoic chamber by attaching absorption panels to all surfaces of the room. The measurement results indicated that measuring the acoustic centre of the sound source significantly reduces the displacement error of the transducer position.
Moreover, to measure the air temperature in an occupied room, an algorithm was developed that recovers the pure reference signal from distorted signals using an adaptive filter. The measurement results confirm the validity of the approach for a temperature interval of 4 K inside the climate lab.
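The adaptive-filter idea can be sketched with a standard normalized LMS recursion; this is a generic stand-in under assumed parameters, not the algorithm developed in the thesis:

```python
import numpy as np

def nlms(x, d, taps=32, mu=0.5, eps=1e-8):
    """Normalized LMS: adapt FIR weights w so that w * x approximates d.
    Here x would be the distorted measurement and d the known excitation
    (reference) signal, so the converged filter maps one onto the other."""
    w = np.zeros(taps)
    y = np.zeros_like(x)
    for n in range(taps - 1, len(x)):
        u = x[n - taps + 1:n + 1][::-1]        # current and previous samples
        y[n] = w @ u
        e = d[n] - y[n]                        # error against the reference
        w += mu * e * u / (u @ u + eps)        # normalized gradient step
    return y, w

# Toy check: undo a mild minimum-phase FIR distortion of a noise signal
rng = np.random.default_rng(1)
ref = rng.standard_normal(20_000)
distorted = np.convolve(ref, [1.0, 0.4, -0.2])[:ref.size]
recovered, _ = nlms(distorted, ref)
```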
Accordingly, the accuracy of the reconstructed temperatures indicated that ATOM is very suitable for measuring the air temperature distribution in rooms.
In recent years, increasing consideration has been given to the lifetime extension of existing structures. This is based on the fact that a growing percentage of civil infrastructure as well as buildings is threatened by obsolescence and that, for simple monetary reasons, this can no longer be countered by rebuilding everything anew. Hence maintenance interventions are required which allow partial or complete structural rehabilitation. However, maintenance interventions have to be economically reasonable, that is, maintenance expenditures have to be outweighed by expected future benefits. If this is not the case, then the structure is indeed obsolete - at least in its current functional, economic, technical, or social configuration - and innovative alternatives have to be evaluated. An optimization formulation for planning maintenance interventions based on cost-benefit criteria is proposed herein. The underlying formulation is as follows: (a) between maintenance interventions, structural deterioration is described as a random process; (b) maintenance interventions can take place anytime throughout the lifetime and comprise the rehabilitation of all deterioration states above a certain minimum level; and (c) maintenance interventions are optimized by taking into account all expected life-cycle costs (construction, failure, inspection, and state-dependent repair costs) as well as state- or time-dependent benefit rates. The optimization is performed by an evolutionary algorithm. The proposed approach also allows optimal lifetimes and acceptable failure rates to be determined. Numerical examples demonstrate the importance of defining benefit rates explicitly. It is shown that the optimal maintenance strategy requires taking action before reaching the acceptable failure rate or the zero expected net benefit rate level. Deferring maintenance decisions generally results not only in higher losses but also in overly hazardous structures.
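The flavor of formulation (a)-(c) can be illustrated with a toy objective optimized by a simple evolutionary loop; every cost figure and the deterioration law below are hypothetical stand-ins, not the model of the thesis:

```python
import numpy as np

rng = np.random.default_rng(42)

def expected_net_benefit(interval_yr, horizon=100, benefit=1.0,
                         repair=5.0, failure=200.0, lam0=0.002, growth=0.15):
    """Toy life-cycle objective (all numbers hypothetical): between
    interventions the expected failure rate grows with structural age;
    each intervention resets the age and costs `repair`. Returns the
    expected net benefit accumulated over the horizon."""
    total, age = 0.0, 0.0
    for year in range(horizon):
        lam = lam0 * np.exp(growth * age)      # deteriorating failure rate
        total += benefit - failure * lam       # benefit rate minus expected loss
        age += 1.0
        if age >= interval_yr:                 # maintenance intervention
            total -= repair
            age = 0.0
    return total

# (mu + lambda)-style evolutionary search over the intervention interval
pop = rng.uniform(2.0, 50.0, size=20)
for _ in range(40):
    children = np.clip(pop + rng.normal(0.0, 2.0, pop.size), 1.0, 100.0)
    both = np.concatenate([pop, children])
    pop = both[np.argsort([-expected_net_benefit(x) for x in both])][:20]
print(round(pop[0], 1))                        # near-optimal intervention interval
```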
This thesis deals with the basic design and rigorous analysis of cryptographic schemes and primitives, especially of authenticated encryption schemes, hash functions, and password-hashing schemes.
In the last decade, security issues such as the PS3 jailbreak demonstrate that common security notions are rather restrictive, and it seems that they do not model the real world adequately. As a result, in the first part of this work, we introduce a less restrictive security model that is closer to reality. In this model it turned out that existing (on-line) authenticated encryption schemes can no longer be considered secure, i.e. they can guarantee neither data privacy nor data integrity. Therefore, we present two novel authenticated encryption schemes, namely COFFE and McOE, which are not only secure in the standard model but also reasonably secure in our generalized security model, i.e. both preserve full data integrity. In addition, McOE preserves a reasonable level of data privacy.
The second part of this thesis starts with proposing the hash function Twister-Pi, a revised version of the accepted SHA-3 candidate Twister. We not only fixed all known security issues of Twister, but also increased the overall soundness of our hash-function design.
Furthermore, we present some fundamental groundwork in the area of password-hashing schemes. This research was mainly inspired by the omnipresence of password-leakage incidents in the media. We show that the password-hashing scheme scrypt is vulnerable to cache-timing attacks due to the existence of a password-dependent memory-access pattern. Finally, we introduce Catena, the first password-hashing scheme that is both memory-consuming and resistant to cache-timing attacks.
Modern cryptography has become a ubiquitous and essential part of our daily lives. Protocols for secure authentication and encryption protect our communication with various digital services, from private messaging and online shopping to bank transactions and the exchange of sensitive information. Those high-level protocols can naturally only be as secure as the authentication or encryption schemes underneath. Moreover, on a more detailed level, those schemes can at best inherit the security of their underlying primitives. While widespread standards in modern symmetric-key cryptography, such as the Advanced Encryption Standard (AES), have resisted analysis until now, closer analysis and design of related primitives can deepen our understanding.
The present thesis consists of two parts that present six contributions: the first part considers block-cipher cryptanalysis of the round-reduced AES, the AES-based tweakable block cipher Kiasu-BC, and TNT. The second part studies the design, analysis, and implementation of provably secure authenticated encryption schemes.
In general, cryptanalysis aims at finding distinguishing properties in the output distribution. Block ciphers are a core primitive of symmetric-key cryptography, useful for the construction of various higher-level schemes ranging from authentication, encryption, and authenticated encryption up to integrity protection. Therefore, their analysis is crucial for securing cryptographic schemes at their lowest level. With rare exceptions, block-cipher cryptanalysis employs a systematic strategy of investigating known attack techniques, and modern proposals are expected to be evaluated against these techniques. This considerable evaluation effort, however, demands contributions not only from the designers but also from external researchers.
The Advanced Encryption Standard (AES) is one of the most widespread block ciphers nowadays and therefore a natural target for further analysis. Tweakable block ciphers augment the usual inputs of a secret key and a public plaintext by an additional public input called a tweak. Among the various proposals of the previous decade, this thesis identifies Kiasu-BC as a noteworthy attempt to construct a tweakable block cipher that is very close to the AES. Hence, its analysis intertwines closely with that of the AES and best illustrates the impact of the tweak on its security. Moreover, the thesis revisits the generic tweakable block cipher Tweak-and-Tweak (TNT) and its instantiation based on the round-reduced AES.
The first part investigates the security of the AES against several forms of differential cryptanalysis, developing distinguishers on four to six (out of ten) rounds of AES. For Kiasu-BC, it exploits the additional freedom in the tweak to develop two forms of differential-based attacks: rectangles and impossible differentials. The results on Kiasu-BC cover one round more than comparable attacks on the (untweaked) AES. The authors of TNT had provided an initial security analysis that still left a gap between provable guarantees and attacks. Our analysis takes a considerable step towards closing this gap. For TNT-AES - an instantiation of TNT built upon the AES round function - this thesis further shows how to transform our distinguisher into a key-recovery attack.
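The starting point of any differential attack is the difference distribution table (DDT) of the cipher's S-box, which counts how often an input difference maps to an output difference. A small sketch using the 4-bit PRESENT S-box as a compact stand-in (the 8-bit AES S-box is handled identically over 256 values):

```python
def ddt(sbox):
    """Difference distribution table: ddt[dx][dy] counts inputs x with
    S(x ^ dx) ^ S(x) == dy. Large entries yield strong differentials."""
    n = len(sbox)
    table = [[0] * n for _ in range(n)]
    for dx in range(n):
        for x in range(n):
            table[dx][sbox[x ^ dx] ^ sbox[x]] += 1
    return table

# The PRESENT S-box serves as a compact 4-bit example.
PRESENT = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
           0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]
t = ddt(PRESENT)
best = max((t[dx][dy], dx, dy) for dx in range(1, 16) for dy in range(16))
print(best)  # highest-probability nontrivial differential (count, dx, dy)
```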
Many applications require the simultaneous authentication and encryption of transmitted data. Authenticated encryption (AE) schemes provide both properties. Modern AE schemes usually demand a unique public input called a nonce that must not repeat. However, this requirement cannot always be guaranteed in practice. As part of a remedy, misuse-resistant and robust AE tries to reduce the impact of occasional misuse. Robust AE, however, considers more than the potential reuse of nonces. Conventional authenticated encryption also demands that the entire ciphertext be buffered until the authentication tag has been successfully verified; in practice, this is difficult to ensure since the setting may lack the resources for buffering the messages. Moreover, robustness guarantees in the case of misuse are valuable features.
The second part of this thesis proposes three authenticated encryption schemes: RIV, SIV-x, and DCT. RIV is robust against nonce misuse and the release of unverified plaintexts. Both SIV-x and DCT provide high security independent of nonce repetitions. As the core of SIV-x, this thesis revisits the proof of a highly secure parallel MAC, PMAC-x, revises its details, and proposes SIV-x as a highly secure authenticated encryption scheme. Finally, DCT is a generic approach to n-bit secure deterministic AE that expands the ciphertext plus tag by no more than n bits beyond the plaintext length.
From its first part, this thesis aims to extend the understanding of (1) the cryptanalysis of the round-reduced AES and (2) AES-like tweakable block ciphers. From its second part, it demonstrates how to extend known approaches to (3) robust nonce-based as well as (4) highly secure deterministic authenticated encryption.
Grinding as a comminution process has been one of the most important forms of material processing since the beginnings of mankind - from the milling of grain, through the breaking down of medicinal herbs in mortars, to the production of toners for printers and copiers. Cement grinding in particular is both an economic and an ecological factor in modern societies. More than two thirds of the electrical energy used in cement production is consumed for grinding raw meal and clinker or composite materials. This is only one reason why the grinding process is increasingly moving into the focus of many research and development projects. The complexity of cement grinding is steadily increasing. Simply "grinding to cement fineness" has long been obsolete. Cements are tailor-made, manufactured with a wide variety of combination products, ground separately or together, in different grinding units or with entirely new approaches. In addition, the sector of construction materials recycling, with all its associated challenges, is gaining ever more importance. When asking how the grinding process can, on the one hand, produce high-performance products and, on the other hand, meet the growing demands for sustainability, the grinding unit is at the center of consideration. Accordingly, in addition to a thorough literature review of the state of knowledge, the present work is divided into two major parts:
In the first part, investigations are carried out on conventional grinding units with core products used in the cement industry, such as Portland cement clinker, limestone, fly ash, and granulated blast-furnace slag. To ensure the most effective grinding of cement and composite materials, it is important to know the effect of mill parameters. For this purpose, an extensive test matrix was set up and worked through. The spectrum of analytical methods was equally extensive and was applied both to the ground materials and to the cements and concretes produced from them. It could be shown that, above all, the distinction between mills with grinding media and mills without grinding media has a decisive influence on the granulometry and thus also on the cement performance. The processing properties were affected particularly strongly, especially the water demand and hence the pore structure and, ultimately, the compressive strengths and durability properties of the concretes produced from these cements. In investigations into the intergrinding of limestone and clinker, unfavorable enrichment effects of the easily grindable limestone and of clayey minor constituents led to poorer performance in all cement tests.
The second part is devoted to high-energy grinding. The underlying technology has been applied for decades in other industries, such as pharmaceuticals, biology, and the food industry, and has for some time also been found in cement research; the planetary ball mill and the stirred media mill may be named as representative examples. In addition to fundamental investigations on cement clinker and conventional composite materials such as granulated blast-furnace slag and limestone, the main cement clinker phase alite was also studied. High-energy grinding of conventional composite materials generated additional reactivity at the same granulometry compared with conventional grinding. This was observed above all for cement clinker, which is reactive per se, as well as for latent-hydraulic granulated blast-furnace slag. Ground fly ashes could only be further activated to a minor degree. The general influence of surface enlargement, structural defects, and relaxation effects of a ground product was examined and weighted in detail. The results of the high-energy grinding of alite showed that the structural defects introduced by grinding lead to an increase in reactivity. It was found that surface defects, structural (bulk) defects and, as their counterpart, self-healing effects are the decisive factors governing reactivity. Furthermore, experiments on the grinding of crushed concrete fines were carried out. In particular, it was investigated to what extent crushed concrete fines, as the non-recyclable portion of crushed concrete, can be returned to the building materials cycle in the form of a cement composite material. The grinding technology used for this purpose comprises both conventional mills and high-energy mills. Composite cements with varying proportions of recycled material were produced and examined for their basic properties. The so-called "activation coefficient" was introduced to assess product quality. It turned out that the return of crushed concrete fines as a potential composite material depends essentially on the proportion of hardened cement paste. For example, pure hardened cement paste as a ground composite material showed better performance than crushed concrete fines containing aggregate. With respect to the measured heats of hydration and compressive strengths, the activation coefficient decreased with decreasing degree of abstraction. The activation coefficient likewise decreased with increasing degree of substitution. For comparison, the same materials were processed in conventional mills. The results achieved here can in part be judged equivalent to those of high-energy grinding. Consequently, in the activation of recycled materials, it is less the grinding technology than the proportion of activatable hardened cement paste that is decisive.
During the previous decades, the growing demand for security in the digital world, e.g., the Internet, led to numerous groundbreaking research topics in the field of cryptography. This thesis focuses on the design and analysis of cryptographic primitives and schemes to be used for the authentication of data and communication endpoints, i.e., users. It is structured into three parts. In the first part, we present the first freely scalable multi-block-length block-cipher-based compression function (Counter-bDM), accompanied by a thorough security analysis regarding its preimage and collision security. The second and major part is devoted to password hashing. It is motivated by the large number of passwords leaked in recent years and by our discovery of side-channel attacks on scrypt – the first modern password scrambler that allowed parameterizing the amount of memory required to compute a password hash. After summarizing which properties we expect from a modern password scrambler, we (1) describe a cache-timing attack on scrypt based on its password-dependent memory-access pattern and (2) outline an additional attack vector – garbage-collector attacks – that exploits optimizations which may skip overwriting the internally used memory. Based on our observations, we introduce Catena – the first memory-demanding password-scrambling framework that allows a password-independent memory-access pattern for resistance to the aforementioned attacks. Catena was submitted to the Password Hashing Competition (PHC) and, after two years of rigorous analysis, ended up as a finalist, gaining special recognition for its agile framework approach and side-channel resistance. We provide six instances of Catena suitable for a variety of applications. We close the second part of this thesis with an overview of modern password scramblers regarding their functional, security, and general properties, supported by a brief analysis of their resistance to garbage-collector attacks. The third part of this thesis is dedicated to the integrity (authenticity of data) of nonce-based authenticated encryption schemes (NAE). We introduce the so-called j-IV-Collision Attack, which yields an upper bound for an adversary that is provided with a first successful forgery and tries to efficiently compute j additional forgeries for a particular NAE scheme (in short: reforgeability). Additionally, we introduce the corresponding security notion j-INT-CTXT and provide a comparative analysis (regarding j-INT-CTXT security) of the third-round submissions to the CAESAR competition and the four classical and widely used NAE schemes CWC, CCM, EAX, and GCM.
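The contrast between scrypt's password-dependent accesses and a password-independent pattern can be sketched as follows; this toy function is loosely modeled on Catena's bit-reversal graph and is not the actual Catena specification:

```python
import hashlib

def bit_reverse(i, bits):
    """Reverse the lowest `bits` bits of i (a public, fixed permutation)."""
    return int(format(i, f'0{bits}b')[::-1], 2)

def memory_hard_hash(password, salt, g=10):
    """Minimal sketch of a password-INDEPENDENT memory-access pattern,
    loosely modeled on Catena's bit-reversal graph (not the actual Catena
    specification). The indices touched depend only on the public
    parameter g, so a cache-timing observer learns nothing about the
    password - unlike scrypt's password-dependent accesses."""
    n = 1 << g
    h = lambda *parts: hashlib.sha256(b''.join(parts)).digest()
    v = [h(password, salt)]
    for i in range(1, n):                       # sequential memory fill
        v.append(h(v[i - 1]))
    for i in range(1, n):                       # fixed bit-reversal pass
        v[i] = h(v[i - 1], v[bit_reverse(i, g)])
    return v[n - 1].hex()

print(memory_hard_hash(b'correct horse', b'salt')[:16])
```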
This cumulative dissertation investigates aspects of consumer decision making in hedonic contexts and its implications for the marketing of media goods through a series of three empirical studies. All three studies take place within a common theoretical framework of decision-making models, applying parts of the framework in novel ways to solve real-world marketing research problems (studies 1 and 2) and examining theoretical relationships between variables within the framework (study 3). One notable way in which the studies differ is their theoretical treatment of the hedonic component of decision making, i.e. the role and conceptualization of emotions.
Developing and emerging tropical Asian countries have encountered fast urban development due to the migration of farmers seeking a better life in the city. This has resulted in a lack of appropriate infrastructure and inadequate social services in many cities. Municipal solid waste management is no exception and is in fact often placed at the bottom of the list of priorities in cities' urban management plans, since laws and regulations must first be formulated and implemented. Unmanaged municipal solid waste leads to air pollution, disease, and soil and water contamination. In tropical climates these problems are compounded by high temperatures, high humidity, heavy rainfall, and frequent flooding. Stagnant water and leachate from waste quickly become breeding grounds for insects, rodents, and bacteria, thus creating a health hazard for workers and local populations. Moreover, water and groundwater contamination may lead to serious environmental degradation, with direct impacts on water supplies and the fast degradation of agricultural products, the backbone of most tropical Asian countries. Many cities still allow or tolerate the dumping of waste in uncontrolled sites, and open burning that disperses particulates most likely containing dioxins and furans. Even with increasingly scarce land availability within or in proximity to the cities, sanitary landfill is still the most often chosen disposal method around Asia because of its lower cost when compared to modern treatment systems. Yet most of these landfill sites have no proper lining, daily covering, methane recovery devices, or leachate control systems, nor do they have long-term closure and monitoring plans, which implies short- and long-term hazards. Some municipalities opted for incineration, which usually entails high operation and maintenance costs because of the need for supplemental fuel and often inappropriate running conditions. Although tropical conditions appear to favor certain disposal systems such as composting, appropriate technology needs to be identified in order to reduce operation and maintenance costs while ensuring good-quality outputs; compost plants have often been closed because of poor-quality products due to the high content of plastic and glass particulates in the finished product. Tropical Asian cities are now required to identify affordable and sustainable solutions for the management of the increasing amount of waste generated daily, while ensuring minimal environmental impact, social acceptance, and minimal land use. The purpose of this dissertation was to develop a user-friendly decision-making tool for public administrators and government officials in tropical Asian developing and emerging cities. This tool was developed based on a list of selected decision-making issues necessary for making an informed decision. The decision-making tool is to be used by decision-makers in making a preliminary assessment of the most appropriate waste management and treatment system for their municipality. Tropical Asian cities must consider a number of issues when deciding on their waste management plan, such as the continuously changing quantity and composition of waste associated with the increasing population and income per capita, the high humidity levels, and the often limited financial resources. Other determinant factors include legal, political, institutional, social, and technical issues.
Furthermore, administrators must realize the importance of each stage involved in waste management, which includes waste generation, collection, transport, waste characteristics, disposal, and treatment. To better understand the complexity of the issues involved in tropical Asian municipalities, the city of Bangkok, Thailand's largest city and capital, was selected as a case study for the management of its 9,000 tonnes of waste generated daily. Numerous interviews and meetings, along with the review of documents and reports and site visits, offered an inside view of the tropical city's various decision-making issues regarding its waste management plan and allowed specific problems encountered by the city's decision-makers to be examined. The review and analysis of the decision-making issues involved in Bangkok's waste management plan showed how the decision-making tool can be used in various tropical Asian cities. In conclusion, waste management in an emerging tropical country involves specific challenges that need to be addressed. Economic, technical, and social criteria need to be fully understood to enable government officials to select the most appropriate urban waste management system. Limited budgets, lack of public awareness, and poor system management often lead decision-makers to choose what appears to be the best solution in the short term but proves more costly over the years. Weather conditions and the scarcity of land in proximity to the city make waste management especially challenging. The decision-making framework offers decision-makers a tool that facilitates the understanding and identification of the key issues necessary for formulating a sustainable urban waste management plan and for selecting a technically, economically, and socially acceptable integrated MSW management system. A detailed feasibility study and master plan would follow the preliminary study to define the plant's specifications, its location, and its financing.
The main objective of the present work is to establish a link between the scientific fields of engineering seismology and structural engineering. In essence, it deals with the application and enhancement of methods from engineering seismology as well as their junctions with the fields of structural and earthquake engineering. Based on real earthquake damage inflicted on multi-story reinforced-concrete frame buildings, the influence of local site effects on the grade of structural damage is worked out, relying on comprehensive investigations conducted during numerous field missions of the German TaskForce after damaging earthquakes in Venezuela and Türkiye. Instrumental investigations on both the structure and its local subsoil, carried out to identify the damage potential of seismic ground motion, take center stage in the thesis. It is examined whether an estimated seismic demand, representative in amplitude level and frequency characteristics, is able to cause structural damage, considering the vulnerability of the structure itself as well as the local site and subsoil conditions. The investigations concentrate on selected RC frame structures with and without masonry infill walls.
The specific socio-political frame and context of the Federal Republic of Yugoslavia (SRJ) was in many ways unique in Europe. The way social space was produced, starting from the mid-eighties in the former Socialist Federal Republic of Yugoslavia (SFRJ) in a period of severe economic and political crises, and later in the new independent republics formed after its disintegration, was extremely harsh. The new SRJ faced an especially peculiar context due to the UN sanctions introduced in 1992 after the clashes in Bosnia and cases of ethnic cleansing. One of the causes of the production of such a drastic social space can be seen in the strongest wave of ethnonationalism recorded in recent European history, accompanied by an equally strong wave of populism, which, interestingly enough, were conceived as a program of Serbian national and cultural renaissance in the highest cultural institutions in Serbia, such as the Serbian Academy of Sciences and Arts and the Association of Writers of Serbia, and supported by the Serbian Orthodox Church. After being recognized as a powerful homogenizing force by the communist elite that came to power, Slobodan Milošević being its strongest representative, these ideological matrices induced their own reproduction in all spheres of society. On the other side, the UN sanctions and the isolation of the country caused the "economy of destruction", an economic collapse with the highest rate of inflation ever recorded. The effects of these phenomena were devastating for the new SRJ, where the social milieu thus produced was dominated by patriarchalism, authoritarianism, a warlike spirit, xenophobia, and national-chauvinism. In Milošević's Serbia of the 1990s, after the introduction of multi-party parliamentary democracy, two public spheres functioned autonomously: an official one holding all the monopolistic instruments of the former communist ideological structures, and an alternative, oppositional one supported only by a few alternative media houses and relying mainly on the streets for public address and speech. When the wave of ethnonationalism and populism came back from the political realm to the sphere of culture and contaminated it, the highest national institutions of culture started to reproduce this ideological matrix. The task of the artworks was to glorify the history of the Serbian people, and they can be read as symptoms of the social pathology of the milieu in which they originated. Their performative role was to contribute to the production of such a social space and to reproduce the hate speech so present in all the media. For the artists who did not want to conform to the dominant ideological matrix, the trauma experienced had a different effect and caused strong reactions. One response was the withdrawal from the social sphere into closed, hermetic artistic circles, a strategy defined as active escapism; another was gathering into groups and associations with the aim of criticizing, opposing, and facing the social reality with engaged artworks. Finally, I focus on different artistic strategies towards the produced social space and analyze both the art practices that reproduced the dominant ideological matrix in the service of the regime and those that tried to enter the public sphere in a critical way and offer an alternative model of the (cultural) public sphere.
The paradigm for the analysis of the Serbian art scene or community in the period of sanctions and isolation, mostly in the first half of the nineties but also encompassing the whole decade, was that of “art in the closed society”. As explanatory as this formulation was for the situation in Serbia under the sanctions, my perspective on the problem is that self-isolation by the artists was more important than the outer wall of barriers, and what mattered was the decision of the majority of the artists to stay out of the public and social spheres. In the global age of the informational society, where the Internet provided all necessary information on current happenings in art, the paradigm of the closed society serves rather as a psychological figure of self-isolation and withdrawal from a reality that was too hard to bear. I therefore focus mainly on art practices that tried to deconstruct the dominant ideological matrix, create platforms and arenas where artists could engage in cultural activity and raise critical issues, and eventually construct an alternative cultural public sphere in which many “marginal” voices could be heard and many micro-social spaces could become visible.
Phosphorus enrichment in the treatment of pig manure in China using anaerobic digestion technology
(2008)
Phosphorus (P) is a key irreplaceable nutrient element in all life forms. Almost all phosphorus used by society is mined from non-renewable phosphate rock. Approximately 80% of global phosphate rock consumption is used for fertilizer production. However, as a finite resource, the world phosphate reserve could be exhausted within the next 100-250 years. The phosphate resource in China is also limited; the exploitable deposits could be exhausted within 70 years. Investigations show that the largest recoverable phosphate resource in China is found in animal manure. It was estimated that the potential phosphate resource in intensive-scale animal plants accounts for 47% of the country's total annual consumption of phosphate rock. Pig manure contains phosphorus and nitrogen in high concentrations. The objective of this study is to investigate forced P-precipitation in pig manure combined with anaerobic digestion; while biogas is generated, an enriched P-containing digested manure sludge can be obtained. Anaerobic digestion experiments indicated that the total concentrations of phosphorus (TP) and Kjeldahl nitrogen (TKN) remained basically constant before and after anaerobic digestion. However, the composition of nitrogen and phosphorus in digested manure was quite different: 37.7% of phosphorus existed as PO4-P in the raw pig manure, whereas 20.8% was present as PO4-P in the digested pig manure. NH4-N accounted for 50.4% of TKN in raw pig manure, while most of the TKN in digested manure (79.3%) was composed of NH4-N. The pH value of pig manure rose by 0.88 units after anaerobic digestion. PO4-P was reduced by 45% during anaerobic digestion. The average molar ratios of Mg/P and Ca/P achieved were 1.3 and 1.7. It was found that solid/liquid separation has little influence on the change in the molar ratios. The optimal position for P-precipitation is after anaerobic digestion, and P-precipitation should be conducted in homogeneous digested pig manure. The ideal pH range for P-precipitation is between 8.0 and 9.5. In the pH range of 8.8-9.5, struvite precipitation dominates the precipitation reaction; the presence of calcium ions results in a competitive reaction with magnesium ions. In the pH range of 8.0-8.8, calcium phosphate is apt to form. Both MgCl2·6H2O and MgO can be adopted as a magnesium source. MgO is suitable for supplementation in raw manure: without the addition of other alkali, the pH value rose to 8.5, and nearly 85% of the soluble phosphorus (PO4-P) could be removed from the liquid portion. MgCl2·6H2O has good solubility; when it was used at a pH value of 9.0, the required equilibrium time was 30 minutes. The appropriate Mg2+/PO4-P molar ratio was 1.3. Under these conditions, whether with raw or digested manure, 90% of PO4-P could be removed. Forced P-precipitation combined with anaerobic digestion is suitable for application in China. More than 90% of the soluble phosphorus could be removed from the liquid portion of pig manure through forced P-precipitation. With the aid of flocculants, 95.7% of the total phosphorus could be precipitated in the final manure solid.
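The dosing arithmetic implied by the reported Mg2+/PO4-P ratio can be sketched as follows; only the 1.3 molar ratio is taken from the study, while the feed concentration is a hypothetical example:

```python
# Toy dosing arithmetic for forced P-precipitation (illustrative numbers):
# struvite (MgNH4PO4*6H2O) binds Mg and P in a 1:1 molar ratio, and the
# study found an Mg2+/PO4-P dosing ratio of 1.3 to be appropriate.
M_P = 30.97            # g/mol, phosphorus
M_MGCL2_6H2O = 203.3   # g/mol, MgCl2*6H2O

def mgcl2_dose_g_per_m3(po4_p_mg_per_l, mg_to_p=1.3):
    """Grams of MgCl2*6H2O per m^3 of (digested) manure liquid needed to
    supply the target Mg2+/PO4-P molar ratio."""
    mol_p_per_m3 = po4_p_mg_per_l / M_P        # mg/L equals g/m^3
    return mg_to_p * mol_p_per_m3 * M_MGCL2_6H2O

# e.g. a hypothetical digested manure liquid with 300 mg/L PO4-P:
print(round(mgcl2_dose_g_per_m3(300.0)))       # ~2560 g per m^3
```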
This dissertation examines ideas of virtuality with regard to mobile media technologies. It combines a media-philosophical and a technology-historical perspective: the virtual is explored through the history of philosophy and, connected with this, the emergence of mobile media technologies such as the mobile phone is reconstructed. Central to this is the question of how understandings of the world change through mobile telecommunication and how the virtual has so far been conceived in the context of notions of reality.
This dissertation concerns the changing role of fashion in the context of modern cities. In approaching this process, the research investigates the media discourse based on representations of fashion by cities and of cities by fashion. Moreover, this research focuses on fashion understood as a multidimensional phenomenon and aims to explain urban spaces through fashion terms, actions, and garments. Additionally, cities are considered from the cultural geography approach, which highlights the cultural component of urban spaces expressed in social and cultural practices in physical reality. Following this idea, it is suggested here that fashion today not only participates in urban life as a significant component but also creates city images and representations of urban lifestyle through the fashion paradigm. In other words, fashion redefines urban spaces; at the same time, urban spaces are interpreted as a stage for fashion processes.
By integrating the fields of urban studies and fashion studies within social research, this dissertation offers a discussion that considers the fashion phenomenon as more than just one urban phenomenon of modern reality. On the one hand, this discussion concerns the re-conceptualization of urban phenomena under the influence of fashion; on the other hand, it concerns the re-contextualization of fashion in the city. The empirical focus is based on the media context of fashion magazines, in which the variety of possibilities to represent fashion and cities leads to promising interpretations and analyses. The idea of representation specifies the ways of constructing the notion of urban space as fashionable space and the notion of fashion as placed in the urban context.
Search engines are very good at answering queries that look for facts. Still, information needs that concern forming opinions on a controversial topic or making a decision remain a challenge for search engines. Since they are optimized to retrieve satisfying answers, search engines might emphasize a specific stance on a controversial topic in their ranking, amplifying bias in society in an undesired way. Argument retrieval systems support users in forming opinions about controversial topics by retrieving arguments for a given query. In this thesis, we address challenges in argument retrieval systems that concern integrating them in search engines, developing generalizable argument mining approaches, and enabling frame-guided delivery of arguments.
Adapting argument retrieval systems to search engines should start by identifying and analyzing information needs that look for arguments. To identify questions that look for arguments, we develop a two-step annotation scheme that first determines whether the context of a question is controversial and, if so, assigns the question one of several types: factual, method, or argumentative. Using this annotation scheme, we create a question dataset from the logs of a major search engine and use it to analyze the characteristics of argumentative questions. The analysis shows that the proportion of argumentative questions on controversial topics is substantial and that they mainly ask for reasons and predictions. The dataset is further used to develop a classifier that uniquely maps questions to the question types, reaching a convincing F1-score of 0.78.
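Since the abstract does not spell out the classifier, a simple baseline sketch illustrates the task; the questions, labels, and model choice below are invented placeholders, not the thesis's actual model:

```python
# Sketch: TF-IDF features plus logistic regression as a stand-in for the
# question-type classifier (factual / method / argumentative).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

questions = [
    "when was nuclear power first used",        # factual
    "how do i install solar panels",            # method
    "should nuclear power be banned",           # argumentative
    "why is wind energy better than coal",      # argumentative
]
labels = ["factual", "method", "argumentative", "argumentative"]

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(questions, labels)
print(clf.predict(["should we abandon fossil fuels"]))
```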
While the web offers an invaluable source of argumentative content to respond to argumentative questions, it is characterized by multiple genres (e.g., news articles and social fora). Exploiting the web as a source of arguments relies on developing argument mining approaches that generalize over genre. To this end, we approach the problem of how to extract argument units in a genre-robust way. Our experiments on argument unit segmentation show that transfer across genres is rather hard to achieve using existing sequence-to-sequence models.
Another property of text which argument mining approaches should generalize over is topic. Since new topics appear daily on which argument mining approaches are not trained, argument mining approaches should be developed in a topic-generalizable way. Towards this goal, we analyze the coverage of 31 argument corpora across topics using three topic ontologies. The analysis shows that the topics covered by existing argument corpora are biased toward a small subset of easily accessible controversial topics, hinting at the inability of existing approaches to generalize across topics. In addition to corpus construction standards, fostering topic generalizability requires a careful formulation of argument mining tasks. Same side stance classification is a reformulation of stance classification that makes it less dependent on the topic. First experiments on this task show promising results in generalizing across topics.
To be effective at persuading their audience, users of an argument retrieval system should select arguments from the retrieved results based on which frame of the controversial topic they emphasize. An open challenge is to develop an approach to identify the frames of an argument. To this end, we define a frame as a subset of arguments that share an aspect. We operationalize this model via an approach that identifies and removes the topic of arguments before clustering them into frames. We evaluate the approach on a dataset that covers 12,326 frames and show that identifying the topic of an argument and removing it helps to identify its frames.
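The "remove the topic, then cluster" idea can be sketched as follows; the arguments, the topic vocabulary, and the choice of TF-IDF with k-means are illustrative stand-ins for the thesis's actual pipeline:

```python
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

topic_terms = {"nuclear", "power", "energy"}   # assumed topic vocabulary
arguments = [
    "nuclear power keeps electricity prices low",
    "nuclear energy risks catastrophic accidents",
    "nuclear power cuts carbon emissions",
    "cheap nuclear electricity helps the economy",
]

# Remove topic terms so clusters group by the remaining aspect (the frame).
detopiced = [" ".join(w for w in a.split() if w not in topic_terms)
             for a in arguments]
X = TfidfVectorizer().fit_transform(detopiced)
frames = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(frames)  # cluster labels, e.g. economy-framed vs. safety-framed
```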
This thesis focuses on the analysis and design of hash functions and authenticated encryption schemes that are blockcipher based. We give an introduction into these fields of research – taking a blockcipher-based point of view – with special emphasis on double-length, double-call blockcipher-based compression functions. The first main topic (thesis parts I-III) is the analysis and design of hash functions. We start with a collision security analysis of some well-known double-length blockcipher-based compression functions and hash functions: Abreast-DM, Tandem-DM, and MDC-4. We also propose new double-length compression functions that have elevated collision security guarantees. We complement the collision analysis with a preimage analysis, stating (near) optimal security results for Abreast-DM, Tandem-DM, and Hirose-DM; some generalizations are discussed as well. These are the first preimage security results for blockcipher-based double-length hash functions that go beyond the birthday barrier.
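For orientation, the classic single-length construction underlying these designs is Davies-Meyer, h(H, M) = E_M(H) ⊕ H; double-length schemes such as Abreast-DM and Tandem-DM run two such lanes with cross-coupled keys. A minimal sketch with AES-128 standing in for the generic block cipher E (via the pycryptodome package):

```python
# Davies-Meyer compression function h(H, M) = E_M(H) XOR H, sketched with
# AES-128 as the block cipher E.  pip install pycryptodome
from Crypto.Cipher import AES

def davies_meyer(h: bytes, m: bytes) -> bytes:
    """Compress a 16-byte chaining value h with a 16-byte message block m."""
    assert len(h) == 16 and len(m) == 16
    e = AES.new(m, AES.MODE_ECB).encrypt(h)    # E_M(H): message block as key
    return bytes(a ^ b for a, b in zip(e, h))  # feed-forward XOR

h0 = bytes(16)                                  # all-zero IV (illustrative)
print(davies_meyer(h0, b'sixteen byte msg').hex())
```

The feed-forward XOR is what makes the function non-invertible even though the block cipher itself is a permutation; the double-length constructions analyzed in this thesis chain two such computations per message block to obtain a 2n-bit chaining value.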
We then raise the abstraction level and analyze the notion of 'hash function indifferentiability from a random oracle'. Thus we no longer focus on how to obtain a good compression function but, instead, on how to obtain a good hash function using (other) cryptographic primitives. In particular, we give some examples where this strong notion of hash function security might give questionable advice for building a practical hash function. In the second main topic (thesis part IV), which is on authenticated encryption schemes, we present an on-line authenticated encryption scheme, McOEx, that simultaneously achieves privacy and authenticity and is secure against nonce misuse. It is the first dedicated scheme that achieves high standards of security and – at the same time – is on-line computable.