
This text describes the intensive investigation of honeycomb panels made from paper materials, which can assume new spatial configurations through folding processes and thereby extend their original range of applications. The solution approaches presented operate at the intersection of architecture and structural engineering, since the folded components are not only extremely load-bearing but also possess an aesthetic form. The developed methods and constructions are presented at a high architectural level and verified with simple engineering methods. Geometric methods are applied to find solutions, as are constructive rules of thumb and research from architecture and science.
The focus of the work lies on the investigation of folds in honeycomb panels. During the engagement with the topic, however, many further aspects appeared highly interesting and worthy of study. As the theoretical foundation of this work, the historical development and societal significance of paper and paper materials are therefore analysed and their production processes examined. This approach allows the potential and significance of paper as a material to be assessed. The context of the work is thereby strengthened and leads to interesting directions for future research.
Intensive investigations are devoted to the geometric determination of folds in honeycomb panels made from paper materials, as well as to their realisation as structural components. The static properties of the elements and their constructive potential are also explored and documented. Important impulses from research and technology feed into the work and allow the results to be situated in an architectural context. Test series and material studies on prototypes corroborate the results of virtual and computational studies. Concepts for the parametric calculation and visualisation of the research results are presented, pointing to viable future planning aids for industry. Numerous test series on a wide range of sealing concepts lead to the realisation of a remarkable experimental building. It allows the developed components to be studied permanently under realistic conditions and confirms their performance. This not only enables continuous monitoring and evaluation of the performance data but also provides visible proof that efficient, high-quality architecture exploiting the enormous design potential of folded honeycomb panels can be realised with paper materials.

The distinguishing structural feature of single-layered black phosphorus is its puckered structure, which leads to many novel physical properties. In this work, we first present a new parameterization of the Stillinger–Weber potential for single-layered black phosphorus. In doing so, we reveal the importance of a cross-pucker interaction term in capturing its unique mechanical properties, such as a negative Poisson's ratio. In particular, we show that the cross-pucker interaction enables the pucker to act as a re-entrant hinge, which expands in the lateral direction when it is stretched in the longitudinal direction. As a consequence, single-layered black phosphorus has a negative Poisson's ratio in the direction perpendicular to the atomic plane. As an additional demonstration of the impact of the cross-pucker interaction, we show that it is also the key factor that enables capturing the edge stress-induced bending of single-layered black phosphorus that has been reported in ab initio calculations.
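The sign convention behind a negative Poisson's ratio can be made concrete with a short sketch; the strain values below are hypothetical, not taken from the parameterization above:

```python
def poisson_ratio(strain_longitudinal, strain_transverse):
    """Poisson's ratio nu = -eps_transverse / eps_longitudinal (engineering convention)."""
    return -strain_transverse / strain_longitudinal

# Conventional material: stretching (positive longitudinal strain) contracts
# the transverse direction (negative transverse strain) -> nu > 0.
nu_conventional = poisson_ratio(0.01, -0.003)   # ≈ 0.3

# Re-entrant pucker: the out-of-plane thickness *expands* under longitudinal
# tension, so both strains are positive -> nu < 0 (auxetic response).
nu_auxetic = poisson_ratio(0.01, 0.002)         # ≈ -0.2
```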

Around a quarter of total final energy consumption (26%) in Germany is attributable to the residential sector, which therefore accounts for a considerable share of the potential energy savings. In view of the European Union's climate protection target of increasing energy efficiency by 20% compared with 1990, the question arises as to what savings potential actually exists in the residential sector and how it can be quantified. In this work, the influence of the parameters affecting final energy consumption is determined by means of a sensitivity analysis. The results of the sensitivity analysis show that the most influential parameters on final energy consumption are the required indoor temperature, the length of the heating period, the outdoor temperature (degree days) and the number of dwellings. These are variables that cannot be regulated by ordinances. The only parameter that can be regulated and has a significant influence on final energy consumption is the efficiency of the systems/appliances for space heating, hot water and cooking (and, to a small extent, the efficiency of the lighting used). To quantify the energy savings potential of the German residential sector with respect to this efficiency, data on the long-term development (1990-2010) of the efficiency of systems and appliances were analysed. Using various figures from the literature and saturation curves, the development of the efficiency of the systems/appliances by energy source between 1990 and 2010 was determined. The resulting saturation curves make it possible to determine the development of useful energy consumption in the German residential sector.
It was found that the difference between useful energy consumption and final energy consumption decreased by 12% over the period considered, and that the energy savings potential can vary considerably depending on the energy source (currently by more than 35 percentage points). With regard to the above-mentioned climate protection target, this work analyses various development scenarios based on the efficiency of the systems and the energy sources. It becomes clear that the theoretical energy savings potential of the German residential sector with respect to average efficiency is only between 4 and 15%. This means that a significant reduction of final energy demand in the residential sector can only take place if other energy-saving measures are considered. Recommendations to this end are given based on the results of the sensitivity analysis.
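The kind of sensitivity screening described above can be sketched as a one-at-a-time elasticity estimate; the toy model and numbers below are purely illustrative assumptions, not the model analysed in this work:

```python
def one_at_a_time_sensitivity(model, base, delta=0.05):
    """Relative output change per relative input change (elasticity), one input at a time."""
    y0 = model(base)
    sensitivities = {}
    for name, value in base.items():
        perturbed = dict(base)
        perturbed[name] = value * (1.0 + delta)
        sensitivities[name] = (model(perturbed) - y0) / (y0 * delta)
    return sensitivities

# Toy final-energy model (illustrative only, NOT the thesis's model):
# demand ~ dwellings * degree_days / system_efficiency
model = lambda p: p["dwellings"] * p["degree_days"] / p["efficiency"]
base = {"dwellings": 40e6, "degree_days": 3500.0, "efficiency": 0.85}
s = one_at_a_time_sensitivity(model, base)
# dwellings and degree_days enter linearly (elasticity ~ +1);
# efficiency enters inversely, so its coefficient is negative
```

Variance-based methods (e.g. Sobol indices) generalize this screening when parameter interactions matter.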

Development of a summer reference year for determining the summer overheating of buildings
(2015)

In Europe, summer-focused warm reference years have so far been derived from long-term climate data using different country-specific methods, which generally rely on dry-bulb temperature alone and result in the selection of a contiguous real summer half-year. However, simulation results on the summer overheating of naturally ventilated buildings in Germany and Great Britain show, for some weather stations, less overheating for simulations with the summer-focused reference year than for those with the corresponding test reference year (TRY) for the same location. This applies in particular when individual months are compared. Besides the selection of a complete half-year, which may contain both extremely warm and comparatively cool months, this is mainly due to the fact that solar radiation, which plays an important role in summer overheating of buildings, is not taken into account when selecting a warm reference year. A reliable, generally accepted method for generating summer-focused reference years therefore appears necessary, also in view of the legal framework in the European Union, which favours natural ventilation strategies for new buildings and refurbishments. This work presents an approach for generating a summer reference year (SRY) from the TRY of a given location and long-term climate data. The existing TRY data are scaled to match the dry-bulb temperature and solar radiation conditions of near-extreme candidate years, which are selected separately via a statistical approach. Subsequently, the wet-bulb temperature, wind speed and air pressure of the TRY are adjusted through linear correlations with the dry-bulb temperature to obtain the corresponding SRY data.
The advantage of this method is that the underlying weather pattern of the TRY is preserved, so that a clear relation between SRY and TRY exists, which ensures the comparability of simulation results. Comparative building simulations with the underlying TRY and long-term climate data sets demonstrate that the SRY is suitable for determining summer overheating in naturally ventilated buildings. Furthermore, it is shown that, in contrast to the direct use of a candidate year for a near-extreme summer, the SRY allows a month-by-month comparison with the TRY and is free of unrepresentative peculiarities that may be present in the corresponding candidate years.
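The two adjustment steps (shifting the TRY towards near-extreme conditions and propagating the change to correlated variables via linear regression) can be sketched as follows; the function names and toy data are illustrative assumptions, not the published procedure:

```python
def fit_line(x, y):
    """Least-squares intercept and slope of y = a + b*x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    return my - b * mx, b

def shift_to_target_mean(series, target_mean):
    """Shift a TRY series so its mean matches the near-extreme candidate mean."""
    delta = target_mean - sum(series) / len(series)
    return [v + delta for v in series]

def adjust_by_correlation(t_old, t_new, y_old):
    """Propagate the dry-bulb change to a correlated variable via the fitted line."""
    _, b = fit_line(t_old, y_old)
    return [y + b * (tn - to) for y, to, tn in zip(y_old, t_old, t_new)]

# toy dry-bulb series (deg C) and a correlated variable (e.g. wet-bulb temperature)
t_try = [14.0, 18.0, 22.0]
t_sry = shift_to_target_mean(t_try, 20.0)     # warmed by +2 K
wet_try = [10.0, 12.0, 14.0]                  # slope 0.5 against dry-bulb
wet_sry = adjust_by_correlation(t_try, t_sry, wet_try)
```

Keeping the TRY series and only shifting/adjusting it preserves the underlying weather pattern, which is exactly the property the method relies on.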

The Carbon journal is pleased to introduce a themed collection of recent articles in the area of computational carbon nanoscience. This virtual special issue was assembled from previously published Carbon articles by Guest Editors Quan Wang and Behrouz Arash, and can be accessed as a set in the special issue section of the journal website homepage: www.journals.elsevier.com/carbon. The article below by our guest editors serves as an introduction to this virtual special issue, and also a commentary on the growing role of computation as a tool to understand the synthesis and properties of carbon nanoforms and their behavior in composite materials.

Media anthropology is a new and interdisciplinary field of research with very different subjects and methods that already seems heavily informed by a comparatively narrow understanding of media as mass media (e.g. TV, the Internet, the social web). Therefore, most theories in this field, at least implicitly, employ a hierarchical and often dichotomous preconception of the two poles of media-human relations, analysing the operationalities and ontologies of the human and the media independently of one another. This article deviates from this line of thought by advocating an expanded, symmetrical and relational understanding of the terms media and human, taking them as always already intermingled facets of a broader dynamic configuration. Starting from a consideration of the historically powerful, yet overlooked medium of the so-called habitat diorama, the heuristic concept of “anthropomediality” is developed. Eventually, this relational approach may open up a new and interesting field of interrogation for (media-)anthropological analysis in general.

The article presents preliminary results and qualitative analysis obtained from the doctoral research provisionally entitled “How do Brazilian ‘battlers’ reside?”, which is in progress at the Institute for European Urban Studies, Bauhaus University Weimar. It critically discusses the contradictions of the production of residences in Brazil made by an emerging social group, lately called the Brazilian new middle class. For the last ten years, a number of government policies have provoked a general improvement of the purchasing power of the poor. Between those who completely depend on the government to survive and the upper middle class, there is a wide (about 100 million people) and economically stable lower middle group, which has found its own ways of dealing with its demand for housing. The conventional models of planning, building and buying are not suitable for their technical, financial and personal needs. Therefore, they are concurrently planners, constructors and residents, building and renovating their own properties themselves, but still with very limited education and technical knowledge and restricted access to good building materials and constructive elements, formal technicians, architects or engineers. On the one hand, the result is an informal and more or less autonomous self-production, with all sorts of technical problems and very interesting and creative spatial solutions to everyday domestic situations. On the other hand, the repercussions for urban space are questionable: although basic infrastructure conditions have improved, building densities are high and green areas are few. Lower middle class neighbourhoods present a restricted collective everyday life. They look like storage spaces for manpower; people who live to work in order to be able to consume—and build—what they could not before.
One question is to what extent the latest economic rise of Brazil has really resulted in social development for lower middle income families in the private sphere, regarding their residences, and in the collective sphere, regarding the neighbourhoods they inhabit and urban space in general.

We conducted extensive molecular dynamics simulations to investigate the thermal conductivity of polycrystalline hexagonal boron-nitride (h-BN) films. To this aim, we constructed large atomistic models of polycrystalline h-BN sheets with random and uniform grain configuration. By performing equilibrium molecular dynamics (EMD) simulations, we investigated the influence of the average grain size on the thermal conductivity of polycrystalline h-BN films at various temperatures. Using the EMD results, we constructed finite element models of polycrystalline h-BN sheets to probe the thermal conductivity of samples with larger grain sizes. Our multiscale investigations not only provide a general viewpoint regarding the heat conduction in h-BN films but also propose that polycrystalline h-BN sheets present high thermal conductivity comparable to monocrystalline sheets.
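Thermal conductivity is typically extracted from EMD trajectories with the Green-Kubo relation, i.e. by integrating the heat-flux autocorrelation function. The scalar sketch below is a minimal illustration under assumed SI units (with J the heat flux per unit volume), not the authors' actual simulation workflow:

```python
def green_kubo_conductivity(flux, dt, volume, temperature, kB=1.380649e-23):
    """kappa = V / (kB * T^2) * integral of <J(0) J(t)> dt (scalar illustration)."""
    n = len(flux)
    nmax = n // 2  # keep only the well-averaged part of the correlation
    acf = [sum(flux[i] * flux[i + t] for i in range(n - t)) / (n - t)
           for t in range(nmax)]
    integral = dt * (sum(acf) - 0.5 * (acf[0] + acf[-1]))  # trapezoidal rule
    return volume / (kB * temperature ** 2) * integral

# toy series: a constant flux yields a positive (unconverged) estimate
kappa = green_kubo_conductivity([1.0] * 64, dt=1e-15, volume=1e-26,
                                temperature=300.0)
```

In practice the three Cartesian flux components are averaged and the integral is truncated at its convergence plateau.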

We present StarWatch, our application for real-time analysis of radio astronomical data in a virtual environment. Serving as an interface to radio astronomical databases or being applied to live data from radio telescopes, the application supports various data filters measuring signal-to-noise ratio (SNR), Doppler drift and the degree of signal localization on the celestial sphere, along with other useful tools for signal extraction and classification. Originally designed for the database of narrow-band signals from the SETI Institute (setilive.org), the application has recently been extended for the detection of wide-band periodic signals, necessary for the search for pulsars. We will also address the detection of weak signals possessing arbitrary waveforms and present several data filters suitable for this purpose.

In this study, an application of evolutionary multi-objective optimization algorithms to the optimization of sandwich structures is presented. The solution strategy is known as the Elitist Non-Dominated Sorting Evolution Strategy (ENSES), wherein Evolution Strategies (ES) serve as the Evolutionary Algorithm (EA) within the elitist Non-dominated Sorting Genetic Algorithm (NSGA-II) procedure. Evolutionary algorithms are a suitable approach for multi-objective optimization problems because they are inspired by natural evolution, which is closely linked to Artificial Intelligence (AI) techniques, and elitism has been shown to be an important factor in improving evolutionary multi-objective search. To evaluate the performance of ENSES, the well-known study cases of sandwich structures are reconsidered. For Case 1, the goals of the multi-objective optimization are minimization of the deflection and the weight of the sandwich structure. The length and the core and skin thicknesses are the design variables of Case 1. For Case 2, the objective functions are the fabrication cost, the beam weight and the end deflection of the sandwich structure. There are four design variables, i.e. the weld height, the weld length, the beam depth and the beam width, in Case 2. Numerical results are presented in terms of Pareto-optimal solutions for both cases.
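The core of any non-dominated sorting procedure is the Pareto dominance test; a minimal sketch for a minimization problem follows (the weight/deflection values are toy numbers, not the paper's cases):

```python
def dominates(a, b):
    """True if a Pareto-dominates b (minimization): no worse anywhere, better somewhere."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Rank-1 set: points not dominated by any other point."""
    return [p for p in points if not any(dominates(q, p) for q in points)]

# toy (weight, deflection) designs, both objectives to be minimized
designs = [(2.0, 5.0), (3.0, 3.0), (4.0, 4.0), (5.0, 1.0)]
front = pareto_front(designs)  # (4.0, 4.0) is dominated by (3.0, 3.0)
```

NSGA-II repeats this test to peel off successive fronts and sorts by crowding distance within each front; elitism then merges parent and offspring populations before selection.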

Building Information Modeling is a powerful tool for design and for keeping a consistent set of data in virtual storage. For application in the realization phases and on site, it needs further development. The paper describes the main challenges and main features that will help software development better serve the needs of construction site managers.

Modern distributed engineering applications are based on complex systems consisting of various subsystems that are connected through the Internet. Communication and collaboration within an entire system requires reliable and efficient data exchange between the subsystems. Middleware developed within the web evolution during the past years provides reliable and efficient data exchange for web applications, which can be adopted for solving the data exchange problems in distributed engineering applications. This paper presents a generic approach for reliable and efficient data exchange between engineering devices using existing middleware known from web applications. Different existing middleware is examined with respect to the suitability in engineering applications. In this paper, a suitable middleware is shown and a prototype implementation simulating distributed wind farm control is presented and validated using several performance measurements.
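Much of the middleware in question follows a publish/subscribe pattern; the in-process sketch below illustrates the idea only (a real deployment would use a networked broker, and the topic name is a made-up example):

```python
class Broker:
    """Minimal topic-based publish/subscribe hub (in-process illustration)."""

    def __init__(self):
        self._subscribers = {}

    def subscribe(self, topic, callback):
        """Register a callback to be invoked for every message on a topic."""
        self._subscribers.setdefault(topic, []).append(callback)

    def publish(self, topic, message):
        """Deliver a message to all callbacks subscribed to the topic."""
        for callback in self._subscribers.get(topic, []):
            callback(message)

# a control node subscribes to (hypothetical) turbine telemetry
broker = Broker()
received = []
broker.subscribe("windfarm/turbine1/power", received.append)
broker.publish("windfarm/turbine1/power", {"kW": 1840})
```

Real middleware (message queues, WebSocket push, etc.) adds serialization, delivery guarantees and network transport on top of this pattern.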

What is nowadays called (classic) Clifford analysis consists in the establishment of a function theory for functions belonging to the kernel of the Dirac operator. While such functions can very well describe problems of a particle with internal SU(2) symmetries, higher-order symmetries are beyond this theory. Although many modifications (such as Yang–Mills theory) were suggested over the years, they could not address the principal problem: the need for an n-fold factorization of the d’Alembert operator. In this paper we present the basic tools of a fractional function theory in higher dimensions for the transport operator (alpha = 1/2), by means of a fractional correspondence to the Weyl relations via fractional Riemann–Liouville derivatives. A Fischer decomposition, fractional Euler and Gamma operators, a monogenic projection, and basic fractional homogeneous powers are constructed.
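For orientation, the left-sided Riemann–Liouville fractional derivative of order $\alpha \in (0,1)$ that underlies such constructions reads (standard definition; the paper's precise normalization may differ):

```latex
D^{\alpha}_{0+} f(x) \;=\; \frac{1}{\Gamma(1-\alpha)}\,\frac{d}{dx}
\int_{0}^{x} \frac{f(t)}{(x-t)^{\alpha}}\,dt , \qquad 0 < \alpha < 1 .
```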

The stress state of a piecewise-homogeneous elastic body, which has a semi-infinite crack along the interface, under in-plane and antiplane loads is considered. One of the crack edges is reinforced by a rigid patch plate on a finite interval adjacent to the crack tip. The crack edges are loaded with specified stresses. The body is stretched at infinity by specified stresses. External forces with a given principal vector and moment act on the patch plate. The problem reduces to a Riemann-Hilbert boundary-value matrix problem with a piecewise-constant coefficient for two complex potentials in the plane case and for one in the antiplane case. The complex potentials are found explicitly using a Gaussian hypergeometric function. The stress state of the body close to the ends of the patch plate, one of which is also simultaneously the crack tip, is investigated. Stress intensity factors near the singular points are determined.

IFC-based Monitoring Information Modeling for Data Management in Structural Health Monitoring
(2015)

This conceptual paper discusses opportunities and challenges in the digital representation of structural health monitoring systems using the Industry Foundation Classes (IFC) standard. State-of-the-art sensor nodes, collecting structural and environmental data from civil infrastructure systems, are capable of processing and analyzing the data sets directly on board the nodes. Structural health monitoring (SHM) based on sensor nodes that possess so-called “on-chip intelligence” is, in this study, referred to as “intelligent SHM”, and an infrastructure system equipped with an intelligent SHM system is referred to as “intelligent infrastructure”. Although intelligent SHM will continue to grow, there is not yet a well-defined formalism for digitally representing information about sensors, about the overall SHM system, and about the monitoring strategies being implemented (“monitoring-related information”). Based on a review of available SHM regulations and guidelines as well as existing sensor models and sensor modeling languages, this conceptual paper investigates how to digitally represent monitoring-related information in a semantic model. With the Industry Foundation Classes, there exists an open standard for the digital representation of building information; however, it is not possible to represent monitoring-related information using the IFC object model. This paper proposes a conceptual approach for extending the current IFC object model to include monitoring-related information. Taking civil infrastructure systems as an illustrative example, it becomes possible to adequately represent, process, and exchange monitoring-related information throughout the whole life cycle of civil infrastructure systems, which is referred to as monitoring information modeling (MIM). Since this paper is conceptual, additional research efforts are required to further investigate, implement, and validate the proposed concepts and methods.

The paper introduces a systematic construction management approach supporting the expansion of a specified construction process, both automatically and semi-automatically. Throughout the whole design process, many requirements must be taken into account in order to fulfil the demands defined by clients. In implementing those demands into a design concept up to the execution plan, constraints such as site conditions, building codes, and the legal framework are to be considered. However, the complete information needed to make a sound decision is not yet available in the early phase. Decisions are traditionally taken based on experience and assumptions. Due to the vast number of appropriate available solutions, particularly in building projects, it is necessary to make those decisions traceable. This is important in order to be able to reconstruct the considerations and assumptions taken, should the project's objectives change in the future. The research is carried out by means of building information modelling, where rules derived from the standard logic of construction management knowledge are applied. This knowledge comprises a comprehensive interaction among the bidding process, cost estimation, construction site preparation and specific project logistics, which are usually still considered separately. By means of these rules, decisions regarding prefabrication and in-situ execution can be justified. Modifications depending on the information available within the current design stage remain consistently traceable.

From the design experience of arch dams in the past, the shape optimization of arch dams has significant practical value, as it can make full use of material characteristics and reduce construction costs. Suitable variables need to be chosen to formulate the objective function, e.g. to minimize the total volume of the arch dam. Additionally, a series of constraints is derived and a reasonable and convenient penalty function is formed, which can easily enforce the characteristics of the constraints and the optimal design. As the optimization method, a Genetic Algorithm is adopted to perform a global search. Simultaneously, ANSYS is used for the mechanical analysis under coupled thermal and hydraulic loads. One of the constraints on the newly designed dam is that it fulfill structural safety requirements. Therefore, a reliability analysis is applied to offer good decision support for matters concerning predictions of both the safety and the service life of the arch dam. In this way, the key factors that significantly influence the stability and safety of an arch dam can be identified, providing a good basis for preventive measures to prolong the service life of an arch dam and enhance the safety of the structure.
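The penalty-function idea can be sketched as follows; the constraint values are hypothetical, whereas in the actual design they would come from the stress, stability and reliability analyses:

```python
def penalized_objective(volume, constraint_values, penalty=1e6):
    """Penalize violated constraints g(x) <= 0 with a quadratic exterior penalty."""
    violation = sum(max(0.0, g) ** 2 for g in constraint_values)
    return volume + penalty * violation

# feasible design: no constraint violated, objective equals the dam volume
feasible = penalized_objective(10.0, [-1.0, -0.2])
# infeasible design: one constraint violated by 0.1, heavily penalized
infeasible = penalized_objective(10.0, [0.1, -0.2])
```

Within a GA, the penalized value is what the fitness evaluation returns, so infeasible dams are strongly disfavoured without being excluded outright.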

The sizing of simple resonators like guitar strings or laser mirrors is directly connected to the wavelength and represents no complex optimisation problem. This is not the case with liquid-filled acoustic resonators of non-trivial geometries, where several masses and stiffnesses of the structure and the fluid have to fit together. This creates a scenario of many competing and interacting resonances varying in relative strength and frequency when design parameters change. Hence, the resonator design involves a parameter-tuning problem with many local optima. As its solution, evolutionary algorithms (EA) coupled to a forced-harmonic FE simulation are presented. A new hybrid EA is proposed and compared to two state-of-the-art EAs based on selected test problems. The motivating background is the search for better resonators suitable for sonofusion experiments, where extreme states of matter are sought in collapsing cavitation bubbles.

The Laguerre polynomials appear naturally in many branches of pure and applied mathematics and mathematical physics. Debnath introduced the Laguerre transform and derived some of its properties. He also discussed applications to the study of heat conduction and to the oscillations of a very long and heavy chain with variable tension. An explicit boundedness result for a class of Laguerre integral transforms will be presented.
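For reference, the Laguerre transform in the sense of Debnath pairs a function on $(0,\infty)$ with its coefficients against the Laguerre polynomials $L_n$ under the weight $e^{-x}$ (standard form; generalized variants use the weight $x^{\alpha}e^{-x}$):

```latex
\tilde{f}(n) \;=\; \int_{0}^{\infty} e^{-x}\, L_{n}(x)\, f(x)\,dx ,
\qquad n = 0, 1, 2, \dots
```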

In photogrammetry and computer vision the trifocal tensor is used to describe the geometric relation between projections of points in three views. In this paper we analyze the stability and accuracy of the metric trifocal tensor for calibrated cameras. Since a minimal parameterization of the metric trifocal tensor is challenging, the additional constraints of the interior orientation are applied to the well-known projective 6-point and 7-point algorithms for three images. The experimental results show that the linear 7-point algorithm fails for some noise-free degenerated cases, whereas the minimal 6-point algorithm seems to be competitive even with realistic noise.

This study contributes to the identification of coupled THM constitutive model parameters via back analysis against information-rich experiments. A sampling-based back analysis approach is proposed, comprising both the identification of model parameters and the assessment of their reliability. The results obtained in the context of buffer elements indicate that sensitive parameter estimates generally obey the normal distribution. Based on the sensitivity of the parameters and the probability distribution of the samples, we can provide confidence intervals for the estimated parameters, allowing a qualitative assessment of the identified parameters, which will be used in future work as inputs for prognosis computations of buffer elements. These elements play an important role, e.g., in the design of nuclear waste repositories.
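A percentile-based confidence interval over parameter samples, of the kind used in sampling-based back analysis, can be sketched as follows (illustrative; the study's actual estimator may differ):

```python
def confidence_interval(samples, level=0.95):
    """Two-sided empirical confidence interval from parameter samples."""
    s = sorted(samples)
    n = len(s)
    lower = int((1.0 - level) / 2.0 * (n - 1))
    upper = int((1.0 + level) / 2.0 * (n - 1))
    return s[lower], s[upper]

# e.g. the central ~95% of 1000 posterior-like parameter samples
ci = confidence_interval(list(range(1000)), level=0.95)
```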

Low-skilled labor makes up a significant part of the construction sector, performing daily production tasks that do not require specific technical knowledge or certified skills. Today, the construction market demands increasing skill levels. Many jobs that were once considered suitable for low- or un-skilled labor now demand some kind of formal skills. The jobs that require low-skilled labor are continually decreasing due to technological advancement and globalization. Jobs that previously required little or no training now require skilled people to perform the tasks appropriately. The study aims at improving the employability of less skilled manpower by finding ways to instruct them in performing construction tasks. A review of existing task instruction methodologies in construction and the underlying gaps within them warrants an appropriate way to train and instruct low-skilled workers for tasks in construction. The idea is to ensure the required quality of construction with technological and didactic aids that seem particularly purposeful for preparing potential workers for tasks in construction without exposing them to existing communication barriers. A BIM-based technology is considered promising, along with the integration of visual directives/animations to elaborate the construction tasks carried out on site.

This article presents the Rigid Finite Element Method in the calculation of the deflection of reinforced concrete beams with cracks. Initially, this method was used in the shipbuilding industry. Later, it was adapted for calculations of homogeneous bar structures. In this method, rigid mass discs serve as the element model. In the plane layout, three generalized coordinates (two translational and one rotational) correspond to each disc. These discs are connected by elastic ties. The novel idea is to take a discrete crack into account within the Rigid Finite Element Method. It consists in a suitable reduction of the rigidity of the rotational ties located at the spots where cracks occurred. The flexibility of such a tie results from the flexural deformability of the element and the occurrence of the crack. As part of the numerical analyses, the influence of cracks on the total deflection of beams was determined. Furthermore, the results of the calculations were compared with the results of an experiment. Overestimation of the calculated deflections relative to the measured deflections was found. The article quantifies the overestimation and describes its causes.

In this paper, we present an empirical approach for the objective and quantitative benchmarking of optimization algorithms with respect to characteristics induced by the forward calculation. Owing to the professional background of the authors, this benchmarking strategy is illustrated on a selection of search methods with regard to expected characteristics of geotechnical parameter back calculation problems. After a brief introduction to the approach employed, a strategy for optimization algorithm benchmarking is introduced. The benchmarking utilizes statistical tests carried out on well-known test functions superposed with perturbations, both chosen to mimic objective function topologies found in geotechnical back calculation problems. Here, the moved-axis parallel hyper-ellipsoid test function and the generalized Ackley test function, in conjunction with an adjustable amount of objective function topology roughness and an adjustable fraction of failing forward calculations, are analyzed. In total, results for 5 optimization algorithms are presented, compared and discussed.
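The generalized Ackley function with a superposed high-frequency roughness term can be sketched as follows; the roughness model shown is an illustrative assumption, not necessarily the perturbation used in the paper:

```python
import math

def ackley(x, a=20.0, b=0.2, c=2.0 * math.pi):
    """Generalized Ackley test function; global minimum f(0, ..., 0) = 0."""
    n = len(x)
    s1 = sum(xi ** 2 for xi in x) / n
    s2 = sum(math.cos(c * xi) for xi in x) / n
    return -a * math.exp(-b * math.sqrt(s1)) - math.exp(s2) + a + math.e

def roughened(f, x, amplitude=0.1, frequency=50.0):
    """Superpose a deterministic high-frequency term to mimic topology roughness."""
    return f(x) + amplitude * sum(math.sin(frequency * xi) ** 2 for xi in x)
```

Adding a bounded, high-frequency term preserves the global structure while creating many shallow local optima, which is the kind of topology such benchmarks target.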

Portugal is one of the European countries with the highest spatial and population coverage of its freeway network. The sharp growth of this network in recent years motivates the use of methods of analysis and the evaluation of its quality of service in terms of traffic performance, typically assessed through internationally accepted methodologies, namely that presented in the Highway Capacity Manual (HCM). Lately, the use of microscopic traffic simulation models has become increasingly widespread. These models simulate the individual movement of vehicles, allowing traffic analyses to be performed. The main aim of this study was to verify the possibility of using microsimulation as an auxiliary tool in adapting the HCM 2000 methodology to Portugal. For this purpose, the microscopic simulators AIMSUN and VISSIM were used to simulate traffic circulation on the A5 Portuguese freeway. The results allowed an analysis of the influence of the main geometric and traffic factors involved in the HCM 2000 methodology. In conclusion, the study presents the main advantages and limitations of the microsimulators AIMSUN and VISSIM in modelling traffic circulation on Portuguese freeways. The main limitation is that these microsimulators cannot explicitly simulate some of the factors considered in the HCM 2000 methodology, which invalidates their direct use as a tool for quantifying those effects and, consequently, makes the direct adaptation of this methodology to Portugal impracticable.

Steel profiles with slender cross-sections are characterized by their high susceptibility to instability phenomena, especially local buckling, which are intensified under fire conditions. This work presents a study on the numerical modelling of the behaviour of steel structural elements with slender cross-sections in case of fire. To accurately carry out these analyses it is necessary to take those local instability modes into account, which is normally only possible with shell finite elements. However, aiming at the development of more expeditious methods, particularly important for analysing complete structures in case of fire, recent studies have proposed the use of beam finite elements that consider the presence of local buckling through the implementation of a new effective steel constitutive law. The objective of this work is to validate this methodology using the program SAFIR. Comparisons are made between the results obtained with the new methodology and finite element analyses using shell elements. The studies were carried out for laterally restrained beams, unrestrained beams, axially compressed columns and columns subjected to combined bending and compression.

In this paper we present some rudiments of a generalized Wiman-Valiron theory in the context of polymonogenic functions. In particular, we analyze the relations between different notions of growth orders and the Taylor coefficients. Our main intention is to look for generalizations of the Lindelöf-Pringsheim theorem. In contrast to the classical holomorphic and the monogenic setting we only obtain inequality relations in the polymonogenic setting. This is due to the fact that the Almansi-Fischer decomposition of a polymonogenic function consists of different monogenic component functions where each of them can have a totally different kind of asymptotic growth behavior.

VARIATIONAL POSITING AND SOLUTION OF COUPLED THERMOMECHANICAL PROBLEMS IN A REFERENCE CONFIGURATION
(2015)

The variational formulation of a coupled thermomechanical problem of anisotropic solids for the case of non-isothermal finite deformations in a reference configuration is shown. The formulation of the problem includes: a condition of equilibrium flow of a deformation process in the reference configuration; an equation of coupled heat conductivity in variational form, in which the influence of the deformation characteristics of a process on the temperature field is taken into account; tensor-linear constitutive relations for a hypoelastic material; kinematic and evolutional relations; and initial and boundary conditions. Based on this formulation, several axisymmetric isothermal and coupled problems of finite deformations of isotropic and anisotropic bodies are solved. The solution of coupled thermomechanical problems for a hollow cylinder in the case of finite deformation showed an essential influence of the coupling on the distribution of temperature, stresses and strains. The obtained solutions show the development of the stress-strain state and the temperature change in axisymmetric bodies in the case of finite deformations.

Known as a sophisticated phenomenon in civil engineering problems, soil-structure interaction has been under deep investigation in the field of geotechnics. At the same time, the advent of powerful computers has led to the development of numerous numerical methods to deal with this phenomenon, resulting in a wide variety of methods that try to simulate the behavior of the soil stratum. This survey studies two common approaches to modeling the soil's behavior in a system consisting of a structure with two degrees of freedom, representing a two-storey steel frame structure, with the column resting on a pile embedded in sand at laboratory scale. The effect of the soil simulation technique on the dynamic behavior of the structure is of major interest in the study. The modeling approaches utilized are the so-called holistic method and the substitution of the soil with respective impedance functions.

A central issue for the autonomous navigation of mobile robots is to map unknown environments while simultaneously estimating their position within this map. This chicken-and-egg problem is known as simultaneous localization and mapping (SLAM). AscTec's quadrotor Pelican is a powerful and flexible research UAS (unmanned aircraft system) which enables the development of new real-time on-board algorithms for SLAM as well as for autonomous navigation. The relative UAS pose estimation for SLAM, usually based on low-cost sensors like inertial measurement units (IMUs) and barometers, is known to be affected by high drift rates. In order to significantly reduce these effects, we incorporate additional independent pose estimation techniques using exteroceptive sensors. In this article we present first pose estimation results using a stereo camera setup and a laser range finder, individually. Even though these methods fail in a few specific configurations, we demonstrate their effectiveness and value for the reduction of IMU drift rates and give an outlook on further work towards SLAM.

With the advances of computer technology, structural optimization has become a prominent field in structural engineering. In this study an unconventional approach to structural optimization is presented which utilizes the Energy method with Integral Material behaviour (EIM), based on Lagrange's principle of minimum potential energy. The equilibrium condition within the EIM, as an alternative method for nonlinear analysis, is secured through minimization of the potential energy posed as an optimization problem. Imposing this problem as an additional constraint on a higher-level cost function of a structural property, a bilevel programming problem is formulated. A nested strategy is used to solve the bilevel problem, treating the energy and the upper objective function as separate optimization problems. Utilizing the convexity of the potential energy, gradient-based algorithms are employed for its minimization, while the upper cost function is minimized using gradient-free algorithms, due to its unknown properties. Two practical examples are considered to demonstrate the efficiency of the method. The first is a sizing problem for an I-shaped steel section within an encased composite cross-section, utilizing the material nonlinearity. The second is a discrete shape optimization of a steel truss bridge, which is compared to a previous study based on the Finite Element Method.
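The nested bilevel strategy can be illustrated on a deliberately tiny single-degree-of-freedom example. All numbers, the penalty formulation and the bar-weight objective are hypothetical, and the quadratic inner problem merely stands in for the EIM energy minimization:

```python
import numpy as np

def inner_energy_min(k, F):
    """Gradient-based minimization of the convex potential energy
    Pi(u) = 0.5*k*u^2 - F*u, the lower-level (equilibrium) problem."""
    u = 0.0
    for _ in range(50):
        grad = k * u - F   # dPi/du
        u -= grad / k      # Newton step (exact for a quadratic energy)
    return u

def outer_cost(area, F=10.0, E=210e3, L=1.0, u_limit=0.01):
    """Upper-level objective: member weight proxy, penalized when the
    equilibrium displacement from the inner problem violates a limit."""
    k = E * area / L
    u = inner_energy_min(k, F)
    penalty = 1e6 * max(0.0, abs(u) - u_limit)
    return area * L + penalty

# gradient-free upper-level search over a discrete catalogue of areas
areas = np.linspace(0.001, 0.1, 200)
best = min(areas, key=outer_cost)
```

The pattern mirrors the nested strategy above: every upper-level cost evaluation triggers a full lower-level energy minimization.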

SELECTION AND SCALING OF GROUND MOTION RECORDS FOR SEISMIC ANALYSIS USING AN OPTIMIZATION ALGORITHM
(2015)

The nonlinear time history analysis and seismic performance-based methods require a set of scaled ground motions. The conventional procedure of ground motion selection is based on matching motion properties, e.g. magnitude, amplitude, fault distance, and fault mechanism. The seismic target spectrum is only used in the scaling process following the random selection. Therefore, the aim of this paper is to present a procedure to select sets of ground motions from a compiled ground motion database. The selection procedure is based on solving an optimization problem using Dijkstra's algorithm to match the selected set of ground motions to a target response spectrum. The selection and scaling of optimized sets of ground motions is demonstrated through analyses of nonlinear single-degree-of-freedom systems.
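A minimal sketch of the scaling step: for each record the least-squares scale factor to a target spectrum has a closed form, and a simple greedy ranking (standing in here for the paper's Dijkstra-based selection) picks the best-matching set. Record names and spectra are hypothetical.

```python
import numpy as np

def best_scale(record_sa, target_sa):
    """Least-squares scale factor s minimizing ||s*record - target||^2."""
    record_sa = np.asarray(record_sa, float)
    target_sa = np.asarray(target_sa, float)
    return float(record_sa @ target_sa / (record_sa @ record_sa))

def misfit(record_sa, target_sa):
    """Residual sum of squares of the optimally scaled spectrum."""
    s = best_scale(record_sa, target_sa)
    return float(np.sum((s * np.asarray(record_sa, float)
                         - np.asarray(target_sa, float)) ** 2)), s

def select_set(database, target_sa, n=7):
    """Greedy stand-in for the graph-based selection: rank records by
    scaled-spectrum misfit and keep the n best."""
    scored = sorted(database.items(),
                    key=lambda kv: misfit(kv[1], target_sa)[0])
    return [name for name, _ in scored[:n]]
```

A record whose spectrum is an exact multiple of the target gets zero misfit and is always selected first.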

A topology optimization method has been developed for structures subjected to multiple load cases (for example, a bridge pier subjected to wind, traffic, and superstructure loads). We formulate the problem as a multi-criteria optimization problem, where the compliance is computed for each load case. Then, the epsilon-constraint method (proposed by Chankong and Haimes, 1971) is adapted. The strategy of this method is based on minimizing the maximum compliance, resulting from the critical load case, while the remaining compliances are considered as constraints. In each iteration, the compliances of all load cases are computed and only the maximum one is minimized. The topology optimization process switches from one load case to another according to the variation of the resulting compliances. In this work we motivate and explain the proposed methodology and provide numerical examples.
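The switching min-max behaviour can be sketched on a toy problem with two analytic "compliances" sharing one design variable; the functional forms are invented for illustration, since real compliances come from finite element analyses:

```python
def compliances(x):
    """Toy compliances for two load cases sharing one material fraction x:
    putting more material into one case softens the other."""
    eps = 1e-9
    return 1.0 / (x + eps), 1.0 / (1.0 - x + eps)

def minimize_max(x=0.2, lr=1e-3, iters=5000):
    """Each iteration only the currently critical (maximum) compliance is
    minimized, mirroring the switching behaviour described above."""
    for _ in range(iters):
        c1, c2 = compliances(x)
        if c1 >= c2:
            grad = -1.0 / (x + 1e-9) ** 2        # d c1 / dx
        else:
            grad = 1.0 / (1.0 - x + 1e-9) ** 2   # d c2 / dx
        x -= lr * grad
        x = min(max(x, 1e-6), 1.0 - 1e-6)
    return x
```

For this symmetric toy problem the iteration alternates between the two load cases and settles near the balanced design x = 0.5, where both compliances are equal.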

Sensor faults can affect the dependability and the accuracy of structural health monitoring (SHM) systems. Recent studies demonstrate that artificial neural networks can be used to detect sensor faults. In this paper, decentralized artificial neural networks (ANNs) are applied for autonomous sensor fault detection. On each sensor node of a wireless SHM system, an ANN is implemented to measure and to process structural response data. Structural response data is predicted by each sensor node based on correlations between adjacent sensor nodes and on redundancies inherent in the SHM system. Evaluating the deviations (or residuals) between measured and predicted data, sensor faults are autonomously detected by the wireless sensor nodes in a fully decentralized manner. A prototype SHM system implemented in this study, which is capable of decentralized autonomous sensor fault detection, is validated in laboratory experiments through simulated sensor faults. Several topologies and modes of operation of the embedded ANNs are investigated with respect to the dependability and the accuracy of the fault detection approach. In conclusion, the prototype SHM system is able to accurately detect sensor faults, demonstrating that neural networks, processing decentralized structural response data, facilitate autonomous fault detection, thus increasing the dependability and the accuracy of structural health monitoring systems.
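A minimal sketch of residual-based fault detection as described above, with an ordinary least-squares predictor standing in for the embedded ANN; the synthetic data, the linear model and the residual threshold are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

# synthetic training data: sensor 0 is correlated with its two neighbours
X = rng.normal(size=(500, 2))                       # neighbour measurements
y = X @ np.array([0.7, 0.3]) + 0.01 * rng.normal(size=500)

# least-squares predictor standing in for the embedded ANN
w, *_ = np.linalg.lstsq(X, y, rcond=None)
threshold = 5.0 * np.std(y - X @ w)                 # assumed residual threshold

def is_faulty(neighbours, measured):
    """Flag a fault when the measured value deviates too far from the
    value predicted from the neighbouring sensor nodes."""
    residual = abs(measured - np.asarray(neighbours, float) @ w)
    return residual > threshold

healthy = is_faulty([1.0, 1.0], 1.0)   # near the predicted value
faulty = is_faulty([1.0, 1.0], 3.0)    # large bias, e.g. a drifting sensor
```

In a decentralized system each node would train and evaluate such a predictor locally on data exchanged with adjacent nodes.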

One of the most promising and recent advances in computer-based planning is the transition from classical geometric modeling to building information modeling (BIM). Building information models support the representation, storage, and exchange of various information relevant to construction planning. This information can be used for describing, e.g., geometric/physical properties or costs of a building, for creating construction schedules, or for representing other characteristics of construction projects. Based on this information, plans and specifications as well as reports and presentations of a planned building can be created automatically. A fundamental principle of BIM is object parameterization, which allows specifying geometrical, numerical, algebraic and associative dependencies between objects contained in a building information model. In this paper, existing challenges of parametric modeling using the Industry Foundation Classes (IFC) as a federated model for integrated planning are shown, and open research questions are discussed.

It is well known that the solution of the fundamental equations of linear elasticity for a homogeneous isotropic material in the plane stress and plane strain cases can be equivalently reduced to the solution of a biharmonic equation. The discrete version of the theorem of Goursat is used to describe the solution of the discrete biharmonic equation with the help of two discrete holomorphic functions. In order to obtain a Taylor expansion of discrete holomorphic functions we introduce a basis of discrete polynomials which fulfill the so-called Appell property with respect to the discrete adjoint Cauchy-Riemann operator. All these steps are very important in the field of fracture mechanics, where stress and displacement fields in the neighborhood of singularities caused by cracks and notches have to be calculated with high accuracy. Using the sum representation of holomorphic functions it seems possible to reproduce the order of the singularity and to determine important mechanical characteristics.
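For orientation, the continuous counterpart of the discrete theorem used here is Goursat's classical representation of a biharmonic function through two holomorphic functions:

```latex
\Delta^{2} u = 0
\quad\Longleftrightarrow\quad
u(x,y) = \operatorname{Re}\bigl(\bar{z}\,\varphi(z) + \chi(z)\bigr),
\qquad z = x + \mathrm{i}y,
```

with \(\varphi\) and \(\chi\) holomorphic; the discrete theory mimics this structure with two discrete holomorphic functions.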

Performing parameter identification prior to numerical simulation is an essential task in geotechnical engineering. However, it has to be kept in mind that the accuracy of the obtained parameters is closely related to the chosen experimental setup, such as the number of sensors and their locations. A well-considered positioning of sensors can increase the quality of the measurement and reduce the number of monitoring points. This paper illustrates this concept by means of a loading device that is used to identify the stiffness and permeability of soft clays. With an initial setup of the measurement devices, the pore water pressure and the vertical displacements are recorded and used to identify the aforementioned parameters. Starting from these identified parameters, the optimal measurement setup is investigated with a method based on global sensitivity analysis. This method yields an optimal sensor location assuming three sensors for each measured quantity, and the results are discussed.

In construction engineering, a schedule’s input data, which is usually not exactly known in the planning phase, is considered deterministic when generating the schedule. As a result, construction schedules become unreliable and deadlines are often not met. While the optimization of construction schedules with respect to costs and makespan has been a matter of research in the past decades, the optimization of the robustness of construction schedules has received little attention. In this paper, the effects of uncertainties inherent to the input data of construction schedules are discussed. Possibilities are investigated to improve the reliability of construction schedules by considering alternative processes for certain tasks and by identifying the combination of processes generating the most robust schedule with respect to the makespan of a construction project.

The theory of regular quaternionic functions of a reduced quaternionic variable is a 3-dimensional generalization of complex analysis. The Moisil-Theodorescu system (MTS) is a regularity condition for such functions depending on the radius vector r = ix+jy+kz seen as a reduced quaternionic variable. The analogues of the main theorems of complex analysis for the MTS in quaternion forms are established: Cauchy, Cauchy integral formula, Taylor and Laurent series, approximation theorems and Cauchy type integral properties. The analogues of positive powers (inner spherical monogenics) are investigated: the set of recurrence formulas between the inner spherical monogenics and the explicit formulas are established. Some applications of the regular function in the elasticity theory and hydrodynamics are given.

Polymer modification of mortar and concrete is a widely used technique to improve their durability. Hitherto, the main application fields of such materials have been the repair and restoration of buildings. However, due to constantly increasing service life requirements and cost efficiency, polymer-modified concrete (PCC) is also used for construction purposes. Therefore, there is a demand for studying the mechanical properties of PCC and its substantive differences compared to conventional concrete (CC). It is important to investigate whether all the hypotheses and existing analytical formulations established for CC are also valid for PCC. In the present study, analytical models available in the literature are evaluated. These models are used for estimating mechanical properties of concrete. The property investigated in this study is the modulus of elasticity, which is estimated from the value of the compressive strength. An existing database was extended and adapted for polymer-modified concrete mixtures along with their experimentally measured mechanical properties. Based on the indexed data, a comparison between model predictions and experiments was conducted by calculating forecast errors.
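A sketch of the kind of comparison described: one widely used conventional-concrete formula (an ACI-type relation, E_c = 4700·√f_c in MPa) is evaluated against measured moduli via forecast errors. The choice of this particular model and of RMSE/MAPE as error measures is illustrative, not necessarily the study's selection, and the data would come from the extended PCC database.

```python
import numpy as np

def e_modulus_aci(fc):
    """ACI-type estimate E_c = 4700*sqrt(f_c), with f_c and E_c in MPa
    (one common model for conventional concrete)."""
    return 4700.0 * np.sqrt(np.asarray(fc, float))

def forecast_errors(fc, e_measured):
    """Compare model predictions with measured moduli of elasticity."""
    e_measured = np.asarray(e_measured, float)
    resid = e_modulus_aci(fc) - e_measured
    rmse = float(np.sqrt(np.mean(resid ** 2)))           # absolute error, MPa
    mape = float(np.mean(np.abs(resid / e_measured)) * 100.0)  # relative, %
    return rmse, mape
```

Running such a comparison separately for CC and PCC mixtures exposes systematic bias of a CC-calibrated model when applied to polymer-modified concrete.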

Recently there has been a surge of interest in PDEs involving fractional derivatives in different fields of engineering. In this extended abstract we present some of the results developed in [3]. We compute the fundamental solution for the three-parameter fractional Laplace operator Δ by transforming the eigenfunction equation into an integral equation and applying the method of separation of variables. The obtained solutions are expressed in terms of Mittag-Leffler functions. For more details we refer the interested reader to [3], where an operational approach based on two Laplace transforms is also presented.

Over the last decade, building construction technology has developed dramatically, especially with the huge growth of CAD tools that help in modeling buildings, bridges, roads and other construction objects. Often, quality control and dimensional accuracy checks in the factory or on the construction site are based on manual measurements of discrete points. These measured points of the realized object, or a part of it, are compared with the points of the corresponding CAD model to see whether and where the construction element fits the respective CAD model. This process is complicated and difficult even with modern measuring technology, due to the complicated shape of the components, the large amount of manually acquired measurement data and the high cost of manually processing the measured values. However, by using a modern 3D scanner one obtains information about the whole constructed object and can make a complete comparison against the CAD model, which gives an idea of the quality of the object as a whole. In this paper, we present a case study of controlling the quality of measurement during the construction phase of a steel bridge by using 3D point cloud technology. Preliminary results show that an early detection of mismatches between the real element and the CAD model could save a lot of time, effort and, obviously, expense.
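A minimal sketch of the scan-versus-model comparison: for each scanned point, the deviation to the nearest CAD point is computed and checked against a tolerance. Brute-force distances and the 5 mm tolerance are illustrative assumptions; real pipelines would first register the clouds and use spatial indexing (e.g. a k-d tree).

```python
import numpy as np

def deviations(scan_pts, cad_pts):
    """For every scanned point, the distance to the nearest point sampled
    from the CAD model (brute-force nearest neighbour)."""
    scan = np.asarray(scan_pts, float)[:, None, :]
    cad = np.asarray(cad_pts, float)[None, :, :]
    dists = np.linalg.norm(scan - cad, axis=2)   # pairwise distances
    return dists.min(axis=1)

def out_of_tolerance(scan_pts, cad_pts, tol=0.005):
    """Indices of scanned points deviating more than `tol` (here 5 mm)."""
    return np.where(deviations(scan_pts, cad_pts) > tol)[0]
```

The flagged indices localize where the as-built element departs from the design geometry.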

In order to minimize the probability of foundation failure resulting from cyclic actions on structures, researchers have developed various constitutive models to simulate the foundation response and soil interaction under these complex cyclic loads. The efficiency and effectiveness of these models are strongly influenced by the cyclic constitutive parameters. Although a lot of research is being carried out on these relatively new models, few details exist in the literature about the model-based identification of the cyclic constitutive parameters. This could be attributed to the difficulties and complexities of the inverse modeling of such complex phenomena. A variety of optimization strategies are available for the solution of least-squares problems, as is usually done in the field of model calibration. However, for the back analysis (calibration) of the soil response to oscillatory load functions, this paper gives insight into the model calibration challenges and puts forward a method for the inverse modeling of cyclically loaded foundation response such that high-quality solutions are obtained with minimum computational effort. Thereby model responses are produced which adequately describe what would otherwise be experienced in the laboratory or field.

The p-Laplace equation is a nonlinear generalization of the Laplace equation. This generalization is often used as a model problem for special types of nonlinearities. The p-Laplace equation can be seen as a bridge between very general nonlinear equations and the linear Laplace equation. The aim of this paper is to solve the p-Laplace equation for 2 < p < 3 and to find strong solutions. The idea is to apply a hypercomplex integral operator and spatial function theoretic methods to transform the p-Laplace equation into the p-Dirac equation. This equation will be solved iteratively by using a fixed point theorem.
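For reference, the p-Laplace equation in question reads

```latex
\Delta_p u \;=\; \operatorname{div}\!\left(|\nabla u|^{\,p-2}\,\nabla u\right) \;=\; 0,
\qquad 2 < p < 3,
```

which reduces to the linear Laplace equation \(\Delta u = 0\) for \(p = 2\); the nonlinearity enters only through the gradient-dependent coefficient \(|\nabla u|^{p-2}\).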

In this paper we introduce LUCI, a Lightweight Urban Calculation Interchange system, designed to bring the advantages of a calculation and content co-ordination system to small planning and design groups by the means of an open source middle-ware. The middle-ware focuses on problems typical to urban planning and therefore features a geo-data repository as well as a job runtime administration, to coordinate simulation models and its multiple views. The described system architecture is accompanied by two exemplary use cases that have been used to test and further develop our concepts and implementations.

Nanostructured materials are extensively applied in many fields of material science for new industrial applications, particularly in the automotive and aerospace industries, due to their exceptional physical and mechanical properties. Experimental testing of nanomaterials is expensive, time-consuming, challenging and sometimes unfeasible. Therefore, computational simulations have been employed as an alternative method to predict macroscopic material properties. The behavior of polymeric nanocomposites (PNCs) is highly complex.
The origins of macroscopic material properties reside in the properties and interactions taking place on finer scales. It is therefore essential to use a multiscale modeling strategy to properly account for all the length and time scales associated with these material systems, which span many orders of magnitude. Numerous multiscale models of PNCs have been established; however, most of them connect only two scales, and there are only a few multiscale models for PNCs bridging four length scales (nano-, micro-, meso- and macro-scales). In addition, nanomaterials are stochastic in nature, and the prediction of macroscopic mechanical properties is influenced by many factors, such as fine-scale features. The mechanical properties predicted by traditional approaches deviate significantly from the values measured in experiments because the uncertainty of material features is neglected. This discrepancy indicates that the effective macroscopic properties of materials are highly sensitive to various sources of uncertainty, such as loading and boundary conditions and material characteristics, while very few stochastic multiscale models for PNCs have been developed. Therefore, it is essential to construct PNC models within the framework of stochastic modeling and to quantify the effect of the stochastic input parameters on the macroscopic mechanical properties of those materials.
This study aims to develop computational models at four length scales (nano-, micro-, meso- and macro-scales) and hierarchical upscaling approaches bridging length scales from nano to macro. A framework for uncertainty quantification (UQ) applied to predicting the mechanical properties of the PNCs as a function of material features at different scales is studied. Sensitivity and uncertainty analyses are of great help in quantifying the effect of input parameters, considering both main and interaction effects, on the mechanical properties of the PNCs. To achieve this major goal, the following tasks are carried out:
At the nano-scale, molecular dynamics (MD) simulations were used to investigate the deformation mechanism of glassy amorphous polyethylene (PE) as a function of temperature and strain rate. Steered molecular dynamics (SMD) simulations were also employed to investigate the interfacial characteristics of the PNCs.
At the micro-scale, we developed an atomistic-based continuum model represented by a representative volume element (RVE), in which the SWNT's properties and the SWNT/polymer interphase are modeled at the nano-scale and the surrounding polymer matrix is modeled by solid elements. Then, a two-parameter model was employed at the meso-scale. A hierarchical multiscale approach has been developed to obtain the structure-property relations at one length scale and transfer their effect to the higher length scales. In particular, we homogenized the RVE into an equivalent fiber. The equivalent fiber was then employed in a micromechanical analysis (i.e. the Mori-Tanaka model) to predict the effective macroscopic properties of the PNC. Furthermore, an averaging homogenization process was also used to obtain the effective stiffness of the PNC at the meso-scale.
Stochastic modeling and uncertainty quantification consist of the following ingredients:
- Simple random sampling, Latin hypercube sampling, Sobol' quasi-random sequences, and Iman and Conover's method (inducing correlation in Latin hypercube sampling) are employed to generate independent and dependent sample data, respectively.
- Surrogate models, such as polynomial regression, moving least squares (MLS), a hybrid method combining polynomial regression and MLS, Kriging regression, and penalized spline regression, are employed as approximations of the mechanical model. The advantage of surrogate models is their high computational efficiency and robustness, as they can be constructed from a limited amount of available data.
- Global sensitivity analysis (SA) methods, such as variance-based methods for models with independent and dependent input parameters, Fourier-based techniques for performing variance-based methods and partial derivatives, elementary effects in the context of local SA, are used to quantify the effects of input parameters and their interactions on the mechanical properties of the PNCs. A bootstrap technique is used to assess the robustness of the global SA methods with respect to their performance.
In addition, the probability distributions of the mechanical properties are determined using the probability plot method. Upper and lower bounds on the predicted Young's modulus, according to 95 % prediction intervals, are provided.
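One ingredient from the sampling methods listed above, Latin hypercube sampling, can be sketched in a few lines; this is a generic textbook implementation, not the thesis code:

```python
import numpy as np

def latin_hypercube(n, dim, rng=None):
    """Latin hypercube sample on [0,1)^dim: each of the n equal-width bins
    of every dimension contains exactly one point."""
    rng = rng or np.random.default_rng(0)
    # one uniformly placed point per stratum, per dimension
    u = (np.arange(n)[:, None] + rng.random((n, dim))) / n
    for j in range(dim):
        rng.shuffle(u[:, j])   # decouple the strata across dimensions
    return u

sample = latin_hypercube(10, 3)
```

The stratification guarantees full one-dimensional coverage with few samples, which is why LHS is attractive when each model evaluation (here, a multiscale simulation) is expensive.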
The above-mentioned methods address the behaviour of intact materials. Novel numerical methods, such as a node-based smoothed extended finite element method (NS-XFEM) and an edge-based smoothed phantom node method (ES-Phantom node), were developed for fracture problems. These methods can be used to account for cracks at the macro-scale in future work. The predicted mechanical properties were validated and verified; they show good agreement with previous experimental and simulation results.

Superplasticizers are utilized both to improve the fluidity during placement and to reduce the water content of concretes. Both effects also have an impact on the properties of the hardened concrete. As a side effect, the presence of superplasticizers strongly retards the strength development of concretes. This may lead to an economical drawback in concrete manufacturing. The present work is aimed at gaining insights into the causes of the retarding effect of superplasticizers on the hydration of Portland cement. In order to simplify the complex interactions occurring during the hydration of Portland cement, the majority of the work focuses on the interaction of superplasticizer and tricalcium silicate (Ca3SiO5 or C3S, the main compound of Portland cement clinker). The tests are performed in three main parts, accompanied by methods such as isothermal conduction calorimetry, electrical conductivity, electron microscopy, ICP-OES, TOC, and analytical ultracentrifugation.
In the first main part, and based on the interaction of cations and anionic charges of polymers, the interactions between calcium ions and superplasticizers are investigated. As a main effect, calcium ions are complexed by the functional groups of the polymers (carboxy, sulfonic). Calcium ions may be both dissolved in the aqueous phase and a constituent of particle interfaces. Besides these effects, it is furthermore shown that superplasticizers induce the formation of nanoscaled particles which are dispersed in the aqueous phase (cluster formation). Analogously to recent findings in the field of biomineralization, it is reasonable to assume that these nanoparticles influence crystal growth through their assembly process.
Based on the assumption that superplasticizers hinder dissolution and/or precipitation and thereby retard the cement hydration, their impact on the separate reactions is investigated. In experiments addressing the solubility of C-S-H phases and portlandite, it is shown that the complexation of calcium ions in the aqueous phase by functional groups of polymers increases the solubility of portlandite. In contrast, in the case of C-S-H solubility, the complexation of calcium ions in solution leads to a decrease of the calcium ion concentration in the aqueous phase. These effects are explained by differences in the adsorption of polymers on C-S-H phases and portlandite. It is proposed that adsorption is stronger on C-S-H phases than on portlandite due to the increased specific surface area of the C-S-H phases. Following that, it is argued that before polymers are able to adsorb on C-S-H phases, their functional groups must be screened by calcium ions in the aqueous phase. It is further shown that the data regarding the impact of superplasticizers on the unconstrained dissolution rate of C3S do not provide a clear relation to the overall retarding effect occurring during the hydration of C3S. Both increased and decreased dissolution rates with respect to the reference sample are detected. If the complexation capability of the superplasticizers is taken into account, a reduced dissolution rate of C3S is also determined. Despite the fact that the global hydration process is accelerated, the addition of calcite leads to a slower dissolution rate. Thus, a hindered unconstrained dissolution of C3S as a possible cause of the retarding effect remains open for discussion. In the last section of this part, the pure crystallization of hydrate phases (C-S-H phases, portlandite) is examined. The results clearly show that superplasticizers prolong the induction time and modify the rate of crystal growth during pure crystallization, in particular due to the complexation of ions in solution.
But this effect is insufficient to account for the overall retarding effect. Further important factors are the blocking of crystal growth faces by adsorbed polymers and the dispersion of nanoscaled particles, which hinders their agglomeration into crystals.
In the last main part of the work, the previously gathered results are utilized to investigate the hydration kinetics. During hydration, dissolution and precipitation occur in parallel. Thereby, special attention is paid to the ion composition of the aqueous phase of C3S pastes and suspensions in order to determine the rate-limiting step. All in all, it is concluded that the retarding effect of superplasticizers on the hydration of tricalcium silicate is based on the retardation of the crystallization of hydrate phases (C-S-H phases and portlandite). Thereby, the two effects of complexation of calcium ions on surfaces and stabilization of nanoscaled particles are of major importance. These mechanisms may partly be compensated by template performance and an increase in solubility through complexation of ions in solution. The decreased dissolution rate of C3S in the presence of superplasticizers during the hydration process occurring in parallel can only be assessed indirectly by means of the development of the ion concentrations in the aqueous phase (reaction path). Whether this observation is the cause or the consequence within the dissolution-precipitation process, and therefore accounts for the retarding effect, remains a topic for further investigations.
Besides these results, it is shown that superplasticizers can be associated chemically with inhibitors because they reduce the frequency factor required to end the induction period. Because the activation energy is largely unaffected, it is shown that the basic reaction mechanism is sustained. Furthermore, a method was developed which, for the first time, permits the in-situ determination of ion concentrations in the aqueous phase of C3S pastes. It is shown that during C3S hydration the ion concentration in the aqueous phase develops in correspondence with the heat release rate (calorimetry). The method permits the differentiation of the acceleration period into three stages. It is emphasized that the crystallization of the product phases of C3S hydration, namely C-S-H phases and portlandite, is responsible for the end of the induction period.

Some CAAD packages offer additional support for the optimization of spatial configurations, but the possibilities for applying optimization are usually limited either by the complexity of the data model or by the constraints of the underlying CAAD system. Since we lacked a system that allows experimenting with optimization techniques for the synthesis of spatial configurations, we have developed a collection of methods over the past years. This collection is now combined in the presented open-source library for computational planning synthesis, called CPlan. The aim of the library is to provide an easy-to-use programming framework with a flat learning curve for people with basic programming knowledge. It offers an extensible structure that allows adding new customized parts for various purposes. In this paper the existing functionality of the CPlan library is described.

Methods based on B-splines for model representation, numerical analysis and image registration
(2015)

The thesis consists of inter-connected parts for modeling and analysis using newly developed isogeometric methods. The main parts are reproducing kernel triangular B-splines, extended isogeometric analysis for solving weakly discontinuous problems, collocation methods using superconvergent points, and B-spline basis in image registration applications.
Each topic is oriented towards the application of isogeometric analysis basis functions to ease the integration of the modeling and analysis phases of simulation.
First, we develop a reproducing kernel triangular B-spline-based FEM for solving PDEs. We review triangular B-splines and their properties. By definition, the triangular basis function is very flexible in modeling complicated domains. However, instability results when it is applied for analysis. We modify the triangular B-spline by a reproducing kernel technique, calculating a correction term for the triangular kernel function from the chosen surrounding basis. The improved triangular basis is capable of obtaining results with higher accuracy and almost optimal convergence rates.
Second, we propose an extended isogeometric analysis for dealing with weakly discontinuous problems such as material interfaces. The original IGA is combined with XFEM-like enrichments which are continuous functions themselves but with discontinuous derivatives. Consequently, the resulting solution space can approximate solutions with weak discontinuities. The method is also applied to curved material interfaces, where the inverse mapping and the curved triangular elements are considered.
Third, we develop an IGA collocation method using superconvergent points. Collocation methods are efficient because no numerical integration is needed. In particular, when higher-order polynomial bases are applied, the method has a lower computational cost than Galerkin methods. However, the positions of the collocation points are crucial for the accuracy of the method, as they affect the convergence rate significantly. The proposed IGA collocation method uses superconvergent points instead of the traditional Greville abscissae. The numerical results show that the proposed method achieves better accuracy and optimal convergence rates, while traditional IGA collocation has optimal convergence only for even polynomial degrees.
Lastly, we propose a novel dynamic multilevel technique for handling image registration, an application of B-spline functions in image processing. The procedure aims to align a target image with a reference image by a spatial transformation. The method starts with an energy function that is the same as in FEM-based image registration. However, we simplify the solving procedure by working on the energy function directly. We dynamically solve for the control points, which are the coefficients of the B-spline basis functions. The new approach is simpler and faster. Moreover, it is enhanced by a multilevel technique in order to prevent instabilities. The numerical testing consists of two artificial images and four real biomedical MRI brain and CT heart images, and the results show that our registration method is accurate, fast and efficient, especially for large deformation problems.
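All four parts of the thesis build on B-spline basis functions. As a minimal, self-contained sketch (the knot vector and degree below are arbitrary choices, not taken from the thesis), the Cox-de Boor recursion evaluates such a basis and exhibits the partition-of-unity property that analysis relies on:

```python
def bspline_basis(i, p, u, knots):
    """Cox-de Boor recursion: value of the i-th B-spline basis
    function of degree p at parameter u on the given knot vector."""
    if p == 0:
        return 1.0 if knots[i] <= u < knots[i + 1] else 0.0
    left = right = 0.0
    denom = knots[i + p] - knots[i]
    if denom > 0.0:
        left = (u - knots[i]) / denom * bspline_basis(i, p - 1, u, knots)
    denom = knots[i + p + 1] - knots[i + 1]
    if denom > 0.0:
        right = (knots[i + p + 1] - u) / denom * bspline_basis(i + 1, p - 1, u, knots)
    return left + right

# Quadratic basis on a clamped (open) knot vector with 5 basis functions.
knots = [0, 0, 0, 1, 2, 3, 3, 3]
p = 2
n = len(knots) - p - 1  # number of basis functions = 5

# Partition of unity: the basis functions sum to 1 inside the domain.
u = 1.5
total = sum(bspline_basis(i, p, u, knots) for i in range(n))
print(f"sum of basis functions at u={u}: {total:.6f}")  # -> 1.000000
```

The same recursion underlies tensor-product IGA bases and the registration transform; production codes typically use a non-recursive evaluation for speed.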

When working on urban planning projects there are usually multiple aspects to consider. Often these aspects are contradictory and it is not possible to choose one over the other; instead, each needs to be fulfilled as well as possible. In this situation, ideal solutions are not always found, because they are either not sought or the problems are regarded as being too complex for human capabilities. To improve this situation, we propose complementing traditional design approaches with a design synthesis process based on evolutionary many-criteria optimization methods that can fulfill formalizable design requirements. In addition, we show how self-organizing maps can be used to visualize many-dimensional solution spaces in an easily analyzable and comprehensible form. The system is presented using an urban planning scenario for the placement of building volumes.

Aerodynamic Analysis of Slender Vertical Structure and Response Control with Tuned Mass Damper
(2015)

The analysis of vortex-induced vibration has gained increasing interest in the practical field of civil engineering. The phenomenon often occurs in long and slender vertical structures such as high-rise buildings, towers, chimneys or bridge pylons, resulting in unfavorable responses that might lead to the collapse of the structure. It appears when the frequency of the vortex shedding produced in the wake of the body meets the natural frequency of the structure. Even though this phenomenon does not necessarily generate a divergent amplitude response, the structure may still fail due to fatigue damage.
To reduce the effect of vortex-induced vibration, engineers widely use passive vibration control systems. This thesis studies the effect of a tuned mass damper. The objective is to simulate the effect of a tuned mass damper in reducing unfavorable responses due to vortex-induced vibration, starting with the validation of a numerical model against a wind tunnel test report. The reference structure used in the thesis is the Stonecutters Bridge, Hong Kong.
A numerical solver for computational fluid dynamics named VXflow, developed by Morgenthal [6], is utilized for the wind and structure simulation. The comparison between the numerical model and the wind tunnel results shows a maximum tip displacement difference of 10% for the model of the fully erected freestanding tower. The tuned mass damper (TMD) model itself is built separately in the finite element software SOFiSTiK, and the effective damping obtained from this model is then applied in the input modal data of the VXflow simulation. With a single TMD whose mass ratio is 0.5% of the modal mass of the first bending mode, the maximum tip displacement is reduced by 67% on average.
Considering construction limitations and the robustness of TMDs, the effects of multiple TMDs inside a structure are also studied. An uncoupled procedure, applying the aeroelastic loads obtained from VXflow in the finite element software SOFiSTiK, is used to determine the optimum distribution and optimum mass ratio of multiple tuned mass dampers. The remaining TMD properties are calculated with Den Hartog's formula. The results are as follows: the peak displacement in the case of multiple TMDs distributed with polynomial spacing achieves 7.8% more reduction than with equal spacing. The optimum mass of the tuned mass dampers is achieved with a ratio of 1.25% of the modal mass of the first bending mode in the across-wind direction.
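The abstract states that the remaining TMD properties are calculated with Den Hartog's formula. A minimal sketch of that classical tuning rule for an undamped main system is shown below; the structural frequency f1 is a hypothetical placeholder, and only the mass ratios are taken from the text:

```python
import math

def den_hartog_tmd(mu, f_structure):
    """Classical Den Hartog tuning for a single TMD on an undamped
    main system with mass ratio mu (TMD mass / modal mass).
    Returns (tuned TMD frequency [Hz], optimal TMD damping ratio)."""
    freq_ratio = 1.0 / (1.0 + mu)  # optimal frequency (tuning) ratio
    zeta_opt = math.sqrt(3.0 * mu / (8.0 * (1.0 + mu) ** 3))
    return freq_ratio * f_structure, zeta_opt

# Hypothetical first bending frequency of the tower; the 0.5% and
# 1.25% mass ratios are the ones mentioned in the abstract.
f1 = 0.50  # Hz, assumed placeholder value
for mu in (0.005, 0.0125):
    f_tmd, zeta = den_hartog_tmd(mu, f1)
    print(f"mu = {mu:.4f}: f_TMD = {f_tmd:.4f} Hz, zeta_opt = {zeta:.4f}")
```

For such small mass ratios the TMD is tuned just below the structural frequency with a few percent of damping, which is why the effective damping can be represented in the modal input data of the flow solver.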

Granite on the Ground: Former Nazi Party Rally Grounds, Nuremberg/Germany. A brief introduction
(2015)

For decades in Germany, historical research on dictatorial urban design in the first half of the 20th century focused on the National Socialist period. Studies on the urban design practices of other dictatorships remained an exception. This has changed. Meanwhile, the urban production practices of the Mussolini, Stalin, Salazar, Hitler and Franco dictatorships have become the subject of comprehensive research projects. Recently, a research group that studies dictatorial urban design in 20th century Europe has emerged at the Bauhaus-Institut für Geschichte und Theorie der Architektur und der Planung. The group is already able to refer to various research results.
Part of the research group’s self-conception is the assumption that the urban design practices of the named dictatorships can only be properly understood from a European perspective. The dictatorships influenced one another substantially. Furthermore, the specificities of the practices of each dictatorship can only be discerned if one can compare them to those of the other dictatorships. This approach requires strict adherence to the research methods of planning history and urban design theory. At the same time, these methods must be opened up to include those of general historical studies.
With this symposium, the research group aims to further qualify this European perspective. The aim is to pursue an inventory of the various national historiographies on the topic of “urban design and dictatorship”. This inventory should offer an overview on the general national level of historical research on urban design as well as on the level of particular urban design projects, persons or topics.
The symposium took place in Weimar, November 21-22, 2013. It was organized by Harald Bodenschatz, Piero Sassi and Max Welch Guerra and funded by the DAAD (German Academic Exchange Service).

The refurbishment of old buildings often goes hand in hand with an increase in both the dead and live loads. This, combined with higher safety factors, often makes the reinforcement of the old structures necessary. Most reinforcement methods involve transforming a structural timber member into a composite beam. Composite sections have a long tradition in timber construction. In the early days, multiple timber beams were connected with interlocking teeth and wooden shear connectors, which resulted in an elastic connection. Although historical timber structures are frequently upgraded, no method has yet been established and fully accepted by all stakeholders, such as owners, builders, architects, engineers and cultural heritage organisations. Carbon fibre-reinforced polymers (CFRP) have already shown their efficiency in structural reinforcement, especially in concrete structures. Moreover, previous studies have shown that CFRP has the potential to meet the expectations of all parties involved.
In order to reach the service-limit state, a high amount of carbon fibres has to be used, or considering the cost of reinforcement, prestress has to be applied. However, prestressing often goes hand-in-hand with delaminating issues. The camber method presented here offers an efficient solution for prestressing timber bending members and overcoming the known obstacles.
In the method proposed, the timber beam is cambered using an adjustable prop at midspan during the bonding of the CFRP lamella to the lower side of the bending member. After the adhesive has cured, the prop is removed and the prestressed composite beam is ready to be used. The prestress introduced in the system is not constant, but has a triangular shape and peaks at midspan, where it is needed the most. The prestress force, which declines towards the ends of the beam, leads to a constant shear stress over the whole length of the reinforcement, avoiding a concentrated anchorage zone.
An analytical calculation model has been developed to calculate and design prestressed timber bending members using the camber method. Numerical modelling, using a multi-surface plasticity model for timber, confirmed the results of the analytical model and showed clearly reduced delamination issues, comparing very favourably to traditional prestressing methods. The experimental parametric study, including the determination of the short-term load-bearing capacity of structural-sized beams, showed agreement with the analytical and numerical calculations. The prestressed reinforcement showed a benefit of nearly 50% towards the ultimate-limit state and up to 70% towards the service-limit state. Calculations revealed that the use of high-modulus CFRP allows even higher benefits, depending on the configuration and requirements. The long-term design of the prestressed composite beam was investigated by extending the analytical model. The creep of the timber leads to a load transfer from the timber towards the CFRP and therefore increases the benefit towards the ultimate-limit design. Applying high-modulus CFRP lamellas allows for a complete utilisation of the design capacity of timber and carbon fibre-reinforced polymer.
The thorough investigation conducted demonstrated that the camber method is an efficient technique for prestressing and reinforcing timber-bending members. Furthermore, the calculation model presented allows for a safe design and estimation of long-term behaviour.

This thesis presents new interactive visualization techniques and systems intended to support users with real-world decisions such as selecting a product from a large variety of similar offerings, finding appropriate wording as a non-native speaker, and assessing an alleged case of plagiarism.
The Product Explorer is a significantly improved interactive Parallel Coordinates display for facilitating the product selection process in cases where many attributes and numerous alternatives have to be considered. A novel visual representation for categorical and ordered data with only few occurring values, the so-called extended areas, in combination with cubic curves for connecting the parallel axes, are crucial for providing an effective overview of the entire dataset and to facilitate the tracing of individual products. The visual query interface supports users in quickly narrowing down the product search to a small subset or even a single product. The scalability of the approach towards a large number of attributes and products is enhanced by the possibility of setting some constraints on final attributes and, therefore, reducing the number of considered attributes and data items. Furthermore, an attribute repository allows users to focus on the most important attributes at first and to bring in additional criteria for product selection later in the decision process. A user study confirmed that the Product Explorer is indeed an excellent tool for its intended purpose for casual users.
The Wordgraph is a layered graph visualization for the interactive exploration of search results for complex keywords-in-context queries. The system relies on the Netspeak web service and is designed to support non-native speakers in finding customary phrases. Uncertainties about the commonness of phrases are expressed with the help of wildcard-based queries. The visualization presents the alternatives for the wildcards in a multi-column layout: one column per wildcard with the other query fragments in between. The Wordgraph visualization displays the sorted results for all wildcards at once by appropriately arranging the words of each column. A user study confirmed that this is a significant advantage over simple textual result lists. Furthermore, visual interfaces to filter, navigate, and expand the graph allow interactive refinement and expansion of wildcard-containing queries.
Furthermore, this thesis presents an advanced visual analysis tool for assessing and presenting alleged cases of plagiarism and provides a three-level approach for exploring the so-called finding spots in their context. The overview shows the relationship of the entire suspicious document to the set of source documents. An intermediate glyph-based view reveals the structural and textual differences and similarities of a set of finding spots and their corresponding source text fragments. Eventually, the actual fragments of the finding spot can be shown in a side-by-side view with a novel structured wrapping of both the source, as well as the suspicious text. The three different levels of detail are tied together by versatile navigation and selection operations. Reviews with plagiarism experts confirm that this tool can effectively support their workflow and provides a significant improvement over existing static visualizations for assessing and presenting plagiarism cases.
The three main contributions of this research have a lot in common aside from being carefully designed and scientifically grounded solutions to real-world decision problems. The first two visualizations facilitate the decision for a single possibility out of many alternatives, whereas the latter ones deal with text at varying levels of detail. All visual representations are clearly structured based on horizontal and vertical layers contained in a single view and they all employ edges for depicting the most important relationships between attributes, words, or different levels of detail. A detailed analysis considering the context of the established decision-making literature reveals that important steps of common decision models are well-supported by the three visualization systems presented in this thesis.

The 20th International Conference on the Applications of Computer Science and Mathematics in Architecture and Civil Engineering will be held at the Bauhaus University Weimar from 20th till 22nd July 2015. Architects, computer scientists, mathematicians, and engineers from all over the world will meet in Weimar for an interdisciplinary exchange of experiences, to report on their results in research, development and practice and to discuss. The conference covers a broad range of research areas: numerical analysis, function theoretic methods, partial differential equations, continuum mechanics, engineering applications, coupled problems, computer sciences, and related topics. Several plenary lectures in aforementioned areas will take place during the conference.
We invite architects, engineers, designers, computer scientists, mathematicians, planners, project managers, and software developers from business, science and research to participate in the conference!

This thesis applies the theory of \psi-hyperholomorphic functions defined in R^3 with values in the set of paravectors, which is identified with the Euclidean space R^3, to tackle problems in theory and practice: geometric mapping properties, additive decompositions of harmonic functions, and applications in the theory of linear elasticity.

The title of this work, "Die Bau-Ausstellung zu Beginn des 20. Jahrhunderts oder ‚Die Schwierigkeit zu wohnen‘" (The building exhibition at the beginning of the 20th century, or "The difficulty of dwelling"), reveals its focus: it examines an exhibition phenomenon that emerged around 1900, in its first decades, at a time when dwelling itself had become problematic.
At the beginning of the 20th century, the practice of dwelling was denied its self-evidence. The building exhibition is interpreted as a symptom of this loss and, at the same time, as an attempt to counteract it. Instead of describing it purely positivistically, from within the discipline, as an episode of modernism in architectural history, this work uncovers in the type of the building exhibition a far-reaching problem with dimensions in cultural philosophy as well as media theory: To what extent can dwelling be exhibited at all? How can an everyday practice that barely admits of a definition be shown? The work examines how this aporia has been dealt with.

Abstract
Taking socio-spatiality as its starting point, this research conducts an extensive analysis of city space in order to advance a new theory of urban social space. Resting on the basis of traditional continental philosophy, this social space theory adopts structuration methods while seeking to combine theoretical framework building with empirical observation. The socio-spatial transition study is therefore neither a macro theory in the tradition of structuralism, nor a typology of urban planning theory, nor a positivist social geography, but an operative theory with a practical purpose. First, in contrast to traditional structuralism, this study examines continuously transiting structural relations rather than macroscopic narrations of absolute definition and structure. Any city and its space always co-exist within their structurational transiting relationship; research on transition therefore forms the main body of this study, and a case study is indispensable for putting the structuration concept into practice. Second, the study first establishes the fundamental structuration concept of socio-spatial transition, which, as an operative tool, is applied to a transition analysis of the specific case of Beijing. As a social space theory it retains the critical stance of traditional continental philosophy; this criticism, however, does not operate on the level of ideology or conceptions but on transitions within structural relations, keeping it clear of the ideology critique of continental critical theory. Contemporary urban and spatial development has become extremely unbalanced under globalization, yet traditional macro theories are capable neither of producing a significant impact on practice nor of helping people identify practical problems.
Facing these general issues, and in particular the category of Chinese urban issues established on a meta-structured city model, micro-case studies have fallen into a dilemma, knowing neither how to ask questions nor how to answer them. This study therefore sets out to identify that dilemma and to find a direction for future research. In this dissertation, Beijing is used as a model and structuration methods as tools. The socio-spatial transition of Beijing's city space is analysed extensively, yielding new knowledge of its urban development. This is helpful for an in-depth understanding of city space development not only in Beijing, but also in many other cities influenced by the capital model of Beijing. Since the start of reform and opening-up, China has created a historically unique mode of metropolitan development and urbanization. This research is expected to analyse, or decode, China's urban development between communal space and associative space.

Polymeric clay nanocomposites are a new class of materials which have recently become the centre of attention due to their superior mechanical and physical properties. Several studies have been performed on the mechanical characterisation of these nanocomposites; however, most of those studies have neglected the effect of the interfacial region between the clays and the matrix, despite its significant influence on the mechanical performance of the nanocomposites.
There are different analytical methods to calculate the overall elastic material properties of composites. In this study we use the Mori-Tanaka method to determine the overall stiffness of the composites for the simple inclusion geometries of the cylinder and the sphere. Furthermore, the effect of the interphase layer on the overall properties of the composites is calculated. We also intend to obtain bounds for the effective mechanical properties to compare with the analytical results; hence, we use linear displacement boundary conditions (LD) and uniform traction boundary conditions (UT), respectively. Finally, the analytical results are compared with the numerical results, and they are in good agreement.
The next focus of this dissertation is a computational approach with a hierarchical multiscale method on the mesoscopic level. In other words, we use stochastic analysis and a computational homogenization method to analyse the effect of the thickness and stiffness of the interfacial region on the overall elastic properties of clay/epoxy nanocomposites. The results show that an increase in interphase thickness reduces the stiffness of the clay/epoxy nanocomposites, and this decrease becomes significant at higher clay contents. The results of the sensitivity analysis prove that the stiffness of the interphase layer has a more significant effect on the final stiffness of the nanocomposites. We also validate the results against the available experimental results from the literature, which show good agreement.
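For the spherical-inclusion case, the Mori-Tanaka estimate mentioned above can be sketched for an isotropic two-phase composite. The moduli below are illustrative assumptions, not the clay/epoxy data of the thesis:

```python
def mori_tanaka_spherical(Km, Gm, Ki, Gi, f):
    """Mori-Tanaka estimate of the effective bulk and shear moduli of a
    two-phase composite with spherical inclusions (volume fraction f)
    embedded in an isotropic matrix with moduli (Km, Gm)."""
    # Eshelby tensor components for a sphere in an isotropic matrix:
    alpha = 3.0 * Km / (3.0 * Km + 4.0 * Gm)                       # dilatational
    beta = 6.0 * (Km + 2.0 * Gm) / (5.0 * (3.0 * Km + 4.0 * Gm))   # deviatoric
    K = Km + f * (Ki - Km) / (1.0 + (1.0 - f) * alpha * (Ki - Km) / Km)
    G = Gm + f * (Gi - Gm) / (1.0 + (1.0 - f) * beta * (Gi - Gm) / Gm)
    return K, G

# Illustrative moduli in GPa (assumed epoxy-like matrix, stiff inclusion):
Km, Gm = 3.0, 1.2
Ki, Gi = 50.0, 30.0
for f in (0.0, 0.05, 0.10):
    K, G = mori_tanaka_spherical(Km, Gm, Ki, Gi, f)
    print(f"f = {f:.2f}: K_eff = {K:.3f} GPa, G_eff = {G:.3f} GPa")
```

For a matrix softer than the inclusions, this estimate coincides with the Hashin-Shtrikman lower bound, which is why it is natural to compare it with the LD/UT numerical bounds discussed above.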

Within the research on component and joint damping, the vibrations of the components have so far been measured with 1D laser vibrometers. A 3D laser scanner is now available. This thesis addresses the question of whether the 3D laser scanner can provide better and additional relevant data in vibration measurement.

This thesis examines manifesto statements in the design and architecture discourse in their situational, medial, institutional and historical context. Based on an analysis of the etymology and the history of ideas of the manifesto, selected examples substantiate the thesis that the medium serves above all the deliberate steering of reception within the discourse. Beyond this, the capacity and significance of the manifesto as a mirror of past and present discourses, but also as a pioneer of future ones, is analysed, and developments of the manifesto with regard to style, content, function and significance in the design and architecture discourse are presented.

As human thought developed, so did the technology used for illumination. A journey through history, reviewing its pages and analysing it, inevitably raises old and new questions: Is it possible to negatively alter the image of historic buildings and monuments through inadequate lighting, to the degree of distorting the perception people have of the work, and if so, what causes this? Do lighting designers take into consideration criteria to protect not only historic buildings and monuments, but also the environment? What consequences may the inadequate lighting of urban heritage generate for the environment? What factors must be considered for a proper illumination of urban heritage? The answers to these questions help lay the foundation for the proper illumination of urban heritage, avoiding light pollution and the effects it generates as far as possible, and seeking a balanced and harmonious reconciliation between technology, urban heritage and the environment. The framework and case study is the urban heritage of a colonial-era city in southern Mexico with pre-Hispanic roots, where an atmosphere of mysticism, a reflection of its folklore and traditions, can still be seen in its streets and buildings: Chiapa de Corzo, Chiapas.