Innovation management in media organizations is currently undergoing substantial change: in the transformed market environment, flexibility, rapid changes of direction, and adaptability prove to be essential. Media management research must respond to this as well: to validly investigate the agility of current business practice, an equally agile, adaptive mode of research is required. To this end, the paper proposes a practice-theoretical perspective on the innovation management of media organizations. Empirical research designs resulting from such an approach are discussed both with respect to their methodological challenges and with respect to their research project management. The paper also addresses the new spaces of possibility in scholarly publishing, university management, and research organization that practice-theoretically grounded empirical innovation research in the media industry demands.

Acoustic travel-time tomography (ATOM) determines the temperature distribution in a propagation medium by measuring the travel times of acoustic signals between transmitters and receivers. To employ ATOM for indoor climate measurements, impulse responses have been measured in the climate chamber lab of the Bauhaus-University Weimar and compared with the theoretical results of its image source model (ISM). A challenging task is distinguishing the reflections of interest in the reflectogram when the sound rays have similar travel times. This paper presents a numerical method to address this problem by finding optimal positions of transmitter and receiver, since these positions have a direct impact on the distribution of travel times. The optimal positions minimize the number of simultaneous arrival times within a threshold level. Moreover, voxels that remain empty of sound rays lead to an inaccurate determination of the air temperature within those voxels during the tomographic reconstruction. Based on the presented numerical method, the number of empty tomographic voxels is minimized to ensure the best sound-ray coverage of the room. Subsequently, a spatial temperature distribution is estimated by the simultaneous iterative reconstruction technique (SIRT). The experimental set-up in the climate chamber verifies the simulation results.
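
As a minimal illustration of the reconstruction side (not the authors' implementation), the following sketch converts a measured travel time into a path-averaged temperature via the standard speed-of-sound relation c(T) = 331.3·sqrt(1 + T/273.15) m/s and shows one SIRT update; the ray-matrix layout and the relaxation factor are assumptions:

```python
import numpy as np

def temperature_from_travel_time(path_length, travel_time):
    """Recover the path-averaged air temperature [deg C] from one travel time."""
    c = path_length / travel_time            # average sound speed along the ray [m/s]
    return 273.15 * ((c / 331.3) ** 2 - 1.0) # inverse of c(T) = 331.3*sqrt(1 + T/273.15)

def sirt_step(slowness, A, t_measured, relax=0.1):
    """One SIRT update: distribute travel-time residuals back onto the voxels.

    A[i, j]  -- length of ray i inside voxel j (a column of zeros is one of
                the 'empty' voxels that the sensor placement tries to avoid).
    slowness -- current per-voxel slowness estimate (1 / sound speed).
    """
    t_predicted = A @ slowness
    residual = t_measured - t_predicted
    row_len = np.maximum(A.sum(axis=1), 1e-12)   # total length of each ray
    col_cover = np.maximum(A.sum(axis=0), 1e-12) # total ray length per voxel
    update = A.T @ (residual / row_len)
    return slowness + relax * update / col_cover
```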

The proceedings at hand are the result of the International Master Course Module "Nonlinear Analysis of Structures: Wind Induced Vibrations", held at the Faculty of Civil Engineering of the Bauhaus-University Weimar, Germany, in the summer semester 2019 (April - August). This material summarizes the results of the project work done throughout the semester, provides an overview of the topic, and conveys impressions from the accompanying programme.
Wind Engineering is a field of Civil Engineering that evaluates the resistance of structures to wind loads. Bridges, high-rise buildings, chimneys and telecommunication towers may be susceptible to wind-induced vibrations due to their increased flexibility and are therefore specially designed for this aspect. Advances in technology and scientific studies permit research at small scale for more accurate analyses. Scaled models of real structures are therefore built and tested for various construction scenarios. These models are placed in wind tunnels, where experiments are conducted to determine parameters such as critical wind speeds for bridge decks, or static wind coefficients and forces for buildings and bridges. The objective of the course was to offer students insight into the assessment of long-span cable-supported bridges and high-rise buildings under wind excitation. The participating students worked in interdisciplinary teams to deepen their understanding of the behaviour of wind-sensitive structures and of the influences acting on them.

The design of engineering structures is based, today as in the past, on static calculations. With the development of new construction methods and design requirements, the consideration of uncertainties in model quality becomes more and more important. In addition to the traditional force-based approaches, experience and observations regarding the deformation behavior of components and the overall structure under different exposure conditions allow the introduction of novel detection and evaluation criteria.
The proceedings at hand are the result of the Bauhaus Summer School course "Forecast Engineering", held at the Bauhaus-Universität Weimar in 2017. They summarize the results of the conducted project work and provide the abstracts of the participants' contributions as well as impressions from the accompanying programme and organized cultural activities.
The special character of this course lies in the combination of basic disciplines of structural engineering with applied research projects in the areas of steel and reinforced concrete structures, earthquake and wind engineering, and informatics, and in linking them to mathematical methods and modern tools of visualization. Its innovative character results from the ambitious engineering tasks and advanced modeling demands.

The Institute of Structural Engineering, the Institute of Structural Mechanics, and the Institute for Computing, Mathematics and Physics in Civil Engineering at the Faculty of Civil Engineering of the Bauhaus-Universität Weimar presented special topics of structural engineering to highlight the broad spectrum of civil engineering in the field of modeling and simulation.
The summer course sought to impart knowledge and to combine research with a practical context through a challenging and demanding series of lectures, seminars and project work. Participating students were enabled to deal with advanced methods and their practical application.
The extraordinary format of this interdisciplinary summer school offers foreign and domestic students the opportunity to study advanced developments of numerical methods and sophisticated modelling techniques in different disciplines of civil engineering that go far beyond traditional graduate courses.
The proceedings at hand are the result of the Bauhaus Summer School course "Forecast Engineering", held at the Bauhaus-Universität Weimar in 2018. They summarize the results of the conducted project work and provide the abstracts/papers of the participants' contributions as well as impressions from the accompanying programme and organized cultural activities.

30. Forum Bauinformatik
(2018)

The Bauhaus-Universität Weimar has long been closely associated with the Forum Bauinformatik. The event was founded here in 1989 by the Arbeitskreis Bauinformatik, and the 10th and 18th Forum Bauinformatik (1998 and 2006, respectively) also took place in Weimar. This year we are therefore particularly pleased to host the 30th anniversary at the Bauhaus-Universität Weimar and to welcome many interested researchers from the field of computing in civil engineering to Weimar.
The Forum Bauinformatik has long since become an integral part of computing in civil engineering in the German-speaking countries. By tradition it is held under the motto "by young researchers for young researchers", giving early-career researchers in particular the opportunity to present their research, to discuss problems with subject experts, and to stay informed about the latest state of research. It also offers an excellent opportunity to enter the scientific community in the field of computing in civil engineering and to establish contacts with other researchers.
This year we received 49 interesting, high-quality contributions, above all in the areas of simulation, modelling, information management, geoinformatics, structural health monitoring, visualization, traffic simulation, and optimization. We would like to express our special thanks to all authors, co-authors, and reviewers whose commitment made this year's Forum Bauinformatik possible. We also thank Professor Große and Professor Díaz for their support in selecting the contributions for the Best Paper Awards.
Our warmest thanks go to the colleagues at the Chair of Computing in Civil Engineering (Professur Informatik im Bauwesen) of the Bauhaus-Universität Weimar for their organizational, technical, and advisory support during the planning of the event.

This paper describes the challenges in forecasting traffic-related pollutant immissions. The focus is on the development and setup of a simulation environment for evaluating environmentally oriented traffic management strategies. The simulation environment is developed across the three domains of traffic, emission, and immission, and is first applied to the evaluation of traffic measures for the Friedberger Landstraße in Frankfurt am Main.

Components of structural glazing have to meet different requirements and resist various impacts, depending on the field of application. Within an international research project of the EU innovation programme Horizon 2020, special glass panes with a fluid circulating in capillaries are being developed to exploit solar energy. The major influences on this glazing are UV irradiation and the fluidic contact, which affect the mechanical and optical durability of the bonding material within the glass setup. With regard to visual requirements, acrylate adhesives and EVA films are analyzed as possible bonding materials by destructive and non-destructive testing methods. Two types of specimens are presented for obtaining the mechanical behavior and the surface appearance of the bonding material.

The high resource demand of the building sector clearly indicates the need to search for alternative, renewable and energy-efficient materials. This work presents paper-laminated sandwich elements with a core of corrugated paperboard that can serve as architectural components with load-bearing capacity after a linear folding process. Conventional methods use either paper tubes or glued layers of honeycomb panels. In contrast, the folded components are extremely lightweight, provide material strength exactly where it is statically required, and offer many possibilities for design variants. After removing strips of the paper lamination, the sandwich can be folded linearly at these positions. Without the resistance of the missing paper, the sandwich core can be easily compressed. The final folding angle correlates with the width of the removed paper strip, so the angle can be described by a simple geometric equation (see the sketch below). The geometrical basis for the production of folded sandwich elements was established, and many profile types such as triangular, square or rectangular shapes were generated. The method allows the easy planning and fast production of components that can be used in the construction sector. A triangular profile was used to create a load-bearing frame as the supporting structure for an experimental building. This first permanent building made completely of corrugated cardboard was evaluated in a two-year test to confirm the efficiency of the developed components. In addition to the frame shown in this paper, folded components can serve as the cores of large-scale sandwich elements for lightweight ceilings. The method enables the efficient production of linearly folded cardboard elements, which can replace conventional wooden components such as beams, pillars or frames and bring a fully recycled material into the context of architectural construction.
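
One plausible form of that geometric relation, stated here as an assumption for illustration rather than the authors' exact equation: if the fold closes until the two cut edges of a core of thickness $d$ meet, the removed strip width $w$ and the fold angle $\theta$ are related by

```latex
% Hypothetical relation: fold angle theta vs. removed stripe width w
% for a sandwich core of thickness d (cut edges meet when fully folded).
\[
  w = 2\, d \tan\!\left(\frac{\theta}{2}\right)
  \quad\Longleftrightarrow\quad
  \theta = 2 \arctan\!\left(\frac{w}{2d}\right).
\]
```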

Hochschulwege 2015
(2017)

The contributions collected in these proceedings deal with the field of tension that arises between external funding programmes, change projects, and the goals, structures, and conditions of the respective university. In this field of tension, friction inevitably occurs, as existing structures and goals come into conflict with new projects and ideas. Some of the projects, simply by virtue of their financial volume and the resulting leverage, call into question, and in part upend, the traditional relationships between teaching, research, and the science-supporting areas. The guiding questions of the conference and of the contributions gathered here were therefore: How do universities reconcile their individual goals with those of nationwide programmes or state-specific funding measures? How do universities deal with their projects? How does change take place at universities? And finally: What remains of the impulses that projects provide? The contributions gathered in these proceedings give first answers to these questions based on the experience gained so far. They examine in depth the factors that can promote or hinder the success of change processes and projects, and derive from them recommendations for shaping processes at universities.

It is not uncommon for analysis and simulation methods to be used mainly to evaluate finished designs and to prove their quality, whereas the real potential of such methods is to lead or control a design process from the very beginning. We therefore introduce a design method that moves away from a "what-if" forecasting philosophy and increases the focus on backcasting approaches. We use the power of computation by combining sophisticated methods for generating designs with analysis methods, closing the gap between the analysis and synthesis of designs. For the development of future-oriented computational design support we need to be aware of the human designer's role. A productive combination of the excellence of human cognition with the power of modern computing technology is needed. We call this approach "cognitive design computing". The computational part aims to mimic the way a designer's brain works by combining state-of-the-art optimization and machine learning approaches with available simulation methods. The cognition part respects the complex nature of design problems through the provision of models for human-computation interaction. This means that a design problem is distributed between computer and designer. In the context of the conference slogan "back to command", we ask how we may imagine the command over a cognitive design computing system. We expect that designers will need to cede control of some parts of the design process to machines, but in exchange they will gain a new, powerful command over complex computing processes. This means that designers have to explore the potential of their role as commanders of partially automated design processes. In this contribution we describe an approach for the development of a future cognitive design computing system with a focus on urban design issues. The aim of this system is to enable an urban planner to treat a planning problem as a backcasting problem by defining what performance a design solution should achieve, and to automatically query or generate a set of best possible solutions. This kind of computational planning process offers proof that the designer meets the originally and explicitly defined design requirements. A key way in which digital tools can support designers is by generating design proposals. Evolutionary multi-criteria optimization methods allow us to explore a multi-dimensional design space and provide a basis for the designer to evaluate contradicting requirements: a task urban planners face frequently. We also reflect on why designers will give more and more control to machines. To this end, we investigate first approaches to learning how designers use computational design support systems in combination with manual design strategies to deal with urban design problems, employing machine learning methods. By observing how designers work, it is possible to derive more complex artificial solution strategies that can help computers make better suggestions in the future.

This work presents a concept of interactive machine learning in a human design process. An urban design problem is viewed as a multiple-criteria optimization problem. A salient feature of an urban design problem is the dependence of the design goal on the context of the problem. We model the design goal as a randomized fitness measure that depends on the context. In terms of multiple-criteria decision analysis (MCDA), the defined measure corresponds to a subjective expected utility of a user. In the first stage of the proposed approach, we let the algorithm explore the design space using clustering techniques. The second stage is an interactive design loop: the user makes a proposal, then the program optimizes it, gets the user's feedback, and returns control over the application interface.
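
A minimal, self-contained sketch of the two-stage scheme (explorative clustering, then an interactive optimize-feedback loop); the objective, the local search, and the simulated feedback are assumed stand-ins, since the paper's implementation is not reproduced here:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

def objectives(design, context):
    """Stand-in for the urban design criteria (hypothetical quadratic form)."""
    return -((design - context) ** 2)

def expected_utility(design, context, weights, noise=0.05):
    """Randomized, context-dependent fitness: a subjective expected utility."""
    return weights @ objectives(design, context) + rng.normal(0.0, noise)

# Stage 1: explore the design space and present distinct regions to the user.
candidates = rng.uniform(-1, 1, size=(200, 3))   # 200 designs, 3 parameters
centers = KMeans(n_clusters=5, n_init=10).fit(candidates).cluster_centers_

# Stage 2: interactive loop -- user proposes, program optimizes by local search,
# user feedback (simulated here) re-weights the criteria, MCDA-style.
context = np.array([0.2, -0.4, 0.1])
weights = np.ones(3) / 3
design = centers[0]
for _ in range(20):
    trial = design + rng.normal(0.0, 0.1, size=3)                # local move
    if expected_utility(trial, context, weights) > expected_utility(design, context, weights):
        design = trial
    weights = np.clip(weights + rng.normal(0, 0.01, 3), 1e-6, None)  # feedback
    weights /= weights.sum()
print("optimized proposal:", design)
```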

In the Space Syntax community, the standard tool for computing all kinds of spatial graph network measures is depthmapX (Turner, 2004; Varoudis, 2012). The process of evaluating many design variants of networks is relatively complicated, since they need to be drawn in a separate CAD system, exported, and imported into depthmapX via the dxf file format. This procedure prevents continuous integration into a design process. Furthermore, the standalone character of depthmapX makes it impossible to use its network centrality calculations in optimization processes. To overcome these limitations, we present in this paper the first steps of experimenting with a Grasshopper component (reference omitted until final version) that can access the functions of depthmapX and integrate them into Grasshopper/Rhino3D. The component is implemented in such a way that it can be used directly by an evolutionary algorithm (EA) implemented in a Python scripting component in Grasshopper.
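
The EA side of such a setup can be sketched in plain Python as follows; `network_measure` stands in for the centrality evaluation exposed by the component, and all names and parameters here are illustrative assumptions, not the component's actual API:

```python
import random

def network_measure(variant):
    """Stand-in for a depthmapX centrality call; inside Grasshopper this would
    be replaced by the component's evaluation of the drawn network variant."""
    return -sum((x - 0.5) ** 2 for x in variant)

def evolve(n_params=8, mu=5, lam=20, generations=40, sigma=0.1):
    """Minimal (mu + lambda) evolution strategy, as it could run inside a
    Grasshopper Python scripting component (illustrative, not the authors' code)."""
    parents = [[random.random() for _ in range(n_params)] for _ in range(mu)]
    for _ in range(generations):
        offspring = []
        for _ in range(lam):
            base = random.choice(parents)
            child = [min(1.0, max(0.0, x + random.gauss(0, sigma))) for x in base]
            offspring.append(child)
        pool = parents + offspring
        pool.sort(key=network_measure, reverse=True)  # maximize the measure
        parents = pool[:mu]
    return parents[0]

best_variant = evolve()
```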

Granite on the Ground: Former Nazi Party Rally Grounds, Nuremberg/Germany. A brief introduction
(2015)

For decades in Germany, historical research on dictatorial urban design in the first half of the 20th century focused on the National Socialist period. Studies on the urban design practices of other dictatorships remained an exception. This has changed. Meanwhile, the urban production practices of the Mussolini, Stalin, Salazar, Hitler and Franco dictatorships have become the subject of comprehensive research projects. Recently, a research group that studies dictatorial urban design in 20th century Europe has emerged at the Bauhaus-Institut für Geschichte und Theorie der Architektur und der Planung. The group is already able to refer to various research results.
Part of the research group's self-conception is the assumption that the urban design practices of the named dictatorships can only be properly understood from a European perspective. The dictatorships influenced one another substantially. Furthermore, the specificities of the practices of each dictatorship can only be discerned if one can compare them with those of the other dictatorships. This approach requires strict adherence to the research methods of planning history and urban design theory. At the same time, these methods must be opened to include those of general historical studies.
With this symposium, the research group aims to further qualify this European perspective. The aim is to compile an inventory of the various national historiographies on the topic of "urban design and dictatorship". This inventory should offer an overview of historical research on urban design at the general national level as well as at the level of particular urban design projects, persons or topics.
The symposium took place in Weimar, November 21-22, 2013. It was organized by Harald Bodenschatz, Piero Sassi and Max Welch Guerra and funded by the DAAD (German Academic Exchange Service).

In this paper we introduce LUCI, a Lightweight Urban Calculation Interchange system, designed to bring the advantages of a calculation and content coordination system to small planning and design groups by means of open-source middleware. The middleware focuses on problems typical of urban planning and therefore features a geo-data repository as well as a job runtime administration to coordinate simulation models and their multiple views. The described system architecture is accompanied by two exemplary use cases that have been used to test and further develop our concepts and implementations.

Some CAAD packages offer additional support for the optimization of spatial configurations, but the possibilities for applying optimization are usually limited either by the complexity of the data model or by the constraints of the underlying CAAD system. Since we missed a system that allows experimenting with optimization techniques for the synthesis of spatial configurations, we have developed a collection of methods over the past years. This collection is now combined in the presented open-source library for computational planning synthesis, called CPlan. The aim of the library is to provide an easy-to-use programming framework with a flat learning curve for people with basic programming knowledge. It offers an extensible structure that allows adding new customized parts for various purposes. In this paper the existing functionality of the CPlan library is described.

Urban design played a central role for the European dictatorships of the 20th century: it served to legitimize the regime, to produce agreement, and to demonstrate power, efficiency and speed; it communicated the social as well as the design projects of the dictatorial regimes domestically and internationally; and it tied old experts, as well as new ones, to the regime. Dictatorial urban design also played an important role after the fall of the dictatorships: it became the object of structural and verbal handling strategies: of demolition, of transformation, of reconstruction, of forgetting, of suppressing, of re-interpretation and of glorification. The topic area is, therefore, both historical and relevant to the present day. The discussion of the topic area is, like it or not, always embedded in the present state of societal engagement with dictatorships.
In order to even be able to discuss all of these aspects, different conceptual decisions are necessary. In retrospect, these may seem to many as self-evident, although they are anything but. Our thesis is that there are three methodological imperatives, especially, which allow an expanded approach to the topic area “urban design and dictatorship”. First and above all, the tunnel view, focused on individual dictatorships and neglecting the international dimension, must be overcome. Second, the differences in urban design over the course of a dictatorship, through an appropriate periodisation, should be emphasised. Third, we must strive for an open, flexible, but complex concept of urban design. The main focus lies on the urban design of the most influential dictatorships of the first half of the 20th century: Soviet Union, Fascist Italy and Nazi Germany, including the urban design of the autarky periods in Portugal and Spain.
After all, urban design is not just a product of specific historic circumstances. It is a form that continues to have long-term effects, which demonstrates its usefulness and adaptability throughout this process. The urban design products undoubtedly still recall the dictatorial rule under which they were created. However, they are more than a memory space. They are also a living space of the present. They can and should be discussed with respect to their spatial and functional utility for today and tomorrow. Such a perspective is a given for the citizens of a city, but also for city marketing, having marvellous consequences. Only when we do not exclude this dimension a priori, even in academic discussions, can we do justice to the products of dictatorships.
And finally, the view of the urban design of dictatorships can and must contribute to the questioning of simplified and naive conceptions of dictatorships. With urban design in mind, we can observe how dictatorships work and how they were able to prevail. In Europe, these questions are of the highest actuality.

Restelo Neighbourhood: Expanding the Capital of the Empire with the First Portuguese Urban Planner
(2015)

Development of a summer reference year for determining the summer overheating of buildings
(2015)

In Europe, summer-focused warm reference years have so far been derived from long-term climate data using different country-specific methods, which as a rule are based solely on dry-bulb temperature and result in the selection of one contiguous real summer half-year. Simulation results for the summer overheating of naturally ventilated buildings in Germany and the United Kingdom, however, show for some weather stations less overheating for simulations with the summer-focused reference year than for those with the corresponding test reference year (TRY) for the same location. This holds in particular when individual months are compared. Besides the choice of a complete half-year, which can contain both extremely warm and comparatively cool months, this is mainly due to the fact that solar radiation is not considered when selecting a warm reference year, although it plays an important role in the summer overheating of buildings. A reliable, generally accepted method for generating summer-focused reference years therefore appears necessary, also in view of the legal framework in the European Union, which favours natural ventilation strategies for new buildings and refurbishments. This work presents an approach to generating a Summer Reference Year (SRY) from the TRY of a given location and long-term climate data. The existing TRY data are scaled to match the dry-bulb temperature and solar radiation conditions of near-extreme candidate years, which are selected separately via a statistical approach. Subsequently, the wet-bulb temperature, wind speed, and air pressure of the TRY are adjusted through linear correlations with the dry-bulb temperature to obtain the corresponding SRY data. The advantage of this method is that the underlying weather pattern of the TRY is preserved, so that a clear relation between SRY and TRY exists, which ensures the comparability of simulation results. Comparative building simulations with the underlying TRY and long-term climate data sets demonstrate that the SRY is suitable for assessing summer overheating in naturally ventilated buildings. Furthermore, it can be shown that, in contrast to the direct use of a candidate year for a near-extreme summer, the SRY allows a month-by-month comparison with the TRY and is free of unrepresentative peculiarities that may be present in the corresponding candidate years.
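
A minimal sketch of the scaling idea described above, assuming a simple shift-and-stretch to the candidate statistics and a linear correlation adjustment; the thesis's exact scaling functions are not reproduced here:

```python
import numpy as np

def scale_to_candidate(try_series, candidate_mean, candidate_std):
    """Shift and stretch a TRY variable (e.g. monthly dry-bulb temperature or
    solar radiation) so its statistics match the near-extreme candidate year
    while the TRY weather pattern itself is preserved."""
    mu, sd = try_series.mean(), try_series.std()
    return candidate_mean + (try_series - mu) * (candidate_std / max(sd, 1e-9))

def adjust_by_correlation(try_var, try_tdb, sry_tdb, slope):
    """Adjust a dependent variable (wet-bulb temperature, wind speed, pressure)
    via an assumed linear correlation with the dry-bulb temperature change."""
    return try_var + slope * (sry_tdb - try_tdb)
```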

About a quarter (26%) of total final energy consumption in Germany is attributable to the residential sector, which therefore accounts for a considerable share of the potential energy savings. In view of the European Union's climate protection target of increasing energy efficiency by 20% compared to 1990, the question arises which savings potentials actually exist in the residential sector and how they can be quantified. In this work, the influence of the parameters affecting final energy consumption is determined by means of a sensitivity analysis. The results of the sensitivity analysis show that the most influential parameters on final energy consumption are the indoor temperature demand, the length of the heating period, the outdoor temperature (degree days), and the number of dwellings. These are variables that cannot be regulated by ordinances. The only parameter that can be regulated and has a significant influence on final energy consumption is the efficiency of the systems/appliances for space heating, hot water, and cooking (and, to a small extent, the efficiency of the lighting used). To quantify the energy saving potential of the German residential sector with respect to efficiency, data on the long-term development (1990-2010) of the efficiency of systems and appliances were analysed in this work. Using various data from the literature and saturation curves, the development of the efficiencies of the systems/appliances by energy source between 1990 and 2010 was determined. The resulting saturation curves make it possible to determine the development of useful energy consumption in the German residential sector. It was found that the difference between useful energy consumption and final energy consumption declined by 12% over the period considered, and that the energy saving potential can vary considerably depending on the energy source (currently by more than 35 percentage points). With regard to the above-mentioned climate protection target, this work analyses various development scenarios based on the efficiency of the systems and the energy sources. It becomes clear that the theoretical energy saving potential of the German residential sector with respect to average efficiency is only between 4 and 15%. This means that a significant reduction of final energy demand in the residential sector can only take place if other energy saving measures are considered. Recommendations to this end are given based on the results of the sensitivity analysis.
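
The efficiency lever quantified above can be illustrated with a small back-of-the-envelope calculation (placeholder numbers, not the study's data):

```python
# Final energy = useful energy / efficiency. Raising the average efficiency
# from 0.80 to 0.88 at constant useful demand saves:
useful = 100.0                  # useful energy demand [arbitrary units]
final_old = useful / 0.80       # 125.0
final_new = useful / 0.88       # ~113.6
saving = 1 - final_new / final_old
print(f"final energy saving: {saving:.1%}")  # ~9.1%, inside the 4-15% band above
```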

The Laguerre polynomials appear naturally in many branches of pure and applied mathematics and mathematical physics. Debnath introduced the Laguerre transform and derived some of its properties. He also discussed applications to the study of heat conduction and to the oscillations of a very long and heavy chain with variable tension. An explicit boundedness result for some class of Laguerre integral transforms will be presented.
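
For reference, the Laguerre transform pair in Debnath's sense, with respect to the Laguerre polynomials $L_n$, which are orthonormal on $(0,\infty)$ with weight $e^{-x}$:

```latex
\[
  \tilde{f}(n) = \int_{0}^{\infty} e^{-x} L_n(x)\, f(x)\, dx , \qquad n = 0, 1, 2, \ldots,
\]
\[
  f(x) = \sum_{n=0}^{\infty} \tilde{f}(n)\, L_n(x).
\]
```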

The sizing of simple resonators like guitar strings or laser mirrors is directly connected to the wavelength and represents no complex optimisation problem. This is not the case with liquid-filled acoustic resonators of non-trivial geometries, where several masses and stiffnesses of the structure and the fluid have to fit together. This creates a scenario of many competing and interacting resonances varying in relative strength and frequency when design parameters change. Hence, the resonator design involves a parameter-tuning problem with many local optima. As a solution, evolutionary algorithms (EA) coupled to a forced-harmonic FE simulation are presented. A new hybrid EA is proposed and compared to two state-of-the-art EAs on selected test problems. The motivating background is the search for better resonators suitable for sonofusion experiments, where extreme states of matter are sought in collapsing cavitation bubbles.

Design experience with arch dams shows that their shape optimization has significant practical value: it can make full use of the material characteristics and reduce construction costs. Suitable variables need to be chosen to formulate the objective function, e.g. to minimize the total volume of the arch dam. Additionally, a series of constraints is derived and a reasonable and convenient penalty function is formed, which easily enforces the characteristics of the constraints on the optimal design. For the optimization, a Genetic Algorithm is adopted to perform a global search, while ANSYS is used for the mechanical analysis under coupled thermal and hydraulic loads. One of the constraints on the newly designed dam is to fulfil requirements on structural safety. Therefore, a reliability analysis is applied to offer decision support for matters concerning predictions of both the safety and the service life of the arch dam. In this way, the key factors that significantly influence the stability and safety of an arch dam can be identified, providing a good basis for preventive measures that prolong the service life of an arch dam and enhance the safety of the structure.
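
The structure of such a penalized objective can be sketched as follows; this is a generic exterior quadratic penalty assumed for illustration, not the paper's exact penalty function or its ANSYS coupling:

```python
def penalized_volume(design, volume, constraints, weight=1e3):
    """Fitness for the GA: dam volume plus quadratic exterior penalties.

    volume      -- function returning the dam volume for a design vector.
    constraints -- list of functions g(design) with the convention g <= 0
                   when feasible (stress, stability, geometry limits).
    """
    penalty = sum(max(0.0, g(design)) ** 2 for g in constraints)
    return volume(design) + weight * penalty  # the GA minimizes this value
```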

The paper introduces a systematic construction management approach supporting the expansion of a specified construction process, both automatically and semi-automatically. Throughout the whole design process, many requirements must be taken into account in order to fulfil the demands defined by clients. In implementing those demands into a design concept up to the execution plan, constraints such as site conditions, building codes, and the legal framework have to be considered. However, the complete information needed to make a sound decision is not yet available in the early phase, and decisions are traditionally taken based on experience and assumptions. Due to the vast number of appropriate available solutions, particularly in building projects, it is necessary to make those decisions traceable. This is important in order to be able to reconstruct the considerations and assumptions taken, should the project's objectives change in the future. The research is carried out by means of building information modelling, applying rules derived from the standard logic of construction management knowledge. This knowledge comprises the comprehensive interaction among the bidding process, cost estimation, construction site preparation, and project-specific logistics, which are usually still considered separately. By means of these rules, well-founded decisions regarding prefabrication and in-situ implementation can be justified. Modifications depending on the information available within the current design stage remain consistently traceable.

IFC-BASED MONITORING INFORMATION MODELING FOR DATA MANAGEMENT IN STRUCTURAL HEALTH MONITORING
(2015)

This conceptual paper discusses opportunities and challenges of the digital representation of structural health monitoring systems using the Industry Foundation Classes (IFC) standard. State-of-the-art sensor nodes, collecting structural and environmental data from civil infrastructure systems, are capable of processing and analyzing the data sets directly on board the nodes. Structural health monitoring (SHM) based on sensor nodes that possess so-called "on-chip intelligence" is, in this study, referred to as "intelligent SHM", and an infrastructure system equipped with an intelligent SHM system is referred to as "intelligent infrastructure". Although intelligent SHM will continue to grow, there is no well-defined formalism for digitally representing information about sensors, about the overall SHM system, and about the monitoring strategies being implemented ("monitoring-related information"). Based on a review of available SHM regulations and guidelines as well as existing sensor models and sensor modeling languages, this conceptual paper investigates how to digitally represent monitoring-related information in a semantic model. With the Industry Foundation Classes, an open standard exists for the digital representation of building information; however, monitoring-related information cannot be represented with the current IFC object model. This paper proposes a conceptual approach for extending the current IFC object model to include monitoring-related information. Taking civil infrastructure systems as an illustrative example, it becomes possible to adequately represent, process, and exchange monitoring-related information throughout the whole life cycle of civil infrastructure systems, which is referred to as monitoring information modeling (MIM). Since this paper is conceptual, additional research efforts are required to further investigate, implement, and validate the proposed concepts and methods.

The stress state of a piecewise-homogeneous elastic body, which has a semi-infinite crack along the interface, under in-plane and antiplane loads is considered. One of the crack edges is reinforced by a rigid patch plate on a finite interval adjacent to the crack tip. The crack edges are loaded with specified stresses. The body is stretched at infinity by specified stresses. External forces with a given principal vector and moment act on the patch plate. The problem reduces to a Riemann-Hilbert boundary-value matrix problem with a piecewise-constant coefficient for two complex potentials in the plane case and for one in the antiplane case. The complex potentials are found explicitly using a Gaussian hypergeometric function. The stress state of the body close to the ends of the patch plate, one of which is also simultaneously the crack tip, is investigated. Stress intensity factors near the singular points are determined.

What is nowadays called (classic) Clifford analysis consists in the establishment of a function theory for functions belonging to the kernel of the Dirac operator. While such functions can very well describe problems of a particle with internal SU(2)-symmetries, higher-order symmetries are beyond this theory. Although many modifications (such as Yang-Mills theory) were suggested over the years, they could not address the principal problem: the need for an n-fold factorization of the d'Alembert operator. In this paper we present the basic tools of a fractional function theory in higher dimensions for the transport operator (α = 1/2), by means of a fractional correspondence to the Weyl relations via fractional Riemann-Liouville derivatives. A Fischer decomposition, fractional Euler and Gamma operators, a monogenic projection, and basic fractional homogeneous powers are constructed.
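
The fractional Riemann-Liouville derivative underlying the fractional Weyl correspondence reads, for $0 < \alpha < 1$:

```latex
\[
  \left(D^{\alpha}_{a+} f\right)(x)
  = \frac{1}{\Gamma(1-\alpha)} \frac{d}{dx}
    \int_{a}^{x} \frac{f(t)}{(x-t)^{\alpha}} \, dt .
\]
```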

Modern distributed engineering applications are based on complex systems consisting of various subsystems connected through the Internet. Communication and collaboration within the entire system require reliable and efficient data exchange between the subsystems. Middleware developed during the web evolution of the past years provides reliable and efficient data exchange for web applications and can be adopted for solving the data exchange problems in distributed engineering applications. This paper presents a generic approach for reliable and efficient data exchange between engineering devices using existing middleware known from web applications. Different existing middleware solutions are examined with respect to their suitability for engineering applications. A suitable middleware is identified, and a prototype implementation simulating distributed wind farm control is presented and validated using several performance measurements.
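
As an illustration of publish/subscribe data exchange between engineering devices, the following sketch uses ZeroMQ (pyzmq); the paper compares several middleware options without this abstract naming one, so the library choice, topic names, and telemetry payload are assumptions:

```python
import json
import time
import zmq

ctx = zmq.Context()

# Publisher side: a wind turbine node emitting telemetry.
pub = ctx.socket(zmq.PUB)
pub.bind("tcp://*:5556")

# Subscriber side: the farm controller consuming telemetry topics.
sub = ctx.socket(zmq.SUB)
sub.connect("tcp://localhost:5556")
sub.setsockopt_string(zmq.SUBSCRIBE, "telemetry")
time.sleep(0.2)  # allow the subscription to propagate (PUB/SUB slow joiner)

pub.send_string("telemetry " + json.dumps({"id": 7, "power_kw": 1834.2}))
topic, payload = sub.recv_string().split(" ", 1)
print(topic, json.loads(payload))
```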

Known as a sophisticated phenomenon in civil engineering, soil-structure interaction has been under deep investigation in the field of geotechnics. At the same time, the advent of powerful computers has led to the development of numerous numerical methods dealing with this phenomenon, resulting in a wide variety of approaches to simulating the behavior of the soil stratum. This survey studies two common approaches to modeling the soil's behavior in a system consisting of a structure with two degrees of freedom, representing a two-storey steel frame structure whose column rests on a pile embedded in sand at laboratory scale. The effect of the soil simulation technique on the dynamic behavior of the structure is the major interest of the study. The utilized modeling approaches are the so-called holistic method and the substitution of the soil with respective impedance functions.

VARIATIONAL POSITING AND SOLUTION OF COUPLED THERMOMECHANICAL PROBLEMS IN A REFERENCE CONFIGURATION
(2015)

A variational formulation of a coupled thermomechanical problem of anisotropic solids for the case of non-isothermal finite deformations in a reference configuration is shown. The formulation of the problem includes: a condition of equilibrium flow of a deformation process in the reference configuration; an equation of coupled heat conduction in variational form, in which the influence of the deformation characteristics of the process on the temperature field is taken into account; tensor-linear constitutive relations for a hypoelastic material; kinematic and evolution relations; and initial and boundary conditions. Based on this formulation, several axisymmetric isothermal and coupled problems of finite deformations of isotropic and anisotropic bodies are solved. The solution of coupled thermomechanical problems for a hollow cylinder in the case of finite deformation showed an essential influence of the coupling on the distribution of temperature, stresses and strains. The obtained solutions show the development of the stress-strain state and the temperature change in axisymmetric bodies in the case of finite deformations.

In this paper we present some rudiments of a generalized Wiman-Valiron theory in the context of polymonogenic functions. In particular, we analyze the relations between different notions of growth orders and the Taylor coefficients. Our main intention is to look for generalizations of the Lindelöf-Pringsheim theorem. In contrast to the classical holomorphic and the monogenic setting we only obtain inequality relations in the polymonogenic setting. This is due to the fact that the Almansi-Fischer decomposition of a polymonogenic function consists of different monogenic component functions where each of them can have a totally different kind of asymptotic growth behavior.
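
For orientation, the classical growth order that such theorems relate to the Taylor coefficients is defined via the maximum modulus $M(r)$:

```latex
\[
  \rho \;=\; \limsup_{r \to \infty} \frac{\log \log M(r)}{\log r},
  \qquad
  M(r) \;=\; \max_{|z| = r} |f(z)| .
\]
```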

Steel profiles with slender cross-sections are characterized by their high susceptibility to instability phenomena, especially local buckling, which are intensified under fire conditions. This work presents a study on the numerical modelling of the behaviour of steel structural elements with slender cross-sections in case of fire. To carry out these analyses accurately, it is necessary to take those local instability modes into account, which is normally possible only with shell finite elements. However, aiming at the development of more expeditious methods, particularly important for analysing complete structures in case of fire, recent studies have proposed the use of beam finite elements that account for local buckling through the implementation of a new effective steel constitutive law. The objective of this work is to validate this methodology using the program SAFIR. Comparisons are made between the results obtained with the new methodology and finite element analyses using shell elements. The studies address laterally restrained beams, unrestrained beams, axially compressed columns, and columns subjected to combined bending and compression.

Portugal is one of the European countries with the highest freeway network coverage, both spatially and per capita. The sharp growth of this network in recent years calls for methods to analyse and evaluate its quality of service in terms of traffic performance, typically assessed through internationally accepted methodologies, namely that of the Highway Capacity Manual (HCM). Lately, the use of microscopic traffic simulation models has become increasingly widespread. These models simulate the movement of individual vehicles, allowing detailed traffic analyses. The main goal of this study was to verify the possibility of using microsimulation as an auxiliary tool in adapting the HCM 2000 methodology to Portugal. For this purpose, the microscopic simulators AIMSUN and VISSIM were used to simulate traffic on the Portuguese A5 freeway. The results allowed an analysis of the influence of the main geometric and traffic factors involved in the HCM 2000 methodology. In conclusion, the study presents the main advantages and limitations of the microsimulators AIMSUN and VISSIM in modelling traffic on Portuguese freeways. The main limitation is that these microsimulators cannot explicitly simulate some of the factors considered in the HCM 2000 methodology, which invalidates their direct use as a tool for quantifying those effects and, consequently, makes the direct adaptation of this methodology to Portugal impracticable.

In this paper, we present an empirical approach for the objective and quantitative benchmarking of optimization algorithms with respect to characteristics induced by the forward calculation. Owing to the professional background of the authors, this benchmarking strategy is illustrated on a selection of search methods with regard to the expected characteristics of geotechnical parameter back-calculation problems. Starting from a brief introduction of the approach employed, a strategy for optimization algorithm benchmarking is introduced. The benchmarking utilizes statistical tests carried out on well-known test functions superposed with perturbations, both chosen to mimic objective function topologies typically found in geotechnical problems. Specifically, the moved-axis-parallel hyper-ellipsoid test function and the generalized Ackley test function are analyzed in conjunction with an adjustable amount of objective function topology roughness and an adjustable fraction of failing forward calculations. In total, results for five optimization algorithms are presented, compared and discussed.
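
The generalized Ackley test function has the well-known form sketched below; the superposed roughness and the failing forward calculations are modelled here by assumed mechanisms (uniform noise, random NaN returns), since the abstract does not fix them:

```python
import numpy as np

rng = np.random.default_rng(42)

def ackley(x, a=20.0, b=0.2, c=2.0 * np.pi):
    """Generalized Ackley function (global minimum 0 at x = 0)."""
    x = np.asarray(x, dtype=float)
    n = x.size
    return (-a * np.exp(-b * np.sqrt(np.sum(x ** 2) / n))
            - np.exp(np.sum(np.cos(c * x)) / n) + a + np.e)

def perturbed_ackley(x, roughness=0.1, fail_fraction=0.05):
    """Benchmark variant: superposed topology roughness plus a fraction of
    'failing forward calculations' reported as NaN (assumed mechanisms)."""
    if rng.random() < fail_fraction:
        return np.nan                        # simulated non-converging forward run
    return ackley(x) + roughness * rng.uniform(-1.0, 1.0)
```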

This article presents the Rigid Finite Element Method for calculating the deflection of reinforced concrete beams with cracks. Initially, this method was used in the shipbuilding industry; later, it was adapted to the calculation of homogeneous bar structures. In this method, rigid mass discs serve as the element model. In the plane layout, three generalized coordinates (two translational and one rotational) correspond to each disc. The discs are connected by elastic ties. The novel idea is to take a discrete crack into account within the Rigid Finite Element Method. It consists in a suitable reduction of the rigidity of the rotational ties located at the spots where cracks occur. The flexibility of such a tie results from the flexural deformability of the element and the occurrence of the crack. As part of the numerical analyses, the influence of cracks on the total deflection of beams was determined. Furthermore, the results of the calculations were compared with the results of an experiment. The calculated deflections were found to overestimate the measured deflections. The article quantifies this overestimation and describes its causes.

Low-skilled labor makes up a significant part of the construction sector, performing daily production tasks that do not require specific technical knowledge or certified skills. Today, the construction market demands increasing skill levels. Many jobs once considered suitable for low-skilled or unskilled labor now demand some kind of formal skills. Jobs requiring low-skilled labor are continually decreasing due to technological advancement and globalization, and jobs that previously required little or no training now require skilled people to perform the tasks appropriately. The study aims at improving the employability of less-skilled manpower by finding ways to instruct workers in performing construction tasks. A review of existing task instruction methodologies in construction, and of the gaps within them, shows the need for an appropriate way to train and instruct low-skilled workers for construction tasks. The idea is to ensure the required construction quality with technological and didactic aids that appear particularly suited to preparing potential workers for construction tasks without exposing them to existing communication barriers. A BIM-based technology is considered promising, together with the integration of visual directives/animations to explain the construction tasks scheduled to be carried out on site.

This study contributes to the identification of coupled THM constitutive model parameters via back analysis against information-rich experiments. A sampling-based back analysis approach is proposed, comprising both the identification of the model parameters and the assessment of their reliability. The results obtained in the context of buffer elements indicate that sensitive parameter estimates generally obey the normal distribution. From the sensitivity of the parameters and the probability distribution of the samples, we can provide confidence intervals for the estimated parameters and thus a qualitative assessment of the identified parameters, which will in future work be used as inputs for prognosis computations of buffer elements. Such elements play an important role, e.g., in the design of nuclear waste repositories.

In photogrammetry and computer vision the trifocal tensor is used to describe the geometric relation between projections of points in three views. In this paper we analyze the stability and accuracy of the metric trifocal tensor for calibrated cameras. Since a minimal parameterization of the metric trifocal tensor is challenging, the additional constraints of the interior orientation are applied to the well-known projective 6-point and 7-point algorithms for three images. The experimental results show that the linear 7-point algorithm fails for some noise-free degenerated cases, whereas the minimal 6-point algorithm seems to be competitive even with realistic noise.
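
For corresponding points $x \leftrightarrow x' \leftrightarrow x''$ in the three views, the trifocal tensor $\{T_1, T_2, T_3\}$ satisfies the standard point-point-point incidence relation ($[\,\cdot\,]_\times$ denoting the skew-symmetric cross-product matrix):

```latex
\[
  [\,x'\,]_{\times} \left( \sum_{i=1}^{3} x^{i}\, T_i \right) [\,x''\,]_{\times}
  \;=\; 0_{3\times 3} .
\]
```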

The 20th International Conference on the Applications of Computer Science and Mathematics in Architecture and Civil Engineering will be held at the Bauhaus University Weimar from 20 to 22 July 2015. Architects, computer scientists, mathematicians, and engineers from all over the world will meet in Weimar for an interdisciplinary exchange of experiences, to report on their results in research, development and practice, and to discuss them. The conference covers a broad range of research areas: numerical analysis, function-theoretic methods, partial differential equations, continuum mechanics, engineering applications, coupled problems, computer science, and related topics. Several plenary lectures in the aforementioned areas will take place during the conference.
We invite architects, engineers, designers, computer scientists, mathematicians, planners, project managers, and software developers from business, science and research to participate in the conference!

The theory of regular quaternionic functions of a reduced quaternionic variable is a three-dimensional generalization of complex analysis. The Moisil-Theodorescu system (MTS) is a regularity condition for such functions depending on the radius vector r = ix + jy + kz seen as a reduced quaternionic variable. The analogues of the main theorems of complex analysis are established for the MTS in quaternionic form: Cauchy's theorem, the Cauchy integral formula, Taylor and Laurent series, approximation theorems, and properties of Cauchy-type integrals. The analogues of positive powers (inner spherical monogenics) are investigated: a set of recurrence formulas between the inner spherical monogenics and explicit formulas are established. Some applications of regular functions to elasticity theory and hydrodynamics are given.
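
Writing $f = f_0 + \mathbf{f}$ with scalar part $f_0$ and vector part $\mathbf{f} = i f_1 + j f_2 + k f_3$, the Moisil-Theodorescu system is the splitting of $Df = 0$ for the operator $D = i\,\partial_x + j\,\partial_y + k\,\partial_z$:

```latex
\[
  \operatorname{div} \mathbf{f} = 0,
  \qquad
  \operatorname{grad} f_0 + \operatorname{curl} \mathbf{f} = \mathbf{0}.
\]
```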

In construction engineering, a schedule’s input data, which is usually not exactly known in the planning phase, is considered deterministic when generating the schedule. As a result, construction schedules become unreliable and deadlines are often not met. While the optimization of construction schedules with respect to costs and makespan has been a matter of research in the past decades, the optimization of the robustness of construction schedules has received little attention. In this paper, the effects of uncertainties inherent to the input data of construction schedules are discussed. Possibilities are investigated to improve the reliability of construction schedules by considering alternative processes for certain tasks and by identifying the combination of processes generating the most robust schedule with respect to the makespan of a construction project.

Performing parameter identification prior to numerical simulation is an essential task in geotechnical engineering. However, it has to be kept in mind that the accuracy of the obtained parameters is closely related to the chosen experimental setup, such as the number of sensors and their locations. A well-considered positioning of sensors can increase the quality of the measurements and reduce the number of monitoring points. This paper illustrates this concept by means of a loading device that is used to identify the stiffness and permeability of soft clays. With an initial setup of the measurement devices, the pore water pressure and the vertical displacements are recorded and used to identify the aforementioned parameters. Starting from these identified parameters, the optimal measurement setup is investigated with a method based on global sensitivity analysis. This method yields an optimal sensor location assuming three sensors for each measured quantity, and the results are discussed.

It is well known that the solution of the fundamental equations of linear elasticity for a homogeneous isotropic material in the plane stress and strain state cases can be equivalently reduced to the solution of a biharmonic equation. The discrete version of the theorem of Goursat is used to describe the solution of the discrete biharmonic equation with the help of two discrete holomorphic functions. In order to obtain a Taylor expansion of discrete holomorphic functions, we introduce a basis of discrete polynomials which fulfill the so-called Appell property with respect to the discrete adjoint Cauchy-Riemann operator. All these steps are very important in the field of fracture mechanics, where stress and displacement fields in the neighborhood of singularities caused by cracks and notches have to be calculated with high accuracy. Using the sum representation of holomorphic functions, it seems possible to reproduce the order of the singularity and to determine important mechanical characteristics.
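
The continuous Goursat representation, whose discrete counterpart is used here, expresses every biharmonic function $U$ through two holomorphic functions $\varphi$ and $\chi$:

```latex
\[
  U(x, y) \;=\; \operatorname{Re}\bigl( \bar{z}\, \varphi(z) + \chi(z) \bigr),
  \qquad z = x + \mathrm{i}\,y .
\]
```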

One of the most promising and recent advances in computer-based planning is the transition from classical geometric modeling to building information modeling (BIM). Building information models support the representation, storage, and exchange of various information relevant to construction planning. This information can be used for describing, e.g., geometric/physical properties or costs of a building, for creating construction schedules, or for representing other characteristics of construction projects. Based on this information, plans and specifications as well as reports and presentations of a planned building can be created automatically. A fundamental principle of BIM is object parameterization, which allows specifying geometrical, numerical, algebraic and associative dependencies between objects contained in a building information model. In this paper, existing challenges of parametric modeling using the Industry Foundation Classes (IFC) as a federated model for integrated planning are shown, and open research questions are discussed.

Sensor faults can affect the dependability and the accuracy of structural health monitoring (SHM) systems. Recent studies demonstrate that artificial neural networks can be used to detect sensor faults. In this paper, decentralized artificial neural networks (ANNs) are applied for autonomous sensor fault detection. On each sensor node of a wireless SHM system, an ANN is implemented to measure and to process structural response data. Structural response data is predicted by each sensor node based on correlations between adjacent sensor nodes and on redundancies inherent in the SHM system. Evaluating the deviations (or residuals) between measured and predicted data, sensor faults are autonomously detected by the wireless sensor nodes in a fully decentralized manner. A prototype SHM system implemented in this study, which is capable of decentralized autonomous sensor fault detection, is validated in laboratory experiments through simulated sensor faults. Several topologies and modes of operation of the embedded ANNs are investigated with respect to the dependability and the accuracy of the fault detection approach. In conclusion, the prototype SHM system is able to accurately detect sensor faults, demonstrating that neural networks, processing decentralized structural response data, facilitate autonomous fault detection, thus increasing the dependability and the accuracy of structural health monitoring systems.
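
A minimal sketch of the residual-based detection step on one sensor node, with synthetic data standing in for the structural response correlations and an assumed residual threshold (the prototype's network topologies and parameters are not reproduced):

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)

# Training data: responses at adjacent sensor nodes -> prediction of the
# monitored node's response (a synthetic linear correlation stands in for
# the real structural response data).
X_neighbours = rng.normal(size=(1000, 3))
y_node = X_neighbours @ np.array([0.5, 0.3, 0.2]) + 0.01 * rng.normal(size=1000)

ann = MLPRegressor(hidden_layer_sizes=(8,), max_iter=2000).fit(X_neighbours, y_node)

def sensor_fault(neighbour_data, measured, threshold=0.1):
    """Flag a fault when the residual between measured and predicted response
    exceeds a threshold (threshold value assumed for illustration)."""
    predicted = ann.predict(neighbour_data.reshape(1, -1))[0]
    return abs(measured - predicted) > threshold
```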

A topology optimization method has been developed for structures subjected to multiple load cases (for example, a bridge pier subjected to wind loads, traffic, superstructure...). We formulate the problem as a multi-criteria optimization problem, where the compliance is computed for each load case. Then, the epsilon-constraint method (proposed by Chankong and Haimes, 1971) is adapted. The strategy of this method is based on minimizing the maximum compliance, resulting from the critical load case, while the remaining compliances are treated as constraints. In each iteration, the compliances of all load cases are computed and only the maximum one is minimized. The topology optimization process thus switches from one load case to another according to the variation of the resulting compliances. In this work we motivate and explain the proposed methodology and provide some numerical examples.
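The min-max strategy described above can be cast in the standard bound formulation, in which the maximum compliance becomes an auxiliary variable and the remaining compliances appear as constraints, in the spirit of the epsilon-constraint method. A toy sizing sketch under invented loads and limits (not the authors' bridge pier example):

```python
import numpy as np
from scipy.optimize import minimize

# Toy sizing problem: two bars with areas x, three load cases.
# Compliance of load case i: C_i(x) = sum_j f_ij^2 / (E * x_j / L)
E, L, V = 210e9, 1.0, 3e-3
F = np.array([[1e5, 2e4], [4e4, 8e4], [6e4, 6e4]])   # loads per case and bar

def compliance(x):
    return (F**2 / (E * x / L)).sum(axis=1)          # one value per load case

# Bound (min-max) formulation: minimize t subject to C_i(x) <= t
z0 = np.array([1e-3, 1e-3, 1.0])                     # [x1, x2, t]
cons = [{"type": "ineq", "fun": lambda z, i=i: z[2] - compliance(z[:2])[i]}
        for i in range(len(F))]
cons.append({"type": "ineq", "fun": lambda z: V - z[:2].sum()})  # volume limit
res = minimize(lambda z: z[2], z0, constraints=cons,
               bounds=[(1e-6, None)] * 2 + [(0, None)], method="SLSQP")
print("areas:", res.x[:2], "max compliance:", res.x[2])
```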

SELECTION AND SCALING OF GROUND MOTION RECORDS FOR SEISMIC ANALYSIS USING AN OPTIMIZATION ALGORITHM
(2015)

Nonlinear time history analysis and seismic performance-based methods require a set of scaled ground motions. The conventional procedure of ground motion selection is based on matching motion properties, e.g. magnitude, amplitude, fault distance, and fault mechanism. The seismic target spectrum is only used in the scaling process that follows the random selection. Therefore, the aim of this paper is to present a procedure for selecting sets of ground motions from a compiled ground motion database. The selection procedure is based on solving an optimization problem using Dijkstra's algorithm to match the selected set of ground motions to a target response spectrum. The selection and scaling of optimized sets of ground motions is demonstrated by analyses of nonlinear single-degree-of-freedom systems.
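The paper's shortest-path formulation is not reproduced here; as a baseline sketch, the log-space least-squares scale factor per record and a greedy ranking against the target spectrum might look as follows (all spectra below are synthetic placeholders):

```python
import numpy as np

# Hypothetical inputs: response spectra of candidate records and a target
# spectrum, all sampled at the same periods T.
T = np.linspace(0.05, 4.0, 40)
target = 0.8 * np.exp(-T)                      # placeholder target Sa(T)
records = 0.8 * np.exp(-T) * (0.5 + np.random.default_rng(1).random((100, T.size)))

def best_scale(sa, sa_target):
    # least-squares optimal factor in log space: s = exp(mean(log ratio))
    return np.exp(np.mean(np.log(sa_target) - np.log(sa)))

scales = np.array([best_scale(r, target) for r in records])
mse = np.array([np.mean((np.log(s * r) - np.log(target))**2)
                for s, r in zip(scales, records)])

# Greedy selection of the 7 records whose scaled spectra match best
chosen = np.argsort(mse)[:7]
print("selected records:", chosen, "scale factors:", scales[chosen].round(2))
```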

With the advances of computer technology, structural optimization has become a prominent field in structural engineering. In this study an unconventional approach to structural optimization is presented which utilizes the Energy method with Integral Material behaviour (EIM), based on Lagrange's principle of minimum potential energy. The equilibrium condition with the EIM, as an alternative method for nonlinear analysis, is secured through minimization of the potential energy as an optimization problem. Imposing this problem as an additional constraint on a higher cost function of a structural property, a bilevel programming problem is formulated. The nested strategy for solving the bilevel problem is used, treating the energy and the upper objective function as separate optimization problems. Utilizing the convexity of the potential energy, gradient-based algorithms are employed for its minimization, while the upper cost function is minimized using gradient-free algorithms, due to its unknown properties. Two practical examples are considered in order to demonstrate the efficiency of the method. The first one presents a sizing problem of an I steel section within an encased composite cross section, utilizing the material nonlinearity. The second one is a discrete shape optimization of a steel truss bridge, which is compared to a previous study based on the Finite Element Method.
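A minimal sketch of the nested bilevel strategy, assuming a toy two-degree-of-freedom potential and an invented upper cost (gradient-based inner solve exploiting convexity, gradient-free outer search):

```python
import numpy as np
from scipy.optimize import minimize

# inner level: equilibrium by minimizing a convex potential energy Pi(u; a)
def potential_energy(u, a):
    # placeholder 2-DOF potential with a design-dependent stiffness
    K = np.array([[a + 2.0, -1.0], [-1.0, a + 1.0]])
    f = np.array([1.0, 0.5])
    return 0.5 * u @ K @ u - f @ u

def equilibrium(a):
    # gradient-based minimization, as the energy is convex in u
    return minimize(potential_energy, np.zeros(2), args=(a,), method="BFGS").x

# outer level: minimize a weight-like cost over the design variable a
def upper_cost(a_vec):
    a = a_vec[0]
    if a <= 0.0:                          # keep the stiffness matrix positive definite
        return 1e9
    u = equilibrium(a)                    # nested inner solve
    return a + 50.0 * np.abs(u).max()     # invented weight + displacement penalty

# gradient-free outer search, since the upper cost is treated as a black box
res = minimize(upper_cost, x0=[1.0], method="Nelder-Mead")
print("optimal design parameter:", res.x[0])
```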

A central issue for the autonomous navigation of mobile robots is to map unknown environments while simultaneously estimating the robot's position within this map. This chicken-and-egg problem is known as simultaneous localization and mapping (SLAM). AscTec's quadrotor Pelican is a powerful and flexible research UAS (unmanned aircraft system) which enables the development of new real-time on-board algorithms for SLAM as well as for autonomous navigation. The relative UAS pose estimation for SLAM, usually based on low-cost sensors like inertial measurement units (IMUs) and barometers, is known to be affected by high drift rates. In order to significantly reduce these effects, we incorporate additional independent pose estimation techniques using exteroceptive sensors. In this article we present first pose estimation results using a stereo camera setup and a laser range finder, each individually. Even though these methods fail in a few specific configurations, we demonstrate their effectiveness and value for the reduction of IMU drift rates and give an outlook on further work towards SLAM.

The p-Laplace equation is a nonlinear generalization of the Laplace equation. This generalization is often used as a model problem for special types of nonlinearities. The p-Laplace equation can be seen as a bridge between very general nonlinear equations and the linear Laplace equation. The aim of this paper is to solve the p-Laplace equation for 2 < p < 3 and to find strong solutions. The idea is to apply a hypercomplex integral operator and spatial function theoretic methods to transform the p-Laplace equation into the p-Dirac equation. This equation will be solved iteratively by using a fixed point theorem.
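The hypercomplex transformation to the p-Dirac equation is beyond a short sketch; as a generic illustration of solving the p-Laplace equation by fixed-point iteration, a frozen-coefficient (Kacanov-type) scheme for the 1D problem with 2 < p < 3 could look like this (not the paper's method):

```python
import numpy as np

# Fixed-point (Kacanov) iteration for the 1D p-Laplace problem
#   -( |u'|^(p-2) u' )' = f  on (0,1),  u(0) = u(1) = 0,  2 < p < 3
p, n = 2.5, 200
h = 1.0 / n
x = np.linspace(0, 1, n + 1)
f = np.ones(n - 1)                    # right-hand side at interior nodes
u = x * (1 - x)                       # initial guess

for it in range(50):
    du = np.diff(u) / h               # gradient at cell midpoints
    a = np.abs(du)**(p - 2) + 1e-8    # frozen nonlinear coefficient
    # assemble the tridiagonal system for the linearized -(a u')' = f
    main = (a[:-1] + a[1:]) / h**2
    off = -a[1:-1] / h**2
    A = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
    u_new = np.zeros_like(u)
    u_new[1:-1] = np.linalg.solve(A, f)
    if np.max(np.abs(u_new - u)) < 1e-10:
        break
    u = u_new

print("iterations:", it, "max u:", u.max())
```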

In order to minimize the probability of foundation failure resulting from cyclic actions on structures, researchers have developed various constitutive models to simulate the foundation response and soil interaction under such complex cyclic loads. The efficiency and effectiveness of these models are largely influenced by the cyclic constitutive parameters. Although much research is being carried out on these relatively new models, few details exist in the literature about the model-based identification of the cyclic constitutive parameters. This can be attributed to the difficulties and complexities of the inverse modeling of such complex phenomena. A variety of optimization strategies is available for the solution of least-squares problems, as usually posed in the field of model calibration. For the back analysis (calibration) of the soil response to oscillatory load functions, this paper gives insight into the model calibration challenges and puts forward a method for the inverse modeling of cyclically loaded foundation response such that high-quality solutions are obtained with minimum computational effort. Model responses are thus produced which adequately describe what would otherwise be observed in the laboratory or field.

Over the last decade, the technology of constructing buildings has developed dramatically, especially with the huge growth of CAD tools that help in modeling buildings, bridges, roads and other construction objects. Often, quality control and dimensional accuracy in the factory or on the construction site are based on manual measurements of discrete points. These measured points of the realized object, or a part of it, are compared with the points of the corresponding CAD model to see whether and where the construction element fits the CAD model. This process is complicated and difficult even with modern measuring technology, due to the complicated shape of the components, the large amount of manually acquired measurement data, and the high cost of manually processing the measured values. By using a modern 3D scanner, however, one obtains information about the whole constructed object and can make a complete comparison against the CAD model, which gives an idea of the quality of the object as a whole. In this paper, we present a case study of controlling the measurement quality during the construction phase of a steel bridge by using 3D point cloud technology. Preliminary results show that an early detection of mismatches between the real element and the CAD model can save a lot of time, effort and, obviously, expense.
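A minimal sketch of the core comparison step, assuming both the sampled CAD surface and the registered scan are given as point sets in a common coordinate frame (data and tolerance are invented):

```python
import numpy as np
from scipy.spatial import cKDTree

# Hypothetical data: a dense point set sampled from the CAD model and the
# registered scan of the built element, already aligned in one frame.
rng = np.random.default_rng(0)
cad_points = rng.random((100_000, 3))                 # stand-in for CAD sampling
scan_points = cad_points[:50_000] + 0.001 * rng.standard_normal((50_000, 3))

# Nearest-neighbour deviation of every scanned point from the CAD surface
tree = cKDTree(cad_points)
dist, _ = tree.query(scan_points, k=1)

tolerance = 0.005                                     # project-specific, model units
print(f"mean deviation: {dist.mean():.4f}")
print(f"points out of tolerance: {(dist > tolerance).sum()} of {len(dist)}")
```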

Recently there has been a surge of interest in PDEs involving fractional derivatives in different fields of engineering. In this extended abstract we present some of the results developed in [3]. We compute the fundamental solution for the three-parameter fractional Laplace operator by transforming the eigenfunction equation into an integral equation and applying the method of separation of variables. The obtained solutions are expressed in terms of Mittag-Leffler functions. For more details we refer the interested reader to [3], where an operational approach based on two Laplace transforms is also presented.

Polymer modification of mortar and concrete is a widely used technique to improve their durability properties. Hitherto, the main application fields of such materials have been the repair and restoration of buildings. However, due to steadily increasing service life requirements and its cost efficiency, polymer-modified concrete (PCC) is also used for construction purposes. Therefore, there is a demand for studying the mechanical properties of PCC and their substantive differences compared to conventional concrete (CC). It is important to investigate whether all the assumed hypotheses and existing analytical formulations for CC are also valid for PCC. In the present study, analytical models available in the literature are evaluated. These models are used for estimating mechanical properties of concrete. The property investigated in this study is the modulus of elasticity, which is estimated from the value of the compressive strength. An existing database was extended and adapted for polymer-modified concrete mixtures along with their experimentally measured mechanical properties. Based on the indexed data, a comparison between model predictions and experiments was conducted by calculating forecast errors.
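As an illustration of the forecast-error evaluation, the sketch below compares measured moduli against one representative square-root-type model (the ACI-style coefficient and all data values are placeholders, not entries of the actual database):

```python
import numpy as np

# Hypothetical excerpt from such a database: measured compressive strength
# fc [MPa] and modulus of elasticity E [GPa] of PCC mixtures.
fc = np.array([35.0, 42.0, 48.0, 55.0, 61.0])
E_meas = np.array([27.1, 29.0, 30.8, 32.5, 33.9])

# One representative analytical model form (code-type square-root law,
# E = k * sqrt(fc)); the coefficient is calibrated for CC, not PCC.
def model_E(fc, k=4.7):          # k = 4.7 gives E in GPa for fc in MPa
    return k * np.sqrt(fc)

pred = model_E(fc)
errors = pred - E_meas
print("RMSE [GPa]:", np.sqrt(np.mean(errors**2)).round(2))
print("MAPE [%]:", (100 * np.abs(errors / E_meas)).mean().round(1))
```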

The Re-Founding of Balbo's Libya (1933-1939). The Powerful Photographic Account of the "Ventimila"
(2014)

The first edition of this text appeared in the proceedings of the VI International Conference of Studies of CIRICE (Centro Interdipartimentale di Ricerca sull'Iconografia della Città Europea, Università di Napoli Federico II; Naples, 13-15 March 2014), entitled Città mediterranee in trasformazione. Identità e immagine del paesaggio urbano tra Sette e Novecento, edited by A. Buccaro and C. de Seta (Series: Polis, 6; Naples: Edizioni Scientifiche Italiane, 2014; pp. 1216; ISBN 9788849528145), within session 7, Le trasformazioni del paesaggio urbano nella fotografia e nella cinematografia, coordinated by F. Capano and M. Iuliano, pp. 1085-1098. The conference, open to scholars in Italy and abroad, aimed to take stock of the historiography on the Mediterranean city in the contemporary age, with particular reference to its identity, structure and image, from the beginning of industrialization through the post-Enlightenment and bourgeois age, up to themes concerning the evolution (and involution) of the post-industrial territory and landscape, as well as the development of the tourism model between the nineteenth and twentieth centuries.

Reinforced concrete walls are commonly selected as the lateral load resisting systems in the seismic design of buildings. The design procedure requires reliable and robust models to predict the wall response. Many researchers have therefore focused on using the available experimental data to comment on the quality of the models at hand. What is missing, though, is the treatment of uncertainty in the experimental data, since such data can be affected by different sources of uncertainty. In this paper, we introduce a database created for model quality evaluation purposes that considers the uncertainties in the experimental data. This is the first step of a larger study on experience-based model quality evaluation of reinforced concrete walls. Here, we briefly present the database as well as six sample validations of the developed numerical model (the quality of which is to be assessed). The database contains information on nearly 300 wall specimens from about 50 sources. Both the database and the numerical model, built for uncertainty/sensitivity analysis purposes, are mainly based on ten parameters. These include geometry, material, reinforcement layout and loading properties. The validation results show that the model is able to predict the wall response satisfactorily. Consequently, the validated numerical model can be used in further quality evaluation studies.

The Bauhaus Summer School series provides an international forum for an exchange of methods and skills related to the interaction between different disciplines of modern engineering science.
The 2012 civil engineering course was held in August over two weeks at Bauhaus-Universität Weimar. The overall aim was the exchange of research and modern scientific approaches in the field of model validation and simulation between well-known experts acting as lecturers and active students. Besides these educational intentions, the social and cultural component of the meeting was also a focus. 48 graduate and doctoral students from 20 different countries and 22 lecturers from 12 countries attended this summer school. Among other aspects, this activity can be considered successful as it raised the sensitivity towards both the significance of research in civil engineering and the role of intercultural exchange.
This volume summarizes and publishes some of the results: abstracts of keynote papers presented by the experts and selected student research works. The overview reflects the quality of this summer school. Furthermore, the individual contributions confirm that for the active students this event has been a research forum and a special opportunity to learn from the experience of the researchers in terms of methodology and strategies for research implementation in their current work.

Space Management at Universities. Workshop on Approaches to University-Internal Space Management
(2013)

This publication documents the contributions to the workshop "Flächenmanagement in Hochschulen" (Space Management at Universities) held by the Professur Betriebswirtschaftslehre im Bauwesen of the Bauhaus-Universität Weimar on 19 November 2012. The talks reproduced here offer theoretical and practical guidance on how to manage space within universities, particularly for actors from teaching, research, university administration, building and real estate administration, and policy-making.
Different modes of managing space resources are documented. The aim is to show the real estate policy conditions on which space management depends. An evaluation of a Germany-wide survey of universities on space management traces the status quo of internal space management at universities. First, an overview is given of the possible approaches to optimizing the space management of universities. University representatives of two public universities and one private university then presented established approaches to a resource-conserving use of space and identified the aspects of space management they consider most promising. Documented examples from other sectors provide additional input for the discussion of space management at universities: the space management procedure of a chemical and pharmaceutical company with extensive in-house research activities, and space optimization measures for office space in public administration.

Based on the description of a conceptual framework for the representation of planning problems on various scales, we introduce an evolutionary design optimization system. The system is exemplified by means of the generation of street networks with locally defined properties for centrality. We show three different scenarios for planning requirements and evaluate the resulting structures with respect to the requirements of our framework. Finally, the potentials and challenges of the presented approach are discussed in detail.
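A sketch of how a local centrality requirement can enter such an evolutionary loop as a fitness measure, assuming betweenness centrality as the locally defined property (graph and target values are placeholders):

```python
import networkx as nx

# A candidate street network from the evolutionary loop (nodes = junctions);
# the per-node centrality targets stand in for the planning requirements.
G = nx.grid_2d_graph(5, 5)
target = {n: 0.2 for n in G.nodes}        # hypothetical local targets

def fitness(G, target):
    # betweenness centrality as the local property to be matched
    c = nx.betweenness_centrality(G)
    return sum((c[n] - target[n])**2 for n in G.nodes)

print("fitness of candidate:", round(fitness(G, target), 4))
```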

Damping in Bolted Joints
(2013)

With the help of modern CAE-based simulation processes, it is possible to predict the dynamic behavior of fatigue strength problems in order to improve products in many industries, e.g. the building, machine construction, or automotive industries. Amongst others, such simulations can be used to improve the acoustic design of automobiles at an early development stage.
Nowadays, acoustics plays a crucial role in the process of vehicle development. Because of increased comfort demands and statutory rules, manufacturers are faced with the challenge of optimizing their cars' sound emissions. The optimization includes not only the reduction of noise: lately, with the trend towards hybrid and electric cars, it has been shown that vehicles can also become too quiet. Thus, the prediction of structural and acoustic properties based on FE simulations is becoming increasingly important before any experimental prototype is examined. With the state of the art, qualitative comparisons between different implementations are possible. However, an accurate and reliable quantitative prediction is still a challenge.
One aspect in the context of increasing the prediction quality of acoustic (or generally oscillating) problems, especially in the power-trains of automobiles, is a more accurate implementation of damping in joint structures. While material damping occurs globally and homogeneously in a structural system, the damping due to joints is a very local phenomenon, since energy is dissipated mainly in the vicinity of the joints.
This paper focusses on experimental and numerical studies performed on a single (extracted) screw connection. Starting with experimental studies that are used to identify the underlying physical model of the energy loss, the locally influencing parameters (e.g. the damping factor) are identified. In contrast to similar research projects, the approach takes a more local view within the joint interface: tangential stiffness and energy loss within the interface are spatially distributed, and interactions between the influencing parameters are taken into account. As a result, the damping matrix is no longer proportional to the mass or stiffness matrix, since it is composed of the global material damping and the local joint damping. With this new approach, the prediction quality can be increased, since the local distribution of the physical parameters within the joint interface corresponds much more closely to reality.

The 19th International Conference on the Applications of Computer Science and Mathematics in Architecture and Civil Engineering will be held at the Bauhaus University Weimar from 4 to 6 July 2012. Architects, computer scientists, mathematicians, and engineers from all over the world will meet in Weimar for an interdisciplinary exchange of experiences and to report on and discuss their results in research, development and practice. The conference covers a broad range of research areas: numerical analysis, function theoretic methods, partial differential equations, continuum mechanics, engineering applications, coupled problems, computer science, and related topics. Several plenary lectures in the aforementioned areas will take place during the conference.
We invite architects, engineers, designers, computer scientists, mathematicians, planners, project managers, and software developers from business, science and research to participate in the conference!

In this paper, a wavelet energy damage indicator is used within response surface methodology to identify damage in a simulated filler-beam railway bridge. The surrogate model is designed to include the operational and environmental conditions in the assessment. The procedure is split into two stages, a training phase and a detection phase. During the training phase, a so-called response surface is built from training data using polynomial regression and radial basis function approximation approaches. The response surface is then used to detect damage in the structure during the detection phase. The results show that the response surface model is able to detect moderate damage in one of the bridge supports while the temperatures and train velocities are varied.
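A minimal sketch of the two-phase procedure under invented data: a quadratic polynomial response surface is fitted to the indicator as a function of temperature and train velocity, and damage is flagged from the residual (the radial basis function variant is omitted here):

```python
import numpy as np

rng = np.random.default_rng(2)

# Training phase: damage indicator y observed for the healthy structure
# under varying temperature T and train velocity v (all values invented).
X = np.c_[rng.uniform(-10, 30, 200), rng.uniform(40, 120, 200)]  # [T, v]
y = 1.0 + 0.02 * X[:, 0] + 0.005 * X[:, 1] + 0.01 * rng.standard_normal(200)

# Quadratic polynomial response surface fitted by least squares
def design(X):
    T, v = X[:, 0], X[:, 1]
    return np.c_[np.ones(len(X)), T, v, T * v, T**2, v**2]

beta, *_ = np.linalg.lstsq(design(X), y, rcond=None)

# Detection phase: compare a new measurement with the surface prediction
X_new = np.array([[15.0, 80.0]])
y_new = 1.95                                   # indicator measured in service
residual = y_new - design(X_new) @ beta
print("residual:", residual.round(3),
      "-> damage suspected:", abs(residual[0]) > 0.1)
```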

Electromagnetic wave propagation is present in the vast majority of situations that occur in everyday life, whether in mobile communications, DTV, satellite tracking, broadcasting, etc. Because of this, the study of increasingly complex propagation media for electromagnetic waves has become necessary in order to optimize resources and increase the capabilities of devices, as required by the growing demand for such services.
Within electromagnetic wave propagation, different parameters are considered that characterize it under various circumstances; of particular importance are the reflectance and transmittance. There are several methods for the analysis of the reflectance and transmittance, such as the boundary condition approximation method, the plane wave expansion method (PWE), etc., but this work focuses on the WKB and SPPS methods.
The implementation of the WKB method is relatively simple, but it is found to be efficient only when working at high frequencies. The SPPS method (Spectral Parameter Power Series), based on the theory of pseudoanalytic functions, is used to solve this problem through a new representation of solutions of Sturm-Liouville equations and has recently proven to be a powerful tool for solving different boundary value and eigenvalue problems. Moreover, it has a structure that is very suitable for numerical implementation, which in this case was carried out in Matlab for the evaluation of both conventional and turning-point profiles.
The comparison between the two methods allows us to obtain valuable information about their performance, which is useful for determining the validity and propriety of their application to problems where these parameters must be calculated in real-life applications.

The present research analyses the prediction error obtained under different data availability scenarios to determine which measurements contribute to an improvement of the model prognosis and which do not. A fully coupled 2D hydro-mechanical model of a water-retaining dam is taken as an example. Here, the mean effective stress in the porous skeleton is reduced due to an increase in pore water pressure under drawdown conditions. Relevant model parameters are ranked by scaled sensitivities, Particle Swarm Optimization is applied to determine the optimal parameter values, and model validation is performed to determine the magnitude of the forecast error. We compare the predictions of the optimized models with results from a forward run of the reference model to obtain actual prediction errors.
The analyses presented here were performed on 31 data sets of 100 observations of varying data types. Calibrating with multiple types of information instead of only one brings better calibration results and an improvement in model prognosis. However, when using several types of information, the number of observations has to be increased in order to cover a representative part of the model domain; otherwise, a compromise between data availability and domain coverage proves best. Which type of calibration information contributes to the best prognosis could not be determined in advance, because the error in model prognosis does not depend on the calibration error but on the parameter error, which unfortunately cannot be determined in reality since the true parameter values are unknown. Excellent calibration fits with parameter values near the limits of physically reasonable ranges produced the highest prognosis errors, while models which included excess pore pressure values for calibration provided the best prognosis, independent of the calibration fit.
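A minimal Particle Swarm Optimization sketch of the kind used for the calibration step, with an invented misfit function standing in for the coupled hydro-mechanical model:

```python
import numpy as np

# objective: misfit between model output and observations (placeholder)
def misfit(theta):
    return np.sum((theta - np.array([2.0, -1.0]))**2)   # "true" params hidden here

rng = np.random.default_rng(3)
n_part, n_iter, dim = 20, 100, 2
x = rng.uniform(-5, 5, (n_part, dim))
v = np.zeros((n_part, dim))
p_best = x.copy()
p_val = np.array([misfit(p) for p in x])
g_best = p_best[p_val.argmin()]

for _ in range(n_iter):
    r1, r2 = rng.random((2, n_part, dim))
    # inertia + cognitive + social terms with common default coefficients
    v = 0.7 * v + 1.5 * r1 * (p_best - x) + 1.5 * r2 * (g_best - x)
    x = x + v
    val = np.array([misfit(p) for p in x])
    improved = val < p_val
    p_best[improved], p_val[improved] = x[improved], val[improved]
    g_best = p_best[p_val.argmin()]

print("calibrated parameters:", g_best.round(3))
```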

Non-destructive techniques for damage detection have become a focus of engineering interest in the last few years. However, applying these techniques to large complex structures like civil engineering buildings still has some limitations, since these types of structures are unique and the methodologies often need a large number of specimens for reliable results. For this reason, cost and time can greatly influence the final results.
Model Assisted Probability of Detection (MAPOD) has taken its place among the ranks of damage identification techniques, especially with advances in computer capacity and modeling tools. Nevertheless, the essential condition for a successful MAPOD is having a reliable model in advance. This condition opens the door to model assessment and model quality problems. In this work, an approach is proposed that uses Partial Models (PM) to compute the Probability of Damage Detection (POD). A simply supported beam that can be structurally modified and tested under laboratory conditions is taken as an example. The study includes both experimental and numerical investigations, the application of vibration-based damage detection approaches, and a comparison of the results obtained from tests and simulations.
Finally, a methodology to assess the reliability and the robustness of the models is proposed.

We study the Weinstein equation on the upper half space R3+. The Weinstein equation is connected to axially symmetric potentials. We compute solutions of the Weinstein equation depending on the hyperbolic distance and x2. These results yield explicit mean value properties. We also compute the fundamental solution. The main tools are the hyperbolic metric and its invariance properties.

The safe operation of important civil structures such as bridges can be assessed by using fracture analysis. Since analytical methods are not capable of solving many complicated engineering problems, numerical methods have been increasingly adopted. In this paper, a part of an isotropic material which contains a crack is considered as a partial model, and the quality of the proposed model is evaluated. EXtended IsoGeometric Analysis (XIGA) is a newly developed numerical approach [1, 2] which benefits from the advantages of its origins: the eXtended Finite Element Method (XFEM) and IsoGeometric Analysis (IGA). It is capable of simulating crack propagation problems without remeshing and of capturing the singular field at the crack tip by using crack tip enrichment functions. Also, an exact representation of the geometry is possible using only few elements. XIGA has also been successfully applied to the fracture analysis of cracked orthotropic bodies [3] and to the simulation of curved cracks [4]. XIGA applies NURBS functions for both the geometry description and the solution field approximation. The drawback of NURBS functions is that local refinement cannot be defined, since they are based on tensor-product constructs, unless multiple patches are used, which also has some limitations. In this contribution, XIGA is further developed to make local refinement feasible by using T-spline basis functions. Adopting a recovery-based error estimator in the proposed approach for the evaluation of the model quality and for performing the adaptive process is in progress. Finally, some numerical examples with available analytical solutions are investigated by the developed scheme.

New foundations for geometric algebra are proposed based upon the existing isomorphisms between geometric and matrix algebras. Each geometric algebra always has a faithful real matrix representation with a periodicity of 8. On the other hand, each matrix algebra is always embedded in a geometric algebra of a convenient dimension. The geometric product is also isomorphic to the matrix product, and many vector transformations such as rotations, axial symmetries and Lorentz transformations can be written in a form isomorphic to a similarity transformation of matrices. We take up the idea that Dirac applied to develop the relativistic electron equation when he took a basis of matrices for the geometric algebra instead of a basis of geometric vectors. Of course, this way of understanding the geometric algebra requires new definitions: the geometric vector space is defined as the algebraic subspace that generates the rest of the matrix algebra by addition and multiplication; isometries are simply defined as the similarity transformations of matrices as shown above; and finally the norm of any element of the geometric algebra is defined as the nth root of the determinant of its representative matrix of order n×n. The main idea of this proposal is an arithmetic point of view consisting of reversing the roles of matrix and geometric algebras, in the sense that geometric algebra is a way of accessing, working with and understanding the most fundamental conception of matrix algebra as the algebra of transformations of multilinear quantities.
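A small numerical check of the claimed isomorphism, assuming the complex 2x2 (Pauli) representation of Cl(3,0): the geometric product becomes the matrix product, the vector norm equals |det|^(1/2), and a rotation appears as a matrix similarity transformation:

```python
import numpy as np
from scipy.linalg import expm

# Pauli matrices: a faithful 2x2 complex representation of Cl(3,0)
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]])
s3 = np.array([[1, 0], [0, -1]], dtype=complex)

def vec(a):                       # geometric vector a1*e1 + a2*e2 + a3*e3
    return a[0] * s1 + a[1] * s2 + a[2] * s3

v = vec([3.0, 4.0, 0.0])
# the norm as |det|^(1/n): for a vector det(v) = -(a.a), so sqrt(|det|) = |a|
print("norm:", np.sqrt(abs(np.linalg.det(v))))        # -> 5.0

# rotation by pi/2 in the e1e2 plane as a matrix similarity transformation
B = s1 @ s2                                           # bivector e1e2, B @ B = -I
R = expm(-(np.pi / 4) * B)                            # rotor exp(-theta/2 * e12)
v_rot = R @ v @ np.linalg.inv(R)
print(np.round(v_rot, 6))                             # e1 -> e2, e2 -> -e1
```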

We briefly review and use the recent comprehensive research on the manifolds of square roots of −1 in real Clifford geometric algebras Cl(p,q) in order to construct the Clifford Fourier transform. Basically, in the kernel of the complex Fourier transform the complex imaginary unit j is replaced by a square root of −1 in Cl(p,q). The Clifford Fourier transform (CFT) thus obtained generalizes previously known and applied CFTs, which replaced the complex imaginary unit j only by blades (usually pseudoscalars) squaring to −1. A major advantage of real Clifford algebra CFTs is their completely real geometric interpretation. We study (left and right) linearity of the CFT for constant multivector coefficients in Cl(p,q), translation (x-shift) and modulation (ω-shift) properties, and signal dilations. We show an inversion theorem. We establish the CFT of vector differentials, partial derivatives, vector derivatives and spatial moments of the signal. We also derive Plancherel and Parseval identities as well as a general convolution theorem.
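For the special case Cl(2,0), the pseudoscalar e12 squares to −1, and a multivector field f = a + b e1 + c e2 + d e12 splits as (a + d e12) + e1 (b + c e12); both brackets lie in span{1, e12}, which is isomorphic to C and commutes with the kernel exp(−e12 ω·x). A discrete right-sided CFT therefore reduces to two complex FFTs. A sketch under exactly these assumptions (the paper treats general Cl(p,q)):

```python
import numpy as np

# Discrete right-sided CFT in Cl(2,0): the pseudoscalar e12 (e12^2 = -1)
# replaces the imaginary unit j.  A multivector field
#   f = a + b*e1 + c*e2 + d*e12
# splits as (a + d*e12) + e1*(b + c*e12); both brackets commute with the
# kernel exp(-e12 w.x), so the CFT reduces to two complex 2D FFTs.
def cft2(a, b, c, d):
    z1 = a + 1j * d                    # identify e12 with 1j
    z2 = b + 1j * c
    Z1, Z2 = np.fft.fft2(z1), np.fft.fft2(z2)
    # return the four multivector components of the spectrum
    return Z1.real, Z2.real, Z2.imag, Z1.imag   # (scalar, e1, e2, e12 parts)

# tiny smoke test on a random multivector signal
rng = np.random.default_rng(0)
A, B, C, D = (rng.standard_normal((8, 8)) for _ in range(4))
S, E1, E2, E12 = cft2(A, B, C, D)
print(S.shape, E1.shape)
```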

Numerical simulations in the general field of civil engineering are common for the design process of structures and/or the assessment of existing buildings. The behaviour of these structures is analytically unknown and is approximated with numerical simulation methods like the Finite Element Method (FEM). Therefore the real structure is transferred into a global model (GM, e.g. concrete bridge) with a wide range of sub models (partial models PM, e.g. material modelling, creep). These partial models are coupled together to predict the behaviour of the observed structure (GM) under different conditions. The engineer needs to decide which models are suitable for computing realistically and efficiently the physical processes determining the structural behaviour. Theoretical knowledge along with the experience from prior design processes will influence this model selection decision. It is thus often a qualitative selection of different models. The goal of this paper is to present a quantitative evaluation of the global model quality according to the simulation of a bridge subject to direct loading (dead load, traffic) and indirect loading (temperature), which induce restraint effects. The model quality can be separately investigated for each partial model and also for the coupled partial models in a global structural model. Probabilistic simulations are necessary for the evaluation of these model qualities by using Uncertainty and Sensitivity Analysis. The method is applied to the simulation of a semi-integral concrete bridge with a monolithic connection between the superstructure and the piers, and elastomeric bearings at the abutments. The results show that the evaluation of global model quality is strongly dependent on the sensitivity of the considered partial models and their related quantitative prediction quality. This method is not only a relative comparison between different models, but also a quantitative representation of model quality using probabilistic simulation methods, which can support the process of model selection for numerical simulations in research and practice.

Bridge vibration due to traffic loading has been the subject of extensive research in recent decades. Such studies are concerned with deriving solutions for the bridge-vehicle interaction (BVI) and analyzing the dynamic responses considering the randomness of the coupled (BVI) model's input parameters and the randomness of road unevenness. This study goes further to examine the effects of such randomness of input parameters and processes on the variance of the dynamic responses in quantitative measures. The input parameters examined in the sensitivity analysis are the stiffness and damping of the vehicle's suspension system, the axle spacing, and the stiffness and damping of the bridge. This study also examines the effects of the initial excitation of a vehicle on the influences of the considered input parameters. Variance-based sensitivity analysis is usually applied to deterministic models; the model of the dynamic problem, however, is stochastic due to the simulation of random processes. Thus, a setting using a joint meta-model is developed: one model for the mean response and another for the dispersion of the response. The joint model is developed within the framework of Generalized Linear Models (GLM). An enhancement of the GLM procedure is suggested and tested; this enhancement incorporates Moving Least Squares (MLS) approximation algorithms in the fitting of the mean component of the joint model. The sensitivity analysis is then performed on the joint model developed for the dynamic responses caused by BVI.

In this paper we review two distinct complete orthogonal systems of monogenic polynomials over 3D prolate spheroids. The underlying functions take values in either the reduced or the full quaternions (identified, respectively, with R3 and R4), and are generally assumed to be null solutions of the well-known Riesz and Moisil-Théodoresco systems in R3. This is done in the spaces of square integrable functions over R and H. The representations of these polynomials are given explicitly. Additionally, we show that these polynomial functions play an important role in defining the Szegö kernel function over the surface of 3D spheroids. As a concrete application, we derive the explicit expression of the monogenic Szegö kernel function over 3D prolate spheroids.

This paper presents a methodology for uncertainty quantification in cyclic creep analysis. Several models, namely the BP model, the Whaley and Neville model, a modified MC90 for cyclic loading and a modified hyperbolic function for cyclic loading, are used for the uncertainty quantification. Three types of uncertainty are included in the Uncertainty Quantification (UQ): (i) natural variability in loading and material properties; (ii) data uncertainty due to measurement errors; and (iii) modelling uncertainty and errors during cyclic creep analysis. Due to the consideration of all types of uncertainty, a measure for the total variation of the model response is achieved. The study finds that the BP, modified hyperbolic and modified MC90 models perform best for cyclic creep prediction, in that order. Further, global Sensitivity Analysis (SA) considering uncorrelated and correlated parameters is used to quantify the contribution of each source of uncertainty to the overall prediction uncertainty and to identify the important parameters. Errors in determining the input quantities and in the model itself can produce significant changes in creep prediction values. The influence of the variability of the random input quantities on the cyclic creep was studied by means of stochastic uncertainty and sensitivity analysis, namely the methods of Gartner et al. and Saltelli et al. All input imperfections were considered to be random quantities. The Latin Hypercube Sampling (LHS) numerical simulation method (a Monte Carlo type method) was used. The stochastic sensitivity analysis shows that the variability of the cyclic creep deformation is most sensitive to the elastic modulus of concrete, the compressive strength, the mean stress, the cyclic stress amplitude and the number of cycles, in that order.
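A sketch of the LHS sampling step with a stand-in creep model; the parameter bounds and the model expression are placeholders, not the BP or MC90 formulations:

```python
import numpy as np
from scipy.stats import qmc

# Latin Hypercube sample of the random inputs named in the study
# (elastic modulus, compressive strength, mean stress, stress amplitude,
# number of cycles); all bounds are hypothetical.
sampler = qmc.LatinHypercube(d=5, seed=42)
unit = sampler.random(n=1000)
lower = [25e3, 30.0, 5.0, 2.0, 1e4]
upper = [40e3, 60.0, 15.0, 8.0, 1e6]
samples = qmc.scale(unit, lower, upper)

def cyclic_creep(x):
    # stand-in for a creep model evaluation (BP, MC90, ...)
    E, fc, sm, sa, N = x.T
    return sm / E * (1 + 0.1 * sa) * np.log10(N) / np.sqrt(fc)

response = cyclic_creep(samples)
print("mean:", response.mean(), "cov:", response.std() / response.mean())
```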

In this paper, experimental studies and numerical analyses carried out on reinforced concrete beams are partially reported. They aimed to apply the rigid finite element method to calculations for reinforced concrete beams using a discrete crack model. Hence, the rotational ductility resulting from crack occurrence had to be determined. A relationship for calculating it in static equilibrium was proposed. Laboratory experiments proved that the dynamic ductility is considerably smaller; therefore, the empirical parameter was scaled accordingly. Consequently, a formula for its value depending on the reinforcement ratio was obtained.

A numerical analysis of the mode of deformation of the main load-bearing components of a typical frame-type sloping shaft headgear was performed. The analysis was done with a design model consisting of plane and solid finite elements, which were modeled in the program LIRA. From the numerical results, the regularities of the local stress distribution under a guide pulley bearing were revealed, and the parameters of the plane stress state under both emergency and normal working loads were determined. Based on the numerical simulation, guidelines to improve the construction of the joints of guide pulleys resting on sub-pulley frame-type structures were established. Overall, the results obtained form a basis for improving the engineering procedures for designing steel structures of sloping shaft headgears.

Monogenic functions play a role in quaternion analysis similar to that of holomorphic functions in complex analysis. A holomorphic function with non-vanishing complex derivative is a conformal mapping. It is well known that in Rn+1, n ≥ 2, the set of conformal mappings is restricted to the set of Möbius transformations only, and that the Möbius transformations are not monogenic. The paper deals with a locally geometric mapping property of a subset of monogenic functions with non-vanishing hypercomplex derivatives (named M-conformal mappings). It is proved that M-conformal mappings orthogonal to all monogenic constants admit a certain change of solid angles and, vice versa, that this change can characterize such mappings. In addition, we determine planes in which those mappings behave like conformal mappings in the complex plane.

The topic of structural robustness is covered extensively in the current structural engineering literature, and a few evaluation methods already exist. Since these methods are based on different evaluation approaches, their comparison is difficult. All approaches have one thing in common, however: they need a structural model that represents the structure to be evaluated. As the structural model is the basis of the robustness evaluation, the question arises whether the quality of the chosen structural model influences the estimated structural robustness index. This paper shows what robustness in structural engineering means and gives an overview of existing assessment methods. One is the reliability-based robustness index, which uses the reliability indices of an intact and a damaged structure. The second one is the risk-based robustness index, which estimates the structural robustness from the direct and indirect risks. The paper describes how these approaches to the evaluation of structural robustness work and which parameters are used. Since both approaches need a structural model for the estimation of the structural behavior and the probability of failure, it is necessary to consider the quality of the chosen structural model: the model has to represent the structure and the input factors, and reflect the damages which occur. Using the example of two different model qualities, it is shown that the model choice strongly influences the quality of the robustness index.
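In the notation common in this literature (the reliability-based index is usually attributed to Frangopol and Curley, the risk-based index to Baker et al.; the attribution is an editorial addition), the two indices discussed above read:

```latex
% reliability-based robustness index (beta = reliability index)
\beta_R = \frac{\beta_{\mathrm{intact}}}{\beta_{\mathrm{intact}} - \beta_{\mathrm{damaged}}}

% risk-based robustness index (direct vs. indirect risk)
I_{\mathrm{Rob}} = \frac{R_{\mathrm{Dir}}}{R_{\mathrm{Dir}} + R_{\mathrm{Ind}}}
```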

BAUHAUS ISOMETRY AND FIELDS
(2012)

While integration increases by networking, segregation strides ahead too. Most of us fixate our minds on special topics, yet we rely on our intuition too. We are sometimes waiting for the inflow of new ideas or valuable information that we hold in high esteem, although we are not entirely conscious of its origin. We may even say the most precious intuitions are rooted in deep subconscious, collective layers of the mind. Take as a simple example the emergence of orientation in paleolithic events and its relation to the dihedral symmetry of the compass. Consider also the extension of this algebraic matter into the operational structures of the mind on the one hand and into the algebra of geometry, Clifford algebra as we call it today, on the other. Culture and mind, and even the individual act of creation, may be connected with transient events that are subconscious and inaccessible to cognition in principle. Other events causative for our work may be merely invisible to us, though in principle they should prove attainable; in this case we are just ignorant of the whole creative process. Sometimes we begin to use unusual tools or turn into handicraft enthusiasts; then our small institutes turn into workshops and factories. All this is indeed in keeping with the Bauhaus and its spirit. We shall go into this together, and we shall present a record of this session.

The Bernstein polynomials have important applications in many branches of mathematics and the other sciences, for instance approximation theory, probability theory, statistics, number theory, the solution of differential equations, numerical analysis, the construction of Bezier curves, q-calculus, operator theory, and applications in computer graphics. The Bernstein polynomials are used to construct Bezier curves. Bezier was an engineer with the Renault car company and set out in the early 1960s to develop a curve formulation which would lend itself to shape design. Engineers may find it most understandable to think of Bezier curves in terms of the center of mass of a set of point masses. In this paper we therefore study generating functions and functional equations for these polynomials. By applying these functions, we investigate the interpolation function and many properties of these polynomials.
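The center-of-mass picture mentioned above translates directly into code: a Bezier curve is the Bernstein-weighted average of its control points. A self-contained sketch:

```python
import numpy as np
from math import comb

# Bernstein basis polynomials B_{k,n}(t) = C(n,k) t^k (1-t)^(n-k)
def bernstein(k, n, t):
    return comb(n, k) * t**k * (1 - t)**(n - k)

# A Bezier curve is the Bernstein-weighted "center of mass" of its
# control points, exactly as in the point-mass picture above.
def bezier(control, t):
    n = len(control) - 1
    t = np.asarray(t)[:, None]
    return sum(bernstein(k, n, t) * p for k, p in enumerate(control))

control = np.array([[0.0, 0.0], [1.0, 2.0], [3.0, 2.0], [4.0, 0.0]])
curve = bezier(control, np.linspace(0, 1, 50))
print(curve[0], curve[-1])   # endpoints coincide with first/last control point
```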

The aim of this study is to show an application of model robustness measures to soil-structure interaction (henceforth written as SSI) models. Model robustness is a measure of the ability of a model to provide useful answers for input parameters which typically have a wide range in geotechnical engineering. The calculation of SSI is a major problem in geotechnical engineering, and several different models exist for its estimation. These can be separated into analytical, semi-analytical and numerical methods. This paper focuses on numerical SSI models, specifically macro-element type models and more advanced finite element models using a contact description with continuum or interface elements. A brief description of the models used is given in the paper. Following this description, the considered SSI problem is introduced: a statically loaded shallow foundation with an inclined load. The different partial models accounting for the SSI effects are assessed using different robustness measures in a numerical application. The paper investigates the capability of these measures to assess the model quality of SSI partial models. A variance-based and a mathematical robustness approach are applied. These robustness measures are used in a framework which also allows the investigation of computationally expensive models. Finally, the results show that the concept of combining robustness approaches with other model quality indicators (e.g. model sensitivity or model reliability) can lead to a unified model quality assessment for SSI models.

The process of analysis and design in structural engineering requires the consideration of different partial models, for example loading, structural materials, structural elements, and analysis types. The various partial models are combined by coupling several of their components. Due to the large number of available partial models describing similar phenomena, many different model combinations are possible to simulate the same aspects of a structure. The challenging task of an engineer is to select a model combination that ensures a sufficient, reliable prognosis. In order to achieve this reliable prognosis of the overall structural behavior, a high individual quality of the partial models and an adequate coupling of the partial models are required. Several methodologies have been proposed to evaluate the quality of partial models for their intended application, but a detailed study of the coupling quality is still lacking. This paper proposes a new approach to assess the coupling quality of partial models in a quantitative manner. The approach is based on the consistency of the coupled data and applies to uni- and bidirectionally coupled partial models. Furthermore, the influence of the coupling quality on the output quantities of the partial models is considered. The functionality of the algorithm and the effect of the coupling quality are demonstrated using an example of coupled partial models in structural engineering.

The aim of this paper is to discuss explicit series constructions for the fundamental solution of the Helmholtz operator on some important examples of non-orientable conformally flat manifolds. In the context of this paper we focus on higher dimensional generalizations of the Klein bottle, which in turn generalize the higher dimensional Möbius strips that we discussed in preceding works. We discuss some basic properties of pinor-valued solutions to the Helmholtz equation on these manifolds.

THE INFLUENCE OF THE LOCAL CONCAVITY ON THE FUNCTIONING OF BEARING SHELL OF HIGH-RISE CONSTRUCTION
(2012)

Areas with various defects and damages, which reduce the carrying capacity, were examined in a study of metal chimneys. In this work, the influence of local dimples on the performance of metal chimneys was considered. The modeling tasks were completed in the software packages LIRA and ANSYS. Parameters characterizing the local dimples were identified, and a numerical study of the influence of local dimples on the stress-strain state of the shells of metal chimneys was conducted. The distribution fields of the circumferential and meridional stresses were analyzed in the investigated area. Zones of influence of dimples on the load-bearing shell of metal chimneys were investigated. The bearing capacities of high-rise structures with various dimple geometries and various shell parameters were determined with respect to specified areas of the trunk. Relationships for the decrease in the bearing capacity of a shell as a function of the dimple parameters are represented graphically. The diameter and thickness of the shells of metal chimneys were determined according to the resulting data.

Many structures in different engineering applications suffer from cracking. In order to make reliable prognoses about the serviceability of those structures, it is of utmost importance to identify cracks as precisely as possible by non-destructive testing. A novel approach (XIGA), which combines Isogeometric Analysis (IGA) and the Extended Finite Element Method (XFEM), is used for the forward problem, namely the analysis of a cracked material, see [1]. Applying the NURBS (Non-Uniform Rational B-Spline) based approach from IGA together with the XFEM allows an effective description of arbitrarily shaped cracks and avoids the necessity of remeshing during the crack identification problem. We want to exploit these advantages for the inverse problem of detecting existing cracks by non-destructive testing, see e.g. [2]. The quality of the reconstructed cracks, however, depends on two major issues, namely the quality of the measured data (measurement error) and the discretization of the crack model. The first one is taken into account by applying regularizing methods with a posteriori stopping criteria. The second one is critical in the sense that too few degrees of freedom, i.e. the number of control points of the NURBS, do not allow a precise description of the crack. An increased number of control points, however, increases the number of unknowns in the inverse analysis and intensifies the ill-posedness. The trade-off between accuracy and stability is sought by applying an inverse multilevel algorithm [3, 4], where the identification starts with short knot vectors which are successively enlarged during the identification process.

It is well known that complex quaternion analysis plays an important role in the study of higher order boundary value problems of mathematical physics. Following the ideas given for real quaternion analysis, the paper deals with certain orthogonal decompositions of the complex quaternion Hilbert space into subspaces of null solutions of a Dirac type operator with an arbitrary complex potential. We then apply them to consider related boundary value problems, and to prove the existence and uniqueness as well as the explicit representation formulae of the underlying solutions.

This paper focuses on first numerical tests of the coupling between an analytical solution and the finite element method, using an example problem of fracture mechanics. The calculations were done according to the ideas proposed in [1]. The analytical solutions are constructed using an orthogonal basis of holomorphic and anti-holomorphic functions. For the coupling with the finite element method, special elements are constructed using the trigonometric interpolation theorem.