### Refine

#### Document Type

- Conference Proceeding (80)
- Doctoral Thesis (17)
- Article (12)
- Master's Thesis (4)
- Bachelor Thesis (3)
- Study Thesis (2)
- Report (1)

#### Institute

- In cooperation with the Bauhaus-Universität Weimar (50)
- Institut für Strukturmechanik (20)
- Graduiertenkolleg 1462 (12)
- Professur Angewandte Mathematik (4)
- Juniorprofessur Augmented Reality (3)
- Juniorprofessur Stochastik und Optimierung (3)
- Professur Betriebswirtschaftslehre im Bauwesen (3)
- Professur Grundbau (3)
- Professur Informatik im Bauwesen (3)
- Institut für Europäische Urbanistik (2)

#### Year of publication

- 2010 (119)

We present recent developments of adaptive wavelet solvers for elliptic eigenvalue problems. We describe the underlying abstract iteration scheme of the preconditioned perturbed iteration. We apply the iteration to a simple model problem in order to identify the main ideas which a numerical realization of the abstract scheme is based upon. This indicates how these concepts carry over to wavelet discretizations. Finally we present numerical results for the Poisson eigenvalue problem on an L-shaped domain.
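
The following minimal sketch illustrates the basic inverse-iteration idea on a finite-difference discretization of the Poisson eigenvalue problem; it is only a stand-in for the abstract preconditioned perturbed iteration and does not reproduce the adaptive wavelet discretization of the paper (grid size and iteration count are arbitrary choices).

```python
import numpy as np
from scipy.sparse import diags, identity, kron
from scipy.sparse.linalg import splu

# Minimal sketch: inverse iteration for the smallest Poisson eigenvalue on the
# unit square with a finite-difference Laplacian. The exact sparse factorization
# plays the role of the preconditioner; the adaptive wavelet machinery of the
# paper is not reproduced here.
n = 63                       # interior grid points per direction (illustrative)
h = 1.0 / (n + 1)
T = diags([-1, 2, -1], [-1, 0, 1], shape=(n, n)) / h**2
A = (kron(identity(n), T) + kron(T, identity(n))).tocsc()   # 2D Laplacian

solver = splu(A)             # stands in for the (exact) preconditioner
u = np.random.rand(n * n)
u /= np.linalg.norm(u)
for _ in range(50):
    u = solver.solve(u)      # one inverse-iteration step
    u /= np.linalg.norm(u)
lam = u @ (A @ u)            # Rayleigh quotient
print(lam, 2 * np.pi**2)     # approaches the exact smallest eigenvalue 2*pi^2
```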

The application of a partly decoupled continuum-mechanics approach facilitates the calculation of structural responses due to welding. The numerical results demonstrate that a qualitative prediction of welded connections is possible. Since it is intended to integrate the local effects of a joint into the structural analysis of steel constructions, higher demands on quality have to be met. The wide array of material parameters affecting the thermal, metallurgical and mechanical behavior, which have to be identified, is presented. For that purpose further investigations are necessary to analyze the sensitivity of the models towards different material properties. The experimental determination of every material parameter is not possible due to the extraordinarily laborious effort required. Besides that, experimentally identified parameters can be applied only to the tested steel grade and the measured temperature-time regimes. For that reason alternative approaches for the identification of material parameters, such as optimization strategies, have to be applied. Once the material parameters are defined, a quantitative prediction of welded connections will also be possible. Numerical results show the effect of phase transformations, activated by the welding process, on the residual stress state. As these phenomena occur in local areas in the range of crystal and grain sizes, the description of microscopic phenomena and their transfer to the macroscopic level by means of homogenization approaches might be expedient. Nevertheless, one should bear in mind the increasing number of material parameters as well as the complexity of their experimental determination. Thus the microscopic approach should always be assessed with regard to the capability and efficiency of the required prediction. Under certain circumstances a step backwards, adopting a phenomenological approach, can also be beneficial.

Nodal integration of finite elements has been investigated recently. Compared with full integration it shows better convergence when applied to incompressible media, allows easier remeshing and greatly reduces the number of material evaluation points, thus improving efficiency. Furthermore, understanding it may help to create new integration schemes in meshless methods as well. The new integration technique requires a nodally averaged deformation gradient. For the tetrahedral element it is possible to formulate a nodal strain which passes the patch test. On the downside, it introduces non-physical low energy modes. Most of these "spurious modes" are local deformation maps of neighbouring elements. Present stabilization schemes rely on adding a stabilizing potential to the strain energy. This stabilization is discussed within this article. Its drawbacks are easily identified within numerical experiments: nonlinear material laws are not well represented, plastic strains may often be underestimated, and geometrically nonlinear stabilization greatly reduces computational efficiency. The article reinterprets nodal integration in terms of imposing a nonconforming C0-continuous strain field on the structure. By doing so, the origins of the spurious modes are discussed and two methods are presented that solve this problem. First, a geometric constraint is formulated and solved using a mixed formulation of Hu-Washizu type. This assumption leads to a consistent representation of the strain energy while eliminating spurious modes. The solution is exact, but only of theoretical interest since it produces global support. Second, an integration scheme is presented that approximates the stabilization criterion. The latter leads to a highly efficient scheme. It can even be extended to other finite element types such as hexahedral elements. The numerical efficiency, convergence behaviour and stability of the new method are validated using linear tetrahedral and hexahedral elements.
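
For orientation, one common form of the nodally averaged strain used in such schemes for linear tetrahedra is recalled below; it is standard in the literature on nodal integration and not necessarily identical to the specific formulation discussed in the article.

```latex
\bar{\boldsymbol{\varepsilon}}_I = \frac{1}{V_I}\sum_{e \in \mathcal{E}_I}\frac{V_e}{4}\,\boldsymbol{\varepsilon}_e,
\qquad
V_I = \sum_{e \in \mathcal{E}_I}\frac{V_e}{4},
```

where \mathcal{E}_I denotes the elements attached to node I, V_e their volumes and \boldsymbol{\varepsilon}_e the constant element strains; the strain energy is then integrated node-wise from \bar{\boldsymbol{\varepsilon}}_I.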

Steel structural design is an integral part of the building construction process. So far, various methods of design have been applied in practice to satisfy the design requirements. This paper attempts to apply Differential Evolution Algorithms to the automation of specific synthesis steps and the rationalization of the design process. The capacity of the Differential Evolution Algorithms to deal with continuous and/or discrete optimization of steel structures is also demonstrated. The goal of this study is to propose an optimal design of steel frame structures using built-up I-sections and/or a combination of standard hot-rolled profiles. For all steel frame structures optimized in this paper, the obtained solutions were better than the original solution designed by the manufacturer. Taking the criteria regarding the quality and efficiency of the practical design into consideration, the optimal designs produced with the Differential Evolution Algorithms can completely replace conventional design because of their excellent performance.
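
As a point of reference, the classical DE/rand/1/bin scheme is sketched below on a toy objective; the actual frame-design variables, constraints and section catalogues of the paper are not reproduced, and all parameter values are illustrative assumptions.

```python
import numpy as np

# Hedged sketch of the classical DE/rand/1/bin scheme on a toy objective.
def differential_evolution(f, bounds, pop=30, F=0.8, CR=0.9, gens=200, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds).T
    x = lo + rng.random((pop, len(lo))) * (hi - lo)        # initial population
    fx = np.apply_along_axis(f, 1, x)
    for _ in range(gens):
        for i in range(pop):
            a, b, c = x[rng.choice([j for j in range(pop) if j != i], 3, replace=False)]
            mutant = np.clip(a + F * (b - c), lo, hi)       # mutation
            cross = rng.random(len(lo)) < CR
            cross[rng.integers(len(lo))] = True             # keep at least one gene
            trial = np.where(cross, mutant, x[i])           # binomial crossover
            ft = f(trial)
            if ft <= fx[i]:                                 # greedy selection
                x[i], fx[i] = trial, ft
    best = np.argmin(fx)
    return x[best], fx[best]

# Toy "weight" objective with a penalty term, standing in for the member sizing.
weight = lambda v: np.sum(v**2) + 10 * max(0.0, 1.0 - v[0] * v[1])
print(differential_evolution(weight, [(0.1, 5.0)] * 2))
```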

A practical framework for generating cross correlated fields with a specified marginal distribution function, an autocorrelation function and cross correlation coefficients is presented in the paper. The contribution builds on a recent journal paper [1]. The approach relies on well known series expansion methods for the simulation of a Gaussian random field. The proposed method requires all cross correlated fields over the domain to share an identical autocorrelation function, and the cross correlation structure between each pair of simulated fields to be simply defined by a cross correlation coefficient. Such relations result in specific properties of the eigenvectors of the covariance matrices of the discretized field over the domain. These properties are used to decompose the eigenproblem, which must normally be solved in computing the series expansion, into two smaller eigenproblems. Such decomposition represents a significant reduction of computational effort. Non-Gaussian components of a multivariate random field are proposed to be simulated via memoryless transformation of underlying Gaussian random fields, for which the Nataf model is employed to modify the correlation structure. In this method, the autocorrelation structure of each field is fulfilled exactly while the cross correlation is only approximated. The associated errors can be computed before performing simulations, and it is shown that the errors arise especially in the cross correlation between distant points and that they are negligibly small in practical situations.
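
The core idea of sharing one autocorrelation function among all fields can be illustrated as follows: the covariance of the stacked Gaussian fields becomes the Kronecker product of the cross-correlation matrix and the autocorrelation matrix, so its eigenpairs follow from two small eigenproblems. The sketch below uses an illustrative grid, correlation length and 2x2 cross-correlation matrix, not the data of the paper.

```python
import numpy as np

n, rho_c = 200, 0.6
x = np.linspace(0.0, 10.0, n)
C_auto = np.exp(-np.abs(x[:, None] - x[None, :]) / 2.0)   # shared exponential autocorrelation
C_cross = np.array([[1.0, rho_c], [rho_c, 1.0]])           # two cross-correlated fields

la, Va = np.linalg.eigh(C_auto)     # small eigenproblem 1 (n x n)
lc, Vc = np.linalg.eigh(C_cross)    # small eigenproblem 2 (m x m)

rng = np.random.default_rng(1)
xi = rng.standard_normal((2, n))                     # independent standard normals
amp = np.sqrt(np.outer(lc, la).clip(min=0.0))        # sqrt of the Kronecker eigenvalues
fields = Vc @ (amp * xi) @ Va.T                      # stacked Gaussian sample, shape (2, n)
print(np.corrcoef(fields[0], fields[1])[0, 1])       # sample estimate of the cross correlation
```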

PARAMETER IDENTIFICATION OF MESOSCALE MODELS FROM MACROSCOPIC TESTS USING BAYESIAN NEURAL NETWORKS
(2010)

In this paper, a parameter identification procedure using Bayesian neural networks is proposed. Based on a training set of numerical simulations, where the material parameters are sampled in a predefined range using Latin Hypercube sampling, a Bayesian neural network, which has been extended to describe the noise of multiple outputs using a full covariance matrix, is trained to approximate the inverse relation from the experiment (displacements, forces etc.) to the material parameters. The method offers not only the possibility to determine the parameters themselves, but also the accuracy of the estimate and the correlation between these parameters. As a result, a set of experiments can be designed to calibrate a numerical model.
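
A minimal sketch of the Latin Hypercube step that generates the training set of material parameters for the forward simulations is given below; parameter names and ranges are illustrative assumptions, and the Bayesian neural network surrogate itself is not reproduced.

```python
import numpy as np

def latin_hypercube(n_samples, bounds, seed=0):
    """Draw a Latin Hypercube sample inside the given parameter bounds."""
    rng = np.random.default_rng(seed)
    d = len(bounds)
    strata = np.stack([rng.permutation(n_samples) for _ in range(d)], axis=1)
    u = (strata + rng.random((n_samples, d))) / n_samples   # one point per stratum
    lo, hi = np.array(bounds).T
    return lo + u * (hi - lo)

# e.g. Young's modulus [GPa] and tensile strength [MPa] ranges (illustrative only)
X_train = latin_hypercube(100, [(20.0, 40.0), (2.0, 5.0)])
# each row would drive one mesoscale simulation; the simulated outputs (forces,
# displacements) then form the inputs of the inverse surrogate.
```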

The article presents an analysis of the stress distribution in the reinforced concrete support beam bracket which is a component of a prefabricated reinforced concrete building. The building structure is a spatial frame in which expansion joints were applied. The required stiffness of the structure is provided by frames with stiff joints, monolithic lift shafts and staircases. The prefabricated slab floors are supported by beam shelves which are shaped as an inverted letter 'T'. The beams are supported by the column brackets. In order to lower the storey height and fulfill the architectural demands at the same time, the designer lowered the height of the beam at the support zone. The analyzed case refers to the bracket zone where a slant crack on the support beam bracket was observed. It could appear as a result of exceeding the allowable tensile stresses in the reinforced concrete in the bracket zone. It should be noted that the construction solution applied, i.e. the concurrent support of the "undercut" beam on the column bracket, causes a local concentration of stresses in the undercut zone, where the largest transverse forces and shear stresses occur concurrently. Additional normal stresses resulting from placing the slab floors on the lower part of the beam shelves are superimposed on those described above.

Since the 1990s the Pascal matrix, its generalizations and applications have been in the focus of a great number of publications. As is well known, the Pascal matrix, the symmetric Pascal matrix and other special matrices of Pascal type play an important role in many scientific areas, among them numerical analysis, combinatorics, number theory, probability, image processing, signal processing, electrical engineering, etc. We present a unified approach to matrix representations of special polynomials in several hypercomplex variables (new Bernoulli, Euler etc. polynomials), extending results of H. Malonek, G. Tomaz: Bernoulli polynomials and Pascal matrices in the context of Clifford Analysis, Discrete Appl. Math. 157(4) (2009) 838-847. The hypercomplex version of a new Pascal matrix with block structure, which resembles the ordinary one for polynomials of one variable, will be discussed in detail.
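
For reference, the classical one-variable objects that the hypercomplex construction generalizes can be recalled as follows (standard definitions, not taken from the paper):

```latex
P_{ij}=\binom{i}{j}\ (0\le j\le i\le n),\qquad
S = P\,P^{\mathsf T},\quad S_{ij}=\binom{i+j}{i},\qquad
B_n(x)=\sum_{k=0}^{n}\binom{n}{k}B_k\,x^{\,n-k},
```

i.e. the lower triangular Pascal matrix P, the symmetric Pascal matrix S, and the matrix form \mathbf{b}(x)=P(x)\,\mathbf{b}(0) of the Bernoulli polynomials, where P(x)_{ij}=\binom{i}{j}x^{\,i-j} and B_k = B_k(0) are the Bernoulli numbers.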

The planning of complex buildings is increasingly carried out with planning tools that allow the export of building information in the STEP format on the basis of the IFC (Industry Foundation Classes). The availability of this interface makes it possible to use building information for further processing. For the visualization of the geometric data, the IFC provide several geometric models for the representation of building elements. Among other things, geometric Boolean operations are needed for "cutting" openings (e.g. for windows and doors) out of building elements.
The subject of this contribution is the presentation of an algorithm for computing Boolean operations on the basis of a triangulated B-Rep (Boundary Representation) model following HUBBARD (1990). Since building elements within IFC building models are often the result of several Boolean operations (e.g. subtracting several window openings from a given wall), Hubbard's algorithm was adapted so that several Boolean operations can be computed simultaneously. This optimization achieves a considerable reduction of the required computations and thus of the computing time.

Complex buildings and other structures are increasingly planned with software that supports the export of building information in the STEP format on the basis of the IFC (Industry Foundation Classes). Because of the availability of this interface, it is possible to use the data of a building for further processing.
Within the IFC, several geometrical models for the visualization of building elements are provided. Among others, geometric Boolean set operations are needed to "subtract" openings from building elements (e.g. for windows or doors) - CSG (Constructive Solid Geometry).
Therefore, software components based on the algorithms of [Laidlaw86] and [Hubbard90] were developed at the professorship Informatik im Bauwesen that support these functionalities on the basis of Java3D. However, it turned out in practice that these components are numerically unstable and that there is no acceptable robustness or error tolerance. This is caused by mistakes in the implementation (bugs) as well as the insufficient handling of numerical inaccuracies. Further, a verification and, where applicable, a correction of qualitatively substandard input data is missing.
Prior to this student research project, the implementation of a self-contained application for a visual error control was initiated. This tool visualizes several program steps and their corresponding data. With use of this tool, the implemented algorithms can be analyzed in detail.
The papers [Laidlaw86] and [Hubbard90] describe some essential steps of the algorithm, as well as implementation details for executing Boolean set operations on the basis of a B-rep (Boundary Representation) model, only unsatisfactorily. Hence, the algorithm should be documented comprehensibly with the help of figures and pseudo code. Moreover, problems within the existing implementation shall be identified and possible solution strategies shall be provided.

In the course of the steadily growing use of building information models (BIM) for different use cases, the quality assurance of their content is of great importance. It has to be carried out, in accordance with the project goal, for every software application involved in the data exchange. With the Industry Foundation Classes (IFC), an established format for the description and exchange of such a model is available. For the quality assurance process, a server-based test environment will be part of the new IFC certification procedure. For this purpose, a Global Testing Documentation Server (GTDS) was implemented by the "iabi - Institut für angewandte Bauinformatik" in cooperation with "buildingSMART e.V." (http://www.buildingsmart.de). The GTDS is a database-based web application that pursues the following intentions:
• provision of a tool for the qualitative testing of IFC-based models
• support of the communication between IFC developers and users
• documentation of the quality of IFC-based software applications
• provision of a platform for the certification of IFC applications
The subject of this work is the planning and exemplary implementation of a tool for the interactive visualization of quality deficits detected in the model by the GTDS. The exemplary implementation is to be based on the OPEN IFC TOOLS (http://www.openifctools.org).

This paper deals with the modelling and the analysis of masonry vaults. Numerical FEM analyses are performed using the LUSAS code. Two vault typologies are analysed (barrel and cross-ribbed vaults), parametrically varying geometrical proportions and constraints. The proposed model and the developed numerical procedure are implemented in a computer analysis. Numerical applications are developed to assess the effectiveness of the model and the efficiency of the numerical procedure. The main object of the present paper is the development of a computational procedure which allows defining the 3D structural behaviour of masonry vaults. For each investigated example, the homogenized limit analysis approach has been employed to predict the ultimate load and the failure mechanisms. Finally, both a mesh dependence study and a sensitivity analysis are reported. The sensitivity analysis is conducted by varying the mortar tensile strength and the mortar friction angle over a wide range, with the aim of investigating the influence of the mechanical properties of the joints on the collapse load and the failure mechanisms. The proposed computer model is validated by a comparison with experimental results available in the literature.

Building information modeling offers a huge potential for increasing the productivity and quality of construction planning processes. Despite its promising concept, this approach has not found widespread use. One of the reasons is the insufficient coupling of the structural models with the general building model. Instead, structural engineers usually set up a structural model that is independent of the building model and consists of mechanical models of reduced dimension. An automatic model generation, which would be valuable in case of model revisions, is therefore not possible. This can be overcome by a volumetric formulation of the problem. A recent approach applied the p-version of the finite element method to this problem. This method, in conjunction with a volumetric formulation, is suited to simulate the structural behaviour of both "thick" solid bodies and thin-walled structures. However, there remains a notable discretization error in the numerical models. This paper therefore proposes a new approach for overcoming this situation. It suggests combining isogeometric analysis with volumetric models in order to integrate the structural design into the digital, building model-centered planning process and to reduce the discretization error. The concept of isogeometric analysis consists, roughly, in the application of NURBS functions to represent the geometry and the shape functions of the elements. These functions possess some beneficial properties regarding numerical simulation. Their use, however, leads to some intricacies related to the setup of the stiffness matrix. This paper describes some of these properties.
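
For readers unfamiliar with the ingredients, the standard B-spline/NURBS definitions referred to here are (textbook notation, not specific to the paper):

```latex
N_{i,0}(\xi)=\begin{cases}1, & \xi_i \le \xi < \xi_{i+1}\\ 0, & \text{otherwise},\end{cases}\qquad
N_{i,p}(\xi)=\frac{\xi-\xi_i}{\xi_{i+p}-\xi_i}\,N_{i,p-1}(\xi)
            +\frac{\xi_{i+p+1}-\xi}{\xi_{i+p+1}-\xi_{i+1}}\,N_{i+1,p-1}(\xi),
\qquad
R_{i,p}(\xi)=\frac{N_{i,p}(\xi)\,w_i}{\sum_j N_{j,p}(\xi)\,w_j},\qquad
\mathbf{C}(\xi)=\sum_i R_{i,p}(\xi)\,\mathbf{P}_i,
```

with knot vector \{\xi_i\}, weights w_i and control points \mathbf{P}_i; in the isogeometric concept the same rational functions R_{i,p} serve as the shape functions of the elements.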

Information technology plays a key role in the everyday operation of buildings and campuses. Many proprietary technologies and methodologies can assist in effective Building Performance Monitoring (BPM) and efficient managing of building resources. The integration of related tools like energy simulation packages, facility, energy and building management systems, and enterprise resource planning systems is of benefit to BPM. However, the complexity of integrating such domain-specific systems prevents their common usage. Service Oriented Architecture (SOA) has been deployed successfully in many large multinational companies to create integrated and flexible software systems, but so far this methodology has not been applied broadly to the field of BPM. This paper envisions that SOA provides an effective integration framework for BPM. A service oriented architecture for the ITOBO framework for sustainable and optimised building operation is proposed, and an implementation of a building performance monitoring system is introduced.

The paper is devoted to a study of properties of homogeneous solutions of the massless field equation in higher dimensions. We first treat the case of dimension 4. Here we use the two-component spinor language (developed for purposes of general relativity). We describe how massless field operators are related to higher spin analogues of the de Rham sequence - the so-called Bernstein-Gel'fand-Gel'fand (BGG) complexes - and how they are related to the twisted Dirac operators. Then we study a similar question in higher (even) dimensions. Here we have to use more tools from the representation theory of the orthogonal group. We recall the definition of massless field equations in higher dimensions and the relations to higher dimensional conformal BGG complexes. Then we discuss properties of homogeneous solutions of the massless field equation. Using some recent techniques for the decomposition of tensor products of irreducible $Spin(m)$-modules, we are able to add some new results on the structure of the spaces of homogeneous solutions of massless field equations. In particular, we show that the kernel of the massless field equation in a given homogeneity contains at least one specific irreducible submodule.

We consider a structural truss problem where all of the physical model parameters are uncertain: not just the material values and applied loads, but also the positions of the nodes are assumed to be inexact but bounded and are represented by intervals. Such uncertainty may typically arise from imprecision during the process of manufacturing or construction, or round-off errors. In this case the application of the finite element method results in a system of linear equations with numerous interval parameters which cannot be solved conventionally. Applying a suitable variable substitution, an iteration method for the solution of a parametric system of linear equations is firstly employed to obtain initial bounds on the node displacements. Thereafter, an interval tightening (pruning) technique is applied, firstly on the element forces and secondly on the node displacements, in order to obtain tight guaranteed enclosures for the interval solutions for the forces and displacements.
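
As a crude point of comparison only, the sketch below bounds the response of a small symmetric two-bar truss with interval stiffness parameters and an interval node height by sampling the interval endpoints; since the response is monotone in each parameter here, the vertex combinations give the exact range. This is not the parametric interval iteration with pruning described in the abstract, and all numbers are illustrative assumptions.

```python
import numpy as np
from itertools import product

# Brute-force vertex sampling for a symmetric two-bar truss under a vertical load,
# with interval Young's modulus, cross-section area and apex height.
E = (200e9, 210e9); A = (9e-4, 11e-4); H = (0.95, 1.05)   # intervals [Pa, m^2, m]
P = 10e3                                                   # vertical load [N]
disp = []
for e, a, hgt in product(E, A, H):
    L = np.hypot(1.0, hgt)            # bar length for half-span 1 m and height hgt
    c = hgt / L                       # vertical direction cosine
    k = 2 * e * a / L * c**2          # vertical stiffness of the bar pair
    disp.append(P / k)
print(min(disp), max(disp))           # lower/upper bound on the apex displacement
```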

Due to the increasing number of wind energy converters, the accurate assessment of the lifespan of their structural parts and of the entire converter system is becoming more and more important. Lifespan-oriented design, inspections and remedial maintenance are challenging because of the complex dynamic behavior of these systems. Wind energy converters are subjected to stochastic turbulent wind loading, causing corresponding stochastic structural response and vibrations associated with an extreme number of stress cycles (up to 10^9 according to the rotation of the blades). Currently, wind energy converters are constructed for a service life of about 20 years. However, this estimation is more or less made by rule of thumb and not backed by profound scientific analyses or accurate simulations. By contrast, modern structural health monitoring systems allow an improved identification of deteriorations and, thereupon, a drastically advanced lifespan assessment of wind energy converters. In particular, monitoring systems based on artificial intelligence techniques represent a promising approach towards cost-efficient and reliable real-time monitoring. Therefore, an innovative real-time structural health monitoring concept based on software agents is introduced in this contribution. Recently, this concept has also been turned into a real-world monitoring system developed in a DFG joint research project at the authors' institute at the Ruhr-University Bochum. In this paper, primarily the agent-based development, implementation and application of the monitoring system are addressed, focusing on the real-time monitoring tasks in due detail.

In order to model and simulate collapses of large scale complex structures, a user-friendly and high performance software system is essential. Because a large number of simulation experiments have to be performed, efficient interactive control and visualization capabilities for model parameters and simulation results are crucial, next to an appropriate simulation model and high performance computing. In this respect, this contribution is concerned with advancements of the software system CADCE (Computer Aided Demolition using Controlled Explosives) that is extended under particular consideration of computational steering concepts. Thereby, focus is placed on problems and solutions for the collapse simulation of real world large scale complex structures. The simulation model applied is based on a multilevel approach embedding finite element models on a local as well as a near field length scale, and multibody models on a global scale. Within the global level simulation, relevant effects of the local and the near field scale, such as fracture and failure processes of the reinforced concrete parts, are approximated by means of tailor-made multibody subsystems. These subsystems employ force elements representing nonlinear material characteristics in terms of force/displacement relationships that are determined in advance by finite element analysis. In particular, enhancements concerning the efficiency of the multibody model and improvements of the user interaction are presented that are crucial for the capability of the computational steering. Some scenarios of collapse simulations of real world large scale structures demonstrate the implementation of the above mentioned approaches within the computational steering.

CRITICAL STRESS ASSESSMENT IN ANGLE TO GUSSET PLATE BOLTED CONNECTION BY SIMPLIFIED FEM MODELLING
(2010)

Simplified modelling of friction grip bolted connections of steel members to gusset plates is often applied in engineering practice. The paper deals with the simplification of the pre-tensioned bolt model and the simplification of the load transfer within the connection. The influence on the normal strain (and thus stress) distribution at the critical cross-section is investigated. Laboratory tests of bolted connections of single-angle or double-angle members to gusset plates were taken as the basis for the numerical analysis. FE models were created using 1D and 2D elements. Angles and gusset plates were modelled with shell elements. Two methods of modelling friction grip bolting were considered: a bolt-regarding approach with 1D element systems modelling the bolts, and two variants of a bolt-disregarding approach with special constraints over some part of the member and gusset plate surfaces in contact: a) constraints over the whole area of contact, b) constraints over the area around each bolt shank ("partially tied"). Modelling of friction grip bolted connections using simplified bolt modelling may be effective, especially in the case of an analysis concerning the elastic range only. In such a case, disregarding the bolts and replacing them with "partially tied" modelling seems to be more attractive. It is less time-consuming and provides results of similar accuracy in comparison to an analysis utilizing simplified bolt modelling.

The uncertainty existing in the construction industry is greater than in other industries. Consequently, most construction projects do not go totally as planned. The project management plan therefore needs to be adapted repeatedly within the project lifecycle to suit the actual project conditions. Generally, the risks of change in the project management plan are difficult to identify in advance, especially if these risks are caused by unexpected events such as human errors or changes in the client's preferences. The knowledge acquired from different resources is essential to identify the probable deviations as well as to find proper solutions to the change risks faced. Hence, it is necessary to have a knowledge base that contains known solutions for the common exceptional cases that may cause changes in each construction domain. The ongoing research work presented in this paper uses the process modeling technique of Event-driven Process Chains to describe different patterns of structure changes in the schedule networks. This results in several so-called "change templates". Under each template, different types of change risk/response pairs can be categorized and stored in a knowledge base. This knowledge base is described as an ontology model populated with reference construction process data. The implementation of the developed approach can be seen as an iterative scheduling cycle that will be repeated within the project lifecycle as new change risks surface. This can help to check the availability of ready solutions in the knowledge base for the situation at hand. Moreover, if the solution is adopted, CPSP ("Change Project Schedule Plan"), a prototype developed for the purpose of this research work, will be used to make the needed structure changes of the schedule network automatically based on the change template. What-if scenarios can be implemented using the CPSP prototype in the planning phase to study the effect of specific situations without endangering the success of the project objectives. Hence, better designed and more maintainable project schedules can be achieved.

This thesis deals with the geometric suffosion resistance of soils. With the probability-theoretical approach of percolation theory, an analytical method was chosen with which suffosive material transport processes can be modelled and quantified. Using the percolation model, an arbitrary pore structure of a real soil was modelled in three dimensions. Possible material transport processes within the modelled pore structure were then simulated. Generally valid laws were derived and limit conditions were formulated. These are independent of the particular soil and describe relations between material transport and pore structure. The results are applicable to homogeneous, isotropic and self-similar soil fabrics. Statements about specific soils can be made via the transformation method. For the use of the transformation method, the relevant pore structure, i.e. the pore constriction size distribution, has to be determined beforehand.

This thesis deals with the comparative analysis of different calculation approaches for hydraulic heave. These approaches were first analysed, then applied to example calculations and finally compared with one another. Furthermore, the influence of various boundary conditions, above all the excavation width, on the safety against hydraulic heave was investigated. Recommendations are given on the applicability of different approximation approaches in the presence of certain influencing factors.

The numerical simulation of microstructure models in 3D requires, due to the enormous number of d.o.f., significant memory resources as well as parallel computational power. Compared to homogeneous materials, the material heterogeneity on the microscale induced by different material phases demands adequate computational methods for the discretization and the solution process of the resulting highly nonlinear problem. To enable an efficient and scalable solution process of the linearized equation systems, the heterogeneous FE problem is described by a FETI-DP (Finite Element Tearing and Interconnecting - Dual Primal) discretization. The fundamental FETI-DP equation can be solved by a number of different approaches. In our approach the FETI-DP problem is reformulated as a saddle point system by eliminating the primal and Lagrangian variables. For the reduced saddle point system, defined only by the interior and dual variables, special Uzawa algorithms can be adapted for iteratively solving the FETI-DP saddle-point equation system (FETI-DP SPE). A conjugate gradient version of the Uzawa algorithm is shown, as well as some numerical tests regarding the FETI-DP discretization of small examples using the presented solution technique. Furthermore, the inversion of the interior-dual Schur complement operator can be approximated using different techniques building an adequate preconditioning matrix, therewith leading to substantial gains in computing time efficiency.
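
A minimal dense sketch of a Schur-complement (Uzawa-type) conjugate gradient solver for a generic saddle point system is given below; the FETI-DP operators, the elimination of the primal variables and the preconditioning of the paper are not reproduced, and the test matrices are random illustrative data.

```python
import numpy as np

def uzawa_cg(A, B, f, g, tol=1e-10, maxit=200):
    """Solve [[A, B.T], [B, 0]] [u; lam] = [f; g] by CG on the Schur complement
    S = B A^{-1} B.T; every application of S needs one inner solve with A."""
    solveA = lambda r: np.linalg.solve(A, r)   # stands in for an inner solver
    apply_S = lambda lam: B @ solveA(B.T @ lam)
    rhs = B @ solveA(f) - g                    # reduced right-hand side
    lam = np.zeros(B.shape[0])
    r = rhs - apply_S(lam)
    p = r.copy()
    rr = r @ r
    for _ in range(maxit):
        Sp = apply_S(p)
        alpha = rr / (p @ Sp)
        lam += alpha * p
        r -= alpha * Sp
        rr_new = r @ r
        if np.sqrt(rr_new) < tol:
            break
        p = r + (rr_new / rr) * p
        rr = rr_new
    u = solveA(f - B.T @ lam)
    return u, lam

# tiny SPD test problem (illustrative, not a FETI-DP system)
rng = np.random.default_rng(0)
M = rng.standard_normal((6, 6)); A = M @ M.T + 6 * np.eye(6)
B = rng.standard_normal((2, 6)); f = rng.standard_normal(6); g = rng.standard_normal(2)
u, lam = uzawa_cg(A, B, f, g)
print(np.linalg.norm(A @ u + B.T @ lam - f), np.linalg.norm(B @ u - g))  # residuals ~ 0
```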

Quality is one of the most important properties of a product. Providing optimal quality can reduce the costs for rework, scrap, recalls or even legal actions, while satisfying the customers' demand for reliability. The aim is to achieve "built-in" quality within the product development process (PDP). The common approach for this is robust design optimization (RDO). It uses stochastic values as constraints and/or objectives to obtain a robust and reliable optimal design. In classical approaches the effort required for the stochastic analysis multiplies with the complexity of the optimization algorithm. The suggested approach shows that it is possible to reduce this effort enormously by using previously obtained data. For this purpose, the support point set of an underlying metamodel is filled iteratively during the ongoing optimization in regions of interest, if necessary. In a simple example, it is shown that this is possible without significant loss of accuracy.

The main aim of the research project in progress is to develop virtual models as tools to support decision-making in the planning of construction maintenance. The virtual models are able to transmit, visually and interactively, information related to the physical behaviour of materials and components of given infrastructures, defined as a function of the time variable. The interactive application allows decisions to be made on conception options in the definition of plans for maintenance, conservation or rehabilitation. The first virtual prototype that is now in progress concerns just lamps. It allows the examination of the physical model, visualizing, for each element modelled in 3D and linked to a database, the corresponding technical information concerned with the wear and tear of the material, calculated for that period of time. In addition, solutions for repair work or substitution and the inherent costs are predicted, the results being obtained interactively and visualized in the virtual environment itself. The aim is that the virtual model should be able to be applied directly over the 3D models of new constructions, in situations of rehabilitation. The practical usage of these models is directed, then, towards supporting decision-making in the conception phase and in the planning of maintenance. In further work, other components will be analysed and incorporated into the virtual system.

Virtual reality systems offer substantial potential in supporting decision processes based purely on computer-based representations and simulations. The automotive industry is a prime application domain for such technology, since almost all product parts are available as three-dimensional models. The consideration of ergonomic aspects during assembly tasks, the evaluation of human-machine interfaces in the car interior, design decision meetings as well as customer presentations serve as but a few examples wherein the benefit of virtual reality technology is obvious. All these tasks require the involvement of a group of people with different expertise. However, current stereoscopic display systems only provide correct 3D images for a single user, while other users see a more or less distorted virtual model. This is a major reason why these systems still face limited acceptance in the automotive industry. They need to be operated by experts, who have an advanced understanding of the particular interaction techniques and are aware of the limitations and shortcomings of virtual reality technology. The central idea of this thesis is to investigate the utility of stereoscopic multi-user systems for various stages of the car development process. Such systems provide multiple users with individual and perspectively correct stereoscopic images, which are key features and serve as the premise for the appropriate support of collaborative group processes. The focus of the research is on questions related to various aspects of collaboration in multi-viewer systems such as verbal communication, deictic reference, embodiments and collaborative interaction techniques. The results of this endeavor provide scientific evidence that multi-viewer systems improve the usability of VR applications for various automotive scenarios wherein co-located group discussions are necessary. The thesis identifies and discusses the requirements for these scenarios as well as the limitations of applying multi-viewer technology in this context. A particularly important gesture in real-world group discussions is referencing an object by pointing with the hand, and the accuracy which can be expected in VR is made evident. A novel two-user seating buck is introduced for the evaluation of ergonomics in a car interior, and the requirements on avatar representations for users sitting in a car are identified. Collaborative assembly tasks require high precision. The novel concept of a two-user prop significantly increases the quality of such a simulation in a virtual environment and allows ergonomists to study the strain on workers during an assembly sequence. These findings contribute toward an increased acceptance of VR technology for collaborative development meetings in the automotive industry and other domains.

The changed global security situation in the last eight years has shown the importance of emergency management plans in public buildings. Therefore, the use of computer simulators for surveying fire safety design and the evacuation process is increasing. The aim of these simulators is to provide more realistic evacuation simulations. The challenge is, firstly, to realize the virtual simulation environment based on geometrical and material boundary conditions, secondly, to consider the mutual interaction effects between different parameters and, finally, to obtain a realistic visualization of the simulated results. In order to carry out this task, a special new software method on a BIM platform has to be developed which can integrate all required simulations and will provide an immersive output, the BIM-ISEE (Immersive Safety Engineering Environment). The new BIM-ISEE will integrate the Fire Dynamics Simulator (FDS) for fire and evacuation simulation into Autodesk Revit, which is a BIM platform, and will represent the simulation results in the immersive virtual environment at the institute (CES-Lab). With the BIM-ISEE the fire safety engineer will be able to obtain more realistic visualizations in the immersive environment, to modify his concept more effectively, to evaluate the simulation results more accurately and to visualize the various simulation results. It can also give the rescue staff the opportunity to perform and evaluate emergency evacuation training.

We present a method of calculating the displacement in bent reinforced concrete bar elements in which a rearrangement of internal forces and a plastic hinge occurred. The described solution is based on Prof. Borcz's mathematical model. It directly takes into consideration the effects connected with the occurrence of the plastic hinge, such as for example a crack, by means of a differential equation of the axis of the bent reinforced concrete beam. EN Eurocode 2 makes it possible to consider the influence of the plastic hinge on the behaviour of reinforced concrete structures. This influence can also be assumed using other analytical methods. However, the results obtained by the application of Eurocode 2 are higher than those obtained in testing. A comparably large error level occurs when calculations are made by means of Borcz's method, but in the latter case the results depend on the assumptions made beforehand. This method makes it possible to apply the experimental results using the parameters r1 and r0. When the experimental results are taken into account, one can observe agreement between the calculations and the actual deflections of the structure.

Several results concerning the distribution of the headways of buses in the flow behind a traffic signal are presented. The main focus of interest is the description of analytical models, which are verified by the results of Monte Carlo methods. The advantage of analytical models (verified, but not derived, by simulation methods) is their flexibility with respect to possible generalizations. For instance, several random distributions of the flow arriving at the traffic signal can be compared. The attention will be directed at the question of how the primary headway H (analyzed in front of the traffic signal) is mapped to the headway H' analyzed behind the traffic signal, and how the random distribution of H is mapped to that of H'. For the traffic flow in front of the traffic signal several models will be discussed. The first case considers the situation that buses operate on a common lane with the individual motor car traffic and the traffic flow is saturated. In the second situation, buses operate on a separate bus lane. Moreover, a mixed situation is discussed in order to model reality as closely as possible.
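
A hedged Monte Carlo sketch of the headway mapping H -> H' for the shared-lane case is given below; the fixed-time signal settings, the saturation headway, the Poisson arrival model and the bus share are illustrative assumptions rather than the models analysed in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
C, g, hs = 60.0, 30.0, 2.0            # cycle, effective green, saturation headway [s]
arrivals = np.cumsum(rng.exponential(6.0, 200_000))    # Poisson arrivals, mean 6 s
is_bus = rng.random(arrivals.size) < 0.1               # every 10th vehicle is a bus

dep, last = np.empty_like(arrivals), -np.inf
for i, a in enumerate(arrivals):
    t = max(a, last + hs)              # earliest slot behind the previous vehicle
    if t % C >= g:                     # falls into the red phase ...
        t = (np.floor(t / C) + 1) * C  # ... wait for the next green start
    dep[i] = last = t

H  = np.diff(arrivals[is_bus])         # primary bus headways (upstream)
Hp = np.diff(dep[is_bus])              # bus headways behind the signal
print(H.mean(), Hp.mean(), Hp.std())   # means agree; the distribution changes shape
```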

By the use of numerical methods and the rapid development of computer technology in recent years, a large variety, complexity, refinement and capability of partial models have been achieved. This can be noticed in the evaluation of the reliability of structures, e.g. the increased use of spatial structural systems. For the different fields of civil engineering, well developed partial models already exist. Because these partial models are most often used separately, the general view is not entirely illustrated. Until now, there has been no common methodology for evaluating the efficiency of models; the trust in the prediction of a specific engineering model has generally relied on the engineer's experience. In this paper the basics of the evaluation of simple models and coupled partial models of frame structures will be discussed using suitable numerical methods. Furthermore, quality classes (levels) of design tasks will be defined based on their practical relevance. In addition, analysis methods will be systemized. After an analysis of different published assessment methods, it may be noted that the Efficiency Indicator Method (EWM) is most suitable for the considered evaluation problem. Therefore, the EWM was modified to the Model Efficiency Analysis (MEA) for the purpose of a holistic evaluation. The criteria are characterized by two groups, benefit and expenditure, and it is possible, by calculating the quotient (benefit/expenditure), to make a statement about the efficiency of the observed models. Presently, the expenditure value is not a subject of investigation, and so the model efficiency is calculated only by the benefit value. This paper also contains the associated criteria catalog, different normalization methods, as well as weighting possibilities.

In the paper presented, reinforced concrete shells of revolution are analyzed in both the meridional and the circumferential direction. Taking into account the physical non-linearity of the material, the internal forces and the deflections of the shell as well as the strain distribution at the cross-sections are calculated. The behavior of concrete under compression is described by linear and non-linear stress-strain relations. The description of the behavior of concrete under tension must account for tension stiffening effects. A tri-linear function is used to formulate the material law of the reinforcement. The problem cannot be solved analytically due to the physical non-linearity. Thus a numerical solution is formulated by means of the LAGRANGE Principle of the minimum of the total potential energy. The kinematically admissible field of deformation is defined by the displacements u in the meridional and w in the radial direction. These displacements must satisfy the equations of compatibility and the kinematical boundary conditions of the shell. The strains are linearly distributed across the wall thickness. The strain energy depends on the specifics of the material behavior. Using integral formulations of the material law [1], the strain energy of each part of the cross-section is defined as a function of the strains at the boundaries of the cross-sections. The shell is discretised in the meridional direction. Various methods of numerical differentiation and numerical integration are applied in order to determine the deformations and the strain energy. The unknown displacements u and w are calculated by a non-restricted extremum problem based on the minimum of the total potential energy. From a mathematical point of view, the objective function is a convex function, thus the minimum can be determined without difficulty. The advantage of this formulation is that, unlike non-linear methods with path-following algorithms, the calculation does not have to account for changing stiffness and load increments. All iterations necessary to find the solution are integrated into the "Solver". The model presented provides many ways of investigating the influence of various material parameters on the stresses and deformations of the entire shell structure.

An energy method based on the LAGRANGE Principle of the minimum of total potential energy is presented to calculate the stresses and strains of composite cross-sections. The stress-strain relation of each partition of the cross-section can be an arbitrary piecewise continuous function. The strain energy is transformed into a line integral by GAUSS's integral theorem. The total strain of each partition of the cross-section is split into load-dependent strain and pre-strain. Pre-strains have to be taken into account when the cross-section is pre-stressed, retrofitted or influenced by shrinkage, temperature etc. The unconstrained minimum problem can be solved for each load combination using standard software. The application of the method presented in the paper is demonstrated by means of examples.
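
A minimal sketch of such an unconstrained minimisation with standard software is given below for a two-material rectangular cross-section with linear elastic laws and without pre-strains; geometry, moduli and section forces are illustrative assumptions, whereas the paper admits arbitrary piecewise continuous material laws.

```python
import numpy as np
from scipy.optimize import minimize

b, h = 0.3, 0.5                          # width, height [m]
z = np.linspace(-h / 2, h / 2, 401)      # fibre coordinate
dz = z[1] - z[0]
E = np.where(z > 0.0, 210e9, 30e9)       # e.g. steel above, concrete below [Pa]
N, M = 1.0e6, 0.2e6                      # section forces: normal force [N], moment [Nm]

def total_potential(p):
    eps = p[0] + p[1] * z                                   # plane-sections strain field
    return np.sum(0.5 * E * eps**2 * b) * dz - N * p[0] - M * p[1]

def gradient(p):
    eps = p[0] + p[1] * z
    return np.array([np.sum(E * eps * b) * dz - N,          # stationarity: N-equilibrium
                     np.sum(E * eps * z * b) * dz - M])      # stationarity: M-equilibrium

res = minimize(total_potential, x0=[0.0, 0.0], jac=gradient, method="BFGS")
print(res.x)    # equilibrium strain plane (eps0, kappa) of the composite section
```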

As numerical techniques for solving PDEs or integral equations become more sophisticated, the treatment of the generation of the geometric inputs should follow that numerical advancement as well. This document describes the preparation of CAD data so that they can later be applied to hierarchical BEM or FEM solvers. For the BEM case, the geometric data are described by surfaces which we want to decompose into several curved four-sided patches. We show the treatment of untrimmed and trimmed surfaces. In particular, we provide for the prevention of smooth corners, which are bad for diffeomorphism. Additionally, we consider the problem of characterizing whether a Coons map is a diffeomorphism from the unit square onto a planar domain delineated by four given curves. We aim primarily at having not only theoretically correct conditions but also practically efficient methods. As for the FEM geometric preparation, we need to decompose a 3D solid into a set of curved tetrahedra. First, we describe a method of decomposition without adding too many Steiner points (additional points not belonging to the initial boundary nodes of the boundary surface). Then, we provide a methodology for efficiently checking whether a tetrahedral transfinite interpolation is regular. That is done by a combination of a degree reduction technique and subdivision. Along with the method description, we also report on some interesting practical results from real CAD data.
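
The bilinearly blended Coons map referred to above can be written, in standard notation (not necessarily the author's), as

```latex
\mathbf{S}(u,v)= (1-u)\,\mathbf{c}_0(v)+u\,\mathbf{c}_1(v)+(1-v)\,\mathbf{d}_0(u)+v\,\mathbf{d}_1(u)
 -\bigl[(1-u)(1-v)\,\mathbf{P}_{00}+u(1-v)\,\mathbf{P}_{10}+(1-u)v\,\mathbf{P}_{01}+uv\,\mathbf{P}_{11}\bigr],
```

where \mathbf{c}_0,\mathbf{c}_1 and \mathbf{d}_0,\mathbf{d}_1 are the four given boundary curves and \mathbf{P}_{ij} the shared corner points; the diffeomorphism question is whether the Jacobian determinant of \mathbf{S} keeps one sign on the whole unit square.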

In this bachelor thesis, the development potential offered by deregulating the transport sector is worked out using the example of the German ban on long-distance scheduled bus services. The existing regulatory regime, the current market situation and the deregulation options resulting from them are considered. The thesis first gives an overview of the fundamentals of liberalization, regulation and deregulation. Subsequently, the current regulatory regime and the present market situation are explained on the basis of three existing bus companies. The internal and external costs of the individual transport modes are then juxtaposed and compared with one another, in order to filter out the most favourable means of transport and thereupon to be able to make statements on the cost-benefit assessment. In developing the cost-benefit assessment, the deregulation process in three selected countries is described. In addition, demographic change and the current traffic development are explained. In order to be able to depict the considerations of travellers regarding the choice of transport mode, relevant aspects of mode choice are evaluated with the help of socio-demographic criteria. Finally, deregulation options for the long-distance bus market can be developed, in order to conclude with an outlook on the future development of long-distance bus transport.

The federal-state programme "Soziale Stadt" has the task of supporting urban districts with special development needs. The negative image is on the one hand a cause and on the other hand a consequence of social and urban-planning problems and developments in the district. This downward spiral is to be broken by the programme. The author approaches the concept of image in an interdisciplinary way and shows the effects of the "Soziale Stadt" programme on the large housing estate Jena-Winzerla. Using the semantic differential, the study records the image of the district as judged by its residents and compares it with the view from outside. The influence of the programme on the image is examined through expert interviews. The example shows the developments that the "Soziale Stadt" programme can bring about, but limits also become apparent. Against this background, concluding considerations are made as to the directions in which developments within the funding programme should be steered in order to improve the image sustainably and to support the affected districts adequately.

Nonlinear analyses are characterised by approximations of the fundamental equations of different quality. Starting with a general description of the nonlinear finite element formulation, the fundamental equations are derived for plane truss elements. Special emphasis is placed on the determination of the internal and external system energy as well as on the influence of approximations of different quality for the displacement-strain relationship on the solution quality. To simplify the solution procedure, the nonlinear function describing the kinematics is expanded into a Taylor series and truncated after the n-th series term. The different kinematics influence the speed of convergence as well as the exactness of the solution. This influence is shown on a simple truss structure. To assess the quality of different formulations concerning the nonlinear kinematic equation, three approaches are discussed. First, the overall internal and external energy is compared for different kinematical models. In a second step the energy content related to the single terms describing the displacement-strain relationship is investigated and used for quality control following two different paths. Based on single ε-terms, an adaptive scheme is used to change the kinematical model depending on the increasing nonlinearity of the structure. The solution quality has turned out to be satisfactory compared to the exact result. More detailed investigations are necessary to find criteria for the threshold values for the iterative process as well as for the decision on the number and step size of incremental load steps.
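
For the plane truss element, one common choice of the axial strain measure and its Taylor truncations reads as follows (generic notation, not necessarily that of the paper), with u' and w' the axial and transverse displacement derivatives along the undeformed axis:

```latex
\varepsilon=\sqrt{(1+u')^{2}+(w')^{2}}-1
 \;=\; \underbrace{u'}_{\text{1st order}}
 \;+\;\underbrace{\tfrac{1}{2}(w')^{2}}_{\text{2nd order}}
 \;-\;\underbrace{\tfrac{1}{2}\,u'(w')^{2}}_{\text{3rd order}}\;+\;\dots
```

Truncating after the first term gives the linear kinematics, truncating after the second term gives the usual second-order (von Kármán-type) kinematics, and so on; each truncation changes both the convergence behaviour and the exactness of the solution.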

A four-node quadrilateral shell element with smoothed membrane-bending based on Mindlin-Reissner theory is proposed. The element is a combination of a plate bending and membrane element. It is based on mixed interpolation where the bending and membrane stiffness matrices are calculated on the boundaries of the smoothing cells while the shear terms are approximated by independent interpolation functions in natural coordinates. The proposed element is robust, computationally inexpensive and free of locking. Since the integration is done on the element boundaries for the bending and membrane terms, the element is more accurate than the MITC4 element for distorted meshes. This will be demonstrated for several numerical examples.

Isogeometric finite element analysis has become a powerful alternative to standard finite elements due to its flexibility in handling complex geometries. One major drawback of NURBS based isogeometric finite elements is their limited capability for local refinement. In this study, we present an alternative to NURBS based isogeometric finite elements that allows for local refinement. The idea is based on polynomial splines and exploits the flexibility of T-meshes for local refinement. The shape functions satisfy important properties such as non-negativity, local support and partition of unity. We will demonstrate the efficiency of the proposed method by two numerical examples.

Sand-bentonite mixtures are well recognized as buffer and sealing material in nuclear waste repository constructions. The behaviour of compacted sand-bentonite mixtures needs to be well understood in order to guarantee the safety and the efficiency of the barrier construction. This paper presents numerical simulations of a swelling test and a coupled thermo-hydro-mechanical (THM) test on compacted sand-bentonite mixture in order to reveal the influence of the temperature and hydraulic gradients on the distribution of temperature, mechanical stress and water content in such materials. A sensitivity analysis is carried out to identify the parameters which influence the response of the numerical model the most. Results of a back analysis of the model parameters are reported and critically assessed.

Housing development in Hanoi can today - more than 20 years after the beginning of the renovation policy and the market economy, which gave urban development a great opportunity for improvement - be reviewed and assessed. The last 20 years are a short period in the thousand-year history of the city; nevertheless, in this period the city developed fastest and also most problematically from an environmental point of view. Without a suitable development strategy or appropriate measures in urban planning, the conflict between economy and ecology keeps growing. ... Finding a new housing concept that balances economy and ecology has become a highly topical question.

In spite of the extensive research in dynamic soil-structure interaction (SSI), there still exist misconceptions concerning the role of SSI in the seismic performance of structures, especially the ones founded on soft soil. This is due to the fact that current analytical SSI models that are used to evaluate the influence of soil on the overall structural behavior are approximate models and may involve creeds and practices that are not always precise. This is especially true in the codified approaches, which include substantial approximations to provide simple frameworks for the design. As the direct numerical analysis requires a high computational effort, performing an analysis considering SSI is computationally uneconomical for regular design applications. This paper outlines the setup of some milestones for evaluating SSI models. This will be achieved by investigating the different assumptions and involved factors, as well as varying the configurations of R/C moment-resisting frame structures supported by single footings which are subjected to seismic excitations. It is noted that the scope of this paper is to highlight, rather than fully resolve, the above subject. A rough draft of the proposed approach is presented in this paper, whereas a thorough illustration will be carried out throughout the presentation in the course of the conference.

FREE VIBRATION FREQUENCIES OF THE CRACKED REINFORCED CONCRETE BEAMS - METHODS OF CALCULATIONS
(2010)

The paper presents a method of calculating the natural frequencies of cracked reinforced concrete beams including a discrete model of the crack. The described method is based on the stiff finite element method. It was modified in such a way as to take into account local discontinuities (i.e. cracks). In addition, some theoretical studies as well as experimental tests of concrete mechanics based on the discrete crack model were taken into consideration. The calculations were performed using the author's own numerical algorithm. Moreover, other methods for the dynamic calculation of reinforced concrete beams presented in standards and guidelines are discussed. Calculations performed using the different methods are compared with the results obtained in experimental tests.

ESTIMATING UNCERTAINTIES FROM INACCURATE MEASUREMENT DATA USING MAXIMUM ENTROPY DISTRIBUTIONS
(2010)

Modern engineering design often considers uncertainties in geometrical and material parameters and in the loading conditions. Based on initial assumptions on the stochastic properties, such as mean values, standard deviations and the distribution functions of these uncertain parameters, a probabilistic analysis is carried out. In many application fields probabilities of the exceedance of failure criteria are computed. The resulting failure probability is strongly dependent on the initial assumptions on the random variable properties. Measurements are always more or less inaccurate data due to varying environmental conditions during the measurement procedure. Furthermore, the estimation of stochastic properties from a limited number of realisations also causes uncertainties in these quantities. Thus the assumption of exactly known stochastic properties, neglecting these uncertainties, may not lead to very useful probabilistic measures in a design process. In this paper we treat the stochastic properties of a random variable as uncertain quantities caused by so-called epistemic uncertainties. Instead of predefined distribution types we use the maximum entropy distribution, which enables the description of a wide range of distribution functions based on the first four stochastic moments. These moments are taken again as random variables to model the epistemic scatter in the stochastic assumptions. The main point of this paper is the discussion of the estimation of these uncertain stochastic properties based on inaccurate measurements. We investigate the bootstrap algorithm for its applicability to quantify the uncertainties in the stochastic properties considering imprecise measurement data. Based on the obtained estimates we apply a standard stochastic analysis to a simple example to demonstrate the difference and the necessity of the proposed approach.
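
A minimal sketch of the bootstrap step, quantifying the scatter of the first four moments estimated from a small and noisy sample, is given below; sample size, noise level and the lognormal "true" distribution are illustrative assumptions.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
data = rng.lognormal(mean=0.0, sigma=0.3, size=30)      # "measured" values
data += rng.normal(0.0, 0.02, size=data.size)           # measurement inaccuracy

def four_moments(x):
    # mean, standard deviation, skewness, excess kurtosis
    return np.array([np.mean(x), np.std(x, ddof=1), stats.skew(x), stats.kurtosis(x)])

B = 2000
boot = np.array([four_moments(rng.choice(data, data.size, replace=True))
                 for _ in range(B)])
print("moment estimates :", four_moments(data))
print("bootstrap std dev:", boot.std(axis=0, ddof=1))    # scatter of the moment estimates
# these four uncertain moments would then parameterise the maximum entropy distribution
```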

A stress-based remodeling approach is used to investigate the influence of the collagen architecture in human eye tissues on the biomechanical response of the lamina cribrosa, with a particular focus on the stress environment of the nerve fibers. This approach is based on a multi-level biomechanical framework, where the biomechanical properties of eye tissues are derived from a single crimped fibril at the micro-scale via the collagen network of distributed fibrils at the meso-scale to the incompressible and anisotropic soft tissue at the macro-scale. Biomechanically induced remodeling of the collagen network is captured on the meso-scale by allowing for a continuous reorientation of collagen fibrils. To investigate the multi-scale phenomena related to glaucomatous neuropathy, a generalized computational homogenization scheme is applied in a coupled two-scale analysis of the human eye considering numerical macro- and meso-scale models of the lamina cribrosa.

The more thorough the preliminary considerations and the investigation of the foundation, the more durable and economical the result of the rehabilitation will be. Premature conclusions about the cause of damage and its immediate removal must be avoided at all costs. For this reason, the thesis begins with a broad overview of the relevant preliminary work: the interpretation of the damage pattern, the construction history of the building, the development of the damage over time, the analysis of the subsoil and the investigation of the groundwater. Together these comprise all processes that should take place before a foundation exploration. Based on the information they provide, an engineering investigation plan is then drawn up, which also specifies an appropriate foundation exploration (type and extent). The specialist engineer should be familiar beforehand with historical construction methods and their execution, with the technologies involved and with their use under varying regional conditions. Therefore, all historically relevant forms of timber foundations are presented concisely in their chronological context, together with explanations of their structural effectiveness. An accurate, exploration-based analysis of such a historical foundation is only possible if the relevant properties of the subsoil and of the timber have been studied. For timber, the species, the age and the cross-section play a particularly important role in the exploration process. When the timber foundation is exposed and visually inspected, they allow first rough conclusions about the remaining load-bearing capacity. The soil, in turn, can provide information about its water regime and its sensitivity to settlement when the groundwater level is lowered. The description of the inspection procedure and its variants is intended to help record the economic, constructive and static conditions carefully and correctly. Furthermore, it should allow a precise assessment of the type and load-bearing principle as well as the condition of the historical timber foundation. At the end of the thesis, all results of the exploration methods are summarized and evaluated. They then serve for the analysis and assessment of damage to timber foundations and are furthermore intended to lead to adequate and, as far as possible, economical rehabilitation solutions. The thesis itself represents only a small contribution to a guideline whose intended result is the assessment of timber foundations. The part of the guideline presented here deals with all steps that should be carried out before a material damage analysis and focuses on the exploration.

Energy-based design of structural systems for high-rise buildings as a function of building scale
(2010)

Prompted by current developments in high-rise construction, with planned building heights of more than 600 m, this thesis addresses concepts for the design of lateral stiffening systems. A selected structural system of reinforced concrete shear walls with a height of 800 m is designed with the 3D analysis software ETABS (version 9.0.9). This structure is subjected to extreme wind and earthquake actions. Since a building of such height lies outside the range of applicability of international codes, a separate approach is chosen for the wind load case to analyze the vibration behavior. Building on the results of the analysis, options for reducing or damping critical building vibrations are discussed. The specific damping measure of a passive Tuned Mass Damper is modeled in ETABS using optimization criteria and included in the calculations. This structure is compared with two smaller structures (H = 200 m and 400 m, respectively) and analyzed by means of the MIPS concept (material input per service unit). The aim is to make qualitative statements about the sustainability and ecological efficiency of very tall buildings.

In recent years special hypercomplex Appell polynomials have been introduced by several authors and their main properties have been studied by different methods and with different objectives. Like in the classical theory of Appell polynomials, their generating function is a hypercomplex exponential function. The observation that this generalized exponential function has, for example, a close relationship with Bessel functions confirmed the practical significance of such an approach to special classes of hypercomplex differentiable functions. Its usefulness for combinatorial studies has also been investigated. Moreover, an extension of those ideas led to the construction of complete sets of hypercomplex Appell polynomial sequences. Here we show how this opens the way for a more systematic study of the relation between some classes of Special Functions and Elementary Functions in Hypercomplex Function Theory.

The numerical simulation of damage using phenomenological models on the macroscale has been state of the art for many decades. However, such models are not able to capture the complex nature of damage, which simultaneously proceeds on multiple length scales. Furthermore, these phenomenological models usually contain damage parameters which are not physically interpretable. Consequently, a reasonable experimental determination of these parameters is often impossible. In the last twenty years, the ongoing advance in computational capacities provided new opportunities for more and more detailed studies of the microstructural damage behavior. Today, multiphase models with several million degrees of freedom enable the numerical simulation of micro-damage phenomena in naturally heterogeneous materials. Therewith, the application of multiscale concepts for the numerical investigation of the complex nature of damage can be realized. The presented thesis contributes to a hierarchical multiscale strategy for the simulation of brittle intergranular damage in polycrystalline materials, for example aluminum. The numerical investigation of physical damage phenomena on an atomistic microscale and the integration of this physically based information into damage models on the continuum meso- and macroscale is intended. Therefore, numerical methods for the damage analysis on the micro- and mesoscale including the scale transfer are presented and the transition to the macroscale is discussed. The investigation of brittle intergranular damage on the microscale is realized by the application of the nonlocal Quasicontinuum method, which fully describes the material behavior by atomistic potential functions but reduces the number of atomic degrees of freedom by introducing kinematic couplings. Since this promising method is applied only by a limited group of researchers for special problems, necessary improvements have been realized in our own parallelized implementation of the 3D nonlocal Quasicontinuum method. The aim of this implementation was to develop and combine robust and efficient algorithms for general use of the Quasicontinuum method, and therewith to allow for the atomistic damage analysis in arbitrary grain boundary configurations. The implementation is applied in analyses of brittle intergranular damage in ideal and nonideal grain boundary models of FCC aluminum, considering arbitrary misorientations. From the microscale simulations traction separation laws are derived, which describe grain boundary decohesion on the mesoscale. Traction separation laws are part of cohesive zone models to simulate the brittle interface decohesion in heterogeneous polycrystal structures. 2D and 3D mesoscale models are presented, which are able to reproduce crack initiation and propagation along cohesive interfaces in polycrystals. An improved Voronoi algorithm is developed in 2D to generate polycrystal material structures based on arbitrary distribution functions of grain size. The new model is more flexible in representing realistic grain size distributions. Further improvements of the 2D model are realized by the implementation and application of an orthotropic material model with Hill plasticity criterion for the grains. The 2D and 3D polycrystal models are applied to analyze crack initiation and propagation in statically loaded samples of aluminum on the mesoscale without the necessity of an initial damage definition.

This thesis presents a force-transmitting connection technique for modular, shell-like fiber composite components. The connection is based on adhesive bonding with locally limited steel plates. Within this connection concept, the adhesive bond between steel and fiber-reinforced plastic is examined in depth. The objectives are the selection of technological boundary conditions, the development of a proposal for numerical analysis and design, and the formulation of constructive recommendations for the design of adhesive joints. Mechanical characteristics are determined in tensile tests and transferred directly to the nonlinear calculations. Technological influences and the scatter of real adhesive joints are integrated into the design via the recalculation of tensile shear tests. It is shown that the adhesive joints provide sufficient strength and satisfactory fracture behavior. The combination of shop bonding and site-compatible assembly enables material-appropriate and efficient connections for fiber composite structures under the boundary conditions of the construction industry.

There are many different approaches to simulate the mechanical behavior of RC frames with masonry infills. In this paper, selected modeling techniques for masonry infills and reinforced concrete frame members are discussed, with attention focused on the damaging effects in the individual members and the entire system under quasi-static horizontal loading. The effect of the infill walls on the surrounding frame members is studied using equivalent strut elements. The implemented model considers in-plane failure modes for the infills, such as bed joint sliding and corner crushing. The frame member models differ with respect to their stress state. Finally, examples are provided and compared with experimental data from a real-size test executed on a three-story RC frame with and without infills. The quality of the model is evaluated on the basis of load-displacement relationships as well as damage progression.

MULTI-SITE CONSTRUCTION PROJECT SCHEDULING CONSIDERING RESOURCE MOVING TIME IN DEVELOPING COUNTRIES
(2010)

Under the booming construction demand in developing countries, and particularly in Vietnam, construction contractors often perform multiple concurrent projects in different places. Existing construction scheduling methods often assume the resource moving time between activities/projects to be negligible. When multiple projects are deployed in different places far from each other, this assumption has many shortcomings for properly modelling the real-world constraints. This is especially true in developing countries such as Vietnam, whose transportation systems are still backward and of low technical standard. This paper proposes a new algorithm named Multi-Site Construction Project Scheduling (MCOPS). The objective of this algorithm is to minimise the multi-site construction project duration under limited availability of renewable resources (labour, machines and equipment), combined with the moving time of the required resources among activities/projects. Additionally, in order to mitigate the impact of resource moving time on the multi-site project duration, this paper proposes a new priority rule: Minimum Resource Moving Time (MinRMT). The MinRMT is applied to rank the finished activities in a priority order for supplying the released resources to the activities being scheduled. In order to investigate the impact of the resource moving time among activities during the scheduling process, computational experiments were carried out. The results of the MCOPS-based computational experiments showed that the resource moving time among projects significantly affects the multi-site project durations and cannot be ignored in the multi-site project scheduling process. Besides, the efficiency of the MinRMT rule is also demonstrated by the results of the computational experiments in this paper. Although the work in this paper is based on Vietnamese construction conditions, the proposed method can be usefully applied in other developing countries with similar construction conditions.
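A minimal Python sketch of the idea behind such a priority rule is given below: when a resource unit is released at one site and requested at another, the unit with the smallest moving time is assigned. The data structures, site names and moving times are illustrative assumptions and are not taken from the MCOPS formulation.

```python
# Minimal sketch: pick the released resource unit whose moving time to the
# requesting activity's site is smallest (a MinRMT-like selection rule).
# All sites, crews and times are illustrative assumptions.

moving_time = {            # hours to move one crew between sites
    ("A", "A"): 0, ("A", "B"): 4, ("A", "C"): 8,
    ("B", "A"): 4, ("B", "B"): 0, ("B", "C"): 5,
    ("C", "A"): 8, ("C", "B"): 5, ("C", "C"): 0,
}

# crews released by finished activities: (crew id, site where it was released)
released = [("crew1", "A"), ("crew2", "B"), ("crew3", "C")]

def assign_crew(released, target_site):
    """Return the released crew with minimum moving time to target_site."""
    crew, site = min(released, key=lambda r: moving_time[(r[1], target_site)])
    return crew, moving_time[(site, target_site)]

crew, delay = assign_crew(released, "C")
print(f"{crew} assigned, activity start delayed by {delay} h of moving time")
```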

In this note, we describe quite explicitly the Howe duality for Hodge systems and connect it with well-known facts of harmonic analysis and Clifford analysis. In Section 2, we recall briefly the Fischer decomposition and the Howe duality for harmonic analysis. In Section 3, the well-known fact that Clifford analysis is a real refinement of harmonic analysis is illustrated by the Fischer decomposition and the Howe duality for the space of spinor-valued polynomials in Euclidean space under the so-called L-action. On the other hand, for Clifford algebra valued polynomials, we can consider another action, called in Clifford analysis the H-action. In the last section, we recall the Fischer decomposition for the H-action obtained recently. While in Clifford analysis the prominent role is played by the Dirac equation, in this case the basic set of equations is formed by the Hodge system. Moreover, analysis of Hodge systems can be viewed even as a refinement of Clifford analysis. In this note, we describe the Howe duality for the H-action. In particular, in Proposition 1, we recognize the Howe dual partner of the orthogonal group O(m) in this case as the Lie superalgebra sl(2|1). Furthermore, Theorem 2 gives the corresponding multiplicity-free decomposition with an explicit description of the irreducible pieces.

In this paper we present an inverse method which is capable of identifying system components in a hydro-mechanically coupled system, i.e. for fluid flow in porous media. As an example we regard water dams that were constructed more than a hundred years ago but are still in use. Over time, ageing processes have changed the condition of these dams, and fissures might have grown within them. The proposed method is designed to locate these fissures from combined mechanical and hydraulic measurements. In a numerical example the fissures or damaged zones are described by a smeared crack model. The task is then to identify simultaneously the spatial distributions of Young's modulus and the hydraulic permeability, since in regions where damage is present the mechanical stiffness of the system is reduced and the permeability increased. The inversion is shown to be an ill-posed problem. As a consequence regularizing methods have to be applied, where the nonlinear Landweber method (a gradient-type method combined with a discrepancy principle) has proven to be an efficient choice.
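For orientation, the sketch below shows a Landweber iteration with a discrepancy-principle stopping rule for a generic forward operator. The toy forward operator here is linear for brevity (the dam problem is nonlinear), and all parameter values, names and the noise model are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# toy forward operator F(p) = A @ p (a stand-in for the coupled FE model)
A = rng.normal(size=(40, 10))
p_true = rng.normal(size=10)
noise_level = 0.05
d = A @ p_true + noise_level * rng.normal(size=40)   # noisy "measurements"

def F(p):            # forward simulation
    return A @ p

def jacobian(p):     # constant here; in general re-assembled per iteration
    return A

p = np.zeros(10)
omega = 1.0 / np.linalg.norm(A, 2) ** 2   # step size for convergence
tau = 1.5                                  # discrepancy-principle factor

for k in range(5000):
    r = F(p) - d
    # stop as soon as the residual reaches the (estimated) noise level
    if np.linalg.norm(r) <= tau * noise_level * np.sqrt(d.size):
        break
    p = p - omega * jacobian(p).T @ r      # Landweber (gradient) step

print(f"stopped after {k} iterations, residual {np.linalg.norm(F(p) - d):.3f}")
```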

This contribution presents a framework for a distributed dynamic product model (FREAC) that serves experimental software development. In developing FREAC, an attempt was made to realize the following properties, which are largely missing in conventional systems: first, a high degree of flexibility, i.e. the greatest possible adaptability to different disciplines; second, the possibility of linking different tools seamlessly; third, distributed model editing in real time; fourth, storage of the entire model editing process; and fifth, dynamic extensibility both for software developers and for the users of the tools. The term FREAC covers both the framework for developing and maintaining a product model (FREAC-Development) and the tools developed with it (FREAC-Tools).

Approaching the question of the connections between structuralism and generative algorithmic planning methods first requires clarifying what is meant by structuralism in architecture. Ultimately, however, there is no binding terminological framework within which such a clarification could take place. Structuralism in architecture is often reduced to a formal phenomenon and thus to a question of style. This text does not deal with styles and phenomena of structuralist architecture; instead, it concentrates on structuralist design methods and relates them to algorithmic procedures, working out the interplay between rule-based and intuitive approaches in the design process.

For many applications, nonuniformly distributed functional data are given, which leads to large-scale scattered data problems. We wish to represent the data in terms of a sparse representation with a minimal number of degrees of freedom. For this, an adaptive scheme is proposed which operates in a coarse-to-fine fashion using a multiscale basis. Specifically, we investigate hierarchical bases using B-splines and spline (pre)wavelets. At each stage a least-squares approximation of the data is computed. We take into account different requirements arising in large-scale scattered data fitting: we discuss the fast iterative solution of the least-squares systems, regularization of the data, and the treatment of outliers. A particular application concerns the approximate continuation of harmonic functions, an issue arising in geodesy.
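As a small illustration of regularized least-squares fitting on one level of such a hierarchy, the sketch below fits a piecewise linear B-spline (hat function) basis to scattered 1D data with a second-difference penalty on the coefficients. The basis, the penalty and the smoothing weight are illustrative choices and not the scheme of the paper.

```python
import numpy as np

rng = np.random.default_rng(3)

# scattered, nonuniform data (illustrative)
x = np.sort(rng.uniform(0.0, 1.0, 200))
y = np.sin(2 * np.pi * x) + 0.1 * rng.normal(size=x.size)

# uniform linear B-spline (hat function) basis on one refinement level
knots = np.linspace(0.0, 1.0, 17)
h = knots[1] - knots[0]

def hat(x, center):
    """Piecewise linear B-spline centered at 'center' with support 2h."""
    return np.clip(1.0 - np.abs(x - center) / h, 0.0, None)

B = np.column_stack([hat(x, c) for c in knots])   # least-squares design matrix
D = np.diff(np.eye(knots.size), n=2, axis=0)      # second-difference penalty
lam = 1e-3                                        # regularization weight

# regularized normal equations: (B^T B + lam D^T D) c = B^T y
c = np.linalg.solve(B.T @ B + lam * D.T @ D, B.T @ y)
print("max abs fit error:", np.max(np.abs(B @ c - y)))
```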

CONSTITUTIVE MODELS FOR SUBSOIL IN THE CONTEXT OF STRUCTURAL ANALYSIS IN CONSTRUCTION ENGINEERING
(2010)

Parameters of constitutive models are generally obtained by comparing the results of forward numerical simulations to measurement data. Mostly the parameter values are varied by trial-and-error in order to reach an improved fit and obtain plausible results. However, the description of complex soil behavior requires advanced constitutive models, and the rising complexity of these models mainly increases the number of unknown constitutive parameters. Thus an efficient identification "by hand" becomes quite difficult for most practical geotechnical problems. The main focus of this article is on finding a vector of parameters in a given search space which minimizes the discrepancy between measurements and the associated numerical results. Classically, the parameter values are estimated from laboratory tests on small samples (triaxial tests or oedometer tests). For this purpose an automatic population-based approach is presented to determine the material parameters for reconstituted and natural Bothkennar Clay. After the identification, a statistical assessment of the numerical results is carried out to evaluate different constitutive models. In addition, a geotechnical problem, stone columns under an embankment, is treated in a well-instrumented field trial in Klagenfurt, Austria. For the identification, measurements from multilevel piezometers, multilevel extensometers and a horizontal inclinometer are available. Based on the simulation of the stone columns in an FE model, the identification of the constitutive parameters is carried out analogously to the experimental tests by minimizing the absolute error between measured and numerical curves.
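The following minimal sketch shows the structure of such a population-based identification loop: candidate parameter vectors are sampled, a forward model is evaluated, and the misfit to the measurement curve drives selection. The exponential toy "forward model", the evolution-strategy settings and all names are illustrative assumptions, not the procedure used for the Bothkennar Clay or the Klagenfurt field trial.

```python
import numpy as np

rng = np.random.default_rng(7)

t = np.linspace(0.0, 1.0, 50)
measured = 2.0 * np.exp(-3.0 * t) + 0.01 * rng.normal(size=t.size)  # synthetic "measurement"

def forward(params):
    """Toy stand-in for an FE simulation: amplitude and decay rate."""
    a, b = params
    return a * np.exp(-b * t)

def misfit(params):
    return np.linalg.norm(forward(params) - measured)

# simple population-based search on box-constrained parameters
lower, upper = np.array([0.1, 0.1]), np.array([10.0, 10.0])
pop = rng.uniform(lower, upper, size=(20, 2))

for generation in range(50):
    scores = np.array([misfit(p) for p in pop])
    parents = pop[np.argsort(scores)[:5]]                 # keep the 5 best candidates
    children = parents[rng.integers(0, 5, size=20)] \
               + 0.1 * (upper - lower) * rng.normal(size=(20, 2))
    pop = np.clip(children, lower, upper)                 # respect the search space

best = pop[np.argmin([misfit(p) for p in pop])]
print("identified parameters:", best)
```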

Geotechnical constructions are sophisticated structures due to the nonlinear soil behaviour and the complex soil-structure interaction, which places great demands on the responsible engineer during the design process. The process can be schematised as a difficult and, depending on the opportunities and skills of the engineer, more or less innovative, creative and heuristic search for one or several defined objectives under given boundary conditions. Holistic approaches including numerical optimisation which support the constructing engineer in this task do not currently exist. Abstract problem formulation is not state of the art; commonly, parameter studies are bounded by the computational effort. Thereby, potential regarding cost effectiveness, construction time, load capacity and/or serviceability often remains insufficiently exploited. This paper describes systematic approaches for the comprehensive optimisation of selected geotechnical constructions such as combined pile raft foundations and quay wall structures. Several optimisation paradigms, such as mono- and multi-objective optimisation, are demonstrated and their use for a more efficient design with respect to various intentions is shown by example. The optimisation is implemented using Evolutionary Algorithms. The applicability to geotechnical real-world problems including nonlinearities, discontinuities and multi-modalities is shown. The routines are adapted to common problems and coupled with conventional analysis procedures as well as with numerical calculation software based on the finite element method. Numerical optimisation of geotechnical design using efficient algorithms is able to deliver highly effective solutions after investing more effort into the parameterization of the problem. The obtained results can be used for realizing different constructions near the stability limit, visualizing the sensitivity with respect to the construction parameters or simply procuring more effective solutions.

In order to make control decisions, Smart Buildings need to collect data from multiple sources and bring it to a central location, such as the Building Management System (BMS). This needs to be done in a timely and automated fashion. Besides data being gathered from different energy using elements, information of occupant behaviour is also important for a building’s requirement analysis. In this paper, the parameter of Occupant Density was considered to help find behaviour of occupants towards a building space. Through this parameter, support for building energy consumption and requirements based on occupant need and demands was provided. The demonstrator presented provides information on the number of people present in a particular building space at any time, giving the space density. Such collections of density data made over a certain period of time represents occupant behaviour towards the building space, giving its usage patterns. Similarly, inventory items were tracked and monitored for moving out or being brought into a particular read zone. For both, people and inventory items, this was achieved using small, low-cost, passive Ultra-High Frequency (UHF) Radio Frequency Identification (RFID) tags. Occupants were given the tags in a form factor of a credit card to be possessed at all times. A central database was built where occupant and inventory information for a particular building space was maintained for monitoring and providing a central data access.

Tests on Polymer Modified Cement Concrete (PCC) have shown significantly larger creep deformations. The reasons for this as well as additional material phenomena are explained in the following paper. Existing creep models developed for standard concrete are studied to determine the time-dependent deformations of PCC. These models are: model B3 by Bažant and Baweja, the models according to Model Code 90 and ACI 209, as well as model GL2000 by Gardner and Lockman. The calculated creep strains are compared to existing experimental data for PCC and the differences are pointed out. Furthermore, an optimization of the model parameters is performed to fit the models to the experimental data and achieve a better model prediction.
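As a small illustration of such a parameter fit, the sketch below adjusts the parameters of a simple power-law-type creep expression to measured creep coefficients by least squares. The functional form, the data points and the parameter names are illustrative assumptions, not one of the cited creep models.

```python
import numpy as np
from scipy.optimize import curve_fit

# illustrative creep test data: time under load [days] and creep coefficient [-]
t = np.array([1, 3, 7, 14, 28, 56, 90, 180, 365], dtype=float)
phi_measured = np.array([0.35, 0.60, 0.85, 1.05, 1.30, 1.55, 1.70, 1.95, 2.20])

def creep_model(t, phi_inf, tau, n):
    """Simple power-law type creep coefficient (illustrative form only)."""
    return phi_inf * (t / (t + tau)) ** n

# least-squares fit of the three model parameters to the measured data
params, _ = curve_fit(creep_model, t, phi_measured, p0=[2.5, 30.0, 0.5])
phi_inf, tau, n = params
print(f"fitted: phi_inf={phi_inf:.2f}, tau={tau:.1f} d, n={n:.2f}")
```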

The evident advances in the computational power of digital computers enable the modeling of the total system of structures. Such modeling demands compatible representations of the couplings of different structural subsystems. Therefore, models of dynamic interaction between the vehicle and the bridge, and models of a bridge bearing, a coupling element between the bridge's superstructure and substructure, are of interest and discussed within this paper. The vehicle-bridge interaction may be described as a function connecting two sets of behavior. In this case, the coupling is embodied by mutual parameters that affect both systems, such as the frequency content of the bridge and the vehicle. The bridge bearings, by contrast, are elements used specifically to couple; in such elements the deformation and the transferred loads are used in characterizing the coupling. The nature of these couplings and their influence on the bridge response is different. However, the need to assess the amount of dynamic response transferred by or within these couplings is a common argument.

NUMERICAL SIMULATION OF THERMO-HYGRAL ALKALI-SILICA REACTION MODEL IN CONCRETE AT THE MESOSCALE
(2010)

This research aims to model Alkali-Silica Reaction (ASR) gel expansion in concrete under the influence of hygral and thermal loading, based on experimental results. ASR provokes a heterogeneous expansion in concrete leading to dimensional changes and eventually the premature failure of the concrete structure. This can result in map cracking on the concrete surface which will decrease the concrete stiffness. Factors that influence ASR are parameters such as the cement alkalinity, the amount of deleterious silica in the aggregate used, the concrete porosity, and external factors like temperature, humidity and external sources of alkali from the ingression of deicing salts. The uncertainties of the influential factors make ASR a difficult phenomenon to treat; hence my approach is to solve the problem using stochastic modelling, in which a numerical simulation of a concrete cross-section is combined with experimental results from the Finger-Institute for Building Materials Science at the Bauhaus-Universität Weimar. The problem is formulated as a multi-field problem, combining heat transfer, fluid transfer and the reaction rate model with the mechanical stress field. The simulation is performed as a mesoscale model considering aggregates and mortar matrix. The reaction rate model will be developed using experimental results on concrete expansion due to ASR gained from concrete prism tests. Expansive strain values for transient environmental conditions due to the reaction rate will be determined from calculations based on the reaction rate model. Results from these models will be able to predict the rate of ASR expansion and the crack propagation that may arise.

The finite element method can be used to reproduce the geometrically and physically nonlinear load-bearing behavior of reinforced concrete elements. Due to the nonlinear behavior in compression and the cracking under tensile loading, the stiffness of the concrete is generally not constant. The accuracy of the calculation depends decisively on the description of this stiffness over the entire structure as well as over an individual element. The focus of this thesis is the derivation of the stiffness matrices for geometrically and physically nonlinear analysis. The linear and nonlinear parts of the stiffness matrix are developed taking into account a stiffness that varies over the element. Based on the theoretical investigations, the algorithms for assembling the stiffness matrix and the B-matrix are implemented in MATLAB. By integrating the new modules into an existing MATLAB FEM application, the algorithms are verified by means of example calculations.

The present article proposes an alternative way to compute the torsional stiffness based on three-dimensional continuum mechanics instead of applying a specific theory of torsion. A thin, representative beam slice is discretized by solid finite elements. Adequate boundary conditions and coupling conditions are integrated into the numerical model to obtain a proper answer on the torsion behaviour, thus on shear center, shear stress and torsional stiffness. This finite element approach only includes general assumptions of beam torsion which are independent of cross-section geometry. These assumptions essentially are: no in-plane deformation, constant torsion and free warping. Thus it is possible to achieve numerical solutions of high accuracy for arbitrary cross-sections. Due to the direct link to three-dimensional continuum mechanics, it is possible to extend the range of torsion analysis to sections which are composed of different materials or even to heterogeneous beams on a high scale of resolution. A brief study follows to validate the implementation and results are compared to analytical solutions.

Planning and construction processes are characterized by the peculiarity that they need to be designed individually for each project. It is necessary to set up an individual schedule for each project. As a basis for a new project, schedules from already finished projects are used, but adaptations are always necessary. In practice, scheduling tools only document a process. Schedules cover a set of activities, their durations and a set of interdependencies between activities. The design of a process is up to the user. It is not necessary to specify each interdependency, and completeness and correctness need to be checked manually. No methodologies are available to guarantee properties such as correctness or completeness. The considerations presented in the paper are based on an approach where a planning and a construction process, including the interdependencies between planning and construction activities, are regarded as a result. Selected information needs to be specified by the user, and a proposal for an order of planning and construction activities is computed. As a consequence, process properties such as correctness and completeness can be guaranteed with respect to the user input. Especially in Germany, clients are allowed to modify their requirements at any time. This leads to modifications in the planning and construction processes. This paper covers a mathematical formulation of this problem based on set theory. A complex structure is set up covering objects and relations, and operations are defined that guarantee consistency in the underlying and versioned process description. The presented considerations are based on previous work. This paper can be regarded as the next step in a series of previous work describing how a suitable concept for handling planning and construction processes in civil engineering can be formed.

This cumulative dissertation investigates aspects of consumer decision making in hedonic contexts and its implications for the marketing of media goods through a series of three empirical studies. All three studies take place within a common theoretical framework of decision making models, applying parts of the framework in novel ways to solve real-world marketing research problems (studies 1 and 2), and examining theoretical relationships between variables within the framework (study 3). One notable way in which the studies differ is their theoretical treatment of the hedonic component of decision making, i.e. the role and conceptualization of emotions.

After the excited palaver about the computer as a 'medium' and the academic rhetoric accompanying the Internet, the question of what media philosophy can accomplish is raised once again - in this contribution as a media-anthropological reassurance: which technical transgressions define what is new about our situation?

In this paper three different formulations of a Bernoulli type free boundary problem are discussed. By analyzing the shape Hessian in the case of matching data, a distinction is made between well-posed and ill-posed formulations. A nonlinear Ritz-Galerkin method is applied for discretizing the shape optimization problem. In the case of well-posedness, existence and convergence of the approximate shapes are proven. In combination with a fast boundary element method, efficient first and second order shape optimization algorithms are obtained.

Within the scheduling of construction projects, different, partly conflicting objectives have to be considered. The specification of an efficient construction schedule is a challenging task, which leads to an NP-hard multi-criteria optimization problem. In the past decades, so-called metaheuristics have been developed for scheduling problems to find near-optimal solutions in reasonable time. This paper presents a Simulated Annealing concept for determining near-optimal construction schedules. Simulated Annealing is a well-known metaheuristic optimization approach for solving complex combinatorial problems. To enable dealing with several optimization objectives, the Pareto optimization concept is applied. Thus, the optimization result is a set of Pareto-optimal schedules, which can be analyzed for selecting exactly one practicable and reasonable schedule. A flexible constraint-based simulation approach is used to generate possible neighboring solutions very quickly during the optimization process. The essential aspects of the developed Pareto Simulated Annealing concept are presented in detail.
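A minimal sketch of the acceptance logic in a Pareto-based simulated annealing loop is given below: a neighboring solution is accepted if it is not dominated by the current one, or with a temperature-dependent probability otherwise, and an archive keeps the non-dominated solutions found so far. The toy objective functions, the neighbor move and all settings are illustrative assumptions, not the constraint-based simulation of the paper.

```python
import math
import random

random.seed(42)

def objectives(x):
    """Two conflicting toy objectives (stand-ins for e.g. duration and cost)."""
    return (x, (x - 6.0) ** 2 + 1.0)

def dominates(a, b):
    """Pareto dominance: a is nowhere worse and somewhere strictly better."""
    return all(ai <= bi for ai, bi in zip(a, b)) and any(ai < bi for ai, bi in zip(a, b))

def neighbor(x):
    return min(10.0, max(0.0, x + random.uniform(-0.5, 0.5)))

current, T = 5.0, 1.0
archive = {round(current, 3): objectives(current)}

for step in range(2000):
    cand = neighbor(current)
    f_cur, f_cand = objectives(current), objectives(cand)
    worse = max(fc - fo for fc, fo in zip(f_cand, f_cur))
    # accept non-dominated candidates, dominated ones with Boltzmann probability
    if not dominates(f_cur, f_cand) or random.random() < math.exp(-worse / T):
        current = cand
        if not any(dominates(f, f_cand) for f in archive.values()):
            archive = {k: f for k, f in archive.items() if not dominates(f_cand, f)}
            archive[round(cand, 3)] = f_cand
    T *= 0.999   # cooling schedule

print(f"{len(archive)} non-dominated solutions in the archive")
```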

Using a quaternionic reformulation of the electrical impedance equation, we consider a two-dimensional separable-variables conductivity function and, employing two different techniques, we obtain a special class of Vekua equation whose general solution can be approached by means of Taylor series in formal powers, for which it is possible to introduce an explicit Bers generating sequence.

Besides home entertainment and business presentations, video projectors are powerful tools for modulating images spatially as well as temporally. The re-evolving need for stereoscopic displays increases the demand for low-latency projectors and recent advances in LED technology also offer high modulation frequencies. Combining such high-frequency illumination modules with synchronized, fast cameras, makes it possible to develop specialized high-speed illumination systems for visual effects production. In this thesis we present different systems for using spatially as well as temporally modulated illumination in combination with a synchronized camera to simplify the requirements of standard digital video composition techniques for film and television productions and to offer new possibilities for visual effects generation. After an overview of the basic terminology and a summary of related methods, we discuss and give examples of how modulated light can be applied to a scene recording context to enable a variety of effects which cannot be realized using standard methods, such as virtual studio technology or chroma keying. We propose using high-frequency, synchronized illumination which, in addition to providing illumination, is modulated in terms of intensity and wavelength to encode technical information for visual effects generation. This is carried out in such a way that the technical components do not influence the final composite and are also not visible to observers on the film set. Using this approach we present a real-time flash keying system for the generation of perspectively correct augmented composites by projecting imperceptible markers for optical camera tracking. Furthermore, we present a system which enables the generation of various digital video compositing effects outside of completely controlled studio environments, such as virtual studios. A third temporal keying system is presented that aims to overcome the constraints of traditional chroma keying in terms of color spill and color dependency. ...

In this paper we present rudiments of a higher dimensional analogue of the Szegö kernel method to compute 3D mappings from elementary domains onto the unit sphere. This is a formal construction which provides us with a good substitution of the classical conformal Riemann mapping. We give explicit numerical examples and discuss a comparison of the results with those obtained alternatively by the Bergman kernel method.

This thesis focuses on the cryptanalysis and the design of block ciphers and hash functions. The thesis starts with an overview of methods for the cryptanalysis of block ciphers which are based on differential cryptanalysis. We explain these concepts and also several combinations of these attacks. We propose new attacks on reduced versions of ARIA and AES. Furthermore, we analyze the strength of the internal block ciphers of hash functions. We propose the first attacks that break the internal block ciphers of Tiger, HAS-160, and a reduced round version of SHACAL-2. The last part of the thesis is concerned with the analysis and the design of cryptographic hash functions. We adapt a block cipher attack called the slide attack to the scenario of hash function cryptanalysis. We then use this new method to attack different variants of GRINDAHL and RADIOGATUN. Finally, we propose a new hash function called TWISTER which was designed and proposed for the SHA-3 competition. TWISTER was accepted for round one of this competition. Our approach follows a new strategy to design a cryptographic hash function. We also describe several attacks on TWISTER and discuss the security issues concerning these attacks on TWISTER.

In the last two decades, many cities have faced changes in their economic basis and therefore adopted an entrepreneurial approach in the municipal administration accompanied by city marketing strategies. Brazilian cities have also adopted this approach, like the case of Florianópolis. Florianópolis has promoted advertising campaigns on the natural resources of the Island of Santa Catarina as well as on its quality of life in comparison to other cities. However, due also to such campaigns, it has experienced a great demographic growth and, consequently, infrastructural and social problems. Nevertheless, it seems to have a good image within the national urban scenario and has been commonly considered an “urban consumption dream” for many Brazilians. This paradoxical situation is the reason why it has been chosen as the research object in this dissertation. Thus, the questions of this research are: is there a gap between the promise and the performance of the city of Florianópolis? If so, can tourists and residents recognize it? And finally, how can this gap be demonstrated? Accordingly, the main objective of this research is to propose a conformity assessment approach applicable to cities, by which the content of city advertisement campaigns can be compared to its performance indicators and satisfaction degree of its consumers. Therefore, this approach is composed by different methods: literature and legislation reviews, semi-structured and structured interviews with experts and inhabitants, an urban centrality development analysis, a qualitative discourse analysis of advertising material (including images), a qualitative content analysis of newspaper reports and a questionnaire survey. Finally, the theses are: yes, there is a gap between promise and performance of Florianópolis; this promise is a result of city marketing campaigns which advertise its natural features and at the same time hiding its urban aspects, supported by some political and private actors, mainly interested in the development of tourism and real estate market in the city; this gap has been already recognized by tourists and more intensively by residents; the selected methods worked as a kind of conformity assessment for cities and tourist destinations; and last but not least, since there is a gap, it designates the practice of “make-up urbanism”. Research limitations are the short time frame covered by this analysis and small and non-representative samples. However, its relevance lies in the attempt to fill in two disciplinary lacunas: a conformity assessment approach for cities and the creation of knowledge about Florianópolis and its further presentation at an international level, on the one hand. On the other hand, the transfer of this approach to other cities would help explaining a (common) contemporary urban phenomenon and appeal for more ethical conduct and transparency in the practices of city marketing.

Reducing energy consumption is one of the major challenges of the present day and will remain so for future generations. The emerging EU directives relating to energy (the EU EPBD and the EU Directive on Emissions Trading) now place demands on building owners to rate the energy performance of their buildings for efficient energy management. Moreover, European legislation (Directive 2006/32/EC) requires facility managers to reduce building energy consumption and operational costs. Currently, sophisticated building services systems are available integrating off-the-shelf building management components. However, this ad-hoc combination presents many difficulties to building owners in the management and upgrade of these systems. This paper addresses the need for integration concepts, holistic monitoring and analysis methodologies, life-cycle oriented decision support and sophisticated control strategies through the seamless integration of people, ICT devices and computational resources by introducing a newly developed integrated system architecture. The concept was first applied to a residential building and the results were elaborated to improve the current building conditions.

Buildings can be divided into various types and described by a huge number of parameters. Within the life cycle of a building, especially during the design and construction phases, a lot of engineers with different points of view, proprietary applications and data formats are involved. The collaboration of all participating engineers is characterised by a high amount of communication. Due to these aspects, a homogeneous building model for all engineers is not feasible. The status quo of civil engineering is the segmentation of the complete model into partial models. Currently, the interdependencies of these partial models are not in the focus of available engineering solutions. This paper addresses the problem of coupling partial models in civil engineering. According to the state-of-the-art, applications and partial models are formulated by the object-oriented method. Although this method solves basic communication problems like subclass coupling directly it was found that many relevant coupling problems remain to be solved. Therefore, it is necessary to analyse and classify the relevant coupling types in building modelling. Coupling in computer science refers to the relationship between modules and their mutual interaction and can be divided into different coupling types. The coupling types differ on the degree by which the coupled modules rely upon each other. This is exemplified by a general reference example from civil engineering. A uniform formulation of coupling patterns is described analogously to design patterns, which are a common methodology in software engineering. Design patterns are templates for describing a general reusable solution to a commonly occurring problem. A template is independent of the programming language and the operating system. These coupling patterns are selected according to the specific problems of building modelling. A specific meta-model for coupling problems in civil engineering is introduced. In our meta-model the coupling patterns are a semantic description of a specific coupling design.

An introduction is given to Clifford Analysis over pseudo-Euclidean space of arbitrary signature, called for short Ultrahyperbolic Clifford Analysis (UCA). UCA is regarded as a function theory of Clifford-valued functions satisfying a first order partial differential equation involving a vector-valued differential operator, called a Dirac operator. The formulation of UCA presented here pays special attention to its geometrical setting. This permits the identification of tensors which qualify as geometrically invariant Dirac operators and allows a position to be taken on the naturalness of contravariant and covariant versions of such a theory. In addition, a formal method is described to construct the general solution to the aforementioned equation in the context of covariant UCA.

Traffic volume risk in PPP projects in the road sector - determinants of efficient risk allocation
(2010)

Despite extensive worldwide experience with public-private partnership projects in the road sector, dealing with the traffic volume risk remains a challenge for the project participants. The thesis therefore addresses the essential question of an efficient allocation of this risk, which plays a decisive role in the overall economic success of a road concession project. First, the characteristics of the traffic volume risk and its numerous influencing factors are examined. Subsequently, the contract models used in practice for operating road infrastructure are presented, and it is analyzed how the traffic volume risk is distributed among the different contracting parties in each model. On this basis, a criteria-based analytical framework is developed which evaluates the efficiency of different risk allocations between the contracting parties. It takes into account both the efficiency-relevant characteristics of the potential risk bearers of a PPP project and the efficiency-relevant effects of the different contract models. From the findings of this analysis, recommendations for dealing with the traffic volume risk are finally derived.

SIMULATION AND MATHEMATICAL OPTIMIZATION OF THE HYDRATION OF CONCRETE FOR AVOIDING THERMAL CRACKS
(2010)

After the mixing of concrete, hardening starts by an exothermic chemical reaction known as hydration. As the reaction rate depends on the temperature, the time in the description of the hydration is replaced by the maturity, which is defined as an integral over a certain function of the temperature. The temperature distribution is governed by the heat equation with a right-hand side depending on the maturity and the temperature itself. We compare the performance of different time integration schemes of higher order with automatic time step control. The simulation of the heat distribution is of importance as the development of mechanical properties is driven by the hydration. During this process it is possible that the tensile stresses exceed the tensile strength and cracks occur. The goal is to produce inexpensive concrete without cracks. Simple crack criteria use only temperature differences; more involved ones are based on thermal stresses. If the criterion predicts cracks, some changes in the input data are needed. This can be interpreted as optimization. The final goal will be to adapt model-based optimization (in contrast to simulation-based optimization) to the problem of the hydration of young concrete and the avoidance of cracks. The first step is the simulation of the hydration, on which we focus in this paper.
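For illustration, a widely used Arrhenius-type definition of such a maturity (equivalent age) is sketched below; the specific function used in the paper may differ:

```latex
M(t) \;=\; \int_{0}^{t} \exp\!\left[ \frac{E_A}{R} \left( \frac{1}{T_{\mathrm{ref}}} - \frac{1}{T(\tau)} \right) \right] \mathrm{d}\tau
```

Here T(τ) denotes the temperature history, T_ref a reference temperature (for example 293 K), E_A an apparent activation energy and R the universal gas constant, so that time spent at elevated temperature contributes more strongly to the maturity.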

In nonlinear simulations the loading is, in general, applied in an incremental way. Path-following algorithms are used to trace the equilibrium path during the failure process. Standard displacement controlled solution strategies fail if snap-back phenomena occur. In this contribution, a path-following algorithm based on the dissipation of the inelastic energy is presented which allows for the simulation of snap-backs. Since the constraint is defined in terms of the internal energy, the algorithm is not restricted to continuum damage models. Furthermore, no a priori knowledge about the final damage distribution is required. The performance of the proposed algorithm is illustrated using nonlinear mesoscale simulations.
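For orientation, a generic form of such a dissipation-based constraint and the resulting augmented Newton system can be written as follows (our notation, not taken from the paper; ΔD is the increment of dissipated energy in the current step, Δτ the prescribed dissipation increment, u the displacements, λ the load factor, K_T the tangent stiffness, f̂ the reference load vector and r the residual):

```latex
g(\mathbf{u},\lambda) \;=\; \Delta D(\mathbf{u},\lambda) - \Delta\tau \;=\; 0,
\qquad
\begin{bmatrix}
\mathbf{K}_T & -\hat{\mathbf{f}} \\
\partial g / \partial \mathbf{u} & \partial g / \partial \lambda
\end{bmatrix}
\begin{Bmatrix} \delta\mathbf{u} \\ \delta\lambda \end{Bmatrix}
= -\begin{Bmatrix} \mathbf{r} \\ g \end{Bmatrix}
```

The constraint equation replaces the usual displacement or arc-length control and prescribes how much inelastic energy may be dissipated per increment, which is what allows snap-backs to be traced.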

NONZONAL WAVELETS ON S^N
(2010)

In the present article we construct wavelets on an arbitrary dimensional sphere S^n using the approach of approximate identities. There are two equivalent approaches to wavelets. The group-theoretical approach formulates a square integrability condition for a group acting via unitary, irreducible representations on the sphere. The connection to the group-theoretical approach will be sketched. The concept of approximate identities uses the same constructions in the background; here we select an appropriate section of dilations and translations in the group acting on the sphere in two steps. First we formulate dilations in terms of approximate identities and then we realize translations on the sphere as rotations. This leads to the construction of an orthogonal polynomial system in L²(SO(n+1)). This approach is convenient for constructing concrete wavelets, since the appropriate kernels can be constructed from the heat kernel, leading to the approximate identity of Gauss-Weierstraß. We work out conditions for functions to form a family of wavelets, and subsequently we formulate how zonal wavelets can be constructed from an approximate identity and how this relates to the admissibility of nonzonal wavelets. Eventually we give an example of a nonzonal wavelet on S^n, which we obtain from the approximate identity of Gauss-Weierstraß.

On the mechanisms of shrinkage reducing admixtures in self consolidating mortars and concretes
(2010)

Self Consolidating Concrete – a dream has come true!(?) Self Consolidating Concrete (SCC) is mainly characterised by its special rheological properties. Without any vibration this concrete can be placed and compacted under its own weight, without segregation or bleeding. The use of such concrete can increase the productivity on construction sites and enable the use of a higher degree of well distributed reinforcement for thin walled structural members. This new technology also reduces health risks since, in contrast to the traditional handling of concrete, the emission of noise and vibration is substantially decreased. The specific mix design for self consolidating concretes was introduced around the 1980s in Japan. In comparison to normal vibrated concrete an increased paste volume enables a good distribution of aggregates within the paste matrix, minimising the influence of aggregate friction on the concrete flow properties. The introduction of inert and/or pozzolanic additives as part of the paste provides the required excess paste volume without using disproportionally high amounts of plain cement. Due to further developments of concrete admixtures such as superplasticizers, the cement paste can gain self-levelling properties without causing segregation of aggregates. Whereas SCC differs from normal vibrated concrete in its fresh attributes, it should reach similar properties in the hardened state. Due to the increased paste volume it usually shows higher shrinkage. Furthermore, owing to strength requirements, SCC is often produced at low water to cement ratios and hence may additionally suffer from autogenous shrinkage. This means that cracking caused by drying or autogenous shrinkage is a real risk for SCC and can compromise its durability as cracks may serve as ingression paths for gases and salts or might permit leaching. For the time being SCC still exhibits increased shrinkage and cracking probability and hence may be discarded in many practical applications. This can be overcome by a better understanding of those mechanisms and the ways to mitigate them. It is a target of this thesis to contribute to this. How to cope with increased shrinkage of SCC? In general, engineers are facing severe problems related to shrinkage and cracking. Even for normal and high performance concrete, containing moderate amounts of binder, a lot of effort was put into counteracting shrinkage and avoiding cracking. For the time being these efforts resulted in the knowledge of how to distribute cracks rather than to avoid them. The most efficient way to decrease shrinkage turned out to be to decrease the cement content of concrete down to a minimum but still sufficient amount. For SCC this obviously seems to contradict the requirement of a high paste volume. Indeed, the potential for shrinkage reduction is limited to some small-range modifications in the mix design following two major concepts. The first one is the reduction of the required paste volume by optimising the aggregate grading curve. The second one involves high volume substitution of cement, preferentially using inert mineral additives. The optimization of grading curves is limited by several severe practical issues. Problems start with the availability of sufficiently fractionated aggregates. Usually attempts fail because of the enormous effort in composing application-optimized grading curves or mix designs.
Due to durability reasons, the substitution rate for cement is limited depending on the application purpose and on environmental exposure of the hardened concrete. In the early 1980s Shrinkage Reducing Admixtures (SRA) were introduced to counteract drying shrinkage of concrete. The first publications explicitly dealing with SRA go back to Goto and Sato (Japan). They were published in 1983, which is also the time when the SCC concept was introduced. SRA modified concretes showed a substantial reduction of free drying shrinkage contributing to crack prevention or at least a significant decrease of crack width in situations of restrained drying shrinkage. Will shrinkage reducing admixtures contribute to a broader application of SCC? Within the last three decades performance tests on several types of concrete proved the efficiency of shrinkage reducing admixtures. So, at least in terms of shrinkage and cracking, concretes in general and SCC in particular can benefit from SRA application. But "One man's meat is another man's poison" and with respect to long term performance of SRA modified concretes there are still several issues to be clarified. One of these concerns the impact of SRAs on cement hydration. It is therefore an issue to know if changes in the hydrated phase composition, induced by SRA, result in undesired properties or decreased durability. Another issue is that the long term shrinkage reduction has to be evaluated. For example, one can wonder if SRA leaching may diminish or even eliminate long term shrinkage reduction and if the release of admixtures could be a severe environmental issue. It should also be noted that the basic mechanism or physical impact of SRA as well as its implementation in recent models for shrinkage of concrete is still being discussed. The present thesis tries to shed light on the role of SRA in self consolidating concrete focusing on the three questions outlined above: basic mechanisms of cement hydration, physical impact on shrinkage and the sustainability of SRA-application. Which contributions result from this study? Based on an extensive patent search, commercial SRAs could be identified to be synergistic mixtures of non-ionic surfactants and glycols. This turns out to be most important information for more than one reason and is the subject of chapter 4. An abundant literature focuses on properties of these non-ionic surfactants. Moreover, from this rich pool of information, the behaviour of SRAs and their interactions in cementitious systems were better understood through this thesis. For example, it could be anticipated how SRAs behave in strong electrolytes and how surface activity, i.e. surface tension, and interparticle forces might be affected. The synergy effect regarding enhanced performance induced by the presence of additional glycol in SRAs could be derived from the literature on the co-surfactant nature of glycols. Generally it now can be said that glycols ensure that the non-ionic surfactant is properly distributed onto the paste interfaces to efficiently reduce surface tension. In literature, the impact of organic matter on cement hydration was extensively studied for other admixtures like superplasticizer. From there, main impact factors related to the nature of these molecules could be identified. In addition, here again, the literature on non-ionic surfactants provides sufficient information to anticipate possible interactions of SRA with cement hydration based on the nature of non-ionic surfactants. 
All in all, the extensive study on the nature of non-ionic surfactants, presented in chapter 4, provides a fundamental understanding of the behaviour of SRAs in cement paste. Taking a step further to relate this to the impact on drying and shrinkage required a review of recent models for the drying and shrinkage of cement paste, as presented in chapter 3. There, it is shown that the macroscopic thermodynamics of open pore systems can be successfully applied to predict drying-induced deformation, but that the surface activity of SRA still has to be implemented in order to explain the shrinkage reduction it causes. Because of severe issues concerning the importance of capillary pressure for shrinkage, a new macroscopic thermodynamic model was derived in a way that meets the requirements for properly incorporating the surface activity of SRA. This is the subject of chapter 5. Based on theoretical considerations, chapter 5 outlines the broader impact of SRA on drying cementitious matter. In a next step, cement paste was treated as a deformable, open drying pore system. Thereby, the drying phenomena of SRA-modified mortars and concrete observed by other authors could be reproduced. This phenomenological consistency of the model constitutes an important contribution towards the understanding of SRA mechanisms. Another main contribution of this work came from introducing an artificial pore system, termed the normcube. Using this model system, it could be shown how the evolution of the interfacial area and its properties interact in the presence of SRAs and how this impacts the drying characteristics.

In chapter 7, the surface activity of commercial SRAs in aqueous solution and in synthetic pore solution was investigated. This shows how the electrolyte concentration of synthetic pore solution impacts the phase behaviour of SRA and, conversely, how the presence of SRA impacts the aqueous electrolyte solution. Whilst electrolytes enhance the self-aggregation of SRAs into micelles and liquid crystals, the presence of SRAs leads to the precipitation of minerals such as syngenite and mirabilite. Moreover, electrolyte solutions containing SRAs show limited miscibility, or rather miscibility gaps, where the liquid separates into isotropic micellar solutions and surfactant-rich reverse micellar solutions. The investigation of the surface activity and phase behaviour of SRA led to another important contribution. From macroscopic surface tension measurements, a relationship between the excess surface concentration of SRA, the bulk concentration of SRA and the exposed interfacial area could be derived. Based on this, it is now possible to predict the actual surface tension of the pore fluid in the course of drying once the evolution of the internal interfacial area is known. This is used later in this thesis to describe the specific drying and shrinkage behaviour of SRA-modified pastes and mortars.

Calorimetric studies on normal Portland cement and composite binders revealed that SRAs alone show only a minor impact on hydration kinetics. In the presence of superplasticizer, however, cement hydration can be significantly decelerated. The delaying impact of SRA could be related to a selective deceleration of the silicate phase hydration. Moreover, it could be shown that portlandite precipitation is changed in the presence of SRA, turning the compact habitus into more or less layered structures. Thereby, the specific surface area increases, causing the amount of physically bound water to increase, which in turn reduces the maximum degree of hydration achievable in sealed systems.
Extensive phase analysis shows that the hydrated phase composition of SRA-modified binders remains almost unaffected. The appearance of a transient mineral phase could be detected by environmental scanning electron microscopy. As was shown for synthetic pore solutions, syngenite precipitates during the early hydration stages and is later consumed in the course of the aluminate hydration, i.e. when sulphates are depleted. Moreover, for some SRAs, the salting-out phenomena expected to be enhanced in strong electrolytes could also be shown to take place. The resulting organic precipitates could be identified by SEM-EDX in cement paste and by X-ray diffraction on solid residues of synthetic pore solution. The presence of SRAs was also found to impact the microstructure of well cured cement paste. Based on nitrogen adsorption measurements and mercury intrusion porosimetry, the amount of small pores is seen to increase with SRA dosage, whilst the overall porosity remains unchanged.

The question regarding the sustainability of SRA application is the subject of chapter 10. By means of leaching studies it could be shown that SRA can be leached to a significant extent. The mechanism could be identified as a diffusion process, and a range of effective diffusion coefficients could be estimated. Thereby, the leaching of SRA can now be estimated for real structural members. However, while the admixture can be leached to a high extent in tank tests, the leaching rates in practical applications can be assumed to be low because of the much reduced contact with water. This could be proven by quantifying the admixture loss during long-term drying and rewetting cycles. Despite a loss of admixture, the shrinkage reduction is hardly impacted. Moreover, the cyclic tests revealed that the total deformations in the presence of SRA remain low due to a lower extent of irreversible shrinkage deformations. Another important contribution towards a better understanding of the working mechanism of SRA with respect to drying and shrinkage came from the same leaching tests. A significant fraction of SRA is found to be immobile and does not diffuse during leaching. This fraction of SRA is probably strongly associated with cement phases such as the calcium silicate hydrates or portlandite. Based on these findings, it is now also possible to quantify the amount of admixture active at the interfaces. This means that the evolution of the surface tension in the course of drying can be approximated, which is a fundamental requirement for modelling shrinkage in the presence of SRA (a schematic sketch of this partitioning idea is given after this abstract).

The last experimental chapter of this study focuses on the working mechanism and the impact of SRA on drying and shrinkage. Based on the thermodynamics of the open deformable pore system introduced in chapter 5, energy balances are set up using desorption and shrinkage isotherms of actual samples. Information on the distribution of SRA in the hydrated paste is used to estimate the actual surface tensions of the pore solution. In other words, this is the first time that the surface activity of the SRA in the course of drying is fully accounted for. From the energy balances, the evolution and the properties of the internal interface are then obtained. This made it possible to explain why SRAs impact drying and shrinkage and in which specific range of relative humidity they are active. Summarising the findings of this thesis, it can be said that the understanding of the impact of SRAs on hydration, drying and shrinkage was advanced.
Many of the new insights came from the careful investigation of the theory of non-ionic surfactants, something that the cement community had generally overlooked up to now.
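The surface-tension prediction described in the abstract above rests on partitioning the admixture between the bulk pore solution and the internal air-liquid interface. The following Python sketch only illustrates that idea under explicitly assumed, simplified relations (a Langmuir adsorption isotherm and the Szyszkowski equation) with hypothetical parameter values; it is not the calibrated relationship derived in the thesis.

# Partition a fixed amount of SRA between bulk pore solution and the internal
# air-liquid interface, then evaluate the resulting surface tension.
# Langmuir isotherm:    Gamma(c) = G_max * K*c / (1 + K*c)            [mol/m^2]
# Szyszkowski equation: sigma(c) = sigma_w - R*T*G_max*ln(1 + K*c)    [N/m]
# All constants below are hypothetical placeholders, not data from the thesis.

import math

R, T    = 8.314, 293.15       # gas constant [J/(mol K)], temperature [K]
sigma_w = 0.072               # surface tension of plain pore solution [N/m]
G_max   = 4.0e-6              # maximum excess surface concentration [mol/m^2]
K       = 0.05                # Langmuir constant [m^3/mol]

def bulk_concentration(n_total, V_liquid, A_interface):
    """Solve n_total = c*V + Gamma(c)*A for the bulk concentration c by bisection."""
    f = lambda c: c * V_liquid + G_max * K * c / (1.0 + K * c) * A_interface - n_total
    lo, hi = 0.0, n_total / V_liquid          # root is bracketed in [lo, hi]
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(mid) < 0.0 else (lo, mid)
    return 0.5 * (lo + hi)

def surface_tension(c):
    return sigma_w - R * T * G_max * math.log(1.0 + K * c)

# Drying: the liquid volume shrinks while the internal interfacial area grows,
# so part of the SRA is drawn to the interface and the bulk concentration changes.
n_total = 0.02                                # total SRA in the pore system [mol]
for V, A in [(1.0e-4, 5.0), (5.0e-5, 50.0), (2.0e-5, 200.0)]:
    c = bulk_concentration(n_total, V, A)
    print(f"V={V:.1e} m^3, A={A:5.1f} m^2 -> c={c:7.2f} mol/m^3, "
          f"sigma={surface_tension(c)*1e3:5.1f} mN/m")

The point of the sketch is only that, once the evolution of the internal interfacial area is known, a mass balance plus an adsorption isotherm suffices to estimate the surface tension of the pore fluid at every stage of drying.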

One of the main focuses of recent Chinese urban development is the creation and retrofitting of public spaces driven by market forces and demand. However, research concerning human and cultural influences on the shaping of public spaces has been scant, and many planning aspects remain undefined and ambiguous, both institutionally and legislatively. This is an explanatory study addressing the interactions, incorporations and interrelationships between the lived environment and its peoples. It is knowledge-seeking and normative. Theoretically, public space in the Chinese context is conceptualized; empirically, a selected case is examined. The research develops a comparatively complete understanding of China’s planning evolution and on-going practices. Data collection emphasizes the concepts of ‘people’ and ‘space’. First-hand data is derived from intensive fieldwork and from observational and participatory documentation. This ample, detailed and authentic empirical data makes space syntax a strong analysis tool for decoding how human activities influence public space. The findings fall into two interdependent categories. Firstly, the study discloses the studied settlement as a generic, organic and incremental development model. Its growth and established environment are evolutionary and incremental, based on its intrinsic traditions, life values and available resources. As a self-sustaining settlement, it highlights certain vernacular traits of spatial development arising from lifestyles and cultural practices. Its spatial articulation appears as a process parallel to socio-economic transitions. Secondly, crucial planning aspects are theoretically summarized to address the existing gap between current planning methodology and practice. The study pinpoints several particularly significant issues, namely the disintegrated land use system and urban planning, the absence of urban design in the planning system, the loss of a human-responsive environment resulting from standardized planning, and the underestimation of heritage in urban development. The research challenges present Chinese planning laws and regulations through the study of urban public space and identifies points of leverage for planning and development, so that planning can empower inhabitants to make decisions throughout the process of shaping and sustaining their space. It therefore discusses not only legislative issues concerning land use planning, urban design and heritage conservation, but also leads to a pivotal proposal: the integration of people and their social spaces in formulating a new spatial strategy. It aims to inform policymakers of the underpinning social values and cultural practices in reconfiguring postmodern Chinese spatiality, and propounds that the social context endemic to communities should be integrated as a crucial tool in spatial strategy design, thereby strengthening spatial attributes and improving quality of life.

This paper describes the application of interval calculus to the calculation of plate deflection, taking into account the inevitable and acceptable tolerances of the input data (input parameters). A simply supported reinforced concrete plate was taken as an example. The plate was loaded by a uniformly distributed load. Several parameters that influence the plate deflection are given as closed intervals. Accordingly, the results are obtained as intervals, so it was possible to follow the direct influence of a change of one or more input parameters on the output values (in our example, the deflection) by using one model and one computing procedure. The described procedure could be applied to any FEM calculation in order to keep calculation tolerances, ISO tolerances, and production tolerances within close (admissible) limits. Wolfram Mathematica was used as the tool for the interval calculations.
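A minimal sketch of the interval idea follows; the paper itself used Wolfram Mathematica, so the Python fragment below, with invented tolerance values and the textbook coefficient for a simply supported square plate under uniform load, only mirrors the principle of propagating input intervals to an output interval.

# Interval propagation of input tolerances to the mid-span deflection of a
# simply supported square plate, w_max = alpha * q * a^4 / D,
# with D = E*h^3 / (12*(1 - nu^2)). All numerical values are illustrative.

from itertools import product

def interval_eval(f, *intervals):
    """Bound a function that is monotone in each argument by evaluating all
    corner combinations of the input intervals (sufficient for this formula)."""
    values = [f(*corner) for corner in product(*intervals)]
    return min(values), max(values)

def w_max(E, h, q, a, nu=0.2, alpha=0.00406):
    D = E * h**3 / (12.0 * (1.0 - nu**2))   # plate flexural rigidity
    return alpha * q * a**4 / D             # mid-span deflection

E = (29.0e9, 33.0e9)      # Young's modulus of concrete [Pa], with tolerance
h = (0.195, 0.205)        # plate thickness [m], production tolerance
q = (9.5e3, 10.5e3)       # uniform load [N/m^2], load uncertainty
a = (4.0, 4.0)            # span [m], taken as exact here

lo, hi = interval_eval(w_max, E, h, q, a)
print(f"deflection interval: [{lo*1000:.2f} mm, {hi*1000:.2f} mm]")

Because the deflection formula is monotone in each parameter, corner evaluation reproduces what a genuine interval-arithmetic package would give; for general FEM calculations a dedicated interval type would be needed instead.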


In this paper we devise a new multi-dimensional integral transform within the Clifford analysis setting, the so-called Fourier-Bessel transform. It appears that in the two-dimensional case, it coincides with the Clifford-Fourier and cylindrical Fourier transforms introduced earlier. We show that this new integral transform satisfies operational formulae which are similar to those of the classical tensorial Fourier transform. Moreover the L2-basis elements consisting of generalized Clifford-Hermite functions appear to be eigenfunctions of the Fourier-Bessel transform.
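For comparison, the operational formulae of the classical tensorial Fourier transform referred to here read, in one common normalisation,

\[
\mathcal{F}[f](\xi) = (2\pi)^{-m/2}\int_{\mathbb{R}^m} e^{-i\langle x,\xi\rangle} f(x)\,dx, \qquad
\mathcal{F}[\partial_{x_j} f](\xi) = i\,\xi_j\,\mathcal{F}[f](\xi), \qquad
\mathcal{F}[x_j f](\xi) = i\,\partial_{\xi_j}\mathcal{F}[f](\xi),
\]

so that differentiation and multiplication by a coordinate variable are interchanged up to factors of i; the Fourier-Bessel transform is shown to obey analogous rules within the Clifford algebra setting.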

In the past, several types of Fourier transforms in Clifford analysis have been studied. In this paper, first an overview of these different transforms is given. Next, a new equation in a Clifford algebra is proposed, the solutions of which will act as kernels of a new class of generalized Fourier transforms. Two solutions of this equation are studied in more detail, namely a vector-valued solution and a bivector-valued solution, as well as the associated integral transforms.

Public Private Partnership (PPP) is increasingly gaining ground as an alternative procurement route for the public sector. In the hospital sector there is some initial experience with PPP; in contrast to other public sectors, however, it cannot yet be regarded as established. In many hospitals there is uncertainty about this new organisational concept. What lies behind this term, which is sometimes used synonymously with "privatisation"? Starting from this question, the present work shows that PPP, when applied correctly, is an alternative to selling a public hospital. PPP is an instrument by which private know-how and capital are made available to the public hospital owner. In contrast to a material privatisation, the public ownership of the hospital is retained. The framework conditions of the health care system pose major challenges, particularly for public hospitals. The situation is increasingly characterised by a shortage of funds, a backlog of refurbishment, and steadily growing competition for patients. The federal government's reform efforts to reduce health expenditure have, over recent decades, led to ever new statutory regulations at ever shorter intervals. The latest major step in this development is the conversion of hospital remuneration to DRG-based case rates (Fallpauschalen). The effects are felt particularly in public hospitals. Loss-making institutions that were previously supported by subsidies are no longer kept "artificially alive". All hospitals receive performance-oriented remuneration, largely independent of the costs actually incurred at the individual hospital. These developments have further intensified hospitals' efforts to optimise their internal service processes. Services associated with the building stock are of particular importance here. Owing to high investment costs and substantial expenditure during the use phase, the non-medical services in a hospital account for a considerable share of total costs. Almost one third of hospital costs is not directly related to the healing process. In Germany this share of non-medical processes amounts to roughly 18 billion euros per year. The optimisation potential of the non-medical services, which also comprise construction and real estate related services, is still often underestimated and in most cases has not yet been exploited. Its financial significance alone calls for intensified scientific study, a need that has so far been insufficiently addressed. By investigating the applicability of PPP to hospital real estate, the present work aims to contribute to closing this gap. This procurement route, which is new to the German hospital sector, shows how efficiency potentials in the non-medical services can be tapped in a sustainable way and how this can contribute to the economic success of the hospital as a whole.

Since 1969, calculations of the total costs of road traffic on the federal trunk roads in Germany and of their allocation to road users have been carried out continuously. The results of the road cost calculations (Wegekostenrechnungen) of 2002 and 2007 form the basis of the distance-based user charge, since introduced on the German motorway network, for heavy goods vehicles with a permissible gross weight of at least twelve tonnes. This implements the requirement of EU Directive 1999/62/EC, according to which average road user charges are to be oriented towards the costs of constructing, operating and extending the road network concerned. EU Directive 2006/38/EC signals the next development in the calculation of road user charges: in future, external costs are also to be included in the calculation. A first step towards considering these external costs was taken with the preparation of a handbook within an EU research project. Owing to the differing framework conditions in the EU member states, the handbook does not contain exact calculation rules; instead, it presents various methodological approaches from previous studies on external costs, gives recommendations on the choice of method, and contains estimates of the magnitude of the external costs. The studies on the external costs of transport carried out in Europe in recent years are characterised by similar procedures which, in the author's view, nevertheless exhibit critical aspects, particularly with regard to the type of cost accounting and the unit cost rates used. In the present dissertation, an alternative calculation methodology is therefore developed for determining section-, vehicle-class- and mileage-related external costs for motorways, and it is applied to a selected example network. In several essential points it deviates from the procedure predominantly chosen in current studies in order to present a different perspective. The present work thus contributes substantially to extending the state of knowledge on methods for calculating the external costs of road traffic. The calculation methodology developed here is also intended as the basis for a procedure applicable in practice and is therefore designed to be easily transferable to the entire German motorway network. The sections correspond to the stretches between two motorway junctions. A distinction is made between the two vehicle classes "heavy goods vehicles with a permissible gross weight of 12 t or more" and "other vehicles". Although a user charge is currently levied only on heavy goods vehicles of 12 t or more, the developed methodology makes it possible to state mileage-related external costs for all motor vehicles. The inclusion of external benefits is touched upon in this context; the focus, however, is on the external costs. The thesis first presents definitions of essential terms insofar as these appear necessary for understanding the subsequent discussion and specification of the foundations of the developed calculation methodology. This discussion and specification covers the type of cost accounting, the valuation methods for determining the unit cost rates, the discount rate, the cost categories to be considered, the quantity structure, and the allocation procedure.
Subsequently, the cost categories considered are presented in detail on the basis of existing studies and the author's own considerations, and the unit cost rates are determined. In addition, the allocation procedure and the quantity structure to be used for the calculation are presented separately for each category. The developed calculation methodology is then applied to an example network (the motorway network of Thuringia). Besides presenting the study area, calculating the external costs and reporting the results in disaggregated form, the division of the example network into different price categories on the basis of the section-related results is discussed; on this basis the external costs could be internalised via road user charges. Within a sensitivity analysis, individual assumptions of the calculation methodology and individual unit cost rates are varied. The effects of these variations are again demonstrated on the example network, for which the cost calculations are repeated. Finally, open questions and recommendations for further investigations are identified.
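The core allocation step described above reduces, per motorway section and vehicle class, to multiplying unit cost rates per vehicle-kilometre by section length and traffic volume and summing over cost categories. The following Python sketch illustrates only that arithmetic; the cost categories, unit rates and traffic volumes are invented placeholders and do not reproduce the dissertation's values or its full methodology (e.g. valuation methods, discounting, or the quantity structure).

# Schematic allocation of external costs to motorway sections and vehicle
# classes: cost = unit rate [EUR per vehicle-km] * section length [km]
#                 * annual traffic volume [vehicles/year].
# All numbers are illustrative placeholders.

UNIT_RATES = {                      # EUR per vehicle-km, per cost category
    "air_pollution": {"hgv_12t": 0.040, "other": 0.006},
    "noise":         {"hgv_12t": 0.015, "other": 0.003},
    "climate":       {"hgv_12t": 0.020, "other": 0.005},
}

SECTIONS = [                        # (section id, length in km, annual volumes by class)
    ("AS_1-AS_2", 8.4, {"hgv_12t": 1.2e6, "other": 9.8e6}),
    ("AS_2-AS_3", 5.1, {"hgv_12t": 0.9e6, "other": 7.5e6}),
]

def external_costs(sections, rates):
    """Return EUR per year for each section and vehicle class, summed over categories."""
    result = {}
    for sec_id, length_km, volumes in sections:
        per_class = {}
        for veh_class, volume in volumes.items():
            rate_sum = sum(cat[veh_class] for cat in rates.values())
            per_class[veh_class] = rate_sum * length_km * volume
        result[sec_id] = per_class
    return result

for sec, costs in external_costs(SECTIONS, UNIT_RATES).items():
    print(sec, {k: round(v) for k, v in costs.items()})

Dividing such section results by the corresponding vehicle-kilometres would give the mileage-related cost rates from which price categories for a charging scheme could be formed.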

In this paper we consider the time-independent Klein-Gordon equation on some conformally flat 3-tori with given boundary data. We set up an explicit formula for the fundamental solution. We show that any solution to the homogeneous Klein-Gordon equation on the torus can be represented as a finite sum over generalized 3-fold periodic elliptic functions that lie in the kernel of the Klein-Gordon operator. Furthermore, we prove Cauchy- and Green-type integral formulas and set up a Teodorescu and a Cauchy transform for the toroidal Klein-Gordon operator. These in turn are used to set up explicit formulas for the solution to the inhomogeneous version of the Klein-Gordon equation on the 3-torus.
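For orientation, and using one common convention that may differ in sign and normalisation from the paper's, the setting can be summarised as the time-independent Klein-Gordon equation subject to the periodicity of the torus:

\[
(\Delta - \alpha^2)\,u(x) = 0 \quad\text{resp.}\quad (\Delta - \alpha^2)\,u(x) = f(x), \qquad u(x + \omega_j) = u(x), \quad j = 1, 2, 3,
\]

where \(\Delta\) denotes the Laplacian in \(\mathbb{R}^3\), \(\alpha > 0\) plays the role of the mass parameter, \(f\) is the given right-hand side of the inhomogeneous problem, and \(\omega_1, \omega_2, \omega_3\) span the period lattice defining the 3-torus.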