In the present work, a force-transmitting connection technique for modular, shell-like fibre composite components is presented. The connection is based on adhesive bonding with locally limited steel plates. From this connection approach, the adhesive bond between steel and fibre-reinforced plastic is examined in depth. The objectives are the choice of technological boundary conditions, the development of a proposal for numerical calculation and design, and the formulation of constructive recommendations for the design of adhesive bonds. Mechanical characteristic values are determined in tensile tests and transferred directly to the nonlinear calculations. Technological influences and the scatter of real adhesive bonds are integrated into the design via the recalculation of tensile shear tests. It is shown that the adhesive bonds exhibit sufficient strength and satisfactory fracture behaviour. The combination of workshop bonding and site-compatible assembly enables material-appropriate and efficient connections for fibre composite structures under the boundary conditions of the construction industry.
Approaching the question of the connections between structuralism and generative algorithmic planning methods first requires clarifying what is meant by structuralism in architecture. Ultimately, however, there is no binding terminological framework within which such a clarification could take place. Structuralism in architecture is often reduced to a formal phenomenon and thus to a question of style. The present text does not deal with styles and phenomena of structuralist architecture, but concentrates on structuralist design methods and establishes links to algorithmic procedures, working out the interplay between rule-based and intuitive approaches in design.
SIMULATION AND MATHEMATICAL OPTIMIZATION OF THE HYDRATION OF CONCRETE FOR AVOIDING THERMAL CRACKS
(2010)
After the mixing of concrete, hardening starts by an exothermic chemical reaction known as hydration. As the reaction rate depends on the temperature, the time in the description of the hydration is replaced by the maturity, which is defined as an integral over a certain function of the temperature. The temperature distribution is governed by the heat equation with a right-hand side depending on the maturity and the temperature itself. We compare the performance of different time integration schemes of higher order with an automatic time step control. The simulation of the heat distribution is of importance as the development of mechanical properties is driven by the hydration. During this process it is possible that the tensile stresses exceed the tensile strength and cracks occur. The goal is to produce cheap concrete without cracks. Simple crack criteria use only temperature differences; more involved ones are based on thermal stresses. If the criterion predicts cracks, some changes in the input data are needed. This can be interpreted as optimization. The final goal will be to adapt model-based optimization (in contrast to simulation-based optimization) to the problem of the hydration of young concrete and the avoidance of cracks. The first step is the simulation of the hydration, on which we focus in this paper.
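The maturity concept used above can be made concrete with the common Arrhenius-type equivalent-age definition, M(t) = ∫ exp(-E_A/R (1/T(τ) - 1/T_ref)) dτ. The following Python sketch integrates this over a given temperature history; the paper's exact maturity function, activation energy and temperature data are not specified here, so all numbers are illustrative assumptions.

```python
# A minimal sketch of the maturity concept, assuming the common Arrhenius-type
# equivalent-age definition; the paper's exact maturity function and material
# parameters are not given, so all values below are illustrative assumptions.
import numpy as np

E_A = 40000.0   # apparent activation energy [J/mol] (assumed typical value)
R = 8.314       # universal gas constant [J/(mol K)]
T_REF = 293.15  # reference temperature [K]

def equivalent_age(t, T):
    """Integrate the Arrhenius rate factor over a temperature history.

    t : array of time points [s], T : array of temperatures [K].
    Returns the maturity (equivalent age) at every time point.
    """
    rate = np.exp(-E_A / R * (1.0 / T - 1.0 / T_REF))
    # trapezoidal rule, accumulated over the time grid
    increments = 0.5 * (rate[1:] + rate[:-1]) * np.diff(t)
    return np.concatenate(([0.0], np.cumsum(increments)))

# Example: one day of hydration with a hypothetical temperature rise and decay
t = np.linspace(0.0, 86400.0, 1000)
T = 293.15 + 20.0 * np.sin(np.pi * t / 86400.0)
M = equivalent_age(t, T)
print(f"equivalent age after 24 h: {M[-1] / 3600.0:.1f} h")
```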
THE FOURIER-BESSEL TRANSFORM
(2010)
In this paper we devise a new multi-dimensional integral transform within the Clifford analysis setting, the so-called Fourier-Bessel transform. It appears that in the two-dimensional case, it coincides with the Clifford-Fourier and cylindrical Fourier transforms introduced earlier. We show that this new integral transform satisfies operational formulae which are similar to those of the classical tensorial Fourier transform. Moreover the L2-basis elements consisting of generalized Clifford-Hermite functions appear to be eigenfunctions of the Fourier-Bessel transform.
In spite of the extensive research in dynamic soil-structure interaction (SSI), there still exist misconceptions concerning the role of SSI in the seismic performance of structures, especially those founded on soft soil. This is due to the fact that the current analytical SSI models used to evaluate the influence of soil on the overall structural behavior are approximate models and may involve creeds and practices that are not always precise. This is especially true of the codified approaches, which include substantial approximations to provide simple frameworks for design. As direct numerical analysis requires a high computational effort, performing an analysis considering SSI is computationally uneconomical for regular design applications. This paper sets out some milestones for evaluating SSI models. This will be achieved by investigating the different assumptions and involved factors, as well as varying the configurations of R/C moment-resisting frame structures supported by single footings which are subjected to seismic excitations. It is noted that the scope of this paper is to highlight, rather than fully resolve, the above subject. A rough draft of the proposed approach is presented in this paper, whereas a thorough illustration will be carried out throughout the presentation in the course of the conference.
We present a way of calculating the displacement of bent reinforced concrete bar elements in which a rearrangement of internal forces and a plastic hinge have occurred. The described solution is based on Prof. Borcz's mathematical model. It directly takes into consideration the effects connected with the occurrence of a plastic hinge, such as a crack, by means of a differential equation of the axis of the bent reinforced concrete beam. EN Eurocode 2 makes it possible to consider the influence of a plastic hinge on the calculated values for reinforced concrete structures. This influence can also be assessed using other analytical methods. However, the results obtained by applying Eurocode 2 are higher than those obtained in testing. A comparably large error occurs when calculations are made by means of Borcz's method, but in the latter case the results depend on the assumptions made beforehand. The method makes it possible to incorporate experimental results through the parameters r1 and r0. When the experimental results are taken into account, the calculations agree with the actual deflections of the structure.
The main aim of the research project in progress is to develop virtual models as tools to support decision-making in the planning of construction maintenance. The virtual models make it possible to transmit, visually and interactively, information related to the physical behaviour of materials and components of given infrastructures, defined as a function of the time variable. The interactive application allows decisions to be made on conception options in the definition of plans for maintenance, conservation or rehabilitation. The first virtual prototype now in progress concerns lamps only. It allows the examination of the physical model, visualizing, for each element modelled in 3D and linked to a database, the corresponding technical information concerning the wear and tear of the material, calculated for that period of time. In addition, solutions for repair work or substitution and the inherent costs are predicted, the results being obtained interactively and visualized in the virtual environment itself. The aim is that the virtual model should be applicable directly to the 3D models of new constructions and to rehabilitation situations. The practical usage of these models is directed, then, towards supporting decision-making in the conception phase and the planning of maintenance. In further work other components will be analysed and incorporated into the virtual system.
The numerical simulation of damage using phenomenological models on the macroscale has been state of the art for many decades. However, such models are not able to capture the complex nature of damage, which simultaneously proceeds on multiple length scales. Furthermore, these phenomenological models usually contain damage parameters which are not physically interpretable. Consequently, a reasonable experimental determination of these parameters is often impossible. In the last twenty years, the ongoing advance in computational capacities has provided new opportunities for more and more detailed studies of microstructural damage behavior. Today, multiphase models with several million degrees of freedom enable the numerical simulation of micro-damage phenomena in naturally heterogeneous materials. Therewith, multiscale concepts can be applied to the numerical investigation of the complex nature of damage. The presented thesis contributes to a hierarchical multiscale strategy for the simulation of brittle intergranular damage in polycrystalline materials, for example aluminum. The aim is the numerical investigation of physical damage phenomena on an atomistic microscale and the integration of this physically based information into damage models on the continuum meso- and macroscale. Therefore, numerical methods for the damage analysis on the micro- and mesoscale, including the scale transfer, are presented and the transition to the macroscale is discussed. The investigation of brittle intergranular damage on the microscale is realized by the application of the nonlocal Quasicontinuum method, which fully describes the material behavior by atomistic potential functions but reduces the number of atomic degrees of freedom by introducing kinematic couplings. Since this promising method is applied only by a limited group of researchers for special problems, necessary improvements have been realized in an own parallelized implementation of the 3D nonlocal Quasicontinuum method. The aim of this implementation was to develop and combine robust and efficient algorithms for a general use of the Quasicontinuum method, and therewith to allow for the atomistic damage analysis in arbitrary grain boundary configurations. The implementation is applied in analyses of brittle intergranular damage in ideal and nonideal grain boundary models of FCC aluminum, considering arbitrary misorientations. From the microscale simulations, traction separation laws are derived which describe grain boundary decohesion on the mesoscale. Traction separation laws are part of cohesive zone models used to simulate brittle interface decohesion in heterogeneous polycrystal structures. 2D and 3D mesoscale models are presented which are able to reproduce crack initiation and propagation along cohesive interfaces in polycrystals. An improved Voronoi algorithm is developed in 2D to generate polycrystal material structures based on arbitrary distribution functions of grain size; the new model is more flexible in representing realistic grain size distributions. Further improvements of the 2D model are realized by the implementation and application of an orthotropic material model with a Hill plasticity criterion for the grains. The 2D and 3D polycrystal models are applied to analyze crack initiation and propagation in statically loaded samples of aluminum on the mesoscale without the necessity of an initial damage definition.
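The improved Voronoi algorithm mentioned above is not specified in the abstract; purely as a hedged illustration of the basic building block, the following Python sketch generates a plain 2D Voronoi tessellation of random grain seeds, the usual starting point for such polycrystal models.

```python
# A minimal sketch of generating a 2D polycrystal-like structure with a plain
# Voronoi tessellation; the thesis' improved algorithm additionally controls
# the grain size distribution, which is not reproduced here.
import numpy as np
from scipy.spatial import Voronoi

rng = np.random.default_rng(seed=0)
seeds = rng.uniform(0.0, 1.0, size=(50, 2))  # 50 random grain seeds in a unit square
vor = Voronoi(seeds)

# Each input seed owns one Voronoi region, i.e. one "grain".
for i, region_index in enumerate(vor.point_region[:5]):
    region = vor.regions[region_index]
    bounded = -1 not in region  # regions touching infinity lie on the boundary
    print(f"grain {i}: {len(region)} vertices, bounded={bounded}")
```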
In order to model and simulate collapses of large-scale complex structures, a user-friendly and high-performance software system is essential. Because a large number of simulation experiments have to be performed, efficient interactive control and visualization of model parameters and simulation results are crucial, next to an appropriate simulation model and high-performance computing. In this respect, this contribution is concerned with advancements of the software system CADCE (Computer Aided Demolition using Controlled Explosives), which is extended under particular consideration of computational steering concepts. Focus is placed on problems and solutions for the collapse simulation of real-world large-scale complex structures. The simulation model applied is based on a multilevel approach embedding finite element models on a local as well as a near-field length scale, and multibody models on a global scale. Within the global-level simulation, relevant effects of the local and the near-field scale, such as fracture and failure processes of the reinforced concrete parts, are approximated by means of tailor-made multibody subsystems. These subsystems employ force elements representing nonlinear material characteristics in terms of force/displacement relationships that are determined in advance by finite element analysis. In particular, enhancements concerning the efficiency of the multibody model and improvements of the user interaction are presented that are crucial for the capability of the computational steering. Some scenarios of collapse simulations of real-world large-scale structures demonstrate the implementation of the above-mentioned approaches within the computational steering.
CONSTITUTIVE MODELS FOR SUBSOIL IN THE CONTEXT OF STRUCTURAL ANALYSIS IN CONSTRUCTION ENGINEERING
(2010)
Parameters of constitutive models are generally obtained by comparing the results of forward numerical simulations to measurement data. Mostly the parameter values are varied by trial and error in order to reach an improved fit and obtain plausible results. However, the description of complex soil behavior requires advanced constitutive models, and the rising complexity of these models mainly increases the number of unknown constitutive parameters. Thus an efficient identification "by hand" becomes quite difficult for most practical geotechnical problems. The main focus of this article is on finding a vector of parameters in a given search space which minimizes the discrepancy between measurements and the associated numerical results. Classically, the parameter values are estimated from laboratory tests on small samples (triaxial tests or oedometer tests). For this purpose an automatic population-based approach is presented to determine the material parameters for reconstituted and natural Bothkennar Clay. After the identification, a statistical assessment of the numerical results is carried out to evaluate different constitutive models. In addition, a geotechnical problem, stone columns under an embankment, is treated in a well-instrumented field trial in Klagenfurt, Austria. For the identification there are measurements from multilevel piezometers, multilevel extensometers and a horizontal inclinometer. Based on the simulation of the stone columns in a FE model, the identification of the constitutive parameters proceeds as for the experimental tests, by minimizing the absolute error between measured and numerical curves.
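The identification idea described above can be illustrated with a minimal sketch: find the parameter vector in a bounded search space that minimizes the discrepancy between a measured curve and the output of a forward model. The study uses a population-based search around a finite element model; in the sketch below, a toy stress-strain model and a gradient-based least-squares solver stand in for both, purely as assumptions for illustration.

```python
# A minimal sketch of parameter identification by minimizing the discrepancy
# between measured and simulated curves. The toy forward model and the
# gradient-based solver are stand-ins (assumed, for illustration only) for the
# study's FE model and population-based search.
import numpy as np
from scipy.optimize import least_squares

def forward_model(params, strain):
    """Hypothetical stand-in for the FE simulation: a hyperbolic stress-strain curve."""
    stiffness, strength = params
    return strength * strain / (strength / stiffness + strain)

strain = np.linspace(1e-4, 0.05, 40)
true_params = np.array([50.0, 1.5])
measured = forward_model(true_params, strain) \
    + 0.01 * np.random.default_rng(1).normal(size=strain.size)  # noisy "measurement"

result = least_squares(
    lambda p: forward_model(p, strain) - measured,  # residual vector
    x0=[10.0, 1.0],                                 # initial guess inside the search space
    bounds=([1.0, 0.1], [200.0, 10.0]),             # admissible parameter ranges
)
print("identified parameters:", result.x)
```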
FREE VIBRATION FREQUENCIES OF THE CRACKED REINFORCED CONCRETE BEAMS - METHODS OF CALCULATIONS
(2010)
The paper presents a method for calculating the natural frequencies of cracked reinforced concrete beams based on a discrete model of the crack. The described method is based on the stiff finite element method, modified in such a way as to take into account local discontinuities (i.e. cracks). In addition, theoretical studies as well as experimental tests of concrete mechanics based on the discrete crack model were taken into consideration. The calculations were performed using the author's own numerical algorithm. Moreover, other methods for the dynamic calculation of reinforced concrete beams presented in standards and guidelines are discussed. Calculations performed using the different methods are compared with the results obtained in experimental tests.
A four-node quadrilateral shell element with smoothed membrane-bending based on Mindlin-Reissner theory is proposed. The element is a combination of a plate bending and membrane element. It is based on mixed interpolation where the bending and membrane stiffness matrices are calculated on the boundaries of the smoothing cells while the shear terms are approximated by independent interpolation functions in natural coordinates. The proposed element is robust, computationally inexpensive and free of locking. Since the integration is done on the element boundaries for the bending and membrane terms, the element is more accurate than the MITC4 element for distorted meshes. This will be demonstrated for several numerical examples.
On the mechanisms of shrinkage reducing admixtures in self consolidating mortars and concretes
(2010)
Self Consolidating Concrete – a dream has come true!(?) Self Consolidating Concrete (SCC) is mainly characterised by its special rheological properties. Without any vibration this concrete can be placed and compacted under its own weight, without segregation or bleeding. The use of such concrete can increase the productivity on construction sites and enable the use of a higher degree of well-distributed reinforcement for thin-walled structural members. This new technology also reduces health risks since, in contrast to the traditional handling of concrete, the emission of noise and vibration are substantially decreased. The specific mix design for self consolidating concretes was introduced in the 1980s in Japan. In comparison to normal vibrated concrete, an increased paste volume enables a good distribution of aggregates within the paste matrix, minimising the influence of aggregate friction on the concrete flow property. The introduction of inert and/or pozzolanic additives as part of the paste provides the required excess paste volume without using disproportionally high amounts of plain cement. Due to further developments of concrete admixtures such as superplasticizers, the cement paste can gain self-levelling properties without causing segregation of aggregates. Whereas SCC differs from normal vibrated concrete in its fresh attributes, it should reach similar properties in the hardened state. Due to the increased paste volume it usually shows higher shrinkage. Furthermore, owing to strength requirements, SCC is often produced at low water to cement ratios and hence may additionally suffer from autogenous shrinkage. This means that cracking caused by drying or autogenous shrinkage is a real risk for SCC and can compromise its durability, as cracks may serve as ingression paths for gases and salts or might permit leaching. For the time being SCC still exhibits increased shrinkage and cracking probability and hence may be discarded in many practical applications. This can be overcome by a better understanding of those mechanisms and the ways to mitigate them. It is a target of this thesis to contribute to this. How to cope with increased shrinkage of SCC? In general, engineers are facing severe problems related to shrinkage and cracking. Even for normal and high performance concrete, containing moderate amounts of binder, a lot of effort was put into counteracting shrinkage and avoiding cracking. For the time being these efforts resulted in the knowledge of how to distribute cracks rather than how to avoid them. The most efficient way to decrease shrinkage turned out to be to decrease the cement content of concrete down to a minimum but still sufficient amount. For SCC this obviously seems to contradict the requirement of a high paste volume. Indeed, the potential for shrinkage reduction is limited to some small-range modifications of the mix design following two major concepts. The first one is the reduction of the required paste volume by optimising the aggregate grading curve. The second one involves high-volume substitution of cement, preferentially using inert mineral additives. The optimisation of grading curves is limited by several severe practical issues. Problems start with the availability of sufficiently fractionated aggregates. Usually attempts fail because of the enormous effort in composing application-optimised grading curves or mix designs.
Due to durability reasons, the substitution rate for cement is limited depending on the application purpose and on the environmental exposure of the hardened concrete. In the early 1980s, Shrinkage Reducing Admixtures (SRA) were introduced to counteract drying shrinkage of concrete. The first publications explicitly dealing with SRA go back to Goto and Sato (Japan). They were published in 1983, which is also the time when the SCC concept was introduced. SRA-modified concretes showed a substantial reduction of free drying shrinkage, contributing to crack prevention or at least a significant decrease of crack width in situations of restrained drying shrinkage. Will shrinkage reducing admixtures contribute to a broader application of SCC? Within the last three decades, performance tests on several types of concrete proved the efficiency of shrinkage reducing admixtures. So, at least in terms of shrinkage and cracking, concretes in general and SCC in particular can benefit from SRA application. But "One man's meat is another man's poison", and with respect to the long term performance of SRA-modified concretes there are still several issues to be clarified. One of these concerns the impact of SRAs on cement hydration. It is therefore an issue to know whether changes in the hydrated phase composition, induced by SRA, result in undesired properties or decreased durability. Another issue is that the long term shrinkage reduction has to be evaluated. For example, one can wonder if SRA leaching may diminish or even eliminate long term shrinkage reduction and if the release of admixtures could be a severe environmental issue. It should also be noted that the basic mechanism or physical impact of SRA, as well as its implementation in recent models for shrinkage of concrete, is still being discussed. The present thesis tries to shed light on the role of SRA in self consolidating concrete, focusing on the three questions outlined above: basic mechanisms of cement hydration, physical impact on shrinkage and the sustainability of SRA application. Which contributions result from this study? Based on an extensive patent search, commercial SRAs could be identified as synergistic mixtures of non-ionic surfactants and glycols. This turns out to be most important information for more than one reason and is the subject of chapter 4. An abundant literature focuses on the properties of these non-ionic surfactants. From this rich pool of information, the behaviour of SRAs and their interactions in cementitious systems were better understood through this thesis. For example, it could be anticipated how SRAs behave in strong electrolytes and how surface activity, i.e. surface tension, and interparticle forces might be affected. The synergy effect regarding enhanced performance induced by the presence of additional glycol in SRAs could be derived from the literature on the co-surfactant nature of glycols. Generally it can now be said that glycols ensure that the non-ionic surfactant is properly distributed onto the paste interfaces to efficiently reduce surface tension. In the literature, the impact of organic matter on cement hydration was extensively studied for other admixtures such as superplasticizers. From there, the main impact factors related to the nature of these molecules could be identified. In addition, here again, the literature on non-ionic surfactants provides sufficient information to anticipate possible interactions of SRA with cement hydration based on the nature of non-ionic surfactants.
All in all, the extensive study on the nature of non-ionic surfactants, presented in chapter 4, provides a fundamental understanding of the behaviour of SRAs in cement paste. Taking a step further to relate this to the impact on drying and shrinkage required a review of recent models for drying and shrinkage of cement paste, as presented in chapter 3. There, it is shown that macroscopic thermodynamics of open pore systems can be successfully applied to predict drying-induced deformation, but that the surface activity of SRA still has to be implemented to explain the shrinkage reduction it causes. Because of severe issues concerning the importance of capillary pressure on shrinkage, a new macroscopic thermodynamic model was derived in a way that meets the requirements to properly incorporate the surface activity of SRA. This is the subject of chapter 5. Based on theoretical considerations, in chapter 5 the broader impact of SRA on drying cementitious matter could be outlined. In a next step, cement paste was treated as a deformable, open drying pore system. Thereby, the drying phenomena of SRA-modified mortars and concrete observed by other authors could be retrieved. This phenomenological consistency of the model constitutes an important contribution towards the understanding of SRA mechanisms. Another main contribution of this work came from introducing an artificial pore system, denominated the normcube. Using this model system, it could be shown how the evolution of the interfacial area and its properties interact in the presence of SRAs and how this impacts drying characteristics. In chapter 7, the surface activity of commercial SRAs in aqueous solution and synthetic pore solution was investigated. This shows how the electrolyte concentration of synthetic pore solution impacts the phase behaviour of SRA and, conversely, how the presence of SRA impacts the aqueous electrolyte solution. Whilst electrolytes enhance self-aggregation of SRAs into micelles and liquid crystals, the presence of SRAs leads to the precipitation of minerals such as syngenite and mirabilite. Moreover, electrolyte solutions containing SRAs show limited miscibility, or rather miscibility gaps, where the liquid separates into isotropic micellar solutions and surfactant-rich reverse micellar solutions. The investigation of the surface activity and phase behaviour of SRA revealed another important contribution. From macroscopic surface tension measurements, a relationship between the excess surface concentration of SRA, the bulk concentration of SRA and the exposed interfacial area could be derived. Based on this, it is now possible to predict the actual surface tension of the pore fluid in the course of drying once the evolution of the internal interfacial area is known. This is used later in this thesis to describe the specific drying and shrinkage behaviour of SRA-modified pastes and mortars. Calorimetric studies on normal Portland cement and composite binders revealed that SRA alone shows only a minor impact on hydration kinetics. In the presence of superplasticizers, however, cement hydration can be significantly decelerated. The delaying impact of SRA could be related to a selective deceleration of the silicate phase hydration. Moreover, it could be shown that portlandite precipitation in the presence of SRA is changed, turning the compact habitus into more or less layered structures. Thereby, the specific surface increases, causing the amount of physically bound water to increase, which in turn reduces the maximum degree of hydration achievable in sealed systems.
Extensive phase analysis shows that the hydrated phase composition of SRA-modified binders remains almost unaffected. The appearance of a temporary mineral phase could be detected by environmental scanning electron microscopy. As could be shown for synthetic pore solutions, syngenite precipitates during early hydration stages and is later consumed in the course of the aluminate hydration, i.e. when sulphates are depleted. Moreover, for some SRAs, the salting-out phenomena supposed to be enhanced in strong electrolytes could also be shown to take place. The resulting organic precipitates could be identified by SEM-EDX in cement paste and by X-ray diffraction on solid residues of synthetic pore solution. The presence of SRAs was also found to impact the microstructure of well-cured cement paste. Based on nitrogen adsorption measurements and mercury intrusion porosimetry, the amount of small pores is seen to increase with SRA dosage, whilst the overall porosity remains unchanged. The question regarding the sustainability of SRA application is the subject of chapter 10. By means of leaching studies it could be shown that SRA can be leached significantly. The mechanism could be identified as a diffusion process and a range of effective diffusion coefficients could be estimated. Thereby, the leaching of SRA can now be estimated for real structural members. However, while the admixture can be leached to a high extent in tank tests, the leaching rates in practical applications can be assumed to be low because of the much reduced contact with water. This could be proven by quantifying the admixture loss during long term drying and rewetting cycles. Despite a loss of admixture, the shrinkage reduction is hardly impacted. Moreover, the cyclic tests revealed that the total deformations in the presence of SRA remain low due to a lower extent of irreversible shrinkage deformations. Another important contribution towards the better understanding of the working mechanism of SRA for drying and shrinkage came from the same leaching tests. A significant fraction of SRA is found to be immobile and does not diffuse during leaching. This fraction of SRA is probably strongly associated with cement phases such as the calcium silicate hydrates or portlandite. Based on these findings, it is now also possible to quantify the amount of admixture active at the interfaces. This means that the evolution of surface tension in the course of drying can be approximated, which is a fundamental requirement for modelling shrinkage in the presence of SRA. The last experimental chapter of this study focuses on the working mechanism and impact of SRA on drying and shrinkage. Based on the thermodynamics of the open deformable pore system introduced in chapter 5, energy balances are set up using desorption and shrinkage isotherms of actual samples. Information on the distribution of SRA in the hydrated paste is used to estimate the actual surface tensions of the pore solution. In other words, this is the first time that the surface activity of the SRA in the course of drying is fully accounted for. From the energy balances, the evolution and properties of the internal interface are then obtained. This made it possible to explain why SRAs impact drying and shrinkage and in what specific range of relative humidity they are active. Summarising the findings of this thesis, it can be said that the understanding of the impact of SRAs on hydration, drying and shrinkage was brought forward.
Many of the new insights came from the careful investigation of the theory of non-ionic surfactants, something that the cement community had generally overlooked up to now.
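The relationship derived in chapter 7 between the excess surface concentration of SRA, its bulk concentration and the exposed interfacial area is not reproduced in the abstract; a plausible classical starting point for such a derivation is the Gibbs adsorption isotherm for a single non-ionic surfactant, sketched here as an assumption rather than as the thesis' actual relation:

```latex
% Gibbs adsorption isotherm: the excess surface concentration \Gamma follows
% from the slope of the measured surface tension \gamma over the logarithm of
% the bulk concentration c (R: gas constant, T: absolute temperature).
\[
  \Gamma \,=\, -\frac{1}{RT}\,\frac{\partial \gamma}{\partial \ln c}
\]
```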
Fuzzy functions are suitable for dealing with uncertainties and fuzziness in a closed form while maintaining the informational content. This paper tries to understand, elaborate, and explain the problem of interpolating crisp and fuzzy data using continuous fuzzy-valued functions. Two main issues are addressed here. The first covers how the fuzziness induced by the reduction and deficit of information, i.e. the discontinuity of the interpolated points, can be evaluated considering the interpolation method used and the density of the data. The second issue deals with the need to differentiate between impreciseness, and hence fuzziness, only in the interpolated quantity, impreciseness only in the location of the interpolated points, and impreciseness in both the quantity and the location. In this paper, a brief background of the concept of fuzzy numbers and of fuzzy functions is presented. The numerical side of computing with fuzzy numbers is concisely demonstrated. The problems of fuzzy polynomial interpolation, interpolation on meshes and mesh-free fuzzy interpolation are investigated. The integration of the previously noted uncertainty into a coherent fuzzy-valued function is discussed. Several sets of artificial and original measured data are used to examine the mentioned fuzzy interpolations.
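One elementary building block of such computations is level-wise interval arithmetic on alpha-cuts. The Python sketch below linearly interpolates two triangular fuzzy values one alpha-level at a time; the triangular membership functions and the specific numbers are assumptions for illustration, and the paper's general fuzzy-valued functions go well beyond this.

```python
# A minimal sketch of interpolating fuzzy quantities level-wise on alpha-cuts.
# Triangular fuzzy numbers are assumed purely for illustration; the paper
# treats far more general fuzzy-valued functions.
import numpy as np

def alpha_cut(tri, alpha):
    """Interval [lo, hi] of a triangular fuzzy number (a, m, b) at level alpha."""
    a, m, b = tri
    return np.array([a + alpha * (m - a), b - alpha * (b - m)])

def interpolate_fuzzy(x0, f0, x1, f1, x, alphas=np.linspace(0.0, 1.0, 5)):
    """Linear interpolation of two triangular fuzzy values, one alpha level at a time."""
    w = (x - x0) / (x1 - x0)  # both weights are nonnegative for x0 <= x <= x1
    return {float(a): (1.0 - w) * alpha_cut(f0, a) + w * alpha_cut(f1, a) for a in alphas}

# Fuzzy measurements at x=0 and x=1, evaluated in between at x=0.25
cuts = interpolate_fuzzy(0.0, (1.0, 2.0, 3.0), 1.0, (4.0, 5.0, 7.0), 0.25)
for a, (lo, hi) in sorted(cuts.items()):
    print(f"alpha={a:.2f}: [{lo:.3f}, {hi:.3f}]")
```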
After the excited palaver about the computer as a 'medium' and the accompanying academic rhetoric on the Internet, the question of what media philosophy can accomplish is raised once again - in this contribution as a media-anthropological taking stock: which technical transgressions define what is new about our situation?
Virtual reality systems offer substantial potential in supporting decision processes based purely on computer-based representations and simulations. The automotive industry is a prime application domain for such technology, since almost all product parts are available as three-dimensional models. The consideration of ergonomic aspects during assembly tasks, the evaluation of human-machine interfaces in the car interior, design decision meetings as well as customer presentations serve as but a few examples wherein the benefit of virtual reality technology is obvious. All these tasks require the involvement of a group of people with different areas of expertise. However, current stereoscopic display systems only provide correct 3D images for a single user, while other users see a more or less distorted virtual model. This is a major reason why these systems still face limited acceptance in the automotive industry. They need to be operated by experts, who have an advanced understanding of the particular interaction techniques and are aware of the limitations and shortcomings of virtual reality technology. The central idea of this thesis is to investigate the utility of stereoscopic multi-user systems for various stages of the car development process. Such systems provide multiple users with individual and perspectively correct stereoscopic images, which are key features and serve as the premise for the appropriate support of collaborative group processes. The focus of the research is on questions related to various aspects of collaboration in multi-viewer systems such as verbal communication, deictic reference, embodiments and collaborative interaction techniques. The results of this endeavor provide scientific evidence that multi-viewer systems improve the usability of VR applications for various automotive scenarios wherein co-located group discussions are necessary. The thesis identifies and discusses the requirements for these scenarios as well as the limitations of applying multi-viewer technology in this context. A particularly important gesture in real-world group discussions is referencing an object by pointing with the hand; the accuracy that can be expected in VR is made evident. A novel two-user seating buck is introduced for the evaluation of ergonomics in a car interior, and the requirements on avatar representations for users sitting in a car are identified. Collaborative assembly tasks require high precision. The novel concept of a two-user prop significantly increases the quality of such a simulation in a virtual environment and allows ergonomists to study the strain on workers during an assembly sequence. These findings contribute toward an increased acceptance of VR technology for collaborative development meetings in the automotive industry and other domains.
An introduction is given to Clifford analysis over pseudo-Euclidean space of arbitrary signature, called for short Ultrahyperbolic Clifford Analysis (UCA). UCA is regarded as a function theory of Clifford-valued functions satisfying a first order partial differential equation involving a vector-valued differential operator, called a Dirac operator. The formulation of UCA presented here pays special attention to its geometrical setting. This permits the identification of tensors which qualify as geometrically invariant Dirac operators and allows a position to be taken on the naturalness of contravariant and covariant versions of such a theory. In addition, a formal method is described to construct the general solution to the aforementioned equation in the context of covariant UCA.
In the last two decades, many cities have faced changes in their economic basis and have therefore adopted an entrepreneurial approach in their municipal administration, accompanied by city marketing strategies. Brazilian cities have also adopted this approach, as in the case of Florianópolis. Florianópolis has promoted advertising campaigns on the natural resources of the Island of Santa Catarina as well as on its quality of life in comparison to other cities. However, due also to such campaigns, it has experienced great demographic growth and, consequently, infrastructural and social problems. Nevertheless, it seems to have a good image within the national urban scenario and has commonly been considered an “urban consumption dream” for many Brazilians. This paradoxical situation is the reason why it has been chosen as the research object of this dissertation. Thus, the questions of this research are: is there a gap between the promise and the performance of the city of Florianópolis? If so, can tourists and residents recognize it? And finally, how can this gap be demonstrated? Accordingly, the main objective of this research is to propose a conformity assessment approach applicable to cities, by which the content of city advertisement campaigns can be compared to the city's performance indicators and the satisfaction degree of its consumers. This approach is composed of different methods: literature and legislation reviews, semi-structured and structured interviews with experts and inhabitants, an urban centrality development analysis, a qualitative discourse analysis of advertising material (including images), a qualitative content analysis of newspaper reports and a questionnaire survey. Finally, the theses are: yes, there is a gap between the promise and the performance of Florianópolis; this promise is a result of city marketing campaigns which advertise its natural features while hiding its urban aspects, supported by some political and private actors mainly interested in the development of tourism and the real estate market in the city; this gap has already been recognized by tourists and, more intensively, by residents; the selected methods worked as a kind of conformity assessment for cities and tourist destinations; and, last but not least, since there is a gap, it designates the practice of “make-up urbanism”. Research limitations are the short time frame covered by this analysis and the small, non-representative samples. However, its relevance lies in the attempt to fill two disciplinary lacunas: on the one hand, a conformity assessment approach for cities and the creation of knowledge about Florianópolis and its presentation at an international level; on the other hand, the transfer of this approach to other cities, which would help to explain a (common) contemporary urban phenomenon and appeal for more ethical conduct and transparency in the practices of city marketing.
As numerical techniques for solving PDEs or integral equations become more sophisticated, the treatment of the generation of the geometric inputs should follow that numerical advancement. This document describes the preparation of CAD data so that they can later be applied to hierarchical BEM or FEM solvers. For the BEM case, the geometric data are described by surfaces which we want to decompose into several curved four-sided patches. We show the treatment of untrimmed and trimmed surfaces. In particular, we show how to prevent smooth corners, which are bad for diffeomorphism. Additionally, we consider the problem of characterizing whether a Coons map is a diffeomorphism from the unit square onto a planar domain delineated by four given curves. We aim primarily at having not only theoretically correct conditions but also practically efficient methods. As for the FEM geometric preparation, we need to decompose a 3D solid into a set of curved tetrahedra. First, we describe a method of decomposition that does not add too many Steiner points (additional points not belonging to the initial boundary nodes of the boundary surface). Then, we provide a methodology for efficiently checking whether a tetrahedral transfinite interpolation is regular. That is done by a combination of a degree reduction technique and subdivision. Along with the method description, we also report on some interesting practical results from real CAD data.
An energy method based on the Lagrange principle of the minimum of total potential energy is presented to calculate the stresses and strains of composite cross-sections. The stress-strain relation of each partition of the cross-section can be an arbitrary piecewise continuous function. The strain energy is transformed into a line integral by Gauss's integral theorem. The total strain of each partition of the cross-section is split into load-dependent strain and pre-strain. Pre-strains have to be taken into account when the cross-section is pre-stressed, retrofitted or influenced by shrinkage, temperature etc. The unconstrained minimum problem can be solved for each load combination using standard software. The application of the method presented in the paper is demonstrated by means of examples.
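A minimal numerical sketch of the method's core idea follows, assuming plane sections (strain = eps0 + kappa*z), two linear-elastic partitions and illustrative dimensions and loads; the paper additionally allows arbitrary piecewise continuous stress-strain laws, pre-strains and the line-integral evaluation of the strain energy, none of which are reproduced here.

```python
# A minimal sketch of the energy method for a two-material rectangular
# cross-section under a normal force N and bending moment M. All material laws
# and dimensions are assumptions for illustration.
import numpy as np
from scipy.optimize import minimize

B = 0.3           # width [m]
layers = [        # (z_bottom [m], z_top [m], strain-energy density W(eps) [J/m^3])
    (-0.25, 0.20, lambda e: 0.5 * 30e9 * e**2),   # concrete part, linear elastic
    (0.20, 0.25, lambda e: 0.5 * 200e9 * e**2),   # steel part, linear elastic
]
N, M = 0.5e6, 0.2e6  # applied normal force [N] and moment [Nm]

def total_potential(x):
    eps0, kappa = x  # strain at the reference axis and curvature (plane sections)
    z = np.linspace(-0.25, 0.25, 401)
    W = np.zeros_like(z)
    for z_bot, z_top, w in layers:
        mask = (z >= z_bot) & (z <= z_top)
        W[mask] = w(eps0 + kappa * z[mask])
    internal = B * float(np.sum(0.5 * (W[1:] + W[:-1]) * np.diff(z)))  # strain energy
    return internal - (N * eps0 + M * kappa)  # minus the external work

# a derivative-free minimizer is sufficient for two unknowns
res = minimize(total_potential, x0=[0.0, 0.0], method="Nelder-Mead")
print("eps0 = %.3e, kappa = %.3e 1/m" % tuple(res.x))
```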
We consider a structural truss problem where all of the physical model parameters are uncertain: not just the material values and applied loads, but also the positions of the nodes are assumed to be inexact but bounded and are represented by intervals. Such uncertainty may typically arise from imprecision during the process of manufacturing or construction, or round-off errors. In this case the application of the finite element method results in a system of linear equations with numerous interval parameters which cannot be solved conventionally. Applying a suitable variable substitution, an iteration method for the solution of a parametric system of linear equations is firstly employed to obtain initial bounds on the node displacements. Thereafter, an interval tightening (pruning) technique is applied, firstly on the element forces and secondly on the node displacements, in order to obtain tight guaranteed enclosures for the interval solutions for the forces and displacements.
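The data representation described above can be illustrated with a one-bar example, u = F*L/(E*A), where every physical parameter is an interval. The Python sketch below implements just enough interval arithmetic for this; the paper's parametric iteration and pruning for complete truss systems are substantially more involved and are not reproduced.

```python
# A minimal sketch of interval arithmetic applied to a single bar,
# u = F*L/(E*A). All interval bounds are assumptions for illustration.
from dataclasses import dataclass

@dataclass
class Interval:
    lo: float
    hi: float
    def __mul__(self, other):
        p = [self.lo * other.lo, self.lo * other.hi,
             self.hi * other.lo, self.hi * other.hi]
        return Interval(min(p), max(p))
    def __truediv__(self, other):
        assert other.lo > 0 or other.hi < 0, "division by an interval containing zero"
        return self * Interval(1.0 / other.hi, 1.0 / other.lo)

F = Interval(9.5e3, 10.5e3)    # load [N], +-5 %
L = Interval(1.995, 2.005)     # length [m], manufacturing tolerance
E = Interval(200e9, 210e9)     # Young's modulus [Pa]
A = Interval(9.9e-4, 10.1e-4)  # cross-section area [m^2]

u = F * L / (E * A)  # guaranteed enclosure of the axial displacement
print(f"displacement enclosure: [{u.lo * 1e3:.4f}, {u.hi * 1e3:.4f}] mm")
```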
The paper is devoted to a study of properties of homogeneous solutions of the massless field equation in higher dimensions. We first treat the case of dimension 4. Here we use the two-component spinor language (developed for the purposes of general relativity). We describe how massless field operators are related to higher spin analogues of the de Rham sequence - the so-called Bernstein-Gel'fand-Gel'fand (BGG) complexes - and how they are related to the twisted Dirac operators. Then we study a similar question in higher (even) dimensions. Here we have to use more tools from the representation theory of the orthogonal group. We recall the definition of massless field equations in higher dimensions and the relations to higher-dimensional conformal BGG complexes. Then we discuss properties of homogeneous solutions of the massless field equation. Using some recent techniques for the decomposition of tensor products of irreducible $Spin(m)$-modules, we are able to add some new results on the structure of the spaces of homogeneous solutions of the massless field equation. In particular, we show that the kernel of the massless field equation in a given homogeneity contains at least one specific irreducible submodule.
In this dissertation, a new, unique and original biaxial device for testing unsaturated soil was designed and developed. A study on the mechanical behaviour of unsaturated sand in plane-strain conditions using the new device is presented. The tests were mainly conducted on Hostun sand specimens. A series of experiments including basic characterisation, soil water characteristic curves, and compression biaxial tests on dry, saturated, and unsaturated sand was conducted. A set of bearing capacity tests of a strip model footing on unsaturated sand was performed. Additionally, since the presence of fine content (i.e., clay) influences the behaviour of soils, soil water characteristic tests were also performed on sand-kaolin mixture specimens.
A stress-based remodeling approach is used to investigate the influence of the collagen architecture in human eye tissues on the biomechanical response of the lamina cribrosa, with a particular focus on the stress environment of the nerve fibers. This approach is based on a multi-level biomechanical framework, where the biomechanical properties of eye tissues are derived from a single crimped fibril at the micro-scale via the collagen network of distributed fibrils at the meso-scale to the incompressible and anisotropic soft tissue at the macro-scale. Biomechanically induced remodeling of the collagen network is captured on the meso-scale by allowing for a continuous reorientation of collagen fibrils. To investigate the multi-scale phenomena related to glaucomatous neuropathy, a generalized computational homogenization scheme is applied to a coupled two-scale analysis of the human eye, considering a numerical macro- and meso-scale model of the lamina cribrosa.
Besides home entertainment and business presentations, video projectors are powerful tools for modulating images spatially as well as temporally. The re-evolving need for stereoscopic displays increases the demand for low-latency projectors and recent advances in LED technology also offer high modulation frequencies. Combining such high-frequency illumination modules with synchronized, fast cameras, makes it possible to develop specialized high-speed illumination systems for visual effects production. In this thesis we present different systems for using spatially as well as temporally modulated illumination in combination with a synchronized camera to simplify the requirements of standard digital video composition techniques for film and television productions and to offer new possibilities for visual effects generation. After an overview of the basic terminology and a summary of related methods, we discuss and give examples of how modulated light can be applied to a scene recording context to enable a variety of effects which cannot be realized using standard methods, such as virtual studio technology or chroma keying. We propose using high-frequency, synchronized illumination which, in addition to providing illumination, is modulated in terms of intensity and wavelength to encode technical information for visual effects generation. This is carried out in such a way that the technical components do not influence the final composite and are also not visible to observers on the film set. Using this approach we present a real-time flash keying system for the generation of perspectively correct augmented composites by projecting imperceptible markers for optical camera tracking. Furthermore, we present a system which enables the generation of various digital video compositing effects outside of completely controlled studio environments, such as virtual studios. A third temporal keying system is presented that aims to overcome the constraints of traditional chroma keying in terms of color spill and color dependency. ...
Reducing energy consumption is one of the major challenges of the present day and will remain so for future generations. The emerging EU directives relating to energy (the EU EPBD and the EU Directive on Emissions Trading) now place demands on building owners to rate the energy performance of their buildings for efficient energy management. Moreover, European legislation (Directive 2006/32/EC) requires facility managers to reduce building energy consumption and operational costs. Currently, sophisticated building services systems are available that integrate off-the-shelf building management components. However, this ad-hoc combination presents many difficulties to building owners in the management and upgrade of these systems. This paper addresses the need for integration concepts, holistic monitoring and analysis methodologies, life-cycle oriented decision support and sophisticated control strategies through the seamless integration of people, ICT devices and computational resources, introducing the newly developed integrated system architecture. The first concept was applied to a residential building and the results were elaborated to improve current building conditions.
From the passenger's perspective, punctuality is one of the most important features of tram route operation. We present a stochastic simulation model with a special focus on determining important factors of influence. The statistical analysis is based on large samples (sample size nearly 2000) accumulated from comprehensive measurements on eight tram routes in Cracow. For the simulation, we are not only interested in average values but also in stochastic characteristics such as the variance and other properties of the distribution. A realization of tram operations is assumed to be a sequence of running times between successive stops and times spent by the tram at the stops, divided into passenger alighting and boarding times and times waiting for the possibility of departure. The running time depends on the kind of track separation, including the priorities at traffic lights, the length of the section and the number of intersections. For every type of section, a linear mixed regression model describes the average running time and its variance as functions of the length of the section and the number of intersections. The regression coefficients are estimated by the iteratively re-weighted least squares method. Alighting and boarding time mainly depends on the type of vehicle, the number of passengers alighting and boarding, and the occupancy of the vehicle. For the distribution of the time waiting for the possibility of departure, suitable distributions such as the Gamma distribution and the Lognormal distribution are fitted.
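The distribution-fitting step for the departure waiting times can be sketched as follows; synthetic Gamma-distributed data stand in for the Cracow measurements, which are not available here, so the fitted numbers are purely illustrative.

```python
# A minimal sketch of the distribution fitting step: Gamma and Lognormal are
# fitted to departure waiting times and compared by log-likelihood. The
# synthetic data below are an assumption standing in for the real measurements.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
waiting = rng.gamma(shape=2.0, scale=8.0, size=2000)  # hypothetical waiting times [s]

for name, dist in [("Gamma", stats.gamma), ("Lognormal", stats.lognorm)]:
    params = dist.fit(waiting, floc=0.0)       # fix the location parameter at zero
    loglik = np.sum(dist.logpdf(waiting, *params))
    print(f"{name}: parameters={np.round(params, 3)}, log-likelihood={loglik:.1f}")
```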
Models in the context of engineering can be classified into process-based and data-based models. Whereas a process-based model describes the problem by an explicit formulation, a data-based model is often used where no such mapping can be found due to the high complexity of the problem. The Artificial Neural Network (ANN) is a data-based model which is able to "learn" a mapping from a set of training patterns. This paper deals with the application of ANNs in time-dependent bathymetric models. A bathymetric model is a geometric representation of the sea bed. Typically, a bathymetry is measured and afterwards described by a finite set of measured data. Measuring at different time steps leads to a time-dependent bathymetric model. To obtain a continuous surface, the measured data have to be interpolated by some interpolation method. Unlike explicitly given interpolation methods, the presented time-dependent bathymetric model using an ANN trains the approximated surface in space and time in an implicit way. The ANN is trained with topographic measured data, which consist of the location (x,y) and time t. In other words, the ANN is trained to reproduce the mapping h = f(x,y,t), and afterwards it is able to approximate the topographic height for a given location and date. In a further step, this model is extended to take meteorological parameters into account. This leads to a model of a more predictive character.
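A minimal sketch of the described data-based model: a small multilayer perceptron is trained on (x, y, t) samples to reproduce the mapping h = f(x,y,t) and is then queried at an unseen location and date. The synthetic sea-bed surface and the network size are assumptions for illustration; the paper's measured data and network design are not reproduced.

```python
# A minimal sketch of a time-dependent bathymetric model as an ANN: the network
# learns h = f(x, y, t) from scattered samples. The synthetic surface below is
# an assumption for illustration.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(3)
X = rng.uniform(0.0, 1.0, size=(2000, 3))  # columns: location x, y and time t
# hypothetical bathymetry: a dune-like pattern slowly migrating in time
h = np.sin(4.0 * X[:, 0] + 0.5 * X[:, 2]) * np.cos(3.0 * X[:, 1]) - 2.0

model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=3000, random_state=0)
model.fit(X, h)

# Query the trained surface at a location and date not contained in the data
print("h(0.3, 0.7, 0.5) ~", model.predict([[0.3, 0.7, 0.5]])[0])
```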
In the past, several types of Fourier transforms in Clifford analysis have been studied. In this paper, first an overview of these different transforms is given. Next, a new equation in a Clifford algebra is proposed, the solutions of which will act as kernels of a new class of generalized Fourier transforms. Two solutions of this equation are studied in more detail, namely a vector-valued solution and a bivector-valued solution, as well as the associated integral transforms.
In this paper three different formulations of a Bernoulli-type free boundary problem are discussed. By analyzing the shape Hessian in the case of matching data, a distinction is made between well-posed and ill-posed formulations. A nonlinear Ritz-Galerkin method is applied for discretizing the shape optimization problem. In the case of well-posedness, existence and convergence of the approximate shapes are proven. In combination with a fast boundary element method, efficient first and second order shape optimization algorithms are obtained.
This paper describes the application of interval calculus to the calculation of plate deflection, taking into account the inevitable and acceptable tolerance of the input data (input parameters). A simply supported reinforced concrete plate loaded by uniformly distributed loads was taken as an example. Several parameters that influence the plate deflection are given as closed intervals. Accordingly, the results are obtained as intervals, so it is possible to follow the direct influence of a change of one or more input parameters on the output values (in our example, the deflection) by using one model and one computing procedure. The described procedure could be applied to any FEM calculation in order to keep calculation tolerances, ISO tolerances, and production tolerances within close (admissible) limits. Wolfram Mathematica has been used as the tool for the interval calculations.
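The paper performs its interval calculations in Wolfram Mathematica; the following Python re-sketch illustrates the same idea on the closed-form centre deflection of a simply supported square plate, w = 0.00406*q*a^4/D with D = E*t^3/(12(1-nu^2)). All interval bounds are assumed for illustration. Because the deflection is monotone in each parameter here, sweeping the interval endpoints already yields the exact output interval; general FEM calculations need proper interval arithmetic instead.

```python
# A minimal re-sketch of interval input data propagated through a closed-form
# plate deflection formula. All tolerances below are assumptions.
from itertools import product

ALPHA = 0.00406  # centre-deflection coefficient, simply supported square plate

def deflection(q, a, E, t, nu):
    D = E * t**3 / (12.0 * (1.0 - nu**2))  # plate bending stiffness
    return ALPHA * q * a**4 / D

# tolerances of the input data as closed intervals (endpoints only)
q_iv, a_iv = (9.0e3, 11.0e3), (3.98, 4.02)   # load [N/m^2], span [m]
E_iv, t_iv = (28e9, 32e9), (0.198, 0.202)    # modulus [Pa], thickness [m]

values = [deflection(q, a, E, t, 0.2)
          for q, a, E, t in product(q_iv, a_iv, E_iv, t_iv)]
print(f"deflection interval: [{min(values)*1e3:.3f}, {max(values)*1e3:.3f}] mm")
```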
NUMERICAL SIMULATION OF THERMO-HYGRAL ALKALI-SILICA REACTION MODEL IN CONCRETE AT THE MESOSCALE
(2010)
This research aims to model alkali-silica reaction (ASR) gel expansion in concrete under the influence of hygral and thermal loading, based on experimental results. ASR provokes a heterogeneous expansion in concrete leading to dimensional changes and eventually the premature failure of the concrete structure. This can result in map cracking on the concrete surface, which decreases the concrete stiffness. Factors that influence ASR include the cement alkalinity, the amount of deleterious silica in the aggregate used, the concrete porosity, and external factors such as temperature, humidity and external sources of alkali from the ingression of deicing salts. Uncertainties in the influential factors make ASR a difficult phenomenon to model; hence the approach taken here is stochastic modelling: a numerical simulation of a concrete cross-section integrating experimental results from the Finger-Institute for Building Materials Science at the Bauhaus-Universität Weimar. The problem is formulated as a multi-field problem, combining heat transfer, fluid transfer and the reaction rate model with the mechanical stress field. The simulation is performed as a mesoscale model considering aggregates and mortar matrix. The reaction rate model is calibrated with experimental results for concrete expansion due to ASR gained from concrete prism tests. Expansive strain values for transient environmental conditions are then determined from calculations based on the reaction rate model. The resulting models will make it possible to predict the rate of ASR expansion and the cracking propagation that may arise.
In recent years special hypercomplex Appell polynomials have been introduced by several authors and their main properties have been studied by different methods and with different objectives. Like in the classical theory of Appell polynomials, their generating function is a hypercomplex exponential function. The observation that this generalized exponential function has, for example, a close relationship with Bessel functions confirmed the practical significance of such an approach to special classes of hypercomplex differentiable functions. Its usefulness for combinatorial studies has also been investigated. Moreover, an extension of those ideas led to the construction of complete sets of hypercomplex Appell polynomial sequences. Here we show how this opens the way for a more systematic study of the relation between some classes of Special Functions and Elementary Functions in Hypercomplex Function Theory.
For many applications, nonuniformly distributed functional data are given, which leads to large-scale scattered data problems. We wish to represent the data in terms of a sparse representation with a minimal number of degrees of freedom. For this, an adaptive scheme is proposed which operates in a coarse-to-fine fashion using a multiscale basis. Specifically, we investigate hierarchical bases using B-splines and spline (pre)wavelets. At each stage a least-squares approximation of the data is computed. We take into account different requirements arising in large-scale scattered data fitting: we discuss the fast iterative solution of the least-squares systems, regularization of the data, and the treatment of outliers. A particular application concerns the approximate continuation of harmonic functions, an issue arising in geodesy.
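One ingredient named above, the stage-wise least-squares approximation, can be sketched with scipy's least-squares spline fitting, refining the knot set coarse-to-fine. The test function, noise level and knot counts are assumptions for illustration; the hierarchical (pre)wavelet bases, regularization and outlier treatment of the paper are not reproduced.

```python
# A minimal sketch of a least-squares B-spline fit to scattered 1D data,
# refined coarse-to-fine by doubling the number of interior knots. Test data
# and knot counts are assumptions for illustration.
import numpy as np
from scipy.interpolate import LSQUnivariateSpline

rng = np.random.default_rng(4)
x = np.sort(rng.uniform(0.0, 1.0, 300))            # nonuniformly distributed sites
y = np.sin(2.0 * np.pi * x) + 0.05 * rng.normal(size=x.size)

for n_knots in (4, 8, 16):                           # coarse-to-fine refinement
    knots = np.linspace(0.0, 1.0, n_knots + 2)[1:-1]  # interior knots only
    spline = LSQUnivariateSpline(x, y, knots)        # least-squares approximation
    rms = np.sqrt(np.mean((spline(x) - y) ** 2))
    print(f"{n_knots:2d} interior knots: RMS error {rms:.4f}")
```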
The federal-state programme "Soziale Stadt" (Social City) has the task of supporting urban districts with special development needs. A negative image is on the one hand a cause and on the other hand a consequence of social and urban-planning problems and developments in a district. The programme is intended to break this downward spiral. The author approaches the concept of image from an interdisciplinary perspective and shows the effects of the "Soziale Stadt" programme on the large housing estate Jena-Winzerla. Using a semantic differential, the study captures the image of the district as judged by its residents and compares it with the view from outside. The influence of the programme on the image is examined through expert interviews. The example shows the developments that the "Soziale Stadt" programme can bring about, but it also reveals the programme's limits. Against this background, the study concludes with considerations of the directions in which developments within the funding programme should be steered in order to improve the image sustainably and to support the affected districts adequately.
Quality is one of the most important properties of a product. Providing optimal quality can reduce the costs of rework, scrap, recalls, or even legal action, while satisfying the customers' demand for reliability. The aim is to achieve "built-in" quality within the product development process (PDP). The common approach for this is robust design optimization (RDO), which uses stochastic quantities as constraints and/or objectives to obtain a robust and reliable optimal design. In classical approaches, the effort required for the stochastic analysis multiplies with the complexity of the optimization algorithm. The suggested approach shows that this effort can be reduced enormously by reusing previously obtained data: the support-point set of an underlying metamodel is filled iteratively in regions of interest during the ongoing optimization, where necessary. A simple example shows that this is possible without a significant loss of accuracy.
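The general mechanism of iteratively enriching a metamodel during optimization can be sketched as follows; the surrogate type (RBF), the infill rule, and the test function are assumptions for illustration, not the method of the paper.

    import numpy as np
    from scipy.interpolate import RBFInterpolator
    from scipy.optimize import minimize

    # Illustrative surrogate-assisted optimization: optimize on a cheap RBF
    # metamodel and add true-model evaluations (support points) only where
    # the optimizer is currently searching.

    def expensive_model(x):               # stand-in for a costly simulation
        return (x[0] - 1.0)**2 + 2.0 * (x[1] + 0.5)**2 + 0.3 * np.sin(5 * x[0])

    rng = np.random.default_rng(1)
    X = rng.uniform(-2.0, 2.0, size=(10, 2))          # initial support points
    F = np.array([expensive_model(x) for x in X])

    x_best = X[np.argmin(F)]
    for it in range(15):
        surrogate = RBFInterpolator(X, F)             # refit metamodel
        res = minimize(lambda x: surrogate(x[None, :])[0], x_best,
                       bounds=[(-2, 2), (-2, 2)])
        x_new = res.x
        if np.min(np.linalg.norm(X - x_new, axis=1)) < 1e-8:
            x_new = x_new + rng.normal(scale=1e-3, size=2)  # avoid duplicates
        # Infill: evaluate the true model at the surrogate optimum and
        # enrich the support-point set there (the "region of interest").
        f_new = expensive_model(x_new)
        X = np.vstack([X, x_new])
        F = np.append(F, f_new)
        if f_new < F[:-1].min():
            x_best = x_new

    print(f"best point: {x_best}, true value: {expensive_model(x_best):.4f}")

In an RDO setting, expensive_model would itself contain the stochastic analysis (e.g., returning a robustness measure), which is precisely why reusing its evaluations pays off.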
Since the 1990s, the Pascal matrix, its generalizations, and its applications have been the focus of a large number of publications. As is well known, the Pascal matrix, the symmetric Pascal matrix, and other special matrices of Pascal type play an important role in many scientific areas, among them numerical analysis, combinatorics, number theory, probability, image processing, signal processing, and electrical engineering. We present a unified approach to matrix representations of special polynomials in several hypercomplex variables (new Bernoulli, Euler, etc. polynomials), extending the results of H. Malonek, G. Tomaz: Bernoulli polynomials and Pascal matrices in the context of Clifford Analysis, Discrete Appl. Math. 157(4) (2009) 838-847. The hypercomplex version of a new Pascal matrix with block structure, which resembles the ordinary one for polynomials of one variable, will be discussed in detail.
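As a reminder of the one-variable object being generalized: the lower-triangular Pascal matrix collects the binomial coefficients, and it is the matrix exponential of the nilpotent "creation" matrix, e.g. in the 4x4 case

\[
P=\left(\binom{i}{j}\right)_{i,j=0}^{3}=
\begin{pmatrix}1&0&0&0\\ 1&1&0&0\\ 1&2&1&0\\ 1&3&3&1\end{pmatrix}
=e^{H},
\qquad
H=\begin{pmatrix}0&0&0&0\\ 1&0&0&0\\ 0&2&0&0\\ 0&0&3&0\end{pmatrix},
\]

and P acts on the vector of powers (1, x, x^2, x^3)^T as the shift x -> x+1, since the i-th row yields \(\sum_j \binom{i}{j} x^j = (x+1)^i\).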
Nodal integration of finite elements has been investigated recently. Compared with full integration it shows better convergence when applied to incompressible media, allows easier remeshing, and greatly reduces the number of material evaluation points, thus improving efficiency. Furthermore, understanding it may help to create new integration schemes in meshless methods as well. The new integration technique requires a nodally averaged deformation gradient. For the tetrahedral element it is possible to formulate a nodal strain which passes the patch test. On the downside, it introduces non-physical low-energy modes. Most of these "spurious modes" are local deformation maps of neighbouring elements. Existing stabilization schemes rely on adding a stabilizing potential to the strain energy; this stabilization is discussed within this article. Its drawbacks are easily identified in numerical experiments: nonlinear material laws are not well represented, plastic strains may often be underestimated, and geometrically nonlinear stabilization greatly reduces computational efficiency. The article reinterprets nodal integration as imposing a nonconforming C0-continuous strain field on the structure. By doing so, the origins of the spurious modes are discussed and two methods are presented that solve this problem. First, a geometric constraint is formulated and solved using a mixed formulation of Hu-Washizu type. This assumption leads to a consistent representation of the strain energy while eliminating spurious modes. The solution is exact, but only of theoretical interest since it produces global support. Second, an integration scheme is presented that approximates the stabilization criterion. The latter leads to a highly efficient scheme, which can even be extended to other finite element types such as hexahedra. The numerical efficiency, convergence behaviour, and stability of the new method are validated using linear tetrahedral and hexahedral elements.
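The basic building block behind a nodally averaged deformation gradient is volume-weighted nodal averaging of element quantities, sketched below; the data layout and the equal per-node volume shares are assumptions for illustration.

    import numpy as np

    # Volume-weighted nodal averaging: each node receives the average of
    # the (constant) strains of its adjacent elements, weighted by the
    # element-volume share attributed to the node.

    def nodal_average(elem_nodes, elem_values, elem_volumes, n_nodes):
        """elem_nodes:   (n_elem, 4) tetrahedron connectivity
           elem_values:  (n_elem, 6) strain in Voigt notation per element
           elem_volumes: (n_elem,)   element volumes"""
        acc = np.zeros((n_nodes, elem_values.shape[1]))
        wsum = np.zeros(n_nodes)
        for nodes, val, vol in zip(elem_nodes, elem_values, elem_volumes):
            w = vol / len(nodes)          # equal share per element node
            for n in nodes:
                acc[n] += w * val
                wsum[n] += w
        return acc / wsum[:, None]

    # Two tetrahedra sharing a face, with different constant strains:
    conn = np.array([[0, 1, 2, 3], [0, 1, 2, 4]])
    strains = np.array([[1e-3, 0, 0, 0, 0, 0],
                        [3e-3, 0, 0, 0, 0, 0]])
    vols = np.array([1.0, 2.0])
    print(nodal_average(conn, strains, vols, n_nodes=5)[0])  # node 0

It is this averaging step that both passes the patch test and, by smearing strain across element boundaries, admits the low-energy modes discussed above.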
A practical framework for generating cross-correlated random fields with a specified marginal distribution function, an autocorrelation function, and cross correlation coefficients is presented in the paper. The contribution builds on a recent journal paper [1]. The approach relies on well-known series expansion methods for the simulation of a Gaussian random field. The proposed method requires all cross-correlated fields over the domain to share an identical autocorrelation function, and the cross correlation structure between each pair of simulated fields to be defined simply by a cross correlation coefficient. These relations result in specific properties of the eigenvectors of the covariance matrices of the discretized fields over the domain. These properties are used to decompose the eigenproblem, which must normally be solved when computing the series expansion, into two much smaller eigenproblems. This decomposition represents a significant reduction of the computational effort. Non-Gaussian components of a multivariate random field are simulated via a memoryless transformation of the underlying Gaussian random fields, for which the Nataf model is employed to modify the correlation structure. In this method the autocorrelation structure of each field is fulfilled exactly, while the cross correlation is only approximated. The associated errors can be computed before performing the simulations; it is shown that the errors occur mainly in the cross correlation between distant points and that they are negligibly small in practical situations.
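The two-level decomposition idea can be sketched for the Gaussian part as follows: one eigenproblem for the shared autocovariance (n x n) and one small factorization for the cross-correlation matrix (m x m), instead of one (mn x mn) eigenproblem. The exponential autocorrelation and all numbers are illustrative assumptions.

    import numpy as np

    # Simulate m cross-correlated Gaussian fields sharing one
    # autocorrelation function via a series expansion.

    n = 200                                  # grid points
    xs = np.linspace(0.0, 10.0, n)
    corr_len = 2.0
    Sigma = np.exp(-np.abs(xs[:, None] - xs[None, :]) / corr_len)

    lam, phi = np.linalg.eigh(Sigma)         # shared autocovariance eigenpairs
    lam = np.clip(lam, 0.0, None)

    C = np.array([[1.0, 0.7],
                  [0.7, 1.0]])               # cross correlation between fields
    L = np.linalg.cholesky(C)                # small (m x m) factor

    rng = np.random.default_rng(42)
    m = C.shape[0]
    xi = L @ rng.standard_normal((m, n))     # correlate coefficients across fields
    fields = (phi * np.sqrt(lam)) @ xi.T     # series expansion, shape (n, m)

    emp = np.corrcoef(fields[:, 0], fields[:, 1])[0, 1]
    print(f"target cross correlation 0.70, empirical in one sample: {emp:.2f}")

Non-Gaussian marginals would then be obtained by a memoryless transformation of these fields, with the Nataf model correcting the underlying Gaussian correlation, as described above.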
Since 1969, calculations of the total costs of road traffic on the federal trunk roads of the Federal Republic of Germany, and of their allocation to road users, have been carried out continuously. The results of the road cost calculations of 2002 and 2007 form the basis of the distance-based toll meanwhile introduced on the German motorway network for trucks with a permissible gross weight of at least twelve tonnes. This implements the requirement of EU Directive 1999/62/EC, according to which average road user charges should be based on the costs of constructing, operating, and extending the respective road network. EU Directive 2006/38/EC signals the next stage in the development of road user charging: in the future, external costs are also to be included in the calculation. A first step towards accounting for these external costs was taken with the preparation of a handbook within an EU research project. Owing to the differing framework conditions in the EU member states, the handbook does not contain exact calculation rules; instead, it presents various methodological approaches from existing studies on external costs, gives recommendations on the choice of method, and provides estimates of the magnitude of the external costs. The studies on the external costs of transport carried out in Europe in recent years are characterized by similar procedures, which, however, show critical aspects from the author's point of view, above all with regard to the type of cost accounting and the cost rates used. In this dissertation, an alternative calculation methodology for determining section-, vehicle-class- and mileage-related external costs for motorways is therefore developed and applied to a selected example network. In several essential points it deviates from the procedure predominantly chosen in current studies in order to present a different perspective. The thesis thus contributes substantially to extending the state of knowledge on methods for calculating the external costs of road traffic. The calculation methodology developed here is also intended as the basis for a procedure applicable in practice and is therefore designed to be easily transferable to the entire German motorway network. The sections correspond to the stretches between two motorway junctions. A distinction is made between the two vehicle classes "trucks with a permissible gross weight of 12 t or more" and "other vehicles". Although at present a toll is levied only on trucks with a permissible gross weight of 12 t or more, the methodology developed makes it possible to state mileage-related external costs for all motor vehicles. The inclusion of external benefits is briefly discussed in this context; the focus, however, is on the external costs. The thesis first presents definitions of the essential terminology insofar as these appear necessary for understanding the subsequent discussion and specification of the foundations of the developed calculation methodology. This discussion and specification covers the type of cost accounting, the valuation procedures for determining the monetary values, the discount rate, the cost categories to be considered, the quantity structure, and the allocation procedure.
Subsequently, the cost categories considered are presented in detail on the basis of existing studies and the author's own considerations, and the monetary values are determined. In addition, the allocation procedure and the quantity structure to be used for the calculation are presented separately for each category. The developed calculation methodology is then applied to an example network (the motorway network of Thuringia). Besides the presentation of the study area, the calculation of the external costs, and the disaggregated presentation of the results, the classification of the example network into different price categories on the basis of the section-related results is discussed; on this basis the external costs could be internalized via road user charges. Within a sensitivity analysis, individual assumptions of the calculation methodology and individual cost rates of the monetary values are varied. The effects of these variations are again demonstrated on the example network, for which the cost calculations are repeated. Finally, open questions and recommendations for further investigations are stated.
The quality assurance of the content of building information models (BIM) is of great importance in view of the steadily growing use of such models for different use cases. It has to be carried out, in accordance with the project goal, for every software application involved in the data exchange. With the Industry Foundation Classes (IFC), an established format for describing and exchanging such a model is available. For the quality assurance process, a server-based test environment will be part of the new IFC certification procedure. For this purpose, a Global Testing Documentation Server (GTDS) was implemented by the "iabi - Institut für angewandte Bauinformatik" in cooperation with "buildingSMART e.V." (http://www.buildingsmart.de). The GTDS is a database-backed web application that pursues the following goals:
• Providing a tool for the qualitative testing of IFC-based models
• Supporting the communication between IFC developers and users
• Documenting the quality of IFC-based software applications
• Providing a platform for the certification of IFC applications
The subject of this thesis is the design and exemplary implementation of a tool for the interactive visualization of quality deficits that the GTDS has detected in a model. The exemplary implementation is to be built on the OPEN IFC TOOLS (http://www.openifctools.org).
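To make the kind of model check concrete, the following is a minimal sketch of a qualitative test on an IFC model. It uses the open-source Python library ifcopenshell instead of the Java-based OPEN IFC TOOLS, and the checked rule (every building element should carry a name) is an arbitrary stand-in for the GTDS test rules; the file path is hypothetical.

    import ifcopenshell  # open-source IFC toolkit, stand-in for OPEN IFC TOOLS

    # Minimal qualitative check on an IFC model: report building elements
    # that violate a simple placeholder rule (a missing Name attribute).

    def find_deficits(path):
        model = ifcopenshell.open(path)
        deficits = []
        for element in model.by_type("IfcBuildingElement"):
            if not element.Name:                  # rule: Name must be set
                deficits.append((element.GlobalId, element.is_a()))
        return deficits

    if __name__ == "__main__":
        for guid, ifc_type in find_deficits("example_model.ifc"):
            # In the envisaged tool these deficits would be highlighted
            # interactively in a 3D view instead of being printed.
            print(f"{ifc_type} {guid}: missing Name")

The GlobalId reported for each deficit is the natural key for linking a GTDS finding back to the corresponding geometry in an interactive viewer.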