### Refine

#### Document Type

- Conference Proceeding (149)
- Master's Thesis (16)
- Doctoral Thesis (15)
- Bachelor Thesis (8)
- Article (7)
- Diploma Thesis (6)
- Report (4)
- Preprint (3)
- Book (1)
- Part of a Book (1)

#### Institute

- In Zusammenarbeit mit der Bauhaus-Universität Weimar (93)
- Institut für Strukturmechanik (32)
- Professur Baubetrieb und Bauverfahren (13)
- Professur Stahlbau (7)
- Professur Angewandte Mathematik (6)
- Institut für Europäische Urbanistik (5)
- Juniorprofessur Augmented Reality (5)
- Professur Verkehrsplanung und Verkehrstechnik (5)
- Institut für Konstruktiven Ingenieurbau (4)
- Professur Informatik in der Architektur (4)

#### Year of publication

- 2006 (210)

The history of the 'villages' in Shenzhen shows that rich traditional cultural resources, directly related to folk life in the urban corporate community, still exist today; at the same time, the agricultural economy of the urban corporate community has been transformed into a joint-stock economy, and the natural villages have become 'heterogeneous' spaces within the city. The most significant fact of the modern social transition is that modern societies have surpassed traditional societies, and cities have surpassed the countryside. Weber, Durkheim, Tönnies, Simmel and others devoted themselves to capturing the essence of this transition. The most influential theoretical instrument for observing and analyzing it is the two-tiered approach of ideal types: Tönnies distinguished between 'Gemeinschaft' and 'Gesellschaft', Durkheim between 'mechanical solidarity' and 'organic solidarity', and Redfield analyzed 'folk society' and 'urban society'. In these classical theories, the transition from the former to the latter is considered a general rule of the passage from traditional to modern society, and from traditional to modern community. However, ever since Redfield in the 1950s used the dependent relationship and interactive framework of 'great tradition' and 'little tradition' to explain the complex phenomena of the transition from tradition to modernity, it has been suggested that a folk-urban continuum is formed in the transition from folk society to urban society. 'Both terms, ‘city’ and ‘country’, are not and have never been limited or restricted to their obvious denotations: ‘city’ is not and has never been only urban. As a category it always encompasses (includes, embodies, embraces) itself and its opposite, the country' (Hassenpflug 2002, 46).
Generally, social groups and cultures with weak 'potential' use their own 'little tradition' as a bridge and agency in order to enter, and merge into, a 'great tradition' of greater 'potential', seeking space in which to live and develop. There are different types of transition by which villagers enter and merge into the 'great tradition' through their individual 'little tradition'. One is the exploration and development of traditional resources in 'segmentation', such as the reliance of the great flow of peasants into the cities on networks of kinship and earthbound relations; another is the assistance and utilization of the resources of a whole corporate network, such as the traditional corporate community's organization of local resources during the non-agriculturization of the villages. The 'villages' in Shenzhen belong to the latter type. From the above analysis the following conclusion can be drawn: the urban corporate community formed in the process of non-agricultural development and urbanization is the organizational support on which villagers rely to merge into the city and adapt to urban life. Its unique inner structure and function give it better performance, higher efficiency and more humane care than other organizations.
Firstly, the corporate community re-organized in the non-agricultural process is currently the only, and the most effective, organizational resource that can be utilized, and it is of great significance in protecting the villagers' interests. Secondly, in the short term no other approach matches the urban corporate community in its capacity to focus on public affairs in the comprehensive urbanization process. Thirdly, the 'new' key connotation of the urban corporate community, including its community management functions, is the main reason why such a community retains its rationale for being. Fourthly, the urban corporate community will inevitably face many problems during urbanization due to its inherent characteristics (lack of external support), but to a certain degree it is able to repair itself and solve its own problems, provided that government and society take a fair, impersonal view of the 'villages' and, on this basis, provide multiple forms of support, especially rational institutional arrangements and policy support. Consequently, in order to preserve and protect the social system and cultural heritage within the 'villages', and to gradually achieve the coordinated development of the 'great tradition' represented by the cities and the 'little tradition' represented by the 'villages', 'soft reconstruction' rather than 'hard reconstruction' should be adopted by the government in the current reconstruction of the 'villages' in Shenzhen.

For efficient distant cooperation, the members of workgroups need information about each other. This need for information disclosure often conflicts with the users' wish for privacy. In the literature, reciprocity is often suggested as a solution to this trade-off. Yet this conception of reciprocity, and its enforcement by systems, does not match reality. In this paper we present the major findings of our study investigating the role of reciprocity, among them that participants largely disregarded the above conception. Additionally, we discuss the significant implications of these findings for the design of systems that disclose personal information.

... WITHOUT RIGHT ANGLE.
(2006)

Sculptural design is currently one of the most discussed themes in architecture. Due to their light weight, easy transportation and assembly, and an almost unlimited structural variety, parameterised spatial structures are excellently suited for the constructive realisation of free-formed claddings. They subdivide the continuous surface into a structure of small nodes, straight members and plane glass panels, and thus provide an opportunity to realise arbitrary double-curved claddings with a high degree of transparency, using industrial semi-finished products (steel sections, flat glass). Digital design strategies, and the large number of structural members that look similar but are unique in detail, demand continuous digital project handling. Within a research project named MYLOMESH, a free-formed spatial structure was designed, constructed, fabricated and assembled. All required steps were carried out on the basis of digital data. Different digital connections (scripts) between various software tools, which are usually not used in the planning process of buildings, were created; they allow a completely digital workflow. The project, its design, meshing and constructive detailing, as well as the above-mentioned scripts, are described in this paper.

In classical complex function theory the geometric mapping property of conformality is closely linked with complex differentiability. In contrast to the planar case, in higher dimensions the set of conformal mappings consists only of the Möbius transformations. Unfortunately, the theory of generalized holomorphic functions (for historical reasons called monogenic functions) developed on the basis of Clifford algebras does not cover the Möbius transformations in higher dimensions, since Möbius transformations are not monogenic. On the other hand, monogenic functions are hypercomplex differentiable functions, and the question arises whether, from this point of view, they can still play a special role for other types of 3D mappings, for instance quasi-conformal ones. On the occasion of the 16th IKM, 3D mapping methods based on Bergman's reproducing kernel approach (BKM) were discussed. Almost all authors previously working with BKM in the Clifford setting were concerned only with the general algebraic and functional-analytic background, which allows the explicit determination of the kernel in special situations. The main goal of the aforementioned contribution was the numerical experiment, using Maple software specially developed for that purpose. Since BKM is only one of a great variety of concrete numerical methods developed for mapping problems, our goal here is to present an approach to 3D mappings that differs completely from BKM. In fact, it is an extension of ideas of L. V. Kantorovich to the 3-dimensional case, using reduced quaternions and suitable series of powers of a small parameter. Whereas in the Clifford case of BKM the recovery of the mapping function itself, and its relation to the monogenic kernel function, is still an open problem, this approach avoids such difficulties and leads to an approximation by monogenic polynomials depending on that small parameter.

The contribution focuses on the development of a basic computational scheme that provides a suitable calculation environment for the coupling of analytical near-field solutions with standard numerical procedures in the far field of the singularity. The proposed calculation scheme uses classical methods of complex function theory, which can be generalized to 3-dimensional problems within the framework of hypercomplex analysis. The adapted approach is mainly based on the factorization of the Laplace operator Δ by the Cauchy-Riemann operator D, where exact solutions of the respective differential equation are constructed by using an orthonormal basis of holomorphic and anti-holomorphic functions.
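The factorization referred to above is, in the standard hypercomplex (reduced quaternion) setting, the operator identity Δ = D̄D; the following sketch states the operators in the usual form found in Clifford analysis and is not taken from the paper itself:

```latex
D = \frac{\partial}{\partial x_0} + e_1\frac{\partial}{\partial x_1} + e_2\frac{\partial}{\partial x_2},
\qquad
\overline{D} = \frac{\partial}{\partial x_0} - e_1\frac{\partial}{\partial x_1} - e_2\frac{\partial}{\partial x_2},
% with e_1^2 = e_2^2 = -1 and e_1 e_2 = -e_2 e_1, the mixed terms cancel:
\qquad
\Delta = \overline{D}\,D = D\,\overline{D}.
```

Null solutions of D (monogenic functions) are therefore harmonic componentwise, which is what allows exact solutions of the Laplace equation to be built from a basis of holomorphic and anti-holomorphic functions.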

A Flexible Model for Incorporating Construction Product Data into Building Information Models
(2006)

When considering the integration and interoperability of AEC-FM software applications and construction product data, it is essential to investigate the state of the art and to review the literature on both Building Information Models and electronic product catalogues. It was found that many key barriers hinder the developed solutions from being implemented. Among the reasons for the failure of many previous research projects to achieve this integration are: the proprietary developments of CAD vendors; the fragmented nature of construction product data, i.e. commercial and technical data; prefabrication versus on-site production; marketing strategies and brand naming; the referencing of a product to the data of its constituents; the availability of life-cycle data at a single point in time, whereas it is needed over the whole life cycle of the product; taxonomy problems; and the inability to extract search parameters from the building information model for use in parametric searches. Finally, and most importantly, there is the difficulty of keeping the product data in the building information model consistent and up to date. Hence, there is great potential for construction product data to be integrated into building information models by electronic means, in a dynamic and extensible manner that prevents the model from becoming obsolete. The study establishes a solution concept that links continually updated and extensible life-cycle product data to a software-independent building information model (IFC) over the whole life span of the product. As a result, the solution concept achieves a reliable building information model that overcomes the majority of the above-mentioned barriers, while being capable of referencing, retrieving, updating and merging product data at any point in time.
A distributed network application representing all parties involved in the construction product value chain is simulated with real software tools to demonstrate the proof of concept of this research work.

Interactive visualization based on 3D computer graphics nowadays is an indispensable part of any simulation software used in engineering. Nevertheless, the implementation of such visualization software components is often avoided in research projects because it is a challenging and potentially time consuming task. In this contribution, a novel Java framework for the interactive visualization of engineering models is introduced. It supports the task of implementing engineering visualization software by providing adequate program logic as well as high level classes for the visual representation of entities typical for engineering models. The presented framework is built on top of the open source visualization toolkit VTK. In VTK, a visualization model is established by connecting several filter objects in a so called visualization pipeline. Although designing and implementing a good pipeline layout is demanding, VTK does not support the reuse of pipeline layouts directly. Our framework tailors VTK to engineering applications on two levels. On the first level it adds new – engineering model specific – filter classes to VTK. On the second level, ready made pipeline layouts for certain aspects of engineering models are provided. For instance there is a pipeline class for one-dimensional elements like trusses and beams that is capable of showing the elements along with deformations and member forces. In order to facilitate the implementation of a graphical user interface (GUI) for each pipeline class, there exists a reusable Java Swing GUI component that allows the user to configure the appearance of the visualization model. Because of the flexible structure, the framework can be easily adapted and extended to new problem domains. 
Currently it is used in (i) an object-oriented p-version finite element code for design optimization, (ii) an agent-based monitoring system for dam structures and (iii) the simulation of destruction processes by controlled explosives based on multibody dynamics. Application examples from all three domains illustrate that the presented approach is powerful as well as versatile.

We propose a novel method that applies the light transport matrix for performing an image-based radiometric compensation which accounts for all possible types of light modulation. For practical application the matrix is decomposed into clusters of mutually influencing projector and camera pixels. The compensation is modeled as a linear system that can be solved with respect to the projector patterns. Precomputing the inverse light transport in combination with an efficient implementation on the GPU makes interactive compensation rates possible. Our generalized method unifies existing approaches that address individual problems. Based on examples, we show that it is possible to project corrected images onto complex surfaces such as an inter-reflecting statuette, glossy wallpaper, or through highly-refractive glass. Furthermore, we illustrate that a side-effect of our approach is an increase in the overall sharpness of defocused projections.
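The compensation step described above amounts, per cluster, to solving a linear system T p = d, where T is the (clustered) light transport matrix, d the desired camera image and p the projector pattern. A minimal pure-Python sketch with a hypothetical 2-pixel cluster; the numbers are illustrative, and plain Gaussian elimination stands in for the paper's precomputed inverse transport on the GPU:

```python
def solve(T, d):
    """Solve T p = d by Gaussian elimination with partial pivoting."""
    n = len(d)
    # build the augmented matrix [T | d]
    A = [row[:] + [d[i]] for i, row in enumerate(T)]
    for col in range(n):
        # pivot: pick the row with the largest entry in this column
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n + 1):
                A[r][c] -= f * A[col][c]
    # back substitution
    p = [0.0] * n
    for i in range(n - 1, -1, -1):
        p[i] = (A[i][n] - sum(A[i][j] * p[j] for j in range(i + 1, n))) / A[i][i]
    return p

# Hypothetical 2-pixel cluster: projector pixel 0 also scatters 30 % of its
# light onto camera pixel 1 (indirect transport); pixel 1 is direct only.
T = [[1.0, 0.0],
     [0.3, 0.8]]
desired = [0.5, 0.4]          # camera image we want to observe
pattern = solve(T, desired)   # projector input that should produce it
# forward check: applying T to the pattern reproduces the desired image
observed = [sum(T[i][j] * pattern[j] for j in range(2)) for i in range(2)]
```

Projecting `pattern` instead of `desired` pre-cancels the inter-reflection, which is the essence of the radiometric compensation.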

In earlier research, generalized multidimensional Hilbert transforms have been constructed in m-dimensional Euclidean space, in the framework of Clifford analysis. Clifford analysis, centred around the notion of monogenic functions, may be regarded as a direct and elegant generalization to higher dimension of the theory of the holomorphic functions in the complex plane. The considered Hilbert transforms, usually obtained as a part of the boundary value of an associated Cauchy transform in m+1 dimensions, might be characterized as isotropic, since the metric in the underlying space is the standard Euclidean one. In this paper we adopt the idea of a so-called anisotropic Clifford setting, which leads to the introduction of a metric dependent m-dimensional Hilbert transform, showing, at least formally, the same properties as the isotropic one. The Hilbert transform being an important tool in signal analysis, this metric dependent setting has the advantage of allowing the adjustment of the co-ordinate system to possible preferential directions in the signals to be analyzed. A striking result to be mentioned is that the associated anisotropic (m+1)-dimensional Cauchy transform is no longer uniquely determined, but may stem from a diversity of (m+1)-dimensional "mother" metrics.

The Element-free Galerkin Method has become a very popular tool for the simulation of mechanical problems with moving boundaries. The internally applied Moving Least Squares approximation generally uses Gaussian or cubic weighting functions and has compact support. Due to the approximative character of this method, the obtained shape functions do not fulfill the interpolation condition, which causes additional numerical effort for the imposition of the essential boundary conditions. The application of a singular weighting function, which leads to singular coefficient matrices at the nodes, can solve this problem, but requires a very careful placement of the integration points. Special procedures for the handling of such singular matrices have been proposed in the literature, which require additional numerical effort. In this paper a non-singular weighting function is presented which leads to an exact fulfillment of the interpolation condition. This weighting function yields regular values of the weights and of the coefficient matrices in the whole interpolation domain, even at the nodes. Furthermore, this function gives much more stable results for varying sizes of the influence radius and for strongly distorted nodal arrangements than classical weighting function types. Nevertheless, for practical applications the results are similar to those obtained with the regularized weighting type presented by the authors in previous publications. Finally, a new concept is presented which enables an efficient analysis of systems with strongly varying node density. In this concept the nodal influence domains are adapted to the nodal configuration by interpolating the influence radius for each direction from the distances to the natural neighbor nodes. This approach requires a Voronoi diagram of the domain, which is available in this study since Delaunay triangles are used as integration background cells.
The numerical examples show that, for systems with varying node density, this method leads to a more uniform and reduced number of influencing nodes than classical circular influence domains; the small additional numerical effort for interpolating the influence radius thus yields a remarkable reduction of the total numerical cost in a linear analysis while giving similar results. For nonlinear calculations this advantage would be even more significant.
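For illustration, a generic 1D Moving Least Squares construction with a linear basis and a classical truncated Gaussian weight (a non-interpolating weight, not the paper's specific non-singular weighting function) can be sketched as follows; it checks the partition-of-unity and linear-reproduction properties that any valid MLS shape functions must satisfy:

```python
import math

def mls_shape_functions(x, nodes, radius):
    """1D MLS shape functions with linear basis p = [1, x] and a
    truncated Gaussian weight of compact support (classical choice)."""
    def weight(d):
        if d >= radius:
            return 0.0
        s = d / radius
        # shifted so the weight vanishes continuously at the support edge
        return math.exp(-(s / 0.4) ** 2) - math.exp(-(1 / 0.4) ** 2)

    # moment matrix A = sum_i w_i p(x_i) p(x_i)^T  (2x2 for a linear basis)
    a00 = a01 = a11 = 0.0
    w = []
    for xi in nodes:
        wi = weight(abs(x - xi))
        w.append(wi)
        a00 += wi
        a01 += wi * xi
        a11 += wi * xi * xi
    det = a00 * a11 - a01 * a01
    # phi_i(x) = w_i * p(x)^T A^{-1} p(x_i), with A^{-1} written out explicitly
    return [wi * (a11 - a01 * xi + (a00 * xi - a01) * x) / det
            for wi, xi in zip(w, nodes)]

nodes = [0.0, 0.5, 1.0, 1.5, 2.0]
phi = mls_shape_functions(0.7, nodes, radius=1.1)
partition_of_unity = sum(phi)                                    # equals 1
linear_reproduction = sum(p_i * x_i for p_i, x_i in zip(phi, nodes))  # equals x = 0.7
```

Evaluating `phi` at a node position would show the non-interpolating character of this classical weight (phi_i(x_j) ≠ δ_ij), which is exactly the deficiency the paper's non-singular weighting function removes.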

Major problems in applying selective sensitivity to system identification are the requirement of precise knowledge about the system parameters and the realization of the required system of forces. This work presents a procedure that is able to derive selectively sensitive excitation through iterative experiments. The first step is to determine the selectively sensitive displacement and force patterns. These values are obtained by introducing prior information about the system parameters into an optimization which minimizes the sensitivities of the structural response with respect to the unselected parameters while keeping the sensitivities with respect to the selected parameters constant. In a second step the force pattern is used to derive dynamic loads on the tested structure, and measurements are carried out; an automatic control ensures the required excitation forces. In a third step, the measured outputs are employed to update the prior information. The strategy is to minimize the difference between a predicted displacement response, formulated as a function of the unknown parameters and the measured displacements, and the selectively sensitive displacement calculated in the first step. With the updated values of the parameters, a re-analysis of the selective sensitivity is performed, and the experiment is repeated until the displacement responses of the model and of the actual structure agree. As an illustration, a simply supported steel beam under harmonic excitation is investigated, demonstrating that the adaptive excitation can be obtained efficiently.

In engineering science the modeling and numerical analysis of complex systems and relations plays an important role. In order to carry out such an investigation, for example a stochastic analysis, in reasonable computational time, approximation procedures have been developed. A well-known approach is the response surface method, where the relation between input and output quantities is represented, for example, by global polynomials or by local interpolation schemes such as Moving Least Squares (MLS). In recent years artificial neural networks (ANN) have been applied for such purposes as well. Recently an adaptive response surface approach for reliability analyses was proposed which is very efficient with respect to the number of expensive limit state function evaluations; due to the applied simplex interpolation, however, the procedure is limited to small dimensions. In this paper the approach is extended to larger dimensions by using combined ANN and MLS response surfaces to evaluate the adaptation criterion with only one set of joined limit state points. As adaptation criterion, a combination of the maximum difference in the conditional probabilities of failure and the maximum difference in the approximated radii is applied. Compared to response surfaces on directional samples, or to plain directional sampling, the failure probability can be estimated with a much smaller number of limit state points.

In this paper an adaptive heterogeneous multiscale model, which couples two substructures with different length scales into one numerical model, is introduced for the simulation of damage in concrete. In the presented approach the initiation, propagation and coalescence of microcracks are simulated using a mesoscale model which explicitly represents the heterogeneous material structure of concrete. The mesoscale model is restricted to the damaged parts of the structure, whereas the undamaged regions are simulated on the macroscale. As a result, an adaptive enlargement of the mesoscale model during the simulation is necessary. In the first part of the paper the generation of the heterogeneous mesoscopic structure of concrete, the finite element discretization of the mesoscale model, the applied isotropic damage model and the cohesive zone model are briefly introduced. Furthermore, the mesoscale simulation of a uniaxial tension test of a concrete prism is presented and the numerical results obtained are compared to experimental results. The second part focuses on the adaptive heterogeneous multiscale approach. Indicators for the model adaptation and for the coupling between the different numerical models are introduced. The transfer from the macroscale to the mesoscale and the adaptive enlargement of the mesoscale substructure are presented in detail. A nonlinear simulation of a realistic structure using an adaptive heterogeneous multiscale model is presented at the end of the paper to show the applicability of the proposed approach to large-scale structures.

For the dynamic behavior of lightweight structures like thin shells and membranes exposed to fluid flow the interaction between the two fields is often essential. Computational fluid-structure interaction provides a tool to predict this interaction and complement or eventually replace expensive experiments. Partitioned analyses techniques enjoy great popularity for the numerical simulation of these interactions. This is due to their computational superiority over simultaneous, i.e. fully coupled monolithic approaches, as they allow the independent use of suitable discretization methods and modular analysis software. We use, for the fluid, GLS stabilized finite elements on a moving domain based on the incompressible instationary Navier-Stokes equations, where the formulation guarantees geometric conservation on the deforming domain. The structure is discretized by nonlinear, three-dimensional shell elements.
Commonly used sequential staggered coupling schemes may exhibit instabilities due to the so-called artificial added mass effect. The best remedy to this problem is to invoke subiterations that guarantee kinematic and dynamic continuity across the fluid-structure interface. Since iterative coupling algorithms are computationally very costly, their convergence rate is decisive for their usability. To ensure and accelerate the convergence of this iteration, the updates of the interface position are relaxed. The time-dependent, 'optimal' relaxation parameter is determined automatically, without any user input, by exploiting a gradient method or applying an Aitken iteration scheme.
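The Aitken relaxation mentioned above can be illustrated on a scalar stand-in for the interface fixed-point problem; here a hypothetical contractive map g(x) = cos(x) replaces the composed fluid and structure solves, and the relaxation parameter is updated from successive interface residuals:

```python
import math

def aitken_fixed_point(g, x0, omega0=0.5, tol=1e-12, max_iter=100):
    """Relaxed fixed-point iteration x <- x + omega * (g(x) - x), with the
    scalar Aitken update of the relaxation parameter omega."""
    x, omega, r_old = x0, omega0, None
    for _ in range(max_iter):
        r = g(x) - x                 # interface residual
        if abs(r) < tol:
            break
        if r_old is not None:
            # Aitken: omega_{k+1} = -omega_k * r_k / (r_{k+1} - r_k)
            # (assumes successive residuals differ; true for a contraction)
            omega = -omega * r_old / (r - r_old)
        x += omega * r               # relaxed update of the interface value
        r_old = r
    return x

# converges to the fixed point of cos(x), x = cos(x) ~ 0.739085
root = aitken_fixed_point(math.cos, 0.0)
```

In the vector-valued interface setting the same update uses inner products of the residual differences; the scalar form above only conveys why the scheme needs no user-chosen relaxation parameter.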

We present an algebraically extended 2D image representation in this paper. In order to obtain more degrees of freedom, a 2D image is embedded into a certain geometric algebra. Combining methods of differential geometry, tensor algebra, monogenic signal and quadrature filter, the novel 2D image representation can be derived as the monogenic extension of a curvature tensor. The 2D spherical harmonics are employed as basis functions to construct the algebraically extended 2D image representation. From this representation, the monogenic signal and the monogenic curvature signal for modeling intrinsically one and two dimensional (i1D/i2D) structures are obtained as special cases. Local features of amplitude, phase and orientation can be extracted at the same time in this unique framework. Compared with the related work, our approach has the advantage of simultaneous estimation of local phase and orientation. The main contribution is the rotationally invariant phase estimation, which enables phase-based processing in many computer vision tasks.

Microelectronics and microsystems technology, combined with information and communication technology, now make it possible to bring computing power and communication capability usefully into our private and professional environments in the smallest formats, with minimal energy consumption and at low cost. Examples are notebook PCs, PDAs, mobile phones and the car navigation system. Embedded electronics in components, devices and systems has likewise become a matter of course; well-known examples from building services are microprocessors in heating and alarm systems, and also in components such as fire and motion detectors. We are approaching what a few years ago was still called a vision: electronic computing power available everywhere (ubiquitous computing), i.e. a daily environment pervaded by information processing (pervasive computing). If the building services (TGA) components, just like the larger computer components (e.g. PCs, servers), are linked via data interfaces into spatially distributed networks (e.g. Internet, intranet) and programmed with adequate cross-system intelligence (software), novel functionalities can arise in the respective application environment (ambient intelligence, AmI for short, [1]). For buildings and rooms in particular, this offers a great opportunity to overcome the separation of trades that has so far stood in the way of a holistic system design encompassing architecture, building physics, building services (TGA) and building automation (GA). For various application purposes, systemically integrated "smart areas" (after Prof. Becker, FH Biberach) emerge. Examples of AmI solutions for the real estate sector discussed in this contribution are room systems for the automatic and reliable detection of emergencies, e.g. in nursing homes;
room systems in offices or hotels that automatically adapt climate and lighting to the use and the user; and electronic assistance for the construction and operation processes of buildings. At the Duisburg inHaus innovation centre for intelligent room and building systems of the Fraunhofer-Gesellschaft, first solutions based on this novel approach have been conceived, developed and tested in recent years. After a brief outline of the ambient intelligence approach, the contribution uses examples to describe possibilities for transferring this new technology to the room and building domain. A concluding summary and an assessment of the future potential of ambient intelligence in rooms and buildings follow.