... WITHOUT RIGHT ANGLE.
(2006)

Sculptural design is currently one of the most discussed themes in architecture. Owing to their light weight, easy transportation and assembly, and an almost unlimited structural variety, parameterised spatial structures are excellently suited to the constructive realisation of free-formed claddings. They subdivide the continuous surface into a structure of small nodes, straight members and plane glass panels, and thus make it possible to realise arbitrary double-curved claddings with a high degree of transparency using industrial semi-finished products (steel sections, flat glass). Digital design strategies, together with a huge number of similar-looking but individually unique structural members, demand continuous digital project handling. Within a research project named MYLOMESH, a free-formed spatial structure was designed, constructed, fabricated and assembled; all required steps were carried out on the basis of digital data. Digital connections (scripts) were created between various software tools that are usually not used in the planning of buildings, allowing a completely digital workflow. The project, its design, meshing and constructive detailing, as well as the above-mentioned scripts, are described in this paper.

In classical complex function theory the geometric mapping property of conformality is closely linked with complex differentiability. In contrast to the planar case, in higher dimensions the set of conformal mappings consists only of the Möbius transformations. Unfortunately, the theory of generalized holomorphic functions (for historical reasons called monogenic functions), developed on the basis of Clifford algebras, does not cover the Möbius transformations in higher dimensions, since Möbius transformations are not monogenic. On the other hand, monogenic functions are hypercomplex differentiable functions, and the question arises whether from this point of view they can still play a special role for other types of 3D mappings, for instance quasi-conformal ones. On the occasion of the 16th IKM, 3D mapping methods based on the application of Bergman's reproducing kernel approach (BKM) were discussed. Almost all authors previously working with BKM in the Clifford setting were concerned only with the general algebraic and functional-analytic background, which allows the explicit determination of the kernel in special situations. The main goal of the aforementioned contribution was the numerical experiment, using Maple software specially developed for that purpose. Since BKM is only one of a great variety of concrete numerical methods developed for mapping problems, our goal is to present an approach to 3D mappings completely different from BKM. In fact, it is an extension of ideas of L. V. Kantorovich to the 3-dimensional case, using reduced quaternions and suitable series in powers of a small parameter. Whereas in the Clifford setting of BKM the recovery of the mapping function itself and its relation to the monogenic kernel function is still an open problem, this approach avoids such difficulties and leads to an approximation by monogenic polynomials depending on that small parameter.
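Schematically, a Kantorovich-type ansatz of the kind described expands the mapping function in powers of the small parameter, with monogenic polynomial coefficients (the notation below is ours, for orientation only; the paper's exact formulation may differ):

```latex
% Illustrative ansatz; D denotes the generalized Cauchy-Riemann operator.
f(x) = \sum_{k=0}^{n} \varepsilon^{k} P_k(x), \qquad D P_k = 0 ,
```

where each monogenic polynomial $P_k$ is determined order by order in $\varepsilon$ from the boundary condition of the mapping problem.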

The synchronous distributed processing of common source code in the software development process is supported by well-proven methods. The planning process has similarities with the software development process; however, there are no consistent and similarly successful methods for applications in construction projects. A new approach is proposed in this contribution.

The search for the best building design requires a concerted design approach for both structure and foundation. Our work is an application of this approach. Our objective is also to create an interactive tool which is able to define, at the early design stages, the orientations of structure and foundation systems that satisfy the client and the architect as well as possible. While the concerns of these two actors are primarily technical and economic, they also wish to apprehend the environmental and social dimensions of their projects. Thus, this approach is based on alternative studies and on a multi-criteria analysis. In this paper, we present the context of our work, the problem formulation that allows a concerted design of structure and foundation systems, and the process of identifying feasible solutions.
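The multi-criteria step can be illustrated by a toy weighted-sum ranking of design alternatives. All names, criteria, weights and scores below are invented for illustration; the paper does not specify this particular aggregation rule.

```python
# Toy multi-criteria ranking of structure/foundation alternatives.
# Alternatives, criteria, weights and scores are hypothetical.
alternatives = {
    "slab + spread footings":  {"cost": 0.8, "environment": 0.5, "social": 0.6},
    "frame + pile foundation": {"cost": 0.6, "environment": 0.7, "social": 0.7},
}
weights = {"cost": 0.5, "environment": 0.3, "social": 0.2}  # client/architect priorities

def score(criteria):
    # Weighted sum: higher is better; weights encode the actors' priorities.
    return sum(weights[c] * v for c, v in criteria.items())

best = max(alternatives, key=lambda a: score(alternatives[a]))
print(best, round(score(alternatives[best]), 2))
```

In practice such a ranking would be recomputed for every feasible solution identified, so that the early-stage tool can present the trade-offs rather than a single answer.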

The contribution focuses on the development of a basic computational scheme that provides a suitable calculation environment for the coupling of analytical near-field solutions with standard numerical procedures in the far field of the singularity. The proposed calculation scheme uses classical methods of complex function theory, which can be generalized to 3-dimensional problems within the framework of hypercomplex analysis. The adopted approach is mainly based on the factorization of the Laplace operator Δ by the generalized Cauchy-Riemann operator D and its conjugate D̄ (Δ = D D̄), where exact solutions of the respective differential equation are constructed by using an orthonormal basis of holomorphic and anti-holomorphic functions.
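In the reduced-quaternion setting this factorization can be written out explicitly (standard hypercomplex notation, stated here for orientation rather than taken verbatim from the paper):

```latex
D = \frac{\partial}{\partial x_0} + e_1 \frac{\partial}{\partial x_1}
    + e_2 \frac{\partial}{\partial x_2},
\qquad
\overline{D} = \frac{\partial}{\partial x_0} - e_1 \frac{\partial}{\partial x_1}
    - e_2 \frac{\partial}{\partial x_2},
% Since e_i^2 = -1 and e_1 e_2 = -e_2 e_1, the mixed terms cancel and
D\,\overline{D} \;=\; \overline{D}\,D \;=\; \Delta .
```

Solutions of $Df = 0$ (monogenic functions) are therefore automatically harmonic, which is what makes them usable as exact near-field solutions of the Laplace equation.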

Interactive visualization based on 3D computer graphics is nowadays an indispensable part of any simulation software used in engineering. Nevertheless, the implementation of such visualization components is often avoided in research projects because it is a challenging and potentially time-consuming task. In this contribution, a novel Java framework for the interactive visualization of engineering models is introduced. It supports the implementation of engineering visualization software by providing adequate program logic as well as high-level classes for the visual representation of entities typical of engineering models. The presented framework is built on top of the open-source visualization toolkit VTK, in which a visualization model is established by connecting several filter objects in a so-called visualization pipeline. Although designing and implementing a good pipeline layout is demanding, VTK does not directly support the reuse of pipeline layouts. Our framework tailors VTK to engineering applications on two levels. On the first level, it adds new, engineering-model-specific filter classes to VTK. On the second level, ready-made pipeline layouts for certain aspects of engineering models are provided; for instance, there is a pipeline class for one-dimensional elements such as trusses and beams that can show the elements along with deformations and member forces. To facilitate the implementation of a graphical user interface (GUI) for each pipeline class, a reusable Java Swing GUI component allows the user to configure the appearance of the visualization model. Because of its flexible structure, the framework can easily be adapted and extended to new problem domains.
It is currently used in (i) an object-oriented p-version finite element code for design optimization, (ii) an agent-based monitoring system for dam structures and (iii) the simulation of destruction processes by controlled explosives based on multibody dynamics. Application examples from all three domains illustrate that the presented approach is powerful as well as versatile.
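The reusable-pipeline-layout idea can be sketched in a few lines. This is a language-neutral Python sketch of the concept only; the actual framework is Java on top of VTK, and the class and field names here are invented.

```python
# Conceptual sketch of a "ready-made pipeline layout": a fixed chain of
# filters that many concrete models can be pushed through. Names are invented.
class Filter:
    def __init__(self, fn):
        self.fn = fn
    def __call__(self, data):
        return self.fn(data)

class PipelineLayout:
    """A reusable chain of filters, applied in order to a model."""
    def __init__(self, *filters):
        self.filters = filters
    def run(self, model):
        data = model
        for f in self.filters:
            data = f(data)
        return data

# A layout for 1D elements: exaggerate deformations, then colour by member force.
scale_deformation = Filter(
    lambda m: {**m, "xy": [x + 10 * u for x, u in zip(m["xy"], m["u"])]})
force_to_colour = Filter(
    lambda m: {**m, "colour": "red" if m["N"] > 0 else "blue"})

truss_layout = PipelineLayout(scale_deformation, force_to_colour)
out = truss_layout.run({"xy": [0.0, 1.0], "u": [0.0, 0.01], "N": 12.5})
print(out["xy"], out["colour"])   # [0.0, 1.1] red
```

The point of packaging the chain as one `PipelineLayout` object is that the layout, not just the individual filters, becomes the unit of reuse, which is exactly what plain VTK pipelines lack.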

In earlier research, generalized multidimensional Hilbert transforms have been constructed in m-dimensional Euclidean space in the framework of Clifford analysis. Clifford analysis, centred around the notion of monogenic functions, may be regarded as a direct and elegant generalization to higher dimensions of the theory of holomorphic functions in the complex plane. The Hilbert transforms considered so far, usually obtained as part of the boundary value of an associated Cauchy transform in m+1 dimensions, may be characterized as isotropic, since the metric in the underlying space is the standard Euclidean one. In this paper we adopt the idea of a so-called anisotropic Clifford setting, which leads to the introduction of a metric-dependent m-dimensional Hilbert transform showing, at least formally, the same properties as the isotropic one. Since the Hilbert transform is an important tool in signal analysis, this metric-dependent setting has the advantage of allowing the coordinate system to be adjusted to possible preferential directions in the signals to be analyzed. A striking result is that the associated anisotropic (m+1)-dimensional Cauchy transform is no longer uniquely determined, but may stem from a diversity of (m+1)-dimensional "mother" metrics.
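For orientation, the classical one-dimensional prototype of the transforms discussed here is the principal-value convolution

```latex
(Hf)(x) \;=\; \frac{1}{\pi}\,\mathrm{p.v.}\!\int_{-\infty}^{\infty}
\frac{f(t)}{x-t}\,dt ,
```

whose m-dimensional Clifford versions replace the kernel $1/(x-t)$ by a Cauchy-type kernel and, in the anisotropic case discussed in this paper, by a metric-dependent one.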

The Element-free Galerkin Method has become a very popular tool for the simulation of mechanical problems with moving boundaries. The internally applied Moving Least Squares approximation generally uses Gaussian or cubic weighting functions and has compact support. Due to the approximative character of this method, the obtained shape functions do not fulfill the interpolation condition, which causes additional numerical effort for the imposition of the essential boundary conditions. The application of a singular weighting function, which leads to singular coefficient matrices at the nodes, can solve this problem, but requires very careful placement of the integration points. Special procedures for handling such singular matrices have been proposed in the literature, but they require additional numerical effort. In this paper a non-singular weighting function is presented which leads to an exact fulfillment of the interpolation condition. This weighting function yields regular values of the weights and the coefficient matrices in the whole interpolation domain, even at the nodes. Furthermore, it gives much more stable results for varying sizes of the influence radius and for strongly distorted nodal arrangements than classical weighting function types. Nevertheless, for practical applications the results are similar to those obtained with the regularized weighting type presented by the authors in previous publications. Finally, a new concept is presented which enables an efficient analysis of systems with strongly varying node density. In this concept, the nodal influence domains are adapted to the nodal configuration by interpolating the influence radius for each direction from the distances to the natural neighbor nodes. This approach requires a Voronoi diagram of the domain, which is available in this study since Delaunay triangles are used as integration background cells.
The numerical examples show that this method leads to a more uniform and reduced number of influencing nodes for systems with varying node density than classical circular influence domains; the small additional numerical effort for interpolating the influence radius thus yields a remarkable reduction of the total numerical cost in a linear analysis while giving similar results. For nonlinear calculations this advantage would be even more significant.
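The problem the paper starts from can be reproduced in a minimal 1D sketch (our own construction with a common truncated-Gaussian weight, not the authors' code and not their proposed weighting function): the MLS shape functions form a partition of unity, yet violate the interpolation condition at the nodes.

```python
import numpy as np

def mls_shape_functions(x, nodes, radius):
    """1D Moving Least Squares shape functions with linear basis [1, x]."""
    r = np.abs(x - nodes) / radius
    # Truncated Gaussian weighting: non-singular, compactly supported (r < 1).
    w = np.where(r < 1.0,
                 np.exp(-(r / 0.4) ** 2) - np.exp(-(1.0 / 0.4) ** 2), 0.0)
    P = np.column_stack([np.ones_like(nodes), nodes])
    A = (P * w[:, None]).T @ P                    # moment matrix A(x) = P^T W(x) P
    # phi_i(x) = w_i(x) * p(x_i)^T A(x)^{-1} p(x)
    return w * (P @ np.linalg.solve(A, np.array([1.0, x])))

nodes = np.linspace(0.0, 1.0, 5)
phi = mls_shape_functions(nodes[2], nodes, radius=0.6)
# Partition of unity holds, but phi_i(x_j) != delta_ij at the node:
print(round(phi.sum(), 12), phi[2] < 1.0)
```

Because `phi[2]` is well below 1 at its own node, essential boundary conditions cannot be imposed directly on nodal values, which is exactly the extra effort that a weighting function fulfilling the interpolation condition removes.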

This contribution is based on the approaches and results of the research project "Prozessorientierte Vernetzung von Ingenieurplanungen am Beispiel der Geotechnik", funded by the DFG within the priority programme 1103 "Vernetzt-kooperative Planungsprozesse im Konstruktiven Ingenieurbau". The goal of the research project, carried out jointly with the Institut für Numerische Methoden und Informatik im Bauwesen at TU Darmstadt, is the development of a network-based cooperation platform to support geotechnical engineering planning. The project therefore concentrates on modelling and coordinating the planning processes of structural engineering projects against the background of the highly distributed division of labour in project work in a distributed computing environment. The contribution presents the abstraction of process patterns in the building planning process as the basis for dynamic process modelling in a cooperation model. The aim is to abstract a component-related catalogue of process patterns by identifying the planning and coordination processes associated with the design and dimensioning of a building component. In each building planning process, the individual process patterns are dynamically integrated into the current process model via suitable coupling mechanisms, so that the changes to the structure and to the composition of the planning team that are typical of the building planning process can be reflected in the process model. To this end, the contribution presents the results obtained so far from the analysis of the planning process of a large inner-city construction project, which serves as a reference object, as well as of typical planning scenarios in geotechnics. Subsequently, foundations and methodical approaches for modelling processes with coloured Petri nets with individual tokens are presented.
Examples of component-oriented process patterns are used to explain how the process patterns function, both on their own and in their mutual interplay.
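The token-game semantics of coloured Petri nets underlying such process patterns can be sketched minimally. This is an illustrative sketch only; the class, place and colour names are invented, not taken from the project's software.

```python
from collections import Counter

class ColouredPetriNet:
    """Minimal coloured Petri net: places hold multisets of coloured tokens."""
    def __init__(self):
        self.places = {}        # place name -> Counter of coloured tokens
        self.transitions = {}   # name -> (input places, output places, guard)

    def add_place(self, name, tokens=()):
        self.places[name] = Counter(tokens)

    def add_transition(self, name, inputs, outputs, guard=lambda tok: True):
        self.transitions[name] = (inputs, outputs, guard)

    def fire(self, name, token):
        inputs, outputs, guard = self.transitions[name]
        if not guard(token) or any(self.places[p][token] < 1 for p in inputs):
            return False        # transition not enabled for this token colour
        for p in inputs:
            self.places[p][token] -= 1
        for p in outputs:
            self.places[p][token] += 1
        return True

# Two planning steps coupled through a shared place: the individually coloured
# token "wall_W1" (one building component) moves from drafting to dimensioning.
net = ColouredPetriNet()
net.add_place("drafted", tokens=["wall_W1"])
net.add_place("dimensioned")
net.add_transition("dimension", inputs=["drafted"], outputs=["dimensioned"])
print(net.fire("dimension", "wall_W1"))   # True
```

Coupling two process patterns then amounts to letting one pattern's output place serve as another pattern's input place, which is how the dynamic integration described above can be realized.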

Major problems in applying selective sensitivity to system identification are the need for precise knowledge of the system parameters and the realization of the required system of forces. This work presents a procedure that is able to derive selectively sensitive excitation by iterative experiments. The first step is to determine the selectively sensitive displacement and force patterns. These values are obtained by introducing prior information on the system parameters into an optimization which minimizes the sensitivities of the structural response with respect to the unselected parameters while keeping the sensitivities with respect to the selected parameters constant. In a second step the force pattern is used to derive dynamic loads on the tested structure, and measurements are carried out; an automatic control ensures the required excitation forces. In a third step, the measured outputs are employed to update the prior information. The strategy is to minimize the difference between a predicted displacement response, formulated as a function of the unknown parameters and the measured displacements, and the selectively sensitive displacement calculated in the first step. With the updated parameter values a re-analysis of selective sensitivity is performed, and the experiment is repeated until the displacement responses of the model and the actual structure coincide. As an illustration, a simply supported steel beam under harmonic excitation is investigated, demonstrating that the adaptive excitation can be obtained efficiently.
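The first step can be summarized schematically as a constrained optimization (the notation is ours: u is the displacement response, F the force pattern, S the index set of selected parameters θ; the paper's exact formulation may differ):

```latex
\min_{F}\;\sum_{j \notin S}
\left\| \frac{\partial u(F)}{\partial \theta_j} \right\|^2
\qquad \text{subject to} \qquad
\frac{\partial u(F)}{\partial \theta_i} = c_i, \quad i \in S ,
```

i.e. the excitation is chosen so that the response is, as far as possible, sensitive only to the parameters one actually wants to identify.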