Interactive visualization based on 3D computer graphics is nowadays an indispensable part of any simulation software used in engineering. Nevertheless, the implementation of such visualization components is often avoided in research projects because it is a challenging and potentially time-consuming task. In this contribution, a novel Java framework for the interactive visualization of engineering models is introduced. It supports the implementation of engineering visualization software by providing adequate program logic as well as high-level classes for the visual representation of entities typical of engineering models. The presented framework is built on top of the open-source visualization toolkit VTK. In VTK, a visualization model is established by connecting several filter objects into a so-called visualization pipeline. Although designing and implementing a good pipeline layout is demanding, VTK does not directly support the reuse of pipeline layouts. Our framework tailors VTK to engineering applications on two levels. On the first level, it adds new, engineering-model-specific filter classes to VTK. On the second level, ready-made pipeline layouts for certain aspects of engineering models are provided. For instance, there is a pipeline class for one-dimensional elements like trusses and beams that is capable of showing the elements along with deformations and member forces. In order to facilitate the implementation of a graphical user interface (GUI) for each pipeline class, a reusable Java Swing GUI component allows the user to configure the appearance of the visualization model. Because of its flexible structure, the framework can easily be adapted and extended to new problem domains. Currently it is used in (i) an object-oriented p-version finite element code for design optimization, (ii) an agent-based monitoring system for dam structures and (iii) the simulation of destruction processes by controlled explosives based on multibody dynamics. Application examples from all three domains illustrate that the presented approach is both powerful and versatile.
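To indicate the kind of VTK mechanics the framework wraps, a minimal sketch in plain VTK Java (class and method names as in the standard VTK Java wrappers; the framework's own pipeline and GUI classes are not shown, since their names are not given here) could look as follows:

```java
import vtk.*;

// Minimal VTK (Java wrappers) pipeline rendering a single 1D member as a tube.
// This only sketches the underlying VTK mechanics; the framework described above
// wraps such pipelines in reusable, engineering-specific pipeline classes.
public class BeamPipelineSketch {
    static {
        // The native VTK libraries must be on the library path; the exact loading
        // mechanism depends on the VTK version and platform.
        vtkNativeLibrary.LoadAllNativeLibraries();
    }

    public static void main(String[] args) {
        // Source filter: the axis of a truss/beam element
        vtkLineSource line = new vtkLineSource();
        line.SetPoint1(0.0, 0.0, 0.0);
        line.SetPoint2(1.0, 0.0, 0.0);

        // Filter: give the 1D element a visible cross section
        vtkTubeFilter tube = new vtkTubeFilter();
        tube.SetInputConnection(line.GetOutputPort());
        tube.SetRadius(0.02);
        tube.SetNumberOfSides(16);

        // Mapper and actor: end of the visualization pipeline
        vtkPolyDataMapper mapper = new vtkPolyDataMapper();
        mapper.SetInputConnection(tube.GetOutputPort());
        vtkActor actor = new vtkActor();
        actor.SetMapper(mapper);

        // Renderer, window and interactor
        vtkRenderer renderer = new vtkRenderer();
        renderer.AddActor(actor);
        vtkRenderWindow window = new vtkRenderWindow();
        window.AddRenderer(renderer);
        vtkRenderWindowInteractor interactor = new vtkRenderWindowInteractor();
        interactor.SetRenderWindow(window);

        window.Render();
        interactor.Start();
    }
}
```

In the framework described above, such a hand-built pipeline would be replaced by a ready-made pipeline class together with its reusable Swing configuration component.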
The liquidity planning of construction companies is regarded as an essential steering, control and information instrument for internal and external addressees and serves a decision-support function. Since the individual construction projects account for a substantial share of a company's total costs, they also exert a considerable influence on the liquidity and solvency of the construction enterprise. Accordingly, it is common practice in the construction industry to prepare the liquidity plan first on a project basis and then to aggregate it at the company level. The aim of this contribution is to present the interrelations between work calculation (Arbeitskalkulation), profit and loss accounting (Ergebnisrechnung) and financial accounting (Finanzrechnung) in the form of a deterministic planning model at the project level. In doing so, the understanding and significance of the links between the technically oriented construction process and its representation in accounting and finance are highlighted. The operations of construction execution, i.e. the processing of the bill-of-quantities positions and their temporal representation in a construction schedule, have to be transformed into periodized quantities of cost accounting (output, costs) and subsequently broken down in the financial calculation (receipts, payments) by creditors and debtors.
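The text describes, but does not formalize, the transformation from scheduled bill-of-quantities positions to periodized output and on to receipts; purely as an assumed, simplified illustration (all class, method and parameter names are invented, and linear progress and a fixed payment delay are assumed), such a periodization might be sketched in Java as:

```java
import java.time.YearMonth;
import java.util.LinkedHashMap;
import java.util.Map;

// Simplified, hypothetical illustration: spread the output of one bill-of-quantities
// item evenly over its scheduled months, then shift the resulting receipts by an
// assumed payment delay. Real project-level planning models are far more detailed.
public class ProjectCashFlowSketch {

    static Map<YearMonth, Double> periodizeOutput(double totalOutput,
                                                  YearMonth start, int months) {
        Map<YearMonth, Double> perMonth = new LinkedHashMap<>();
        double share = totalOutput / months;          // linear progress assumption
        for (int i = 0; i < months; i++) {
            perMonth.put(start.plusMonths(i), share);
        }
        return perMonth;
    }

    static Map<YearMonth, Double> toReceipts(Map<YearMonth, Double> output,
                                             int paymentDelayMonths) {
        Map<YearMonth, Double> receipts = new LinkedHashMap<>();
        output.forEach((month, value) ->
                receipts.merge(month.plusMonths(paymentDelayMonths), value, Double::sum));
        return receipts;
    }

    public static void main(String[] args) {
        // One item: 300,000 units of output over 3 months, paid 2 months later
        Map<YearMonth, Double> output = periodizeOutput(300_000.0, YearMonth.of(2006, 3), 3);
        Map<YearMonth, Double> receipts = toReceipts(output, 2);
        System.out.println("Output per month:   " + output);
        System.out.println("Receipts per month: " + receipts);
    }
}
```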
In this paper we evaluate 2D models for the soil-water characteristic curve (SWCC) that incorporate the hysteretic nature of the relationship between volumetric water content θ and suction ψ. The models are based on nonlinear least-squares estimation of experimental data for sand. To estimate the dependent variable θ, the proposed models include two independent variables: suction and the sensor reading position (depth d in the column test). The variable d represents not only the position where suction and water content are measured but also the initial suction distribution before each of the hydraulic loading test phases. Owing to this, the proposed 2D regression models have the advantage that they (a) can be applied to predict θ for any position along the column and (b) provide the functional form of the scanning curves.
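The abstract does not state the functional form of the fitted surfaces; as a sketch of the estimation setting only, with f and the parameter vector β standing in for the model family actually used, the two-variable nonlinear least-squares problem reads
\[
\theta_{i} \approx f(\psi_{i}, d_{i}; \boldsymbol{\beta}),
\qquad
\hat{\boldsymbol{\beta}} = \arg\min_{\boldsymbol{\beta}} \sum_{i=1}^{n} \bigl[\theta_{i} - f(\psi_{i}, d_{i}; \boldsymbol{\beta})\bigr]^{2},
\]
where θ_i, ψ_i and d_i are the measured volumetric water content, suction and sensor depth of observation i.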
On the Navier-Stokes Equation with Free Convection in Strip Domains and 3D Triangular Channels
(2006)
The Navier-Stokes equations and related ones can be treated very elegantly with the quaternionic operator calculus developed in a series of works by K. Gürlebeck, W. Sprößig and others. These studies are extended in the present paper. In order to apply the quaternionic operator calculus to solve these types of boundary value problems fully explicitly, one essentially needs to evaluate two types of integral operators: the Teodorescu operator and the quaternionic Bergman projector. While the integral kernel of the Teodorescu transform is universal for all domains, the kernel function of the Bergman projector, called the Bergman kernel, depends on the geometry of the domain. Using special variants of quaternionic holomorphic multiperiodic functions, we obtain explicit formulas for three-dimensional parallel-plate channels, rectangular block domains and regular triangular channels. The explicit knowledge of the integral kernels then makes it possible to evaluate the operator equations and thus to determine the solutions of the boundary value problem explicitly.
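For readers unfamiliar with this calculus, two standard structural facts (stated here in the usual notation of the school of Gürlebeck and Sprößig; the paper's notation may differ) are that the Teodorescu transform T_Ω is a right inverse of the Dirac operator D, and that the Bergman projector arises from the orthogonal decomposition of L_2:
\[
D\,(T_{\Omega} u) = u \quad \text{in } \Omega,
\qquad
L_{2}(\Omega;\mathbb{H}) = \bigl(\ker D \cap L_{2}(\Omega;\mathbb{H})\bigr) \,\oplus\, D\bigl(\overset{\circ}{W}{}_{2}^{1}(\Omega;\mathbb{H})\bigr),
\]
with the Bergman projector \(\mathbb{P}\) the orthogonal projection onto the first summand and \(\mathbb{Q} = I - \mathbb{P}\).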
In distributed project organisations and collaboration there is a need to integrate unstructured, self-contained text information with structured project data. We consider this a process of text integration in which various text technologies can be used to externalise text content and consolidate it into structured information or flexibly interlink it with corresponding information bases. However, the effectiveness of text technologies and the potential of text integration vary greatly with the type of documents, the project setup and the available background knowledge. The goal of our research is to establish text technologies within collaboration environments that allow for (a) flexibly combining appropriate text and data management technologies, (b) utilising available context information and (c) sharing text information in accordance with the most critical integration tasks. A particular focus is on Semantic Service Environments, which leverage Web service and Semantic Web technologies and adequately support the required systems integration and parallel processing of semi-structured and structured information. The paper presents an architecture for text integration that extends Semantic Service Environments with two types of integration services. The backbone of the Information Resource Sharing and Integration Service is a shared environment ontology that consolidates information on the project context and the available model, text and general linguistic resources. It also allows for the configuration of Semantic Text Analysis and Annotation Services to analyse the text documents, as well as for capturing the discovered text information and sharing it through semantic notification and retrieval engines. A particular focus of the paper is the definition of the overall integration process, which configures a complementary set of analysis and information-sharing components.
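Purely as a hypothetical illustration of the two service types named above (the paper does not give their interfaces; all type and method names below are invented), a Java sketch might look like:

```java
import java.net.URI;
import java.util.List;

// Hypothetical interfaces illustrating the two integration services described in the
// paper; names and signatures are invented for this sketch and are not the authors' API.
interface InformationResourceSharingService {
    /** Register a model, text or linguistic resource in the shared environment ontology. */
    URI registerResource(URI resource, String resourceType);

    /** Retrieve resources matching a concept of the environment ontology. */
    List<URI> findResources(String ontologyConcept);

    /** Notify subscribers about newly captured and annotated text information. */
    void notifySubscribers(URI annotatedResource);
}

interface SemanticTextAnalysisService {
    /** Analyse a document and return URIs of the semantic annotations produced. */
    List<URI> analyseAndAnnotate(URI document, URI analysisConfiguration);
}
```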
Major problems in applying selective sensitivity to system identification are the requirement of precise prior knowledge of the system parameters and the realization of the required system of forces. This work presents a procedure that is able to derive selectively sensitive excitations through iterative experiments. The first step is to determine the selectively sensitive displacement and selectively sensitive force patterns. These values are obtained by introducing the prior information on the system parameters into an optimization which minimizes the sensitivities of the structural response with respect to the unselected parameters while keeping the sensitivities with respect to the selected parameters constant. In the second step, the force pattern is used to derive dynamic loads on the tested structure and measurements are carried out. An automatic control ensures the required excitation forces. In the third step, the measured outputs are employed to update the prior information. The strategy is to minimize the difference between a predicted displacement response, formulated as a function of the unknown parameters and the measured displacements, and the selectively sensitive displacement calculated in the first step. With the updated parameter values a re-analysis of selective sensitivity is performed, and the experiment is repeated until the displacement responses of the model and of the actual structure coincide. As an illustration, a simply supported steel beam subjected to harmonic excitation is investigated, demonstrating that the adaptive excitation can be obtained efficiently.
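In a generic form (which hedges over the paper's exact objective, constraints and norms), the optimization of the first step can be stated as
\[
\min_{\mathbf{p}} \; \sum_{j \notin S} \left\| \frac{\partial \mathbf{u}(\mathbf{p})}{\partial \theta_{j}} \right\|^{2}
\qquad \text{subject to} \qquad
\frac{\partial \mathbf{u}(\mathbf{p})}{\partial \theta_{i}} = \mathbf{c}_{i}, \quad i \in S,
\]
where p denotes the sought excitation (force) pattern, u(p) the displacement response, θ_j the system parameters, S the index set of selected parameters and c_i the prescribed constant sensitivity targets.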
The paper proposes a new method for general 3D measurement and 3D point reconstruction. The method explicitly aims at practical applications: its features include low technical expense and minimal user interaction, a clear separation of the problem into steps that are solved by simple mathematical methods (direct, stable and optimal in the least-squares sense), and scalability. The method expects the internal and radial distortion parameters of the camera(s) used as input, as well as a planar quadrangle of known geometry within the scene. First, for each picture the 3D pose of the reference quadrangle (with respect to the respective camera coordinate frame) is calculated. These 3D reconstructions of the reference quadrangle are then used to obtain the relative external parameters of each camera with respect to the first one. With known external parameters, triangulation is finally possible. The differences from other known procedures are outlined, with attention to the stable mathematical methods (no use of nonlinear optimization) and the low amount of user interaction combined with good results.
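Once the external parameters are known, the final triangulation step reduces to a standard linear least-squares problem; as a hedged reminder of that standard formulation (the paper's exact algebra is not reproduced here), with projection matrices built from the calibration matrices K_k and the recovered poses (R_k, t_k),
\[
\lambda_{k}\,\tilde{\mathbf{x}}_{k} = P_{k}\,\tilde{\mathbf{X}},
\qquad
P_{k} = K_{k}\,[\,R_{k} \mid \mathbf{t}_{k}\,], \quad k = 1,\dots,m ,
\]
and eliminating the unknown scale factors λ_k (for example via \([\tilde{\mathbf{x}}_{k}]_{\times} P_{k}\tilde{\mathbf{X}} = \mathbf{0}\)) yields a homogeneous linear system that is solved for the homogeneous point \(\tilde{\mathbf{X}}\) in the least-squares sense, e.g. by SVD.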
The paper is devoted to the investigation of the dynamic behavior of a cable under the influence of various types of excitation. Such an element has low rigidity and is sensitive to dynamic effects. The structural scheme is a cable whose ends are located at different levels. The dynamic behavior of the cable under kinematic excitation, represented by the oscillations of the upper part of the tower, is analyzed. The scheme of the cable is chosen such that the lower end of the inclined cable is motionless, while the upper end is assumed to move only in the horizontal direction. The fourth-order Runge-Kutta method was implemented in software. The fast Fourier transform was used for spectral analysis, and standard graphics software was used for presenting the results of the investigations. The mathematical model of the cable oscillations was developed taking viscous damping into account. The dynamic characteristics of the cable were analyzed for various damping and kinematic excitation parameters. Time series, spectral characteristics and amplitude-frequency characteristics were obtained, and the resonance amplitudes for different oscillation regimes were estimated. It is noted that increasing the viscous damping coefficient and decreasing the amplitude of the tower's oscillations reduce the critical frequency and the resonant amplitudes.
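As an illustration of the numerical building blocks mentioned (not of the paper's cable model itself), the following self-contained Java sketch integrates a single damped oscillator under harmonic kinematic support excitation with the classical fourth-order Runge-Kutta scheme; all parameter values are arbitrary:

```java
// Classical RK4 integration of a damped oscillator with harmonic kinematic (support)
// excitation:  q'' + 2*zeta*omega*q' + omega^2*q = -a0*Omega^2*sin(Omega*t).
// This is only a one-degree-of-freedom illustration, not the paper's cable model.
public class Rk4CableSketch {

    static double omega = 2.0 * Math.PI * 1.0;   // natural circular frequency [rad/s]
    static double zeta  = 0.02;                  // viscous damping ratio [-]
    static double a0    = 0.05;                  // support motion amplitude [m]
    static double Omega = 2.0 * Math.PI * 0.9;   // excitation circular frequency [rad/s]

    // State derivative for y = {q, qdot}
    static double[] f(double t, double[] y) {
        double accel = -a0 * Omega * Omega * Math.sin(Omega * t)
                       - 2.0 * zeta * omega * y[1] - omega * omega * y[0];
        return new double[] { y[1], accel };
    }

    public static void main(String[] args) {
        double dt = 0.001;
        double[] y = { 0.0, 0.0 };               // initial displacement and velocity
        for (int n = 0; n < 20000; n++) {
            double t = n * dt;
            double[] k1 = f(t, y);
            double[] k2 = f(t + dt / 2, new double[] { y[0] + dt / 2 * k1[0], y[1] + dt / 2 * k1[1] });
            double[] k3 = f(t + dt / 2, new double[] { y[0] + dt / 2 * k2[0], y[1] + dt / 2 * k2[1] });
            double[] k4 = f(t + dt,     new double[] { y[0] + dt * k3[0],     y[1] + dt * k3[1] });
            y[0] += dt / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]);
            y[1] += dt / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]);
            if (n % 1000 == 0) {
                System.out.printf("t = %6.2f s   q = %+.5f m%n", t, y[0]);
            }
        }
    }
}
```

The time series produced this way would then be post-processed with an FFT to obtain the spectral and amplitude-frequency characteristics discussed above.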
The present study was designed to investigate the underlying factors determining the visual impressions of design patterns that have complex textures. Design patterns produced by "the dynamical system defined by iterations of discrete Laplacians on the plane lattice" were adopted as stimuli because they are not only complex but also mathematically defined. In the experiment, 21 graduate and undergraduate students sorted 102 design patterns into several groups according to their visual impressions. These 102 patterns were classified into 12 categories by cluster analysis. The results showed that the regularity of a pattern was the most influential factor determining the visual impression of a design pattern, and that there was some correspondence between visual impressions and the mathematical variables of the design patterns. In particular, the visual impressions were influenced strongly by the neighborhood and less by the number of iteration steps.
The Element-free Galerkin Method has become a very popular tool for the simulation of mechanical problems with moving boundaries. The internally applied Moving Least Squares approximation in general uses Gaussian or cubic weighting functions with compact support. Due to the approximative character of this method, the obtained shape functions do not fulfill the interpolation condition, which causes additional numerical effort for the imposition of the essential boundary conditions. The application of a singular weighting function, which leads to singular coefficient matrices at the nodes, can solve this problem, but requires a very careful placement of the integration points. Special procedures for the handling of such singular matrices have been proposed in the literature, which require additional numerical effort. In this paper a non-singular weighting function is presented which leads to an exact fulfillment of the interpolation condition. This weighting function yields regular values of the weights and the coefficient matrices in the whole interpolation domain, even at the nodes. Furthermore, this function gives much more stable results for varying sizes of the influence radius and for strongly distorted nodal arrangements than classical weighting function types. Nevertheless, for practical applications the results are similar to those obtained with the regularized weighting function presented by the authors in previous publications. Finally, a new concept is presented that enables an efficient analysis of systems with strongly varying node density. In this concept the nodal influence domains are adapted to the nodal configuration by interpolating the influence radius for each direction from the distances to the natural neighbor nodes. This approach requires a Voronoi diagram of the domain, which is available in this study since Delaunay triangles are used as integration background cells. In the numerical examples it is shown that, for systems with varying node density, this method leads to a more uniform and reduced number of influencing nodes than classical circular influence domains, which means that the small additional numerical effort for interpolating the influence radius leads to a remarkable reduction of the total numerical cost in a linear analysis while yielding similar results. For nonlinear calculations this advantage would be even more significant.
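For reference only, the Moving Least Squares approximation referred to above has the standard form below (the paper's specific non-singular weighting function is not reproduced here):
\[
u^{h}(\mathbf{x}) = \mathbf{p}^{T}(\mathbf{x})\,\mathbf{A}^{-1}(\mathbf{x})\,\mathbf{B}(\mathbf{x})\,\hat{\mathbf{u}},
\qquad
\mathbf{A}(\mathbf{x}) = \sum_{i=1}^{n} w_{i}(\mathbf{x})\,\mathbf{p}(\mathbf{x}_{i})\,\mathbf{p}^{T}(\mathbf{x}_{i}),
\qquad
\mathbf{B}(\mathbf{x}) = \bigl[\,w_{1}(\mathbf{x})\,\mathbf{p}(\mathbf{x}_{1}) \;\cdots\; w_{n}(\mathbf{x})\,\mathbf{p}(\mathbf{x}_{n})\,\bigr],
\]
where p(x) is the polynomial basis, w_i(x) are the compactly supported weighting functions and û is the vector of nodal parameters; the interpolation condition u^h(x_i) = û_i holds only if the weighting functions are constructed accordingly, e.g. singular at the nodes or, as proposed here, non-singular but interpolation-enforcing.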