56.03 Methods in Civil Engineering
This talk investigates to what extent a relationship exists between the reliability of a traffic network and certain of its structural parameters. For this purpose the traffic network is modelled as a weighted graph: to analyse network reliability, the edges are weighted with their availabilities ("reliabilities"). The reliability of the traffic network strongly influences the assessment of the road network. By means of a regression analysis, the relationship with further structural parameters developed specifically for the study of traffic networks is examined, in particular the dispersion of the network and its underdevelopment index. Among the many theoretically conceivable network structures, the ring-radial structure reflects real road networks best; such structures occur in many cities. The network reliability is computed with an algorithm based on the complete enumeration of all possible combinations of availability and unavailability of the individual edges, which determines the desired quantity exactly. Although the algorithm is designed for minimal computing time, the procedure remains numerically quite expensive. Using ten different ring-radial structures, a relationship between the structural parameters and the network reliability is demonstrated. Structural assessments, which can be determined with comparatively little input data and computing time, thus allow statements to be made about the reliability of the traffic network.
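The abstract does not spell out the enumeration algorithm; a minimal sketch of the idea, summing the probability of every up/down combination of edges that leaves the network connected, might look as follows (all function and variable names are illustrative, not taken from the paper):

```python
from itertools import product

def connected(nodes, edges):
    """Check connectivity of the surviving subgraph by graph search."""
    if not nodes:
        return True
    adj = {n: [] for n in nodes}
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    seen, stack = {nodes[0]}, [nodes[0]]
    while stack:
        for w in adj[stack.pop()]:
            if w not in seen:
                seen.add(w)
                stack.append(w)
    return len(seen) == len(nodes)

def network_reliability(nodes, edges, avail):
    """Exact all-terminal reliability by complete enumeration:
    sum the probabilities of all edge up/down combinations in which
    the surviving edges keep every node connected."""
    total = 0.0
    for states in product([True, False], repeat=len(edges)):
        p = 1.0                       # probability of this combination
        for up, a in zip(states, avail):
            p *= a if up else (1.0 - a)
        up_edges = [e for e, s in zip(edges, states) if s]
        if connected(nodes, up_edges):
            total += p
    return total

# Triangle (smallest ring): every edge available with probability 0.9
nodes = [0, 1, 2]
edges = [(0, 1), (1, 2), (0, 2)]
R = network_reliability(nodes, edges, [0.9, 0.9, 0.9])
```

The exponential cost of the 2^m enumeration is exactly the "numerically quite expensive" behaviour mentioned above, which motivates cheaper structural assessments.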
The probability of failure with respect to a limit state is usually determined by the integral I of the joint density of the basic variables over the failure domain. A closed-form solution exists only in the special case of normally distributed basic variables with a linear limit state equation. In other cases various approximation methods are common, based on the moments of the basic variables and suitably chosen indices as safety measures. Greater accuracy is offered by first- and second-order reliability theories, which likewise start from I. This contribution presents a novel method whose starting point is not I but the force method, a standard algorithm of structural engineering. Introducing the governing random variables into the matrix of coefficients and the load terms generalizes the system of elasticity equations to a random system of elasticity equations. Its solution, obtained by passing to a deterministic substitute system, yields the statically indeterminate quantities as functions of the random variables acting in the system (e.g. the Young's modulus of the members and the loading). Since this relationship is available analytically, the effect of individual random influences on the statically indeterminate quantities, and on the safety-relevant state variables derived from them, can be assessed. The density function of the limit state equation can be calculated or determined by simulation; from this the failure probability follows. Non-normally distributed random variables are accounted for by expansion into orthogonal polynomials of Gaussian random variables.
We consider the situation in public transport where two bus or tram lines share common stops. The aim of our investigation is to find a timetable for both lines that offers passengers as much convenience as possible. The demand structure, i.e. the number of persons using the two lines, imposes certain restrictions on the headways of the two lines; the remaining degrees of freedom are to be exploited in the sense of this objective. The talk addresses the following questions: by which criteria can "convenience" or "synchronization quality" be measured? How can the individual synchronization measures be computed? How can the remaining degrees of freedom be used to achieve the best possible synchronization? The results are then applied to several examples, and solution proposals are made using the methods provided.
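The abstract leaves the synchronization measures open; one plausible measure, sketched here purely for illustration (names and the choice of mean transfer waiting time are assumptions, not the paper's definitions), is the average wait of a passenger transferring from one line to the other, minimized over the free timetable offset:

```python
def mean_transfer_wait(h1, h2, offset, horizon=3600):
    """Mean wait (seconds) for a passenger arriving with line 1 and
    transferring to line 2, where line i runs with headway hi and
    line 2 is shifted by `offset` seconds."""
    arrivals1 = range(0, horizon, h1)
    # enough line-2 departures to cover every line-1 arrival
    deps2 = [offset + k * h2 for k in range((horizon + 2 * h2) // h2)]
    waits = [min(d - t for d in deps2 if d >= t) for t in arrivals1]
    return sum(waits) / len(waits)

# With equal headways the offset is the only degree of freedom:
# search it on a coarse grid to synchronize the two lines.
best_offset = min(range(0, 600, 60),
                  key=lambda o: mean_transfer_wait(600, 600, o))
```

Other measures (maximum wait, demand-weighted wait) would slot into the same optimization scheme.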
For the design of engineering structures, a reliable prediction of the stress distribution within the structure and on its boundary is of great importance. A closed-form solution of the elastic governing equations of the structure is generally not available. Using the method of weighted residuals, a weak form of the equations is therefore derived, which leads to a mixed work principle. The associated finite element model makes it possible to determine stresses on the boundary of the structure that are in equilibrium with the applied loads.
Realistic modelling and description of the load-bearing and deformation behaviour of building structures has steadily gained importance over recent decades. Structures in building and industrial construction are increasingly used multifunctionally: the "boundaries" between building and load-bearing structure, between enclosing and supporting construction, are dissolving. When space-enclosing elements (walls, floors, roofs) simultaneously serve as load-bearing elements and as thermal and acoustic insulation, the result is, for example, sandwich plates whose layers have strongly differing material properties. When building the FEM model of a multilayered shell, the deformation hypotheses can be stated either for each layer individually or for the shell as a whole. In the first case the nodal degrees of freedom depend on the number of layers, in the second case they do not. In the following, a single deformation hypothesis is assumed for the layer package, starting from the equations of three-dimensional elasticity theory. Including the transverse shear deformations makes it possible to describe adequately the deformations of both thin shells and shells of moderate thickness; to avoid computing the curvatures and the Lamé parameters of the reference surface, which is a task in itself for complicated shell shapes; and to pass naturally from homogeneous to layered shells. The multilayered isoparametric shell finite element is presented; its implementation in the program system SLANG, which is under development, is being prepared.
For more than fifty years, methods of probability theory have also been used to investigate structural safety. Despite the progress achieved and the obvious advantages, this approach has not yet gained a sufficient foothold in practice. In this contribution the problem of structural safety is treated with a novel method. In contrast to the usual probabilistic methods, it does not start from distribution functions; instead, the governing random variables are placed at the centre and introduced directly into the computational procedure. The mathematical tool is the Wiener chaos polynomials. In the space of random variables with finite variance they form a basis with which an arbitrary random variable can be expanded into orthogonal polynomials of Gaussian random variables. This yields an effective formalism that closely follows the conventional displacement method and may be regarded as its probabilistic generalization. The method delivers the limit state condition as a function of the random variables acting on the structure, so the failure probability can be determined by Monte Carlo simulation. The difficulties associated with evaluating the probability integral of the First Order Reliability Method (FORM) are avoided. Using an example structure, it is shown how changes of certain design parameters affect the failure probability.
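The final step described above, estimating the failure probability from a limit state function by Monte Carlo simulation, can be sketched as follows. The limit state and distributions are a toy assumption (resistance minus load, both Gaussian), not the paper's example structure:

```python
import random

def failure_probability(g, sample, n=100_000, seed=1):
    """Crude Monte Carlo estimate of P[g(X) <= 0]:
    draw n samples of the random variables and count failures."""
    rng = random.Random(seed)
    fails = sum(1 for _ in range(n) if g(sample(rng)) <= 0)
    return fails / n

# Toy limit state g = R - S with resistance R ~ N(10, 1)
# and load S ~ N(5, 1); failure means load exceeds resistance.
def sample(rng):
    return rng.gauss(10, 1), rng.gauss(5, 1)

pf = failure_probability(lambda x: x[0] - x[1], sample)
```

Because the chaos-polynomial formalism delivers the limit state as an explicit function of the random variables, each sample evaluation is cheap, which is what makes plain Monte Carlo feasible here.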
This talk investigates to what extent a relationship exists between the structural parameters of a traffic network, its dispersion and its underdevelopment index, and the mean travel times for different traffic demand matrices. Using 10 different ring-radial structures for private motorized transport and 3 different ring-radial structures for bus traffic, such a relationship is demonstrated for 5 different origin-destination demand matrices. The results make it possible to draw conclusions about functional assessments of a traffic network from structural analyses. Since structural assessments require considerably less input data and computing time than functional assessments, this yields substantial savings in the planning phase of traffic networks.
It is shown that, to establish a correct moment equilibrium according to second-order theory, the lever arms of the undeformed system must be used for shear forces and the lever arms of the deformed system for normal forces. In general, however, it is not possible to decompose the nodal deformations of a frame into relevant and irrelevant parts, so that a moment equilibrium check is generally not meaningful in second-order analyses.
Most retaining walls and box culverts built for arterial road construction are simple, and the design process for these structures is often repetitive and labour-intensive because they are so similar in structural configuration. Although some integrated design automation systems developed for retaining walls and box culverts have expedited the design process, the collection and distribution of the resulting engineering documents has not been fully integrated with the computer applications. We have been developing a Web-based design automation system to manage the resulting documents as well as to speed up the repetitive design process. Manipulation of engineering drawings in the Web page is one of the critical functions needed for Web-based design automation. eXtensible Markup Language (XML) and XML-based vector graphics are expected to facilitate the representation of engineering drawings in the Web page. In this paper, we present how we used XML and Scalable Vector Graphics (SVG) to compose engineering drawings and represent them in the Web page. The XML Data Island we designed to define drawing components proved effective in manipulating the engineering drawings in the Web page.
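The paper's own drawing schema is not reproduced in the abstract; as a hedged illustration of the general idea, composing a parametric drawing as SVG markup from design dimensions might look like this (the element names and the schematic culvert section are invented for the example):

```python
import xml.etree.ElementTree as ET

def culvert_outline(width, height, wall):
    """Compose a schematic box-culvert cross-section as SVG:
    an outer rectangle and the inner opening, grouped so the whole
    drawing component can be addressed by one id in the Web page."""
    svg = ET.Element("svg", xmlns="http://www.w3.org/2000/svg",
                     width=str(width), height=str(height))
    g = ET.SubElement(svg, "g", id="culvert")
    ET.SubElement(g, "rect", x="0", y="0",
                  width=str(width), height=str(height), fill="lightgray")
    ET.SubElement(g, "rect", x=str(wall), y=str(wall),
                  width=str(width - 2 * wall),
                  height=str(height - 2 * wall), fill="white")
    return ET.tostring(svg, encoding="unicode")

drawing = culvert_outline(400, 300, 40)
```

Because the geometry is generated from the design parameters, regenerating a drawing after a design change is a single function call, which is the point of integrating drawings into a design automation system.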
Many structures in different engineering applications suffer from cracking. In order to make reliable prognoses about the serviceability of such structures, it is of utmost importance to identify cracks as precisely as possible by non-destructive testing. A novel approach (XIGA), which combines Isogeometric Analysis (IGA) and the Extended Finite Element Method (XFEM), is used for the forward problem, namely the analysis of a cracked material, see [1]. Applying the NURBS (Non-Uniform Rational B-Spline) based approach from IGA together with XFEM allows arbitrarily shaped cracks to be described effectively and avoids the need for remeshing during the crack identification problem. We want to exploit these advantages for the inverse problem of detecting existing cracks by non-destructive testing, see e.g. [2]. The quality of the reconstructed cracks, however, depends on two major issues, namely the quality of the measured data (measurement error) and the discretization of the crack model. The first is taken into account by applying regularizing methods with a posteriori stopping criteria. The second is critical in the sense that too few degrees of freedom, i.e. too few control points of the NURBS, do not allow a precise description of the crack, whereas an increased number of control points increases the number of unknowns in the inverse analysis and intensifies the ill-posedness. The trade-off between accuracy and stability is sought by applying an inverse multilevel algorithm [3, 4] in which the identification is started with short knot vectors that are successively enlarged during the identification process.
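The regularizing iteration with an a posteriori stopping rule mentioned above can be illustrated on a linear toy problem. This is not the paper's crack identification; it is a generic Landweber iteration with the discrepancy principle, on an assumed ill-conditioned system:

```python
import numpy as np

def landweber(A, b, delta, tau=1.1, max_iter=500):
    """Landweber iteration x_{k+1} = x_k + s * A^T (b - A x_k),
    stopped a posteriori by the discrepancy principle
    ||A x_k - b|| <= tau * delta, where delta is the noise level."""
    s = 1.0 / np.linalg.norm(A, 2) ** 2   # step size from spectral norm
    x = np.zeros(A.shape[1])
    for _ in range(max_iter):
        r = b - A @ x
        if np.linalg.norm(r) <= tau * delta:
            break                          # stop before fitting the noise
        x = x + s * A.T @ r
    return x

# Ill-conditioned toy system with noisy data: the well-determined
# component is recovered quickly, the badly determined one is damped.
rng = np.random.default_rng(0)
A = np.array([[1.0, 0.0], [0.0, 0.01]])
x_true = np.array([1.0, 1.0])
noise = 1e-3 * rng.standard_normal(2)
b = A @ x_true + noise
delta = float(np.linalg.norm(noise))
x = landweber(A, b, delta)
```

In the multilevel setting of [3, 4], such an iteration would run on each (successively enlarged) knot vector, with the coarse solution warm-starting the finer level.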
The availability of WWW technology and the introduction of the Internet as a basic resource, like water, electricity or gas, are dramatically changing everyday life, business and, of course, civil engineering. New technologies enable innovative technical solutions and offer new potential for supporting ways of working appropriate to human nature. This demands a new culture of work and collaboration. Contributing to these challenges is a task of the discipline 'Bauinformatik' through related research, development, education and training in civil engineering. This contribution to the IKM Conference 2000 in Weimar sketches selected research and education activities of the authors' institutes on the topics of WWW-based simulation systems (example WEASEL) and WWW-based project platforms (the projects MorWin and TaiGer as well as a European education experiment). Both topics support collaboration through new forms of tele-cooperation in international, heterogeneous and interdisciplinary engineering. Demonstrations from these developments and projects are used to illustrate the scale of the changes in civil engineering due to modern ICT.
This study aims at constructing a simulation system for the distribution patterns of solar irradiance, sunshine duration and sky factor with a user-friendly interface, and at sharing this system with users on the Internet. The characteristics of this simulation system are as follows: (1) a Web-based user interface, (2) a system accessible through the Internet, and (3) sunlight and skylight simulation based on physically accurate equations that consider various sky conditions and the reflection from the surfaces of objects. As a case study, the impact of development plans is analysed and evaluated using this system. Solar irradiance, sunshine duration and sky factor are calculated at the surfaces of the buildings and in open spaces every 10 minutes from 8:00 to 16:00 on the winter solstice. The values of these indices are expressed as ratios to those calculated under the condition with no buildings. In conclusion, this Web-based simulation system can be useful for visualizing the sunlight and skylight conditions in outdoor open spaces and on the surfaces of buildings in order to evaluate development plans objectively. Users can access the system from almost any computer connected to the Internet and modify parameters themselves to obtain the desired results. Further improvement of the user interface is necessary, using more advanced technology such as CGI, Java and JavaScript.
This paper describes the concept, implementation and application of the Web-based Information System ‘Turtle’ for data monitoring, analysis, reporting and management in engineering projects. The system uses a generalised object-oriented approach for information modelling of physical state variables from measurements and simulations by sets of tensor objects and is implemented platform-independently as a Web application. This leads to a more flexible handling of measurement and simulation information in distributed and interdisciplinary engineering projects based on the concept of information sharing. The potential and advantages of Web-based information systems like ‘Turtle’ are described for one selected application example: a measurement programme dealing with the physical limnology of Lake Constance.
This paper describes the concept of and experiences with the international open distance learning course 'HydroWeb'. The course introduces Web-based collaborative engineering, based on information sharing, into the standard education programmes of water-related engineering and civil engineering. Organized under the umbrella of IAHR and ETNET21, it is a collaboration of several universities from all over the world. Started in 1999, the course demonstrates the potential and innovative opportunities of Web technology in education, research and engineering: students from the different partner universities form small distributed teams to solve a given engineering problem within a time window of two weeks. To overcome the spatial distribution, the students apply modern Web technology such as video conferencing, application sharing and document management. All results as well as the final reports are presented as Web documents on a shared Web-based project platform (http://www.hydro-web.org). Besides gaining experience in applying standard Web tools and working methods based on information sharing, instead of the conventional information exchange of daily engineering work, the students improve their soft skills for operating successfully in international and interdisciplinary project environments as part of today's 'Technical Culture'.
In this paper, a wavelet energy damage indicator is used within response surface methodology to identify damage in a simulated filler-beam railway bridge. The approximate model is formulated to include operational and environmental conditions in the assessment. The procedure is split into two stages, a training phase and a detection phase. During the training phase, a so-called response surface is built from training data using polynomial regression and radial basis function approximation. This response surface is then used to detect damage in the structure during the detection phase. The results show that the response surface model is able to detect moderate damage in one of the bridge supports while temperatures and train velocities vary.
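The two-phase scheme can be sketched in a few lines. The quadratic surface in a single environmental variable, the synthetic training data and the fixed threshold are all simplifying assumptions made for this illustration; the paper uses polynomial regression and radial basis functions over operational variables:

```python
import numpy as np

# Training phase: fit a response surface (here a quadratic polynomial
# in temperature) to the damage indicator of the healthy structure.
temps = np.linspace(-10, 30, 21)
indicator = 0.5 + 0.01 * temps + 0.0002 * temps**2   # synthetic data
surface = np.poly1d(np.polyfit(temps, indicator, deg=2))

# Detection phase: flag damage when a measured indicator deviates
# from the surface prediction by more than a threshold.
def is_damaged(temp, measured, threshold=0.05):
    return bool(abs(measured - surface(temp)) > threshold)

healthy = is_damaged(15.0, surface(15.0) + 0.01)   # small deviation
damaged = is_damaged(15.0, surface(15.0) + 0.20)   # large deviation
```

The point of the surface is exactly this separation: environmental variation moves the indicator along the surface, while damage moves it away from it.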
In many applications, such as parameter identification of oscillating systems in civil engineering, speech processing, image processing and others, we are interested in the frequency content of a signal locally in time. Wavelet analysis initially provides a time-scale decomposition of signals, but this wavelet transform can be connected with an appropriate time-frequency decomposition. Matlab, for instance, defines pseudo-frequencies of wavelet scales as the frequency centres of the corresponding bands. These frequency bands overlap more or less, depending on the choice of the biorthogonal wavelet system. Such a definition of a frequency centre is possible and useful because different frequencies predominate at different dyadic scales of a wavelet decomposition, or at different nodes of a wavelet packet decomposition tree. The goal of this work is to offer better algorithms for characterizing frequency band behaviour and for calculating the frequency centres of orthogonal and biorthogonal wavelet systems. This is done with product formulas in the frequency domain. The connecting procedures are thus more analytically founded, better connected with wavelet theory and easier to assess, and they do not need any time-domain approximation of the wavelet and scaling functions. The method works only for biorthogonal wavelet systems, where scaling functions and wavelets are defined by discrete filters; but this is the practically essential case, because it is connected with fast algorithms (FWT, Mallat algorithm). Finally, closed formulas for pure oscillations corresponding to the wavelet transform are given. They can generally be used to compare the application of different wavelets in the FWT with regard to their frequency behaviour.
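A simple numerical stand-in for the frequency-centre idea (not the paper's product formulas) is the spectral centroid of a wavelet's discrete filter, computed from its magnitude spectrum; the pseudo-frequency at a dyadic scale then follows by dividing by the scale:

```python
import numpy as np

def band_center(filter_coeffs, n=4096):
    """Spectral centroid (in cycles per sample) of a discrete wavelet
    filter, computed from its squared magnitude spectrum on [0, 0.5]."""
    H = np.abs(np.fft.rfft(filter_coeffs, n))
    f = np.fft.rfftfreq(n)            # normalized frequencies 0 .. 0.5
    w = H ** 2
    return float((f * w).sum() / w.sum())

# Haar high-pass filter: its band lies in the upper half of [0, 0.5];
# the pseudo-frequency at scale a is centre / a.
g = np.array([1.0, -1.0]) / np.sqrt(2.0)
fc = band_center(g)
pseudo_freq_scale4 = fc / 4
```

The paper's analytical product formulas replace this kind of FFT-based approximation with expressions derived directly from the filter coefficients.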
Perception and processing of events in distributed planning for structural fire protection
(2003)
The building design process is characterized by a high degree of cooperation between planners of different disciplines. On the one hand, designs are detailed on the basis of planning information from other participants; on the other hand, the designs of individual planners also set important boundary conditions for the overall design. This contribution describes an approach to the holistic support of distributed planning, using structural fire protection as an example. The approach accounts for the distributed and concurrent planning practice applied today in the design of medium-sized and large buildings. Not only the planning participants are modelled as distributed; the individual pieces of planning information are also held in distributed form within the shared cooperation network. The focus of this contribution is on the perception of design changes and events during planning, and on the processing of this information to ensure a consistent design. This is achieved, on the one hand, by the CoBE awareness model, with which events can be detected and made available to the information network; on the other hand, event handling and the subsequent discipline-specific information processing are described by means of a multi-agent system.
Non-stationary construction site processes are characterized by the fact that their course cannot be fully predicted and may be subject to numerous disturbances. Planning and controlling these processes therefore requires methods that allow, among other things, high adaptability, the representation of parallel activities, the capture of sudden disturbances, and stochastic or fuzzy parameters. Special simulation systems have been developed on this basis. In practice, however, the bar chart or network planning technique serves as the planning tool in most cases, although the information content of network planning is clearly lower than that of simulation, and deterministic calculations usually turn out too optimistic. A transition from network planning to simulation is possible by first representing network plans on the basis of Petri nets, thus enabling a stepwise move to simulation. Besides model building, the provision of realistic parameters underlying the calculations is of great importance. In scheduling, knowledge of certain external influences (e.g. precipitation) is often available only in fuzzy form; here the use of fuzzy methods suggests itself, allowing fuzzy factors to be captured and included in the planning. Computer programs are available for representing Petri nets, for computation (simulation) on their basis, and for preparing the data with fuzzy methods.
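The Petri-net representation mentioned above can be made concrete with a minimal place/transition net; the class and the two-activity site example are invented for illustration and do not correspond to the programs referenced in the abstract:

```python
class PetriNet:
    """Minimal place/transition net: a transition is enabled when every
    input place holds at least one token; firing moves tokens along."""
    def __init__(self, marking):
        self.marking = dict(marking)      # place -> token count
        self.transitions = {}             # name -> (inputs, outputs)

    def add_transition(self, name, inputs, outputs):
        self.transitions[name] = (inputs, outputs)

    def enabled(self, name):
        inputs, _ = self.transitions[name]
        return all(self.marking.get(p, 0) >= 1 for p in inputs)

    def fire(self, name):
        inputs, outputs = self.transitions[name]
        if not self.enabled(name):
            raise ValueError(f"{name} not enabled")
        for p in inputs:
            self.marking[p] -= 1
        for p in outputs:
            self.marking[p] = self.marking.get(p, 0) + 1

# Two site activities in sequence: excavation must finish (token in
# 'excavated') before foundation work can start.
net = PetriNet({"site_ready": 1})
net.add_transition("excavate", ["site_ready"], ["excavated"])
net.add_transition("foundation", ["excavated"], ["founded"])
net.fire("excavate")
can_build = net.enabled("foundation")
```

Extending the transitions with stochastic or fuzzy firing durations is precisely the stepwise path from a network plan to a simulation that the abstract describes.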
In the refurbishment of old buildings and the surveying of existing structures, it is frequently necessary to update existing drawings with respect to the current state of the building or, if these drawings are no longer accessible, to produce entirely new as-built documents. Laser surveying technology offers a convenient way to acquire such building data. In this context, the present article presents approaches to partially automating the generation of a three-dimensional computer model of a building. The result is a volume model in which the geometric and topological information about faces, edges and vertices is first described in the sense of a B-rep model. The objects of this volume model are analysed with methods from the field of artificial intelligence and systematically categorized into building element classes. Knowledge of the element semantics thus makes it possible to derive a building product model from the data and to make it available to individual specialist planners, for instance for preparing an energy certificate. The article demonstrates the successful use of artificial neural networks in building surveying with a complex example.
The digital support of planning processes is a current focus of research and work at the chair of Informatik in der Architektur (InfAR) and the junior professorship of Architekturinformatik of the Faculty of Architecture at the Bauhaus-Universität Weimar. Anchored in the DFG collaborative research centre SFB 524 'Tools and constructions for the revitalization of buildings', concepts and prototypes for discipline-oriented planning support are being developed. As one aspect of this, this contribution presents the vision of an incrementally growing geometry model for computer-aided building surveying, which accompanies the surveyor from the first site visit onwards. The geometric information gained in each phase of the survey is to be reused, refined or corrected in the subsequent phases; surveying techniques and geometry model are closely coupled. Different views onto a common geometry model aim to let the user exploit the advantages of planar representations without losing the three-dimensional overview or the corresponding spatial manipulations. The geometry model is embedded in a dynamic building model. The contribution addresses building surveys under the following assumptions: the survey serves to prepare planning within existing buildings; only one level of accuracy (in the range of +/- 10 cm) is supported; and the geometric representation of the surveyed building is based exclusively on planar surfaces.
The use of virtual reality techniques in the development of educational applications brings new perspectives to the teaching of subjects related to the field of civil construction in Civil Engineering domain. In order to obtain models, which would be able to visually simulate the construction process of two types of construction work, the research turned to the techniques of geometric modelling and virtual reality. The applications developed for this purpose are concerned with the construction of a cavity wall and a bridge. These models make it possible to view the physical evolution of the work, to follow the planned construction sequence and to visualize details of the form of every component of the works. They also support the study of the type and method of operation of the equipment necessary for these construction procedures. These models have been used to distinct advantage as educational aids in first-degree courses in Civil Engineering. Normally, three-dimensional geometric models, which are used to present architectural and engineering works, show only their final form, not allowing the observation of their physical evolution. The visual simulation of the construction process needs to be able to produce changes to the geometry of the project dynamically. In the present study, two engineering construction work models were created, from which it was possible to obtain three-dimensional models corresponding to different states of their form, simulating distinct stages in their construction. Virtual reality technology was applied to the 3D models. Virtual reality capacities allow the interactive real-time viewing of 3D building models and facilitate the process of visualizing, evaluating and communicating.
Some key facts about the economic environment of the construction industry are explained. It is shown that the construction industry is very heterogeneous and has changed drastically in recent years due to a rapidly moving commercial environment. Two examples of today's use of virtual construction tools in construction projects are presented. The first example is document control for a large international project; the second is the application of 4D modelling in the preconstruction phase of a dam project. It is shown that virtual construction is a major international trend that is currently gaining speed. Some generic industry needs for research and development aiming at short- and medium-term results are presented.
In modern buildings the complexity of heating technology is constantly increasing, making it ever more difficult to ensure an ecologically and economically sensible interplay of the components. Networking the various components of a heating system with standard computer network technology is intended to help increase energy efficiency. Embedded systems and microcontrollers act as controllers for the subsystems; by communicating with each other they are to adapt their control behaviour to one another. An Internet connection makes further information available for plant operation and can also be used for remote maintenance. With small Java programs executed in a computer's web browser, so-called applets, the operating states of heating systems can be visualized in real time, and recording the operating data makes their analysis possible.
This contribution is based on the approaches and results of the research project 'Process-oriented networking of engineering design using the example of geotechnics' within the DFG priority programme 1103 'Network-based cooperative planning processes in structural engineering'. The goal is the development of a network-based cooperation platform to support engineering design. Its methodological foundations are Petri nets with individual tokens combined with a semantic evaluation of information. Using an example, the contribution demonstrates the fundamental capabilities of Petri nets and presents the control of planning processes on the basis of meta-information. Furthermore, the approach of capturing the variable process flow by means of component-oriented process patterns for geotechnical construction elements is pursued. Finally, a path towards implementation is shown.
In view of powerful computer networks and the intensive globalization of science and technology in construction, the goal of Bauinformatik is to design the planning processes of structural engineering for the use of distributed resources, to develop suitable cooperation models for specialist planning in the information network of the project participants, and to enable cooperative project work using distributed domain models in networks. To this end, new methodological foundations are being created that take into account the profound changes in work and organizational structures. Against this background, the contribution outlines current research topics of Bauinformatik, developed in a division of labour by various projects, in particular within the DFG priority programme 1103 'Network-based cooperative planning processes in structural engineering'. An integrative process model is presented that brings together the foundations for supporting network-based cooperative planning with modern methods of information processing and communication technology. Using an example from structural fire protection, the methodological approaches of process modelling with Petri nets and of mobile software agents for supporting cooperation in specialist planning in construction are presented and explained.
We consider a structural truss problem where all of the physical model parameters are uncertain: not just the material values and applied loads, but also the positions of the nodes are assumed to be inexact but bounded and are represented by intervals. Such uncertainty may typically arise from imprecision during the process of manufacturing or construction, or round-off errors. In this case the application of the finite element method results in a system of linear equations with numerous interval parameters which cannot be solved conventionally. Applying a suitable variable substitution, an iteration method for the solution of a parametric system of linear equations is firstly employed to obtain initial bounds on the node displacements. Thereafter, an interval tightening (pruning) technique is applied, firstly on the element forces and secondly on the node displacements, in order to obtain tight guaranteed enclosures for the interval solutions for the forces and displacements.
Der Schwerpunkt von Forschung und Entwicklung auf dem Gebiet der Tragwerksplanungs-Software lag in den letzten Jahren auf der Erweiterung des funktionalen Umfangs. In der Folge ist es notwendig, den gestiegenen Funktionsumfang einem möglichst breiten Anwenderkreis durch ingenieurgemäß gestaltete Arbeitsumgebungen zugänglich zu machen, so dass ein möglichst effizientes und fehlerarmes Arbeiten ermöglicht wird. Aus der Sicht der Tragwerksplaner muss eine ingenieurgemäß gestaltete Software eine dem spezifischen Arbeitsablauf angepasste Nutzer-Software-Interaktion aufweisen. Dabei sind die benötigten Funktionalitäten in ein einheitliches System zu integrieren und eine Anpassbarkeit durch den Anwender sicherzustellen. Die Berücksichtigung dieser Anforderungen mit herkömmlichen Mitteln würde einen unverhältnismäßig hohen Entwicklungsaufwand erfordern. Infolgedessen muss aus der Sicht der Software-Entwickler eine moderne Software-Architektur für die Tragwerksplanung eine Erhöhung des Wiederverwendungsgrades und eine unabhängige Erweiterbarkeit als zusätzliche Anforderungen erfüllen. In diesem Beitrag wird ein auf Verbunddokumenten basierendes Konzept vorgestellt, mit dem eine Zusammenführung von Standard-Software und fachspezifischen Software-Komponenten zu einer ingenieurgemäßen Arbeitsumgebung ermöglicht wird. Damit kann die Analyse und die Dokumentation eines Tragelementes einschließlich der zugehörigen Datenhaltung innerhalb eines Verbunddokumentes erfolgen. Gleichzeitig kann der software-technische Wiederverwendungsgrad durch die Definition eines Component Frameworks als unabhängig erweiterbare Software-Architektur und durch den Einsatz von Software-Komponenten mit eigener Nutzeroberfläche über das bisher erreichte Niveau hinaus gesteigert werden. Die Umsetzbarkeit des Konzeptes wird durch eine Pilotimplementierung demonstriert.
In the past, several types of Fourier transforms in Clifford analysis have been studied. In this paper, first an overview of these different transforms is given. Next, a new equation in a Clifford algebra is proposed, the solutions of which will act as kernels of a new class of generalized Fourier transforms. Two solutions of this equation are studied in more detail, namely a vector-valued solution and a bivector-valued solution, as well as the associated integral transforms.
VARIATIONAL POSITING AND SOLUTION OF COUPLED THERMOMECHANICAL PROBLEMS IN A REFERENCE CONFIGURATION
(2015)
Variational formulation of a coupled thermomechanical problem of anisotropic solids for the case of non-isothermal finite deformations in a reference configuration is shown. The formulation of the problem includes: a condition of equilibrium flow of a deformation process in the reference configuration; an equation of a coupled heat conductivity in a variational form, in which an influence of deformation characteristics of a process on the temperature field is taken into account; tensor-linear constitutive relations for a hypoelastic material; kinematic and evolutional relations; initial and boundary conditions. Based on this formulation several axisymmetric isothermal and coupled problems of finite deformations of isotropic and anisotropic bodies are solved. The solution of coupled thermomechanical problems for a hollow cylinder in case of finite deformation showed an essential influence of coupling on distribution of temperature, stresses and strains. The obtained solutions show the development of stressstrain state and temperature changing in axisymmetric bodies in the case of finite deformations.
VARIATION OF ROTATIONAL RESTRAINT IN GRID DECK CONNECTION DUE TO CORROSION DAMAGE AND STRENGTHENING
(2006)
The approach to assessment of rotational restraint of stringer-to-crossbeam connection in a deck of 100-year old steel truss bridge is presented. Sensitivity of rotational restraint coefficient of the connection to corrosion damage and strengthening is analyzed. Two criteria of the assessment of the rotational restraint coefficient are applied: static and kinematic one. The former is based on bending moment distribution in the considered member, the latter one – on the member rotation at the given joint. 2D-element model of finite element method is described: webs and flanges are modeled with shell elements, while rivets in the connection – with system of beam and spring elements. The method of rivet modeling is verified by T-stub connection test results published in literature. FEM analyses proved that recorded extent of corrosion damage does not alter the initial rotational restraint of stringer-to-crossbeam connection. Strengthening of stringer midspan influences midspan bending moment and stringer end rotation in a different way. Usually restoring member load bearing capacity means strengthening its critical regions (where the highest stress levels occur). This alters flexural stiffness distribution over member length and influences rotational restraint at its connection to other members. The impact depends on criterion chosen for rotational restraint coefficient assessment.
Adopting the European laws concerning environmental protection will require sustained efforts of the authorities and communities from Romania; implementing modern solutions will become a fast and effective option for the improvement of the functioning systems, in order to prevent disasters. As a part of the urban infrastructure, the drainage networks of pluvial and residual waters are included in the plan of promoting the systems which protect the environmental quality, with the purpose of integrated and adaptive management. The paper presents a distributed control system for sewer network of Iasi town. Unsatisfactory technical state of the actual sewer system is exposed, focusing on objectives related to implementation of the control system. The proposed distributed control system of Iasi drainage network is based on the implementation of the hierarchic control theory for diagnose, sewer planning and management. There are proposed two control levels: coordinating and local execution. Configuration of the distributed control system, including data acquisition and conversion equipment, interface characteristics, local data bus, data communication network, station configuration are widely described. The project wish to be an useful instrument for the local authorities in the preventing and reducing the impact of future natural disasters over the urban areas by means of modern technologies.
Interval analysis extends the concept of computing with real numbers to computing with real intervals. As a consequence, some interesting properties appear, such as the delivery of guaranteed results or confirmed global values. The former property is given in the sense that unknown numerical values are in known to lie in a computed interval. The latter property states that the global minimum value, for example, of a given function is also known to be contained in a interval (or a finite set of intervals). Depending upon the amount computation effort invested in the calculation, we can often find tight bounds on these enclosing intervals. The downside of interval analysis, however, is the mathematically correct, but often very pessimistic size of the interval result. This is in particularly due to the so-called dependency effect, where a single variable is used multiple times in one calculation. Applying interval analysis to structural analysis problems, the dependency has a great influence on the quality of numerical results. In this paper, a brief background of interval analysis is presented and shown how it can be applied to the solution of structural analysis problems. A discussion of possible improvements as well as an outlook to parallel computing is also given.
This paper presents a generic methodology for measurement system configuration when the goal is to identify behaviour models that reasonably explain observations. For such tasks, the best measurement system provides maximum separation between candidate models. In this work, the degree of separation between models is measured using Shannon’s Entropy Function. The location and type of measurement devices are chosen such that the entropy of candidate models is greatest. This methodology is tested on a laboratory structure and, to demonstrate generality, an existing fresh water supply network in a city in Switzerland. In both cases, the methodology suggests an appropriate set of sensors for identifying the state of the system.
The paper describes further developments of the interactive evolutionary design concept relating to the emergence of mutually inclusive regions of high performance design solutions. These solutions are generated from cluster-oriented genetic algorithm (COGAs) output and relate to a number of objectives introduced during the preliminary design of military airframes. The data-mining of multi-objective COGA (moCOGA) output further defines these regions through the application of clustering algorithms, data reduction and variable attribute relevance analyses. A number of visual representations of the COGA output projected onto both variable and objective space are presented. The multi-objective output of the COGA is compared to output from a Strength Pareto Evolutionary Algorithm (SPEA-II) to illustrate the manner in which moCOGAs can generate good approximations to Pareto frontiers.
At present time neuronet's technologies have got a wide application in a different fields of technique. At the same time they give insufficient consideration to using neuron nets in the field of building. Use of approximating neuron nets will allow to definite the deflected mode of constructions elements using noticeably less computing facilities then by using universal methods, finite-element method for instance. Today neuron nets are used for calculation separate elements of building constructions. In this work use of neuron nets for calculation deflected mode of construction which consists of many elements is consider. The main idea of suggested analysis is using neuron nets for calculation internal intensities and transferences pieces of model which are selected by there functional destination. For example, a plate is destine for adoption intensity distributed among area, the purpose of core is taking up surface distributed intensity. Elements involved as intensity converter. Plate serve for intensities dispersion and their transfer. A template is associated with functional destination. A template regards as composition of model elements which has define functional destinations. A single template can incarnate several functional destinations. On receipt values of components transference the estimation of their permissibility is put into practice. In the case of detection a violation of permissible limit, in the component database is making a search for component with analogous functional destination, according to the type of violation. If such component is found than a change a previous component into new one is realized. Thus besides control a condition of construction by components there is a possibility to make a search for decisions of revealed problem....
Portugal is one of the European countries with higher spatial and population freeway network coverage. The sharp growth of this network in the last years instigates the use of methods of analysis and the evaluation of their quality of service in terms of the traffic performance, typically performed through internationally accepted methodologies, namely that presented in the Highway Capacity Manual (HCM). Lately, the use of microscopic traffic simulation models has been increasingly widespread. These models simulate the individual movement of the vehicles, allowing to perform traffic analysis. The main target of this study was to verify the possibility of using microsimulation as an auxiliary tool in the adaptation of the methodology by HCM 2000 to Portugal. For this purpose, were used the microscopic simulators AIMSUN and VISSIM for the simulation of the traffic circulation in the A5 Portuguese freeway. The results allowed the analysis of the influence of the main geometric and traffic factors involved in the methodology by HCM 2000. In conclusion, the study presents the main advantages and limitations of the microsimulators AIMSUN and VISSIM in modelling the traffic circulation in Portuguese freeways. The main limitation is that these microsimulators are not able to simulate explicitly some of the factors considered in the HCM 2000 methodology, which invalidates their direct use as a tool in the quantification of those effects and, consequently, makes the direct adaptation of this methodology to Portugal impracticable.
Der Begriff der Zuverlässigkeit spielt eine zentrale Rolle bei der Bewertung von Verkehrsnetzen. Aus der Sicht der Nutzer des öffentlichen Personennahverkehrs (ÖPNV) ist eines der wichtigsten Kriterien zur Beurteilung der Qualität des Liniennetzes, ob es möglich ist, mit einer großen Sicherheit das Reiseziel in einer vorgegebenen Zeit zu erreichen. Im Vortrag soll dieser Zuverlässigkeitsbegriff mathematisch gefasst werden. Dabei wird zunächst auf den üblichen Begriff der Zuverlässigkeit eines Netzes im Sinne paarweiser Zusammenhangswahrscheinlichkeiten eingegangen. Dieser Begriff wird erweitert durch die Betrachtung der Zuverlässigkeit unter Einbeziehung einer maximal zulässigen Reisezeit. In vergangenen Arbeiten hat sich die Ring-Radius-Struktur als bewährtes Modell für die theoretische Beschreibung von Verkehrsnetzen erwiesen. Diese Überlegungen sollen nun durch Einbeziehung realer Verkehrsnetzstrukturen erweitert werden. Als konkretes Beispiel dient das Straßenbahnnetz von Krakau. Hier soll insbesondere untersucht werden, welche Auswirkungen ein geplanter Ausbau des Netzes auf die Zuverlässigkeit haben wird. This paper is involved with CIVITAS-CARAVEL project: "Clean and better transport in cites". The project has received research funding from the Community's Sixth Framework Programme. The paper reflects only the author's views and the Community is not liable for any use that may be made of the information contained therein.
Im Stahlbeton- und Spannbetonbau kommen in zunehmendem Maße Verbundkonstruktionen aus Betonfertigteilen und Ortbetonergänzungen zum Einsatz. Die Fertigteile werden je nach Spannweite, Belastung und speziellen Anforderungen mit schlaffer oder vorgespannter Be-wehrung ausgeführt. Aufgrund der im Allgemeinen unterschiedlichen Betone der einzelnen Querschnittsanteile sowie bedingt durch den zeitlichen Versatz in der Herstellung der Fertig-teile und der Ortbetonergänzungen ergeben sich Unterschiede im Langzeitverhalten, die bei der Bemessung und Nachweisführung zu berücksichtigen sind. Das Kriechen und Schwinden der Betone als Folge des zeitabhängigen Materialverhaltens und die Spezifik des Verbund-querschnitts können für die Gebrauchstauglichkeit relevant sein, insbesondere wenn die Fer-tigteile vorgespannt sind. Eine hinreichend wirklichkeitsnahe Beschreibung des Langzeitverhaltens von Beton gestattet die Theorie des elastisch-kriechenden Körpers, wobei sich im Allgemeinen keine geschlosse-ne Lösung für bewehrte Betonverbundquerschnitte angeben lässt. Eine Alternative hierzu ergibt sich, wenn für die einzelnen Kriechintervalle ein vereinfachter Ansatz für den Verlauf der Kriechfunktion entsprechend der Theorie des Alterns getroffen wird. Dabei werden die im Zeitintervall (tj-1,tj) umgelagerten Schnittgrößen als neue Belastung der einzelnen Querschnittsanteile im Zeitintervall (tj,tj+1) mit der Kriech-funktion f(tj+1,tj) berücksichtigt. Bei hinreichend kleinen Zeitschritten können die Nachteile der getroffenen Vereinfachung vernachlässigt werden.
In der heutigen Informationsgesellschaft sind alle Geschäftsprozesse in einem Unternehmen geprägt durch die Verwendung der vorhandenen Informations- und Kommunikationstechnologien. Diese haben zur Folge, dass Unternehmen mit einer wahren Flut an Informationen konfrontiert werden, die von extern und intern in das Unternehmen eingetragen wurden. Die erfolgreiche Beherrschung dieser Informationsflut bildet die Basis für das effektive Arbeiten im Unternehmen. Grundvoraussetzung hierfür ist vor allem ein harmonisches Zusammenspiel zwischen Mensch ↔ Organisation ↔ Technik. Dokumentenmanagementsysteme haben sich zur Aufgabe gemacht, den Anwender bei der technologischen Bewältigung dieser Aufgabenstellung zu unterstützen. Der hier vorgestellte Artikel zeigt die Probleme auf, die während der Umsetzung eines Dokumentenmanagementsystems auftreten können und konzentriert sich dabei besonders auf Kleinen und Mittleren Unternehmen (KMU) im Bauwesen. Charakteristisch für diese ist die Schwäche die eigenen Unternehmensstrukturen betriebswirtschaftlich aufzubereiten, woraus später bei einer technologischen Umsetzung schwerwiegende Probleme resultieren. Genau an diesem Punkt setzt der vorgestellte Artikel an und versucht den Zusammenhang zwischen Organisation und Technik herauszuarbeiten, um darauf aufsetzend einen allgemeingültigen Ansatz für die Umsetzung eines Dokumentenmanagementsystems zu liefern.
An einem Teil der Topologie architektonischer Räume, dem Volumenadjazenzgraphen (VAG), wird gezeigt wie topologisches Modellieren Anwendungen der Bauplanung integrieren kann. Dazu wird ein Prototyp vorgestellt, der im wesentlichen aus drei Komponenten besteht: Mit dem Anforderungsmanager werden Anforderungen eigegeben, die formal gut handhabbar sind. Mit dem Topologiemanager werden diese Anforderungen mit gezeichneten Räumen kombiniert. Die topologischen Relationen in den Zeichnungen werden mit den entsprechenden Mitteln des GIS berechnet und in eine Datenbank exportiert. Der Anforderungsprüfer vergleicht dann die Anforderungsdaten, die mit Hilfe des Anforderungsmanagers erzeugt wurden, mit den Topologiedaten. Dieser Ansatz soll zeigen, wie topologische Modelle eine Formalisierung semantisch hochstehender Informationen ermöglichen, indem sie als Eigenschaften von Graphen dargestellt werden
Fuzzy functions are suitable to deal with uncertainties and fuzziness in a closed form maintaining the informational content. This paper tries to understand, elaborate, and explain the problem of interpolating crisp and fuzzy data using continuous fuzzy valued functions. Two main issues are addressed here. The first covers how the fuzziness, induced by the reduction and deficit of information i.e. the discontinuity of the interpolated points, can be evaluated considering the used interpolation method and the density of the data. The second issue deals with the need to differentiate between impreciseness and hence fuzziness only in the interpolated quantity, impreciseness only in the location of the interpolated points and impreciseness in both the quantity and the location. In this paper, a brief background of the concept of fuzzy numbers and of fuzzy functions is presented. The numerical side of computing with fuzzy numbers is concisely demonstrated. The problem of fuzzy polynomial interpolation, the interpolation on meshes and mesh free fuzzy interpolation is investigated. The integration of the previously noted uncertainty into a coherent fuzzy valued function is discussed. Several sets of artificial and original measured data are used to examine the mentioned fuzzy interpolations.
This paper presents a methodology for uncertainty quantification in cyclic creep analysis. Several models- , namely BP model, Whaley and Neville model, modified MC90 for cyclic loading and modified Hyperbolic function for cyclic loading are used for uncertainty quantification. Three types of uncertainty are included in Uncertainty Quantification (UQ): (i) natural variability in loading and materials properties; (ii) data uncertainty due to measurement errors; and (iii) modelling uncertainty and errors during cyclic creep analysis. Due to the consideration of all type of uncertainties, a measure for the total variation of the model response is achieved. The study finds that the BP, modified Hyperbolic and modified MC90 are best performing models for cyclic creep prediction in that order. Further, global Sensitivity Analysis (SA) considering the uncorrelated and correlated parameters is used to quantify the contribution of each source of uncertainty to the overall prediction uncertainty and to identifying the important parameters. The error in determining the input quantities and model itself can produce significant changes in creep prediction values. The variability influence of input random quantities on the cyclic creep was studied by means of the stochastic uncertainty and sensitivity analysis namely the Gartner et al. method and Saltelli et al. method. All input imperfections were considered to be random quantities. The Latin Hypercube Sampling (LHS) numerical simulation method (Monte Carlo type method) was used. It has been found by the stochastic sensitivity analysis that the cyclic creep deformation variability is most sensitive to the Elastic modulus of concrete, compressive strength, mean stress, cyclic stress amplitude, number of cycle, in that order.
Das studentische Projekt Umweltorientiertes Projektmanagement (UWoPM) eröffnet eine neue Generation von Arbeitsweisen im Projektmanagement. Umweltbelange wirken über Rechtsvorschriften immer stärker in das Projektgeschäft hinein. Durch Einbeziehung von rechtsrelevanten Daten in die Projektdatenbasis sowie von Rechtsbestimmungen in die Wissensbasis wird eine Softwareunterstützung für (wenn auch zunächst nur elementare) Rechtsfragen geboten. Eine spezielle Präsentation von Rechtswissen und eine komfortable Unterstützung der Antragstellung sollen dies in die Selbstverständlichkeiten des Projektmanagements einbringen. Gleichzeitig bietet der UWoPM-Prototyp zeitgemäße Systemlösungen für die Kommunikation Antragsteller - Behörde sowie für den Workflow innerhalb der Genehmigungsbehörde. Dieser Vortrag gibt Einführung und Übersicht zu insgesamt 18 Diplomarbeiten, in denen die Einzelkomponenten des UWoPM entwickelt wurden. Zentrales Problem war die Überführung gesetzlicher Vorschriften in Wissensbasen. Durch Beschränkung auf >Kernbestimmungen< und Ausweis von Interpretationsbedarf wurde ein prinzipieller Lösungsweg realisiert. Die technische Realisierung konnte zwar über einen Prototyp noch nicht hinausgehen, weist aber die Realisierbarkeit des Gesamtansatzes nach.
The execution of project activities generally requires the use of (renewable) resources like machines, equipment or manpower. The resource allocation problem consists in assigning time intervals to the execution of the project activities while taking into account temporal constraints between activities emanating from technological or organizational requirements and costs incurred by the resource allocation. If the total procurement cost of the different renewable resources has to be minimized we speak of a resource investment problem. If the cost depends on the smoothness of the resource utilization over time the underlying problem is called a resource levelling problem. In this paper we consider a new tree-based enumeration method for solving resource investment and resource levelling problems exploiting some fundamental properties of spanning trees. The enumeration scheme is embedded in a branch-and-bound procedure using a workload-based lower bound and a depth first search. Preliminary computational results show that the proposed procedure is promising for instances with up to 30 activities.
Die Lage oder der Standort eines Bauwerkes ist zweifellos charakteristisch mit diesem verbunden. Die räumliche Einordnung eines exponierten architektonischen Werkes, die Erschließung eines Gebäudes im innerstädtischen Umfeld oder die Verwaltung eines Bestands ist im Bauwesen oder der Architektur immer visuell. Die Interaktion mit Zeichnungen, die Orientierung anhand eines Lageplans oder die Dokumentation mit Fotos sind nur einige Beispiele. Die wirtschaftliche Optimierung unter Nutzung solcher Daten und deren nachfolgende Visualisierung soll hier mittels geeigneter Systeme gezeigt werden. Aber auch die Bewertung durch und Interaktion mit dem Benutzer unterstützt werden. So soll dieser Artikel beispielhaft den Transport und Verkehr fokussieren. Mathematik Graphen und Netze formen dabei Modelle zur Optimierung der zugrundeliegenden Logistikprozesse: Die Baustoffbedarfsplanung mit Bestellwesen, (Ab-)Transport und Lieferung von Material, Tourenzusammenstellung oder Standortauswahl. Informatik Weiterhin wird deren softwaretechnische Umsetzung und Einordung in begleitende Projekte der >Mathematischen Optimierung< vorgestellt.
In order to make control decisions, Smart Buildings need to collect data from multiple sources and bring it to a central location, such as the Building Management System (BMS). This needs to be done in a timely and automated fashion. Besides data being gathered from different energy using elements, information of occupant behaviour is also important for a building’s requirement analysis. In this paper, the parameter of Occupant Density was considered to help find behaviour of occupants towards a building space. Through this parameter, support for building energy consumption and requirements based on occupant need and demands was provided. The demonstrator presented provides information on the number of people present in a particular building space at any time, giving the space density. Such collections of density data made over a certain period of time represents occupant behaviour towards the building space, giving its usage patterns. Similarly, inventory items were tracked and monitored for moving out or being brought into a particular read zone. For both, people and inventory items, this was achieved using small, low-cost, passive Ultra-High Frequency (UHF) Radio Frequency Identification (RFID) tags. Occupants were given the tags in a form factor of a credit card to be possessed at all times. A central database was built where occupant and inventory information for a particular building space was maintained for monitoring and providing a central data access.
A central issue for the autonomous navigation of mobile robots is to map unknown environments while simultaneously estimating its position within this map. This chicken-eggproblem is known as simultaneous localization and mapping (SLAM). Asctec’s quadrotor Pelican is a powerful and flexible research UAS (unmanned aircraft system) which enables the development of new real-time on-board algorithms for SLAM as well as autonomous navigation. The relative UAS pose estimation for SLAM, usually based on low-cost sensors like inertial measurement units (IMU) and barometers, is known to be affected by high drift rates. In order to significantly reduce these effects, we incorporate additional independent pose estimation techniques using exteroceptive sensors. In this article we present first pose estimation results using a stereo camera setup as well as a laser range finder, individually. Even though these methods fail in few certain configurations we demonstrate their effectiveness and value for the reduction of IMU drift rates and give an outlook for further works towards SLAM.
The process of matching data represented in two different data models is a longstanding issue in the exchange of data between different software systems. While the traditional manual matching approach cannot meet today’s demands on data exchange, research shows that a fully automated generic approach for model matching is not likely, and generic semi-automated approaches are not easy to implement. In this paper, we present an approach that focuses on matching data models in a specific domain. The approach combines a basic model matching approach and a version matching approach to deduce new matching rules to enable data transfer between two evolving data models.
This paper describes an ongoing research on the representation and reasoning about construction specifications, which is part of a bigger research project that aims at developing a formalism for automating the identification of deviations and defects on construction sites. We specifically describe the requirements on product and process models and an approach for representing and reasoning about construction specifications to enable automated detection and assessment of construction deviations and defects. This research builds on the previous research on modeling design specifications and extends and elaborates concept of contexts developed in that domain. The paper provides an overview of how the construction specifications are being modele d in this research and points out future steps that need to be accomplished to develop the envisioned automated deviation and defect detection system.
The aim of our contribution is to clarify the relation between totally regular variables and Appell sequences of hypercomplex holomorphic polynomials (sometimes simply called monogenic power-like functions) in Hypercomplex Function Theory. After their introduction in 2006 by two of the authors of this note on the occasion of the 17th IKM, the latter have been subject of investigations by different authors with different methods and in various contexts. The former concept, introduced by R. Delanghe in 1970 and later also studied by K. Gürlebeck in 1982 for the case of quaternions, has some obvious relationship with the latter, since it describes a set of linear hypercomplex holomorphic functions all power of which are also hypercomplex holomorphic. Due to the non-commutative nature of the underlying Clifford algebra, being totally regular variables or Appell sequences are not trivial properties as it is for the integer powers of the complex variable z=x+ iy. Simple examples show also, that not every totally regular variable and its powers form an Appell sequence and vice versa. Under some very natural normalization condition the set of all para-vector valued totally regular variables which are also Appell sequences will completely be characterized. In some sense the result can also be considered as an answer to a remark of K. Habetha in chapter 16: Function theory in algebras of the collection Complex analysis. Methods, trends, and applications, Akademie-Verlag Berlin, (Eds. E. Lanckau and W. Tutschke) 225-237 (1983) on the use of exact copies of several complex variables for the power series representation of any hypercomplex holomorphic function.
A topology optimization method has been developed for structures subjected to multiple load cases (Example of a bridge pier subjected to wind loads, traffic, superstructure...). We formulate the problem as a multi-criterial optimization problem, where the compliance is computed for each load case. Then, the Epsilon constraint method (method proposed by Chankong and Haimes, 1971) is adapted. The strategy of this method is based on the concept of minimizing the maximum compliance resulting from the critical load case while the other remaining compliances are considered in the constraints. In each iteration, the compliances of all load cases are computed and only the maximum one is minimized. The topology optimization process is switching from one load to another according to the variation of the resulting compliance. In this work we will motivate and explain the proposed methodology and provide some numerical examples.
Digital models of buildings are widely used in civil engineering. In these models, geometric information serves as the leading information. Engineers are used to having geometric information and, for instance, it is state of the art to specify a point by its three coordinates. However, the traditional approaches have disadvantages. Geometric information is over-determined: more geometric information is specified and stored than needed. In addition, engineers already deal with topological information; the denotation of objects in buildings is of a topological nature. It has to be examined whether approaches in which topological information plays the leading role would be more efficient in civil engineering. This paper presents such an approach. Topological information is modelled independently of geometric information and is used for denoting the objects of a building. Geometric information is associated with topological information, so that geometric information “weights” a topology.
The concept presented in this paper has already been used in surveying existing buildings. Experience with this concept showed that the amount of geometric information required for a complete specification of a building could be reduced by a factor of up to 100. Further research will show how this concept can be used in planning processes.
Topological Houses
(2003)
Many properties of houses are of a topological nature. The problem of three-dimensional encoding is solved here by first giving an axiomatic description of a simplified concept of >house< as a certain generalisation of a CW-complex and, secondly, by generalising local observation structures of embedded unconnected planar graphs to the three-dimensional case and proving that they allow retrieving all topological properties of these simplified houses. In the more general case of an architectural complex (a certain generalisation of a >house<), much topological information is still kept in these structures, making them a useful approach to encoding topological spaces. Finally, a lossless representation of observation structures in a relational database scheme, which we call PLAV (Points, Lines, Areas, Volumes), is given. We expect PLAV to be useful for encoding higher-dimensional (architectural) space-time complexes.
TOOL TO CHECK TOPOLOGY AND GEOMETRY FOR SPATIAL STRUCTURES ON BASIS OF THE EXTENDED MAXWELL'S RULE
(2006)
One of the simplest principles in the design of light-weight structures is to avoid bending. This can be achieved by dissolving girders into members acting purely in axial tension or compression. The employment of cables for the tensioned members leads to even lighter structures, which are called cable-strut structures; they constitute a subclass of spatial structures. Giving fast information about the general feasibility of an architectural concept employing cable-strut structures is a challenging task due to their sophisticated mechanical behavior. In this regard it is essential to check whether the structure is stable and whether pre-stress can be applied. This paper presents a tool using the spreadsheet software Microsoft (MS) Excel which can give such information; it is therefore not necessary to purchase special software, and the associated time-consuming training effort is much lower. The tool was developed on the basis of the extended Maxwell's rule, which besides topology also considers the geometry of the structure. For this the rank of the node equilibrium matrix is crucial. The significance and determination of the rank and the implementation of the corresponding algorithms in MS Excel are described in the following. The presented tool is able to support the structural designer in an early stage of the project in finding a feasible architectural concept for cable-strut structures. As examples for the application of the software tool, two special cable-strut structures, so-called tensegrity structures, were examined for their mechanical behavior.
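The rank test behind the extended Maxwell's rule can be sketched in a few lines. As a hypothetical stand-in for the Excel implementation, the following fragment analyses a 2D square truss with both diagonals (the paper treats 3D cable-strut structures, but the counting logic — self-stress states and mechanisms from the rank of the node equilibrium matrix — is the same):

```python
import numpy as np

# 2D illustrative example: a square truss with both diagonals,
# pinned at node 0 and on a roller (y fixed) at node 1.
nodes = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
members = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2), (1, 3)]
fixed_dofs = {0, 1, 3}  # node 0: x, y; node 1: y

dim = nodes.shape[1]
free = [d for d in range(nodes.shape[0] * dim) if d not in fixed_dofs]

# node equilibrium matrix A (free DOFs x members): A @ member_forces = loads
A = np.zeros((len(free), len(members)))
for k, (i, j) in enumerate(members):
    u = nodes[j] - nodes[i]
    u /= np.linalg.norm(u)
    col = np.zeros(nodes.shape[0] * dim)
    col[dim * i:dim * i + dim] = -u
    col[dim * j:dim * j + dim] = u
    A[:, k] = col[free]

r = np.linalg.matrix_rank(A)
s = len(members) - r   # states of self-stress
m = len(free) - r      # internal mechanisms
print(r, s, m)
```

For this structure the rank is 5, giving one state of self-stress and no mechanism: the truss is stable and can be pre-stressed, exactly the kind of feasibility statement the tool is meant to deliver.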
Computationally, design activity can be treated as a sequence of state transitions. In stepwise processing, the in-between form states are not easily observed. In this research a time-based concept is therefore introduced and applied in order to bridge the gap. In architecture, folding is one method of form manipulation, and architects want to search for design alternatives through this operation. The folding operation has to be defined and parameterized before time is involved as a variable of folding. As a result, the time-based transformation provides sequential form states and redirects design activity.
The concrete is modeled as a material with damage and plasticity, whereby the viscoplastic and viscoelastic behaviour depends on the rate of the total strains. Due to the damage behaviour, the compliance tensor develops different properties in tension and compression. Various yield surfaces, flow rules and damage rules have been tested for their usability in a concrete model. A three-dimensional yield surface was developed by the author from a failure surface based on the Willam-Warnke five-parameter model. Only one general uniaxial stress-strain relation is used for the numerical control of the yield surface; from that curve all necessary parameters for different concrete strengths and different strain rates can be derived by affine transformations. For the flow rule, a non-associated inelastic potential is used in the compression zone and a Rankine potential in the tension zone. Owing to the time-dependent formulation, the symmetry of the system equations is maintained despite the use of non-associated potentials for the derivation of the inelastic strains. For quasi-static computations a simple viscoplastic law based on an approach by Perzyna is used. The principle of equality of dissipation power in the uniaxial and triaxial states of stress is applied; it is modified by a factor that depends on the actual stress ratio and, in comparison with the Kupfer experiments, yields more realistic strains. The concrete model is implemented in a mixed hybrid finite element. Examples at the structural level are presented for verification of the concrete model.
A realistic and reliable model is an important precondition for the simulation of revitalization tasks and the estimation of system properties of existing buildings. The main focus lies on parameter identification, optimization strategies and the preparation of experiments. As usual, structures are modeled by the finite element method. This technique, like others, is based on idealizations and empirical material properties. Within one theory, the parameters of the model should be approximated by gradually performed experiments and their analysis. This approximation is performed by solving an optimization problem which is usually non-convex, of high dimension and possesses a non-differentiable objective function. We therefore use an optimization procedure based on genetic algorithms, implemented using the program package SLang...
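The genetic-algorithm idea for such non-convex, non-differentiable identification problems can be sketched generically (this is not the SLang implementation; the objective, parameter bounds and GA settings below are invented for illustration):

```python
import random

def objective(p):
    # Invented non-convex, non-differentiable misfit with minimum near
    # p = (1.5, -0.5); a real application would compare simulated and
    # measured structural responses.
    return abs(p[0] - 1.5) + abs(p[1] + 0.5) + 0.1 * abs(round(p[0] * p[1]))

def identify(pop_size=40, generations=120, seed=3):
    """Elitist GA: truncation selection, averaging crossover, Gaussian mutation."""
    rng = random.Random(seed)
    pop = [[rng.uniform(-5, 5), rng.uniform(-5, 5)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=objective)
        parents = pop[: pop_size // 2]          # keep the better half (elitism)
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            child = [(x + y) / 2 + rng.gauss(0, 0.2) for x, y in zip(a, b)]
            children.append(child)              # crossover + mutation
        pop = parents + children
    return min(pop, key=objective)
```

Because selection needs only objective values, no gradient of the misfit is required — the property that motivates GAs for this class of identification problems.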
For the analysis and planning of transport networks, detailed information concerning travel sequences is required. The paper examines an activity chain model to determine the stochastic travel demand which an individual generates in order to participate in an activity or a sequence of activities over the day. The transition from one activity to another depends on the activity participation decision, which is made sequentially and is constrained in space and time. Activity chains derived from travel survey data are used as the basis to develop a more realistic model than conventional discrete trip models. Probabilistic concepts and statistical procedures are used to estimate characteristics of travel demand such as the number of daily out-of-home activities and the transition probabilities between activities for homogeneous behavioural groups. In addition, the paper compares the model for two towns of different size.
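The sequential transition idea can be sketched as a simple Markov chain over activities. The activity set and transition probabilities below are invented for illustration; in the paper they would be estimated from travel survey data for homogeneous behavioural groups:

```python
import random

# Hypothetical activities and transition probabilities (illustrative only).
transitions = {
    "home":     {"work": 0.6, "shopping": 0.2, "home": 0.2},
    "work":     {"home": 0.5, "shopping": 0.3, "work": 0.2},
    "shopping": {"home": 0.7, "shopping": 0.1, "work": 0.2},
}

def simulate_chain(start="home", n_transitions=5, seed=1):
    """Simulate one daily activity chain by sequential transition decisions."""
    rng = random.Random(seed)
    chain = [start]
    for _ in range(n_transitions):
        probs = transitions[chain[-1]]
        nxt = rng.choices(list(probs), weights=list(probs.values()))[0]
        chain.append(nxt)
    return chain
```

Space and time constraints, as discussed in the abstract, would enter by making the transition probabilities depend on the current time of day and location rather than on the last activity alone.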
For the assessment of old buildings, thermographic analysis with infrared cameras is nowadays widely employed. Image processing and evaluation can be economically practicable only if the image evaluation is automated to the largest possible extent. For that reason, methods of computer vision are presented in this paper to evaluate thermal images. To detect typical thermal image elements, such as thermal bridges and lintels, in thermal images (i.e. grey-value images), methods of digital image processing have been applied, for which numerical procedures are available to transform, modify and encode images. Image processing can be regarded as a multi-stage process: in order to accomplish image analysis from image formation through enhancement and segmentation to classification, appropriate functions must be implemented. For this purpose, different measuring procedures and methods for automated detection and evaluation have been tested.
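One elementary building block of such an automated evaluation — segmenting connected warm regions in a grey-value thermogram — can be sketched as follows. The threshold and 4-connectivity are illustrative assumptions, not the specific procedures tested in the paper:

```python
import numpy as np

def warm_regions(img, threshold):
    """Label connected regions whose grey value exceeds a threshold."""
    mask = img >= threshold
    labels = np.zeros(img.shape, dtype=int)
    current = 0
    # simple 4-connected flood-fill labelling
    for seed in zip(*np.nonzero(mask)):
        if labels[seed]:
            continue
        current += 1
        stack = [seed]
        while stack:
            y, x = stack.pop()
            if not (0 <= y < img.shape[0] and 0 <= x < img.shape[1]):
                continue
            if not mask[y, x] or labels[y, x]:
                continue
            labels[y, x] = current
            stack += [(y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)]
    return labels, current
```

A real pipeline would precede this by calibration and filtering and follow it by classifying each labelled region (e.g. thermal bridge vs. lintel) from its shape and temperature statistics.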
The method of difference potentials can be used to solve discrete elliptic boundary value problems, where all derivatives are approximated by finite differences. Following classical potential theory, an integral equation on the boundary is investigated, which is solved approximately with the help of a quadrature formula. The advantage of the discrete method consists in the establishment of a linear equation system on the boundary, which can be solved immediately on the computer. The described method of difference potentials is based on the discrete Laplace equation in the three-dimensional case. In the first step the integral representation of the discrete fundamental solution is presented and its convergence behaviour with respect to the continuous fundamental solution is discussed. Because the method can be used to solve boundary value problems in interior as well as exterior domains, it is necessary to explain some geometrical aspects relating to the discrete domain and the double-layer boundary. A discrete analogue of the integral representation of functions is presented. The main result consists in splitting the difference potential on the boundary into a discrete single-layer and a discrete double-layer potential. These discrete potentials are used to establish and solve a linear equation system on the boundary. The actual form of this equation system and the conditions for its solvability are presented for Dirichlet and Neumann problems in interior as well as exterior domains.
The Lucas-Kanade tracker has proven to be an efficient and accurate method for calculating the optical flow. However, this algorithm can reliably track only suitable image features such as corners and edges. Therefore, the optical flow can be calculated only for a few points in each image, resulting in sparse optical flow fields. Accumulation of these vectors over time is a suitable method to retrieve a dense motion vector field; however, the accumulation process limits the application of the proposed method to fixed camera setups. Here, a histogram-based approach is favored, allowing more than a single typical flow vector per pixel. The resulting vector field can be used to detect roads and prescribed driving directions, which constrain object movements. The motion structure can be modeled as a graph: the nodes represent entry and exit points for road users as well as crossings, while the edges represent typical paths.
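A minimal version of the per-pixel orientation histogram might look as follows; the grid size, bin count and vote threshold are arbitrary illustrative choices, not the paper's parameters. Each sparse flow vector votes into an orientation bin at its pixel, so a pixel can retain more than one typical motion direction (e.g. two-way traffic on the same road):

```python
import numpy as np

H, W, BINS = 120, 160, 8                      # illustrative grid and bin count
hist = np.zeros((H, W, BINS), dtype=np.int32)

def accumulate(x, y, dx, dy):
    """Let one sparse flow vector (dx, dy) vote at pixel (x, y)."""
    angle = np.arctan2(dy, dx)                # in [-pi, pi]
    b = int((angle + np.pi) / (2 * np.pi) * BINS) % BINS
    hist[y, x, b] += 1

def typical_directions(x, y, min_votes=2):
    """Return the centre angle of every bin with enough votes."""
    bins = np.nonzero(hist[y, x] >= min_votes)[0]
    return (bins + 0.5) * 2 * np.pi / BINS - np.pi
```

Dominant bins per pixel then yield the typical paths from which the road graph (entries, exits, crossings) can be assembled.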
It is well known that the solution of the fundamental equations of linear elasticity for a homogeneous isotropic material in the plane stress and plane strain cases can be equivalently reduced to the solution of a biharmonic equation. The discrete version of the theorem of Goursat is used to describe the solution of the discrete biharmonic equation with the help of two discrete holomorphic functions. In order to obtain a Taylor expansion of discrete holomorphic functions, we introduce a basis of discrete polynomials which fulfill the so-called Appell property with respect to the discrete adjoint Cauchy-Riemann operator. All these steps are very important in the field of fracture mechanics, where stress and displacement fields in the neighborhood of singularities caused by cracks and notches have to be calculated with high accuracy. Using the sum representation of holomorphic functions, it seems possible to reproduce the order of the singularity and to determine important mechanical characteristics.
The increased implementation of site data capture technologies invariably results in an increase in data warehousing and database technologies to store captured data. However, restricted use of data beyond the initial application could potentially result in a loss of understanding of site processes. This could in turn lead to poor decision making at production, tactical and strategic levels. Concrete usage data have been collected from two piling processes. These data have been analysed and the results highlighted potential improvements that could be made to existing site management and estimating processes. A cost benefit analysis has been used to support decision making at the strategic level where the identified improvements require capital expenditure.
The stress state of a piecewise-homogeneous elastic body, which has a semi-infinite crack along the interface, under in-plane and antiplane loads is considered. One of the crack edges is reinforced by a rigid patch plate on a finite interval adjacent to the crack tip. The crack edges are loaded with specified stresses. The body is stretched at infinity by specified stresses. External forces with a given principal vector and moment act on the patch plate. The problem reduces to a Riemann-Hilbert boundary-value matrix problem with a piecewise-constant coefficient for two complex potentials in the plane case and for one in the antiplane case. The complex potentials are found explicitly using a Gaussian hypergeometric function. The stress state of the body close to the ends of the patch plate, one of which is also simultaneously the crack tip, is investigated. Stress intensity factors near the singular points are determined.
This paper focuses on the first numerical tests of the coupling between an analytical solution and the finite element method, on the example of a problem of fracture mechanics. The calculations were done according to the ideas proposed in [1]. The analytical solutions are constructed using an orthogonal basis of holomorphic and anti-holomorphic functions. For the coupling with the finite element method, special elements are constructed using the trigonometric interpolation theorem.
The preliminary design of a wearable computer for supporting Construction Progress Monitoring
(2000)
Progress monitoring has become more and more important as owners have increasingly demanded shorter delivery times for their projects. This trend is even more evident in high-technology industries, such as the computer industry and the chemical industry. Fast-changing markets, such as the computer industry, force companies to build new facilities quickly. To make a statement about construction progress, the status of a building has to be determined and monitored over a period of time. By depicting the construction progress in a diagram over time, statements can be made about the anticipated completion of the project and about delays and problems in certain areas. Having this information, measures can be taken to efficiently >catch up< on the schedule of the project. New technologies, such as wearable computers, speech recognition, touch screens and wireless networks, could help to move electronic data processing to the construction site. Progress monitoring could very much take advantage of this move, as several intermediate steps of processing progress data become unnecessary. The processing of progress data could be done entirely by computers, which means that data for supporting decisions can be made available at the moment the construction progress is measured. This paper describes a project that investigates how these new technologies can be linked to create a system that enhances the efficiency of progress monitoring. During the project, a first prototype of a progress monitoring system was developed that allows construction companies and site supervisors to measure construction progress on site using wearable computers that are speech-controlled and connected to a central database via a wireless network.
THE INFLUENCE OF LOCAL CONCAVITY ON THE FUNCTIONING OF THE BEARING SHELL OF HIGH-RISE CONSTRUCTIONS
(2012)
Areas with various defects and damages which reduce carrying capacity were examined in a study of metal chimneys. In this work, the influence of local dimples on the function of metal chimneys was considered. Modeling tasks were completed in the software packages LIRA and ANSYS. Parameters were identified which characterize the local dimples, and a numerical study of the influence of local dimples on the stress-strain state of the shells of metal chimneys was conducted. The distribution field of circumferential and meridional tension was analyzed in the researched area. Zones of influence of dimples on the bearing shell of metal chimneys were investigated. The bearing capacities of high-rise structures with various dimple geometries and various shell parameters were determined with respect to specified areas of the trunk. The decrease in bearing capacity of a shell due to dimples is represented graphically, and from the resulting data the diameter and thickness of the shells of metal chimneys were determined.
This article presents the Rigid Finite Element Method in the calculation of the deflection of reinforced concrete beams with cracks. Initially this method was used in the shipbuilding industry; later it was adapted for calculations of homogeneous bar structures. In this method, rigid mass discs serve as the element model. In the planar case, three generalized coordinates (two translational and one rotational) correspond to each disc. The discs are connected by elastic ties. The novel idea is to take a discrete crack into account in the Rigid Finite Element Method: the rigidity of the rotational ties located at the spots where cracks occurred is suitably reduced. The flexibility of such a tie results from the flexural deformability of the element and the occurrence of the crack. As part of the numerical analyses, the influence of cracks on the total deflection of beams was determined, and the results of the calculations were compared to the results of an experiment. Overestimations of the calculated deflections against the measured deflections were found. The article specifies the size of the overestimation and describes its causes.
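The reduction-of-rigidity idea can be illustrated with a deliberately simplified model: a cantilever split into rigid segments joined by rotational springs of stiffness EI/dx, where a >crack< locally reduces the spring stiffness. This is an illustrative sketch, not the article's reinforced concrete model; the beam data and the 50% stiffness reduction are invented:

```python
# Rigid segments + rotational springs: a standard discretisation whose
# uncracked tip deflection converges to the classical P*L**3/(3*EI).

def tip_deflection(P, L, EI, n=100, cracked=()):
    """Tip deflection of a cantilever under a tip load P."""
    dx = L / n
    w = 0.0
    for i in range(n):
        x = (i + 0.5) * dx           # spring location (segment midpoint)
        k = EI / dx                  # rotational spring stiffness
        if i in cracked:
            k *= 0.5                 # invented 50% reduction at a crack
        theta = P * (L - x) / k      # spring rotation from the bending moment
        w += theta * (L - x)         # its contribution to the tip deflection
    return w
```

Marking a spring as cracked increases the computed deflection, which is exactly the mechanism by which the article introduces discrete cracks into the deflection calculation.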
In this paper we present rudiments of a higher-dimensional analogue of the Szegö kernel method to compute 3D mappings from elementary domains onto the unit sphere. This is a formal construction which provides us with a good substitute for the classical conformal Riemann mapping. We give explicit numerical examples and discuss a comparison of the results with those obtained alternatively by the Bergman kernel method.
In this note, we describe quite explicitly the Howe duality for Hodge systems and connect it with well-known facts of harmonic analysis and Clifford analysis. In Section 2, we recall briefly the Fischer decomposition and the Howe duality for harmonic analysis. In Section 3, the well-known fact that Clifford analysis is a real refinement of harmonic analysis is illustrated by the Fischer decomposition and the Howe duality for the space of spinor-valued polynomials in Euclidean space under the so-called L-action. On the other hand, for Clifford-algebra-valued polynomials we can consider another action, called in Clifford analysis the H-action. In the last section, we recall the Fischer decomposition for the H-action obtained recently. Whereas in Clifford analysis the prominent role is played by the Dirac equation, in this case the basic set of equations is formed by the Hodge system; moreover, the analysis of Hodge systems can even be viewed as a refinement of Clifford analysis. In this note, we describe the Howe duality for the H-action. In particular, in Proposition 1 we recognize the Howe dual partner of the orthogonal group O(m) in this case as the Lie superalgebra sl(2|1). Furthermore, Theorem 2 gives the corresponding multiplicity-free decomposition with an explicit description of the irreducible pieces.
The conventional way of describing an image is in terms of its canonical pixel-based representation. Other image description techniques are based on image transformations. Such an image transformation converts a canonical image representation into one in which specific properties of the image are described more explicitly. In most transformations, images are locally approximated within a window by a linear combination of a number of a priori selected patterns; the coefficients of such a decomposition then provide the desired image representation. The Hermite transform is an image transformation technique introduced by Martens. It uses overlapping Gaussian windows and projects images locally onto a basis of orthogonal polynomials. As the analysis filters needed for the Hermite transform are derivatives of Gaussians, Hermite analysis is in close agreement with the information analysis carried out by the human visual system. In this paper we construct a new higher-dimensional Hermite transform within the framework of quaternionic analysis. The building blocks for this construction are the Clifford-Hermite polynomials rewritten in terms of quaternionic analysis. Furthermore, we compare this newly introduced Hermite transform with the quaternionic Hermite continuous wavelet transform. The continuous wavelet transform is a signal analysis technique suitable for non-stationary, inhomogeneous signals for which Fourier analysis is inadequate. Finally, the developed three-dimensional filter functions of the quaternionic Hermite transform are tested with traditional scalar benchmark signals for their selectivity in detecting pointwise singularities.
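Since the analysis filters of the Hermite transform are derivatives of Gaussians, the scalar 1D building block can be sketched directly. Scale, orders and support below are arbitrary illustrative choices; this is the classical scalar prototype, not the quaternionic construction of the paper:

```python
import numpy as np

def gaussian_derivative_filters(sigma=2.0, orders=(0, 1, 2), half=8):
    """Sampled derivative-of-Gaussian analysis filters of the given orders."""
    x = np.arange(-half, half + 1, dtype=float)
    g = np.exp(-x ** 2 / (2.0 * sigma ** 2))
    g /= g.sum()                                     # normalised Gaussian window
    filters = [g]
    for _ in range(max(orders)):
        filters.append(np.gradient(filters[-1], x))  # next derivative order
    return x, [filters[n] for n in orders]
```

Projecting an image patch onto these filters within overlapping windows yields the local Hermite coefficients; the paper lifts this construction to three dimensions via quaternion-valued Clifford-Hermite polynomials.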
THE FOURIER-BESSEL TRANSFORM
(2010)
In this paper we devise a new multi-dimensional integral transform within the Clifford analysis setting, the so-called Fourier-Bessel transform. It appears that in the two-dimensional case, it coincides with the Clifford-Fourier and cylindrical Fourier transforms introduced earlier. We show that this new integral transform satisfies operational formulae which are similar to those of the classical tensorial Fourier transform. Moreover the L2-basis elements consisting of generalized Clifford-Hermite functions appear to be eigenfunctions of the Fourier-Bessel transform.
We briefly review and use the recent comprehensive research on the manifolds of square roots of −1 in real Clifford geometric algebras Cl(p,q) in order to construct the Clifford Fourier transform. Basically, in the kernel of the complex Fourier transform the complex imaginary unit j is replaced by a square root of −1 in Cl(p,q). The Clifford Fourier transform (CFT) thus obtained generalizes previously known and applied CFTs, which replaced the complex imaginary unit j only by blades (usually pseudoscalars) squaring to −1. A major advantage of real Clifford algebra CFTs is their completely real geometric interpretation. We study (left and right) linearity of the CFT for constant multivector coefficients in Cl(p,q), translation (x-shift) and modulation (ω-shift) properties, and signal dilations. We show an inversion theorem. We establish the CFT of vector differentials, partial derivatives, vector derivatives and spatial moments of the signal. We also derive Plancherel and Parseval identities as well as a general convolution theorem.
For planning in existing built contexts, the building survey is the starting point for initial planning proposals, for the diagnosis and documentation of building damages, for the creation of catalogues of objectives, for the detailed design of renovation and conversion measures and for ensuring fulfilment of building legislation, particularly in the case of change of use and refitting. An examination of currently available IT tools shows insufficient support for planning within existing contexts, most notably a deficit with regard to information capture and administration. This paper discusses the concept for a modular surveying system (basic concept, separation of geometry from semantic data, and separation into sub-systems) and the prototypical realisation of a system for the complete support of the entire building surveying process for existing buildings. The project aims to contribute to the development of a planning system for existing buildings. ...
Non-destructive techniques for damage detection have become the focus of engineering interest in the last few years. However, applying these techniques to large complex structures like civil engineering buildings still has some limitations, since these types of structures are unique and the methodologies often need a large number of specimens for reliable results. For this reason, cost and time can greatly influence the final results. Model Assisted Probability Of Detection (MAPOD) has taken its place among the ranks of damage identification techniques, especially with advances in computer capacity and modeling tools. Nevertheless, the essential condition for a successful MAPOD is having a reliable model in advance. This condition opens the door to model assessment and model quality problems. In this work, an approach is proposed that uses Partial Models (PM) to compute the Probability Of damage Detection (POD). A simply supported beam, which can be structurally modified and tested under laboratory conditions, is taken as an example. The study includes both experimental and numerical investigations, the application of vibration-based damage detection approaches and a comparison of the results obtained from tests and simulations. Finally, a proposal for a methodology to assess the reliability and robustness of the models is given.
This paper describes the application of interval calculus to the calculation of plate deflection, taking into account the inevitable and acceptable tolerances of the input data (input parameters). A simply supported reinforced concrete plate under a uniformly distributed load was taken as an example. Several parameters that influence the plate deflection are given as closed intervals. Accordingly, the results are obtained as intervals, so it was possible to follow the direct influence of a change of one or more input parameters on the output values (in our example, the deflection) by using one model and one computing procedure. The described procedure could be applied to any FEM calculation in order to keep calculation tolerances, ISO tolerances and production tolerances within close (admissible) limits. Wolfram Mathematica was used as the tool for the interval calculations.
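A minimal interval-arithmetic sketch of the approach (the original work uses Wolfram Mathematica; here plain Python, with the classical mid-span deflection of a simply supported strip, w = 5qL⁴/(384EI), as a stand-in for the plate model, and invented tolerances):

```python
# Each input parameter is a closed interval; all arithmetic propagates
# the bounds, so the deflection comes out as an interval as well.

class Interval:
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi

    def _pairs(self, other):
        o = other if isinstance(other, Interval) else Interval(other, other)
        return [a * b for a in (self.lo, self.hi) for b in (o.lo, o.hi)]

    def __mul__(self, other):
        p = self._pairs(other)
        return Interval(min(p), max(p))
    __rmul__ = __mul__

    def __truediv__(self, other):
        o = other if isinstance(other, Interval) else Interval(other, other)
        return self * Interval(1.0 / o.hi, 1.0 / o.lo)  # assumes 0 not in o

# invented tolerances, for illustration only
q = Interval(9.5e3, 10.5e3)    # load per unit length [N/m]
L = Interval(3.99, 4.01)       # span [m]
E = Interval(29e9, 33e9)       # Young's modulus [Pa]
I = Interval(5.2e-4, 5.4e-4)   # second moment of area [m^2 * m^2]

w = 5 * q * (L * L * L * L) / (384 * E * I)
print(w.lo, w.hi)              # guaranteed deflection bounds in metres
```

Tightening any one input interval immediately tightens the output interval, which is precisely the one-model, one-procedure sensitivity tracking described above.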
The development of a consistent material model for textile reinforced concrete requires the formulation and calibration of several sub-models on different resolution scales. Each of these models represents the material structure at the corresponding scale. While the models at the micro-level are able to capture the fundamental failure and damage mechanisms of the material components (e.g. filament rupture and debonding from the matrix) their computational costs limit their application to the small size representative unit cells of the material structure. On the other hand, the macro-level models provide a sufficient performance at the expense of limited range of applicability. Due to the complex structuring of the textile reinforced concrete at several levels (filament - yarn - textile - matrix) it is a non-trivial task to develop a multiscale model from scratch. It is rather more effective to develop a set of conceptually related sub-models for each structural level covering the selected phenomena of the material behavior. The homogenized effective material properties obtained at the lower level may be verified and validated using experiments and models at the higher level(s). In this paper the development of a consistent material model for textile reinforced concrete is presented. Load carrying and failure mechanisms at the micro, meso and macro scales are described and models with the focus on the specified scales are introduced. The models currently being developed in the framework of the collaborative research center are classified and evaluated with respect to the failure mechanisms being captured. The micromechanical modeling of the yarn and bonding behavior is discussed in detail and the correspondence with the experiments focused on the selected failure and interaction mechanisms is shown. The example of modeling the bond layer demonstrates the application of the presented strategy.
The goal of the collaborative research center (SFB 532) >Textile reinforced concrete (TRC): the basis for the development of a new material technology<, installed in 1998 at Aachen University, is a complex assessment of mechanical, chemical, economical and production aspects in an interdisciplinary environment. The research project involves 10 institutes performing parallel research in 17 projects. The coordination of such a research process requires effective software support for information sharing in the form of data exchange, data analysis and data archival. Furthermore, the processes of experiment planning and design, modification of material compositions and design parameters, and development of new material models in such an environment call for systematic coordination applying the concepts of operations research. Flexible organization of the data coming from several sources is a crucial premise for a transparent accumulation of knowledge and, thus, for successful research in the long run. The technical information system (TRC-TIS) developed in the SFB 532 has been implemented as a database-powered web server with a transparent definition of the product and process model. It serves as an intranet server with access domains devoted to the involved research groups. At the same time, it allows the presentation of selected results by granting a data object access from the public area of the server via the internet.
Due to the amount of flow simulation and measurement data, automatic detection, classification and visualization of features are necessary for an inspection. Therefore, many automated feature detection methods have been developed in recent years. However, in most cases only one feature class is visualized afterwards, and many algorithms have problems in the presence of noise or superposition effects. In contrast, image processing and computer vision have robust methods for feature extraction and for the computation of derivatives of scalar fields; furthermore, interpolation and other filters can be analyzed in detail. An application of these methods to vector fields would provide a solid theoretical basis for feature extraction. The authors suggest Clifford algebra as a mathematical framework for this task. Clifford algebra provides a unified notation for scalars and vectors as well as a multiplication of all basis elements. The Clifford product of two vectors provides the complete geometric information about the relative position of these vectors. Integration of this product results in Clifford correlation and convolution, which can be used for template matching of vector fields. For the frequency analysis of vector fields and of the behavior of vector-valued filters, a Clifford Fourier transform has been derived for 2D and 3D. Convolution and other theorems have been proved, and fast algorithms for the computation of the Clifford Fourier transform exist. Therefore the computation of a Clifford convolution can be accelerated by computing it in the Clifford Fourier domain. Clifford convolution and Fourier transform can be used for a thorough analysis and subsequent visualization of flow fields.
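In the two-dimensional case the geometric product has a convenient complex-number model: encoding vectors as complex numbers, conj(a)·b carries the dot product in its real part and the cross product in its imaginary part. This allows a minimal sketch of Clifford correlation of a 2D vector field with a template, accelerated via the FFT as described above. The fields below are synthetic illustrations, not the flow data of the paper:

```python
import numpy as np

def clifford_correlation_2d(field, template):
    """Circular correlation c[k] = sum_n conj(t[n]) * f[n + k] via the FFT."""
    F = np.fft.fft2(field)
    T = np.fft.fft2(template, s=field.shape)  # zero-pad template to field size
    return np.fft.ifft2(F * np.conj(T))

# synthetic vector field containing the template pattern at offset (5, 7)
field = np.zeros((16, 16), dtype=complex)
template = np.full((4, 4), 1.0 + 1.0j)        # uniform diagonal-flow patch
field[5:9, 7:11] = 1.0 + 1.0j

corr = clifford_correlation_2d(field, template)
peak = np.unravel_index(np.argmax(corr.real), corr.shape)
print(peak)                                   # location of the best match
```

The real part of the correlation measures directional agreement (dot product), so its maximum marks where the field best matches the template — the template-matching use of Clifford correlation mentioned above, here in its simplest 2D form.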
Model management systems are a suitable technological basis for managing digital building models during planning activities, both for new construction and for the revitalization of buildings. Supporting revitalization processes implies specific requirements for the design of integrated planning environments, such as the representation of information afflicted with various types of vagueness, the need to model both the target and the actual state of the building, and the ability to handle temporally inconsistent model states. The required dynamics of the domain models and the required usability in virtual enterprises place further demands on the implementation basis of model management systems. For implementing such systems, it proves advantageous to exploit the properties of object-oriented programming languages with non-static type systems, since their meta level and their introspection and reflection mechanisms provide an efficient implementation basis. To effectively support synchronous cooperative planning activities within individual disciplines, a notification mechanism was realized that informs applications coupled to the model management system about concurrent modifications of the associated domain model or of project information. Furthermore, a mechanism exists for the simplified coupling of existing applications that are based on static partial models or that support standardized, model-based exchange formats. Finally, a hybrid system architecture consisting of a central project server, domain servers and domain clients is presented, which is suited for use under the constraints of cooperative, geographically distributed work in revitalization projects within virtual enterprises.
The safe operation of important civil structures such as bridges can be assessed using fracture analysis. Since analytical methods are not capable of solving many complicated engineering problems, numerical methods have increasingly been adopted. In this paper, a part of an isotropic material that contains a crack is considered as a partial model, and the quality of the proposed model is evaluated. EXtended IsoGeometric Analysis (XIGA) is a newly developed numerical approach [1, 2] that benefits from the advantages of its origins: the eXtended Finite Element Method (XFEM) and IsoGeometric Analysis (IGA). It is capable of simulating crack propagation problems without the need for remeshing and captures the singular field at the crack tip by using crack tip enrichment functions. Moreover, an exact representation of the geometry is possible using only a few elements. XIGA has also been successfully applied to the fracture analysis of cracked orthotropic bodies [3] and to the simulation of curved cracks [4]. XIGA applies NURBS functions for both the geometry description and the approximation of the solution field. The drawback of NURBS functions is that local refinement cannot be defined, since they are based on tensor-product constructs, unless multiple patches are used, which also has some limitations. In this contribution, XIGA is further developed to make local refinement feasible by using T-spline basis functions. The adoption of a recovery-based error estimator in the proposed approach for evaluating the model quality and performing the adaptive processes is in progress. Finally, some numerical examples with available analytical solutions are investigated with the developed scheme.
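The NURBS functions mentioned above are rational combinations of B-spline basis functions, which are defined by the Cox-de Boor recursion. The following minimal sketch (a textbook formula, not code from the cited work) evaluates one such basis function:

```python
def bspline_basis(i, p, t, knots):
    """Cox-de Boor recursion for the i-th B-spline basis function of
    degree p evaluated at parameter t over the given knot vector.
    Zero-width knot spans are skipped (their terms vanish)."""
    if p == 0:
        return 1.0 if knots[i] <= t < knots[i + 1] else 0.0
    left = right = 0.0
    d = knots[i + p] - knots[i]
    if d > 0:
        left = (t - knots[i]) / d * bspline_basis(i, p - 1, t, knots)
    d = knots[i + p + 1] - knots[i + 1]
    if d > 0:
        right = (knots[i + p + 1] - t) / d * bspline_basis(i + 1, p - 1, t, knots)
    return left + right

# on the open knot vector [0,0,0,1,1,1] the quadratic basis reduces
# to the Bernstein polynomials (1-t)^2, 2t(1-t), t^2
vals = [bspline_basis(i, 2, 0.5, [0, 0, 0, 1, 1, 1]) for i in range(3)]
```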
SYSWELD Forum 2011
(2011)
On 25 and 26 October 2011, 70 national and international experts from research and practice met at the Bauhaus-Universität Weimar for the fourth SYSWELD Forum to exchange views on current developments in numerical simulation in the fields of heat treatment and welding. Numerical simulation of welding and heat treatment has advanced considerably in recent years and offers a forward-looking and innovative field of work for engineers.
This paper presents a robust model updating strategy for the system identification of wind turbines. To control the updating parameters and to avoid ill-conditioning, a global sensitivity analysis using the elementary effects method is conducted. The formulation of the objective function is based on Müller-Slany's strategy for multi-criteria functions. As a simulation-based optimization, a simulation adapter is developed to interface the simulation software ANSYS with the locally developed optimization software MOPACK. Model updating is first tested on the beam model of the rotor blade. The discrepancy between the numerical model and the reference has been markedly reduced by the model updating process. The effect of model updating becomes more pronounced in the comparison of the measured and the numerical properties of the wind turbine model. The deviations of the frequencies of the updated model are rather small. The complete comparison, including the free vibration modes by the modal assurance criterion, shows the excellent coincidence of the modal parameters of the updated model with those from the measurements. The successful implementation of model validation via model updating demonstrates the applicability and effectiveness of the solution concept.
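The modal assurance criterion used in the mode-shape comparison has a standard closed form, MAC(φa, φb) = (φaᵀφb)² / ((φaᵀφa)(φbᵀφb)); the sketch below implements it for real mode shapes (the function name is ours):

```python
import numpy as np

def mac(phi_a, phi_b):
    """Modal assurance criterion between two real mode-shape vectors.
    Returns a value in [0, 1]; values near 1 indicate that the shapes
    coincide up to scaling, values near 0 that they are unrelated."""
    phi_a = np.asarray(phi_a, dtype=float)
    phi_b = np.asarray(phi_b, dtype=float)
    return np.dot(phi_a, phi_b) ** 2 / (np.dot(phi_a, phi_a) * np.dot(phi_b, phi_b))
```

Because the criterion is scale-invariant, a measured mode and its numerical counterpart can be compared without normalizing either vector first.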
Due to the complex interactions between the ground, the driving machine, the lining tube and the built environment, the accurate assignment of in-situ system parameters for numerical simulation in mechanized tunneling is always subject to tremendous difficulties. However, the more accurate these parameters are, the more applicable the responses gained from the computations will be. In particular, if the entire length of the tunnel lining is examined, the appropriate selection of the various ground parameters is decisive for the success of a tunnel project and, more importantly, will prevent potential casualties. In this context, methods of system identification for the adaptation of numerical simulations of ground models are presented. Both deterministic and probabilistic approaches are considered for typical scenarios representing notable variations or changes in the ground model.
SYSBAT - An Application to the Building Production Based on Computer Supported Cooperative Work
(2003)
Our proposed solution is to enable the partners of a construction project to share all the technical data produced and handled during the building production process by building a system based on internet technology. The system links distributed databases and allows building partners to remotely access and manipulate specific information. It provides an up-to-date building representation that is enriched and refined throughout the building production process. A recent collaboration with Nemetschek France (a subsidiary of Nemetschek AG, an AEC CAD software leader) focuses on a building product repository available in a web context. The aim is to help building project actors choose a technical solution that fits their professional needs, and to maintain our information system with up-to-date information. It starts with the possibility of building online product catalogs, in order to link Allplan CAD entities with building technical features. This paper presents the conceptual approaches on which our information system is built. Starting from a general organization diagram, we focus on the product and description branches of construction works (including the latest IFC model specifications). Our aim is to add decision support to the selection process for construction works. To do so, we consider each actor's role in the system and the pieces of information each one needs to achieve a given task.
The subject of this talk is the problem of surface design based upon a mesh that may contain both triangular and quadrangular domains. We investigate the cases in which such a combined mesh proves preferable to a pure triangulation for bivariate data interpolation. First we describe a modification of the well-known flipping algorithm that constructs a locally optimal combined mesh with a predefined quality criterion. Then we introduce two quality measures for triangular and quadrangular domains and present the results of a computational experiment that compares the integral interpolation errors and the errors in gradients produced by the piecewise surface models obtained from the flipping algorithm with the introduced quality measures. The experiment shows that triangular meshes with the Delaunay quality measure provide better interpolation accuracy only if the interpolated function is strictly convex, while a saddle-shaped function is better interpolated by bilinear patches within a combined mesh. For a randomly shaped function, combined meshes demonstrate smaller error values and better stability compared with pure triangulations. Finally, we consider further means of mesh improvement, such as excluding >bad< points from the input set of the mesh generating procedure. Because the function values at these points should not be lost, some linear or bilinear patches are replaced by nonlinear patches that pass through the excluded points.
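For the Delaunay quality measure mentioned above, the local flip decision of the flipping algorithm can be sketched with the classical in-circle predicate (a textbook criterion, not the authors' combined-mesh quality measures):

```python
import numpy as np

def incircle(a, b, c, d):
    """Sign of the classical in-circle determinant: positive when
    point d lies strictly inside the circumcircle of the
    counterclockwise triangle (a, b, c)."""
    m = np.array([
        [a[0] - d[0], a[1] - d[1], (a[0] - d[0])**2 + (a[1] - d[1])**2],
        [b[0] - d[0], b[1] - d[1], (b[0] - d[0])**2 + (b[1] - d[1])**2],
        [c[0] - d[0], c[1] - d[1], (c[0] - d[0])**2 + (c[1] - d[1])**2],
    ])
    return float(np.linalg.det(m))

def should_flip(a, b, c, d):
    """Triangles (a, b, c) and (a, c, d) share the diagonal a-c.
    Under the Delaunay criterion the diagonal is flipped to b-d when
    d violates the empty-circumcircle property of (a, b, c)."""
    return incircle(a, b, c, d) > 0
```

Iterating this local test over all interior edges until no flip applies yields the locally optimal mesh; swapping in a different quality measure at this decision point gives the modified algorithm for combined meshes.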
The design of mobile IT systems, especially wearable computer systems, is a complex task that requires computer science knowledge, such as that related to hardware configuration and software development, in addition to knowledge of the domain in which the system is intended to be used. Particularly in the AEC sector, it is necessary that the support provided by mobile information technology fit the work situation at hand. Ideally, the domain expert alone can adjust the wearable computer system to achieve this fit without having to consult IT experts. In this paper, we describe a model that helps transfer existing design knowledge from non-AEC domains to new projects in the construction area. The basis for this is a model and a methodology that describe the usage scenarios of such computer systems in an application-neutral and domain-independent way. Thus, the actual design information and experience become transferable between different applications and domains.
Structural engineering projects are increasingly organized in networked cooperations due to steadily increasing competitive pressure and the high degree of complexity of the concurrent design activities. Software intended to support such collaborative structural design processes must meet enormous requirements. In the course of our joint research, we analyzed the pros and cons of applying both the peer-to-peer (University of Bonn) and the multiagent architecture style (University of Bochum) in the field of collaborative structural design. In this paper, we join the benefits of both architecture styles in an integrated conceptual approach. We demonstrate the added value of the integrated multiagent-peer-to-peer approach by means of an example scenario in which several structural engineers cooperatively design the basic structural elements of an arched bridge using heterogeneous CAD systems.
In a superelliptic shell joined to a circular cylinder, bending stresses are absent when the shell is subjected to uniform pressure. Some geometric characteristics have been found. Expressions for determining the stresses at the shell crest (at the singular point of plane type) are suggested. The problem of the theoretical critical buckling load of an elongated shell supported by frames is studied. The critical buckling load for two shells with different specifications was determined experimentally.
This contribution will be freewheeling in the domain of signal, image and surface processing and will touch briefly upon some topics that have been close to the heart of people in our research group. Much of the research carried out worldwide in this domain over the last 20 years deals with multiresolution. Multiresolution makes it possible to represent a function (in the broadest sense) at different levels of detail. This has been applied not only to signals and images but also to the solution of all kinds of complex numerical problems. Since wavelets came into play in the 1980s, this idea has been applied and generalized by many researchers. Therefore we use it as the central idea throughout this text. Wavelets, subdivision and hierarchical bases are the appropriate tools to obtain these multiresolution effects. We introduce some of the concepts in a rather informal way and show that the same concepts work in one, two and three dimensions. The applications in the three cases are, however, quite different, and thus one wants to achieve very different goals when dealing with signals, images or surfaces. Because completeness in our treatment is impossible, we have chosen to describe two case studies after introducing some concepts of signal processing. These case studies are still the subject of current research. The first one attempts to solve a problem in image processing: how to approximate an edge in an image efficiently by subdivision. The method is based on normal offsets. The second case is the use of Powell-Sabin splines to give a smooth multiresolution representation of a surface. In this context we also illustrate the general method of constructing a spline wavelet basis using a lifting scheme.
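The multiresolution idea can be made concrete with the simplest wavelet, the Haar wavelet; this is only a generic illustration of the level-of-detail decomposition (the case studies above use normal offsets and Powell-Sabin spline wavelets, not Haar):

```python
def haar_step(signal):
    """One level of the Haar wavelet transform: split a signal of even
    length into a coarse approximation (pairwise averages) and detail
    coefficients (pairwise half-differences). Repeating the step on
    the approximation yields the multiresolution hierarchy."""
    approx = [(signal[i] + signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    detail = [(signal[i] - signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    return approx, detail

def haar_inverse_step(approx, detail):
    """Perfect reconstruction: a + d and a - d restore each original pair."""
    out = []
    for a, d in zip(approx, detail):
        out += [a + d, a - d]
    return out

approx, detail = haar_step([4, 2, 5, 5])
restored = haar_inverse_step(approx, detail)
```

The approximation is the signal seen at half the resolution; the detail coefficients store exactly the information needed to return to the finer level, which is the essence of every multiresolution scheme discussed here.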
Polymer modification of mortar and concrete is a widely used technique to improve their durability properties. Hitherto, the main application fields of such materials have been the repair and restoration of buildings. However, due to constantly increasing service-life requirements and its cost efficiency, polymer-modified concrete (PCC) is also used for construction purposes. There is therefore a demand for studying the mechanical properties of PCC and its essential differences from conventional concrete (CC). In particular, it must be investigated whether all the hypotheses and analytical formulations established for CC are also valid for PCC. In the present study, analytical models available in the literature for estimating the mechanical properties of concrete are evaluated. The property investigated in this study is the modulus of elasticity, which is estimated from the value of the compressive strength. An existing database was extended and adapted for polymer-modified concrete mixtures along with their experimentally measured mechanical properties. Based on the indexed data, a comparison between model predictions and experiments was conducted by calculating forecast errors.
Information science researchers and developers have spent many years addressing the problem of retrieving exactly the information needed and using it for analysis purposes. In information-seeking dialogues, the user, i.e. a construction project manager or supplier, often asks questions about specific aspects of the tasks they want to perform. But most of the time it is difficult for software systems to unambiguously understand their overall intentions. The existence of information tunnels (Tannenbaum 2002) aggravates this phenomenon. This study includes a detailed case study of the material management process in the construction industry. Based on this case study, the structure of a formal user model for information retrieval in construction management is proposed. This prototype user model will be incorporated into the system design for construction information management and retrieval. The information retrieval system is a user-centered product based on the development of a user-configurable visitor mechanism for managing and retrieving project information without worrying too much about the underlying data structure of the database system. An executable UML model combined with an OODB is used to reduce the ambiguity in the user's intentions and to achieve user satisfaction.
With the advances in computer technology, structural optimization has become a prominent field in structural engineering. In this study an unconventional approach to structural optimization is presented that utilizes the Energy method with Integral Material behaviour (EIM), based on Lagrange's principle of minimum potential energy. The equilibrium condition with the EIM, an alternative method for nonlinear analysis, is secured through the minimization of the potential energy as an optimization problem. Imposing this problem as an additional constraint on a higher cost function of a structural property, a bilevel programming problem is formulated. A nested strategy is used to solve the bilevel problem, treating the energy and the upper objective function as separate optimization problems. Exploiting the convexity of the potential energy, gradient-based algorithms are employed for its minimization, while the upper cost function is minimized using gradient-free algorithms, due to its unknown properties. Two practical examples are considered in order to demonstrate the efficiency of the method. The first one is a sizing problem of an I steel section within an encased composite cross section, utilizing the material nonlinearity. The second one is a discrete shape optimization of a steel truss bridge, which is compared with a previous study based on the Finite Element Method.
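The nested strategy can be illustrated on a deliberately tiny, hypothetical example: a single linear spring whose convex potential energy Π(u) = ½ku² − Fu is minimized by gradient descent (inner problem), while a gradient-free enumeration picks the cheapest admissible stiffness (outer problem). This sketches only the solution pattern, not the EIM formulation itself:

```python
def equilibrium_displacement(k, F, iters=2000):
    """Inner problem: minimize the convex potential energy
    Pi(u) = 0.5*k*u**2 - F*u by gradient descent; the minimizer is
    the equilibrium displacement u = F / k."""
    u = 0.0
    for _ in range(iters):
        u -= (0.1 / k) * (k * u - F)  # damped step along dPi/du
    return u

def design_stiffness(F, u_max, candidates):
    """Outer problem: gradient-free search (plain enumeration) for the
    smallest candidate stiffness whose equilibrium displacement stays
    within the limit; the inner energy minimization is nested inside
    the evaluation of each candidate."""
    feasible = [k for k in candidates
                if equilibrium_displacement(k, F) <= u_max]
    return min(feasible) if feasible else None

# choose the cheapest (softest) spring that keeps u = F/k below 0.6
best_k = design_stiffness(F=10.0, u_max=0.6, candidates=[10.0, 20.0, 25.0])
```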
The planning of projects in building engineering is a complex process characterized by a dynamic composition and many modifications during the definition and execution of processes. For computer-aided and network-based cooperation, a formal description of the planning process is necessary. In the research project "Relational Process Modelling in Cooperative Building Planning", a process model is described by three parts: an organizational structure with participants, a building structure with states and a process structure with activities. This research project is part of the priority program 1103 "Network-Based Cooperative Planning Processes in Structural Engineering" funded by the German Research Foundation (DFG). Planning processes in civil engineering can be described by workflow graphs. The process structure describes the logical planning process and can be formally defined as a bipartite graph consisting of activities, transitions and relationships between activities and transitions. In order to minimize errors during the execution of a planning process, a consistent and structurally correct process model must be guaranteed. This contribution presents the concept and the algorithms for checking the consistency and correctness of the process structure.
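A minimal sketch of this kind of structural check (our own simplified rules, not the project's full algorithm): verify that arcs connect only activities with transitions, and that every transition has at least one incoming and one outgoing arc so the workflow cannot stall.

```python
from collections import defaultdict

def check_process_structure(activities, transitions, edges):
    """Structural check for a bipartite process graph. `edges` is a
    list of (source, target) pairs. Returns False if any arc connects
    two nodes of the same kind, or if a transition lacks a
    predecessor or a successor activity."""
    acts, trans = set(activities), set(transitions)
    pred, succ = defaultdict(set), defaultdict(set)
    for u, v in edges:
        # bipartite condition: arcs only between the two node kinds
        if not ((u in acts and v in trans) or (u in trans and v in acts)):
            return False
        succ[u].add(v)
        pred[v].add(u)
    # every transition must be able to fire and to pass control on
    return all(pred[t] and succ[t] for t in trans)

ok_net = check_process_structure(['a1', 'a2'], ['t1'],
                                 [('a1', 't1'), ('t1', 'a2')])
not_bipartite = check_process_structure(['a1', 'a2'], ['t1'], [('a1', 'a2')])
missing_arc = check_process_structure(['a1'], ['t1'], [('a1', 't1')])
```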
This paper deals with the modelling and analysis of masonry vaults. Numerical FEM analyses are performed using the LUSAS code. Two vault typologies are analysed (barrel and cross-ribbed vaults), parametrically varying geometrical proportions and constraints. The proposed model and the developed numerical procedure are implemented in a computer analysis. Numerical applications are developed to assess the effectiveness of the model and the efficiency of the numerical procedure. The main objective of the present paper is the development of a computational procedure that allows the 3D structural behaviour of masonry vaults to be determined. For each investigated example, the homogenized limit analysis approach has been employed to predict the ultimate load and the failure mechanisms. Finally, both a mesh dependence study and a sensitivity analysis are reported. The sensitivity analysis is conducted by varying the mortar tensile strength and the mortar friction angle over a wide range, with the aim of investigating the influence of the mechanical properties of the joints on the collapse load and the failure mechanisms. The proposed computer model is validated by comparison with experimental results available in the literature.
Let the information of a civil engineering application be decomposed into objects of a given set of classes. The set of objects then forms the database of the application. The objects contain attributes and methods: properties of the objects are stored in the attributes, and the algorithms the objects perform are implemented in their methods. If objects are modified by a user, the consistency of the data in the database is destroyed. The database must then be modified in an update to restore its consistency. The sequence of the update operations is not arbitrary but is governed by the dependence between the objects. This situation can be described mathematically with graph theory. The available algorithms for determining the update sequence are not suitable when the database is large. A new update algorithm for large databases has been developed and is presented in this paper.
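The classical baseline for determining such an update sequence is a topological sort of the dependence graph; the sketch below uses Kahn's algorithm (the paper's new large-database algorithm is not reproduced here, and the object names are illustrative):

```python
from collections import deque

def update_order(dependencies):
    """Determine an update sequence by topologically sorting the
    dependence graph (Kahn's algorithm). `dependencies` maps each
    object to the objects it depends on; an object may only be
    updated after all of its dependencies. Raises ValueError on
    cyclic dependence, where no consistent sequence exists."""
    indeg = {obj: len(deps) for obj, deps in dependencies.items()}
    dependents = {obj: [] for obj in dependencies}
    for obj, deps in dependencies.items():
        for d in deps:
            dependents[d].append(obj)
    ready = deque(obj for obj, n in indeg.items() if n == 0)
    order = []
    while ready:
        obj = ready.popleft()
        order.append(obj)
        for dep in dependents[obj]:
            indeg[dep] -= 1
            if indeg[dep] == 0:
                ready.append(dep)
    if len(order) != len(dependencies):
        raise ValueError("cyclic dependence: no consistent update sequence")
    return order

# hypothetical dependence: stresses depend on the beam and the load,
# the beam depends on the load
order = update_order({'load': [], 'beam': ['load'], 'stress': ['beam', 'load']})
```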
Using the example of a three-span continuous beam, the failure probability of reinforced concrete beams under variable loading is investigated with respect to the limit state of adaptation (shakedown). The shakedown analysis accounts for the load-dependent degradation of the bending stiffness due to cracking. The associated mechanical problem can be reduced to the shakedown analysis of linear elastic, ideally plastic beam structures with unknown but bounded bending stiffness. The failure probability is computed taking stochastic structural and loading quantities into account. Structural properties and permanent loads are treated as time-independent random variables. Time-variant loads are modelled as service-life-related extreme values of Poisson rectangular pulse processes, accounting for temporal superposition effects, so that the failure probability is likewise a service-life-related quantity. The mechanical problems are solved numerically by mathematical optimization. The failure probability is estimated statistically with the Monte Carlo method.
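The Monte Carlo estimation step can be sketched generically as follows; the limit-state function and the normal distributions in the example are illustrative assumptions, not the beam model or load processes of the study:

```python
import random

def mc_failure_probability(limit_state, sample, n=100_000, seed=1):
    """Crude Monte Carlo estimator of a failure probability
    P[g(X) <= 0]. `sample` draws one realization of the basic random
    variables, `limit_state` evaluates g(x); g <= 0 counts as
    failure. The estimate is the observed failure frequency."""
    rng = random.Random(seed)
    failures = sum(1 for _ in range(n) if limit_state(sample(rng)) <= 0)
    return failures / n

# illustrative limit state g = R - S with normally distributed
# resistance R ~ N(8, 1) and load effect S ~ N(5, 1); these
# distributions are assumptions for demonstration only
pf = mc_failure_probability(
    limit_state=lambda x: x[0] - x[1],
    sample=lambda rng: (rng.gauss(8.0, 1.0), rng.gauss(5.0, 1.0)),
)
```

For this example the exact value is Φ(−3/√2) ≈ 0.017, so the sample frequency should land close to that; for very small failure probabilities, variance-reduction techniques would be needed on top of this crude estimator.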
The ride of a tram along its line, defined by a timetable, consists of the travel times between subsequent sections and the time the tram spends at stops. In this paper, statistical data collected in the city of Krakow is presented and evaluated. Under Polish conditions, the time trams spend at stops makes up a remarkable 30 % of the total operating time of a tram line. Moreover, this time is characterized by large variability. The time spent by a tram at a stop consists of the alighting and boarding time and the time lost at the stop after alighting and boarding have ended but before departure. The alighting and boarding time itself usually depends on the random number of alighting and boarding passengers and also on the number of passengers inside the vehicle. The time spent at the stop after alighting and boarding have ended, however, is the effect of certain random events, mainly the impossibility of departing from the stop caused by the lack of priorities for public transport vehicles. The main focus of the talk lies on the description and modelling of these effects. This paper is associated with the CIVITAS-CARAVEL project "Clean and better transport in cities". The project has received research funding from the Community's Sixth Framework Programme. The paper reflects only the author's views and the Community is not liable for any use that may be made of the information contained therein.