In this paper we evaluate 2D models for the soil-water characteristic curve (SWCC) that incorporate the hysteretic nature of the relationship between volumetric water content Θ and suction Ψ. The models are based on nonlinear least squares estimation of experimental data for sand. To estimate the dependent variable Θ, the proposed models include two independent variables: suction and the sensor reading position (depth d in the column test). The variable d represents not only the position where suction and water content are measured but also the initial suction distribution before each of the hydraulic loading test phases. As a result, the proposed 2D regression models have the advantage that they (a) can be applied to predict Θ at any position along the column and (b) give the functional form of the scanning curves.
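The fitting step described above can be sketched as follows; the concrete functional form (a van Genuchten-type curve whose air-entry parameter varies with depth) and all parameter names are illustrative assumptions, not the paper's actual model:

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical 2D SWCC model: a van Genuchten-type curve whose air-entry
# parameter alpha varies linearly with sensor depth d. The functional form
# and parameter names are illustrative, not taken from the paper.
def swcc_2d(X, theta_r, theta_s, alpha0, alpha1, n):
    psi, d = X
    alpha = alpha0 + alpha1 * d           # depth-dependent air-entry parameter
    m = 1.0 - 1.0 / n                     # Mualem constraint
    return theta_r + (theta_s - theta_r) / (1.0 + (alpha * psi) ** n) ** m

# Synthetic "measurements" for a sand column (suction psi in kPa, depth d in m)
rng = np.random.default_rng(0)
psi = np.tile(np.linspace(0.5, 50.0, 40), 3)
d = np.repeat([0.1, 0.3, 0.5], 40)
true_p = (0.05, 0.40, 0.30, 0.20, 2.5)
theta = swcc_2d((psi, d), *true_p) + rng.normal(0.0, 0.002, psi.size)

# Nonlinear least-squares estimate of the five parameters from (psi, d, theta)
p_hat, _ = curve_fit(swcc_2d, (psi, d), theta, p0=(0.03, 0.35, 0.2, 0.1, 2.0))
```

Because d enters the model as a regular independent variable, the fitted surface can be evaluated at any depth along the column, which is the point made in the abstract.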
The quaternionic operator calculus can be applied very elegantly to solve many important boundary value problems arising in fluid dynamics and electrodynamics in an analytic way. In order to apply the quaternionic operator calculus to solve these types of boundary value problems fully explicitly, one has to evaluate two types of integral operators: the Teodorescu operator and the quaternionic Bergman projector. While the integral kernel of the Teodorescu transform is universal for all domains, the kernel function of the Bergman projector, called the Bergman kernel, depends on the geometry of the domain. Recently, the theory of quaternionic holomorphic multiperiodic functions and automorphic forms has provided new impulses for setting up explicit representation formulas for large classes of hyperbolic polyhedron-type domains. These include block-shaped domains, wedge-shaped domains (with or without additional rectangular restrictions), and circularly symmetric finite and infinite cylinders as particular subcases. In this talk we give an overview of the recent developments in this direction.
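As a rough sketch of the two operators named above (standard conventions of quaternionic analysis with the Dirac operator $D$ on a domain $\Omega$; normalization constants of the kernel are omitted, so this is schematic rather than a definitive formulation):

```latex
% Teodorescu transform: its kernel q_0 (the Cauchy kernel) is the same
% for every domain Omega; T acts as a right inverse of the Dirac operator D.
(T_\Omega f)(x) = -\int_\Omega q_0(x-y)\, f(y)\, dV_y,
\qquad D\, T_\Omega f = f .

% Bergman projector: orthogonal projection onto the square-integrable
% monogenic functions; its kernel B(x,y) depends on the geometry of Omega.
(P_\Omega f)(x) = \int_\Omega B(x,y)\, f(y)\, dV_y .
```

This makes the asymmetry in the abstract concrete: $q_0$ is fixed once and for all, while $B(x,y)$ must be constructed per domain, which is where the multiperiodic and automorphic techniques enter.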
Digital models of buildings are widely used in civil engineering. In these models, geometric information serves as the leading information. Engineers are used to working with geometric information; for instance, it is state of the art to specify a point by its three coordinates. However, the traditional approaches have disadvantages. Geometric information is over-determined: more geometric information is specified and stored than needed. In addition, engineers already deal with topological information; the denotation of objects in buildings is of a topological nature. The question is whether approaches in which topological information takes the leading role would be more efficient in civil engineering. This paper presents such an approach. Topological information is modelled independently of geometric information and is used to denote the objects of a building. Geometric information is associated with topological information, so that geometric information “weights” the topology.
The concept presented in this paper has already been used in surveying existing buildings. Experience with this concept showed that the amount of geometric information required for a complete specification of a building could be reduced by a factor of up to 100. Further research will show how this concept can be used in planning processes.
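A minimal sketch of the idea that geometry "weights" an independently modelled topology (all class and variable names are illustrative, not the paper's data model): the topology names the objects and their connectivity, and a single anchor coordinate plus per-edge weights replace redundant coordinate triples.

```python
class Node:               # topological point; carries no coordinates itself
    def __init__(self, name):
        self.name = name

class Edge:               # topological connection between two nodes
    def __init__(self, name, start, end):
        self.name, self.start, self.end = name, start, end

# Topology of a simple wall axis: three points, two segments
a, b, c = Node("A"), Node("B"), Node("C")
e1, e2 = Edge("wall1", a, b), Edge("wall2", b, c)

# Geometry "weights" the topology: one anchored coordinate plus edge
# lengths and directions, instead of three full coordinate triples.
geometry = {
    a: (0.0, 0.0, 0.0),                          # single anchored point
    e1: {"length": 5.0, "dir": (1.0, 0.0, 0.0)},
    e2: {"length": 3.0, "dir": (0.0, 1.0, 0.0)},
}

def coordinate(node):
    """Derive a node's coordinates by walking the topology to the anchor."""
    if node in geometry:
        return geometry[node]
    for e in (e1, e2):
        if e.end is node:
            x, y, z = coordinate(e.start)
            L, (dx, dy, dz) = geometry[e]["length"], geometry[e]["dir"]
            return (x + L * dx, y + L * dy, z + L * dz)
    raise KeyError(node)
```

Coordinates become derived quantities, so changing one edge weight updates every dependent point, which illustrates why the stored geometric information can shrink so drastically.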
DECENTRALIZED APPROACHES TO ADAPTIVE TRAFFIC CONTROL AND AN EXTENDED LEVEL OF SERVICE CONCEPT
(2006)
Traffic systems are highly complex multi-component systems suffering from instabilities and non-linear dynamics, including chaos. This is caused by the non-linearity of interactions, delays, and fluctuations, which can trigger phenomena such as stop-and-go waves, noise-induced breakdowns, or slower-is-faster effects. Emerging information and communication technologies (ICT) promise new solutions, leading from classical, centralized control to decentralized approaches in the sense of collective (swarm) intelligence and ad hoc networks. An interesting application field is adaptive, self-organized traffic control in urban road networks. We present control principles that allow one to reach a self-organized synchronization of traffic lights. Furthermore, vehicles will become automatic traffic-state detection, data management, and communication centers when forming ad hoc networks through inter-vehicle communication (IVC). We discuss the mechanisms and the efficiency of message propagation on freeways by short-range communication. Our main focus is on future adaptive cruise control (ACC) systems, which will not only increase the comfort and safety of car passengers, but also enhance the stability of traffic flows and the capacity of the road (“traffic assistance”). We present an automated driving strategy that adapts the operation mode of an ACC system to the autonomously detected, local traffic situation. The impact on the traffic dynamics is investigated by means of a multi-lane microscopic traffic simulation. The simulation scenarios illustrate the efficiency of the proposed driving strategy: already an ACC equipment level of 10% improves the traffic flow quality and drastically reduces travel times by delaying or preventing a breakdown of the traffic flow. For the evaluation of the resulting traffic quality, we have recently developed an extended level of service (ELOS) concept.
We demonstrate our concept on the basis of travel times as the most important variable for a user-oriented quality of service.
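A sketch of the kind of microscopic car-following update used in such multi-lane simulations. The Intelligent Driver Model (IDM) is assumed here as the longitudinal model, and all parameter values are illustrative; the abstract itself does not name a specific model. An adaptive ACC strategy of the sort described would switch parameter sets (e.g. time headway T) depending on the detected traffic state.

```python
import math

def idm_acceleration(v, gap, dv,
                     v0=33.3,   # desired speed (m/s)
                     T=1.2,     # safe time headway (s)
                     a=1.0,     # maximum acceleration (m/s^2)
                     b=1.5,     # comfortable deceleration (m/s^2)
                     s0=2.0):   # minimum gap (m)
    """IDM acceleration for speed v, gap to the leader, approach rate dv."""
    s_star = s0 + v * T + v * dv / (2.0 * math.sqrt(a * b))  # desired gap
    return a * (1.0 - (v / v0) ** 4 - (s_star / max(gap, 0.01)) ** 2)

# Free road: large gap, no approach rate -> close to maximum acceleration
free = idm_acceleration(v=20.0, gap=500.0, dv=0.0)
# Rapid approach to a slower leader at short gap -> strong braking
brake = idm_acceleration(v=20.0, gap=10.0, dv=5.0)
```

The same acceleration function governs both regimes, which is why tuning a handful of its parameters per traffic state is enough to implement a situation-adaptive driving strategy.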
This contribution will be freewheeling in the domain of signal, image and surface processing, touching briefly upon some topics that have been close to the heart of people in our research group. Much of the research carried out worldwide in this domain over the last 20 years deals with multiresolution. Multiresolution allows one to represent a function (in the broadest sense) at different levels of detail. This has been applied not only to signals and images but also to solving all kinds of complex numerical problems. Since wavelets came into play in the 1980s, this idea has been applied and generalized by many researchers; therefore we use it as the central idea throughout this text. Wavelets, subdivision and hierarchical bases are the appropriate tools to obtain these multiresolution effects. We introduce some of the concepts in a rather informal way and show that the same concepts work in one, two and three dimensions. The applications in the three cases are, however, quite different, and one wants to achieve very different goals when dealing with signals, images or surfaces. Because completeness in our treatment is impossible, we have chosen to describe two case studies after introducing some concepts in signal processing. These case studies are still the subject of current research. The first attempts to solve a problem in image processing: how to approximate an edge in an image efficiently by subdivision. The method is based on normal offsets. The second is the use of Powell-Sabin splines to give a smooth multiresolution representation of a surface. In this context we also illustrate the general method of constructing a spline wavelet basis using a lifting scheme.
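The lifting scheme mentioned above can be illustrated in a few lines with the Haar wavelet (the simplest case; the abstract's Powell-Sabin construction is far more elaborate): split the signal into even and odd samples, predict the odds from the evens, then update the evens to preserve the average. Each step is trivially invertible by reversing the operations.

```python
def haar_lift(signal):
    """One Haar lifting step: returns (coarse averages, detail differences)."""
    even, odd = signal[0::2], signal[1::2]
    detail = [o - e for e, o in zip(even, odd)]           # predict step
    coarse = [e + d / 2.0 for e, d in zip(even, detail)]  # update step
    return coarse, detail

def haar_unlift(coarse, detail):
    """Exact inverse: undo the update, then the predict, then merge."""
    even = [c - d / 2.0 for c, d in zip(coarse, detail)]
    odd = [e + d for e, d in zip(even, detail)]
    out = []
    for e, o in zip(even, odd):
        out += [e, o]
    return out

coarse, detail = haar_lift([2.0, 4.0, 6.0, 6.0])
# coarse holds local averages, detail the local differences
```

Applying `haar_lift` recursively to the coarse part yields exactly the multilevel, levels-of-detail representation the text describes.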
It is not uncommon for analysis and simulation methods to be used mainly to evaluate finished designs and to prove their quality, whereas the potential of such methods is to lead or control a design process from the beginning. We therefore introduce a design method that moves away from a “what-if” forecasting philosophy and increases the focus on backcasting approaches. We use the power of computation by combining sophisticated design-generation methods with analysis methods to close the gap between analysis and synthesis of designs. For the development of a future-oriented computational design support we need to be aware of the human designer’s role: a productive combination of the excellence of human cognition with the power of modern computing technology is needed. We call this approach “cognitive design computing”. The computational part aims to mimic the way a designer’s brain works by combining state-of-the-art optimization and machine learning approaches with available simulation methods. The cognition part respects the complex nature of design problems through models for human-computer interaction, meaning that a design problem is distributed between computer and designer. In the context of the conference slogan “back to command”, we ask how we may imagine command over a cognitive design computing system. We expect that designers will need to cede control of some parts of the design process to machines, but in exchange they will gain powerful new command over complex computing processes. This means that designers have to explore their potential role as commanders of partially automated design processes. In this contribution we describe an approach for the development of a future cognitive design computing system with a focus on urban design issues.
The aim of this system is to enable an urban planner to treat a planning problem as a backcasting problem by defining what performance a design solution should achieve and to automatically query or generate a set of best possible solutions. This kind of computational planning process offers proof that the designer meets the originally, explicitly defined design requirements. A key way in which digital tools can support designers is by generating design proposals. Evolutionary multi-criteria optimization methods allow us to explore a multi-dimensional design space and provide a basis for the designer to evaluate contradicting requirements: a task urban planners face frequently. We also reflect on why designers will cede more and more control to machines. To this end, we investigate first approaches to learning how designers use computational design support systems in combination with manual design strategies to deal with urban design problems, employing machine learning methods. By observing how designers work, it is possible to derive more complex artificial solution strategies that can help computers make better suggestions in the future.
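At the core of the evolutionary multi-criteria optimization mentioned above is the Pareto-dominance test: design A dominates design B if A is no worse in every objective and strictly better in at least one. A minimal sketch (objectives are costs to minimize; the objective names are hypothetical, not from the paper):

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization)."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def pareto_front(designs):
    """Keep only the designs not dominated by any other design."""
    return [d for d in designs
            if not any(dominates(o, d) for o in designs if o is not d)]

# Hypothetical urban-design objectives: (walking distance, noise exposure)
candidates = [(3.0, 8.0), (5.0, 2.0), (4.0, 9.0), (6.0, 6.0)]
front = pareto_front(candidates)
```

The front retains mutually incomparable trade-offs rather than a single winner, which is precisely what lets the planner, not the machine, make the final call between contradicting requirements.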
Some CAAD packages offer additional support for the optimization of spatial configurations, but the possibilities for applying optimization are usually limited either by the complexity of the data model or by the constraints of the underlying CAAD system. Since we lacked a system that allows experimenting with optimization techniques for the synthesis of spatial configurations, we developed a collection of methods over the past years. This collection is now combined in the presented open-source library for computational planning synthesis, called CPlan. The aim of the library is to provide an easy-to-use programming framework with a flat learning curve for people with basic programming knowledge. It offers an extensible structure that allows adding new customized parts for various purposes. In this paper the existing functionality of the CPlan library is described.
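CPlan's actual API is not reproduced here; the following is a generic sketch of the kind of spatial-configuration search such a library supports: place rectangular rooms on a grid and reduce total overlap by accepting non-worsening random moves (simple hill climbing; all names and parameters are illustrative).

```python
import random

def overlap(r1, r2):
    """Overlap area of two axis-aligned rectangles (x, y, w, h)."""
    (x1, y1, w1, h1), (x2, y2, w2, h2) = r1, r2
    ox = max(0, min(x1 + w1, x2 + w2) - max(x1, x2))
    oy = max(0, min(y1 + h1, y2 + h2) - max(y1, y2))
    return ox * oy

def total_overlap(rooms):
    return sum(overlap(a, b)
               for i, a in enumerate(rooms) for b in rooms[i + 1:])

def improve(rooms, steps=2000, grid=10, seed=1):
    """Hill climbing: re-place one room at random, keep non-worsening moves."""
    rng = random.Random(seed)
    rooms = list(rooms)
    cost = total_overlap(rooms)
    for _ in range(steps):
        i = rng.randrange(len(rooms))
        _, _, w, h = rooms[i]
        cand = rooms[:]
        cand[i] = (rng.randrange(grid), rng.randrange(grid), w, h)
        if total_overlap(cand) <= cost:
            rooms, cost = cand, total_overlap(cand)
    return rooms, cost

start = [(0, 0, 4, 4), (1, 1, 4, 4), (2, 2, 4, 4)]  # heavily overlapping
layout, cost = improve(start)
```

A real framework adds richer constraints (adjacency, access, orientation) and stronger search strategies (e.g. evolutionary algorithms) on top of exactly this evaluate-mutate-accept loop.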
The increasingly required cooperation of various participants from different disciplines and the use of highly specialized domain applications in heterogeneous system environments underline the importance and necessity of new concepts for creating a computer-supported integration layer. The goal of such an integration layer is to improve cooperation and communication among the participants. Its foundation is the establishment of an efficient and error-free exchange of data and information between the various specialist planners and applications. The basis for the data-integration layer is a digital building model in the sense of a >virtual building<, which provides all relevant data and information about a building that is to be planned or that already exists. When realizing a building-model-oriented data-integration layer and its model management, the definition of the building model, i.e. the specification of the relevant data to be exchanged, proves to be extremely complex. The relation-oriented approach presented here, i.e. the realization of data and information exchange via defined relations between dynamically modifiable domain models, offers means to: * reduce and master the complexity of the building model (partial-model formation) * realize an efficient data exchange (relation management). The relation-oriented approach thus represents an adequate way of modelling a digital building model as a data-integration layer for the life cycle of a building.
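A minimal sketch of the relation-oriented idea (all identifiers are illustrative): instead of one monolithic building model, each discipline keeps its own domain model, and an explicitly managed relation layer records which elements correspond across domains, so data exchange follows these links.

```python
# Two independent domain models with their own element identifiers
architecture = {"W1": {"type": "wall", "length": 5.0}}
structure = {"S7": {"type": "shear_wall", "thickness": 0.24}}

# The relation layer: which elements represent the same physical object
relations = [("architecture", "W1", "structure", "S7")]

def counterparts(domain, key):
    """All elements in other domain models related to (domain, key)."""
    out = []
    for d1, k1, d2, k2 in relations:
        if (d1, k1) == (domain, key):
            out.append((d2, k2))
        elif (d2, k2) == (domain, key):
            out.append((d1, k1))
    return out
```

Because the domain models stay separate, each can evolve on its own schedule; only the relation table must be kept consistent, which is the "relation management" named above.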
Data exchange and data or product-data models have been research topics for several years. Various research projects and company initiatives have led to cross-domain approaches such as IFC and various STEP application protocols (APs). In steel construction in particular, the projects >Produktschnittstelle Stahlbau< and >CIMsteel< have been developed, extended and revised. As further developments of the existing exchange formats, newer approaches attempt to extend their benefit beyond pure data transmission: they integrate aspects of communication, collaboration and management, and additionally take over tasks of data and model administration. The result is a digital representation that incorporates all captured data. Owing to the special boundary conditions in the construction industry, a building model is composed of domain models that are related to one another.
In all stages of the building planning process, design decisions strongly influence the building-physics quality of a building. This contribution therefore presents the integration of building-physics aspects into the planning process, in which the specialist engineer is provided with suitable tools that allow the building to be planned to be considered as a unit of building envelope, building services and usage. Building on this, a targeted check of the building model is carried out by means of building-physics verifications and simulations in order to provide building-physics decision support in the design process. The developed program system VAMOS (Verteilte Applikation zur Modellierung und Optimierung bauphysikalischer Systeme, a distributed application for modelling and optimizing building-physics systems) consistently uses the middleware technology CORBA for the dynamic, network-wide integration of five task-specific components. The first component, for model creation and manipulation, was implemented as an ARX runtime module on top of the CAD system AutoCAD. This makes it possible to preserve existing planning workflows using the design engineer's standard tools, while the extensive capabilities of the AutoCAD geometry kernel can be used to create complex three-dimensional component geometries. As the second component, an object-oriented database system was integrated into the overall system; it is also used to manage different versions of building designs. The building-physics verifications, which can be carried out automatically on the basis of the models provided centrally in the network, were implemented with Java applet technology to allow central maintenance and adaptation to changes in regulations and legislation.
Both the thermal insulation ordinance (WSVO) and the energy saving ordinance (EnEV) were taken into account. For a holistic assessment of the building's energy balance, the simulation program TRNSYS was extended by an interface module using IDL interfaces, so that its extensive functionality can be integrated directly into the overall system. To allow modelling based on realistic parameters, a component was developed that uses mobile Internet-agent technology to dynamically search the Internet for manufacturer-specific parameters.