Euclidean Clifford analysis is a higher dimensional function theory offering a refinement of classical harmonic analysis. The theory is centered around the concept of monogenic functions, i.e. null solutions of a first order vector valued rotation invariant differential operator called the Dirac operator, which factorizes the Laplacian. More recently, Hermitean Clifford analysis has emerged as a new and successful branch of Clifford analysis, offering yet another refinement of the Euclidean case; it focuses on the simultaneous null solutions, called Hermitean (or h-) monogenic functions, of two Hermitean Dirac operators which are invariant under the action of the unitary group. In Euclidean Clifford analysis, the Clifford-Cauchy integral formula has proven to be a cornerstone of the function theory, as is the case for the traditional Cauchy formula for holomorphic functions in the complex plane. Previously, a Hermitean Clifford-Cauchy integral formula has been established by means of a matrix approach. This formula reduces to the traditional Martinelli-Bochner formula for holomorphic functions of several complex variables when taking functions with values in an appropriate part of complex spinor space. This means that the theory of Hermitean monogenic functions should also encompass other results of several variable complex analysis as special cases. Here we elaborate further on the obtained results and refine them, considering fundamental solutions, Borel-Pompeiu representations and the Teodorescu inversion, each of them being developed at different levels, including the global level, handling vector variables, vector differential operators and the Clifford geometric product, as well as the blade level, where variables and differential operators act by means of the dot and wedge products. A rich world of results reveals itself, indeed including well-known formulae from the theory of several complex variables.
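The factorization mentioned above can be stated compactly. In standard Clifford analysis notation (a generic sketch, not tied to this paper's specific conventions), with generators satisfying the anticommutation relations, the Dirac operator squares to minus the Laplacian:

```latex
% Generators of the Clifford algebra: e_j e_k + e_k e_j = -2\delta_{jk}
\partial_{\underline{x}} = \sum_{j=1}^{m} e_j \,\partial_{x_j},
\qquad
\partial_{\underline{x}}^{\,2}
  = \sum_{j,k} e_j e_k \,\partial_{x_j}\partial_{x_k}
  = -\sum_{j=1}^{m} \partial_{x_j}^{2}
  = -\Delta_m .
```

Monogenic functions are the null solutions of \(\partial_{\underline{x}}\), so every monogenic function is in particular harmonic, which is the sense in which the theory refines harmonic analysis.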
Image processing has been much inspired by human vision, in particular with regard to early vision. The latter refers to the earliest stage of visual processing, responsible for the measurement of local structures such as points, lines, edges and textures in order to facilitate the subsequent interpretation of these structures in higher stages (known as high level vision) of the human visual system. This low level visual computation is carried out by cells of the primary visual cortex. The receptive field profiles of these cells can be interpreted as the impulse responses of the cells, which are then considered as filters. According to the Gaussian derivative theory, the receptive field profiles of the human visual system can be approximated quite well by derivatives of Gaussians. Two mathematical models suggested for these receptive field profiles are on the one hand the Gabor model and on the other hand the Hermite model, which is based on the analysis filters of the Hermite transform. The Hermite filters are derivatives of Gaussians, while Gabor filters, which are defined as harmonic modulations of Gaussians, provide a good approximation to these derivatives. It is important to note that, even if the Gabor model is more widely used than the Hermite model, the latter offers some advantages, such as forming an orthogonal basis and matching experimental physiological data more closely. In our earlier research both filter models, Gabor and Hermite, have been developed in the framework of Clifford analysis. Clifford analysis offers a direct, elegant and powerful generalization to higher dimensions of the theory of holomorphic functions in the complex plane. In this paper we present the construction of the Hermite and Gabor filters, both in the classical and in the Clifford analysis framework. We also generalize the concept of complex Gaussian derivative filters to the Clifford analysis setting.
Moreover, we present further properties of the Clifford-Gabor filters, such as their relationship with other types of Gabor filters and their localization in the spatial and in the frequency domain formalized by the uncertainty principle.
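As a concrete illustration of the classical Gabor model discussed above, the following sketch builds a real, even-phase Gabor kernel as a cosine-modulated Gaussian. The size, scale, frequency and orientation parameters are illustrative choices, not values from the paper.

```python
import numpy as np

def gabor_kernel(size=31, sigma=4.0, freq=0.15, theta=0.0):
    """Real (even-phase) Gabor filter: a Gaussian envelope modulated
    by a cosine carrier oscillating along direction theta.
    All parameter values are illustrative defaults."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    # rotate coordinates so the carrier oscillates along direction theta
    xr = x * np.cos(theta) + y * np.sin(theta)
    envelope = np.exp(-(x**2 + y**2) / (2.0 * sigma**2))
    return envelope * np.cos(2.0 * np.pi * freq * xr)

kernel = gabor_kernel()
```

Convolving an image with a bank of such kernels at several orientations and frequencies yields the local orientation/frequency analysis that the Gabor model attributes to early vision.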
The conventional way of describing an image is in terms of its canonical pixel-based representation. Other image description techniques are based on image transformations. Such an image transformation converts a canonical image representation into a representation in which specific properties of an image are described more explicitly. In most transformations, images are locally approximated within a window by a linear combination of a number of a priori selected patterns. The coefficients of such a decomposition then provide the desired image representation. The Hermite transform is an image transformation technique introduced by Martens. It uses overlapping Gaussian windows and projects images locally onto a basis of orthogonal polynomials. As the analysis filters needed for the Hermite transform are derivatives of Gaussians, Hermite analysis is in close agreement with the information analysis carried out by the human visual system. In this paper we construct a new higher dimensional Hermite transform within the framework of quaternionic analysis. The building blocks for this construction are the Clifford-Hermite polynomials rewritten in terms of quaternionic analysis. Furthermore, we compare this newly introduced Hermite transform with the Quaternionic-Hermite Continuous Wavelet transform. The Continuous Wavelet transform is a signal analysis technique suitable for non-stationary, inhomogeneous signals for which Fourier analysis is inadequate. Finally, the developed three dimensional filter functions of the Quaternionic-Hermite transform are tested on traditional scalar benchmark signals for their selectivity in detecting pointwise singularities.
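The classical one-dimensional analysis filters behind the Hermite transform are derivatives of Gaussians; by the Rodrigues relation, the n-th derivative of a Gaussian equals a Hermite polynomial times that Gaussian. The sketch below builds such filters and checks the polynomial orthogonality under the Gaussian weight; the normalization and the scale parameter are illustrative, not the paper's.

```python
import numpy as np
from numpy.polynomial.hermite import hermval

def hermite_filter(n, x, sigma=1.0):
    """n-th Gaussian-derivative analysis filter, via the Rodrigues
    relation d^n/du^n e^{-u^2} = (-1)^n H_n(u) e^{-u^2}, u = x/sigma.
    H_n is the physicists' Hermite polynomial; normalization is
    illustrative."""
    u = x / sigma
    coeffs = np.zeros(n + 1)
    coeffs[n] = 1.0                       # select H_n
    return hermval(u, coeffs) * np.exp(-u**2)

x = np.linspace(-8.0, 8.0, 4001)
f2 = hermite_filter(2, x)
f4 = hermite_filter(4, x)
```

The orthogonality of the underlying Hermite polynomials under the Gaussian weight is what makes the Hermite transform an orthogonal decomposition, in contrast to the (non-orthogonal) Gabor model.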
SENSUAL IS POLITICAL
(2011)
Daniela Brasil is currently a PhD candidate at the professorship of Spatial Planning and Research at the Architecture Faculty of the Bauhaus-Universität Weimar, where she has also been teaching since 2007. Her seminars foster bodily experiments and critical thinking on the “city’s sensuality”, with discussions on city marketing and affectivity at their center. She was educated in architecture and urbanism in Brazil and Portugal and holds a Master of Fine Arts in Public Art and new artistic strategies. Conceiving and realizing artistically oriented projects that intervene in the relations between bodies and cities has been her main concern since the mid-nineties; she works preferably in transdisciplinary groups, as in “Lisbon Capital of Nothing: create, debate and intervene in public space, Marvila 2001”. Daniela currently runs the project “Baustelle M10 > gallery for contemporary experiments” within a collective of artists and students in Weimar.
Reliable planning within existing buildings requires a wealth of diverse information to be taken into account, much of which only becomes available during the planning or construction process. A survey of the existing building is always the prerequisite. Although computer programs exist to support building surveys, they are exclusively isolated solutions, and exporting the captured data into a planning system entails a loss of information. Despite the potential of current CAAD/BIM systems to manage as-built data, these systems are primarily designed for the planning of new buildings. The continuous processing of refurbishment projects within a single CAAD/BIM system, from the building survey through design and approval planning to detailed design, is currently not adequately supported. At the Professur Informatik in der Architektur (InfAR) of the Faculty of Architecture of the Bauhaus-Universität Weimar, concepts and prototypes for the domain-oriented support of planning within existing buildings have been developed in recent years within the DFG collaborative research centre Sonderforschungsbereich 524, "Werkzeuge und Konstruktionen für die Revitalisierung von Bauwerken". The focus was on capturing all as-built data relevant to planning and representing it in a dynamic building model. Building on this research, the article addresses the context-dependent reuse and targeted provision of as-built data in the process of planning within existing buildings, and the integration of concepts for planning-oriented building surveys into commercially available CAAD/BIM systems.
ARCHITECTURE AND ATMOSPHERE
(2011)
Nathalie Bredella is an architect. She was educated at the TU Berlin and Cooper Union, New York. She received a PhD in Architectural Theory and taught architectural design at the TU Berlin. She is the author of Architekturen des Zuschauens. Imaginäre und reale Räume im Film (transcript Verlag), a work based on an interdisciplinary approach incorporating architecture, film theory and philosophy. Her interests in architectural practice focus on the relationship between spatial strategies, film and media on an urban and architectural scale.
Iso-parametric finite elements with linear shape functions generally exhibit an overly stiff element behavior, called locking. When structural parts under bending loading are investigated, so-called shear locking appears, because these elements cannot reproduce pure bending modes. Many studies have dealt with the locking problem, and a number of methods to avoid the undesirable effects have been developed. Two well-known methods are the >Assumed Natural Strain< (ANS) method and the >Enhanced Assumed Strain< (EAS) method. In this study the EAS method is applied to a four-node plane element with four EAS parameters. The paper describes the well-known linear formulation, its extension to nonlinear materials, and the modeling of material uncertainties with random fields. For nonlinear material behavior the EAS parameters cannot be determined directly. Here the problem is solved by an internal iteration at the element level, which is much more efficient and stable than determination via a global iteration. To verify the deterministic element behavior, results of common test examples are presented for linear and nonlinear materials. The material uncertainties are modeled by point-discretized random fields. To show the applicability of the element to stochastic finite element calculations, Latin Hypercube Sampling was applied to investigate the stochastic hardening behavior of a cantilever beam with nonlinear material. The enhanced linear element can be applied as an alternative to higher-order finite elements, where more nodes are necessary. The presented element formulation can be used in a similar manner to improve stochastic linear solid elements.
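Latin Hypercube Sampling, used above for the stochastic hardening study, can be sketched in a few lines. The implementation below is a generic stratified sampler on the unit hypercube, not the authors' code; samples would still need to be mapped through the inverse CDFs of the random-field variables.

```python
import numpy as np

def latin_hypercube(n_samples, n_dims, seed=None):
    """Latin Hypercube sample on [0, 1)^n_dims: each of the n_samples
    equal strata per dimension receives exactly one point."""
    rng = np.random.default_rng(seed)
    # one uniformly placed point inside each stratum, per dimension
    u = (np.arange(n_samples)[:, None]
         + rng.random((n_samples, n_dims))) / n_samples
    # decouple the dimensions by permuting the strata independently
    for d in range(n_dims):
        u[:, d] = u[rng.permutation(n_samples), d]
    return u

X = latin_hypercube(10, 2, seed=0)
```

Compared with plain Monte Carlo, the stratification guarantees that each marginal distribution is covered evenly even for small sample counts, which is why LHS is popular in stochastic finite element studies.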
In the context of finite element model updating using vibration test data, natural frequencies and mode shapes are used as validation criteria. Consequently, the order of natural frequencies and mode shapes is important. As only limited spatial information is available and noise is present in the measurements, the automatic selection of the most likely numerical mode shape corresponding to a measured mode shape is a difficult task. The most common criterion to indicate corresponding mode shapes is the modal assurance criterion. Unfortunately, this criterion fails in certain cases. In this paper, the purely mathematical modal assurance criterion is enhanced by additional physical information from the numerical model in terms of modal strain energies. A numerical example and a benchmark study with real measured data are presented to show the advantages of the enhanced, energy-based criterion in comparison to the traditional modal assurance criterion.
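The traditional modal assurance criterion referred to above has a standard closed form; a minimal sketch is given below. The energy-based enhancement discussed in the paper would additionally involve the stiffness matrix (modal strain energies) and is not reproduced here.

```python
import numpy as np

def mac(phi_i, phi_j):
    """Modal Assurance Criterion between two real mode shape vectors:
    MAC = (phi_i . phi_j)^2 / ((phi_i . phi_i) (phi_j . phi_j)).
    1 means perfectly correlated shapes, 0 means orthogonal shapes."""
    phi_i = np.asarray(phi_i, dtype=float)
    phi_j = np.asarray(phi_j, dtype=float)
    return (np.dot(phi_i, phi_j) ** 2
            / (np.dot(phi_i, phi_i) * np.dot(phi_j, phi_j)))
```

Because the MAC is scale-invariant, it pairs measured and numerical shapes regardless of their normalization, which is exactly why it can fail when two distinct modes have geometrically similar shapes at the few measured locations.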
The presented work focuses on collaboration experiences gathered in complex design and engineering projects using the learning platform POLE-Europe. Within the POLE environment, student teams from different universities, disciplines and cultural backgrounds are assigned to real-world projects with clearly defined design tasks, usually to be accomplished within one semester while working in a virtual environment for most of the time. The concept of POLE and the underlying information and collaboration technology are described.
Self-compacting concrete has the potential to find broad application in concrete construction. Its special properties can above all enable a more reliable and more robust placement of the concrete in the structure and thus considerably improve quality. A prerequisite for successful use, however, is appropriate experience and care in handling this new building material, whether during production in the ready-mixed concrete plant or during placement on the construction site. The author gives an overview of this new technology and reports test results on the placement of self-compacting concrete.
Current software solutions for real estate planning, construction and use do not model the complete life cycle of a building. Well-integrated software tools exist for the planning and construction phases, and data integrity is maintained throughout them, but problems occur at the transition to the use phase: at this interface, the complete data set of planning and execution is lost. Another deficiency is that current software solutions do not handle construction work and maintenance work equally. A new software generation is therefore needed that continuously covers the entire workflow from the planning phase to the demolition of a building. New data concepts have to be developed that allow work items for construction to be brought together with work items for real estate use.
SENSORY TECTONICS
(2011)
Using the example of a three-span continuous beam, the failure probability of reinforced concrete beams under variable-repeated loading is investigated with respect to the limit state of adaptation (shakedown). The shakedown analysis accounts for the load-dependent degradation of the bending stiffness due to cracking. The associated mechanical problem can be reduced to the shakedown analysis of linear elastic, ideally plastic beam structures with unknown but bounded bending stiffness. The failure probability is computed taking stochastic structural and load quantities into account. Structural properties and permanent loads are treated as time-independent random variables. Time-varying loads are modeled as service-life-related extreme values of Poisson rectangular pulse processes, accounting for temporal superposition effects, so that the failure probability is likewise a service-life-related quantity. The mechanical problems are solved numerically by mathematical optimization. The failure probability is estimated statistically with the Monte Carlo method.
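The Monte Carlo estimation step used in the study above can be illustrated with a generic sketch: the failure probability is the fraction of samples for which the limit state function is violated. The normal resistance/load limit state g = R − S below is an illustrative stand-in, not the beam model of the paper.

```python
import math
import numpy as np

# Monte Carlo estimate of a failure probability P(g < 0), g = R - S,
# with illustrative normal resistance R and load effect S
rng = np.random.default_rng(42)
n = 200_000
R = rng.normal(5.0, 0.5, n)       # resistance (illustrative)
S = rng.normal(3.0, 0.5, n)       # load effect (illustrative)
pf_mc = np.mean(R - S < 0.0)

# analytical reference for this linear-normal case:
# beta = (mu_R - mu_S) / sqrt(sigma_R^2 + sigma_S^2), pf = Phi(-beta)
beta = (5.0 - 3.0) / math.sqrt(0.5**2 + 0.5**2)
pf_exact = 0.5 * math.erfc(beta / math.sqrt(2.0))
```

For small failure probabilities the sample count must grow roughly like 1/pf to keep the relative sampling error bounded, which is why variance-reduction schemes are common in structural reliability.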
Working on construction projects requires a high degree of expert knowledge from different disciplines, and a multitude of specialized domain models is employed. To transfer data from other planners into one's own, newly created domain model, the available contents of the various models must be adapted by the planner to his or her requirements and supplemented with domain-specific contents. This gives rise to relations that reveal the interconnections and dependencies among the domain models. A predefinition of the model network described by these relations that is at once generally valid and complete is hardly possible. For the computer-internal representation, the model network is therefore decomposed into partial models and corresponding links. The quality of the resulting web of relations depends both on the quality of the data models and on the descriptive quality of the link types, whose definition requires a high degree of expert knowledge. A concept for structuring and decomposing the links into basic elements, together with the possibility of integrating them into more complex elements, enables simpler creation, maintenance and adaptation of comprehensive construction-specific contents. To ensure a high-quality description of the model network, access to the specification and adaptation of the relation definitions that is geared to the engineer's capabilities is indispensable.
Data exchange and data or product data models have been research topics for several years. Various research projects and initiatives of numerous companies have led to cross-domain approaches such as IFC and various STEP APs. In steel construction specifically, the projects >Produktschnittstelle Stahlbau< and >CIMsteel< have been developed, advanced and revised. As further developments of the existing exchange formats, newer approaches attempt to extend their benefit beyond pure data transmission. These proposed solutions integrate aspects of communication, collaboration and management, and additionally take on tasks of data and model administration. The result is a digital representation that incorporates all acquired data. Owing to the particular boundary conditions of the construction industry, a building model is assembled from interrelated domain models.
This contribution will be freewheeling in the domain of signal, image and surface processing, touching briefly upon some topics that have been close to the heart of people in our research group. Much of the research carried out worldwide in this domain over the last 20 years deals with multiresolution. Multiresolution allows a function (in the broadest sense) to be represented at different levels of detail. This has been applied not only to signals and images but also to the solution of all kinds of complex numerical problems. Since wavelets came into play in the 1980s, this idea has been applied and generalized by many researchers; we therefore use it as the central idea throughout this text. Wavelets, subdivision and hierarchical bases are the appropriate tools to obtain these multiresolution effects. We introduce some of the concepts in a rather informal way and show that the same concepts work in one, two and three dimensions. The applications in the three cases are, however, quite different, and thus one wants to achieve very different goals when dealing with signals, images or surfaces. Because completeness in our treatment is impossible, we have chosen to describe two case studies after introducing some concepts of signal processing. These case studies are still the subject of current research. The first attempts to solve a problem in image processing: how to approximate an edge in an image efficiently by subdivision. The method is based on normal offsets. The second case is the use of Powell-Sabin splines to give a smooth multiresolution representation of a surface. In this context we also illustrate the general method of constructing a spline wavelet basis using a lifting scheme.
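The lifting scheme mentioned at the end can be illustrated with its simplest instance, one level of the Haar wavelet transform: split the signal into even and odd samples, predict the odd samples from the even ones, then update the even samples so that they carry the running average.

```python
import numpy as np

def haar_lifting_forward(x):
    """One level of the Haar wavelet transform via lifting."""
    even, odd = x[0::2].astype(float), x[1::2].astype(float)
    detail = odd - even              # predict: residual of the prediction
    coarse = even + detail / 2.0     # update: pairwise averages
    return coarse, detail

def haar_lifting_inverse(coarse, detail):
    """Exact inverse: undo the update, then the predict step."""
    even = coarse - detail / 2.0
    odd = even + detail
    x = np.empty(2 * len(coarse))
    x[0::2], x[1::2] = even, odd
    return x
```

The same split/predict/update pattern, with more sophisticated predict and update filters, yields the spline wavelet constructions referred to above, and the inverse is always obtained by running the steps backwards with the signs flipped.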
For complex foundation structures, planning errors can be avoided through consistent modeling. Manual calculation methods generally do not permit a three-dimensional approach. Numerical methods, such as the finite element method, are an optimal tool for the holistic simulation of the problem. The discretization of complex subsoil structures required for a finite element analysis cannot be managed manually. This contribution shows how a finite element model can be generated automatically from a geotechnical model, taking into account the specific requirements of the subsoil-structure system and the construction sequence. The treatment of the geometric and mechanical particularities during mesh generation is presented.
This paper describes a framework for computer-aided conceptual design of building structures driven by architectural considerations. The central task carried out during conceptual design is the synthesis of the structural system. This paper proposes a methodology for the synthesis of structural solutions. Given the nature of architectural constraints, user-model interactivity is devised as the most suitable computer methodology for driving the structural synthesis process. Taking advantage of the hierarchical organization of the structural system, this research proposes a top-down approach for structural synthesis. Through hierarchical refinement, the approach lends itself to the synthesis of global and local structural solutions. The components required for implementing the proposed methodology are briefly described. The main components have been incorporated in a proof-of-concept prototype that is being tested and validated with actual buildings.
Studied architecture at the Technische Universität Wien. From 2001 to 2002 lecturer at the Institut für künstlerische Gestaltung of the TU Wien. Between 1997 and 2002 competitions, planning and realized projects in the architecture offices “the unit” and “ckp”; 2002-2007 “limit architects”, Vienna; since 2008 querkraft architekten, Vienna. Dissertation project since 2004: Design Strategies: Case Study of Six Projects of Rem Koolhaas/OMA. Since 2008 research assistant at the Institut für Architekturtheorie, Kunst- und Kulturwissenschaften of the TU Graz. Research interests: architectural theory, contemporary architecture, design methods, architectural research/design studies, perception and architecture.
The design of mobile IT systems, especially the design of wearable computer systems, is a complex task that requires computer science knowledge, such as that related to hardware configuration and software development, in addition to knowledge of the domain in which the system is intended to be used. Particularly in the AEC sector, it is necessary that the support from mobile information technology fit the work situation at hand. Ideally, the domain expert alone can adjust the wearable computer system to achieve this fit without having to consult IT experts. In this paper, we describe a model that helps in transferring existing design knowledge from non-AEC domains to new projects in the construction area. The base for this is a model and a methodology that describes the usage scenarios of said computer systems in an application-neutral and domain-independent way. Thus, the actual design information and experience will be transferable between different applications and domains.
For decades in Germany, historical research on dictatorial urban design in the first half of the 20th century focused on the National Socialist period. Studies on the urban design practices of other dictatorships remained an exception. This has changed. Meanwhile, the urban production practices of the Mussolini, Stalin, Salazar, Hitler and Franco dictatorships have become the subject of comprehensive research projects. Recently, a research group that studies dictatorial urban design in 20th century Europe has emerged at the Bauhaus-Institut für Geschichte und Theorie der Architektur und der Planung. The group is already able to refer to various research results.
Part of the research group’s self-conception is the assumption that the urban design practices of the named dictatorships can only be properly understood from a European perspective. The dictatorships influenced one another substantially. Furthermore, the specificities of the practices of each dictatorship can only be discerned in comparison with those of the other dictatorships. This approach requires strict adherence to the research methods of planning history and urban design theory; at the same time, these methods must be opened up to include those of general historical studies.
With this symposium, the research group aims to further qualify this European perspective by compiling an inventory of the various national historiographies on the topic of “urban design and dictatorship”. The inventory should offer an overview both at the general level of national historical research on urban design and at the level of particular urban design projects, persons or topics.
The symposium took place in Weimar, November 21-22, 2013. It was organized by Harald Bodenschatz, Piero Sassi and Max Welch Guerra and funded by the DAAD (German Academic Exchange Service).
As is well known, the approximation theory of complex-valued functions is one of the main fields in function theory. In general, several aspects of approximation and interpolation are only well understood by using methods of complex analysis. It seems natural to extend these techniques to higher dimensions by using Clifford analysis methods or, more specifically, in the lower dimensions 3 or 4, by using tools of quaternionic analysis. One starting point for such attempts is the suitable choice of complete orthonormal function systems that should replace the holomorphic function systems used in the complex case. The aim of our contribution is the construction of a complete orthonormal system of monogenic polynomials derived from a harmonic function system by systematically using the generalized quaternionic derivative.
A UNIFIED APPROACH FOR THE TREATMENT OF SOME HIGHER DIMENSIONAL DIRAC TYPE EQUATIONS ON SPHERES
(2010)
Using Clifford analysis methods, we provide a unified approach to obtain explicit solutions of some partial differential equations combining the n-dimensional Dirac and Euler operators, including generalizations of the classical time-harmonic Maxwell equations. The obtained regular solutions show strong connections between hypergeometric functions and homogeneous polynomials in the kernel of the Dirac operator.
In this paper we study the structure of the solutions to higher dimensional Dirac type equations generalizing the known λ-hyperholomorphic functions, where λ is a complex parameter. The structure of the solutions to the system of partial differential equations (D − λ)f = 0 shows a close connection with Bessel functions of the first kind with complex argument. The more general system of partial differential equations considered in this paper combines Dirac and Euler operators and emphasizes the role of the Bessel functions. However, contrary to the simplest case, one now obtains Bessel functions of arbitrary complex order.
The application of a recent method based on formal power series is proposed. It rests on a new representation for solutions of Sturm-Liouville equations. The method is used to calculate the transmittance and reflectance coefficients of finite inhomogeneous layers with high accuracy and efficiency. By tailoring the refractive-index profile that defines the inhomogeneous medium, it is possible to develop important applications such as optical filters. A number of profiles were evaluated, and some of them were then selected in order to improve their characteristics via modification of their profiles.
Electromagnetic wave propagation is present in the vast majority of situations occurring in everyday life, whether in mobile communications, DTV, satellite tracking, broadcasting, etc. The study of increasingly complex propagation media has therefore become necessary in order to optimize resources and increase the capabilities of devices, as required by the growing demand for such services.
Various parameters characterize electromagnetic wave propagation under different circumstances; of particular importance are the reflectance and transmittance. Several methods exist for their analysis, such as approximation by boundary conditions or the plane wave expansion (PWE) method, but this work focuses on the WKB and SPPS methods.
The implementation of the WKB method is relatively simple, but it proves efficient only at high frequencies. The SPPS (Spectral Parameter Power Series) method, based on the theory of pseudoanalytic functions, solves this problem through a new representation for solutions of Sturm-Liouville equations and has recently proven to be a powerful tool for various boundary value and eigenvalue problems. Moreover, it has a structure very suitable for numerical implementation, which in this case was carried out in Matlab for the evaluation of both conventional and turning-point profiles.
The comparison between the two methods yields valuable information about their performance, useful for determining the validity and suitability of their application to problems where these parameters must be calculated in real-life applications.
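For context, the reflectance and transmittance of an inhomogeneous lossless layer can also be computed by a standard thin-film transfer-matrix discretization: slice the refractive-index profile into thin homogeneous sub-layers and multiply their characteristic matrices. This is a common reference method, not the SPPS or WKB methods discussed above; the profile, thicknesses and wavelength below are illustrative.

```python
import numpy as np

def slab_RT(n_profile, total_thickness, wavelength, n_in=1.0, n_out=1.0):
    """Reflectance R and transmittance T of a lossless inhomogeneous
    layer at normal incidence, via characteristic-matrix products."""
    k0 = 2.0 * np.pi / wavelength
    d = total_thickness / len(n_profile)   # sub-layer thickness
    M = np.eye(2, dtype=complex)
    for n in n_profile:
        delta = k0 * n * d                 # phase thickness of sub-layer
        layer = np.array([[np.cos(delta), 1j * np.sin(delta) / n],
                          [1j * n * np.sin(delta), np.cos(delta)]])
        M = M @ layer
    (m11, m12), (m21, m22) = M
    denom = n_in * m11 + n_in * n_out * m12 + m21 + n_out * m22
    r = (n_in * m11 + n_in * n_out * m12 - m21 - n_out * m22) / denom
    t = 2.0 * n_in / denom
    return abs(r) ** 2, (n_out / n_in) * abs(t) ** 2

# illustrative linearly graded index, 600 nm layer, He-Ne wavelength
profile = np.linspace(1.0, 2.0, 400)
R, T = slab_RT(profile, 600e-9, 633e-9, n_in=1.0, n_out=2.0)
```

Energy conservation R + T = 1 holds exactly for real refractive indices and serves as a built-in sanity check for any such solver, including SPPS- or WKB-based ones.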
Hyperbolic Qp-scales
(2003)
The Qp-scales were first introduced in [1] as interpolation spaces between the Bloch and Dirichlet spaces in the complex setting. ... However, such a treatment has the disadvantage of covering only the Euclidean case. In order to obtain an approach to homogeneous hyperbolic manifolds, the projective model of Gel'fand was taken up again in [2]. With the help of a convenient fundamental solution for the hyperbolic operator Dα (homogeneous of degree α, see [5]), equivalent Qp-scales for homogeneous hyperbolic spaces were introduced in [7] and [3]. In this talk we present and study some properties of this hyperbolic scale.
Computationally, design activity can be treated as a sequence of state transitions. In stepwise processing, the in-between form states are not easily observed. In this research a time-based concept is therefore introduced and applied in order to bridge this gap. In architecture, folding is one method of form manipulation, and architects want to search for design alternatives through this operation. The folding operation has to be defined and parameterized before the time factor can be involved as a variable of folding. As a result, time-based transformation provides sequential form states and redirects design activity.
Spatial data acquisition, integration, and modeling for real-time project life-cycle applications
(2004)
Current methods for site modeling employ expensive laser range scanners that produce dense point clouds, which require hours or days of post-processing to arrive at a finished model. While these methods produce very detailed models of the scanned scene, useful for obtaining as-built drawings of existing structures, the associated computational time burden precludes the methods from being used onsite for real-time decision-making. Moreover, in many project life-cycle applications, detailed models of objects are not needed. Results of earlier research conducted by the authors demonstrated novel, highly economical methods that reduce data acquisition time and the need for computationally intensive processing. These methods enable complete local area modeling in the order of a minute, and with sufficient accuracy for applications such as advanced equipment control, simple as-built site modeling, and real-time safety monitoring for construction equipment. This paper describes a research project that is investigating novel ways of acquiring, integrating, modeling, and analyzing project site spatial data that do not rely on dense, expensive laser scanning technology and that enable scalability and robustness for real-time field deployment. Algorithms and methods for modeling objects of simple geometric shape (geometric primitives) from a limited number of range points provide a foundation for the further development required to address more complex site situations, especially those involving dynamic site information (motion of personnel and equipment). Field experiments are being conducted to establish performance parameters and validation for the proposed methods and models. Initial experimental work has demonstrated the feasibility of this approach.
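Fitting a geometric primitive to a limited number of range points, as described above, can be sketched for the simplest primitive, a plane: a least-squares fit through the centroid using the singular vector with the smallest singular value. This is an illustrative sketch, not the authors' algorithm.

```python
import numpy as np

def fit_plane(points):
    """Total-least-squares plane through a set of 3D range points.
    Returns (centroid, unit normal); the normal is the right singular
    vector of the centered points with the smallest singular value."""
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts - centroid)
    return centroid, vt[-1]

# noise-free points on the plane z = 2x + 3y + 5 (illustrative data)
xs, ys = np.meshgrid(np.linspace(0, 1, 5), np.linspace(0, 1, 5))
pts = np.column_stack([xs.ravel(), ys.ravel(),
                       2 * xs.ravel() + 3 * ys.ravel() + 5])
centroid, normal = fit_plane(pts)
```

Because the fit needs only a handful of points, primitives like planes, cylinders and boxes can be recovered from sparse range measurements in well under a second, which is the property the real-time modeling approach above exploits.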
Construction management has been under pressure to reduce operating costs and to improve productivity using innovative information technology (IT) solutions adapted to structural characteristics, site conditions and past experience. Given the growing emphasis on effectiveness and efficiency in construction projects, there is an imminent need to develop a formal procedure to select the best IT application for each proposed construction project and research and development (R&D) project. As there are numerous factors to be considered in selecting appropriate IT in a given situation, decision-makers need multicriteria decision-making ability. To enable them to make the most appropriate decision in any situation, it is important that effective tools incorporating multicriteria decision-making techniques are available. In this paper, an Analytic Network Process (ANP) model is developed for the selection of an appropriate IT application for innovative construction management prior to construction or research. The paper concludes that the ANP is a viable and capable tool for IT application selection in a multicriteria decision-making environment.
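The core computation behind ANP-style ranking can be illustrated by deriving priority weights from a single pairwise comparison matrix via its principal eigenvector; in a full ANP these local priority vectors populate a supermatrix that is raised to powers until it converges. The judgments below are hypothetical, not from the paper:

```python
import numpy as np

def priorities(pairwise):
    """Priority weights from a pairwise comparison matrix (Saaty scale).

    Uses the principal eigenvector of the matrix, normalized to sum to one.
    """
    a = np.asarray(pairwise, dtype=float)
    vals, vecs = np.linalg.eig(a)
    principal = np.real(vecs[:, np.argmax(np.real(vals))])
    w = np.abs(principal)
    return w / w.sum()

# Three IT alternatives compared on one criterion (hypothetical judgments):
# alternative 1 is preferred 3:1 over 2 and 5:1 over 3; 2 is 2:1 over 3.
A = [[1,   3,   5],
     [1/3, 1,   2],
     [1/5, 1/2, 1]]
w = priorities(A)
```

The resulting vector ranks the alternatives; consistency of the judgments would additionally be checked via the principal eigenvalue.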
As computer programs become ever more complex, software development has shifted from a focus on programming towards a focus on integration. This paper describes a simulation access language (SimAL) that can be used to access and compose software applications over the Internet. Specifically, the framework is developed for the integration of tools for project management applications. The infrastructure allows users to specify and use existing heterogeneous tools (e.g., Microsoft Project, Microsoft Excel, Primavera Project Planner, and AutoCAD) for the simulation of project scenarios. This paper describes the components of the SimAL language and the implementation efforts required in the development of the SimAL framework. An illustrative example that brings an on-line weather forecasting service into project scheduling and management applications is provided to demonstrate the use of the simulation language and the infrastructure framework.
This paper presents a number of technical aspects of one of the most elaborate instrumentation and data acquisition projects ever undertaken in Canada. Confederation Bridge, the longest bridge built over ice-covered seawater, has been equipped with state-of-the-art data acquisition devices and systems as well as data transfer networks. The Bridge has provided a fixed surface connection between Prince Edward Island and the Province of New Brunswick in Canada since its opening in 1997. The Bridge has a rather long design service life of 100 years; because of its large size and long span length, its design is not covered by any existing codes or standards worldwide. The focus of the paper is to introduce the data acquisition, transfer, processing and management systems. The instrumentation and communications infrastructure and devices are presented in some detail, along with the data processing and management systems and techniques. Teams of engineers and researchers use the collected data to verify the analysis and design assumptions and parameters, as well as to investigate the short-term and long-term behaviour and health of the Bridge. The collected data are also used in furthering research activities in the field of bridge engineering and in advancing our knowledge of the behaviour, reliability and durability of such complex structures, their components and materials.
This work presents a concept of interactive machine learning in a human design process. An urban design problem is viewed as a multiple-criteria optimization problem. A defining feature of an urban design problem is the dependence of the design goal on the context of the problem. We model the design goal as a randomized fitness measure that depends on the context. In terms of multiple-criteria decision analysis (MCDA), the defined measure corresponds to a subjective expected utility of a user. In the first stage of the proposed approach we let the algorithm explore a design space using clustering techniques. The second stage is an interactive design loop: the user makes a proposal, then the program optimizes it, obtains the user’s feedback, and returns control of the application interface to the user.
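A randomized, context-dependent fitness measure of this kind can be sketched as a Monte-Carlo estimate of expected utility, with preference weights sampled around context-dependent means. The sampling model, the criteria, and all names here are illustrative assumptions, not the paper's formulation:

```python
import random

def expected_utility(criteria_scores, context_mean_weights,
                     spread=0.1, samples=200, seed=0):
    """Randomized fitness of a design: subjective expected utility.

    Preference weights are sampled around context-dependent means (a
    stand-in for the user's uncertain, context-shaped preferences);
    fitness is the mean of the normalized weighted criterion scores.
    """
    rng = random.Random(seed)
    total = 0.0
    for _ in range(samples):
        ws = [max(0.0, rng.gauss(m, spread)) for m in context_mean_weights]
        s = sum(ws) or 1.0
        total += sum(w / s * c for w, c in zip(ws, criteria_scores))
    return total / samples

# Two criteria (say, density and green space, scored in [0, 1]); a
# residential context weights green space higher on average.
fitness = expected_utility([0.8, 0.4], context_mean_weights=[0.3, 0.7])
```

Changing the context changes the weight means, so the same design receives a different fitness in a different context, as the abstract requires.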
In the field of civil engineering, the reinforced concrete design course (RC course) involves complicated design procedures and many specifications that are difficult to grasp, so most students regard the RC course as a tough one, and teachers very often find the class time insufficient. Teachers of the RC course also usually spend a lot of time organizing examinations that involve tedious calculations and complicated logical reasoning. Furthermore, correcting examination papers with partial scoring takes even more of the teacher’s time. Therefore, the objective of this research is to design and develop a partial scoring assessment system to meet the needs of engineering design courses such as the RC course. This assessment system can generate test items with variable parameters. It also supports inference diagnosis of the examinee’s misconceptions and gives partial scores when grading the examination. In this research, the example test subject is the analysis of a rectangular reinforced concrete beam with a single layer of steel bars.
DETERMINATION OF THE DYNAMIC STRESS INTENSITY FACTOR USING ADVANCED ENERGY RELEASE EVALUATION
(2000)
In this study, a simple and effective procedure based on the FEM is developed and applied for the determination of the dynamic stress intensity factor (DSIF) as a function of the input frequency, using an advanced strain energy release evaluation through the simultaneous release of a set of fictitious nodal spring links near the crack tip. The DSIF is expressed in terms of the released energy per unit crack length. The formulations of linear fracture mechanics are accepted. The technique is theoretically based on the eigenvalue problem for assessing the spring stiffnesses and on the modal decomposition of the crack shape. Inertial effects are included in the released energy. A linear elastic material, time-dependent loading of sine type and a steady-state response of the structure are assumed. The procedure allows the opening, sliding and mixed modes of structural fracture to be studied. This rational and powerful technique requires a mesh refinement near the crack tip. A numerical test example of a square notched steel plate under tension is given; only the opening mode of fracture is studied. The DSIF is calculated using a coarse mesh and a single node release for the released energy computation, as well as a fine mesh and the simultaneous release of four links for more accurate values. The results are analyzed, comparisons with the known exact results for static loading are presented, and conclusions are derived. The values of the DSIF are significantly larger than the values of the corresponding static SIF, and significant peaks of the DSIF are observed near the natural frequencies. The approach is general, practicable, reliable and versatile.
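The link between released energy and the stress intensity factor can be stated via the classical Irwin relation, quoted here in its standard mode-I form (the standard linear fracture mechanics relation, not the paper's specific dynamic formulation): for a released strain energy $\Delta U$ over a crack advance $\Delta a$ in a plate of thickness $B$,

```latex
G = \frac{\Delta U}{B\,\Delta a}, \qquad
K_I = \sqrt{E'\,G}, \qquad
E' =
\begin{cases}
E & \text{(plane stress)}\\[2pt]
E/(1-\nu^{2}) & \text{(plane strain)}
\end{cases}
```

where $G$ is the energy release rate, $E$ Young's modulus and $\nu$ Poisson's ratio; in the dynamic case the released energy additionally contains the inertial contribution.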
The paper describes a development of the analytical finite strip method (FSM) in displacements for the linear elastic static analysis of complex orthotropic prismatic shell structures, simply supported at their transverse ends, with an arbitrary open or closed deformable cross-section contour under general external loads. A number of bridge top structures, some roof structures and others belong to the studied class. By longitudinal sections, the prismatic thin-walled structure is discretized into a limited number of plane straight strips which are connected continuously at their longitudinal ends along linear joints. The three displacements of points on the joint lines and the rotation about these lines are taken as the basic unknowns. In the longitudinal direction of the strips, the unknown quantities and external loads are represented by single Fourier series. In the transverse direction of each strip, the unknown values are expressed by hyperbolic functions representing an exact solution of the corresponding differential equations of the plane straight strip. The basic equations and relations for the membrane state, the bending state and the total state of the finite strip are obtained. The stiffness matrix of the strip in the local and global co-ordinate systems is derived. The basic relations of the structure are given and the general stages of the analytical FSM are traced. For long structures the FSM is more efficient than the classic finite element method (FEM), since the problem dimension is reduced by one and the number of unknowns decreases. In comparison with the semi-analytical FSM, the analytical FSM leads to a practically exact solution, especially for wider strips, and provides compatibility of the displacements and internal forces along the longitudinal linear joints.
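The longitudinal Fourier representation mentioned above can be written, for a strip simply supported at $y=0$ and $y=L$, as a sine series that satisfies the end conditions term by term (a generic FSM expansion shown for the normal displacement; the other unknowns are treated analogously):

```latex
w(x,y) \;=\; \sum_{m=1}^{M} f_m(x)\,\sin\frac{m\pi y}{L}
```

In the analytical variant each amplitude function $f_m(x)$ is built from hyperbolic functions solving the strip's differential equation exactly in the transverse direction, which is what distinguishes it from the semi-analytical variant with polynomial shape functions.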
COMPARISON OF SOME VARIANTS OF THE FINITE STRIP METHOD FOR ANALYSIS OF COMPLEX SHELL STRUCTURES
(2000)
The subject of this paper is to explore and evaluate the semi-analytical, analytical and numerical versions of the finite strip method (FSM) for static, dynamic and stability analyses of complex thin-walled structures. Many bridge superstructures, some roof and floor structures, reservoirs, channels, tunnels, subways, layered shells and plates etc. can be analysed by this method. In both the semi-analytical and analytical variants, beam eigenvalue vibration or stability functions, orthogonal polynomials, or products of these functions are used as longitudinal functions of the unknowns. In the numerical FSM, spline longitudinal displacement functions are implemented. In the semi-analytical and numerical FSM, conventional transverse shape functions for displacements are used. In the analytical FSM, the exact function of the strip normal displacement and the plane stress function are applied. These three basic variants of the FSM are compared qualitatively and quantitatively with regard to the following: basic ideas, modelling, unknowns, DOF, kind and order of the strips, longitudinal and transverse displacement and stress functions, compatibility requirements, boundary conditions, ways of obtaining the strip stiffness and load matrices, kind and size of the structure stiffness matrix and its band width, mesh density, necessary number of terms along the length, accuracy and convergence of the stresses and displacements, approaches for refining results, input and output data, computer resources used, application area, closeness to other methods, and options for future development. A numerical example is presented, advantages and shortcomings are pointed out, and conclusions are given.
The goal of the collaborative research center (SFB 532) “Textile reinforced concrete (TRC): the basis for the development of a new material technology”, established in 1998 at Aachen University, is a comprehensive assessment of mechanical, chemical, economic and production aspects in an interdisciplinary environment. The research project involves 10 institutes performing parallel research in 17 projects. The coordination of such a research process requires effective software support for information sharing in the form of data exchange, data analysis and data archival. Furthermore, the processes of experiment planning and design, modification of material compositions and design parameters, and development of new material models in such an environment call for systematic coordination applying the concepts of operational research. Flexible organization of the data coming from several sources is a crucial premise for a transparent accumulation of knowledge and, thus, for successful research in the long run. The technical information system (TRC-TIS) developed in the SFB 532 has been implemented as a database-powered web server with a transparent definition of the product and process model. It serves as an intranet server with access domains devoted to the involved research groups. At the same time, it allows the presentation of selected results by granting a data object access from the public area of the server via the internet.
MICROPLANE MODEL WITH INITIAL AND DAMAGE-INDUCED ANISOTROPY APPLIED TO TEXTILE-REINFORCED CONCRETE
(2010)
The presented material model reproduces the anisotropic characteristics of textile reinforced concrete in a smeared manner. This includes both the initial anisotropy introduced by the textile reinforcement and the anisotropic damage evolution reflecting the fine patterns of crack bridges. The model is based on the microplane approach. The direction-dependent decomposition of the material response onto oriented microplanes provides a flexible way to introduce the initial anisotropy. The microplanes oriented in a yarn direction are associated with modified damage laws that reflect the tension-stiffening effect due to the multiple cracking of the matrix along the yarn.
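The microplane projections underlying such models are standard: the macroscopic strain tensor is resolved on each plane with unit normal $n$ into a normal and a tangential part (the generic kinematic-constraint relations, not this paper's specific damage laws):

```latex
\varepsilon_N = n_i\,\varepsilon_{ij}\,n_j, \qquad
\varepsilon_{T_k} = \varepsilon_{kj}\,n_j - \varepsilon_N\,n_k
```

The macroscopic stress is then recovered by integrating the microplane stresses over the unit hemisphere via the principle of virtual work; orientation-dependent damage laws on the individual planes are what make the anisotropy of the damage evolution possible.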
In this paper we consider the time independent Klein-Gordon equation on some conformally flat 3-tori with given boundary data. We set up an explicit formula for the fundamental solution. We show that we can represent any solution to the homogeneous Klein-Gordon equation on the torus as finite sum over generalized 3-fold periodic elliptic functions that are in the kernel of the Klein-Gordon operator. Furthermore we prove Cauchy and Green type integral formulas and set up a Teodorescu and Cauchy transform for the toroidal Klein-Gordon operator. These in turn are used to set up explicit formulas for the solution to the inhomogeneous version of the Klein-Gordon equation on the 3-torus.
ON THE NAVIER-STOKES EQUATION WITH FREE CONVECTION IN STRIP DOMAINS AND 3D TRIANGULAR CHANNELS
(2006)
The Navier-Stokes equations and related ones can be treated very elegantly with the quaternionic operator calculus developed in a series of works by K. Guerlebeck, W. Sproessig and others. This study is extended in the present paper. In order to apply the quaternionic operator calculus to solve these types of boundary value problems fully explicitly, one basically needs to evaluate two types of integral operators: the Teodorescu operator and the quaternionic Bergman projector. While the integral kernel of the Teodorescu transform is universal for all domains, the kernel function of the Bergman projector, called the Bergman kernel, depends on the geometry of the domain. With special variants of quaternionic holomorphic multiperiodic functions we obtain explicit formulas for three-dimensional parallel plate channels, rectangular block domains and regular triangular channels. The explicit knowledge of the integral kernels then makes it possible to evaluate the operator equations in order to determine the solutions of the boundary value problem explicitly.
Designing lighting in a 3D scene is a complex task in building conception, as it is subject to many constraints such as aesthetics or ergonomics. It is often achieved by experimental trials until an acceptable result is reached. Several rendering software packages (such as Radiance) allow an accurate computation of lighting for each point in a scene, but this is a long process and any modification requires the whole scene to be rendered again to obtain the result. The first guess is empirical, provided by the experience of the operator, and rarely subject to scientific considerations. Our aim is to provide a tool that helps designers achieve this work within the scope of global illumination. We consider the problem where certain data are specified: on the one hand, the mean lighting in some zones (for example on a desktop), and on the other hand, some qualitative information about the location of sources (spotlights on the ceiling, halogens on the north wall, ...). The system we are designing computes the number of light sources, their positions and intensities, in order to obtain the lighting effects defined by the user. The algorithms that we use bind radiosity computations together with the resolution of a system of constraints.
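Because illuminance is linear in the source intensities, the quantitative part of this inverse problem can be sketched as a linear system solved by least squares (the transfer matrix below is made up for illustration; the actual system additionally handles source placement and the qualitative constraints):

```python
import numpy as np

# Illuminance is linear in source intensities: E = A @ s, where A[i, j] is
# the contribution of unit-intensity source j to measurement zone i (transfer
# coefficients that a radiosity pass would precompute). Given target mean
# lighting levels per zone, the intensities follow from least squares.
A = np.array([[0.80, 0.10, 0.05],
              [0.10, 0.70, 0.10],
              [0.05, 0.20, 0.60]])
E_target = np.array([400.0, 300.0, 250.0])  # target lux per zone

s, *_ = np.linalg.lstsq(A, E_target, rcond=None)
s = np.clip(s, 0.0, None)  # physical intensities cannot be negative
```

With more zones than sources the system is overdetermined and the residual measures how closely the requested lighting effects can be met.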
The cost of keeping large-area urban computer aided architectural design (CAAD) models up to date justifies wider use and access. This paper reviews the potential for collaborative groupwork creation and maintenance of such models and suggests an approach to data entry, data management and the generation of models at appropriate levels of detail from a Geographic Information System (GIS). Staff at the University of the West of England (UWE) modelled a large area of Bristol to demonstrate millennium landmark proposals. It swiftly became apparent that continued amendment of the model, to keep it an accurate reflection of changes on the ground, was a major data management problem. Piecing in new CAAD models received from architectural practices, to visualise them in context as part of the planning negotiation process, has often taken staff several days of work for each instance. The model is so complex and proprietary that Bristol City operates a specialist visualisation bureau service. UWE later modelled the environs of the Tower of London to support bids for funding and to provide the context for judging the visual impact of iterative design development. Further research continued to develop more effective approaches. Data conversion and amalgamation from all the diverse sources was the major impediment to effective group working in creating the models. It became apparent that a GIS would assist in retrieving all the appropriate data describing the part of the model under creation. It was possible to predict that the management of many historic part models, stepping back through time and allowing different expert interpretations to co-exist, would in itself be a major task requiring a spatial database/GIS. UWE started afresh from the original source data to explore the collaborative use of GIS and the Virtual Reality Modelling Language (VRML) to integrate models and interventions from various sources and to generate an overall navigable interactive whole.
Current exploration of the combination of event-driven behaviours and Structured Query Language seeks to define how to appropriately modify objects in the VRML model on demand. This is beginning to realise the potential of this process for: asynchronous group modelling along the lines of a collaborative virtual design studio; historic building maintenance management; visitor management; and the interpretation of historic sites for visitors and public planning information.
For decades in Germany, historical research on dictatorial urban design in the first half of the 20th century focused on the National Socialist period. Studies on the urban design practices of other dictatorships remained an exception. This has changed. Meanwhile, the urban production practices of the Mussolini, Stalin, Salazar, Hitler and Franco dictatorships have become the subject of comprehensive research projects. Recently, a research group that studies dictatorial urban design in 20th century Europe has emerged at the Bauhaus-Institut für Geschichte und Theorie der Architektur und der Planung. The group is already able to refer to various research results.
Part of the research group’s self-conception is the assumption that the urban design practices of the named dictatorships can only be properly understood from a European perspective. The dictatorships influenced one another substantially. Furthermore, the specificities of the practices of each dictatorship can only be discerned if one can compare them to those of the other dictatorships. This approach requires strict adherence to the research methods of planning history and urban design theory. At the same time, these methods must be opened up to include those of general historical studies.
With this symposium, the research group aims to further qualify this European perspective. The aim is to pursue an inventory of the various national historiographies on the topic of “urban design and dictatorship”. This inventory should offer an overview on the general national level of historical research on urban design as well as on the level of particular urban design projects, persons or topics.
The symposium took place in Weimar, November 21-22, 2013. It was organized by Harald Bodenschatz, Piero Sassi and Max Welch Guerra and funded by the DAAD (German Academic Exchange Service).
In classical complex function theory the geometric mapping property of conformality is closely linked with complex differentiability. In contrast to the planar case, in higher dimensions the set of conformal mappings consists only of the Möbius transformations. Unfortunately, the theory of generalized holomorphic functions (for historical reasons they are called monogenic functions) developed on the basis of Clifford algebras does not cover the set of Möbius transformations in higher dimensions, since Möbius transformations are not monogenic. On the other hand, monogenic functions are hypercomplex differentiable functions, and the question arises whether, from this point of view, they can still play a special role for other types of 3D-mappings, for instance quasi-conformal ones. On the occasion of the 16th IKM, 3D-mapping methods based on the application of Bergman's reproducing kernel approach (BKM) were discussed. Almost all authors previously working with BKM in the Clifford setting were concerned only with the general algebraic and functional analytic background which allows the explicit determination of the kernel in special situations. The main goal of the abovementioned contribution was the numerical experiment using Maple software specially developed for that purpose. Since BKM is only one of a great variety of concrete numerical methods developed for mapping problems, our goal is to present an approach to 3D-mappings completely different from BKM. In fact, it is an extension of the ideas of L. V. Kantorovich to the 3-dimensional case using reduced quaternions and suitable series of powers of a small parameter. Whereas in the Clifford case of BKM the recovery of the mapping function itself and its relation to the monogenic kernel function is still an open problem, this approach avoids such difficulties and leads to an approximation by monogenic polynomials depending on that small parameter.
Hermann Czech studied architecture at the Technische Hochschule and in the master class of Ernst Plischke at the Akademie der bildenden Künste in Vienna. In 1958 and 1959 he attended seminars with Konrad Wachsmann at the Summer Academy in Salzburg. From 1974 to 1980 he was an assistant to Hans Hollein and Johannes Spalt at the Akademie für angewandte Kunst in Vienna, and in 1985/86 a visiting professor at the same school. In 1988/89 and 1993/94 he was a visiting professor at Harvard University in Cambridge, USA, and from 2004 to 2007 a visiting professor at ETH Zurich. His heterogeneous architectural oeuvre encompasses planning work; residential, school and hotel buildings; as well as small-scale interventions and exhibition designs. His projects relate strongly to their context and deliberately incorporate its existing contradictions. From the 1970s onward (“architecture is background”), Hermann Czech became a protagonist of a new, “quiet” architecture that “speaks only when it is spoken to”. He is the author of numerous critical and theoretical publications on architecture; in his theory, the concepts of Umbau (conversion) and mannerism play a central role. Selected publications: ‚Zur Abwechslung. Ausgewählte Schriften zur Architektur‘, Vienna 1996; ‚Das Looshaus‘, Vienna 1976 (with Wolfgang Mistelbauer); ‚Komfort – ein Gegenstand der Architekturtheorie?‘, in: werk, bauen+wohnen, Zurich, 3/2003, pp. 10-15.
An example demonstrates that insufficient attention to safety-relevant aspects in the project planning phase has adverse consequences for the operation of a building. In this example, weighing all circumstances and taking the relevant occupational safety regulations into account, the only practicable route to a permanently economical solution for the complete cleaning of the glass facade, which makes up the majority of the total facade area, is the retrofitting of so-called flat-roof davits in combination with a working platform that accommodates the nature of the roof, the varying number of storey levels and the immediate proximity of the tram tracks.
Tobias Danielmeier teaches design at the Otago Polytechnic as well as at the University of Otago in Dunedin, New Zealand. He holds a Masters of Arts in Architecture from the Münster School of Architecture and is currently completing his PhD at the University of Otago. His research investigates the art, business and science of winery architecture and their interrelation with place and technology. Tobias Danielmeier’s practical experience includes projects for Reichardt Architekten, Essen, and Bolles+Wilson, Münster.
Steel structure design codes require checking member strength when plastic deformations are evaluated. The model of a perfectly plastic material is accepted. The strength criteria for simple cross-sections (I-sections, etc.) of steel members are given in design codes. Analytical strength criteria for steel cross-sections and numerical approaches based on stepwise procedures have been investigated in many articles. Another way of checking the carrying capacity of cross-sections is to use methods applied for determining the strain-deformed state of elastic, perfectly plastic systems. In this paper, non-iterative methods are suggested for checking the strength of cross-sections. The carrying capacity of a cross-section is verified according to the extremum principle of plastic failure under monotonic loading, and the strain-deformed state of the cross-section is determined according to the extremum energy principles of the elastic potential of residual stresses and the complementary work of residual displacements. The mathematical expressions of these principles for a discrete cross-section are formulated as problems of convex mathematical programming. Using the finite element method, the cross-section of a steel member is divided into free-form plane elements, with a constant distribution of stresses assumed over each finite element. The finite element relationships for the static formulation of the problem are formed in such a way that the kinematic formulation can be obtained formally via duality theory. Numerical examples of the determination of cross-section strength, the construction of interaction curves and the construction of moment-curvature curves for different axial force levels are presented.
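A much-simplified stand-in for such a cross-section strength check is a fiber model of a rectangular section with a perfectly plastic material, where the plastic neutral axis is placed to balance the axial force and the moment is summed over fibers. The discretization and section are illustrative only; the paper's convex-programming formulation handles arbitrary free-form sections:

```python
def plastic_moment(depth, width, fy, n_fibers=200, axial=0.0):
    """Plastic moment of a rectangular section about mid-height for a
    perfectly plastic material and a given axial force (fiber model).

    The neutral-axis offset z0 equilibrates the axial force
    (axial = fy * width * 2 * z0); fibers above it yield in tension
    (+fy), below in compression (-fy). Valid while |z0| < depth / 2.
    """
    dz = depth / n_fibers
    area = width * dz
    z0 = axial / (2.0 * fy * width)
    m = 0.0
    for i in range(n_fibers):
        z = -depth / 2 + (i + 0.5) * dz   # fiber lever arm from mid-height
        stress = fy if z > z0 else -fy
        m += stress * area * z
    return m

# Rectangular 100 x 200 mm section, fy = 235 MPa, zero axial force.
# Exact plastic moment: Mp = fy * b * h^2 / 4.
Mp = plastic_moment(depth=200.0, width=100.0, fy=235.0)
```

Sweeping the axial force and recording the resulting moment traces out the interaction curve mentioned in the abstract.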
Building defects in residential and commercial construction – a survey of the situation in Thuringia and approaches to solving the problem
(2000)
The author gives an overview of current building-defect research on residential, commercial and industrial buildings (susceptibility to defects, temporal distribution of building defects, distribution of causation and fault, defect intensity as a function of contract structures, costs of remedying defects and damage). Conclusions are drawn regarding the measures necessary to reduce defect intensity.
The present research analyses the prediction error obtained under different data availability scenarios to determine which measurements contribute to an improvement of the model prognosis and which do not. A fully coupled 2D hydromechanical model of a water-retaining dam is taken as an example. Here, the mean effective stress in the porous skeleton is reduced due to an increase in pore water pressure under drawdown conditions. Relevant model parameters are ranked by scaled sensitivities, Particle Swarm Optimization is applied to determine the optimal parameter values, and model validation is performed to determine the magnitude of the forecast error. We compare the predictions of the optimized models with results from a forward run of the reference model to obtain actual prediction errors.
The analyses presented here were applied to 31 data sets of 100 observations each, of varying data types. Calibrating with multiple types of information instead of only one yields better calibration results and an improvement in the model prognosis. However, when several types of information are used, the number of observations has to be increased in order to cover a representative part of the model domain; otherwise, a compromise between data availability and domain coverage proves best. Which type of calibration information contributes to the best prognoses could not be determined in advance, because the error in the model prognosis does not depend on the calibration error but on the parameter error, which unfortunately cannot be determined in reality since the true parameter values are unknown. Excellent calibration fits with parameter values near the limits of physically reasonable values produced the highest prognosis errors, while models that included excess pore pressure values for calibration provided the best prognoses, independent of the calibration fit.
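The Particle Swarm Optimization step used for calibration can be sketched generically (a textbook PSO with standard inertia and acceleration coefficients, not the study's specific implementation), here minimizing a synthetic two-parameter misfit:

```python
import random

def pso(objective, bounds, n_particles=20, iters=100,
        w=0.7, c1=1.5, c2=1.5, seed=1):
    """Minimal Particle Swarm Optimization for parameter calibration.

    Each particle tracks its personal best; the swarm shares a global
    best. Positions are clamped to the parameter bounds.
    """
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] = min(max(pos[i][d] + vel[i][d],
                                    bounds[d][0]), bounds[d][1])
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Calibrate two parameters against a synthetic misfit with optimum at (2, -1);
# in the study the objective would be the misfit to the observation data.
best, err = pso(lambda p: (p[0] - 2.0) ** 2 + (p[1] + 1.0) ** 2,
                bounds=[(-5.0, 5.0), (-5.0, 5.0)])
```

In the calibration context, the objective would wrap a forward run of the hydromechanical model and return the weighted misfit to the observations.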
In the past, several types of Fourier transforms in Clifford analysis have been studied. In this paper, first an overview of these different transforms is given. Next, a new equation in a Clifford algebra is proposed, the solutions of which will act as kernels of a new class of generalized Fourier transforms. Two solutions of this equation are studied in more detail, namely a vector-valued solution and a bivector-valued solution, as well as the associated integral transforms.
Fin-de-Siècle
(2000)
Scientific colloquium held from 14 to 16 October 1999 at the Bauhaus-Universität Weimar on the topic ‚global village – perspectives of architecture‘.
CHANGING ATTRIBUTIONS
(2011)
Steffen de Rudder, Dr.-Ing., is an architect and research associate at the Bauhaus-Universität Weimar. He teaches urban design and conducts research on topics in architectural and urban planning history. In 1996 he was a lecturer at the Kunsthistorisches Institut of the Humboldt-Universität zu Berlin, and from 1990 to 2001 an independent architect in Potsdam and Berlin (new construction, refurbishment, heritage conservation). He has published on the reception of the Bauhaus, on American modern architectural history, and on urban development in Berlin and Thuringia. In 2007 he published “Der Architekt Hugh Stubbins. Amerikanische Moderne der Fünfziger Jahre in Berlin”.
THE FOURIER-BESSEL TRANSFORM
(2010)
In this paper we devise a new multi-dimensional integral transform within the Clifford analysis setting, the so-called Fourier-Bessel transform. It appears that in the two-dimensional case, it coincides with the Clifford-Fourier and cylindrical Fourier transforms introduced earlier. We show that this new integral transform satisfies operational formulae which are similar to those of the classical tensorial Fourier transform. Moreover, the L2-basis elements, consisting of generalized Clifford-Hermite functions, appear to be eigenfunctions of the Fourier-Bessel transform.
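For orientation, the classical one-dimensional analogue of the closing statement is a known fact (not taken from the paper): the Hermite functions are eigenfunctions of the Fourier transform,

```latex
% Classical analogue: with the unitary convention, the Hermite functions
% \psi_n are eigenfunctions of the one-dimensional Fourier transform,
\mathcal{F}[\psi_n](\xi)
  = \frac{1}{\sqrt{2\pi}} \int_{\mathbb{R}} \psi_n(x)\, e^{-i x \xi}\, dx
  = (-i)^n\, \psi_n(\xi).
```

The abstract's statement transfers this eigenfunction property to the generalized Clifford-Hermite functions and the Fourier-Bessel transform.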
Designing a structure follows a pattern of creating a structural design concept, executing a finite element analysis and developing a design model. A project was undertaken to create computer support for executing these tasks within a collaborative environment. This study focuses on developing a software architecture that integrates the various structural design aspects into a seamless functional collaboratory that satisfies engineering practice requirements. The collaboratory is to support both homogeneous collaboration, i.e. between users operating on the same model, and heterogeneous collaboration, i.e. between users operating on different model types. Collaboration can take place synchronously or asynchronously, and the information exchange is done either at the granularity of objects or at the granularity of models. The objective is to determine from practicing engineers which configurations they regard as best and what features are essential for working in a collaborative environment. Based on the suggestions of these engineers, a specification of a collaboration configuration that satisfies engineering practice requirements will be developed.
Non-destructive techniques for damage detection have become a focus of engineering interest in the last few years. However, applying these techniques to large, complex structures such as civil engineering buildings still has limitations, since these structures are unique and the methodologies often need a large number of specimens for reliable results. For this reason, cost and time can greatly influence the final results.
Model-Assisted Probability of Detection (MAPOD) has taken its place among damage identification techniques, especially with advances in computer capacity and modeling tools. Nevertheless, the essential condition for a successful MAPOD is having a reliable model in advance; this condition opens the door to model assessment and model quality problems. In this work, an approach is proposed that uses Partial Models (PM) to compute the Probability of Damage Detection (POD). A simply supported beam that can be structurally modified and tested under laboratory conditions is taken as an example. The study includes both experimental and numerical investigations, the application of vibration-based damage detection approaches and a comparison of the results obtained from tests and simulations.
Finally, a methodology for assessing the reliability and robustness of the models is proposed.
Owing to the rising quantitative and qualitative demands on the building planning process, new approaches have been sought in recent years to adapt the planning process to these increased requirements. One such approach is the consistently computer-supported, three-dimensional and time-dependent modelling and management of the building planning process, from preliminary design through to the recycling of the structure at the end of its service life. This contribution shows that, in a geotechnical task, the temporal course in particular exerts a considerable influence on the models used and on the execution of safety verifications. For the development of geotechnical software systems, this creates the requirement, already in the analysis phase, to model the time-critical dependencies and to account for them accordingly in the design. The object-oriented method, in the form of the object model and the dynamic model according to Rumbaugh, has proven to be a suitable tool for this purpose, and the insights and results gained can be incorporated into the conception of the overall system at a very early stage. In the case of the Geotechnical Information System (GTIS), this led to a space- and time-dependent management of the soil and construction model and to a construction sequence control within which the individual construction states can be managed and linked with the corresponding characteristics within the three-dimensional soil and construction model.
This paper describes the application of interval calculus to the calculation of plate deflection, taking into account the inevitable and acceptable tolerance of the input data (input parameters). A simply supported reinforced concrete plate loaded by uniformly distributed loads was taken as an example. Several parameters that influence the plate deflection are given as closed intervals. Accordingly, the results are obtained as intervals, so it was possible to follow the direct influence of a change of one or more input parameters on the output values (in our example, the deflection) using one model and one computing procedure. The described procedure could be applied to any FEM calculation in order to keep calculation tolerances, ISO tolerances and production tolerances within close (admissible) limits. Wolfram Mathematica was used as the tool for the interval calculation.
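A minimal illustration of the interval procedure, with invented numbers rather than the paper's plate data, and a beam-strip deflection formula standing in for the FEM calculation:

```python
# Minimal interval-arithmetic sketch (hypothetical values, not the paper's
# actual plate data): propagate closed-interval input parameters through a
# deflection-type formula and obtain the result as an interval.

class Interval:
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi

    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __mul__(self, other):
        products = [self.lo * other.lo, self.lo * other.hi,
                    self.hi * other.lo, self.hi * other.hi]
        return Interval(min(products), max(products))

    def __truediv__(self, other):
        assert other.lo > 0 or other.hi < 0   # divisor must not contain 0
        return self * Interval(1.0 / other.hi, 1.0 / other.lo)

# Midspan deflection of a simply supported strip, w = 5 q L^4 / (384 E I),
# with uncertain load q and stiffness E (illustrative numbers only).
q = Interval(9.5e3, 10.5e3)        # distributed load [N/m]
L = Interval(5.0, 5.0)             # span [m], taken as exact
E = Interval(28e9, 34e9)           # concrete E-modulus [Pa]
I = Interval(6.6e-4, 6.6e-4)       # second moment of area [m^4]

L4 = L * L * L * L
w = (Interval(5, 5) * q * L4) / (Interval(384, 384) * E * I)
print(f"deflection interval: [{w.lo*1e3:.2f}, {w.hi*1e3:.2f}] mm")
```

The lower and upper bounds of `w` directly show how the spread of the inputs propagates to the output, which is the effect the abstract describes.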
Reasonably accurate cost estimation of the structural system is quite desirable at the early stages of the design process of a construction project. However, the numerous interactions among the many cost-variables make the prediction difficult. Artificial neural networks (ANN) and case-based reasoning (CBR) are reported to overcome this difficulty. This paper presents a comparison of CBR and ANN augmented by genetic algorithms (GA) conducted by using spreadsheet simulations. GA was used to determine the optimum weights for the ANN and CBR models. The cost data of twenty-nine actual cases of residential building projects were used as an example application. Two different sets of cases were randomly selected from the data set for training and testing purposes. Prediction rates of 84% in the GA/CBR study and 89% in the GA/ANN study were obtained. The advantages and disadvantages of the two approaches are discussed in the light of the experiments and the findings. It appears that GA/ANN is a more suitable model for this example of cost estimation where the prediction of numerical values is required and only a limited number of cases exist. The integration of GA into CBR and ANN in a spreadsheet format is likely to improve the prediction rates.
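How a GA can determine "optimum weights" for CBR-style retrieval, as described above, might look as follows; the case base, attribute count and GA settings are all invented for illustration (the paper's actual data and models are not reproduced here).

```python
import random

random.seed(1)

# Synthetic "case base" (hypothetical stand-in for the residential projects):
# each case = (attribute vector, unit cost). True cost depends mostly on x0.
cases = [([random.random() for _ in range(3)], 0.0) for _ in range(25)]
cases = [(x, 10.0 * x[0] + 1.0 * x[1] + random.gauss(0, 0.1)) for x, _ in cases]

def cbr_error(weights):
    """Leave-one-out error of weighted nearest-neighbour retrieval."""
    err = 0.0
    for i, (xi, yi) in enumerate(cases):
        # retrieve the most similar other case under these weights
        j = min((j for j in range(len(cases)) if j != i),
                key=lambda j: sum(w * (a - b) ** 2
                                  for w, a, b in zip(weights, xi, cases[j][0])))
        err += abs(cases[j][1] - yi)
    return err / len(cases)

# Plain generational GA: selection + crossover + mutation on weight vectors.
pop = [[random.random() for _ in range(3)] for _ in range(20)]
for gen in range(30):
    pop.sort(key=cbr_error)
    parents = pop[:10]                 # keep the fitter half
    children = []
    for _ in range(10):
        a, b = random.sample(parents, 2)
        child = [(wa + wb) / 2 + random.gauss(0, 0.05) for wa, wb in zip(a, b)]
        children.append([max(w, 0.0) for w in child])
    pop = parents + children

best = min(pop, key=cbr_error)
```

The same fitness idea carries over to the ANN variant: there the GA searches connection weights instead of retrieval weights.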
The technique of Acoustic travel-time TOMography (ATOM) allows the distribution of air temperatures throughout an entire room to be measured on the basis of the determined sound travel times of early reflections, currently up to second order. The number of early reflections detected in the room impulse response (RIR), which correspond to the desired sound paths inside the room, has a significant impact on the resolution of the reconstructed temperatures. This study investigates the possibility of utilizing an array of directional sound sources for ATOM measurements instead of the single omnidirectional loudspeaker used in previous studies [1–3]. The developed measurement setup consists of two directional sound sources placed near the edge of the floor in the climate chamber of the Bauhaus-University Weimar and one omnidirectional receiver at the center of the room near the ceiling. To compensate for the reduced number of sound paths when using directional sound sources, it is proposed to take high-energy early reflections up to third order into account. For this purpose, simulated travel times up to third-order image sources were implemented in the image source model (ISM) algorithm, by which these early reflections can be detected effectively for air temperature reconstruction. To minimize the uncertainties of travel-time estimation due to the positioning of the sound transducers inside the room, measurements were conducted to determine the exact emitting point of the utilized sound source, i.e. its acoustic center (AC). For these measurements, three types of excitation signals (MLS, linear and logarithmic chirp signals) with various frequency ranges were used, considering that the acoustic center of a sound source is a frequency-dependent parameter [4].
Furthermore, measurements were conducted to determine an optimum excitation signal for the given conditions of the ATOM measurement set-up, which correspondingly defines an optimum method for RIR estimation. Finally, the uncertainty of the measuring system utilizing an array of directional sound sources was analyzed.
Reconstruction of the indoor air temperature distribution using acoustic travel-time tomography
(2021)
Acoustic travel-time tomography (ATOM) has recently attracted increasing attention as a remote sensing methodology for determining the indoor air temperature distribution. It employs the relationship between the sound velocities along sound paths and their related travel times obtained from the measured room impulse response (RIR). Precise travel-time estimation is therefore of critical importance and can be performed by applying an analysis time-window method. In this study, multiple analysis time-windows with different lengths are proposed to overcome the challenge of accurately detecting the travel times in the RIR. The ATOM temperature distribution has been measured in the climate chamber lab of the Bauhaus-University Weimar. As a benchmark, the temperatures of NTC thermistors are compared to the reconstructed temperatures derived from the ATOM technique, illustrating that this technique can be a reliable substitute for traditional thermal sensors. The numerical results indicate that the selection of an appropriate analysis time-window significantly enhances the accuracy of the reconstructed temperature distribution.
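The physical link that ATOM exploits (a travel time determines the path-averaged sound speed, and sound speed determines temperature) can be sketched with the standard dry-air approximation c ≈ 331.6·sqrt(1 + T/273.15) m/s; this is a generic textbook relation, not the paper's calibration:

```python
import math

def sound_speed(T_celsius):
    """Approximate speed of sound in dry air [m/s] at temperature T [degC]."""
    return 331.6 * math.sqrt(1.0 + T_celsius / 273.15)

def path_mean_temperature(path_length_m, travel_time_s):
    """Invert the relation: recover the path-averaged temperature [degC]
    from a measured travel time along a known sound path."""
    c = path_length_m / travel_time_s
    return 273.15 * ((c / 331.6) ** 2 - 1.0)

# Round-trip check with made-up numbers (illustrative only):
t = 5.0 / sound_speed(20.0)        # travel time over a 5 m path at 20 degC
T_rec = path_mean_temperature(5.0, t)
print(f"recovered temperature: {T_rec:.2f} degC")
```

A tomographic set-up combines many such path averages into a spatial field; the accuracy of each travel time directly limits the accuracy of each recovered path temperature, which is why the time-window analysis matters.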
Acoustic travel-time tomography (ATOM) determines the temperature distribution in a propagation medium by measuring the travel times of acoustic signals between transmitters and receivers. To employ ATOM for indoor climate measurements, impulse responses have been measured in the climate chamber lab of the Bauhaus-University Weimar and compared with the theoretical results of its image source model (ISM). A challenging task is distinguishing the reflections of interest in the reflectogram when the sound rays have similar travel times. This paper presents a numerical method that addresses this problem by finding optimal positions of the transmitter and receiver, since these have a direct impact on the distribution of travel times; the optimal positions have the minimum number of simultaneous arrival times within a threshold level. Moreover, in the tomographic reconstruction, when some voxels remain free of sound rays, the air temperature within those voxels cannot be determined accurately. Based on the presented numerical method, the number of empty tomographic voxels is minimized to ensure the best sound-ray coverage of the room. Subsequently, a spatial temperature distribution is estimated by the simultaneous iterative reconstruction technique (SIRT). The experimental set-up in the climate chamber verifies the simulation results.
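The SIRT step mentioned above can be sketched generically; the geometry and slowness values below are made up and do not represent the chamber set-up. Each iteration corrects the per-voxel slowness by the back-projected, row- and column-normalized travel-time residual.

```python
import numpy as np

# Minimal SIRT sketch (illustrative only): recover per-voxel slowness
# from path-length-weighted travel times.
rng = np.random.default_rng(0)

n_voxels, n_rays = 9, 30
A = rng.uniform(0.0, 1.0, size=(n_rays, n_voxels))   # path length of ray i in voxel j
x_true = rng.uniform(2.8e-3, 3.0e-3, size=n_voxels)  # slowness 1/c per voxel [s/m]
b = A @ x_true                                        # simulated travel times [s]

# SIRT update: x <- x + C A^T R (b - A x), with R, C the inverse row/column sums.
R = 1.0 / A.sum(axis=1)
C = 1.0 / A.sum(axis=0)
x = np.full(n_voxels, 2.9e-3)                         # initial slowness guess
for _ in range(2000):
    x = x + C * (A.T @ (R * (b - A @ x)))

residual = np.linalg.norm(A @ x - b)
```

Voxels that no ray crosses would have a zero column sum in `A` (and an undefined entry in `C`), which is exactly the empty-voxel problem the paper's positioning method is designed to avoid.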
Within the Collaborative Research Centre (Sonderforschungsbereich) 524, "Werkstoffe und Konstruktionen für die Revitalisierung von Bauwerken", the primary concern of subproject D2, "Bauplanungsrelevantes digitales Gebäudeaufnahme- und Informationssystem", is the development of methods and techniques for capturing as-built data on site, or by evaluating existing documentation, and for integrating them directly into a building model. [15] The project establishes foundations regarding aspects of use by specialist planners and the scientific evaluation of methodical procedures in building surveys, incorporating software-engineering methods. This includes questions of structuring, the working-out of systematics for the essential information and data sets, the derivation of methods for non-destructive capture, and the representation of planning-relevant building information in digital systems. In building surveying, geodetic techniques such as tacheometry, photogrammetry and hand-held laser distance measurement have long been employed alongside traditional methods. In current surveying practice, tacheometry is the geodetic measuring technique most frequently used for the interior and exterior survey of buildings. [9] [3] Starting from the present situation in building surveying, it is shown to what extent the current state of the art allows the tacheometers used in geodesy to be employed directly in building surveys. A further focus is the concept of a computer-supported building survey system based on reflectorless tacheometric instruments. The concept covers not only the geometric survey but adequately supports the entire building survey process, from the first walk-through to the constructional structuring. Finally, prospective possibilities in building surveying are discussed.
Within the Collaborative Research Centre (Sonderforschungsbereich) 524, "Konstruktionen und Werkstoffe für die Revitalisierung von Bauwerken", the primary concern of subproject D2, "Bauplanungsrelevantes digitales Gebäudeaufnahme- und Informationssystem", is the development of methods and techniques for capturing as-built data on site, or by evaluating existing documentation, and for integrating them directly into a building model [11]. The project establishes foundations regarding aspects of use by specialist planners and the scientific evaluation of methodical procedures in building surveys, incorporating software-engineering methods. This includes questions of structuring, the working-out of systematics for the essential information and data sets, the derivation of methods for non-destructive capture, and the representation of planning-relevant building information in digital systems. Starting from the requirements of a planning-relevant architectural building survey, this article presents a concept and a prototype implementation for flexible geometric capture, the building survey measurement. Planning-relevant building survey: the building survey serves model building by capturing and representing an existing structure in the state encountered at the time of the survey. A quasi-complete survey is not feasible because of the large quantities of data to be determined and the resulting effort and cost [8]. Rather, the data to be captured, their representation and the choice of their accuracy must be selected and abstracted according to the requirements of the particular purpose of the survey. As varied as the requirements of a building survey are, so varied are the respective demands on, and selections of, the data to be captured.
In the vast majority of cases, the representation of the geometric form of the building as found is also desired or required, with different accuracy requirements for different areas of the building [9] [10]. In an individual case, a sketch-like representation of the surroundings of a planned shop area may suffice, while the areas to be converted must be captured at execution accuracy. The exact geometric form of built-in building elements is often only partially known. In the building survey, a close relationship must exist between geometry, component-oriented construction and the classification systems used. To satisfy these demands, suitable survey techniques, computer-internal representations and functionality for user-model interaction are sought.
The approach discussed here is part of research into an overall concept for digital instruments which support the entire planning process and help in enabling planning decisions to be based upon clear reasoning and plausible arguments. Such specialist systems must take into account currently available technology, such as networked working patterns, object-orientation, building and product models as well as the working method of the planner. The paper describes a plausibility instrument for the formulation of colour scheme proposals for building interiors and elevations. With the help of intuitively usable light simulations, colour, material and spatial concepts can be assessed realistically. The software prototype “Coloured Architecture” is conceived as a professional extension to conventional design tools for the modelling of buildings. As such it can be used by the architect in the earliest design phases of the planning process as well as for colour implementation on location.
The Railway Bridges of the Lehrter Bahnhof in Berlin - A Holistic FE Analysis Concept
(2000)
The analysis models commonly used often seem inadequate to the complexity of modern bridge structures. In many cases, structural analyses still follow the procedure of decomposing the bridge into individual components and treating them with different partial models. Against the background of ever-growing computing power, this no longer appears up to date. The same applies, for example, to the common practice of analysing plate-like bridge superstructures with beam models. This contribution presents a holistic analysis concept that permits the analysis of the entire structure on the basis of a single FE model. For all components, it covers not only the computation of state variables but also the design of reinforced and prestressed concrete members, up to verifications such as the limitation of crack widths. The application of this concept is demonstrated on the railway viaduct of the new Lehrter Bahnhof in Berlin. The FE model comprises the subsoil, the foundations, the steel and cast-steel substructure and the reinforced or prestressed concrete superstructure. Particular features include the modelling of the T-beam-like superstructure with eccentric, prestressable shell elements and the separate handling of structure-related and load-related input files. This makes it possible to capture sequentially different subgrade moduli for simulating static and dynamic loading, to account for the tensioning of, and the interaction between, prestressed steel bracings resisting horizontal loads, and to consider the different structural systems during the construction of the prestressed concrete superstructure.
The concept of reliability plays a central role in the assessment of transport networks. From the point of view of public transport passengers, one of the most important criteria for judging the quality of a line network is whether the destination can be reached within a given time with high certainty. This talk formalizes this notion of reliability mathematically. First, the usual notion of network reliability in the sense of pairwise connection probabilities is considered; this notion is then extended by taking a maximum admissible travel time into account. In earlier work, the ring-radius structure has proven to be a well-suited model for the theoretical description of transport networks. These considerations are now extended by including real transport network structures, with the tram network of Krakow serving as a concrete example. In particular, the effect of a planned extension of the network on its reliability is investigated. This paper is connected with the CIVITAS-CARAVEL project "Clean and better transport in cities". The project has received research funding from the Community's Sixth Framework Programme. The paper reflects only the author's views and the Community is not liable for any use that may be made of the information contained therein.
This talk investigates to what extent a relationship exists between the reliability of a transport network and certain structural parameters of the network. For this purpose, the transport network is regarded as a weighted graph whose edges are weighted with their availabilities ('reliabilities'). The reliability of the transport network has a major influence on the assessment of the road network. By means of a regression analysis, the relationship with further structural indicators developed specifically for transport networks is examined, in particular the dispersion of the network and its underdevelopment index. Among the many theoretically conceivable network structures, the ring-radius structure reflects real road networks best; such structures occur in many cities. The network reliability is computed by an algorithm based on the principle of complete enumeration of all possible combinations of availability and non-availability of the individual edges, which determines the desired quantity exactly. Although the algorithm is specifically designed for short computing times, the procedure remains numerically quite expensive. Using ten different ring-radius structures, a relationship between the structural indicators and the network reliability is demonstrated. Structural assessments, which can be determined with comparatively little input data and computing time, thus allow statements to be made about the reliability of the transport network.
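The complete-enumeration scheme described above can be sketched on a toy ring-with-chord graph; the topology and availabilities below are invented and are not the Krakow network. Every one of the 2^m edge states is weighted by its probability and checked for connectivity.

```python
from itertools import product

edges = {            # tiny ring plus one "radius" chord, availabilities made up
    ("A", "B"): 0.95,
    ("B", "C"): 0.95,
    ("C", "D"): 0.95,
    ("D", "A"): 0.95,
    ("A", "C"): 0.90,
}

def connected(up_edges, s, t):
    """Graph search: does s reach t using only the working edges?"""
    frontier, seen = [s], {s}
    while frontier:
        node = frontier.pop()
        for (u, v) in up_edges:
            for nxt in ((v,) if u == node else (u,) if v == node else ()):
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append(nxt)
    return t in seen

def reliability(edges, s, t):
    """Exact s-t connection probability by enumerating all 2^m edge states."""
    edge_list = list(edges)
    total = 0.0
    for state in product([True, False], repeat=len(edge_list)):
        prob = 1.0
        for e, up in zip(edge_list, state):
            prob *= edges[e] if up else 1.0 - edges[e]
        if connected([e for e, up in zip(edge_list, state) if up], s, t):
            total += prob
    return total

r = reliability(edges, "B", "D")
```

The exponential cost of the enumeration (2^m states) is exactly why the talk argues for cheap structural indicators as proxies for reliability.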
This talk investigates to what extent a relationship exists between the structural parameters of a transport network, namely its dispersion and its underdevelopment index, and the mean travel times with respect to different travel demand matrices. Using ten different ring-radius structures for motorized individual transport and three different ring-radius structures for bus traffic, such a relationship is demonstrated for five different origin-destination demand matrices. The results make it possible to draw conclusions about functional assessments of the transport network on the basis of structural analyses. Since structural assessments can be determined with considerably less input data and computing time than functional assessments, this yields significant savings in the planning phase of transport networks.
An integrated overall planning, a concentrated planning and approval system, consistent site management, and the involvement and pooling of the know-how and capabilities of solvent partners form the basis for optimizing the communication and decision paths of system-led process control. This is demonstrated using the construction of the inner-city shopping centre Anger 1 as an example.
To achieve the appropriate quality of public transport, the relationship between service quality and cost must be analysed. Under conditions of growing competition from private cars and other transport operators, those measures should be adopted that ensure the greatest effectiveness. There are many ways to improve this quality, such as increasing the frequency of vehicles, raising speed, punctuality and regularity, introducing low-floor vehicles, and much more. In this paper I analyse the aspects connected with frequency, i.e. increasing the frequency and maintaining constant headways of the buses on a line. The paper comprises models and examples. In connection with studies of the willingness to pay for quality improvements, decisions can then be made that define different standards of travel.
We consider the situation in public transport in which two bus or tram lines share common stops. The aim of our investigations is to find a timetable for both lines that offers the passengers as much convenience as possible. The demand structure, i.e. the number of persons using the two lines, imposes certain restrictions on the headways of the two lines; the remaining degrees of freedom are to be exploited in the sense of this objective. The talk addresses the following questions: by which criteria can 'convenience' or 'quality of synchronization' be measured? How can the individual 'synchronization measures' be computed? How can the remaining degrees of freedom be used to achieve the best possible synchronization? The results are then applied to several examples, and solution proposals are derived with the methods provided.
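One conceivable synchronization measure, chosen here for illustration only since the abstract does not spell out its actual measures, is the longest wait between consecutive departures at a shared stop; the free offset of the second line is then chosen to minimize it.

```python
# Hypothetical synchronization sketch: two lines with fixed headways share a
# stop, and the offset of line 2 is chosen to minimize the longest gap
# between any two consecutive departures (all numbers invented).

def departures(headway, offset, horizon):
    """Departure times of one line within the planning horizon [min]."""
    t, times = offset, []
    while t < horizon:
        times.append(t)
        t += headway
    return times

def longest_gap(h1, h2, offset, horizon=120):
    """Longest wait between consecutive departures of the merged timetable."""
    times = sorted(departures(h1, 0, horizon) + departures(h2, offset, horizon))
    return max(b - a for a, b in zip(times, times[1:]))

h1, h2 = 10, 10                       # both lines run every 10 minutes
best_offset = min(range(h2), key=lambda o: longest_gap(h1, h2, o))
print(best_offset, longest_gap(h1, h2, best_offset))
```

For equal headways, the offset that interleaves the two lines halves the worst-case wait, which matches the intuition that synchronization, not just frequency, drives convenience.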
Restelo Neighbourhood: Expanding the Capital of the Empire with the First Portuguese Urban Planner
(2015)
For decades, historical research in Germany on dictatorial urban design in the first half of the 20th century focused on the National Socialist period, and studies on the urban design practices of other dictatorships remained an exception. This has changed: by now, the urban production practices of the Mussolini, Stalin, Salazar, Hitler and Franco dictatorships have become the subject of comprehensive research projects. Recently, a research group studying dictatorial urban design in 20th-century Europe has emerged at the Bauhaus-Institut für Geschichte und Theorie der Architektur und der Planung, and it can already point to various research results.
Part of the research group's self-conception is the assumption that the urban design practices of the named dictatorships can only be properly understood from a European perspective. The dictatorships influenced one another substantially; furthermore, the specificities of each dictatorship's practices can only be discerned by comparing them with those of the other dictatorships. This approach requires strict adherence to the research methods of planning history and urban design theory, while at the same time these methods must be opened up to include those of general historical studies.
With this symposium, the research group aims to further qualify this European perspective. The aim is to pursue an inventory of the various national historiographies on the topic of “urban design and dictatorship”. This inventory should offer an overview on the general national level of historical research on urban design as well as on the level of particular urban design projects, persons or topics.
The symposium took place in Weimar, November 21-22, 2013. It was organized by Harald Bodenschatz, Piero Sassi and Max Welch Guerra and funded by the DAAD (German Academic Exchange Service).
EXTRA-STATECRAFT
(2011)
Keller Easterling is an architect, urbanist, and writer. Her latest book, Enduring Innocence: Global Architecture and Its Political Masquerades (MIT, 2005), researches familiar spatial products that have landed in difficult or hyperbolic political situations around the world. Her previous book, Organization Space: Landscapes, Highways and Houses in America, applies network theory to a discussion of American infrastructure and development formats. A forthcoming book, Extrastatecraft, researches global infrastructure as a medium of polity. Ms. Easterling is also the author of Call It Home, a laser disc history of suburbia, and American Town Plans. She has recently completed two research installations on the Web: “Wildcards: A Game of Orgman” and “Highline: Plotting NYC.” Her work has been widely published in journals such as Grey Room, Volume, Cabinet, Assemblage, Log, Praxis, Harvard Design Magazine, Perspecta, Metalocus, and ANY. Her work is also included as chapters in numerous publications. She has lectured widely in the United States as well as internationally. Ms. Easterling’s work has been exhibited at the Queens Museum, the Architectural League, the Municipal Arts Society, and the Wexner Center. Easterling is a professor at Yale’s School of Architecture.
In civil engineering it is very difficult and often expensive to excite constructions such as bridges and buildings with an impulse hammer or shaker. This problem can be avoided with the output-only method as a special feature of stochastic system identification. The permanently present ambient noise (e.g. wind, traffic, waves) is sufficient to excite the structures in their operational conditions. The output-only method estimates the observable part of a state-space model which contains the dynamic characteristics of the measured mechanical system. Because the ambient excitation is assumed to be white, there is no need to measure the input. Another advantage of the output-only method is the possibility of obtaining highly detailed models by a special method called the polyreference setup: to emulate the availability of a much larger set of sensors, data from varying sensor locations are collected. Several successive data sets are recorded with sensors at different locations (moving sensors) and at fixed locations (reference sensors); the covariance functions of the reference sensors are the basis for normalizing the moving sensors. The result of the subsequent subspace-based system identification is a highly detailed black-box model that contains the weighting function, including the well-known dynamic parameters eigenfrequencies and mode shapes of the mechanical system. The emphasis of this lecture is the presentation of an extensive damage detection experiment. A 53-year-old prestressed concrete tied-arch bridge in Hünxe (Germany) was deconstructed in 2005; beforehand, numerous vibration measurements were carried out. The first system modification experiment was an additional support near the bridge bearing of one main girder; in a further experiment, one hanger of one tied arch was cut through as an induced damage. Some first outcomes of the described experiments will be presented.
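The output-only idea can be illustrated by a minimal covariance-driven subspace identification: a single mode is simulated as a noise-driven AR(2) process (standing in for ambient excitation), and the eigenfrequency is recovered from output covariances alone. All values are made up, and the polyreference/multi-setup machinery of the lecture is omitted.

```python
import numpy as np

rng = np.random.default_rng(42)

# One lightly damped mode as an AR(2) process driven by white noise,
# i.e. the "ambient excitation is white" assumption of the output-only method.
r, omega = 0.98, 0.6                     # pole radius (damping) and angle (frequency)
a1, a2 = 2 * r * np.cos(omega), -r**2
N = 200_000
y = np.zeros(N)
e = rng.standard_normal(N)
for t in range(2, N):
    y[t] = a1 * y[t - 1] + a2 * y[t - 2] + e[t]

# Output covariances Lambda_k = E[y_t y_{t-k}], stacked into a Hankel matrix.
def cov(k):
    return np.dot(y[k:], y[:N - k]) / (N - k)

i = 5
H = np.array([[cov(p + q + 1) for q in range(i)] for p in range(i)])

# Realization: SVD -> observability matrix; system matrix A from its shift
# structure; the pole angles are the identified eigenfrequencies.
U, s, Vt = np.linalg.svd(H)
n = 2                                    # model order: one mode = two states
Obs = U[:, :n] * np.sqrt(s[:n])
A = np.linalg.pinv(Obs[:-1]) @ Obs[1:]
poles = np.linalg.eigvals(A)
omega_est = abs(np.angle(poles[0]))
```

No input signal appears anywhere in the identification, only output covariances, which is the essence of the output-only method described above.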
Dynamic testing for damage assessment as a non-destructive method has attracted growing interest for the systematic inspection and maintenance of civil engineering structures. In this context, the paper presents the Stochastic Finite Element (SFE) modeling of the static and dynamic results of our own four-point bending experiments with R/C beams. The beams are damaged by an increasing load; between the load levels, the dynamic properties are determined. Calculated stiffness loss factors for the displacements and the natural frequencies show different histories. An FE model for the beams is developed with a discrete crack formulation. Correlated random fields are used for the structural parameters stiffness and tensile strength. The idea is to simulate different crack evolutions: the beams have the same design parameters, but because of the stochastic material properties even their undamaged states differ. As the structure is loaded, a stochastic first crack occurs at the weakest place of the structure, and the further crack evolution is also stochastic. This is a great advantage compared with deterministic formulations. To reduce the computational effort of the Monte Carlo simulation of this nonlinear problem, the Latin hypercube sampling technique is applied. From the results, functions of the mean value and standard deviation of displacements and frequencies are calculated. Compared with the experimental results, some qualitative phenomena are well described by the model; differences occur especially in the dynamic behavior at the higher load levels. The aim of the investigations is to assess the possibilities of dynamic testing under consideration of effects from stochastic material properties.
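The Latin hypercube sampling used to thin out the Monte Carlo runs can be sketched generically (not tied to the beam model): each parameter axis is cut into n equiprobable strata, and every stratum is sampled exactly once.

```python
import random

# Minimal Latin hypercube sampling sketch (generic technique, not the
# paper's R/C-beam model): n samples in d dimensions on the unit cube.

def latin_hypercube(n, d, rng=random):
    samples = []
    for _ in range(d):
        # one random point inside each of the n strata, in shuffled order
        column = [(k + rng.random()) / n for k in range(n)]
        rng.shuffle(column)
        samples.append(column)
    return list(zip(*samples))          # n points in [0, 1)^d

pts = latin_hypercube(10, 2)
```

Mapping each unit-interval coordinate through the inverse CDF of the respective random field parameter then yields stratified material samples with far fewer runs than plain Monte Carlo.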
NONZONAL WAVELETS ON S^N
(2010)
In the present article we construct wavelets on an arbitrary-dimensional sphere S^n using the approach of approximate identities. There are two equivalent approaches to wavelets. The group-theoretical approach formulates a square-integrability condition for a group acting via a unitary, irreducible representation on the sphere; the connection to this approach will be sketched. The concept of approximate identities uses the same constructions in the background: we select an appropriate section of dilations and translations in the group acting on the sphere in two steps. First we formulate dilations in terms of approximate identities, and then we realize translations on the sphere as rotations. This leads to the construction of an orthogonal polynomial system in L²(SO(n+1)). This approach is convenient for constructing concrete wavelets, since the appropriate kernels can be constructed from the heat kernel, leading to the approximate identity of Gauss-Weierstraß. We work out conditions on functions forming a family of wavelets; subsequently we describe how zonal wavelets can be constructed from an approximate identity, and the relation to the admissibility of nonzonal wavelets. Eventually we give an example of a nonzonal wavelet on S^n, which we obtain from the approximate identity of Gauss-Weierstraß.
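For orientation, the Gauss-Weierstraß (heat) kernel generating the approximate identity can be written explicitly on S² (normalization conventions vary between authors; this is one common form, not taken from the article):

```latex
% Heat kernel on S^2 in terms of Legendre polynomials P_k:
W_\rho(\xi \cdot \eta) \;=\; \sum_{k=0}^{\infty} \frac{2k+1}{4\pi}\,
  e^{-k(k+1)\rho}\, P_k(\xi \cdot \eta),
  \qquad \xi,\eta \in S^2,\ \rho > 0,
% with W_\rho * f \to f as \rho \to 0^{+}, the defining property of an
% approximate identity.
```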
Due to the amount of flow simulation and measurement data, automatic detection, classification and visualization of features is necessary for an inspection. Therefore, many automated feature detection methods have been developed in recent years. However, only one feature class is visualized afterwards in most cases, and many algorithms have problems in the presence of noise or superposition effects. In contrast, image processing and computer vision have robust methods for feature extraction and computation of derivatives of scalar fields. Furthermore, interpolation and other filters can be analyzed in detail. An application of these methods to vector fields would provide a solid theoretical basis for feature extraction. The authors suggest Clifford algebra as a mathematical framework for this task. Clifford algebra provides a unified notation for scalars and vectors as well as a multiplication of all basis elements. The Clifford product of two vectors provides the complete geometric information of the relative positions of these vectors. Integration of this product results in Clifford correlation and convolution, which can be used for template matching of vector fields. For frequency analysis of vector fields and the behavior of vector-valued filters, a Clifford Fourier transform has been derived for 2D and 3D. Convolution and other theorems have been proved, and fast algorithms for the computation of the Clifford Fourier transform exist. Therefore, the computation of Clifford convolution can be accelerated by computing it in the Clifford Fourier domain. Clifford convolution and Fourier transform can be used for a thorough analysis and subsequent visualization of flow fields.
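The geometric information carried by the Clifford product of two vectors can be illustrated in 2D, where the product of vectors splits into a scalar (dot) part and a bivector (wedge) part; this is the local quantity that Clifford correlation integrates for template matching. The function name is illustrative.

```python
import math

def geometric_product_2d(a, b):
    """Return (scalar, bivector) parts of the Clifford product of 2D vectors."""
    dot = a[0] * b[0] + a[1] * b[1]        # a . b : alignment of the vectors
    wedge = a[0] * b[1] - a[1] * b[0]      # a ^ b : signed area they span
    return dot, wedge

# Together the two parts encode the full relative position: magnitudes and
# the oriented angle between the vectors.
a, b = (1.0, 0.0), (0.0, 2.0)
dot, wedge = geometric_product_2d(a, b)
angle = math.atan2(wedge, dot)             # oriented angle from a to b
print(dot, wedge, round(math.degrees(angle)))   # -> 0.0 2.0 90
```

A scalar correlation would only report that the vectors are orthogonal (dot = 0); the bivector part additionally distinguishes a +90° from a -90° rotation, which is what makes the Clifford product useful for matching vector-field patterns.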
In nonlinear simulations the loading is, in general, applied in an incremental way. Path-following algorithms are used to trace the equilibrium path during the failure process. Standard displacement controlled solution strategies fail if snap-back phenomena occur. In this contribution, a path-following algorithm based on the dissipation of the inelastic energy is presented which allows for the simulation of snap-backs. Since the constraint is defined in terms of the internal energy, the algorithm is not restricted to continuum damage models. Furthermore, no a priori knowledge about the final damage distribution is required. The performance of the proposed algorithm is illustrated using nonlinear mesoscale simulations.
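For reference, one common form of such a dissipation-based path constraint from the literature on geometrically linear, elasticity-based damage reads as follows (the notation is assumed, not taken from this contribution):

```latex
% u = nodal displacements, \hat{f} = unit external load vector,
% \lambda = load factor, subscript 0 = last converged state,
% \Delta\tau = prescribed increment of dissipated energy:
g(\mathbf{u}, \lambda) \;=\;
  \tfrac{1}{2}\left(\lambda_0\, \Delta\mathbf{u}
  \;-\; \Delta\lambda\, \mathbf{u}_0\right)^{\!\top} \hat{\mathbf{f}}
  \;-\; \Delta\tau \;=\; 0
```

Because the constraint prescribes how much inelastic energy is released per increment rather than a displacement or load step, it remains well-posed when the equilibrium path turns back on itself in both load and displacement (snap-back).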
In this paper an adaptive heterogeneous multiscale model, which couples two substructures with different length scales into one numerical model, is introduced for the simulation of damage in concrete. In the presented approach the initiation, propagation and coalescence of microcracks are simulated using a mesoscale model, which explicitly represents the heterogeneous material structure of concrete. The mesoscale model is restricted to the damaged parts of the structure, whereas the undamaged regions are simulated on the macroscale. As a result, an adaptive enlargement of the mesoscale model during the simulation is necessary. In the first part of the paper the generation of the heterogeneous mesoscopic structure of concrete, the finite element discretization of the mesoscale model, the applied isotropic damage model and the cohesive zone model are briefly introduced. Furthermore, the mesoscale simulation of a uniaxial tension test of a concrete prism is presented and the obtained numerical results are compared to experimental results. The second part is focused on the adaptive heterogeneous multiscale approach. Indicators for the model adaptation and for the coupling between the different numerical models will be introduced. The transfer from the macroscale to the mesoscale and the adaptive enlargement of the mesoscale substructure will be presented in detail. A nonlinear simulation of a realistic structure using an adaptive heterogeneous multiscale model is presented at the end of the paper to show the applicability of the proposed approach to large-scale structures.
The liquidity planning of construction companies is regarded as an essential control, monitoring and information instrument for internal and external addressees and serves a decision-support function. Since the individual construction projects account for a substantial share of the company's total costs, they also exert a considerable influence on the liquidity and solvency of the construction company. Accordingly, it is common practice in the construction industry to prepare the liquidity plan on a project basis first and then to aggregate it at the company level. The aim of this contribution is to present the interrelations between working calculation (Arbeitskalkulation), profit calculation (Ergebnisrechnung) and financial calculation (Finanzrechnung) in the form of a deterministic planning model at the project level. The intent is to highlight the understanding and significance of the links between the technically oriented construction process and its representation in accounting and finance. The operations of construction execution, i.e. the processing of the bill-of-quantities items and their temporal representation in a construction schedule, are to be transformed period by period into quantities of operational accounting (output, costs) and subsequently broken down in the financial calculation (receipts, payments) by creditors and debtors.
We consider efficient numerical methods for the solution of partial differential equations with stochastic coefficients or right-hand side. The discretization is performed by the stochastic finite element method (SFEM). Separation of spatial and stochastic variables in the random input data is achieved via a Karhunen-Loève expansion or Wiener's polynomial chaos expansion. We discuss solution strategies for the Galerkin system that take advantage of the special structure of the system matrix. For stochastic coefficients linear in a set of independent random variables, we employ Krylov subspace recycling techniques after having decoupled the large SFEM stiffness matrix.
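The Karhunen-Loève separation of spatial and stochastic variables can be sketched for a 1D random field with exponential covariance, discretized by an eigendecomposition of the covariance matrix. Correlation length and variance are illustrative assumptions.

```python
import numpy as np

# Covariance C(x, y) = sigma^2 * exp(-|x - y| / l) on a uniform grid
n, sigma, corr_len = 200, 1.0, 0.3
x = np.linspace(0.0, 1.0, n)
C = sigma**2 * np.exp(-np.abs(x[:, None] - x[None, :]) / corr_len)

# Symmetric eigendecomposition; sort eigenpairs by decreasing eigenvalue
vals, vecs = np.linalg.eigh(C)
vals, vecs = vals[::-1], vecs[:, ::-1]

# Truncated KL expansion: field(x) = sum_k sqrt(lambda_k) * phi_k(x) * xi_k,
# with independent standard normal variables xi_k
m = 10
rng = np.random.default_rng(1)
xi = rng.standard_normal(m)
field = vecs[:, :m] @ (np.sqrt(vals[:m]) * xi)   # one realization of the field

# The eigenvalues decay rapidly, which is what justifies the truncation
assert np.all(np.diff(vals[:m]) <= 1e-9)
```

In the SFEM setting, the few dominant KL modes become the independent random variables in which the stochastic coefficient is linear, producing the structured Galerkin system the solution strategies exploit.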
Available construction time-cost trade-off analysis models can be used to generate trade-offs between these two important objectives; however, their application is limited in large-scale construction projects due to their impractical computational requirements. This paper presents the development of a scalable multi-objective genetic algorithm that provides the capability of simultaneously optimizing construction time and cost of large-scale construction projects. The genetic algorithm was implemented in a distributed computing environment that utilizes a recent standard for parallel and distributed programming called the Message Passing Interface (MPI). The performance of the model is evaluated using a set of measures of performance, and the results demonstrate the capability of the present model to significantly reduce the computational time required to optimize large-scale construction projects.
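At the heart of any multi-objective time-cost GA is the Pareto filtering that keeps schedules not dominated in (duration, cost); a minimal sketch follows, with candidate values that are illustrative, not from the paper.

```python
def non_dominated(candidates):
    """Return the (time, cost) pairs not dominated by any other pair."""
    front = []
    for c in candidates:
        # c is dominated if some other schedule is at least as good in both
        # objectives (and differs from c)
        if not any(o != c and o[0] <= c[0] and o[1] <= c[1] for o in candidates):
            front.append(c)
    return sorted(front)

# Hypothetical schedules as (duration in weeks, cost in k$)
schedules = [(12, 900), (10, 1100), (15, 700), (12, 1000), (9, 1500)]
print(non_dominated(schedules))
# -> [(9, 1500), (10, 1100), (12, 900), (15, 700)]
```

Note that (12, 1000) is discarded because (12, 900) achieves the same duration at lower cost; the surviving front is what the GA presents to the planner as the time-cost trade-off.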
In many industries, companies lose visibility of the human and technical resources of their field service. On the one hand, field-service staff often operate with a great deal of autonomy; on the other hand, they do not take part in the daily communication of the central office and suffer from a lack of involvement in its decisions. The result is inefficiency, followed by reproaches in both directions. With two-way radio systems and later mobile phones, this gap began to close, but those solutions are far from being productive.
We study the Weinstein equation on the upper half-space R^3_+. The Weinstein equation is connected to axially symmetric potentials. We compute solutions of the Weinstein equation depending on the hyperbolic distance and on x_2. These results imply explicit mean value properties. We also compute the fundamental solution. The main tools are the hyperbolic metric and its invariance properties.
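For orientation, the Weinstein equation on the upper half-space can be written as follows; the sign and placement of the parameter k vary between authors, so this is one common normalization rather than necessarily the one used here:

```latex
% Upper half-space R^3_+ = \{(x_0, x_1, x_2) : x_2 > 0\}, parameter k \in \mathbb{R}:
\Delta u \;+\; \frac{k}{x_2}\,\frac{\partial u}{\partial x_2} \;=\; 0
% For k = -1 the solutions are the hyperbolic harmonic functions with respect
% to the hyperbolic metric ds^2 = (dx_0^2 + dx_1^2 + dx_2^2)/x_2^2, which is
% why the hyperbolic distance appears naturally in the solution formulas.
```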
HYPERMONOGENIC POLYNOMIALS
(2006)
It is well known that the power function is not monogenic. There are basically two ways to include the power function in the set of solutions: hypermonogenic functions or holomorphic Cliffordian functions. L. Pernas determined the dimension of the space of homogeneous holomorphic Cliffordian polynomials of degree m, but his approach did not include a basis. It is known that the hypermonogenic functions are included in the space of holomorphic Cliffordian functions. As our main result we show that we can construct a basis for the right module of homogeneous holomorphic Cliffordian polynomials of degree m using hypermonogenic polynomials and their derivatives. To that end we first recall the function spaces of monogenic, hypermonogenic and holomorphic Cliffordian functions and give the results needed in the proof of our main theorem. We list some basic polynomials and their properties for the various function spaces. In particular, we consider recursive formulas, rules of differentiation and properties of linear independence for the polynomials.
Traffic simulation is a valuable tool for the design and evaluation of road networks. Over the years, the level of detail to which urban and freeway traffic can be simulated has increased steadily, shifting from a merely qualitative macroscopic perspective to a very detailed microscopic view, where the behavior of individual vehicles is emulated realistically. With the improvement of behavioral models, however, the computational complexity has also steadily increased, as more and more aspects of real-life traffic have to be considered by the simulation environment. Despite the constant increase in computing power of modern personal computers, microscopic simulation stays computationally expensive, limiting the maximum network size that can be simulated on a single-processor computer in reasonable time. Parallelization can distribute the computing load from a single computer system to a cluster of several computing nodes. To this end, the existing simulation framework had to be adapted to allow for a distributed approach. As the simulation is ultimately targeted to be executed in real time, incorporating real traffic data, only a spatial partition of the simulation was considered, meaning the road network has to be partitioned into subnets of comparable complexity to ensure homogeneous load balancing. The partition process must also ensure that the division between subnets occurs only in regions where no strong interaction between the separated road segments occurs (i.e. not in the direct vicinity of junctions). In this paper, we describe a new microscopic reasoning voting strategy, and discuss how far the increasing computational costs of these more complex behaviors lend themselves to a parallelized approach. We show the parallel architecture employed, the communication between computing units using MPIJava, and the benefits and pitfalls of adapting a single-computer application for use on a multi-node computing cluster.
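The load-balancing part of such a spatial partition can be sketched as follows: assuming the network has already been merged into regions whose boundaries avoid junction neighborhoods, the regions are assigned greedily to computing nodes so the per-node load stays comparable. Region names and weights are illustrative, and this greedy heuristic stands in for whatever partitioner the framework actually uses.

```python
def balance(regions, n_nodes):
    """Greedy longest-processing-time partition of (name, weight) regions."""
    loads = [0.0] * n_nodes
    parts = [[] for _ in range(n_nodes)]
    # heaviest region first, always onto the currently least-loaded node
    for name, weight in sorted(regions, key=lambda r: -r[1]):
        i = min(range(n_nodes), key=loads.__getitem__)
        parts[i].append(name)
        loads[i] += weight
    return parts, loads

# Hypothetical regions with computational weights (e.g. expected vehicle counts)
regions = [("R1", 8.0), ("R2", 7.0), ("R3", 6.0), ("R4", 5.0), ("R5", 4.0)]
parts, loads = balance(regions, 2)
print(loads)   # -> [17.0, 13.0]
```

The greedy heuristic does not always find the optimal split (here 15/15 would be possible), but it is cheap and typically good enough to keep the per-step synchronization between nodes from being dominated by the slowest subnet.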
The scientific transfer of key technology features to developing countries, together with adequate competence, localisation and adaptation, is the primary purpose of the proposed investigation. It is evident that introducing high-level CAD design and detailing will improve the planning process in developing countries. Successful utilization of applied information technology for the planning process, however, depends on the user interface of the individual software. Therefore, to open up the great opportunity embedded in CAD software for clients globally, the language and character-set barrier of traditional user interfaces must be overcome. A proposal for a research program is given here to address this issue in favour of global civil engineering.
The use of process models in the analysis, optimization and simulation of processes has proven to be extremely beneficial in the instances where they could be applied appropriately. However, the Architecture/Engineering/Construction (AEC) industries present unique challenges that complicate the modeling of their processes. A simple engineering process model, based on the specification of Tasks, Datasets, Persons and Tools, and certain relations between them, has been developed, and its advantages over conventional techniques have been illustrated. Graph theory is used as the mathematical foundation, mapping Tasks, Datasets, Persons and Tools to vertices and the relations between them to edges, forming a directed graph. The acceptance of process modeling in the AEC industries depends not only on the results it can provide, but also on the ease with which these results can be attained. Specifying a complex AEC process model is a dynamic exercise that is characterized by many modifications over the process model's lifespan. This article looks at reducing specification complexity, reducing the probability of erroneous input and allowing consistent model modification. Furthermore, the problem of resource leveling is discussed. Engineering projects are often executed with limited resources, and determining the impact of such restrictions on the sequence of Tasks is important. Resource leveling concerns itself with these restrictions caused by limited resources. This article looks at using Task-shifting strategies to find a near-optimal sequence of Tasks that guarantees consistent Dataset evolution while resolving resource restrictions.
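The interplay of precedence relations and resource leveling can be sketched with a serial list-scheduling heuristic: tasks start as early as the directed graph allows, but are shifted later whenever the resource limit would be exceeded. Durations, demands and the limit are illustrative, and this simple heuristic stands in for the article's shifting strategies.

```python
def schedule(tasks, preds, limit):
    """tasks: {name: (duration, demand)}; preds: {name: set of predecessors}."""
    start, usage = {}, {}                      # usage[t] = total demand in slot t
    done = set()
    while len(done) < len(tasks):
        # pick a task whose predecessors are all scheduled
        name = next(n for n in tasks
                    if n not in done and preds.get(n, set()) <= done)
        dur, dem = tasks[name]
        # earliest start permitted by the precedence edges
        t = max((start[p] + tasks[p][0] for p in preds.get(name, set())),
                default=0)
        # shift right until the demand fits under the limit for the whole duration
        while any(usage.get(t + k, 0) + dem > limit for k in range(dur)):
            t += 1
        for k in range(dur):
            usage[t + k] = usage.get(t + k, 0) + dem
        start[name] = t
        done.add(name)
    return start

# Hypothetical tasks: A and B each need 2 of 3 available resource units
tasks = {"A": (2, 2), "B": (2, 2), "C": (1, 1)}
preds = {"C": {"A"}}
print(schedule(tasks, preds, 3))   # -> {'A': 0, 'B': 2, 'C': 2}
```

B has no predecessors but is shifted behind A purely by the resource limit, while C is held back by its precedence edge; the returned start times remain consistent with the Dataset-evolution order encoded in the graph.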
We present a software prototype for fluid flow problems in civil engineering, which combines essential features of Computational Steering approaches with efficient methods for model transfer and high performance computing. The main components of the system are described:
- the modeler, with a focus on the data management of the product model
- the pre-processing and post-processing toolkit
- the simulation kernel, based on the Lattice Boltzmann method
- the required hardware for real-time computing
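The kind of kernel such a simulation core is built around can be sketched as a single lattice Boltzmann time step (D2Q9 lattice, BGK collision). Grid size, relaxation time and initial condition are illustrative, no boundary conditions are treated, and this generic step is not the prototype's actual implementation.

```python
import numpy as np

# D2Q9 lattice: discrete velocities and weights
c = np.array([(0, 0), (1, 0), (0, 1), (-1, 0), (0, -1),
              (1, 1), (-1, 1), (-1, -1), (1, -1)])
w = np.array([4/9] + [1/9]*4 + [1/36]*4)
tau = 0.8                                   # BGK relaxation time (assumed)

def equilibrium(rho, ux, uy):
    """Second-order equilibrium distribution for density rho, velocity (ux, uy)."""
    cu = c[:, 0, None, None]*ux + c[:, 1, None, None]*uy
    usq = ux**2 + uy**2
    return w[:, None, None]*rho*(1 + 3*cu + 4.5*cu**2 - 1.5*usq)

def step(f):
    """One collide-and-stream step on periodic boundaries."""
    rho = f.sum(axis=0)
    ux = (f * c[:, 0, None, None]).sum(axis=0) / rho
    uy = (f * c[:, 1, None, None]).sum(axis=0) / rho
    f = f + (equilibrium(rho, ux, uy) - f) / tau          # collide (BGK)
    for i, (cx, cy) in enumerate(c):                      # stream
        f[i] = np.roll(np.roll(f[i], cx, axis=0), cy, axis=1)
    return f

nx = ny = 16
f = equilibrium(np.ones((nx, ny)), 0.02*np.ones((nx, ny)), np.zeros((nx, ny)))
mass0 = f.sum()
f = step(f)
assert abs(f.sum() - mass0) < 1e-10   # collision and streaming conserve mass
```

Because each cell updates from its immediate neighbors only, the method parallelizes well, which is what makes real-time computational steering of the flow feasible.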
In this paper we consider three different methods for generating monogenic functions. The first is related to Fueter's well-known approach to the generation of monogenic quaternion-valued functions by means of holomorphic functions, the second is based on the solution of hypercomplex differential equations, and the third is a direct series approach based on the use of special homogeneous polynomials. We illustrate the theory by generating three different exponential functions and discuss some of their properties. Partially supported by the R&D unit Matemática e Aplicações (UIMA) of the University of Aveiro, through the Portuguese Foundation for Science and Technology (FCT), co-financed by the European Community fund FEDER.
We establish the basis of a discrete function theory starting with a Fischer decomposition for difference Dirac operators. Discrete versions of homogeneous polynomials, Euler and Gamma operators are obtained. As a consequence we obtain a Fischer decomposition for the discrete Laplacian. For the sake of simplicity we consider in the first part only Dirac operators which contain only forward or backward finite differences. Of course, these Dirac operators do not factorize the classical discrete Laplacian. Therefore, we consider a different definition of a difference Dirac operator in the quaternionic case which does factorize the discrete Laplacian.
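For orientation, the forward difference Dirac operator and the star Laplacian it fails to factorize can be written as follows (the notation is assumed, not taken from the paper):

```latex
% e_1, ..., e_m generating Clifford units, h the mesh size,
% \partial_j^{+} the forward difference in direction j:
D^{+} f(x) \;=\; \sum_{j=1}^{m} e_j\, \partial_j^{+} f(x),
\qquad
\partial_j^{+} f(x) \;=\; \frac{f(x + h e_j) - f(x)}{h},
% and analogously D^{-} with \partial_j^{-} f(x) = (f(x) - f(x - h e_j))/h.
% Neither (D^{+})^2 nor (D^{-})^2 equals the star Laplacian
\Delta_h f(x) \;=\; \sum_{j=1}^{m} \partial_j^{+} \partial_j^{-} f(x),
% since squaring a purely forward (or backward) operator produces one-sided
% second differences; a factorization requires mixing forward and backward
% differences in a single Dirac operator.
```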