This contribution is based on the approaches and results of the research project "Prozessorientierte Vernetzung von Ingenieurplanungen am Beispiel der Geotechnik" within the DFG Priority Programme 1103 "Vernetzt-kooperative Planungsprozesse im Konstruktiven Ingenieurbau". The goal is the development of a network-based cooperation platform to support engineering planning. The methodological foundations are Petri nets with individual tokens combined with a semantic evaluation of information. Using an example, the contribution demonstrates the basic capabilities of Petri nets and shows how planning processes can be controlled on the basis of meta-information. In addition, the approach of capturing the changing process flow on the basis of component-oriented process patterns for geotechnical construction elements is pursued. Finally, a path towards implementation is shown.
Given powerful computer networks and the intensive globalisation of science and technology in construction, the goal of computing in civil engineering is to design the planning processes of structural engineering for the use of distributed resources, to develop suitable cooperation models for specialist planning within the information network of the project participants, and to enable cooperative project work using distributed domain models in networks. To this end, new methodological foundations are being created that take the profound changes in working and organisational structures into account. Against this background, the contribution presents current research topics in computing in civil engineering that are being developed collaboratively by several projects, in particular within the DFG Priority Programme 1103 "Vernetzt-kooperative Planungsprozesse im Konstruktiven Ingenieurbau". An integrative process model is presented that forms the basis for supporting networked, cooperative planning with modern methods of information processing and communication technology. Using an example from structural fire protection, the methodological approaches of process modelling with Petri nets and of mobile software agents for supporting cooperation in specialist planning in construction are presented and explained.
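The place/transition mechanics behind the Petri-net-based process control described in the two abstracts above can be sketched in a few lines. The class below is a minimal, generic sketch; the example net, its place names, and the "review" transition are illustrative assumptions, not taken from the cited projects.

```python
# Minimal place/transition Petri net: a transition is enabled when all
# its input places hold enough tokens; firing consumes input tokens and
# produces output tokens.
class PetriNet:
    def __init__(self, marking):
        self.marking = dict(marking)   # place -> token count
        self.transitions = {}          # name -> (inputs, outputs)

    def add_transition(self, name, inputs, outputs):
        self.transitions[name] = (inputs, outputs)

    def enabled(self, name):
        inputs, _ = self.transitions[name]
        return all(self.marking.get(p, 0) >= n for p, n in inputs.items())

    def fire(self, name):
        if not self.enabled(name):
            raise ValueError(f"transition {name!r} is not enabled")
        inputs, outputs = self.transitions[name]
        for p, n in inputs.items():
            self.marking[p] -= n
        for p, n in outputs.items():
            self.marking[p] = self.marking.get(p, 0) + n

# Hypothetical planning step: a review consumes a draft and a free
# reviewer, and produces an approval while releasing the reviewer.
net = PetriNet({"draft_ready": 1, "reviewer_free": 1})
net.add_transition("review",
                   {"draft_ready": 1, "reviewer_free": 1},
                   {"draft_approved": 1, "reviewer_free": 1})
net.fire("review")
print(net.marking)
```

Petri nets with individual (coloured) tokens, as used in the cited work, additionally attach data to each token; the token-counting skeleton above stays the same.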
We consider a structural truss problem where all of the physical model parameters are uncertain: not just the material values and applied loads, but also the positions of the nodes are assumed to be inexact but bounded and are represented by intervals. Such uncertainty may typically arise from imprecision during the process of manufacturing or construction, or round-off errors. In this case the application of the finite element method results in a system of linear equations with numerous interval parameters which cannot be solved conventionally. Applying a suitable variable substitution, an iteration method for the solution of a parametric system of linear equations is firstly employed to obtain initial bounds on the node displacements. Thereafter, an interval tightening (pruning) technique is applied, firstly on the element forces and secondly on the node displacements, in order to obtain tight guaranteed enclosures for the interval solutions for the forces and displacements.
In recent years, research and development in structural design software has focused on extending functional scope. As a consequence, it has become necessary to make this increased functionality accessible to the widest possible range of users through work environments designed for engineering practice, enabling work that is as efficient and error-free as possible. From the structural engineer's point of view, engineering-oriented software must provide user-software interaction adapted to the specific workflow. The required functionality must be integrated into a uniform system, and adaptability by the user must be ensured. Meeting these requirements with conventional means would demand a disproportionately high development effort. Consequently, from the software developer's point of view, a modern software architecture for structural design must additionally provide a higher degree of reuse and independent extensibility. This contribution presents a concept based on compound documents that allows standard software and domain-specific software components to be combined into an engineering-oriented work environment. The analysis and documentation of a structural element, including the associated data management, can thus take place within a single compound document. At the same time, the degree of software reuse can be raised beyond the level achieved so far by defining a component framework as an independently extensible software architecture and by employing software components with their own user interfaces. The feasibility of the concept is demonstrated by a pilot implementation.
In the past, several types of Fourier transforms in Clifford analysis have been studied. In this paper, first an overview of these different transforms is given. Next, a new equation in a Clifford algebra is proposed, the solutions of which will act as kernels of a new class of generalized Fourier transforms. Two solutions of this equation are studied in more detail, namely a vector-valued solution and a bivector-valued solution, as well as the associated integral transforms.
VARIATIONAL POSITING AND SOLUTION OF COUPLED THERMOMECHANICAL PROBLEMS IN A REFERENCE CONFIGURATION
(2015)
A variational formulation of the coupled thermomechanical problem of anisotropic solids for non-isothermal finite deformations in a reference configuration is presented. The formulation of the problem includes: a condition of equilibrium flow of the deformation process in the reference configuration; an equation of coupled heat conduction in variational form, in which the influence of the deformation characteristics of the process on the temperature field is taken into account; tensor-linear constitutive relations for a hypoelastic material; kinematic and evolution relations; and initial and boundary conditions. Based on this formulation, several axisymmetric isothermal and coupled problems of finite deformations of isotropic and anisotropic bodies are solved. The solution of coupled thermomechanical problems for a hollow cylinder under finite deformation shows a substantial influence of the coupling on the distribution of temperature, stresses, and strains. The obtained solutions show the development of the stress-strain state and the temperature change in axisymmetric bodies under finite deformations.
VARIATION OF ROTATIONAL RESTRAINT IN GRID DECK CONNECTION DUE TO CORROSION DAMAGE AND STRENGTHENING
(2006)
An approach to assessing the rotational restraint of the stringer-to-crossbeam connection in the deck of a 100-year-old steel truss bridge is presented. The sensitivity of the connection's rotational restraint coefficient to corrosion damage and strengthening is analysed. Two criteria for assessing the rotational restraint coefficient are applied: a static and a kinematic one. The former is based on the bending moment distribution in the considered member, the latter on the member rotation at the given joint. A finite element model using 2D elements is described: webs and flanges are modelled with shell elements, while the rivets in the connection are modelled with a system of beam and spring elements. The rivet modelling method is verified against T-stub connection test results published in the literature. The FEM analyses proved that the recorded extent of corrosion damage does not alter the initial rotational restraint of the stringer-to-crossbeam connection. Strengthening the stringer midspan influences the midspan bending moment and the stringer end rotation in different ways. Restoring a member's load-bearing capacity usually means strengthening its critical regions (where the highest stress levels occur). This alters the distribution of flexural stiffness over the member length and influences the rotational restraint at its connections to other members. The impact depends on the criterion chosen for assessing the rotational restraint coefficient.
The innovation management of media organisations is currently undergoing substantial change: in a transformed market environment, flexibility, rapid changes of direction, and adaptability prove to be central. Media management research must respond in kind: to investigate the agility of current business practice validly, equally agile, adaptive research is required. To this end, the contribution proposes a practice-theoretical perspective on the innovation management of media organisations. Empirical research designs resulting from such an approach are discussed both with regard to their methodological challenges and their research project management. The contribution also addresses the new scope for scholarly publishing, university management, and research organisation that practice-theoretically grounded empirical innovation research in the media industry demands.
Utilizing Modern FIB/SEM Technology and EDS for 3D Imaging of Hydrated Alite and its Pore Space
(2021)
The exploration of cementitious materials using scanning electron microscopy (SEM) is mainly done using fractured or polished surfaces. This leads to high-resolution 2D images that can be combined with EDX and EBSD to unveil details of the microstructure and composition of materials. Nevertheless, this does not provide quantitative insight into the three-dimensional fine structure of, for example, C-S-H phases.
The focused ion beam (FIB) technique can cut a block of material into thin layers of less than 10 nm. This yields a volume of 1000 μm³ with a voxel resolution down to 4 x 4 x 10 nm³. The results can be combined with simultaneously acquired EDX data to improve image segmentation. The results of the investigation demonstrate that it is possible to obtain close-to-native 3D visualisation of the spatial distribution of unreacted C3S, C-S-H and CH. Additionally, an optimised preparation method allows us to quantify the fine structure of C-S-H phases (length, aspect ratio, …) and the pore space.
Adopting European environmental protection law will require sustained effort from the authorities and communities of Romania; implementing modern solutions will become a fast and effective option for improving the operation of the existing systems in order to prevent disasters. As part of the urban infrastructure, drainage networks for storm and waste water are included in the plan to promote systems that protect environmental quality, with the purpose of integrated and adaptive management. The paper presents a distributed control system for the sewer network of the town of Iasi. The unsatisfactory technical state of the current sewer system is described, focusing on objectives related to the implementation of the control system. The proposed distributed control system for the Iasi drainage network is based on hierarchical control theory for diagnosis, sewer planning, and management. Two control levels are proposed: coordination and local execution. The configuration of the distributed control system, including data acquisition and conversion equipment, interface characteristics, the local data bus, the data communication network, and station configuration, is described in detail. The project aims to be a useful instrument for local authorities in preventing and reducing the impact of future natural disasters on urban areas by means of modern technologies.
Interval analysis extends the concept of computing with real numbers to computing with real intervals. As a consequence, some interesting properties appear, such as the delivery of guaranteed results or confirmed global values. The former property means that unknown numerical values are known to lie within a computed interval. The latter states that, for example, the global minimum value of a given function is known to be contained in an interval (or a finite set of intervals). Depending on the amount of computational effort invested in the calculation, we can often find tight bounds on these enclosing intervals. The downside of interval analysis, however, is the mathematically correct but often very pessimistic size of the interval result. This is due in particular to the so-called dependency effect, which arises when a single variable is used multiple times in one calculation. When interval analysis is applied to structural analysis problems, this dependency has a great influence on the quality of the numerical results. In this paper, a brief background of interval analysis is presented, and it is shown how it can be applied to the solution of structural analysis problems. A discussion of possible improvements as well as an outlook on parallel computing is also given.
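The dependency effect described above is easy to reproduce with a toy interval type. The sketch below implements only addition and subtraction of closed intervals; it is a minimal illustration, not a usable interval library.

```python
# Toy interval type illustrating the dependency effect: each operand is
# treated as an independent interval, even when it is the same variable.
class Interval:
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi

    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __sub__(self, other):
        # Worst-case bounds: ignores that `other` may be the same variable.
        return Interval(self.lo - other.hi, self.hi - other.lo)

    def __repr__(self):
        return f"[{self.lo}, {self.hi}]"

x = Interval(1.0, 2.0)
print(x - x)   # mathematically 0, but naive interval arithmetic gives [-1, 1]
```

The result `[-1, 1]` is a guaranteed enclosure of the true value 0, but a very pessimistic one; parametric solvers and the pruning techniques mentioned in the abstracts exist precisely to tighten such enclosures.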
This paper presents a generic methodology for measurement system configuration when the goal is to identify behaviour models that reasonably explain observations. For such tasks, the best measurement system provides maximum separation between candidate models. In this work, the degree of separation between models is measured using Shannon’s Entropy Function. The location and type of measurement devices are chosen such that the entropy of candidate models is greatest. This methodology is tested on a laboratory structure and, to demonstrate generality, an existing fresh water supply network in a city in Switzerland. In both cases, the methodology suggests an appropriate set of sensors for identifying the state of the system.
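The selection criterion described above, choosing the measurement location where candidate models are most separated, can be sketched with Shannon's entropy over binned model predictions. The binning by measurement resolution, the function names, and the toy numbers are assumptions for illustration, not the cited methodology's exact formulation.

```python
import math
from collections import Counter

def shannon_entropy(labels):
    """Shannon entropy (bits) of the empirical distribution of labels."""
    n = len(labels)
    return -sum(c / n * math.log2(c / n) for c in Counter(labels).values())

def best_sensor(predictions, resolution):
    """predictions[m][s]: prediction of candidate model m at location s.
    Predictions are binned at the measurement resolution; the location
    where the models spread over the most bins (highest entropy)
    separates them best."""
    n_locs = len(predictions[0])
    scores = []
    for s in range(n_locs):
        bins = [round(row[s] / resolution) for row in predictions]
        scores.append(shannon_entropy(bins))
    return max(range(n_locs), key=scores.__getitem__)

# Three hypothetical candidate models, two candidate sensor locations.
# At location 0 all models predict ~1.0 (indistinguishable);
# at location 1 their predictions differ clearly.
preds = [[1.0, 5.0],
         [1.1, 9.0],
         [0.9, 2.0]]
print(best_sensor(preds, resolution=0.5))   # location 1 separates the models
```

A greedy loop over device types and positions, repeatedly picking the highest-entropy candidate, extends this to full measurement system configuration.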
The paper describes further developments of the interactive evolutionary design concept relating to the emergence of mutually inclusive regions of high-performance design solutions. These solutions are generated from cluster-oriented genetic algorithm (COGA) output and relate to a number of objectives introduced during the preliminary design of military airframes. The data mining of multi-objective COGA (moCOGA) output further defines these regions through the application of clustering algorithms, data reduction, and variable attribute relevance analyses. A number of visual representations of the COGA output projected onto both variable and objective space are presented. The multi-objective output of the COGA is compared to output from a Strength Pareto Evolutionary Algorithm (SPEA-II) to illustrate the manner in which moCOGAs can generate good approximations to Pareto frontiers.
Neural network technologies are now widely applied in many fields of engineering, yet their use in building and construction has received insufficient attention. Approximating neural networks make it possible to determine the stress-strain state of structural elements with considerably less computational effort than universal methods such as the finite element method. Today, neural networks are used to analyse individual elements of building structures. This work considers the use of neural networks to compute the stress-strain state of a structure consisting of many elements. The main idea of the proposed analysis is to use neural networks to calculate the internal forces and displacements of parts of the model, which are selected according to their functional purpose. For example, a plate is intended to carry loads distributed over an area, a bar takes up loads distributed along a line, and other elements act as load converters, dispersing and transferring forces. A template is associated with a functional purpose and is regarded as a composition of model elements with defined functional purposes; a single template can embody several functional purposes. Once the displacement components are obtained, their admissibility is assessed. If a violation of the admissible limits is detected, the component database is searched for a component with an analogous functional purpose, according to the type of violation; if such a component is found, it replaces the previous one. Thus, besides monitoring the condition of the structure component by component, it is possible to search for solutions to the problems revealed. …
Portugal is one of the European countries with the highest freeway network coverage in terms of area and population. The sharp growth of this network in recent years motivates the use of methods to analyse and evaluate its quality of service in terms of traffic performance, typically assessed through internationally accepted methodologies, namely that of the Highway Capacity Manual (HCM). Lately, the use of microscopic traffic simulation models has become increasingly widespread. These models simulate the movement of individual vehicles, allowing traffic analyses to be performed. The main goal of this study was to verify the possibility of using microsimulation as an auxiliary tool in adapting the HCM 2000 methodology to Portugal. For this purpose, the microscopic simulators AIMSUN and VISSIM were used to simulate traffic on the Portuguese A5 freeway. The results allowed the analysis of the influence of the main geometric and traffic factors involved in the HCM 2000 methodology. In conclusion, the study presents the main advantages and limitations of the microsimulators AIMSUN and VISSIM in modelling traffic on Portuguese freeways. The main limitation is that these microsimulators cannot explicitly simulate some of the factors considered in the HCM 2000 methodology, which precludes their direct use as a tool for quantifying those effects and, consequently, makes the direct adaptation of this methodology to Portugal impracticable.
Urban design played a central role for the European dictatorships of the 20th century. It served to legitimise the regime, to produce consent, and to demonstrate power, efficiency, and speed; it communicated the social and design projects of the dictatorial regimes domestically and internationally; and it tied old experts, as well as new ones, to the regime. Dictatorial urban design also played an important role after the fall of the dictatorships: it became the object of physical and verbal strategies of handling, of demolition, transformation, reconstruction, forgetting, suppression, reinterpretation, and glorification. The topic is therefore both historical and relevant to the present day. Its discussion is, like it or not, always embedded in the current state of societal engagement with dictatorships.
In order even to be able to discuss all of these aspects, various conceptual decisions are necessary. In retrospect, these may seem self-evident to many, although they are anything but. Our thesis is that three methodological imperatives in particular allow an expanded approach to the topic of “urban design and dictatorship”. First and above all, the tunnel vision that focuses on individual dictatorships and neglects the international dimension must be overcome. Second, the differences in urban design over the course of a dictatorship should be brought out through an appropriate periodisation. Third, we must strive for an open, flexible, yet complex concept of urban design. The main focus lies on the urban design of the most influential dictatorships of the first half of the 20th century: the Soviet Union, Fascist Italy, and Nazi Germany, including the urban design of the autarky periods in Portugal and Spain.
After all, urban design is not just a product of specific historical circumstances. It is a form that continues to have long-term effects and that demonstrates its usefulness and adaptability throughout this process. The products of urban design undoubtedly still recall the dictatorial rule under which they were created. However, they are more than a space of memory; they are also a living space of the present. They can and should be discussed with respect to their spatial and functional utility for today and tomorrow. Such a perspective is a given for the citizens of a city, but also for city marketing, and it has considerable consequences. Only if we do not exclude this dimension a priori, even in academic discussions, can we do justice to the products of dictatorships.
Finally, examining the urban design of dictatorships can and must contribute to questioning simplified and naive conceptions of dictatorship. With urban design in mind, we can observe how dictatorships work and how they were able to prevail. In Europe, these questions are highly topical.
The concept of reliability plays a central role in the evaluation of transport networks. From the point of view of public transport users, one of the most important criteria for judging the quality of a line network is whether the destination can be reached within a given time with high certainty. This paper formalises this notion of reliability mathematically. First, the usual concept of network reliability in the sense of pairwise connection probabilities is discussed. This concept is then extended by considering reliability under a maximum admissible travel time. In previous work, the ring-radius structure has proven to be a well-established model for the theoretical description of transport networks. These considerations are now extended by including real transport network structures. The tram network of Krakow serves as a concrete example; in particular, the effect of a planned extension of the network on its reliability is examined. This paper is related to the CIVITAS CARAVEL project "Clean and better transport in cities". The project has received research funding from the Community's Sixth Framework Programme. The paper reflects only the author's views, and the Community is not liable for any use that may be made of the information contained therein.
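The extended reliability notion above, the probability of reaching the destination within a maximum admissible travel time, can be estimated by Monte Carlo simulation over random link failures. The sketch below makes simplifying assumptions not stated in the abstract: independent link failures with a common probability, and fixed travel times on surviving links.

```python
import heapq
import random

def travel_time(adj, src, dst):
    """Dijkstra shortest travel time; adj: node -> [(neighbour, time), ...]."""
    dist = {src: 0.0}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            return d
        if d > dist.get(u, float("inf")):
            continue
        for v, w in adj.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return float("inf")

def reliability(edges, src, dst, t_max, p_fail, trials=20000, seed=1):
    """Monte Carlo estimate of P(dst reachable from src within t_max)
    when each undirected link fails independently with probability p_fail."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        adj = {}
        for u, v, w in edges:
            if rng.random() >= p_fail:          # link survives this trial
                adj.setdefault(u, []).append((v, w))
                adj.setdefault(v, []).append((u, w))
        if travel_time(adj, src, dst) <= t_max:
            hits += 1
    return hits / trials

# Toy network: a direct line (10 min) and a slower detour (8 + 9 min).
edges = [("A", "B", 10.0), ("A", "C", 8.0), ("C", "B", 9.0)]
print(reliability(edges, "A", "B", t_max=12.0, p_fail=0.1))
```

With `t_max=12.0` only the direct line is fast enough, so the estimate converges to the survival probability of that single link (0.9); raising `t_max` above 17 lets the detour contribute, which is exactly the effect the travel-time-constrained reliability measure captures.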
In reinforced and prestressed concrete construction, composite structures made of precast concrete elements with in-situ concrete topping are used to an increasing extent. Depending on span, loading, and specific requirements, the precast elements are produced with ordinary or prestressed reinforcement. Because the concretes of the individual cross-section parts generally differ, and because the precast elements and the in-situ topping are cast at different times, differences arise in the long-term behaviour that must be taken into account in design and verification. Creep and shrinkage of the concretes, as consequences of time-dependent material behaviour, together with the specifics of the composite cross-section, can be relevant for serviceability, especially when the precast elements are prestressed. The theory of the elastic-creeping body allows a sufficiently realistic description of the long-term behaviour of concrete, although in general no closed-form solution can be given for reinforced composite concrete cross-sections. An alternative arises if, for the individual creep intervals, a simplified approach for the course of the creep function according to the theory of aging is adopted. The internal forces redistributed in the time interval (tj-1, tj) are then taken into account as new loading of the individual cross-section parts in the time interval (tj, tj+1) with the creep function f(tj+1, tj). For sufficiently small time steps, the drawbacks of this simplification can be neglected.
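The stepwise scheme above, treating the load redistributed in one interval as new loading in the next, multiplied by a creep function, amounts to an incremental superposition of strain contributions. The scalar sketch below illustrates only that accumulation pattern; the placeholder creep function and the numbers are assumptions, not the paper's f(tj+1, tj).

```python
# Hedged sketch of stepwise creep superposition: each stress increment
# applied at step j contributes elastic strain plus creep strain that
# grows with the number of steps elapsed since j.
def creep_strain(stress_increments, E, phi):
    """stress_increments: stress increments applied at steps 0..n-1.
    phi(j_load, j_now): creep coefficient accumulated between the step a
    stress increment was applied and the current step.
    Returns total strain (elastic + creep) at the final step."""
    n = len(stress_increments)
    strain = 0.0
    for j, d_sigma in enumerate(stress_increments):
        strain += d_sigma / E * (1.0 + phi(j, n - 1))
    return strain

# Placeholder aging-type creep function: saturating growth with elapsed steps.
phi = lambda j_load, j_now: 2.0 * (1.0 - 0.7 ** (j_now - j_load))

# Initial load of 10 units, partly relaxed by later redistributions.
print(creep_strain([10.0, -2.0, -1.0], E=30000.0, phi=phi))
```

In the composite cross-section of the paper, the increments would be the internal forces redistributed between the precast and in-situ parts in each interval, computed per part rather than as a single scalar.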
In today's information society, all business processes in a company are shaped by the use of available information and communication technologies. As a result, companies are confronted with a veritable flood of information entering the company from outside and inside. Successfully mastering this flood of information is the basis for effective work in the company. The fundamental prerequisite for this is, above all, a harmonious interplay between people, organisation, and technology. Document management systems aim to support users in mastering this task technologically. This article presents the problems that can arise during the implementation of a document management system, focusing in particular on small and medium-sized enterprises (SMEs) in the construction industry. These are characterised by a weakness in analysing their own company structures in business terms, from which serious problems later result during a technological implementation. This is exactly where this article starts: it attempts to work out the relationship between organisation and technology in order to provide, building on this, a generally applicable approach for implementing a document management system.
Using one part of the topology of architectural spaces, the volume adjacency graph (VAG), it is shown how topological modelling can integrate building planning applications. A prototype is presented that essentially consists of three components. With the requirements manager, requirements that are formally easy to handle are entered. With the topology manager, these requirements are combined with drawn rooms; the topological relations in the drawings are computed with the corresponding GIS tools and exported to a database. The requirements checker then compares the requirement data created with the requirements manager against the topology data. This approach is intended to show how topological models enable the formalisation of semantically rich information by representing it as properties of graphs.
Hungary and the Bauhaus
(1976)
Fuzzy functions are suitable for dealing with uncertainty and fuzziness in closed form while maintaining the informational content. This paper seeks to understand, elaborate, and explain the problem of interpolating crisp and fuzzy data using continuous fuzzy-valued functions. Two main issues are addressed. The first is how the fuzziness induced by the reduction and deficit of information, i.e. the discontinuity of the interpolated points, can be evaluated, taking into account the interpolation method used and the density of the data. The second is the need to differentiate between impreciseness, and hence fuzziness, only in the interpolated quantity; impreciseness only in the location of the interpolated points; and impreciseness in both the quantity and the location. The paper presents a brief background on the concepts of fuzzy numbers and fuzzy functions, and the numerical side of computing with fuzzy numbers is concisely demonstrated. The problems of fuzzy polynomial interpolation, interpolation on meshes, and mesh-free fuzzy interpolation are investigated. The integration of the previously noted uncertainty into a coherent fuzzy-valued function is discussed. Several sets of artificial and original measured data are used to examine the fuzzy interpolations mentioned.
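One standard way to interpolate fuzzy quantities, as discussed above, is to work level-wise on alpha-cuts: at each membership level the fuzzy value reduces to an interval, and the crisp interpolation formula is applied to the interval bounds. The sketch below does this for linear interpolation of triangular fuzzy numbers; the representation and example values are illustrative assumptions, not the paper's specific method.

```python
# A triangular fuzzy number (a, b, c) has the alpha-cut
# [a + (b - a) * alpha,  c - (c - b) * alpha],  alpha in [0, 1].
def alpha_cut(tri, alpha):
    a, b, c = tri
    return (a + (b - a) * alpha, c - (c - b) * alpha)

def fuzzy_lerp(x0, y0, x1, y1, x, alphas=(0.0, 0.5, 1.0)):
    """Linearly interpolate triangular fuzzy values y0, y1 at crisp x.
    Interpolation is applied level-wise on the alpha-cuts; since the
    weights (1-t) and t are non-negative for x in [x0, x1], applying
    them to the interval bounds is valid interval arithmetic."""
    t = (x - x0) / (x1 - x0)
    cuts = []
    for alpha in alphas:
        lo0, hi0 = alpha_cut(y0, alpha)
        lo1, hi1 = alpha_cut(y1, alpha)
        cuts.append((alpha, (1 - t) * lo0 + t * lo1,
                            (1 - t) * hi0 + t * hi1))
    return cuts

# Fuzzy measurements at x=0 and x=1, interpolated midway:
print(fuzzy_lerp(0.0, (1.0, 2.0, 3.0), 1.0, (3.0, 4.0, 5.0), 0.5))
```

Impreciseness in the *location* of the interpolated points, the paper's second issue, would additionally make `x0` and `x1` fuzzy, which this crisp-abscissa sketch does not cover.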
This paper presents a methodology for uncertainty quantification in cyclic creep analysis. Several models, namely the BP model, the Whaley and Neville model, the MC90 model modified for cyclic loading, and the hyperbolic function modified for cyclic loading, are used for uncertainty quantification. Three types of uncertainty are included in the uncertainty quantification (UQ): (i) natural variability in loading and material properties; (ii) data uncertainty due to measurement errors; and (iii) modelling uncertainty and errors during cyclic creep analysis. Because all types of uncertainty are considered, a measure of the total variation of the model response is obtained. The study finds that the BP, modified hyperbolic, and modified MC90 models are the best-performing models for cyclic creep prediction, in that order. Further, global sensitivity analysis (SA) considering uncorrelated and correlated parameters is used to quantify the contribution of each source of uncertainty to the overall prediction uncertainty and to identify the important parameters. Errors in determining the input quantities and in the model itself can produce significant changes in predicted creep values. The influence of the variability of the random input quantities on cyclic creep was studied by means of stochastic uncertainty and sensitivity analysis, namely the methods of Gartner et al. and Saltelli et al. All input imperfections were treated as random quantities. The Latin hypercube sampling (LHS) numerical simulation method (a Monte Carlo-type method) was used. The stochastic sensitivity analysis shows that the variability of the cyclic creep deformation is most sensitive to the elastic modulus of concrete, compressive strength, mean stress, cyclic stress amplitude, and number of cycles, in that order.
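The Latin hypercube sampling used above can be sketched in a few lines: each input dimension is split into as many equal-probability strata as there are samples, and each stratum is hit exactly once. This is a generic, stdlib-only sketch on the unit cube, not the cited study's implementation.

```python
import random

def latin_hypercube(n_samples, n_dims, seed=0):
    """Latin hypercube sample on the unit cube [0, 1)^n_dims.
    Each dimension is split into n_samples equal strata; a random
    permutation assigns one stratum per sample, then a point is drawn
    uniformly inside each stratum. This guarantees one-dimensional
    coverage that plain Monte Carlo sampling does not."""
    rng = random.Random(seed)
    samples = [[0.0] * n_dims for _ in range(n_samples)]
    for d in range(n_dims):
        strata = list(range(n_samples))
        rng.shuffle(strata)
        for i in range(n_samples):
            samples[i][d] = (strata[i] + rng.random()) / n_samples
    return samples

pts = latin_hypercube(5, 2)
# Each of the 5 strata [k/5, (k+1)/5) is hit exactly once per dimension:
for d in range(2):
    assert sorted(int(p[d] * 5) for p in pts) == [0, 1, 2, 3, 4]
print(pts)
```

To sample physical inputs (elastic modulus, stress amplitude, …), each unit-cube coordinate is mapped through the inverse CDF of that input's assumed distribution.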
The student project "Umweltorientiertes Projektmanagement" (UWoPM, environment-oriented project management) opens up a new generation of working methods in project management. Environmental concerns increasingly affect project business through legal regulations. By incorporating legally relevant data into the project database and legal provisions into the knowledge base, software support is offered for (initially only elementary) legal questions. A dedicated presentation of legal knowledge and convenient support for filing applications are intended to make this a matter of course in project management. At the same time, the UWoPM prototype offers up-to-date system solutions for the communication between applicant and authority as well as for the workflow within the approval authority. This talk gives an introduction and overview of a total of 18 diploma theses in which the individual components of UWoPM were developed. The central problem was the transfer of statutory provisions into knowledge bases. By restricting the scope to "core provisions" and flagging the need for interpretation, a workable solution path was realised. Although the technical realisation has not yet gone beyond a prototype, it demonstrates the feasibility of the overall approach.
The execution of project activities generally requires the use of (renewable) resources like machines, equipment or manpower. The resource allocation problem consists in assigning time intervals to the execution of the project activities while taking into account temporal constraints between activities emanating from technological or organizational requirements and costs incurred by the resource allocation. If the total procurement cost of the different renewable resources has to be minimized we speak of a resource investment problem. If the cost depends on the smoothness of the resource utilization over time the underlying problem is called a resource levelling problem. In this paper we consider a new tree-based enumeration method for solving resource investment and resource levelling problems exploiting some fundamental properties of spanning trees. The enumeration scheme is embedded in a branch-and-bound procedure using a workload-based lower bound and a depth first search. Preliminary computational results show that the proposed procedure is promising for instances with up to 30 activities.
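The levelling objective described above, penalising uneven resource utilisation over time, can be made concrete with a simple cost function. The sum-of-squares measure below is one common smoothness criterion; the toy schedules are illustrative assumptions, and the spanning-tree enumeration of the paper is not reproduced here.

```python
def levelling_cost(schedule, horizon):
    """schedule: list of (start, duration, resource_units) per activity.
    Returns the sum of squared resource utilisation per period; for a
    fixed total workload, this is minimal when usage is spread evenly."""
    usage = [0] * horizon
    for start, duration, units in schedule:
        for t in range(start, start + duration):
            usage[t] += units
    return sum(u * u for u in usage)

# Two placements of the same three activities in a 4-period horizon:
peaked = [(0, 2, 2), (0, 2, 2), (2, 2, 1)]   # both heavy jobs together
level  = [(0, 2, 2), (2, 2, 2), (0, 2, 1)]   # heavy jobs spread out
print(levelling_cost(peaked, 4), levelling_cost(level, 4))
```

A branch-and-bound procedure like the one in the paper would enumerate feasible start-time assignments under the temporal constraints and prune branches whose lower bound on this cost already exceeds the best schedule found.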
The location or site of a building is undoubtedly a characteristic part of it. The spatial placement of an exposed architectural work, the access to a building in an inner-city environment, or the management of a building stock is always visual in civil engineering and architecture. Interaction with drawings, orientation using a site plan, or documentation with photos are just a few examples. Economic optimization using such data and its subsequent visualization is demonstrated here by means of suitable systems; evaluation by, and interaction with, the user is supported as well. This article therefore focuses on transport and traffic as an example. Mathematics: graphs and networks form models for optimizing the underlying logistics processes, such as planning of building-material demand with ordering, removal and delivery of material, tour composition, or site selection. Computer science: the software implementation of these models and their integration into accompanying projects on >Mathematical Optimization< are presented as well.
TRANSFORMING TO EXPERT
(2011)
Katharina Richter holds a degree in Architecture, Urban and Regional Planning from the Bauhaus-University Weimar, where she has been Assistant Professor at the Chair of Computer Science in Architecture since 2000. She teaches architecture studios in the undergraduate and graduate programs and has supervised several international teaching projects. In Fall 2004 she taught and conducted research at the Washington Alexandria Architecture Consortium - Virginia Polytechnic and State University, Alexandria, VA. Her current research focuses on the potential of computer-based exchange of experiential knowledge in architecture. Between 2000 and 2006 she coordinated third-party verification procedures at the Collaborative Research Center SFB 524 „Materials and Structures in Revitalization of Buildings“, Bauhaus-University Weimar, Germany. Her work has been published at various international conferences as well as in related reviewed journals.
TRANSFORMING TACIT KNOWLEDGE
(2011)
Sabine Ammon studied architecture and philosophy at the Technische Universität Berlin. Study and research visits led her to the University of London, Harvard University and ETH Zürich. Furthermore, she practised building design as a freelance architect. Her dissertation “Wissen verstehen. Perspektiven einer prozessualen Theorie der Erkenntnis”, Weilerswist 2009, develops a theory of knowledge, based on the philosophy of symbols. In her current research project she explores the epistemic dimension of architectural design processes.
Professor Jane Rendell is Director of Architectural Research at the Bartlett, UCL. An architectural designer and historian, art critic and writer, her work has explored various interdisciplinary intersections: feminist theory and architectural history, fine art and architectural design, autobiographical writing and criticism. She is author of Site-Writing: The Architecture of Art Criticism (2010), Art and Architecture (2006), The Pursuit of Pleasure (2002) and co-editor of Pattern (2007), Critical Architecture (2007), Spatial Imagination (2005), The Unknown City (2001), Intersections (2000), Gender Space Architecture (1999) and Strangely Familiar (1995). Her talks and texts have been commissioned by artists such as Daniel Arsham and Bik Van Der Pol, and by galleries such as the Baltic, the Hayward, Kunstmuseum Thun, the Serpentine, the Tate and the Whitechapel. She is on the Editorial Board for ARQ (Architectural Research Quarterly), Haecceity, The Happy Hypocrite, The Issues and the Journal of Visual Culture in Britain.
In order to make control decisions, Smart Buildings need to collect data from multiple sources and bring it to a central location, such as the Building Management System (BMS). This needs to be done in a timely and automated fashion. Besides data gathered from different energy-using elements, information on occupant behaviour is also important for a building's requirement analysis. In this paper, the parameter of occupant density was used to characterize the behaviour of occupants towards a building space. Through this parameter, building energy consumption and requirements can be assessed on the basis of occupant needs and demands. The demonstrator presented provides the number of people present in a particular building space at any time, giving the space density. Collections of such density data over a certain period of time represent occupant behaviour towards the building space, giving its usage patterns. Similarly, inventory items were tracked and monitored for moving out of or being brought into a particular read zone. For both people and inventory items, this was achieved using small, low-cost, passive Ultra-High Frequency (UHF) Radio Frequency Identification (RFID) tags. Occupants were given the tags in the form factor of a credit card to be carried at all times. A central database was built in which occupant and inventory information for a particular building space was maintained for monitoring and central data access.
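The density computation described above can be sketched in a few lines (a minimal sketch assuming a simplified data layout, not the paper's actual database schema): each RFID read event pairs a tag ID with a read zone, and a zone's density is the number of distinct tags read in it.

```python
from collections import defaultdict

# Minimal sketch (hypothetical data layout, not the paper's schema):
# each read event is (tag_id, zone); occupant density of a zone is the
# number of distinct tags read there, so repeated reads collapse.

def zone_densities(read_events):
    zones = defaultdict(set)
    for tag_id, zone in read_events:
        zones[zone].add(tag_id)
    return {zone: len(tags) for zone, tags in zones.items()}

reads = [("card-01", "room-A"), ("card-02", "room-A"),
         ("card-01", "room-A"), ("card-03", "room-B")]
# room-A holds two distinct occupants, room-B one
densities = zone_densities(reads)
```

Logging such snapshots at regular intervals yields the time series of densities from which the usage patterns mentioned in the abstract can be derived.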
A central issue for the autonomous navigation of mobile robots is to map unknown environments while simultaneously estimating the robot's position within this map. This chicken-and-egg problem is known as simultaneous localization and mapping (SLAM). AscTec's quadrotor Pelican is a powerful and flexible research UAS (unmanned aircraft system) which enables the development of new real-time on-board algorithms for SLAM as well as for autonomous navigation. The relative UAS pose estimation for SLAM, usually based on low-cost sensors like inertial measurement units (IMUs) and barometers, is known to be affected by high drift rates. In order to significantly reduce these effects, we incorporate additional independent pose estimation techniques using exteroceptive sensors. In this article we present first pose estimation results using a stereo camera setup and a laser range finder, each considered individually. Even though these methods fail in a few specific configurations, we demonstrate their effectiveness and value for the reduction of IMU drift rates and give an outlook on further work towards SLAM.
The process of matching data represented in two different data models is a longstanding issue in the exchange of data between different software systems. While the traditional manual matching approach cannot meet today’s demands on data exchange, research shows that a fully automated generic approach for model matching is not likely, and generic semi-automated approaches are not easy to implement. In this paper, we present an approach that focuses on matching data models in a specific domain. The approach combines a basic model matching approach and a version matching approach to deduce new matching rules to enable data transfer between two evolving data models.
This paper describes ongoing research on the representation of, and reasoning about, construction specifications, which is part of a larger research project that aims at developing a formalism for automating the identification of deviations and defects on construction sites. We specifically describe the requirements on product and process models and an approach for representing and reasoning about construction specifications to enable automated detection and assessment of construction deviations and defects. This research builds on previous research on modeling design specifications and extends and elaborates the concept of contexts developed in that domain. The paper provides an overview of how construction specifications are being modeled in this research and points out future steps that need to be accomplished to develop the envisioned automated deviation and defect detection system.
Leslie Kavanaugh is both an architect and a philosopher. She is a licensed architect in America and the Netherlands, as well as a member of the AIA, but studied philosophy from undergraduate to doctorate at the University of Amsterdam. She has taught philosophy and design at various institutions, including twelve years at TUDelft, and as a guest professor at the Tokyo Science University and Milano Politecnico. Her publications include The Architectonic of Philosophy: Plato, Aristotle, Leibniz (Amsterdam: University of Amsterdam Press, 2007), Crossovers (with A.Graafland), Meditations on Space (2010), Aggregates (2010), and Chronotopologies: Hybrid Spatialities and Multiple Temporalities (Amsterdam: Rodopi Press, forthcoming). Presently she is the founder and director of studiokav.com in Amsterdam, a multi-disciplinary and collaborative atelier. In addition, Kavanaugh is an affiliated Senior Scholar at the Philosophy Institute, Leiden University, the Netherlands.
The aim of our contribution is to clarify the relation between totally regular variables and Appell sequences of hypercomplex holomorphic polynomials (sometimes simply called monogenic power-like functions) in Hypercomplex Function Theory. After their introduction in 2006 by two of the authors of this note on the occasion of the 17th IKM, the latter have been the subject of investigations by different authors with different methods and in various contexts. The former concept, introduced by R. Delanghe in 1970 and later also studied by K. Gürlebeck in 1982 for the case of quaternions, has an obvious relationship with the latter, since it describes a set of linear hypercomplex holomorphic functions all powers of which are also hypercomplex holomorphic. Due to the non-commutative nature of the underlying Clifford algebra, being a totally regular variable or an Appell sequence is not a trivial property, as it is for the integer powers of the complex variable z = x + iy. Simple examples also show that not every totally regular variable and its powers form an Appell sequence, and vice versa. Under a very natural normalization condition, the set of all para-vector valued totally regular variables which are also Appell sequences will be completely characterized. In some sense the result can also be considered an answer to a remark of K. Habetha in chapter 16: Function theory in algebras of the collection Complex analysis. Methods, trends, and applications, Akademie-Verlag Berlin, (Eds. E. Lanckau and W. Tutschke) 225-237 (1983) on the use of exact copies of several complex variables for the power series representation of any hypercomplex holomorphic function.
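The complex model case referred to above, and the hypercomplex Appell property it motivates, can be written as follows (a sketch in the notation commonly used by Gürlebeck and Malonek; normalizations vary across the literature):

```latex
% Complex model case: the powers of z form an Appell sequence
\frac{d}{dz}\, z^n = n\, z^{n-1}

% Hypercomplex analogue: a sequence of monogenic polynomials (P_n) is an
% Appell sequence with respect to the hypercomplex derivative
P_n'(x) = \tfrac{1}{2}\,\overline{D}\, P_n(x) = n\, P_{n-1}(x),
\qquad
\overline{D} = \partial_{x_0} - \sum_{k=1}^{m} e_k\, \partial_{x_k}
```

Here \(\overline{D}\) is the conjugate of the generalized Cauchy-Riemann operator; the non-commutativity of the Clifford algebra is what makes this property non-trivial for general totally regular variables.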
A topology optimization method has been developed for structures subjected to multiple load cases (for example, a bridge pier subjected to wind loads, traffic, the superstructure, etc.). We formulate the problem as a multi-criteria optimization problem in which the compliance is computed for each load case. Then the epsilon-constraint method (proposed by Chankong and Haimes, 1971) is adapted. The strategy of this method is to minimize the maximum compliance, resulting from the critical load case, while the remaining compliances are treated as constraints. In each iteration the compliances of all load cases are computed and only the maximum one is minimized. The topology optimization process thus switches from one load case to another according to the variation of the resulting compliances. In this work we motivate and explain the proposed methodology and provide some numerical examples.
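The switching behaviour of the min-max strategy can be illustrated with a hypothetical one-variable sketch (not the actual FEM-based implementation): in each iteration all load-case compliances are evaluated and only the currently critical (maximum) one drives the design update, so the descent direction switches whenever another load case becomes critical.

```python
# One-variable toy sketch of the min-max compliance strategy (illustrative):
# each iteration evaluates all "compliances" and takes a finite-difference
# descent step on the currently critical (maximum) one only.

def minimize_max_compliance(compliances, x, step=0.05, iters=200):
    for _ in range(iters):
        values = [c(x) for c in compliances]
        k = values.index(max(values))      # index of the critical load case
        # central finite-difference gradient of the critical compliance
        grad = (compliances[k](x + 1e-6) - compliances[k](x - 1e-6)) / 2e-6
        x -= step * grad
    return x

# Two toy "load cases" with minima at x = 1 and x = 3:
# the min-max point, where both compliances are equal, is x = 2.
c1 = lambda x: (x - 1.0) ** 2
c2 = lambda x: (x - 3.0) ** 2
x_opt = minimize_max_compliance([c1, c2], x=0.0)   # oscillates near 2
```

With a fixed step size the iterate chatters around the crossover point, which is why practical implementations dampen the step or regularize the max, but the switching mechanism is the same as described in the abstract.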
Digital models of buildings are widely used in civil engineering. In these models, geometric information serves as the leading information. Engineers are used to working with geometric information; for instance, it is state of the art to specify a point by its three coordinates. However, the traditional approaches have disadvantages. Geometric information is over-determined: more geometric information is specified and stored than is needed. In addition, engineers already deal with topological information; the denotation of objects in buildings is of a topological nature. The question is whether approaches in which topological information takes the leading role would be more efficient in civil engineering. This paper presents such an approach. Topological information is modelled independently of geometric information and is used for denoting the objects of a building. Geometric information is associated with topological information, so that geometric information “weights” a topology.
The concept presented in this paper has already been used in surveying existing buildings. Experience with this concept showed that the amount of geometric information required for a complete specification of a building could be reduced by a factor of up to 100. Further research will show how this concept can be used in planning processes.
Topological Houses
(2003)
Many properties of houses are of a topological nature. The problem of three-dimensional encoding is solved here by first giving an axiomatic description of a simplified concept of >house< as a certain generalisation of a CW-complex and, secondly, by generalising local observation structures of embedded unconnected planar graphs to the three-dimensional case and proving that they allow retrieving all topological properties of these simplified houses. In the more general case of an architectural complex (a certain generalisation of a >house<), much topological information is still kept in these structures, making them a useful approach to encoding topological spaces. Finally, a lossless representation of observation structures in a relational database scheme which we call PLAV (Points, Lines, Areas, Volumes) is given. We expect PLAV to be useful for encoding higher-dimensional (architectural) space-time complexes.
TOOL TO CHECK TOPOLOGY AND GEOMETRY FOR SPATIAL STRUCTURES ON BASIS OF THE EXTENDED MAXWELL'S RULE
(2006)
One of the simplest principles in the design of light-weight structures is to avoid bending. This can be achieved by dissolving girders into members acting purely in axial tension or compression. The use of cables for the tensioned members leads to even lighter structures, which are called cable-strut structures; they constitute a subclass of spatial structures. Giving fast information about the general feasibility of an architectural concept employing cable-strut structures is a challenging task due to their sophisticated mechanical behavior. In this regard it is essential to check whether the structure is stable and whether pre-stress can be applied. This paper presents a tool using the spreadsheet software Microsoft (MS) Excel which can give such information. Thus it is not necessary to purchase special software, and the corresponding time-consuming training effort is much lower. The tool was developed on the basis of the extended Maxwell's rule, which besides topology also considers the geometry of the structure. For this the rank of the node equilibrium matrix is crucial. The significance and determination of the rank and the implementation of the corresponding algorithms in MS Excel are described in the following. The presented tool is able to support the structural designer in an early stage of a project in finding a feasible architectural concept for cable-strut structures. As examples for the application of the software tool, two special cable-strut structures, so-called tensegrity structures, were examined for their mechanical behavior.
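The rank-based count behind the extended Maxwell's rule can be sketched as follows (an illustrative Python sketch, not the Excel tool itself, using a planar example for brevity): build the node equilibrium matrix A (free DOFs x members) from the geometry, compute its rank r, and derive the number of mechanisms m = n_dof - r and states of self-stress s = n_members - r.

```python
# Illustrative sketch of the extended Maxwell count (not the Excel tool):
# mechanisms m = n_free_dofs - rank(A), self-stress states s = n_members - rank(A),
# where A is the node equilibrium matrix built from the structure's geometry.

def matrix_rank(A, tol=1e-9):
    # plain Gaussian elimination with partial pivoting
    A = [row[:] for row in A]
    rows, cols = len(A), len(A[0])
    rank, pivot_row = 0, 0
    for col in range(cols):
        pivot = max(range(pivot_row, rows), key=lambda i: abs(A[i][col]), default=None)
        if pivot is None or abs(A[pivot][col]) < tol:
            continue
        A[pivot_row], A[pivot] = A[pivot], A[pivot_row]
        for i in range(pivot_row + 1, rows):
            f = A[i][col] / A[pivot_row][col]
            A[i] = [a - f * b for a, b in zip(A[i], A[pivot_row])]
        rank += 1
        pivot_row += 1
        if pivot_row == rows:
            break
    return rank

def maxwell_counts(nodes, members, fixed_dofs, dim=2):
    free = [d for d in range(dim * len(nodes)) if d not in fixed_dofs]
    A = [[0.0] * len(members) for _ in free]
    for j, (a, b) in enumerate(members):
        length = sum((nodes[b][k] - nodes[a][k]) ** 2 for k in range(dim)) ** 0.5
        for node, other in ((a, b), (b, a)):
            for k in range(dim):
                dof = dim * node + k
                if dof not in fixed_dofs:
                    A[free.index(dof)][j] = (nodes[other][k] - nodes[node][k]) / length
    r = matrix_rank(A)
    return len(free) - r, len(members) - r   # (mechanisms, self-stress states)

# Statically and kinematically determinate triangle: m = 0, s = 0.
nodes = [(0.0, 0.0), (1.0, 0.0), (0.5, 1.0)]
members = [(0, 1), (0, 2), (1, 2)]
fixed = {0, 1, 3}        # node 0 fully pinned, node 1 supported vertically
m, s = maxwell_counts(nodes, members, fixed)
```

For pre-stressable cable-strut structures, s > 0 is required (a state of self-stress must exist), and any mechanisms m > 0 must be stabilized by that pre-stress; the rank computation is the same in 3D with dim=3.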
Computationally, design activity can be treated as a sequence of state transitions. In stepwise processing, the in-between form states are not easily observed. In this research, a time-based concept is introduced and applied in order to bridge this gap. In architecture, folding is one method of form manipulation, and architects also want to search for design alternatives through this operation. The folding operation therefore has to be defined and parameterized before time is introduced as a variable of folding. As a result, time-based transformation provides sequential form states and redirects design activity.
The concrete is modeled as a material with damage and plasticity, whereby the viscoplastic and the viscoelastic behaviour depend on the rate of the total strains. Due to the damage behaviour, the compliance tensor develops different properties in tension and compression. Various yield surfaces, flow rules and damage rules have been tested with respect to their usability in a concrete model. A three-dimensional yield surface was developed by the author from a failure surface based on the Willam-Warnke five-parameter model. Only one general uni-axial stress-strain relation is used for the numeric control of the yield surface; from that curve, all parameters needed for different concrete strengths and strain rates can be derived by affine transformations. For the flow rule, a non-associated inelastic potential is used in the compression zone and a Rankine potential in the tension zone. Owing to the time-dependent formulation, the symmetry of the system equations is maintained despite the use of non-associated potentials for the derivation of the inelastic strains. For quasi-static computations, a simple viscoplastic law based on an approach by Perzyna is used. The principle of equality of dissipation power in the uni-axial and the tri-axial state of stress is applied; it is modified by a factor that depends on the actual stress ratio and, in comparison with the Kupfer experiments, yields more realistic strains. The concrete model is implemented in a mixed hybrid finite element. Examples at the structural level are presented for verification of the concrete model.
Albert Narath is a doctoral candidate in modern architectural history at Columbia University in New York and a Paul Mellon Pre-doctoral Fellow at the Center for Advanced Research in the Visual Arts at the National Gallery of Art in Washington, DC. He holds an MA degree from the Architectural Association in London. His dissertation concerns architectural and art historical debates surrounding the Neo-baroque at the end of the nineteenth century in Germany.
A realistic and reliable model is an important precondition for the simulation of revitalization tasks and the estimation of system properties of existing buildings. The main focus lies on parameter identification, optimization strategies and the preparation of experiments. As usual, structures are modeled by the finite element method. This technique, like others, is based on idealizations and empirical material properties. Within one theory, the parameters of the model should be approximated through gradually performed experiments and their analysis. This approximation is performed by solving an optimization problem which is usually non-convex, of high dimension, and possesses a non-differentiable objective function. Therefore we use an optimization procedure based on genetic algorithms, which was implemented using the program package SLang...
For the analysis and planning of transport networks, detailed information concerning travel sequences is required. The paper examines an activity chain model to determine the stochastic travel demand which an individual generates in order to participate in an activity or a sequence of activities over the day. The transition from one activity to another depends on the activity participation decision, which is made sequentially and is constrained in space and time. Activity chains derived from travel survey data are used as a base to develop a more realistic model than conventional discrete trip models. Probabilistic concepts and statistical procedures are used to estimate characteristics of travel demand, such as the number of daily out-of-home activities and the transition probabilities between activities for homogeneous behavioural groups. In addition, the paper compares the model for two towns of different size.
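The estimation of transition probabilities from surveyed activity chains can be sketched as follows (a minimal sketch assuming each chain is simply a sequence of activity labels; the paper's actual estimation per behavioural group is more elaborate): count observed transitions and normalize per origin activity.

```python
from collections import Counter, defaultdict

# Sketch of transition-probability estimation from activity chains
# (assumed encoding: one list of activity labels per surveyed day).

def transition_probabilities(chains):
    counts = defaultdict(Counter)
    for chain in chains:
        for a, b in zip(chain, chain[1:]):
            counts[a][b] += 1
    return {a: {b: n / sum(c.values()) for b, n in c.items()}
            for a, c in counts.items()}

chains = [
    ["home", "work", "shop", "home"],
    ["home", "work", "home"],
    ["home", "shop", "home"],
]
p = transition_probabilities(chains)
# e.g. p["work"]["shop"] == 0.5 and p["home"]["work"] == 2/3
```

Applying this separately to each homogeneous behavioural group yields the group-specific transition probabilities mentioned in the abstract.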
For the assessment of old buildings, thermographic analysis aided by infrared cameras is nowadays widely employed. Image processing and evaluation can be economically practicable only if the image evaluation is automated to the largest possible extent. For that reason, methods of computer vision for evaluating thermal images are presented in this paper. To detect typical thermal image elements, such as thermal bridges and lintels, in thermal images (i.e. gray-value images), methods of digital image processing have been applied, for which numerical procedures are available to transform, modify and encode images. Image processing can be regarded as a multi-stage process: in order to accomplish image analysis from image formation through enhancement and segmentation to categorization, appropriate functions must be implemented. For this purpose, different measuring procedures and methods for automated detection and evaluation have been tested.
The method of difference potentials can be used to solve discrete elliptic boundary value problems, where all derivatives are approximated by finite differences. Following classical potential theory, an integral equation on the boundary is investigated, which is solved approximately with the help of a quadrature formula. The advantage of the discrete method consists in the establishment of a linear equation system on the boundary, which can be solved immediately on the computer. The described method of difference potentials is based on the discrete Laplace equation in the three-dimensional case. In the first step, the integral representation of the discrete fundamental solution is presented and its convergence behaviour with respect to the continuous fundamental solution is discussed. Because the method can be used to solve boundary value problems in interior as well as exterior domains, it is necessary to explain some geometrical aspects relating to the discrete domain and the double-layer boundary. A discrete analogue of the integral representation will be presented. The main result consists in splitting the difference potential on the boundary into a discrete single-layer and a discrete double-layer potential. The discrete potentials are used to establish and solve a linear equation system on the boundary. The actual form of these equation systems and the conditions for solvability are presented for Dirichlet and Neumann problems in interior as well as exterior domains.
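As a small illustration of the discrete setting (the standard 7-point discretization, not the difference-potential method itself): the discrete Laplacian on a uniform grid of spacing h annihilates the harmonic quadratic u(x, y, z) = x^2 - y^2 exactly, since second differences of quadratics introduce no truncation error.

```python
# 7-point discrete Laplacian on a uniform grid of spacing h:
# (sum of the six neighbour values - 6 * center value) / h^2.
# For the harmonic quadratic u = x^2 - y^2 the result is exactly zero,
# because second central differences of quadratics are exact.

def discrete_laplace(u, x, y, z, h):
    return (u(x + h, y, z) + u(x - h, y, z)
          + u(x, y + h, z) + u(x, y - h, z)
          + u(x, y, z + h) + u(x, y, z - h)
          - 6.0 * u(x, y, z)) / h**2

u = lambda x, y, z: x * x - y * y
val = discrete_laplace(u, 1.0, 2.0, 3.0, 0.5)   # zero up to rounding
```

The difference-potential method builds on this operator: discrete harmonic functions are exactly those annihilated by it, and the discrete fundamental solution plays the role of the continuous Newton kernel.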
The Lucas-Kanade tracker has proven to be an efficient and accurate method for calculating the optical flow. However, this algorithm can reliably track only suitable image features like corners and edges. Therefore, the optical flow can only be calculated for a few points in each image, resulting in sparse optical flow fields. Accumulating these vectors over time is a suitable method to retrieve a dense motion vector field, although the accumulation process limits the application of the proposed method to fixed camera setups. Here, a histogram-based approach is favored to allow more than a single typical flow vector per pixel. The resulting vector field can be used to detect roads and prescribed driving directions which constrain object movements. The motion structure can be modeled as a graph: the nodes represent entry and exit points for road users as well as crossings, while the edges represent typical paths.
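The per-pixel histogram accumulation can be sketched as follows (a hypothetical sketch of the accumulation step only; the actual feature tracking would come from a Lucas-Kanade implementation): each sparse flow vector observed at a pixel is quantized into one of a fixed number of direction bins, and over time the dominant bin approximates the typical driving direction at that pixel.

```python
import math
from collections import defaultdict

# Sketch of the histogram accumulation step (hypothetical data layout):
# sparse flow vectors are quantized per pixel into direction bins; the
# dominant bin at a pixel approximates the typical motion direction there.

def accumulate_flow(histograms, pixel, vector, bins=8):
    angle = math.atan2(vector[1], vector[0]) % (2 * math.pi)
    b = int(angle / (2 * math.pi) * bins) % bins
    histograms[pixel][b] += 1

def dominant_direction(histograms, pixel, bins=8):
    hist = histograms[pixel]
    b = max(hist, key=hist.get)              # most frequent direction bin
    return 2 * math.pi * (b + 0.5) / bins    # bin-center angle in radians

hists = defaultdict(lambda: defaultdict(int))
for v in [(1.0, 0.0), (1.0, 0.1), (0.0, 1.0)]:   # mostly rightward flow
    accumulate_flow(hists, (10, 20), v)
# the dominant direction at pixel (10, 20) falls in the rightward bin
```

Keeping a full histogram per pixel, rather than a single averaged vector, is what allows a pixel that lies on a crossing to retain more than one typical flow direction, as the abstract requires.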