With regard to the integration of the individual life-cycle phases of a building and of the various participants, especially within building planning and revitalization processes, there are currently significant deficits. The overall objective of the research presented in this contribution is to support and improve this integration by providing all building-related information across disciplines and life-cycle phases. This requires, on the one hand, suitable approaches for modelling and integrating the diverse discipline-specific data and, on the other hand, suitable solutions that enable global access, navigation and search across the entire data set. The modelling and management of building-related data has been the subject of various research efforts for some time. Within the SFB 524, a dedicated approach based on a runtime-dynamic federation of partial models was developed; its essential features are contrasted with other approaches. The focus of this contribution, however, is the development of a suitably flexible navigation and search layer for realizing project-wide information retrieval. From the perspective of modelling and data management, as well as from the perspective of information retrieval and presentation in planning processes, such retrieval tools face various requirements, the most fundamental being maximum flexibility with respect to the available presentation techniques and their free combination with formal query techniques. The system concept developed is based on a framework that defines several basic types of retrieval modules and their interaction principles. Individual retrieval modules are realized as instances of these module types and can be integrated into the navigation layer dynamically at runtime, as needed.
The technical realization of the system takes place in the environment of existing prototypes from preceding research activities. This technical environment imposes various constraints, which made several adaptations of the general system concept necessary prior to prototypical implementation. This contribution presents the current state of development of the system from a conceptual and technical perspective, together with first prototypical realizations of retrieval modules.
LIFETIME-ORIENTED OPTIMIZATION OF BRIDGE TIE RODS EXPOSED TO VORTEX-INDUCED ACROSS-WIND VIBRATIONS
(2006)
In recent years, damage in the welded connection plates of vertical tie rods of several arched steel bridges has been reported. This damage is due to fatigue caused by wind-induced vibrations. In the present study, such phenomena are examined, and the corresponding lifetime of a reference bridge in Münster-Hiltrup, Germany, is estimated based on the actual shape of the connection plate. The results obtained are then compared to the expected lifetime of a connection plate whose geometry has been optimized separately. The structural optimization, focusing on the shape of the cut at the hanger ends, has been carried out using evolution strategies. The oscillation amplitudes have been computed by means of the Newmark-Wilson time-step method, using an appropriate load model that has been validated by on-site experiments on the selected reference bridge. The corresponding stress amplitudes are evaluated by multiplying the oscillation amplitudes by a stress concentration factor. This factor has been computed on the basis of a finite element model of the system "hanger-welding-connection plate", applying solid elements according to the notch stress approach. The damage estimation takes into account the stochastic nature of the exciting wind process as well as that of the material parameters (fatigue strength), given in terms of Woehler curves. The shape optimization results in a substantial increase of the estimated hanger lifetime. A comparison of the lifetimes of the bulk plate and of the weld revealed that, in the optimized structure, the weld, the most sensitive part of the original structure, shows much more resistance against potential damage than the bulk material.
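The damage-accumulation step described in this abstract can be sketched with a Palmgren-Miner sum over a Woehler (S-N) curve. All numbers below (curve constants, stress concentration factor, amplitude spectrum) are illustrative placeholders, not values from the study:

```python
# Sketch of the damage-accumulation step: oscillation amplitudes are
# scaled by a stress concentration factor and evaluated against a
# Woehler (S-N) curve via the Palmgren-Miner rule.
# All numerical values are invented for illustration.

def woehler_cycles_to_failure(stress_amplitude, C=1e12, m=3.0):
    """Cycles to failure N = C * S^-m (S in MPa); C and m are assumed."""
    return C * stress_amplitude ** (-m)

def miner_damage(oscillation_amplitudes_mm, cycles_per_amplitude, scf=2.5,
                 mpa_per_mm=10.0):
    """Accumulate damage D = sum(n_i / N_i); failure is predicted at D >= 1."""
    damage = 0.0
    for a_mm, n in zip(oscillation_amplitudes_mm, cycles_per_amplitude):
        stress = scf * mpa_per_mm * a_mm     # notch stress amplitude in MPa
        damage += n / woehler_cycles_to_failure(stress)
    return damage

# Example: three amplitude classes from a hypothetical wind load spectrum.
D = miner_damage([0.5, 1.0, 2.0], [1e6, 1e5, 1e4])
lifetime_factor = 1.0 / D    # number of such load blocks until D reaches 1
```

In this setting, the shape optimization acts on the stress concentration factor: lowering it raises every N_i and hence the estimated lifetime.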
DIGITAL SUPPORT OF MATERIAL- AND PRODUCT SELECTION IN THE ARCHITECTURAL DESIGN- AND PLANNING PROCESS
(2006)
Architecture is predominantly perceived through the surfaces that delimit space. The surface materials used should therefore support the design intention while fulfilling various technical and economic requirements. If the architect wants to select the "right" or "best" material, he has to balance very different and sometimes contradictory criteria and must weight them individually for the specific purpose. This selection process is only insufficiently supported by today's digital systems. If all the various parameters can be represented by numerical values, the method of multidimensional scaling offers architects a way to find the best-fitting material on the basis of their individual weighting of criteria. By displaying the result of the architect's multidimensional query in a spatial arrangement, multidimensional scaling can support an interactive selection process with additional feedback on the applied search strategy.
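The selection idea above can be sketched with classical multidimensional scaling: materials are rated on numeric criteria, the architect's weights rescale the criteria axes, and the weighted distances are embedded in 2-D for visual comparison. The materials, scores and weights below are invented:

```python
import numpy as np

# Minimal classical MDS sketch: criteria scores are rescaled by the
# architect's weights, pairwise distances are double-centered, and the
# two dominant eigenvectors give a 2-D arrangement for visual selection.

def weighted_mds_2d(scores, weights):
    X = np.asarray(scores, float) * np.sqrt(np.asarray(weights, float))
    # pairwise squared Euclidean distances in the weighted criteria space
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    n = len(X)
    J = np.eye(n) - np.ones((n, n)) / n          # centering matrix
    B = -0.5 * J @ sq @ J                        # double-centered Gram matrix
    vals, vecs = np.linalg.eigh(B)               # ascending eigenvalues
    order = np.argsort(vals)[::-1][:2]           # two largest eigenvalues
    return vecs[:, order] * np.sqrt(np.maximum(vals[order], 0.0))

# Four hypothetical materials rated on cost, durability, appearance (0..10);
# durability is weighted most heavily, appearance least.
scores = [[2, 8, 9], [7, 6, 4], [5, 5, 5], [9, 2, 8]]
coords = weighted_mds_2d(scores, weights=[1.0, 2.0, 0.5])
```

Re-running with different weights moves the materials in the plane, which is exactly the interactive feedback loop the abstract describes.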
This paper proposes a method for calculating internal forces and displacements in reinforced concrete beams before and after cracking. For ideally elastic bars, the transfer matrix proposed by Rakowski is applied. The effects associated with cracking are introduced by means of Borcz's theory in a spectral way. A numerical example is presented. The approach also makes it possible to treat dynamic problems and those connected with the stability of compressed and bent cracked beams and columns.
The reduction of the oscillation amplitudes of structural elements is necessary not only to maintain their durability and longevity but also to eliminate the harmful effect of oscillations on people and technological operations. Dampers are widely applied for this purpose. One of the most widespread models of structural friction forces, with a piecewise-linear relation to displacement, is analysed. The author suggests mapping the phase trajectories in the "acceleration – displacement" plane. Unlike trajectory mapping in the "velocity – displacement" plane, this does not require a large number of geometrical constructions to identify the characteristics of dynamic systems, which improves accuracy. The analytical assumptions have been verified by numerical modelling. The results show good agreement between the numerical and analytical estimates of the dissipative characteristic.
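The phase-map idea above can be illustrated numerically. The sketch below integrates a damped oscillator, records the (displacement, acceleration) trajectory, and estimates the dissipative characteristic from successive displacement peaks via the logarithmic decrement; simple viscous damping stands in here for the piecewise-linear structural-friction law of the paper, and all parameters are invented:

```python
import math

# Damped oscillator integrated with semi-implicit Euler; the list
# `trajectory` holds the points (x, a) of the acceleration-displacement
# phase map, and peaks are detected at velocity sign changes.

def simulate(omega=2 * math.pi, zeta=0.02, x0=1.0, dt=1e-4, t_end=5.0):
    x, v = x0, 0.0
    trajectory = []                      # (displacement, acceleration) points
    peaks = []                           # successive displacement maxima
    prev_v = 0.0
    t = 0.0
    while t < t_end:
        a = -omega**2 * x - 2 * zeta * omega * v
        trajectory.append((x, a))
        if prev_v > 0.0 >= v:            # velocity +/- crossing marks a peak
            peaks.append(x)
        prev_v = v
        v += a * dt
        x += v * dt
        t += dt
    return trajectory, peaks

traj, peaks = simulate()
# logarithmic decrement between successive peaks -> damping ratio estimate
delta = math.log(peaks[0] / peaks[1])
zeta_est = delta / math.sqrt(4 * math.pi**2 + delta**2)
```

The recovered `zeta_est` should reproduce the damping ratio used in the simulation, which is the kind of identification-from-trajectories step the abstract verifies analytically.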
This paper proposes the application of a two-parameter damage model, based on a non-linear finite element approach, to the analysis of masonry panels. Masonry is treated as a homogenized material whose characteristics can be defined using a homogenization technique. Masonry panels subjected to shear loading are studied using the proposed procedure within the framework of three-dimensional analyses. The nonlinear behaviour of masonry can be modelled using concepts of damage theory. An adequate damage function is defined to account for the different response of masonry under tension and compression. Cracking can therefore be interpreted as a local damage effect, defined by the evolution of known material parameters and by one or several functions which control the onset and evolution of damage. The model takes into account all the important aspects of the nonlinear analysis of masonry structures, such as stiffness degradation due to mechanical effects and the objectivity of the results with respect to the finite element mesh. Finally, the proposed damage model is validated against experimental results available in the literature.
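The core idea of a damage function degrading the stiffness can be shown in one dimension. The exponential softening law and all parameters below are generic placeholders, not the two-parameter model of the paper:

```python
import math

# Minimal 1-D damage illustration: a scalar damage variable d grows once
# an equivalent strain exceeds a threshold kappa0, degrading the elastic
# stiffness to (1 - d) * E. Parameters are invented.

def damage(eq_strain, kappa0=1e-4, beta=500.0):
    """d = 0 below the threshold, then grows monotonically towards 1."""
    if eq_strain <= kappa0:
        return 0.0
    return 1.0 - (kappa0 / eq_strain) * math.exp(-beta * (eq_strain - kappa0))

def stress(strain, E=30e9):              # stiffness in Pa, illustrative
    d = damage(abs(strain))
    return (1.0 - d) * E * strain

# Stress rises elastically, peaks at the threshold, then softens.
s_elastic = stress(0.5e-4)               # below threshold: no damage
s_peak    = stress(1.0e-4)               # at the damage threshold
s_soft    = stress(5.0e-4)               # on the softening branch
```

A model of the kind in the paper would use two such laws (tension and compression) driven by separate damage functions, which is what produces the asymmetric masonry response.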
This paper presents results of applying a Fuzzy Inference System to estimate the number of potential Park and Ride users. This number is usually difficult to evaluate because it depends on human factors, and the data in the system under consideration are uncertain. In such a situation traditional mathematical approaches cannot handle rough data, so a fuzzy approach can be applied. Fuzzy methodology is well suited to describing the choice of mode of transport, especially since the uncertainty accompanying the choice process has a rather fuzzy character. The proposed approach is based on the Mamdani Fuzzy Inference System, with calculations performed in Matlab using the Fuzzy Logic Toolbox. The Mamdani model requires, as input, knowledge of the shape of the membership functions. These functions can be calibrated using the results of questionnaires conducted among users of the Park and Ride system. Due to the lack of a representative sample of users, the results of experts' questionnaires were used as input data for calibrating the shape of the membership functions. The describing factor is the generalized cost of the trip for the different modes of transport. The proposed approach consists of two main stages: modelling the share of public/private transport trips, and a multimodal model estimating the number of Park and Ride users. Verification of the presented methodology is treated as an indirect proof: the approach is applied to estimate a bi-modal split, and the results are compared with traditional approaches based on logit functions. The comparability of the fuzzy results with traditional logit models can be treated as a confirmation of the chosen methodology.
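A Mamdani inference step of the kind described can be sketched in a few lines. The example below uses one input (a made-up generalised cost ratio car vs. Park and Ride) and one output (P+R share); the membership shapes and rules are invented, whereas in the paper they are calibrated from expert questionnaires in the Matlab Fuzzy Logic Toolbox:

```python
# Toy Mamdani fuzzy inference: triangular memberships, min for rule
# firing, max for aggregation, centroid defuzzification on a grid.

def tri(x, a, b, c):
    """Triangular membership function with feet a, c and peak b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def pr_share(cost_ratio):
    # rule strengths: "car cheap -> low P+R share", "car dear -> high share"
    w_low = tri(cost_ratio, 0.0, 0.5, 1.2)     # car clearly cheaper
    w_high = tri(cost_ratio, 0.8, 1.5, 3.0)    # car clearly dearer
    # Mamdani: clip the output sets, aggregate by max, take the centroid
    num = den = 0.0
    for i in range(101):
        share = i / 100.0
        mu = max(min(w_low, tri(share, 0.0, 0.1, 0.4)),
                 min(w_high, tri(share, 0.3, 0.7, 1.0)))
        num += share * mu
        den += mu
    return num / den if den else 0.5

low = pr_share(0.4)     # car much cheaper -> few P+R users
high = pr_share(2.0)    # car much dearer  -> many P+R users
```

Replacing the invented triangles with questionnaire-calibrated memberships turns this toy into the first stage (modal split) of the two-stage procedure above.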
The paper describes dynamic effects in a silo wall during the outflow of the stored material. The work allows the danger of structural damage due to resonant vibrations to be assessed and is of practical importance for determining the influence of cyclic pressures and vibro-creep during prolonged use of a silo. The paper is based on tests of silo walls at semi-technical scale. The model is generally applicable and also allows the identification of parameters in full-size silos.
This research focuses on an approach to describing principles of architectural layout planning within the domain of revitalization. With the aid of mathematical rules executed by a computer, solutions to design problems are generated. Provided that "design" is in principle a combinatorial problem, i.e. a constraint-based search for an overall optimal solution, an exemplary method is described for solving such problems in architectural layout planning. To avoid conflicts relating to theoretical subtleties, a customary approach adopted from Operations Research has been chosen. In this approach, design is a synonym for planning, which can be described as a systematic and methodical course of action for the analysis and solution of current or future problems. The planning task is defined as the analysis of a problem with the aim of preparing optimal decisions by the use of mathematical methods. The decision problem of a planning task is represented by an optimization model, and an efficient algorithm is applied to find one or more solutions to the problem. The basic principle underlying the approach presented here is the understanding of design as a search for solutions that fulfil specific criteria. This search is executed using a constraint programming language.
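The "design as constraint-based search" idea can be made concrete with a deliberately tiny example. The rooms, slots and constraints below are invented, and brute-force enumeration stands in for the constraint programming language used in the research:

```python
from itertools import permutations

# Toy layout problem: assign four rooms to four slots along a corridor
# so that every area constraint and one adjacency constraint hold.

ROOMS = ["office", "meeting", "archive", "wc"]
SLOTS = [0, 1, 2, 3]                      # positions along the corridor
AREA  = {0: 30, 1: 20, 2: 20, 3: 10}      # usable area per slot (m^2)
NEED  = {"office": 25, "meeting": 18, "archive": 12, "wc": 8}

def feasible(assignment):
    """assignment maps room -> slot; all constraints must hold."""
    if any(NEED[r] > AREA[s] for r, s in assignment.items()):
        return False                      # area constraints
    if abs(assignment["office"] - assignment["meeting"]) != 1:
        return False                      # adjacency constraint
    return True

solutions = [dict(zip(ROOMS, p)) for p in permutations(SLOTS)
             if feasible(dict(zip(ROOMS, p)))]
```

A real constraint solver prunes the search tree instead of enumerating all permutations, but the formulation — variables, domains, constraints — is the same.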
DECENTRALIZED APPROACHES TO ADAPTIVE TRAFFIC CONTROL AND AN EXTENDED LEVEL OF SERVICE CONCEPT
(2006)
Traffic systems are highly complex multi-component systems suffering from instabilities and non-linear dynamics, including chaos. This is caused by the non-linearity of interactions, delays, and fluctuations, which can trigger phenomena such as stop-and-go waves, noise-induced breakdowns, or slower-is-faster effects. Upcoming information and communication technologies (ICT) promise new solutions, leading from classical, centralized control to decentralized approaches in the sense of collective (swarm) intelligence and ad hoc networks. An interesting application field is adaptive, self-organized traffic control in urban road networks. We present control principles that allow a self-organized synchronization of traffic lights to be reached. Furthermore, vehicles will become automatic traffic state detection, data management, and communication centers when forming ad hoc networks through inter-vehicle communication (IVC). We discuss the mechanisms and the efficiency of message propagation on freeways by short-range communication. Our main focus is on future adaptive cruise control (ACC) systems, which will not only increase the comfort and safety of car passengers but also enhance the stability of traffic flows and the capacity of the road ("traffic assistance"). We present an automated driving strategy that adapts the operation mode of an ACC system to the autonomously detected local traffic situation. The impact on the traffic dynamics is investigated by means of a multi-lane microscopic traffic simulation. The simulation scenarios illustrate the efficiency of the proposed driving strategy: already an ACC equipment level of 10% drastically improves the traffic flow quality and reduces travel times by delaying or preventing a breakdown of the traffic flow. For the evaluation of the resulting traffic quality, we have recently developed an extended level of service concept (ELOS). We demonstrate our concept on the basis of travel times as the most important variable for a user-oriented quality of service.
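A single-lane car-following sketch illustrates the kind of microscopic simulation referred to above. It uses the Intelligent Driver Model (IDM), which is commonly used in this line of traffic research; the parameters are typical textbook values, not those of the study:

```python
import math

# IDM acceleration: a free-flow term minus an interaction term based on
# the desired dynamic gap s*. A platoon of vehicles follows a free leader.

def idm_accel(v, gap, dv, v0=33.3, T=1.2, a=1.0, b=1.5, s0=2.0):
    """v: own speed, gap: net distance to leader, dv: approach rate."""
    s_star = s0 + max(0.0, v * T + v * dv / (2 * math.sqrt(a * b)))
    return a * (1 - (v / v0) ** 4 - (s_star / gap) ** 2)

def simulate(n=10, dt=0.1, steps=3000, spacing=30.0):
    x = [-i * spacing for i in range(n)]   # positions, leader first
    v = [20.0] * n
    for _ in range(steps):
        acc = [idm_accel(v[0], 1e9, 0.0)]  # leader on a free road
        for i in range(1, n):
            gap = x[i - 1] - x[i] - 5.0    # 5 m vehicle length
            acc.append(idm_accel(v[i], gap, v[i] - v[i - 1]))
        for i in range(n):                 # synchronous update
            v[i] = max(0.0, v[i] + acc[i] * dt)
            x[i] += v[i] * dt
    return x, v

x, v = simulate()
```

An ACC strategy of the kind proposed in the paper would adapt the parameters T, a and b to the detected traffic situation (e.g. shorter time gaps near a bottleneck outflow).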
MODEL OF TRAM LINE OPERATION
(2006)
From the passenger's perspective, punctuality is one of the most important features of tram operations. Unfortunately, in most cases this requirement is only insufficiently fulfilled. In this paper we present a simulation model of tram operations with a special focus on punctuality. The aim is to obtain a helpful tool for designing timetables and for analyzing the effects of changing tram priorities at traffic lights or the kind of track separation. A realization of tram operations is assumed to be a sequence of running times between successive stops and times spent by the tram at the stops. The running time is modelled as the sum of its mean value and a zero-mean random variable. With the help of multiple regression we find that the average running time is a function of the length of the sections and the number of intersections. The random component is modelled as the sum of two independent zero-mean random variables: one describes the disturbance caused by waiting at an intersection, the other the disturbance caused by the driving process. The time spent at a stop is assumed to be a random variable as well; its distribution is estimated from measurements of stop times for different tram lines in Kraków. Finally, a special case of the introduced model is considered and numerical results are presented. This paper is associated with the CIVITAS-CARAVEL project "Clean and better transport in cities", which has received research funding from the Community's Sixth Framework Programme. The paper reflects only the author's views, and the Community is not liable for any use that may be made of the information contained therein.
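The running-time decomposition described above can be sketched as follows. The regression coefficients, noise spreads and the exponential stop-time distribution are invented, not the values fitted from the Kraków measurements:

```python
import random

# Running time per section = mean value (linear in section length and
# number of intersections) + two independent zero-mean disturbances
# (waiting at intersections, driving). Stop times are random as well.

def section_time(length_m, n_intersections, rng):
    mean = 20.0 + 0.08 * length_m + 12.0 * n_intersections    # seconds
    eps_wait = rng.gauss(0.0, 6.0 * n_intersections ** 0.5)   # intersections
    eps_drive = rng.gauss(0.0, 0.01 * length_m)               # driving
    return max(0.0, mean + eps_wait + eps_drive)

def line_run(sections, rng, stop_mean=18.0):
    """Total travel time over (length, intersections) pairs, with stops."""
    total = 0.0
    for length_m, n_int in sections:
        total += section_time(length_m, n_int, rng)
        total += rng.expovariate(1.0 / stop_mean)             # stop time
    return total

rng = random.Random(7)
sections = [(450, 1), (600, 2), (300, 0)]
runs = [line_run(sections, rng) for _ in range(2000)]
mean_time = sum(runs) / len(runs)
```

Repeating such runs for different signal-priority assumptions (i.e. different intersection noise) is exactly how the model supports timetable design.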
For the dynamic behavior of lightweight structures such as thin shells and membranes exposed to fluid flow, the interaction between the two fields is often essential. Computational fluid-structure interaction provides a tool to predict this interaction and to complement or eventually replace expensive experiments. Partitioned analysis techniques enjoy great popularity for the numerical simulation of these interactions, owing to their computational superiority over simultaneous, i.e. fully coupled monolithic, approaches: they allow the independent use of suitable discretization methods and modular analysis software. For the fluid, we use GLS-stabilized finite elements on a moving domain, based on the incompressible instationary Navier-Stokes equations, where the formulation guarantees geometric conservation on the deforming domain. The structure is discretized by nonlinear, three-dimensional shell elements.
Commonly used sequential staggered coupling schemes may exhibit instabilities due to the so-called artificial added mass effect. The best remedy to this problem is to invoke subiterations that guarantee kinematic and dynamic continuity across the fluid-structure interface. Since iterative coupling algorithms are computationally very costly, their convergence rate is decisive for their usability. To ensure and accelerate the convergence of this iteration, the updates of the interface position are relaxed. The time-dependent, 'optimal' relaxation parameter is determined automatically, without any user input, by exploiting a gradient method or applying an Aitken iteration scheme.
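The Aitken-relaxed subiteration can be sketched on a scalar fixed-point problem. The "coupled operator" g below is a made-up contractive stand-in for the real subiteration between the fluid and structure solvers:

```python
# Fixed-point iteration x_{k+1} = x_k + omega_k * (g(x_k) - x_k), where
# the relaxation factor omega_k is adapted dynamically from successive
# interface residuals (scalar Aitken Delta^2 update).

def aitken_fixed_point(g, x0, omega0=0.5, tol=1e-10, max_iter=100):
    x = x0
    omega = omega0
    r_prev = None
    for k in range(max_iter):
        r = g(x) - x                      # interface residual
        if abs(r) < tol:
            return x, k
        if r_prev is not None:
            # Aitken: omega_k = -omega_{k-1} * r_{k-1} / (r_k - r_{k-1})
            omega = -omega * r_prev / (r - r_prev)
        x = x + omega * r                 # relaxed interface update
        r_prev = r
    return x, max_iter

# Stand-in coupled operator with fixed point at x = 2.
g = lambda x: 0.7 * x + 0.6
x_star, iters = aitken_fixed_point(g, x0=0.0)
```

For a linear operator like this one the Aitken update is exact after a single adaptation, which is why the scheme accelerates the costly subiterations so effectively; in the FSI setting x is the vector of interface positions rather than a scalar.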
The paper presents a linear static analysis of continuous orthotropic thin-walled shell structures, simply supported at the transverse ends, with an arbitrarily deformable cross-section contour. The external loads can be arbitrary as well. This class of structures includes most bridges, scaffold bridges, some roof structures, etc. A numerical example of a continuous steel structure over five spans with an open cross-section contour has been solved. The structure is examined using two computational models: a prismatic structure consisting of isotropic strips, plates and ribs, considering their real interaction; and a smooth orthotropic plate equivalent to the structure of the first model. The displacements and forces characterizing the stressed and deformed state of the structure have been determined, and the results of the two solutions have been compared. The structure is studied with the force method in combination with the analytical finite strip method (AFSM) in displacements. The basic system is obtained by separating the superstructure from the substructure at the intermediate supports and consists of two parts: the first is a single-span thin-walled prismatic shell structure; the second comprises the supports (columns, space frames, etc.). The connection between the superstructure and the intermediate supports is made under arbitrary supporting conditions. The forces at the supporting points, in the direction of the removed connections, are taken as the basic unknowns of the force method. The superstructure is solved by the AFSM in displacements: the structure is divided in only one (transverse) direction into a finite number of plane strips connected to each other at longitudinal linear nodes. The three displacements of the points on the node lines and the rotation around these lines are the basic unknowns in each node. The boundary conditions of each strip of the basic system correspond to simple support along the transverse ends and restraint along the longitudinal ones. Each strip of the basic system is solved by the method of single trigonometric series. The method reduces to solving a discrete structure in displacements and restoring its continuity at the sections made, with respect to both displacements and forces. The two parts of the basic system are solved in sequence under the action of unit values of each of the basic unknowns and under the external load. The support part is solved using FEM structural analysis software. The basic unknown forces are determined from a system of canonical equations expressing the continuity of deformations at the removed connections between the superstructure and the intermediate supports. The final displacements and forces at an arbitrary point of the continuous superstructure are determined using the principle of superposition. The computations have been carried out with software developed in Visual Fortran 5.0 for PC.
The concept of a sensitivity analysis of the limit state of a structure with respect to selected basic variables is presented. The sensitivity is expressed in the form of the probability distribution of the limit state of the structure. The analysis is performed by a problem-oriented Monte Carlo simulation procedure, based on defining the elementary event as a structural limit state; the sample space thus consists of limit states of the structure. A one-dimensional random multiplier defined on this sample space is introduced, referring to the dominant basic variable (or group of variables) of the problem. The numerical procedure results in a set of random numbers whose normalized relative histogram is an estimator of the PDF of the limit state of the structure. Estimators of the reliability, or of the probability of failure, are statistical characteristics of this histogram. The procedure is illustrated by a sensitivity analysis of the serviceability limit state of a monumental structure: the colonnade of the Licheń Basilica in central Poland. The limit state of the structure is examined with reference to the horizontal deflection of the upper deck, with the wind actions taken as dominant variables. It is assumed that the wind load intensities acting on the lower and upper storeys of the colonnade are identically distributed but correlated random variables; three correlation variants are considered, and the corresponding limit state histograms are analysed. The paper ends with conclusions on the method and some general remarks on fully probabilistic design.
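The Monte Carlo procedure can be sketched with two correlated wind intensities driving a deflection response. The linear influence coefficients, load statistics and deflection limit below are all illustrative, not the values of the Licheń colonnade study:

```python
import random, math

# Sample identically distributed, correlated wind intensities for the
# lower and upper storey, evaluate a (made-up, linear) deck deflection,
# and estimate the probability of exceeding the serviceability limit.

def correlated_pair(rng, mean, std, rho):
    """Two N(mean, std) variables with correlation coefficient rho."""
    z1, z2 = rng.gauss(0, 1), rng.gauss(0, 1)
    return (mean + std * z1,
            mean + std * (rho * z1 + math.sqrt(1 - rho ** 2) * z2))

def exceedance_probability(rho, limit=25.0, n=200_000, seed=1):
    rng = random.Random(seed)
    count = 0
    for _ in range(n):
        w_lower, w_upper = correlated_pair(rng, mean=1.0, std=0.3, rho=rho)
        deflection = 8.0 * w_lower + 14.0 * w_upper    # mm, linear response
        if deflection > limit:
            count += 1
    return count / n

p_uncorr = exceedance_probability(rho=0.0)
p_corr   = exceedance_probability(rho=0.9)   # correlation raises the spread
```

Comparing the two estimates shows why the correlation variants matter: with positively correlated storey loads the deflection variance grows, so the exceedance probability increases even though the mean response is unchanged.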
Monitoring and assessment are key tasks in the management and revitalization of buildings. Different methods can be used to acquire the required geometric information, such as the size or deformation of a building. Since the potential of digital photography is growing continuously, industrial photogrammetry today represents an important alternative to classical methods such as strain gauges or other tactile sensors. Modern industrial photogrammetry captures images with digital systems, which means that the information in digital images must be analysed by digital image processing in order to obtain the image coordinates of the measuring points. One of the tasks of image processing for photogrammetric purposes is therefore to locate the centre of circular targets; modern operators deliver subpixel accuracy for the coordinates of a point. With regard to hardware, the optical measuring methods of industrial photogrammetry primarily require high-resolution digital cameras, which can be classified as video cameras, high-speed cameras, intelligent cameras, and so-called consumer and professional cameras. The geometric resolution of high-end digital cameras today exceeds 10 megapixels. For data transfer to the computer, several standards are available on the market, e.g. USB 2.0, GigE Vision, CameraLink or FireWire. The choice of standard always depends on the specific task, since none of these technologies holds a leading position. Modern photogrammetry offers many new possibilities for the monitoring and assessment of buildings. It can deliver one-, two-, three- or four-dimensional information, in real time if required. As a non-contact measuring method, photogrammetry can still be used where tactile sensors can no longer be applied, e.g. because of the space they require. High-resolution video cameras even allow dynamic investigations to be carried out with great precision.
For reliable planning of work on existing buildings, a wealth of very different information must be taken into account, much of which only becomes available during the planning or construction process. The prerequisite is always a survey of the existing building. Although computer programs supporting such building surveys exist, they are exclusively isolated solutions: exporting the recorded data into a planning system entails a loss of information. Despite the potential of current CAAD/BIM systems to manage as-built data, they are primarily designed for the planning of new buildings. The continuous processing of refurbishment projects within one CAAD/BIM system, from the survey of the existing building through design and approval planning to detailed design, is currently not adequately supported. At the Chair of Informatics in Architecture (Professur Informatik in der Architektur, InfAR) at the Faculty of Architecture of the Bauhaus-Universität Weimar, concepts and prototypes for a discipline-oriented support of planning in existing buildings have been developed in recent years within the DFG collaborative research centre SFB 524 "Werkstoffe und Konstruktionen für die Revitalisierung von Bauwerken". The focus was on capturing all as-built data relevant to planning and on representing it in a dynamic building model. Building on this research, the article deals with the context-related reuse and targeted provision of as-built data in the process of planning in existing buildings, and with the integration of concepts of planning-oriented building surveying into commercial CAAD/BIM systems.
RESEARCH OF DEFORMATION OF MULTILAYERED PLATES ON UNDEFORMABLE BASIS BY UNFLEXURAL SPECIFIED MODEL
(2006)
The stress-strain state (SSS) of multilayered plates on an undeformable foundation is investigated. The computational scheme for a transversely loaded plate is formed by attaching the plate symmetrically with respect to its contact surface with the foundation. The plate of double thickness is then loaded bilaterally and symmetrically with respect to its median surface. This allows modelling only the unflexural deformation, which reduces the number of unknowns and the overall order of differentiation of the resolving system of equations. The developed refined continual model takes into account transverse shear and transverse compression deformations in a high iterative approximation. Both rigid contact between the foundation and the plate and frictionless shear on the contact surface are considered. Calculations confirm the efficiency of this approach, yielding solutions that are qualitatively and quantitatively close to three-dimensional solutions.
In the refurbishment of old buildings and in building surveys in civil engineering, it is frequently necessary to update existing plans with respect to the state of a building or, when such plans are no longer accessible, to create entirely new plan documents of the as-is state. Laser scanning technology offers a convenient way of acquiring these building data. In this context, the present article introduces approaches to partially automating the generation of a three-dimensional computer model of a building. The result is a volume model in which the geometric and topological information about faces, edges and vertices is first described in the sense of a B-rep model. The objects of this volume model are analysed with methods from the field of artificial intelligence and systematically categorized into building-element classes. Knowledge of the building-element semantics thus makes it possible to derive a building product model from the data and to make it accessible to individual specialist planners, for instance for preparing an energy certificate. The paper demonstrates the successful use of neural networks in building surveying by means of a complex example.
A new application area of software technology is smart living, or sustainable living. Within this area, application platforms are designed and realized with the goal of supporting value-added services. In this context, value-added services integrate microelectronics, home automation and services to enhance the attractiveness of flats, homes and buildings. Real estate companies and service providers dealing with home services in particular are interested in an effective design and management of their services. Service engineering is an established approach to designing customer-oriented service processes. It consists of several phases, from situation analysis through service creation and service design to service management. This article describes how the service blueprint method can be used to design service processes. Smart living comprises all measures that turn a flat into a smart home. One special requirement of this application domain is the use of local components (actuators, sensors) within service processes. The article shows how this extended method supports service providers in improving the quality of customer-oriented service processes and in deriving the required interfaces of the actors involved. For the civil engineering process, it becomes possible to derive the required information from a built-in home automation system. The aim is to show how to obtain the smart local components needed to fulfil the IT-supported value-added services offered later on. Value-added services focused on inhabitants are grouped into consulting and information, care and supervision, leisure-time activities, repairs, mobility and delivery, safety and security, and supply and disposal.
Water resources development and management is a complex problem. It includes the design and operation of single system components, often as part of larger interrelated systems and usually on the basis of river basins. While several decades ago the dominant objective was the maximization of economic benefit, other objectives have evolved as part of the sustainable development envisaged. Today, planning and operation of larger water resources systems is practically impossible without adequate computer tools, normally one or several models, increasingly combined with database management systems and multi-criteria assessment procedures in decision support systems. The use of models in civil engineering already has a long history where structural engineering is concerned. These design support models, however, must rather be seen as expert systems made to support the engineer in his daily work; they often have no direct link to the stakeholder and decision maker communities. The scale of investigation is often much larger in water resources engineering than in structural engineering, which entails different stakeholders and decision-making procedures. Still, several similarities are obvious, which can be summarized as the search for a compromise solution to a complex, i.e. multiobjective and interdisciplinary, decision problem. While in structural engineering, for example, aesthetics, stability and energy consumption might be important evaluation criteria in addition to construction and maintenance cost, other or additional criteria have to be considered in water resources planning, such as political, environmental and social criteria. In this respect civil engineers tend to overemphasize technical criteria. In the future, the existing expert systems should be embedded into an improved decision support shell, keeping in mind that decision makers are hardly interested in numerical modelling results.
The paper will introduce into the problem and demonstrate the state of the art by means of an example.
The paper is devoted to exploring the solvability of the market segmentation problem by means of linear convolution (weighted-sum) algorithms. The mathematical formulation of this problem is an interval problem of covering a bipartite graph by stars. The vertices of the first partition correspond to commodity types, the vertices of the second to customer groups. An appropriate method is proposed for reducing the interval problem to a two-criterion problem for which a linear convolution algorithm has been implemented. It is proved that the multicriterion, and consequently the interval, market segmentation problem cannot be solved with the help of a linear convolution algorithm.
For the assessment of old buildings, thermographic analysis with infrared cameras is now widely employed. Image processing and evaluation can be economically practicable only if the image evaluation is also automated to the largest possible extent. For that reason, methods of computer vision are presented in this paper for evaluating thermal images. To detect typical elements in thermal images, i.e. grey-value images, such as thermal bridges and lintels, methods of digital image processing have been applied, for which numerical procedures are available to transform, modify and encode images. Image processing can be regarded as a multi-stage process: in order to carry out image analysis from image formation through enhancement and segmentation to categorization, appropriate functions must be implemented. For this purpose, different measuring procedures and methods for automated detection and evaluation have been tested.
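One stage of the pipeline described above, segmenting conspicuously warm regions in a grey-value thermal image, can be sketched as follows. The image, the threshold rule and the lintel heuristic are invented for illustration and are not the procedures tested in the paper:

```python
import numpy as np

# Toy thermal image: a cool facade with a warm horizontal band, as a
# thermal bridge above a lintel might appear (all values invented).
img = np.full((8, 8), 50, dtype=np.uint8)   # cool background
img[3:5, 1:7] = 200                         # warm horizontal band

# Segmentation: pixels significantly warmer than the robust background level.
background = np.median(img)
mask = img > background + 30

# Simple shape cue for categorization: a wide, flat warm region is a
# lintel / thermal-bridge candidate.
rows = np.any(mask, axis=1)
top = np.argmax(rows)
bottom = len(rows) - 1 - np.argmax(rows[::-1])
aspect_wide = (mask.sum(axis=1).max() >= 4) and (bottom - top + 1 <= 2)
```

A real pipeline would add the enhancement and encoding stages mentioned in the abstract before such a segmentation step.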
The Lucas-Kanade tracker has proven to be an efficient and accurate method for calculating the optical flow. However, this algorithm can reliably track only suitable image features such as corners and edges. Therefore, the optical flow can be calculated only for a few points in each image, resulting in sparse optical flow fields. Accumulating these vectors over time is a suitable method for retrieving a dense motion vector field; however, the accumulation process limits the application of the proposed method to fixed camera setups. Here, a histogram-based approach is favoured to allow more than a single typical flow vector per pixel. The resulting vector field can be used to detect roads and prescribed driving directions which constrain object movements. The motion structure can be modelled as a graph: the nodes represent entry and exit points for road users as well as crossings, while the edges represent typical paths.
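The histogram-based accumulation idea can be sketched in a few lines. The sample format `(x, y, dx, dy)` and the binning scheme are assumptions for illustration; the paper's actual tracker output may differ:

```python
import numpy as np

def accumulate_flow(tracks, shape, n_bins=8):
    """Accumulate sparse flow vectors into a per-pixel direction histogram.

    tracks: iterable of (x, y, dx, dy) sparse flow samples (assumed format).
    Returns an (H, W, n_bins) count array, so more than one typical flow
    direction per pixel can be represented.
    """
    hist = np.zeros(shape + (n_bins,), dtype=np.int64)
    for x, y, dx, dy in tracks:
        angle = np.arctan2(dy, dx) % (2 * np.pi)
        b = int(angle / (2 * np.pi) * n_bins) % n_bins
        hist[y, x, b] += 1
    return hist

def dominant_direction(hist, x, y):
    """Most frequently observed motion direction (bin index) at a pixel."""
    return int(np.argmax(hist[y, x]))

# Toy example: repeated rightward motion at pixel (2, 3), one upward outlier.
tracks = [(2, 3, 1.0, 0.0)] * 5 + [(2, 3, 0.0, 1.0)]
H = accumulate_flow(tracks, (8, 8))
```

With enough observation time, ridges of consistent dominant directions in such a histogram field trace out the roads and driving directions mentioned in the abstract.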
Digital models of buildings are widely used in civil engineering. In these models, geometric information serves as the leading information. Engineers are used to working with geometric information; it is state of the art, for instance, to specify a point by its three coordinates. However, the traditional approaches have disadvantages: geometric information is over-determined, so more geometric information is specified and stored than is needed. In addition, engineers already deal with topological information; the denotation of objects in buildings is of a topological nature. The question is whether approaches in which topological information takes the leading role would be more efficient in civil engineering. This paper presents such an approach. Topological information is modelled independently of geometric information and is used for denoting the objects of a building. Geometric information is associated with topological information, so that geometric information “weights” the topology.
The concept presented in this paper has already been used in surveying existing buildings. Experience with this concept showed that the amount of geometric information required for a complete specification of a building could be reduced by a factor of up to 100. Further research will show how this concept can be used in planning processes.
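The topology-first idea can be illustrated with a minimal sketch. All object names and measures below are hypothetical; the point is only that the topology (which walls bound which rooms) is modelled independently, while geometry is attached sparsely as a "weight" on topological objects:

```python
# Topology: rooms denoted by the walls that bound them (names invented).
topology = {
    "room1": ["wallA", "wallB", "wallC", "wallD"],
    "room2": ["wallC", "wallE", "wallF", "wallG"],  # wallC is shared
}

geometry = {}  # sparse: only store geometric measures that are needed

def weight(obj, **measures):
    """Attach geometric measures to a topological object."""
    geometry.setdefault(obj, {}).update(measures)

weight("wallC", length=4.5, thickness=0.24)  # hypothetical survey values

def shared_walls(r1, r2):
    """Topological query: walls common to two rooms, no coordinates needed."""
    return sorted(set(topology[r1]) & set(topology[r2]))
```

Queries like adjacency are answered purely topologically; coordinates are consulted only where a measure was actually attached, which is one way the over-determination of purely geometric models can be avoided.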
Optimum technological solutions must take into account the entire life cycle of structures, including design procedures as well as quality assurance, inspection, maintenance and repair strategies. Unfortunately, current design standards do not provide a satisfactory basis to ensure expected structural lifetimes. The latter may vary from only a few years for temporary structures to over a century for bridges, water dams or nuclear repositories. Consistent scientific concepts are urgently required to cover this wide spectrum of lifetimes in structural design and maintenance. This was the motivation for a group of scientists at the Ruhr-University Bochum (RUB) to start, in 1996, a special research program supported by the German Research Foundation (DFG) within the Collaborative Research Center SFB 398. Institutes of the University of Wuppertal and of the University of Duisburg-Essen joined the research group. The goal of the Center is to study sources of damage and deterioration in materials and structures, to develop consistent models and simulation methods, to predict structural lifetimes and finally to integrate these predictions into new lifetime-oriented design strategies.
Research activities in our center are organised in three Project Groups as follows:
- Modelling of lifetime effects
- Methods for lifetime-oriented structural analyses
- Future lifespan-oriented design strategies.
The aim of this paper is to present the so-called discrete-continual boundary element method (DCBEM) of structural analysis. Its field of application comprises buildings and structures, as well as parts and components of residential, commercial and uninhabited structures whose physical and geometrical parameters are invariant along some dimension. Typical objects include beams, thin-walled bars, strip foundations, plates, shells, deep beams, high-rise buildings, extended buildings, pipelines, rails, dams and others. DCBEM belongs to the group of semianalytical methods. Semianalytical formulations are contemporary mathematical models that have become realizable thanks to the substantial speed-up in computer performance. DCBEM is based on the theory of pseudodifferential boundary equations; the corresponding pseudodifferential operators are discretely approximated using Fourier or wavelet analysis. The main advantages of DCBEM over other numerical methods are a double reduction in the dimension of the problem (the discrete mesh is applied not to the full region of interest but only to the boundary of the region's cross-section, so that effectively a one-dimensional problem with a finite step on the boundary is solved), the opportunity to carry out very detailed analysis of specific chosen zones, simplified preparation of the initial data, and simple, adaptive algorithms. Two ways of defining and conducting a DCBEM analysis have been developed, indirect (IDCBEM) and direct (DDCBEM); as in the boundary element method (BEM), the indirect variant is applied somewhat more often than the direct one.
The design of challenging space structures frequently relies on the theory of folded plates. The models are composed of plane facets whose bending and membrane stiffnesses are coupled along the folds. In conventional finite element analysis of faceted structures, the continuity of the displacement field is enforced exclusively at the nodes. Since approximate solutions for transverse and for in-plane displacements are not members of the same function space, separation occurs between the common nodes of adjacent elements. It is shown that the kinematic assumptions of Bernoulli account for this incompatibility along the edges in facet models. A general answer to this problem involves substantial modification of plate and membrane theory, but a straightforward formulation can be derived for simply folded plates, i.e. structures whose folds do not intersect. A broad class of faceted structures, including models of various curved shells, belongs to this category and can be calculated consistently. The additional requirements to assure continuity concern the mapping of displacement derivatives on the edges. An appropriate finite facet element provides node- and edge-oriented degrees of freedom, whose transformation to system degrees of freedom depends on the geometric configuration at each node. The concept is implemented using conforming triangular elements. To evaluate the new approach, the energy norm of representative structures is calculated for refined meshes. The focus is placed on the mathematical convergence towards reliable solutions obtained from finite volume models.
By treating the production process as the central transformation element, the structure of construction production is captured realistically. Integrating a process-oriented definition of cost relates the relevant cost parameters and production factors in such a way that they are consistent with the real cost structure and cost dynamics of a construction site. The relationship between construction time and cost is captured and evaluated directly. The high dynamics of construction production between capacity-constrained resources and production processes is accounted for by the pool model and by simulation as the computational method. Cyclically repeating work processes (line-of-balance scheduling) can be modelled easily; in the simulation, the work cycles emerge from the capacity constraints without user intervention. An optimization method allows an automated search for the cheapest or fastest production variant.
To keep the coordination and execution of planning tasks in construction projects manageable, the planning process is increasingly described in formalized models, so-called process models. Product model research, for its part, is devoted to storing planning data in the computer in the form of object-oriented models. Its main concerns are maintaining consistency and modelling dependencies within this planning material; the relation to the actors of the planning process is not established directly. A formally described planning process cannot yet be realized in practice in such a way that access to individual objects of the planning process is guaranteed. Existing planning-support and workflow-management systems still abstract and organize the planning material at the file level. This article describes a method for suitably connecting formalized process models in construction planning with the individual objects encoded in the model-oriented object sets. The assignment of particular objects to plans and documents (for the purpose of data exchange) is then no longer determined by their physical allocation to files. A formal means of description is presented that enables the corresponding subsets to be formed from the totality of planning objects. In the previous forms of data exchange, subsets are extracted from the planning object models and physically transported between the planners. The new means of description, by contrast, allows the formation rule for object subsets, rather than the subsets themselves, to be exchanged between the planners; access to the concrete objects then takes place directly, based on the model.
The concrete is modelled as a material with damage and plasticity, where the viscoplastic and viscoelastic behaviour depend on the rate of the total strains. Due to the damage behaviour, the compliance tensor develops different properties in tension and compression. Various yield surfaces, flow rules and damage rules have been tested with respect to their usability in a concrete model. A three-dimensional yield surface was developed by the author from a failure surface based on the Willam-Warnke five-parameter model. Only one general uniaxial stress-strain relation is used for the numerical control of the yield surface; from that curve, all parameters needed for different concrete strengths and different strain rates can be derived by affine transformations. For the flow rule, a non-associated inelastic potential is used in the compression zone and a Rankine potential in the tension zone. Owing to the time-dependent formulation, the symmetry of the system equations is maintained despite the use of non-associated potentials for deriving the inelastic strains. For quasi-static computations, a simple viscoplastic law based on an approach by Perzyna is used. The principle of equal dissipation power in the uniaxial and the triaxial state of stress is applied; it is modified by a factor that depends on the actual stress ratio and, in comparison with the Kupfer experiments, yields more realistic strains. The concrete model is implemented in a mixed hybrid finite element. Examples at the structural level are presented to verify the concrete model.
Due to economic, technical or political reasons, about 100 nuclear power plants all over the world have been disconnected to date. All these power stations are still awaiting their complete dismantling, which, for a single reactor, incurs costs of up to one billion euros and takes up to 15 years. In our contribution we present a resource-constrained project scheduling approach minimizing the total discounted cost of dismantling a nuclear power plant. Such a project can be subdivided into a number of disassembling activities. The execution of these activities requires time and scarce resources such as manpower, special equipment or storage facilities for the contaminated material arising from the dismantling. Moreover, we have to regard several minimum and maximum time lags (temporal constraints) between the start times of the different activities. Finally, each disassembling activity can be processed in two alternative execution modes, which lead to different disbursements and determine the resource requirements of the considered activity. The optimization problem is to determine a start time and an execution mode for each activity such that the discounted cost of the project is minimal and neither the temporal constraints are violated nor the activities' resource requirements exceed the availability of any scarce resource at any point in time. We introduce an appropriate multi-mode project scheduling model with minimum and maximum time lags as well as renewable and cumulative resources for this optimization problem. Furthermore, we show that the problem is NP-hard in the strong sense. For small problem instances, optimal solutions can be obtained from a relaxation-based enumeration approach incorporated into a branch-and-bound algorithm. In order to solve large problem instances as well, we also propose a truncated version of the devised branch-and-bound algorithm.
We consider efficient numerical methods for the solution of partial differential equations with stochastic coefficients or right hand side. The discretization is performed by the stochastic finite element method (SFEM). Separation of spatial and stochastic variables in the random input data is achieved via a Karhunen-Loève expansion or Wiener's polynomial chaos expansion. We discuss solution strategies for the Galerkin system that take advantage of the special structure of the system matrix. For stochastic coefficients linear in a set of independent random variables we employ Krylov subspace recycling techniques after having decoupled the large SFEM stiffness matrix.
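The separation of spatial and stochastic variables via a Karhunen-Loève expansion can be sketched for a discretized random field. The exponential covariance kernel and all parameters below are illustrative assumptions, not the paper's test cases:

```python
import numpy as np

# Discrete Karhunen-Loève expansion of a random field on [0, 1] with an
# assumed exponential covariance kernel (parameters are illustrative).
n, corr_len = 50, 0.5
x = np.linspace(0.0, 1.0, n)
C = np.exp(-np.abs(x[:, None] - x[None, :]) / corr_len)  # covariance matrix

# Eigen-decomposition separates space and randomness:
#   a(x, omega) ~ sum_k sqrt(lam_k) * phi_k(x) * xi_k(omega)
lam, phi = np.linalg.eigh(C)        # eigh returns ascending eigenvalues
order = np.argsort(lam)[::-1]
lam, phi = lam[order], phi[:, order]

m = 10                               # truncation order
captured = lam[:m].sum() / lam.sum() # fraction of variance retained

rng = np.random.default_rng(0)
xi = rng.standard_normal(m)          # independent random variables xi_k
sample = phi[:, :m] @ (np.sqrt(lam[:m]) * xi)  # one realization of the field
```

The rapid eigenvalue decay is what makes the truncation, and hence the coupled Galerkin system whose structure the paper exploits, tractable.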
The contribution presents a model that is able to simulate the construction duration and cost of a building project. The model predicts the set of expected project costs and the duration schedule depending on input parameters such as production speed, scope of work, time schedule, bonding conditions, and the maximum and minimum deviations from scope of work and production speed. Given an input level of probability, the simulation model calculates the corresponding construction cost and duration of a project; conversely, it can determine the level of probability associated with a given construction cost and set of activity durations. The interpretive outputs of the application software include the compilation of a presumed dynamic progress chart. This progress chart represents the expected scenario of the development of a building project, mapping potential time dislocations of particular activities. Its calculation is based on an algorithm that computes mean values as a partial result of the simulated building project. Construction cost and time models are, in many ways, useful tools in project management: clients are able to make proper decisions about the time and cost schedules of their investments, and building contractors are able to schedule the predicted project cost and duration before any decision is finalized.
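The core idea, reading off the cost and duration belonging to a chosen probability level from a simulated distribution, can be sketched with a Monte Carlo toy model. The activity data and the uniform deviation model are invented for illustration and are not the paper's calibrated inputs:

```python
import numpy as np

rng = np.random.default_rng(42)
n_runs = 10_000

activities = [   # (mean duration [days], daily cost, +/- relative deviation)
    (10.0, 2000.0, 0.20),
    (15.0, 1500.0, 0.30),
    ( 8.0, 3000.0, 0.10),
]

totals_t = np.zeros(n_runs)
totals_c = np.zeros(n_runs)
for mean_d, rate, dev in activities:
    # Duration varies between the minimum and maximum deviation bounds.
    d = rng.uniform(mean_d * (1 - dev), mean_d * (1 + dev), n_runs)
    totals_t += d              # sequential activities: durations add up
    totals_c += d * rate       # time-dependent cost

level = 0.80                   # input level of probability
t80 = np.quantile(totals_t, level)  # duration not exceeded in 80% of runs
c80 = np.quantile(totals_c, level)
```

The reciprocal view from the abstract corresponds to evaluating the empirical distribution function at a given cost or duration instead of taking a quantile.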
We propose a new approach to the numerical solution of quasi-static elastic-plastic problems based on the Moreau-Yosida theorem. After time discretization, the problem is expressed as an energy minimization problem in the unknown displacement and plastic strain fields. The dependency of the minimization functional on the displacement is smooth, whereas the dependency on the plastic strain is non-smooth. Moreover, there exists an explicit formula for calculating the plastic strain from a given displacement field. This allows us to reformulate the original problem as a minimization problem in the displacement only. Using the Moreau-Yosida theorem from convex analysis, the minimization functional in the displacements turns out to be Fréchet-differentiable, although the hidden dependency on the plastic strain is non-differentiable. The second derivative exists everywhere apart from the elastic-plastic interface dividing the elastic and plastic zones of the continuum. This motivates the implementation of a Newton-like method, which converges super-linearly, as can be observed in our numerical experiments.
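The mechanism can be illustrated in one dimension with invented material data: the plastic strain minimizer has an explicit soft-thresholding formula, the reduced energy is once differentiable with a generalized second derivative that jumps at the elastic-plastic interface, and a Newton-like iteration on the resulting stress equation converges in a few steps. This is a didactic sketch, not the paper's field formulation:

```python
E, H, sig_y = 100.0, 10.0, 1.0   # elastic modulus, hardening, yield stress

def plastic_strain(u):
    """Explicit minimizer of the non-smooth part (soft-thresholding)."""
    trial = E * u                      # trial stress
    if abs(trial) <= sig_y:
        return 0.0                     # elastic: no plastic flow
    return (abs(trial) - sig_y) / (E + H) * (1 if trial > 0 else -1)

def stress(u):
    """Derivative of the reduced (displacement-only) energy."""
    return E * (u - plastic_strain(u))

def tangent(u):
    """Generalized second derivative: jumps at the elastic-plastic interface."""
    return E if abs(E * u) <= sig_y else E * H / (E + H)

def solve(f, u=0.0):
    """Newton-like iteration for the equilibrium equation stress(u) = f."""
    for _ in range(50):
        r = stress(u) - f
        if abs(r) < 1e-12:
            break
        u -= r / tangent(u)
    return u

u = solve(2.0)   # load beyond the yield stress -> plastic response
```

For the load 2.0 the exact strain is 0.12 with plastic strain 0.1, and the iteration reaches it in two Newton steps here.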
Adopting the European laws concerning environmental protection will require sustained efforts from the authorities and communities of Romania; implementing modern solutions is becoming a fast and effective option for improving the functioning of these systems in order to prevent disasters. As part of the urban infrastructure, the drainage networks for pluvial and residual waters are included in the plan of promoting systems that protect environmental quality, with the purpose of integrated and adaptive management. The paper presents a distributed control system for the sewer network of the city of Iasi. The unsatisfactory technical state of the current sewer system is described, focusing on the objectives related to the implementation of the control system. The proposed distributed control system for the Iasi drainage network is based on hierarchic control theory for diagnosis, sewer planning and management. Two control levels are proposed: coordination and local execution. The configuration of the distributed control system, including data acquisition and conversion equipment, interface characteristics, local data bus, data communication network and station configuration, is described in detail. The project aims to be a useful instrument for the local authorities in preventing and reducing the impact of future natural disasters on urban areas by means of modern technologies.
The execution of project activities generally requires the use of (renewable) resources like machines, equipment or manpower. The resource allocation problem consists in assigning time intervals to the execution of the project activities while taking into account temporal constraints between activities emanating from technological or organizational requirements and costs incurred by the resource allocation. If the total procurement cost of the different renewable resources has to be minimized we speak of a resource investment problem. If the cost depends on the smoothness of the resource utilization over time the underlying problem is called a resource levelling problem. In this paper we consider a new tree-based enumeration method for solving resource investment and resource levelling problems exploiting some fundamental properties of spanning trees. The enumeration scheme is embedded in a branch-and-bound procedure using a workload-based lower bound and a depth first search. Preliminary computational results show that the proposed procedure is promising for instances with up to 30 activities.
In this study we introduce a concept of a discrete Laplacian on the plane lattice and consider the dynamical system given by its iteration. First we discuss some basic properties of the dynamical system. Then, by computer simulation, we show that the following phenomena can be reproduced quite well: (1) water crystals; (2) designs of carpets and embroideries; (3) the change over time in the number of families of extinct animals; and (4) ecosystems of living things. Hence we may expect that evolution and self-organization can be understood through such dynamical systems. We want to stress the following fact: although several well-known chaotic dynamical systems can describe chaotic phenomena, they have difficulties in describing evolution and self-organization.
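A generic iteration of this kind can be sketched as follows; the specific update rule (adding a multiple of the 4-neighbour Laplacian on a periodic lattice) is an assumed, standard choice, not necessarily the one studied in the paper:

```python
import numpy as np

def laplacian(u):
    """4-neighbour discrete Laplacian on a periodic plane lattice."""
    return (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
            np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4 * u)

def iterate(u, c=0.2, steps=10):
    """Iteration dynamical system: repeatedly add c times the Laplacian."""
    for _ in range(steps):
        u = u + c * laplacian(u)
    return u

n = 21
u0 = np.zeros((n, n))
u0[n // 2, n // 2] = 1.0   # single seed in the centre
u = iterate(u0)
```

The iteration conserves the total lattice sum and propagates the fourfold symmetry of the seed outward, which is the mechanism behind the regular, carpet-like patterns mentioned in the abstract.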
Reasonably accurate cost estimation of the structural system is quite desirable at the early stages of the design process of a construction project. However, the numerous interactions among the many cost-variables make the prediction difficult. Artificial neural networks (ANN) and case-based reasoning (CBR) are reported to overcome this difficulty. This paper presents a comparison of CBR and ANN augmented by genetic algorithms (GA) conducted by using spreadsheet simulations. GA was used to determine the optimum weights for the ANN and CBR models. The cost data of twenty-nine actual cases of residential building projects were used as an example application. Two different sets of cases were randomly selected from the data set for training and testing purposes. Prediction rates of 84% in the GA/CBR study and 89% in the GA/ANN study were obtained. The advantages and disadvantages of the two approaches are discussed in the light of the experiments and the findings. It appears that GA/ANN is a more suitable model for this example of cost estimation where the prediction of numerical values is required and only a limited number of cases exist. The integration of GA into CBR and ANN in a spreadsheet format is likely to improve the prediction rates.
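The retrieval-and-scoring step of the GA/CBR combination can be sketched as follows. The case data are invented, and the genetic algorithm itself is omitted; only the weighted nearest-neighbour prediction and the fitness function that a GA would maximize over the weight vector are shown:

```python
import numpy as np

# Invented case base: attributes (area, floors, class) and known costs.
cases_x = np.array([[100.0, 3, 1], [250.0, 8, 2], [180.0, 5, 1]])
cases_y = np.array([120.0, 340.0, 210.0])

def predict(x, w):
    """Weighted nearest-neighbour (CBR) retrieval: cost of the closest case."""
    d = np.sqrt(((cases_x - x) ** 2 * w).sum(axis=1))
    return cases_y[np.argmin(d)]

def fitness(w, test_x, test_y, tol=0.15):
    """Prediction rate: share of cases predicted within `tol` relative error.

    A genetic algorithm would evolve w to maximize this score.
    """
    hits = [abs(predict(x, w) - y) <= tol * y for x, y in zip(test_x, test_y)]
    return sum(hits) / len(hits)

w = np.array([1.0, 1.0, 1.0])   # starting weights; a GA would evolve these
est = predict(np.array([190.0, 5, 1]), w)
```

The GA/ANN variant in the paper replaces the retrieval step by a neural network whose weights are likewise scored by such a fitness function.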
The ride of a tram along a line, defined by a timetable, consists of the travel times between subsequent sections and the time the tram spends at stops. In the paper, statistical data collected in the city of Krakow is presented and evaluated. Under Polish conditions, the time spent at stops accounts for a remarkable 30% of the total operating time of a tram line; moreover, this time is characterized by large variability. The time a tram spends at a stop consists of the alighting and boarding time and the time lost at the stop after alighting and boarding have ended but before departure. The alighting and boarding time itself usually depends on the random number of alighting and boarding passengers and also on the number of passengers inside the vehicle. The time spent at the stop after alighting and boarding have ended, however, is the effect of certain random events, mainly the impossibility of departing from the stop caused by the lack of priorities for public transport vehicles. The main focus of the talk lies on the description and modelling of these effects. This paper is associated with the CIVITAS CARAVEL project "Clean and better transport in cities". The project has received research funding from the Community's Sixth Framework Programme. The paper reflects only the author's views and the Community is not liable for any use that may be made of the information contained therein.
These remarks contribute, from the perspective of structural analysis, to the further preservation of the historical building stock in Mecklenburg. It is confirmed again and again that a realistic assessment of the load-bearing behaviour of a structure requires models of geometry, loading and material of equal standing. Even the best analysis programs can only deliver the results that the input data allow. The research focus of the structural design group of the Faculty of Architecture at Hochschule Wismar has therefore concentrated in recent years on realistically capturing the interaction between the building survey and the geometric modelling. The key aspects here are the interaction between damage and structural analysis, and the interaction between the surveyed geometry and the geometric model used for the structural analysis. The wealth of surveyed data is, as a rule, more of a hindrance than a blessing for the structural analysis; it is shown which and how many geometric data are reasonable for the geometric model used in structural analysis. Since one's own data acquisition takes a relatively long time, a "mental" building survey was carried out: the historical planning process is retraced in its individual form-finding steps and transferred into virtual reality. This method produces different construction states and also allows possible construction phases to be represented. The structural analysis of this virtual reality then reveals possible weaknesses of the structures and/or the need for constructive modifications. Comparing the results of the structural analysis with reality on the basis of the available data provides the grounds for determining the current need for action.
Since the condition of a building changes over time, methods are examined that make it possible to process and further manage a data set once it has been compiled.
Designing a structure follows a pattern of creating a structural design concept, executing a finite element analysis and developing a design model. A project was undertaken to create computer support for executing these tasks within a collaborative environment. This study focuses on developing a software architecture that integrates the various structural design aspects into a seamless functional collaboratory that satisfies engineering practice requirements. The collaboratory is to support both homogeneous collaboration i.e. between users operating on the same model and heterogeneous collaboration i.e. between users operating on different model types. Collaboration can take place synchronously or asynchronously, and the information exchange is done either at the granularity of objects or at the granularity of models. The objective is to determine from practicing engineers which configurations they regard as best and what features are essential for working in a collaborative environment. Based on the suggestions of these engineers a specification of a collaboration configuration that satisfies engineering practice requirements will be developed.
The concept of reliability plays a central role in the evaluation of transport networks. From the perspective of users of public transport (ÖPNV), one of the most important criteria for judging the quality of a line network is whether the destination can be reached within a given time with high certainty. The talk gives this notion of reliability a mathematical formulation. First, the usual concept of network reliability in the sense of pairwise connectivity probabilities is discussed. This concept is then extended by considering reliability subject to a maximum admissible travel time. In previous work, the ring-radial structure has proven to be a well-established model for the theoretical description of transport networks. These considerations are now extended by including real network structures; the tram network of Krakow serves as a concrete example. In particular, the effect of a planned extension of the network on its reliability is investigated. This paper is part of the CIVITAS-CARAVEL project "Clean and better transport in cities". The project has received research funding from the Community's Sixth Framework Programme. The paper reflects only the author's views, and the Community is not liable for any use that may be made of the information contained therein.
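Reliability under a maximum admissible travel time can be illustrated with a small Monte Carlo estimate on a toy ring-radial network (the network, edge survival probabilities and travel times below are invented for illustration, not taken from the Krakow study):

```python
import heapq
import random

def shortest_time(nodes, edges, src, dst):
    """Dijkstra over the given (surviving) edges; travel time or None."""
    adj = {n: [] for n in nodes}
    for u, v, t in edges:
        adj[u].append((v, t)); adj[v].append((u, t))
    dist = {src: 0.0}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            return d
        if d > dist.get(u, float('inf')):
            continue
        for v, t in adj[u]:
            nd = d + t
            if nd < dist.get(v, float('inf')):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return None  # dst not reachable

def reliability(nodes, edges, src, dst, t_max, trials=20000, rng=random):
    """P(src and dst connected by a route no longer than t_max),
    when each edge independently survives with its probability p."""
    hits = 0
    for _ in range(trials):
        surviving = [(u, v, t) for u, v, t, p in edges if rng.random() < p]
        t = shortest_time(nodes, surviving, src, dst)
        if t is not None and t <= t_max:
            hits += 1
    return hits / trials

# toy ring-radial network: hub H and ring nodes A..D (times in minutes)
nodes = ['H', 'A', 'B', 'C', 'D']
edges = [('H', 'A', 5, 0.95), ('H', 'B', 5, 0.95), ('H', 'C', 5, 0.95), ('H', 'D', 5, 0.95),
         ('A', 'B', 4, 0.90), ('B', 'C', 4, 0.90), ('C', 'D', 4, 0.90), ('D', 'A', 4, 0.90)]
random.seed(0)
r = reliability(nodes, edges, 'A', 'C', t_max=10)
```

Setting `t_max` to infinity recovers the usual pairwise connectivity probability, so the time-constrained notion is a strict refinement of it.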
Objects in civil engineering applications can be identified by their reference in memory, their alphanumeric name or their geometric location. Particularly in graphical user interfaces, it is common to identify objects geometrically by selecting them with the mouse. As the number of geometric objects in a graphical user interface grows, it becomes increasingly important to perform the basic operations add, search and remove for geometric objects with great efficiency. Guttman proposed the region tree (R-tree) for geometric identification in an environment that uses pages on disc as its data structure. Minimal bounding rectangles are used to structure the data in such a way that neighbourhood relations can be described effectively. The literature shows that the parameters influencing the efficiency of R-trees have been studied extensively, but without conclusive results. The goal of the research reported in this paper is to determine reliably the parameters that significantly influence the efficiency of R-trees for geometric identification in technical drawings. To make this investigation conclusive, it must be performed with the best available software technology; therefore object-oriented software for the method was developed. This implementation is tested with technical drawings containing many thousands of geometric objects. These drawings are created automatically by a stochastic generator incorporated into a test bed consisting of an editor and a visualiser. The test bed is used to obtain statistics for the main factors affecting the efficiency of R-trees. The investigation shows that the following main factors affecting the efficiency can be identified reliably: the number of geometric objects in the drawing; the minimum and maximum number of children of a tree node; and the maximum width and height of the minimal bounding rectangles of the geometric objects relative to the size of the drawing.
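The pruning idea behind minimal bounding rectangles can be sketched with a two-level index in the spirit of Guttman's R-tree: leaf pages of bounded size, each summarised by its MBR, so that whole pages are skipped when their MBR misses the query window. This is a simplified sketch (static bulk loading, no dynamic node splits), not the paper's implementation:

```python
from dataclasses import dataclass
from typing import List, Tuple

Rect = Tuple[float, float, float, float]  # (xmin, ymin, xmax, ymax)

def intersects(a: Rect, b: Rect) -> bool:
    return a[0] <= b[2] and b[0] <= a[2] and a[1] <= b[3] and b[1] <= a[3]

def mbr(rects: List[Rect]) -> Rect:
    """Minimal bounding rectangle of a list of rectangles."""
    return (min(r[0] for r in rects), min(r[1] for r in rects),
            max(r[2] for r in rects), max(r[3] for r in rects))

@dataclass
class Leaf:
    box: Rect                              # MBR of all entries on this page
    entries: List[Tuple[Rect, object]]

class TwoLevelIndex:
    """Two-level bounding-rectangle index: leaves of at most
    `max_children` entries, packed in x-order (a simple bulk-loading
    heuristic)."""
    def __init__(self, items, max_children=8):
        items = sorted(items, key=lambda it: it[0][0])
        self.leaves = []
        for i in range(0, len(items), max_children):
            chunk = items[i:i + max_children]
            self.leaves.append(Leaf(mbr([r for r, _ in chunk]), chunk))

    def search(self, query: Rect):
        """Return all objects whose rectangle intersects the query."""
        hits = []
        for leaf in self.leaves:
            if intersects(leaf.box, query):       # prune whole pages first
                hits.extend(obj for r, obj in leaf.entries
                            if intersects(r, query))
        return hits

# drawing with 100 unit squares along the diagonal
items = [((x, x, x + 1, x + 1), f"obj{x}") for x in range(100)]
idx = TwoLevelIndex(items)
found = idx.search((10.5, 10.5, 12.5, 12.5))
```

The `max_children` parameter here corresponds directly to one of the efficiency factors the paper identifies: the number of children per node.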
Solid behavior as well as liquid behavior characterizes the flow of granular material in silos. The presented model is based on an appropriate interaction of a displacement field and a velocity field. The constitutive equations and the applied algorithm are developed from the exact solution for a standard case. The standard case evolves from a very tall vertical plane strain silo containing material that flows at a constant speed. No horizontal displacements and velocities take place. No changes regarding the field values arise in the vertical direction and in time. Tension is not allowed at any point. Coulomb friction represents the effects of the vertical walls. The interaction between the flowing material and the walls is covered by a forced boundary condition resulting in an additional matrix for the solid component as well as for the liquid component. The resulting integral equations are designed to be solved directly. Three coefficients describe the properties of the granular material. They govern elastic solid behavior in combination with viscous liquid behavior.
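A one-dimensional sketch of such a combined law, with elastic solid stress and viscous liquid stress acting together and Coulomb friction at the walls, can be written as follows (an assumed illustration with coefficients $E$, $\eta$, $\mu$; the paper's actual constitutive equations are not reproduced here):

```latex
% elastic solid part in combination with viscous liquid part,
% plus Coulomb friction transmitted by the vertical walls
\begin{align}
  \sigma &= E\,\varepsilon + \eta\,\dot{\varepsilon}, \\
  \tau_{\mathrm{wall}} &= \mu\,\sigma_{\mathrm{n}},
\end{align}
```

where $\sigma$ is the stress in the granular material, $\varepsilon$ the strain of the displacement field, $\dot{\varepsilon}$ the strain rate of the velocity field, and $\sigma_{\mathrm{n}}$ the normal stress on the wall.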
Unconstrained models are very common across the broad spectrum of traffic demand model theories. In these models, no restrictions, or only one-sided ones, influence the choice of the individual. In real traffic demand, however, the traffic volume depends in various decisive ways on the specific potentials of the territorial structure. Kirchhoff and Lohse introduced bi- and tri-linearly constrained models to capture these dependencies; in principle, the dependencies are described as hard, elastic and open boundary sum criteria. This article formulates a model that moves away from these predefined boundary sum criteria and allows minimal and maximal boundary sum criteria to be chosen freely. The iterative solution algorithm follows a FURNESS procedure. With freely selectable minimal and maximal boundary sum criteria, the modelling transport planner can represent the traffic pattern even better. Furthermore, all common boundary sum criteria can be calculated with this model, so the frequently needed standard and special cases can also be modelled.
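The classic FURNESS procedure referenced above is iterative proportional fitting: the trip matrix is scaled alternately so that its row and column sums match the boundary sum criteria. A minimal sketch for the standard case of hard boundary sums (the article's generalisation to freely chosen minimal/maximal boundary sums is not reproduced here):

```python
def furness(seed, row_totals, col_totals, iters=100, tol=1e-9):
    """Furness / iterative proportional fitting: scale the seed matrix
    alternately until row and column sums match the hard boundary sum
    criteria. Converges for a positive seed and consistent totals."""
    T = [row[:] for row in seed]
    for _ in range(iters):
        # scale each row to its target total
        for i, target in enumerate(row_totals):
            s = sum(T[i])
            if s > 0:
                f = target / s
                T[i] = [x * f for x in T[i]]
        # scale each column to its target total, tracking the residual
        err = 0.0
        for j, target in enumerate(col_totals):
            s = sum(T[i][j] for i in range(len(T)))
            if s > 0:
                f = target / s
                for i in range(len(T)):
                    T[i][j] *= f
            err = max(err, abs(s - target))
        if err < tol:
            break
    return T

seed = [[1.0, 2.0], [3.0, 4.0]]       # structure of the unconstrained model
T = furness(seed, row_totals=[10.0, 20.0], col_totals=[12.0, 18.0])
```

In the article's generalised model, the fixed `row_totals`/`col_totals` would be replaced by freely chosen minimal and maximal bounds on the boundary sums.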
Strategic Developments
(2006)
Early sensor-based infrastructures were often developed by experts with thorough knowledge of the base technology for sensing information, for processing the captured data, and for adapting the system's behaviour accordingly. In this paper we argue that end-users, too, should be able to configure Ubiquitous Computing environments. We introduce the CollaborationBus application, a graphical editor that abstracts from the base technology and thereby allows a broad range of users to configure Ubiquitous Computing environments. By composing pipelines, users can easily specify how information flows from selected sensors, via optional filters that process the sensor data, to actuators that change the system behaviour according to the users' wishes. Users can compose pipelines for both home and work environments. An integrated sharing mechanism allows them to share their own compositions and to reuse and build upon others' compositions, and real-time visualisations help them understand how information flows through their pipelines. In this paper we present the concept, implementation, and early user feedback of the CollaborationBus application.
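The sensor → filter → actuator composition described above can be sketched as a minimal pipeline. The class and names below are illustrative assumptions, not the actual CollaborationBus API:

```python
from typing import Callable, Iterable, List, Optional

class Pipeline:
    """Minimal sketch of a CollaborationBus-style composition: events
    flow from a sensor through optional filters to an actuator."""
    def __init__(self, sensor: Callable[[], Iterable],
                 filters: List[Callable], actuator: Callable):
        self.sensor, self.filters, self.actuator = sensor, filters, actuator

    def run(self):
        for event in self.sensor():
            for f in self.filters:
                event = f(event)
                if event is None:      # a filter may drop an event
                    break
            else:
                self.actuator(event)   # only reached if no filter dropped it

# example: temperature sensor -> threshold filter -> heater actuator
readings = [18.5, 22.0, 17.0, 21.5]
actions = []
pipe = Pipeline(
    sensor=lambda: iter(readings),
    filters=[lambda t: t if t < 20.0 else None],   # keep only cold readings
    actuator=lambda t: actions.append(f"heat on at {t} deg"),
)
pipe.run()
```

A shared composition in this sketch would simply be the (sensor, filters, actuator) triple, which another user could reuse or extend with further filters.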
In today’s information society, rapid technical progress and the sinking cost of information and communication technology provide new opportunities for information supply and new technical support for communication and cooperation over distance. These trends also entail challenges, such as supplying information that is adequate for a particular person in a specific situation and managing communication among geographically distributed parties efficiently. Context-aware systems address these challenges by using sensors to analyse their environment and adapt their behaviour accordingly. Yet adequate tools for developing sensor-based infrastructures are missing. We have designed and developed Sens-ation, an open and generic service-oriented platform that provides powerful, yet easy-to-use, tools to software developers who want to build context-aware, sensor-based infrastructures. The service-oriented paradigm of Sens-ation enables standardised communication within individual infrastructures, between infrastructures and their sensors, and among distributed infrastructures. On the whole, Sens-ation facilitates development by allowing developers to concentrate on the semantics of their infrastructures and to develop innovative concepts and implementations of context-aware systems.
A Flexible Model for Incorporating Construction Product Data into Building Information Models
(2006)
When considering the integration of, and interoperability between, AEC-FM software applications and construction product data, it is essential to investigate the state of the art and to review the literature on both Building Information Models and electronic product catalogues extensively. It was found that many key barriers hinder the developed solutions from being implemented. Among the reasons why many previous research projects failed to achieve this integration are: the proprietary developments of CAD vendors; the fragmented nature of construction product data, i.e. commercial versus technical data; prefabrication versus on-site production; marketing strategies and brand naming; the referencing of a product to the data of its constituents; the availability of life-cycle data at a single point in time although it is needed throughout the whole life cycle of the product; taxonomy problems; and the inability to extract search parameters from the building information model to support parametric searches. Finally, and most importantly, there is the difficulty of keeping the product data in the building information model consistent and up to date. Hence, there is great potential for construction product data to be integrated into building information models by electronic means in a dynamic and extensible manner that prevents the model from becoming obsolete. The study has established a solution concept that links continually updated and extensible life-cycle product data to a software-independent building information model (IFC) over the whole life span of the product. As a result, the solution concept achieves a reliable building information model that is capable of overcoming the majority of the barriers mentioned above, while being able to reference, retrieve, update, and merge product data at any point in time.
A distributed network application that represents all the involved parties in the construction product value chain is simulated by real software tools to demonstrate the proof of concept of this research work.
In the history of the 'villages' in Shenzhen, rich traditional cultural resources directly related to folk life in the urban corporate community still exist today, while the agricultural economy of the urban corporate community has been transformed into a joint-stock economy and natural villages have been transformed into 'heterogeneous' spaces of the city. The most significant fact of the modern social transition is that modern societies have surpassed traditional societies, and cities have surpassed the country. Weber, Durkheim, Tönnies, Simmel and others devoted themselves to capturing the essence of this social transition. The most influential theory for observing and analysing it is the two-tiered approach of ideal types: Tönnies distinguished 'Gemeinschaft and Gesellschaft', Durkheim distinguished 'mechanical solidarity and organic solidarity', and Redfield analysed 'folk society and urban society'. In these classical theories, the transition from the former to the latter is considered a general rule of the transition from traditional to modern society, and from traditional to modern community. However, when Redfield in the 1950s used the dependency relationship and interactive framework of 'great tradition' and 'little tradition' to explain the various complicated phenomena in the transition from tradition to modernity, he suggested that a folk-urban continuum can form in the transition from folk society to urban society. 'Both terms, ‘city’ and ‘country’, are not and have never been limited or restricted to their obvious denotations: ‘city’ is not and has never been only urban. As a category it always encompasses (includes, embodies, embraces) itself and its opposite, the country' (Hassenpflug 2002, 46).
Generally, social groups and cultures characterised by weak 'potential' take their own 'little tradition' as a bridge and agency in order to enter, or melt into, a 'great tradition' embodying great 'potential', seeking space in which to live and develop. There are many different types of transition by which villagers enter and melt into the 'great tradition' through their individual 'little tradition'. There is the exploration and development of traditional resources in 'segmentation', such as the frequent connection between the great flow of peasants to the cities and the networks of kinship and earthbound relations; alternatively, there is the assistance and utilisation of the resources of a whole corporate network, such as the traditional corporate community's organisation of local resources during the non-agriculturisation of villages; the 'villages' in Shenzhen belong to the latter type. From the above analyses the following conclusion can be drawn: the urban corporate community formed in the process of non-agricultural development and urbanisation is an organisational support on which villagers rely to melt into the city and adapt to urban life. Its unique inner structure and function ensure that, compared with other organisations, it performs better, is more efficient and shows more human care.
Firstly, the corporate community reorganised in the non-agricultural process is currently the only and the most effective organisational resource that can be utilised, and it has significant meaning in protecting the villagers' interests and benefits. Secondly, in the short term no other approach matches the urban corporate community's ability to focus on public affairs in the comprehensive urbanisation process. Thirdly, the 'new' key connotation of the urban corporate community, including its community management functions, is the main reason why such a community has a rationale for existing. Fourthly, the urban corporate community will inevitably face many problems in urbanisation owing to its inherent characteristics (lack of external support), but to a certain degree it is able to repair itself and solve problems, provided that the government and society take a fair, impartial view of the 'villages' and, based on this view, provide multiple supports, especially a rational institutional arrangement and policy support. Consequently, in order to preserve and protect the social system and cultural heritage within the 'villages', and gradually to achieve the coordinated development of the 'great tradition' represented by the cities and the 'little tradition' represented by the 'villages', the government should adopt 'soft reconstruction' rather than 'hard reconstruction' in the current reconstruction of the 'villages' in Shenzhen.
The maintenance of urban drinking water networks is a core task of water supply companies and network operators. The rehabilitation planning this requires currently relies largely on trend forecasts of failure rates and on the experience of staff; the influence of essential parameters such as material properties or the residual load-bearing capacity of the pipe remains largely unconsidered. Material investigations are used to determine the parameters needed for a reliable assessment of the technical condition of a pipe string. In this way, the prognosis of the technical service life and the rehabilitation planning can be put on a solid basis. To this end, this dissertation develops an investigation and assessment algorithm with integrated prognosis methods.
For efficient distant cooperation, the members of workgroups need information about each other. This need for information disclosure often conflicts with the users' wish for privacy. In the literature, reciprocity is often suggested as a solution to this trade-off. Yet this conception of reciprocity, and its enforcement by systems, does not match reality. In this paper we present the major findings of our study investigating the role of reciprocity, among them that participants greatly disregarded the above conception. Additionally, we discuss the findings' significant implications for the design of systems that seek to disclose personal information.
Between Transformation and Globalisation - Real Estate Market and Urban Development in Warsaw
(2006)
After the political changes of the late 1980s and early 1990s, a highly dynamic real estate market of a capitalist character developed in Warsaw within a short time, and its mechanisms have had fundamental effects on the city's urban development. The following essay presents the essential characteristics of the office and housing markets. For each sector it describes how the market functions, the main actors on the demand and supply sides, the role of institutions, and the spatial consequences.
The building tasks of the future lie in engaging with existing architecture. The planning challenge consists in avoiding new construction by converting and remodelling existing buildings. Conversion and remodelling are value-preservation strategies that treat a building's life cycle as an integral part of planning; their goal is to alter unused existing buildings with no or only minor structural interventions so that they can be put to further use. Conversion is subject to the premise that no structural changes are made to the building, whereas remodelling permits structural interventions. As an alternative to new construction, the success of both strategies depends crucially on the architect deciding, at the very beginning of planning, whether a building can be reused under one of the two strategies. In practice, the architect makes this decision by comparing the target state (the room programme) with the actual state (the existing floor plan) of the building. At this early stage of planning, the analysis and evaluation of the building stock takes the form of preliminary design sketches that depict the organisational or structural changes to the floor plans in the case of further use. This thesis puts forward the hypothesis that comparing the room programme with the floor plan is essentially a combinatorial problem. Under this assumption, it investigates whether optimisation methods applied to floor-plan planning can automatically generate solutions for conversion and remodelling tasks. The goal is to arrive, through the computer-aided use of these methods, at plausible planning solutions that serve the architect as a basis for further planning.
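The combinatorial nature of comparing a room programme with an existing floor plan can be illustrated with a toy assignment problem: each required room is matched to an existing room so that the total area mismatch is minimal. This brute-force sketch over permutations is purely illustrative; the thesis's actual optimisation methods are not reproduced here:

```python
from itertools import permutations

def best_assignment(required_areas, room_areas):
    """Assign each required room (target state) to an existing room
    (actual state) so that the total area mismatch is minimal.
    Brute force over all injective assignments."""
    n = len(required_areas)
    best_cost, best_perm = float('inf'), None
    for perm in permutations(range(len(room_areas)), n):
        cost = sum(abs(required_areas[i] - room_areas[j])
                   for i, j in enumerate(perm))
        if cost < best_cost:
            best_cost, best_perm = cost, perm
    return best_cost, best_perm

# room programme (m^2) vs. rooms of the existing floor plan (m^2)
required = [20.0, 12.0, 30.0]
existing = [28.0, 19.5, 13.0, 31.0]
cost, assignment = best_assignment(required, existing)
```

Real conversion tasks add adjacency and access constraints on top of this matching, which is what makes dedicated optimisation methods necessary; the factorial growth of the permutation space already shows why brute force does not scale.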
This study examines the emergence of an economic measure of force, using the Machine de Marly as its example, in the period from roughly 1680 to 1840. The dissertation's guiding thesis is that between the 17th and 19th centuries a fundamental transformation of the concept of the machine took place, which can be described as a transition from a substance concept to a function concept of the machine. In the 17th century, mechanical apparatuses were understood as self-contained, self-referential structures. As objects open to direct experience, they could serve as image-givers that, by means of structural analogy, offered explanatory patterns for the most diverse phenomena (body, state, world). They were accordingly considered self-evident: they explained and did not themselves need to be explained. Their possible purpose and their embedding in social contexts played no role. As the descriptions and depictions of the time show, within this episteme the Machine de Marly was perceived as an architectural object in which above all the interplay of the individual elements attracted attention. Like other machines, it stood under the primacy of visibility: people were convinced that a machine's properties depended on the structural arrangement of its components and believed that its quality could be read off its form. From the mid-18th century, the Machine de Marly appeared in the writings of physiocratic authors. At first it served there as an example of the wastefulness of Louis XIV and as a metaphor for a badly organised state, but increasingly it also came to be understood in its facticity as a technical-political object. Its current use was criticised, and other possible uses were proposed, such as the irrigation of fields or the urban drinking water supply.
The Machine de Marly was thus no longer a model for the organisation of the state that could only be judged by the standard of immanent perfection. Rather, it was now an instrument of government that had to justify itself as part of a state-constituted commonwealth. As such, it also became a favoured object of Enlightenment reform projects. This is particularly evident in the competition organised by the Paris Academy of Sciences in 1784-1786, which solicited proposals for improving or replacing the Machine de Marly. The evaluation of the more than 100 submitted projects and memoranda offers a unique view of the hopes and wishes that were attached to the invention of technical devices at the end of the 18th century. Around 1800 the gradual emergence of a function concept of the machine can be observed. Lazare Carnot's Essai sur les machines en général, which contained a definition of the machine articulated in the language of algebra, contributed decisively to delegitimising visual intuition in favour of an operative symbolism; only this made the formulation of an efficiency calculus possible. This formalisation was complemented by the discourse of industrialisation, in which technical apparatuses were increasingly understood as means of production. The Machine de Marly was an important arena for the emergence of an economic measure of force: not only were experiments with various measuring instruments (dynamometers) carried out there, it also served Joseph Montgolfier as an example to prove that force could be expressed as a monetary value. In the first decades of the 19th century, machines were finally defined relationally, as positions within a national system of production. They were regarded as transformers of force, in which a certain input of 'force motrice' would yield a corresponding output of 'travail utile'.
Their foremost task was the most efficient possible exploitation of the available resources of force. The emergence of the economic measure of force reached its provisional endpoint around 1830 with the formulation of the concept of 'mechanical work'.
The main hypothesis of this research is that civil society's participation can improve planning results in the Chinese city of Qingdao in the contemporary age. Qingdao is a young city in eastern China that developed from a German colony. Apart from the powers of the government and the market, the 'third power', comprising mainly volunteer citizens and citizens' organisations, has also positively promoted spatial development in Qingdao's history. Since the 1978 reform, Qingdao's great progress in urban housing, historic preservation, public space and urban traffic has resulted mainly from the increasing strength of both the government and the market, while the government has always been the dominant promoter of urban construction. The current planning mechanism, in which the government formulates 'what to do' itself and decides 'how to do it' with the market, is limited in its ability to react to the rapidly changing situation, to serve diversified social interests, and to raise sufficient funds for the city's urgent demands. Searching for new development strategies based on an understanding of civil society in the Chinese context can provide a promising perspective for urban studies of Qingdao. Chinese civil society can be understood as the intermediate sphere, between the state and the market, of individuals, families, citizens' organisations, social movements, public communication, and non-governmental, not-for-profit involvement in the provision of public services. China has its own cultural tradition of civil society, and modern civil society in China is showing great potential for improving social integration and urban life. Over the last two decades, the Chinese government has started to advocate civil society's participation in urban construction and to encourage 'bottom-up' mechanisms in planning-related issues through political statements and legislative approaches.
Existing planning practice in China demonstrates that civil society's participation realistically helps improve the quality of Chinese urban planning under present conditions, and that the moderation of planning experts and the push of the authorities are the key factors for successfully integrating the strength of civil society into planning. However, the power of civil society has not yet been sufficiently exploited in Qingdao's planning. For better planning results, the city of Qingdao needs more initiatives to mobilise civil society in planning practice, as well as more support to enrich the related studies. This thesis recommends that Qingdao establish the 'Foundation for Collaborative Urban Solutions' through the joint efforts of the authorities and civil initiatives, which aims at moderating and facilitating the strength of civil society. The suggested pilot projects include: a. The Community-based Housing Workshop, for regenerating the living environment of run-down communities whose residents are willing to collaborate with the foundation through their own efforts. b. The Heritage Preservation Workshop, for suggesting an efficient supervision mechanism involving civil society that protects the historic heritage from being destroyed in urban construction. c. The Public Space Forum, for improving accessibility, quantity and ecological function in the development of Qingdao's urban public space with the knowledge and creativity of both the government and the citizens. d. The Mass Transport Forum, for a realistic strategy for funding the rail-based traffic system in Qingdao by enabling civil society, especially individual citizens and their households, to invest.
The 'Foundation for Collaborative Urban Solutions' can improve Qingdao's planning so as to cope with the urban problems the city is facing in its contemporary development, and can provide valuable reference for further research on civil society's participation in Chinese urban planning.
The dissertation is devoted to 'structures of repetition in the films of Jim Jarmusch'. With this topic, a new concept of film analysis is introduced, whose methodology rests on the element of repetition. In film, repetition appears semiotically; in modern philosophical thought, repetition plays a role insofar as it appears, in a specific way, differentially. In film, repetition forms characteristic codings. From various perspectives, the element therefore suggests itself as a key to film interpretation. The new concept differs from previous film-analytical methods in that it goes beyond the standardised conceptual toolkit of film studies without turning its back on it. Film analysis based on the element of repetition is now understood genuinely as an act of semiotic interpretation and of the philosophical reading of films. Within this framework, the understanding of films rests on individual and complex signs that produce temporality and spatiality in film. From a post-structuralist perspective, repetition can be understood as the constitutive moment in Gilles Deleuze's time-image. In philosophy, however, there are other thinkers for whom repetition is relevant. How can repetition be put to productive use for film analysis, as a material element in film on the one hand and as something thought philosophically on the other? In answering this question, the study of the 'structures of repetition in the films of Jim Jarmusch' seeks to do justice to the concept of auteur-structuralisme. In reading and interpretation, Jarmusch as author/auteur is 'identified' with the structure of his films. This interlocking of author/auteur and filmic text responds to Roland Barthes's demand for the 'birth of the reader'.
Films are accordingly readable even when the author/auteur himself (in our case Jim Jarmusch) no longer vouches for what he has produced. The theoretical aim of the study is to gain insights into filmic repetition, both with regard to the philosophical thinking of repetition and with regard to its material embodiment. The thinking of repetition is treated interrogatively, by intervening in the filmic illusion through scholarly investigation. Looking at Jarmusch's entire filmic oeuvre, the study shows how his filmic 'possible worlds' can be interpreted by deciphering their signs. The study thus presents a hitherto unapplied concept of viewing films, one that could also be transferred to other film authors/auteurs and their respective artistic work.
For optimising an existing process, for example with regard to the maximum possible throughput at constant quality of the pyrolysis products, or for setting the operating parameters for an unknown feedstock, a mathematical model can give a first estimate for operating parameters such as the temperature profiles in the gas and the solid. Beyond that, a model can be used to determine or verify design parameters for newly conceived plants. In the simplified modelling approach presented here, the conversion processes for a particle collective are determined, among other things, by means of lumped parameters obtained from investigations in a thermobalance and, complementarily, in the rotary kiln. The process model is based on a reactor model that describes the residence time behaviour of the feedstock in the reactor, and on a basic model consisting of mass and energy balances for solid and gas together with approaches for drying and conversion. In view of the limited availability of material-specific data for wastes, simplifying approaches using lumped parameters are particularly helpful for calculating the residence time behaviour and the conversion in hot operation. The process model was validated step by step. First, a lumped parameter that accounts, among other things, for the unknown friction conditions in the rotary kiln was determined for sand in cold tests by comparing experiment and calculation. For heterogeneous waste mixtures this material factor can be determined in cold tests (as far as this is possible for wastes), but in hot operation all essential material parameters, such as particle diameter, bulk density and angle of repose, as well as the friction conditions, change. For this case the material factor is set to one and the essential material properties are modelled as functions of conversion.
This requires knowledge of the bulk densities, static angles of repose and mean particle diameters of the waste and of the coke produced from it. The residence time calculated with these material data was reproduced in a hot test of the pyrolysis of refuse-derived fuel ('BRAM') pellets with an error of about 20 %. The basic model was first fitted, without conversion, to measurements with sand in the rotary kiln while varying temperatures and mass flow, before the pyrolysis of a homogeneous feedstock (polyethylene with sand) in the rotary kiln was calculated with this model. Here it could already be shown that this simplified modelling approach yields good agreement between model and experiment. In the next step, the sand was moistened in order to validate the sub-models for drying below and at boiling temperature; the measurement and modelling results agree well. For a waste mixture of BRAM pellets, the course of the solid temperatures could be reproduced well, taking into account variable material properties of the solid and a fouling factor that accounts for the coating of the rotary kiln with adhering pellets up to coking. The gas temperatures can be described with sufficient accuracy by the mathematical model as a first approximation. With this simplified mathematical modelling approach, a tool for the design and optimisation of indirectly heated rotary kilns is now available with which, for a new feedstock and with data from basic experimental investigations, the temperature profiles in the solid and gas as well as the gas composition can be estimated as functions of the essential influencing variables.
In connection with the current condition assessment and planned restoration of the three-tower ensemble of St. Severi Church in Erfurt, a dynamic analysis under bell ringing is carried out using a finite element model. This FE model, created with the program SLang, reproduces the vibration behaviour of the three-tower ensemble; the preceding vibration measurements serve as its basis. With the updated model, vibration-reducing measures are examined and assessed with regard to their effectiveness. Furthermore, active vibration isolation by means of a belfry substructure and the installation of a passive tuned mass damper are studied on substitute systems.
Today's web content management systems (WCMS) offer an extensive and wide-ranging set of functions that go far beyond the basic requirements for editing and managing web sites. This makes the systems very flexible in use and covers a wide variety of end-user requirements. On the other hand, the resulting complexity increases the workload considerably and reduces usability. Especially for smaller web sites that offer no elaborate interaction facilities but frequently changing information, a system reduced to its basic functionality would be advantageous. Such a reduced web content management system is to be designed and implemented as an example in this diploma thesis. The web site of the Chair of Information and Knowledge Processing serves as starting point and orientation. For the software implementation, PHP and MySQL are to be used in combination with regular HTML and CSS. As a first step, the structure of the chair's web site must be analysed, structured and formalised. Subsequently, the most widely used professional web content management systems (TYPO3 and others, see www.opensourcecms.com) are to be examined with regard to the basic functionality they offer and the templates they use. The results of this analysis form the starting point for the requirements definition of the mini-WCMS to be created. Finally, a prototype of the theoretically designed system, tailored to the specific needs of the chair, is to be implemented and its suitability discussed.
Data Models for the Processing of Engineering Tasks, Using Residential Buildings in Steel Construction as an Example
(2006)
Models form the basis of planning. They represent the properties of a building required for a given task in a form adapted to that specific task. Between the various models used to represent the building there exist domain relationships concerning the aspects they capture. In current planning practice these dependencies are accounted for on the basis of experience, normative requirements and simplifying assumptions. More detailed modelling of building properties leads to a closer interlocking of the various models. To avoid isolated domain models, a correspondingly adapted representation of the relationships between the individual models is required. With rising demands on the processing of engineering tasks, a data representation that goes beyond providing selected building information and supporting data exchange between different planners is gaining importance. This presupposes a discussion of the requirements on such a description from the domain point of view. The domain requirements are investigated using residential buildings in steel construction as an example.
This diploma thesis contributes to the numerical investigation of the distortion of a fine-grained structural steel during gas metal arc (MAG) welding. The goals were to compile information on the factors influencing welding distortion, to research formulas for calculating the angular distortion of the flange caused by welding a T-joint, and to present the fundamentals of distortion analysis with the finite element method. The main interest, however, lay in the angular distortion of the flange of T-joints caused by welding fillet seams. This distortion was investigated by means of numerical simulations, considering influences of geometrical, material and process-related origin.
Complexity is a genuinely architectural problem. The notion of architecture as a universal practice or holistic mode of cognition already contains the notion of 'complexity' at its core. The two concepts largely coincide: architecture can thus be regarded as a specific mode of thinking the complex. The dilemma identified in this thesis is that modern architecture has lost its original object of design, namely complexity: 'Nönnig rightly deconstructs the highly aggregated concepts of space and design. The entire thesis shows that the modern forms of describing complexity were precisely not developed within architecture.' (Prof. Gerd Zimmermann, Weimar). To remedy this deficit and to establish architecture as an independent technique of knowledge ('technoepisteme'), a theoretical scenario is developed, starting from specific deficits of practice and discourse, with which and in which architecture realises itself as a complex form of knowledge (drawing, among others, on complex systems, design sciences and operational heuristics). 'The present thesis is the attempt […] to restore architecture, as it were, to the position it actually deserves in the system of thought […] The complexity discourse in architecture is back.' (Prof. Gerd Zimmermann, Weimar)
The conference addressed managing directors, project managers, site managers and project controllers in planning and construction, with contributions on claim and change management in construction, workflow management in construction practice, the integration of information processes based on Nemetschek technologies, and building competence through targeted further training.
Using the 'Vijzelgracht' underground station currently under construction in Amsterdam as an example, construction sequence concepts are examined and checked by analysing their logic. The focus of the investigation is the development of possible concepts for combining excavation with the installation of bracing elements. To this end, the essential, relevant work steps are identified and described. Machines and equipment are dimensioned on the basis of productivity rates. The quantity take-off of the excavated soil likewise forms a basis for developing the sequence concepts. On these foundations, several sequence concepts are presented. The proof of logical truth is carried out for one sequence variant: the work steps, translated into logical language, are presented and checked for truth.
A new approach to the non-linear analysis of cross-sections loaded by normal forces and bending moments is presented in the paper. The mechanical model is based on the Lagrange principle of minimum total potential energy. Deformations, stresses and limit load parameters are obtained by solving a non-linear optimisation problem. The mathematical model is independent of the specifics of the material. In addition to the stress-strain relation and the specific strain energy W(ε), two further functions F(ε) and Φ(ε) are introduced to describe the material behaviour. Thus cracks in concrete, non-linearity of the material etc. can be taken into account without basic modification of the numerical algorithm. For polygonal cross-sections Gauss's integral theorem is used. Numerical solutions of the non-linear optimisation problems can be found with standard software. Thus the analysis of reinforced concrete cross-sections, or more generally of composite cross-sections with non-linear material behaviour, is as simple as in the case of linear elasticity. The application of the method is demonstrated for polygonal cross-sections. Pre-stresses or pre-strains can easily be included in the mathematical model.
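The stationarity condition of the total potential energy can be illustrated for the linear elastic special case with a fiber discretisation of the cross-section; in the paper's full model, the nonlinear functions W(ε), F(ε) and Φ(ε) would replace the linear stress law assumed below. All dimensions and material values are illustrative, not taken from the paper.

```python
import numpy as np

def section_forces(eps0, kappa, z, dA, E):
    """Internal normal force and bending moment for the strain plane
    eps(z) = eps0 + kappa*z with a linear elastic stress law."""
    sigma = E * (eps0 + kappa * z)
    return np.sum(sigma * dA), np.sum(sigma * z * dA)

def solve_strain_plane(N_ext, M_ext, z, dA, E, tol=1e-6):
    """Newton iteration on the stationarity conditions of the total
    potential energy: internal forces must equal the external ones."""
    eps0, kappa = 0.0, 0.0
    K = E * np.array([[np.sum(dA), np.sum(z * dA)],
                      [np.sum(z * dA), np.sum(z**2 * dA)]])
    for _ in range(50):
        N, M = section_forces(eps0, kappa, z, dA, E)
        r = np.array([N - N_ext, M - M_ext])
        if np.linalg.norm(r) < tol:
            break
        deps0, dkappa = np.linalg.solve(K, -r)
        eps0 += deps0; kappa += dkappa
    return eps0, kappa

# rectangular cross-section b = 0.2 m, h = 0.4 m, split into 100 fibers
b, h, nf = 0.2, 0.4, 100
z = np.linspace(-h/2 + h/(2*nf), h/2 - h/(2*nf), nf)   # fiber centers
dA = np.full(nf, b * h / nf)                           # fiber areas
eps0, kappa = solve_strain_plane(1e6, 2e5, z, dA, E=30e9)
```

For the linear material the iteration reproduces the classical results ε₀ = N/(EA) and κ = M/(EI); with a nonlinear σ(ε), only `section_forces` and the tangent stiffness change.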
The presented method for a physically non-linear analysis of stresses and deformations of composite cross-sections and members is based on energy principles and their transformation into non-linear optimisation problems. From the Lagrange principle of minimum total potential energy a kinematic formulation of the mechanical problem can be developed, which has the general advantage that pre-deformations caused by shrinkage, temperature, residual deformations after unloading and the like can be considered directly. Thus the non-linear analysis of composite cross-sections with layers of different mechanical properties and different preloading becomes possible, and cracks in concrete, stiffness degradation and other specifics of the material behaviour can be taken into account without fundamental modification of the mathematical model. The impact of local defects on the bearing capacity of an entire element can also be analysed in the same principled way. Standard computational systems for mathematical optimisation, or general spreadsheet programs, enable an uncomplicated implementation of the developed models and an effective non-linear analysis of composite cross-sections and elements.
The contribution focuses on the development of a basic computational scheme that provides a suitable calculation environment for the coupling of analytical near-field solutions with numerical standard procedures in the far-field of the singularity. The proposed calculation scheme uses classical methods of complex function theory, which can be generalized to 3-dimensional problems by using the framework of hypercomplex analysis. The adapted approach is mainly based on the factorization Δ = D D̄ of the Laplace operator by the Cauchy-Riemann operator D and its conjugate D̄, where exact solutions of the respective differential equation are constructed by using an orthonormal basis of holomorphic and anti-holomorphic functions.
The extended finite element method (XFEM) offers an elegant tool to model material discontinuities and cracks within a regular mesh, so that the element edges do not necessarily coincide with the discontinuities. This allows the modeling of propagating cracks without the requirement to adapt the mesh incrementally. Using a regular mesh offers the advantage that simple refinement strategies based on the quadtree data structure can be used to refine the mesh in regions that require a high mesh density. An additional benefit of the XFEM is that the transmission of cohesive forces through a crack can be modeled in a straightforward way without introducing additional interface elements. Finally, different criteria for the determination of the crack propagation angle are investigated and applied to numerical tests of cracked concrete specimens, which are compared with experimental results.
Major problems in applying selective sensitivity to system identification are the required precise knowledge of the system parameters and the realization of the required system of forces. This work presents a procedure that is able to derive selectively sensitive excitation by iterative experiments. The first step is to determine the selectively sensitive displacement and selectively sensitive force patterns. These values are obtained by introducing the prior information on the system parameters into an optimization which minimizes the sensitivities of the structural response with respect to the unselected parameters while keeping the sensitivities with respect to the selected parameters constant. In a second step the force pattern is used to derive dynamic loads on the tested structure and measurements are carried out. An automatic control ensures the required excitation forces. In a third step, the measured outputs are employed to update the prior information. The strategy is to minimize the difference between a predicted displacement response, formulated as a function of the unknown parameters and the measured displacements, and the selectively sensitive displacement calculated in the first step. With the updated values of the parameters a re-analysis of selective sensitivity is performed, and the experiment is repeated until the displacement responses of the model and the actual structure agree. As an illustration, a simply supported steel beam under harmonic excitation is investigated, demonstrating that the adaptive excitation can be obtained efficiently.
The importance of modern simulation methods in the mechanical analysis of heterogeneous solids is presented in detail. It is noted that even for small bodies the required high-resolution analysis reaches the limits of today's computational power, in terms of memory demand as well as acceptable computational effort. A further problem is that the accuracy of geometrical modelling of heterogeneous bodies is frequently inadequate. The present work introduces a systematic combination and adaptation of grid-based methods for achieving an essentially higher resolution in the numerical analysis of heterogeneous solids. Grid-based methods are also well suited for developing efficient and numerically stable algorithms for flexible geometrical modeling. A key aspect is the uniform data management for a grid, which can be utilized to reduce the effort and complexity of almost all methods concerned. A new finite element program, called Mulgrido, was developed to realize this concept consistently and to test the proposed methods. Several disadvantages that generally result from grid discretizations are selectively corrected by modified methods. The present work is structured into a geometrical model, a mechanical model and a numerical model. The geometrical model includes digital image-based modeling and in particular several methods for the theory-based generation of inclusion-matrix models. Essential contributions refer to variable shape, size distribution, separation checks and placement procedures of inclusions. The mechanical model prepares the fundamentals of continuum mechanics, homogenization and damage modeling for the subsequent numerical methods. The first topic of the numerical model introduces a special version of B-spline finite elements. These finite elements are entirely variable in the order k of the B-splines. For homogeneous bodies this means that the approximation quality can be scaled arbitrarily.
In addition, the multiphase finite element concept in combination with transition zones along material interfaces yields a valuable solution for heterogeneous bodies. As the formulation is element-based, the storage of a global stiffness matrix is superseded, so that the memory demand can be reduced essentially. This is possible in combination with iterative solver methods, which represent the second topic of the numerical model. Here the focus lies on multigrid methods, where the number of operations required to solve a linear equation system increases only linearly with problem size. Moreover, for badly conditioned problems a substantial improvement is achieved by preconditioning. The third part of the numerical model discusses certain aspects of damage simulation which are closely related to the proposed grid discretization. The high efficiency of the linear analysis can be maintained for damage simulation. This is achieved by a damage-controlled sequentially linear iteration scheme. Finally, a study on the effective material behavior of heterogeneous bodies is presented. Especially the influence of inclusion shapes is examined. By means of altogether more than one hundred thousand random geometrical arrangements, the effective material behavior is statistically analyzed and assessed.
In many applications, such as parameter identification of oscillating systems in civil engineering, speech processing and image processing, we are interested in the frequency content of a signal locally in time. Wavelet analysis initially provides a time-scale decomposition of signals, but this wavelet transform can be connected with an appropriate time-frequency decomposition. In Matlab, for instance, pseudo-frequencies of wavelet scales are defined as the frequency centers of the corresponding bands. These frequency bands overlap more or less, depending on the choice of the biorthogonal wavelet system. Such a definition of a frequency center is possible and useful because different frequencies predominate at different dyadic scales of a wavelet decomposition, or rather at different nodes of a wavelet packet decomposition tree. The goal of this work is to offer better algorithms for characterising the frequency band behaviour and for calculating the frequency centers of orthogonal and biorthogonal wavelet systems. This is done with product formulas in the frequency domain. The connecting procedures are thus more analytically grounded, better connected with wavelet theory and easier to assess, and they do not need any time-domain approximation of the wavelet and scaling functions. The method only works for biorthogonal wavelet systems, where scaling functions and wavelets are defined by discrete filters. But this is the practically essential case, because it is connected with fast algorithms (FWT, Mallat algorithm). Finally, closed formulas for pure oscillations corresponding to the wavelet transform are given. They can generally be used to compare the application of different wavelets in the FWT with regard to their frequency behaviour.
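The product-formula idea can be roughly illustrated (this is a sketch, not the thesis's actual algorithms): the level-1 wavelet band is evaluated in the frequency domain purely from the discrete filters, and an energy-weighted frequency center is computed. The Haar filter pair below is an illustrative choice.

```python
import numpy as np

def freq_response(filt, omega):
    """Frequency response H(w) = sum_k h[k] * exp(-i*w*k) of a discrete filter."""
    k = np.arange(len(filt))
    return np.array([np.sum(filt * np.exp(-1j * w * k)) for w in omega])

def band_center(h, g, levels=8, n=4096):
    """Energy-weighted frequency center of the level-1 wavelet band,
    using the product formula Psi(w) ~ G(w/2) * prod_j H(w/2^j)."""
    omega = np.linspace(1e-6, np.pi, n)
    psi = freq_response(g, omega / 2)
    for j in range(2, levels + 1):          # truncated infinite product
        psi = psi * freq_response(h, omega / 2**j) / np.sqrt(2)
    energy = np.abs(psi)**2
    center = np.sum(omega * energy) / np.sum(energy)
    return center / (2 * np.pi)             # normalized frequency, cycles/sample

# Haar filter pair as an illustrative orthogonal wavelet system
h = np.array([1.0, 1.0]) / np.sqrt(2)   # lowpass (scaling) filter
g = np.array([1.0, -1.0]) / np.sqrt(2)  # highpass (wavelet) filter
fc = band_center(h, g)
```

The level-1 band of an orthogonal two-band system nominally covers the upper half band, so the computed center lies between 0.25 and 0.5 cycles per sample; no time-domain approximation of the wavelet is needed.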
The design and application of high performance materials demands extensive knowledge of the material's damage behavior, which significantly depends on the meso- and microstructural complexity. Numerical simulations of crack growth on multiple length scales are promising tools to understand the damage phenomena in complex materials. In polycrystalline materials it has been observed that grain boundary decohesion is one important mechanism that leads to micro crack initiation. Following this observation, the paper presents a polycrystal mesoscale model consisting of grains with orthotropic material behavior and cohesive interfaces along grain boundaries, which is able to reproduce crack initiation and propagation along grain boundaries in polycrystalline materials. With respect to the importance of modeling the geometry of the grain structure, an advanced Voronoi algorithm is proposed to generate realistic polycrystalline material structures based on measured grain size distributions. The polycrystal model is applied to investigate crack initiation and propagation in statically loaded representative volume elements of aluminum on the mesoscale, without the necessity of defining initial damage. Future research is planned to include the mesoscale model in a multiscale model for damage analysis in polycrystalline materials.
The design of safety-critical structures exposed to cyclic excitations demands non-degrading or only moderately degrading behavior during extreme events. Among other factors, the structural behavior is mainly determined by the number of plastic cycles completed during the excitation. Existing simplified methods often ignore this dependency, or assume or require sufficient cyclic capacity. The paper introduces a new performance-based design method that explicitly considers a predefined number of re-plastifications, utilizing approaches from shakedown theory and signal-processing methods. The paper introduces the theoretical background, explains the steps of the design procedure and demonstrates its applicability with the help of an example. This project was supported by the German Science Foundation (Deutsche Forschungsgemeinschaft, DFG).
The present paper is part of a comprehensive approach of grid-based modelling. This approach includes geometrical modelling by pixel or voxel models, advanced multiphase B-spline finite elements of variable order, and fast iterative solver methods based on the multigrid method. So far, we have only presented these grid-based methods in connection with the linear elastic analysis of heterogeneous materials. Damage simulation demands further considerations. The direct stress solution of standard bilinear finite elements is severely defective, especially along material interfaces. Besides achieving objective constitutive modelling, various nonlocal formulations are applied to improve the stress solution. Such corrective data processing can refer to input data in terms of Young's modulus, to the attained finite element stress solution, or to a combination of both. A damage-controlled sequentially linear analysis is applied in connection with an isotropic damage law. Essentially, through a high resolution of the heterogeneous solid, local isotropic damage on the material subscale makes it possible to simulate complex damage topologies such as cracks. Thus the anisotropic degradation of a material sample can be simulated. Based on an effectively secant global stiffness, the analysis is numerically stable. The iteration step size is controlled for an adequate simulation of the damage path. This requires many steps, but in the iterative solution process each new step starts from the solution of the prior step, so the method is quite effective. The present paper provides an introduction to the proposed concept for a stable simulation of damage in heterogeneous solids.
Advanced finite elements are proposed for the mechanical analysis of heterogeneous materials. The approximation quality of these finite elements can be controlled by a variable order of the B-spline shape functions. An element-based formulation is developed such that the finite element problem can be solved iteratively without storing a global stiffness matrix. This memory saving allows for an essential increase in problem size. The heterogeneous material is modelled by projection onto a uniform, orthogonal grid of elements. Conventional, strictly grid-based finite element models show severe oscillating defects in the stress solutions at material interfaces. This problem is cured by the extension to multiphase finite elements. This concept makes it possible to define a heterogeneous material distribution within the finite element, achieved by a variable number of integration points, to each of which individual material properties can be assigned. Based on an interpolation of material properties at nodes and further smooth interpolation within the finite elements, a continuous material function is established. With both a continuous B-spline shape function and a continuous material function, the stress solution is continuous in the domain as well. The inaccuracy implied by the continuous material field is by far less defective than the prior oscillating behaviour of the stresses. One- and two-dimensional numerical examples are presented.
In this paper an adaptive heterogeneous multiscale model, which couples two substructures with different length scales into one numerical model, is introduced for the simulation of damage in concrete. In the presented approach the initiation, propagation and coalescence of microcracks is simulated using a mesoscale model, which explicitly represents the heterogeneous material structure of concrete. The mesoscale model is restricted to the damaged parts of the structure, whereas the undamaged regions are simulated on the macroscale. As a result, an adaptive enlargement of the mesoscale model during the simulation is necessary. In the first part of the paper, the generation of the heterogeneous mesoscopic structure of concrete, the finite element discretization of the mesoscale model, the applied isotropic damage model and the cohesive zone model are briefly introduced. Furthermore, the mesoscale simulation of a uniaxial tension test of a concrete prism is presented and the numerical results obtained are compared with experimental results. The second part focuses on the adaptive heterogeneous multiscale approach. Indicators for the model adaptation and for the coupling between the different numerical models are introduced. The transfer from the macroscale to the mesoscale and the adaptive enlargement of the mesoscale substructure are presented in detail. A nonlinear simulation of a realistic structure using an adaptive heterogeneous multiscale model is presented at the end of the paper to show the applicability of the proposed approach to large-scale structures.
A fast solver method, the multigrid preconditioned conjugate gradient method, is proposed for the mechanical analysis of heterogeneous materials on the mesoscale. Even small samples of a heterogeneous material such as concrete show a complex geometry of different phases. These materials can be modelled by projection onto a uniform, orthogonal grid of elements. A major problem is that the possible resolution of the concrete specimen is generally restricted due to (a) computation times and, even more critically, (b) memory demand. Iterative solvers can be based on a local element-based formulation, while orthogonal grids consist of geometrically identical elements. The element-based formulation is short and transparent, and therefore efficient in implementation. A variation of the material properties in elements or integration points is possible. The multigrid method is a fast iterative solver method where, ideally, the computational effort increases only linearly with problem size. This is an optimal property, which is almost reached in the implementation presented here; in fact, no other method is known that scales better than linearly. Therefore the multigrid method gains in importance the larger the problem becomes. For heterogeneous models with very large ratios of Young's moduli, however, the multigrid method slows down considerably by a constant factor. Such large ratios occur in certain heterogeneous solids, as well as in the damage analysis of solids. As a solution to this problem, the multigrid preconditioned conjugate gradient method is proposed. A benchmark highlights the multigrid preconditioned conjugate gradient method as the method of choice for very large ratios of Young's moduli. A proposed modified multigrid cycle shows good results, both as a stand-alone solver and as a preconditioner.
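A minimal sketch of the preconditioned conjugate gradient iteration is given below. For brevity, a Jacobi (diagonal) preconditioner stands in for the multigrid cycle, and the test matrix is a hypothetical 1D spring chain with a large stiffness ratio; both are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def pcg(A, b, M_inv, tol=1e-8, maxit=500):
    """Preconditioned conjugate gradient iteration. In the paper the
    preconditioner is a multigrid cycle; here M_inv applies a simple
    Jacobi (diagonal) preconditioner as a stand-in."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv(r)
    p = z.copy()
    rz = r @ z
    for _ in range(maxit):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

# hypothetical 1D spring chain with a large stiffness ratio (1 : 1e6)
n = 50
k = np.where(np.arange(n - 1) % 2 == 0, 1.0, 1e6)   # element stiffnesses
A = np.zeros((n, n))
for i, ke in enumerate(k):                # assemble 2x2 element matrices
    A[i, i] += ke; A[i + 1, i + 1] += ke
    A[i, i + 1] -= ke; A[i + 1, i] -= ke
A += np.eye(n)                            # grounding springs: A becomes SPD
b = np.ones(n)
d = np.diag(A).copy()
x = pcg(A, b, lambda r: r / d)            # Jacobi-preconditioned CG
```

Swapping `M_inv` for a multigrid V-cycle turns this into the method of the paper; the CG wrapper restores robust convergence when the stiffness ratio degrades the multigrid contraction rate.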
Projector-Based Augmentation
(2006)
Projector-based augmentation approaches hold the potential of combining the advantages of well-established spatial virtual reality and spatial augmented reality. Immersive, semi-immersive and augmented visualizations can be realized in everyday environments – without the need for special projection screens and dedicated display configurations. Limitations of mobile devices, such as low resolution and small field of view, focus constraints, and ergonomic issues can be overcome in many cases by the utilization of projection technology. Thus, applications that do not require mobility can benefit from efficient spatial augmentations. Examples range from edutainment in museums (such as storytelling projections onto natural stone walls in historical buildings) to architectural visualizations (such as augmentations of complex illumination simulations or modified surface materials in real building structures). This chapter describes projector-camera methods and multi-projector techniques that aim at correcting geometric aberrations, compensating local and global radiometric effects, and improving focus properties of images projected onto everyday surfaces.
Virtual studio technology plays an important role in modern television productions. Blue-screen matting is a common technique for integrating real actors or moderators into computer-generated sceneries. Augmented reality offers the possibility to mix real and virtual in a more general context. This article proposes a new technological approach for combining real studio content with computer-generated information. Digital light projection allows a controlled spatial, temporal, chrominance and luminance modulation of illumination – opening new possibilities for TV studios.
Recent radiometric compensation techniques make it possible to project images onto colored and textured surfaces. This is realized with projector-camera systems by scanning the projection surface on a per-pixel basis. With the captured information, a compensation image is calculated that neutralizes geometric distortions and color blending caused by the underlying surface. As a result, the brightness and the contrast of the input image are reduced compared to a conventional projection onto a white canvas. If the input image is not manipulated in its intensities, the compensation image can contain values that are outside the dynamic range of the projector. These lead to clipping errors and to visible artifacts on the surface. In this article, we present a novel algorithm that dynamically adjusts the content of the input images before radiometric compensation is carried out. This reduces the perceived visual artifacts while simultaneously preserving a maximum of luminance and contrast. The algorithm is implemented entirely on the GPU and is the first of its kind to run in real-time.
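The per-pixel compensation step can be sketched as follows, assuming the common linear model captured = FM·projected + EM. The global intensity scale used here is a crude stand-in for the article's content-adaptive adjustment, and all data are synthetic.

```python
import numpy as np

def compensation_image(target, FM, EM, scale=1.0):
    """Radiometric compensation per pixel: the camera sees
    captured = FM*projected + EM, so the projector must emit
    projected = (scale*target - EM)/FM, limited to its range [0, 1]."""
    P = (scale * target - EM) / FM
    clipped = float(np.mean((P < 0.0) | (P > 1.0)))  # out-of-range fraction
    return np.clip(P, 0.0, 1.0), clipped

rng = np.random.default_rng(0)
target = rng.uniform(0.2, 1.0, (64, 64))   # desired image
FM = rng.uniform(0.3, 1.0, (64, 64))       # surface reflectance / form factor
EM = rng.uniform(0.0, 0.1, (64, 64))       # ambient light contribution
_, full = compensation_image(target, FM, EM, 1.0)
_, dimmed = compensation_image(target, FM, EM, 0.5)  # dimming reduces clipping
```

The out-of-range fraction is exactly what causes the visible artifacts described above; the article's algorithm adjusts the input per frame so that this fraction stays small while as much luminance and contrast as possible are preserved.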
Summer overheating in buildings is a common problem, especially in office buildings with large glazed facades, high internal loads and low thermal mass. Phase change materials (PCM) that undergo a phase transition in the temperature range of thermal comfort can add thermal mass without increasing the structural load of the building. The investigated PCM were micro-encapsulated and mixed into gypsum plaster. The experiments showed a reduction of the indoor temperature of up to 4 K when using a 3 cm layer of PCM plaster with micro-encapsulated paraffin. The measurement results were used to validate a numerical model based on a temperature-dependent function for the heat capacity. Thermal building simulation showed that a 3 cm layer of PCM plaster can help to fulfil German regulations concerning summer heat protection of buildings for most office rooms.
In this paper we study the structure of the solutions to higher-dimensional Dirac-type equations generalizing the known λ-hyperholomorphic functions, where λ is a complex parameter. The structure of the solutions to the system of partial differential equations (D − λ)f = 0 shows a close connection with Bessel functions of the first kind with complex argument. The more general system of partial differential equations considered in this paper combines Dirac and Euler operators and emphasizes the role of the Bessel functions. However, contrary to the simplest case, one now obtains Bessel functions of arbitrary complex order.
The modeling of crack propagation in plain and reinforced concrete structures remains an active field of research. If a macroscopic description of the cohesive cracking process of concrete is applied, generally the Fictitious Crack Model is used, in which a force transmission across micro cracks is assumed. In most applications of this concept the cohesive model represents the relation between the normal crack opening and the normal stress, usually defined as an exponential softening function, independently of the shear stresses in the tangential direction. The cohesive forces are then calculated only from the normal stresses. Carol et al. (1997) developed an improved model using a coupled relation between normal and shear damage based on an elasto-plastic constitutive formulation. This model is based on a hyperbolic yield surface depending on the normal and shear stresses and on the tensile and shear strength, and it also represents the effect of shear-traction-induced crack opening. Due to the elasto-plastic formulation, in which the inelastic crack opening is represented by plastic strains, the model is limited to monotonic loading. In order to enable applications with unloading and reloading, the existing model is extended in this study by a combined plastic-damage formulation, which allows the modeling of crack opening and crack closure. Furthermore, the corresponding algorithmic implementation using a return mapping approach is presented and the model is verified by means of several numerical examples. Finally, an investigation concerning the identification of the model parameters by means of neural networks is presented. In this analysis an inverse approximation of the model parameters is performed, using a given set of points of the load-displacement curves as input values and the model parameters as output values.
It is shown that the elasto-plastic model parameters can be identified well with this approach, although a large number of simulations is required.
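The difference between a purely plastic, a purely damaging, and a combined formulation can be illustrated with a one-dimensional cohesive sketch. This is a toy law (exponential softening envelope with an assumed fixed split of the inelastic opening into a permanent plastic part and a recoverable damage part), not the Carol et al. model or the article's implementation; the parameter values are invented.

```python
import math

def cohesive_response(openings, ft=3.0, Gf=0.1, plastic_ratio=0.5):
    """1D combined plastic-damage cohesive law — illustrative sketch.

    Envelope: sigma = ft * exp(-ft * w / Gf)  (exponential softening with
    tensile strength ft and fracture energy Gf).  On unloading, a fraction
    `plastic_ratio` of the maximum opening is kept as permanent (plastic)
    opening; the rest is recovered (damage), so unloading/reloading follows
    a secant branch towards the plastic offset, allowing crack closure.
    """
    w_max = 0.0
    tractions = []
    for w in openings:
        w_max = max(w_max, w)
        sigma_env = ft * math.exp(-ft * w_max / Gf)
        w_p = plastic_ratio * w_max          # permanent (plastic) opening
        if w >= w_max:                       # loading on the envelope
            sigma = sigma_env
        elif w <= w_p:                       # crack closed onto plastic offset
            sigma = 0.0
        else:                                # unloading/reloading secant
            sigma = sigma_env * (w - w_p) / (w_max - w_p)
        tractions.append(sigma)
    return tractions
```

With `plastic_ratio=1.0` the law degenerates to pure plasticity (unloading at zero stiffness), with `plastic_ratio=0.0` to pure damage (secant unloading through the origin) — the combined case interpolates between both, which is what makes un- and reloading representable.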
The effective cooperation of all specialist planners involved in the building design process is a prerequisite for economical and high-quality construction. Building project organizations typically consist of numerous independent planning partners who work on specific planning tasks at geographically distributed locations and store their results in partial product models. Since planning processes in construction are highly based on the division of labor, the partial product models of the individual disciplines are strongly interdependent. The goal of the approach presented here is the integration of the partial product models of building design into a network-based model federation, using fire protection planning as an example. The paper considers the problems of distribution and, in particular, of the semantic heterogeneity of the involved partial product models. Distributed access is realized by means of mobile software agents. The agents can move freely within the network-based planning federation and act as representatives of the specialist planners. The problem of the semantic heterogeneity of the partial product models is solved on the basis of ontologies. To this end, first, domain ontologies are developed that model real-world objects of a self-contained domain, here fire protection. Second, application ontologies are developed that represent the individual proprietary data repositories (in the sense of partial product models) of the respective disciplines. Both kinds of ontologies are linked by a rule-based approach. In the presented fire protection use case, the domain ontology serves as a uniform interface for accessing the distributed models, abstracting from their database specifics and proprietary schemas.
With mobile agents and semantic technologies, a platform can thus be provided that, first, allows the dynamic integration of resources into the planning federation and, second, realizes engineering-oriented processing methods on this basis, independently of the distribution and heterogeneity of the integrated resources.
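The rule-based link between a shared domain ontology and a discipline's application ontology can be sketched in a few lines. All concept names, attribute names, and rules below are invented for illustration; the point is only that queries phrased against domain concepts are translated into the proprietary application schema by explicit mapping rules.

```python
# Hypothetical domain concept with its attributes (shared vocabulary).
DOMAIN_CONCEPTS = {"FireDoor": ["fire_rating", "width"]}

# Rule base: domain concept -> (application table, attribute mapping).
# The German names stand in for one discipline's proprietary schema.
RULES = {
    "FireDoor": ("tuer", {"fire_rating": "brandschutzklasse",
                          "width": "breite"}),
}

def query_domain(concept, app_records):
    """Answer a domain-level query from an application-level data store.

    The caller only knows the domain ontology; the rule base translates
    concept and attribute names into the proprietary schema.
    """
    table, attr_map = RULES[concept]
    return [{domain_attr: rec[app_attr]
             for domain_attr, app_attr in attr_map.items()}
            for rec in app_records.get(table, [])]

# Toy application data store of one specialist planner.
app_db = {"tuer": [{"brandschutzklasse": "T30", "breite": 0.885}]}
doors = query_domain("FireDoor", app_db)
```

In the article's architecture this translation is performed by mobile agents against distributed databases; the dictionary lookup here merely stands in for that rule evaluation.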
In classical complex function theory the geometric mapping property of conformality is closely linked with complex differentiability. In contrast to the planar case, in higher dimensions the set of conformal mappings consists only of the Möbius transformations. Unfortunately, the theory of generalized holomorphic functions (for historical reasons they are called monogenic functions) developed on the basis of Clifford algebras does not cover the Möbius transformations in higher dimensions, since Möbius transformations are not monogenic. On the other hand, monogenic functions are hypercomplex differentiable functions, and the question arises whether, from this point of view, they can still play a special role for other types of 3D mappings, for instance quasi-conformal ones. On the occasion of the 16th IKM, 3D mapping methods based on Bergman's reproducing kernel approach (BKM) were discussed. Almost all authors who had previously worked with BKM in the Clifford setting were concerned only with the general algebraic and functional-analytic background, which allows the explicit determination of the kernel in special situations. The main goal of the above-mentioned contribution was the numerical experiment, using Maple software specially developed for that purpose. Since BKM is only one of a great variety of concrete numerical methods developed for mapping problems, our goal is to present an approach to 3D mappings that is completely different from BKM. In fact, it is an extension of the ideas of L. V. Kantorovich to the three-dimensional case, using reduced quaternions and suitable series of powers of a small parameter. Whereas in the Clifford case of BKM the recovery of the mapping function itself and its relation to the monogenic kernel function is still an open problem, this approach avoids such difficulties and leads to an approximation by monogenic polynomials depending on that small parameter.
The Element-free Galerkin Method has become a very popular tool for the simulation of mechanical problems with moving boundaries. The internally applied Moving Least Squares (MLS) approximation generally uses Gaussian or cubic weighting functions and has compact support. Due to the approximative character of this method the obtained shape functions do not fulfill the interpolation condition, which causes additional numerical effort for the imposition of the essential boundary conditions. The application of a singular weighting function, which leads to singular coefficient matrices at the nodes, can solve this problem, but requires a very careful placement of the integration points. Special procedures for the handling of such singular matrices have been proposed in the literature, which require additional numerical effort. In this paper a non-singular weighting function is presented which leads to an exact fulfillment of the interpolation condition. This weighting function yields regular values of the weights and of the coefficient matrices in the whole interpolation domain, even at the nodes. Furthermore, this function gives much more stable results for varying sizes of the influence radius and for strongly distorted nodal arrangements than classical weighting functions. Nevertheless, for practical applications the results are similar to those obtained with the regularized weighting function presented by the authors in previous publications. Finally, a new concept is presented which enables an efficient analysis of systems with strongly varying node density. In this concept the nodal influence domains are adapted to the nodal configuration by interpolating the influence radius for each direction from the distances to the natural neighbor nodes. This approach requires a Voronoi diagram of the domain, which is available in this study since Delaunay triangles are used as integration background cells.
The numerical examples show that this method leads to a more uniform and smaller number of influencing nodes for systems with varying node density than the classical circular influence domains. The small additional numerical effort for interpolating the influence radius therefore leads to a remarkable reduction of the total numerical cost in a linear analysis while producing similar results. For nonlinear calculations this advantage would be even more significant.
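The interplay between the weighting function and the interpolation condition can be illustrated with a one-dimensional MLS sketch. The regularized weight below is a generic textbook-style choice, not the authors' exact formulation: for a small regularization parameter it stays finite (hence all matrices regular) while growing large at the nodes, so the approximation nearly interpolates the nodal values.

```python
def mls_value(x, nodes, values, radius, eps=1e-6):
    """1D Moving Least Squares approximation with linear basis p = [1, x].

    The weight is a regularized quasi-singular function: for eps -> 0 it
    diverges at the evaluated node (enforcing interpolation there), but for
    eps > 0 all weights and moment matrices remain regular everywhere.
    """
    def weight(d):
        s = abs(d) / radius
        if s >= 1.0:                      # compact support
            return 0.0
        # large but finite at d = 0, continuous drop to zero at s = 1
        return (s * s + eps) ** -2 - (1.0 + eps) ** -2

    # moment matrix A = sum_i w_i p(x_i) p(x_i)^T and right-hand side b
    a00 = a01 = a11 = b0 = b1 = 0.0
    for xi, ui in zip(nodes, values):
        w = weight(x - xi)
        a00 += w; a01 += w * xi; a11 += w * xi * xi
        b0 += w * ui; b1 += w * xi * ui
    det = a00 * a11 - a01 * a01
    # u_h(x) = p(x)^T A^{-1} b, solved explicitly for the 2x2 system
    c0 = (a11 * b0 - a01 * b1) / det
    c1 = (a00 * b1 - a01 * b0) / det
    return c0 + c1 * x
```

Two properties are worth checking: a linear field is reproduced exactly anywhere (MLS consistency), and at a node the approximation returns (almost) the nodal value even for nonlinear data, which is what removes the extra effort for essential boundary conditions.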
In this paper we consider three different methods for generating monogenic functions. The first is related to Fueter's well-known approach to the generation of monogenic quaternion-valued functions by means of holomorphic functions, the second is based on the solution of hypercomplex differential equations, and the third is a direct series approach based on the use of special homogeneous polynomials. We illustrate the theory by generating three different exponential functions and discuss some of their properties. Partially supported by the R&D unit Matemática e Aplicações (UIMA) of the University of Aveiro, through the Portuguese Foundation for Science and Technology (FCT), co-financed by the European Community fund FEDER.
In engineering science the modeling and numerical analysis of complex systems and relations plays an important role. In order to perform such an investigation, for example a stochastic analysis, in a reasonable computational time, approximation procedures have been developed. A very popular approach is the response surface method, where the relation between input and output quantities is represented, for example, by global polynomials or by local interpolation schemes such as Moving Least Squares (MLS). In recent years artificial neural networks (ANN) have been applied for such purposes as well. Recently an adaptive response surface approach for reliability analyses was proposed which is very efficient with respect to the number of expensive limit state function evaluations. Due to the applied simplex interpolation, however, that procedure is limited to low dimensions. In this paper the approach is extended to higher dimensions using combined ANN and MLS response surfaces, where the adaptation criterion is evaluated with only one set of joined limit state points. The adaptation criterion combines the maximum difference in the conditional probabilities of failure with the maximum difference in the approximated radii. Compared to response surfaces on directional samples or to plain directional sampling, the failure probability can be estimated with a much smaller number of limit state points.
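The core idea — evaluating the expensive limit state function only a few times, then estimating the failure probability on a cheap surrogate — can be sketched with a deliberately minimal example. This toy uses three limit state evaluations to build a linear response surface and plain Monte Carlo in standard normal space, not the article's adaptive ANN/MLS scheme with directional sampling; the limit state function is an invented example with known reliability index β = 3.

```python
import math, random

def estimate_pf(limit_state, n_samples=200000, seed=1):
    """Failure probability via a linear response surface — toy sketch.

    The 'expensive' limit_state is called only 3 times to fit a plane
    g ~ c0 + c1*u1 + c2*u2; the failure probability P(g <= 0) is then
    estimated by cheap Monte Carlo sampling on the surrogate.
    """
    c0 = limit_state(0.0, 0.0)
    c1 = limit_state(1.0, 0.0) - c0
    c2 = limit_state(0.0, 1.0) - c0
    surrogate = lambda u1, u2: c0 + c1 * u1 + c2 * u2

    rng = random.Random(seed)
    fails = sum(1 for _ in range(n_samples)
                if surrogate(rng.gauss(0, 1), rng.gauss(0, 1)) <= 0.0)
    return fails / n_samples

# linear limit state in standard normal space with reliability index beta = 3;
# the exact failure probability is Phi(-3), about 1.35e-3
g = lambda u1, u2: 3.0 - (u1 + u2) / math.sqrt(2.0)
pf = estimate_pf(g)
```

For this linear case the surrogate is exact, so all Monte Carlo error comes from sampling; for nonlinear limit states the quality of the surrogate dominates, which is why the article's adaptive refinement of the response surface near the limit state matters.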