56.03 Methoden im Bauingenieurwesen
In this paper, a generalized formulation of the problem of computer modelling of the interaction of complex composite structures with different types of dynamic loads and effects is discussed. An analysis is given of the use of some universal computing systems for the solution of such problems. It is also shown that quantification of the dynamic models of complex composite systems with variable structure, depending on the character and intensity of the effects, is necessary. Different variants of modelling the joints and spatial structural elements are suggested. This makes it possible to consider the complex modes of coupled bending-torsional oscillations of structures such as bridges, towers and high-rise buildings. The peculiarities of modelling and testing some problems of object aerodynamics and of the interaction of framework constructions with shock and moving loads are considered. In this paper, examples of the dynamic analysis of complex composite structures are shown. This is achieved by means of special methods for introducing the real excitations and loads of the analogous in-service object into the computing model. The suggested models have found wide use both in the design of new structures and in the dynamic monitoring of structures in service.
We briefly review and use the recent comprehensive research on the manifolds of square roots of −1 in real Clifford geometric algebras Cl(p,q) in order to construct the Clifford Fourier transform. Basically, in the kernel of the complex Fourier transform, the complex imaginary unit j is replaced by a square root of −1 in Cl(p,q). The Clifford Fourier transform (CFT) thus obtained generalizes previously known and applied CFTs, which replaced the complex imaginary unit j only by blades (usually pseudoscalars) squaring to −1. A major advantage of real Clifford algebra CFTs is their completely real geometric interpretation. We study (left and right) linearity of the CFT for constant multivector coefficients in Cl(p,q), translation (x-shift) and modulation (ω-shift) properties, and signal dilations. We show an inversion theorem. We establish the CFT of vector differentials, partial derivatives, vector derivatives and spatial moments of the signal. We also derive Plancherel and Parseval identities as well as a general convolution theorem.
For planning in existing built contexts, the building survey is the starting point for initial planning proposals, for the diagnosis and documentation of building damage, for the creation of catalogues of objectives, for the detailed design of renovation and conversion measures, and for ensuring compliance with building legislation, particularly in the case of change of use and refitting. An examination of currently available IT tools shows insufficient support for planning within existing contexts, most notably a deficit with regard to information capture and administration. This paper discusses the concept of a modular surveying system (basic concept, separation of geometry from semantic data, and separation into sub-systems) and the prototypical realisation of a system for the complete support of the entire building surveying process for existing buildings. The project aims to contribute to the development of a planning system for existing buildings. ...
Non-destructive techniques for damage detection have become a focus of engineering interest in the last few years. However, applying these techniques to large, complex structures such as civil engineering buildings still has some limitations, since these structures are unique and the methodologies often need a large number of specimens for reliable results. For this reason, cost and time can greatly influence the final results. Model Assisted Probability Of Detection (MAPOD) has taken its place among the ranks of damage identification techniques, especially with advances in computer capacity and modeling tools. Nevertheless, the essential condition for a successful MAPOD is having a reliable model in advance. This condition opens the door to model assessment and model quality problems. In this work, an approach is proposed that uses Partial Models (PM) to compute the Probability Of damage Detection (POD). A simply supported beam that can be structurally modified and tested under laboratory conditions is taken as an example. The study includes both experimental and numerical investigations, the application of vibration-based damage detection approaches, and a comparison of the results obtained from tests and simulations. Finally, a methodology to assess the reliability and robustness of the models is proposed.
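The core MAPOD quantity, a probability-of-detection curve, can be sketched by Monte Carlo simulation. The linear damage-indicator model and all numeric parameters below are illustrative assumptions, not values from the study:

```python
import random

# Minimal Monte Carlo sketch of a probability-of-detection (POD) curve,
# in the spirit of MAPOD as outlined above. The linear indicator model
# y = b0 + b1*a + noise and all parameter values are assumptions for
# illustration, not taken from the study.

random.seed(1)

b0, b1, sigma = 0.1, 2.0, 0.3     # assumed indicator model: y = b0 + b1*a + N(0, sigma)
threshold = 1.0                    # detection threshold on the indicator

def pod(a, trials=20000):
    """Estimate P(indicator > threshold) for damage size a by simulation."""
    hits = sum(1 for _ in range(trials)
               if b0 + b1 * a + random.gauss(0.0, sigma) > threshold)
    return hits / trials

for a in (0.1, 0.45, 0.8):
    print(f"a = {a:.2f}  POD ~ {pod(a):.3f}")
```

In practice the indicator would come from the vibration-based damage detection approaches named above, evaluated on the partial models, rather than from an assumed closed-form relation.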
This paper describes the application of interval calculus to the calculation of plate deflection, taking into account the inevitable and acceptable tolerances of the input data (input parameters). A simply supported reinforced concrete plate was taken as an example. The plate was loaded by a uniformly distributed load. Several parameters that influence the plate deflection are given as closed intervals. Accordingly, the results are obtained as intervals, so it was possible to follow the direct influence of a change of one or more input parameters on the output values (in our example, the deflection) using one model and one computing procedure. The described procedure could be applied to any FEM calculation in order to keep calculation tolerances, ISO tolerances and production tolerances within close (admissible) limits. Wolfram Mathematica was used as the tool for the interval calculations.
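The interval propagation described above can be sketched in a few lines. The deflection formula below is the textbook midspan deflection of a simply supported strip under uniform load, used only as a stand-in for the paper's FEM model; the `Interval` class, variable names and numeric tolerances are illustrative assumptions:

```python
# Interval arithmetic sketch: propagating input tolerances through a
# deflection formula, as in the approach described above. The formula is
# the textbook midspan deflection w = 5*q*L**4 / (384*E*I) of a simply
# supported strip under uniform load -- a stand-in, not the paper's FEM
# model; all numeric values are illustrative.

class Interval:
    def __init__(self, lo, hi):
        assert lo <= hi
        self.lo, self.hi = lo, hi

    def __mul__(self, other):
        other = other if isinstance(other, Interval) else Interval(other, other)
        prods = [self.lo * other.lo, self.lo * other.hi,
                 self.hi * other.lo, self.hi * other.hi]
        return Interval(min(prods), max(prods))

    __rmul__ = __mul__

    def __truediv__(self, other):
        other = other if isinstance(other, Interval) else Interval(other, other)
        assert not (other.lo <= 0 <= other.hi)   # no division through zero
        return self * Interval(1.0 / other.hi, 1.0 / other.lo)

    def __repr__(self):
        return f"[{self.lo:.4g}, {self.hi:.4g}]"

# Inputs as closed intervals (load q, span L, bending stiffness E*I):
q  = Interval(9.5e3, 10.5e3)       # N/m
L  = Interval(3.96, 4.04)          # m
EI = Interval(7.6e6, 8.4e6)        # N*m^2

w = 5 * q * (L * L * L * L) / (384 * EI)   # deflection interval in m
print(w)
```

Because each interval variable enters the formula only once, the computed bounds are the exact range of the deflection over the input tolerances.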
Pre-stressed structural elements are widely used in large-span structures. As a rule, they have higher stiffness characteristics. Pre-stressed rods can be applied as girders for various purposes, as well as in their separate parts, e.g. the rods of trusses and frames. Among the numerous ways of pre-stressing, the compression of girders, trusses and frames by tightening elements made of high-strength materials is in common use.
The development of a consistent material model for textile reinforced concrete requires the formulation and calibration of several sub-models on different resolution scales. Each of these models represents the material structure at the corresponding scale. While the models at the micro-level are able to capture the fundamental failure and damage mechanisms of the material components (e.g. filament rupture and debonding from the matrix), their computational costs limit their application to the small representative unit cells of the material structure. On the other hand, the macro-level models provide sufficient performance at the expense of a limited range of applicability. Due to the complex structure of textile reinforced concrete at several levels (filament - yarn - textile - matrix), it is a non-trivial task to develop a multiscale model from scratch. It is more effective to develop a set of conceptually related sub-models for each structural level covering selected phenomena of the material behavior. The homogenized effective material properties obtained at the lower level may be verified and validated using experiments and models at the higher level(s). In this paper the development of a consistent material model for textile reinforced concrete is presented. Load-carrying and failure mechanisms at the micro, meso and macro scales are described, and models focused on the specified scales are introduced. The models currently being developed in the framework of the collaborative research center are classified and evaluated with respect to the failure mechanisms being captured. The micromechanical modeling of the yarn and bonding behavior is discussed in detail, and the correspondence with experiments focused on the selected failure and interaction mechanisms is shown. The example of modeling the bond layer demonstrates the application of the presented strategy.
The goal of the collaborative research center (SFB 532) "Textile reinforced concrete (TRC): the basis for the development of a new material technology", installed in 1998 at Aachen University, is a complex assessment of mechanical, chemical, economical and production aspects in an interdisciplinary environment. The research project involves 10 institutes performing parallel research in 17 projects. The coordination of such a research process requires effective software support for information sharing in the form of data exchange, data analysis and data archival. Furthermore, the processes of experiment planning and design, the modification of material compositions and design parameters, and the development of new material models in such an environment call for systematic coordination applying the concepts of operational research. Flexible organization of the data coming from several sources is a crucial premise for a transparent accumulation of knowledge and, thus, for successful research in the long run. The technical information system (TRC-TIS) developed in the SFB 532 has been implemented as a database-powered web server with a transparent definition of the product and process model. It serves as an intranet server with access domains devoted to the involved research groups. At the same time, it allows the presentation of selected results by granting a data object access from the public area of the server via the internet.
Due to the amount of flow simulation and measurement data, automatic detection, classification and visualization of features is necessary for their inspection. Therefore, many automated feature detection methods have been developed in recent years. However, in most cases only one feature class is visualized afterwards, and many algorithms have problems in the presence of noise or superposition effects. In contrast, image processing and computer vision have robust methods for feature extraction and for the computation of derivatives of scalar fields. Furthermore, interpolation and other filters can be analyzed in detail. An application of these methods to vector fields would provide a solid theoretical basis for feature extraction. The authors suggest Clifford algebra as a mathematical framework for this task. Clifford algebra provides a unified notation for scalars and vectors as well as a multiplication of all basis elements. The Clifford product of two vectors provides the complete geometric information about the relative position of these vectors. Integration of this product results in Clifford correlation and convolution, which can be used for template matching of vector fields. For the frequency analysis of vector fields and the behavior of vector-valued filters, a Clifford Fourier transform has been derived for 2D and 3D. Convolution and other theorems have been proved, and fast algorithms for the computation of the Clifford Fourier transform exist. Therefore the computation of Clifford convolution can be accelerated by computing it in the Clifford Fourier domain. Clifford convolution and the Clifford Fourier transform can be used for a thorough analysis and subsequent visualization of flow fields.
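The geometric product underlying Clifford correlation can be sketched directly for 2D vector fields: for two vectors u, v in Cl(2,0) the product uv = u·v + (u∧v)e12 splits into a scalar part (the dot product, measuring alignment) and a bivector part (the wedge, measuring signed rotation). The field and filter values below are illustrative:

```python
# Sketch of the 2D Clifford (geometric) product and a direct-space Clifford
# correlation of a small vector field with a vector-valued filter, following
# the idea described above. Field and filter values are illustrative.

def clifford_product(u, v):
    """Geometric product of two 2D vectors: (scalar, bivector) pair."""
    dot = u[0] * v[0] + u[1] * v[1]       # u . v
    wedge = u[0] * v[1] - u[1] * v[0]     # coefficient of e12 in u ^ v
    return (dot, wedge)

def clifford_correlation(field, filt):
    """Valid-mode correlation of a 2D vector field with a vector filter."""
    fh, fw = len(filt), len(filt[0])
    out = []
    for i in range(len(field) - fh + 1):
        row = []
        for j in range(len(field[0]) - fw + 1):
            s = b = 0.0
            for di in range(fh):
                for dj in range(fw):
                    ds, db = clifford_product(field[i + di][j + dj], filt[di][dj])
                    s += ds
                    b += db
            row.append((s, b))
        out.append(row)
    return out

# A 3x3 field that matches the 2x2 filter exactly in its top-left corner:
filt = [[(1.0, 0.0), (0.0, 1.0)],
        [(0.0, -1.0), (1.0, 0.0)]]
field = [[(1.0, 0.0), (0.0, 1.0), (0.0, 0.0)],
         [(0.0, -1.0), (1.0, 0.0), (0.0, 0.0)],
         [(0.0, 0.0), (0.0, 0.0), (0.0, 0.0)]]

result = clifford_correlation(field, filt)
print(result[0][0])   # perfect match: scalar = filter energy, bivector = 0
```

At the matching position the bivector part vanishes and the scalar part equals the filter energy, which is exactly the template-matching information the abstract describes; the fast path via the Clifford Fourier domain would compute the same quantities.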
Model management systems are a suitable technological basis for managing digital building models in planning activities, both for new construction and for the revitalization of existing buildings. Supporting revitalization processes implies specific requirements for the design of integrated planning environments, such as the representation of information afflicted with different types of vagueness, the need to model both the target and the actual state of the building, and the ability to handle temporarily inconsistent model states. The required dynamics of the domain models and the required usability in virtual enterprises place further demands on the implementation basis of model management systems. For the implementation of such systems it proves advantageous to exploit the properties of object-oriented programming languages with non-static type systems, since their meta-level and their introspection and reflection mechanisms provide an efficient implementation basis. To effectively support synchronous cooperative planning activities within individual disciplines, a notification mechanism was realized that informs applications coupled to the model management system about concurrent modifications of the associated domain model or of project information. Furthermore, a mechanism exists for the simplified connection of existing applications that are based on static partial models or that support standardized, model-based exchange formats. Finally, a hybrid system architecture consisting of a central project server, domain servers and domain clients is presented, which is suitable for use under the conditions of cooperative and geographically distributed work in revitalization projects in virtual enterprises.
The safe operation of important civil structures such as bridges can be assessed using fracture analysis. Since analytical methods are not capable of solving many complicated engineering problems, numerical methods have been increasingly adopted. In this paper, a part of an isotropic material containing a crack is considered as a partial model and the quality of the proposed model is evaluated. Extended IsoGeometric Analysis (XIGA) is a newly developed numerical approach [1, 2] which benefits from the advantages of its origins: the eXtended Finite Element Method (XFEM) and IsoGeometric Analysis (IGA). It is capable of simulating crack propagation problems without the need for remeshing and captures the singular field at the crack tip by using crack tip enrichment functions. Also, an exact representation of the geometry is possible using only a few elements. XIGA has also been successfully applied to the fracture analysis of cracked orthotropic bodies [3] and to the simulation of curved cracks [4]. XIGA applies NURBS functions for both the geometry description and the approximation of the solution field. The drawback of NURBS functions is that local refinement cannot be defined, since they are based on tensor-product constructs, unless multiple patches are used, which also has some limitations. In this contribution, XIGA is further developed to make local refinement feasible by using T-spline basis functions. The adoption of a recovery-based error estimator in the proposed approach for the evaluation of the model quality and for performing adaptive processes is in progress. Finally, some numerical examples with available analytical solutions are investigated with the developed scheme.
SYSWELD Forum 2011
(2011)
On 25 and 26 October 2011, 70 national and international experts from research and practice met at the Bauhaus-Universität Weimar to exchange views on current developments in numerical simulation in the field of heat treatment and welding within the framework of the fourth SYSWELD Forum. Numerical simulation in the field of welding and heat treatment has advanced considerably in recent years and offers a forward-looking and innovative field of work for engineers.
The planning process in structural engineering is characterized by three cyclically repeating phases: the phase of task distribution, the phase of parallel processing with corresponding coordination, and the phase of merging the results. The available planning software mostly supports only the work in the second phase and the exchange of data sets through documents. The subject of this work is the development of a system architecture that in principle covers all phases of distributed work and different kinds of cooperation (asynchronous, parallel, reciprocal) and integrates existing applications. The shared working material of the participants is abstracted not as a set of documents but as a set of object and element versions and their relationships. Elements extend objects with application-independent properties (features). To process a task, subsets are formed on the basis of the features; new versions are derived for their elements and loaded into a private workspace. The processing is reduced to operations by which the shared working material can be kept consistent. The system architecture is described formally by mathematical means, available technology is described, and its use is presented in an implementation concept. The implementation concept is implemented as a pilot. This is done in the Internet environment, in the Java language, using a version management tool and relational databases.
This paper presents a robust model updating strategy for the system identification of wind turbines. To control the updating parameters and to avoid ill-conditioning, a global sensitivity analysis using the elementary effects method is conducted. The formulation of the objective function is based on Müller-Slany's strategy for multi-criteria functions. As a simulation-based optimization, a simulation adapter was developed to interface the simulation software ANSYS with the locally developed optimization software MOPACK. Model updating is first tested on the beam model of the rotor blade. The deviation between the numerical model and the reference was markedly reduced by the model updating process. The effect of model updating becomes more pronounced in the comparison of the measured and numerical properties of the wind turbine model. The deviations of the frequencies of the updated model are rather small. The complete comparison, including the free vibration modes via the modal assurance criterion, shows the excellent agreement of the modal parameters of the updated model with those from the measurements. The successful implementation of model validation via model updating demonstrates the applicability and effectiveness of the solution concept.
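The elementary effects screening step can be sketched as follows; the analytic stand-in function and parameter ranges are assumptions for illustration, in place of the ANSYS model used in the study:

```python
import random

# Sketch of elementary-effects (Morris) screening as mentioned above, on a
# stand-in analytic function instead of the ANSYS beam model; the test
# function and parameter ranges are assumptions for illustration only.

random.seed(0)

def model(x):
    # Stand-in response: strongly sensitive to x[0], weakly to x[1],
    # insensitive to x[2].
    return 10.0 * x[0] + 0.5 * x[1] ** 2 + 0.0 * x[2]

def elementary_effects(f, dim, n_traj=200, delta=0.1):
    """Mean absolute elementary effect per parameter on [0, 1]^dim."""
    mu_star = [0.0] * dim
    for _ in range(n_traj):
        x = [random.uniform(0.0, 1.0 - delta) for _ in range(dim)]
        base = f(x)
        for i in range(dim):
            xp = list(x)
            xp[i] += delta                     # perturb one parameter at a time
            mu_star[i] += abs(f(xp) - base) / delta
    return [m / n_traj for m in mu_star]

mu = elementary_effects(model, dim=3)
print([round(m, 3) for m in mu])   # ranking: x0 >> x1 >> x2
```

Parameters with a small mean elementary effect can then be frozen, which is exactly the role of the screening step in controlling the updating parameters.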
Due to the complex interactions between the ground, the driving machine, the lining tube and the built environment, the accurate assignment of in-situ system parameters for numerical simulation in mechanized tunneling is always subject to tremendous difficulties. However, the more accurate these parameters are, the more applicable the responses obtained from the computations will be. In particular, if the entire length of the tunnel lining is examined, the appropriate selection of the various ground parameters is crucial to the success of a tunnel project and, more importantly, will prevent potential casualties. In this context, methods of system identification for the adaptation of numerically simulated ground models are presented. Both deterministic and probabilistic approaches are considered for typical scenarios representing notable variations or changes in the ground model.
SYSBAT - An Application to the Building Production Based on Computer Supported Cooperative Work
(2003)
Our proposed solution is to enable the partners of a construction project to share all the technical data produced and handled during the building production process by building a system on internet technology. The system links distributed databases and allows building partners to remotely access and manipulate specific information. It provides an up-to-date building representation that is enriched and refined throughout the building production process. A recent collaboration with Nemetschek France (a subsidiary of Nemetschek AG, an AEC CAD software leader) focuses on a building product repository available in a web context. The aim is to help the actors of a building project to choose a technical solution that fits their professional needs, and to maintain our information system with up-to-date information. It starts with the possibility of building online building product catalogues, in order to link Allplan CAD entities with building technical features. This paper presents the conceptual approaches on which our information system is built. Starting from a general organization diagram, we focus on the product and description branches of construction works (including the latest IFC model specifications). Our aim is to add decision support to the selection process for construction works. To do so, we consider each actor's role in the system and the pieces of information each one needs to achieve a given task.
The subject of this talk is the problem of surface design based upon a mesh that may contain both triangular and quadrangular domains. We investigate the cases in which such a combined mesh proves preferable to a pure triangulation for bivariate data interpolation. First we describe a modification of the well-known flipping algorithm that constructs a locally optimal combined mesh with a predefined quality criterion. Then we introduce two quality measures for triangular and quadrangular domains and present the results of a computational experiment that compares the integral interpolation errors, and the errors in gradients, of the piecewise surface models produced by the flipping algorithm with the introduced quality measures. The experiment shows that triangular meshes with the Delaunay quality measure provide better interpolation accuracy only if the interpolated function is strictly convex, while a saddle-shaped function is better interpolated by bilinear patches within a combined mesh. For a randomly shaped function, combined meshes demonstrate smaller error values and better stability compared with pure triangulations. At the end we consider other resources for mesh improvement, such as excluding "bad" points from the input set of the mesh generation procedure. Because the function values at these points should not be lost, some linear or bilinear patches are replaced by nonlinear patches that pass through the excluded points.
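The local flip test of such a flipping algorithm, with the classical Delaunay (empty circumcircle) criterion as the quality measure, can be sketched as follows; the quad coordinates are illustrative:

```python
# Sketch of the local flip test used by flipping algorithms such as the one
# discussed above, with the classical Delaunay (empty circumcircle) quality
# criterion: flip the diagonal of a convex quad if the fourth point lies
# strictly inside the circumcircle of the triangle formed by the other
# three points. Coordinates below are illustrative.

def in_circumcircle(a, b, c, d):
    """True if d lies strictly inside the circumcircle of ccw triangle abc."""
    m = [[a[0] - d[0], a[1] - d[1], (a[0] - d[0])**2 + (a[1] - d[1])**2],
         [b[0] - d[0], b[1] - d[1], (b[0] - d[0])**2 + (b[1] - d[1])**2],
         [c[0] - d[0], c[1] - d[1], (c[0] - d[0])**2 + (c[1] - d[1])**2]]
    det = (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
         - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
         + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))
    return det > 0.0   # for ccw abc: positive determinant <=> d inside

# Quad (a, b, c, d) triangulated along diagonal a-c; should we flip to b-d?
a, b, c, d = (0.0, 0.0), (1.0, -0.2), (2.0, 0.0), (1.0, 0.25)
flip = in_circumcircle(a, b, c, d)
print("flip diagonal a-c:", flip)
```

For combined meshes with quadrangular domains, a second quality measure for quads would enter the same local decision; the incircle predicate above is only the pure-triangulation special case.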
The design of mobile IT systems, especially the design of wearable computer systems, is a complex task that requires computer science knowledge, such as that related to hardware configuration and software development, in addition to knowledge of the domain in which the system is intended to be used. Particularly in the AEC sector, it is necessary that the support from mobile information technology fit the work situation at hand. Ideally, the domain expert alone can adjust the wearable computer system to achieve this fit without having to consult IT experts. In this paper, we describe a model that helps in transferring existing design knowledge from non-AEC domains to new projects in the construction area. The base for this is a model and a methodology that describes the usage scenarios of said computer systems in an application-neutral and domain-independent way. Thus, the actual design information and experience will be transferable between different applications and domains.
Structural engineering projects are increasingly organized in networked cooperations due to permanently increasing competitive pressure and the high degree of complexity of concurrent design activities. Software intended to support such collaborative structural design processes faces enormous requirements. In the course of our joint research work, we analyzed the pros and cons of applying both the peer-to-peer (University of Bonn) and the multiagent architecture style (University of Bochum) in the field of collaborative structural design. In this paper, we join the benefits of both architecture styles in an integrated conceptual approach. We demonstrate the added value of the integrated multiagent–peer-to-peer approach by means of an example scenario in which several structural engineers cooperatively design the basic structural elements of an arched bridge using heterogeneous CAD systems.
In a superelliptic shell joined to a circular cylinder, bending stresses are absent when it is subjected to uniform pressure. Some geometrical characteristics have been found. Expressions for determining the stresses at the shell crest (at the singular point of plane type) are suggested. The problem of the theoretical critical buckling load of an elongated shell supported by frames is studied. The critical buckling load for two shells with different specifications was found experimentally.
This contribution will be freewheeling in the domain of signal, image and surface processing and will touch briefly upon some topics that have been close to the heart of people in our research group. A lot of the research of the last 20 years in this domain carried out world-wide deals with multiresolution. Multiresolution allows a function (in the broadest sense) to be represented at different levels of detail. This was applied not only to signals and images but also to the solution of all kinds of complex numerical problems. Since wavelets came into play in the 1980s, this idea has been applied and generalized by many researchers. Therefore we use it as the central idea throughout this text. Wavelets, subdivision and hierarchical bases are the appropriate tools to obtain these multiresolution effects. We shall introduce some of the concepts in a rather informal way and show that the same concepts work in one, two and three dimensions. The applications in the three cases are, however, quite different, and thus one wants to achieve very different goals when dealing with signals, images or surfaces. Because completeness in our treatment is impossible, we have chosen to describe two case studies after introducing some concepts of signal processing. These case studies are still the subject of current research. The first one attempts to solve a problem in image processing: how to approximate an edge in an image efficiently by subdivision. The method is based on normal offsets. The second case is the use of Powell-Sabin splines to give a smooth multiresolution representation of a surface. In this context we also illustrate the general method of constructing a spline wavelet basis using a lifting scheme.
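The lifting scheme mentioned in the second case study can be illustrated with the simplest wavelet of all, the Haar wavelet (rather than the Powell-Sabin spline wavelets used there): split into even and odd samples, predict the odds from the evens, then update the evens to preserve the running average. The inverse simply runs the same steps backwards:

```python
# Minimal lifting-scheme sketch of the multiresolution idea, using the Haar
# wavelet as the simplest case (not the Powell-Sabin spline wavelets of the
# second case study). The signal values are illustrative.

def haar_lift(signal):
    """One lifting step: returns (coarse averages, detail coefficients)."""
    evens = signal[0::2]
    odds = signal[1::2]
    details = [o - e for e, o in zip(evens, odds)]          # predict step
    coarse = [e + d / 2.0 for e, d in zip(evens, details)]  # update step
    return coarse, details

def haar_unlift(coarse, details):
    """Invert the lifting step exactly (perfect reconstruction)."""
    evens = [c - d / 2.0 for c, d in zip(coarse, details)]
    odds = [e + d for e, d in zip(evens, details)]
    out = []
    for e, o in zip(evens, odds):
        out.extend([e, o])
    return out

x = [4.0, 6.0, 8.0, 4.0, 2.0, 2.0, 5.0, 7.0]
coarse, details = haar_lift(x)
print(coarse)    # [5.0, 6.0, 2.0, 6.0] -- local averages at the coarser level
print(details)   # [2.0, -4.0, 0.0, 2.0]
assert haar_unlift(coarse, details) == x   # perfect reconstruction
```

Recursing on the coarse part yields the full multiresolution decomposition; with more sophisticated predict and update steps the same pattern produces spline wavelet bases.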
Polymer modification of mortar and concrete is a widely used technique for improving their durability properties. Hitherto, the main application fields of such materials have been the repair and restoration of buildings. However, due to constantly increasing service life requirements and cost efficiency, polymer-modified concrete (PCC) is also used for construction purposes. Therefore, there is a demand for studying the mechanical properties of PCC and its essential differences compared to conventional concrete (CC). It is important to investigate whether all the hypotheses and analytical formulations established for CC are also valid for PCC. In the present study, analytical models available in the literature are evaluated. These models are used for estimating the mechanical properties of concrete. The property investigated in this study is the modulus of elasticity, which is estimated from the value of the compressive strength. An existing database was extended and adapted for polymer-modified concrete mixtures along with their experimentally measured mechanical properties. Based on the indexed data, a comparison between model predictions and experiments was conducted by calculating forecast errors.
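The forecast-error comparison can be sketched with one widely used code relation, E = 4700·√f_c in MPa (the ACI 318 form), standing in for the evaluated models; the measured (f_c, E) pairs below are illustrative placeholders, not entries from the study's database:

```python
import math

# Sketch of the forecast-error comparison described above, using one widely
# used code relation, E = 4700 * sqrt(f_c) in MPa (ACI 318 form), as the
# model. The measured (f_c, E) pairs below are illustrative placeholders,
# not data from the study's database.

def predict_modulus(fc_mpa):
    """Modulus of elasticity from compressive strength, in MPa."""
    return 4700.0 * math.sqrt(fc_mpa)

measured = [(30.0, 24500.0), (40.0, 28000.0), (50.0, 31500.0)]  # illustrative

errors = [(predict_modulus(fc) - e) / e for fc, e in measured]
mape = sum(abs(r) for r in errors) / len(errors) * 100.0

for (fc, e), r in zip(measured, errors):
    print(f"f_c = {fc:4.1f} MPa  predicted = {predict_modulus(fc):8.0f}  "
          f"measured = {e:8.0f}  error = {r:+.1%}")
print(f"MAPE = {mape:.1f} %")
```

Systematic positive or negative errors over a PCC database would indicate that a relation calibrated on conventional concrete needs recalibration for polymer-modified mixtures.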
The Analysis System for Bridge Test (Chinese name abbreviation: QLJC) is an application software package specially designed for bridge tests: it analyzes the static and dynamic behavior of bridge structures, calculates the efficiency ratio of load tests, extracts the results at observation points, and so on. In this paper, the research content, system design, calculation theory, characteristics and practical application of QLJC are introduced in detail.
Information science researchers and developers have spent many years addressing the problem of retrieving exactly the information needed and using it for analysis purposes. In information-seeking dialogues, the user, i.e. a construction project manager or supplier, often asks questions about specific aspects of the tasks they want to perform. But most of the time it is difficult for software systems to unambiguously understand their overall intentions. The existence of information tunnels (Tannenbaum 2002) aggravates this phenomenon. This study includes a detailed case study of the material management process in the construction industry. Based on this case study, the structure of a formal user model for information retrieval in construction management is proposed. This prototype user model will be incorporated into the system design for construction information management and retrieval. The information retrieval system is a user-centered product based on the development of a user-configurable visitor mechanism for managing and retrieving project information without worrying too much about the underlying data structure of the database system. An executable UML model combined with an OODB is used to reduce the ambiguity of the user's intentions and to achieve user satisfaction.
With the advances in computer technology, structural optimization has become a prominent field in structural engineering. In this study an unconventional approach to structural optimization is presented which utilizes the Energy method with Integral Material behaviour (EIM), based on Lagrange's principle of minimum potential energy. The equilibrium condition with the EIM, as an alternative method for nonlinear analysis, is ensured through minimization of the potential energy as an optimization problem. Imposing this problem as an additional constraint on a higher-level cost function of a structural property, a bilevel programming problem is formulated. A nested strategy is used to solve the bilevel problem, treating the energy and the upper objective function as separate optimization problems. Utilizing the convexity of the potential energy, gradient-based algorithms are employed for its minimization, while the upper cost function is minimized using gradient-free algorithms, due to its unknown properties. Two practical examples are considered in order to demonstrate the efficiency of the method. The first one presents a sizing problem of an I-shaped steel section within an encased composite cross-section, utilizing the material nonlinearity. The second one is a discrete shape optimization of a steel truss bridge, which is compared to a previous study based on the Finite Element Method.
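The nested strategy can be sketched on a toy one-spring system: the inner problem minimizes the convex potential energy with a gradient method, while the outer gradient-free loop searches a design variable by coarse sampling. Everything below (the spring model, cost weights, penalty) is an illustrative assumption, not the paper's EIM formulation:

```python
# Toy sketch of the nested bilevel strategy described above. Inner level:
# gradient descent on the convex potential energy Pi(u) = 1/2 k u^2 - F u
# (equilibrium u* = F/k). Outer level: gradient-free coarse sampling over a
# design variable. The one-spring system and all numbers are illustrative.

def potential_energy(u, k, force):
    return 0.5 * k * u * u - force * u

def inner_minimize(k, force, steps=200):
    """Gradient descent on the convex potential; converges to u* = F/k."""
    u = 0.0
    lr = 0.5 / k                        # stable step size (lr < 2/k)
    for _ in range(steps):
        u -= lr * (k * u - force)       # dPi/du = k*u - F
    return u

def outer_cost(area, force=10.0, stiff=100.0, limit=0.05, weight=7.85):
    k = stiff * area                    # stiffness grows with section area
    u = inner_minimize(k, force)        # equilibrium from the inner problem
    penalty = 1e4 * max(0.0, u - limit) ** 2   # serviceability constraint
    return weight * area + penalty

# Gradient-free outer search: coarse sampling over candidate areas.
candidates = [0.5 + 0.1 * i for i in range(40)]
best_area = min(candidates, key=outer_cost)
print(f"best area ~ {best_area:.2f}, displacement ~ "
      f"{inner_minimize(100.0 * best_area, 10.0):.4f}")
```

The design choice mirrors the paper's argument: the inner energy problem is convex, so a cheap gradient method is reliable there, while the outer cost has unknown structure and is handled derivative-free.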
The planning of projects in building engineering is a complex process which is characterized by a dynamic composition and many modifications during the definition and execution of processes. For computer-aided and network-based cooperation a formal description of the planning process is necessary. In the research project “Relational Process Modelling in Cooperative Building Planning” a process model is described by three parts: an organizational structure with participants, a building structure with states and a process structure with activities. This research project is part of the priority program 1103 “Network-Based Cooperative Planning Processes in Structural Engineering” funded by the German Research Foundation (DFG). Planning processes in civil engineering can be described by workflow graphs. The process structure describes the logical planning process and can be formally defined by a bipartite graph. This structure consists of activities, transitions and relationships between activities and transitions. In order to minimize errors at execution time of a planning process, a consistent and structurally correct process model must be guaranteed. This contribution presents the concept and the algorithms for checking the consistency and the correctness of the process structure.
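Two of the structural checks mentioned above can be sketched for a small bipartite process graph. The checks below (bipartiteness of activity/transition edges and reachability of all nodes) are a simplified stand-in for the paper's algorithms, and the node names are invented:

```python
def check_process_structure(activities, transitions, edges):
    """Check structural correctness of a bipartite process graph:
    every edge must connect an activity with a transition, and
    every node must be reachable from the start activity."""
    acts, trans = set(activities), set(transitions)
    # 1) bipartiteness: no activity-activity or transition-transition edge
    for u, v in edges:
        if (u in acts) == (v in acts):
            return False
    # 2) connectivity: all nodes reachable from the first activity
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    start = activities[0]
    seen, stack = {start}, [start]
    while stack:
        for w in adj.get(stack.pop(), ()):
            if w not in seen:
                seen.add(w)
                stack.append(w)
    return seen == acts | trans
```

A real process model would carry further conditions (e.g. on start/end nodes), which are omitted here.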
This paper deals with the modelling and analysis of masonry vaults. Numerical FEM analyses are performed using the LUSAS code. Two vault typologies are analysed (barrel and cross-ribbed vaults), parametrically varying geometrical proportions and constraints. The proposed model and the developed numerical procedure are implemented in a computer analysis. Numerical applications are developed to assess the effectiveness of the model and the efficiency of the numerical procedure. The main objective of the present paper is the development of a computational procedure which makes it possible to define the 3D structural behaviour of masonry vaults. For each investigated example, the homogenized limit analysis approach has been employed to predict the ultimate load and the failure mechanisms. Finally, both a mesh dependence study and a sensitivity analysis are reported. The sensitivity analysis is conducted by varying mortar tensile strength and mortar friction angle over a wide range, with the aim of investigating the influence of the mechanical properties of the joints on the collapse load and failure mechanisms. The proposed computer model is validated by comparison with experimental results available in the literature.
Let the information of a civil engineering application be decomposed into objects of a given set of classes. The set of objects then forms the database of the application. The objects contain attributes and methods: properties of the objects are stored in the attributes, and the algorithms which the objects perform are implemented in the methods. If objects are modified by a user, the consistency of the data in the database is destroyed. The database must be modified in an update to restore its consistency. The sequence of the update operations is not arbitrary, but is governed by the dependence between the objects. This situation can be described mathematically with graph theory. The available algorithms for determining the update sequence are not suitable when the database is large. A new update algorithm for large databases has been developed and is presented in this paper.
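The dependence-governed update order described above is, in graph-theoretic terms, a topological ordering. The sketch below uses Kahn's algorithm on an invented example; it is not the paper's new large-database algorithm, whose details are not given in the abstract:

```python
from collections import deque

def update_sequence(objects, depends_on):
    """Determine a valid update order: an object is updated only after
    all objects it depends on have been updated (Kahn's algorithm)."""
    indegree = {o: 0 for o in objects}
    dependents = {o: [] for o in objects}
    for obj, deps in depends_on.items():
        for d in deps:
            indegree[obj] += 1
            dependents[d].append(obj)
    queue = deque(o for o in objects if indegree[o] == 0)
    order = []
    while queue:
        o = queue.popleft()
        order.append(o)
        for nxt in dependents[o]:
            indegree[nxt] -= 1
            if indegree[nxt] == 0:
                queue.append(nxt)
    if len(order) != len(objects):
        raise ValueError("cyclic dependency: no consistent update order")
    return order

# Hypothetical objects: analysis depends on geometry and loads, report on analysis
order = update_sequence(
    ["geometry", "loads", "analysis", "report"],
    {"analysis": ["geometry", "loads"], "report": ["analysis"]},
)
```

The queue-based formulation runs in time linear in objects plus dependences, which is what makes it attractive for large databases.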
In this paper we present a computer-aided method supporting co-operation between different project partners, such as architects and engineers, on the basis of strictly three-dimensional models. The centre of our software architecture is a product model described by the Industry Foundation Classes (IFC) of the International Alliance for Interoperability (IAI). From this a geometrical model is extracted and automatically transferred to a computational model serving as a basis for various simulation tasks. In this paper the focus is on the advantages of fully three-dimensional structural analysis performed by the p-version of the finite element method. Other simulation methods are discussed in a separate contribution in this volume (Treeck 2004). The validity of this approach is shown in a complex example.
Using the example of a three-span continuous girder, the failure probability of reinforced concrete beams under alternating loading is investigated with respect to the limit state of adaptation (shakedown). The shakedown analysis accounts for the load-dependent degradation of the bending stiffness due to cracking. The associated mechanical problem can be reduced to the shakedown analysis of linear elastic, ideally plastic beam structures with unknown but bounded bending stiffness. The failure probability is computed taking stochastic structural and load quantities into account. Structural properties and permanent loads are treated as time-independent random variables. Time-varying loads are modelled as service-life-related extreme values of Poisson rectangular pulse processes, accounting for temporal superposition effects, so that the failure probability is likewise a service-life-related quantity. The mechanical problems are solved numerically by mathematical optimization. The failure probability is estimated statistically by the Monte Carlo method.
The complex failure process of concrete structures cannot be described in detail by standard engineering design formulas. The numerical analysis of crack development in concrete is essential for several problems. In recent decades a large number of research groups have dealt with this topic, and several models and algorithms have been developed. However, most of these methods exhibit difficulties and are limited to special cases. The goal of this study was to develop an automatic algorithm for the efficient simulation of multiple cracking in plain and reinforced concrete structures of medium size. For this purpose meshless methods were used to describe the growth of crack surfaces. Two meshless interpolation schemes were improved for simple application. The cracking process of concrete has been modelled using a stable criterion for crack growth in combination with an improved cohesive crack model which can represent the failure process under combined crack opening and crack sliding very well. This crack growth algorithm was extended in order to represent the fluctuations of the concrete properties by enlarging the single-parameter random field concept to multiple correlated material parameters.
The ride of a tram along a line, defined by a timetable, consists of the travel time between subsequent sections and the time spent by the tram at stops. In this paper, statistical data collected in the city of Krakow is presented and evaluated. Under Polish conditions, the time trams spend at stops makes up a remarkable 30 % of the total tram line operation time. Moreover, this time is characterized by large variability. The time spent by a tram at a stop consists of the alighting and boarding time and the time lost at the stop after alighting and boarding have ended but before departure. The alighting and boarding time itself usually depends on the random number of alighting and boarding passengers and also on the number of passengers inside the vehicle. The time spent at the stop after alighting and boarding have ended, however, is the effect of certain random events, mainly the impossibility of departing from the stop caused by the lack of priorities for public transport vehicles. The main focus of the talk lies on the description and modelling of these effects. This paper is part of the CIVITAS-CARAVEL project "Clean and better transport in cities". The project has received research funding from the Community's Sixth Framework Programme. The paper reflects only the author's views and the Community is not liable for any use that may be made of the information contained therein.
The paper presents a linear static analysis of continuous orthotropic thin-walled shell structures, simply supported at the transverse ends, with an arbitrary deformable contour of the cross-section. The external loads can be arbitrary as well. This class of structures includes most bridges, scaffold bridges, some roof structures, etc. A numerical example of a continuous steel structure on five spans with an open cross-section contour has been solved. The examination of the structure uses the following two computation models: a prismatic structure consisting of isotropic strips, plates and ribs, considering their real interaction, and a smooth orthotropic plate equivalent to the structure in the first model. The displacements and forces characterizing the stressed and deformed state of the structure have been determined, and the results obtained from the two solutions have been analyzed. The study is carried out with the force method in combination with the analytical finite strip method (AFSM) in displacements. The basic system is obtained by separating the superstructure from the understructure at the locations of the intermediate supports and consists of two parts: the first part is a single-span thin-walled prismatic shell structure; the second part comprises the supports (columns, space frames, etc.). The connection between the superstructure and the intermediate supports is made under arbitrary supporting conditions. The forces at the supporting points in the direction of the removed connections are taken as the basic unknowns of the force method. The solution of the superstructure has been accomplished by the AFSM in displacements. The structure is divided in only one (transverse) direction into a finite number of plane strips connected to each other at longitudinal linear nodes. The three displacements of the points on the node lines and the rotation around those lines are taken as the basic unknowns at each node.
The boundary conditions of each strip of the basic system correspond to simple support along the transverse ends and restraint along the longitudinal ones. Each strip of the basic system is solved by the method of single trigonometric series. The method reduces to solving a discrete structure in displacements and restoring its continuity at the locations of the sections with respect to both displacements and forces. The two parts of the basic system are solved in sequence under the action of unit values of each of the basic unknowns and under the external load. The solution of the support part is accomplished using FEM structural analysis software. The basic unknown forces are determined from a system of canonical equations, the conditions of deformation continuity at the locations of the removed connections between the superstructure and the intermediate supports. The final displacements and forces at an arbitrary point of the continuous superstructure are determined using the principle of superposition. The computations were carried out with software developed in Visual Fortran version 5.0 for the PC.
We consider the standardization problem (SP), which can be formulated as follows. The demand b_i for each type i ∈ {1, 2, ..., n} of items is known. Production of y_i items of the i-th type brings a profit f_i(y_i), where f_i is a nondecreasing concave function for each i ∈ {1, 2, ..., n}. It is necessary to satisfy the demand and to maximize the total profit, provided that there exist "standardization possibilities". These possibilities mean that some types of items can be replaced by some other types. We introduce the generalized standardization problem (GSP), in which the items demand is given as a set of admissible demand vectors. We show that GSP and SP are special cases of the resource allocation problem over a network polymatroid. Based on this observation we propose a polynomial-time solution algorithm for GSP and SP.
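For the simplest special case, a single capacity constraint, the greedy unit-by-unit rule that is optimal for separable concave objectives over polymatroids can be sketched as follows. The profit functions and the capacity are illustrative, not taken from the paper:

```python
import heapq

def greedy_allocate(profits, capacity):
    """Greedy unit-by-unit allocation maximizing the sum of concave
    profits f_i under a single capacity constraint; for concave f_i this
    rule is optimal (a special case of polymatroid resource allocation).
    profits[i] is a function giving f_i(y) for integer y >= 0."""
    n = len(profits)
    y = [0] * n
    # max-heap (negated) of marginal gains f_i(y_i + 1) - f_i(y_i)
    heap = [(-(profits[i](1) - profits[i](0)), i) for i in range(n)]
    heapq.heapify(heap)
    for _ in range(capacity):
        gain, i = heapq.heappop(heap)
        if -gain <= 0:
            break  # no further positive marginal profit
        y[i] += 1
        nxt = profits[i](y[i] + 1) - profits[i](y[i])
        heapq.heappush(heap, (-nxt, i))
    return y

# Two identical concave profit curves f(y) = 10y - y^2 and capacity 6
f = lambda y: 10 * y - y * y
alloc = greedy_allocate([f, f], 6)
```

Because the marginal gains of a concave function are non-increasing, the heap always surfaces the globally best next unit, which is what makes the greedy rule exact here.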
One of the most important tasks and challenges in construction informatics today is the realization of a continuous, interdisciplinary data flow in the planning process of a building project. With regard to the international competitiveness of the German construction industry, it is essential to exploit the existing efficiency potentials in building planning, which can be achieved through a qualitative improvement of the planning as well as a reduction of the processing time of all specialist planners involved. According to the current state of the art, information objects are standardized so that they can be used consistently; they are made available to the planners' specific programs in a generally valid format. This work pursues the approach of achieving integration by standardizing the communication between the information objects and their application programs, making it possible to dispense with standardizing the objects to be transferred themselves. The goal of this work is the definition of implementation rules that all objects to be exchanged, as well as the applications that want to receive such objects, must fulfil. The objects are to be edited in the familiar applications in an unchanged manner.
In this work a new method for the integration of information in digital planning documents is developed. The basic idea of the integration approach relies on the active involvement of the users during the transfer of information and during the updating of planning documents that are inconsistent with other planning documents. This basic idea, combined with the possibilities of new communication technologies, was decisive for the specification of new methods for the transfer of information and for the monitoring of changes. These new methods are developed and presented in this work. The goal of the work is the definition of implementation rules that all objects to be exchanged must fulfil. The realization of the integration tasks by the user is based on the possibilities of the traditional integration of analogue documents.
The problem F|n=2|F is to minimize a given objective function F(C_{1,m}, C_{2,m}) of the completion times C_{i,m} of two jobs i ∈ J = {1, 2} processed on m machines M = {1, 2, ..., m}. Both jobs have the same technological route through the m machines. The processing time t_{i,k} of job i ∈ J on machine k ∈ M is known. Operation preemptions are not allowed. Let R^{2m} be the space of non-negative 2m-dimensional real vectors t = (t_{1,1}, ..., t_{1,m}, t_{2,1}, ..., t_{2,m}) with Chebyshev's distance d(t, t*). To solve problem F|n=2|F, we can use the geometric algorithm, which includes the following steps: 1) construct the digraph (V, A) for problem F|n=2|F and find the so-called border vertices in (V, A); 2) construct the set R_t of trajectories corresponding to the shortest paths in digraph (V, A) from the origin vertex to each of the border vertices; 3) find an optimal path in the set R_t that represents a schedule with minimal value of the objective function F. Let path t_u ∈ R_t be optimal for the problem F|n=2|F with operation processing times defined by vector t. If for any small positive real number e > 0 there exists a vector t* ∈ R^{2m} such that d(t, t*) = e and path t_u is not optimal for the problem F|n=2|F with operation processing times defined by vector t*, then the optimality of path t_u is not stable. The main result of the paper is the proof of necessary and sufficient conditions for the optimality stability of path t_u. If the objective function F is continuous and non-decreasing (e.g., makespan, total completion time, maximal lateness or total tardiness), then testing whether the optimality of a path t_u ∈ R_t is stable takes O(m log m) time.
Hydrodynamic and morphodynamic processes in inland waters and in the near-coastal zone produce highly complex phenomena. Suitable numerical modelling tools are necessary to assess the development of coastal zones and river beds as well as human interventions in the form of protective structures. A holistic modelling approach for the approximation of coupled wave, current and morphodynamic processes based on stabilized finite elements is presented. The majority of the model equations of hydro- and morphodynamics are transport equations. In accordance with the transport character of these equations, a stabilized finite element method on triangles is presented. The approximation corresponds to a streamline-upwind Petrov-Galerkin method for vector-valued multidimensional problems, in which the error of a standard Galerkin method is minimized by means of an upwinding coefficient. The choice of the upwinding coefficient is transferable to other problem classes and is based solely on the character of the underlying equations. The model was applied to wave and current investigations in the Jade-Weser estuary on the German North Sea coast.
Current methods for specifying target construction performance (cost estimation and scheduling) rely on an abstracted and simplified view of the interdependencies in construction projects. Bills of quantities, cost estimates and construction schedules are only indirectly oriented towards the geometry of the building and the construction site. The media used, such as paper, 2D files, digital specifications of work or 3D representations, turn the search for information on the construction site into a time-consuming process that is inefficient in view of existing media technologies. Interactive virtual environments allow rigid interdependencies to be resolved through interactive user intervention and visualize complex construction production processes. The concept of visual interactive simulation of construction production envisages developing the target specification on the basis of an interactive 3D model in order to better account for spatial changes and parallel processes on the virtual construction site when making decisions about the construction sequence. If a high degree of interactivity with the 3D model is required, computer game technologies lend themselves very well to verification purposes. Visual interactive simulation of construction production is thus to be understood as a 3D-model-based method of process modelling that requires decisions as input and delivers cost estimation and scheduling as output.
In recent years special hypercomplex Appell polynomials have been introduced by several authors, and their main properties have been studied by different methods and with different objectives. As in the classical theory of Appell polynomials, their generating function is a hypercomplex exponential function. The observation that this generalized exponential function has, for example, a close relationship with Bessel functions confirmed the practical significance of such an approach to special classes of hypercomplex differentiable functions. Its usefulness for combinatorial studies has also been investigated. Moreover, an extension of those ideas led to the construction of complete sets of hypercomplex Appell polynomial sequences. Here we show how this opens the way for a more systematic study of the relation between some classes of Special Functions and Elementary Functions in Hypercomplex Function Theory.
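As classical background, the defining Appell property and the exponential generating function referred to above read (in the hypercomplex setting the exponential is replaced by its hypercomplex counterpart):

```latex
\frac{d}{dx}\,p_n(x) = n\,p_{n-1}(x), \qquad n \ge 1,
\qquad\qquad
A(t)\,e^{xt} \;=\; \sum_{n=0}^{\infty} p_n(x)\,\frac{t^n}{n!}.
```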
The cost of keeping large area urban computer aided architectural design (CAAD) models up to date justifies wider use and access. This paper reviews the potential for collaborative groupwork creation and maintenance of such models and suggests an approach to data entry, data management and the generation of appropriate levels-of-detail models from a Geographic Information System (GIS). Staff at the University of the West of England (UWE) modelled a large area of Bristol to demonstrate millennium landmark proposals. It became swiftly apparent that continued amendment of the model to keep it an accurate reflection of changes on the ground was a major data management problem. Piecing in new CAAD models received from architectural practices, to visualise them in context as part of the planning negotiation process, has often taken staff several days of work for each instance. The model is so complex and proprietary that Bristol City operates a specialist visualisation bureau service. UWE later modelled the environs of the Tower of London to support bids for funding and to provide the context for judging the visual impact of iterative design development. Further research continued to develop more effective approaches. Data conversion and amalgamation from all the diverse sources was the major impediment to effective group working to create the models. It became apparent that a GIS would assist in retrieving all the appropriate data describing the part of the model under creation. It was possible to predict that the management of many historic part models, stepping back through time and allowing different expert interpretations to co-exist, would in itself be a major task requiring a spatial database/GIS. UWE started afresh from the original source data to explore the collaborative use of GIS and Virtual Reality Modelling Language (VRML) to integrate models and interventions from various sources and to generate an overall navigable interactive whole.
Current exploration of the combination of event-driven behaviours and Structured Query Language is seeking to define how to appropriately modify objects in the VRML model on demand. This is beginning to realise the potential of this process for: asynchronous group modelling along the lines of a collaborative virtual design studio; historic building maintenance management; visitor management; and the interpretation of historic sites to visitors and public planning information.
Spatial data acquisition, integration, and modeling for real-time project life-cycle applications
(2004)
Current methods for site modeling employ expensive laser range scanners that produce dense point clouds requiring hours or days of post-processing to arrive at a finished model. While these methods produce very detailed models of the scanned scene, useful for obtaining as-built drawings of existing structures, the associated computational time burden precludes them from being used on site for real-time decision-making. Moreover, in many project life-cycle applications, detailed models of objects are not needed. Results of earlier research conducted by the authors demonstrated novel, highly economical methods that reduce data acquisition time and the need for computationally intensive processing. These methods enable complete local area modeling on the order of a minute, with sufficient accuracy for applications such as advanced equipment control, simple as-built site modeling, and real-time safety monitoring for construction equipment. This paper describes a research project that is investigating novel ways of acquiring, integrating, modeling, and analyzing project site spatial data that do not rely on dense, expensive laser scanning technology and that enable scalability and robustness for real-time field deployment. Algorithms and methods for modeling objects of simple geometric shape (geometric primitives) from a limited number of range points provide a foundation for the further development required to address more complex site situations, especially when dynamic site information (motion of personnel and equipment) must be captured. Field experiments are being conducted to establish performance parameters and to validate the proposed methods and models. Initial experimental work has demonstrated the feasibility of this approach.
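A minimal example of extracting a geometric primitive from a small number of range points is a least-squares plane fit. The sample points and the Cramer's-rule solver below are illustrative and far simpler than the methods investigated in the project:

```python
def fit_plane(points):
    """Least-squares fit of a plane z = a*x + b*y + c to range points,
    solving the 3x3 normal equations by Cramer's rule."""
    Sxx = Sxy = Syy = Sx = Sy = Sxz = Syz = Sz = 0.0
    n = len(points)
    for x, y, z in points:
        Sxx += x * x; Sxy += x * y; Syy += y * y
        Sx += x;      Sy += y;      Sz += z
        Sxz += x * z; Syz += y * z
    # Normal equations: M * (a, b, c)^T = v
    M = [[Sxx, Sxy, Sx], [Sxy, Syy, Sy], [Sx, Sy, n]]
    v = [Sxz, Syz, Sz]
    def det3(m):
        return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
              - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
              + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))
    D = det3(M)
    sol = []
    for col in range(3):
        Mc = [row[:] for row in M]
        for r in range(3):
            Mc[r][col] = v[r]
        sol.append(det3(Mc) / D)
    return tuple(sol)  # (a, b, c)

# Hypothetical range points lying on the plane z = 2x - y + 3
pts = [(0, 0, 3), (1, 0, 5), (0, 1, 2), (1, 1, 4), (2, 1, 6)]
a, b, c = fit_plane(pts)
```

With noisy range data the same normal equations give the best-fit plane rather than an exact one.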
The numerical simulation of microstructure models in 3D requires, owing to the enormous number of degrees of freedom, significant memory resources as well as parallel computational power. Compared to homogeneous materials, the material heterogeneity on the microscale induced by different material phases demands adequate computational methods for the discretization and solution of the resulting highly nonlinear problem. To enable an efficient and scalable solution process for the linearized equation systems, the heterogeneous FE problem is described by a FETI-DP (Finite Element Tearing and Interconnecting - Dual Primal) discretization. The fundamental FETI-DP equation can be solved by a number of different approaches. In our approach the FETI-DP problem is reformulated as a saddle point system by eliminating the primal and Lagrangian variables. For the reduced saddle point system, defined only by interior and dual variables, special Uzawa algorithms can be adapted for iteratively solving the FETI-DP saddle-point equation system (FETI-DP SPE). A conjugate gradient version of the Uzawa algorithm is presented, along with numerical tests on the FETI-DP discretization of small examples using the presented solution technique. Furthermore, the inversion of the interior-dual Schur complement operator can be approximated using different techniques to build an adequate preconditioning matrix, thereby leading to substantial gains in computing time.
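The basic Uzawa iteration for a saddle point system can be shown on a dense toy example. The matrices below are invented, and the exact 2x2 inner solve stands in for the interior solver of the FETI-DP formulation (the paper's conjugate gradient variant and preconditioning are omitted):

```python
def solve2(A, b):
    # Exact solve of a 2x2 SPD system (stand-in for the interior solver)
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return [( A[1][1] * b[0] - A[0][1] * b[1]) / det,
            (-A[1][0] * b[0] + A[0][0] * b[1]) / det]

def uzawa(A, B, f, g, tau=1.0, iters=500):
    """Basic Uzawa iteration for the saddle point system
        [A  B^T] [u]   [f]
        [B   0 ] [p] = [g]
    alternating an inner solve for u with a gradient ascent step on p."""
    p = [0.0] * len(g)
    u = [0.0] * len(f)
    for _ in range(iters):
        # u-step: A u = f - B^T p
        rhs = [f[i] - sum(B[k][i] * p[k] for k in range(len(p)))
               for i in range(len(f))]
        u = solve2(A, rhs)
        # p-step: ascent on the constraint residual B u - g
        p = [p[k] + tau * (sum(B[k][i] * u[i] for i in range(len(u))) - g[k])
             for k in range(len(p))]
    return u, p

# Toy data: SPD block A, one constraint u1 + u2 = 0.5
A = [[4.0, 1.0], [1.0, 3.0]]
B = [[1.0, 1.0]]
u, p = uzawa(A, B, [1.0, 2.0], [0.5])
```

Convergence requires the step size tau to be small relative to the spectrum of B A^{-1} B^T; tau = 1 suffices for this toy system.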
What is nowadays called (classic) Clifford analysis consists in the establishment of a function theory for functions belonging to the kernel of the Dirac operator. While such functions can very well describe problems of a particle with internal SU(2)-symmetries, higher order symmetries are beyond this theory. Although many modifications (such as Yang-Mills theory) were suggested over the years, they could not address the principal problem, the need for an n-fold factorization of the d'Alembert operator. In this paper we present the basic tools of a fractional function theory in higher dimensions, for the transport operator (alpha = 1/2), by means of a fractional correspondence to the Weyl relations via fractional Riemann-Liouville derivatives. A Fischer decomposition, fractional Euler and Gamma operators, a monogenic projection, and basic fractional homogeneous powers are constructed.
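For reference, the Riemann-Liouville fractional derivative on which the fractional Weyl relations are built reads, for order 0 < α < 1:

```latex
\bigl(D^{\alpha}_{a^+} f\bigr)(x)
  \;=\; \frac{1}{\Gamma(1-\alpha)}\,\frac{d}{dx}
        \int_a^x (x-t)^{-\alpha} f(t)\,dt .
```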
The scope of this paper is the development of methods and procedures for describing the motion of an arbitrarily shaped foundation. Since the infinite half-space cannot be properly described by a model of finite dimensions without violating the radiation condition, the basic problems are the infinite dimensions of the half-space as well as its non-homogeneous nature. Consequently, an approach has been investigated to solve this problem indirectly by developing a Green's function in which the non-homogeneity and the infiniteness of the half-space are included. Once the Green's function is known, the next step is the evaluation of the contact stresses acting between the foundation and the surface of the half-space through an integral equation. The equation is solved over the area of the foundation using the Green's function as the kernel. The derivation of the three-dimensional Green's function for the homogeneous half-space (Kobayashi and Sasaki 1991) has been made using the potential method. The partial differential equations occurring in the problem are reduced to ordinary ones through the Hankel integral transform. The general idea for obtaining the three-dimensional Green's function for the layered half-space is similar, but in that case some additional phenomena may occur. One of them is the possible appearance of Stoneley surface waves propagating along the contact surfaces of the layers. Their contribution to the final result is in most cases important enough that they should not be neglected. The main advantage of the presented results compared to others obtained with numerical methods is their accuracy, especially in the case of thin layers, because all essential steps of the Green's function evaluation, except for the contour integration along the branch cut, have been carried out analytically.
On the other hand, the disadvantage of this method is that the mathematical effort for obtaining the Green's function increases drastically with the number of layers. Future work will therefore be directed towards simplifying the process described above.
In this paper we discuss explicit series constructions for the fundamental solution of the Helmholtz operator on some important examples of non-orientable conformally flat manifolds. In the context of this paper we focus on higher dimensional generalizations of the Klein bottle, which in turn generalize the higher dimensional Möbius strips discussed in preceding works. We discuss some basic properties of pinor-valued solutions to the Helmholtz equation on these manifolds.
This research focuses on an approach to describing principles of architectural layout planning within the domain of revitalization. With the aid of mathematical rules executed by a computer, solutions to design problems are generated. Provided that "design" is in principle a combinatorial problem, i.e. a constraint-based search for an overall optimal solution of a problem, an exemplary method is described to solve such problems in architectural layout planning. To avoid conflicts relating to theoretical subtleties, a customary approach adopted from Operations Research has been chosen in this work. In this approach, design is a synonym for planning, which can be described as a systematic and methodical course of action for the analysis and solution of current or future problems. The planning task is defined as the analysis of a problem with the aim of preparing optimal decisions by the use of mathematical methods. The decision problem of a planning task is represented by an optimization model, and an efficient algorithm is applied in order to find one or more solutions to the problem. The basic principle underlying the approach presented herein is the understanding of design as a search for solutions that fulfil specific criteria. This search is executed by the use of a constraint programming language.
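Treating layout design as a constraint-based search can be sketched with a brute-force assignment of rooms to grid slots. The rooms, slots and constraint predicates below are invented for illustration; a constraint programming language would replace the exhaustive loop with propagation and systematic search:

```python
from itertools import permutations

def layout_search(rooms, slots, constraints):
    """Exhaustive constraint-based search for a room-to-slot assignment:
    return the first assignment satisfying all constraint predicates."""
    for perm in permutations(slots, len(rooms)):
        assignment = dict(zip(rooms, perm))
        if all(c(assignment) for c in constraints):
            return assignment
    return None

# Hypothetical example: three rooms on a 2x2 grid of slots
rooms = ["office", "lab", "archive"]
slots = [(0, 0), (0, 1), (1, 0), (1, 1)]
adjacent = lambda p, q: abs(p[0] - q[0]) + abs(p[1] - q[1]) == 1
constraints = [
    lambda a: a["office"][1] == 0,                  # office on the street side
    lambda a: adjacent(a["office"], a["lab"]),      # lab next to the office
    lambda a: not adjacent(a["lab"], a["archive"]), # archive away from the lab
]
plan = layout_search(rooms, slots, constraints)
```

Adding an objective on top of the feasibility check turns the same search into the optimization model described in the abstract.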
The paper is dedicated to exploring the solvability of the market segmentation problem with the help of linear convolution algorithms. The mathematical formulation of this problem is an interval problem of covering a bipartite graph by stars: vertices of the first partition correspond to types of commodities, vertices of the second to customer groups. A method is proposed for reducing the interval problem to a two-criterion problem for which a linear convolution algorithm is implemented. It is proved that the multicriterion, and consequently the interval, market segmentation problem cannot be solved with the help of a linear convolution algorithm.
We consider efficient numerical methods for the solution of partial differential equations with stochastic coefficients or right hand side. The discretization is performed by the stochastic finite element method (SFEM). Separation of spatial and stochastic variables in the random input data is achieved via a Karhunen-Loève expansion or Wiener's polynomial chaos expansion. We discuss solution strategies for the Galerkin system that take advantage of the special structure of the system matrix. For stochastic coefficients linear in a set of independent random variables we employ Krylov subspace recycling techniques after having decoupled the large SFEM stiffness matrix.
The contribution presents a model that is able to simulate construction duration and cost for a building project. The model predicts a set of expected project costs and a duration schedule depending on input parameters such as production speed, scope of work, time schedule, bonding conditions, and maximum and minimum deviations from the scope of work and production speed. The simulation model is able to calculate, on the basis of an input probability level, the corresponding construction cost and time duration of a project. The reciprocal view is concerned with finding the adequate probability level for a given construction cost and given activity durations. The interpretive outputs of the application software include the compilation of a presumed dynamic progress chart. This progress chart represents the expected scenario of the development of a building project, mapping potential time dislocations of particular activities. The calculation of the presumed dynamic progress chart is based on an algorithm which calculates mean values as a partial result of the simulated building project. Construction cost and time models are, in many ways, useful tools in project management. Clients are able to make proper decisions about the time and cost schedules of their investments, and building contractors are able to estimate predicted project cost and duration before any decision is finalized.
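The link between a probability level and the corresponding cost and duration can be sketched as a Monte Carlo run over a serial chain of activities. The activity data, the uniform variation and the quantile read-out below are illustrative assumptions, not the paper's model:

```python
import random

def simulate_project(activities, runs=10000, seed=42):
    """Monte Carlo sketch of a serial activity chain: each activity is
    (duration_mean, duration_spread, cost_rate); durations vary
    uniformly, and cost follows duration via the cost rate."""
    rng = random.Random(seed)
    results = []
    for _ in range(runs):
        total_t = total_c = 0.0
        for mean, spread, rate in activities:
            t = rng.uniform(mean - spread, mean + spread)
            total_t += t
            total_c += t * rate
        results.append((total_t, total_c))
    return results

def quantile(values, q):
    # Empirical quantile: value not exceeded with probability q
    s = sorted(values)
    return s[int(q * (len(s) - 1))]

# Three hypothetical activities: excavation, structure, finishing
acts = [(10, 2, 1.5), (30, 5, 3.0), (20, 4, 2.0)]
sims = simulate_project(acts)
t80 = quantile([t for t, _ in sims], 0.80)  # duration met with 80 % probability
c80 = quantile([c for _, c in sims], 0.80)
```

Reading the empirical distribution at a chosen probability level gives the budget/schedule pair, and inverting the read-out gives the probability of meeting a given budget, i.e. the "reciprocal view" of the abstract.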
SLang - the Structural Language : Solving Nonlinear and Stochastic Problems in Structural Mechanics
(1997)
Recent developments in structural mechanics indicate an increasing need for numerical methods that deal with stochasticity. This process started with the modeling of loading uncertainties. More recently, system uncertainties such as physical or geometrical imperfections are also modeled in probabilistic terms. Clearly, this task requires a close connection of structural modeling with probabilistic modeling. Nonlinear effects are essential for a realistic description of the structural behavior. Since modern structural analysis relies quite heavily on the Finite Element Method, it seems reasonable to base stochastic structural analysis on this method. Commercially available software packages cover deterministic structural analysis in a very wide range; however, their applicability to stochastic problems is rather limited. On the other hand, there are a number of highly specialized programs for probabilistic or reliability problems which can be used only with rather simplistic structural models. In principle, both kinds of software could be combined to achieve the goal, but the major practical difficulty is then to define the most suitable way of transferring data between the programs. In order to circumvent these problems, the software package SLang (Structural Language) has been developed. SLang is a command interpreter which acts on a set of relatively complex commands. Each command takes input from and gives output to simple data structures (data objects), such as vectors and matrices. All commands communicate via these data objects, which are stored in memory or on disk. The paper shows applications to structural engineering problems, in particular failure analysis of frames and shell structures with random loads and random imperfections. Both geometrical and physical nonlinearities are taken into account.
Today's procedures for the awarding of public construction performance contracts are mainly paper-based. Although the usage of electronic means is permitted in the VOB, the regulations are not yet sufficient. In particular, software agents within the AEC-bidding process were not considered at all. The acceptance of an agent-based virtual marketplace for AEC-bidding depends on a reliable and trustworthy public key infrastructure according to the (German) digital signature act. Only if confidentiality, integrity, non-repudiation, and authentication are provided reliably will users assign sensitive business processes like public tendering procedures to software agents. The development of a secure agent-based virtual marketplace for AEC-bidding according to legal regulations is an entirely new approach from a technical as well as from a legal point of view. The objective of this research project is the development of intelligent software agents which are able to legally call for bids, to calculate proposals, and to award the successful bidder.
A simultaneous solution procedure for fluid-structure interaction problems from the field of civil engineering is presented. The structural dynamics are modelled with geometrically nonlinear elasticity theory in a total Lagrangian formulation. The flow is described by the incompressible Navier-Stokes equations; where turbulence effects are significant, the Reynolds-averaged equations are used in combination with Wilcox's k-omega turbulence model. Complex free surfaces are captured with the level-set method. The uniform discretization of fluid and structure with the space-time finite element method leads to a consistent computational model of the coupled system. Since the isoparametric space-time elements can change their geometry in the time direction, the method allows a natural description of the flow domain, which changes in time due to the structural motion. The weighted integral formulation of the coupling conditions, with global degrees of freedom for the interface stresses, ensures a conservative coupling of fluid and structure. Selected application examples demonstrate the capability of the developed methodology and confirm the good convergence properties of the simultaneous solution procedure.
Within the scheduling of construction projects, different, partly conflicting objectives have to be considered. The specification of an efficient construction schedule is a challenging task, which leads to an NP-hard multi-criteria optimization problem. In the past decades, so-called metaheuristics have been developed for scheduling problems to find near-optimal solutions in reasonable time. This paper presents a Simulated Annealing concept to determine near-optimal construction schedules. Simulated Annealing is a well-known metaheuristic optimization approach for solving complex combinatorial problems. To handle several optimization objectives, the Pareto optimization concept is applied. Thus, the optimization result is a set of Pareto-optimal schedules, which can be analyzed for selecting exactly one practicable and reasonable schedule. A flexible constraint-based simulation approach is used to generate possible neighboring solutions very quickly during the optimization process. The essential aspects of the developed Pareto Simulated Annealing concept are presented in detail.
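The Simulated Annealing core (neighbor generation plus Metropolis acceptance under a cooling schedule) can be sketched on a toy single-objective scheduling problem. The Pareto bookkeeping and the constraint-based simulation of the paper's concept are omitted; the job durations, weights, and cooling parameters below are made-up example values.

```python
import math
import random

# Toy problem: order jobs on one machine to minimise total weighted completion time.
durations = [4, 2, 7, 3, 5]
weights   = [1, 5, 2, 4, 3]

def objective(order):
    t, total = 0, 0
    for j in order:
        t += durations[j]
        total += weights[j] * t
    return total

def anneal(t0=50.0, cooling=0.95, steps=2000, seed=0):
    rng = random.Random(seed)
    cur = list(range(len(durations)))
    best = cur[:]
    temp = t0
    for _ in range(steps):
        cand = cur[:]
        i, k = rng.sample(range(len(cand)), 2)
        cand[i], cand[k] = cand[k], cand[i]      # neighboring schedule: swap two jobs
        delta = objective(cand) - objective(cur)
        # Metropolis criterion: always accept improvements, sometimes accept
        # deteriorations to escape local optima.
        if delta <= 0 or rng.random() < math.exp(-delta / temp):
            cur = cand
        if objective(cur) < objective(best):
            best = cur[:]
        temp *= cooling                          # geometric cooling
    return best, objective(best)

schedule, value = anneal()
print(schedule, value)
```

In the multi-objective setting of the paper, the scalar `objective` is replaced by a dominance check against an archive of Pareto-optimal schedules.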
A practical framework for generating cross-correlated fields with a specified marginal distribution function, an autocorrelation function, and cross correlation coefficients is presented in the paper. The contribution promotes a recent journal paper [1]. The approach relies on well-known series expansion methods for the simulation of a Gaussian random field. The proposed method requires all cross-correlated fields over the domain to share an identical autocorrelation function, and the cross correlation structure between each pair of simulated fields to be defined simply by a cross correlation coefficient. These relations result in specific properties of the eigenvectors of the covariance matrices of the discretized fields over the domain. These properties are used to decompose the eigenproblem, which must normally be solved in computing the series expansion, into two smaller eigenproblems; this decomposition represents a significant reduction of computational effort. Non-Gaussian components of a multivariate random field are simulated via memoryless transformation of underlying Gaussian random fields, for which the Nataf model is employed to modify the correlation structure. In this method, the autocorrelation structure of each field is fulfilled exactly, while the cross correlation is only approximated. The associated errors can be computed before performing simulations, and it is shown that the errors occur mainly in the cross correlation between distant points and are negligibly small in practical situations.
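The effect of the shared autocorrelation function can be illustrated in a few lines: because all fields use the same eigenbasis, it suffices to correlate the expansion coefficients mode by mode. This is a simplified Gaussian sketch only (two fields, one cross correlation coefficient, exponential kernel); the paper's eigenproblem decomposition and the Nataf transformation for non-Gaussian marginals are not reproduced.

```python
import numpy as np

def cross_correlated_fields(n=80, ell=0.3, rho=0.7, n_terms=15, seed=0):
    """Sample two Gaussian fields with identical autocorrelation and cross correlation rho."""
    rng = np.random.default_rng(seed)
    x = np.linspace(0.0, 1.0, n)
    C = np.exp(-np.abs(x[:, None] - x[None, :]) / ell)  # shared autocorrelation
    lam, phi = np.linalg.eigh(C)
    idx = np.argsort(lam)[::-1][:n_terms]
    lam, phi = lam[idx], phi[:, idx]
    # Correlate the KL coefficients of the two fields mode by mode.
    L = np.linalg.cholesky(np.array([[1.0, rho], [rho, 1.0]]))
    xi = L @ rng.standard_normal((2, n_terms))
    fields = (phi * np.sqrt(lam)) @ xi.T                # shape (n, 2)
    return fields[:, 0], fields[:, 1]

f1, f2 = cross_correlated_fields()
# The pointwise cross covariance E[f1(x) f2(x)] equals rho * C(x, x) by construction;
# a single realization only gives a noisy spatial estimate of it.
print(f1[:3], f2[:3])
```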
From the passenger's perspective, punctuality is one of the most important features of tram route operation. We present a stochastic simulation model with a special focus on determining important factors of influence. The statistical analysis is based on large samples (sample size nearly 2000) accumulated from comprehensive measurements on eight tram routes in Cracow. For the simulation, we are interested not only in average values but also in stochastic characteristics such as the variance and other properties of the distribution. A realization of tram operations is assumed to be a sequence of running times between successive stops and times spent by the tram at the stops, divided into passenger alighting and boarding times and times spent waiting for the possibility to depart. The running time depends on the kind of track separation, including priorities at traffic lights, the length of the section, and the number of intersections. For every type of section, a linear mixed regression model describes the average running time and its variance as functions of the length of the section and the number of intersections; the regression coefficients are estimated by the iteratively re-weighted least squares method. Alighting and boarding time mainly depends on the type of vehicle, the number of passengers alighting and boarding, and the occupancy of the vehicle. For the distribution of the time spent waiting for the possibility to depart, suitable distributions such as the Gamma and the Lognormal distribution are fitted.
The thesis deals with the Austrian Kölnbrein arch dam, which was damaged during its first impounding. The damage and its possible causes are documented on the basis of literature sources. After setting up a computational model, crack propagation analyses were carried out with a boundary element program and compared with the real structure. The applications "OSM", "FRANC3D", and "BES" of Cornell University were used.
SIMULATION AND MATHEMATICAL OPTIMIZATION OF THE HYDRATION OF CONCRETE FOR AVOIDING THERMAL CRACKS
(2010)
After the mixing of concrete, hardening starts with an exothermic chemical reaction known as hydration. As the reaction rate depends on the temperature, the time in the description of the hydration is replaced by the maturity, which is defined as an integral over a certain function of the temperature. The temperature distribution is governed by the heat equation with a right-hand side depending on the maturity and the temperature itself. We compare the performance of different higher-order time integration schemes with automatic time step control. The simulation of the heat distribution is important because the development of the mechanical properties is driven by the hydration. During this process it is possible that the tensile stresses exceed the tensile strength and cracks occur. The goal is to produce cheap concrete without cracks. Simple crack criteria use only temperature differences; more involved ones are based on thermal stresses. If the criterion predicts cracks, some changes in the input data are needed, which can be interpreted as optimization. The final goal is to adopt model-based optimization (in contrast to simulation-based optimization) for the problem of the hydration of young concrete and the avoidance of cracks. The first step is the simulation of the hydration, on which we focus in this paper.
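The maturity integral can be illustrated with the common Arrhenius-type equivalent age, t_eq = ∫ exp(E/R · (1/T_ref − 1/T(t))) dt, evaluated here with the trapezoidal rule. The specific rate function of the paper may differ; the activation energy and the temperature history below are example values only.

```python
import math

E_OVER_R = 4000.0   # activation energy / gas constant [K]; illustrative value
T_REF = 293.15      # reference temperature, 20 degC, in Kelvin

def equivalent_age(times, temps_celsius):
    """Maturity (equivalent age at T_ref) for a sampled temperature history."""
    rates = [math.exp(E_OVER_R * (1.0 / T_REF - 1.0 / (T + 273.15)))
             for T in temps_celsius]
    age = 0.0
    for i in range(1, len(times)):  # trapezoidal rule over the history
        age += 0.5 * (rates[i - 1] + rates[i]) * (times[i] - times[i - 1])
    return age

# A warm curing history matures faster than 24 h at the reference temperature.
hours = [0, 6, 12, 18, 24]
temps = [20, 35, 45, 40, 30]
print(equivalent_age(hours, temps))
```

In the coupled problem, this integral supplies the maturity that enters the right-hand side of the heat equation, which is why accurate time integration of the temperature history matters.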
The design of safety-critical structures exposed to cyclic excitations demands non-degrading or limited-degrading behavior during extreme events. Among other factors, the structural behavior is mainly determined by the number of plastic cycles completed during the excitation. Existing simplified methods often ignore this dependency, or assume or request sufficient cyclic capacity. The paper introduces a new performance-based design method that explicitly considers a predefined number of re-plastifications. To this end, approaches from shakedown theory and signal-processing methods are utilized. The paper introduces the theoretical background, explains the steps of the design procedure, and demonstrates the applicability with the help of an example. This project was supported by the German Science Foundation (Deutsche Forschungsgemeinschaft, DFG).
Site superintendents performing project management tasks on construction sites need to access project documents and need to collect information that they observe while inspecting the site. Often, information that is observed on a construction site needs to be integrated into electronic documents or project control systems. In the future, we expect integrated product and process models to be the medium for storing and handling construction project management information. Even though mobile computing devices today are already capable of storing and handling such integrated product and process data models, the user interaction with such large and complex models is difficult and not adequately addressed in the existing research. In this paper, we introduce a system that supports project management tasks on construction sites effectively and efficiently by making integrated product and process models accessible. In order to effectively and efficiently enter or access information, site superintendents need visual representations of the project data that are flexible with respect to the level of detail, the decomposition structure, and the type of visual representation. Based on this understanding of the information and data collection needs, we developed the navigational model framework and the application Site Data Collection System (SiDaCoS), which implements that framework. The navigational model framework allows site superintendents to create customized representations of information contained in a product and process model that correspond to their data access and data collection needs on site.
The safety of structures depends on the reliable modelling of all structural parameters. Usually these parameters are described as deterministic or stochastic quantities. Stochastic quantities are random variables that capture uncertain information about structural parameters by means of density functions. Not all uncertain structural parameters can be represented as random variables; they can, however, be modelled as fuzzy quantities. Fuzzy quantities describe uncertain structural parameters as fuzzy sets with an assessment function (membership function). Fuzzy modelling in civil engineering comprises fuzzification, fuzzy analysis, defuzzification, and safety assessment. It permits the analysis of structures with non-stochastic uncertain input information, which occurs for existing as well as for new structures. The uncertain results of fuzzy modelling allow the system behavior to be assessed more accurately; they are the starting point for a new safety assessment based on possibility theory. In the fuzzy analysis, alpha-level discretization can be employed advantageously. If the deterministic computations are not monotone, and to account for nonlinearity, the fuzzy analysis is carried out with optimization algorithms. Two examples are discussed: the solution of a transcendental eigenvalue problem and of a linear system of equations. The system responses of the fuzzy analysis form the basis of the safety assessment. For selected physical quantities, failure functions are defined that assess the possibility of failure. Using min-max operations of fuzzy set theory, the possibility of failure and the possibility of survival are obtained from the failure function and the fuzzy response.
The possibility of failure thus determined represents the subjective assessment of the possibility that the event "failure" occurs. Examples show the differences between safety assessment by means of the fuzzy model and by means of the deterministic model.
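The alpha-level discretization of a fuzzy structural parameter can be sketched in a few lines: each alpha level yields an interval, and for a monotone deterministic mapping the response interval is obtained from the interval bounds (otherwise, as noted above, an optimization over the interval is required). The triangular fuzzy number and the response function below are illustrative, not taken from the examples of the paper.

```python
def alpha_cut(a, m, b, alpha):
    """Interval of a triangular fuzzy number (a, m, b) at membership level alpha."""
    return (a + alpha * (m - a), b - alpha * (b - m))

def fuzzy_response(f, a, m, b, levels=5):
    """Propagate a triangular fuzzy input through a monotone function f,
    level by level (alpha-level discretization)."""
    out = []
    for i in range(levels + 1):
        alpha = i / levels
        lo, hi = alpha_cut(a, m, b, alpha)
        out.append((alpha, (f(lo), f(hi))))
    return out

# Example: a deflection-like response ~ 1/E for a fuzzy Young's modulus
# (monotone decreasing, so the interval bounds swap; sorted() restores order).
for alpha, (y_lo, y_hi) in fuzzy_response(lambda E: 1000.0 / E, 28.0, 30.0, 33.0):
    print(alpha, sorted((y_lo, y_hi)))
```

The resulting stack of intervals is the fuzzy system response to which failure functions and min-max operations are then applied.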
Accounting for stochastic system and load parameters in the analysis of structural behavior permitted by the Eurocodes, including global nonlinear system behavior, is necessary because it requires a different safety concept. This becomes particularly clear when the plastic limit load factor (PLLF), which allows the system capacities to be exploited up to collapse, is used for the limit state assessment. For the model of a plane reinforced concrete structure, rigid-perfectly-plastic material behavior is assumed. The PLLF for a given load pattern can be determined from an extremum principle by solving an optimization problem. This direct determination of collapse, however, causes difficulties in the stochastic analysis, because the associated limit state equations are not well-behaved. The stochastic method of multi-modal importance sampling (MMIS) is proposed, which determines the probability of failure while accounting for the properties of this mechanical model, i.e. the procedure respects the only piecewise continuous limit state equations of the specific problem; it presupposes the associated limit state function. The essential design points are sought by applying the beta method, and the probability of failure is then determined with an importance sampling algorithm using a multimodal sampling density. The procedure finds and accounts for the essential failure regions of the problem with reasonable effort. Improvements could be achieved both in the included search and iteration algorithms and in the choice of the individual sampling densities, which is the subject of further investigations.
In the first part of the talk, a heuristic for approximating the Pareto set of multi-criteria generalized job-shop scheduling problems is presented. The algorithm is based on a genetic local search heuristic: averaging the start times of the activities is used as the recombination operator, and a threshold accepting algorithm is implemented as the local search. The algorithm was tested on a large set of benchmark instances with the objective functions makespan, tardiness, lateness, and sum of completion times; the results for the various objective functions are presented. The approximation of the Pareto set produced by the algorithm can contain a large number of solutions, from which the decision maker must select one. Since the amount of data becomes very large even for moderate problem dimensions, this poses a problem for the decision maker, and the large set of solutions must be reduced to a manageable number while preserving the diversity of the remaining solutions. This reduction is called short listing and is presented in the second part of the talk. In the first step of short listing, the solutions are clustered using distance measures in the solution space, defined on the permutations of the activities on the resources; five distance measures and two clustering methods, a hierarchical and a non-hierarchical one, were examined. In the second step, one solution is selected from each cluster and presented to the decision maker. Two selection methods were investigated: in the first case, the best solution with respect to a ranking function is chosen; in the second case, the median solution with respect to the distance measure. The distance measures, clustering algorithms, and selection methods were compared on a large set of benchmark instances.
The results are presented in the talk.
In this paper, three different formulations of a Bernoulli-type free boundary problem are discussed. By analyzing the shape Hessian in the case of matching data, we distinguish between well-posed and ill-posed formulations. A nonlinear Ritz-Galerkin method is applied for discretizing the shape optimization problem. In the well-posed case, existence and convergence of the approximate shapes are proven. In combination with a fast boundary element method, efficient first- and second-order shape optimization algorithms are obtained.
This work was partially supported by the DAAD, the Fundamental Researches Foundation of Belarus, and the International Soros Science Education Program. We consider a vector discrete optimization problem on a system of non-empty subsets (trajectories) of a finite set. The vector criterion of the problem consists of partial criteria of the kinds MINSUM, MINMAX, and MINMIN. The stability of efficient (Pareto optimal, Slater optimal, and Smale optimal) trajectories to perturbations of the vector criterion parameters has been investigated. Sufficient and necessary conditions for the local stability of efficient trajectories have been obtained. Lower bounds on the stability radii of efficient trajectories, and formulas in several cases, have been found for the case when the l(inf)-norm is defined in the space of vector criterion parameters.
A new application area of software technology is smart living, or sustainable living. Within this area, application platforms are designed and realized with the goal of supporting value-added services. In this context, value-added services integrate microelectronics, home automation, and services to enhance the attractiveness of flats, homes, and buildings. Real estate companies and service providers dealing with home services in particular are interested in an effective design and management of their services. Service engineering is the established approach for designing customer-oriented service processes; it consists of several phases, from situation analysis through service creation and service design to service management. This article describes how the service blueprint method can be used to design service processes. Smart living includes all measures that turn a flat into a smart home. One special requirement of this application domain is the use of local components (actuators, sensors) within service processes. This article shows how this extended method supports service providers in improving the quality of customer-oriented service processes and in deriving the required interfaces of the actors involved. For the civil engineering process, it becomes possible to derive the required information from a built-in home automation system. The aim is to show how to obtain the smart local components needed to fulfill the IT-supported value-added services offered later. Value-added services focused on inhabitants are grouped into consulting and information, care and supervision, leisure time activities, repairs, mobility and delivery, safety and security, and supply and disposal.
In distributed project organisations and collaboration, there is a need to integrate unstructured self-contained text information with structured project data. We consider this a process of text integration, in which various text technologies can be used to externalise text content and consolidate it into structured information or flexibly interlink it with corresponding information bases. However, the effectiveness of text technologies and the potential of text integration vary greatly with the type of documents, the project setup, and the available background knowledge. The goal of our research is to establish text technologies within collaboration environments that allow for (a) flexibly combining appropriate text and data management technologies, (b) utilising available context information, and (c) sharing text information in accordance with the most critical integration tasks. A particular focus is on Semantic Service Environments, which leverage Web service and Semantic Web technologies and adequately support the required systems integration and the parallel processing of semi-structured and structured information. The paper presents an architecture for text integration that extends Semantic Service Environments with two types of integration services. The backbone of the Information Resource Sharing and Integration Service is a shared environment ontology that consolidates information on the project context and the available model, text, and general linguistic resources. It also allows for the configuration of Semantic Text Analysis and Annotation Services to analyse the text documents, as well as for capturing the discovered text information and sharing it through semantic notification and retrieval engines. A particular focus of the paper is the definition of the overall integration process, configuring a complementary set of analysis and information sharing components.
SELECTION AND SCALING OF GROUND MOTION RECORDS FOR SEISMIC ANALYSIS USING AN OPTIMIZATION ALGORITHM
(2015)
Nonlinear time history analysis and seismic performance-based methods require a set of scaled ground motions. The conventional procedure of ground motion selection is based on matching motion properties, e.g. magnitude, amplitude, fault distance, and fault mechanism; the seismic target spectrum is only used in the scaling process that follows the random selection. The aim of the paper is therefore to present a procedure for selecting sets of ground motions from a purpose-built database of ground motions. The selection procedure is based on solving an optimization problem with Dijkstra's algorithm to match the selected set of ground motions to a target response spectrum. The selection and scaling procedure for optimized sets of ground motions is presented by examining analyses of nonlinear single-degree-of-freedom systems.
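The scaling step alone can be illustrated with a closed form: for one record, the factor minimising the squared misfit between the scaled response spectrum and the target spectrum is s = Σ(target·record) / Σ(record²). The graph-based (Dijkstra) selection of record sets from the database is not reproduced here, and the spectral ordinates below are made-up numbers.

```python
import numpy as np

def best_scale(record_sa, target_sa):
    """Least-squares optimal scale factor for one record against a target spectrum."""
    record_sa = np.asarray(record_sa, dtype=float)
    target_sa = np.asarray(target_sa, dtype=float)
    return float(record_sa @ target_sa / (record_sa @ record_sa))

def misfit(record_sa, target_sa, s):
    """Sum of squared differences between the scaled record and the target."""
    record_sa = np.asarray(record_sa, dtype=float)
    target_sa = np.asarray(target_sa, dtype=float)
    return float(np.sum((s * record_sa - target_sa) ** 2))

target = [0.8, 1.0, 0.9, 0.6, 0.4]   # target spectral accelerations [g]
record = [0.5, 0.7, 0.5, 0.3, 0.2]   # unscaled record spectrum [g]
s = best_scale(record, target)
print(s, misfit(record, target, s))
```

In a set-selection procedure, such per-record misfits (or misfits of the set average) provide the edge weights over which a shortest-path search can operate.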
In spite of the extensive research in dynamic soil-structure interaction (SSI), there still exist misconceptions concerning the role of SSI in the seismic performance of structures, especially those founded on soft soil. This is due to the fact that the current analytical SSI models used to evaluate the influence of soil on the overall structural behavior are approximate models and may involve creeds and practices that are not always precise. This is especially true for the codified approaches, which include substantial approximations to provide simple frameworks for the design. As direct numerical analysis requires a high computational effort, performing an analysis considering SSI is computationally uneconomical for regular design applications. This paper sets up some milestones for evaluating SSI models. This is achieved by investigating the different assumptions and involved factors, as well as by varying the configurations of R/C moment-resisting frame structures supported by single footings that are subject to seismic excitations. It is noted that the scope of this paper is to highlight, rather than fully resolve, the above subject. A rough draft of the proposed approach is presented in this paper, whereas a thorough illustration will be given in the presentation in the course of the conference.
From past design experience with arch dams, the shape optimization of arch dams has significant practical value: it can fully exploit the material characteristics and reduce construction cost. Suitable variables need to be chosen to formulate the objective function, e.g. minimizing the total volume of the arch dam. Additionally, a series of constraints is derived, and a reasonable and convenient penalty function is formed, which can easily enforce the characteristics of the constraints in the optimal design. As the optimization method, a Genetic Algorithm is adopted to perform a global search. Simultaneously, ANSYS is used for the mechanical analysis under coupled thermal and hydraulic loads. One of the constraints of the newly designed dam is to fulfill the requirements on structural safety. Therefore, a reliability analysis is applied to offer good decision support for matters concerning the prediction of both the safety and the service life of the arch dam. In this way, the key factors that significantly influence the stability and safety of the arch dam can be identified, providing a good way to take preventive measures that prolong the service life of an arch dam and enhance the safety of the structure.
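The interplay of a Genetic Algorithm with a penalty function can be sketched on a stand-in problem: minimise a toy "volume" v(x) = x subject to a toy "safety" constraint x ≥ 2, encoded as a quadratic penalty so that infeasible candidates remain admissible during the search but become increasingly unattractive. The real shape variables, the thermal and hydraulic loads, and the ANSYS analysis are of course not reproduced.

```python
import random

def fitness(x, weight=100.0):
    """Stand-in volume plus quadratic penalty for violating x >= 2."""
    violation = max(0.0, 2.0 - x)
    return x + weight * violation**2

def ga(pop_size=30, gens=60, seed=0):
    """Minimal real-coded GA: truncation selection, arithmetic crossover,
    Gaussian mutation."""
    rng = random.Random(seed)
    pop = [rng.uniform(0.0, 10.0) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness)
        parents = pop[:pop_size // 2]            # keep the better half
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            child = 0.5 * (a + b)                # arithmetic crossover
            child += rng.gauss(0.0, 0.1)         # Gaussian mutation
            children.append(child)
        pop = parents + children
    return min(pop, key=fitness)

best = ga()
print(best)  # converges near the constrained optimum around x = 2
```

In the dam application, `fitness` would call the finite element analysis, and the penalty weight trades off constraint enforcement against search efficiency.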
O-D matrices are the most important source of information concerning the demand for transport in a city. They serve the proper design of the street network and of public transport lines. On this basis, traffic volumes are computed, which are used for dimensioning the transport infrastructure. In transport planning, the four-step method is used to compute the O-D matrix. This method uses data from household surveys and cordon surveys and requires great effort and cost. Here a different method is presented: the O-D matrix is estimated from traffic volumes measured in street cross-sections. Measured traffic volumes are needed for other purposes anyway (e.g. for signal control, or for observing the motorization trend), so this method is cheaper than the traditional procedure. Within this method there are two possibilities. The first is to estimate the O-D matrix directly from the traffic volume measurements. The second is to first compute the trip generations and then the O-D matrix. The second procedure has a substantial advantage: on the basis of the trip generations, it is possible to compute all elements of the O-D matrix, whereas estimating the O-D matrix directly from the measurements (the first procedure) yields only those elements that appear in the observed street cross-sections.
This paper concerns schedule synchronization problems in public transit networks and consists of three main parts. In the first part, the subject area is introduced, the terms are defined, and a framework for optimal synchronization in the form of a problem representation and formulation is proposed. The second part is devoted to the transfer synchronization problem, in which passengers change transit lines at transfer points; an integrated Tabu Search and Genetic solution method is developed for this specific problem. The third part deals with the headway harmonization problem, i.e. the synchronization of the schedules of different transit lines on common segments of their routes. For this problem, a new bilevel optimization method is proposed, with the harmonization of zones at the bottom level and the coordination of zones, by means of time buffers assigned to timing points, at the upper level. Finally, the synchronization problems are numerically illustrated by real-life examples of public transport lines in Cracow.
Although there are some good reasons to design engineering software as a stand-alone application for a single computer, there are also numerous possibilities for creating distributed engineering applications, in particular using the Internet. This paper presents some typical scenarios of how engineering applications can benefit from including network capabilities. In addition, some examples of Internet-based engineering applications are discussed to show how the presented concepts can be implemented.
Scalarization methods are a category of multiobjective optimization (MOO) methods. They allow the use of conventional single-objective optimization algorithms, as scalarization methods reformulate the MOO problem into a single-objective optimization problem. The scalarization methods analysed within this thesis are the Weighted Sum (WS), the Epsilon-Constraint (EC), and the MinMax (MM) method. After explaining the approach of each method, the WS, EC, and MM are applied a posteriori to three different examples: the Kursawe function; the ten bar truss, a common benchmark problem in structural optimization; and the metamodel of an aero engine exit module.
The aim is to evaluate and compare the performance of each scalarization method that is examined within this thesis. The evaluation is conducted using performance metrics, such as the hypervolume and the generational distance, as well as using visual comparison.
The application to the three examples gives insight into the advantages and disadvantages of each method, and provides further understanding of an adequate application of the methods to high-dimensional optimization problems.
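The Weighted Sum approach can be sketched on a toy biobjective problem: minimise f1(x) = x² and f2(x) = (x − 2)² over a grid, sweeping the weight to trace an approximation of the Pareto front. The problem, grid, and number of weights are illustrative choices; note that this toy front is convex, the favourable case for WS (its difficulties on non-convex fronts are part of what the thesis examines).

```python
import numpy as np

def weighted_sum_front(n_weights=11, grid=None):
    """Trace a Pareto-front approximation of (x^2, (x-2)^2) by sweeping the WS weight."""
    x = np.linspace(-1.0, 3.0, 401) if grid is None else grid
    f1, f2 = x**2, (x - 2.0) ** 2
    front = []
    for w in np.linspace(0.0, 1.0, n_weights):
        # Each weight turns the MOO problem into a single-objective subproblem.
        i = np.argmin(w * f1 + (1.0 - w) * f2)
        front.append((float(x[i]), float(f1[i]), float(f2[i])))
    return front

for xi, a, b in weighted_sum_front():
    print(round(xi, 2), round(a, 3), round(b, 3))
```

The endpoints recover the individual minima (w = 1 gives the f1 minimiser x = 0, w = 0 the f2 minimiser x = 2), and intermediate weights fill in the front between them.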
The paper introduces a systematic construction management approach supporting the expansion of a specified construction process, both automatically and semi-automatically. Throughout the whole design process, many requirements must be taken into account in order to fulfil the demands defined by clients. In translating those demands into a design concept and, ultimately, the execution plan, constraints such as site conditions, building codes, and the legal framework have to be considered. However, the complete information needed to make a sound decision is not yet available in the early phase, and decisions are traditionally taken based on experience and assumptions. Due to the vast number of available solutions, particularly in building projects, it is necessary to make those decisions traceable. This is important in order to be able to reconstruct the considerations and assumptions taken, should the project's objectives change in the future. The research is carried out by means of building information modelling, where rules derived from the standard logic of construction management knowledge are applied. This knowledge comprises the comprehensive interaction amongst the bidding process, cost estimation, construction site preparation, and project-specific logistics, which are usually still considered separately. By means of these rules, well-founded decisions regarding prefabrication and in-situ implementation can be justified, and modifications depending on the information available in the current design stage remain consistently traceable.
In this paper, experimental studies and numerical analyses carried out on reinforced concrete beams are partially reported. They aimed to apply the rigid finite element method to calculations for reinforced concrete beams using a discrete crack model. Hence, the rotational ductility resulting from crack occurrence had to be determined, and a relationship for calculating it under static equilibrium was proposed. Laboratory experiments proved that the dynamic ductility is considerably smaller; therefore, the empirical parameter was rescaled, and a formula for its value depending on the reinforcement ratio was obtained.
The topic of structural robustness is covered extensively in the current structural engineering literature, and a few evaluation methods already exist. Since these methods are based on different evaluation approaches, comparing them is difficult. However, all approaches have one thing in common: they need a structural model that represents the structure to be evaluated. As the structural model is the basis of the robustness evaluation, the question arises whether the quality of the chosen structural model influences the estimated structural robustness index. This paper explains what robustness means in structural engineering and gives an overview of existing assessment methods. One is the reliability-based robustness index, which uses the reliability indices of the intact and the damaged structure. The second is the risk-based robustness index, which estimates structural robustness using direct and indirect risk. The paper describes how these approaches to the evaluation of structural robustness work and which parameters are used. Since both approaches need a structural model for the estimation of the structural behavior and the probability of failure, the quality of the chosen structural model must be considered. The chosen model has to represent the structure and the input factors and to reflect the damages that occur. Using the example of two different model qualities, it is shown that the model choice indeed influences the quality of the robustness index.
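For orientation, the two index types mentioned above are commonly written as follows; the notation is the one usual in the literature, not necessarily that of the paper:

```latex
% reliability-based robustness index, built from the reliability
% indices \beta of the intact and the damaged structure:
\beta_R = \frac{\beta_{\mathrm{intact}}}{\beta_{\mathrm{intact}} - \beta_{\mathrm{damaged}}}

% risk-based robustness index, built from direct and indirect risks:
I_{\mathrm{rob}} = \frac{R_{\mathrm{dir}}}{R_{\mathrm{dir}} + R_{\mathrm{ind}}}
```

In both cases a value close to its upper bound indicates a robust structure, since damage then changes the reliability little (first index) or causes mainly direct, not follow-up, consequences (second index).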
In construction engineering, a schedule’s input data, which is usually not exactly known in the planning phase, is considered deterministic when generating the schedule. As a result, construction schedules become unreliable and deadlines are often not met. While the optimization of construction schedules with respect to costs and makespan has been a matter of research in the past decades, the optimization of the robustness of construction schedules has received little attention. In this paper, the effects of uncertainties inherent to the input data of construction schedules are discussed. Possibilities are investigated to improve the reliability of construction schedules by considering alternative processes for certain tasks and by identifying the combination of processes generating the most robust schedule with respect to the makespan of a construction project.
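The idea of selecting, among alternative processes, the combination yielding the most robust makespan can be sketched with a small Monte Carlo experiment. Task names, durations, and the serial schedule are invented for illustration; the paper's actual scheduling model is not reproduced here.

```python
import itertools
import random

# Hypothetical tasks, each with two alternative processes given as
# (mean duration, standard deviation) in days: fast but uncertain
# versus slower but stable.
alternatives = {
    "earthworks": [(10, 4), (12, 1)],
    "foundation": [(8, 3), (9, 1)],
}

def sample_makespan(choice, rng):
    # serial schedule: the makespan is the sum of the sampled task durations
    return sum(max(0.0, rng.gauss(mu, sigma)) for mu, sigma in choice)

def robust_choice(alternatives, n=5000, quantile=0.9, seed=1):
    """Pick the process combination minimizing the 90%-quantile makespan."""
    rng = random.Random(seed)
    best = None
    for combo in itertools.product(*alternatives.values()):
        runs = sorted(sample_makespan(combo, rng) for _ in range(n))
        q = runs[int(quantile * n)]       # empirical quantile of the makespan
        if best is None or q < best[0]:
            best = (q, combo)
    return best

q90, combo = robust_choice(alternatives)
print(combo, round(q90, 1))
```

Minimizing a high quantile rather than the mean favors the stable processes, even though their expected makespan is longer; this is the robustness trade-off discussed above.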
This paper presents an application of dynamic decision making under uncertainty in planning and estimating underground construction. The application of the proposed methodology is illustrated by its application to an actual tunneling project—The Hanging Lake Tunnel Project in Colorado, USA. To encompass the typical risks in underground construction, tunneling decisions are structured as a risk-sensitive Markov decision process that reflects the decision process faced by a contractor in each tunneling round. This decision process consists of five basic components: (1) decision stages (locations), (2) system states (ground classes and tunneling methods), (3) alternatives (tunneling methods), (4) ground class transition probabilities, and (5) tunneling cost structure. The paper also presents concepts related to risk preference that are necessary to model the contractor’s risk attitude, including the lottery concept, utility theory, and the delta property. The optimality equation is formulated, the model components are defined, and the model is solved by stochastic dynamic programming. The main results are the optimal construction plans and risk-adjusted project costs, both of which reflect the dynamics of subsurface construction, the uncertainty about geologic variability as a function of available information, and the contractor’s risk preference.
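The backward recursion behind such a model can be sketched as follows. All numbers (ground classes, methods, costs, transition probabilities, risk-aversion coefficient) are invented, and the transition probabilities are simplified to depend only on the ground class, not on the chosen method; this is not the Hanging Lake Tunnel data.

```python
import math

states = ["good", "poor"]                  # ground classes
actions = ["full_face", "top_heading"]     # tunneling methods
cost = {("good", "full_face"): 1.0, ("good", "top_heading"): 1.5,
        ("poor", "full_face"): 4.0, ("poor", "top_heading"): 2.5}
P = {"good": {"good": 0.7, "poor": 0.3},   # Markovian ground-class transitions
     "poor": {"good": 0.4, "poor": 0.6}}
GAMMA = 0.1                                # risk aversion (exponential utility)

def certainty_equivalent(dist, v):
    # risk-sensitive replacement for the plain expected future cost
    return math.log(sum(p * math.exp(GAMMA * v[s]) for s, p in dist.items())) / GAMMA

def solve(rounds):
    v = {s: 0.0 for s in states}           # value beyond the last round
    policy = []
    for _ in range(rounds):                # backward induction over rounds
        q = {s: {a: cost[s, a] + certainty_equivalent(P[s], v) for a in actions}
             for s in states}
        policy.insert(0, {s: min(q[s], key=q[s].get) for s in states})
        v = {s: min(q[s].values()) for s in states}
    return v, policy

v, policy = solve(rounds=3)
print(policy[0], {s: round(c, 2) for s, c in v.items()})
```

Setting GAMMA to zero (in the limit, the plain expectation) recovers the risk-neutral plan; a positive GAMMA penalizes cost variability, reflecting a risk-averse contractor.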
Most existing seismic design codes are based on response spectrum theory. The influence of inelastic deformations can be evaluated by considering an inelastic type of resisting force, in which case the inelastic spectrum is considerably different from the elastic one. The influence of stiffness degradation and strength deterioration can also be accounted for by including more precise material models. Some recent papers discuss the corresponding changes in response spectra due to the P-Δ effect. The experience accumulated from recent earthquakes indicates that structural pounding may considerably influence the response of structures and should be taken into account in design procedures. The most convenient way to do that is to predict the influence of pounding on the response spectra for accelerations, velocities and displacements. Generally speaking, contact problems such as pounding are characterized by a large degree of nonlinearity and slow convergence of the computational procedures. Thus, obtaining spectra in which the contact problem is accounted for is very attractive from the engineering point of view, because they could easily be implemented into design procedures. However, it is worth noting that there is no rigorous mathematical proof that the original system can be decomposed into single equations related to single-degree-of-freedom systems. The purpose of the paper is to study the influence of pounding on the response spectra and to evaluate the amplification due to impact. For this purpose, two adjacent SDOF systems are considered that are able to interact during the vibration process. The problem is solved versus the elastic stiffness ratio, which turns out to be very important for such an assemblage. The contact between the masses is numerically simulated using opening gap elements as links.
Comparisons between calculated response spectra and linear response spectra are made in order to derive analytical relationships to simply obtain the contribution of pounding. The results are graphically illustrated in response spectra format and the influence of the stiffness ratio is clarified.
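A minimal time-domain sketch of the setup described above follows: two adjacent SDOF oscillators that can pound across a small gap, with the contact modeled as a one-sided gap-element spring. All parameters (masses, stiffnesses, damping, gap, contact stiffness, ground motion) are invented and are not the paper's calibrated values.

```python
import math

m1, m2 = 1.0, 1.0
k1, k2 = 100.0, 400.0          # elastic stiffness ratio k2/k1 = 4
c1, c2 = 0.5, 1.0              # light viscous damping
gap, k_contact = 0.005, 5.0e4  # separation gap and contact-spring stiffness

def simulate(pounding, dt=2e-4, t_end=8.0):
    """Peak displacements of both oscillators, with or without contact."""
    x1 = v1 = x2 = v2 = 0.0
    peak1 = peak2 = 0.0
    for i in range(int(t_end / dt)):
        ag = 0.5 * math.sin(8.0 * i * dt)          # harmonic ground acceleration
        overlap = (x1 - x2) - gap                  # gap element: active only in contact
        fc = k_contact * overlap if (pounding and overlap > 0.0) else 0.0
        a1 = (-c1 * v1 - k1 * x1 - fc) / m1 - ag
        a2 = (-c2 * v2 - k2 * x2 + fc) / m2 - ag
        v1 += a1 * dt; x1 += v1 * dt               # semi-implicit Euler step
        v2 += a2 * dt; x2 += v2 * dt
        peak1, peak2 = max(peak1, abs(x1)), max(peak2, abs(x2))
    return peak1, peak2

print("free  :", simulate(pounding=False))
print("impact:", simulate(pounding=True))
```

Comparing the two runs for a range of stiffness ratios and excitation frequencies is, in miniature, how a pounding-modified response spectrum is assembled point by point.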
The management of resources is an essential task in every construction company. Today, ERP systems and e-business systems are available to help construction companies efficiently organise the allocation of their personnel and equipment within the company, but they cannot provide idle resources for every single task that has to be performed during a construction project. Therefore, companies need an alternative solution to better exploit expensive resources and compensate their fixed costs, while still having them available at the right time for their own business activities. This paper outlines the approach taken by the EU-funded project “e-Sharing” (IST-2001-33325) to support resource management between construction companies. It describes the requirements for the management of construction resources, the core features, and the integration approach. To this end, we outline an integrated resource-type model supporting the management and classification of construction equipment, construction tasks and qualification profiles. The development is based on a cross-domain analysis and evaluation of existing models. ...
Recently, much research on active control systems for building structures has been performed based on modern control theory, and such systems have been installed in real buildings. The authors have previously proposed intelligent fuzzy optimal active control (IFOAC) systems. IFOAC systems imitate intelligent activities of the human brain, such as prediction, adaptation, and decision-making, and both objective and subjective judgements on the active control can be taken into account. However, IFOAC systems are suited to far-field earthquakes, and their control effect becomes small in the case of near-field earthquakes, which include a few velocity pulses with large amplitudes. To improve the control effect for near-source earthquakes, the authors have also proposed hybrid control (HC) systems, in which IFOAC systems and a fuzzy control system are combined. In HC systems, the fuzzy control system is introduced as a reflective fuzzy active control (RFAC) system and imitates the spinal reflexes of humans; active control forces are applied to buildings in accordance with switching rules. In this paper, the fuzzy control rules in the RFAC system and the switching rules of the active control forces in the HC system are optimized by parameter-free genetic algorithms (PfGAs), using different earthquake inputs. The results of digital simulations show that the HC system can effectively reduce maximal response displacements under restrictions on the actuator strokes in the case of a near-source earthquake; the effectiveness of the proposed HC system is discussed and clarified.
Research on Establishment of a Standard of Traffic Impact Assessment with Integrated Database System
(2004)
Planning support systems, such as geographical information systems (GIS) and traffic flow simulation models, are widely used in current urban planning research. In this paper we propose a method to apply traffic impact assessment (TIA) to large-scale commercial developments. In TIA research we often encounter the problem that the amount of data necessary for detailed investigation and analysis increases as commercial developments become larger and more complex. As a result, TIA presents two problems: the first is the difficulty of data acquisition, and the second is the reliability of the data. As a solution, we developed an integrated database system.
A multicriteria statement of the above-mentioned problem is presented. It differs from the classical statement of the spanning tree problem: the quality of a solution is estimated by a vector objective function containing weight criteria as well as topological criteria (degree and diameter of the tree). Many real processes are not deterministic, which is why the investigation of stability is very important; moreover, many errors are connected with the calculations. The stability analysis of vector combinatorial problems allows one to discover the magnitude of changes in the initial data for which the optimal solution does not change. Furthermore, the investigation of stability allows one to construct a class of problems on the basis of a single problem by means of parameter variations. The analysis of the problems belonging to this class allows one to obtain an exact and adequate description of the model.
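The vector objective described above can be illustrated on a toy instance: enumerate the spanning trees of a small graph, score each by the pair (total edge weight, tree diameter), and keep the Pareto-optimal ones. The graph data is invented; real instances would need a smarter search than full enumeration.

```python
import itertools

edges = [(0, 1, 2), (0, 2, 2), (1, 2, 1), (1, 3, 3), (2, 3, 1)]  # (u, v, weight)
n = 4

def is_spanning_tree(subset):
    if len(subset) != n - 1:
        return False
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            x = parent[x]
        return x
    for u, v, _ in subset:              # union-find cycle check
        ru, rv = find(u), find(v)
        if ru == rv:
            return False
        parent[ru] = rv
    return True

def diameter(subset):
    adj = {i: [] for i in range(n)}
    for u, v, _ in subset:
        adj[u].append(v); adj[v].append(u)
    def ecc(s):                         # eccentricity via BFS
        dist, frontier = {s: 0}, [s]
        while frontier:
            nxt = []
            for x in frontier:
                for y in adj[x]:
                    if y not in dist:
                        dist[y] = dist[x] + 1; nxt.append(y)
            frontier = nxt
        return max(dist.values())
    return max(ecc(i) for i in range(n))

trees = [t for t in itertools.combinations(edges, n - 1) if is_spanning_tree(t)]
objs = {t: (sum(w for *_, w in t), diameter(t)) for t in trees}  # vector objective
pareto = [t for t in trees
          if not any(o != objs[t] and o[0] <= objs[t][0] and o[1] <= objs[t][1]
                     for o in objs.values())]
print(sorted(objs[t] for t in pareto))
```

The stability question from the abstract then asks how much the edge weights may be perturbed before this Pareto set changes.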
Technological processes, schedules, parallel algorithms, etc., which have technological limitations and require increased efficiency of execution, can be described by digraphs on which the appropriate optimization problem (the construction of an optimal schedule of the vertices of the digraph) is solved. The problems investigated in this work have the following general statement. Problem 1: given a graph G and a value h, construct a parallel schedule of the vertices of the digraph of minimum length; we denote this problem S(G, h, l). Problem 2: given a graph G and a value l, construct a parallel schedule of the vertices of the digraph of minimum width; we denote this problem S(G, l, h). Problem 3: given a graph G, a value h, and execution times of the operations di, i=1, …, n, construct a parallel schedule of the vertices of the digraph of minimum length; we denote this problem S(G, h, di, l). Problems 1, 2, and 3 have exponential complexity when h is arbitrary. In this work a method for solving the problem S(T, h, di, l) is offered, based on choosing the vertices of greatest weight. An approach to the solution of the problem S(G, 3, l) is offered, where G is a graph satisfying the property S[i] = S[i], i=1, …, l. For obtaining an estimate of the width of the schedule from an available estimate of its length, we offer an iterative algorithm of polynomial complexity, at each step of which the current value of the schedule width is set and used to refine the schedule length.
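The heaviest-vertex-first idea mentioned above can be sketched as a greedy list scheduler: at each step, at most h ready vertices are executed, preferring those with the greatest weight (here taken as the length of the longest path to a sink, with unit execution times). The example DAG is invented, and the weight pass assumes the vertices are topologically numbered.

```python
def schedule(succ, n, h):
    """Greedy parallel schedule of a DAG's vertices with width limit h."""
    weight = [1] * n                       # unit execution times d_i = 1
    for v in reversed(range(n)):           # assumes v < all its successors
        for s in succ.get(v, []):
            weight[v] = max(weight[v], 1 + weight[s])
    indeg = [0] * n
    for ss in succ.values():
        for s in ss:
            indeg[s] += 1
    ready = [v for v in range(n) if indeg[v] == 0]
    steps = []
    while ready:
        ready.sort(key=lambda v: -weight[v])   # heaviest vertices first
        step, ready = ready[:h], ready[h:]
        steps.append(step)
        for v in step:                         # release newly ready successors
            for s in succ.get(v, []):
                indeg[s] -= 1
                if indeg[s] == 0:
                    ready.append(s)
    return steps

succ = {0: [2, 3], 1: [3], 2: [4], 3: [4]}     # 5 vertices, vertex 4 is the sink
print(schedule(succ, n=5, h=2))
```

The number of steps returned is the schedule length l for the given width h; iterating over h values gives exactly the length/width trade-off the abstract's iterative algorithm exploits.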
Research of Deformation of Multilayered Plates on Undeformable Basis by Unflexural Specified Model
(2006)
The stress-strain state (SSS) of multilayered plates on an undeformable foundation is investigated. The computational scheme of the transversely loaded plate is formed by symmetrically mirroring the plate about the surface of contact with the foundation. The plate of double thickness then becomes bilaterally and symmetrically loaded about its median surface. This allows modelling only unflexural deformation, which reduces the number of unknowns and the overall order of differentiation of the resolving system of equations. The developed refined continual model takes into account deformations of transverse shear and transverse compression in a high iterative approximation. Both rigid contact between the foundation and the plate and frictionless sliding on the contact surface are considered. Calculations confirm the efficiency of this approach, yielding solutions that are qualitatively and quantitatively close to three-dimensional solutions.
The paper introduces the research and application of a highway construction management information integration system. It explains the development and application of a highway survey applet running on Java-enabled mobile telephones and the technique of transmitting engineering data over a GPRS wireless network, and describes the development and application of field data collection software for highway construction running on a Pocket PC. It recommends a technique for long-distance transmission of engineering data based on a client/server (C/S) structure using VPN (Virtual Private Network) technology. In particular, it elaborates on the platform of the highway construction management information integration system, which adopts geographic information system (GIS), database, and network techniques, and describes all of the subsystems: bid management, contract management, engineering design drawings, engineering survey calculation, measurement and payment, data processing of engineering experiments, quantity assessment, project planning and progress, engineering document management, etc. It further proposes a visual analysis and inquiry system for highway construction projects based on Web-GIS, and explains the research and application of highway engineering construction office automation based on a B/S structure: real-time workflow and information processing such as the management of administration, business, authorization procedures, and information distribution. Finally, the author describes the prospects for the application of C/S and B/S structures in software development for the highway construction management sector.
In current AEC practice client requirements are typically recorded in a building program, which, depending on the building type, covers various aspects from the overall goals, activities and spatial needs to very detailed material and condition requirements. This documentation is used as the starting point of the design process, but as the design progresses, it is usually left aside and changes are made incrementally based on the previous design solution. These incremental small changes can lead to a solution that may no longer meet the original requirements. In addition, design is by nature an iterative process and the proposed solutions often also cause evolution in the client requirements. However, the requirements documentation is usually not updated accordingly. Finding the latest updates and evolution of the requirements from the documentation is very difficult, if not impossible. This process can lead to an end result, which is significantly different from the documented requirements. Some important requirements may not be satisfied, and even if the design process was based on agreed-upon changes in the scope and requirements, differences in the requirements documents and in the completed building can lead to well-justified doubts about the quality of the design and construction process...
The availability and reliable mastery of modern, low-cost information and communication technologies enables even small entrepreneurial units in the construction industry to organize themselves in cross-company networks and to integrate better into the construction value chain. In so-called "virtual organizations" (VO), these economically important companies can exploit their competitive advantages, such as greater flexibility, rapid response to customer wishes, and market proximity, and thus secure and expand their long-term existence in liberal competition on an enlarged EU market. The obstacles currently existing in the information flow between construction site and office can be removed through the use of mobile, wirelessly networked devices, such as smartphones, and a newly designed infrastructure for knowledge management and context-sensitive information presentation. This publication presents conceptual approaches to an integrated information management supporting VOs, which are currently being developed within the BMBF project "IuK-System Bau".
Peculiarities of the renovation of industrial enterprises in conditions of economic self-sufficiency
(1997)
Problems of the reorientation of the building complex toward a sharply increased share of reconstruction works, capital repair, and modernization of industrial plants are considered in this work. A conception of the development and creation of a unified system for the operation and renovation of industrial plants is worked out. This system is based on computer technology and takes real economic relations into consideration.
In this paper we consider three different methods for generating monogenic functions. The first is related to Fueter's well-known approach to the generation of monogenic quaternion-valued functions by means of holomorphic functions, the second is based on the solution of hypercomplex differential equations, and the third is a direct series approach based on the use of special homogeneous polynomials. We illustrate the theory by generating three different exponential functions and discuss some of their properties. Partially supported by the R&D unit Matemática e Aplicações (UIMA) of the University of Aveiro, through the Portuguese Foundation for Science and Technology (FCT), co-financed by the European Community fund FEDER.
We investigate aspects of tram-network section reliability, which serves as part of a reliability model of the whole city tram network. One of the main points of interest is the chronological development of disturbances (namely the differences between the departure time given in the schedule and the real departure time) on subsequent sections during tram line operation. These developments were observed in comprehensive measurements carried out in Krakow during the rebuilding of one of the main transportation nodes (Rondo Mogilskie). The construction activities caused large disturbances in tram line operation, with effects extending to neighboring sections. In the second part, the stochastic character of the section running time is analyzed in more detail. Sections with only one beginning stop are considered, as well as sections with two or three beginning stops located at different streets of an intersection. Whether results from sections with two beginning stops can be combined into one set is checked with suitable statistical tests comparing the means of the two samples. The section running time may depend on the gap between two following trams and on the deviation from the schedule; this dependence is described by a multiple regression formula. The main measurements were carried out in the city center of Krakow in two stages: before and after major changes in the tramway infrastructure.
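The multiple-regression idea above can be sketched as fitting t = b0 + b1·gap + b2·deviation by least squares. The observations here are synthetic, generated from assumed coefficients (2.0, 0.3, 0.2); the Krakow measurement data is not reproduced.

```python
def fit(rows):
    """Least squares via the normal equations (X^T X) b = X^T y,
    solved by Gauss-Jordan elimination on the 3x3 system."""
    X = [[1.0, g, d] for g, d, _ in rows]
    y = [t for *_, t in rows]
    A = [[sum(r[i] * r[j] for r in X) for j in range(3)] for i in range(3)]
    b = [sum(r[i] * t for r, t in zip(X, y)) for i in range(3)]
    for i in range(3):
        p = A[i][i]
        A[i] = [a / p for a in A[i]]
        b[i] /= p
        for j in range(3):
            if j != i:
                f = A[j][i]
                A[j] = [a - f * ai for a, ai in zip(A[j], A[i])]
                b[j] -= f * b[i]
    return b

# (gap to preceding tram [min], deviation from schedule [min], running time [min])
data = [(2, 0, 2.6), (4, 1, 3.4), (6, -1, 3.6), (3, 2, 3.3), (5, 0, 3.5)]
b0, b1, b2 = fit(data)
print(round(b0, 2), round(b1, 2), round(b2, 2))
```

With real measurements the residual scatter would of course be nonzero, and the significance of b1 and b2 would be judged with the usual t-tests.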
The increasingly necessary cooperation of various participants from different disciplines, and the use of highly specialized applications in heterogeneous system environments, underline the importance and necessity of new concepts for creating a computer-supported integration level. The goal of such an integration level is to improve cooperation and communication among the participants. The prerequisite for this is the establishment of an efficient and error-free exchange of data and information between the various specialist planners and their applications. The basis for the data integration level is a digital building model in the sense of a "virtual building", which provides all relevant data and information about a building to be planned or an existing one. In realizing a building-model-oriented data integration level and its model management, the definition of the building model, that is, the specification of the relevant data to be exchanged, proves to be extremely complex. The relation-oriented approach presented here, i.e. the realization of the data and information exchange by means of defined relations between dynamically modifiable domain models, offers ways to: * reduce and master the complexity of the building model (formation of partial models) * realize an efficient data exchange (relation management). The relation-oriented approach thus represents an adequate way of modelling a digital building model as a data integration level for the life cycle of a building.
In the field of reconstruction, planning engineers and architects have always faced the task of capturing the geometry and structure of existing buildings and deriving reconstruction plans and technologies from them. These surveying measures are very extensive and costly. The goal of the approach presented here was therefore to develop a low-cost method that guarantees a contactless measurement of rooms bounded by planes (polyhedral rooms) with an accuracy appropriate to the requirements. Essentially two problem areas are treated. The first comprises the proof that planar objects (walls) can be reconstructed to scale from monocular photographs with an accuracy sufficient for the intended applications. Instead of the path usual in photogrammetry, namely camera calibration by means of precisely measured control points, an approach was pursued in which a parameter-dependent measuring figure, projected onto the object by laser spot projectors, combined with a priori known image content, forms the basis for the true-to-scale reconstruction of the object. The second problem area deals with assembling rooms bounded by planes from the individual planes (walls) which, as the result of the first step, are available projectively rectified, i.e. as orthogonal plan views, but inevitably contain method-related errors. The goal here is to achieve a gain in accuracy by using a priori knowledge of the room structure together with methods of mathematical optimization.
The theory of regular quaternionic functions of a reduced quaternionic variable is a 3-dimensional generalization of complex analysis. The Moisil-Theodorescu system (MTS) is a regularity condition for such functions of the radius vector r = ix + jy + kz seen as a reduced quaternionic variable. The analogues of the main theorems of complex analysis are established for the MTS in quaternionic form: Cauchy's theorem, the Cauchy integral formula, Taylor and Laurent series, approximation theorems, and the properties of Cauchy-type integrals. The analogues of the positive powers (inner spherical monogenics) are investigated: the set of recurrence formulas between the inner spherical monogenics and explicit formulas are established. Some applications of regular functions in elasticity theory and hydrodynamics are given.
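For reference, the Moisil-Theodorescu system mentioned above is usually written in vector form. For a function f = u_0 + i u_1 + j u_2 + k u_3 with vector part u = (u_1, u_2, u_3), the regularity condition reads:

```latex
% Moisil-Theodorescu system for f = u_0 + i\,u_1 + j\,u_2 + k\,u_3,
% with \mathbf{u} = (u_1, u_2, u_3):
\operatorname{div}\mathbf{u} = 0, \qquad
\operatorname{grad} u_0 + \operatorname{curl}\mathbf{u} = \mathbf{0}.
```

For u_0 = 0 this reduces to a divergence-free, curl-free vector field, which is the link to the hydrodynamic applications mentioned in the abstract.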
For the design of formwork and shop drawings, which generally consist of long, narrow rectangles, a design aid is presented in which the size of the rectangles is specified as usual, but their position is given by topological statements. In the program these statements form constraints, with a distinction made between contact constraints and alignment constraints. They position the new rectangle relative to one already placed. For example, the statement that the column stands above the footing and loads it centrically uniquely determines the position of the column, given the position of the footing and the dimensions of both rectangles. Formulating the layout by means of constraints has the advantage that the constraints remain valid when dimensions change. The relative-positioning input presented here is an extension of the ortho mode found in every construction CAD program.
Due to the increasing number of wind energy converters, the accurate assessment of the lifespan of their structural parts and of the entire converter system is becoming more and more important. Lifespan-oriented design, inspection, and remedial maintenance are challenging because of the complex dynamic behavior of these systems. Wind energy converters are subjected to stochastic turbulent wind loading, causing a corresponding stochastic structural response and vibrations associated with an extreme number of stress cycles (up to 10^9, according to the rotation of the blades). Currently, wind energy converters are designed for a service life of about 20 years. However, this estimate is more or less a rule of thumb, not backed by profound scientific analyses or accurate simulations. By contrast, modern structural health monitoring systems allow an improved identification of deterioration and thereby a drastically advanced lifespan assessment of wind energy converters. In particular, monitoring systems based on artificial intelligence techniques represent a promising approach towards cost-efficient and reliable real-time monitoring. Therefore, an innovative real-time structural health monitoring concept based on software agents is introduced in this contribution. Recently, this concept has also been turned into a real-world monitoring system, developed in a DFG joint research project at the authors' institute at the Ruhr-University Bochum. In this paper, primarily the agent-based development, implementation and application of the monitoring system are addressed, focusing on the real-time monitoring tasks in due detail.