Liquidity planning in construction companies is regarded as an essential steering, control, and information instrument for internal and external addressees, and it serves a decision-support function. Since the individual construction projects account for a substantial share of a company's total costs, they also have a considerable influence on the liquidity and solvency of the construction enterprise. Accordingly, it is common practice in the construction industry to prepare the liquidity plan first on a project basis and then to aggregate it at the company level. The aim of this contribution is to present the relationships between working calculation (Arbeitskalkulation), profit and loss accounting, and financial accounting in the form of a deterministic planning model at the project level. The emphasis lies on the understanding and significance of the links between the technically oriented construction sequence and its representation in accounting and finance. The processes of construction execution, that is, the completion of the bill-of-quantities items and their temporal representation in a construction schedule, must be transformed period by period into quantities of operational accounting (output, costs) and subsequently broken down in the financial statement (receipts, payments) by creditors and debtors.
We consider efficient numerical methods for the solution of partial differential equations with stochastic coefficients or right-hand side. The discretization is performed by the stochastic finite element method (SFEM). Separation of spatial and stochastic variables in the random input data is achieved via a Karhunen-Loève expansion or Wiener's polynomial chaos expansion. We discuss solution strategies for the Galerkin system that take advantage of the special structure of the system matrix. For stochastic coefficients linear in a set of independent random variables we employ Krylov subspace recycling techniques after having decoupled the large SFEM stiffness matrix.
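The separation of variables by a truncated Karhunen-Loève expansion can be sketched numerically. The snippet below is a minimal illustration under assumed data (a 1D domain with an exponential covariance kernel, neither taken from the paper): the covariance matrix is eigen-decomposed and a realization of the field is sampled from the leading modes.

```python
import numpy as np

# 1D spatial grid and an assumed exponential covariance kernel
n, corr_len = 200, 0.2
x = np.linspace(0.0, 1.0, n)
C = np.exp(-np.abs(x[:, None] - x[None, :]) / corr_len)

# Discrete KL expansion: eigen-decomposition of the covariance matrix
eigvals, eigvecs = np.linalg.eigh(C)
order = np.argsort(eigvals)[::-1]           # largest eigenvalues first
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

# Truncate after m terms: kappa(x) ~ mean + sum_k sqrt(l_k) phi_k(x) xi_k
m = 10
rng = np.random.default_rng(0)
xi = rng.standard_normal(m)                 # independent random variables
mean_field = 1.0
sample = mean_field + eigvecs[:, :m] @ (np.sqrt(eigvals[:m]) * xi)

# Fraction of the total variance captured by the truncated expansion
captured = eigvals[:m].sum() / eigvals.sum()
print(round(captured, 2))
```

The rapid eigenvalue decay is exactly what makes the truncated expansion, and hence the SFEM discretization, tractable.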
Available construction time-cost trade-off analysis models can be used to generate trade-offs between these two important objectives; however, their application is limited in large-scale construction projects due to their impractical computational requirements. This paper presents the development of a scalable, multi-objective genetic algorithm that provides the capability of simultaneously optimizing the construction time and cost of large-scale construction projects. The genetic algorithm was implemented in a distributed computing environment that utilizes a recent standard for parallel and distributed programming called the Message Passing Interface (MPI). The performance of the model is evaluated using a set of performance measures, and the results demonstrate the capability of the present model to significantly reduce the computational time required to optimize large-scale construction projects.
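At the core of any multi-objective time-cost optimization lies the Pareto dominance test between candidate schedules. The snippet below is a generic illustration of that test with invented (duration, cost) pairs; it is not the paper's genetic algorithm or its MPI implementation.

```python
# Minimal Pareto-front extraction for (time, cost) schedule alternatives;
# the candidate values are illustrative, not data from the paper.

def dominates(a, b):
    """Schedule a dominates b if it is no worse in both objectives
    and strictly better in at least one (both objectives minimized)."""
    return a[0] <= b[0] and a[1] <= b[1] and a != b

def pareto_front(candidates):
    return [c for c in candidates
            if not any(dominates(o, c) for o in candidates)]

# (duration in days, cost in thousands) of alternative schedules
schedules = [(120, 950), (100, 1100), (140, 900), (100, 1050), (90, 1300)]
front = pareto_front(schedules)
print(sorted(front))
```

A genetic algorithm ranks its population with exactly this kind of test; parallelization then distributes the (much more expensive) schedule evaluations across MPI processes.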
In many industries, companies lose visibility of the human and technical resources of their field service. On the one hand, field-service staff are often as free as kings; on the other hand, they do not take part in the daily communication of the central office and suffer from their lack of involvement in the decisions made there. The result is inefficiency, followed by reproaches in both directions. With radio systems, and later mobile phones, this gap began to close, but the available solutions are far from being productive.
We study the Weinstein equation for u on the upper half-space R^3_+. The Weinstein equation is connected to axially symmetric potentials. We compute solutions of the Weinstein equation depending on the hyperbolic distance and x2. These results imply explicit mean value properties. We also compute the fundamental solution. The main tools are the hyperbolic metric and its invariance properties.
HYPERMONOGENIC POLYNOMIALS
(2006)
It is well known that the power function is not monogenic. There are basically two ways to include the power function in the set of solutions: hypermonogenic functions or holomorphic Cliffordian functions. L. Pernas determined the dimension of the space of homogeneous holomorphic Cliffordian polynomials of degree m, but his approach did not include a basis. It is known that the hypermonogenic functions are included in the space of holomorphic Cliffordian functions. As our main result we show that a basis for the right module of homogeneous holomorphic Cliffordian polynomials of degree m can be constructed using hypermonogenic polynomials and their derivatives. To that end we first recall the function spaces of monogenic, hypermonogenic and holomorphic Cliffordian functions and give the results needed in the proof of our main theorem. We list some basic polynomials and their properties for the various function spaces. In particular, we consider recursive formulas, rules of differentiation and linear-independence properties of the polynomials.
Traffic simulation is a valuable tool for the design and evaluation of road networks. Over the years, the level of detail to which urban and freeway traffic can be simulated has increased steadily, shifting from a merely qualitative macroscopic perspective to a very detailed microscopic view, where the behavior of individual vehicles is emulated realistically. With the improvement of behavioral models, however, the computational complexity has also steadily increased, as more and more aspects of real-life traffic have to be considered by the simulation environment. Despite the constant increase in the computing power of modern personal computers, microscopic simulation remains computationally expensive, limiting the maximum network size that can be simulated on a single-processor computer in reasonable time. Parallelization can distribute the computing load from a single computer system to a cluster of several computing nodes. To this end, the existing simulation framework had to be adapted to allow for a distributed approach. As the simulation is ultimately targeted to be executed in real time, incorporating real traffic data, only a spatial partition of the simulation was considered, meaning the road network has to be partitioned into subnets of comparable complexity to ensure homogeneous load balancing. The partition process must also ensure that the division between subnets occurs only in regions where there is no strong interaction between the separated road segments (i.e., not in the direct vicinity of junctions). In this paper, we describe a new microscopic reasoning voting strategy and discuss to what extent the increasing computational costs of these more complex behaviors lend themselves to a parallelized approach. We show the parallel architecture employed, the communication between computing units using MPIJava, and the benefits and pitfalls of adapting a single-computer application for use on a multi-node computing cluster.
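The load-balancing requirement can be illustrated with a toy spatial partition: split a corridor of road segments into two subnets of near-equal computational weight while forbidding cuts in the vicinity of junctions. The segment weights and the junction flag below are invented for this sketch; real partitioners operate on general network graphs.

```python
# Illustrative spatial partition of a linear corridor of road segments into
# two subnets of comparable computational weight, avoiding cuts next to
# junctions. Weights and junction positions are invented.

segments = [("s1", 4), ("s2", 3), ("s3", 5), ("s4", 2), ("s5", 6), ("s6", 4)]
near_junction = {2}   # cutting between s3 and s4 is forbidden (junction there)

def best_cut(segs, forbidden):
    total = sum(w for _, w in segs)
    best, best_imbalance = None, float("inf")
    running = 0
    for i, (_, w) in enumerate(segs[:-1]):
        running += w
        if i in forbidden:
            continue                           # no cut in junction vicinity
        imbalance = abs(total - 2 * running)   # |left weight - right weight|
        if imbalance < best_imbalance:
            best, best_imbalance = i, imbalance
    return best, best_imbalance

cut, imbalance = best_cut(segments, near_junction)
left = [name for name, _ in segments[:cut + 1]]
right = [name for name, _ in segments[cut + 1:]]
print(left, right, imbalance)
```

Without the junction constraint the perfectly balanced cut would fall between s3 and s4; the constraint forces the next-best division, trading some balance for weak inter-subnet interaction.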
The use of process models in the analysis, optimization and simulation of processes has proven to be extremely beneficial in the instances where they could be applied appropriately. However, the Architecture/Engineering/Construction (AEC) industries present unique challenges that complicate the modeling of their processes. A simple engineering process model, based on the specification of Tasks, Datasets, Persons and Tools, and certain relations between them, has been developed, and its advantages over conventional techniques have been illustrated. Graph theory is used as the mathematical foundation, mapping Tasks, Datasets, Persons and Tools to vertices and the relations between them to edges, forming a directed graph. The acceptance of process modeling in the AEC industries depends not only on the results it can provide, but also on the ease with which these results can be attained. Specifying a complex AEC process model is a dynamic exercise that is characterized by many modifications over the process model's lifespan. This article looks at reducing specification complexity, reducing the probability of erroneous input and allowing consistent model modification. Furthermore, the problem of resource leveling is discussed. Engineering projects are often executed with limited resources, and determining the impact of such restrictions on the sequence of Tasks is important. Resource leveling concerns itself with these restrictions caused by limited resources. This article looks at using Task-shifting strategies to find a near-optimal sequence of Tasks that guarantees consistent Dataset evolution while resolving resource restrictions.
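The Task-shifting idea behind resource leveling can be sketched as a serial scheme: each Task is delayed to the earliest period in which its predecessors (and thus its input Datasets) are complete and the resource limit is respected. The task data and capacity below are invented for illustration and do not reproduce the article's model.

```python
# Sketch of Task shifting for resource leveling: tasks are delayed until
# both their predecessors are finished and the resource limit is met.
# Task durations, demands and the capacity are illustrative.

# task -> (duration, resource demand, predecessor tasks)
tasks = {
    "A": (2, 2, []),
    "B": (3, 2, ["A"]),
    "C": (2, 2, ["A"]),
    "D": (1, 1, ["B", "C"]),
}
CAPACITY = 3

finish, usage = {}, {}            # usage[t] = resources in use in period t

def fits(start, dur, demand):
    return all(usage.get(t, 0) + demand <= CAPACITY
               for t in range(start, start + dur))

for name in ["A", "B", "C", "D"]:           # a topological order
    dur, demand, preds = tasks[name]
    start = max((finish[p] for p in preds), default=0)
    while not fits(start, dur, demand):     # shift until resources suffice
        start += 1
    for t in range(start, start + dur):
        usage[t] = usage.get(t, 0) + demand
    finish[name] = start + dur

print(finish)
```

Here B and C could start simultaneously after A, but the capacity of 3 forces C to be shifted behind B, which is precisely the leveling effect described in the text.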
In this paper we consider three different methods for generating monogenic functions. The first one is related to Fueter's well-known approach to the generation of monogenic quaternion-valued functions by means of holomorphic functions, the second one is based on the solution of hypercomplex differential equations, and the third one is a direct series approach based on the use of special homogeneous polynomials. We illustrate the theory by generating three different exponential functions and discuss some of their properties. Partially supported by the R&D unit Matemática e Aplicações (UIMA) of the University of Aveiro, through the Portuguese Foundation for Science and Technology (FCT), co-financed by the European Community fund FEDER.
We establish the basis of a discrete function theory starting with a Fischer decomposition for difference Dirac operators. Discrete versions of homogeneous polynomials and of the Euler and Gamma operators are obtained. As a consequence we obtain a Fischer decomposition for the discrete Laplacian. For the sake of simplicity we consider in the first part only Dirac operators that contain only forward or only backward finite differences. Of course, these Dirac operators do not factorize the classic discrete Laplacian. Therefore, we will consider a different definition of a difference Dirac operator in the quaternionic case which does factorize the discrete Laplacian.
Recently there has been a surge of interest in PDEs involving fractional derivatives in different fields of engineering. In this extended abstract we present some of the results developed in [3]. We compute the fundamental solution for the three-parameter fractional Laplace operator Δ by transforming the eigenfunction equation into an integral equation and applying the method of separation of variables. The obtained solutions are expressed in terms of Mittag-Leffler functions. For more details we refer the interested reader to [3], where an operational approach based on two Laplace transforms is also presented.
SIMULATION AND MATHEMATICAL OPTIMIZATION OF THE HYDRATION OF CONCRETE FOR AVOIDING THERMAL CRACKS
(2010)
After the mixing of concrete, hardening starts by an exothermic chemical reaction known as hydration. As the reaction rate depends on the temperature, the time in the description of the hydration is replaced by the maturity, which is defined as an integral over a certain function of the temperature. The temperature distribution is governed by the heat equation with a right-hand side depending on the maturity and the temperature itself. We compare the performance of different higher-order time integration schemes with automatic time step control. The simulation of the heat distribution is important because the development of the mechanical properties is driven by the hydration. During this process it is possible that the tensile stresses exceed the tensile strength and cracks occur. The goal is to produce cheap concrete without cracks. Simple crack criteria use only temperature differences; more involved ones are based on thermal stresses. If the criterion predicts cracks, some changes in the input data are needed. This can be interpreted as optimization. The final goal will be to adapt model-based optimization (in contrast to simulation-based optimization) to the problem of the hydration of young concrete and the avoidance of cracks. The first step is the simulation of the hydration, on which we focus in this paper.
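The replacement of time by maturity can be illustrated with the common Arrhenius-type equivalent-age integral; the activation-energy ratio and the temperature history below are illustrative assumptions, not the paper's model or data.

```python
import math

# Equivalent-age (maturity) integral in the Arrhenius form: time is replaced
# by a temperature-weighted age. E_a/R and the temperature history are
# invented illustrative values.

EA_OVER_R = 4000.0        # activation energy / gas constant, in Kelvin
T_REF = 293.15            # reference temperature, 20 degrees Celsius

def rate_factor(T_kelvin):
    return math.exp(-EA_OVER_R * (1.0 / T_kelvin - 1.0 / T_REF))

def equivalent_age(temps_celsius, dt_hours):
    """Accumulate maturity over a sampled temperature history
    (rectangle rule, matching the integral definition)."""
    return sum(rate_factor(T + 273.15) * dt_hours for T in temps_celsius)

# 24 hourly samples of a warming, then cooling, concrete core
history = [20 + 15 * math.sin(math.pi * h / 24) for h in range(24)]
age = equivalent_age(history, dt_hours=1.0)
print(round(age, 1))
```

Because the core stays above the reference temperature, the equivalent age exceeds the 24 real hours, which is exactly why maturity, not wall-clock time, drives the hydration description.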
An introduction is given to Clifford Analysis over pseudo-Euclidean space of arbitrary signature, called for short Ultrahyperbolic Clifford Analysis (UCA). UCA is regarded as a function theory of Clifford-valued functions satisfying a first-order partial differential equation involving a vector-valued differential operator, called a Dirac operator. The formulation of UCA presented here pays special attention to its geometrical setting. This makes it possible to identify tensors which qualify as geometrically invariant Dirac operators and to take a position on the naturalness of contravariant and covariant versions of such a theory. In addition, a formal method is described to construct the general solution to the aforementioned equation in the context of covariant UCA.
The methods currently used for scheduling building processes have some major advantages as well as disadvantages. The main advantages are the arrangement of the tasks of a project in a clear, easily readable form and the calculation of valuable information such as critical paths. The main disadvantage, on the other hand, is the inflexibility of the model caused by the modeling paradigms. Small changes to the modeled information strongly influence the whole model and lead to the need to change many more details in the plan. In this article an approach is introduced that allows the creation of more flexible schedules. It aims at a more robust model that lowers the need to change more than a few pieces of information, while still being able to calculate the important propositions of the known models and leading to further valuable conclusions.
Buildings can be divided into various types and described by a huge number of parameters. Within the life cycle of a building, especially during the design and construction phases, many engineers with different points of view, proprietary applications and data formats are involved. The collaboration of all participating engineers is characterised by a high amount of communication. Due to these aspects, a homogeneous building model for all engineers is not feasible. The status quo in civil engineering is the segmentation of the complete model into partial models. Currently, the interdependencies of these partial models are not in the focus of available engineering solutions. This paper addresses the problem of coupling partial models in civil engineering. According to the state of the art, applications and partial models are formulated using the object-oriented method. Although this method directly solves basic communication problems such as subclass coupling, it was found that many relevant coupling problems remain to be solved. Therefore, it is necessary to analyse and classify the relevant coupling types in building modelling. Coupling in computer science refers to the relationship between modules and their mutual interaction and can be divided into different coupling types. The coupling types differ in the degree to which the coupled modules rely upon each other. This is exemplified by a general reference example from civil engineering. A uniform formulation of coupling patterns is described analogously to design patterns, which are a common methodology in software engineering. Design patterns are templates describing a general reusable solution to a commonly occurring problem. A template is independent of the programming language and the operating system. These coupling patterns are selected according to the specific problems of building modelling. A specific meta-model for coupling problems in civil engineering is introduced.
In our meta-model the coupling patterns are a semantic description of a specific coupling design.
LIFETIME-ORIENTED OPTIMIZATION OF BRIDGE TIE RODS EXPOSED TO VORTEX-INDUCED ACROSS-WIND VIBRATIONS
(2006)
In recent years, damage in the welded connection plates of vertical tie rods of several arched steel bridges has been reported. This damage is due to fatigue caused by wind-induced vibrations. In the present study, such phenomena are examined, and the corresponding lifetime of a reference bridge in Münster-Hiltrup, Germany, is estimated based on the actual shape of the connection plate. The results obtained are also compared to the expected lifetime of a connection plate whose geometry has been optimized separately. The structural optimization, focusing on the shape of the cut at the hanger ends, has been carried out using evolution strategies. The oscillation amplitudes have been computed by means of the Newmark-Wilson time-step method, using an appropriate load model that has been validated by on-site experiments on the selected reference bridge. The corresponding stress amplitudes are evaluated by multiplying the oscillation amplitudes by a stress concentration factor. This factor has been computed on the basis of a finite element model of the system "hanger-weld-connection plate", applying solid elements, according to the notch stress approach. The damage estimation takes into account the stochastics of the exciting wind process as well as the stochastics of the material parameters (fatigue strength) given in terms of Woehler curves. The shape optimization results in a substantial increase of the estimated hanger lifetime. The comparison of the lifetimes of the bulk plate and of the weld revealed that, in the optimized structure, the weld, the most sensitive part of the original structure, shows much more resistance against potential damage than the bulk material.
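The damage-estimation step (stress amplitudes obtained from oscillation amplitudes via a stress concentration factor, then accumulated on a Woehler curve) can be sketched with the Palmgren-Miner rule; all parameter values below are invented for illustration and are not the study's data.

```python
# Sketch of fatigue damage accumulation: amplitudes scaled by a stress
# concentration factor, cycles to failure from a Woehler (S-N) curve,
# damage summed with the Palmgren-Miner rule. All numbers illustrative.

# Woehler curve: N = N_D * (sigma_D / sigma_a)^m for sigma_a > sigma_D
SIGMA_D, N_D, M_SLOPE = 80.0, 2.0e6, 3.0   # MPa, cycles, slope
SCF = 2.5                                   # stress concentration factor

def cycles_to_failure(sigma_a):
    if sigma_a <= SIGMA_D:
        return float("inf")                 # below the fatigue limit
    return N_D * (SIGMA_D / sigma_a) ** M_SLOPE

def miner_damage(spectrum):
    """spectrum: list of (nominal stress amplitude in MPa, cycle count)."""
    return sum(n / cycles_to_failure(a * SCF) for a, n in spectrum)

spectrum = [(50.0, 1.0e5), (40.0, 5.0e5), (20.0, 1.0e7)]
damage = miner_damage(spectrum)
print(damage < 1.0)     # failure is predicted once damage reaches 1
```

In the study, the spectrum itself comes from the stochastic wind excitation, and the Woehler parameters are treated as random; the deterministic sum above is only the kernel of that estimate.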
The mathematical and technical foundations of optimization have been developed to a large extent. In the design of buildings, however, optimization is rarely applied because of the insufficient adaptation of this method to the needs of building design. The use of design optimization requires the consideration of all relevant objectives in an interactive and multidisciplinary process. Disciplines such as structural, light, and thermal engineering, architecture, and economics impose various objectives on the design. A good solution calls for a compromise between these often contradictory objectives. This presentation outlines a method for the application of Multidisciplinary Design Optimization (MDO) as a tool for the design of buildings. An optimization model is established considering the fact that in building design the non-numerical aspects are of greater importance than in other engineering disciplines. A component-based decomposition enables the designer to manage the non-numerical aspects in an interactive design optimization process. A façade example demonstrates how the different disciplines interact and how the components integrate the disciplines into one optimization model. In this grid-based façade example, the design switches between a discrete number of materials and construction types. For light and thermal engineering, architecture, and economics, analysis functions calculate the performance; utility functions serve as an important means of evaluation, since not every increase or decrease of a physical value improves the design. For experimental purposes, a genetic algorithm applied to the exemplary model demonstrates the use of optimization in this design case. A component-based representation first serves to manage non-numerical characteristics such as aesthetics. Furthermore, it complies with usual fabrication methods in building design and with object-oriented data handling in CAD.
Therefore, components provide an important basis for an interactive MDO process in building design.
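The interplay of discrete component choices and discipline-specific utility functions can be sketched as follows; the disciplines, weights, and utility values are invented for illustration and do not reproduce the presentation's model.

```python
# Sketch of component-based multidisciplinary evaluation: each facade grid
# component carries a discrete material choice; each discipline scores it
# with a utility function in [0, 1] (higher is better), and the design
# utility aggregates all components. All names and numbers are invented.

UTILITY = {
    "thermal": {"glass": 0.4, "brick": 0.9, "panel": 0.7},
    "light":   {"glass": 0.9, "brick": 0.1, "panel": 0.3},
    "economy": {"glass": 0.5, "brick": 0.8, "panel": 0.9},
}
WEIGHTS = {"thermal": 0.4, "light": 0.3, "economy": 0.3}

def component_utility(material):
    # weighted compromise between the contradictory discipline objectives
    return sum(w * UTILITY[d][material] for d, w in WEIGHTS.items())

def design_utility(grid):
    return sum(component_utility(m) for m in grid) / len(grid)

facade = ["glass", "glass", "brick", "panel"]
print(round(design_utility(facade), 3))
```

A genetic algorithm over such discrete component vectors is one straightforward way to search the design space, mirroring the experimental setup described above.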
The safe operation of important civil structures such as bridges can be estimated by using fracture analysis. Since analytical methods are not capable of solving many complicated engineering problems, numerical methods have been increasingly adopted. In this paper, a part of an isotropic material which contains a crack is considered as a partial model, and the quality of the proposed model is evaluated. EXtended IsoGeometric Analysis (XIGA) is a newly developed numerical approach [1, 2] which benefits from the advantages of its origins: the eXtended Finite Element Method (XFEM) and IsoGeometric Analysis (IGA). It is capable of simulating crack propagation problems without the need for remeshing and of capturing the singular field at the crack tip by using crack-tip enrichment functions. Also, an exact representation of the geometry is possible using only a few elements. XIGA has also been successfully applied to the fracture analysis of cracked orthotropic bodies [3] and to the simulation of curved cracks [4]. XIGA applies NURBS functions for both the geometry description and the solution field approximation. The drawback of NURBS functions is that local refinement cannot be defined, since they are based on tensor-product constructs, unless multiple patches are used, which also has some limitations. In this contribution, XIGA is further developed to make local refinement feasible by using T-spline basis functions. The adoption of a recovery-based error estimator in the proposed approach for evaluating the model quality and performing the adaptive processes is in progress. Finally, some numerical examples with available analytical solutions are investigated with the developed scheme.
Reducing energy consumption is one of the major challenges of the present day and will remain so for future generations. The emerging EU directives relating to energy (the EU EPBD and the EU Directive on Emissions Trading) now place demands on building owners to rate the energy performance of their buildings for efficient energy management. Moreover, European legislation (Directive 2006/32/EC) requires facility managers to reduce building energy consumption and operational costs. Sophisticated building services systems are currently available that integrate off-the-shelf building management components. However, this ad-hoc combination presents many difficulties to building owners in the management and upgrade of these systems. This paper addresses the need for integration concepts, holistic monitoring and analysis methodologies, life-cycle oriented decision support and sophisticated control strategies through the seamless integration of people, ICT devices and computational resources by introducing a newly developed integrated system architecture. The first concept was applied to a residential building and the results were used to improve the current building conditions.
New foundations for geometric algebra are proposed based upon the existing isomorphisms between geometric and matrix algebras. Each geometric algebra always has a faithful real matrix representation with a periodicity of 8. On the other hand, each matrix algebra is always embedded in a geometric algebra of convenient dimension. The geometric product is also isomorphic to the matrix product, and many vector transformations such as rotations, axial symmetries and Lorentz transformations can be written in a form isomorphic to a similarity transformation of matrices. We take up the idea that Dirac applied when developing the relativistic electron equation: he took a basis of matrices for the geometric algebra instead of a basis of geometric vectors. Of course, this way of understanding geometric algebra requires new definitions: the geometric vector space is defined as the algebraic subspace that generates the rest of the matrix algebra by addition and multiplication; isometries are simply defined as the similarity transformations of matrices as shown above; and finally the norm of any element of the geometric algebra is defined as the nth root of the determinant of its representative matrix of order n×n. The main idea of this proposal is an arithmetic point of view consisting of reversing the roles of matrix and geometric algebras, in the sense that geometric algebra is a way of accessing, working with and understanding the most fundamental conception of matrix algebra as the algebra of transformations of multilinear quantities.
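The claimed isomorphism can be made concrete for the plane geometric algebra Cl(2,0), which has a faithful representation by real 2×2 matrices; the sketch below checks the defining relations and the determinant-based norm for one standard (illustrative) choice of basis matrices.

```python
import numpy as np

# The isomorphism Cl(2,0) with Mat(2, R) made concrete: two symmetric
# matrices serve as the orthonormal geometric vectors e1, e2. This basis
# choice is a standard one, used here purely for illustration.

e1 = np.array([[1.0, 0.0], [0.0, -1.0]])
e2 = np.array([[0.0, 1.0], [1.0, 0.0]])
I = np.eye(2)

# Defining relations of the geometric product: e_i^2 = 1, e1 e2 = -e2 e1
assert np.allclose(e1 @ e1, I) and np.allclose(e2 @ e2, I)
assert np.allclose(e1 @ e2, -(e2 @ e1))

# A vector v = a*e1 + b*e2: its norm sqrt(a^2 + b^2) equals the square
# root of |det(v)|, matching the determinant-based norm definition above
a, b = 3.0, 4.0
v = a * e1 + b * e2
print(round(abs(np.linalg.det(v)) ** 0.5, 6))
```

The same computation with Lorentz signature reproduces the indefinite norms mentioned in the text, since the determinant is basis-independent under similarity transformations.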
The theory of regular quaternionic functions of a reduced quaternionic variable is a three-dimensional generalization of complex analysis. The Moisil-Theodorescu system (MTS) is a regularity condition for such functions depending on the radius vector r = ix + jy + kz, seen as a reduced quaternionic variable. The analogues of the main theorems of complex analysis are established for the MTS in quaternionic form: Cauchy's theorem, the Cauchy integral formula, Taylor and Laurent series, approximation theorems and properties of Cauchy-type integrals. The analogues of positive powers (inner spherical monogenics) are investigated: the set of recurrence formulas between the inner spherical monogenics and the explicit formulas are established. Some applications of the regular functions in elasticity theory and hydrodynamics are given.
In this paper we present the rudiments of a higher-dimensional analogue of the Szegö kernel method to compute 3D mappings from elementary domains onto the unit sphere. This is a formal construction which provides us with a good substitute for the classical conformal Riemann mapping. We give explicit numerical examples and discuss a comparison of the results with those obtained alternatively by the Bergman kernel method.
We propose a new approach to the numerical solution of quasi-static elastic-plastic problems based on the Moreau-Yosida theorem. After the time discretization, the problem is expressed as an energy minimization problem for the unknown displacement and plastic strain fields. The dependency of the minimization functional on the displacement is smooth, whereas the dependency on the plastic strain is non-smooth. Moreover, there exists an explicit formula for calculating the plastic strain from a given displacement field. This allows us to reformulate the original problem as a minimization problem in the displacement only. Using the Moreau-Yosida theorem from convex analysis, the minimization functional in the displacements turns out to be Fréchet-differentiable, although the hidden dependency on the plastic strain is non-differentiable. The second derivative exists everywhere apart from the elastic-plastic interface dividing the elastic and plastic zones of the continuum. This motivates the implementation of a Newton-like method, which converges super-linearly, as can be observed in our numerical experiments.
RESEARCH OF DEFORMATION OF MULTILAYERED PLATES ON UNDEFORMABLE BASIS BY UNFLEXURAL SPECIFIED MODEL
(2006)
The stress-strain state (SSS) of multilayered plates on an undeformable foundation is investigated. The computational scheme of the transversely loaded plate is formed by symmetrically mirroring the plate about its contact surface with the foundation. The plate of double thickness is then loaded bilaterally and symmetrically with respect to its median surface. This allows modeling only the unflexural deformation, which reduces the number of unknowns and the overall order of differentiation of the resolving system of equations. The developed refined continuum model takes into account transverse shear and transverse compression deformations in a high iterative approximation. Both rigid contact between the foundation and the plate and frictionless shear on the contact surface between plate and foundation are considered. Calculations confirm the efficiency of this approach, yielding solutions that are qualitatively and quantitatively close to three-dimensional solutions.
Using a quaternionic reformulation of the electrical impedance equation, we consider a two-dimensional separable-variables conductivity function and, applying two different techniques, we obtain a special class of Vekua equation whose general solution can be approximated by means of Taylor series in formal powers, for which it is possible to introduce an explicit Bers generating sequence.
Polymer modification of mortar and concrete is a widely used technique for improving their durability properties. Hitherto, the main application fields of such materials have been the repair and restoration of buildings. However, due to steadily increasing service-life requirements and its cost efficiency, polymer-modified concrete (PCC) is also used for construction purposes. Therefore, there is a demand for studying the mechanical properties of PCC and its essential differences compared to conventional concrete (CC). It is important to investigate whether all the assumed hypotheses and existing analytical formulations for CC are also valid for PCC. In the present study, analytical models available in the literature are evaluated. These models are used for estimating the mechanical properties of concrete. The property investigated in this study is the modulus of elasticity, which is estimated from the value of the compressive strength. An existing database was extended and adapted for polymer-modified concrete mixtures along with their experimentally measured mechanical properties. Based on the indexed data, a comparison between model predictions and experiments was conducted by calculating forecast errors.
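The evaluation procedure (predicting the modulus of elasticity from compressive strength, then scoring forecast errors against measurements) can be sketched with one well-known relation, the ACI 318-type E = 4700·sqrt(f_c) in MPa; the measured pairs below are invented and are not from the study's database.

```python
import math

# Illustration of the evaluation step: predict the modulus of elasticity
# from compressive strength with an ACI 318-type relation, then score
# forecast errors against (invented) measured pairs.

def predict_E(f_c_mpa):
    return 4700.0 * math.sqrt(f_c_mpa)     # E in MPa

# (compressive strength f_c, measured modulus E), both in MPa; invented
measured = [(30.0, 26500.0), (40.0, 29000.0), (50.0, 34000.0)]

errors = [(predict_E(fc) - E) / E for fc, E in measured]
mape = sum(abs(e) for e in errors) / len(errors) * 100.0
print(round(mape, 1))
```

Repeating such an error calculation per model over the extended PCC database is exactly the comparison the study describes; a model calibrated on CC may show a systematic bias on PCC mixtures.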
Building on the well-founded experience available for the welding of a wide variety of metals, a novel process for the CO2 laser beam welding of quartz glass is being investigated numerically at the Chair of Steel Construction of the Bauhaus-Universität Weimar. The commercial FE software SYSWELD® is used for this purpose. The required experiments are carried out in cooperation with the Institut für Fügetechnik und Werkstoffprüfung GmbH in Jena. The numerical analysis is used to determine suitable process parameters and to represent their effects on the transient thermal and mechanical processes that take place during welding. To verify the results obtained from the simulation, the computational model must be calibrated with data from test welds. In doing so, the material models used and the material parameters underlying the simulation must be validated. Various rheological models are available to represent the viscous material behavior of the glass; they combine the three basic mechanical elements, the Hookean spring, the Newtonian dashpot, and the St. Venant element. The ability to predict the thermal and mechanical processes within the glass during welding and after complete cooling makes it possible to influence the welding process specifically, through an optimization of the process parameters, in order to improve the economic efficiency of the welding process and to obtain a reliable welding result. Experiments that could otherwise be carried out only at great experimental expense can also be simulated, in order to predict whether it is worthwhile to run the experiment in practice. This reduces the experimental effort and thus shortens the development period for the intended process.
Solid behavior as well as liquid behavior characterizes the flow of granular material in silos. The presented model is based on an appropriate interaction of a displacement field and a velocity field. The constitutive equations and the applied algorithm are developed from the exact solution for a standard case. The standard case evolves from a very tall vertical plane strain silo containing material that flows at a constant speed. No horizontal displacements and velocities take place. No changes regarding the field values arise in the vertical direction and in time. Tension is not allowed at any point. Coulomb friction represents the effects of the vertical walls. The interaction between the flowing material and the walls is covered by a forced boundary condition resulting in an additional matrix for the solid component as well as for the liquid component. The resulting integral equations are designed to be solved directly. Three coefficients describe the properties of the granular material. They govern elastic solid behavior in combination with viscous liquid behavior.
Within the scheduling of construction projects, different, partly conflicting objectives have to be considered. The specification of an efficient construction schedule is a challenging task, which leads to an NP-hard multi-criteria optimization problem. In the past decades, so-called metaheuristics have been developed for scheduling problems to find near-optimal solutions in reasonable time. This paper presents a Simulated Annealing concept to determine near-optimal construction schedules. Simulated Annealing is a well-known metaheuristic optimization approach for solving complex combinatorial problems. To handle several optimization objectives, the Pareto optimization concept is applied. Thus, the optimization result is a set of Pareto-optimal schedules, which can be analyzed for selecting exactly one practicable and reasonable schedule. A flexible constraint-based simulation approach is used to generate possible neighboring solutions very quickly during the optimization process. The essential aspects of the developed Pareto Simulated Annealing concept are presented in detail.
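A minimal sketch of the Pareto Simulated Annealing idea, using a toy two-objective permutation scheduling problem; the durations and the scalarized acceptance rule are illustrative assumptions, not the constraint-based simulation of the paper:

```python
import math, random

dur_a = [3, 1, 4, 1, 5, 9, 2, 6]   # illustrative durations, resource plan A
dur_b = [6, 2, 9, 5, 1, 4, 1, 3]   # conflicting durations, resource plan B

def toy_objectives(perm):
    """Two conflicting objectives to minimize: total flow time of the
    task permutation under each of the two duration sets."""
    flows = []
    for dur in (dur_a, dur_b):
        t = flow = 0
        for task in perm:
            t += dur[task]
            flow += t
        flows.append(flow)
    return tuple(flows)

def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def update_archive(archive, sol, obj):
    """Keep the archive as a set of mutually non-dominated solutions."""
    if any(dominates(o, obj) for _, o in archive):
        return archive
    archive = [(s, o) for s, o in archive if not dominates(obj, o)]
    archive.append((sol, obj))
    return archive

def pareto_sa(objectives, n_tasks=8, steps=2000, t0=50.0, cooling=0.995):
    """Simulated Annealing over task permutations with a Pareto archive."""
    rng = random.Random(0)
    cur = list(range(n_tasks))
    rng.shuffle(cur)
    cur_obj = objectives(cur)
    archive = [(cur[:], cur_obj)]
    temp = t0
    for _ in range(steps):
        nxt = cur[:]                           # neighbor: swap two tasks
        i, j = rng.sample(range(n_tasks), 2)
        nxt[i], nxt[j] = nxt[j], nxt[i]
        nxt_obj = objectives(nxt)
        delta = sum(nxt_obj) - sum(cur_obj)    # scalarized acceptance test
        if delta <= 0 or rng.random() < math.exp(-delta / temp):
            cur, cur_obj = nxt, nxt_obj
            archive = update_archive(archive, cur[:], cur_obj)
        temp *= cooling
    return archive
```

`pareto_sa(toy_objectives)` returns a set of mutually non-dominated schedules from which one practicable schedule can then be chosen.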
Adopting the European laws concerning environmental protection will require sustained efforts by the authorities and communities of Romania; implementing modern solutions will become a fast and effective option for improving the functioning systems, in order to prevent disasters. As part of the urban infrastructure, the drainage networks for pluvial and residual waters are included in the plan for promoting systems which protect environmental quality, with the purpose of integrated and adaptive management. The paper presents a distributed control system for the sewer network of the city of Iasi. The unsatisfactory technical state of the existing sewer system is described, focusing on objectives related to the implementation of the control system. The proposed distributed control system for the Iasi drainage network is based on the implementation of hierarchic control theory for diagnosis, sewer planning and management. Two control levels are proposed: coordination and local execution. The configuration of the distributed control system, including data acquisition and conversion equipment, interface characteristics, local data bus, data communication network and station configuration, is described in detail. The project is intended as a useful instrument for the local authorities in preventing and reducing the impact of future natural disasters on urban areas by means of modern technologies.
In this paper, three different formulations of a Bernoulli-type free boundary problem are discussed. By analyzing the shape Hessian in the case of matching data, well-posed formulations are distinguished from ill-posed ones. A nonlinear Ritz-Galerkin method is applied for discretizing the shape optimization problem. In the well-posed case, existence and convergence of the approximate shapes are proven. In combination with a fast boundary element method, efficient first- and second-order shape optimization algorithms are obtained.
In construction engineering, a schedule’s input data, which is usually not exactly known in the planning phase, is considered deterministic when generating the schedule. As a result, construction schedules become unreliable and deadlines are often not met. While the optimization of construction schedules with respect to costs and makespan has been a matter of research in the past decades, the optimization of the robustness of construction schedules has received little attention. In this paper, the effects of uncertainties inherent to the input data of construction schedules are discussed. Possibilities are investigated to improve the reliability of construction schedules by considering alternative processes for certain tasks and by identifying the combination of processes generating the most robust schedule with respect to the makespan of a construction project.
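The selection of the most robust process combination can be sketched by a small Monte Carlo experiment; all tasks, durations and the triangular distributions below are illustrative assumptions, not data from the paper:

```python
import itertools, random, statistics

def simulate_makespan(processes, n_runs=2000, seed=0):
    """Monte Carlo estimate of mean and spread of the makespan of a serial
    task chain; each task duration ~ triangular(lo, mode, hi)."""
    rng = random.Random(seed)
    samples = [sum(rng.triangular(lo, hi, mode) for lo, mode, hi in processes)
               for _ in range(n_runs)]
    return statistics.mean(samples), statistics.stdev(samples)

# For each task: alternative processes as (lo, mode, hi) duration triples.
alternatives = [
    [(4, 5, 6), (3, 5, 9)],    # task 1: precise vs. cheap but variable
    [(2, 3, 4), (1, 3, 7)],    # task 2
]

# The most robust combination minimizes the spread of the makespan.
robust = min(itertools.product(*alternatives),
             key=lambda combo: simulate_makespan(combo)[1])
```

Here robustness is measured by the standard deviation of the makespan; a quantile (e.g. P90) would be an equally plausible criterion.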
To keep the coordination and execution of planning tasks in construction projects manageable, the planning process is increasingly described in formalized models, so-called process models. Product model research, for its part, is devoted to storing planning data in the computer in the form of object-oriented models. Its main focus is on maintaining consistency and on modelling dependencies within this planning material. A direct link to the actors of the planning process is not established. A formally described planning process cannot yet be realized in practice in such a way that access to individual objects of the planning process is guaranteed. Existing planning-support and workflow-management systems still abstract and organize the planning material at the file level. This article describes a method for suitably connecting formalized process models in construction planning with the individual objects encoded in the model-oriented object sets. The membership of particular objects in plans and documents (for the purpose of data exchange) is then no longer determined by their physical assignment to files. A formal description language is presented which allows the corresponding subsets to be formed from the entirety of the planning objects. In the current forms of data exchange, subsets are extracted from the object models of the planning process and physically transported between the planners. The new description language, in contrast, makes it possible to exchange the formation rule for object subsets between the planners instead of the subsets themselves. The concrete objects are then accessed directly in a model-based way.
DECENTRALIZED APPROACHES TO ADAPTIVE TRAFFIC CONTROL AND AN EXTENDED LEVEL OF SERVICE CONCEPT
(2006)
Traffic systems are highly complex multi-component systems suffering from instabilities and non-linear dynamics, including chaos. This is caused by the non-linearity of interactions, delays, and fluctuations, which can trigger phenomena such as stop-and-go waves, noise-induced breakdowns, or slower-is-faster effects. Emerging information and communication technologies (ICT) promise new solutions, leading from classical, centralized control to decentralized approaches in the sense of collective (swarm) intelligence and ad hoc networks. An interesting application field is adaptive, self-organized traffic control in urban road networks. We present control principles that allow one to reach a self-organized synchronization of traffic lights. Furthermore, vehicles will become automated centers for traffic state detection, data management, and communication when forming ad hoc networks through inter-vehicle communication (IVC). We discuss the mechanisms and the efficiency of message propagation on freeways by short-range communication. Our main focus is on future adaptive cruise control (ACC) systems, which will not only increase the comfort and safety of car passengers, but also enhance the stability of traffic flows and the capacity of the road (“traffic assistance”). We present an automated driving strategy that adapts the operation mode of an ACC system to the autonomously detected, local traffic situation. The impact on the traffic dynamics is investigated by means of a multi-lane microscopic traffic simulation. The simulation scenarios illustrate the efficiency of the proposed driving strategy. An ACC equipment level of only 10% already improves the traffic flow quality and drastically reduces travel times for the drivers by delaying or preventing a breakdown of the traffic flow. For the evaluation of the resulting traffic quality, we have recently developed an extended level of service concept (ELOS).
We demonstrate our concept on the basis of travel times as the most important variable for a user-oriented quality of service.
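The abstract does not state which car-following model underlies the microscopic simulation; a common choice for such studies is the Intelligent Driver Model (IDM), whose acceleration law can be sketched as follows (parameter values are illustrative):

```python
import math

def idm_acceleration(v, dv, s, v0=30.0, T=1.5, a=1.0, b=2.0, s0=2.0):
    """Intelligent Driver Model: acceleration of a follower with speed v
    [m/s], approach rate dv = v - v_leader [m/s] and gap s [m].
    v0: desired speed, T: time headway, a: max acceleration,
    b: comfortable deceleration, s0: minimum gap."""
    s_star = s0 + v * T + v * dv / (2.0 * math.sqrt(a * b))
    return a * (1.0 - (v / v0) ** 4 - (s_star / s) ** 2)
```

On a free road the car accelerates towards v0 and brakes when the gap falls below the desired gap s_star; an adaptive driving strategy as described above would switch the parameter set (e.g. T, a, b) depending on the detected traffic state.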
The concrete is modeled as a material with damage and plasticity, where the viscoplastic and viscoelastic behaviour depends on the rate of the total strains. Due to the damage behaviour, the compliance tensor develops different properties in tension and compression. Various yield surfaces, flow rules and damage rules have been tested with respect to their usability in a concrete model. A three-dimensional yield surface was developed by the author from a failure surface based on the Willam-Warnke five-parameter model. Only one general uniaxial stress-strain relation is used for the numerical control of the yield surface. From that curve, all parameters required for different concrete strengths and different strain rates can be derived by affine transformations. For the flow rule, a non-associated inelastic potential is used in the compression zone and a Rankine potential in the tension zone. Owing to the time-dependent formulation, the symmetry of the system equations is maintained in spite of the use of non-associated potentials for the derivation of the inelastic strains. For quasi-static computations, a simple viscoplastic law based on Perzyna's approach is used. The principle of equal dissipation power in the uniaxial and the triaxial stress state is applied; it is modified by a factor that depends on the current stress ratio and, in comparison with the Kupfer experiments, yields more realistic strains. The concrete model is implemented in a mixed-hybrid finite element. Examples at the structural level are presented to verify the concrete model.
In recent decades, many scientists have intensively studied the change of materials during a process and its mathematical description. These extensive analyses were supported by advances in computer science. A mathematical description of the phase transformation is a precondition for a realistic FE simulation of the microstructural state. Based on the microstructural state, the temperature and stress fields can be simulated even in complex structures. In recent years, a large number of mathematical models have been developed to describe the transformation between different phases. For the development of transformation-kinetics models, it is practical to distinguish between isothermal and non-isothermal processes according to the thermal conditions. Some models describing the transformation under non-isothermal conditions are extensions of models for isothermal processes. Some of the parameters of the describing equations can be derived from the time-temperature-transformation (TTT) diagrams available in the literature. Furthermore, the two modes of transformation, diffusion-controlled and diffusionless, are covered by different models. In a realistic FE analysis, the material-specific characteristics during the transformation can be simulated for each individual phase. New materials can also be simulated after a modification of the parameters in the describing equations for the phase transformation. The effects on the temperature and stress fields are a substantial reason for investigating the phase transformation during welding and TIG-dressing processes.
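A classical model for isothermal, diffusion-controlled transformation kinetics is the Johnson-Mehl-Avrami-Kolmogorov (JMAK) equation; the sketch below also shows how its parameters can be recovered from two points of a TTT diagram, as mentioned in the abstract (the numbers in the test are illustrative):

```python
import math

def jmak_fraction(k, n, t):
    """JMAK fraction transformed after time t under isothermal conditions:
    X(t) = 1 - exp(-k * t**n)."""
    return 1.0 - math.exp(-k * t ** n)

def jmak_from_ttt(t1, x1, t2, x2):
    """Recover (k, n) from two points of a TTT diagram at one temperature,
    using the linearization ln(-ln(1 - X)) = ln k + n * ln t."""
    y1 = math.log(-math.log(1.0 - x1))
    y2 = math.log(-math.log(1.0 - x2))
    n = (y2 - y1) / (math.log(t2) - math.log(t1))
    k = math.exp(y1 - n * math.log(t1))
    return k, n
```

Diffusionless (e.g. martensitic) transformations require a different law, such as the Koistinen-Marburger equation, which depends on undercooling rather than time.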
We briefly review and use the recent comprehensive research on the manifolds of square roots of −1 in real Clifford geometric algebras Cl(p,q) in order to construct the Clifford Fourier transform. Basically in the kernel of the complex Fourier transform the complex imaginary unit j is replaced by a square root of −1 in Cl(p,q). The Clifford Fourier transform (CFT) thus obtained generalizes previously known and applied CFTs, which replaced the complex imaginary unit j only by blades (usually pseudoscalars) squaring to −1. A major advantage of real Clifford algebra CFTs is their completely real geometric interpretation. We study (left and right) linearity of the CFT for constant multivector coefficients in Cl(p,q), translation (x-shift) and modulation (w-shift) properties, and signal dilations. We show an inversion theorem. We establish the CFT of vector differentials, partial derivatives, vector derivatives and spatial moments of the signal. We also derive Plancherel and Parseval identities as well as a general convolution theorem.
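The role of a square root of −1 can be illustrated with the standard 2x2 real matrix representation of Cl(2,0), where the unit bivector e12 squares to −1 and can therefore take the place of the complex unit j in the Fourier kernel (a textbook illustration, not taken from the paper):

```python
import numpy as np

# A faithful 2x2 real matrix representation of the Clifford algebra Cl(2,0):
e1 = np.array([[1.0, 0.0], [0.0, -1.0]])
e2 = np.array([[0.0, 1.0], [1.0, 0.0]])
e12 = e1 @ e2          # unit bivector (pseudoscalar of Cl(2,0))

# e1^2 = e2^2 = +1 (signature (2,0)), while e12^2 = -1, so the kernel
# exp(e12 * theta) = cos(theta) * I + sin(theta) * e12 behaves like
# the complex exponential exp(j * theta).
theta = 0.7
kernel = np.cos(theta) * np.eye(2) + np.sin(theta) * e12
```

The kernel is an orthogonal (rotation) matrix, mirroring the fact that exp(jθ) lies on the unit circle.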
Summer overheating in buildings is a common problem, especially in office buildings with large glazed facades, high internal loads and low thermal mass. Phase change materials (PCM) that undergo a phase transition in the temperature range of thermal comfort can add thermal mass without increasing the structural load of the building. The investigated PCM were micro-encapsulated and mixed into gypsum plaster. The experiments showed a reduction of the indoor temperature of up to 4 K when using a 3 cm layer of PCM plaster with micro-encapsulated paraffin. The measurement results were used to validate a numerical model that is based on a temperature-dependent function for the heat capacity. Thermal building simulation showed that a 3 cm layer of PCM plaster can help to fulfil the German regulations concerning the heat protection of buildings in summer for most office rooms.
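The temperature-dependent heat capacity mentioned above is often modelled as a base capacity plus a Gaussian enthalpy peak over the melting range; the sketch below uses hypothetical values for the latent heat and the melting temperature, not the measured data of the paper:

```python
import math

def effective_heat_capacity(T, c_base=1.0, h_pcm=110.0, T_melt=26.0, dT=1.5):
    """Effective heat capacity [kJ/(kg K)] of a PCM plaster: a base
    capacity plus a Gaussian enthalpy peak of total latent heat
    h_pcm [kJ/kg] centred on the melting temperature T_melt [degC]."""
    peak = h_pcm / (dT * math.sqrt(2.0 * math.pi))
    return c_base + peak * math.exp(-0.5 * ((T - T_melt) / dT) ** 2)
```

Integrating c(T) − c_base over the melting range recovers the latent heat, which is how such a model can be checked against calorimetric (DSC) measurements.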
It is well known that the solution of the fundamental equations of linear elasticity for a homogeneous isotropic material in the plane stress and plane strain cases can be equivalently reduced to the solution of a biharmonic equation. The discrete version of the theorem of Goursat is used to describe the solution of the discrete biharmonic equation with the help of two discrete holomorphic functions. In order to obtain a Taylor expansion of discrete holomorphic functions, we introduce a basis of discrete polynomials which fulfill the so-called Appell property with respect to the discrete adjoint Cauchy-Riemann operator. All these steps are very important in the field of fracture mechanics, where stress and displacement fields in the neighborhood of singularities caused by cracks and notches have to be calculated with high accuracy. Using the sum representation of holomorphic functions, it seems possible to reproduce the order of the singularity and to determine important mechanical characteristics.
This paper presents a specific modeling technique that is focused on preparing planning processes in civil engineering. Planning processes in civil engineering are characterized by some peculiarities, so the sequence of planning tasks needs to be determined for each planning project. Neither the use of optimized partial processes nor the use of less detailed, optimized processes guarantees an optimal overall planning process. The modeling technique considers these peculiarities. In a first step, it focuses on the logic of the planning process. Algorithms based on graph theory determine that logic. This approach ensures consistency and logical correctness of the description of a planning process from the very beginning of its preparation phase. Sets of data, the products of engineers such as technical drawings, technical models, reports, or specifications, form the core of the presented modeling technique. The production of these sets of data requires time and money. This is expressed by a specific weighting of each set of data in the presented modeling technique. The introduction of these weights allows an efficient progress measurement and controlling of a planning project. For this purpose, a link between the modeling technique used in the preparation phase and the execution phase is necessary so that target and actual values are available for controlling purposes. The present paper covers the description of this link. An example illustrates the use of the modeling technique for planning processes in civil engineering projects.
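A graph-theoretic determination of a consistent task order, as referred to above, can be sketched with Kahn's algorithm for topological sorting; the task names are hypothetical, and the paper's algorithms additionally handle weights and progress measurement:

```python
from collections import deque

def planning_order(tasks, deps):
    """Kahn's algorithm: a consistent execution order for planning tasks,
    or None if the dependency graph contains a cycle."""
    indeg = {t: 0 for t in tasks}
    successors = {t: [] for t in tasks}
    for pre, post in deps:
        successors[pre].append(post)
        indeg[post] += 1
    queue = deque(t for t in tasks if indeg[t] == 0)
    order = []
    while queue:
        t = queue.popleft()
        order.append(t)
        for nxt in successors[t]:
            indeg[nxt] -= 1
            if indeg[nxt] == 0:
                queue.append(nxt)
    return order if len(order) == len(tasks) else None

# Hypothetical planning tasks and dependencies:
tasks = ["site survey", "draft design", "structural analysis", "drawings"]
deps = [("site survey", "draft design"),
        ("draft design", "structural analysis"),
        ("draft design", "drawings")]
```

Returning None on a cycle is exactly the kind of logical-correctness check that can be performed at the beginning of the preparation phase.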
Digital models of buildings are widely used in civil engineering. In these models, geometric information is used as the leading information. Engineers are used to having geometric information at hand; for instance, it is state of the art to specify a point by its three coordinates. However, the traditional approaches have disadvantages. Geometric information is over-determined: more geometric information is specified and stored than needed. In addition, engineers already deal with topological information; a denotation of objects in buildings is of topological nature. It has to be examined whether approaches in which topological information takes the leading role would be more efficient in civil engineering. This paper presents such an approach. Topological information is modelled independently of geometric information and is used for denoting the objects of a building. Geometric information is associated to topological information so that geometric information “weights” a topology.
The concept presented in this paper has already been used in surveying existing buildings. Experience with this concept showed that the amount of geometric information required for a complete specification of a building could be reduced by a factor of up to 100. Further research will show how this concept can be used in planning processes.
Planning and construction processes are characterized by the peculiarity that they need to be designed individually for each project. It is necessary to set up an individual schedule for each project. As a basis for a new project, schedules from already finished projects are used, but adaptations are always necessary. In practice, scheduling tools only document a process. Schedules cover a set of activities, their durations and a set of interdependencies between activities. The design of a process is up to the user. It is not necessary to specify each interdependency, and completeness and correctness need to be checked manually. No methodologies are available to guarantee properties such as correctness or completeness. The considerations presented in this paper are based on an approach in which a planning and a construction process, including the interdependencies between planning and construction activities, are regarded as a result. Selected information needs to be specified by a user, and a proposal for an order of planning and construction activities is computed. As a consequence, process properties such as correctness and completeness can be guaranteed with respect to the user input. Especially in Germany, clients are allowed to modify their requirements at any time. This leads to modifications in the planning and construction processes. This paper covers a mathematical formulation of this problem based on set theory. A complex structure is set up covering objects and relations, and operations are defined that guarantee consistency in the underlying, versioned process description. The presented considerations build on previous work and can be regarded as the next step in a series describing how a suitable concept for handling planning and construction processes in civil engineering can be formed.
Advanced finite elements are proposed for the mechanical analysis of heterogeneous materials. The approximation quality of these finite elements can be controlled by a variable order of B-spline shape functions. An element-based formulation is developed such that the finite element problem can be solved iteratively without storing a global stiffness matrix. This memory saving allows for an essential increase in problem size. The heterogeneous material is modelled by projection onto a uniform, orthogonal grid of elements. Conventional, strictly grid-based finite element models show severe oscillating defects in the stress solutions at material interfaces. This problem is cured by the extension to multiphase finite elements. This concept makes it possible to define a heterogeneous material distribution within the finite element. This is achieved by a variable number of integration points, to each of which individual material properties can be assigned. Based on an interpolation of material properties at nodes and a further smooth interpolation within the finite elements, a continuous material function is established. With both a continuous B-spline shape function and a continuous material function, the stress solution is also continuous in the domain. The inaccuracy implied by the continuous material field is far less severe than the prior oscillating behaviour of the stresses. One- and two-dimensional numerical examples are presented.
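The variable-order B-spline shape functions can be evaluated with the Cox-de Boor recursion; the sketch below is a generic textbook implementation, not the element formulation of the paper:

```python
def bspline_basis(i, p, u, knots):
    """Cox-de Boor recursion: value of the i-th B-spline basis function of
    degree p at parameter u on the given knot vector."""
    if p == 0:
        return 1.0 if knots[i] <= u < knots[i + 1] else 0.0
    left = right = 0.0
    if knots[i + p] > knots[i]:          # skip zero-width spans
        left = ((u - knots[i]) / (knots[i + p] - knots[i])
                * bspline_basis(i, p - 1, u, knots))
    if knots[i + p + 1] > knots[i + 1]:
        right = ((knots[i + p + 1] - u) / (knots[i + p + 1] - knots[i + 1])
                 * bspline_basis(i + 1, p - 1, u, knots))
    return left + right
```

On an open knot vector the basis functions are non-negative and sum to one inside the domain, which is what carries over to the continuity of the interpolated material and stress fields.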
The present paper is part of a comprehensive approach to grid-based modelling. This approach includes geometrical modelling by pixel or voxel models, advanced multiphase B-spline finite elements of variable order, and fast iterative solver methods based on the multigrid method. So far, we have only presented these grid-based methods in connection with the linear elastic analysis of heterogeneous materials. Damage simulation demands further considerations. The direct stress solution of standard bilinear finite elements is severely defective, especially along material interfaces. Besides achieving objective constitutive modelling, various nonlocal formulations are applied to improve the stress solution. Such a corrective data processing can refer either to input data in terms of Young's modulus or to the attained finite element stress solution, as well as to a combination of both. A damage-controlled sequentially linear analysis is applied in connection with an isotropic damage law. Essentially, through a high resolution of the heterogeneous solid, local isotropic damage on the material subscale makes it possible to simulate complex damage topologies such as cracks. Therefore, the anisotropic degradation of a material sample can be simulated. Based on an effectively secant global stiffness, the analysis is numerically stable. The iteration step size is controlled for an adequate simulation of the damage path. This requires many steps, but in the iterative solution process each new step starts with the solution of the prior step, so the method remains quite effective. The present paper provides an introduction to the proposed concept for a stable simulation of damage in heterogeneous solids.
A fast solver method called the multigrid preconditioned conjugate gradient method is proposed for the mechanical analysis of heterogeneous materials on the mesoscale. Even small samples of a heterogeneous material such as concrete show a complex geometry of different phases. These materials can be modelled by projection onto a uniform, orthogonal grid of elements. One major problem is that the possible resolution of the concrete specimen is generally restricted due to (a) computation times and, even more critically, (b) memory demand. Iterative solvers can be based on a local element-based formulation, while orthogonal grids consist of geometrically identical elements. The element-based formulation is short and transparent, and therefore efficient in implementation. A variation of the material properties in elements or integration points is possible. The multigrid method is a fast iterative solver method where, ideally, the computational effort only increases linearly with problem size. This is an optimal property which is almost reached in the implementation presented here; in fact, no other method is known which scales better than linearly. Therefore, the multigrid method gains in importance the larger the problem becomes. However, for heterogeneous models with very large ratios of Young's moduli, the multigrid method slows down considerably by a constant factor. Such large ratios occur in certain heterogeneous solids, as well as in the damage analysis of solids. As a solution to this problem, the multigrid preconditioned conjugate gradient method is proposed. A benchmark highlights the multigrid preconditioned conjugate gradient method as the method of choice for very large ratios of Young's modulus. A proposed modified multigrid cycle shows good results, both as a stand-alone solver and as a preconditioner.
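The structure of the proposed solver can be sketched as a generic preconditioned conjugate gradient loop; here a simple Jacobi preconditioner stands in for the multigrid V-cycle, and the small test matrix is illustrative:

```python
import numpy as np

def pcg(A, b, precond, tol=1e-10, max_iter=200):
    """Preconditioned conjugate gradients for an SPD matrix A;
    precond(r) should approximate A^{-1} r (a multigrid V-cycle in the
    paper, a Jacobi stand-in below)."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = precond(r)
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = precond(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 5.0]])              # small SPD test matrix
b = np.array([1.0, 2.0, 3.0])
x = pcg(A, b, lambda r: r / np.diag(A))      # Jacobi preconditioner
```

The better precond(r) approximates the action of A^{-1}, the fewer iterations are needed, which is why a multigrid cycle is attractive as a preconditioner for large stiffness ratios.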
The present article proposes an alternative way to compute the torsional stiffness based on three-dimensional continuum mechanics instead of applying a specific theory of torsion. A thin, representative beam slice is discretized by solid finite elements. Adequate boundary and coupling conditions are integrated into the numerical model to obtain a proper answer on the torsion behaviour, that is, on the shear center, the shear stress and the torsional stiffness. This finite element approach only includes general assumptions of beam torsion which are independent of the cross-section geometry: no in-plane deformation, constant torsion and free warping. Thus it is possible to achieve numerical solutions of high accuracy for arbitrary cross-sections. Due to the direct link to three-dimensional continuum mechanics, it is possible to extend the range of torsion analysis to sections composed of different materials or even to heterogeneous beams at a high scale of resolution. A brief study validates the implementation, and results are compared to analytical solutions.
Performing parameter identification prior to numerical simulation is an essential task in geotechnical engineering. However, it has to be kept in mind that the accuracy of the obtained parameters is closely related to the chosen experimental setup, such as the number of sensors as well as their locations. A well-considered positioning of sensors can increase the quality of the measurement and reduce the number of monitoring points. This paper illustrates this concept by means of a loading device that is used to identify the stiffness and permeability of soft clays. With an initial setup of the measurement devices, the pore water pressure and the vertical displacements are recorded and used to identify the aforementioned parameters. Starting from these identified parameters, the optimal measurement setup is investigated with a method based on global sensitivity analysis. This method yields an optimal sensor location assuming three sensors for each measured quantity, and the results are discussed.
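A common variance-based global sensitivity measure is the first-order Sobol index; the pick-freeze Monte Carlo sketch below is a generic illustration with a toy model, not the method or the geotechnical model of the paper:

```python
import random, statistics

def sobol_first_order(model, n_params, n=20000, seed=1):
    """Pick-freeze Monte Carlo estimate of first-order Sobol indices for a
    model with independent uniform(0, 1) inputs."""
    rng = random.Random(seed)
    A = [[rng.random() for _ in range(n_params)] for _ in range(n)]
    B = [[rng.random() for _ in range(n_params)] for _ in range(n)]
    yA = [model(x) for x in A]
    mean, var = statistics.fmean(yA), statistics.pvariance(yA)
    indices = []
    for i in range(n_params):
        # freeze coordinate i from sample A, resample the rest from B
        AB = [b[:i] + [a[i]] + b[i + 1:] for a, b in zip(A, B)]
        yAB = [model(x) for x in AB]
        cov = (statistics.fmean(ya * yab for ya, yab in zip(yA, yAB))
               - mean * statistics.fmean(yAB))
        indices.append(cov / var)
    return indices
```

Sensor locations whose measured quantity has a low sensitivity index contribute little to the identification and are candidates for removal.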
Ever since data processing, in all its complexity, turned to the topic of Computer Integrated Manufacturing, production planning and control has been among the areas in which computer support appeared most urgent. Later, comprehensive business solutions emerged, which (rather imprecisely to this day) are called Enterprise Resource Planning (ERP) systems and whose logistics modules also cover production planning functions. All known MRP, PPS and ERP systems are based on successive planning. Advanced Planning and Scheduling (APS) systems have attracted increasing interest since about 1995. Besides demand planning, production planning and scheduling, distribution planning, transportation planning and supply chain planning, APS systems are expected to provide solutions for the number and locations of production sites and distribution warehouses, the assignment to production sites, the capacity determination for workers and equipment per site, the inventory per part and warehouse, the determination of the required means of transport and the frequency of their use, the assignment of warehouses to production sites and of markets to warehouses, and more. In other words, APS systems complement ERP solutions, use the data already available in the ERP system, and require novel algorithms and (meta-)heuristics. The talk presents and discusses models and real-time algorithms for optimizing logistics for processes with short-term demands, geographically distributed production, storage of raw materials, intermediate and end products, and changing transport conditions, from the perspective of practical implementation and application in the form of an ASP solution.
One of the most promising and recent advances in computer-based planning is the transition from classical geometric modeling to building information modeling (BIM). Building information models support the representation, storage, and exchange of various information relevant to construction planning. This information can be used for describing, e.g., geometric/physical properties or costs of a building, for creating construction schedules, or for representing other characteristics of construction projects. Based on this information, plans and specifications as well as reports and presentations of a planned building can be created automatically. A fundamental principle of BIM is object parameterization, which allows specifying geometrical, numerical, algebraic and associative dependencies between objects contained in a building information model. In this paper, existing challenges of parametric modeling using the Industry Foundation Classes (IFC) as a federated model for integrated planning are shown, and open research questions are discussed.
NUMERICAL SIMULATION OF THERMO-HYGRAL ALKALI-SILICA REACTION MODEL IN CONCRETE AT THE MESOSCALE
(2010)
This research aims to model Alkali-Silica Reaction (ASR) gel expansion in concrete under the influence of hygral and thermal loading, based on experimental results. ASR provokes a heterogeneous expansion in concrete leading to dimensional changes and eventually the premature failure of the concrete structure. This can result in map cracking on the concrete surface, which decreases the concrete stiffness. Factors that influence ASR are parameters such as the cement alkalinity, the amount of reactive silica in the aggregate used and the concrete porosity, as well as external factors like temperature, humidity and external sources of alkali from the ingression of deicing salts. The uncertainties of the influential factors make ASR a difficult phenomenon to capture; hence, the approach taken here is stochastic modelling: a numerical simulation of a concrete cross-section that integrates experimental results from the Finger-Institute for Building Materials Science at the Bauhaus-Universität Weimar. The problem is formulated as a multi-field problem, combining heat transfer, fluid transfer and the reaction rate model with the mechanical stress field. The simulation is performed as a mesoscale model considering aggregates and the mortar matrix. The reaction rate model is calibrated using experimental results on concrete expansion due to ASR obtained from concrete prism tests. Expansive strain values for transient environmental conditions are then determined from the reaction rate model. The resulting models will be able to predict the rate of ASR expansion and the crack propagation that may arise.
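A common way to formalize such a reaction rate model is a first-order rate law with Arrhenius temperature dependence; the sketch below uses hypothetical parameters, not values calibrated from the concrete prism tests:

```python
import math

def asr_extent(t, T, k_ref=0.02, T_ref=311.0, Ea_over_R=5000.0):
    """Extent of reaction xi(t) in [0, 1] for a first-order ASR rate law
    d(xi)/dt = k(T) * (1 - xi), with Arrhenius temperature dependence
    k(T) = k_ref * exp(-Ea/R * (1/T - 1/T_ref)); T in Kelvin, t in days."""
    k = k_ref * math.exp(-Ea_over_R * (1.0 / T - 1.0 / T_ref))
    return 1.0 - math.exp(-k * t)

def asr_strain(t, T, eps_inf=2.0e-3):
    """Free expansive strain: reaction extent times ultimate expansion."""
    return eps_inf * asr_extent(t, T)
```

In a coupled mesoscale model, the free strain would additionally be scaled by humidity and restrained by the surrounding mortar matrix before entering the mechanical stress field.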