56.03 Methods in Civil Engineering
The paper presents a linear static analysis of continuous orthotropic thin-walled shell structures that are simply supported at the transverse ends and have an arbitrary deformable cross-section contour. The external loads can be arbitrary as well. This class of structures includes most bridges, scaffold bridges, some roof structures, etc. A numerical example of a continuous steel structure over five spans with an open cross-section contour has been solved. The structure has been examined with the following two computational models: a prismatic structure consisting of isotropic strips, plates and ribs, taking their real interaction into account, and a smooth orthotropic plate equivalent to the structure in the first model. The displacements and forces characterizing the stressed and deformed state of the structure have been determined, and the results obtained from the two solutions have been analyzed. The structure is studied with the force method in combination with the analytical finite strip method (AFSM) in displacements. The basic system is obtained by separating the superstructure from the substructure at the intermediate supports and consists of two parts. The first part is a single-span thin-walled prismatic shell structure; the second comprises the supports (columns, space frames, etc.). The connection between the superstructure and the intermediate supports is made under arbitrary supporting conditions. The forces at the supporting points in the direction of the removed connections are taken as the basic unknowns of the force method. The superstructure is solved by the AFSM in displacements. The structure is divided in only one (transverse) direction into a finite number of plane strips connected to each other along longitudinal linear nodes. The three displacements of the points on the node lines and the rotation about those lines are taken as the basic unknowns at each node.
The boundary conditions of each strip of the basic system correspond to simple support along the transverse ends and restraint along the longitudinal ones. Each strip of the basic system is solved by the method of single trigonometric series. The method reduces to solving a discrete structure in displacements and restoring its continuity at the sections made, with respect to both displacements and forces. The two parts of the basic system are solved in sequence under the action of unit values of each basic unknown and under the external load. The support part is solved using FEM structural analysis software. The basic unknown forces are determined from a system of canonical equations expressing the deformation-continuity conditions at the removed connections between the superstructure and the intermediate supports. The final displacements and forces at an arbitrary point of the continuous superstructure are determined using the principle of superposition. The computations were carried out with software developed in Visual Fortran 5.0 for the PC.
The problem F|n=2|F is to minimize a given objective function F(C_{1,m}, C_{2,m}) of the completion times C_{i,m} of two jobs i ∈ J = {1, 2} processed on m machines M = {1, 2, …, m}. Both jobs have the same technological route through the m machines. The processing time t_{i,k} of job i ∈ J on machine k ∈ M is known, and operation preemption is not allowed. Let R^{2m} be the space of non-negative 2m-dimensional real vectors t = (t_{1,1}, …, t_{1,m}, t_{2,1}, …, t_{2,m}) with Chebyshev distance d(t, t*). To solve problem F|n=2|F, we can use the geometric algorithm, which includes the following steps: 1) construct the digraph (V, A) for problem F|n=2|F and find the so-called border vertices in (V, A); 2) construct the set R_t of trajectories corresponding to the shortest paths in the digraph (V, A) from the origin vertex to each of the border vertices; 3) find an optimal path in the set R_t, which represents a schedule with minimal value of the objective function F. Let the path t_u ∈ R_t be optimal for problem F|n=2|F with operation processing times defined by the vector t. If for any small positive real number ε > 0 there exists a vector t* ∈ R^{2m} such that d(t, t*) = ε and the path t_u is not optimal for problem F|n=2|F with operation processing times defined by t*, then the optimality of the path t_u is not stable. The main result of the paper is a proof of necessary and sufficient conditions for the optimality stability of the path t_u. If the objective function F is continuous and non-decreasing (e.g., makespan, total completion time, maximum lateness or total tardiness), then testing whether the optimality of a path t_u ∈ R_t is stable takes O(m log m) time.
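For small m, the makespan case F = C_max can also be checked by brute force over the 2^m per-machine job orderings; the sketch below is an illustrative cross-check, not the paper's geometric algorithm, and all processing times are made up.

```python
from itertools import product

def two_job_flowshop_makespan(t1, t2):
    """Minimal makespan for two jobs with identical machine routes.

    Enumerates the 2^m per-machine job orderings; each ordering yields a
    semi-active schedule via the usual critical-path recurrence (each
    machine hosts only these two operations, so it is free at time 0).
    """
    m = len(t1)
    t = (t1, t2)
    best = float("inf")
    for order in product((0, 1), repeat=m):  # order[k] = job going first on k
        c = [0.0, 0.0]                       # completion time of each job so far
        for k in range(m):
            first, second = (0, 1) if order[k] == 0 else (1, 0)
            c_first = c[first] + t[first][k]           # machine k idle before
            c_second = max(c[second], c_first) + t[second][k]
            c[first], c[second] = c_first, c_second
        best = min(best, max(c))
    return best
```

For example, with t1 = (1, 2) and t2 = (2, 1) the optimum schedules job 1 first on both machines.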
Hydro- and morphodynamic processes in inland waters and in the nearshore coastal zone produce highly complex phenomena. Suitable numerical modelling tools are required to assess the evolution of coastal zones and river beds as well as human interventions in the form of protective structures. A holistic modelling approach for approximating coupled wave, current and morphodynamic processes based on stabilized finite elements is presented. Most of the model equations of hydro- and morphodynamics are transport equations. In accordance with the transport character of these equations, a stabilized finite element method on triangles is presented. The approximation corresponds to a streamline-upwind Petrov-Galerkin method for vector-valued multidimensional problems, in which the error of a standard Galerkin method is minimized by means of an upwinding coefficient. The choice of the upwinding coefficient is transferable to other problem classes and is based solely on the character of the underlying equations. The model was applied to wave and current studies in the Jade-Weser estuary on the German North Sea coast.
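A minimal one-dimensional sketch of the streamline-upwind idea described above, assuming linear elements and the simple full-upwind choice of the stabilization parameter; the paper's method is formulated for vector-valued problems on triangles, so this is only an illustration with invented numbers.

```python
import numpy as np

# 1D advection-diffusion: a*u' - eps*u'' = 0 on [0,1], u(0)=0, u(1)=1.
# Linear elements on a uniform mesh; SUPG adds tau*a^2 of streamline
# diffusion, here with the simple full-upwind choice tau = h/(2|a|).
n, a, eps = 21, 1.0, 1e-3
h = 1.0 / (n - 1)
tau = h / (2 * abs(a))          # upwinding coefficient (assumed choice)
d = eps + tau * a**2            # effective diffusion after stabilization

A = np.zeros((n, n))
b = np.zeros(n)
for i in range(1, n - 1):       # standard Galerkin + SUPG stencil
    A[i, i - 1] = -a / 2 - d / h
    A[i, i] = 2 * d / h
    A[i, i + 1] = a / 2 - d / h
A[0, 0] = A[-1, -1] = 1.0       # Dirichlet boundary conditions
b[-1] = 1.0
u = np.linalg.solve(A, b)       # monotone (oscillation-free) solution
```

Without the tau-term the cell Peclet number is 25 and the Galerkin solution oscillates; the added streamline diffusion makes the system an M-matrix and the discrete solution monotone.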
In recent years, special hypercomplex Appell polynomials have been introduced by several authors, and their main properties have been studied by different methods and with different objectives. As in the classical theory of Appell polynomials, their generating function is a hypercomplex exponential function. The observation that this generalized exponential function has, for example, a close relationship with Bessel functions confirmed the practical significance of such an approach to special classes of hypercomplex differentiable functions. Its usefulness for combinatorial studies has also been investigated. Moreover, an extension of those ideas led to the construction of complete sets of hypercomplex Appell polynomial sequences. Here we show how this opens the way for a more systematic study of the relation between some classes of Special Functions and Elementary Functions in Hypercomplex Function Theory.
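For orientation, the classical Appell property and generating function that the hypercomplex constructions generalize can be written as:

```latex
P_n'(x) = n\,P_{n-1}(x), \quad n \ge 1,
\qquad\Longleftrightarrow\qquad
\sum_{n=0}^{\infty} P_n(x)\,\frac{t^n}{n!} = A(t)\,e^{xt},
\quad A(0) \neq 0 .
```

In the hypercomplex setting, the role of e^{xt} is played by the generalized exponential function mentioned in the abstract.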
The cost of keeping large-area urban computer-aided architectural design (CAAD) models up to date justifies wider use and access. This paper reviews the potential for collaborative groupwork creation and maintenance of such models and suggests an approach to data entry, data management and the generation of models at appropriate levels of detail from a Geographic Information System (GIS). Staff at the University of the West of England (UWE) modelled a large area of Bristol to demonstrate millennium landmark proposals. It became swiftly apparent that the continued amendment of the model, to keep it an accurate reflection of changes on the ground, was a major data management problem. Piecing in new CAAD models received from architectural practices, to visualise them in context as part of the planning negotiation process, has often taken staff several days of work for each instance. The model is so complex and proprietary that Bristol City operates a specialist visualisation bureau service. UWE later modelled the environs of the Tower of London to support bids for funding and to provide the context for judging the visual impact of iterative design development. Further research continued to develop more effective approaches. Data conversion and amalgamation from all the diverse sources was the major impediment to effective group working to create the models. It became apparent that a GIS would assist in retrieving all the appropriate data describing the part of the model under creation. It was possible to predict that managing many historic part models, stepping back through time and allowing different expert interpretations to co-exist, would in itself be a major task requiring a spatial database/GIS. UWE started afresh from the original source data to explore the collaborative use of GIS and the Virtual Reality Modelling Language (VRML) to integrate models and interventions from various sources and to generate an overall navigable interactive whole.
Current exploration of the combination of event-driven behaviours and Structured Query Language seeks to define how to appropriately modify objects in the VRML model on demand. This is beginning to realise the potential of this process for asynchronous group modelling along the lines of a collaborative virtual design studio; historic building maintenance management; visitor management; interpretation of historic sites to visitors; and public planning information.
Spatial data acquisition, integration, and modeling for real-time project life-cycle applications
(2004)
Current methods for site modeling employ expensive laser range scanners that produce dense point clouds, which require hours or days of post-processing to arrive at a finished model. While these methods produce very detailed models of the scanned scene, useful for obtaining as-built drawings of existing structures, the associated computational burden precludes their use onsite for real-time decision-making. Moreover, in many project life-cycle applications, detailed models of objects are not needed. Results of earlier research conducted by the authors demonstrated novel, highly economical methods that reduce data acquisition time and the need for computationally intensive processing. These methods enable complete local area modeling on the order of a minute, with sufficient accuracy for applications such as advanced equipment control, simple as-built site modeling, and real-time safety monitoring for construction equipment. This paper describes a research project that is investigating novel ways of acquiring, integrating, modeling, and analyzing project site spatial data that do not rely on dense, expensive laser scanning technology and that enable scalability and robustness for real-time field deployment. Algorithms and methods for modeling objects of simple geometric shape (geometric primitives) from a limited number of range points provide a foundation for the further development required to address more complex site situations, especially if dynamic site information (motion of personnel and equipment) is to be captured. Field experiments are being conducted to establish performance parameters and to validate the proposed methods and models. Initial experimental work has demonstrated the feasibility of this approach.
The numerical simulation of microstructure models in 3D requires, owing to the enormous number of degrees of freedom, significant memory resources as well as parallel computational power. Compared to homogeneous materials, the material heterogeneity on the microscale induced by different material phases demands adequate computational methods for the discretization and solution of the resulting highly nonlinear problem. To enable an efficient and scalable solution of the linearized equation systems, the heterogeneous FE problem is described by a FETI-DP (Finite Element Tearing and Interconnecting - Dual Primal) discretization. The fundamental FETI-DP equation can be solved by a number of different approaches. In our approach, the FETI-DP problem is reformulated as a saddle point system by eliminating the primal and Lagrangian variables. For the reduced saddle point system, defined only by the interior and dual variables, special Uzawa algorithms can be adapted to iteratively solve the FETI-DP saddle-point equation system (FETI-DP SPE). A conjugate gradient version of the Uzawa algorithm is presented, together with numerical tests on the FETI-DP discretization of small examples using the presented solution technique. Furthermore, the inverse of the interior-dual Schur complement operator can be approximated using different techniques to build an adequate preconditioning matrix, thereby leading to substantial gains in computing time.
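A toy illustration of the basic Uzawa iteration on a small saddle point system; this is a plain, unpreconditioned stand-in for the CG-Uzawa variants discussed above, and the matrix entries are invented.

```python
import numpy as np

# Saddle-point system [[A, B^T], [B, 0]] [u; lam] = [f; g] with SPD A,
# solved by the basic Uzawa iteration:
#   solve A u = f - B^T lam, then lam <- lam + omega (B u - g).
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
B = np.array([[1.0, 1.0, 1.0]])
f = np.array([1.0, 0.0, 1.0])
g = np.array([0.5])

lam = np.zeros(1)
omega = 1.0                       # step size; must satisfy omega < 2/||B A^-1 B^T||
for _ in range(200):
    u = np.linalg.solve(A, f - B.T @ lam)
    lam = lam + omega * (B @ u - g)
u = np.linalg.solve(A, f - B.T @ lam)   # final consistent primal solve
```

Here the Schur complement B A^{-1} B^T ≈ 0.78, so omega = 1 contracts the dual error by a factor of about 0.22 per step; in practice the inner solve and the dual update are preconditioned, as the abstract describes.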
What is nowadays called (classic) Clifford analysis consists in establishing a function theory for functions belonging to the kernel of the Dirac operator. While such functions can very well describe problems of a particle with internal SU(2)-symmetries, higher-order symmetries are beyond this theory. Although many modifications (such as Yang-Mills theory) were suggested over the years, they could not address the principal problem: the need for an n-fold factorization of the d'Alembert operator. In this paper we present the basic tools of a fractional function theory in higher dimensions for the transport operator (alpha = 1/2), by means of a fractional correspondence to the Weyl relations via fractional Riemann-Liouville derivatives. A Fischer decomposition, fractional Euler and Gamma operators, a monogenic projection, and basic fractional homogeneous powers are constructed.
The aim of this paper is to discuss explicit series constructions for the fundamental solution of the Helmholtz operator on some important examples of non-orientable conformally flat manifolds. In this paper we focus on higher-dimensional generalizations of the Klein bottle, which in turn generalize the higher-dimensional Möbius strips that we discussed in preceding works. We discuss some basic properties of pinor-valued solutions to the Helmholtz equation on these manifolds.
This research focuses on an approach to describing principles in architectural layout planning within the domain of revitalization. With the aid of mathematical rules executed by a computer, solutions to design problems are generated. Provided that "design" is in principle a combinatorial problem, i.e. a constraint-based search for an overall optimal solution of a problem, an exemplary method is described to solve such problems in architectural layout planning. To avoid conflicts relating to theoretical subtleties, a customary approach adopted from Operations Research has been chosen in this work. In this approach, design is a synonym for planning, which can be described as a systematic and methodical course of action for the analysis and solution of current or future problems. The planning task is defined as the analysis of a problem with the aim of preparing optimal decisions by the use of mathematical methods. The decision problem of a planning task is represented by an optimization model, and an efficient algorithm is applied to help find one or more solutions to the problem. The basic principle underlying the approach presented herein is the understanding of design as a search for solutions that fulfill specific criteria. This search is executed by the use of a constraint programming language.
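The constraint-based search can be illustrated on a deliberately tiny layout example; the room names, adjacency constraints and corridor model are invented for illustration, and a real system would use a constraint programming language rather than plain enumeration.

```python
from itertools import permutations

# Four rooms placed in four slots along a corridor; a layout is an
# assignment of rooms to slots, and "design" is the search for all
# assignments satisfying the stated constraints.
rooms = ["living", "kitchen", "dining", "bath"]

def ok(layout):
    pos = {room: slot for slot, room in enumerate(layout)}
    return (abs(pos["kitchen"] - pos["dining"]) == 1   # must be adjacent
            and abs(pos["bath"] - pos["living"]) > 1)  # must be separated

solutions = [p for p in permutations(rooms) if ok(p)]
```

Enumerating all 24 permutations and filtering by the constraints yields exactly the feasible layouts; a constraint solver would instead propagate the constraints to prune the search tree.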
The paper is dedicated to exploring the solvability of the market segmentation problem with the help of linear convolution algorithms. The mathematical formulation of this problem is an interval problem of covering a bipartite graph by stars. The vertices of the first partition correspond to types of commodities, the vertices of the second to customer groups. A method is proposed for reducing the interval problem to a two-criterion problem that has one implemented linear convolution algorithm. It is proved that the multicriterion, and consequently the interval, market segmentation problem cannot be solved with the help of a linear convolution algorithm.
We consider efficient numerical methods for the solution of partial differential equations with stochastic coefficients or right-hand side. The discretization is performed by the stochastic finite element method (SFEM). Separation of spatial and stochastic variables in the random input data is achieved via a Karhunen-Loève expansion or Wiener's polynomial chaos expansion. We discuss solution strategies for the Galerkin system that take advantage of the special structure of the system matrix. For stochastic coefficients linear in a set of independent random variables we employ Krylov subspace recycling techniques after having decoupled the large SFEM stiffness matrix.
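A discrete sketch of the Karhunen-Loève separation mentioned above, assuming an exponential covariance on a 1D grid; the eigendecomposition of the covariance matrix plays the role of the KL eigenfunctions, and all parameters are illustrative.

```python
import numpy as np

# Truncated Karhunen-Loeve expansion of a Gaussian field with
# covariance C(x, y) = exp(-|x - y| / L) on a uniform 1D grid.
n, L, terms = 100, 0.3, 10
x = np.linspace(0.0, 1.0, n)
C = np.exp(-np.abs(x[:, None] - x[None, :]) / L)

vals, vecs = np.linalg.eigh(C)            # eigh returns ascending order
vals, vecs = vals[::-1], vecs[:, ::-1]    # largest eigenvalues first

rng = np.random.default_rng(0)
xi = rng.standard_normal(terms)           # independent N(0,1) coefficients
sample = vecs[:, :terms] @ (np.sqrt(vals[:terms]) * xi)   # one field realization
```

The rapid eigenvalue decay is what makes the truncation (and the separated structure of the resulting Galerkin system) effective: here ten of one hundred modes already capture most of the variance.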
The contribution presents a model that is able to simulate the construction duration and cost of a building project. The model predicts a set of expected project costs and a duration schedule depending on input parameters such as production speed, scope of work, time schedule, bonding conditions, and the maximum and minimum deviations from scope of work and production speed. The simulation model is able to calculate, for a given input level of probability, the corresponding construction cost and duration of a project. The reciprocal view is concerned with finding the adequate level of probability for given construction costs and activity durations. Among the interpretive outputs of the application software is the compilation of a presumed dynamic progress chart. This progress chart represents the expected scenario of the development of a building project, mapping potential time dislocations for particular activities. The calculation of the presumed dynamic progress chart is based on an algorithm that calculates mean values as a partial result of the simulated building project. Construction cost and time models are, in many ways, useful tools in project management. Clients are able to make proper decisions about the time and cost schedules of their investments, and building contractors are able to schedule the predicted project cost and duration before any decision is finalized.
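The simulation idea can be sketched as a plain Monte Carlo model; the activity data, the triangular distributions and the sequential-activity assumption are invented for illustration, and the paper's model is considerably richer.

```python
import random

random.seed(1)

# Each activity: (name, scope [units], production speed (min, mode, max)
# [units/day], cost rate [per day]). Speeds vary between given bounds.
activities = [
    ("earthworks", 400, (35, 50, 60), 2000),
    ("structure",  900, (20, 30, 35), 5000),
    ("finishes",   600, (25, 40, 50), 3000),
]

def simulate_once():
    duration = cost = 0.0
    for _, scope, (lo, mode, hi), rate in activities:
        days = scope / random.triangular(lo, hi, mode)
        duration += days            # activities assumed strictly sequential
        cost += days * rate
    return duration, cost

runs = sorted(simulate_once() for _ in range(5000))

def duration_at(p):
    """Project duration not exceeded with probability p (empirical quantile)."""
    return runs[int(p * (len(runs) - 1))][0]
```

Reading off `duration_at(0.5)` versus `duration_at(0.9)` gives exactly the probability-level-to-duration mapping the abstract describes, and the reciprocal lookup (duration to probability) is a search in the same sorted sample.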
Today's procedures for awarding public construction contracts are mainly paper-based. Although the use of electronic means is permitted in the VOB, the regulations are not yet sufficient; in particular, software agents within the AEC bidding process were not considered at all. The acceptance of an agent-based virtual marketplace for AEC bidding depends on a reliable and trustworthy public key infrastructure conforming to the (German) digital signature act. Only if confidentiality, integrity, non-repudiation, and authentication are provided reliably will users assign sensitive business processes such as public tendering procedures to software agents. The development of a secure agent-based virtual marketplace for AEC bidding in accordance with the legal regulations is an entirely new approach from a technical as well as a legal point of view. The objective of this research project is the development of intelligent software agents which are able to legally call for bids, to calculate proposals, and to award the contract to the successful bidder.
A simultaneous solution procedure for fluid-structure interactions in civil engineering is presented. The structural dynamics are modelled with geometrically nonlinear elasticity theory in a total Lagrangian formulation. The flow is described by the incompressible Navier-Stokes equations. Where turbulence effects are significant, the Reynolds-averaged equations are used in combination with Wilcox's k-omega turbulence model. The level-set method is employed to describe complex free surfaces. The uniform discretization of fluid and structure with the space-time finite element method leads to a consistent computational model for the coupled system. Since the isoparametric space-time elements can change their geometry in the time direction, the method allows a natural description of the flow domain, which changes over time as a result of the structural motion. The weighted integral formulation of the coupling conditions, with global degrees of freedom for the interface stresses, ensures a conservative coupling of fluid and structure. Selected application examples demonstrate the capability of the developed methodology and confirm the good convergence properties of the simultaneous solution procedure.
Within the scheduling of construction projects, different, partly conflicting objectives have to be considered. The specification of an efficient construction schedule is a challenging task, which leads to an NP-hard multi-criteria optimization problem. In the past decades, so-called metaheuristics have been developed for scheduling problems to find near-optimal solutions in reasonable time. This paper presents a Simulated Annealing concept for determining near-optimal construction schedules. Simulated Annealing is a well-known metaheuristic optimization approach for solving complex combinatorial problems. To deal with several optimization objectives, the Pareto optimization concept is applied. Thus, the optimization result is a set of Pareto-optimal schedules, which can be analyzed to select exactly one practicable and reasonable schedule. A flexible constraint-based simulation approach is used to generate possible neighboring solutions very quickly during the optimization process. The essential aspects of the developed Pareto Simulated Annealing concept are presented in detail.
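A toy sketch of Pareto-archived Simulated Annealing on a two-objective mode-selection problem; the activity data, the random-weight scalarization and the cooling schedule are assumptions for illustration, whereas the paper generates neighbors via a constraint-based simulation.

```python
import math
import random

random.seed(2)

# Each activity has a fast-but-expensive mode 0 and a slow-but-cheap
# mode 1; the two objectives are total duration and total cost.
modes = [((3, 900), (5, 500)), ((4, 1200), (7, 700)),
         ((2, 600), (4, 300)), ((6, 1500), (9, 900))]

def evaluate(x):
    d = sum(modes[i][x[i]][0] for i in range(len(x)))
    c = sum(modes[i][x[i]][1] for i in range(len(x)))
    return d, c

def dominates(a, b):
    return a[0] <= b[0] and a[1] <= b[1] and a != b

def pareto_sa(steps=2000, temp=50.0, cooling=0.999):
    x = [random.randint(0, 1) for _ in modes]
    fx = evaluate(x)
    archive = {tuple(x): fx}                 # all visited solutions
    for _ in range(steps):
        y = list(x)
        y[random.randrange(len(y))] ^= 1     # neighbour: flip one mode
        fy = evaluate(y)
        w = random.random()                  # random weight spreads the front
        delta = w * (fy[0] - fx[0]) + (1 - w) * (fy[1] - fx[1]) / 100.0
        if delta <= 0 or random.random() < math.exp(-delta / temp):
            x, fx = y, fy
            archive[tuple(x)] = fx
        temp *= cooling
    pts = list(archive.values())             # keep only non-dominated points
    return sorted({f for f in pts if not any(dominates(g, f) for g in pts)})

front = pareto_sa()
```

The returned `front` is the mutually non-dominated subset of all visited schedules, from which a decision maker would pick one trade-off.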
A practical framework for generating cross-correlated fields with a specified marginal distribution function, autocorrelation function and cross correlation coefficients is presented in the paper. The contribution builds on a recent journal paper [1]. The approach relies on well-known series expansion methods for the simulation of a Gaussian random field. The proposed method requires all cross-correlated fields over the domain to share an identical autocorrelation function, and the cross correlation structure between each pair of simulated fields to be defined simply by a cross correlation coefficient. These relations result in specific properties of the eigenvectors of the covariance matrices of the discretized fields over the domain. These properties are used to decompose the eigenproblem, which must normally be solved when computing the series expansion, into two smaller eigenproblems. Such a decomposition represents a significant reduction of the computational effort. Non-Gaussian components of a multivariate random field are simulated via memoryless transformation of the underlying Gaussian random fields, for which the Nataf model is employed to modify the correlation structure. In this method, the autocorrelation structure of each field is fulfilled exactly, while the cross correlation is only approximated. The associated errors can be computed before performing the simulations; it is shown that the errors occur especially in the cross correlation between distant points and that they are negligibly small in practical situations.
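The key structural idea, one shared eigendecomposition plus a cross correlation imposed on the expansion coefficients, can be sketched as follows; this covers the Gaussian case only, with illustrative parameters, and omits the Nataf step for non-Gaussian marginals.

```python
import numpy as np

rng = np.random.default_rng(3)
n, L, rho, terms, nsim = 80, 0.4, 0.6, 20, 4000
x = np.linspace(0.0, 1.0, n)
C = np.exp(-np.abs(x[:, None] - x[None, :]) / L)   # shared autocorrelation

vals, vecs = np.linalg.eigh(C)                     # one eigenproblem for both fields
vals, vecs = vals[::-1][:terms], vecs[:, ::-1][:, :terms]

# Correlate the expansion coefficients of the two fields mode by mode.
Lc = np.linalg.cholesky(np.array([[1.0, rho], [rho, 1.0]]))
xi = np.einsum("ij,sjk->sik", Lc, rng.standard_normal((nsim, 2, terms)))
fields = np.einsum("nk,sik->sin", vecs * np.sqrt(vals), xi)   # (nsim, 2, n)

# Empirical cross correlation between the two fields at one point.
emp_rho = np.corrcoef(fields[:, 0, n // 2], fields[:, 1, n // 2])[0, 1]
```

Because both fields use the same eigenbasis, correlating the mode coefficients with the coefficient rho makes the pointwise cross correlation of the fields equal to rho, up to truncation and sampling error.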
From the passenger's perspective, punctuality is one of the most important features of tram route operation. We present a stochastic simulation model with special focus on determining the important factors of influence. The statistical analysis is based on large samples (sample size nearly 2000) accumulated from comprehensive measurements on eight tram routes in Cracow. For the simulation, we are interested not only in average values but also in stochastic characteristics such as the variance and other properties of the distribution. A realization of tram operations is assumed to be a sequence of running times between successive stops and times spent by the tram at the stops, divided into passenger alighting and boarding times and times waiting for the possibility of departure. The running time depends on the kind of track separation, including the priorities at traffic lights, the length of the section and the number of intersections. For every type of section, a linear mixed regression model describes the average running time and its variance as functions of the length of the section and the number of intersections. The regression coefficients are estimated by the iteratively re-weighted least squares method. Alighting and boarding time mainly depends on the type of vehicle, the number of passengers alighting and boarding, and the occupancy of the vehicle. For the distribution of the time waiting for the possibility of departure, suitable distributions such as the Gamma and Lognormal distributions are fitted.
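The regression step can be illustrated with ordinary least squares on synthetic data; this is a simplification of the iteratively re-weighted mixed model in the abstract, and the coefficients and noise level are invented.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 500
length = rng.uniform(0.2, 1.5, n)          # section length [km]
inters = rng.integers(0, 6, n)             # intersections per section

# Synthetic "measured" running times [s]: 20 s base, 90 s/km, 12 s per
# intersection, plus Gaussian noise.
run_time = 20 + 90 * length + 12 * inters + rng.normal(0, 5, n)

# Average running time as a linear function of length and intersections.
X = np.column_stack([np.ones(n), length, inters])
beta, *_ = np.linalg.lstsq(X, run_time, rcond=None)
```

With enough sections, the fitted coefficients recover the underlying per-km and per-intersection contributions; the re-weighted scheme in the paper additionally models how the variance grows with these covariates.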
SIMULATION AND MATHEMATICAL OPTIMIZATION OF THE HYDRATION OF CONCRETE FOR AVOIDING THERMAL CRACKS
(2010)
After the mixing of concrete, hardening starts by an exothermic chemical reaction known as hydration. As the reaction rate depends on the temperature, the time in the description of the hydration is replaced by the maturity, which is defined as an integral over a certain function of the temperature. The temperature distribution is governed by the heat equation with a right-hand side depending on the maturity and the temperature itself. We compare the performance of different higher-order time integration schemes with automatic time step control. The simulation of the heat distribution is important because the development of the mechanical properties is driven by the hydration. During this process it is possible that the tensile stresses exceed the tensile strength and cracks occur. The goal is to produce cheap concrete without cracks. Simple crack criteria use only temperature differences; more involved ones are based on thermal stresses. If the criterion predicts cracks, some changes in the input data are needed, which can be interpreted as optimization. The final goal is to apply model-based optimization (in contrast to simulation-based optimization) to the hydration of young concrete and the avoidance of cracks. The first step is the simulation of the hydration, on which we focus in this paper.
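The maturity integral can be sketched with an Arrhenius-type equivalent age and an assumed temperature history; the activation energy, reference temperature and temperature curve below are illustrative values, not the paper's data.

```python
import math

# Equivalent-age maturity: M(t) = integral over [0, t] of
#   exp( E/R * (1/T_ref - 1/T(s)) ) ds,
# evaluated with the trapezoidal rule.
E_over_R = 4000.0              # activation energy / gas constant [K] (assumed)
T_ref = 293.15                 # reference temperature, 20 degC [K]

def temperature(t_h):
    """Assumed hydration temperature history [K] at time t_h [hours]."""
    return 293.15 + 25.0 * (1.0 - math.exp(-t_h / 15.0))

def maturity(t_end, dt=0.1):
    rate = lambda t: math.exp(E_over_R * (1.0 / T_ref - 1.0 / temperature(t)))
    m, t = 0.0, 0.0
    while t < t_end - 1e-12:
        m += dt * 0.5 * (rate(t) + rate(t + dt))   # trapezoidal step
        t += dt
    return m                   # equivalent age [hours] at T_ref
```

Since the assumed temperature stays above the reference, the rate factor exceeds one and the equivalent age runs ahead of real time, which is exactly why maturity replaces time in the hydration description.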
The design of safety-critical structures exposed to cyclic excitations demands non-degrading or limited-degrading behavior during extreme events. Among other factors, the structural behavior is mainly determined by the number of plastic cycles completed during the excitation. Existing simplified methods often ignore this dependency, or assume or require sufficient cyclic capacity. The paper introduces a new performance-based design method that explicitly considers a predefined number of re-plastifications. For this purpose, approaches from shakedown theory and signal processing methods are utilized. The paper introduces the theoretical background, explains the steps of the design procedure and demonstrates the applicability with the help of an example. This project was supported by the German Science Foundation (Deutsche Forschungsgemeinschaft, DFG).
Site superintendents performing project management tasks on construction sites need to access project documents and need to collect information that they observe while inspecting the site. Often, information that is observed on a construction site needs to be integrated into electronic documents or project control systems. In the future, we expect integrated product and process models to be the medium for storing and handling construction project management information. Even though mobile computing devices today are already capable of storing and handling such integrated product and process data models, the user interaction with such large and complex models is difficult and not adequately addressed in the existing research. In this paper, we introduce a system that supports project management tasks on construction sites effectively and efficiently by making integrated product and process models accessible. In order to effectively and efficiently enter or access information, site superintendents need visual representations of the project data that are flexible with respect to the level of detail, the decomposition structure, and the type of visual representation. Based on this understanding of the information and data collection needs, we developed the navigational model framework and the application Site Data Collection System (SiDaCoS), which implements that framework. The navigational model framework allows site superintendents to create customized representations of information contained in a product and process model that correspond to their data access and data collection needs on site.
Accounting for stochastic system and load parameters in the analysis of structural behaviour permitted by the Eurocodes, including global nonlinear system behaviour, is necessary because such an analysis requires a different safety concept. This becomes particularly clear when the plastic limit load factor (PLLF), which exploits the system capacity up to collapse, is used for the limit state assessment. For the model of a plane reinforced concrete structure, rigid-ideal-plastic material behaviour is assumed. For a given load pattern, the PLLF can be determined from an extremal principle by solving an optimization problem. This direct determination of collapse, however, causes difficulties in the stochastic analysis, since the associated limit state equations (LSE) are not well-behaved. The stochastic method of multi-modal importance sampling (MMIS) is proposed, which determines the probability of failure while taking the properties of this mechanical model into account; that is, the procedure respects the only piecewise continuous LSE of this particular problem. It requires the corresponding limit state function. The essential design points are found by applying the beta method, and the probability of failure is then determined with an importance sampling algorithm using a multimodal sampling density. The procedure finds and accounts for the essential failure regions of the problem with reasonable effort. Improvements could be achieved both in the search and iteration algorithms involved and in the choice of the individual sampling densities, which is the subject of further investigations.
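The importance sampling step can be illustrated in one dimension with a single shifted sampling density; the multimodal scheme above mixes several such densities centered at the design points, and all numbers here are illustrative.

```python
import math
import random

random.seed(5)

# Failure probability P[X > 3] for standard normal X, estimated by
# importance sampling with the sampling density shifted into the
# failure region (a unimodal stand-in for a multimodal mixture).
def phi(x):
    """Standard normal probability density."""
    return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

shift, n = 3.0, 50000
acc = 0.0
for _ in range(n):
    x = random.gauss(shift, 1.0)        # sample from the shifted density
    if x > 3.0:                         # failure indicator
        acc += phi(x) / phi(x - shift)  # likelihood-ratio weight
p_fail = acc / n                        # estimate of 1 - Phi(3) ~ 1.35e-3
```

Because roughly half the samples land in the failure region, the estimator's variance is orders of magnitude smaller than that of crude Monte Carlo at this probability level.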
In ersten Teil des Vortrages wird eine Heuristik zur Approximation der Pareto Menge multikriterielle verallgemeinerter Job-Shop Scheduling-Probleme vorgestellt. Der Algorithmus basiert auf einer genetischen lokale Suche Heuristik. Die Mittelwertbildung der Startzeiten der Vorgänge wurde hierbei als Rekombinationsoperator verwendet. Als lokale Suche wurde ein Threshold Accepting Algorithmus implementiert. Der Algorithmus wurde auf einer großen Menge von Benchmark-Instanzen und den Zielfunktionen Makespan, Tardiness, Lateness und Summe der Fertigstellungszeiten getestet. Die Ergebnisse zu den verschiedenen Zielfunktionen werden vorgestellt. Die vom Algorithmus erzeugte Approximation der Pareto Menge kann eine große Anzahl von Lösungen enthalten. Aus dieser Menge muss der Entscheidungsträger eine Lösung auswählen. Da die Datenmenge schon bei moderaten Problemdimensionen sehr umfangreich wird, stellt dies ein Problem für den Entscheider dar. Deshalb muss die große Menge der Lösungen auf eine überschaubare Anzahl reduziert werden. Bei dieser Reduktion der Lösungen muss die Diversität der verbleibenden Lösungen beachtet werden. Diese Reduzierung wird als Short Listing bezeichnet und im zweiten Teil des Vortrages vorgestellt. Im ersten Schritt des Short Listings werden die Lösungen mittels Abstandsmaßen im Lösungsraum geclustert. Die dazu verwendeten Abstandsmaße werden auf den Permutationen der Vorgänge auf den Ressourcen definiert. Es wurden fünf Abstandsmaße und zwei Clusterverfahren, ein hierarchisches und ein nicht-hierarchisches, untersucht. Im zweiten Schritt wird aus jedem Cluster jeweils eine Lösung ausgewählt und dem Entscheidungsträger vorgelegt. Dabei wurden zwei Methoden untersucht. Im ersten Fall wurde die bezüglich einer Ranking-Funktion beste Lösung und im zweiten Fall die Medianlösung bezüglich des Abstandsmaßes ausgewählt. Die Abstandsmaße, Clusteralgorithmen und Auswahlmethoden wurden auf einer großen Menge von Benchmark-Instanzen verglichen. 
The results are presented in the talk.
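The clustering step of such a short listing can be sketched as follows: a pairwise-disagreement (Kendall-tau-like) distance on operation permutations, a simple k-medoids-style clustering with farthest-point initialisation, and the medoid as the per-cluster representative. The distance measure, the clustering scheme and the toy schedules are illustrative assumptions; the talk itself investigates five distance measures and two clustering methods.

```python
import numpy as np
from itertools import combinations

def perm_distance(p, q):
    """Number of pairwise order disagreements between two permutations."""
    pos_q = {job: i for i, job in enumerate(q)}
    return sum(1 for a, b in combinations(p, 2) if pos_q[a] > pos_q[b])

def short_list(solutions, n_clusters):
    # full distance matrix between all candidate schedules
    d = np.array([[perm_distance(a, b) for b in solutions] for a in solutions])
    medoids = [0]
    while len(medoids) < n_clusters:            # farthest-point initialisation
        medoids.append(int(d[medoids].min(axis=0).argmax()))
    for _ in range(10):                         # k-medoids-style refinement
        labels = d[:, medoids].argmin(axis=1)
        medoids = [int(min(np.flatnonzero(labels == k),
                           key=lambda i: d[i, labels == k].sum()))
                   for k in range(n_clusters)]
    return [solutions[m] for m in medoids]      # one median solution per cluster

# toy data: two "families" of similar operation sequences
sols = [(0, 1, 2, 3), (1, 0, 2, 3), (3, 2, 1, 0), (3, 2, 0, 1)]
shortlist = short_list(sols, 2)
```

The returned shortlist contains one representative per family, so diversity is preserved while the number of solutions shown to the decision maker shrinks.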
In this paper three different formulations of a Bernoulli-type free boundary problem are discussed. By analyzing the shape Hessian in the case of matching data, a distinction is made between well-posed and ill-posed formulations. A nonlinear Ritz-Galerkin method is applied for discretizing the shape optimization problem. In the case of well-posedness, existence and convergence of the approximate shapes are proven. In combination with a fast boundary element method, efficient first and second order shape optimization algorithms are obtained.
A new application area of software technology is smart living, or sustainable living. Within this area, application platforms are designed and realized with the goal of supporting value-added services. In this context, value-added services integrate microelectronics, home automation and services to enhance the attractiveness of flats, homes and buildings. Real estate companies and service providers dealing with home services are especially interested in an effective design and management of their services. Service engineering is an established approach for designing customer-oriented service processes. It consists of several phases, from situation analysis through service creation and service design to service management. This article describes how the service blueprint method can be used to design service processes. Smart living includes all actions that turn a flat into a smart home. One special requirement of this application domain is the use of local components (actuators, sensors) within service processes. This article shows how this extended method supports service providers in improving the quality of customer-oriented service processes and in deriving the required interfaces of the involved actors. For the civil engineering process, it becomes possible to derive the required information from a built-in home automation system. The aim is to show how to obtain the smart local components needed to fulfill the IT-supported value-added services offered later. Value-added services focused on inhabitants are grouped into consulting and information, care and supervision, leisure time activities, repairs, mobility and delivery, safety and security, and supply and disposal.
In distributed project organisations and collaboration there is a need for integrating unstructured self-contained text information with structured project data. We consider this a process of text integration in which various text technologies can be used to externalise text content and consolidate it into structured information or flexibly interlink it with corresponding information bases. However, the effectiveness of text technologies and the potential of text integration vary greatly with the type of documents, the project setup and the available background knowledge. The goal of our research is to establish text technologies within collaboration environments that allow for (a) flexibly combining appropriate text and data management technologies, (b) utilising available context information and (c) sharing text information in accordance with the most critical integration tasks. A particular focus is on Semantic Service Environments that leverage Web service and Semantic Web technologies and adequately support the required systems integration and parallel processing of semi-structured and structured information. The paper presents an architecture for text integration that extends Semantic Service Environments with two types of integration services. The backbone of the Information Resource Sharing and Integration Service is a shared environment ontology that consolidates information on the project context and the available model, text and general linguistic resources. It also allows for the configuration of Semantic Text Analysis and Annotation Services to analyse the text documents, as well as for capturing the discovered text information and sharing it through semantic notification and retrieval engines. A particular focus of the paper is the definition of the overall integration process configuring a complementary set of analysis and information sharing components.
SELECTION AND SCALING OF GROUND MOTION RECORDS FOR SEISMIC ANALYSIS USING AN OPTIMIZATION ALGORITHM
(2015)
The nonlinear time history analysis and seismic performance-based methods require a set of scaled ground motions. The conventional procedure of ground motion selection is based on matching the motion properties, e.g. magnitude, amplitude, fault distance, and fault mechanism. The seismic target spectrum is only used in the scaling process following the random selection process. Therefore, the aim of the paper is to present a procedure to select sets of ground motions from a compiled database of ground motions. The selection procedure is based on solving an optimization problem using Dijkstra's algorithm to match the selected set of ground motions to a target response spectrum. The selection and scaling procedure of optimized sets of ground motions is demonstrated by analyses of nonlinear single degree of freedom systems.
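The matching criterion behind such a procedure can be illustrated with a simplified stand-in: each record gets a least-squares scale factor against the target spectrum, and the records with the smallest scaled misfit form the set. This greedy pick is only a sketch of the matching objective, not the paper's Dijkstra-based optimization, and the spectra below are synthetic toy data rather than real records.

```python
import numpy as np

rng = np.random.default_rng(1)
periods = np.linspace(0.1, 3.0, 30)
target = 1.2 * np.exp(-periods)                       # toy target spectrum
# toy database: 50 synthetic record spectra scattered around the target
database = target * rng.uniform(0.5, 1.5, (50, periods.size))

def best_scale(record, target):
    """Least-squares scale factor minimising ||s * record - target||."""
    return float(record @ target / (record @ record))

def select_set(database, target, n_records):
    misfits = [np.linalg.norm(best_scale(r, target) * r - target)
               for r in database]
    chosen, residuals = [], []
    for idx in np.argsort(misfits)[:n_records]:       # greedy: best matches
        s = best_scale(database[idx], target)
        chosen.append((int(idx), s))
        residuals.append(np.linalg.norm(s * database[idx] - target))
    return chosen, float(np.mean(residuals))

selected, mean_residual = select_set(database, target, 7)
```

The scale factor is the orthogonal projection coefficient, so scaling never increases a record's misfit; the optimization in the paper additionally couples the choices so the whole set matches the target spectrum.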
In spite of the extensive research in dynamic soil-structure interaction (SSI), there still exist misconceptions concerning the role of SSI in the seismic performance of structures, especially the ones founded on soft soil. This is due to the fact that current analytical SSI models that are used to evaluate the influence of soil on the overall structural behavior are approximate models and may involve creeds and practices that are not always precise. This is especially true of the codified approaches, which include substantial approximations to provide simple frameworks for the design. As the direct numerical analysis requires a high computational effort, performing an analysis considering SSI is computationally uneconomical for regular design applications. This paper sets up some milestones for evaluating SSI models. This will be achieved by investigating the different assumptions and involved factors, as well as varying the configurations of R/C moment-resisting frame structures supported by single footings which are subject to seismic excitations. It is noted that the scope of this paper is to highlight, rather than fully resolve, the above subject. A rough draft of the proposed approach is presented in this paper, whereas a thorough illustration will be carried out in the presentation in the course of the conference.
Design experience with arch dams shows that shape optimization of arch dams has significant practical value: it can fully exploit material characteristics and reduce construction costs. Suitable variables need to be chosen to formulate the objective function, e.g. to minimize the total volume of the arch dam. Additionally, a series of constraints is derived and a reasonable and convenient penalty function is formed, which can easily enforce the characteristics of the constraints and of the optimal design. As the optimization method, a Genetic Algorithm is adopted to perform a global search. Simultaneously, ANSYS is used for the mechanical analysis under coupled thermal and hydraulic loads. One of the constraints of the newly designed dam is to fulfill requirements on structural safety. Therefore, a reliability analysis is applied to offer good decision support for matters concerning predictions of both the safety and the service life of the arch dam. In this way, the key factors that significantly influence the stability and safety of the arch dam can be identified, providing a good basis for preventive measures that prolong the service life of an arch dam and enhance the safety of the structure.
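The combination of a Genetic Algorithm with a penalty function can be sketched on a toy problem: a quadratic "volume" objective, one inequality constraint standing in for a stress limit, and a quadratic penalty for constraint violation. The real objective is evaluated by a coupled thermo-hydraulic ANSYS analysis, which is not reproduced here; all functions and numbers below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

def volume(x):                       # toy "material volume" objective
    return x[..., 0] ** 2 + x[..., 1] ** 2

def constraint(x):                   # feasible when x0 + x1 >= 1 (toy limit)
    return x[..., 0] + x[..., 1] - 1.0

def penalised(x, rho=100.0):
    # quadratic penalty activates only when the constraint is violated
    return volume(x) + rho * np.minimum(constraint(x), 0.0) ** 2

pop = rng.uniform(0.0, 2.0, (40, 2))            # random initial population
for _ in range(200):
    fitness = penalised(pop)
    parents = pop[np.argsort(fitness)[:20]]     # truncation selection (elitist)
    mates = parents[rng.permutation(20)]
    children = 0.5 * (parents + mates)          # arithmetic crossover
    children += rng.normal(0.0, 0.05, children.shape)   # Gaussian mutation
    pop = np.vstack([parents, children])

best = pop[np.argmin(penalised(pop))]
```

For this toy problem the constrained optimum is x = (0.5, 0.5) with volume 0.5, so the quality of the penalty formulation can be checked directly against the known solution.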
This paper concerns schedule synchronization problems in public transit networks. It consists of three main parts. In the first part the subject area is introduced, the terms are defined, and a framework for optimal synchronization in the form of problem representation and formulation is proposed. The second part is devoted to the transfer synchronization problem, where passengers change transit lines at transfer points. An integrated Tabu Search and Genetic Algorithm solution method is developed for this specific problem. The third part deals with the headway harmonization problem, i.e. the synchronization of the schedules of different transit lines on common segments of routes. For the solution of this problem a new bilevel optimization method is proposed, with zone harmonization at the bottom level and coordination of zones, by time buffers assigned to timing points, at the upper level. Finally, the synchronization problems are numerically illustrated by real-life examples of public transport lines in Cracow.
Although there are some good reasons to design engineering software as a stand-alone application for a single computer, there are also numerous possibilities for creating distributed engineering applications, in particular using the Internet. This paper presents some typical scenarios of how engineering applications can benefit from including network capabilities. In addition, some examples of Internet-based engineering applications are discussed to show how the concepts presented can be implemented.
The paper introduces a systematic construction management approach supporting the expansion of a specified construction process, both automatically and semi-automatically. Throughout the whole design process, many requirements must be taken into account in order to fulfil the demands defined by clients. In translating those demands into a design concept and ultimately an execution plan, constraints such as site conditions, the building code, and the legal framework have to be considered. However, the complete information needed to make a sound decision is not yet available in the early phase. Decisions are traditionally taken based on experience and assumptions. Due to the vast number of feasible solutions, particularly in building projects, it is necessary to make those decisions traceable. This is important in order to be able to reconstruct the considerations and assumptions taken, should the project's objectives change in the future. The research is carried out by means of building information modelling, where rules derived from the standard logic of construction management knowledge are applied. This knowledge comprises the comprehensive interaction among the bidding process, cost estimation, construction site preparation as well as specific project logistics, which are usually still considered separately. By means of these rules, favourable decisions regarding prefabrication and in-situ implementation can be justified. Modifications depending on the information available within the current design stage will consistently be traceable.
In this paper, experimental studies and numerical analyses carried out on reinforced concrete beams are partially reported. They aimed at applying the rigid finite element method to calculations for reinforced concrete beams using a discrete crack model. Hence, the rotational ductility resulting from crack occurrence had to be determined. A relationship for calculating it in static equilibrium is proposed. Laboratory experiments proved that the dynamic ductility is considerably smaller. Therefore, scaling of the empirical parameter was carried out. Consequently, a formula for its value depending on the reinforcement ratio was obtained.
The topic of structural robustness is covered extensively in the current structural engineering literature, and a few evaluation methods already exist. Since these methods are based on different evaluation approaches, their comparison is difficult. But all the approaches have one thing in common: they need a structural model which represents the structure to be evaluated. As the structural model is the basis of the robustness evaluation, the question arises whether the quality of the chosen structural model influences the estimation of the structural robustness index. This paper shows what robustness in structural engineering means and gives an overview of existing assessment methods. One is the reliability-based robustness index, which uses the reliability indices of an intact and a damaged structure. The second one is the risk-based robustness index, which estimates the structural robustness using direct and indirect risk. The paper describes how these approaches to the evaluation of structural robustness work and which parameters are used. Since both approaches need a structural model for the estimation of the structural behavior and the probability of failure, it is necessary to think about the quality of the chosen structural model. In any case, the chosen model has to represent the structure and the input factors and reflect the damages which occur. Using the example of two different model qualities, it is shown that the model choice indeed influences the quality of the robustness index.
In construction engineering, a schedule’s input data, which is usually not exactly known in the planning phase, is considered deterministic when generating the schedule. As a result, construction schedules become unreliable and deadlines are often not met. While the optimization of construction schedules with respect to costs and makespan has been a matter of research in the past decades, the optimization of the robustness of construction schedules has received little attention. In this paper, the effects of uncertainties inherent to the input data of construction schedules are discussed. Possibilities are investigated to improve the reliability of construction schedules by considering alternative processes for certain tasks and by identifying the combination of processes generating the most robust schedule with respect to the makespan of a construction project.
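The idea of comparing alternative processes by schedule robustness can be sketched with a small Monte Carlo experiment: task durations are sampled from distributions instead of being fixed, and a robustness measure of the resulting makespan, here its 90% quantile, decides between the alternatives. All distributions and numbers are illustrative toy assumptions, not data from the paper.

```python
import numpy as np

rng = np.random.default_rng(3)
n_sim = 5000

# serial toy project: two fixed tasks plus one task with two alternatives
fixed = rng.normal(10.0, 1.0, n_sim) + rng.normal(8.0, 1.0, n_sim)
alt_a = rng.normal(5.0, 3.0, n_sim)    # faster on average, highly uncertain
alt_b = rng.normal(6.0, 0.5, n_sim)    # slower on average, very predictable

makespan_a = fixed + alt_a
makespan_b = fixed + alt_b

# robustness measure: 90%-quantile of the simulated makespan
q90_a = float(np.quantile(makespan_a, 0.9))
q90_b = float(np.quantile(makespan_b, 0.9))
robust_choice = "A" if q90_a < q90_b else "B"
```

Here alternative A wins on the expected makespan, but B wins on the quantile criterion: a deterministic schedule based on mean durations would pick the less reliable process.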
Most of the existing seismic resistant design codes are based on response spectrum theory. The influence of inelastic deformations can be evaluated by considering an inelastic type of resisting force; the inelastic spectrum is then considerably different from the elastic one. Also, the influence of stiffness degradation and strength deterioration can be accounted for by including more precise material models. Some recent papers discuss the corresponding changes in response spectra due to the P-Δ effect. The experience accumulated from recent earthquakes indicates that structural pounding may considerably influence the response of structures and should be taken into account in design procedures. The most convenient way to do that is to predict the influence of the pounding on the response spectra for accelerations, velocities and displacements. Generally speaking, contact problems such as pounding are characterized by a large extent of nonlinearity and slow convergence of the computational procedures. Thus, obtaining spectra in which the contact problem is accounted for seems very attractive from an engineering point of view, because they could easily be implemented into design procedures. However, it is worth noting that there is no rigorous mathematical proof that the original system can be decomposed into single equations related to single degree of freedom systems. It is the purpose of the paper to study the influence of the pounding on the response spectra and to evaluate the amplification due to the impact. For this purpose, two adjacent SDOF systems are considered that are able to interact during the vibration process. This problem is solved versus the elastic stiffness ratio, which appears to be very important for such an assemblage. The contact between the masses is numerically simulated using opening gap elements as links.
Comparisons between calculated response spectra and linear response spectra are made in order to derive analytical relationships to simply obtain the contribution of pounding. The results are graphically illustrated in response spectra format and the influence of the stiffness ratio is clarified.
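A minimal numerical sketch of the pounding model described above: two adjacent SDOF oscillators separated by a gap, coupled through a one-sided (opening gap) contact spring, integrated with a small explicit time step under a harmonic base acceleration. All parameter values, including the stiffness ratio and the gap width, are illustrative assumptions.

```python
import math

m1 = m2 = 1.0
k1, k2 = 400.0, 100.0          # elastic stiffness ratio k1/k2 = 4
c1, c2 = 1.0, 0.5              # light viscous damping
k_gap, gap = 5000.0, 0.01      # stiff one-sided contact spring, initial gap
dt, n = 1e-4, 100000           # 10 s of simulated response

u1 = u2 = v1 = v2 = 0.0
max_contact_force = 0.0
for i in range(n):
    ag = 2.0 * math.sin(8.0 * i * dt)          # toy base acceleration
    overlap = (u1 - u2) - gap                  # positive only on contact
    f_c = k_gap * overlap if overlap > 0.0 else 0.0   # opening gap element
    a1 = (-c1 * v1 - k1 * u1 - f_c) / m1 - ag
    a2 = (-c2 * v2 - k2 * u2 + f_c) / m2 - ag
    v1 += dt * a1; v2 += dt * a2               # semi-implicit Euler step
    u1 += dt * v1; u2 += dt * v2
    max_contact_force = max(max_contact_force, f_c)
```

Because the two oscillators respond with very different amplitudes to the same base motion, their relative displacement exceeds the gap and impacts occur; repeating the run over a range of oscillator periods would yield one ordinate of a pounding-affected response spectrum per period.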
The management of resources is an essential task in each construction company. Today, ERP systems and e-Business systems are available to assist construction companies to efficiently organise the allocation of their personnel and equipment within the company, but they cannot provide the company with the idle resources for every single task that has to be performed during a construction project. Therefore, companies should have an alternative solution to better exploit expensive resources and compensate their fixed costs, but also have them available at the right time for their own business activities. This paper outlines the approach taken by the EU funded project “e-Sharing” (IST-2001-33325) to support resource management between construction companies. It will describe requirements for the management of construction resources, its core features, and the integration approach. Therefore, we will outline the approach of an integrated resource type model supporting the management and classification of construction equipment, construction tasks and qualification profiles. The development is based on a cross-domain analysis and evaluation of existing models. ...
Technological processes, schedules, parallel algorithms, etc., which are subject to technological constraints and whose execution efficiency is to be increased, can be described by directed graphs (digraphs), on which a corresponding optimization problem (construction of an optimal schedule of the vertices of the digraph) can be solved. The problems investigated in this work have the following general statement. Problem 1: for a given graph G and a given width h, construct a parallel schedule of the vertices of the digraph of minimum length; we denote this problem S(G, h, l). Problem 2: for a given graph G and a given length l, construct a parallel schedule of the vertices of the digraph of minimum width; we denote this problem S(G, l, h). Problem 3: for a given graph G, a given width h, and task durations di, i=1, …, n, construct a parallel schedule of the vertices of the digraph of minimum length; we denote this problem S(G, h, di, l). Problems 1-3 have exponential complexity when h is arbitrary. In this work, a method for solving the problem S(T, h, di, l) is proposed, based on selecting the vertices of greatest weight. An approach to solving the problem S(G, 3, l) is also proposed, where G is a graph satisfying the property S[i] =S [i], i=1, …, l. To obtain an estimate of the schedule width from an available estimate of the schedule length, we propose an iterative algorithm of polynomial complexity, in which at each step the current value of the schedule width is fixed and used to refine the schedule length.
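The greatest-weight heuristic can be sketched as a list-scheduling procedure: at each step the h ready vertices of greatest weight are executed, with the weight of a vertex taken here as its critical-path length to a sink. The toy digraph and the unit durations (di = 1) are simplifying assumptions for illustration.

```python
from collections import defaultdict

def critical_weight(graph):
    """Weight of each vertex: length of its longest path to a sink."""
    memo = {}
    def w(v):
        if v not in memo:
            memo[v] = 1 + max((w(s) for s in graph[v]), default=0)
        return memo[v]
    return {v: w(v) for v in graph}

def schedule(graph, h):
    indeg = defaultdict(int)
    for v in graph:
        for s in graph[v]:
            indeg[s] += 1
    weight = critical_weight(graph)
    ready = [v for v in graph if indeg[v] == 0]
    steps = []
    while ready:
        ready.sort(key=lambda v: -weight[v])    # greatest weight first
        batch, ready = ready[:h], ready[h:]     # at most h vertices per step
        steps.append(batch)
        for v in batch:                         # release the successors
            for s in graph[v]:
                indeg[s] -= 1
                if indeg[s] == 0:
                    ready.append(s)
    return steps

g = {"a": ["c"], "b": ["c"], "c": ["e"], "d": [], "e": []}
plan = schedule(g, 2)       # parallel schedule of width 2
```

The number of steps in the returned plan is the schedule length for the given width h; preferring heavy vertices keeps the critical path moving and here yields the minimum length 3.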
RESEARCH OF DEFORMATION OF MULTILAYERED PLATES ON UNDEFORMABLE BASIS BY UNFLEXURAL SPECIFIED MODEL
(2006)
The stress-strain state (SSS) of multilayered plates on an undeformable foundation is investigated. The computational scheme of the transversely loaded plate is formed by mirroring the plate about the surface of contact with the foundation: the plate of double thickness becomes symmetrically loaded on both sides with respect to its median surface. This allows modeling only the unflexural deformation, which reduces the number of unknowns and the overall order of differentiation of the resolving system of equations. The developed refined continual model takes into account transverse shear and transverse compression deformations in a high iterative approximation. Both rigid contact between the foundation and the plate and frictionless slip on the contact surface are considered. Calculations confirm the efficiency of this approach, yielding solutions that are qualitatively and quantitatively close to three-dimensional solutions.
The availability and reliable command of modern, low-cost information and communication technologies enables even small business units in the construction industry to organize themselves in cross-company networks and to integrate better into the construction value chain. In so-called >virtual organizations< (VO), these companies, which are important for the national economy, can exploit their competitive advantages, such as higher flexibility, quick reaction to customer requests and market proximity, and thus secure and expand their long-term existence in liberal competition on an enlarged EU market. The obstacles that currently exist in the flow of information between construction site and office can be removed by the use of mobile, wirelessly networked devices, such as >smart phones<, and a newly designed infrastructure for knowledge management and context-sensitive information presentation. This paper presents conceptual approaches to an integrated information management in support of VO, which are currently being developed within the BMBF project >IuK-System Bau<.
In this paper we consider three different methods for generating monogenic functions. The first one is related to Fueter's well-known approach to the generation of monogenic quaternion-valued functions by means of holomorphic functions, the second one is based on the solution of hypercomplex differential equations, and the third one is a direct series approach based on the use of special homogeneous polynomials. We illustrate the theory by generating three different exponential functions and discuss some of their properties. Partially supported by the R&D unit Matemática e Aplicações (UIMA) of the University of Aveiro, through the Portuguese Foundation for Science and Technology (FCT), co-financed by the European Community fund FEDER.
We investigate aspects of tram-network section reliability, which forms part of a reliability model of the whole city tram network. Here, one of the main points of interest is the character of the chronological development of the disturbances (namely the differences between the time of departure provided in the schedule and the real time of departure) on subsequent sections during tram line operation. These developments were observed in comprehensive measurements done in Krakow during the rebuilding of one of the main transportation nodes (Rondo Mogilskie). The construction activities caused big disturbances in tram line operation, with effects extending to neighboring sections. In the second part, the stochastic character of the section running time is analyzed in more detail. Sections with only one beginning stop as well as sections with two or three beginning stops located at different streets of an intersection are taken into consideration. The possibility of merging results from sections with two beginning stops into one set is checked with suitable statistical tests which compare the means of the two samples. The section running time may depend on the value of the gap between two following trams and on the value of the deviation from the schedule. This dependence is described by a multiple regression formula. The main measurements were done in the city center of Krakow in two stages: before and after big changes in the tramway infrastructure.
The increasingly required cooperation of various participants from different disciplines and the use of highly specialized applications in heterogeneous system environments underline the importance and necessity of new concepts and possibilities for creating a computer-supported integration level. The goal of such an integration level is to improve cooperation and communication among the participants. Its foundation is the establishment of an efficient and error-free exchange of data and information between the various planners and applications. The basis for the data integration level is a digital building model in the sense of a >virtual building<, which provides all relevant data and information about a building to be planned or an existing one. In realizing a building-model-oriented data integration level and its model management, the definition of the building model, i.e. the specification of the relevant data to be exchanged, proves to be extremely complex. The relation-oriented approach presented here, i.e. the realization of the data and information exchange by means of defined relations and relationships between dynamically modifiable domain models, offers approaches for: * reducing and mastering the complexity of the building model (formation of partial models) * realizing an efficient data exchange (relation management). The relation-oriented approach thus represents an adequate way of modeling a digital building model as the data integration level for the life cycle of a building.
In building reconstruction, planning engineers and architects have always faced the task of capturing the geometry and structure of existing buildings and deriving reconstruction plans and technologies from them. These surveying activities are very extensive and costly. The goal of the approach presented here was therefore to develop a low-cost method that guarantees a contactless measurement of rooms bounded by planes (polyhedral rooms) with an accuracy appropriate to the requirements. Essentially two problem areas are addressed. The first comprises the proof that planar objects (walls) can be reconstructed to scale from monocular photographs with an accuracy sufficient for the intended applications. Instead of the usual photogrammetric route via camera calibration with precisely measured control points, an approach was pursued in which a parameter-dependent measuring figure, projected onto the object by laser spot projectors, together with a priori known image content forms the basis for the true-to-scale reconstruction of the object. The second problem area deals with the composition of plane-bounded rooms from individual planes (walls), which, as the result of the first step, are available projectively rectified, i.e. as orthogonal elevations, but are subject to method-related errors. The goal here is to achieve a gain in accuracy by using a priori knowledge of the room structure together with methods of mathematical optimization.
The theory of regular quaternionic functions of a reduced quaternionic variable is a 3-dimensional generalization of complex analysis. The Moisil-Theodorescu system (MTS) is a regularity condition for such functions depending on the radius vector r = ix+jy+kz seen as a reduced quaternionic variable. The analogues of the main theorems of complex analysis are established for the MTS in quaternionic form: Cauchy's theorem, the Cauchy integral formula, Taylor and Laurent series, approximation theorems and properties of Cauchy-type integrals. The analogues of positive powers (inner spherical monogenics) are investigated: the set of recurrence formulas between the inner spherical monogenics and the explicit formulas are established. Some applications of regular functions in elasticity theory and hydrodynamics are given.
For the design of formwork and shop drawings, which generally consist of long narrow rectangles, a design aid is presented in which the size of the rectangles is specified as usual, but their position is given by topological statements. In the program, the latter form constraints, with a distinction made between contact and alignment constraints. These statements position a new rectangle relative to one already placed. For example, the statement that a column stands above a foundation and loads it centrically uniquely determines the position of the column, given the position of the foundation and the dimensions of both rectangles. Formulating the layout by means of constraints has the advantage that the constraints remain valid when dimensions change. The relative positioning input presented here is an extension of the ortho mode found in every construction CAD program.
Due to the increasing number of wind energy converters, the accurate assessment of the lifespan of their structural parts and of the entire converter system is becoming more and more paramount. Lifespan-oriented design, inspections and remedial maintenance are challenging because of the complex dynamic behavior of these structures. Wind energy converters are subjected to stochastic turbulent wind loading, causing a corresponding stochastic structural response and vibrations associated with an extreme number of stress cycles (up to 10^9, according to the rotation of the blades). Currently, wind energy converters are designed for a service life of about 20 years. However, this estimation is more or less made by rule of thumb and not backed by profound scientific analyses or accurate simulations. By contrast, modern structural health monitoring systems allow an improved identification of deteriorations and can thereby drastically advance the lifespan assessment of wind energy converters. In particular, monitoring systems based on artificial intelligence techniques represent a promising approach towards cost-efficient and reliable real-time monitoring. Therefore, an innovative real-time structural health monitoring concept based on software agents is introduced in this contribution. Recently, this concept has also been turned into a real-world monitoring system, developed in a DFG joint research project at the authors' institute at Ruhr-University Bochum. In this paper, primarily the agent-based development, implementation and application of the monitoring system are addressed, focusing on the real-time monitoring tasks in due detail.
Ideally, multiple computational building evaluation routines (particularly simulation tools) should be coupled in real time to the representational design model to provide timely performance feedback to the system user. In this paper we demonstrate how this can be achieved effectively and conveniently via homology-based mapping. We consider two models as homologous if they entail isomorphic topological information. If the general design representation (i.e., a shared object model) is generated in a manner so as to include both the topological building information and pointers to the semantic information base, it can be used to directly derive the domain representations (>enriched< object models with detailed configurational information and filtered semantic data) needed for evaluation purposes. As a proof of concept, we demonstrate a computational design environment that dynamically links an object-oriented space-based design model with structurally homologous object models of various simulation routines.
A distributed geotechnical remote analysis of data system (Distributed G-RAD) can benefit both owners and contractors in providing better quality control and assurance on geotechnical projects. The Distributed G-RAD approach involves efficient data acquisition using PDAs with GPS capability, radio frequency identification (RFID) tags for labeling soil samples, laser scanning for measuring lift thickness and volumes of stockpiles and borrow pits. Spatial data storage is provided using a geographic information system (GIS). Portions of this system are already developed while other parts are still being considered. This paper also describes how RFID and laser scanning technologies can be used in the larger Distributed G-RAD system.
For the management or reorganisation of existing buildings, data concerning dimensions and construction are necessary. Often these data are given exclusively by paper-based drawings and no digital data such as a computer based product model or even a CAD-model are available. In order to perform mass calculation, damage mapping or a recalculation of the structure these drawings of the building under consideration have to be analysed manually by the engineer. This is a very time-consuming job. In order to close this gap between drawings of an existing building and a digital product model an approach is presented in this paper to digitise a drawing, to build up geometric and topologic models and to recognise construction parts of the building. Finally all recognised parts are transformed into a three-dimensional geometric model which provides all necessary geometric information for the product model. During this import process the semantics of a ground floor plan has to be converted into a 3D-model.
After more than a hundred years of arguments for and against quaternions, and of exciting odysseys with new insights as well as disillusionment about their usefulness, the mathematical world has seen in the last 40 years a burst in the application of quaternions and their generalizations in almost all disciplines dealing with problems in more than two dimensions. Our aim is to sketch some ideas, necessarily in a very concise and far from exhaustive manner, which contributed to this recent development. With the help of some historical reminiscences we first try to draw attention to quaternions as a special case of Clifford algebras, which play the role of a unifying language in the Babylon of several different mathematical languages. Secondly, we refer to the use of quaternions as a tool for modelling problems and at the same time for simplifying the algebraic calculus in almost all applied sciences. Finally, we intend to show that quaternions in combination with classical and modern analytic methods are a powerful tool for solving concrete problems, thereby giving origin to the development of quaternionic analysis and, more generally, of Clifford analysis.
The process of analysis and design in structural engineering requires the consideration of different partial models, for example loading, structural materials, structural elements, and analysis types. The various partial models are combined by coupling several of their components. Due to the large number of available partial models describing similar phenomena, many different model combinations are possible to simulate the same aspects of a structure. The challenging task of the engineer is to select a model combination that ensures a sufficient, reliable prognosis. To achieve this reliable prognosis of the overall structural behavior, a high individual quality of the partial models and an adequate coupling between them are required. Several methodologies have been proposed to evaluate the quality of partial models for their intended application, but a detailed study of the coupling quality is still lacking. This paper proposes a new approach to assess the coupling quality of partial models in a quantitative manner. The approach is based on the consistency of the coupled data and applies to uni- and bidirectionally coupled partial models. Furthermore, the influence of the coupling quality on the output quantities of the partial models is considered. The functionality of the algorithm and the effect of the coupling quality are demonstrated using an example of coupled partial models in structural engineering.
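The consistency idea behind such an assessment can be illustrated with a minimal sketch (the metric and all names are our own illustration, not the authors' actual algorithm): the quality of a coupling is scored by the relative deviation between the data one partial model sends across the interface and the data its partner returns for the same interface quantities.

```python
def coupling_quality(sent, received):
    """Consistency-based coupling score in [0, 1]: 1.0 means the data
    exchanged between two partial models agree exactly; the score drops
    with the relative L2 deviation between the two series."""
    num = sum((s - r) ** 2 for s, r in zip(sent, received)) ** 0.5
    den = sum(s ** 2 for s in sent) ** 0.5
    if den == 0.0:
        return 1.0 if num == 0.0 else 0.0
    return max(0.0, 1.0 - num / den)

def bidirectional_quality(a_to_b, b_echo, b_to_a, a_echo):
    """For bidirectionally coupled models, take the minimum of the two
    directional scores so the weaker direction dominates the assessment."""
    return min(coupling_quality(a_to_b, b_echo),
               coupling_quality(b_to_a, a_echo))
```

A load model sending forces to a structural model, and receiving deformations back, would thus be scored in both directions and penalised if either exchange is inconsistent.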
To achieve an appropriate quality of public transport, the relationship between service quality and cost must be analysed. Under conditions of growing competition from private cars and other carriers, the measures that ensure the greatest effectiveness should be adopted. There are many ways to improve this quality, e.g. increasing vehicle frequency, raising speed, punctuality and regularity, introducing low-floor vehicles, and much more. In this paper I attempt to analyse the aspects connected with frequency, i.e. increasing the frequency and maintaining constant headways of the bus lines. The paper comprises models and examples. Combined with studies on the willingness to pay for quality improvements, decisions can then be made that define different standards of travel.
Quality is one of the most important properties of a product. Providing optimal quality can reduce costs for rework, scrap, recalls or even legal actions while satisfying customers' demand for reliability. The aim is to achieve "built-in" quality within the product development process (PDP). The common approach is robust design optimization (RDO), which uses stochastic quantities as constraints and/or objectives to obtain a robust and reliable optimal design. In classical approaches the effort required for the stochastic analysis multiplies with the complexity of the optimization algorithm. The suggested approach shows that this effort can be reduced enormously by reusing previously obtained data: the support point set of an underlying metamodel is filled iteratively in regions of interest during the ongoing optimization, and only where necessary. A simple example shows that this is possible without significant loss of accuracy.
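The reuse of previously obtained data can be sketched as follows (a toy illustration with an assumed 1D objective and an inverse-distance metamodel, not the method actually used in the paper): the expensive model is called only where the support point set does not yet cover the region of interest, and the robustness measure, mean plus standard deviation, is evaluated on the surrogate.

```python
import math
import random

def true_model(x):
    """Placeholder for the expensive simulation (assumed objective)."""
    return (x - 2.0) ** 2

support = {}  # support point set of the metamodel, filled iteratively

def surrogate(x, radius=0.25):
    """Inverse-distance metamodel; enriches the support point set only
    where no previously obtained data point is nearby."""
    nearest = min(support, key=lambda s: abs(s - x), default=None)
    if nearest is None or abs(nearest - x) > radius:
        support[x] = true_model(x)   # region of interest not yet covered
        return support[x]
    w = [(1.0 / (abs(s - x) + 1e-9), y) for s, y in support.items()]
    return sum(wi * yi for wi, yi in w) / sum(wi for wi, _ in w)

def robust_objective(x, n=200, sigma=0.1, seed=0):
    """Robustness measure mean + std of the (surrogate) response under
    scatter of the design variable."""
    rng = random.Random(seed)
    ys = [surrogate(x + rng.gauss(0.0, sigma)) for _ in range(n)]
    mean = sum(ys) / n
    std = math.sqrt(sum((y - mean) ** 2 for y in ys) / n)
    return mean + std

# coarse robust design optimization over a candidate grid
candidates = [i * 0.5 for i in range(9)]  # 0.0 ... 4.0
best = min(candidates, key=robust_objective)
```

Only a small fraction of the stochastic samples trigger a call of the expensive model; the remainder are answered from previously obtained data.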
Over the last decade, the technology of constructing buildings has developed dramatically, especially with the huge growth of CAD tools that help in modeling buildings, bridges, roads and other construction objects. Often, quality control and dimensional accuracy in the factory or on the construction site are based on manual measurements of discrete points. These measured points of the realized object, or of a part of it, are compared with the points of the corresponding CAD model to see whether and where the construction element fits the respective CAD model. This process is complicated and difficult even with modern measuring technology, owing to the complicated shape of the components, the large amount of manually acquired measurement data and the high cost of manually processing the measured values. By using a modern 3D scanner, however, one obtains information on the whole constructed object and can make a complete comparison against the CAD model, giving an idea of the quality of the object as a whole. In this paper, we present a case study of controlling the quality of measurement during the construction phase of a steel bridge using 3D point cloud technology. Preliminary results show that early detection of mismatches between the real element and the CAD model could save a lot of time, effort and, obviously, expense.
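The core of such a comparison can be sketched in a few lines (a simplified illustration assuming the CAD model is given as a reference point set; real workflows compare scans against CAD surfaces and use spatial indexing): each scanned point is scored by its distance to the nearest reference point and flagged when a tolerance is exceeded.

```python
import math

def deviations(scan, cad):
    """Distance from each scanned point to its nearest CAD reference
    point (brute force; a k-d tree would be used for large clouds)."""
    return [min(math.dist(p, q) for q in cad) for p in scan]

def out_of_tolerance(scan, cad, tol):
    """Scanned points whose deviation from the CAD model exceeds tol."""
    return [(p, d) for p, d in zip(scan, deviations(scan, cad)) if d > tol]
```

Running this over a registered scan of a bridge element would immediately highlight the regions where fabrication deviates from the model.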
Known as a sophisticated phenomenon in civil engineering, soil-structure interaction has been under deep investigation in the field of geotechnics. At the same time, the advent of powerful computers has led to the development of numerous numerical methods dealing with this phenomenon, resulting in a wide variety of approaches to simulating the behavior of the soil stratum. This survey studies two common approaches to modeling the soil's behavior in a system consisting of a structure with two degrees of freedom, representing a two-storey steel frame whose column rests on a pile embedded in sand at laboratory scale. The effect of the soil simulation technique on the dynamic behavior of the structure is of major interest in the study. The modeling approaches employed are the so-called holistic method and the substitution of the soil with corresponding impedance functions.
One of the basic types of strength calculations is the calculation of the limit equilibrium of structures. This report describes a new method for solving the limit equilibrium problem. The rigid-plastic system is substituted with an "equivalent" elastic system with specially constructed rigidities, which is why the method is called the pseudorigidity method (PRM). An iterative algorithm was developed for finding the pseudorigidities and implemented in a special software procedure. Combining this procedure with any elastic calculation program (base program) yields a program that solves rigid-plastic problems. It is proved that the iterations converge to the solution of the limit equilibrium problem. Test solutions show that the pseudorigidity method is universal. It allows the following: to solve limit equilibrium problems for various models (arch, beam, frame, plate, deep beam, shell, solid); to take into account both linearized and quadratic yield conditions; to solve problems for various kinds of loads (concentrated, distributed, given by a generalized vector); and to account for different yield criteria in different sections, etc. The iterative PRM process converges quickly, and the accuracy of the PRM is very high even for a coarse finite element discretization. The author has used this method to design systems protecting equipment of nuclear power stations, pipelines and transported cargo against extreme loads.
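The fixed-point character of such an iteration can be illustrated on the simplest statically indeterminate case, several rigid-plastic bars in parallel with equal elongation (our own toy example; the paper's procedure couples the iteration with a general elastic base program): each cycle solves the elastic substitute system and rescales the rigidities by the ratio of yield force to computed member force.

```python
def limit_multiplier(k, fy, load, iters=20):
    """Pseudorigidity-style iteration for parallel rigid-plastic bars
    sharing a load through a rigid element (equal elongation u for all
    bars). Returns the limit load multiplier; for this system the exact
    value is sum(fy) / load."""
    k = list(k)
    lam = 0.0
    for _ in range(iters):
        u = load / sum(k)               # elastic substitute solution
        forces = [ki * u for ki in k]
        lam = min(f / abs(s) for f, s in zip(fy, forces))
        # new pseudorigidities: drive every bar towards its yield force
        k = [ki * f / abs(s) for ki, f, s in zip(k, fy, forces)]
    return lam
```

For this simple system the iteration converges after two cycles to the exact rigid-plastic limit multiplier, independently of the initial elastic rigidities.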
Prädiktive Wärmeflussregelung solaroptimierter Wohngebäude - Thermische Simulation komplexer Gebäude
(2003)
Modern residential buildings today, and especially those of the future, will to a large extent supply themselves with heat. Developments in facade construction play a central role here. In addition to increased solar heat gain, these facades offer heat storage and delayed heat release, properties that redefine the building's behaviour and make it more complex. In the context of developing heat flow control for solar-thermally heated residential buildings with novel facades, this contribution addresses the modelling of modern facade constructions with respect to transparent thermal insulation and phase change materials in the wall assembly, combined with newly developed shading devices based on switchable layers at the window, which are of particular interest for control engineering. The contribution describes the design of a control concept for thermal indoor climate control. These investigations are based on complex simulation models; real measurement data were available for the model development of the novel facade elements.
Process optimisation in the logistics chain requires interdisciplinary, cross-company project work. In the age of globalised markets and the continuous improvement of the competitiveness of medium-sized enterprises, intra- and inter-company resource and route planning (optimisation) via networked information is just as necessary as the preparation of information and the analysis of business processes. Process optimisation comprises the tasks of analysing, designing, planning and controlling processes. Supply chain management (SCM) is the overarching process optimisation in the logistics chain, i.e. the logical extension of production planning and control (PPS) to supplier relationships. The structural model of the logistics chain comprises the processes of product creation, development, order acquisition (sales, marketing), production planning, procurement, production, distribution and disposal. Through supply chain management these processes are designed and optimised towards customers, suppliers and service providers according to company-specific objectives (optimisation of the value chain). Information systems provide the information supply in the logistics chain.
Against the background of increasingly strict environmental legislation and the recognition that construction waste is fundamentally suitable for closed material cycles, the deconstruction of buildings has gained in importance in recent years. Owing to the often tight time and cost constraints for deconstruction, the limited availability of personnel and equipment, largely one-of-a-kind production and changing sites, project-oriented planning of on-site measures is of great importance. This contribution presents approaches to modelling and solving the resulting problems of planning and optimising (de)construction processes using project planning models and methods. Besides economic aspects, environmental and technical questions related to the planning of deconstruction projects are addressed. Applying the planning approaches to real buildings shows that combining building deconstruction with processing technology can improve the quality of recycled building materials. To implement the necessary measures, deconstruction schedules are computed taking into account the specific waste management conditions of the respective planning region, building- and site-specific particularities, and technical and capacity restrictions. Model computations for different planning regions demonstrate that, under certain conditions, the dismantling of buildings can already be realised more economically than conventional demolition.
The contribution concludes with an outlook on the realisation of complex construction processes under tight time constraints and limited space using production-synchronised resource planning, and on the consideration of uncertainties in the planning and execution of deconstruction projects.
The presented work focuses on collaboration experiences gathered in complex design and engineering projects using the learning platform POLE-Europe. Within the POLE environment, student teams from different universities, disciplines and cultural backgrounds are assigned to real-world projects with clearly defined design tasks, usually to be accomplished within one semester while working in a virtual environment for most of the time. The concept of POLE and the information and collaboration technology are described.
This paper presents a specific modeling technique focused on preparing planning processes in civil engineering. Such planning processes are characterized by peculiarities that require the sequence of planning tasks to be determined anew for each planning project: neither the use of optimized partial processes nor the use of less detailed, optimized processes guarantees an optimal overall planning process. The modeling technique takes these peculiarities into account. In a first step, it focuses on the logic of the planning process, which is determined by algorithms based on graph theory. This approach ensures consistency and logical correctness of the description of a planning process from the very beginning of its preparation phase. Sets of data (the products of engineers, such as technical drawings, technical models, reports, or specifications) form the core of the presented modeling technique. The production of these sets of data requires time and money, which is expressed by a specific weighting of each set of data. The introduction of these weights allows efficient progress measurement and controlling of a planning project. For this purpose, a link between the modeling technique used in the preparation phase and the execution phase is necessary so that target and actual values are available for controlling purposes; the present paper covers the description of this link. An example illustrates the use of the modeling technique for planning processes in civil engineering projects.
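A graph-theoretic determination of the planning logic, together with the weighted progress measurement, can be sketched as follows (hypothetical data-set names; the actual modeling technique is richer than a plain topological sort):

```python
from collections import deque

def planning_order(datasets, deps):
    """Topological order (Kahn's algorithm) of data-set production;
    raises ValueError if the planning logic contains a cycle."""
    indeg = {d: 0 for d in datasets}
    succ = {d: [] for d in datasets}
    for pre, post in deps:
        succ[pre].append(post)
        indeg[post] += 1
    queue = deque(d for d in datasets if indeg[d] == 0)
    order = []
    while queue:
        d = queue.popleft()
        order.append(d)
        for nxt in succ[d]:
            indeg[nxt] -= 1
            if indeg[nxt] == 0:
                queue.append(nxt)
    if len(order) != len(datasets):
        raise ValueError("inconsistent planning logic (cycle)")
    return order

def progress(weights, completed):
    """Weighted progress: completed share of the total planning effort."""
    return sum(weights[d] for d in completed) / sum(weights.values())
```

The same weights used for sequencing then serve as target values against which the actual progress of the planning project is controlled.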
A building project, with its many different players, requires an open and commonly accepted standard for product model description. Product-model-based design tools support easy comparison of design alternatives and optimisation of the technical quality of a design solution. This supports the client's decision-making and the comparison of design targets throughout the whole building project, and enables these tasks to meet both schedule and cost requirements. Olof Granlund uses product models and interoperable software as the main tool in projects. The use and the realised benefits are illustrated by examples from three different real projects: a university building, where product models were used by the whole design team from the very early phases; an office building for a research organisation, where product models were used in a so-called self-reporting building system; and the headquarters of an international company, where product models were widely used for building performance analysis and visualisation in the design phase as well as for configuring the facilities management system for the operational phase.
Processing technical and environmental data on building materials, components, and systems has become more important during the last few years. Increased sensitivity towards environmental and energy problems has led to the demand for simulation and evaluation of the long-term behavior of buildings. The results of such simulations are expected to enable architects and engineers to develop a broader, interdisciplinary understanding of the impact of their products (buildings) on the environment. However, conducting such evaluations is currently hampered by the lack of comprehensive, up-to-date, and ecologically relevant data on building materials, components, and systems. To address this problem, this paper proposes an approach to dealing with absent or uncertain attributes of building materials, components, and systems. In the past, various information systems have been developed to provide data on a limited set of building materials, including precise values for some of their characteristics, such as availability, manufacturers, costs, etc. These traditional information systems have difficulty dealing with uncertain, incomplete and sparse data. However, uncertainty and incompleteness characterize the nature of most of the available, environmentally related characteristics of materials, components, and systems. In this paper, a fuzzy-logic-based augmentation of traditional information systems is proposed for the management, utilization and manipulation of incomplete and uncertain data.
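The fuzzy-logic augmentation can be illustrated with a minimal sketch (triangular membership functions and invented material records; a real system would use richer fuzzy sets and inference): uncertain attribute values are stored as membership functions and queried by degree of membership instead of exact equality.

```python
def tri(a, b, c):
    """Triangular membership function for an uncertain attribute:
    zero outside [a, c], peak 1.0 at b."""
    def mu(x):
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x <= b else (c - x) / (c - b)
    return mu

# hypothetical records: fuzzy thermal conductivity [W/(m K)]
materials = {
    "recycled brick": tri(0.3, 0.5, 0.8),
    "aerated concrete": tri(0.08, 0.12, 0.2),
}

def query(target, cutoff=0.5):
    """Materials whose membership at the target value exceeds the
    cutoff, i.e. a tolerant query over uncertain data."""
    return [m for m, mu in materials.items() if mu(target) > cutoff]
```

A query for a conductivity of 0.12 thus matches aerated concrete to degree 1.0 even though no crisp value 0.12 is stored anywhere.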
The concept is presented of the sensitivity analysis of the limit state of a structure with respect to selected basic variables. The sensitivity is presented in the form of the probability distribution of the limit state of the structure. The analysis is performed by a problem-oriented Monte Carlo simulation procedure, based on a problem-specific definition of the elementary event as a structural limit state. Thus the sample space consists of limit states of the structure. A one-dimensional random multiplier defined on this sample space is introduced; it refers to the dominant basic variable (or group of variables) of the problem. The numerical procedure results in a set of random numbers whose normalized relative histogram is an estimator of the PDF of the limit state of the structure. Estimators of reliability, or of the probability of failure, are statistical characteristics of this histogram. The procedure is illustrated by an example of the sensitivity analysis of the serviceability limit state of a monumental structure: the colonnade of the Licheń Basilica in central Poland. The limit state of the structure is examined with reference to the horizontal deflection of the upper deck, with wind actions taken as the dominant variables. It is assumed that the wind load intensities acting on the lower and upper storeys of the colonnade are identically distributed but correlated random variables; three correlation variants are considered and the corresponding limit state histograms analysed. The paper ends with conclusions on the method and some general remarks on fully probabilistic design.
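The simulation idea, sampling the limit state directly and estimating its distribution from the normalized histogram, can be sketched as follows (all numbers are invented for illustration: a linear influence model for the deck deflection, a deflection limit of 18 mm, and a correlation coefficient of 0.5 between the two storey wind loads):

```python
import math
import random

def sample_limit_states(n=20000, rho=0.5, seed=1):
    """Monte Carlo sample of g = u_lim - u(w1, w2) for correlated,
    identically distributed wind loads on the two storeys."""
    rng = random.Random(seed)
    g = []
    for _ in range(n):
        z1, z2 = rng.gauss(0.0, 1.0), rng.gauss(0.0, 1.0)
        w1 = 0.5 + 0.15 * z1                                    # kN/m^2
        w2 = 0.5 + 0.15 * (rho * z1 + math.sqrt(1 - rho ** 2) * z2)
        u = 10.0 * w1 + 14.0 * w2   # assumed influence coefficients, mm
        g.append(18.0 - u)          # assumed serviceability limit 18 mm
    return g

g = sample_limit_states()
# the relative frequency of g < 0 estimates the probability of failure
p_f = sum(1 for gi in g if gi < 0.0) / len(g)
```

Repeating the sampling for the three correlation variants of rho yields the family of limit state histograms discussed in the paper.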
Preparation and provision of building information for planning within existing built contexts
(2004)
A prerequisite for planning within existing built contexts is precise information regarding the building substance, its construction and materials, possible damages and any modifications and additions that may have occurred during its lifetime. Using the information collected in a building survey the user should be able to “explore” the building in virtual form, as well as to assess the information contained with regard to a specific planning aspect. The functionality provided by an information module should cover several levels of information provision ranging from ‘simple retrieval’ of relevant information to the analysis and assessment of stored information with regard to particular question sets. Through the provision of basic functionality at an elementary level and the ability to extend this using plug-ins, the system concept of an open extendable system is upheld. Using this modular approach, different levels of information provision can be provided as required during the planning process.
The paper proposes a new method for general 3D measurement and 3D point reconstruction. With its features, the method explicitly aims at practical applications: low technical expense and minimal user interaction, a clear separation of the problem into steps that are solved by simple mathematical methods (direct, stable and optimal in the least-squares sense), and scalability. The method expects as inputs the internal and radial distortion parameters of the camera(s) used, and a plane quadrangle with known geometry within the scene. First, for each single picture the 3D position of the reference quadrangle (with respect to each camera coordinate frame) is calculated. These 3D reconstructions of the reference quadrangle are then used to obtain the relative external parameters of each camera with respect to the first one. With known external parameters, triangulation is finally possible. The differences from other known procedures are outlined, with attention to the stable mathematical methods (no use of nonlinear optimization) and the low user interaction combined with good results.
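The final triangulation step admits a direct, closed-form solution without nonlinear optimization, in line with the method's philosophy (a generic midpoint triangulation sketch, not the authors' exact formulation): with known external parameters, each image point defines a viewing ray, and the 3D point is reconstructed as the midpoint of the shortest segment between the two rays.

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def triangulate(c1, d1, c2, d2):
    """Midpoint triangulation: closest point between the rays
    p = c1 + t1*d1 and p = c2 + t2*d2 (least-squares, closed form)."""
    w0 = [a - b for a, b in zip(c1, c2)]
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    d, e = dot(d1, w0), dot(d2, w0)
    den = a * c - b * b          # zero only for parallel rays
    t1 = (b * e - c * d) / den
    t2 = (a * e - b * d) / den
    p1 = [ci + t1 * di for ci, di in zip(c1, d1)]
    p2 = [ci + t2 * di for ci, di in zip(c2, d2)]
    return [(u + v) / 2 for u, v in zip(p1, p2)]
```

When the rays intersect exactly (noise-free correspondences), the midpoint coincides with the true scene point.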
Plausibilität im Planungsprozess - Digitale Planungshilfen für die Bebaubarkeit von Grundstücken
(2003)
The digital support of planning processes is a current research focus of the chair Informatik in der Architektur (InfAR) and the junior professorship Architekturinformatik at the Faculty of Architecture of the Bauhaus-Universität Weimar. Embedded in the DFG collaborative research centre SFB 524 "Werkzeuge und Konstruktionen für die Revitalisierung von Bauwerken", concepts and prototypes for domain-oriented planning support are being developed. The multitude of factors that can influence the planning process, and their interdependencies, are only insufficiently prepared and managed by today's planning systems. These factors call for planning tools whose task is the acquisition, processing, integration and management of information as well as the visualisation of complex information relationships. The development of such systems is technically feasible; the difficulty lies in acquiring and structuring the information relevant to the planning process and in preparing and integrating it into a digital planning environment. The aim of the research project is to develop foundations for digital tools that lead to plausible solutions in the planning process and thus to greater planning certainty for the clients and contractors involved. The goal is to develop program modules that substantively support the planner in finding solutions to a technical question and that guarantee and plausibly demonstrate the traceability and correctness of a planning decision. The modules are intended to catalyse decision-making. Future construction projects will to a large extent concern existing buildings, a fact that demands planning measures for which tools leading to plausible and reliable planning decisions are still entirely lacking.
The development of such tools is the goal of this research. The contribution presents prototypical software modules that address the potential developability of a building plot. The modules process rules taken from the relevant standards and regulations that must be observed when developing a planning solution.
The approach discussed here is part of research into an overall concept for digital instruments which support the entire planning process and help in enabling planning decisions to be based upon clear reasoning and plausible arguments. Such specialist systems must take into account currently available technology, such as networked working patterns, object-orientation, building and product models as well as the working method of the planner. The paper describes a plausibility instrument for the formulation of colour scheme proposals for building interiors and elevations. With the help of intuitively usable light simulations, colour, material and spatial concepts can be assessed realistically. The software prototype “Coloured Architecture” is conceived as a professional extension to conventional design tools for the modelling of buildings. As such it can be used by the architect in the earliest design phases of the planning process as well as for colour implementation on location.
Complex gridshell structures used in architecturally ambitious constructions remain as appealing as ever in the public realm. This paper describes the theory and approach behind the software realisation of a tool that helps in finding the affine self-weight geometry of gridshell structures. The software tool DOMEdesign supports the formal design process of lattice and grid shell structures based upon the laws of physics. Computer-aided simulation of suspension models is used to derive structurally favourable forms for domes and arches subject to compression, based upon the input of simple architectonic parameters. Irregular plans, three-dimensional topography, a choice of different kinds of shell lattice structures and the desired height of the dome are examples of design parameters that can be used to modify the architectural design. The provision of data export formats for structural dimensioning and visualisation software enables engineers and planners to use the data in further planning and to communicate the design to the client.
Discrete work processes can be formally described with Petri nets. Petri nets are based on graph theory: the elements of two node sets, called places and transitions, are connected by directed arcs. Places represent conditions or states, transitions represent events or activities. Petri nets can model not only a static predecessor-successor structure but also events, alternatives and concurrency. This contribution shows how a construction schedule given as an activity-on-node network can be mapped onto a condition/event net in very few steps. All necessary sub-steps, such as forming subnets or coarsening and refining nodes, rest on a mathematically sound foundation. In contrast to other formulations of construction processes, the theory of Petri nets is a general theory and can be applied in many fields. Using such a mathematical abstraction allows the reuse of previously developed solution approaches, so the experience gained can also be applied when modelling other work processes.
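The firing rule of a condition/event net can be captured in a few lines (a generic sketch with an invented concreting example, not the paper's mapping algorithm): a transition is enabled when all its pre-conditions hold and none of its post-conditions does; firing swaps the two.

```python
def enabled(marking, pre, post, t):
    """Condition/event net rule: all pre-places marked, no post-place
    marked (markings are boolean in a condition/event net)."""
    return all(marking[p] for p in pre[t]) and not any(marking[p] for p in post[t])

def fire(marking, pre, post, t):
    """Fire transition t, returning the successor marking."""
    if not enabled(marking, pre, post, t):
        raise ValueError("transition not enabled: " + t)
    m = dict(marking)
    for p in pre[t]:
        m[p] = False
    for p in post[t]:
        m[p] = True
    return m

# invented construction sequence: formwork -> pour -> strip
pre = {"pour": ["formwork ready"], "strip": ["concrete cured"]}
post = {"pour": ["concrete cured"], "strip": ["element finished"]}
m0 = {"formwork ready": True, "concrete cured": False, "element finished": False}
```

An activity-on-node schedule maps naturally onto such a net: each activity becomes a transition, each precedence condition a place.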
In connection with the revitalisation of large-panel buildings, the assessment or reassessment of existing bracing systems is of great importance. Bracing is generally provided by room-wide, storey-high and mostly unreinforced concrete elements that are only nominally connected to one another. In this study, an existing computational model for the physically nonlinear analysis of bracing systems of multi-storey buildings under horizontal load is extended so that unreinforced horizontal joints can be taken into account. The analysis of the bracing systems is realised on the basis of mathematical optimisation methods, with the bracing walls treated as open, thin-walled, slender bars coupled by floor slabs assumed rigid in their plane (diaphragms). The contribution shows that accounting for the physically nonlinear load-bearing behaviour and the associated redistribution of internal forces reveals load-bearing reserves that can be credited to the stability of both existing and revitalised buildings.
Physikalisch nichtlineare Analyse von Aussteifungssystemen unter Einbeziehung von Lastfolgeeffekten
(2003)
The physically nonlinear analysis of reinforced concrete structures considering shakedown behaviour (adaptive load-bearing behaviour) with methods of mathematical optimisation has been the subject of intensive research at the chair Massivbau I of the Bauhaus-Universität Weimar for years. In this contribution the models and algorithms developed there are applied, by way of example, to the investigation of bracing systems of large-panel buildings. Since the bracing structure of these buildings consists of assembled large-format precast concrete elements, the overall load-bearing behaviour is governed by the behaviour of the joints. The physical nonlinearity is characterised by the cracking of the unreinforced horizontal joints and the flexible bond in the vertical joints, and is accounted for accordingly in the computational model. Example calculations show that significant stress redistributions occur in the described bracing systems as a result of the nonlinear joint behaviour. Furthermore, load-sequence effects can be demonstrated computationally. In contrast to seismically loaded systems, which are subjected to repeated extreme loading within a very short time, the probability of occurrence of design-relevant wind loads is low.
Summer overheating in buildings is a common problem, especially in office buildings with large glazed facades, high internal loads and low thermal mass. Phase change materials (PCM) that undergo a phase transition in the temperature range of thermal comfort can add thermal mass without increasing the structural load of the building. The investigated PCM were micro-encapsulated and mixed into gypsum plaster. The experiments showed a reduction of indoor temperature of up to 4 K when using a 3 cm layer of PCM plaster with micro-encapsulated paraffin. The measurement results were used to validate a numerical model based on a temperature-dependent heat capacity function. Thermal building simulation showed that a 3 cm layer of PCM plaster can help fulfil German regulations concerning summer heat protection of buildings for most office rooms.
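A temperature-dependent heat capacity model of this kind can be sketched as an effective-capacity function (the parameter values for melting point, latent heat and transition width are assumed for illustration, not taken from the paper): the latent heat of the paraffin is smeared over a narrow Gaussian around the melting temperature, so integrating c_eff over the transition recovers sensible plus latent heat.

```python
import math

def c_eff(T, c0=1.0, L=110.0, Tm=24.0, dT=1.5):
    """Effective heat capacity [kJ/(kg K)]: sensible part c0 plus the
    latent heat L [kJ/kg] spread as a Gaussian around the melting
    temperature Tm [degC] with transition width dT."""
    peak = L / (dT * math.sqrt(2.0 * math.pi))
    return c0 + peak * math.exp(-((T - Tm) ** 2) / (2.0 * dT ** 2))

def enthalpy(T0, T1, n=2000):
    """Trapezoidal integration of c_eff between two temperatures;
    over the full transition this returns sensible + latent heat."""
    step = (T1 - T0) / n
    s = 0.5 * (c_eff(T0) + c_eff(T1))
    s += sum(c_eff(T0 + i * step) for i in range(1, n))
    return s * step
```

In a building simulation, c_eff(T) simply replaces the constant heat capacity in the wall model, which is what makes the approach easy to validate against measurements.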
One of the most promising and recent advances in computer-based planning is the transition from classical geometric modeling to building information modeling (BIM). Building information models support the representation, storage, and exchange of various information relevant to construction planning. This information can be used for describing, e.g., geometric/physical properties or costs of a building, for creating construction schedules, or for representing other characteristics of construction projects. Based on this information, plans and specifications as well as reports and presentations of a planned building can be created automatically. A fundamental principle of BIM is object parameterization, which allows specifying geometrical, numerical, algebraic and associative dependencies between objects contained in a building information model. In this paper, existing challenges of parametric modeling using the Industry Foundation Classes (IFC) as a federated model for integrated planning are shown, and open research questions are discussed.
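The principle of object parameterization, i.e. associative dependencies between model objects, can be sketched minimally (a generic illustration with invented names; IFC expresses such constraints through far richer mechanisms):

```python
class Param:
    """Minimal associative parameter: its value is either set directly
    or derived from other parameters via a rule (re-evaluated on read)."""
    def __init__(self, value=None, rule=None, inputs=()):
        self.value, self.rule, self.inputs = value, rule, inputs

    def get(self):
        if self.rule is not None:
            self.value = self.rule(*[p.get() for p in self.inputs])
        return self.value

# hypothetical example: a window stays centred in its wall
wall_length = Param(5.0)
window_x = Param(rule=lambda L: L / 2.0 - 0.6, inputs=(wall_length,))
```

Changing the wall length automatically updates the dependent window position, which is exactly the behaviour a parametric building information model has to guarantee across all coupled objects.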
This study contributes to the identification of coupled THM constitutive model parameters via back analysis against information-rich experiments. A sampling-based back analysis approach is proposed, comprising both the identification of the model parameters and the assessment of their reliability. The results obtained in the context of buffer elements indicate that sensitive parameter estimates generally obey the normal distribution. From the sensitivity of the parameters and the probability distribution of the samples, confidence intervals for the estimated parameters can be provided, allowing a qualitative assessment of the identified parameters, which will be used in future work as inputs for prognosis computations of buffer elements. Such elements play an important role, e.g., in the design of nuclear waste repositories.
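The sampling-based idea can be sketched for a single parameter (placeholder forward model and invented numbers; the actual THM model and statistics are far more involved): parameter samples are ranked by their misfit against the measurements, and the accepted fraction yields both an estimate and a confidence interval.

```python
import random

def forward(k):
    """Placeholder for the coupled THM forward model: here simply a
    linear response at three observation points."""
    return [k * t for t in (1.0, 2.0, 3.0)]

def back_analysis(measured, n=5000, seed=0, keep=0.05):
    """Sample the parameter uniformly, rank by squared misfit, keep the
    best fraction, and report its mean and a 95% interval."""
    rng = random.Random(seed)
    scored = []
    for _ in range(n):
        k = rng.uniform(0.0, 5.0)
        mis = sum((m - s) ** 2 for m, s in zip(measured, forward(k)))
        scored.append((mis, k))
    scored.sort()
    accepted = sorted(k for _, k in scored[: int(n * keep)])
    mean = sum(accepted) / len(accepted)
    lo = accepted[int(0.025 * len(accepted))]
    hi = accepted[int(0.975 * len(accepted)) - 1]
    return mean, (lo, hi)
```

The width of the reported interval directly reflects the sensitivity of the measurements to the parameter: an insensitive parameter yields a wide, uninformative interval.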
Traffic simulation is a valuable tool for the design and evaluation of road networks. Over the years, the level of detail to which urban and freeway traffic can be simulated has increased steadily, shifting from a merely qualitative macroscopic perspective to a very detailed microscopic view, where the behavior of individual vehicles is emulated realistically. With the improvement of behavioral models, however, the computational complexity has also steadily increased, as more and more aspects of real-life traffic have to be considered by the simulation environment. Despite the constant increase in computing power of modern personal computers, microscopic simulation remains computationally expensive, limiting the maximum network size that can be simulated on a single-processor computer in reasonable time. Parallelization can distribute the computing load from a single computer system to a cluster of several computing nodes. To this end, the existing simulation framework had to be adapted to allow for a distributed approach. As the simulation is ultimately targeted to be executed in real time, incorporating real traffic data, only a spatial partition of the simulation was considered, meaning the road network has to be partitioned into subnets of comparable complexity to ensure homogeneous load balancing. The partitioning process must also ensure that the network is divided between subnets only in regions where no strong interaction between the separated road segments occurs (i.e. not in the direct vicinity of junctions). In this paper, we describe a new microscopic reasoning voting strategy and discuss to what extent the increasing computational cost of these more complex behaviors lends itself to a parallelized approach. We show the parallel architecture employed, the communication between computing units using MPIJava, and the benefits and pitfalls of adapting a single-computer application for use on a multi-node computing cluster.
In civil engineering it is very difficult and often expensive to excite constructions such as bridges and buildings with an impulse hammer or shaker. This problem can be avoided with the output-only method, a special feature of stochastic system identification. The permanently present ambient noise (e.g. wind, traffic, waves) is sufficient to excite the structures in their operational conditions. The output-only method is able to estimate the observable part of a state-space model which contains the dynamic characteristics of the measured mechanical system. Because the ambient excitation is assumed to be white, there is no need to measure the input. Another advantage of the output-only method is the possibility of obtaining highly detailed models by a special technique called the polyreference setup. To emulate the availability of a much larger set of sensors, data from varying sensor locations are collected: several successive data sets are recorded with sensors at different locations (moving sensors) and at fixed locations (reference sensors). The covariance functions of the reference sensors are used to normalize the moving sensors. The result of the subsequent subspace-based system identification is a highly detailed black-box model that contains the weighting function, including the well-known dynamic parameters eigenfrequencies and mode shapes of the mechanical system. The emphasis of this lecture is the presentation of an extensive damage detection experiment. A 53-year-old prestressed concrete tied-arch bridge in Hünxe (Germany) was deconstructed in 2005. Prior to deconstruction, numerous vibration measurements were carried out. The first system-modification experiment was an additional support near the bridge bearing of one main girder. In a further experiment, one hanger of one tied arch was cut through as an induced damage. Some first outcomes of the described experiments are presented.
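As a rough illustration of covariance-driven, output-only subspace identification, the following sketch estimates a state matrix from a block Hankel matrix of output covariances of a simulated single-mode system under white-noise excitation. It is a minimal textbook SSI-COV variant, not the polyreference multi-setup procedure of the paper; all function names and parameter values are assumptions:

```python
import numpy as np

def ssi_cov(y, order=2, blocks=10):
    """Covariance-driven stochastic subspace identification (sketch).

    Builds a block Hankel matrix of output covariances from the
    output-only record y (white ambient excitation assumed), factorizes
    it by SVD into observability and controllability parts, and recovers
    the discrete-time state matrix A; the angles of its eigenvalues give
    the eigenfrequencies.
    """
    N = len(y)
    # sample output covariances R_i = E[y_{k+i} * y_k], i = 1 .. 2*blocks
    R = [np.mean(y[i:] * y[:N - i]) for i in range(1, 2 * blocks + 1)]
    H0 = np.array([[R[i + j] for j in range(blocks)] for i in range(blocks)])
    H1 = np.array([[R[i + j + 1] for j in range(blocks)] for i in range(blocks)])
    U, s, Vt = np.linalg.svd(H0)
    U, s, Vt = U[:, :order], s[:order], Vt[:order]
    O = U * np.sqrt(s)                 # observability matrix
    Ctr = np.sqrt(s)[:, None] * Vt     # controllability matrix
    return np.linalg.pinv(O) @ H1 @ np.linalg.pinv(Ctr)

def demo_identified_angle(seed=0, theta=0.2, r=0.99, n=20000):
    """Simulate one lightly damped mode under white noise and return the
    identified eigenvalue angle (discrete-time natural frequency)."""
    rng = np.random.default_rng(seed)
    A = r * np.array([[np.cos(theta), -np.sin(theta)],
                      [np.sin(theta),  np.cos(theta)]])
    x, y = np.zeros(2), np.empty(n)
    for k in range(n):
        x = A @ x + rng.standard_normal(2)
        y[k] = x[0]
    lam = np.linalg.eigvals(ssi_cov(y))
    return float(np.abs(np.angle(lam)).max())
```

The demo should recover the simulated eigenvalue angle of 0.2 rad/sample to within a few percent; real applications additionally require model-order selection and stabilization diagrams.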
It is well known that complex quaternion analysis plays an important role in the study of higher order boundary value problems of mathematical physics. Following the ideas given for real quaternion analysis, the paper deals with certain orthogonal decompositions of the complex quaternion Hilbert space into subspaces of null solutions of a Dirac-type operator with an arbitrary complex potential. We then apply these decompositions to related boundary value problems, and prove existence and uniqueness as well as explicit representation formulae for the underlying solutions.
This paper deals with the development of a new multi-objective evolution strategy in combination with an integrated pollution-load and water-quality model. The optimization algorithm combines the advantages of the Non-Dominated Sorting Genetic Algorithm and Self-Adaptive Evolution Strategies. The identification of a good spread of solutions on the Pareto-optimal front and the optimization of a large number of decision variables both demand numerous simulation runs. In addition, statements regarding the frequency of critical concentrations and peak discharges require continuous long-term simulations. Therefore, a fast integrated simulation model is needed that still provides the required precision of the results. For this purpose, a hydrological deterministic pollution-load model has been coupled with a river water-quality model and a rainfall-runoff model. Wastewater treatment plants are simulated in a simplified way. The functionality of the optimization and simulation tool has been validated by analyzing a real catchment area including sewer system, WWTP, water body and natural river basin. For the optimization/rehabilitation of the urban drainage system, both innovative and approved measures have been examined and used as decision variables. As objective functions, investment costs and river water quality criteria have been used.
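The non-dominated sorting at the heart of NSGA-type algorithms can be illustrated with a simple quadratic-time Pareto filter; this sketch shows only the dominance test, not the full combined NSGA/self-adaptive evolution strategy of the paper:

```python
def pareto_front(points):
    """Return indices of non-dominated points (minimization in all
    objectives).  A point p is dominated if some other point q is no
    worse in every objective and strictly better in at least one.
    Simple O(n^2) sketch of the idea behind non-dominated sorting.
    """
    front = []
    for i, p in enumerate(points):
        dominated = any(
            all(q[k] <= p[k] for k in range(len(p))) and
            any(q[k] < p[k] for k in range(len(p)))
            for j, q in enumerate(points) if j != i)
        if not dominated:
            front.append(i)
    return front
```

For example, among the cost/quality pairs `[(1, 5), (2, 3), (3, 4), (4, 1), (5, 5)]` only the first, second, and fourth points are non-dominated.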
This paper contributes to the discussion on the introduction of exclusive lanes for public transport. Analyses are presented of the effectiveness of 4-lane and 6-lane streets with and without an exclusive bus lane. Two basic sub-models have been applied: the binary logit model for modal split estimation, which takes into consideration the ratio of travel time by private car to that by public transport, and the polynomial model for predicting link impedance, where the real travel time is affected by the ratio of traffic volume to design capacity. The parameters of both sub-models have been calibrated for Polish conditions. Relationships are determined between the number of person trips and the traffic volume of buses/private cars, the market share of public transport in motorised trips, the average travel time lost in trips, and the average operation cost. An iterative technique addresses the feedback between the modal split and the traffic volumes. Numerical calculations were carried out using EXCEL and MATLAB. Typical values of corridor length, occupancy rate of a passenger car, design capacity of the bus, and access and egress times to and from the parking/bus stop in urban areas were applied. On the basis of the estimated results, the marginal parameter values (the number of people carried at which a separate bus lane is most effective, the traffic volume of private cars, the traffic volume of buses, and the share of public transport in trips) were obtained using the average travel time lost and the operational cost as criteria. The introduction of street lanes with and without an exclusive bus lane can thus be optimized in relation to the number of person trips.
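A minimal sketch of the two sub-models, assuming a simple travel-time utility for the binary logit and a BPR-type polynomial for link impedance; the coefficients below are illustrative placeholders, not the values calibrated for Polish conditions:

```python
import math

def public_transport_share(t_car, t_bus, beta=0.15):
    """Binary logit modal split: probability of choosing public transport
    given door-to-door travel times t_car and t_bus [min].  Utility is
    assumed to be -beta * travel time; beta is an illustrative value,
    not the calibrated parameter from the paper."""
    u_bus, u_car = -beta * t_bus, -beta * t_car
    return math.exp(u_bus) / (math.exp(u_bus) + math.exp(u_car))

def link_travel_time(t0, volume, capacity, a=0.15, b=4):
    """Polynomial link impedance: free-flow time t0 inflated by the
    volume/capacity ratio.  The coefficients a and b are the common
    BPR defaults, assumed here for illustration."""
    return t0 * (1.0 + a * (volume / capacity) ** b)
```

In an iterative assignment, the logit share determines the bus and car volumes, the impedance function updates the travel times, and the loop repeats until the modal split and travel times are mutually consistent.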
Steel structural design is an integral part of the building construction process. So far, various design methods have been applied in practice to satisfy the design requirements. This paper applies Differential Evolution algorithms to automate the synthesis and rationalization of the design process. The capacity of Differential Evolution algorithms to deal with continuous and/or discrete optimization of steel structures is also demonstrated. The goal of this study is to propose an optimal design of steel frame structures using built-up I-sections and/or a combination of standard hot-rolled profiles. For all steel frame structures optimized in this paper, the algorithm produced solutions better than the original design by the manufacturer. Considering the criteria of quality and efficiency in practical design, the optimal designs produced with the Differential Evolution algorithms can replace conventional design because of their excellent performance.
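A minimal DE/rand/1/bin implementation illustrates the basic mutation, crossover, and greedy selection loop of Differential Evolution; it is a generic sketch, not the paper's exact setup for steel profiles and design constraints:

```python
import random

def differential_evolution(f, bounds, pop_size=30, F=0.8, CR=0.9,
                           gens=200, seed=1):
    """Minimal DE/rand/1/bin sketch for box-constrained minimization.

    f       : objective function taking a list of floats
    bounds  : list of (lo, hi) tuples, one per design variable
    F, CR   : mutation factor and crossover rate (common defaults)
    """
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    cost = [f(x) for x in pop]
    for _ in range(gens):
        for i in range(pop_size):
            # pick three distinct individuals different from i
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            jrand = rng.randrange(dim)  # guarantees at least one mutated gene
            trial = []
            for j in range(dim):
                if rng.random() < CR or j == jrand:
                    v = pop[a][j] + F * (pop[b][j] - pop[c][j])
                    lo, hi = bounds[j]
                    v = min(max(v, lo), hi)  # clip to the box constraints
                else:
                    v = pop[i][j]
                trial.append(v)
            fc = f(trial)
            if fc <= cost[i]:  # greedy selection
                pop[i], cost[i] = trial, fc
    best = min(range(pop_size), key=lambda k: cost[k])
    return pop[best], cost[best]
```

For discrete profile catalogues, the continuous variables are typically mapped to catalogue indices by rounding; constraint checks from the design codes would enter the objective as penalty terms.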
The sizing of simple resonators like guitar strings or laser mirrors is directly connected to the wavelength and represents no complex optimisation problem. This is not the case with liquid-filled acoustic resonators of non-trivial geometries, where several masses and stiffnesses of the structure and the fluid have to fit together. This creates a scenario of many competing and interacting resonances varying in relative strength and frequency when design parameters change. Hence, the resonator design involves a parameter-tuning problem with many local optima. As a solution, evolutionary algorithms (EA) coupled to a forced-harmonic FE simulation are presented. A new hybrid EA is proposed and compared to two state-of-the-art EAs on selected test problems. The motivating background is the search for better resonators suitable for sonofusion experiments, where extreme states of matter are sought in collapsing cavitation bubbles.
Starting from mathematical considerations, we developed simple model approaches for the following optimization problem and carried out numerical tests: A map is subdivided into squares, each of which is assigned a weighting factor. This weighting factor is small if the area can be crossed without difficulty, and correspondingly large if the area is a nature reserve, a lake, or terrain that is difficult to traverse. We seek a favorable connection from point A to point B, taking into account the landscape features encoded by the weighting factor. We first formulate the problem as a variational problem. A necessary condition that the solution function must satisfy is the Euler-Lagrange differential equation. With the help of the Hamiltonian function, this differential equation can be written in canonical form. By simplifying the model, the system of canonical equations can be made concrete enough to serve as a starting point for numerical investigations. To this end, we transform the original landscape into a >mountain landscape<, in which high mountains characterize areas that are difficult to pass. The simplest model is a single mountain generated by the density function of a two-dimensional normal distribution. In addition, we carried out computations for two overlapping mountains and for a canyon.
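A discrete counterpart of the path-optimization problem described above replaces the variational formulation by a shortest-path search on the weighted grid of squares; this is an illustrative sketch using Dijkstra's algorithm, not the canonical-equations method of the paper:

```python
import heapq
import math

def cheapest_path_cost(weights, start, goal):
    """Dijkstra's algorithm on a 4-connected grid of squares; the cost of
    a step is the weighting factor of the square that is entered."""
    rows, cols = len(weights), len(weights[0])
    dist = {start: 0.0}
    pq = [(0.0, start)]
    while pq:
        d, (r, c) = heapq.heappop(pq)
        if (r, c) == goal:
            return d
        if d > dist.get((r, c), math.inf):
            continue
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + weights[nr][nc]
                if nd < dist.get((nr, nc), math.inf):
                    dist[(nr, nc)] = nd
                    heapq.heappush(pq, (nd, (nr, nc)))
    return math.inf

def gaussian_mountain(rows, cols, cr, cc, height=10.0, sigma=2.0):
    """Weight map: base cost 1 plus a single Gaussian 'mountain' at
    (cr, cc), mirroring the simplest model described above."""
    return [[1.0 + height * math.exp(-((r - cr) ** 2 + (c - cc) ** 2)
                                     / (2.0 * sigma ** 2))
             for c in range(cols)] for r in range(rows)]
```

On a map with a mountain between A and B, the cheapest path detours around the peak, so its cost lies strictly between the flat-map cost and the cost of marching straight through the summit.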
We consider an industrial application: the mass minimization of a frame in an injection moulding machine. This frame has to compensate the forces acting on the mould inside the machine and has to fulfill certain critical constraints. The deformation of the frame with constant thickness is described by the plane stress equations of linear elasticity. If the thickness varies, we use a generalized plane stress state with constant thickness in the coarse grid elements. These direct problems are solved by an adaptive multigrid solver. The mass minimization problem leads to a constrained minimization problem for a nonlinear functional, which is solved by a standard optimization algorithm requiring the gradients with respect to the design parameters. For the shape optimization problem, we assume that the machine components consist of simple geometric primitives determined by a few design parameters. Therefore, we calculate the gradient in the shape optimization by numerical differentiation, which requires the solution of approximately four direct problems per design parameter. The adaptive solver detects critical regions automatically and ensures a good approximation to the exact solution of the direct problem. This rather slow approach can be significantly accelerated by using the adjoint method to express the gradient, combined with a direct implementation of several terms that appear after applying the chain rule to the gradient.
Performing parameter identification prior to numerical simulation is an essential task in geotechnical engineering. However, it has to be kept in mind that the accuracy of the obtained parameters is closely related to the chosen experimental setup, such as the number of sensors and their locations. A well-considered positioning of sensors can increase the quality of the measurement and reduce the number of monitoring points. This paper illustrates this concept by means of a loading device that is used to identify the stiffness and permeability of soft clays. With an initial setup of the measurement devices, the pore water pressure and the vertical displacements are recorded and used to identify the aforementioned parameters. Starting from these identified parameters, the optimal measurement setup is investigated with a method based on global sensitivity analysis. This method yields an optimal sensor location assuming three sensors for each measured quantity, and the results are discussed.
In this paper, a circulation-type society is expressed by a recurrent architecture network described with a multi-agent model which consists of the following agents: user, builder, reuse maker, fabricator, waste disposer, material maker and earth bank (see Fig. 1). Structural members, materials, resources and monies move among these agents. Each agent has its own rules and aims regarding structural damage, lifetime, cost reduction, numbers of structural members and structural systems. Reasonable prices of members (fresh, reused, recycled and disposed) can be optimized by GAs in this system, considering an equal distribution of monies among agents.
Applications of operations research methods in the practice of project management are virtually nonexistent. This application gap is all the more astonishing given the large number of theoretical publications on the topic >Operations Research and Project Management<. This contribution works out the various causes of the hitherto low practical relevance of the discipline of operations research and makes proposals for how more application-friendly models can be developed in the future.
The Laguerre polynomials appear naturally in many branches of pure and applied mathematics and mathematical physics. Debnath introduced the Laguerre transform and derived some of its properties. He also discussed its applications to the study of heat conduction and to the oscillations of a very long and heavy chain with variable tension. An explicit boundedness result for some class of Laguerre integral transforms will be presented.
In many industries, companies lose visibility of the human and technical resources of their field service. On the one hand, field-service staff are often as free as kings; on the other hand, they do not take part in the daily communication of the central office and suffer from a lack of involvement in its decisions. The result is inefficiency, followed by reproaches in both directions. With radio systems and later mobile phones, the gap began to narrow, but the solutions are still far from productive.
Online-Lehre im Einsatz
(2003)
The BMBF-funded project PORTIKO (a multimedia teaching and learning platform for the civil engineering degree program) comprises ten civil engineering institutes in Braunschweig and Dresden as well as the distance-learning program at TU Dresden. In addition, three staff projects for didactics, teleteaching and the multimedia platform act as service providers and coordinators for the other subprojects. At TU Braunschweig, the Institute for Building Materials, Concrete Construction and Fire Protection is responsible, among other things, for the basic training in concrete construction (5th and 6th semesters) and for a special specialization, fire and disaster protection (7th to 9th semesters). This contribution first gives an overview of the platform used and of the hardware and software techniques employed. It then reports on first experiences with the use of online teaching and on the different strategic approaches, derived from these experiences, for how to proceed in concrete construction and fire protection after the end of the project. To strengthen the practical orientation, two project areas were formed within PORTIKO which, owing to the close relationship between the subjects, are meant to cooperate in a targeted way: the >Virtual House< (Virtuelles Haus) and the >Virtual Infrastructure< (Virtuelle Infrastruktur). The Virtual House is used as an example to show how the individual subjects, with the help of coordinated exercise examples, demonstrate to students the necessity of teamwork and of the interaction of different disciplines. Through the online platform, students have access to a wide range of materials, e.g. lecture notes enriched with video clips and photos of experiments and construction sites, audio-commented lecture slides, old exams with model solutions, assignments for homework, etc. In online teaching, students can also be provided with interactive applications that allow a playful exploration and observation of parameter influences.
Such interactive exercises are created with Java, Flash and Authorware; examples are presented.
Using a quaternionic reformulation of the electrical impedance equation, we consider a two-dimensional separable-variables conductivity function and, applying two different techniques, we obtain a special class of Vekua equation whose general solution can be approximated by means of Taylor series in formal powers, for which it is possible to introduce an explicit Bers generating sequence.
A lot of real-life problems frequently lead to the solution of a complicated (large-scale, multicriteria, unstable, nonsmooth, etc.) nonlinear optimization problem. In order to cope with large-scale problems and to develop many optimum plans, a hierarchical approach to problem solving may be useful. The idea of hierarchical decision making is to reduce the overall complex problem to smaller and simpler approximate problems (subproblems) which may thereupon be treated independently. One way to break a problem into smaller subproblems is the use of decomposition-coordination schemes. For finding proper values of the coordination parameters in convex programming, some rapidly convergent iterative methods are developed, and their convergence properties and computational aspects are examined. Problems of their global implementation and a polyalgorithmic approach are discussed as well.
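The idea of a decomposition-coordination scheme can be illustrated by a toy dual-decomposition example: two subproblems are solved independently for a given coordination parameter (a Lagrange multiplier), which a simple subgradient step then updates. The paper's rapidly convergent methods are more elaborate; this is only a sketch of the principle:

```python
def coordinate(alpha=0.5, iters=100):
    """Toy decomposition-coordination scheme (dual decomposition).

    Minimizes f1(x) + f2(y) = (x-1)^2 + (y-3)^2 subject to the coupling
    constraint x = y.  For a given multiplier lam, each subproblem is
    solved in closed form and independently; the coordination step then
    drives the coupling residual x - y to zero.  The optimum is
    x = y = 2 with lam = -2.
    """
    lam = 0.0
    for _ in range(iters):
        x = 1.0 - lam / 2.0      # argmin_x (x-1)^2 + lam*x
        y = 3.0 + lam / 2.0      # argmin_y (y-3)^2 - lam*y
        lam += alpha * (x - y)   # subgradient ascent on the dual
    return x, y, lam
```

Because each subproblem depends only on the coordination parameter, the two minimizations could run on separate processors, which is exactly the appeal of such schemes for large-scale problems.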
ON THE NAVIER-STOKES EQUATION WITH FREE CONVECTION IN STRIP DOMAINS AND 3D TRIANGULAR CHANNELS
(2006)
The Navier-Stokes equations and related ones can be treated very elegantly with the quaternionic operator calculus developed in a series of works by K. Gürlebeck, W. Sprößig and others. This study is extended in this paper. In order to apply the quaternionic operator calculus to solve these types of boundary value problems fully explicitly, one basically needs to evaluate two types of integral operators: the Teodorescu operator and the quaternionic Bergman projector. While the integral kernel of the Teodorescu transform is universal for all domains, the kernel function of the Bergman projector, called the Bergman kernel, depends on the geometry of the domain. With special variants of quaternionic holomorphic multiperiodic functions we obtain explicit formulas for three-dimensional parallel plate channels, rectangular block domains and regular triangular channels. The explicit knowledge of the integral kernels then makes it possible to evaluate the operator equations in order to determine the solutions of the boundary value problem explicitly.
In this paper we consider the time-independent Klein-Gordon equation on some conformally flat 3-tori with given boundary data. We set up an explicit formula for the fundamental solution. We show that any solution to the homogeneous Klein-Gordon equation on the torus can be represented as a finite sum over generalized 3-fold periodic elliptic functions that lie in the kernel of the Klein-Gordon operator. Furthermore, we prove Cauchy and Green type integral formulas and set up a Teodorescu and a Cauchy transform for the toroidal Klein-Gordon operator. These in turn are used to set up explicit formulas for the solution to the inhomogeneous version of the Klein-Gordon equation on the 3-torus.
As numerical techniques for solving PDEs or integral equations become more sophisticated, the generation of the geometric inputs should follow that numerical advancement. This document describes the preparation of CAD data so that they can later be applied to hierarchical BEM or FEM solvers. For the BEM case, the geometric data are described by surfaces which we want to decompose into several curved four-sided patches. We show the treatment of untrimmed and trimmed surfaces. In particular, we show how to prevent smooth corners, which are detrimental to the diffeomorphism property. Additionally, we consider the problem of characterizing whether a Coons map is a diffeomorphism from the unit square onto a planar domain delineated by four given curves. We aim primarily at having not only theoretically correct conditions but also practically efficient methods. As for the FEM geometric preparation, we need to decompose a 3D solid into a set of curved tetrahedra. First, we describe a method of decomposition without adding too many Steiner points (additional points not belonging to the initial boundary nodes of the boundary surface). Then, we provide a methodology for efficiently checking whether a tetrahedral transfinite interpolation is regular, using a combination of a degree reduction technique and subdivision. Along with the method description, we also report on some interesting practical results from real CAD data.
With the aid of the factorization of the Schrödinger operator by quaternionic differential operators of first order, proposed in recent works by S. Bernstein and K. Gürlebeck, we study the system describing force-free magnetic fields with a nonconstant proportionality factor, the static Maxwell system for inhomogeneous media, the Beltrami condition, and the Dirac equation with different types of potentials depending on one variable. We obtain integral representations for solutions of these systems.
The paper is devoted to a study of properties of homogeneous solutions of the massless field equation in higher dimensions. We first treat the case of dimension 4. Here we use the two-component spinor language (developed for the purposes of general relativity). We describe how massless field operators are related to higher spin analogues of the de Rham sequence, the so-called Bernstein-Gel'fand-Gel'fand (BGG) complexes, and how they are related to the twisted Dirac operators. Then we study similar questions in higher (even) dimensions. Here we have to use more tools from the representation theory of the orthogonal group. We recall the definition of massless field equations in higher dimensions and their relation to higher dimensional conformal BGG complexes. Then we discuss properties of homogeneous solutions of the massless field equation. Using some recent techniques for the decomposition of tensor products of irreducible $Spin(m)$-modules, we are able to add some new results on the structure of the spaces of homogeneous solutions of massless field equations. In particular, we show that the kernel of the massless field equation in a given homogeneity contains at least one specific irreducible submodule.
Monogenic functions play a role in quaternion analysis similar to that of holomorphic functions in complex analysis. A holomorphic function with nonvanishing complex derivative is a conformal mapping. It is well known that in R^{n+1}, n ≥ 2, the set of conformal mappings is restricted to the set of Möbius transformations only, and that the Möbius transformations are not monogenic. The paper deals with a locally geometric mapping property of a subset of monogenic functions with nonvanishing hypercomplex derivatives (named M-conformal mappings). It is proved that M-conformal mappings orthogonal to all monogenic constants admit a certain change of solid angles and, vice versa, that this change can characterize such mappings. In addition, we determine planes in which those mappings behave like conformal mappings in the complex plane.