31.80 Applied Mathematics
The 19th International Conference on the Applications of Computer Science and Mathematics in Architecture and Civil Engineering will be held at the Bauhaus University Weimar from 4 to 6 July 2012. Architects, computer scientists, mathematicians, and engineers from all over the world will meet in Weimar for an interdisciplinary exchange of experiences, to report on their results in research, development, and practice, and to discuss open questions. The conference covers a broad range of research areas: numerical analysis, function theoretic methods, partial differential equations, continuum mechanics, engineering applications, coupled problems, computer science, and related topics. Several plenary lectures in the aforementioned areas will take place during the conference.
We invite architects, engineers, designers, computer scientists, mathematicians, planners, project managers, and software developers from business, science and research to participate in the conference!
The 20th International Conference on the Applications of Computer Science and Mathematics in Architecture and Civil Engineering will be held at the Bauhaus University Weimar from 20 to 22 July 2015. Architects, computer scientists, mathematicians, and engineers from all over the world will meet in Weimar for an interdisciplinary exchange of experiences, to report on their results in research, development, and practice, and to discuss open questions. The conference covers a broad range of research areas: numerical analysis, function theoretic methods, partial differential equations, continuum mechanics, engineering applications, coupled problems, computer science, and related topics. Several plenary lectures in the aforementioned areas will take place during the conference.
We invite architects, engineers, designers, computer scientists, mathematicians, planners, project managers, and software developers from business, science and research to participate in the conference!
Long-span cable supported bridges are prone to aerodynamic instabilities caused by wind and this phenomenon is usually a major design criterion. If the wind speed exceeds the critical flutter speed of the bridge, this constitutes an Ultimate Limit State. The prediction of the flutter boundary therefore requires accurate and robust models. This paper aims at studying various combinations of models to predict the flutter phenomenon.
Since flutter is a coupling of aerodynamic forcing with a structural dynamics problem, different types and classes of models can be combined to study the interaction. Here, both numerical approaches and analytical models are utilised and coupled in different ways to assess the prediction quality of the hybrid model. The models employed for the aerodynamic forces are the analytical Theodorsen expressions for the motion-induced aerodynamic forces of a flat plate and Scanlan derivatives as a meta-model. Further, Computational Fluid Dynamics (CFD) simulations using the Vortex Particle Method (VPM) were used to cover numerical models.
The structural representations were dimensionally reduced to two-degree-of-freedom section models calibrated from global models, as well as a fully three-dimensional Finite Element (FE) model. A two-degree-of-freedom system was analysed both analytically and numerically.
Generally, all models were able to predict the flutter phenomenon and relatively close agreement was found for the particular bridge. In conclusion, the model choice for a given practical analysis scenario will be discussed in the context of the analysis findings.
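The damping-driven character of flutter onset can be illustrated with a deliberately minimal single-degree-of-freedom sketch (a toy model, not any of the models used in the paper): the motion-induced aerodynamic force is assumed to reduce the effective damping linearly with wind speed, and flutter onset is the speed at which total damping vanishes. All parameter names and values below are hypothetical.

```python
# Toy single-DOF flutter estimate: m*x'' + (c_struct - rho*U*b*h1)*x' + k*x = 0.
# Flutter onset occurs when the effective damping
# c_eff(U) = c_struct - rho*U*b*h1 reaches zero.

def critical_flutter_speed(c_struct, rho, b, h1):
    """Wind speed U_cr at which effective damping vanishes (h1 > 0 assumed)."""
    if h1 <= 0:
        return float("inf")  # aerodynamics add damping: no flutter in this toy model
    return c_struct / (rho * b * h1)

def effective_damping(U, c_struct, rho, b, h1):
    """Total damping at wind speed U."""
    return c_struct - rho * U * b * h1

# Hypothetical section parameters
c_struct = 120.0   # structural damping [N s/m]
rho = 1.25         # air density [kg/m^3]
b = 15.0           # deck half-width [m]
h1 = 0.08          # aerodynamic damping derivative (>0 destabilises in this convention)

U_cr = critical_flutter_speed(c_struct, rho, b, h1)
print(round(U_cr, 2))  # critical wind speed in m/s
```

A real analysis couples at least heave and pitch and uses frequency-dependent derivatives (Theodorsen or Scanlan), so this scalar criterion only conveys the mechanism.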
Identification of modal parameters of a space frame structure is a complex assignment due to a large number of degrees of freedom, close natural frequencies, and different vibrating mechanisms. Research has been carried out on the modal identification of rather simple truss structures. So far, less attention has been given to complex three-dimensional truss structures. This work develops a vibration-based methodology for determining modal information of three-dimensional space truss structures. The method uses a relatively complex space truss structure for its verification. Numerical modelling of the system gives modal information about the expected vibration behaviour. The identification process involves closely spaced modes that are characterised by local and global vibration mechanisms. To distinguish between local and global vibrations of the system, modal strain energies are used as an indicator. The experimental validation, which incorporated a modal analysis employing the stochastic subspace identification method, has confirmed that considering relatively high model orders is required to identify specific mode shapes. Especially in the case of the determination of local deformation modes of space truss members, higher model orders have to be taken into account than in the modal identification of most other types of structures.
In a superelliptic shell joined to a circular cylinder, bending stresses are absent when the shell is subjected to uniform pressure. Some geometrical characteristics have been determined. Expressions for determining the stresses at the shell crest (at the singular point of planar type) are suggested. The problem of the theoretical critical buckling load of an elongated shell supported by frames is studied. The critical buckling load of two shells with different specifications was found experimentally.
A method is given for automatically maintaining the vibration amplitude of a number of mechanisms at a given level when the amplitude of the exciting force varies greatly. For this purpose, a pendulum is attached to the mechanism through a viscoelastic hinge. The pendulum load can move along an arm, to which it is viscoelastically connected.
Fuzzy functions are suitable for dealing with uncertainty and fuzziness in a closed form while maintaining the informational content. This paper seeks to understand, elaborate, and explain the problem of interpolating crisp and fuzzy data using continuous fuzzy-valued functions. Two main issues are addressed. The first covers how the fuzziness induced by the reduction and deficit of information, i.e. the discontinuity of the interpolated points, can be evaluated considering the interpolation method used and the density of the data. The second issue deals with the need to differentiate between impreciseness, and hence fuzziness, only in the interpolated quantity; impreciseness only in the location of the interpolated points; and impreciseness in both the quantity and the location. A brief background on the concepts of fuzzy numbers and fuzzy functions is presented. The numerical side of computing with fuzzy numbers is concisely demonstrated. The problems of fuzzy polynomial interpolation, interpolation on meshes, and mesh-free fuzzy interpolation are investigated. The integration of the previously noted uncertainties into a coherent fuzzy-valued function is discussed. Several sets of artificial and original measured data are used to examine the aforementioned fuzzy interpolations.
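To make the interpolation of fuzzy quantities concrete, here is a minimal sketch (not the paper's implementation) of linear interpolation between two triangular fuzzy values: because the interpolation weights are nonnegative, interval arithmetic on every alpha-cut reduces to componentwise interpolation of the three defining vertices. The data values are illustrative only.

```python
# A triangular fuzzy number is encoded as (left, peak, right), left <= peak <= right.

def fuzzy_lerp(A, B, t):
    """Linearly interpolate two triangular fuzzy values at parameter t in [0, 1].

    With nonnegative weights (1 - t) and t, interval arithmetic on every
    alpha-cut reduces to componentwise interpolation of the three vertices.
    """
    assert 0.0 <= t <= 1.0
    return tuple((1.0 - t) * a + t * b for a, b in zip(A, B))

# Hypothetical fuzzy measurements at two interpolation points
A = (1.0, 2.0, 3.0)   # value known only fuzzily at x = 0
B = (5.0, 6.0, 8.0)   # value known only fuzzily at x = 1
mid = fuzzy_lerp(A, B, 0.5)
print(mid)  # (3.0, 4.0, 5.5)
```

Higher-order (polynomial) fuzzy interpolation needs care because negative Lagrange weights break this componentwise shortcut, which is one reason the evaluation of induced fuzziness in the paper is nontrivial.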
This paper concerns schedule synchronization problems in public transit networks. It consists of three main parts. In the first part, the subject area is introduced, the terms are defined, and a framework for optimal synchronization, in the form of a problem representation and formulation, is proposed. The second part is devoted to the transfer synchronization problem, where passengers change transit lines at transfer points. An integrated Tabu Search and Genetic solution method is developed for this specific problem. The third part deals with the headway harmonization problem, i.e. the synchronization of the schedules of different transit lines on common segments of routes. For the solution of this problem, a new bilevel optimization method is proposed, with harmonization of zones at the bottom level and coordination of zones, by time buffers assigned to timing points, at the upper level. Finally, the synchronization problems are numerically illustrated by real-life examples of public transport lines in Cracow.
Mobile Software Agents for Novel Functions and Benefits in Intelligent Building Systems
(2003)
Application-oriented software within networked buildings is steadily increasing. New standards allow simple remote access in order to install new services or to apply updates. In this context, the OSGi standard is presented, which enables the management of software during operation. In addition, the network load of the heterogeneous networks within and to the buildings is continuously growing. Here, mobile software agents can demonstrate their advantages over conventional, static communication mechanisms. The following text describes the integration of such mobile software agents into existing standards for intelligent buildings and illustrates it using the example of the Innovation Centre Intelligent House Duisburg (www.inhaus-duisburg.de). After the introduction, Chapter 2 describes the current state of the art, focusing in particular on the OSGi standard and the technology of mobile software agents. Chapter 3 focuses on preliminary analyses of remote maintenance, the optimization of control systems, and the integration of dynamic network participants, all of which are facilitated by the described mechanisms. Chapter 4 briefly summarizes the results and gives an outlook.
In this paper, a wavelet-energy damage indicator is used within response surface methodology to identify damage in a simulated filler-beam railway bridge. The approximate model is designed to include operational and environmental conditions in the assessment. The procedure is split into two stages, a training phase and a detection phase. During the training phase, a so-called response surface is built from training data using polynomial regression and radial basis function approximation approaches. The response surface is then used to detect damage in the structure during the detection phase. The results show that the response surface model is able to detect moderate damage in one of the bridge supports while the temperatures and train velocities are varied.
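As a minimal sketch of the response-surface idea (a toy stand-in for the paper's procedure, not its implementation), the snippet below fits a quadratic polynomial surrogate y ≈ a + b·x + c·x² to training data by solving the least-squares normal equations, then evaluates it at a new operating point. The data and variable names are illustrative assumptions.

```python
def solve3(M, v):
    """Solve a 3x3 linear system M * x = v by Gaussian elimination with pivoting."""
    A = [row[:] + [rhs] for row, rhs in zip(M, v)]
    n = 3
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(A[r][i]))  # partial pivoting
        A[i], A[p] = A[p], A[i]
        for r in range(i + 1, n):
            f = A[r][i] / A[i][i]
            for c in range(i, n + 1):
                A[r][c] -= f * A[i][c]
    x = [0.0] * n
    for i in reversed(range(n)):
        x[i] = (A[i][n] - sum(A[i][c] * x[c] for c in range(i + 1, n))) / A[i][i]
    return x

def fit_quadratic(xs, ys):
    """Least-squares coefficients (a, b, c) of y = a + b*x + c*x**2."""
    # Normal equations (Phi^T Phi) beta = Phi^T y with basis Phi = [1, x, x^2]
    S = [sum(x ** k for x in xs) for k in range(5)]
    M = [[S[0], S[1], S[2]], [S[1], S[2], S[3]], [S[2], S[3], S[4]]]
    v = [sum(y * x ** k for x, y in zip(xs, ys)) for k in range(3)]
    return solve3(M, v)

# Hypothetical training data: damage indicator vs. temperature,
# fabricated to follow exactly 1 + 0.03*T + 0.001*T**2
temps = [0.0, 10.0, 20.0, 30.0]
indicator = [1.0, 1.4, 2.0, 2.8]
a, b, c = fit_quadratic(temps, indicator)

def predict(T):
    return a + b * T + c * T * T
```

In the detection phase, an observed indicator value that deviates strongly from `predict(T)` at the measured temperature would flag potential damage; the paper additionally uses radial basis functions, which are omitted here.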
We give a sufficient and a necessary condition for an analytic function f on the unit disk D with Hadamard gap to belong to a class of weighted logarithmic Bloch spaces, as well as to the corresponding little weighted logarithmic Bloch space, under some conditions imposed on the defined weight function. We also study the relations between the class of weighted logarithmic Bloch functions and some other classes of analytic functions with the help of analytic functions in the Hadamard gap class.
The search for the best building design requires a concerted design approach covering both structure and foundation. Our work is an application of this approach. Our objective is also to create an interactive tool able to define, at the early design stages, the orientations of structure and foundation systems that satisfy the client and the architect as well as possible. While the concerns of these two actors are primarily technical and economic, they also wish to apprehend the environmental and social dimensions of their projects. Thus, this approach is based on alternative studies and on a multi-criteria analysis. In this paper, we present the context of our work, the problem formulation that allows a concerted design of structure and foundation systems, and the process for identifying feasible solutions.
The p-Laplace equation is a nonlinear generalization of the Laplace equation. This generalization is often used as a model problem for special types of nonlinearities. The p-Laplace equation can be seen as a bridge between very general nonlinear equations and the linear Laplace equation. The aim of this paper is to solve the p-Laplace equation for 2 < p < 3 and to find strong solutions. The idea is to apply a hypercomplex integral operator and spatial function theoretic methods to transform the p-Laplace equation into the p-Dirac equation. This equation will be solved iteratively by using a fixed point theorem.
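For reference, the equations involved can be written out in standard notation. The operator setting below is only a sketch: the sign convention D² = −Δ is the usual one in Clifford analysis, and the paper's precise hypercomplex definitions may differ in detail.

```latex
% p-Laplace equation (strong form), 2 < p < 3:
\[
  \Delta_p u \;=\; \operatorname{div}\!\bigl( |\nabla u|^{p-2}\, \nabla u \bigr) \;=\; 0 .
\]
% Rewritten with a Dirac operator D (Clifford-analysis convention D^2 = -\Delta)
% as the first-order p-Dirac equation:
\[
  D\bigl( |D u|^{p-2}\, D u \bigr) \;=\; 0 ,
\]
% which a hypercomplex integral operator (a right inverse of D) turns into a
% fixed-point problem solvable by iteration.
```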
In order to minimize the probability of foundation failure resulting from cyclic action on structures, researchers have developed various constitutive models to simulate the foundation response and soil interaction under these complex cyclic loads. The efficiency and effectiveness of these models is largely determined by the cyclic constitutive parameters. Although much research is being carried out on these relatively new models, little or no detail exists in the literature on the model-based identification of the cyclic constitutive parameters. This can be attributed to the difficulties and complexities of the inverse modeling of such complex phenomena. A variety of optimization strategies is available for the solution of least-squares problems, as usually posed in the field of model calibration. For the back analysis (calibration) of the soil response to oscillatory load functions, this paper gives insight into the model calibration challenges and puts forward a method for the inverse modeling of cyclically loaded foundation response, such that high-quality solutions are obtained with minimum computational effort. The resulting model responses adequately describe what would otherwise be measured in the laboratory or in the field.
Perception and Processing of Events in Distributed Planning for Structural Fire Protection
(2003)
The building planning process is characterized by a high degree of cooperation between planning participants from different disciplines. On the one hand, plans are detailed on the basis of planning information from other participants; on the other hand, individual plans also set important boundary conditions for the overall planning. This contribution describes an approach for the holistic support of distributed planning, using structural fire protection as an example. The approach takes into account the distributed and parallel planning practice applied today for large and medium-sized buildings. Not only the planning participants are modelled as distributed; the individual pieces of planning information are also held in distributed form within the shared cooperation network. The focus of this contribution is on the perception of planning changes and events during planning, and on the processing of this information to ensure consistent planning. This is achieved, on the one hand, by the CoBE awareness model, with which events can be detected and made available to the information network. On the other hand, event handling and the subsequent discipline-specific information processing are described with the help of a multi-agent system.
As an optimization that starts from a randomly selected structure generally does not guarantee reasonable optimality, the use of a systematic approach, named the ground structure approach, is widely accepted in steel truss and frame structural design. However, in the case of reinforced concrete (RC) structural optimization, because of the orthogonal orientation of structural members, randomly chosen or architect-sketched framing is used. Such a one-time fixed layout, in addition to lacking a systematic approach, does not necessarily guarantee optimality. In this study, an approach for generating a candidate ground structure to be used for cost or weight minimization of 3D RC building structures, including slabs, is developed. A multi-objective function at the floor optimization stage and a single objective function at the frame optimization stage are considered. A particle swarm optimization (PSO) method is employed for selecting the optimal ground structure. This method enables the generation of a simple, yet realistic, representation of a topologically preoptimized ground structure while both structural and main architectural requirements are considered. This is supported by a case study for different floor domain sizes.
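The PSO method mentioned above can be sketched in a few lines. This is a generic textbook PSO minimizing a toy cost function (the sphere function), not the study's RC cost model; all parameter values are common defaults, assumed here for illustration.

```python
import random

def pso(f, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=1):
    """Minimal particle swarm optimizer for a continuous cost function f."""
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                     # personal best positions
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]    # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Toy stand-in for a structural cost function
cost = lambda x: sum(v * v for v in x)
best, best_val = pso(cost, bounds=[(-5.0, 5.0)] * 2)
```

In the study's setting, `cost` would encode RC member costs plus penalties for violated structural and architectural constraints, and each particle would encode a candidate ground-structure layout.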
Over the last decade, the technology of constructing buildings has developed dramatically, especially with the huge growth of CAD tools that help in modeling buildings, bridges, roads, and other construction objects. Often, quality control and dimensional accuracy checks in the factory or on the construction site are based on manual measurements of discrete points. These measured points on the realized object, or on a part of it, are compared with the points of the corresponding CAD model to see whether and where the construction element fits the respective CAD model. This process is complicated and difficult even when modern measuring technology is used, owing to the complicated shapes of the components, the large amount of manually acquired measurement data, and the high cost of manually processing the measured values. By using a modern 3D scanner, however, one obtains information on the whole constructed object and can make a complete comparison against the CAD model, which gives an idea of the quality of the object as a whole. In this paper, we present a case study of controlling measurement quality during the construction phase of a steel bridge using 3D point cloud technology. Preliminary results show that early detection of mismatches between the real element and the CAD model could save considerable time, effort, and expense.
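The core comparison step can be sketched as a deviation check between scanned points and CAD reference points. This is a brute-force toy (real point clouds would use a k-d tree and registration first), and the coordinates below are fabricated for illustration.

```python
import math

def max_deviation(scan_points, cad_points):
    """Largest distance from any scanned point to its nearest CAD reference point.

    Brute-force nearest neighbour over all CAD points.
    """
    def dist(p, q):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))
    return max(min(dist(p, q) for q in cad_points) for p in scan_points)

# Hypothetical data: CAD model corners vs. slightly displaced scanned corners
cad = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (1.0, 1.0, 0.0)]
scan = [(0.0, 0.0, 0.01), (1.0, 0.0, 0.0), (1.05, 1.0, 0.0)]
print(round(max_deviation(scan, cad), 6))  # 0.05: the worst-fitting corner
```

Comparing this maximum deviation against a fabrication tolerance is what flags a mismatch between the realized element and its CAD model.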
Humans are able to think, to feel, and to sense; we are also able to compute, but not very well. In contrast, computers are giants in computing, yet they cannot do anything else besides computing. Appropriate combinations of the different gifts and strengths of human and computer may result in impressive performance. In the 3-Hirn approach, one human and two computers are involved, with different programs running on the two computers. The human starts the machines, inspects the solutions they propose, compares these candidate solutions, and finally decides for one of the alternatives. So the human makes the final choice from a small number of computer proposals. In performance-oriented chess, 3-Hirn combinations consisting of an amateur player and commercial software have reached world-class level. 3-Hirn is a decision support system with a multiple choice structure. Such Multiple Choice Systems will be exhibited and discussed.
SYSBAT - An Application to the Building Production Based on Computer Supported Cooperative Work
(2003)
Our proposed solution is to enable the partners of a construction project to share all the technical data produced and handled during the building production process, by building a system based on internet technology. The system links distributed databases and allows building partners to remotely access and manipulate specific information. It provides an updated building representation that is enriched and refined throughout the building production process. A recent collaboration with Nemetschek France (a subsidiary of Nemetschek AG, an AEC CAD software leader) focuses on a building product repository available in a web context. The aim is to help building project actors choose a technical solution that fits their professional needs, and to maintain our information system with up-to-date information. It starts with the possibility of building online building product catalogues, in order to link Allplan CAD entities with building technical features. This paper presents the conceptual approaches on which our information system is built. Starting from a general organization diagram, we focus on the product and description branches of construction works (including the latest IFC model specifications). Our aim is to add decision support to the selection process for construction works. To do so, we consider each actor's role within the system and the pieces of information each one needs to achieve a given task.
This work describes an algorithm and corresponding software for incorporating general nonlinear multiple-point equality constraints in an implicit sparse direct solver. It is shown that direct addressing of sparse matrices is possible in general circumstances, circumventing the traditional linear or binary search for introducing (generalized) constituents into a sparse matrix. Nested and arbitrarily interconnected multiple-point constraints are introduced by processing the multiplicative constituents with a built-in topological ordering of the resulting directed graph. A classification of discretization methods is performed, and some re-classified problems are described and solved under this proposed perspective. The dependence relations between solution methods, algorithms, and constituents become apparent. Fracture algorithms can be naturally cast in this framework. Solutions based on control equations are also directly incorporated as equality constraints. We show that arbitrary constituents can be used as long as the resulting directed graph is acyclic. It is also shown that graph partitions and orderings should be performed in the innermost part of the algorithm, a fact with some peculiar consequences. The core of our implicit code is described, specifically new algorithms for direct access of sparse matrices (by means of the clique structure) and general constituent processing. It is demonstrated that the graph structures of the second derivatives of the equality constraints are cliques (or pseudo-elements) and are naturally included as such. A complete algorithm is presented, which allows full automation of the equality constraints and avoids the need for pre-sorting. Verification applications in four distinct areas are shown: single and multiple rigid body dynamics, solution control, and computational fracture.
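The topological ordering of the constraint dependency graph, and the acyclicity requirement stated above, can be sketched with Kahn's algorithm (a standard technique; the paper's actual constituent-processing code is of course more elaborate). The example dependencies are hypothetical.

```python
from collections import deque

def topological_order(n, edges):
    """Kahn's algorithm: order n nodes so every edge (u, v) has u before v.

    Returns the ordering, or None if the graph contains a cycle, in which
    case nested constraints of that shape would be inadmissible.
    """
    adj = [[] for _ in range(n)]
    indeg = [0] * n
    for u, v in edges:
        adj[u].append(v)
        indeg[v] += 1
    queue = deque(i for i in range(n) if indeg[i] == 0)
    order = []
    while queue:
        u = queue.popleft()
        order.append(u)
        for v in adj[u]:
            indeg[v] -= 1
            if indeg[v] == 0:
                queue.append(v)
    return order if len(order) == n else None

# Constraint 2 depends on 0 and 1; constraint 3 depends on 2: acyclic, processable
print(topological_order(4, [(0, 2), (1, 2), (2, 3)]))  # [0, 1, 2, 3]
print(topological_order(2, [(0, 1), (1, 0)]))          # None: cyclic constraints
```

Processing constituents in such an order guarantees that every constraint sees fully resolved inputs, which is exactly why acyclicity of the directed graph is the admissibility condition.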
Over the last two decades, the finite element method has developed into an important and powerful tool for engineering computations. While only small problems could be solved at the beginning of this development, today's computing technology allows systems with many thousands of degrees of freedom to be analysed, making computations on very complicated structures possible. Especially in the automotive industry, such a method can be used to improve and optimize structural designs. To obtain good computational results, programs must be developed that contain the appropriate mathematical methods. Particularly in mechanical engineering, but also in other engineering fields such as civil engineering, curved thin shell structures are frequently investigated. An efficient and logical consequence of this is the use of shell elements within FE computations. If, in addition, realistic modelling is desired, it is often unavoidable to switch from the first-order theory common in civil engineering to a nonlinear computational theory, which requires methods capable of representing that theory. If shell structures with large displacements are to be considered, the linear element formulations must be extended by the nonlinear approaches of structural mechanics. The basis of this formulation is often the Lagrangian description, which permits computations on structures with large deformations; its contents are considered in Section 1.5 of this thesis. Spatially varying structures, i.e. those with large deformations, are generally associated with large rotations. In volume elements, these rotations are realized by the differing displacements of two neighbouring element nodes.
In the formulation of thin shell elements, by contrast, the structure is regarded as a curved surface in space. Since only one element node is available in the thickness direction, the rotation must enter the computation through a different formulation. Approaches to general large rotations are considered in Chapter 2 and prepared for use in an element formulation. For the shell structures described, four-node elements are frequently used, since they allow structures to be represented in a simple way; a further advantage is the resulting small bandwidth of the element matrices. With the classical isoparametric formulation, however, this element group has a major drawback: the generation of parasitic stiffness contributions. To minimize this behaviour, also known as 'locking', various approaches have been developed in the past. A very efficient approach for minimizing transverse shear locking in bilinear shell elements is the method of modified strain distributions at the element level. This method is frequently taken up in the literature and is referred to as the 'Assumed Natural Strain' approach or as 'Mixed Interpolation of Tensorial Components'; it is presented in Section 1.6. The program system SLang enables the analysis of structures by means of the finite element method. In order to also compute nonlinear problems of shell structures with this program, a four-node nonlinear shell element containing the aforementioned approaches for large deformations and finite rotations is implemented within this diploma thesis. To avoid transverse shear locking, an ANS approach is integrated into the formulation. Chapter 3 describes the formulation of this SHELL4N element; the element matrices and their structure are presented in detail.
Some numerical examples computed with this new element are presented for evaluation purposes in Chapter 4 of this thesis.
A multicriteria statement of the above-mentioned problem is presented. It differs from the classical statement of the spanning tree problem: the quality of a solution is estimated by a vector objective function that contains weight criteria as well as topological criteria (degree and diameter of the tree). Many real processes are not yet fully determined, which is why the investigation of stability is very important; many errors are connected with the calculations. The stability analysis of vector combinatorial problems allows one to discover the magnitude of changes in the initial data for which the optimal solution does not change. Furthermore, the investigation of stability allows one to construct a class of problems on the basis of a single problem by means of parameter variations. The analysis of the problems belonging to this class allows an exact and adequate description of the model to be obtained.
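The vector objective can be illustrated by computing a spanning tree and evaluating it against both a weight criterion and a topological criterion. The sketch below uses plain single-objective Kruskal as a baseline (the paper's vector optimization and stability analysis are not reproduced), on a fabricated example graph.

```python
def kruskal_mst(n, edges):
    """Minimum spanning tree (Kruskal); edges are (weight, u, v) tuples."""
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x
    tree = []
    for w, u, v in sorted(edges):
        ru, rv = find(u), find(v)
        if ru != rv:                         # edge joins two components
            parent[ru] = rv
            tree.append((w, u, v))
    return tree

def criteria(n, tree):
    """Vector of criteria for a tree: (total weight, maximum vertex degree)."""
    deg = [0] * n
    for _, u, v in tree:
        deg[u] += 1
        deg[v] += 1
    return (sum(w for w, _, _ in tree), max(deg))

edges = [(1, 0, 1), (2, 1, 2), (2, 0, 2), (3, 2, 3), (4, 1, 3)]
tree = kruskal_mst(4, edges)
print(criteria(4, tree))  # (6, 2): weight 6, max degree 2
```

In the multicriteria setting, several spanning trees would be compared by such criterion vectors (weight, degree, diameter), and Pareto-optimality replaces the single minimum.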
'CyberCity' is a concept that, by means of a virtual image of the spatial reality of a city (Berlin), provides a familiar perception environment to ease orientation and navigation, so that a desired piece of information can be reached as quickly and vividly as possible via this virtual browser. This environment model is also suitable as a simulation model for visualizing urban assessments of new projects, traffic measures, and ecological impacts. In particular, it is intended as an orientation environment for telepresence via communication networks, which acquires a particular social significance through virtual representatives (avatars).
Poland is not situated in any seismic region of the earth; however, there are still areas where underground mining is conducted. In these areas, so-called 'paraseismic tremors' are very frequent phenomena. When a building examination is carried out in order to determine its safety, it is necessary to make a complete analysis in which the influence of the tremors is included. To decide whether a building is able to carry dynamic loads, it is necessary to compute its dynamic characteristics, i.e. its natural frequencies, which is not possible using standard techniques alone. After an in-situ diagnosis of a building by an expert, computer techniques together with specialized software for dynamic, static, and strength analyses become a suitable tool. In this paper, special attention is paid to a typical twelve-storey WGP (Wroclaw Great Plate) prefabricated building, concerning a special type of joint. During dynamic actions these joints have a decisive influence on the building's behavior. Paraseismic tremors are especially dangerous for these buildings and can cause pre-failure states. It can be difficult and very expensive to prepare laboratory investigations of a part of a building or of a separate joint; therefore, computer modeling suitable for investigating the behavior of such elements and of whole buildings under different kinds of loads was used.
The authors present fundamentals and methods for preparing tenders and performing cost calculations that work directly with three-dimensional building component models. This contributes to describing the entire life cycle of a building consistently via 3D models. The first section presents fundamental considerations on the use of PLM/PDM technologies in the construction industry. Subsequently, the conventional tendering methodology is analysed. The differences in data structure between a conventional bill of quantities and a three-dimensional building component catalogue (object database) are presented. Derived from this, methods and processes for creating a 3D specification of works are developed, and a suitable user interface for a three-dimensional building component catalogue is presented. Practical problems in the use of component catalogues are then discussed. An essential building block is the cost calculation: calculation methods based on a two-dimensional bill of quantities and on a three-dimensional component catalogue are compared. This is supplemented by solutions for connecting electronic marketplaces to the component catalogue for pricing purposes. Finally, an outlook is given on how a synthesis between a three-dimensional component catalogue and textual standard specifications of works can be achieved.
The paper introduces a systematic construction management approach, supporting the expansion of a specified construction process both automatically and semi-automatically. Throughout the whole design process, many requirements must be taken into account in order to fulfil the demands defined by clients. In implementing those demands in a design concept, up to the execution plan, constraints such as site conditions, the building code, and the legal framework must be considered. However, the complete information needed to make a sound decision is not yet available in the early phase; decisions are traditionally taken based on experience and assumptions. Due to the vast number of appropriate available solutions, particularly in building projects, it is necessary to make those decisions traceable. This is important in order to be able to reconstruct the considerations and assumptions taken, should the project's objectives change in the future. The research is carried out by means of building information modelling, where rules derived from the standard logic of construction management knowledge are applied. The knowledge comprises a comprehensive interaction among the bidding process, cost estimation, construction site preparation, and specific project logistics, which are usually still considered separately. By means of these rules, favourable decisions regarding prefabrication and in-situ implementation can be justified. Modifications depending on the information available within the current design stage will consistently be traceable.
We investigate aspects of tram-network section reliability as part of a reliability model of a whole city tram network. One of the main points of interest is the chronological development of the disturbances (namely the differences between the scheduled and the actual time of departure) on subsequent sections during tram line operation. These developments were observed in comprehensive measurements carried out in Krakow during the rebuilding of one of the main transportation nodes (Rondo Mogilskie); all the construction activities caused large disturbances in tram line operation, with effects extending to neighboring sections. In the second part, the stochastic character of the section running time is analyzed in more detail. Sections with only one beginning stop are considered, as well as sections with two or three beginning stops located on different streets at an intersection. Whether results from sections with two beginning stops may be pooled into one set is checked with suitable statistical tests comparing the means of the two samples. The section running time may depend on the gap between two following trams and on the deviation from the schedule; this dependence is described by a multiple regression formula. The main measurements were carried out in the city center of Krakow in two stages: before and after major changes in the tramway infrastructure.
From the passenger's perspective, punctuality is one of the most important features of tram route operation. We present a stochastic simulation model with special focus on determining the important factors of influence. The statistical analysis is based on large samples (the sample size is nearly 2000) accumulated from comprehensive measurements on eight tram routes in Cracow. For the simulation, we are interested not only in average values but also in stochastic characteristics such as the variance and other properties of the distribution. A realization of tram operation is assumed to be a sequence of running times between successive stops and times spent by the tram at the stops, divided into passenger alighting and boarding times and times spent waiting for the possibility of departure. The running time depends on the kind of track separation, including priorities at traffic lights, the length of the section, and the number of intersections. For every type of section, a linear mixed regression model describes the average running time and its variance as functions of the length of the section and the number of intersections; the regression coefficients are estimated by the iteratively re-weighted least squares method. The alighting and boarding time mainly depends on the type of vehicle, the number of passengers alighting and boarding, and the occupancy of the vehicle. For the distribution of the time spent waiting for the possibility of departure, suitable distributions such as the Gamma and Lognormal distributions are fitted.
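To make the kind of regression described above concrete, here is a minimal sketch with invented data: ordinary least squares is used in place of the paper's iteratively re-weighted scheme, and the variance model is omitted. All numbers and variable names are illustrative assumptions, not the authors' data.

```python
import numpy as np

# Sketch (not the authors' model): fit average running time t against
# section length L and number of intersections n,  t ≈ b0 + b1*L + b2*n.
rng = np.random.default_rng(0)
length = rng.uniform(200.0, 800.0, size=50)        # section length [m], invented
crossings = rng.integers(0, 5, size=50)            # intersections per section
true_t = 20.0 + 0.08 * length + 6.0 * crossings    # assumed "true" relation [s]
t_obs = true_t + rng.normal(0.0, 3.0, size=50)     # noisy observed running times

X = np.column_stack([np.ones_like(length), length, crossings])
beta, *_ = np.linalg.lstsq(X, t_obs, rcond=None)   # estimates of [b0, b1, b2]
print(beta)
```

The estimated coefficients land close to the assumed values; a weighted or iteratively re-weighted fit, as in the paper, would additionally account for heteroscedastic running times.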
In modern buildings, the complexity of heating technology is constantly increasing, making it ever more difficult to ensure that the components interact in an ecologically and economically sensible way. Networking the various components of a heating system using standard IT network technology is intended to increase energy efficiency. Embedded systems and microcontrollers act as controllers for the subsystems; by communicating with each other, they are to adapt their control behavior to one another. An Internet connection makes additional information available for operational management and can also be used for remote maintenance of the plant. Small Java programs executed in a computer's web browser, so-called applets, can visualize the operating states of heating systems in real time. Recording operating data makes their analysis possible.
The distributed processing of shared product models is a current research topic in civil engineering. The presented approach operates in a field of tension: on the one hand, the subsets of the product model to be processed should be formed very flexibly by the planners; on the other hand, revision and release states must be defined permanently and immutably. In a versioned environment with many dependencies, these requirements are difficult to satisfy. The presented approach shows how revision and release states can be formed without restricting flexible distributed processing. The release states must fulfil certain properties: for example, only one version of each object may be contained, and the bindings to other object versions must be taken into account in a consistent manner. A mathematical description based on set theory and graph theory is chosen.
Securing competitiveness in the construction sector, especially for small and medium-sized enterprises, requires active responses to the changing competitive situation. Small entrepreneurial units can gain a substantial competitive advantage through greater flexibility, fast reaction to customer wishes or to current situations on the construction site, and proximity to the market. This requires supporting the information and communication flows with standardized, low-cost hardware and software such as handhelds, and in particular removing the existing obstacles in the flow of information between construction site and office. Using the projects >IuK - SystemBau< and >eSharing< as examples, an introduction strategy for >Mobile Computing< in small entrepreneurial units of the construction industry (SMEs), based on a comprehensive requirements analysis, is presented. The following aspects are described: consistent use of the technology taking the different qualification levels into account, support for the introduction through training, process analysis and possible integration into existing software environments, as well as field tests.
Detailed investigations of structures in FE analyses repeatedly lead to the problem of suitable mesh design. While a coarse mesh is sufficient in large areas, a very fine mesh must be chosen at critical locations to obtain sufficiently accurate results there. In realizing local mesh refinements, the main problem is the design of the transition from the coarse to the fine mesh. The paper presents a family of FE transition elements that achieve a fully compatible coupling of a few large elements with many small elements in a single step. These newly developed so-called pNh elements allow N smaller elements to be connected at one or more sides (element sides for h-refinement). This is achieved by N piecewise defined shape functions on the corresponding sides, where the subdivision need not be equidistant. In addition, elements of different polynomial degree p can be connected at the standard sides and the refinement sides. The practical use of the transition elements requires suitable automatic or semi-automatic mesh generators that incorporate these elements; within a substructure-oriented modeling approach this can be realized particularly favorably. The paper shows how mesh refinements can be generated effectively by decomposing the overall model into regions with a coarse mesh, a transition mesh, and a fine mesh. The advantages of the presented transition-element concept are comprehensively demonstrated on a practical example from civil engineering.
The authors' own research in applied single- and multi-criteria optimisation of bar structures, together with an analysis of the accessible literature on structural synthesis, allows an attempt to be presented here at defining a general algorithm for formulating a structural optimisation problem. The practical value of such an algorithm consists, in the authors' opinion, in enabling a designer to create a correct mathematical model of a synthesis problem, independently of the known mathematical methods employed for seeking an unconditional extremum of a function of several variables. The proposed algorithm is not a ready-to-use tool for solving all optimisation problems; rather, it constitutes an easy-to-expand theoretical basis. This basis should allow a designer to construct a proper set of compromises on the way to a mathematical model of a specific optimisation problem. The algorithm presented in the paper is constructed as a sequence of successive problem questions, which the designer answers yes or no, and a set of selections from a knowledge base consisting of the components of an optimisation problem. The order of the questions adopted by the authors in the algorithm is subjective, but it is supported by their experience both in applied optimisation and in the design of structures such as trusses and frames.
Maxwell's equations can be rewritten in terms of a Dirac operator D+a. The advantage of this setting is that Maxwell's equations are treated as a system of first-order differential equations. To ensure the uniqueness of the solution of a non-homogeneous differential equation in the whole space, additional conditions are needed.
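For context, the factorization usually behind such a formulation in the quaternionic-analysis literature (standard notation; the specific operator D+a of the abstract may differ in detail): with the Dirac operator
\[
D = \sum_{k=1}^{3} e_k\,\frac{\partial}{\partial x_k}, \qquad D^2 = -\Delta,
\]
the Helmholtz operator factorizes as
\[
\Delta + \alpha^2 = -(D + \alpha)(D - \alpha),
\]
so the time-harmonic Maxwell equations, whose field components satisfy Helmholtz equations, can be treated as a first-order system for an operator of the form \(D+\alpha\).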
A realistic and reliable model is an important precondition for simulating revitalization tasks and estimating the system properties of existing buildings. The main focus here lies on parameter identification, optimization strategies, and the preparation of experiments. As usual, the structures are modeled by the finite element method; this, like other techniques, is based on idealizations and empirical material properties. Within a given theory, the parameters of the model should be approximated through gradually performed experiments and their analysis. This approximation is performed by solving an optimization problem, which is usually non-convex, of high dimension, and possesses a non-differentiable objective function. Therefore we use an optimization procedure based on genetic algorithms, which was implemented using the program package SLang...
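The SLang implementation is not reproduced here; the following is a generic sketch of a genetic algorithm of the kind described, minimizing an invented non-convex, non-differentiable stand-in objective rather than a structural model. All names and parameters are illustrative assumptions.

```python
import random

# Invented non-differentiable, non-convex objective; global minimum at (2, -1).
def objective(p):
    x, y = p
    return abs(x - 2.0) + abs(y + 1.0) + 0.3 * abs(x * y)

def evolve(pop_size=60, generations=120, bounds=(-5.0, 5.0), seed=1):
    rng = random.Random(seed)
    lo, hi = bounds
    pop = [[rng.uniform(lo, hi), rng.uniform(lo, hi)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=objective)                  # rank individuals by fitness
        parents = pop[: pop_size // 2]           # truncation selection (elitist)
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            child = [(ai + bi) / 2.0 for ai, bi in zip(a, b)]  # crossover
            if rng.random() < 0.3:                             # mutation
                i = rng.randrange(2)
                child[i] += rng.gauss(0.0, 0.5)
            children.append(child)
        pop = parents + children
    return min(pop, key=objective)

best = evolve()
print(best, objective(best))
```

Because the best individuals survive each generation, no gradient of the objective is ever needed, which is what makes such procedures attractive for the non-differentiable identification problems mentioned above.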
Models in the context of engineering can be classified into process-based and data-based models. Whereas a process-based model describes the problem by an explicit formulation, a data-based model is often used where no such mapping can be found due to the high complexity of the problem. The Artificial Neural Network (ANN) is a data-based model that is able to “learn” a mapping from a set of training patterns. This paper deals with the application of ANNs in time-dependent bathymetric models. A bathymetric model is a geometric representation of the sea bed. Typically, a bathymetry is measured and then described by a finite set of measured data; measuring at different time steps leads to a time-dependent bathymetric model. To obtain a continuous surface, the measured data have to be interpolated by some interpolation method. Unlike explicitly given interpolation methods, the presented time-dependent bathymetric model using an ANN trains the approximated surface in space and time in an implicit way. The ANN is trained with topographic measured data consisting of the location (x,y) and the time t; in other words, the ANN is trained to reproduce the mapping h = f(x,y,t), and afterwards it is able to approximate the topographic height for a given location and date. In a further step, this model is extended to take meteorological parameters into account, which leads to a model of a more predictive character.
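The idea "train a network to reproduce h = f(x, y, t)" can be sketched as follows: a tiny one-hidden-layer network is fitted by gradient descent to an invented smooth surface. The paper's actual network architecture, measurement data, and training scheme are not reproduced; everything below is an assumption for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)
X = rng.uniform(-1.0, 1.0, size=(400, 3))              # columns: x, y, t
h = np.sin(X[:, 0]) + 0.5 * X[:, 1] - 0.2 * X[:, 2]    # invented "sea-bed" height
h = h[:, None]

W1 = rng.normal(0.0, 0.5, size=(3, 16)); b1 = np.zeros(16)
W2 = rng.normal(0.0, 0.5, size=(16, 1)); b2 = np.zeros(1)

def forward(X):
    a = np.tanh(X @ W1 + b1)       # hidden layer activations
    return a, a @ W2 + b2          # linear output: predicted height

lr = 0.05
for _ in range(2000):              # full-batch gradient descent on the MSE
    a, pred = forward(X)
    err = pred - h
    gW2 = a.T @ err / len(X); gb2 = err.mean(0)
    hid = (err @ W2.T) * (1.0 - a**2)            # backprop through tanh
    gW1 = X.T @ hid / len(X); gb1 = hid.mean(0)
    W2 -= lr * gW2; b2 -= lr * gb2; W1 -= lr * gW1; b1 -= lr * gb1

mse = float(((forward(X)[1] - h) ** 2).mean())
print(mse)
```

After training, the network evaluates a height for any (x, y, t), which is the implicit, continuous surface the abstract contrasts with explicit interpolation formulas.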
Computing in civil engineering, or construction informatics for short, has developed continuously in Germany over the past years and has gained a firm place in the civil engineering faculties of German universities. Initially this development was strongly focused on computing the physical behavior of structures. In recent years further fields have been added, most notably drawing production (CAD). These developments have reached a high degree of maturity and have a firm place in teaching and research in construction informatics. The role of construction informatics in general has developed rapidly in recent years: the breadth of applications has grown steadily, and the demand for end-to-end usability of all information processed in this context is raised ever more strongly. Two problem areas in particular are at the center of many developments in the construction industry and the corresponding software houses: the development and use of technical models to support the construction process, and the support of business-management requirements in the construction industry. The second requirement in particular has hardly been a subject of construction informatics so far; current efforts, above all from practice, are working to change this. The Faculty of Civil Engineering at the Bauhaus-Universität is beginning to meet these demands by continuously creating the necessary prerequisites to address this subject competently in teaching and research. External lecturers who are at the forefront of the corresponding developments are involved in teaching. This is intended to initiate a continuous further development that helps to ensure an up-to-date education of future civil engineers.
The failure probability with respect to a limit state is usually determined by the integral I of the joint density of the basic variables over the failure domain. A closed-form solution is possible only in the special case of normally distributed basic variables with a linear limit-state equation. In other cases, various approximation methods are in use, based on the moments of the basic variables and suitably chosen indices as safety measures. Greater accuracy is offered by the first- and second-order reliability theories, which likewise start from I. The paper presents a novel method whose starting point is not I but the force method, a standard algorithm of structural engineering. Including the governing random variables in the matrix of coefficients and in the load terms generalizes the system of elasticity equations to a random system of elasticity equations. Its solution, obtained by passing to a deterministic substitute system, yields the statically indeterminate forces as functions of the random variables acting in the system (e.g. Young's modulus of the members and the loading). Since this relationship is available analytically, the effect of individual random influences on the statically indeterminate forces and the resulting safety-relevant state variables can be assessed. The density function of the limit-state equation can be computed or determined by simulation; from this, I follows. Random variables that are not normally distributed are taken into account by expansion in orthogonal polynomials of Gaussian random variables.
For more than fifty years, methods of probability theory have been used to investigate structural safety. Despite the progress achieved and the obvious advantages, this approach has not yet gained a sufficient foothold in practice. The paper treats the problem of structural safety with a novel method. In contrast to the usual probabilistic methods, it does not start from distribution functions; instead, the governing random variables themselves are placed at the center and introduced directly into the computational procedure. The mathematical tool is the Wiener chaos polynomials. In the space of random variables with bounded variance they form a basis in which an arbitrary random variable can be expanded in orthogonal polynomials of Gaussian random variables. This yields an effective formalism that closely follows the conventional displacement method and may be regarded as its probabilistic generalization. The method delivers the limit-state condition as a function of the random variables acting on the structure, so the failure probability can be determined by Monte Carlo simulation. The difficulties associated with evaluating the probability integral of the First Order Reliability Method (FORM) are avoided. Using an example structure, it is shown how changes in certain design parameters affect the failure probability.
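For readers unfamiliar with the tool: the Wiener chaos expansion of a finite-variance random variable X in terms of a standard Gaussian variable reads, in the one-dimensional case and in standard notation,
\[
X = \sum_{k=0}^{\infty} a_k\, H_k(\xi), \qquad \xi \sim \mathcal{N}(0,1), \qquad \mathbb{E}\!\left[H_j(\xi)\,H_k(\xi)\right] = j!\,\delta_{jk},
\]
where \(H_k\) are the probabilists' Hermite polynomials; by the orthogonality relation the coefficients follow as \(a_k = \mathbb{E}[X\,H_k(\xi)]/k!\).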
The present situation in structural design is characterized by the cooperative interaction of a larger number of specialists from different disciplines (architecture, structural design, etc.) in temporary project consortia. Coordinating the resulting complex, dynamic, and networked planning processes frequently leads to planning deficiencies and losses in quality. This article shows how agent technology can provide approaches to improving the planning situation. To this end, an agent model for networked-cooperative structural design is presented and demonstrated using the design of a pedestrian arch bridge. The agent model captures (1) the participating specialist planners and organizations, (2) the structure-specific planning processes, (3) the associated (partial) product models, and (4) the engineering software used. From this, the three submodels (1) agent-based cooperation model, (2) agent-based product-model integration, and (3) model for agent-based software integration are derived. The focus of the article is on the presentation of the agent-based cooperation model.
The tasks of civil engineering require direct access to objects of a data base that are distributed over the working memory of a session and several binary files. In the methods of an application, every object should be directly addressable by its name as a persistent identifier, independently of where it is stored. When needed, objects should be loaded automatically from files. The user should be able to determine in which file a particular object is stored. The access time to an object should on average be comparable to the access time to an object in a Java method. A concept for a generalized data base is presented, and its capabilities are compared with existing software for the storage and management of data in civil engineering. It proves useful to distinguish strictly between persistent and transient objects already in the design of an application. All persistent objects of the application are named. Unnamed persistent objects of the Java platform, for example collections and graphical objects, can also be stored with a handle (a name assigned by the data base). Access to an object is fast, since only primitive data types and strings are stored in binary form, without recourse to databases.
In recent years, research and development in structural design software has focused on extending the functional scope. As a consequence, it has become necessary to make this increased functionality accessible to the widest possible range of users through working environments designed in an engineering-appropriate way, enabling efficient and low-error work. From the structural designer's point of view, engineering-appropriate software must exhibit a user-software interaction adapted to the specific workflow; the required functionalities must be integrated into a uniform system, and adaptability by the user must be ensured. Meeting these requirements with conventional means would demand a disproportionately high development effort. Consequently, from the software developer's point of view, a modern software architecture for structural design must additionally provide a higher degree of reuse and independent extensibility. This paper presents a concept based on compound documents that allows standard software and domain-specific software components to be combined into an engineering-appropriate working environment. The analysis and documentation of a structural element, including the associated data management, can thus take place within one compound document. At the same time, the degree of software reuse can be raised beyond the level achieved so far by defining a component framework as an independently extensible software architecture and by employing software components with their own user interfaces. The feasibility of the concept is demonstrated by a pilot implementation.
The treatment of geometric singularities in solving boundary value problems of elastostatics places increased demands on the mathematical modeling of the boundary value problem and requires specially adapted computational methods for efficient evaluation. This work deals with the systematic generalization of the method of complex stress functions to three-dimensional space, with the emphasis primarily on the justification of the mathematical method with particular regard to practical applicability. The theoretical framework is provided by the theory of quaternion-valued functions. Accordingly, the class of monogenic functions is used as a basis to prove, in the first part of the work, a spatial analogue of Goursat's representation theorem and to construct generalized Kolosov-Muskhelishvili formulas. In view of the method's diverse fields of application, the second part of the work deals with the local and global approximation of monogenic functions. To this end, complete orthogonal systems of monogenic spherical functions are constructed, and as a result novel representations of the canonical series expansions (Taylor, Fourier, Laurent) are defined. In analogy to the complex power and Laurent series based on the holomorphic powers of z, these monogenic orthogonal series generalize all essential properties with respect to the hypercomplex derivative and the monogenic primitive. Using representative examples, the qualitative and numerical properties of the developed function-theoretic methods are finally evaluated. In this context, several further fields of application within spatial function theory are considered that require the special structural properties of the monogenic power and Laurent series expansions.
In this paper the influence of changes in the mean wind velocity, the wind-profile power-law coefficient, the drag coefficient of the terrain, and the structural stiffness is investigated on structural models of different complexity. The paper gives a short introduction to wind-profile models and to the approach by A. G. Davenport for computing the structural reaction to wind-induced vibrations. First, this approach is demonstrated with the help of a simple example (a skyscraper). This simple example gives the reader the possibility to study the differences in variance when changing one of the above-mentioned parameters and to see the influence of structural models of different complexity on the result. Furthermore, an approach for estimating the required discretization level is given. With this knowledge, the design of structural models can be based on a deeper understanding of the different behavior of the individual models.
Rectangular steel frames subjected to strong ground motion are considered. Their behavior factor is numerically evaluated using nonlinear time-history analysis and different ground-acceleration records. The behavior factor is determined assuming that a severe collapse mechanism occurs throughout the time history. The system of equations is transformed into a single equation, and then the energy-balance concept is applied. An expression for the behavior factor is derived, its application to a four-story, two-bay steel frame is illustrated, and the corresponding results are discussed.
Thin-walled spatial structures are widely used in modern technology and building. In the fuel industry, reservoirs of various capacities are used for the long-term storage of oil and gas; for technological reasons they may be buried in the soil. Reservoir shells combine high toughness with low specific material consumption. At the same time, being under the soil, they are subjected to steady-state and dynamic loads from the surrounding medium, which, particularly when the reservoir is empty, can lead to a loss of stability of its form. On the other hand, the contact interaction of shell and soil depends greatly on the features of the surrounding medium and its saturation with liquid. To build generalized models of a porous elastic medium saturated with liquid, Biot's equations of motion for the displacements of the solid and fluid phases can be used. The elaboration of such mathematical interaction models and their realization by means of modern computing software allows the behaviour of spatial thin-walled structures to be studied on the basis of the geometrically nonlinear theory of shells.
On the basis of the little material available (an architectural plan and some photographs), a computer model is developed for the bullet-shaped dome of the Belgian Congo pavilion, created by the architect Henry Lacoste for the International Colonial Exhibition of 1931 in Paris. The ingenious and elegant wooden skeleton of the dome is approximated in two stages. The first approximation focusses on the curves traced on the dome by the wooden laminae, which appear to be loxodromes, cutting the meridians at a constant angle. In a second approximation the very specific joints of the laminae are taken into consideration. The resulting computer image shows an astonishing resemblance to the photographs. Finally, the shapes and dimensions of all laminae are calculated, enabling a possible reconstruction of the dome.
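A loxodrome as mentioned above can be generated with a few lines of code. The sketch below traces a rhumb line on a unit sphere, i.e. a curve crossing every meridian at the same constant angle; the radius, angle, and latitude span are illustrative assumptions, and the dome in the paper is bullet-shaped rather than spherical.

```python
import math

def loxodrome(beta_deg, n=100, radius=1.0):
    """Sample points of a rhumb line crossing meridians at angle beta_deg."""
    beta = math.radians(beta_deg)
    pts = []
    for i in range(n + 1):
        lat = math.radians(-80 + 160 * i / n)   # sweep of latitude (illustrative)
        # Along a rhumb line, longitude grows linearly with the Mercator
        # ordinate ln tan(pi/4 + lat/2), with slope tan(beta).
        lon = math.tan(beta) * math.log(math.tan(math.pi / 4 + lat / 2))
        x = radius * math.cos(lat) * math.cos(lon)
        y = radius * math.cos(lat) * math.sin(lon)
        z = radius * math.sin(lat)
        pts.append((x, y, z))
    return pts

pts = loxodrome(70.0)
```

Connecting the sampled points yields the spiral-like curve that the wooden laminae of the dome appear to follow.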
Euclidean Clifford analysis is a higher-dimensional function theory offering a refinement of classical harmonic analysis. The theory is centered around the concept of monogenic functions, i.e. null solutions of a first-order vector-valued rotation-invariant differential operator called the Dirac operator, which factorizes the Laplacian. More recently, Hermitean Clifford analysis has emerged as a new and successful branch of Clifford analysis, offering yet a refinement of the Euclidean case; it focusses on the simultaneous null solutions, called Hermitean (or h-) monogenic functions, of two Hermitean Dirac operators which are invariant under the action of the unitary group. In Euclidean Clifford analysis, the Clifford-Cauchy integral formula has proven to be a cornerstone of the function theory, as is the case for the traditional Cauchy formula for holomorphic functions in the complex plane. Previously, a Hermitean Clifford-Cauchy integral formula was established by means of a matrix approach. This formula reduces to the traditional Martinelli-Bochner formula for holomorphic functions of several complex variables when taking functions with values in an appropriate part of complex spinor space. This means that the theory of Hermitean monogenic functions should also encompass other results of several-variable complex analysis as special cases. Here we elaborate further on the obtained results and refine them, considering fundamental solutions, Borel-Pompeiu representations and the Teodorescu inversion, each of them developed at different levels, including the global level, handling vector variables, vector differential operators and the Clifford geometric product, as well as the blade level, where variables and differential operators act by means of the dot and wedge products. A rich world of results reveals itself, indeed including well-known formulae from the theory of several complex variables.
The conventional way of describing an image is in terms of its canonical pixel-based representation. Other image description techniques are based on image transformations. Such an image transformation converts a canonical image representation into a representation in which specific properties of an image are described more explicitly. In most transformations, images are locally approximated within a window by a linear combination of a number of a priori selected patterns; the coefficients of such a decomposition then provide the desired image representation. The Hermite transform is an image transformation technique introduced by Martens. It uses overlapping Gaussian windows and projects images locally onto a basis of orthogonal polynomials. As the analysis filters needed for the Hermite transform are derivatives of Gaussians, Hermite analysis is in close agreement with the information analysis carried out by the human visual system. In this paper we construct a new higher-dimensional Hermite transform within the framework of quaternionic analysis. The building blocks for this construction are the Clifford-Hermite polynomials rewritten in terms of quaternionic analysis. Furthermore, we compare this newly introduced Hermite transform with the quaternionic-Hermite continuous wavelet transform. The continuous wavelet transform is a signal-analysis technique suitable for non-stationary, inhomogeneous signals for which Fourier analysis is inadequate. Finally, the developed three-dimensional filter functions of the quaternionic-Hermite transform are tested on traditional scalar benchmark signals for their selectivity in detecting pointwise singularities.
Iso-parametric finite elements with linear shape functions generally show too stiff an element behavior, called locking. When structural parts under bending loads are investigated, so-called shear locking appears, because these elements cannot reproduce pure bending modes. Many studies have dealt with the locking problem, and a number of methods to avoid the undesirable effects have been developed. Two well-known methods are the >Assumed Natural Strain< (ANS) method and the >Enhanced Assumed Strain< (EAS) method. In this study the EAS method is applied to a four-node plane element with four EAS parameters. The paper describes the well-known linear formulation, its extension to nonlinear materials, and the modeling of material uncertainties with random fields. For nonlinear material behavior the EAS parameters cannot be determined directly. Here the problem is solved by using an internal iteration at the element level, which is much more efficient and stable than determination via a global iteration. To verify the deterministic element behavior, the results of common test examples are presented for linear and nonlinear materials. The modeling of material uncertainties is done by point-discretized random fields. To show the applicability of the element to stochastic finite element calculations, Latin Hypercube Sampling was applied to investigate the stochastic hardening behavior of a cantilever beam with nonlinear material. The enhanced linear element can be applied as an alternative to higher-order finite elements, where more nodes are necessary. The presented element formulation can be used in a similar manner to improve stochastic linear solid elements.
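For orientation, the core of the EAS method in standard textbook notation (not taken from this paper): the compatible strain field is enriched by incompatible enhanced modes,
\[
\boldsymbol{\varepsilon} \;=\; \mathbf{B}\,\mathbf{u} \;+\; \mathbf{M}\,\boldsymbol{\alpha},
\]
where the enhanced part is required to be orthogonal to the stress field over the element domain,
\[
\int_{\Omega_e} \boldsymbol{\sigma}^{\mathsf T}\, \mathbf{M}\,\boldsymbol{\alpha}\; \mathrm{d}\Omega \;=\; 0,
\]
so that the internal parameters \(\boldsymbol{\alpha}\) carry no nodal degrees of freedom and can be eliminated at the element level by static condensation; for nonlinear materials this element-level elimination is precisely where the internal iteration mentioned in the abstract takes place.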
In the context of finite element model updating using output-only vibration test data, natural frequencies and mode shapes are used as validation criteria. Consequently, the correct pairing of experimentally obtained and numerically derived natural frequencies and mode shapes is important. In many cases, only limited spatial information is available and noise is present in the measurements. Therefore, the automatic selection of the most likely numerical mode shape corresponding to a particular experimentally identified mode shape can be a difficult task. The most common criterion for indicating corresponding mode shapes is the modal assurance criterion. Unfortunately, this criterion fails in certain cases and is not reliable for automatic approaches. In this paper, the purely mathematical modal assurance criterion will be enhanced by additional physical information from the numerical model in terms of modal strain energies. A numerical example and a benchmark study with experimental data are presented to show the advantages of the proposed energy-based criterion in comparison to the traditional modal assurance criterion.
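The classical modal assurance criterion that the abstract starts from can be stated in a few lines; a value near 1 indicates well-correlated mode shapes, a value near 0 uncorrelated ones. This sketches only the standard criterion, not the proposed energy-based enhancement (which additionally brings in modal strain energies from the numerical model); the example vectors are invented.

```python
import numpy as np

def mac(phi_a, phi_b):
    """Modal assurance criterion: |phi_a^H phi_b|^2 / ((phi_a^H phi_a)(phi_b^H phi_b))."""
    num = abs(np.vdot(phi_a, phi_b)) ** 2
    den = np.vdot(phi_a, phi_a).real * np.vdot(phi_b, phi_b).real
    return num / den

phi1 = np.array([1.0, 2.0, 3.0])
assert abs(mac(phi1, 2.5 * phi1) - 1.0) < 1e-12   # MAC is insensitive to scaling
print(mac(phi1, np.array([3.0, -1.0, 0.0])))      # nearly orthogonal pair: small MAC
```

The scaling invariance shown by the assertion is also the criterion's weakness: with few sensors, physically different shapes can still yield high MAC values, which motivates the additional physical information used in the paper.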
In the context of finite element model updating using vibration test data, natural frequencies and mode shapes are used as validation criteria. Consequently, the order of natural frequencies and mode shapes is important. As only limited spatial information is available and noise is present in the measurements, the automatic selection of the most likely numerical mode shape corresponding to a measured mode shape is a difficult task. The most common criterion to indicate corresponding mode shapes is the modal assurance criterion. Unfortunately, this criterion fails in certain cases. In this paper, the purely mathematical modal assurance criterion will be enhanced by additional physical information from the numerical model in terms of modal strain energies. A numerical example and a benchmark study with real measured data are presented to show the advantages of the enhanced energy-based criterion in comparison to the traditional modal assurance criterion.
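For readers unfamiliar with the criterion both abstracts start from, the classical modal assurance criterion between two mode-shape vectors can be computed as follows. This is a minimal sketch of the standard definition; the energy-enhanced variant proposed in the papers is not reproduced here:

```python
import numpy as np

def mac(phi_a, phi_b):
    """Modal Assurance Criterion between two mode-shape vectors.

    Returns a value in [0, 1]: 1 if the shapes coincide up to scaling,
    0 if they are orthogonal in the ordinary vector sense.
    """
    # np.vdot conjugates its first argument, so complex shapes work too
    num = np.abs(np.vdot(phi_a, phi_b)) ** 2
    den = np.vdot(phi_a, phi_a).real * np.vdot(phi_b, phi_b).real
    return float(num / den)

phi = np.array([1.0, 2.0, -1.0])
```

Because the MAC is invariant to mode-shape scaling, it cannot distinguish shapes that look similar at the few measured degrees of freedom, which is exactly the failure mode the energy-based enhancement addresses.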
Working on construction projects requires a high degree of expertise from various disciplines, and a multitude of specialized domain models is used. To transfer data from other planners into one's own, newly created domain model, the available contents from the various models must be adapted by the planner according to his requirements and supplemented with specific contents. This gives rise to relationships that reveal the interconnections and dependencies among the domain models. A predefinition of the model network described by these relationships that is both universally valid and complete is hardly possible. For computer-internal representation, the model network is therefore decomposed into partial models and corresponding links. The nature of the resulting network of relationships depends both on the quality of the data models and on the descriptive quality of the link types, whose definition requires a high degree of expertise. A concept for structuring and decomposing the connections into basic elements, together with the possibility of integrating them into more complex elements, enables simpler creation, maintenance and adaptation of comprehensive construction-related content. To ensure a high-quality description of the model network, access to the specification and adaptation of the relationship definitions geared to the engineer's abilities is indispensable.
Data exchange, data models and product data models have been research topics for several years. Various research projects and initiatives by numerous companies have led to cross-domain approaches such as IFC and various STEP application protocols. In steel construction specifically, the projects >Produktschnittstelle Stahlbau< and >CIMsteel< have been developed, refined and revised. As a further development of the existing exchange formats, newer approaches attempt to extend their usefulness beyond pure data transmission. These proposed solutions integrate aspects of communication, collaboration and management, and additionally take on tasks of data and model administration. The result is a digital representation incorporating all acquired data. Owing to the special boundary conditions in civil engineering, a building model is composed of interrelated domain models.
SLang - the Structural Language : Solving Nonlinear and Stochastic Problems in Structural Mechanics
(1997)
Recent developments in structural mechanics indicate an increasing need for numerical methods to deal with stochasticity. This process started with the modeling of loading uncertainties. More recently, system uncertainties, such as physical or geometrical imperfections, are also modeled in probabilistic terms. Clearly, this task requires a close connection of structural modeling with probabilistic modeling. Nonlinear effects are essential for a realistic description of the structural behavior. Since modern structural analysis relies quite heavily on the Finite Element Method, it seems reasonable to base stochastic structural analysis on this method. Commercially available software packages cover deterministic structural analysis in a very wide range. However, the applicability of these packages to stochastic problems is rather limited. On the other hand, there are a number of highly specialized programs for probabilistic or reliability problems which can be used only in connection with rather simplistic structural models. In principle, both kinds of software could be combined to achieve the goal. The major difficulty which then arises in practical computation is to define the most suitable way of transferring data between the programs. To circumvent these problems, the software package SLang (Structural Language) has been developed. SLang is a command interpreter which acts on a set of relatively complex commands. Each command takes input from and gives output to simple data structures (data objects), such as vectors and matrices. All commands communicate via these data objects, which are stored in memory or on disk. The paper shows applications to structural engineering problems, in particular failure analysis of frames and shell structures with random loads and random imperfections. Both geometrical and physical nonlinearities are taken into account.
This paper proposes an adaptive atomistic- continuum numerical method for quasi-static crack growth. The phantom node method is used to model the crack in the continuum region and a molecular statics model is used near the crack tip. To ensure self-consistency in the bulk, a virtual atom cluster is used to model the material of the coarse scale. The coupling between the coarse scale and fine scale is realized through ghost atoms. The ghost atom positions are interpolated from the coarse scale solution and enforced as boundary conditions on the fine scale. The fine scale region is adaptively enlarged as the crack propagates and the region behind the crack tip is adaptively coarsened. An energy criterion is used to detect the crack tip location. The triangular lattice in the fine scale region corresponds to the lattice structure of the (111) plane of an FCC crystal. The Lennard-Jones potential is used to model the atom–atom interactions. The method is implemented in two dimensions. The results are compared to pure atomistic simulations; they show excellent agreement.
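The Lennard-Jones potential used for the atom-atom interactions has a standard closed form, which a small Python sketch makes concrete (the reduced parameter values are illustrative, not the paper's):

```python
import numpy as np

def lennard_jones(r, epsilon=1.0, sigma=1.0):
    """Lennard-Jones pair potential V(r) = 4*eps*((sigma/r)**12 - (sigma/r)**6).

    epsilon is the well depth, sigma the distance at which V crosses zero.
    """
    sr6 = (sigma / r) ** 6
    return 4.0 * epsilon * (sr6 * sr6 - sr6)

# The potential minimum lies at r = 2**(1/6) * sigma with depth -epsilon
r_min = 2.0 ** (1.0 / 6.0)
```

In an atomistic region like the one described above, the equilibrium lattice spacing of the triangular lattice is tied to this minimum distance.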
For complex foundation structures, planning errors can be avoided through consistent modeling. Manual calculation methods generally do not permit a three-dimensional approach. Numerical methods, such as the finite element method, are an ideal tool for a holistic simulation of the problem. The discretization of complex subsoil structures required for finite element analysis cannot be handled manually. This paper shows how a finite element model can be generated automatically from a geotechnical model, taking into account the specific requirements of the subsoil-structure system and the construction sequence. The treatment of the geometric and mechanical particularities in mesh generation is presented.
As is well known, the approximation theory of complex-valued functions is one of the main fields in function theory. In general, several aspects of approximation and interpolation are only well understood by using methods of complex analysis. It seems natural to extend these techniques to higher dimensions by using Clifford analysis methods or, more specifically, in the lower dimensions 3 or 4, by using tools of quaternionic analysis. One starting point for such attempts has to be the suitable choice of complete orthonormal function systems that should replace the holomorphic function systems used in the complex case. The aim of our contribution is the construction of a complete orthonormal system of monogenic polynomials derived from a harmonic function system by systematically using the generalized quaternionic derivative.
A UNIFIED APPROACH FOR THE TREATMENT OF SOME HIGHER DIMENSIONAL DIRAC TYPE EQUATIONS ON SPHERES
(2010)
Using Clifford analysis methods, we provide a unified approach to obtain explicit solutions of some partial differential equations combining the n-dimensional Dirac and Euler operators, including generalizations of the classical time-harmonic Maxwell equations. The obtained regular solutions show strong connections between hypergeometric functions and homogeneous polynomials in the kernel of the Dirac operator.
The aim of this talk is to show that the methods used by Métivier and Lapidus to study the eigenvalue distribution of elliptic operators (e.g., of the Dirichlet Laplacian) can be adapted to the study of the similar problem for the Stokes operator. In this way we obtain asymptotic formulae for the eigenvalues of the latter operator even when the underlying domain has an extremely irregular (fractal) boundary. When the boundary is not that irregular (e.g., when it is Lipschitz), the estimates we obtain are much better than those found in the current literature.
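For orientation, the classical Weyl asymptotics that this line of work refines reads, for the Dirichlet Laplacian on a bounded domain $\Omega \subset \mathbb{R}^n$ (the Stokes analogue discussed in the talk is not reproduced here):

```latex
N(\lambda) \;=\; \#\{k : \lambda_k \le \lambda\}
\;\sim\; \frac{\omega_n\,|\Omega|}{(2\pi)^n}\,\lambda^{n/2},
\qquad \lambda \to \infty,
```

where $\omega_n$ denotes the volume of the unit ball in $\mathbb{R}^n$. For fractal boundaries the interest lies in the size of the second-order (boundary) correction term.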
The application of a recent method using formal power series is proposed. It is based on a new representation for solutions of Sturm-Liouville equations. This method is used to calculate the transmittance and reflectance coefficients of finite inhomogeneous layers with high accuracy and efficiency. Tailoring the refraction index profile defining the inhomogeneous media it is possible to develop very important applications such as optical filters. A number of profiles were evaluated and then some of them selected in order to perform an improvement of their characteristics via the modification of their profiles.
Electromagnetic wave propagation is currently present in the vast majority of situations occurring in everyday life, whether in mobile communications, DTV, satellite tracking, broadcasting, etc. Because of this, the study of increasingly complex media for the propagation of electromagnetic waves has become necessary in order to optimize resources and increase the capabilities of devices, as required by the growing demand for such services.
In electromagnetic wave propagation, different parameters are considered that characterize it under various circumstances; of particular importance are the reflectance and transmittance. There are several methods for the analysis of the reflectance and transmittance, such as the method of approximation by boundary condition, the plane wave expansion (PWE) method, etc., but this work focuses on the WKB and SPPS methods.
The implementation of the WKB method is relatively simple but is found to be efficient only when working at high frequencies. The SPPS (Spectral Parameter Power Series) method, based on the theory of pseudoanalytic functions, solves this problem through a new representation for solutions of Sturm-Liouville equations and has recently proven to be a powerful tool for solving different boundary value and eigenvalue problems. Moreover, it has a structure very suitable for numerical implementation, which in this case was carried out in Matlab for the evaluation of both conventional and turning-point profiles.
The comparison between the two methods allows us to obtain valuable information about their performance, which is useful for determining the validity and suitability of their application to problems where these parameters are calculated in real-life applications.
Hyperbolic Qp-scales
(2003)
The Qp-scales were first introduced in [1] as interpolation spaces between the Bloch and Dirichlet spaces in the complex plane. ... However, such a treatment has the disadvantage of only considering the Euclidean case. In order to obtain an approach to homogeneous hyperbolic manifolds, the projective model of Gel'fand was taken up again in [2]. With the help of a convenient fundamental solution for the hyperbolic operator Dα (homogeneous of degree α; see [5]), equivalent Qp-scales for homogeneous hyperbolic spaces were introduced in [7] and [3]. In this talk we present and study some properties of this hyperbolic scale.
In machine learning, if the training data is independently and identically distributed as the test data, a trained model can make accurate predictions for new samples of data. Conventional machine learning depends strongly on massive amounts of domain-specific training data to capture latent patterns. In contrast, domain adaptation and transfer learning are sub-fields of machine learning concerned with solving the inescapable problem of insufficient training data by relaxing the domain-dependence hypothesis. In this contribution, this issue is addressed: by combining both methods in a novel way, we develop a computationally efficient and practical algorithm to solve boundary value problems based on nonlinear partial differential equations. We adopt a meshfree analysis framework to integrate the prevailing geometric modelling techniques based on NURBS and present an enhanced deep collocation approach that also plays an important role in the accuracy of the solutions. We start with a brief introduction on how these methods expand upon this framework. We observe an excellent agreement between these methods and show how fine-tuning a pre-trained network to a specialized domain may lead to outstanding performance compared to existing approaches. As proof of concept, we illustrate the performance of our proposed model on several benchmark problems.
A phantom-node method is developed for three-node shell elements to describe cracks. This method can treat arbitrary cracks independently of the mesh. The crack may cut elements completely or partially. Elements are overlapped at the position of the crack, and they are partially integrated to implement the discontinuous displacement across the crack. To handle an element containing a crack tip, a new kinematical relation between the overlapped elements is developed. No enrichment function is needed for the discontinuous displacement field. Several numerical examples are presented to illustrate the proposed method.
This paper presents a strain smoothing procedure for the extended finite element method (XFEM). The resulting “edge-based” smoothed extended finite element method (ESm-XFEM) is tailored to linear elastic fracture mechanics and, in this context, to outperform the standard XFEM. In the XFEM, the displacement-based approximation is enriched by the Heaviside and asymptotic crack tip functions using the framework of partition of unity. This eliminates the need for the mesh alignment with the crack and re-meshing, as the crack evolves. Edge-based smoothing (ES) relies on a generalized smoothing operation over smoothing domains associated with edges of simplex meshes, and produces a softening effect leading to a close-to-exact stiffness, “super-convergence” and “ultra-accurate” solutions. The present method takes advantage of both the ES-FEM and the XFEM. Thanks to the use of strain smoothing, the subdivision of elements intersected by discontinuities and of integrating the (singular) derivatives of the approximation functions is suppressed via transforming interior integration into boundary integration. Numerical examples show that the proposed method improves significantly the accuracy of stress intensity factors and achieves a near optimal convergence rate in the energy norm even without geometrical enrichment or blending correction.
DETERMINATION OF THE DYNAMIC STRESS INTENSITY FACTOR USING ADVANCED ENERGY RELEASE EVALUATION
(2000)
In this study a simple and effective procedure, practically based upon the FEM, is developed and applied for determining the dynamic stress intensity factor (DSIF) as a function of the input frequency, using an advanced strain energy release evaluation through the simultaneous release of a set of fictitious nodal spring links near the crack tip. The DSIF is expressed in terms of the released energy per unit crack length. The formulations of linear fracture mechanics are adopted. The technique is theoretically based on the eigenvalue problem for the assessment of the spring stiffnesses and on the modal decomposition of the crack shape. The inertial effects are included in the released energy. A linear elastic material, time-dependent loading of sine type and steady-state response of the structure are assumed. The procedure allows the opening, sliding and mixed modes of structural fracture to be studied. This rational and powerful technique requires a mesh refinement near the crack tip. A numerical test example of a square notched steel plate under tension is given; only the opening mode of fracture is studied. The DSIF is calculated using a coarse mesh with a single node release for the released-energy computation, as well as a fine mesh with simultaneous release of four links for more accurate values. The results are analyzed, and comparisons with the known exact results for static loading are presented. The values of the DSIF are significantly larger than the values of the corresponding static SIF, and significant peaks of the DSIF are observed near the natural frequencies. The approach is general, practicable, reliable and versatile.
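The abstract expresses the DSIF through the released energy per unit crack length; in the static limit this corresponds to the classical Irwin relation of linear elastic fracture mechanics, stated here for reference (the paper's dynamic evaluation additionally includes inertial effects, which the static relation omits):

```latex
G \;=\; \frac{K_I^2}{E'},
\qquad
E' =
\begin{cases}
E, & \text{plane stress},\\[4pt]
\dfrac{E}{1-\nu^2}, & \text{plane strain},
\end{cases}
```

with $G$ the energy release rate, $K_I$ the mode-I stress intensity factor, $E$ Young's modulus and $\nu$ Poisson's ratio.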
The paper describes a development of the analytical finite strip method (FSM) in displacements for the linear elastic static analysis of complex orthotropic prismatic shell structures, simply supported at their transverse ends, with an arbitrary open or closed deformable cross-section contour under general external loads. A number of bridge superstructures, some roof structures and others belong to the studied class. By longitudinal sections the prismatic thin-walled structure is discretized into a limited number of plane straight strips which are connected continuously at their longitudinal ends along linear joints. The three displacements of points on the joint lines and the rotation about these lines are taken as the basic unknowns. In the longitudinal direction of the strips the unknown quantities and external loads are represented by single Fourier series. In the transverse direction of each strip the unknown values are expressed by hyperbolic functions representing an exact solution of the corresponding differential equations of the plane straight strip. The basic equations and relations for the membrane state, the bending state and the total state of the finite strip are obtained. The rigidity matrix of the strip in the local and global co-ordinate systems is derived. The basic relations of the structure are given and the general stages of the analytical FSM are traced. For long structures the FSM is more efficient than the classic finite element method (FEM), since the problem dimension is reduced by one and the number of unknowns decreases. In comparison with the semi-analytical FSM, the analytical FSM leads to a practically exact solution, especially for wider strips, and provides compatibility of the displacements and internal forces along the longitudinal linear joints.
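For simply supported transverse ends, the single Fourier series mentioned in the abstract typically takes the following form for a strip displacement such as the normal deflection $w$; this is a sketch of the standard ansatz, not necessarily the paper's exact notation:

```latex
w(x, y) \;=\; \sum_{m=1}^{M} f_m(y)\,\sin\frac{m\pi x}{L},
```

where $L$ is the structure length, $x$ the longitudinal coordinate, and the transverse functions $f_m(y)$ are obtained here in closed form (hyperbolic functions) from the strip differential equations, which is what distinguishes the analytical FSM from the semi-analytical variant with its polynomial transverse shape functions.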
COMPARISON OF SOME VARIANTS OF THE FINITE STRIP METHOD FOR ANALYSIS OF COMPLEX SHELL STRUCTURES
(2000)
The subject of this paper is to explore and evaluate the semi-analytical, analytical and numerical versions of the finite strip method (FSM) for static, dynamic and stability analyses of complex thin-walled structures. Many bridge superstructures, some roof and floor structures, reservoirs, channels, tunnels, subways, layered shells and plates etc. can be analysed by this method. In both the semi-analytical and analytical variants, beam eigenvalue vibration or stability functions, orthogonal polynomials, or products of these functions are used as longitudinal functions of the unknowns. In the numerical FSM, spline longitudinal displacement functions are implemented. In the semi-analytical and numerical FSM, conventional transverse shape functions for displacements are used. In the analytical FSM, the exact function of the strip normal displacement and the plane stress function are applied. These three basic variants of the FSM are compared qualitatively and quantitatively with respect to the following: basic ideas, modelling, unknowns, DOF, kind and order of the strips, longitudinal and transverse displacement and stress functions, compatibility requirements, boundary conditions, ways of obtaining the strip stiffness and load matrices, kind and size of the structure stiffness matrix and its bandwidth, mesh density, necessary number of terms along the length, accuracy and convergence of the stresses and displacements, approaches for refining results, input and output data, computer resources used, application area, closeness to other methods, and options for future development. A numerical example is presented, advantages and shortcomings are pointed out, and conclusions are given.
The goal of the collaborative research center (SFB 532) >Textile reinforced concrete (TRC): the basis for the development of a new material technology<, installed in 1998 at Aachen University, is a complex assessment of mechanical, chemical, economic and production aspects in an interdisciplinary environment. The research project involves 10 institutes performing parallel research in 17 projects. The coordination of such a research process requires effective software support for information sharing in the form of data exchange, data analysis and data archival. Furthermore, the processes of experiment planning and design, modification of material compositions and design parameters, and development of new material models in such an environment call for systematic coordination applying the concepts of operational research. Flexible organization of the data coming from several sources is a crucial premise for a transparent accumulation of knowledge and, thus, for successful research in the long run. The technical information system (TRC-TIS) developed in the SFB 532 has been implemented as a database-powered web server with a transparent definition of the product and process model. It serves as an intranet server with access domains devoted to the involved research groups. At the same time, it allows the presentation of selected results simply by granting a data object access from the public area of the server via the internet.
MICROPLANE MODEL WITH INITIAL AND DAMAGE-INDUCED ANISOTROPY APPLIED TO TEXTILE-REINFORCED CONCRETE
(2010)
The presented material model reproduces the anisotropic characteristics of textile reinforced concrete in a smeared manner. This includes both the initial anisotropy introduced by the textile reinforcement, as well as the anisotropic damage evolution reflecting fine patterns of crack bridges. The model is based on the microplane approach. The direction-dependent representation of the material structure into oriented microplanes provides a flexible way to introduce the initial anisotropy. The microplanes oriented in a yarn direction are associated with modified damage laws that reflect the tension-stiffening effect due to the multiple cracking of the matrix along the yarn.
In this paper we consider the time-independent Klein-Gordon equation on some conformally flat 3-tori with given boundary data. We set up an explicit formula for the fundamental solution. We show that we can represent any solution to the homogeneous Klein-Gordon equation on the torus as a finite sum over generalized 3-fold periodic elliptic functions that lie in the kernel of the Klein-Gordon operator. Furthermore, we prove Cauchy and Green type integral formulas and set up a Teodorescu and a Cauchy transform for the toroidal Klein-Gordon operator. These in turn are used to derive explicit formulas for the solution of the inhomogeneous Klein-Gordon equation on the 3-torus.
Designing lighting in a 3D scene is a generally complex task in building conception, as it is subject to many constraints such as aesthetics or ergonomics. This is often achieved by experimental trials until an acceptable result is reached. Several rendering software packages (such as Radiance) allow an accurate computation of lighting for each point in a scene, but this is a long process and any modification requires the whole scene to be rendered again to obtain the result. The first guess is empirical, provided by the experience of the operator, and rarely based on scientific considerations. Our aim is to provide a tool helping designers to achieve this work within the scope of global illumination. We consider the problem when certain data are prescribed: on the one hand, the mean lighting in some zones (for example on a desktop), and on the other hand, some qualitative information about the location of sources (spotlights on the ceiling, halogens on the north wall, ...). The system we are designing computes the number of light sources, their positions and intensities, in order to obtain the lighting effects defined by the user. The algorithms we use combine radiosity computations with the resolution of a system of constraints.
The cost of keeping large area urban computer aided architectural design (CAAD) models up to date justifies wider use and access. This paper reviews the potential for collaborative groupwork creation and maintenance of such models and suggests an approach to data entry, data management and generation of appropriate levels of detail models from a Geographic Information System (GIS). Staff at the University of the West of England (UWE) modelled a large area of Bristol to demonstrate millennium landmark proposals. It became swiftly apparent that continued amendment of the model to keep it an accurate reflection of changes on the ground was a major data management problem. Piecing in new CAAD models received from architectural practices, to visualise them in context as part of the planning negotiation process, has often taken staff several days of work for each instance. The model is so complex and proprietary that Bristol City operates a specialist visualisation bureau service. UWE later modelled the environs of the Tower of London to support bids for funding and to provide the context for judging the visual impact of iterative design development. Further research continued to develop more effective approaches. Data conversion and amalgamation from all the diverse sources was the major impediment to effective group working to create the models. It became apparent that a GIS would assist in retrieving all the appropriate data describing the part of the model under creation. It was possible to predict that the management of many historic part models, stepping back through time and allowing different expert interpretations to co-exist, would in itself be a major task requiring a spatial database/GIS. UWE started afresh from the original source data to explore the collaborative use of GIS and Virtual Reality Modelling Language (VRML) to integrate models and interventions from various sources and to generate an overall navigable interactive whole.
Current exploration of the combination of event-driven behaviours and Structured Query Language seeks to define how to appropriately modify objects in the VRML model on demand. This is beginning to realise the potential of this process for: asynchronous group modelling along the lines of a collaborative virtual design studio; historic building maintenance management; visitor management; interpretation of historic sites to visitors; and public planning information.