### Refine

#### Has Fulltext

- yes (183)

#### Document Type

- Conference Proceeding (158)
- Article (20)
- Doctoral Thesis (5)

#### Institute

- In Zusammenarbeit mit der Bauhaus-Universität Weimar (92)
- Professur Informatik im Bauwesen (40)
- Institut für Strukturmechanik (10)
- Professur Informatik in der Architektur (10)
- Professur Angewandte Mathematik (6)
- Professur Theorie und Geschichte der modernen Architektur (5)
- Institut für Konstruktiven Ingenieurbau (4)
- Juniorprofessur Computational Architecture (4)
- Institut für Mathematik-Bauphysik (2)
- Professur Bodenmechanik (2)

#### Keywords

- CAD (183)

In computer-aided design (CAD), industrial products are designed using a virtual 3D model. A CAD model typically consists of curves and surfaces in a parametric representation, in most cases non-uniform rational B-splines (NURBS). The same representation is also used for the analysis, optimization and presentation of the model. In each phase of this process, different visualizations are required to provide appropriate user feedback. Designers work with illustrative and realistic renderings, engineers need a comprehensible visualization of the simulation results, and usability studies or product presentations benefit from using a 3D display. However, the interactive visualization of NURBS models and corresponding physical simulations is a challenging task because of the computational complexity and the limited graphics hardware support.
This thesis proposes four novel rendering approaches that improve the interactive visualization of CAD models and their analysis. The presented algorithms exploit the latest graphics hardware capabilities to advance the state of the art in terms of quality, efficiency and performance. In particular, two approaches describe the direct rendering of the parametric representation without precomputed approximations and time-consuming pre-processing steps. New data structures and algorithms are presented for the efficient partition, classification, tessellation, and rendering of trimmed NURBS surfaces, as well as the first direct isosurface ray-casting approach for NURBS-based isogeometric analysis. The other two approaches introduce the versatile concept of programmable order-independent semi-transparency for the illustrative and comprehensible visualization of depth-complex CAD models, and a novel method for the hybrid reprojection of opaque and semi-transparent image information to accelerate stereoscopic rendering. Both approaches are also applicable to standard polygonal geometry, which makes them a contribution to the computer graphics and virtual reality research communities.
The evaluation is based on real-world NURBS-based models and simulation data. The results show that rendering can be performed directly on the underlying parametric representation with interactive frame rates and subpixel-precise image results. The computational costs of additional visualization effects, such as semi-transparency and stereoscopic rendering, are reduced to maintain interactive frame rates. The benefit of this performance gain was confirmed by quantitative measurements and a pilot user study.

The aim of this paper is to present the so-called discrete-continual boundary element method (DCBEM) of structural analysis. Its field of application comprises buildings, structures, and structural parts and components of residential, commercial and non-residential facilities whose physical and geometrical parameters are invariant along one dimension. Typical objects include beams, thin-walled bars, strip foundations, plates, shells, deep beams, high-rise buildings, extended buildings, pipelines, rails, dams and others. DCBEM belongs to the group of semi-analytical methods. Semi-analytical formulations are contemporary mathematical models that are becoming practical to implement thanks to the substantial growth in computing power. DCBEM is based on the theory of pseudodifferential boundary equations; the corresponding pseudodifferential operators are approximated discretely using Fourier or wavelet analysis. The main advantages of DCBEM over other numerical methods are a twofold reduction in the dimension of the problem (the discrete numerical subdivision is applied not to the full region of interest but only to the boundary of the region's cross-section, so that in effect a one-dimensional problem with a finite step along the boundary is solved), the possibility of very detailed analysis of specific chosen zones, simplified preparation of the initial data, and simple, adaptive algorithms. Two formulations of DCBEM have been developed, an indirect (IDCBEM) and a direct (DDCBEM) one; as in the boundary element method (BEM), the indirect formulation is applied somewhat more widely than the direct one.

The execution of project activities generally requires the use of (renewable) resources like machines, equipment or manpower. The resource allocation problem consists in assigning time intervals to the execution of the project activities while taking into account temporal constraints between activities emanating from technological or organizational requirements and costs incurred by the resource allocation. If the total procurement cost of the different renewable resources has to be minimized we speak of a resource investment problem. If the cost depends on the smoothness of the resource utilization over time the underlying problem is called a resource levelling problem. In this paper we consider a new tree-based enumeration method for solving resource investment and resource levelling problems exploiting some fundamental properties of spanning trees. The enumeration scheme is embedded in a branch-and-bound procedure using a workload-based lower bound and a depth-first search. Preliminary computational results show that the proposed procedure is promising for instances with up to 30 activities.
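
The workload-based lower bound and the depth-first search can be illustrated on a toy resource investment instance. This is only a sketch: all numbers are hypothetical, and the paper's spanning-tree enumeration scheme is replaced by a plain enumeration of start times.

```python
from itertools import product

# Hypothetical toy instance: activity -> duration / resource demand,
# plus one precedence constraint and a project deadline (horizon).
durations = {"A": 2, "B": 3, "C": 2}
demand    = {"A": 2, "B": 1, "C": 2}
precedence = [("A", "C")]          # A must finish before C starts
horizon = 6

# Workload-based lower bound on the peak capacity: total work / horizon,
# rounded up (a procurement cost proportional to capacity cannot beat it).
total_work = sum(durations[a] * demand[a] for a in durations)
lower_bound = -(-total_work // horizon)

def peak_usage(starts):
    """Maximum of the resource profile for a start-time assignment."""
    return max(sum(demand[a] for a in starts
                   if starts[a] <= t < starts[a] + durations[a])
               for t in range(horizon))

best = None
acts = list(durations)
# Depth-first enumeration of feasible start times, pruned when the
# workload bound is already attained.
for combo in product(*(range(horizon - durations[a] + 1) for a in acts)):
    starts = dict(zip(acts, combo))
    if any(starts[i] + durations[i] > starts[j] for i, j in precedence):
        continue                   # precedence violated
    p = peak_usage(starts)
    if best is None or p < best:
        best = p
        if best == lower_bound:    # bound met: cannot improve
            break
```

For this instance the bound evaluates to 2 but no feasible schedule reaches it, so the enumeration proves that a peak capacity of 3 is optimal.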

We present an algebraically extended 2D image representation in this paper. In order to obtain more degrees of freedom, a 2D image is embedded into a certain geometric algebra. Combining methods of differential geometry, tensor algebra, monogenic signal and quadrature filter, the novel 2D image representation can be derived as the monogenic extension of a curvature tensor. The 2D spherical harmonics are employed as basis functions to construct the algebraically extended 2D image representation. From this representation, the monogenic signal and the monogenic curvature signal for modeling intrinsically one and two dimensional (i1D/i2D) structures are obtained as special cases. Local features of amplitude, phase and orientation can be extracted at the same time in this unique framework. Compared with the related work, our approach has the advantage of simultaneous estimation of local phase and orientation. The main contribution is the rotationally invariant phase estimation, which enables phase-based processing in many computer vision tasks.
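
The monogenic signal mentioned above can be sketched with a frequency-domain Riesz transform. This is a minimal NumPy illustration of the i1D case only, not the authors' curvature-tensor construction; amplitude, phase and orientation are extracted exactly as described.

```python
import numpy as np

def monogenic(f):
    """Monogenic signal of a 2D image via the Riesz transform applied in
    the frequency domain; returns local amplitude, phase, orientation."""
    F = np.fft.fft2(f)
    u = np.fft.fftfreq(f.shape[0])[:, None]
    v = np.fft.fftfreq(f.shape[1])[None, :]
    mag = np.sqrt(u**2 + v**2)
    mag[0, 0] = 1.0                                  # avoid division by zero at DC
    r1 = np.real(np.fft.ifft2(-1j * u / mag * F))    # first Riesz component
    r2 = np.real(np.fft.ifft2(-1j * v / mag * F))    # second Riesz component
    amplitude = np.sqrt(f**2 + r1**2 + r2**2)
    phase = np.arctan2(np.sqrt(r1**2 + r2**2), f)
    orientation = np.arctan2(r2, r1)
    return amplitude, phase, orientation

# An i1D test pattern: a horizontal cosine grating. Its local amplitude
# should be constant (cos^2 + sin^2 = 1) across the whole image.
x = np.cos(2 * np.pi * 8 * np.arange(64) / 64)
img = np.tile(x, (64, 1))
amp, ph, ori = monogenic(img)
```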

An analysis was made of changes in the geometry of a reinforced concrete chimney and of their influence on the stresses in the chimney mantle. All the changes were introduced into a model chimney and compared. The relations between the stresses in the chimney mantle and the deformations determined by the change of the chimney's vertical axis geometry were investigated. The vertical axis of the chimney was described by a linear function (corresponding to a real rotation of the chimney together with its foundation) and by a parabolic function (corresponding to a real displacement of the chimney under the influence of horizontal wind forces). The positive stress pattern in the concrete as well as the negative stress pattern in the reinforcing steel are presented, and the two cases are compared. An analysis of the stress changes in the chimney mantle depending on modifications of the mantle thickness (altered in a linear or an abrupt way) was carried out. The relation between the stresses and the change of the chimney's diameter from the bottom to the top was also investigated. All the analyses were conducted by means of a specially developed computer program created in the Mathematica environment. The program also makes it possible to control the calculations and to visualize the results at every stage of the calculation process.

TOOL TO CHECK TOPOLOGY AND GEOMETRY FOR SPATIAL STRUCTURES ON BASIS OF THE EXTENDED MAXWELL'S RULE
(2006)

One of the simplest principles in the design of light-weight structures is to avoid bending. This can be achieved by dissolving girders into members acting purely in axial tension or compression. The use of cables for the tensioned members leads to even lighter structures, called cable-strut structures, which constitute a subclass of spatial structures. Giving fast information about the general feasibility of an architectural concept employing cable-strut structures is a challenging task due to their sophisticated mechanical behavior. In this regard it is essential to check whether the structure is stable and whether pre-stress can be applied. This paper presents a tool using the spreadsheet software Microsoft (MS) Excel which can give such information; it is therefore not necessary to purchase special software, and the associated time-consuming training effort is much lower. The tool was developed on the basis of the extended Maxwell's rule, which besides the topology also considers the geometry of the structure. For this, the rank of the node equilibrium matrix is crucial. The significance and determination of the rank and the implementation of the corresponding algorithms in MS Excel are described in the following. The presented tool is able to support the structural designer in finding a feasible architectural concept for cable-strut structures in an early stage of the project. As examples of the application of the software tool, two special cable-strut structures, so-called tensegrity structures, were examined for their mechanical behavior.
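
The rank-based check behind the extended Maxwell's rule can be sketched as follows. This is a hypothetical NumPy illustration (the actual tool runs in MS Excel) on a planar example: from the rank of the node equilibrium matrix one obtains the number of internal mechanisms m and states of self-stress s, the latter indicating that pre-stress can be applied.

```python
import numpy as np

def counts(nodes, bars, fixed, dim=2):
    """Mechanism and self-stress counts from the rank of the node
    equilibrium matrix (extended Maxwell's rule)."""
    free = [n for n in nodes if n not in fixed]
    idx = {n: i for i, n in enumerate(free)}
    A = np.zeros((dim * len(free), len(bars)))
    for k, (p, q) in enumerate(bars):
        d = np.asarray(nodes[q], float) - np.asarray(nodes[p], float)
        d = d / np.linalg.norm(d)            # unit vector along the bar
        if p in idx:
            A[dim*idx[p]:dim*idx[p]+dim, k] = d
        if q in idx:
            A[dim*idx[q]:dim*idx[q]+dim, k] = -d
    r = np.linalg.matrix_rank(A)
    m = dim * len(free) - r                  # internal mechanisms
    s = len(bars) - r                        # states of self-stress
    return m, s

# Two supports and one free apex node: statically determinate (m = s = 0).
nodes = {"L": (0.0, 0.0), "R": (2.0, 0.0), "T": (1.0, 1.0)}
fixed = {"L", "R"}
m2, s2 = counts(nodes, [("L", "T"), ("R", "T")], fixed)

# Adding a third bar yields one state of self-stress: pre-stressable.
nodes["M"] = (1.0, 0.0)
fixed.add("M")
m3, s3 = counts(nodes, [("L", "T"), ("R", "T"), ("M", "T")], fixed)
```

In the Excel tool the same rank computation decides stability (m = 0) and pre-stressability (s > 0) of a candidate cable-strut topology and geometry.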

Subject of the paper is the realisation of a model-based efficiency control system for PV generators using a simulation model. A standard 2-diode model of the PV generator forms the basis of the ColSim model, which is implemented in ANSI C for flexible code export. The algorithm is based on discretised U-I characteristics, which allows the calculation of string topologies with parallel and series-connected PV cells and modules. Shadowing effects can be modelled down to the cell configuration using polar horizon definitions. The simulation model was ported to a real-time environment to calculate the efficiency of a PV system. Embedded-system technology allows networked operation and the integration of standard I/O devices. Further work focuses on the shadowing routine, which will be adapted to obtain the environmental conditions from real operation.
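
A discretised U-I characteristic of the kind described can be sketched as follows. The cell parameters are hypothetical, and unlike the full 2-diode model used in ColSim, series and shunt resistances are neglected so the characteristic is explicit in the voltage; series strings are then built by interpolating V(I) at equal current.

```python
import numpy as np

# Hypothetical cell parameters for a simplified 2-diode equation.
I_PH, I_01, I_02 = 3.0, 1e-9, 1e-6     # photo and saturation currents [A]
N1, N2, VT = 1.0, 2.0, 0.0257          # ideality factors, thermal voltage [V]

def cell_current(v):
    """Cell current for a given cell voltage (simplified 2-diode model)."""
    return (I_PH
            - I_01 * (np.exp(v / (N1 * VT)) - 1.0)
            - I_02 * (np.exp(v / (N2 * VT)) - 1.0))

# Discretised U-I characteristic of one cell on a voltage grid.
v = np.linspace(0.0, 0.65, 500)
i = cell_current(v)

def string_voltage(current, n_cells):
    """Series string of identical cells: equal current, voltages add.
    The characteristic is decreasing in v, so reverse it for interp."""
    return n_cells * np.interp(current, i[::-1], v[::-1])

i_sc = cell_current(0.0)               # short-circuit current = I_PH
v_oc_string = string_voltage(0.0, 36)  # open-circuit voltage, 36 cells
```

Mismatched (e.g. shadowed) cells would get their own characteristics, combined by the same add-voltages-at-equal-current rule.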

Most insolvencies in Germany occur in the construction industry. The reasons for this are complex, but with a modern management information system (M-I-S) and site controlling it can be recognized early how the site results are developing. This requires that the working calculation be kept up to date at all times. Only then are monthly target/actual comparisons and a cost-to-complete analysis possible and meaningful. A monthly rolling forecast of the site result at completion ensures that serious changes in the result are revealed immediately. Only with knowledge of these developments can management act early (in the sense of an early-warning system) and take corrective measures. The forecast of the result at completion is, however, not sufficient on its own as a control instrument. The financial situation of the site must also be checked regularly, i.e. the progress of the work must be reconciled with the invoicing to the client, and the client's unpaid invoices must be monitored. The best forecast result is worthless if the client does not pay for the services received. The economic data are available to those responsible online in the site information system (B-I-S). A traffic-light system visualizes the economic situation of the site.

With regard to the integration of the individual life-cycle phases of a building and of the various participants, particularly within building planning and revitalization processes, there are currently decisive deficits. The general objective of the research work presented in this paper is to support and improve this integration by providing all building-related information across disciplines and life-cycle phases. On the one hand, this requires suitable approaches for modelling and integrating the manifold discipline-specific data; on the other hand, suitable solutions enabling global access, navigation and retrieval within the entire data set. The modelling and management of building-related data has been the subject of various research efforts for some time. Within the SFB 524, a dedicated approach based on a runtime-dynamic federation of partial models was developed; its essential features are compared with other approaches. The focus of this paper, however, is the development of a suitably flexible navigation and retrieval layer for realizing project-wide information retrieval. From the viewpoint of modelling and data management, as well as from the viewpoint of information retrieval and presentation in planning processes, such retrieval tools face various requirements, the most essential principle being maximum flexibility with regard to the available presentation techniques and their free combination with formal query techniques. The developed system concept is based on a framework that defines several basic types of retrieval modules and their interaction principles. Individual retrieval modules are realized as instances of these module types and can be integrated into the navigation layer dynamically at runtime as required.
The technical realization of the system takes place in the environment of existing prototypes from previous research activities. This technical environment imposes various constraints which make several adaptations of the general system concept necessary prior to prototypical implementation. This paper presents the current state of development of the system solution from a conceptual and technical point of view, together with the first prototypical realizations of retrieval modules.

The design of safety-critical structures exposed to cyclic excitations demands non-degrading or limited-degrading behavior during extreme events. Among other factors, the structural behavior is mainly determined by the number of plastic cycles completed during the excitation. Existing simplified methods often ignore this dependency, or assume or request sufficient cyclic capacity. The paper introduces a new performance-based design method that explicitly considers a predefined number of re-plastifications. To this end, approaches from shakedown theory and signal-processing methods are utilized. The paper introduces the theoretical background, explains the steps of the design procedure and demonstrates the applicability with the help of an example. This project was supported by the German Science Foundation (Deutsche Forschungsgemeinschaft, DFG).

The reduction of the oscillation amplitudes of structural elements is necessary not only to maintain their durability and longevity but also to eliminate the harmful effects of oscillations on people and technological operations. Dampers are widely applied for this purpose. One of the most widespread models of structural friction, in which the friction force has a piecewise linear relation to displacement, was analysed. The author suggests mapping the phase trajectories in the "acceleration-displacement" plane. Unlike trajectory mapping in the "velocity-displacement" plane, this does not require a large number of geometrical constructions to identify the characteristics of dynamic systems, which improves accuracy. The analytical assumptions were verified by numerical modeling. The results show good agreement between the numerical and analytical estimates of the dissipative characteristic.

The paper is devoted to the investigation of the dynamic behavior of a cable under various types of excitation. Such an element has low rigidity and is sensitive to dynamic effects. The structural scheme is a cable whose ends are located at different levels. The dynamic behavior of the cable is analysed under kinematic excitation represented by the oscillations of the upper part of a tower. The lower end of the inclined cable is assumed to be motionless, and the motion of the upper end is assumed to be horizontal only. The fourth-order Runge-Kutta method was implemented in software, the fast Fourier transform was used for spectral analysis, and standard graphical software was used to present the results. The mathematical model of the cable oscillations was developed taking viscous damping into account. The dynamic characteristics of the cable were analysed for various parameters of damping and kinematic excitation. Time series, spectral characteristics and amplitude-frequency characteristics were obtained, and the resonance amplitudes for different oscillating regimes were estimated. It is noted that increasing the viscous damping coefficient and decreasing the amplitude of the tower's oscillations reduce the critical frequency and the resonant amplitudes.
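
The fourth-order Runge-Kutta integration can be sketched on a single-mode stand-in for the cable: a damped oscillator under harmonic excitation, with the steady-state amplitude read off the tail of the time series. All parameters are hypothetical.

```python
import numpy as np

OMEGA0 = 2.0          # natural angular frequency [rad/s]
ZETA = 0.05           # viscous damping ratio
A_EXC = 0.1           # excitation amplitude

def rhs(t, y, omega_exc):
    """State derivative of the damped, harmonically excited oscillator."""
    x, v = y
    return np.array([v, A_EXC * np.cos(omega_exc * t)
                        - 2 * ZETA * OMEGA0 * v - OMEGA0**2 * x])

def rk4_amplitude(omega_exc, dt=0.01, t_end=200.0):
    """Classical fourth-order Runge-Kutta; returns the steady-state
    amplitude as max |x| over the last quarter of the run."""
    n = int(t_end / dt)
    y = np.zeros(2)
    xs = np.empty(n)
    for k in range(n):
        t = k * dt
        k1 = rhs(t, y, omega_exc)
        k2 = rhs(t + dt/2, y + dt/2 * k1, omega_exc)
        k3 = rhs(t + dt/2, y + dt/2 * k2, omega_exc)
        k4 = rhs(t + dt, y + dt * k3, omega_exc)
        y = y + dt/6 * (k1 + 2*k2 + 2*k3 + k4)
        xs[k] = y[0]
    return np.abs(xs[3*n//4:]).max()

a_res = rk4_amplitude(OMEGA0)          # excitation at resonance
a_off = rk4_amplitude(3.0 * OMEGA0)    # excitation far above resonance
```

The resonant amplitude matches the analytic value A_EXC / (2 ζ ω₀²) = 0.25 here; sweeping omega_exc produces the amplitude-frequency characteristic discussed in the abstract.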

The extended finite element method (XFEM) offers an elegant tool to model material discontinuities and cracks within a regular mesh, so that the element edges do not necessarily coincide with the discontinuities. This allows the modeling of propagating cracks without the need to adapt the mesh incrementally. Using a regular mesh offers the advantage that simple refinement strategies based on the quadtree data structure can be used to refine the mesh in regions that require a high mesh density. An additional benefit of the XFEM is that the transmission of cohesive forces through a crack can be modeled in a straightforward way without introducing additional interface elements. Finally, different criteria for the determination of the crack propagation angle are investigated and applied to numerical tests of cracked concrete specimens, which are compared with experimental results.

The paper contains a description of dynamic effects in the silo wall during the outflow of the stored material. The work allows the danger of structural damage due to resonant vibrations to be determined and is of practical importance for assessing the influence of cyclic pressures and vibro-creep during prolonged use of a silo. The paper is the result of tests on silo walls at semi-technical scale. The model is generally applicable and also allows the identification of parameters in real-size silos.

The paper proposes a new method for general 3D measurement and 3D point reconstruction. The method explicitly aims at practical applications: its features include low technical expense and minimal user interaction, a clear separation of the problem into steps that are solved by simple mathematical methods (direct, stable and optimal in the least-squares sense), and scalability. The method expects the internal and radial distortion parameters of the camera(s) as inputs, together with a plane quadrangle of known geometry within the scene. First, for each picture the 3D position of the reference quadrangle (with respect to the respective camera coordinate frame) is calculated. These 3D reconstructions of the reference quadrangle are then used to obtain the relative external parameters of each camera with respect to the first one. With known external parameters, triangulation is finally possible. The differences from other known procedures are outlined, with attention to the stable mathematical methods (no nonlinear optimization) combined with low user interaction and good results.
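
The final triangulation step can be sketched with the standard linear (DLT) method, which is direct and optimal in the least-squares sense for the homogeneous system it builds. The cameras and test point below are synthetic assumptions, not data from the paper.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear least-squares (DLT) triangulation of one 3D point from two
    views with known 3x4 projection matrices P1, P2."""
    A = np.vstack([x1[0] * P1[2] - P1[0],
                   x1[1] * P1[2] - P1[1],
                   x2[0] * P2[2] - P2[0],
                   x2[1] * P2[2] - P2[1]])
    _, _, vt = np.linalg.svd(A)      # null vector = homogeneous 3D point
    X = vt[-1]
    return X[:3] / X[3]

def project(P, X):
    """Pinhole projection of a 3D point to pixel coordinates."""
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

# Synthetic check: hypothetical intrinsics, second camera shifted 1 unit.
K = np.diag([800.0, 800.0, 1.0]); K[0, 2] = 320.0; K[1, 2] = 240.0
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X_true = np.array([0.3, -0.2, 5.0])
X_est = triangulate(P1, P2, project(P1, X_true), project(P2, X_true))
```

In the proposed pipeline, P1 and P2 would come from the plane-quadrangle pose step instead of being assumed.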

This paper proposes the application of a two-parameter damage model, based on a non-linear finite element approach, to the analysis of masonry panels. Masonry is treated as a homogenized material whose characteristics can be defined using a homogenization technique. Masonry panels subjected to shear loading are studied with the proposed procedure within the framework of three-dimensional analyses. The nonlinear behaviour of masonry can be modelled using concepts of damage theory. In this case an adequate damage function is defined to take into account the different response of masonry under tension and compression states. Cracking can therefore be interpreted as a local damage effect, defined by the evolution of known material parameters and by one or several functions which control the onset and evolution of damage. The model takes into account all the important aspects which should be considered in the nonlinear analysis of masonry structures, such as stiffness degradation due to mechanical effects and the objectivity of the results with respect to the finite element mesh. Finally, the proposed damage model is validated by comparison with experimental results available in the literature.

This paper presents results of applying a fuzzy inference system to the estimation of the number of potential Park and Ride users. It is usually difficult to evaluate the number of users because it depends on human factors and the data in the considered system are uncertain. In such a situation traditional mathematical approaches cannot take such rough data into consideration, so a fuzzy approach can be applied. A fuzzy methodology is a suitable way to describe the choice of the mode of transport, especially since the uncertainty accompanying the choice process has a rather fuzzy character. The proposed approach is based on the Mamdani fuzzy inference system, and the calculations use Matlab with the Fuzzy Logic Toolbox. The Mamdani model requires, as input data, knowledge of the shapes of the membership functions. These functions can be calibrated using the results of questionnaires conducted among users of the Park and Ride system. Due to the lack of a representative sample of users, it was decided to use the results of experts' questionnaires as input data for calibrating the shapes of the membership functions. The describing factor is the generalized cost of the trip for the different modes of transport. The proposed approach consists of two main stages: modeling the share of public/private transport trips, and a multimodal model estimating the number of Park and Ride users. Verification of the presented methodology is treated as an indirect proof: the proposed approach is applied to the estimation of a bi-modal split, and the results are compared with traditional approaches based on logit functions. The comparability of the results of the proposed fuzzy approach with those of traditional logit models can be treated as a confirmation of the chosen methodology.
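
A minimal Mamdani system of the kind described can be sketched as follows. The membership functions and the two-rule base are hypothetical stand-ins for the expert-calibrated ones (the paper uses Matlab's Fuzzy Logic Toolbox); the sketch maps a normalized generalized-cost advantage of Park and Ride onto an expected user share.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with corners a, b, c."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

# Output universe: share of travellers choosing Park and Ride (0..1).
y = np.linspace(0.0, 1.0, 201)
out_low, out_high = tri(y, -1.0, 0.0, 1.0), tri(y, 0.0, 1.0, 2.0)

def pr_share(cost_advantage):
    """Mamdani inference: rule activation by the input membership grades,
    min implication, max aggregation, centroid defuzzification."""
    mu_low  = tri(cost_advantage, -1.0, 0.0, 1.0)   # "small cost advantage"
    mu_high = tri(cost_advantage,  0.0, 1.0, 2.0)   # "large cost advantage"
    agg = np.maximum(np.minimum(mu_low, out_low),
                     np.minimum(mu_high, out_high))
    return float(np.sum(agg * y) / np.sum(agg))

share = pr_share(0.7)   # hypothetical normalized cost advantage of P+R
```

A real system would add more inputs (e.g. costs per mode) and rules, with membership shapes calibrated from the expert questionnaires.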

... WITHOUT RIGHT ANGLE.
(2006)

Sculptural design is currently one of the most discussed themes in architecture. Due to their light weight, easy transportation and assembly, and an almost unlimited structural variety, parameterised spatial structures are excellently suited for the constructive realisation of free-formed claddings. They subdivide the continuous surface into a structure of small nodes, straight members and plane glass panels. Thus they provide an opportunity to realise arbitrary double-curved claddings with a high degree of transparency, using industrial semi-finished products (steel sections, flat glass). Digital design strategies and a huge number of similar-looking but individually unique structural members demand continuous digital project handling. Within a research project named MYLOMESH, a free-formed spatial structure was designed, constructed, fabricated and assembled. All required steps were carried out on the basis of digital data. Different digital connections (scripts) between various software tools, which are usually not used in the planning process of buildings, were created; they allow a completely digital workflow. The project, its design, meshing and constructive detailing, as well as the above-mentioned scripts, are described in this paper.

DIGITAL SUPPORT OF MATERIAL- AND PRODUCT SELECTION IN THE ARCHITECTURAL DESIGN- AND PLANNING PROCESS
(2006)

Architecture is predominantly perceived through the surfaces that bound the space. The surface materials used should support the design intention and have to fulfil various technical and economical requirements. If the architect wants to select the "right" or the "best" material, he has to balance very different and sometimes contradicting criteria and must weight them individually for the specific purpose. This selection process is only insufficiently supported by today's digital systems. If all the various parameters can be expressed as numerical values, the method of multidimensional scaling offers architects a way to find the material that best fits their individual weighting of criteria. By displaying the result of the architect's multidimensional query in a spatial arrangement, multidimensional scaling can support an interactive selection process with additional feedback on the applied search strategy.
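
Classical multidimensional scaling, the core of the proposed selection support, can be sketched as follows: pairwise dissimilarities between materials (here, hypothetical weighted criterion scores) are turned into a low-dimensional spatial arrangement whose distances reproduce them.

```python
import numpy as np

def classical_mds(D, k=2):
    """Classical multidimensional scaling: embed items in k dimensions so
    that Euclidean distances approximate the dissimilarity matrix D."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n        # centering matrix
    B = -0.5 * J @ (D**2) @ J                  # double-centred Gram matrix
    w, V = np.linalg.eigh(B)
    order = np.argsort(w)[::-1]                # largest eigenvalues first
    w, V = w[order[:k]], V[:, order[:k]]
    return V * np.sqrt(np.maximum(w, 0.0))     # coordinates of the items

# Hypothetical materials scored on two weighted criteria (e.g. cost,
# gloss); pairwise Euclidean distances on the scores feed the MDS.
scores = np.array([[0.2, 0.9], [0.3, 0.8], [0.9, 0.1]])
D = np.linalg.norm(scores[:, None] - scores[None, :], axis=-1)
pos = classical_mds(D, k=2)
d_emb = np.linalg.norm(pos[0] - pos[1])        # embedded distance 0-1
```

Because the input distances here are already Euclidean, the 2D arrangement reproduces them exactly; with many, partly qualitative criteria the embedding becomes an approximation that places similar materials near each other.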

The concept of a sensitivity analysis of the limit state of a structure with respect to selected basic variables is presented. The sensitivity is expressed in the form of the probability distribution of the limit state of the structure. The analysis is performed by a problem-oriented Monte Carlo simulation procedure, based on a problem-specific definition of the elementary event as a structural limit state; thus the sample space consists of limit states of the structure. A one-dimensional random multiplier defined on this sample space is introduced. This multiplier refers to the dominant basic variable (or group of variables) of the problem. The numerical procedure results in a set of random numbers whose normalized relative histogram is an estimator of the PDF of the limit state of the structure. Estimators of reliability, or of the probability of failure, are statistical characteristics of this histogram. The procedure is illustrated by an example: a sensitivity analysis of the serviceability limit state of a monumental structure, the colonnade of the Licheń Basilica in central Poland. The limit state of the structure is examined with reference to the horizontal deflection of the upper deck. Wind actions are taken as the dominant variables. It is assumed that the wind load intensities acting on the lower and the upper storey of the colonnade are identically distributed but correlated random variables. Three correlation variants of these variables are considered, and the relevant limit state histograms are analysed. The paper ends with conclusions referring to the method and some general remarks on fully probabilistic design.
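
The simulation idea (sample the correlated dominant variables, evaluate the limit-state response, estimate its PDF from the normalized relative histogram) can be sketched as follows; all structural numbers are hypothetical and the response is reduced to a linear influence model.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical serviceability limit state: deck deflection u as a linear
# combination of two identically distributed, correlated wind loads;
# "failure" means u exceeds the deflection limit.
N = 200_000
U_LIM = 50.0                       # deflection limit [mm]
C1, C2 = 0.8, 1.2                  # influence coefficients [mm/kN]
MEAN, STD, RHO = 15.0, 5.0, 0.7    # load statistics and correlation

cov = STD**2 * np.array([[1.0, RHO], [RHO, 1.0]])
w = rng.multivariate_normal([MEAN, MEAN], cov, size=N)
u = C1 * w[:, 0] + C2 * w[:, 1]    # sampled limit-state response
p_f = np.mean(u > U_LIM)           # probability-of-failure estimator

# Normalized relative histogram as the PDF estimator of the limit state.
hist, edges = np.histogram(u, bins=80, density=True)
```

Rerunning with different RHO values reproduces the kind of correlation-variant comparison described in the abstract.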

VARIATION OF ROTATIONAL RESTRAINT IN GRID DECK CONNECTION DUE TO CORROSION DAMAGE AND STRENGTHENING
(2006)

An approach to the assessment of the rotational restraint of the stringer-to-crossbeam connection in the deck of a 100-year-old steel truss bridge is presented. The sensitivity of the rotational restraint coefficient of the connection to corrosion damage and strengthening is analyzed. Two criteria for the assessment of the rotational restraint coefficient are applied: a static and a kinematic one. The former is based on the bending moment distribution in the considered member, the latter on the member rotation at the given joint. A finite element model built from 2D elements is described: webs and flanges are modeled with shell elements, while the rivets in the connection are modeled with a system of beam and spring elements. The rivet modeling method is verified against T-stub connection test results published in the literature. The FEM analyses proved that the recorded extent of corrosion damage does not alter the initial rotational restraint of the stringer-to-crossbeam connection. Strengthening of the stringer midspan influences the midspan bending moment and the stringer end rotation in different ways. Restoring a member's load bearing capacity usually means strengthening its critical regions (where the highest stress levels occur). This alters the distribution of flexural stiffness over the member length and influences the rotational restraint at its connections to other members. The impact depends on the criterion chosen for the assessment of the rotational restraint coefficient.

Monitoring and assessment are principal tasks in the management and revitalization of buildings. Different methods can be used to acquire the required geometric information, such as the size or deformation of a building. Since the potential of digital photography is growing continuously, industrial photogrammetry today represents an important alternative to classical methods such as strain gauges or other tactile sensors. Modern industrial photogrammetry captures images with digital systems. This means that the information in digital images must be analysed by digital image processing in order to obtain the image coordinates of the measurement points. One of the image-processing tasks for photogrammetric purposes is therefore to locate the centres of circular targets; modern operators deliver subpixel accuracy for the point coordinates. In terms of hardware, the optical measurement technique of industrial photogrammetry primarily requires high-resolution digital cameras. These can be classified into video cameras, high-speed cameras, intelligent cameras, and so-called consumer and professional cameras. The geometric resolution of digital high-end cameras today exceeds 10 megapixels. For data transfer to the computer, several standards are available on the market, e.g. USB 2.0, GigE Vision, CameraLink or FireWire. The choice of standard always depends on the specific task, as none of these technologies holds a leading position. Modern photogrammetry offers many new possibilities for the monitoring and assessment of buildings. It can deliver one-, two-, three- or four-dimensional information, in real time if required. As a non-contact measurement technique, photogrammetry can still be used where tactile sensors can no longer be applied, e.g. for lack of space. High-resolution video cameras even allow dynamic investigations to be carried out with great precision.

Car following models are used to describe the behavior of a number of cars on the road dependent on the distance to the car in front. We introduce a system of ordinary differential equations and perform a theoretical and numerical analysis in order to find solutions that reflect various traffic situations. We present three different variations of the model motivated by reality.
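
One common concrete choice for such a system of ordinary differential equations is the optimal-velocity car-following model, in which each driver relaxes toward a speed determined by the headway to the car in front. The following sketch (all parameters hypothetical) integrates a small platoon behind a leader driving at constant speed.

```python
import numpy as np

A = 2.5                                    # driver sensitivity [1/s]

def v_opt(headway):
    """Optimal (desired) velocity as a function of headway."""
    return np.tanh(headway - 2.0) + np.tanh(2.0)

def step(x, v, dt, v_lead):
    """One explicit Euler step for the platoon; car 0 tracks v_lead."""
    dv = np.empty_like(v)
    dv[0] = A * (v_lead - v[0])            # leader: free driving
    headway = x[:-1] - x[1:]               # distance to the car in front
    dv[1:] = A * (v_opt(headway) - v[1:])
    return x + dt * v, v + dt * dv

n = 5
x = -5.0 * np.arange(n, dtype=float)       # initial positions, 5 m apart
v = np.zeros(n)
for _ in range(5000):                      # 50 s of simulated time
    x, v = step(x, v, dt=0.01, v_lead=1.0)
headways = x[:-1] - x[1:]
```

With this sensitivity the platoon settles into a uniform flow at the leader's speed with the equilibrium headway solving v_opt(h) = 1; lowering A below the linear stability threshold instead produces the stop-and-go waves studied in such models.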

In many industries, companies lose visibility of the human and technical resources of their field service. On the one hand, field-service staff are often as free as kings; on the other hand, they do not take part in the daily communication of the central office and suffer from a lack of involvement in its decisions. The result is inefficiency, followed by reproaches in both directions. With radio systems, and later mobile phones, this ditch began to dry up. But the solutions are still far from being productive.

With around 20,000 flats, Kommunale Wohnungsgesellschaft mbH Erfurt (KoWo) is the largest housing company in Thuringia's state capital. Its real-estate portfolio is heterogeneous both in its technical condition and in the locations of the properties. Owing to vacancies and differing modernisation measures and states, the profitability of individual properties varies considerably. Without a uniform valuation of the building stock with respect to property attractiveness, location quality and profitability, a long-term strategic development of the real-estate portfolio is difficult. Using the steps of a technical survey of the building stock, valuation by a scoring model, mapping into a portfolio model with an associated standard strategy, and further processing of the data in a 20-year maintenance plan, the paper gives a practice-oriented account of how the real-estate portfolio is valuated.

In the refurbishment of old buildings and in as-built surveying in construction, it is often necessary to update existing plans with respect to the current condition of the structure or, if such plans are no longer accessible, to create entirely new as-built documents. Laser surveying technology opens a convenient way of acquiring this building data. In this context, the present article introduces approaches to the partial automation of generating a three-dimensional computer model of a building. The result is a volume model in which the geometric and topological information on faces, edges and vertices is first described in the sense of a B-rep model. The objects of this volume model are analysed with methods from the field of artificial intelligence and systematically categorised into building-component classes. Knowledge of the component semantics thus makes it possible to derive a building product model from the data and to make it available to individual specialist planners, for instance for issuing an energy performance certificate. The article demonstrates the successful use of artificial neural networks in as-built surveying by means of a complex example.

Unconstrained models are found throughout the broad spectrum of traffic demand models. In these models, no or only one-sided restrictions influence the choice of the individual. In traffic demand, however, the traffic volume depends in various decisive ways on the specific potentials of the territorial structure. Kirchhoff and Lohse introduced bi- and tri-linearly constrained models to capture these dependencies; in principle, the dependencies are described as hard, elastic and open boundary sum criteria. In this article, a model is formulated that departs from these predefined boundary sum criteria and allows a free choice of minimal and maximal boundary sums. The iterative solution algorithm, following a FURNESS procedure, is presented as well. With freely selectable minimal and maximal boundary sum criteria, the modelling transport planner gains the possibility to represent traffic behaviour even better. Furthermore, all common boundary sum criteria can be computed with this model, so the frequently necessary standard and special cases can also be modelled.
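A Furness-type balancing with interval boundary sums can be sketched as follows. This toy implementation (function name, bounds handling and convergence control are our assumptions, not the paper's algorithm) simply rescales a row or column only when its sum leaves its [min, max] interval:

```python
def furness_interval(matrix, row_bounds, col_bounds, iters=200):
    """Iteratively scale rows/columns so each boundary sum lands
    inside its [min, max] interval (assumes a feasible instance)."""
    m = [row[:] for row in matrix]
    n_rows, n_cols = len(m), len(m[0])
    for _ in range(iters):
        # Row step: scale a row only if its sum violates the interval.
        for i in range(n_rows):
            s = sum(m[i])
            lo, hi = row_bounds[i]
            if s < lo:
                f = lo / s
            elif s > hi:
                f = hi / s
            else:
                continue
            m[i] = [f * x for x in m[i]]
        # Column step: the same correction for column sums.
        for j in range(n_cols):
            s = sum(m[i][j] for i in range(n_rows))
            lo, hi = col_bounds[j]
            if s < lo:
                f = lo / s
            elif s > hi:
                f = hi / s
            else:
                continue
            for i in range(n_rows):
                m[i][j] *= f
    return m
```

Setting lo == hi for every interval recovers the classical hard boundary sums of the standard Furness procedure.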

Owing to the enormous functionality of building services combined with other equipment and the diverse usage processes and user groups, rooms and buildings today can hardly be managed or used optimally without innovative concepts of integrated operation. This applies to residential as well as commercial properties. Building management systems and building automation, sensibly integrating the possibilities of microelectronics, multimedia, communication and information technology, can contribute substantially to beneficial innovations here. In recent years, the automotive industry has shown how an integral system approach and the use of electronics, communication and information technology can provide meaningful technical assistance to the user; one example is the cockpit concept, which bundles functions and concentrates information in the dashboard. In contrast to the automobile, building services in residential and commercial properties are characterised by a strong fragmentation into the most diverse trades, involving many, often poorly coordinated, actors. At the Duisburg inHaus innovation centre for intelligent room and building systems of the Fraunhofer-Gesellschaft, novel concepts for the system integration of heterogeneous technology on the basis of middleware platforms and multimedia technologies and devices have been developed, tested and brought into application in recent years. One of the first system applications of this open infrastructure concept is integrated system operation, with partly completely new operating concepts and a substantial simplification of the operation of even the most complex technical equipment in buildings. After an analysis of the initial situation, the paper describes the technological fundamentals of integrated system operation, followed by some application examples and a summarising assessment with an outlook on further activities.

In distributed project organisations and collaboration there is a need for integrating unstructured self-contained text information with structured project data. We consider this a process of text integration in which various text technologies can be used to externalise text content and consolidate it into structured information or flexibly interlink it with corresponding information bases. However, the effectiveness of text technologies and the potential of text integration vary greatly with the type of documents, the project setup and the available background knowledge. The goal of our research is to establish text technologies within collaboration environments that allow for (a) flexibly combining appropriate text and data management technologies, (b) utilising available context information and (c) sharing text information in accordance with the most critical integration tasks. A particular focus is on Semantic Service Environments that leverage Web service and Semantic Web technologies and adequately support the required systems integration and parallel processing of semi-structured and structured information. The paper presents an architecture for text integration that extends Semantic Service Environments with two types of integration services. The backbone of the Information Resource Sharing and Integration Service is a shared environment ontology that consolidates information on the project context and the available model, text and general linguistic resources. It also allows for the configuration of Semantic Text Analysis and Annotation Services to analyse the text documents, as well as for capturing the discovered text information and sharing it through semantic notification and retrieval engines. A particular focus of the paper is the definition of the overall integration process, configuring a complementary set of analysis and information-sharing components.

The use of virtual reality techniques in the development of educational applications brings new perspectives to the teaching of subjects related to civil construction in the Civil Engineering domain. In order to obtain models able to visually simulate the construction process of two types of construction work, the research turned to the techniques of geometric modelling and virtual reality. The applications developed for this purpose concern the construction of a cavity wall and of a bridge. These models make it possible to view the physical evolution of the work, to follow the planned construction sequence and to visualise details of the form of every component of the works. They also support the study of the type and method of operation of the equipment necessary for these construction procedures. The models have been used to distinct advantage as educational aids in first-degree courses in Civil Engineering. Normally, three-dimensional geometric models used to present architectural and engineering works show only their final form and do not allow the observation of their physical evolution. The visual simulation of the construction process needs to be able to change the geometry of the project dynamically. In the present study, two engineering construction work models were created, from which it was possible to obtain three-dimensional models corresponding to different states of their form, simulating distinct stages of their construction. Virtual reality technology was applied to the 3D models; its capabilities allow the interactive real-time viewing of 3D building models and facilitate the process of visualising, evaluating and communicating.

The effective cooperation of all specialist planners involved in the building planning process is a prerequisite for economical, high-quality construction. Building project organisations typically consist of numerous independent planning partners who work on specific planning tasks at distributed locations and store their results in partial product models. Since planning processes in construction involve a strong division of labour, the partial product models of the individual disciplines are highly interdependent. The goal of the approach presented here is the integration of the partial product models of building design into a network-based model federation, using fire protection planning as an example. The paper considers the problems of distribution and, in particular, of the semantic heterogeneity of the partial product models involved. Distributed access is realised with mobile software agents, which can move freely within the network-based planning federation and act as representatives of the specialist planners. The problem of the semantic heterogeneity of the partial product models is solved on the basis of ontologies. To this end, first, domain ontologies are developed that model real-world objects of a closed domain, here fire protection. Second, application ontologies are developed that represent the individual proprietary data stores (in the sense of partial product models) of the respective disciplines. Both kinds of ontologies are linked by a rule-based approach. In the fire protection use case presented, the domain ontology serves as a uniform interface for accessing the distributed models, abstracting from their database specifics and proprietary schemata.
With mobile agents and semantic technologies, a platform can thus be provided that, first, allows the dynamic integration of resources into the planning federation and, second, on this basis enables engineering-oriented processing methods to be realised independently of the distribution and heterogeneity of the integrated resources.

By treating the production process as the central transformation element, the structure of construction production is captured realistically. Integrating a process-oriented cost definition relates the relevant cost parameters and production factors in a way that is consistent with the real cost structure and cost dynamics of a construction site; the relationship between construction time and cost is captured and evaluated directly. The high dynamics of construction production between capacity-constrained resources and production processes is accounted for by a pool model and by simulation as the computational method. A simple modelling of cyclically repeating work operations (takt planning) is possible: in the simulation, takt formation emerges from the capacity constraints without user intervention. An optimisation method can automatically search for the cheapest or fastest production variant.
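How a takt can emerge from a capacity constraint alone can be illustrated with a deliberately simple scheduling sketch (the pool model and simulation engine of the paper are far richer; all names and numbers here are hypothetical):

```python
def schedule(sections, duration, crane_capacity=1):
    """Toy model: each building section needs a crane for `duration`
    time units. With a capacity-limited crane pool, staggered starts
    (a takt) emerge without the user prescribing them."""
    crane_free_at = [0.0] * crane_capacity  # next free time per crane
    starts = {}
    for s in range(sections):
        # take the crane that becomes available first
        k = min(range(crane_capacity), key=lambda i: crane_free_at[i])
        start = crane_free_at[k]
        starts[s] = start
        crane_free_at[k] = start + duration
    makespan = max(crane_free_at)
    return starts, makespan
```

With one crane the section starts are automatically staggered by one duration each — the takt; doubling the pool capacity halves the makespan.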

Modelling the Plastic Hinge in the Statically Indeterminable Reinforced Concrete Bar Elements (2006)

The paper presents an example numerical model for calculating reinforced concrete bar structures. Commonly applied dimensioning methods do not cover the occurrence of plastic hinges under the limit load of the structure. The model proposed by A. Borcz is based on the differential equation of the deflection line of the beam and includes the effects of redistribution of internal forces as well as rheological effects. Experimental parameters obtained in earlier investigations describe the effects resulting from the formation of plastic hinges in the proposed equation.

This research focuses on the Case-Based Reasoning paradigm in architectural design (CBD) and education. The starting point for further exploring this only seemingly comprehensively investigated field of research is the finding that promising concepts exist but play no role in the daily routine of designing architects or in university education. In search of reasons for this limited success, a critical review of the CBR approach to architectural education and design was performed. The aim was to identify gaps in CBD research and to discover potential fields of research within CBR in architectural education and design that could improve acceptance and practical suitability. Two major shortcomings could be identified: first, the way the retrieval mechanisms of the systems under investigation relate to the needs of architectural designers and students; second, successful CBD systems rely on third parties sharing their experiences with others and filling the databases with relevant cases. Two questions therefore remain unanswered: which projects become part of the database, and how existing projects get not only described but also evaluated. This is an essential task and a prerequisite for meeting the requirements of the underlying theory of CBR.

The presented method for the physically non-linear analysis of stresses and deformations of composite cross-sections and members is based on energy principles and their transformation into non-linear optimisation problems. From the LAGRANGE principle of the minimum of total potential energy, a kinematic formulation of the mechanical problem can be developed, which has the general advantage that pre-deformations caused by shrinkage, temperature, residual deformations after unloading and the like can be considered directly. Thus the non-linear analysis of composite cross-sections with layers of different mechanical properties and different preloading becomes possible, and cracks in concrete, stiffness degradation and other specifics of the material behaviour can be taken into account without fundamental modification of the mathematical model. The impact of local defects on the bearing capacity of an entire element can be analysed in the same principled way. Standard computational systems for mathematical optimisation, or general spreadsheet programs, enable an uncomplicated implementation of the developed models and an effective non-linear analysis of composite cross-sections and elements.

A new approach to the non-linear analysis of cross-sections loaded by normal forces and bending moments is presented in the paper. The mechanical model is based on the LAGRANGE principle of the minimum of total potential energy. Deformations, stresses and limit load parameters are obtained by solving a non-linear optimisation problem. The mathematical model is independent of the specifics of the material. In addition to the stress-strain relation and the specific strain energy W(ε), two further functions F(ε) and Φ(ε) are introduced to describe the material behaviour. Thus cracks in concrete, material non-linearity etc. can be taken into account without basic modification of the numerical algorithm. For polygonal cross-sections, GAUSS' integral theorem is used. Numerical solutions of the non-linear optimisation problems can be found with standard software. Thus the analysis of reinforced concrete cross-sections, or more generally of composite cross-sections with non-linear material behaviour, is as simple as in the case of linear elasticity. The application of the method is demonstrated for polygonal cross-sections. Pre-stresses or pre-strains can easily be included in the mathematical model.
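The underlying idea — finding the strain plane (ε₀, κ) at which the total potential energy is stationary, i.e. at which the internal forces balance the external N and M — can be sketched for a rectangular cross-section of elastic-perfectly-plastic material (fibre integration and a Newton iteration on the gradient; geometry and material values are hypothetical, and the paper's actual formulation is more general):

```python
def analyse_section(N_ext, M_ext, b=0.2, h=0.4, E=2.1e11, f_y=2.35e8,
                    n_fibres=200):
    """Find the strain plane (eps0, kappa) making the total potential
    energy stationary: grad Pi = (N(eps0,kappa) - N_ext, M(...) - M_ext) = 0."""
    dy = h / n_fibres
    ys = [-h / 2 + (i + 0.5) * dy for i in range(n_fibres)]
    dA = b * dy

    def sigma(eps):
        # elastic-perfectly-plastic: clip the stress at the yield stress
        return max(-f_y, min(f_y, E * eps))

    def internal(eps0, kappa):
        N = sum(sigma(eps0 + kappa * y) * dA for y in ys)
        M = sum(sigma(eps0 + kappa * y) * y * dA for y in ys)
        return N, M

    eps0, kappa = 0.0, 0.0
    for _ in range(50):  # Newton iteration with a numerical Jacobian
        N, M = internal(eps0, kappa)
        r0, r1 = N - N_ext, M - M_ext
        if abs(r0) < 1.0 and abs(r1) < 1.0:
            break
        d = 1e-8
        N1, M1 = internal(eps0 + d, kappa)
        N2, M2 = internal(eps0, kappa + d)
        J = [[(N1 - N) / d, (N2 - N) / d],
             [(M1 - M) / d, (M2 - M) / d]]
        det = J[0][0] * J[1][1] - J[0][1] * J[1][0]
        eps0 -= (J[1][1] * r0 - J[0][1] * r1) / det
        kappa -= (-J[1][0] * r0 + J[0][0] * r1) / det
    return eps0, kappa
```

In the elastic range the result reduces to the textbook answer κ = M/(EI), while the same code handles partially yielded states without any change to the algorithm — the point the abstract makes about material-independent formulations.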

Major problems in applying selective sensitivity to system identification are the need for precise knowledge of the system parameters and the realisation of the required system of forces. This work presents a procedure that derives selectively sensitive excitation through iterative experiments. The first step is to determine the selectively sensitive displacement and force patterns. These are obtained by introducing prior information on the system parameters into an optimisation that minimises the sensitivities of the structural response with respect to the unselected parameters while keeping the sensitivities with respect to the selected parameters constant. In a second step, the force pattern is used to derive dynamic loads on the tested structure, and measurements are carried out; an automatic control ensures the required excitation forces. In a third step, the measured outputs are used to update the prior information. The strategy is to minimise the difference between a predicted displacement response, formulated as a function of the unknown parameters and the measured displacements, and the selectively sensitive displacement calculated in the first step. With the updated parameter values, a re-analysis of selective sensitivity is performed, and the experiment is repeated until the displacement responses of the model and the actual structure agree. As an illustration, a simply supported steel beam under harmonic excitation is investigated, demonstrating that the adaptive excitation can be obtained efficiently.
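The first step — choosing a force pattern that is sensitive to the selected parameter but insensitive to the unselected ones — can be illustrated on a static two-degree-of-freedom spring chain (a stand-in for the beam experiment; the brute-force direction scan below replaces the paper's constrained optimisation):

```python
import math

def displacement_sensitivities(k1, k2, f):
    """Two-DOF spring chain: K = [[k1+k2, -k2], [-k2, k2]] (ground
    spring k1, coupling spring k2). Returns du/dk1 and du/dk2 for a
    load vector f, using du/dk_i = -K^{-1} (dK/dk_i) u."""
    det = k1 * k2  # determinant of K
    def solve(rhs):  # K^{-1} rhs, via the explicit 2x2 inverse
        return [(k2 * rhs[0] + k2 * rhs[1]) / det,
                (k2 * rhs[0] + (k1 + k2) * rhs[1]) / det]
    u = solve(f)
    dK1_u = [u[0], 0.0]                 # dK/dk1 = [[1,0],[0,0]]
    dK2_u = [u[0] - u[1], u[1] - u[0]]  # dK/dk2 = [[1,-1],[-1,1]]
    s1 = [-x for x in solve(dK1_u)]
    s2 = [-x for x in solve(dK2_u)]
    return s1, s2

def best_excitation(k1=100.0, k2=50.0, n=3600):
    """Scan unit load directions; pick the one most sensitive to the
    selected parameter k1 and least sensitive to the unselected k2."""
    best, best_ratio = None, -1.0
    for i in range(n):
        a = 2 * math.pi * i / n
        f = [math.cos(a), math.sin(a)]
        s1, s2 = displacement_sensitivities(k1, k2, f)
        ratio = math.hypot(*s1) / (math.hypot(*s2) + 1e-15)
        if ratio > best_ratio:
            best_ratio, best = ratio, f
    return best, best_ratio
```

For this system the scan recovers the analytic answer f ∝ (1, 0): that load leaves the coupling spring undeformed, so the response is exactly insensitive to k2.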

The paper presents a linear static analysis of continuous orthotropic thin-walled shell structures, simply supported at the transverse ends, with an arbitrarily deformable contour of the cross-section. The external loads can be arbitrary as well. This class of structures covers most bridges, scaffold bridges, some roof structures etc. A numerical example of a continuous steel structure over five spans with an open cross-section contour has been solved. The examination of the structure uses the following two computational models: a prismatic structure consisting of isotropic strips, plates and ribs, considering their real interaction; and a smooth orthotropic plate equivalent to the structure in the first model. The displacements and forces characterising the stressed and deformed condition of the structure have been determined, and the results obtained from the two solutions have been analysed. The study of the structure is carried out with the force method in combination with the analytical finite strip method (AFSM) in displacements. The basic system is obtained by separating the superstructure from the substructure at the locations of the intermediate supports and consists of two parts. The first part is a single-span thin-walled prismatic shell structure; the second part comprises the supports (columns, space frames etc.). The connection between the superstructure and the intermediate supports is made under arbitrary supporting conditions. The forces at the supporting points in the direction of the removed connections are taken as the basic unknowns of the force method. The solution of the superstructure is accomplished by the AFSM in displacements: the structure is divided in only one (transverse) direction into a finite number of plane strips connected to each other in longitudinal linear nodes. The three displacements of the points on the node lines and the rotation around those lines are taken as the basic unknowns in each node.
The boundary conditions of each strip of the basic system correspond to simple support along the transverse ends and restraint along the longitudinal ones. The particular strip of the basic system is solved by the method of single trigonometric series. The method reduces to solving a discrete structure in displacements and restoring its continuity at the locations of the sections made, with respect to both displacements and forces. The two parts of the basic system are solved in sequence under the action of unit values of each of the basic unknowns and under the external load. The solution of the support part is accomplished using FEM software for structural analysis. The basic unknown forces are determined from a system of canonical equations, i.e. the conditions of deformation continuity at the locations of the connections removed between the superstructure and the intermediate supports. The final displacements and forces at an arbitrary point of the continuous superstructure are determined using the principle of superposition. The computations were carried out with software developed in Visual Fortran version 5.0 for PC.

The paper is dedicated to exploring the decidability of the market segmentation problem with the help of linear convolution algorithms. The mathematical formulation of this problem is an interval task of covering a bipartite graph by stars: vertices of the first partition correspond to types of commodities, vertices of the second to customer groups. A method is proposed for reducing the interval problem to a two-criterion task that has an implemented linear convolution algorithm. It is proved that the multicriterion, and consequently the interval, market segmentation problem cannot be solved with a linear convolution algorithm.

Water resources development and management is a complex problem. It includes the design and operation of single system components, often as part of larger interrelated systems and usually on the basis of river basins. While several decades ago the dominant objective was the maximisation of economic benefit, other objectives have evolved as part of the sustainable development envisaged. Today, planning and operation of larger water resources systems is practically impossible without adequate computer tools, normally one or several models, increasingly combined with database management systems and multi-criteria assessment procedures in decision support systems. The use of models in civil engineering already has a long history where structural engineering is concerned. These design support models, however, must rather be seen as expert systems made to support the engineer in his daily work. They often have no direct link to the stakeholder and decision-maker communities. The scale of investigation is often much larger in water resources engineering than in structural engineering, which relates to different stakeholders and decision-making procedures. Still, several similarities are obvious, which can be summarised as the search for a compromise solution to a complex, i.e. multi-objective and interdisciplinary, decision problem. While in structural engineering, e.g., aesthetics, stability and energy consumption might be important evaluation criteria in addition to construction and maintenance cost, other or additional criteria have to be considered in water resources planning, such as political, environmental and social criteria. In this respect civil engineers tend to overemphasise technical criteria. In the future, the existing expert systems should be embedded in an improved decision support shell, keeping in mind that decision makers are hardly interested in numerical modelling results.
The paper introduces the problem and demonstrates the state of the art by means of an example.

The paper proposes a method for calculating internal forces and displacements in reinforced concrete beams before and after cracking. For ideally elastic bars, the transfer matrix proposed by Rakowski is applied. The effects associated with cracking are introduced by means of Borcz's theory in a spectral way. A numerical example is given. The presented approach also makes it possible to treat dynamic problems and problems connected with the stability of compressed and bent cracked beams and columns.
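A generic transfer-matrix computation for an ideally elastic prismatic beam field (a textbook Euler-Bernoulli field matrix, not Rakowski's specific formulation and without the cracking terms) can be sketched as follows; the cantilever example recovers the classical tip deflection PL³/3EI:

```python
def field_matrix(L, EI):
    """Transfer matrix mapping the state [w, theta, M, V] across a
    prismatic Euler-Bernoulli beam field of length L (sign convention:
    M = EI * w'', V = M')."""
    return [
        [1.0, L, L**2 / (2 * EI), L**3 / (6 * EI)],
        [0.0, 1.0, L / EI, L**2 / (2 * EI)],
        [0.0, 0.0, 1.0, L],
        [0.0, 0.0, 0.0, 1.0],
    ]

def apply(T, s):
    # matrix-vector product: propagate the state vector across the field
    return [sum(T[i][j] * s[j] for j in range(4)) for i in range(4)]

# Cantilever of length L fixed at x=0 with a transverse tip load P:
# boundary conditions w(0) = theta(0) = 0, M(L) = 0, V(L) = P give the
# start state V0 = P and M0 = -P*L; propagation yields the tip deflection.
L, EI, P = 2.0, 5.0e6, 1.0e3
state0 = [0.0, 0.0, -P * L, P]
tip = apply(field_matrix(L, EI), state0)
```

For multi-span beams, field matrices and point matrices (for supports or, as in the paper, cracked sections) are simply multiplied in sequence.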

This paper deals with the development of a new multi-objective evolution strategy in combination with an integrated pollution-load and water-quality model. The optimisation algorithm combines the advantages of the Non-Dominated Sorting Genetic Algorithm and self-adaptive evolution strategies. The identification of a good spread of solutions on the Pareto-optimal front and the optimisation of a large number of decision variables both demand numerous simulation runs. In addition, statements with regard to the frequency of critical concentrations and peak discharges require continuous long-term simulations. Therefore, a fast-running integrated simulation model is needed that provides the required precision of the results. For this purpose, a hydrological deterministic pollution-load model has been coupled with a river water-quality model and a rainfall-runoff model. Wastewater treatment plants are simulated in a simplified way. The functionality of the optimisation and simulation tool has been validated by analysing a real catchment area including the sewer system, WWTP, water body and natural river basin. For the optimisation/rehabilitation of the urban drainage system, both innovative and approved measures have been examined and used as decision variables. Investment costs and river water quality criteria have been used as objective functions.
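The non-dominated sorting at the core of NSGA-type algorithms can be sketched in a few lines (a naive O(n²)-per-front version, minimising all objectives):

```python
def dominates(a, b):
    """Minimisation: a dominates b if it is no worse in every objective
    and strictly better in at least one."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def non_dominated_sort(points):
    """Return fronts of indices; the first front is the Pareto-optimal set."""
    fronts, remaining = [], set(range(len(points)))
    while remaining:
        front = [i for i in remaining
                 if not any(dominates(points[j], points[i])
                            for j in remaining if j != i)]
        fronts.append(sorted(front))
        remaining -= set(front)
    return fronts
```

In the paper's setting each point would be a drainage-system design evaluated on (investment cost, water-quality criterion); the first front is the trade-off set presented to the planner.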

The Element-Free Galerkin Method has become a very popular tool for the simulation of mechanical problems with moving boundaries. The internally applied Moving Least Squares approximation generally uses Gaussian or cubic weighting functions and has compact support. Due to the approximative character of this method, the obtained shape functions do not fulfil the interpolation condition, which causes additional numerical effort for the imposition of the essential boundary conditions. The application of a singular weighting function, which leads to singular coefficient matrices at the nodes, can solve this problem, but requires a very careful placement of the integration points. Special procedures for the handling of such singular matrices have been proposed in the literature, which require additional numerical effort. In this paper a non-singular weighting function is presented which leads to an exact fulfilment of the interpolation condition. This weighting function yields regular values of the weights and of the coefficient matrices in the whole interpolation domain, even at the nodes. Furthermore, this function gives much more stable results for a varying size of the influence radius and for strongly distorted nodal arrangements than classical weighting function types. Nevertheless, for practical applications the results are similar to those obtained with the regularised weighting type presented by the authors in previous publications. Finally, a new concept is presented which enables an efficient analysis of systems with strongly varying node density. In this concept the nodal influence domains are adapted to the nodal configuration by interpolating the influence radius for each direction from the distances to the natural neighbour nodes. This approach requires a Voronoi diagram of the domain, which is available in this study since Delaunay triangles are used as integration background cells.
The numerical examples show that, for systems with varying node density, this method leads to a more uniform and reduced number of influencing nodes than the classical circular influence domains; the small additional numerical effort for interpolating the influence radius thus leads to a remarkable reduction of the total numerical cost in a linear analysis while similar results are obtained. For non-linear calculations this advantage would be even more significant.
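The non-interpolatory character of Moving Least Squares with a regular (e.g. Gaussian) weighting function can be demonstrated with a small 1D sketch: the fit reproduces linear data exactly (consistency of the linear basis) but, at a node, generally differs from the nodal value of non-linear data. The basis, weight shape and radius below are illustrative choices:

```python
import math

def mls_fit(x_eval, nodes, values, radius=2.5):
    """1D Moving Least Squares with linear basis p = (1, x) and a
    truncated Gaussian weight of support `radius` around each node."""
    # weighted moment matrix A and right-hand side b at x_eval
    A = [[0.0, 0.0], [0.0, 0.0]]
    b = [0.0, 0.0]
    for xi, ui in zip(nodes, values):
        r = abs(x_eval - xi) / radius
        if r >= 1.0:
            continue  # node outside the influence domain
        w = math.exp(-(r / 0.4) ** 2)
        p = (1.0, xi)
        for i in range(2):
            for j in range(2):
                A[i][j] += w * p[i] * p[j]
            b[i] += w * p[i] * ui
    # solve the 2x2 system A a = b for the local coefficients
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    a0 = (A[1][1] * b[0] - A[0][1] * b[1]) / det
    a1 = (-A[1][0] * b[0] + A[0][0] * b[1]) / det
    return a0 + a1 * x_eval
```

Because the fit at a node is a weighted regression over the neighbours rather than an interpolation, essential boundary conditions cannot be imposed by simply prescribing nodal values — the motivation for the singular and non-singular weighting functions discussed in the paper.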

The modelling of crack propagation in plain and reinforced concrete structures is still a field for many researchers. If a macroscopic description of the cohesive cracking process of concrete is applied, generally the Fictitious Crack Model is utilised, where a force transmission over micro-cracks is assumed. In most applications of this concept the cohesive model represents the relation between the normal crack opening and the normal stress, which is usually defined as an exponential softening function, independently of the shear stresses in the tangential direction; the cohesive forces are then calculated only from the normal stresses. Carol et al. (1997) developed an improved model using a coupled relation between normal and shear damage based on an elasto-plastic constitutive formulation. This model is based on a hyperbolic yield surface depending on the normal and shear stresses and on the tensile and shear strength, and it also represents the effect of shear-traction-induced crack opening. Due to the elasto-plastic formulation, where the inelastic crack opening is represented by plastic strains, the model is limited to applications with monotonic loading. In order to enable applications with unloading and reloading, the existing model is extended in this study using a combined plastic-damage formulation, which enables the modelling of crack opening and crack closure. Furthermore, the corresponding algorithmic implementation using a return-mapping approach is presented, and the model is verified by means of several numerical examples. Finally, an investigation concerning the identification of the model parameters by means of neural networks is presented. In this analysis an inverse approximation of the model parameters is performed, using a given set of points of the load-displacement curves as input values and the model parameters as output terms.
It is shown that the elasto-plastic model parameters can be identified well with this approach, but the identification requires a huge number of simulations.
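The return-mapping idea can be shown in its simplest setting, 1D elasto-plasticity with linear isotropic hardening (a textbook stand-in, not the coupled plastic-damage interface model of the paper):

```python
def return_mapping_1d(strain_path, E=200e3, H=10e3, sigma_y=250.0):
    """1D elasto-plastic return mapping with linear isotropic hardening.
    For each strain increment: elastic trial step, yield check, and
    plastic correction back onto the (expanded) yield surface."""
    eps_p, alpha = 0.0, 0.0  # plastic strain, hardening variable
    stresses = []
    for eps in strain_path:
        sigma_trial = E * (eps - eps_p)
        f = abs(sigma_trial) - (sigma_y + H * alpha)  # yield function
        if f > 0.0:  # plastic step: return to the updated yield surface
            dgamma = f / (E + H)
            sign = 1.0 if sigma_trial > 0 else -1.0
            eps_p += dgamma * sign
            alpha += dgamma
        stresses.append(E * (eps - eps_p))
    return stresses
```

The paper's extension replaces the purely plastic correction by a combined plastic-damage update, so that unloading follows a degraded stiffness and crack closure can be represented.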

In engineering science the modelling and numerical analysis of complex systems and relations plays an important role. In order to carry out such an investigation, for example a stochastic analysis, in a reasonable computational time, approximation procedures have been developed. A well-known approach is the response surface method, where the relation between input and output quantities is represented, for example, by global polynomials or by local interpolation schemes such as Moving Least Squares (MLS). In recent years artificial neural networks (ANN) have been applied for such purposes as well. Recently an adaptive response surface approach for reliability analyses was proposed, which is very efficient with respect to the number of expensive limit state function evaluations. Due to the applied simplex interpolation, the procedure is limited to small dimensions. In this paper this approach is extended to larger dimensions using combined ANN and MLS response surfaces for evaluating the adaptation criterion with only one set of joined limit state points. As adaptation criterion, a combination of the maximum difference in the conditional probabilities of failure and the maximum difference in the approximated radii is applied. Compared to response surfaces on directional samples or to plain directional sampling, the failure probability can be estimated with a much smaller number of limit state points.
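The basic response-surface workflow — fit a cheap surrogate to a few expensive limit-state evaluations, then run the Monte Carlo on the surrogate — can be sketched as follows (the limit state, the quadratic basis and all sample sizes are illustrative; the adaptive ANN/MLS scheme of the paper is considerably more involved):

```python
import random

def limit_state(x1, x2):
    # Hypothetical "expensive" limit state: failure when g < 0.
    return 5.0 - x1**2 - 0.5 * x2**2

def features(x1, x2):
    return [1.0, x1, x2, x1 * x1, x1 * x2, x2 * x2]

def fit_quadratic(samples):
    """Least-squares quadratic response surface via the normal equations,
    solved by Gaussian elimination with partial pivoting."""
    k = 6
    A = [[0.0] * k for _ in range(k)]
    b = [0.0] * k
    for x1, x2, g in samples:
        p = features(x1, x2)
        for i in range(k):
            for j in range(k):
                A[i][j] += p[i] * p[j]
            b[i] += p[i] * g
    for col in range(k):
        piv = max(range(col, k), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, k):
            f = A[r][col] / A[col][col]
            for c in range(col, k):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    coeff = [0.0] * k
    for r in range(k - 1, -1, -1):
        s = b[r] - sum(A[r][c] * coeff[c] for c in range(r + 1, k))
        coeff[r] = s / A[r][r]
    return coeff

random.seed(0)
design = [(random.uniform(-3, 3), random.uniform(-3, 3)) for _ in range(30)]
samples = [(x1, x2, limit_state(x1, x2)) for x1, x2 in design]
coeff = fit_quadratic(samples)

# Monte Carlo on the cheap surrogate instead of the expensive model:
mc = [(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(20000)]
pf = sum(1 for x1, x2 in mc
         if sum(c * p for c, p in zip(coeff, features(x1, x2))) < 0.0) / len(mc)
```

Only 30 limit-state evaluations are spent on the surrogate, while the 20,000 Monte Carlo samples cost almost nothing — the efficiency argument behind all response-surface reliability methods.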

Requires for reliability and durability of structures and their elements with simultaneous material economy have stimulated improvement of constitutive equations for description of elasto-plastic deformation processes. This has led to the development of phenomenological modelling of complex phenomena of irreversible deformation including history-dependent and rate-dependent effects. During the last several decades many works have been devoted to the development of elasto-plastic models, in order to better predict the material behavior under combined variable thermo-mechanical loading. The increase of accuracy of stress analysis and safety factors for complex structures with the help of modern finite-element packages (ABAQUS, ANSYS, COSMOS, LS-DYNA, MSC.MARC, MSC.NASTRAN, PERMAS and other) can be provided only by use of complex and special variants of plasticity theories, which are adequate for the considered loading conditions and based on authentic information about properties of materials. The areas of application of the various theories (models) are as a rule unknown to the users of finite-element packages at the existing variety loading condition sin machine-building designs. At the moment a universal theory of inelasticity is absent and even the most accomplished theories can not guarantee adequate description of deformation processes for arbitrary structure under wide range of loading programs. The classifier of materials, loading conditions, effects (phenomena) and list of basic experiments are developed by the authors. Use of these classifiers for an establishment of hierarchy of models is a first step for introduction of the multimodel analysis into computational practice. The set of the classic and modern inelasticity theories is considered, so that they are applicable for stress analysis of structures under complex loading programs. 
Among them are plastic flow theories with linear and nonlinear isotropic and kinematic hardening, multisurface theories, endochronic theory, holonomic theory, rheologic models, the theory of elasto-plastic processes, slip theory, physical theories (single crystal and polycrystalline models) and others. The classification of materials arranges them by degree of homogeneity, chemical composition, level of strength and plasticity, behavior under cyclic loading, anisotropy of properties in the initial condition, anisotropy of properties during the deformation process, and structural stability. The classification of loading conditions takes into consideration proportional and non-proportional loading, temperature range, combinations of cyclic and monotonous loading, uniaxial, biaxial and complex stress states, curvature of the strain path, presence of stress concentrators and level of strain gradient. A unified general form of constitutive equations, based on the concept of internal state variables, is presented for all used material models. The wide range of inelastic material models mentioned above has been implemented in the finite element program PANTOCRATOR developed by the authors (see www.pantocrator.narod.ru for details). The applicability of the different material models is considered both for a material element and for complex structures subjected to complex non-proportional loading.

A new application area of software technology is smart living or sustainable living. Within this area, application platforms are designed and realized with the goal of supporting value added services. In this context, value added services integrate microelectronics, home automation and services to enhance the attractiveness of flats, homes and buildings. Real estate companies and service providers dealing with home services are especially interested in an effective design and management of their services. Service engineering is an established approach for designing customer-oriented service processes. It consists of several phases, from situation analysis through service creation and service design to service management. This article describes how the service blueprint method can be used to design service processes. Smart living includes all actions that turn a flat into a smart home for living. One special requirement of this application domain is the use of local components (actuators, sensors) within service processes. This article shows how the extended method supports service providers in improving the quality of customer-oriented service processes and in deriving the required interfaces of the involved actors. For the civil engineering process it becomes possible to derive the required information from a built-in home automation system. The aim is to show how to obtain the smart local components needed to fulfill the IT-supported value added services offered later. Value added services focused on inhabitants are grouped into consulting and information, care and supervision, leisure time activities, repairs, mobility and delivery, safety and security, and supply and disposal.

In many applications, such as parameter identification of oscillating systems in civil engineering, speech processing, image processing and others, we are interested in the frequency content of a signal locally in time. Wavelet analysis initially provides a time-scale decomposition of signals, but this wavelet transform can be connected with an appropriate time-frequency decomposition. In Matlab, for instance, pseudo-frequencies of wavelet scales are defined as the frequency centers of the corresponding bands. These frequency bands overlap more or less, depending on the choice of the biorthogonal wavelet system. Such a definition of a frequency center is possible and useful, because different frequencies predominate at different dyadic scales of a wavelet decomposition, or at different nodes of a wavelet packet decomposition tree. The goal of this work is to offer better algorithms for characterising frequency band behaviour and for calculating frequency centers of orthogonal and biorthogonal wavelet systems. This is done using product formulas in the frequency domain. The connecting procedures are now more analytically founded, better connected with wavelet theory and easier to assess. These procedures do not need any time-domain approximation of the wavelet and scaling functions. The method only works for biorthogonal wavelet systems, where scaling functions and wavelets are defined via discrete filters. But this is the practically essential case, because it is connected with fast algorithms (FWT, Mallat algorithm). Finally, closed formulas of pure oscillations corresponding to the wavelet transform are given. They can generally be used to compare the application of different wavelets in the FWT with regard to their frequency behaviour.
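The Matlab-style pseudo-frequency mentioned above maps a wavelet scale to a frequency via the center frequency of the mother wavelet. A minimal sketch of that convention (the center frequency value 0.7 is an assumed, illustrative number, not a result of the paper's improved algorithms):

```python
def pseudo_frequency(center_freq, scale, dt):
    """Pseudo-frequency of a wavelet at a given scale (Matlab scal2frq convention).

    center_freq : frequency center of the mother wavelet (at unit sampling period)
    scale       : dyadic scale a of the wavelet decomposition
    dt          : sampling period of the signal
    """
    return center_freq / (scale * dt)

# assumed center frequency of 0.7 for the mother wavelet, scale 2, unit sampling
print(pseudo_frequency(0.7, scale=2, dt=1.0))  # 0.35
```

The paper's contribution is precisely a more analytically founded computation of `center_freq` from the discrete filters, instead of a time-domain approximation.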

The design and application of high performance materials demands extensive knowledge of the materials damage behavior, which significantly depends on the meso- and microstructural complexity. Numerical simulations of crack growth on multiple length scales are promising tools to understand the damage phenomena in complex materials. In polycrystalline materials it has been observed that the grain boundary decohesion is one important mechanism that leads to micro crack initiation. Following this observation the paper presents a polycrystal mesoscale model consisting of grains with orthotropic material behavior and cohesive interfaces along grain boundaries, which is able to reproduce the crack initiation and propagation along grain boundaries in polycrystalline materials. With respect to the importance of modeling the geometry of the grain structure an advanced Voronoi algorithm is proposed to generate realistic polycrystalline material structures based on measured grain size distribution. The polycrystal model is applied to investigate the crack initiation and propagation in statically loaded representative volume elements of aluminum on the mesoscale without the necessity of initial damage definition. Future research work is planned to include the mesoscale model into a multiscale model for the damage analysis in polycrystalline materials.

This research focuses on an approach to describe principles in architectural layout planning within the domain of revitalization. With the aid of mathematical rules, which are executed by a computer, solutions to design problems are generated. Provided that "design" is in principle a combinatorial problem, i.e. a constraint-based search for an overall optimal solution of a problem, an exemplary method is described to solve such problems in architectural layout planning. To avoid conflicts relating to theoretical subtleties, a customary approach adopted from Operations Research has been chosen in this work. In this approach, design is a synonym for planning, which can be described as a systematic and methodical course of action for the analysis and solution of current or future problems. The planning task is defined as the analysis of a problem with the aim of preparing optimal decisions by the use of mathematical methods. The decision problem of a planning task is represented by an optimization model, and an efficient algorithm is applied in order to find one or more solutions to the problem. The basic principle underlying the approach presented herein is the understanding of design as a search for solutions that fulfill specific criteria. This search is executed by the use of a constraint programming language.
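The view of design as a constraint-based combinatorial search can be sketched with a toy brute-force solver. The rooms, slots and adjacency constraint below are hypothetical examples, not taken from the paper, and a real constraint programming language would prune the search instead of enumerating it:

```python
from itertools import permutations

def layout_solutions(rooms, slots, constraints):
    """Brute-force constraint search for a room-to-slot assignment.

    constraints: predicates over the assignment dict; 'design' is treated
    as a combinatorial search for assignments satisfying all of them.
    """
    for perm in permutations(slots, len(rooms)):
        assignment = dict(zip(rooms, perm))
        if all(c(assignment) for c in constraints):
            yield assignment

# hypothetical constraint: kitchen adjacent (slot distance 1) to dining room
sols = list(layout_solutions(
    ["kitchen", "dining", "bath"], [0, 1, 2],
    [lambda a: abs(a["kitchen"] - a["dining"]) == 1]))
print(len(sols))  # 4 of the 6 permutations satisfy the adjacency constraint
```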

In civil engineering practice, values of column forces are often required before any detailed analysis of the structure has been performed. One reason for this arises from the fast-tracked nature of the majority of construction projects: foundations are laid and base columns constructed while analysis and design are still in progress. The need for quick results when feasibility studies are performed, or when evaluating the effect of design changes on supporting columns, forms other situations in which column forces are required but a detailed analysis to obtain them seems superfluous. It was therefore concluded that the development of an efficient tool for column force calculations, avoiding the extensive input required by a finite element analysis, would be highly beneficial. The automation of the process is achieved by making use of a Voronoi diagram. The Voronoi diagram is used a) for subdividing the floor into influence areas and b) as a basis for automatic load assignment. The implemented procedure is integrated into a CAD system in which the relevant geometric information of the floor, i.e. its shape and column layout, can be defined or uploaded. A brief description of the implementation is included. Some comparative results and considerations regarding the continuation of the study are given.
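The Voronoi-based subdivision into influence areas can be approximated by a raster sketch: each cell of a rasterised floor is assigned to its nearest column (a discrete Voronoi partition) and the load on that cell is lumped onto that column. Floor dimensions, column positions and the load value are illustrative assumptions, not data from the paper:

```python
import numpy as np

def tributary_loads(columns, floor_w, floor_h, q, n=200):
    """Approximate Voronoi-based load assignment on a rectangular floor.

    columns : list of (x, y) column positions [m]
    q       : uniform floor load [kN/m^2]
    n       : raster resolution per floor edge
    """
    cols = np.asarray(columns, dtype=float)
    xs = (np.arange(n) + 0.5) * floor_w / n
    ys = (np.arange(n) + 0.5) * floor_h / n
    X, Y = np.meshgrid(xs, ys)
    pts = np.stack([X.ravel(), Y.ravel()], axis=1)
    # squared distance from every cell centre to every column
    d2 = ((pts[:, None, :] - cols[None, :, :]) ** 2).sum(axis=2)
    owner = d2.argmin(axis=1)                 # nearest column = Voronoi cell owner
    cell_area = (floor_w / n) * (floor_h / n)
    return np.bincount(owner, minlength=len(cols)) * cell_area * q

# four symmetric columns on a 10 m x 10 m floor under q = 5 kN/m^2
forces = tributary_loads([(2.5, 2.5), (7.5, 2.5), (2.5, 7.5), (7.5, 7.5)], 10, 10, 5)
print(forces)  # each column carries 125 kN, a quarter of the 500 kN total
```

An exact implementation would clip the true Voronoi polygons against the floor outline instead of rasterising.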

In this paper we evaluate 2D models of the soil-water characteristic curve (SWCC) that incorporate the hysteretic nature of the relationship between volumetric water content θ and suction ψ. The models are based on nonlinear least squares estimation of experimental data for sand. To estimate the dependent variable θ, the proposed models include two independent variables: suction and the sensor reading position (depth d in the column test). The variable d represents not only the position where suction and water content are measured, but also the initial suction distribution before each of the hydraulic loading test phases. As a result, the proposed 2D regression models have the advantage that they (a) can be applied for predicting θ at any position along the column and (b) give the functional form of the scanning curves.
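A common functional form for such SWCC regression is the van Genuchten curve; the sketch below shows only the 1D θ(ψ) relation with assumed parameters, not the paper's 2D model including the depth variable d:

```python
def van_genuchten(psi, theta_r, theta_s, alpha, n):
    """van Genuchten soil-water characteristic curve (a common SWCC form).

    theta(psi) = theta_r + (theta_s - theta_r) / (1 + (alpha*psi)^n)^(1 - 1/n)

    theta_r, theta_s : residual and saturated volumetric water content
    alpha, n         : fitting parameters obtained by nonlinear least squares
    """
    m = 1.0 - 1.0 / n
    return theta_r + (theta_s - theta_r) / (1.0 + (alpha * psi) ** n) ** m

# assumed parameters for a sand-like material; at zero suction theta = theta_s
print(van_genuchten(0.0, theta_r=0.05, theta_s=0.40, alpha=0.1, n=2.0))
```

A 2D extension in the spirit of the paper would let the fitted parameters depend on d, so that each depth carries its own initial-suction state.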

Interval analysis extends the concept of computing with real numbers to computing with real intervals. As a consequence, some interesting properties appear, such as the delivery of guaranteed results or confirmed global values. The former property is given in the sense that unknown numerical values are known to lie in a computed interval. The latter property states that the global minimum value, for example, of a given function is also known to be contained in an interval (or a finite set of intervals). Depending upon the amount of computational effort invested in the calculation, we can often find tight bounds on these enclosing intervals. The downside of interval analysis, however, is the mathematically correct but often very pessimistic size of the interval result. This is in particular due to the so-called dependency effect, which occurs when a single variable is used multiple times in one calculation. When interval analysis is applied to structural analysis problems, the dependency effect has a great influence on the quality of the numerical results. In this paper, a brief background of interval analysis is presented, and it is shown how it can be applied to the solution of structural analysis problems. A discussion of possible improvements as well as an outlook on parallel computing is also given.
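The dependency effect can be demonstrated with a minimal interval class: evaluating x - x in interval arithmetic yields a pessimistic enclosure of width 2·width(x), although the exact result is 0 for any real x, because the subtraction cannot know that both operands are the same variable:

```python
class Interval:
    """Minimal interval arithmetic, just enough to show the dependency effect."""

    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi

    def __sub__(self, other):
        # worst-case bounds: subtraction treats the operands as independent
        return Interval(self.lo - other.hi, self.hi - other.lo)

    def width(self):
        return self.hi - self.lo

x = Interval(1.0, 2.0)
print((x - x).width())  # 2.0, although x - x is exactly 0 for any real x
```

Rewriting an expression so that each variable occurs only once removes this overestimation, which is one of the improvement strategies alluded to in the abstract.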

We show a close relation of the Schrödinger equation and the conductivity equation to a Vekua equation of a special form. Under quite general conditions we propose an algorithm for the explicit construction of pseudoanalytic positive formal powers for the Vekua equation, which as a consequence gives us a complete system of solutions for the Schrödinger and the conductivity equations. Besides the construction of complete systems of exact solutions for the above-mentioned second order equations and the Dirac equation, we discuss some other applications of pseudoanalytic function theory.

Procedures for the construction of general solutions for some classes of partial differential equations (PDEs) are proposed, and a symmetry operator approach to raising the orders of polynomial solutions to linear PDEs is developed. We touch upon an "operator analytic function theory" for the solution of frequently occurring classes of equations of mathematical physics whose symmetry operators form a sufficiently large space. MAPLE package programs for building the operator variables are also elaborated.

Processes in civil engineering are complex and comprise a large number of different tasks with many logical dependencies. Based on these project-specific dependencies, a construction schedule is usually created manually. As a rule, several variants, and thus alternative construction sequences, exist for realizing a project. Which of these execution variants is applied in practice is decided by the respective project manager. If changes or disturbances occur during the construction process, the affected tasks and sequences have to be modified by hand, and alternative tasks and sequences executed instead. This procedure is often very laborious and expensive. Current research approaches deal with the automatic generation of construction schedules, based on tasks with their required preconditions and produced results. This contribution presents a methodology for computing, at any time, construction schedules with execution variants in the form of workflow nets. The presented method is illustrated schematically with an example from road construction.

For the construction of the superstructure of traffic areas, a multitude of different variants exists, depending on project-specific conditions. Based on the experience of project planners, similar execution variants are frequently chosen under similar conditions. To obtain a possible solution variant for the road superstructure, not only the legal guidelines but also already completed projects should therefore be taken into account. Within the scope of a scientific college at the Bauhaus-Universität Weimar, the application of case-based reasoning to the selection of execution variants for the road superstructure was investigated. This contribution presents the basic concepts of case-based reasoning and the determination of similar variants using simple examples from road superstructure construction.

This paper presents two new methods for the analysis of the technical state of large-panel residential buildings. The first method is based on elements extracted from the classical methods and on data about repairs and modernization collected from building documentation. The technical state of a building is calculated as a sum over several groups of elements defining the technical state. The deterioration in this method depends on: - the time which has passed since the last repair of an element, or the time which has passed since construction, - an estimate of the state of element groups, which can be determined on the basis of yearly inspections. This is a new, unique method. It is easy to use and does not require expert knowledge. The required data can easily be extracted from building documentation. For better accuracy, data from building inspections should be applied (in Poland inspections are made every year). The second method is based on processing the extracted data by means of artificial neural networks. The aim is to train artificial neural network configurations on a set of data containing values of the technical state and information about building repairs over recent years (or other information and building parameters), and then to analyse new buildings with the trained neural network. The second benefit of using artificial neural networks is the reduction of the number of parameters: instead of more than 40 parameters describing a building, about 6-12 are usually sufficient for satisfactory accuracy. This method may have lower accuracy, but it is less prone to data errors.

The Lucas-Kanade tracker has proven to be an efficient and accurate method for calculation of the optical flow. However, this algorithm can reliably track only suitable image features like corners and edges. Therefore, the optical flow can only be calculated for a few points in each image, resulting in sparse optical flow fields. Accumulation of these vectors over time is a suitable method to retrieve a dense motion vector field. However, the accumulation process limits application of the proposed method to fixed camera setups. Here, a histogram based approach is favored to allow more than a single typical flow vector per pixel. The resulting vector field can be used to detect roads and prescribed driving directions which constrain object movements. The motion structure can be modeled as a graph. The nodes represent entry and exit points for road users as well as crossings, while the edges represent typical paths.
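The per-pixel histogram accumulation described above might be sketched as follows; the grid size, bin count and feature points are illustrative assumptions, and a real system would feed in Lucas-Kanade flow vectors frame by frame:

```python
import numpy as np

def accumulate_flow(hist, points, flows, n_bins=8):
    """Accumulate sparse optical-flow vectors into per-pixel direction histograms.

    hist   : (H, W, n_bins) integer array, one direction histogram per pixel
    points : (N, 2) integer pixel coordinates (x, y) of tracked features
    flows  : (N, 2) flow vectors (dx, dy) at those features
    """
    angles = np.arctan2(flows[:, 1], flows[:, 0])               # in [-pi, pi]
    bins = ((angles + np.pi) / (2 * np.pi) * n_bins).astype(int) % n_bins
    for (x, y), b in zip(points, bins):
        hist[y, x, b] += 1     # histogram keeps more than one typical direction
    return hist

hist = np.zeros((4, 4, 8), dtype=int)
pts = np.array([[1, 1], [1, 1]])
flw = np.array([[1.0, 0.0], [-1.0, 0.0]])   # opposite directions at the same pixel
accumulate_flow(hist, pts, flw)
print(hist[1, 1].nonzero()[0])  # two distinct direction bins survive
```

A single averaged flow vector per pixel would cancel the two opposite directions; the histogram preserves both, which is the motivation stated in the abstract.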

For the assessment of old buildings, thermographic analysis aided by infrared cameras is nowadays employed in a wide range of applications. Image processing and evaluation can be economically practicable only if the image evaluation can be automated to the largest extent. For that reason, methods of computer vision are presented in this paper to evaluate thermal images. To detect typical thermal image elements, such as thermal bridges and lintels, in thermal images or gray-value images, methods of digital image processing have been applied, for which numerical procedures are available to transform, modify and encode images. At the same time, image processing can be regarded as a multi-stage process. In order to accomplish the process of image analysis from image formation through enhancement and segmentation to classification, appropriate functions must be implemented. For this purpose, different measuring procedures and methods for automated detection and evaluation have been tested.

A concept for the integrated modeling of urban and rural hydrology is introduced. The concept allows for simulations on the catchment scale as well as on the local scale. It is based on a 2-layer approach which facilitates the parallel coupling of a catchment hydrology model with an urban hydrology model, considering the interactions between the two systems. The concept has been implemented in a computer model combining a grid-based distributed hydrological catchment model and a hydrological urban stormwater model based on elementary units. The combined model provides a flexible solution for the integration of temporal and spatial scales and calculates separate water balances for urban and rural hydrology. Furthermore, it is GIS-based, which allows for easy and accurate geo-referencing of urban overflow structures, which are considered as points of interaction between the two hydrologic systems. Due to the two-layer approach, programs of measures can be incorporated in each system separately. The capabilities of the combined model have been tested on a hypothetical test case and a real-world application. It could be shown that the model is capable of accurately quantifying the effects of urbanization in a catchment. The effects of urbanization can be analyzed at the catchment outlet, but can also be traced back to their origins, thanks to the geo-referencing of urban overflow structures. This is a major advantage over conventional hydrological catchment models for the analysis of land use changes.

This contribution presents a mobile software component for the on-site support of structural inspections according to DIN 1076 "Ingenieurbauwerke im Zuge von Strassen und Wegen, Überwachung und Prüfung", which is in practical use at Hochbahn AG Hamburg. With the help of this tool, the activity at the structure can be integrated into the entire software-supported business process of structural maintenance, thus reducing the processing time of a structural inspection from preparation to the creation of the inspection report. The technology of mobile computing is explained with consideration of specific domain constraints, such as outdoor deployment, and methods for efficient data acquisition with pen and speech are presented and evaluated. Furthermore, the hardware limitations resulting from the smaller size of the devices, which follows from the requirement of mobility, are examined.

This is an implementation of the Fillmore-Springer-Cnops construction (FSCc) based on the Clifford algebra capabilities of the GiNaC computer algebra system. FSCc linearises the linear-fractional action of the Möbius group, which turns out to be very useful in several theoretical and applied fields, including engineering. The core of this realisation of FSCc is written for an arbitrary dimension, while a subclass for two-dimensional cycles adds some 2D-specific routines, including visualisation to PostScript files through the MetaPost or Asymptote software. This library is a backbone of many published results, which serve as illustrations of its usage. It can be ported (with varying levels of required changes) to other CAS with Clifford algebra capabilities.

The present study was designed to investigate the underlying factors determining the visual impressions of design patterns that have complex textures. Design patterns produced by "the dynamical system defined by iterations of discrete Laplacians on the plane lattice" were adopted as stimuli because they are not only complex but also defined mathematically. In the experiment, 21 graduate and undergraduate students sorted 102 design patterns into several groups by visual impression. Those 102 patterns were classified into 12 categories by cluster analysis. The results showed that the regularity of a pattern was the most effective factor in determining the visual impressions of a design pattern, and that there was some correspondence between visual impressions and the mathematical variables of the patterns. In particular, the visual impressions were influenced greatly by the neighborhood and less by the number of iteration steps.

The design of challenging space structures frequently relies on the theory of folded plates. The models are composed of plane facets whose bending and membrane stiffness are coupled along the folds. In conventional finite element analysis of faceted structures, the continuity of the displacement field is enforced exclusively at the nodes. Since the approximate solutions for transverse and for in-plane displacements are not members of the same function space, separation occurs between the common nodes of adjacent elements. It is shown that the kinematic assumptions of Bernoulli are responsible for this incompatibility along the edges of facet models. A general answer to this problem involves substantial modification of plate and membrane theory, but a straightforward formulation can be derived for simply folded plates, i.e. structures whose folds do not intersect. A broad class of faceted structures, including models of various curved shells, belongs to this category and can be calculated consistently. The additional requirements to assure continuity concern the mapping of displacement derivatives on the edges. An appropriate finite facet element provides node- and edge-oriented degrees of freedom, whose transformation to system degrees of freedom depends on the geometric configuration at each node. The concept is implemented using conforming triangular elements. To evaluate the new approach, the energy norm of representative structures is calculated for refined meshes. The focus is placed on the mathematical convergence towards reliable solutions obtained from finite volume models.

These remarks contribute, from the viewpoint of structural analysis, to the further preservation of the historical building stock in Mecklenburg. It is increasingly confirmed that, alongside the geometric model, equally valid models of loading and material must be available for a realistic assessment of the load-bearing behavior of a structure. It becomes evident that even the best analysis programs can only deliver results that are achievable with the input data. The research focus in the field of structural design at the Faculty of Architecture of the Hochschule Wismar has therefore concentrated in recent years on the realistic representation of the interaction between building survey and geometric modeling. Key aspects in this field are the interaction between damage and structural analysis, and the interaction between the surveyed geometry and the geometric model for structural analysis. The wealth of surveyed data is usually more of a hindrance than a blessing for structural analysis. It is shown here which and how many geometric data are reasonable for the geometric model used in structural analysis. Since in-house data acquisition takes a relatively long time, a "mental" building survey was carried out: the historical planning process is retraced in its individual form-finding steps and transferred into virtual reality. With this method, different construction states result, and possible construction phases can also be represented. The structural analysis of this virtual reality then reveals possible weaknesses of the load-bearing structures and/or the need for constructive modifications. A comparison of the results of the structural analysis with reality, based on the available data, provides the basis for determining the current need for action.
Since the condition of a building changes over time, methods are examined that make it possible to process and further manage a data set once it has been compiled.

Summer overheating in buildings is a common problem, especially in office buildings with large glazed facades, high internal loads and low thermal mass. Phase change materials (PCM) that undergo a phase transition in the temperature range of thermal comfort can add thermal mass without increasing the structural load of the building. The investigated PCM were micro-encapsulated and mixed into gypsum plaster. The experiments showed a reduction of indoor temperature of up to 4 K when using a 3 cm layer of PCM plaster with micro-encapsulated paraffin. The measurement results were used to validate a numerical model based on a temperature-dependent heat capacity function. Thermal building simulation showed that a 3 cm layer of PCM plaster can help fulfil German regulations concerning the summer heat protection of buildings for most office rooms.
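A temperature-dependent heat capacity of the kind used in such models can be sketched with the effective heat capacity (enthalpy) method, smearing the latent heat over a narrow band around the melting temperature. All parameter values below are hypothetical, not those of the investigated plaster:

```python
import math

def pcm_heat_capacity(T, c_base=1.0, latent=110.0, T_melt=24.0, width=1.5):
    """Effective heat capacity c(T) of a PCM plaster, hypothetical values.

    The latent heat [kJ/kg] of the phase transition is smeared over a band
    of the given width [K] around T_melt using a normalized Gaussian, so the
    integral of c(T) - c_base over temperature equals the latent heat.
    """
    gauss = math.exp(-((T - T_melt) / width) ** 2) / (width * math.sqrt(math.pi))
    return c_base + latent * gauss

# the capacity peaks at the melting temperature and decays to c_base elsewhere
print(pcm_heat_capacity(24.0), pcm_heat_capacity(10.0))
```

A building simulation then uses c(T) in the transient heat conduction equation instead of a constant capacity, which reproduces the buffering effect of the PCM layer.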

In recent decades many scientists have been intensively occupied with the change of materials during a process and its mathematical description. These extensive analyses have been supported by advances in computer science. A mathematical description of the phase transformation is a precondition for a realistic FE simulation of the state of the microstructure. Based on the state of the microstructure, the temperature and stress fields can be simulated even in complex constructions. In recent years a great number of mathematical models have been developed to describe the transformation between different phases. For the development of models of transformation kinetics, it is practical to subdivide the processes into isothermal and non-isothermal ones according to the thermal conditions. Some models for the description of the transformation under non-isothermal conditions are extensions of models for isothermal processes. A part of the parameters for the describing equations can be derived from the time-temperature-transformation diagrams in the literature. Furthermore, the two possible transformation mechanisms, diffusion-controlled and non-diffusion-controlled, are considered by different models. The material-specific characteristics of each individual phase can be simulated during the transformation in a realistic FE analysis. New materials can also be simulated after a modification of the parameters in the describing equations for the phase transformation. The effects on the temperature and stress fields are a substantial reason for the investigation of the phase transformation during welding and TIG-dressing processes.

The concrete is modeled as a material with damage and plasticity, where the viscoplastic and the viscoelastic behaviour depend on the rate of the total strains. Due to the damage behaviour, the compliance tensor develops different properties in tension and compression. Various yield surfaces, flow rules and damage rules have been tested with respect to their usability in a concrete model. A three-dimensional yield surface was developed by the author from a failure surface based on the Willam-Warnke five-parameter model. Only one general uniaxial stress-strain relation is used for the numerical control of the yield surface. From that curve, all necessary parameters for different concrete strengths and different strain rates can be derived by affine transformations. For the flow rule, a non-associated inelastic potential is used in the compression zone and a Rankine potential in the tension zone. Owing to the time-dependent formulation, the symmetry of the system equations is maintained despite the use of non-associated potentials for the derivation of the inelastic strains. For quasi-static computations, a simple viscoplastic law is used that is based on an approach by Perzyna. The principle of equal dissipation power in the uniaxial and the triaxial stress states is used. It is modified by a factor that depends on the actual stress ratio, which yields strains that are more realistic in comparison with the Kupfer experiments. The concrete model is implemented in a mixed hybrid finite element. Examples at the structural level are presented for the verification of the concrete model.

To make the coordination and execution of planning tasks in construction projects manageable, the planning process is increasingly described in formalized models, so-called process models. Product model research, for its part, is devoted to storing planning data in the form of object-oriented models in the computer. The main focus lies on maintaining consistency and on modeling dependencies within this planning material. The relation to the actors of the planning process is not established directly. A formally described planning process cannot yet be realized in practice in such a way that access to individual objects of the planning process is guaranteed. Existing planning support and workflow management systems still abstract and organize the planning material at the file level. This article describes a method for suitably connecting formalized process models in construction planning with the individual objects encoded in the model-oriented object sets. The membership of particular objects in plans and documents (for the purpose of data exchange) is thereby no longer determined by their physical assignment to files. A formal description means is presented which enables the corresponding formation of subsets from the totality of planning objects. In the current forms of data exchange, subsets are extracted from the object models of the planning and physically transported between the planners. The new description means, in contrast, allows the formation rule for object subsets to be exchanged between the planners instead of the sets themselves. Access to the concrete objects then takes place directly in a model-based manner.

Adopting the European laws concerning environmental protection will require sustained efforts by the authorities and communities of Romania; implementing modern solutions will become a fast and effective option for improving the functioning of systems in order to prevent disasters. As a part of the urban infrastructure, the drainage networks for pluvial and residual waters are included in the plan of promoting systems which protect environmental quality, with the purpose of integrated and adaptive management. The paper presents a distributed control system for the sewer network of the town of Iasi. The unsatisfactory technical state of the current sewer system is described, focusing on objectives related to the implementation of the control system. The proposed distributed control system for the Iasi drainage network is based on the implementation of hierarchic control theory for diagnosis, sewer planning and management. Two control levels are proposed: coordination and local execution. The configuration of the distributed control system, including data acquisition and conversion equipment, interface characteristics, the local data bus, the data communication network and the station configuration, is described in detail. The project aims to be a useful instrument for the local authorities in preventing and reducing the impact of future natural disasters on urban areas by means of modern technologies.

Advanced finite elements are proposed for the mechanical analysis of heterogeneous materials. The approximation quality of these finite elements can be controlled by a variable order of B-spline shape functions. An element-based formulation is developed such that the finite element problem can be solved iteratively without storing a global stiffness matrix. This memory saving allows for an essential increase in problem size. The heterogeneous material is modelled by projection onto a uniform, orthogonal grid of elements. Conventional, strictly grid-based finite element models show severe oscillating defects in the stress solutions at material interfaces. This problem is cured by the extension to multiphase finite elements. This concept makes it possible to define a heterogeneous material distribution within the finite element, by means of a variable number of integration points to each of which individual material properties can be assigned. Based on an interpolation of material properties at nodes and a further smooth interpolation within the finite elements, a continuous material function is established. With both a continuous B-spline shape function and a continuous material function, the stress solution is also continuous in the domain. The inaccuracy implied by the continuous material field is far less severe than the prior oscillating behaviour of the stresses. One- and two-dimensional numerical examples are presented.
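The variable-order B-spline shape functions mentioned above can be evaluated with the classical Cox-de Boor recursion. The following minimal Python sketch (the quadratic degree and knot vector are illustrative choices, not taken from the paper) verifies the partition of unity that underlies the continuous approximation:

```python
def bspline_basis(i, p, t, knots):
    """Cox-de Boor recursion: value of the i-th B-spline basis
    function of degree p at parameter t for the given knot vector."""
    if p == 0:
        return 1.0 if knots[i] <= t < knots[i + 1] else 0.0
    left = 0.0
    if knots[i + p] != knots[i]:
        left = (t - knots[i]) / (knots[i + p] - knots[i]) \
               * bspline_basis(i, p - 1, t, knots)
    right = 0.0
    if knots[i + p + 1] != knots[i + 1]:
        right = (knots[i + p + 1] - t) / (knots[i + p + 1] - knots[i + 1]) \
                * bspline_basis(i + 1, p - 1, t, knots)
    return left + right

# Quadratic (p = 2) basis on an open knot vector: 5 basis functions
knots = [0, 0, 0, 1, 2, 3, 3, 3]
t = 1.5
vals = [bspline_basis(i, 2, t, knots) for i in range(5)]
print(vals, sum(vals))  # the basis functions sum to 1 inside the domain
```

Raising the degree p (with a suitably extended knot vector) is exactly the "variable order" control of the approximation quality described in the abstract.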

The present paper is part of a comprehensive approach to grid-based modelling. This approach includes geometrical modelling by pixel or voxel models, advanced multiphase B-spline finite elements of variable order, and fast iterative solvers based on the multigrid method. So far, we have only presented these grid-based methods in connection with the linear elastic analysis of heterogeneous materials. Damage simulation demands further considerations. The direct stress solution of standard bilinear finite elements is severely defective, especially along material interfaces. Besides achieving objective constitutive modelling, various nonlocal formulations are applied to improve the stress solution. Such corrective data processing can refer either to input data in terms of Young's modulus or to the attained finite element stress solution, or to a combination of both. A damage-controlled sequentially linear analysis is applied in connection with an isotropic damage law. Essentially owing to the high resolution of the heterogeneous solid, local isotropic damage on the material subscale makes it possible to simulate complex damage topologies such as cracks; the anisotropic degradation of a material sample can thus be simulated. Based on an effectively secant global stiffness, the analysis is numerically stable. The iteration step size is controlled for an adequate simulation of the damage path. This requires many steps, but in the iterative solution process each new step starts from the solution of the prior step, which makes the method quite effective. The present paper provides an introduction to the proposed concept for a stable simulation of damage in heterogeneous solids.
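The saw-tooth character of a sequentially linear analysis can be shown on springs in parallel: each linear step scales the load until the critical spring reaches its strength, that spring's secant stiffness is then dropped, and the next linear step follows. A minimal Python sketch with hypothetical stiffnesses and strengths (the paper's actual model is a finite element continuum):

```python
import numpy as np

k = np.array([2.0, 1.0, 1.5])     # spring stiffnesses (hypothetical)
f_t = np.array([1.0, 0.9, 1.2])   # spring strengths (hypothetical)
damage = np.zeros(3)

peaks = []                        # saw-tooth peak loads
while (damage < 1.0).any():
    intact = damage < 1.0
    u_fail = np.where(intact, f_t / k, np.inf)  # failure displacement per spring
    idx = int(np.argmin(u_fail))                # critical spring of this step
    peaks.append(float(np.sum(k[intact]) * u_fail[idx]))
    damage[idx] = 1.0             # drop its secant stiffness to zero

print(peaks)                      # decreasing peaks trace the softening branch
```

Each entry of `peaks` is the admissible load of one linear step; the decreasing sequence is the stable, step-size-controlled simulation of the softening path described above.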

A fast solver method called the multigrid preconditioned conjugate gradient method is proposed for the mechanical analysis of heterogeneous materials on the mesoscale. Even small samples of a heterogeneous material such as concrete show a complex geometry of different phases. These materials can be modelled by projection onto a uniform, orthogonal grid of elements. One major problem is that the possible resolution of the concrete specimen is generally restricted by (a) computation times and, even more critically, (b) memory demand. Iterative solvers can be based on a local element-based formulation, while orthogonal grids consist of geometrically identical elements. The element-based formulation is short and transparent, and therefore efficient to implement. A variation of the material properties in elements or integration points is possible. The multigrid method is a fast iterative solver in which, ideally, the computational effort increases only linearly with problem size. This is an optimal property which is almost attained in the implementation presented here; in fact, no other method is known that scales better than linearly. The multigrid method therefore gains in importance the larger the problem becomes. For heterogeneous models with very large ratios of Young's moduli, however, the multigrid method slows down considerably by a constant factor. Such large ratios occur in certain heterogeneous solids, as well as in the damage analysis of solids. As a solution to this problem, the multigrid preconditioned conjugate gradient method is proposed. A benchmark highlights it as the method of choice for very large ratios of Young's moduli. A proposed modified multigrid cycle shows good results, both as a stand-alone solver and as a preconditioner.
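A full multigrid V-cycle is beyond the scope of a short sketch, but the preconditioned conjugate gradient iteration itself is compact. The Python sketch below uses a simple Jacobi preconditioner as a stand-in for the multigrid V-cycle and, as a toy problem, a 1D two-phase spring chain with a large jump in Young's modulus (all data are illustrative assumptions, not the paper's benchmark):

```python
import numpy as np

def pcg(A, b, M_inv, tol=1e-9, max_iter=1000):
    """Preconditioned conjugate gradients; M_inv applies the
    preconditioner (here a stand-in for one multigrid cycle)."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv(r)
    p = z.copy()
    rz = r @ z
    for it in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            return x, it + 1
        z = M_inv(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x, max_iter

# 1D "two-phase bar": large jump in Young's modulus along the chain
n = 32
E = np.where(np.arange(n + 1) < n // 2, 1.0, 1e6)   # element stiffnesses
A = np.zeros((n, n))
for e in range(n + 1):                               # assemble the springs
    for i in (e - 1, e):
        for j in (e - 1, e):
            if 0 <= i < n and 0 <= j < n:
                A[i, j] += E[e] * (1.0 if i == j else -1.0)
b = np.ones(n)
jacobi = lambda r: r / np.diag(A)                    # simple preconditioner
x, iters = pcg(A, b, jacobi)
print(iters, np.linalg.norm(A @ x - b))
```

Swapping `jacobi` for a routine that applies one multigrid cycle to the residual gives the structure of the method proposed in the paper.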

RESEARCH OF DEFORMATION OF MULTILAYERED PLATES ON UNDEFORMABLE BASIS BY UNFLEXURAL SPECIFIED MODEL
(2006)

The stress-strain state (SSS) of multilayered plates on an undeformable foundation is investigated. The computational scheme for a transversely loaded plate is formed by symmetrically mirroring the plate about its surface of contact with the foundation. The plate of double thickness is then loaded bilaterally and symmetrically with respect to its median surface. This makes it possible to model only the unflexural deformation, which reduces the number of unknowns and the overall order of differentiation of the resolving system of equations. The developed refined continual model takes into account transverse shear and transverse compression deformations in a high iterative approximation. Both rigid contact between the foundation and the plate and frictionless sliding on the contact surface are considered. Calculations confirm the efficiency of this approach, yielding solutions that are qualitatively and quantitatively close to three-dimensional solutions.

We propose a new approach to the numerical solution of quasi-static elastic-plastic problems based on the Moreau-Yosida theorem. After time discretization, the problem is expressed as an energy minimization problem for unknown displacement and plastic strain fields. The dependency of the minimization functional on the displacement is smooth, whereas the dependency on the plastic strain is non-smooth. Moreover, there exists an explicit formula for calculating the plastic strain from a given displacement field. This allows us to reformulate the original problem as a minimization problem in the displacement only. Using the Moreau-Yosida theorem from convex analysis, the minimization functional in the displacements turns out to be Fréchet-differentiable, although the hidden dependency on the plastic strain is non-differentiable. The second derivative exists everywhere apart from the elastic-plastic interface dividing the elastic and plastic zones of the continuum. This motivates the implementation of a Newton-like method, which converges super-linearly, as can be observed in our numerical experiments.
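A one-dimensional caricature of the resulting smooth minimization problem (the energy and "hardening law" below are invented for illustration, not the paper's functional): the reduced energy is C^1, its second derivative jumps at the elastic-plastic interface, and a Newton-like iteration still converges in a handful of steps.

```python
# Minimize J(u) = 0.5*u**2 + 0.5*max(u - 1, 0)**2 - f*u  (hypothetical).
# J is C^1; J'' jumps at the "elastic-plastic interface" u = 1,
# so a Newton-like (semismooth) iteration applies.
def J_prime(u, f):
    return u + max(u - 1.0, 0.0) - f

def J_second(u):
    return 2.0 if u > 1.0 else 1.0   # generalized second derivative

def newton(f, u=0.0, tol=1e-12, max_iter=50):
    for it in range(max_iter):
        g = J_prime(u, f)
        if abs(g) < tol:
            return u, it
        u -= g / J_second(u)
    return u, max_iter

u_star, iters = newton(3.0)
print(u_star, iters)   # minimizer u = 2, reached in very few steps
```

The kink in `J_second` plays the role of the elastic-plastic interface; away from it the iteration is a plain Newton method, which is why super-linear convergence can be expected.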

Solid behavior as well as liquid behavior characterizes the flow of granular material in silos. The presented model is based on an appropriate interaction of a displacement field and a velocity field. The constitutive equations and the applied algorithm are developed from the exact solution for a standard case. The standard case evolves from a very tall vertical plane strain silo containing material that flows at a constant speed. No horizontal displacements and velocities take place. No changes regarding the field values arise in the vertical direction and in time. Tension is not allowed at any point. Coulomb friction represents the effects of the vertical walls. The interaction between the flowing material and the walls is covered by a forced boundary condition resulting in an additional matrix for the solid component as well as for the liquid component. The resulting integral equations are designed to be solved directly. Three coefficients describe the properties of the granular material. They govern elastic solid behavior in combination with viscous liquid behavior.

Building on the extensive experience available for the welding of a wide range of metals, a novel process for CO2 laser beam welding of fused silica is being investigated numerically at the Chair of Steel Construction (Professur Stahlbau) of the Bauhaus-Universität Weimar. The commercial FE software SYSWELD® is used for this purpose. The required experiments are carried out in cooperation with the Institut für Fügetechnik und Werkstoffprüfung GmbH in Jena. The numerical analysis is used to determine suitable process parameters and to reproduce their effects on the transient thermal and mechanical processes that take place during welding. To verify the statements obtained from the simulation, the computational model has to be calibrated with data from test welds. In doing so, the material models used and the material parameters underlying the simulation have to be validated. Several rheological models are available that represent the viscous material behaviour of the glass; they combine the three basic mechanical elements, the Hookean spring, the Newtonian dashpot and the St. Venant element. The ability to predict the thermal and mechanical processes within the glass during welding and after complete cooling makes it possible to influence the welding process specifically, by optimizing the process parameters, so as to improve the economic efficiency of the welding process and to obtain a reliable welding result. Experiments that could otherwise only be performed at great experimental expense can also be simulated in order to predict whether it is worthwhile to carry them out in practice. This reduces the experimental effort and thus shortens the development period for the intended process.

The mathematical and technical foundations of optimization have been developed to a large extent. In the design of buildings, however, optimization is rarely applied because the method has been insufficiently adapted to the needs of building design. The use of design optimization requires the consideration of all relevant objectives in an interactive and multidisciplinary process. Disciplines such as structural, lighting and thermal engineering, architecture, and economics impose various objectives on the design. A good solution calls for a compromise between these often contradictory objectives. This presentation outlines a method for applying Multidisciplinary Design Optimization (MDO) as a tool for the design of buildings. An optimization model is established considering the fact that in building design the non-numerical aspects are of greater importance than in other engineering disciplines. A component-based decomposition enables the designer to manage the non-numerical aspects in an interactive design optimization process. A façade example demonstrates how the different disciplines interact and how the components integrate the disciplines into one optimization model. In this grid-based façade example, the materials switch between a discrete number of materials and construction types. For lighting and thermal engineering, architecture, and economics, analysis functions calculate the performance; utility functions serve as an important means of evaluation, since not every increase or decrease of a physical value improves the design. For experimental purposes, a genetic algorithm applied to the exemplary model demonstrates the use of optimization in this design case. A component-based representation serves, first, to manage non-numerical characteristics such as aesthetics; furthermore, it complies with the usual fabrication methods in building design and with object-oriented data handling in CAD. Components therefore provide an important basis for an interactive MDO process in building design.

LIFETIME-ORIENTED OPTIMIZATION OF BRIDGE TIE RODS EXPOSED TO VORTEX-INDUCED ACROSS-WIND VIBRATIONS
(2006)

In recent years, damage to the welded connection plates of vertical tie rods of several arched steel bridges has been reported. This damage is due to fatigue caused by wind-induced vibrations. In the present study, such phenomena are examined, and the corresponding lifetime of a reference bridge in Münster-Hiltrup, Germany, is estimated based on the actual shape of the connection plate. The results are also compared with the expected lifetime of a connection plate whose geometry has been optimized separately. The structural optimization, focussing on the shape of the cut at the hanger ends, has been carried out using evolution strategies. The oscillation amplitudes have been computed by means of the Newmark-Wilson time-step method, using an appropriate load model that has been validated by on-site experiments on the selected reference bridge. The corresponding stress amplitudes are obtained by multiplying the oscillation amplitudes by a stress concentration factor. This factor has been computed on the basis of a finite element model of the system "hanger-welding-connection plate", using solid elements according to the notch stress approach. The damage estimation takes into account the stochastics of the exciting wind process as well as the stochastics of the material parameters (fatigue strength) given in terms of Woehler curves. The shape optimization results in a substantial increase in the estimated hanger lifetime. The comparison of the lifetimes of the bulk plate and of the welding reveals that, in the optimized structure, the welding, the most sensitive part of the original structure, shows much more resistance to potential damage than the bulk material.

We establish the basis of a discrete function theory starting with a Fischer decomposition for difference Dirac operators. Discrete versions of homogeneous polynomials, Euler and Gamma operators are obtained. As a consequence we obtain a Fischer decomposition for the discrete Laplacian. For the sake of simplicity, we consider in the first part only Dirac operators that contain either only forward or only backward finite differences. Of course, these Dirac operators do not factorize the classical discrete Laplacian. Therefore, we consider a different definition of a difference Dirac operator in the quaternionic case which does factorize the discrete Laplacian.

At the 16th IKM, Bock, Falcão and Gürlebeck presented examples of the application of specially developed Maple software in hypercomplex analysis. Other papers by those authors continued this work and showed the efficiency of such tools for concrete numerical calculations as well as for numerical experiments, supporting the detection of new relationships and even theorems in highly technical theoretical work. The mentioned software has been developed mainly for use on mapping problems in the Euclidean spaces of dimension 3 and 4 by means of Bergman kernel methods (BKM), which are related to monogenic functions as solutions of generalized Cauchy-Riemann equations with respect to the Euclidean metric (Riesz system). The developed procedures concerning generalized powers of totally regular variables and the corresponding homogeneous polynomials basically rely on results and conventions introduced in the paper "Power series representation for monogenic functions in R^(m+1) based on a permutational product", Complex Variables, 15, No. 3, 181-191 (1990) by H. Malonek. Since 1992, H. Leutwiler, S. L. Eriksson and others have developed in a number of papers a modified Clifford analysis and, particularly, a modified quaternionic analysis. The modification mainly consists in considering generalized Cauchy-Riemann equations with respect to a hyperbolic metric in a half space. The aim of this contribution is to show how, through a change of the basic combinatorial relations used in the modified quaternionic analysis, the aforementioned Maple software (recently published on CD-ROM as an integrated part of the textbook "Funktionentheorie in der Ebene und im Raum" by K. Gürlebeck, K. Habetha, and W. Sprössig, in the series "Grundstudium Mathematik" of Birkhäuser Verlag, 2006) can be used directly for numerical calculations in the modified theory.

The use of process models in the analysis, optimization and simulation of processes has proven to be extremely beneficial in the instances where they could be applied appropriately. However, the Architecture/Engineering/Construction (AEC) industries present unique challenges that complicate the modeling of their processes. A simple engineering process model, based on the specification of Tasks, Datasets, Persons and Tools and certain relations between them, has been developed, and its advantages over conventional techniques have been illustrated. Graph theory is used as the mathematical foundation, mapping Tasks, Datasets, Persons and Tools to vertices and the relations between them to edges, forming a directed graph. The acceptance of process modeling in the AEC industries depends not only on the results it can provide, but also on the ease with which these results can be attained. Specifying a complex AEC process model is a dynamic exercise characterized by many modifications over the process model's lifespan. This article looks at reducing specification complexity, reducing the probability of erroneous input and allowing consistent model modification. Furthermore, the problem of resource leveling is discussed. Engineering projects are often executed with limited resources, and determining the impact of such restrictions on the sequence of Tasks is important. Resource leveling concerns itself with these restrictions caused by limited resources. This article looks at using task-shifting strategies to find a near-optimal sequence of Tasks that guarantees consistent Dataset evolution while resolving resource restrictions.
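The task-shifting idea behind resource leveling can be sketched greedily in a few lines (the task data and capacity are hypothetical, and this toy omits the Dataset-consistency constraints the article also enforces): each task is delayed until the running resource profile stays under the capacity.

```python
# Greedy task shifting (hypothetical data): each task has a name,
# an earliest start, a duration and a resource demand; the total
# demand per time slot may not exceed the capacity.
tasks = [("A", 0, 3, 2), ("B", 0, 2, 2), ("C", 1, 2, 1)]  # (name, es, dur, demand)
capacity = 3
profile = {}    # time slot -> resources in use
schedule = {}   # task name -> chosen start

for name, es, dur, demand in tasks:
    start = es
    # shift the task one slot later until it fits under the capacity
    while any(profile.get(t, 0) + demand > capacity
              for t in range(start, start + dur)):
        start += 1
    for t in range(start, start + dur):
        profile[t] = profile.get(t, 0) + demand
    schedule[name] = start

print(schedule)   # task B is shifted behind A; C fits alongside A
```

Here B (demand 2) cannot run concurrently with A under a capacity of 3 and is shifted to slot 3, while C (demand 1) fits next to A; the resulting profile never exceeds the capacity.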

Traffic simulation is a valuable tool for the design and evaluation of road networks. Over the years, the level of detail to which urban and freeway traffic can be simulated has increased steadily, shifting from a merely qualitative macroscopic perspective to a very detailed microscopic view, where the behavior of individual vehicles is emulated realistically. With the improvement of behavioral models, however, the computational complexity has also steadily increased, as more and more aspects of real-life traffic have to be considered by the simulation environment. Despite the constant increase in the computing power of modern personal computers, microscopic simulation remains computationally expensive, limiting the maximum network size that can be simulated on a single-processor computer in reasonable time. Parallelization can distribute the computing load from a single computer system to a cluster of several computing nodes. To this end, the existing simulation framework had to be adapted to allow for a distributed approach. As the simulation is ultimately targeted to be executed in real time, incorporating real traffic data, only a spatial partition of the simulation was considered, meaning the road network has to be partitioned into subnets of comparable complexity to ensure homogeneous load balancing. The partitioning process must also ensure that the division between subnets occurs only in regions where there is no strong interaction between the separated road segments (i.e. not in the direct vicinity of junctions). In this paper, we describe a new microscopic reasoning voting strategy and discuss to what extent the increasing computational costs of these more complex behaviors lend themselves to a parallelized approach. We show the parallel architecture employed, the communication between computing units using MPIJava, and the benefits and pitfalls of adapting a single-computer application for use on a multi-node computing cluster.

HYPERMONOGENIC POLYNOMIALS
(2006)

It is well known that the power function is not monogenic. There are basically two ways to include the power function into the set of solutions: hypermonogenic functions or holomorphic Cliffordian functions. L. Pernas determined the dimension of the space of homogeneous holomorphic Cliffordian polynomials of degree m, but his approach did not include a basis. It is known that the hypermonogenic functions are included in the space of holomorphic Cliffordian functions. As our main result we show that we can construct a basis for the right module of homogeneous holomorphic Cliffordian polynomials of degree m using hypermonogenic polynomials and their derivatives. To that end, we first recall the function spaces of monogenic, hypermonogenic and holomorphic Cliffordian functions and give the results needed in the proof of our main theorem. We list some basic polynomials and their properties for the various function spaces. In particular, we consider recursive formulas, rules of differentiation and linear independence properties of the polynomials.

We consider efficient numerical methods for the solution of partial differential equations with stochastic coefficients or right hand side. The discretization is performed by the stochastic finite element method (SFEM). Separation of spatial and stochastic variables in the random input data is achieved via a Karhunen-Loève expansion or Wiener's polynomial chaos expansion. We discuss solution strategies for the Galerkin system that take advantage of the special structure of the system matrix. For stochastic coefficients linear in a set of independent random variables we employ Krylov subspace recycling techniques after having decoupled the large SFEM stiffness matrix.
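After discretization, the Karhunen-Loève separation of spatial and stochastic variables reduces to an eigen-decomposition of the covariance matrix. A minimal numpy sketch for an exponential kernel on a 1D grid (grid size and correlation length are illustrative assumptions, not taken from the paper):

```python
import numpy as np

# Discrete Karhunen-Loève sketch: eigen-decompose the covariance
# matrix of an exponential kernel on a 1D grid; a few modes already
# capture most of the variance.
n, corr_len = 100, 0.5
x = np.linspace(0.0, 1.0, n)
C = np.exp(-np.abs(x[:, None] - x[None, :]) / corr_len)

eigval, eigvec = np.linalg.eigh(C)     # symmetric eigen-decomposition
eigval = eigval[::-1]                  # sort descending
captured = np.cumsum(eigval) / np.sum(eigval)
m = int(np.searchsorted(captured, 0.95) + 1)
print(m, "modes capture 95% of the variance")
```

The rapid eigenvalue decay is exactly what makes the truncated KL expansion (and hence the SFEM Galerkin system) tractable: only the leading m modes need to enter the stochastic discretization.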

The liquidity planning of construction companies is regarded as an essential control, monitoring and information instrument for internal and external addressees, and serves a decision-support function. Since the individual construction projects account for a substantial share of the company's total costs, they also exert a considerable influence on the liquidity and solvency of the construction company. Accordingly, it is common practice in the construction industry to draw up the liquidity plan on a project basis first and then to aggregate it at the company level. The aim of this paper is to present the interrelations between the working cost estimate (Arbeitskalkulation), the profit and loss account and the financial account in the form of a deterministic planning model at the project level. In doing so, the understanding and significance of the links between the technically oriented construction workflow and its representation in accounting and finance are highlighted. The activities of the construction process, i.e. the completion of the bill-of-quantities items and their scheduling in a construction time plan, are to be transformed period by period into management accounting quantities (output, costs) and subsequently broken down in the financial account (receipts, payments) by creditors and debtors.

In this paper an adaptive heterogeneous multiscale model, which couples two substructures with different length scales into one numerical model, is introduced for the simulation of damage in concrete. In the presented approach, the initiation, propagation and coalescence of microcracks is simulated using a mesoscale model which explicitly represents the heterogeneous material structure of concrete. The mesoscale model is restricted to the damaged parts of the structure, whereas the undamaged regions are simulated on the macroscale. As a result, an adaptive enlargement of the mesoscale model during the simulation is necessary. In the first part of the paper, the generation of the heterogeneous mesoscopic structure of concrete, the finite element discretization of the mesoscale model, the applied isotropic damage model and the cohesive zone model are briefly introduced. Furthermore, the mesoscale simulation of a uniaxial tension test of a concrete prism is presented, and the obtained numerical results are compared to experimental results. The second part focuses on the adaptive heterogeneous multiscale approach. Indicators for the model adaptation and for the coupling between the different numerical models are introduced. The transfer from the macroscale to the mesoscale and the adaptive enlargement of the mesoscale substructure are presented in detail. A nonlinear simulation of a realistic structure using an adaptive heterogeneous multiscale model is presented at the end of the paper to show the applicability of the proposed approach to large-scale structures.

Due to the amount of flow simulation and measurement data, automatic detection, classification and visualization of features is necessary for an inspection. Therefore, many automated feature detection methods have been developed in recent years. However, in most cases only one feature class is visualized afterwards, and many algorithms have problems in the presence of noise or superposition effects. In contrast, image processing and computer vision offer robust methods for feature extraction and for the computation of derivatives of scalar fields. Furthermore, interpolation and other filters can be analyzed in detail. An application of these methods to vector fields would provide a solid theoretical basis for feature extraction. The authors suggest Clifford algebra as a mathematical framework for this task. Clifford algebra provides a unified notation for scalars and vectors as well as a multiplication of all basis elements. The Clifford product of two vectors provides the complete geometric information about the relative positions of these vectors. Integration of this product results in Clifford correlation and convolution, which can be used for template matching of vector fields. For the frequency analysis of vector fields and of the behavior of vector-valued filters, a Clifford Fourier transform has been derived for 2D and 3D. Convolution and other theorems have been proved, and fast algorithms for the computation of the Clifford Fourier transform exist. Therefore, the computation of the Clifford convolution can be accelerated by performing it in the Clifford Fourier domain. Clifford convolution and the Clifford Fourier transform can thus be used for a thorough analysis and subsequent visualization of flow fields.
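In 2D the Clifford product of two vectors corresponds to the complex product conj(u)*v: its real part is the dot product and its imaginary part is the wedge product. This makes a minimal sketch of Clifford correlation as vector-field template matching possible with plain complex arrays (field size, template and rotation below are invented toy data):

```python
import numpy as np

# Represent 2D vectors as complex numbers; the correlation score
# conj(template)*field then carries both match strength (magnitude)
# and relative rotation (angle) at every shift.
rng = np.random.default_rng(1)
field = (rng.standard_normal((8, 8)) + 1j * rng.standard_normal((8, 8))) * 0.1
template = np.array([[1 + 0j, 0 + 1j],
                     [0 - 1j, -1 + 0j]])
field[3:5, 4:6] = template * np.exp(1j * 0.7)   # embed a rotated copy

best, best_pos, angle = 0.0, None, 0.0
for i in range(7):
    for j in range(7):
        score = np.sum(np.conj(template) * field[i:i + 2, j:j + 2])
        if abs(score) > best:
            best, best_pos, angle = abs(score), (i, j), np.angle(score)

print(best_pos, round(angle, 2))   # location of the match and its rotation
```

The magnitude peak recovers the embedded pattern and the argument recovers its rotation angle, which is exactly the "complete geometric information" the Clifford product contributes over a plain scalar correlation.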

In civil engineering it is very difficult and often expensive to excite constructions such as bridges and buildings with an impulse hammer or shaker. This problem can be avoided with the output-only method, a special feature of stochastic system identification. The permanently present ambient noise (e.g. wind, traffic, waves) is sufficient to excite the structures under their operational conditions. The output-only method estimates the observable part of a state-space model which contains the dynamic characteristics of the measured mechanical system. Because the ambient excitation is assumed to be white, there is no need to measure the input. Another advantage of the output-only method is the possibility of obtaining highly detailed models by a special technique called the polyreference setup. To emulate the availability of a much larger set of sensors, data from varying sensor locations are collected: several successive data sets are recorded with sensors at different locations (moving sensors) and at fixed locations (reference sensors). The covariance functions of the reference sensors form the basis for normalizing the moving sensors. The result of the subsequent subspace-based system identification is a highly detailed black-box model that contains the weighting function, including the well-known dynamic parameters, i.e. the eigenfrequencies and mode shapes of the mechanical system. The emphasis of this lecture is the presentation of an extensive damage detection experiment. A 53-year-old prestressed concrete tied-arch bridge in Hünxe (Germany) was deconstructed in 2005. Beforehand, numerous vibration measurements were carried out. The first system modification experiment was an additional support near the bridge bearing of one main girder. In a further experiment, one hanger of one tied arch was cut through as an induced damage. Some first outcomes of the described experiments will be presented.
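Full subspace-based identification is beyond a short sketch, but the output-only principle itself, that is, recovering an eigenfrequency from the measured response alone, without ever measuring the excitation, can be shown in a few lines. The 2.5 Hz signal and the noise level below are invented stand-ins for an ambient-excited structural response:

```python
import numpy as np

# Output-only idea in miniature: a noisy oscillation stands in for an
# ambient-excited response; the eigenfrequency is recovered from the
# output spectrum alone (the excitation is never measured).
fs, T, f0 = 100.0, 60.0, 2.5          # sampling rate, duration, "true" frequency
t = np.arange(0.0, T, 1.0 / fs)
rng = np.random.default_rng(0)
response = np.sin(2 * np.pi * f0 * t) + 0.5 * rng.standard_normal(t.size)

spectrum = np.abs(np.fft.rfft(response)) ** 2
freqs = np.fft.rfftfreq(t.size, 1.0 / fs)
f_est = freqs[np.argmax(spectrum)]
print(f_est)   # close to 2.5 Hz
```

The subspace methods discussed in the lecture replace this naive peak-picking with an SVD of output covariance matrices, which additionally delivers damping and mode shapes, but the input-free character is the same.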

In public transport we consider the situation in which two bus or tram lines share common stops. The aim of our investigation is to find a timetable for both lines that offers the passengers as much convenience as possible. The demand structure, i.e. the number of people using the two lines, imposes certain constraints on the headways of the two lines. The remaining degrees of freedom are to be exploited with this objective in mind. The talk addresses the following questions: by which criteria can the "convenience" or "quality of synchronization" be measured? How can the individual synchronization measures be computed? How can the remaining degrees of freedom be used to achieve the best possible synchronization? The results are then applied to a number of examples, and solutions are proposed using the methods provided.
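One possible synchronization measure can be made concrete in a few lines (hypothetical data: both lines run with a common 10-minute headway, and the remaining freedom is the offset between them, chosen to minimize the mean waiting time of a randomly arriving passenger):

```python
# Two lines with a common headway T serve the same stop; their
# departures are offset by d, so the gaps between departures are
# d and T - d. For uniformly arriving passengers the mean wait is
# sum(g^2) / (2T) over the gaps.
T = 10.0   # common headway in minutes (hypothetical)

def mean_wait(offset):
    gaps = [offset, T - offset]
    return sum(g * g for g in gaps) / (2 * T)

# search the offset in half-minute steps
best = min((mean_wait(d), d) for d in [i * 0.5 for i in range(21)])
print(best)   # minimal mean wait of 2.5 min at offset T/2
```

The optimum at d = T/2 (evenly interleaved departures) halves nothing by accident: the measure penalizes long gaps quadratically, so equal gaps are best, which is one natural formalization of "quality of synchronization".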

The concept of reliability plays a central role in the assessment of transport networks. From the perspective of public transport users, one of the most important criteria for judging the quality of a line network is whether the destination can be reached within a given time with high certainty. In this talk, this notion of reliability is formulated mathematically. First, the usual notion of network reliability in terms of pairwise connectedness probabilities is discussed. This notion is then extended by considering reliability subject to a maximum admissible travel time. In previous work, the ring-radius structure has proven to be a well-established model for the theoretical description of transport networks. These considerations are now extended by incorporating real network structures. The tram network of Krakow serves as a concrete example; in particular, the effects of a planned extension of the network on its reliability are investigated. This paper is associated with the CIVITAS-CARAVEL project "Clean and better transport in cities". The project has received research funding from the Community's Sixth Framework Programme. The paper reflects only the author's views, and the Community is not liable for any use that may be made of the information contained therein.
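The time-constrained reliability notion can be illustrated by a Monte Carlo sketch (the talk treats the question analytically; the four-edge network, failure probability and time budget below are invented): a trip succeeds if, after random edge failures, some surviving path connects origin and destination within the travel-time budget.

```python
import heapq
import random

# Hypothetical network: edge -> travel time; each edge works with
# probability p; a trip A -> D succeeds if a surviving path exists
# whose length is within the travel-time budget.
edges = {("A", "B"): 4, ("B", "D"): 5, ("A", "C"): 6, ("C", "D"): 6}
p, budget, trials = 0.9, 10, 20000

def shortest(alive):
    """Tiny lazy Dijkstra over the surviving (undirected) edges."""
    dist, heap = {"A": 0}, [(0, "A")]
    while heap:
        d, u = heapq.heappop(heap)
        if u == "D":
            return d
        for (a, b), w in alive.items():
            for x, y in ((a, b), (b, a)):
                if x == u and d + w < dist.get(y, float("inf")):
                    dist[y] = d + w
                    heapq.heappush(heap, (d + w, y))
    return float("inf")

random.seed(0)
hits = sum(
    shortest({e: w for e, w in edges.items() if random.random() < p}) <= budget
    for _ in range(trials)
)
print(hits / trials)
```

With these numbers only the path A-B-D (length 9) fits the 10-unit budget, so the estimate converges to 0.9 * 0.9 = 0.81; the plain connectedness reliability would additionally count the slower path A-C-D and therefore be higher, which is exactly the distinction the extended notion makes.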

Reasonably accurate cost estimation of the structural system is quite desirable at the early stages of the design process of a construction project. However, the numerous interactions among the many cost-variables make the prediction difficult. Artificial neural networks (ANN) and case-based reasoning (CBR) are reported to overcome this difficulty. This paper presents a comparison of CBR and ANN augmented by genetic algorithms (GA) conducted by using spreadsheet simulations. GA was used to determine the optimum weights for the ANN and CBR models. The cost data of twenty-nine actual cases of residential building projects were used as an example application. Two different sets of cases were randomly selected from the data set for training and testing purposes. Prediction rates of 84% in the GA/CBR study and 89% in the GA/ANN study were obtained. The advantages and disadvantages of the two approaches are discussed in the light of the experiments and the findings. It appears that GA/ANN is a more suitable model for this example of cost estimation where the prediction of numerical values is required and only a limited number of cases exist. The integration of GA into CBR and ANN in a spreadsheet format is likely to improve the prediction rates.

Designing a structure follows a pattern of creating a structural design concept, executing a finite element analysis and developing a design model. A project was undertaken to create computer support for executing these tasks within a collaborative environment. This study focuses on developing a software architecture that integrates the various structural design aspects into a seamless functional collaboratory that satisfies engineering practice requirements. The collaboratory is to support both homogeneous collaboration, i.e. between users operating on the same model, and heterogeneous collaboration, i.e. between users operating on different model types. Collaboration can take place synchronously or asynchronously, and the information exchange is done either at the granularity of objects or at the granularity of models. The objective is to determine from practicing engineers which configurations they regard as best and what features are essential for working in a collaborative environment. Based on the suggestions of these engineers, a specification of a collaboration configuration that satisfies engineering practice requirements will be developed.

In classical complex function theory the geometric mapping property of conformality is closely linked with complex differentiability. In contrast to the planar case, in higher dimensions the conformal mappings are exactly the Möbius transformations. Unfortunately, the theory of generalized holomorphic functions (for historical reasons called monogenic functions), developed on the basis of Clifford algebras, does not cover the Möbius transformations in higher dimensions, since Möbius transformations are not monogenic. On the other hand, monogenic functions are hypercomplex differentiable functions, and the question arises whether, from this point of view, they can still play a special role for other types of 3D-mappings, for instance quasi-conformal ones. On the occasion of the 16th IKM, 3D-mapping methods based on the application of Bergman's reproducing kernel approach (BKM) were discussed. Almost all authors who had previously worked with BKM in the Clifford setting were concerned only with the general algebraic and functional-analytic background, which allows the explicit determination of the kernel in special situations. The main goal of the aforementioned contribution was the numerical experiment, using Maple software specially developed for that purpose. Since BKM is only one of a great variety of concrete numerical methods developed for mapping problems, our goal is to present an approach to 3D-mappings completely different from BKM. In fact, it is an extension of ideas of L. V. Kantorovich to the 3-dimensional case, using reduced quaternions and suitable series of powers of a small parameter. Whereas in the Clifford case of BKM the recovery of the mapping function itself and its relation to the monogenic kernel function is still an open problem, this approach avoids such difficulties and leads to an approximation by monogenic polynomials depending on that small parameter.

In this paper we consider three different methods for generating monogenic functions. The first is related to Fueter's well-known approach to generating monogenic quaternion-valued functions by means of holomorphic functions, the second is based on the solution of hypercomplex differential equations, and the third is a direct series approach based on the use of special homogeneous polynomials. We illustrate the theory by generating three different exponential functions and discuss some of their properties. Partially supported by the R&D unit Matemática e Aplicações (UIMA) of the University of Aveiro, through the Portuguese Foundation for Science and Technology (FCT), co-financed by the European Community fund FEDER.
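As a small illustration of exponential functions in a hypercomplex setting, the following sketch evaluates the ordinary quaternion exponential via its power series. This is the standard exponential of quaternion analysis, not one of the three monogenic exponentials constructed in the paper, and all names are illustrative.

```python
import math

def q_mul(a, b):
    # Hamilton product of quaternions given as (w, x, y, z) tuples.
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

def q_exp_series(q, terms=30):
    # exp(q) as the truncated power series sum_{n} q^n / n!.
    result = (1.0, 0.0, 0.0, 0.0)   # n = 0 term: the identity
    power = (1.0, 0.0, 0.0, 0.0)
    fact = 1.0
    for n in range(1, terms):
        power = q_mul(power, q)
        fact *= n
        result = tuple(r + p / fact for r, p in zip(result, power))
    return result
```

For a pure quaternion q = (0, v) the series reproduces Euler's formula, exp(q) = cos|v| + (v/|v|) sin|v|, e.g. exp((0, π, 0, 0)) ≈ (-1, 0, 0, 0), which makes the truncation easy to sanity-check numerically.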