The paper contains a description of dynamic effects in the silo wall during the outflow of a stored material. The work allows determining the danger of structural damage due to resonant vibrations and is of practical importance in determining the influence of cyclic pressures and vibro-creep during prolonged use of a silo. The paper was devised as a result of tests on silo walls at semi-technical scale. The model is generally applicable and also allows for identification of parameters in real-size silos.
This research focuses on an approach to describing principles in architectural layout planning within the domain of revitalization. With the aid of mathematical rules, which are executed by a computer, solutions to design problems are generated. Given that "design" is in principle a combinatorial problem, i.e. a constraint-based search for an overall optimal solution of a problem, an exemplary method is described to solve such problems in architectural layout planning. To avoid conflicts relating to theoretical subtleties, a customary approach adopted from Operations Research has been chosen in this work. In this approach, design is a synonym for planning, which can be described as a systematic and methodical course of action for the analysis and solution of current or future problems. The planning task is defined as the analysis of a problem with the aim of preparing optimal decisions by the use of mathematical methods. The decision problem of a planning task is represented by an optimization model, and an efficient algorithm is applied in order to aid in finding one or more solutions to the problem. The basic principle underlying the approach presented herein is the understanding of design in terms of searching for solutions that fulfill specific criteria. This search is executed by the use of a constraint programming language.
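The combinatorial view of design described above can be illustrated with a toy constraint-based search. The rooms, zones, and constraints below are purely illustrative assumptions, not taken from the paper:

```python
from itertools import permutations

# zone -> available area (m^2); room -> required area (m^2) -- illustrative
ZONES = {"north": 30, "middle": 20, "south": 25}
ROOMS = {"office": 18, "meeting": 24, "archive": 12}

def satisfies(assignment):
    """Constraints: every room fits its zone; 'meeting' must not lie south."""
    if any(ROOMS[room] > ZONES[zone] for room, zone in assignment.items()):
        return False
    return assignment["meeting"] != "south"

def solve():
    """Exhaustive constraint-based search over all one-to-one assignments."""
    return [dict(zip(ROOMS, zones))
            for zones in permutations(ZONES)
            if satisfies(dict(zip(ROOMS, zones)))]
```

A real layout planner would use a constraint programming language with propagation instead of brute-force enumeration, but the principle — search restricted by declarative constraints — is the same.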
DECENTRALIZED APPROACHES TO ADAPTIVE TRAFFIC CONTROL AND AN EXTENDED LEVEL OF SERVICE CONCEPT
(2006)
Traffic systems are highly complex multi-component systems suffering from instabilities and non-linear dynamics, including chaos. This is caused by the non-linearity of interactions, delays, and fluctuations, which can trigger phenomena such as stop-and-go waves, noise-induced breakdowns, or slower-is-faster effects. Emerging information and communication technologies (ICT) promise new solutions leading from classical, centralized control to decentralized approaches in the sense of collective (swarm) intelligence and ad hoc networks. An interesting application field is adaptive, self-organized traffic control in urban road networks. We present control principles that allow one to reach a self-organized synchronization of traffic lights. Furthermore, vehicles will become automatic centers for traffic state detection, data management, and communication when forming ad hoc networks through inter-vehicle communication (IVC). We discuss the mechanisms and the efficiency of message propagation on freeways by short-range communication. Our main focus is on future adaptive cruise control (ACC) systems, which will not only increase the comfort and safety of car passengers, but also enhance the stability of traffic flows and the capacity of the road ("traffic assistance"). We present an automated driving strategy that adapts the operation mode of an ACC system to the autonomously detected, local traffic situation. The impact on the traffic dynamics is investigated by means of a multi-lane microscopic traffic simulation. The simulation scenarios illustrate the efficiency of the proposed driving strategy. Even an ACC equipment level of 10% improves traffic flow quality and drastically reduces travel times for the drivers by delaying or preventing a breakdown of the traffic flow. For the evaluation of the resulting traffic quality, we have recently developed an extended level of service concept (ELOS).
We demonstrate our concept on the basis of travel times as the most important variable for a user-oriented quality of service.
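As an illustration of the microscopic simulation setting discussed above, the following sketch uses the Intelligent Driver Model, a common car-following model; all parameter values and the braking scenario are illustrative assumptions, not those of the paper:

```python
import math

def idm_acceleration(v, gap, dv, v0=30.0, T=1.5, a=1.0, b=1.5, s0=2.0):
    """Intelligent Driver Model: v = own speed (m/s), gap = net distance to
    the leader (m), dv = approach rate v - v_lead (m/s)."""
    s_star = s0 + max(0.0, v * T + v * dv / (2.0 * math.sqrt(a * b)))
    return a * (1.0 - (v / v0) ** 4 - (s_star / gap) ** 2)

# single-lane platoon: the leader brakes, the followers adapt
n, dt = 5, 0.1
x = [-30.0 * i for i in range(n)]   # positions, index 0 = leader
v = [25.0] * n
for step in range(300):
    acc = [-2.0 if step < 100 else 0.0]          # leader brakes for 10 s
    for i in range(1, n):
        gap = x[i - 1] - x[i] - 5.0              # 5 m vehicle length
        acc.append(idm_acceleration(v[i], gap, v[i] - v[i - 1]))
    for i in range(n):
        v[i] = max(0.0, v[i] + acc[i] * dt)
        x[i] += v[i] * dt
```

In a study like the one above, an ACC-equipped vehicle would switch the parameters of such a model depending on the detected traffic state; here only the plain car-following dynamics are shown.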
MODEL OF TRAM LINE OPERATION
(2006)
From the passenger's perspective, punctuality is one of the most important features of tram operations. Unfortunately, in most cases this requirement is only insufficiently fulfilled. In this paper we present a simulation model for tram operations with a special focus on punctuality. The aim is to obtain a helpful tool for designing timetables and for analyzing the effects of changing tram priorities at traffic lights or the kind of track separation. A realization of tram operations is assumed to be a sequence of running times between successive stops and times spent by the tram at the stops. In this paper the running time is modeled as the sum of its mean value and a zero-mean random variable. With the help of multiple regression we find that the average running time is a function of the length of the sections and the number of intersections. The random component is modeled by a sum of two independent zero-mean random variables. One of these variables describes the disturbance caused by the process of waiting at an intersection, and the other the disturbance caused by the process of driving. The time spent at a stop is assumed to be a random variable, too. Its distribution is estimated from given measurements of these stop times for different tram lines in Kraków. Finally, a special case of the introduced model is considered and numerical results are presented. This paper is connected with the CIVITAS-CARAVEL project "Clean and better transport in cities". The project has received research funding from the Community's Sixth Framework Programme. The paper reflects only the author's views, and the Community is not liable for any use that may be made of the information contained therein.
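The running-time model described above — a mean value depending on section length and number of intersections, plus two independent zero-mean disturbances, plus a random stop time — can be sketched as follows; all coefficients and distributions are illustrative assumptions, not the estimates from the Kraków data:

```python
import random

random.seed(42)

def running_time(length_m, n_intersections):
    """Mean running time plus two independent zero-mean disturbances:
    one for waiting at intersections, one for the driving process."""
    mean = 20.0 + 0.06 * length_m + 8.0 * n_intersections          # seconds
    waiting = random.gauss(0.0, 4.0 * n_intersections ** 0.5)
    driving = random.gauss(0.0, 0.01 * length_m)
    return mean + waiting + driving

def simulate_trip(sections, runs=1000):
    """Mean and standard deviation of the total travel time; each section
    is (length in m, number of intersections, mean stop time in s)."""
    totals = []
    for _ in range(runs):
        total = 0.0
        for length_m, n_int, stop_mean in sections:
            total += running_time(length_m, n_int)
            total += random.expovariate(1.0 / stop_mean)           # stop time
        totals.append(total)
    mu = sum(totals) / runs
    sd = (sum((t - mu) ** 2 for t in totals) / runs) ** 0.5
    return mu, sd

mu, sd = simulate_trip([(400, 1, 20), (600, 2, 25), (350, 0, 15)])
```

The spread `sd` around the timetabled mean is exactly the quantity a punctuality analysis would examine.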
For the dynamic behavior of lightweight structures such as thin shells and membranes exposed to fluid flow, the interaction between the two fields is often essential. Computational fluid-structure interaction provides a tool to predict this interaction and to complement or eventually replace expensive experiments. Partitioned analysis techniques enjoy great popularity for the numerical simulation of these interactions. This is due to their computational superiority over simultaneous, i.e. fully coupled monolithic, approaches, as they allow the independent use of suitable discretization methods and modular analysis software. For the fluid, we use GLS-stabilized finite elements on a moving domain based on the incompressible unsteady Navier-Stokes equations, where the formulation guarantees geometric conservation on the deforming domain. The structure is discretized by nonlinear, three-dimensional shell elements.
Commonly used sequential staggered coupling schemes may exhibit instabilities due to the so-called artificial added mass effect. As the best remedy for this problem, subiterations should be invoked to guarantee kinematic and dynamic continuity across the fluid-structure interface. Since iterative coupling algorithms are computationally very costly, their convergence rate is decisive for their usability. To ensure and accelerate the convergence of this iteration, the updates of the interface position are relaxed. The time-dependent, 'optimal' relaxation parameter is determined automatically, without any user input, by exploiting a gradient method or applying an Aitken iteration scheme.
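A scalar sketch of the relaxed interface iteration with Aitken's adaptive relaxation parameter might look as follows. The real coupling iteration operates on interface displacement vectors exchanged between the fluid and structure solvers; this toy version uses a scalar fixed-point problem to show only the relaxation mechanism:

```python
import math

def aitken_fixed_point(g, x0, omega0=0.5, tol=1e-12, max_iter=100):
    """Relaxed fixed-point iteration x <- x + omega * r with residual
    r = g(x) - x; omega is adapted each step by Aitken's rule
        omega_new = -omega_old * r_old / (r_new - r_old)."""
    x, r_old, omega = x0, None, omega0
    for k in range(max_iter):
        r = g(x) - x                     # interface residual
        if abs(r) < tol:
            return x, k
        if r_old is not None and r != r_old:
            omega = -omega * r_old / (r - r_old)
        x += omega * r                   # relaxed interface update
        r_old = r
    return x, max_iter

root, iters = aitken_fixed_point(math.cos, 1.0)   # solves x = cos(x)
```

In the scalar case this adaptive relaxation coincides with the secant method, which is why it converges far faster than any fixed relaxation factor.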
The paper presents a linear static analysis of continuous orthotropic thin-walled shell structures simply supported at the transverse ends, with an arbitrary deformable contour of the cross-section. The external loads can be arbitrary as well. This class of structures includes most bridges, scaffold bridges, some roof structures, etc. A numerical example of a steel continuous structure over five spans with an open contour of the cross-section has been solved. The examination of the structure has used the following two computational models: a prismatic structure consisting of isotropic strips, plates, and ribs, considering their real interaction; and a smooth orthotropic plate equivalent to the structure in the first model. The displacements and forces of the structure characterizing its stressed and deformed condition have been determined. The results obtained from the two solutions have been analyzed. The study of the structure is made with the force method in combination with the analytical finite strip method (AFSM) in displacements. The basic system is obtained by separating the superstructure from the understructure at the places of the intermediate supports, and it consists of two parts. The first part is a single-span thin-walled prismatic shell structure; the second part comprises the supports (columns, space frames, etc.). The connection between the superstructure and the intermediate supports is made under arbitrary supporting conditions. The forces at the supporting points in the direction of the removed connections are assumed to be the basic unknowns of the force method. The solution of the superstructure has been accomplished by the AFSM in displacements. The structure is divided in only one (transverse) direction into a finite number of plane strips connected to each other at longitudinal linear nodes. The three displacements of the points on the node lines and the rotation around those lines have been assumed to be the basic unknowns at each node.
The boundary conditions of each strip of the basic system correspond to simple support along the transverse ends and restraint along the longitudinal ones. The particular strip of the basic system has been solved by the method of single trigonometric series. The method is reduced to solving a discrete structure in displacements and restoring its continuity at the places of the sections made, with respect to both the displacements and the forces. The two parts of the basic system have been solved in sequence under the action of unit values of each of the basic unknowns and under the external load. The solution of the support part is accomplished using software for analyzing structures by the FEM. The basic unknown forces have been determined from a system of canonical equations, i.e. the conditions of deformation continuity at the places of the removed connections between the superstructure and the intermediate supports. The final displacements and forces at an arbitrary point of a continuous superstructure have been determined using the principle of superposition. The computations have been carried out with software developed in Visual Fortran version 5.0 for PC.
The paper presents the concept of sensitivity analysis of the limit state of a structure with respect to selected basic variables. The sensitivity is presented in the form of the probability distribution of the limit state of the structure. The analysis is performed by a problem-oriented Monte Carlo simulation procedure. The procedure is based on the problem-specific definition of the elementary event as a structural limit state; thus the sample space consists of limit states of the structure. A one-dimensional random multiplier, defined on the sample space, is introduced. This multiplier refers to the dominant basic variable (or group of variables) of the problem. The numerical procedure results in a set of random numbers. The normalized relative histogram of this set is an estimator of the PDF of the limit state of the structure. Estimators of the reliability, or of the probability of failure, are statistical characteristics of this histogram. The procedure is illustrated by an example of sensitivity analysis of the serviceability limit state of a monumental structure: the colonnade of the Licheń Basilica, situated in central Poland. The limit state of the structure is examined with reference to the horizontal deflection of the upper deck. Wind actions are taken as the dominant variables. An assumption is made that the wind load intensities acting on the lower and on the upper storey of the colonnade, respectively, are identically distributed but correlated random variables. Three correlation variants of these variables are considered. The relevant limit-state histograms are analysed thereafter. The paper ends with conclusions referring to the method and some general remarks on fully probabilistic design.
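A minimal version of the described Monte Carlo procedure might look as follows, with a hypothetical linear deflection response and illustrative load statistics standing in for the actual colonnade model:

```python
import random

random.seed(1)

def deflection(w_lower, w_upper):
    """Hypothetical linear response: horizontal deflection of the upper
    deck (mm) for given wind load intensities on the two storeys."""
    return 0.8 * w_lower + 1.4 * w_upper

def simulate(rho, n=20000, limit=60.0):
    """Sample identically distributed, correlated wind loads and estimate
    the probability that the deflection exceeds the limit."""
    failures = 0
    sample = []
    for _ in range(n):
        z1, z2 = random.gauss(0, 1), random.gauss(0, 1)
        w1 = 25.0 + 6.0 * z1
        w2 = 25.0 + 6.0 * (rho * z1 + (1.0 - rho ** 2) ** 0.5 * z2)
        d = deflection(w1, w2)
        sample.append(d)        # limit-state sample -> histogram -> PDF
        failures += d > limit
    return failures / n, sample

# three correlation variants, as in the study
pf = {rho: simulate(rho)[0] for rho in (0.0, 0.5, 1.0)}
```

With this linear response, stronger correlation widens the deflection distribution and so raises the estimated probability of failure — the kind of effect the three correlation variants in the paper are meant to expose.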
RESEARCH ON THE DEFORMATION OF MULTILAYERED PLATES ON AN UNDEFORMABLE FOUNDATION BY A REFINED UNFLEXURAL MODEL
(2006)
The stress-strain state (SSS) of multilayered plates on an undeformable foundation is investigated. The computational scheme of a transversely loaded plate is formed by symmetrically mirroring the plate about the surface of contact with the foundation. The plate of double thickness thus becomes symmetrically loaded on both sides with respect to its median surface. This allows modelling only the unflexural deformation, which reduces the number of unknowns and the overall order of differentiation of the resolving system of equations. The developed refined continual model takes into account deformations of transverse shear and transverse compression in a high iterative approximation. Both rigid contact between the foundation and the plate and frictionless shear on the surface of contact between the plate and the foundation are considered. Calculations confirm the efficiency of this approach, yielding solutions that are qualitatively and quantitatively close to three-dimensional solutions.
In the field of renovation of old buildings and as-built documentation in civil engineering, it is frequently necessary to update existing plans with respect to the current state of the building or, if such plans are no longer available, to create entirely new plan documents of the as-is state. Laser surveying technology offers a convenient way to acquire these building data. In this context, the present article introduces approaches to the partial automation of generating a three-dimensional computer model of a building. The result is a volume model in which the geometric and topological information about faces, edges, and points is first described in the sense of a B-rep model. The objects of this volume model are analyzed with methods from the field of artificial intelligence and systematically categorized into building component classes. Knowledge of the component semantics thus makes it possible to derive a building product model from the data and to make it available to individual specialist planners, for instance for producing an energy certificate. The article demonstrates the successful use of virtual neural networks in the field of as-built documentation by means of a complex example.
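The component classification step could be sketched, in a strongly simplified form, with a tiny perceptron; the geometric features, labels, and sample values below are illustrative assumptions, not the networks or data of the article:

```python
def train_perceptron(samples, epochs=50, lr=0.1):
    """Perceptron learning rule; samples are ((feature1, feature2), label)
    with label +1 or -1. Returns weights and bias."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (f1, f2), label in samples:
            pred = 1 if w[0] * f1 + w[1] * f2 + b > 0 else -1
            if pred != label:
                w = [w[0] + lr * label * f1, w[1] + lr * label * f2]
                b += lr * label
    return w, b

# features: (|normal_z|, area / 25 m^2); walls (+1) are vertical surfaces,
# floors (-1) are horizontal ones -- purely illustrative training data
samples = [((0.02, 0.48), 1), ((0.05, 0.36), 1), ((0.01, 0.30), 1),
           ((0.98, 0.80), -1), ((0.95, 0.64), -1), ((0.99, 0.96), -1)]
w, b = train_perceptron(samples)
```

A real classifier for component semantics would use richer features and a multi-layer network, but the principle — mapping geometric features of B-rep objects to component classes — is the same.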
A new application area of software technology is smart living or sustainable living. Within this area, application platforms are designed and realized with the goal of supporting value-added services. In this context, value-added services integrate microelectronics, home automation, and services to enhance the attractiveness of flats, homes, and buildings. Real estate companies and service providers dealing with home services are especially interested in an effective design and management of their services. Service engineering is the established approach for designing customer-oriented service processes. It consists of several phases, from situation analysis through service creation and service design to service management. This article describes how the service blueprint method can be used to design service processes. Smart living includes all actions that turn a flat into a smart home for living. One special requirement of this application domain is the use of local components (actuators, sensors) within service processes. This article shows how this extended method supports service providers in improving the quality of customer-oriented service processes and in deriving the needed interfaces of the involved actors. For the civil engineering process, it will be possible to derive the needed information from a built-in home automation system. The aim is to show how to obtain the smart local components needed to fulfill IT-supported value-added services offered later. Value-added services focused on inhabitants are grouped into consulting and information, care and supervision, leisure time activities, repairs, mobility and delivery, safety and security, and supply and disposal.
Water resources development and management is a complex problem. It includes the design and operation of single system components, often as part of larger interrelated systems and usually on the basis of river basins. While several decades ago the dominant objective was the maximization of economic benefit, other objectives have evolved as part of the sustainable development envisaged. Today, planning and operation of larger water resources systems is practically impossible without adequate computer tools, normally one or several models, increasingly combined with database management systems and multi-criteria assessment procedures in decision support systems. The use of models in civil engineering already has a long history where structural engineering is concerned. These design support models, however, must rather be seen as expert systems made to support the engineer in his daily work. They often have no direct link to stakeholders and the decision-maker community. The scale of investigation is often much larger in water resources engineering than in structural engineering, which is related to different stakeholders and decision-making procedures. Still, several similarities are obvious, which can be summarized as the search for a compromise solution to a complex, i.e. multiobjective and interdisciplinary, decision problem. While in structural engineering, e.g., aesthetics, stability, and energy consumption might be important evaluation criteria in addition to construction and maintenance cost, other or additional criteria have to be considered in water resources planning, such as political, environmental, and social criteria. In this respect civil engineers tend to overemphasize technical criteria. For the future, the existing expert systems should be embedded into an improved decision support shell, keeping in mind that decision makers are hardly interested in numerical modelling results.
The paper will introduce the problem and demonstrate the state of the art by means of an example.
The paper is dedicated to exploring the solvability of the market segmentation problem with the help of linear convolution algorithms. The mathematical formulation of this problem is an interval task of covering a bipartite graph by stars. The vertices of the first partition correspond to types of commodities, the vertices of the second to customer groups. An appropriate method is offered for reducing the interval problem to a two-criterion task that has one implemented linear convolution algorithm. It is proved that the multicriterion, and consequently the interval, market segmentation problem cannot be solved with the help of a linear convolution algorithm.
For the assessment of old buildings, thermographic analysis aided by infrared cameras is nowadays employed in a wide range of applications. Image processing and evaluation can be economically practicable only if the image evaluation can be automated to the largest possible extent. For that reason, methods of computer vision are presented in this paper to evaluate thermal images. To detect typical thermal image elements, such as thermal bridges and lintels, in thermal images or gray-value images, methods of digital image processing have been applied, for which numerical procedures are available to transform, modify, and encode images. At the same time, image processing can be regarded as a multi-stage process. In order to be able to accomplish the process of image analysis from image formation through enhancement and segmentation to categorization, appropriate functions must be implemented. For this purpose, different measuring procedures and methods for automated detection and evaluation have been tested.
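A very reduced form of the segmentation step — detecting warm regions in a gray-value image by thresholding and connected-component labeling — might look like this; the threshold and image values are illustrative:

```python
def find_warm_regions(image, threshold):
    """Group pixels at or above a gray-value threshold into 4-connected
    components -- a minimal stand-in for thermal-bridge detection.
    `image` is a list of rows of gray values (0..255)."""
    rows, cols = len(image), len(image[0])
    seen, regions = set(), []
    for r in range(rows):
        for c in range(cols):
            if image[r][c] >= threshold and (r, c) not in seen:
                stack, region = [(r, c)], []
                seen.add((r, c))
                while stack:                      # flood fill
                    y, x = stack.pop()
                    region.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and image[ny][nx] >= threshold
                                and (ny, nx) not in seen):
                            seen.add((ny, nx))
                            stack.append((ny, nx))
                regions.append(region)
    return regions

# toy thermal image with two warm spots (a lintel and a thermal bridge, say)
img = [
    [20, 20, 200, 20, 20],
    [20, 20, 210, 20, 20],
    [20, 20, 20, 20, 20],
    [180, 190, 20, 20, 20],
    [20, 20, 20, 20, 20],
]
regions = find_warm_regions(img, 150)
```

Real thermal image evaluation would add calibration, noise filtering, and shape-based categorization of the regions found.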
The Lucas-Kanade tracker has proven to be an efficient and accurate method for calculating optical flow. However, this algorithm can reliably track only suitable image features such as corners and edges. Therefore, the optical flow can only be calculated for a few points in each image, resulting in sparse optical flow fields. Accumulation of these vectors over time is a suitable method to retrieve a dense motion vector field. However, the accumulation process limits the application of the proposed method to fixed camera setups. Here, a histogram-based approach is favored to allow more than a single typical flow vector per pixel. The resulting vector field can be used to detect roads and prescribed driving directions, which constrain object movements. The motion structure can be modeled as a graph: the nodes represent entry and exit points for road users as well as crossings, while the edges represent typical paths.
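The per-pixel histogram accumulation described above can be sketched as follows; the bin count and support threshold are illustrative assumptions:

```python
import math
from collections import defaultdict

class FlowAccumulator:
    """Per-pixel histograms over quantized flow directions, so that more
    than one typical direction per pixel can coexist (e.g. two opposite
    driving directions on the same road)."""
    def __init__(self, bins=8):
        self.bins = bins
        self.hist = defaultdict(lambda: [0] * bins)

    def _bin(self, dx, dy):
        """Map a flow vector to one of `bins` direction sectors."""
        angle = math.atan2(dy, dx) % (2.0 * math.pi)
        return int(angle / (2.0 * math.pi / self.bins)) % self.bins

    def add(self, x, y, dx, dy):
        self.hist[(x, y)][self._bin(dx, dy)] += 1

    def typical_directions(self, x, y, min_support=2):
        """Direction bins with enough accumulated evidence."""
        return [b for b, c in enumerate(self.hist[(x, y)]) if c >= min_support]

acc = FlowAccumulator()
for _ in range(3):
    acc.add(10, 10, 1.0, 0.0)    # eastbound flow at pixel (10, 10)
for _ in range(2):
    acc.add(10, 10, -1.0, 0.0)   # westbound flow at the same pixel
```

A plain vector average at this pixel would cancel the two directions out; the histogram keeps both, which is exactly the motivation stated above.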
Digital models of buildings are widely used in civil engineering. In these models, geometric information serves as the leading information. Engineers are used to having geometric information; for instance, it is state of the art to specify a point by its three coordinates. However, the traditional approaches have disadvantages. Geometric information is over-determined: more geometric information is specified and stored than needed. In addition, engineers already deal with topological information; the denotation of objects in buildings is of a topological nature. The question to be answered is whether approaches in which topological information takes the leading role would be more efficient in civil engineering. This paper presents such an approach. Topological information is modelled independently of geometric information and is used for denoting the objects of a building. Geometric information is associated with topological information, so that geometric information "weights" a topology.
The concept presented in this paper has already been used in surveying existing buildings. Experience with this concept showed that the amount of geometric information required for a complete specification of a building could be reduced by a factor of up to 100. Further research will show how this concept can be used in planning processes.
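A minimal sketch of topology as the leading information might look as follows: the building is first a graph of named objects and adjacencies, and geometry is attached to the topological entities afterwards. All object names and geometric values are illustrative:

```python
class BuildingModel:
    """Topology-led building model: objects and their adjacencies come
    first; geometry only 'weights' the topological entities."""
    def __init__(self):
        self.objects = {}        # name -> attached geometry (or None)
        self.adjacent = {}       # name -> set of adjacent object names

    def add_object(self, name):
        self.objects.setdefault(name, None)
        self.adjacent.setdefault(name, set())

    def connect(self, a, b):
        """Purely topological statement: a and b share a boundary."""
        self.adjacent[a].add(b)
        self.adjacent[b].add(a)

    def attach_geometry(self, name, geometry):
        """Geometry is associated to the topological object afterwards."""
        self.objects[name] = geometry

model = BuildingModel()
for name in ("room_101", "room_102", "wall_7"):
    model.add_object(name)
model.connect("room_101", "wall_7")
model.connect("room_102", "wall_7")
model.attach_geometry("wall_7", {"length_m": 4.2, "thickness_m": 0.24})
```

Note that the statement "wall_7 separates room_101 and room_102" is complete without any coordinates — the geometric record is optional and can stay minimal, which is the source of the data reduction reported above.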
Optimum technological solutions must take into account the entire life cycle of structures, including design procedures as well as quality assurance, inspection, maintenance, and repair strategies. Unfortunately, current design standards do not provide a satisfactory basis to ensure expected structural lifetimes. The latter may vary from only a few years for temporary structures to over a century for bridges, water dams, or nuclear repositories. Consistent scientific concepts are urgently required to cover this wide spectrum of lifetimes in structural design and maintenance. This was the motivation for a group of scientists at the Ruhr-University Bochum (RUB) to start a special research program, supported by the German Research Foundation (DFG) within the Collaborative Research Center SFB 398 since 1996. Institutes of the University of Wuppertal and of the University of Duisburg-Essen joined the research group. The goal of the Center is to study sources of damage and deterioration in materials and structures, to develop consistent models and simulation methods, to predict structural lifetimes, and finally to integrate these predictions into new lifetime-oriented design strategies.
Research activities in our center are organised in three Project Groups as follows:
- Modelling of lifetime effects
- Methods for lifetime-oriented structural analyses
- Future lifespan-oriented design strategies.
The aim of this paper is to present the so-called discrete-continual boundary element method (DCBEM) of structural analysis. Its field of application comprises building structures as well as parts and components of residential, commercial, and uninhabited structures whose physical and geometrical parameters are invariant along some dimensions. We should mention here in particular such objects as beams, thin-walled bars, strip foundations, plates, shells, deep beams, high-rise buildings, extended buildings, pipelines, rails, dams, and others. DCBEM belongs to the group of semianalytical methods. Semianalytical formulations are contemporary mathematical models which are currently becoming practical to realize due to the substantial speed-up in computer performance. DCBEM is based on the theory of pseudodifferential boundary equations. The corresponding pseudodifferential operators are discretely approximated using Fourier analysis or wavelet analysis. The main advantages of DCBEM over other methods of numerical analysis are a double reduction in the dimension of the problem (discrete numerical subdivision is applied not to the full region of interest but only to the boundary of the region's cross-section; as a matter of fact, one is solving a one-dimensional problem with a finite step on the boundary of the region), the opportunity to carry out a very detailed analysis of specific chosen zones, simplified preparation of the initial data, and simple, adaptive algorithms. Two ways to define and conduct a DCBEM analysis have been developed, indirect (IDCBEM) and direct (DDCBEM); the indirect formulation, as in the boundary element method (BEM), is applied and used somewhat more than the direct one.
The design of challenging space structures frequently relies on the theory of folded plates. The models are composed of plane facets whose bending and membrane stiffnesses are coupled along the folds. In conventional finite element analysis of faceted structures, the continuity of the displacement field is enforced exclusively at the nodes. Since approximate solutions for transverse and for in-plane displacements are not members of the same function space, separation occurs between the common nodes of adjacent elements. It is shown that the kinematic assumptions of Bernoulli are responsible for this incompatibility along the edges in facet models. A general answer to this problem involves substantial modification of plate and membrane theory, but a straightforward formulation can be derived for simply folded plates: structures whose folds do not intersect. A broad class of faceted structures, including models of various curved shells, belongs to this category and can be calculated consistently. The additional requirements to assure continuity concern the mapping of displacement derivatives on the edges. An appropriate finite facet element provides node- and edge-oriented degrees of freedom, whose transformation to system degrees of freedom depends on the geometric configuration at each node. The concept is implemented using conforming triangular elements. To evaluate the new approach, the energy norm of representative structures for refined meshes is calculated. The focus is placed on the mathematical convergence towards reliable solutions obtained from finite volume models.
By treating the production process as the central transformation element, the structure of construction production is captured realistically. The integration of a process-oriented cost definition relates the relevant cost parameters and production factors in such a way that they are consistent with the real cost structure and cost dynamics of a construction site. The relationship between construction time and cost is captured and evaluated directly. The high dynamics of construction production between capacity-constrained resources and production processes is accounted for by the pool model and by simulation as the computation method. Simple modelling of cyclically repeating work operations (takt planning) is possible. In the simulation, the takt formation arises from capacity constraints without any intervention by the user. By means of an optimization method, the cheapest or fastest production variant can be searched for automatically.
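The search for the cheapest production variant under capacity constraints can be illustrated with a deliberately simple takt model; all cost figures, rates, and section counts are illustrative assumptions, not values from the work described:

```python
def duration(crews, n_sections=8, takt=5.0):
    """Makespan in days: ceil(n_sections / crews) cycles of one takt."""
    return -(-n_sections // crews) * takt

def total_cost(crews, crew_rate=2000.0, site_rate=500.0, mobilization=5000.0):
    """Process-oriented cost: time-dependent wages and site overhead plus
    a fixed mobilization cost per crew (all figures illustrative)."""
    t = duration(crews)
    return crews * t * crew_rate + t * site_rate + crews * mobilization

# automated search over production variants (number of parallel crews)
best = min(range(1, 9), key=total_cost)
```

Even in this toy model the optimum is an interior compromise: more crews shorten the build and cut time-dependent site overhead, but raise wage and mobilization cost — the time-cost trade-off the process-oriented cost definition is meant to expose.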
The concrete is modeled as a material with damage and plasticity, where the viscoplastic and viscoelastic behaviour depends on the rate of the total strains. Due to the damage behaviour, the compliance tensor develops different properties in tension and compression. Various yield surfaces, flow rules, and damage rules have been tested with respect to their usability in a concrete model. A three-dimensional yield surface was developed by the author from a failure surface based on the Willam-Warnke five-parameter model. Only one general uniaxial stress-strain relation is used for the numerical control of the yield surface. From that curve, all necessary parameters for different concrete strengths and different strain rates can be derived by affine transformations. For the flow rule, a non-associated inelastic potential is used in the compression zone and a Rankine potential in the tension zone. Owing to the time-dependent formulation, the symmetry of the system equations is maintained despite the use of non-associated potentials for the derivation of the inelastic strains. For quasi-static computations, a simple viscoplastic law based on Perzyna's approach is used. The principle of equality of dissipation power in the uniaxial and the triaxial states of stress is applied. It is modified by a factor that depends on the actual stress ratio and, in comparison with the Kupfer experiments, yields more realistic strains. The concrete model is implemented in a mixed hybrid finite element. Examples at the structural level are presented for verification of the concrete model.
For economic, technical or political reasons, about 100 nuclear power plants worldwide have been disconnected to date. All of these power stations still await their complete dismantling, which, for a single reactor, costs up to one billion euros and lasts up to 15 years. In our contribution we present a resource-constrained project scheduling approach minimizing the total discounted cost of dismantling a nuclear power plant. Such a project can be subdivided into a number of disassembling activities. The execution of these activities requires time and scarce resources like manpower, special equipment or storage facilities for the contaminated material arising from the dismantling. Moreover, several minimum and maximum time lags (temporal constraints) between the start times of the different activities have to be observed. Finally, each disassembling activity can be processed in two alternative execution modes, which lead to different disbursements and determine the resource requirements of the considered activity. The optimization problem is to determine a start time and an execution mode for each activity such that the discounted cost of the project is minimal, no temporal constraint is violated, and the activities' resource requirements never exceed the availability of any scarce resource at any point in time. We introduce an appropriate multi-mode project scheduling model with minimum and maximum time lags as well as renewable and cumulative resources for this optimization problem. Furthermore, we show that the considered optimization problem is NP-hard in the strong sense. For small problem instances, optimal solutions can be obtained from a relaxation-based enumeration approach incorporated into a branch-and-bound algorithm. In order to solve large problem instances as well, we also propose a truncated version of the devised branch-and-bound algorithm.
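The core trade-off — execution modes with different durations and disbursements, discounted back to project start — can be sketched for a toy instance by exhaustive mode enumeration (a stand-in for the paper's branch-and-bound; the two-activity instance and all numbers below are invented for illustration):

```python
import math
from itertools import product

def best_mode_assignment(activities, succ, rate):
    """Exhaustively enumerate execution modes for a tiny multi-mode project.

    activities: list of mode lists, each mode a (duration, cost) pair.
    succ: list of (i, j) pairs meaning j may start only after i finishes
          (a minimum time lag equal to i's duration); activities are assumed
          topologically ordered.
    rate: continuous discount rate; each cost is paid at activity start.
    Returns (best_modes, best_discounted_cost).
    """
    best = (None, float("inf"))
    for modes in product(*(range(len(a)) for a in activities)):
        dur = [activities[i][m][0] for i, m in enumerate(modes)]
        # earliest start times by a forward pass over the precedence relations
        start = [0.0] * len(activities)
        for i, j in succ:
            start[j] = max(start[j], start[i] + dur[i])
        cost = sum(activities[i][m][1] * math.exp(-rate * start[i])
                   for i, m in enumerate(modes))
        if cost < best[1]:
            best = (modes, cost)
    return best
```

For two activities where a slower, cheaper first mode delays the successor's payment, the discounting decides which mode wins; the exhaustive scan is exponential in the number of activities, which is why the paper resorts to branch-and-bound.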
We consider efficient numerical methods for the solution of partial differential equations with stochastic coefficients or right-hand side. The discretization is performed by the stochastic finite element method (SFEM). Separation of spatial and stochastic variables in the random input data is achieved via a Karhunen-Loève expansion or Wiener's polynomial chaos expansion. We discuss solution strategies for the Galerkin system that take advantage of the special structure of the system matrix. For stochastic coefficients that are linear in a set of independent random variables, we employ Krylov subspace recycling techniques after having decoupled the large SFEM stiffness matrix.
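A minimal sketch of the Karhunen-Loève separation step, assuming an exponential covariance on a 1D grid (the covariance model, grid and truncation are illustrative choices, not taken from the paper):

```python
import numpy as np

def kl_modes(points, corr_len, n_modes):
    """Discrete Karhunen-Loeve decomposition of the exponential covariance
    C(x, y) = exp(-|x - y| / corr_len) on a 1D grid -- an illustrative
    stand-in for the random-field input of an SFEM discretization."""
    C = np.exp(-np.abs(points[:, None] - points[None, :]) / corr_len)
    vals, vecs = np.linalg.eigh(C)          # eigenvalues in ascending order
    idx = np.argsort(vals)[::-1][:n_modes]  # keep the n_modes largest
    return vals[idx], vecs[:, idx]

def kl_realization(vals, vecs, xi):
    """One field realization a(x) = sum_k sqrt(lambda_k) * xi_k * phi_k(x),
    with xi a vector of independent random variables."""
    return vecs @ (np.sqrt(vals) * xi)
```

The rapid eigenvalue decay is what makes the truncated expansion (and hence the separation of spatial and stochastic variables) efficient.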
The contribution presents a model that simulates construction duration and cost for a building project. The model predicts a set of expected project costs and a duration schedule from input parameters such as production speed, scope of work, time schedule, bonding conditions, and maximum and minimum deviations from scope of work and production speed. On the basis of an input probability level, the simulation model calculates the corresponding construction cost and duration of a project; conversely, it can determine the probability level corresponding to given construction costs and activity durations. One of the interpretive outputs of the application software is the compilation of a presumed dynamic progress chart. This progress chart represents the expected scenario of the development of a building project, mapping potential time dislocations of particular activities. Its calculation is based on an algorithm that computes mean values as a partial result of the simulated building project. Construction cost and time models are, in many ways, useful tools in project management. Clients are able to make proper decisions about the time and cost schedules of their investments, and building contractors are able to schedule predicted project cost and duration before any decision is finalized.
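The mapping from a probability level to cost and duration figures can be illustrated with a minimal Monte Carlo sketch, assuming triangular duration distributions and a serial task chain (both assumptions are ours, not the paper's model):

```python
import random

def simulate_project(tasks, n_runs, prob_level, seed=0):
    """Monte Carlo sketch of cost/duration prediction for a serial task chain.

    tasks: list of (min_dur, most_likely_dur, max_dur, cost_per_day) tuples;
    durations drawn from triangular distributions. Returns the duration and
    cost quantiles at the requested probability level."""
    rng = random.Random(seed)
    durs, costs = [], []
    for _ in range(n_runs):
        total_d = total_c = 0.0
        for lo, mode, hi, rate in tasks:
            d = rng.triangular(lo, hi, mode)
            total_d += d
            total_c += d * rate
        durs.append(total_d)
        costs.append(total_c)
    durs.sort()
    costs.sort()
    k = min(int(prob_level * n_runs), n_runs - 1)
    return durs[k], costs[k]
```

Asking for the 80 % level returns a duration and cost that the simulated project undercuts in 80 % of the runs — the "adequate level of probability" view described in the abstract.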
We propose a new approach to the numerical solution of quasi-static elastic-plastic problems based on the Moreau-Yosida theorem. After the time discretization, the problem is expressed as an energy minimization problem for unknown displacement and plastic strain fields. The dependency of the minimization functional on the displacement is smooth, whereas the dependency on the plastic strain is non-smooth. Besides, there exists an explicit formula for calculating the plastic strain from a given displacement field. This allows us to reformulate the original problem as a minimization problem in the displacement only. Using the Moreau-Yosida theorem from convex analysis, the minimization functional in the displacements turns out to be Fréchet-differentiable, although the hidden dependency on the plastic strain is non-differentiable. The second derivative exists everywhere apart from the elastic-plastic interface dividing the elastic and plastic zones of the continuum. This motivates the implementation of a Newton-like method, which converges super-linearly, as can be observed in our numerical experiments.
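The smoothing effect of the Moreau-Yosida regularization is already visible in one dimension: the envelope of the non-smooth function f(y) = |y| is the C¹ Huber function. A minimal sketch (illustrative only; the paper works with displacement and plastic strain fields, not scalars):

```python
def moreau_envelope_abs(x, lam):
    """Moreau-Yosida regularization of f(y) = |y|:
        M(x) = min_y |y| + (x - y)^2 / (2 * lam).
    The minimizer is the soft-thresholding (proximal) map, and the envelope
    equals the C^1 Huber function: x**2 / (2*lam) for |x| <= lam,
    |x| - lam/2 otherwise. Returns (envelope value, minimizer)."""
    # proximal point of |.|: soft thresholding
    y = max(abs(x) - lam, 0.0) * (1.0 if x >= 0 else -1.0)
    return abs(y) + (x - y) ** 2 / (2.0 * lam), y
```

The envelope is differentiable everywhere even though |y| is not at the origin — the one-dimensional analogue of the Fréchet differentiability exploited in the paper.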
Adopting the European laws on environmental protection will require sustained efforts by the authorities and communities of Romania; implementing modern solutions will become a fast and effective option for improving the functioning systems in order to prevent disasters. As part of the urban infrastructure, the drainage networks for pluvial and residual waters are included in the plan of promoting systems that protect environmental quality, with the purpose of integrated and adaptive management. The paper presents a distributed control system for the sewer network of the city of Iasi. The unsatisfactory technical state of the current sewer system is described, focusing on the objectives of implementing the control system. The proposed distributed control system for the Iasi drainage network is based on hierarchic control theory for diagnosis, sewer planning and management. Two control levels are proposed: coordination and local execution. The configuration of the distributed control system, including data acquisition and conversion equipment, interface characteristics, the local data bus, the data communication network and the station configuration, is described in detail. The project aims to be a useful instrument for the local authorities in preventing and reducing the impact of future natural disasters on urban areas by means of modern technologies.
The execution of project activities generally requires the use of (renewable) resources like machines, equipment or manpower. The resource allocation problem consists in assigning time intervals to the execution of the project activities while taking into account temporal constraints between activities, emanating from technological or organizational requirements, and the costs incurred by the resource allocation. If the total procurement cost of the different renewable resources has to be minimized, we speak of a resource investment problem. If the cost depends on the smoothness of the resource utilization over time, the underlying problem is called a resource levelling problem. In this paper we consider a new tree-based enumeration method for solving resource investment and resource levelling problems that exploits some fundamental properties of spanning trees. The enumeration scheme is embedded in a branch-and-bound procedure using a workload-based lower bound and depth-first search. Preliminary computational results show that the proposed procedure is promising for instances with up to 30 activities.
In this study we introduce a concept of a discrete Laplacian on the plane lattice and consider its iteration dynamical system. First we discuss and prove some basic properties of the dynamical system. Next, by computer simulation, we show that it reproduces the following phenomena quite well: (1) the crystals of water, (2) the designs of carpets and embroideries, (3) the time evolution of the numbers of families of extinct animals, and (4) the ecosystems of living things. Hence we may expect that such dynamical systems help us understand evolution and self-organization. We want to stress the following fact: although several well-known chaotic dynamical systems can describe chaotic phenomena, they have difficulties in describing evolution and self-organization.
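One concrete instance of such a lattice iteration, assuming a mod-2 reduction of the neighbour sum (an illustrative choice on our part; the paper's rule may differ), already produces self-similar "carpet" patterns of the kind mentioned above:

```python
def iterate_lattice(n, steps):
    """Iterate a discrete-Laplacian-type rule u <- (sum of the 4 neighbours)
    mod 2 on an n x n lattice, starting from a single central seed.
    The mod-2 reduction yields self-similar, carpet-like patterns."""
    u = [[0] * n for _ in range(n)]
    u[n // 2][n // 2] = 1
    for _ in range(steps):
        v = [[0] * n for _ in range(n)]
        for i in range(n):
            for j in range(n):
                s = 0
                if i > 0:     s += u[i - 1][j]
                if i < n - 1: s += u[i + 1][j]
                if j > 0:     s += u[i][j - 1]
                if j < n - 1: s += u[i][j + 1]
                v[i][j] = s % 2
        u = v
    return u
```

Because the rule and the seed are symmetric, every iterate inherits the full symmetry of the square — the geometric regularity behind the "designs of carpets".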
Reasonably accurate cost estimation of the structural system is quite desirable at the early stages of the design process of a construction project. However, the numerous interactions among the many cost-variables make the prediction difficult. Artificial neural networks (ANN) and case-based reasoning (CBR) are reported to overcome this difficulty. This paper presents a comparison of CBR and ANN augmented by genetic algorithms (GA) conducted by using spreadsheet simulations. GA was used to determine the optimum weights for the ANN and CBR models. The cost data of twenty-nine actual cases of residential building projects were used as an example application. Two different sets of cases were randomly selected from the data set for training and testing purposes. Prediction rates of 84% in the GA/CBR study and 89% in the GA/ANN study were obtained. The advantages and disadvantages of the two approaches are discussed in the light of the experiments and the findings. It appears that GA/ANN is a more suitable model for this example of cost estimation where the prediction of numerical values is required and only a limited number of cases exist. The integration of GA into CBR and ANN in a spreadsheet format is likely to improve the prediction rates.
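The weighted-distance retrieval at the heart of the CBR model can be sketched as follows, with a random-search stand-in for the GA weight optimization (the attribute set, the search scheme and all names are invented for illustration):

```python
import random

def predict_cost(query, cases, weights):
    """CBR-style 1-nearest-neighbour prediction with an attribute-weighted
    squared distance; each case is (attribute_tuple, known_cost)."""
    def dist(attrs):
        return sum(w * (a - q) ** 2 for w, a, q in zip(weights, attrs, query))
    best = min(cases, key=lambda c: dist(c[0]))
    return best[1]

def tune_weights(train, n_iter=200, seed=0):
    """Random-search stand-in for the GA: keep the weight vector with the
    smallest leave-one-out absolute prediction error on the training cases."""
    rng = random.Random(seed)
    n_attr = len(train[0][0])
    best_w, best_err = [1.0] * n_attr, float("inf")
    for _ in range(n_iter):
        w = [rng.random() for _ in range(n_attr)]
        err = 0.0
        for i, (attrs, cost) in enumerate(train):
            others = train[:i] + train[i + 1:]
            err += abs(predict_cost(attrs, others, w) - cost)
        if err < best_err:
            best_w, best_err = w, err
    return best_w
```

A GA would replace the independent random draws with selection, crossover and mutation over a population of weight vectors; the leave-one-out error plays the role of the fitness function.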
The ride of a tram along its line, defined by a timetable, consists of the travel times between subsequent sections and the time spent by the tram at stops. In the paper, statistical data collected in the city of Krakow is presented and evaluated. Under Polish conditions, the time trams spend at stops makes up a remarkable 30 % of the total tram line operation time; moreover, this time is characterized by large variability. The time spent by a tram at a stop consists of the alighting and boarding time and the time lost at the stop after alighting and boarding have ended but before departure. The alighting and boarding time itself usually depends on the random number of alighting and boarding passengers and also on the number of passengers inside the vehicle. The time spent at the stop after alighting and boarding have ended, however, is the effect of certain random events, mainly the impossibility of departing from the stop caused by the lack of priorities for public transport vehicles. The main focus of the talk lies on the description and modelling of these effects. This paper is part of the CIVITAS-CARAVEL project "Clean and better transport in cities". The project has received research funding from the Community's Sixth Framework Programme. The paper reflects only the author's views and the Community is not liable for any use that may be made of the information contained therein.
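The decomposition of stop time into a passenger-dependent part and a randomly blocked departure can be sketched as a small stochastic model (the Poisson passenger counts and the exponential blocking time are our assumptions, not values fitted to the Krakow data):

```python
import math
import random

def poisson(rng, lam):
    """Poisson sample via Knuth's multiplication algorithm."""
    L = math.exp(-lam)
    k, p = 0, 1.0
    while p > L:
        k += 1
        p *= rng.random()
    return k - 1

def mean_stop_time(lam_board, lam_alight, t_per_pass, p_block, mean_block,
                   n_runs=10000, seed=1):
    """Mean stop time: service time proportional to the random number of
    boarding/alighting passengers, plus - with probability p_block - an
    exponentially distributed delay for a blocked departure."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_runs):
        dwell = t_per_pass * (poisson(rng, lam_board) + poisson(rng, lam_alight))
        if rng.random() < p_block:
            dwell += rng.expovariate(1.0 / mean_block)
        total += dwell
    return total / n_runs
```

Comparing runs with and without the blocking term isolates the share of stop time attributable to missing public-transport priorities, the effect highlighted in the abstract.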
These remarks contribute to the further preservation of the historic building stock in Mecklenburg from the perspective of structural analysis. It is confirmed again and again that, for a realistic assessment of the load-bearing behaviour of a structure, the models of geometry, loading and material must be available as models of equal standing. It also becomes evident that even the best analysis programs can only deliver results that are achievable with the input data. Accordingly, the research focus of the structural design group of the Faculty of Architecture at Hochschule Wismar has in recent years concentrated on realistically capturing the interaction between building survey and geometric modelling. The main topics in this area are the interaction between damage and structural analysis, and the interaction between the surveyed geometry and the geometric model used for structural analysis. The abundance of surveyed data is usually more of a hindrance than a blessing for the structural analysis. It was shown here which, and how many, geometric data are sensible for the geometric model used in structural analysis. Since a survey of one's own takes a relatively large amount of time, a "mental" building survey was carried out: the historical planning process is retraced in its individual form-finding steps and transferred into virtual reality. This method yields different building states and also allows possible construction phases to be represented. The structural analysis of this virtual reality then reveals possible weaknesses of the structures and/or the necessity of constructive modifications. Comparing the results of the structural analysis with reality on the basis of the available data provides the basis for determining the current need for action.
Since the condition of a building changes over time, methods are examined that make it possible to process and maintain a data set once it has been compiled.
Designing a structure follows a pattern of creating a structural design concept, executing a finite element analysis and developing a design model. A project was undertaken to create computer support for executing these tasks within a collaborative environment. This study focuses on developing a software architecture that integrates the various structural design aspects into a seamless functional collaboratory that satisfies engineering practice requirements. The collaboratory is to support both homogeneous collaboration i.e. between users operating on the same model and heterogeneous collaboration i.e. between users operating on different model types. Collaboration can take place synchronously or asynchronously, and the information exchange is done either at the granularity of objects or at the granularity of models. The objective is to determine from practicing engineers which configurations they regard as best and what features are essential for working in a collaborative environment. Based on the suggestions of these engineers a specification of a collaboration configuration that satisfies engineering practice requirements will be developed.
The notion of reliability plays a central role in the assessment of transport networks. From the point of view of public transport users, one of the most important criteria for judging the quality of a line network is whether the destination can be reached within a given time with high certainty. In the talk, this notion of reliability is given a mathematical formulation. We first address the usual notion of network reliability in the sense of pairwise connection probabilities. This notion is then extended by considering reliability subject to a maximum admissible travel time. In past work, the ring-radius structure has proven to be a well-established model for the theoretical description of transport networks. These considerations are now extended by incorporating real transport network structures. The tram network of Krakow serves as a concrete example; in particular, we investigate the effect a planned extension of the network will have on its reliability. This paper is part of the CIVITAS-CARAVEL project "Clean and better transport in cities". The project has received research funding from the Community's Sixth Framework Programme. The paper reflects only the author's views and the Community is not liable for any use that may be made of the information contained therein.
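The extended notion — two-terminal reliability subject to a maximum admissible travel time — can be computed exactly for small networks by enumerating edge states (a conceptual sketch; for real tram networks this enumeration is infeasible and approximation methods are needed):

```python
import heapq
from itertools import product

def reliability_within(edges, s, t, t_max):
    """Probability that t is reachable from s within travel time t_max,
    given independently failing undirected edges.

    edges: list of (u, v, travel_time, availability_prob).
    Enumerates all 2^|E| edge states; exact but only for small networks."""
    total = 0.0
    for state in product([0, 1], repeat=len(edges)):
        p = 1.0
        for on, (_, _, _, q) in zip(state, edges):
            p *= q if on else (1.0 - q)
        # shortest travel time over the operating edges (Dijkstra)
        dist = {s: 0.0}
        pq = [(0.0, s)]
        while pq:
            d, u = heapq.heappop(pq)
            if d > dist.get(u, float("inf")):
                continue
            for on, (a, b, w, _) in zip(state, edges):
                if not on:
                    continue
                for x, y in ((a, b), (b, a)):
                    if x == u and d + w < dist.get(y, float("inf")):
                        dist[y] = d + w
                        heapq.heappush(pq, (dist[y], y))
        if dist.get(t, float("inf")) <= t_max:
            total += p
    return total
```

For a series connection of two edges the result reduces to the product of the availabilities, and tightening the time bound below the total travel time drives the reliability to zero — the qualitative difference between pure connectivity and time-constrained reliability.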
Objects for civil engineering applications can be identified by their reference in memory, their alphanumeric name or their geometric location. Particularly in graphic user interfaces, it is common to identify objects geometrically by selection with the mouse. As the number of geometric objects in a graphic user interface grows, it becomes increasingly important to perform the basic operations add, search and remove for geometric objects with great efficiency. Guttman has proposed the R-tree for geometric identification in an environment which uses pages on disc as data structure. Minimal bounding rectangles are used to structure the data in such a way that neighborhood relations can be described effectively. The literature shows that the parameters which influence the efficiency of R-trees have been studied extensively, but without conclusive results. The goal of the research reported in this paper is to determine reliably the parameters which significantly influence the efficiency of R-trees for geometric identification in technical drawings. In order to make this investigation conclusive, it must be performed with the best available software technology; therefore an object-oriented software for the method is developed. This implementation is tested with technical drawings containing many thousands of geometric objects. These drawings are created automatically by a stochastic generator which is incorporated into a test bed consisting of an editor and a visualizer. This test bed is used to obtain statistics for the main factors which affect the efficiency of R-trees. The investigation shows that the following main factors affecting the efficiency can be identified reliably: the number of geometric objects on the drawing, the minimum and maximum number of children of a node of the tree, and the maximum width and height of the minimal bounding rectangles of the geometric objects relative to the size of the drawing.
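The minimal-bounding-rectangle operations underlying Guttman's R-tree can be sketched as follows (a simplified fragment for illustration, not the paper's implementation):

```python
class MBR:
    """Minimal bounding rectangle, the building block of the R-tree."""

    def __init__(self, x1, y1, x2, y2):
        self.x1, self.x2 = min(x1, x2), max(x1, x2)
        self.y1, self.y2 = min(y1, y2), max(y1, y2)

    def intersects(self, o):
        """Overlap test used when descending the tree during a search."""
        return not (o.x1 > self.x2 or o.x2 < self.x1 or
                    o.y1 > self.y2 or o.y2 < self.y1)

    def union(self, o):
        """Smallest rectangle covering both operands."""
        return MBR(min(self.x1, o.x1), min(self.y1, o.y1),
                   max(self.x2, o.x2), max(self.y2, o.y2))

    def area(self):
        return (self.x2 - self.x1) * (self.y2 - self.y1)

def enlargement(node_mbr, new_mbr):
    """Guttman's insertion criterion: choose the subtree whose MBR needs the
    least area enlargement to accommodate the new entry."""
    return node_mbr.union(new_mbr).area() - node_mbr.area()
```

The width and height of the object MBRs relative to the drawing — one of the main factors identified above — matter because wide rectangles overlap many nodes, so a search must descend several subtrees instead of one.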
Solid behavior as well as liquid behavior characterizes the flow of granular material in silos. The presented model is based on an appropriate interaction of a displacement field and a velocity field. The constitutive equations and the applied algorithm are developed from the exact solution for a standard case. The standard case evolves from a very tall vertical plane strain silo containing material that flows at a constant speed. No horizontal displacements and velocities take place. No changes regarding the field values arise in the vertical direction and in time. Tension is not allowed at any point. Coulomb friction represents the effects of the vertical walls. The interaction between the flowing material and the walls is covered by a forced boundary condition resulting in an additional matrix for the solid component as well as for the liquid component. The resulting integral equations are designed to be solved directly. Three coefficients describe the properties of the granular material. They govern elastic solid behavior in combination with viscous liquid behavior.
Unconstrained models are frequently found in the broad spectrum of traffic demand model theories. In these models there are no restrictions, or only one-sided ones, influencing the choice of the individual. In traffic demand, however, various decisive dependencies of the traffic volume on the specific conditions of the territorial structure potentials exist. Kirchhoff and Lohse introduced bi- and tri-linearly constrained models to capture these dependencies. In principle, the dependencies are described as hard, elastic and open boundary sum criteria. In this article a model is formulated which moves away from these predefined boundary sum criteria and allows a free determination of minimal and maximal boundary sum criteria. The iterative solution algorithm, analogous to a Furness procedure, is presented as well. With the approach of freely selectable minimal and maximal boundary sum criteria, the modelling transport planner gains the possibility of representing the traffic situation even better. Furthermore, all common boundary sum criteria can be computed with this model, so that the often necessary and sensible standard and special cases can also be modelled.
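The classical Furness procedure with hard boundary sums, to which the article's algorithm reduces in the standard case, can be sketched as follows (the min/max generalization described above would clip the balancing factors instead of applying them fully):

```python
def furness(matrix, row_targets, col_targets, n_iter=100):
    """Furness / iterative proportional fitting: alternately scale rows and
    columns of a seed trip matrix until the boundary sums match the targets
    (assumes consistent targets: sum(row_targets) == sum(col_targets))."""
    m = [row[:] for row in matrix]
    for _ in range(n_iter):
        for i, row in enumerate(m):
            s = sum(row)
            if s > 0:
                f = row_targets[i] / s
                m[i] = [v * f for v in row]
        for j in range(len(m[0])):
            s = sum(row[j] for row in m)
            if s > 0:
                f = col_targets[j] / s
                for row in m:
                    row[j] *= f
    return m
```

In the generalized model, a balancing factor that would push a boundary sum outside its [min, max] interval would be limited so the sum lands on the violated bound, which is exactly how hard criteria become special cases of the free min/max formulation.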
In this paper we study the structure of the solutions to higher-dimensional Dirac-type equations generalizing the known λ-hyperholomorphic functions, where λ is a complex parameter. The structure of the solutions to the system of partial differential equations (D − λ)f = 0 shows a close connection with Bessel functions of the first kind with complex argument. The more general system of partial differential equations considered in this paper combines Dirac and Euler operators and emphasizes the role of the Bessel functions. However, contrary to the simplest case, one now obtains Bessel functions of arbitrary complex order.
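Bessel functions of the first kind with complex argument can be evaluated directly from the power series; the sketch below covers only integer order (the arbitrary complex orders arising in the paper would require the Gamma function in place of the factorials):

```python
import math

def bessel_j(n, z, terms=40):
    """Bessel function of the first kind, integer order n >= 0, complex
    argument z, via the power series
        J_n(z) = sum_k (-1)^k / (k! * (n + k)!) * (z / 2)^(2k + n)."""
    total = 0 + 0j
    for k in range(terms):
        total += ((-1) ** k
                  / (math.factorial(k) * math.factorial(n + k))
                  * (z / 2) ** (2 * k + n))
    return total
```

On the imaginary axis the series reproduces the modified Bessel function, J_0(ix) = I_0(x), which is one way the "complex argument" case differs qualitatively from the real one (monotone growth instead of oscillation).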
Effective cooperation of all the specialist planners involved in the building planning process is a prerequisite for economical, high-quality construction. Building project organizations typically consist of numerous independent planning partners who work on specific planning tasks at distributed locations and store the results in partial product models. Since planning processes in construction are highly distributed among the participants, the partial product models of the individual disciplines depend on each other to a high degree. The goal of the approach presented here is the integration of the partial product models of building planning into a network-based model federation, using fire protection planning as an example. The contribution addresses the problems of distribution and, in particular, of the semantic heterogeneity of the partial product models involved. Distributed access is realized by means of mobile software agents; the agents can move freely within the network-based planning federation and act as representatives of the specialist planners. The problem of the semantic heterogeneity of the partial product models is solved on the basis of ontologies. To this end, first, domain ontologies are developed that map real-world objects of a self-contained domain, here fire protection. Second, application ontologies are developed that represent the individual proprietary data stores (in the sense of partial product models) of the respective disciplines. The two kinds of ontologies are linked with a rule-based approach. In the presented fire protection use case, the domain ontology serves as a uniform interface for accessing the distributed models, abstracting from their database specifics and proprietary schemata.
With mobile agents and semantic technologies, a platform can thus be provided that, first, allows the dynamic integration of resources into the planning federation and, second, on this basis realizes engineering-oriented processing methods independently of the distribution and heterogeneity of the integrated resources.
In classical complex function theory the geometric mapping property of conformality is closely linked with complex differentiability. In contrast to the planar case, in higher dimensions the set of conformal mappings consists only of the Möbius transformations. Unfortunately, the theory of generalized holomorphic functions (for historical reasons they are called monogenic functions), developed on the basis of Clifford algebras, does not cover the set of Möbius transformations in higher dimensions, since Möbius transformations are not monogenic. On the other hand, monogenic functions are hypercomplex differentiable functions, and the question arises whether from this point of view they can still play a special role for other types of 3D mappings, for instance quasi-conformal ones. On the occasion of the 16th IKM, 3D mapping methods based on the application of Bergman's reproducing kernel approach (BKM) were discussed. Almost all authors previously working with BKM in the Clifford setting were concerned only with the general algebraic and functional-analytic background, which allows the explicit determination of the kernel in special situations. The main goal of the aforementioned contribution was the numerical experiment using Maple software specially developed for that purpose. Since BKM is only one of a great variety of concrete numerical methods developed for mapping problems, our goal is to present an approach to 3D mappings completely different from BKM. In fact, it is an extension of ideas of L. V. Kantorovich to the three-dimensional case using reduced quaternions and suitable series of powers of a small parameter. Whereas in the Clifford case of BKM the recovery of the mapping function itself and its relation to the monogenic kernel function is still an open problem, this approach avoids such difficulties and leads to an approximation by monogenic polynomials depending on that small parameter.
In this paper we consider three different methods for generating monogenic functions. The first is related to Fueter's well-known approach to generating monogenic quaternion-valued functions by means of holomorphic functions, the second is based on the solution of hypercomplex differential equations, and the third is a direct series approach based on the use of special homogeneous polynomials. We illustrate the theory by generating three different exponential functions and discuss some of their properties. Partially supported by the R&D unit Matemática e Aplicações (UIMA) of the University of Aveiro, through the Portuguese Foundation for Science and Technology (FCT), co-financed by the European Community fund FEDER.
At the 16th IKM, Bock, Falcão and Gürlebeck presented examples of the application of specially developed Maple software in hypercomplex analysis. Other papers by those authors continued this work and showed the efficiency of such tools for concrete numerical calculations as well as for numerical experiments, supporting the detection of new relationships and even theorems in highly technical theoretical work. The mentioned software has been developed mainly for use on mapping problems in the Euclidean spaces of dimension 3 and 4 by means of Bergman kernel methods (BKM), which are related to monogenic functions as solutions of generalized Cauchy-Riemann equations with respect to the Euclidean metric (Riesz system). The developed procedures concerning generalized powers of totally regular variables and the corresponding homogeneous polynomials basically rely on results and conventions introduced in the paper "Power series representation for monogenic functions in Rm+1 based on a permutational product", Complex Variables, 15, No. 3, 181-191 (1990) by H. Malonek. Since 1992 H. Leutwiler, S. L. Eriksson and others have developed in a number of papers a modified Clifford analysis and, particularly, a modified quaternionic analysis. The modification mainly consists in considering generalized Cauchy-Riemann equations with respect to a hyperbolic metric in a half space. The aim of this contribution is to show how, through a change of the basic combinatorial relations used in the modified quaternionic analysis, the aforementioned Maple software (which has recently been published on CD-ROM as an integrated part of the textbook "Funktionentheorie in der Ebene und im Raum" by K. Gürlebeck, K. Habetha, and W. Sprößig, in the series "Grundstudium Mathematik" of Birkhäuser Verlag, 2006) can be used directly for numerical calculations in the modified theory.
ON THE NAVIER-STOKES EQUATION WITH FREE CONVECTION IN STRIP DOMAINS AND 3D TRIANGULAR CHANNELS
(2006)
The Navier-Stokes equations and related ones can be treated very elegantly with the quaternionic operator calculus developed in a series of works by K. Gürlebeck, W. Sprößig and others. This study is extended in the present paper. In order to apply the quaternionic operator calculus to solve these types of boundary value problems fully explicitly, one basically needs to evaluate two types of integral operators: the Teodorescu operator and the quaternionic Bergman projector. While the integral kernel of the Teodorescu transform is universal for all domains, the kernel function of the Bergman projector, called the Bergman kernel, depends on the geometry of the domain. With special variants of quaternionic holomorphic multiperiodic functions we obtain explicit formulas for three-dimensional parallel-plate channels, rectangular block domains and regular triangular channels. The explicit knowledge of the integral kernels then makes it possible to evaluate the operator equations in order to determine the solutions of the boundary value problem explicitly.
Wolken
(2006)
Processes underlying crowding in visual letter recognition were examined by investigating effects of training. Experiment 1 revealed that training reduces crowding mainly for trained strings. This was corroborated in Experiment 2, where no training effects were obvious after 3 days of training when strings changed from trial to trial. Experiment 3 specified that after a short amount of training, learning effects remained specific to trained strings and also to the trained retinal eccentricity and the interletter spacing used in training. Transfer to other than trained conditions was observed only after further training. Experiment 4 showed that transfer occurred earlier when words were used as stimuli. These results thus demonstrate that part of crowding results from the absence of higher level representations of the stimulus. Such representations can be acquired through learning visual properties of the stimulus.
This text proposes a genealogy of biopolitics based on Michel Foucault’s thought, and on an understanding of it as a philosophico-political notion. In order to elaborate this genealogy, the text takes as its starting point not only politics but also life, as the second component of the term. The hypothesis is the following: To understand what biopolitics means, we have to take seriously Foucault’s assertion of an indetermination of life, as the correlate of power and knowledge. This notion emerges in the epistemic break that takes place around 1800 and that entails the opening up of the notion of biopolitics under the name of governmentality, implying that life is not only the object of biopolitics but also serves as its model.
Identity management provides PET (privacy enhancing technology) tools for users to control the privacy of their personal data. With the support of mobile location determination techniques based on GPS, WLAN, Bluetooth, etc., context-aware and location-aware mobile applications (e.g. restaurant finder, friend finder, indoor and outdoor navigation) have gained considerable interest in the business and IT world. Considering sensitive static personal information (e.g. name, address, phone number) and also dynamic personal information (e.g. current location, velocity in a car, current status), mobile identity management is required to help mobile users safeguard their personal data. In this paper, we evaluate certain required aspects and features (e.g. context-to-context dependence and relation, blurring in levels, trust management with P3P integration, extended privacy preferences) of mobile identity management.
This paper deals with the modelling and analysis of masonry vaults. Numerical FEM analyses are performed using the LUSAS code. Two vault typologies are analysed (barrel and cross-ribbed vaults), parametrically varying geometrical proportions and constraints. The proposed model and the developed numerical procedure are implemented in a computer analysis. Numerical applications are developed to assess the effectiveness of the model and the efficiency of the numerical procedure. The main objective of the present paper is the development of a computational procedure that allows the 3D structural behaviour of masonry vaults to be determined. For each investigated example, the homogenized limit analysis approach has been employed to predict the ultimate load and the failure mechanisms. Finally, both a mesh dependence study and a sensitivity analysis are reported. The sensitivity analysis is conducted by varying mortar tensile strength and mortar friction angle over a wide range, with the aim of investigating the influence of the mechanical properties of the joints on the collapse load and the failure mechanisms. The proposed computer model is validated by comparison with experimental results available in the literature.
Using a quaternionic reformulation of the electrical impedance equation, we consider a two-dimensional separable-variables conductivity function and, applying two different techniques, we obtain a special class of Vekua equation whose general solution can be approximated by virtue of Taylor series in formal powers, and for which it is possible to introduce an explicit Bers generating sequence.
In this note, we describe quite explicitly the Howe duality for Hodge systems and connect it with well-known facts of harmonic analysis and Clifford analysis. In Section 2, we recall briefly the Fisher decomposition and the Howe duality for harmonic analysis. In Section 3, the well-known fact that Clifford analysis is a real refinement of harmonic analysis is illustrated by the Fisher decomposition and the Howe duality for the space of spinor-valued polynomials in Euclidean space under the so-called L-action. On the other hand, for Clifford algebra valued polynomials, we can consider another action, called in Clifford analysis the H-action. In the last section, we recall the Fisher decomposition for the H-action obtained recently. Whereas in Clifford analysis the prominent role is played by the Dirac equation, in this case the basic set of equations is formed by the Hodge system. Moreover, the analysis of Hodge systems can be viewed even as a refinement of Clifford analysis. In this note, we describe the Howe duality for the H-action. In particular, in Proposition 1, we recognize the Howe dual partner of the orthogonal group O(m) in this case as the Lie superalgebra sl(2|1). Furthermore, Theorem 2 gives the corresponding multiplicity free decomposition with an explicit description of the irreducible pieces.
MULTI-SITE CONSTRUCTION PROJECT SCHEDULING CONSIDERING RESOURCE MOVING TIME IN DEVELOPING COUNTRIES
(2010)
Under the booming construction demand in developing countries, and particularly in Vietnam, construction contractors often perform multiple concurrent projects in different places. Existing scheduling methods usually assume the resource moving time between activities/projects to be negligible. When multiple projects are deployed in different places far from each other, this assumption fails to model the real-world constraints properly, especially in developing countries such as Vietnam, whose transportation systems are still backward and of low technical standard. This paper proposes a new algorithm named Multi-Site Construction Project Scheduling (MCOPS). Its objective is to minimise the multi-site construction project duration under the limited availability of renewable resources (labour, machines and equipment), while accounting for the moving time of required resources among activities/projects. Additionally, in order to mitigate the impact of resource moving time on the multi-site project duration, the paper proposes a new priority rule: Minimum Resource Moving Time (MinRMT). MinRMT ranks the finished activities in a priority order for supplying their released resources to the activities awaiting scheduling. In order to investigate the impact of resource moving time among activities during the scheduling process, computational experiments were carried out. The MCOPS-based results show that resource moving time among projects significantly affects multi-site project durations and cannot be ignored in the multi-site project scheduling process. The efficiency of MinRMT is also demonstrated by the computational results reported in this paper.
Though the efforts in this paper are based on the Vietnamese construction conditions, the proposed method can be usefully applied in other developing countries which have similar construction conditions.
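The idea behind such a moving-time priority rule can be sketched in a few lines. The following Python fragment is an illustrative toy, not the paper's MCOPS implementation; the function and data names are hypothetical. It ranks finished activities by the cheapest move their released resources could make to some waiting activity:

```python
# Hedged sketch of a "minimum resource moving time" priority rule.
# All names and the data layout are illustrative assumptions.

def min_rmt_order(finished, waiting, move_time):
    """Rank finished activities so that released resources with the
    shortest travel time to a waiting activity are reassigned first.

    finished  -- activity ids whose resources have been released
    waiting   -- activity ids ready to be scheduled
    move_time -- dict[(from_act, to_act)] -> resource moving time
    """
    def best_move(act):
        # cheapest possible move of this activity's released resources
        return min(move_time[(act, w)] for w in waiting)
    return sorted(finished, key=best_move)

move = {("A", "C"): 4, ("A", "D"): 6, ("B", "C"): 1, ("B", "D"): 9}
print(min_rmt_order(["A", "B"], ["C", "D"], move))  # B's cheapest move (1) beats A's (4)
```

In a full serial schedule-generation scheme this ranking would decide which released resource pool is offered to the eligible activities first; the real algorithm additionally tracks resource availability and site locations.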
The article presents an analysis of the stress distribution in a reinforced concrete support beam bracket which is a component of a prefabricated reinforced concrete building. The building structure is a spatial frame in which dilatations were applied. The proper stiffness of the structure is provided by frames with stiff joints, monolithic lift shafts and staircases. The prefabricated slab floors are supported on beam shelves shaped as an inverted letter 'T'; the beams are supported by the column brackets. In order to lower the storey height and at the same time fulfil the architectural demands, the designer reduced the height of the beam in the support zone. The analysed case refers to the bracket zone, where a slant crack on the support beam bracket was observed. It could have appeared as a result of the allowable tensile stresses in the reinforced concrete being exceeded in the bracket zone. It should be noted that the construction solution applied, i.e. supporting the "undercut" beam directly on the column bracket, causes a local concentration of stresses in the undercut zone, where the largest transverse forces and tangent stresses occur concurrently. Additional normal stresses resulting from placing the slab floors on the lower part of the beam shelves are superimposed on those described above.
Euclidean Clifford analysis is a higher dimensional function theory offering a refinement of classical harmonic analysis. The theory is centered around the concept of monogenic functions, i.e. null solutions of a first order vector valued rotation invariant differential operator called the Dirac operator, which factorizes the Laplacian. More recently, Hermitean Clifford analysis has emerged as a new and successful branch of Clifford analysis, offering a further refinement of the Euclidean case; it focusses on the simultaneous null solutions, called Hermitean (or h-) monogenic functions, of two Hermitean Dirac operators which are invariant under the action of the unitary group. In Euclidean Clifford analysis, the Clifford-Cauchy integral formula has proven to be a cornerstone of the function theory, as is the case for the traditional Cauchy formula for holomorphic functions in the complex plane. Previously, a Hermitean Clifford-Cauchy integral formula was established by means of a matrix approach. This formula reduces to the traditional Martinelli-Bochner formula for holomorphic functions of several complex variables when taking functions with values in an appropriate part of complex spinor space. This means that the theory of Hermitean monogenic functions should also encompass other results of several variable complex analysis as special cases. In the present paper we elaborate further on the obtained results and refine them, considering fundamental solutions, Borel-Pompeiu representations and the Teodorescu inversion, each of them being developed at different levels: the global level, handling vector variables, vector differential operators and the Clifford geometric product, as well as the blade level, where variables and differential operators act by means of the dot and wedge products. A rich world of results reveals itself, indeed including well-known formulae from the theory of several complex variables.
Several results concerning the distribution of the headway of buses in the flow behind a traffic signal are presented. The main focus of interest is the description of analytical models, which are verified by the results of Monte Carlo methods. The advantage of analytical models (verified, but not derived, by simulation methods) is their flexibility with respect to possible generalizations. For instance, several random distributions of the flow arriving at the traffic signal can be compared. Attention is directed at the question of how the primary headway H (analyzed in front of the traffic signal) is mapped to the headway H' analyzed behind the traffic signal, and how the random distribution of H is mapped to that of H'. For the traffic flow in front of the traffic signal, several models are discussed. The first case considers the situation in which buses operate on a common lane with the individual motor car traffic and the traffic flow is saturated. In the second situation, buses operate on a separate bus lane. Moreover, a mixed situation is discussed in order to model reality as closely as possible.
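The headway mapping H to H' can be probed with a very small Monte Carlo experiment. The sketch below is a deliberately crude fixed-cycle signal model, not one of the paper's models; all parameter values (cycle, green time, saturation headway, arrival rate) are illustrative assumptions:

```python
import random

def depart_times(arrivals, cycle=60.0, green=30.0, sat_headway=2.0):
    """Map arrival times at a fixed-cycle signal to departure times.
    Red occupies [0, cycle - green) within each cycle; queued vehicles
    discharge at the saturation headway (a simplified toy model)."""
    deps = []
    t_free = 0.0  # earliest time the stop line is free again
    for a in sorted(arrivals):
        t = max(a, t_free)          # wait behind the queue
        phase = t % cycle
        if phase < cycle - green:   # currently red: wait for green onset
            t += (cycle - green) - phase
        deps.append(t)
        t_free = t + sat_headway
    return deps

random.seed(1)
arr, t = [], 0.0
for _ in range(2000):
    t += random.expovariate(0.1)    # primary headways H: mean 10 s
    arr.append(t)
dep = depart_times(arr)
h_out = [b - a for a, b in zip(dep, dep[1:])]  # secondary headways H'
print(min(h_out))  # never below the saturation headway
```

Even this toy reproduces the qualitative effect discussed above: the distribution of H' is truncated from below at the saturation headway and acquires a platoon structure, while the long-run mean headway is conserved in the undersaturated case.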
CRITICAL STRESS ASSESSMENT IN ANGLE TO GUSSET PLATE BOLTED CONNECTION BY SIMPLIFIED FEM MODELLING
(2010)
Simplified modelling of friction grip bolted connections between steel members and gusset plates is often applied in engineering practice. The paper deals with the simplification of the pre-tensioned bolt model and the simplification of the load transfer within the connection, and investigates the influence on the normal strain (and thus stress) distribution at the critical cross-section. Laboratory tests of single-angle and double-angle member-to-gusset-plate bolted connections were taken as the basis for the numerical analysis. FE models were created using 1D and 2D elements; angles and gusset plates were modelled with shell elements. Two methods of modelling the friction grip bolting were considered: a bolt-regarding approach with 1D element systems modelling the bolts, and two variants of a bolt-disregarding approach with special constraints over part of the member and gusset plate surfaces in contact: a) constraints over the whole contact area, b) constraints over the area around each bolt shank ("partially tied"). Modelling friction grip bolted connections with a simplified bolt model may be effective, especially when the analysis concerns the elastic range only. In such a case, disregarding the bolts and replacing them with "partially tied" modelling seems more attractive: it is less time-consuming and provides results of similar accuracy compared to an analysis utilizing simplified bolt modelling.
The application of a recent method based on formal power series is proposed. It relies on a new representation of the solutions of Sturm-Liouville equations and is used to calculate the transmittance and reflectance coefficients of finite inhomogeneous layers with high accuracy and efficiency. By tailoring the refractive index profile defining the inhomogeneous medium, it is possible to develop very important applications such as optical filters. A number of profiles were evaluated, and some of them were then selected in order to improve their characteristics via modification of their profiles.
MICROPLANE MODEL WITH INITIAL AND DAMAGE-INDUCED ANISOTROPY APPLIED TO TEXTILE-REINFORCED CONCRETE
(2010)
The presented material model reproduces the anisotropic characteristics of textile-reinforced concrete in a smeared manner. This includes both the initial anisotropy introduced by the textile reinforcement and the anisotropic damage evolution reflecting the fine patterns of crack bridges. The model is based on the microplane approach. The direction-dependent decomposition of the material structure into oriented microplanes provides a flexible way to introduce the initial anisotropy. The microplanes oriented in a yarn direction are associated with modified damage laws that reflect the tension-stiffening effect due to the multiple cracking of the matrix along the yarn.
Planning and construction processes are characterized by the peculiarity that they need to be designed individually for each project: an individual schedule must be set up for each project. As a basis for a new project, schedules from already finished projects are used, but adaptations are always necessary. In practice, scheduling tools only document a process. Schedules cover a set of activities, their durations and a set of interdependencies between activities; the design of the process is up to the user. It is not necessary to specify each interdependency, and completeness and correctness need to be checked manually; no methodologies are available to guarantee properties such as correctness or completeness. The considerations presented in the paper are based on an approach in which a planning and a construction process, including the interdependencies between planning and construction activities, are regarded as a result: selected information is specified by the user, and a proposal for an order of planning and construction activities is computed. As a consequence, process properties such as correctness and completeness can be guaranteed with respect to the user input. Especially in Germany, clients are allowed to modify their requirements at any time, which leads to modifications of the planning and construction processes. This paper presents a mathematical formulation of this problem based on set theory. A complex structure covering objects and relations is set up, and operations are defined that guarantee consistency in the underlying, versioned process description. The presented considerations build on previous work and can be regarded as the next step in a series describing how a suitable concept for handling planning and construction processes in civil engineering can be formed.
The uncertainty in the construction industry is greater than in other industries. Consequently, most construction projects do not go entirely as planned, and the project management plan therefore needs to be adapted repeatedly within the project lifecycle to suit the actual project conditions. Generally, the risks of change in the project management plan are difficult to identify in advance, especially if they are caused by unexpected events such as human errors or changes in the client's preferences. The knowledge acquired from different resources is essential to identify the probable deviations as well as to find proper solutions to the change risks faced. Hence, it is necessary to have a knowledge base that contains known solutions for the common exceptional cases that may cause changes in each construction domain. The ongoing research work presented in this paper uses the process modelling technique of Event-driven Process Chains to describe different patterns of structural change in schedule networks. This results in several so-called "change templates". Under each template, different types of change risk/response pairs can be categorized and stored in a knowledge base, described as an ontology model populated with reference construction process data. The implementation of the developed approach can be seen as an iterative scheduling cycle that is repeated within the project lifecycle as new change risks surface. This helps to check the availability of ready solutions in the knowledge base for the situation at hand. Moreover, if a solution is adopted, CPSP (Change Project Schedule Plan), a prototype developed for the purpose of this research work, is used to make the needed structural changes to the schedule network automatically, based on the change template.
What-If scenarios can be implemented using the CPSP prototype in the planning phase to study the effect of specific situations without endangering the success of the project objectives. Hence, better designed and more maintainable project schedules can be achieved.
We present recent developments of adaptive wavelet solvers for elliptic eigenvalue problems. We describe the underlying abstract iteration scheme of the preconditioned perturbed iteration and apply it to a simple model problem in order to identify the main ideas upon which a numerical realization of the abstract scheme is based. This indicates how these concepts carry over to wavelet discretizations. Finally, we present numerical results for the Poisson eigenvalue problem on an L-shaped domain.
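In its simplest setting, such an iteration reduces to classical inverse iteration for the smallest eigenpair. The sketch below is a toy stand-in for the abstract scheme, not the adaptive wavelet method itself: an exact linear solve plays the role of an ideal preconditioner, and the model problem is the 1D Dirichlet Laplacian, whose smallest discrete eigenvalue is known in closed form.

```python
import numpy as np

def inverse_iteration(A, x, iters=50):
    """Plain inverse iteration for the smallest eigenpair of A x = lam x.
    The exact solve stands in for an (idealized) preconditioner."""
    for _ in range(iters):
        x = np.linalg.solve(A, x)   # apply A^{-1}
        x /= np.linalg.norm(x)      # normalize the iterate
    return x @ (A @ x)              # Rayleigh quotient of the iterate

# 1D Dirichlet Laplacian on n interior points of (0, 1)
n = 50
h = 1.0 / (n + 1)
A = (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2
lam = inverse_iteration(A, np.ones(n))
exact = (2.0 / h**2) * (1.0 - np.cos(np.pi * h))  # smallest discrete eigenvalue
print(abs(lam - exact) < 1e-8)
```

The wavelet version replaces the exact solve by an approximate, adaptively refined application of a preconditioner, which is where the "perturbed" in the scheme's name comes from.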
The changed global security situation of the last eight years has shown the importance of emergency management plans for public buildings. Therefore, the use of computer simulators for examining fire safety design and evacuation processes is increasing. The aim of these simulators is to provide more realistic evacuation simulations. The challenge is, firstly, to build the virtual simulation environment from geometrical and material boundary conditions, secondly, to consider the mutual interaction effects between different parameters and, finally, to visualize the simulated results realistically. To carry out this task, a new software method on a BIM platform has to be developed which can integrate all required simulations and provide an immersive output: BIM-ISEE (Immersive Safety Engineering Environment). BIM-ISEE will integrate the Fire Dynamics Simulator (FDS) for fire and evacuation simulation into Autodesk Revit, a BIM platform, and will represent the simulation results in the immersive virtual environment at the institute (CES-Lab). With BIM-ISEE the fire safety engineer will be able to obtain more realistic visualizations in the immersive environment, to modify his concept more effectively, to evaluate the simulation results more accurately and to visualize the various simulation results. It can also give rescue staff the opportunity to perform and evaluate emergency evacuation training.
Information technology plays a key role in the everyday operation of buildings and campuses. Many proprietary technologies and methodologies can assist in effective Building Performance Monitoring (BPM) and efficient management of building resources. The integration of related tools, such as energy simulation packages, facility, energy and building management systems, and enterprise resource planning systems, is of benefit to BPM. However, the complexity of integrating such domain-specific systems prevents their common usage. Service Oriented Architecture (SOA) has been deployed successfully in many large multinational companies to create integrated and flexible software systems, but so far this methodology has not been applied broadly to the field of BPM. This paper envisions that SOA provides an effective integration framework for BPM. A service-oriented architecture for the ITOBO framework for sustainable and optimised building operation is proposed, and an implementation of a building performance monitoring system is introduced.
Due to the increasing number of wind energy converters, the accurate assessment of the lifespan of their structural parts and of the entire converter system is becoming more and more important. Lifespan-oriented design, inspection and remedial maintenance are challenging because of the converters' complex dynamic behavior. Wind energy converters are subjected to stochastic turbulent wind loading, causing a corresponding stochastic structural response and vibrations associated with an extreme number of stress cycles (up to 10^9, according to the rotation of the blades). Currently, wind energy converters are designed for a service life of about 20 years. However, this estimate is made more or less by rule of thumb and is not backed by profound scientific analyses or accurate simulations. By contrast, modern structural health monitoring systems allow an improved identification of deterioration and can thereby drastically advance the lifespan assessment of wind energy converters. In particular, monitoring systems based on artificial intelligence techniques represent a promising approach towards cost-efficient and reliable real-time monitoring. Therefore, an innovative real-time structural health monitoring concept based on software agents is introduced in this contribution. Recently, this concept has also been turned into a real-world monitoring system, developed in a DFG joint research project at the authors' institute at the Ruhr-University Bochum. This paper primarily addresses the agent-based development, implementation and application of the monitoring system, focusing on the real-time monitoring tasks in due detail.
In order to make control decisions, smart buildings need to collect data from multiple sources and bring it to a central location, such as the Building Management System (BMS), in a timely and automated fashion. Besides data gathered from different energy-using elements, information on occupant behaviour is also important for a building's requirement analysis. In this paper, the parameter of occupant density was used to characterize the behaviour of occupants towards a building space. Through this parameter, support for building energy consumption and requirements based on occupant needs and demands was provided. The demonstrator presented provides information on the number of people present in a particular building space at any time, giving the space density. Collections of density data made over a certain period of time represent occupant behaviour towards the building space, giving its usage patterns. Similarly, inventory items were tracked and monitored as they moved out of or into a particular read zone. For both people and inventory items, this was achieved using small, low-cost, passive Ultra-High Frequency (UHF) Radio Frequency Identification (RFID) tags. Occupants were given the tags in a credit-card form factor to be carried at all times. A central database was built in which occupant and inventory information for a particular building space was maintained for monitoring and central data access.
We investigate aspects of the reliability of a tram-network section, which operates as part of a reliability model of the whole city tram network. One of the main points of interest is the chronological development of disturbances (namely the differences between the scheduled and the real time of departure) on subsequent sections during tram line operation. These developments were observed in comprehensive measurements carried out in Krakow during the rebuilding of one of the main transportation nodes (Rondo Mogilskie). The building activities caused big disturbances in tram line operation, with effects extending to neighboring sections. In a second part, the stochastic character of the section running time is analyzed in more detail. Sections with only one beginning stop are considered, as well as sections with two or three beginning stops located at different streets of an intersection. The possibility of merging results from sections with two beginning stops into one set is checked with suitable statistical tests comparing the means of the two samples. The section running time may depend on the gap between two successive trams and on the deviation from the schedule; this dependence is described by a multiple regression formula. The main measurements were carried out in the city center of Krakow in two stages: before and after big changes in the tramway infrastructure.
SIMULATION AND MATHEMATICAL OPTIMIZATION OF THE HYDRATION OF CONCRETE FOR AVOIDING THERMAL CRACKS
(2010)
After the mixing of concrete, hardening starts by an exothermic chemical reaction known as hydration. As the reaction rate depends on the temperature, the time in the description of the hydration is replaced by the maturity, which is defined as an integral over a certain function of the temperature. The temperature distribution is governed by the heat equation with a right-hand side depending on the maturity and the temperature itself. We compare the performance of different higher-order time integration schemes with automatic time step control. The simulation of the heat distribution is important because the development of the mechanical properties is driven by the hydration. During this process it is possible that the tensile stresses exceed the tensile strength and cracks occur. The goal is to produce cheap concrete without cracks. Simple crack criteria use only temperature differences; more involved ones are based on thermal stresses. If the criterion predicts cracks, changes to the input data are needed, which can be interpreted as optimization. The final goal is to apply model-based optimization (in contrast to simulation-based optimization) to the hydration of young concrete and the avoidance of cracks. The first step is the simulation of the hydration, on which we focus in this paper.
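A maturity integral of the kind mentioned above can be evaluated numerically, e.g. with the trapezoidal rule. The sketch below assumes the common Arrhenius-type rate function and an illustrative activation energy; it is a generic equivalent-age computation, not the paper's specific hydration model:

```python
import math

def equivalent_age(times, temps, E=33500.0, R=8.314, T_ref=293.15):
    """Equivalent age (maturity) by the Arrhenius rule,
    M(t) = integral of exp(-E/R * (1/T - 1/T_ref)) dt,
    evaluated with the trapezoidal rule.
    times in hours, temps in kelvin; E (J/mol) is an assumed value."""
    M = 0.0
    for (t0, T0), (t1, T1) in zip(zip(times, temps), zip(times[1:], temps[1:])):
        f0 = math.exp(-E / R * (1.0 / T0 - 1.0 / T_ref))
        f1 = math.exp(-E / R * (1.0 / T1 - 1.0 / T_ref))
        M += 0.5 * (f0 + f1) * (t1 - t0)  # trapezoid over [t0, t1]
    return M

# curing at the 20 degC reference temperature: maturity equals real time
print(equivalent_age([0.0, 12.0, 24.0], [293.15, 293.15, 293.15]))  # 24.0
```

Curing above the reference temperature yields an equivalent age larger than the elapsed time, which is exactly the acceleration effect the maturity concept captures.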
THE FOURIER-BESSEL TRANSFORM
(2010)
In this paper we devise a new multi-dimensional integral transform within the Clifford analysis setting, the so-called Fourier-Bessel transform. It appears that in the two-dimensional case, it coincides with the Clifford-Fourier and cylindrical Fourier transforms introduced earlier. We show that this new integral transform satisfies operational formulae which are similar to those of the classical tensorial Fourier transform. Moreover the L2-basis elements consisting of generalized Clifford-Hermite functions appear to be eigenfunctions of the Fourier-Bessel transform.
We present a way of calculating the displacement of bent reinforced concrete bar elements in which a redistribution of internal forces and a plastic hinge have occurred. The described solution is based on Prof. Borcz's mathematical model. It directly takes into consideration the effects connected with the occurrence of a plastic hinge, such as cracking, by means of a differential equation of the axis of the bent reinforced concrete beam. EN Eurocode 2 makes it possible to consider the influence of a plastic hinge on the behaviour of reinforced concrete structures, and this influence can also be assessed using other analytical methods. However, the results obtained by applying Eurocode 2 are higher than those obtained in testing. A comparably large error occurs when the calculations are made by means of Borcz's method, but in the latter case the results depend on the assumptions made beforehand. The method makes it possible to incorporate experimental results through the parameters r1 and r0. When the experimental results are taken into account, agreement between the calculated and the actual deflections of the structure can be observed.
The main aim of the research project in progress is to develop virtual models as tools to support decision-making in the planning of construction maintenance. The virtual models are able to transmit, visually and interactively, information related to the physical behaviour of materials and components of given infrastructures, defined as a function of time. The interactive application allows decisions to be made on design options when defining plans for maintenance, conservation or rehabilitation. The first virtual prototype, now in progress, concerns lamps only. It allows the examination of the physical model, visualizing, for each element modelled in 3D and linked to a database, the corresponding technical information concerning the wear and tear of the material, calculated for that period of time. In addition, solutions for repair work or substitution, and their inherent costs, are analysed, the results being obtained interactively and visualized in the virtual environment itself. The aim is that the virtual model should be applicable directly to the 3D models of new constructions in rehabilitation situations. The practical usage of these models is directed towards supporting decision-making in the design phase and in the planning of maintenance. In further work, other components will be analysed and incorporated into the virtual system.
In order to model and simulate the collapse of large-scale complex structures, a user-friendly, high-performance software system is essential. Because a large number of simulation experiments have to be performed, efficient interactive control and visualization of model parameters and simulation results are crucial, in addition to an appropriate simulation model and high-performance computing. To this end, this contribution is concerned with advancements of the software system CADCE (Computer Aided Demolition using Controlled Explosives), which is extended with particular consideration of computational steering concepts. Focus is placed on problems and solutions for the collapse simulation of real-world large-scale complex structures. The simulation model applied is based on a multilevel approach embedding finite element models on a local as well as a near-field length scale, and multibody models on a global scale. Within the global-level simulation, relevant effects of the local and near-field scales, such as fracture and failure processes of the reinforced concrete parts, are approximated by means of tailor-made multibody subsystems. These subsystems employ force elements representing nonlinear material characteristics in terms of force/displacement relationships that are determined in advance by finite element analysis. In particular, enhancements of the efficiency of the multibody model and improvements of the user interaction are presented that are crucial for the capability of computational steering. Several collapse simulations of real-world large-scale structures demonstrate the implementation of the above-mentioned approaches within the computational steering framework.
FREE VIBRATION FREQUENCIES OF THE CRACKED REINFORCED CONCRETE BEAMS - METHODS OF CALCULATIONS
(2010)
The paper presents a method for calculating the natural frequencies of cracked reinforced concrete beams using a discrete crack model. The described method is based on the stiff finite element method, modified so as to take into account local discontinuities (i.e. cracks). In addition, theoretical studies as well as experimental tests of concrete mechanics based on the discrete crack model were taken into consideration. The calculations were performed using the author's own numerical algorithm. Moreover, other methods for the dynamic analysis of reinforced concrete beams presented in standards and guidelines are discussed. Calculations performed using the different methods are compared with the results obtained in experimental tests.
Fuzzy functions are suitable for dealing with uncertainty and fuzziness in a closed form while maintaining the informational content. This paper tries to understand, elaborate, and explain the problem of interpolating crisp and fuzzy data using continuous fuzzy-valued functions. Two main issues are addressed. The first is how the fuzziness induced by the reduction and deficit of information, i.e. the discontinuity of the interpolated points, can be evaluated, considering the interpolation method used and the density of the data. The second is the need to differentiate between impreciseness, and hence fuzziness, in the interpolated quantity only, in the location of the interpolated points only, and in both the quantity and the location. A brief background on the concepts of fuzzy numbers and fuzzy functions is presented, and the numerical side of computing with fuzzy numbers is concisely demonstrated. The problems of fuzzy polynomial interpolation, interpolation on meshes and mesh-free fuzzy interpolation are investigated. The integration of the previously noted uncertainty into a coherent fuzzy-valued function is discussed. Several sets of artificial and original measured data are used to examine the mentioned fuzzy interpolations.
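Carrying fuzzy values through an interpolation is commonly done cut-wise, via interval arithmetic on the alpha-cuts. The following toy sketch (triangular fuzzy numbers, linear interpolation between two data points, hypothetical function names) illustrates the idea; it is not the paper's interpolation machinery:

```python
def alpha_cut_tri(a, b, c, alpha):
    """Alpha-cut [lo, hi] of a triangular fuzzy number (a, b, c)."""
    return (a + alpha * (b - a), c - alpha * (c - b))

def fuzzy_lerp(p, q, t, alpha):
    """Linear interpolation between two triangular fuzzy values,
    carried out cut-wise; exact for convex combinations (t in [0, 1])."""
    (plo, phi), (qlo, qhi) = alpha_cut_tri(*p, alpha), alpha_cut_tri(*q, alpha)
    return ((1 - t) * plo + t * qlo, (1 - t) * phi + t * qhi)

# halfway between "about 2" and "about 6", at membership level 0.5
print(fuzzy_lerp((1.0, 2.0, 3.0), (5.0, 6.0, 7.0), 0.5, 0.5))  # (3.5, 4.5)
```

Because the interpolation weights are nonnegative, the cut-wise computation introduces no overestimation here; for interpolation schemes with negative weights (e.g. higher-order polynomials) the dependency problem of interval arithmetic becomes the central difficulty.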
An introduction is given to Clifford analysis over pseudo-Euclidean space of arbitrary signature, called for short Ultrahyperbolic Clifford Analysis (UCA). UCA is regarded as a function theory of Clifford-valued functions satisfying a first order partial differential equation involving a vector-valued differential operator, called a Dirac operator. The formulation of UCA presented here pays special attention to its geometrical setting. This permits the identification of tensors which qualify as geometrically invariant Dirac operators and allows a position to be taken on the naturalness of contravariant and covariant versions of such a theory. In addition, a formal method is described to construct the general solution of the aforementioned equation in the context of covariant UCA.
As numerical techniques for solving PDEs or integral equations become more sophisticated, the treatment of the generation of the geometric inputs should follow that numerical advancement. This document describes the preparation of CAD data so that they can later be applied to hierarchical BEM or FEM solvers. For the BEM case, the geometric data are described by surfaces which we want to decompose into several curved four-sided patches. We show the treatment of untrimmed and trimmed surfaces. In particular, we show how to prevent smooth corners, which are detrimental to the diffeomorphism property. Additionally, we consider the problem of characterizing whether a Coons map is a diffeomorphism from the unit square onto a planar domain delineated by four given curves. We aim primarily at having not only theoretically correct conditions but also practically efficient methods. As for the FEM geometric preparation, we need to decompose a 3D solid into a set of curved tetrahedra. First, we describe a method of decomposition without adding too many Steiner points (additional points not belonging to the initial boundary nodes of the boundary surface). Then, we provide a methodology for efficiently checking whether a tetrahedral transfinite interpolation is regular. That is done by a combination of a degree reduction technique and subdivision. Along with the method description, we also report some interesting practical results from real CAD data.
We consider a structural truss problem where all of the physical model parameters are uncertain: not just the material values and applied loads, but also the positions of the nodes are assumed to be inexact yet bounded and are represented by intervals. Such uncertainty may typically arise from imprecision during the process of manufacturing or construction, or from round-off errors. In this case the application of the finite element method results in a system of linear equations with numerous interval parameters which cannot be solved conventionally. Applying a suitable variable substitution, an iteration method for the solution of a parametric system of linear equations is first employed to obtain initial bounds on the node displacements. Thereafter, an interval tightening (pruning) technique is applied, first to the element forces and then to the node displacements, in order to obtain tight guaranteed enclosures of the interval solutions for the forces and displacements.
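The idea of an initial interval enclosure followed by a pruning step can be sketched on a hypothetical one-bar truss with u = F/k: the naive enclosure of the recomputed bar force N = k*u overestimates due to the interval dependency effect, and intersecting it with the known load interval tightens the result. All numbers below are illustrative assumptions, not data from the paper.

```python
# One-bar truss with interval load F and interval stiffness k (made-up values).
# Step 1: naive interval displacement u = F/k.
# Step 2: pruning - recompute N = k*u and intersect with the known load F.

def div_iv(x, y):
    assert y[0] > 0, "divisor interval must not contain zero"
    q = [x[0] / y[0], x[0] / y[1], x[1] / y[0], x[1] / y[1]]
    return (min(q), max(q))

def mul_iv(x, y):
    p = [x[i] * y[j] for i in (0, 1) for j in (0, 1)]
    return (min(p), max(p))

def intersect(x, y):
    return (max(x[0], y[0]), min(x[1], y[1]))

F = (9.0, 11.0)     # applied load [kN]
k = (90.0, 110.0)   # axial stiffness [kN/m]

u = div_iv(F, k)            # guaranteed displacement enclosure
N = mul_iv(k, u)            # recomputed force: wider than F (dependency effect)
N_tight = intersect(N, F)   # pruning: the true force must also lie in F
print(u, N, N_tight)
```

The overestimation in N arises because k appears twice in k*(F/k) but interval arithmetic treats both occurrences as independent; the pruning step removes exactly this spurious width.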
The paper is devoted to a study of properties of homogeneous solutions of the massless field equation in higher dimensions. We first treat the case of dimension 4. Here we use the two-component spinor language (developed for purposes of general relativity). We describe how massless field operators are related to higher spin analogues of the de Rham sequence - the so-called Bernstein-Gel'fand-Gel'fand (BGG) complexes - and how they are related to twisted Dirac operators. Then we study similar questions in higher (even) dimensions. Here we have to use more tools from the representation theory of the orthogonal group. We recall the definition of the massless field equations in higher dimensions and their relation to higher dimensional conformal BGG complexes. Then we discuss properties of homogeneous solutions of the massless field equation. Using some recent techniques for the decomposition of tensor products of irreducible $Spin(m)$-modules, we are able to add some new results on the structure of the spaces of homogeneous solutions of the massless field equations. In particular, we show that the kernel of the massless field equation in a given homogeneity contains at least one specific irreducible submodule.
A stress-based remodeling approach is used to investigate the influence of the collagen architecture in human eye tissues on the biomechanical response of the lamina cribrosa, with a particular focus on the stress environment of the nerve fibers. This approach is based on a multi-level biomechanical framework, where the biomechanical properties of eye tissues are derived from a single crimped fibril at the micro-scale via the collagen network of distributed fibrils at the meso-scale to the incompressible and anisotropic soft tissue at the macro-scale. Biomechanically induced remodeling of the collagen network is captured on the meso-scale by allowing for a continuous reorientation of collagen fibrils. To investigate the multi-scale phenomena related to glaucomatous neuropathy, a generalized computational homogenization scheme is applied to a coupled two-scale analysis of the human eye considering numerical macro- and meso-scale models of the lamina cribrosa.
Reducing energy consumption is one of the major challenges of the present day and will remain so for future generations. The emerging EU directives relating to energy (the EU EPBD and the EU Directive on Emissions Trading) now place demands on building owners to rate the energy performance of their buildings for efficient energy management. Moreover, European legislation (Directive 2006/32/EC) requires facility managers to reduce building energy consumption and operational costs. Sophisticated building services systems integrating off-the-shelf building management components are currently available; however, this ad-hoc combination presents many difficulties to building owners in the management and upgrading of these systems. This paper addresses the need for integration concepts, holistic monitoring and analysis methodologies, life-cycle oriented decision support, and sophisticated control strategies through the seamless integration of people, ICT devices, and computational resources by introducing the newly developed integrated system architecture. The concept was first applied to a residential building, and the results were analysed to improve the current building conditions.
From the passenger’s perspective, punctuality is one of the most important features of tram route operation. We present a stochastic simulation model with a special focus on determining the important factors of influence. The statistical analysis is based on large samples (sample size nearly 2000) accumulated from comprehensive measurements on eight tram routes in Cracow. For the simulation, we are interested not only in average values but also in stochastic characteristics such as the variance and other properties of the distribution. A realization of tram operations is assumed to be a sequence of running times between successive stops and of times spent by the tram at the stops, divided into passenger alighting and boarding times and times spent waiting for the possibility of departure. The running time depends on the kind of track separation, including priorities at traffic lights, the length of the section, and the number of intersections. For every type of section, a linear mixed regression model describes the average running time and its variance as functions of the length of the section and the number of intersections. The regression coefficients are estimated by the iteratively re-weighted least squares method. The alighting and boarding time mainly depends on the type of vehicle, the number of passengers alighting and boarding, and the occupancy of the vehicle. For the distribution of the time spent waiting for the possibility of departure, suitable distributions such as the Gamma and Lognormal distributions are fitted.
Models in the context of engineering can be classified into process-based and data-based models. Whereas a process-based model describes the problem by an explicit formulation, a data-based model is often used where no such mapping can be found due to the high complexity of the problem. An artificial neural network (ANN) is a data-based model which is able to "learn" a mapping from a set of training patterns. This paper deals with the application of ANNs in time-dependent bathymetric models. A bathymetric model is a geometric representation of the sea bed. Typically, a bathymetry is measured and afterwards described by a finite set of measured data. Measuring at different time steps leads to a time-dependent bathymetric model. To obtain a continuous surface, the measured data have to be interpolated by some interpolation method. Unlike explicitly given interpolation methods, the presented time-dependent bathymetric model using an ANN trains the approximated surface in space and time in an implicit way. The ANN is trained with topographic measurement data consisting of the location (x,y) and the time t. In other words, the ANN is trained to reproduce the mapping h = f(x,y,t) and is afterwards able to approximate the topographic height for a given location and date. In a further step, this model is extended to take meteorological parameters into account. This leads to a model of a more predictive character.
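The implicit learning of a mapping h = f(x,y,t) can be sketched with a toy single-hidden-layer network trained by plain gradient descent on synthetic "soundings". The network size, learning rate, and the synthetic sea-bed function are assumptions for the demonstration, not the authors' model.

```python
import numpy as np

# Toy feed-forward network learning h = f(x, y, t) from scattered samples.
rng = np.random.default_rng(0)

def target(x, y, t):
    # synthetic, slowly migrating sea-bed profile (assumed for the demo)
    return np.sin(x) * np.cos(y) + 0.1 * t

X = rng.uniform(0, 2, size=(200, 3))            # columns: x, y, t
H = target(X[:, 0], X[:, 1], X[:, 2])[:, None]  # "measured" depths

W1 = rng.normal(0, 0.5, (3, 16)); b1 = np.zeros(16)
W2 = rng.normal(0, 0.5, (16, 1)); b2 = np.zeros(1)

def forward(X):
    Z = np.tanh(X @ W1 + b1)        # hidden layer
    return Z, Z @ W2 + b2           # predicted depth

losses = []
lr = 0.05
for _ in range(2000):
    Z, P = forward(X)
    err = P - H
    losses.append(float(np.mean(err ** 2)))
    # backpropagation of the mean-squared error
    gP = 2 * err / len(X)
    gW2 = Z.T @ gP; gb2 = gP.sum(0)
    gZ = gP @ W2.T * (1 - Z ** 2)
    gW1 = X.T @ gZ; gb1 = gZ.sum(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

print(losses[0], losses[-1])  # the fit error should drop substantially
```

After training, `forward` can be queried at any (x, y, t), including dates between the measurement campaigns, which is exactly the implicit space-time interpolation described above.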
In the past, several types of Fourier transforms in Clifford analysis have been studied. In this paper, first an overview of these different transforms is given. Next, a new equation in a Clifford algebra is proposed, the solutions of which will act as kernels of a new class of generalized Fourier transforms. Two solutions of this equation are studied in more detail, namely a vector-valued solution and a bivector-valued solution, as well as the associated integral transforms.
In this paper three different formulations of a Bernoulli-type free boundary problem are discussed. By analyzing the shape Hessian in the case of matching data, a distinction is made between well-posed and ill-posed formulations. A nonlinear Ritz-Galerkin method is applied for discretizing the shape optimization problem. In the case of well-posedness, existence and convergence of the approximate shapes are proven. In combination with a fast boundary element method, efficient first and second order shape optimization algorithms are obtained.
This paper describes the application of interval calculus to the calculation of plate deflection, taking into account the inevitable and acceptable tolerance of the input data (input parameters). A simply supported reinforced concrete plate was taken as an example. The plate was loaded by uniformly distributed loads. Several parameters that influence the plate deflection are given as closed intervals. Accordingly, the results are obtained as intervals, so it was possible to follow the direct influence of a change of one or more input parameters on the output values (in our example, the deflection) using one model and one computing procedure. The described procedure could be applied to any FEM calculation in order to keep calculation tolerances, ISO tolerances, and production tolerances within close (admissible) limits. Wolfram Mathematica has been used as the tool for the interval calculation.
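The effect of interval inputs on a deflection result can be sketched with the textbook midspan formula for a simply supported square plate, w = alpha * q * a^4 / D with plate stiffness D = E*h^3 / (12*(1 - nu^2)). The deflection coefficient, material data, and tolerances below are illustrative assumptions, not values from the paper.

```python
# Interval evaluation of the midspan deflection of a simply supported
# square plate under uniform load (all numbers are illustrative).

ALPHA = 0.00406  # textbook deflection coefficient for a square plate

def deflection(q, a, E, h, nu):
    D = E * h ** 3 / (12.0 * (1.0 - nu ** 2))  # plate bending stiffness
    return ALPHA * q * a ** 4 / D

# interval inputs: (lower bound, upper bound)
q = (9.5e3, 10.5e3)   # load [N/m^2], +-5 % tolerance
E = (28e9, 32e9)      # concrete Young's modulus [Pa]
h = (0.145, 0.155)    # plate thickness [m]
a, nu = 4.0, 0.2      # span [m] and Poisson ratio taken as crisp

# w is monotonically increasing in q and decreasing in E and h, so the
# exact interval result follows from evaluating at the matching endpoints.
w_lo = deflection(q[0], a, E[1], h[1], nu)
w_hi = deflection(q[1], a, E[0], h[0], nu)
print(w_lo, w_hi)
```

Because the formula is monotone in each interval parameter, endpoint evaluation yields the exact deflection interval here; in a general FEM calculation this monotonicity is not guaranteed and dedicated interval solvers are needed.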
For many applications, nonuniformly distributed functional data are given, which leads to large-scale scattered data problems. We wish to represent the data in terms of a sparse representation with a minimal number of degrees of freedom. For this, an adaptive scheme which operates in a coarse-to-fine fashion using a multiscale basis is proposed. Specifically, we investigate hierarchical bases using B-splines and spline (pre)wavelets. At each stage a least-squares approximation of the data is computed. We take into account different requirements arising in large-scale scattered data fitting: we discuss the fast iterative solution of the least-squares systems, regularization of the data, and the treatment of outliers. A particular application concerns the approximate continuation of harmonic functions, an issue arising in geodesy.
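A drastically simplified version of coarse-to-fine least-squares fitting can be shown with piecewise linear "hat" basis functions and a small Tikhonov regularization term; the basis choice, knot counts, and data are illustrative assumptions, far simpler than the spline-wavelet machinery of the paper.

```python
import numpy as np

# Regularized least-squares fit of nonuniform 1-D data with hat functions.
rng = np.random.default_rng(1)

x = np.sort(rng.uniform(0, 1, 120))                    # nonuniform sample sites
y = np.sin(2 * np.pi * x) + 0.05 * rng.normal(size=x.size)

def hat(x, center, width):
    # piecewise linear basis function centered at 'center'
    return np.clip(1 - np.abs(x - center) / width, 0, None)

def fit(x, y, n_knots, reg=1e-4):
    centers = np.linspace(0, 1, n_knots)
    width = centers[1] - centers[0]
    A = np.column_stack([hat(x, c, width) for c in centers])
    # regularized normal equations: (A^T A + reg * I) c = A^T y
    c = np.linalg.solve(A.T @ A + reg * np.eye(n_knots), A.T @ y)
    resid = float(np.mean((A @ c - y) ** 2))
    return c, resid

# refining the basis (more knots) reduces the least-squares residual
_, r_coarse = fit(x, y, 5)
_, r_fine = fit(x, y, 17)
print(r_coarse, r_fine)
```

An adaptive scheme would refine the basis only where the residual remains large, instead of uniformly as done here, which is what keeps the representation sparse.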
Quality is one of the most important properties of a product. Providing optimal quality can reduce the costs of rework, scrap, recalls, or even legal actions, while satisfying the customers' demand for reliability. The aim is to achieve "built-in" quality within the product development process (PDP). A common approach for this is robust design optimization (RDO), which uses stochastic values as constraints and/or objectives to obtain a robust and reliable optimal design. In classical approaches, the effort required for the stochastic analysis multiplies with the complexity of the optimization algorithm. The suggested approach shows that it is possible to reduce this effort enormously by reusing previously obtained data: the support point set of an underlying metamodel is filled iteratively during the ongoing optimization in regions of interest, where necessary. Using a simple example, it is shown that this is possible without significant loss of accuracy.
Since the 1990s, the Pascal matrix, its generalizations, and its applications have been the focus of a great number of publications. As is well known, the Pascal matrix, the symmetric Pascal matrix, and other special matrices of Pascal type play an important role in many scientific areas, among them Numerical Analysis, Combinatorics, Number Theory, Probability, Image Processing, Signal Processing, Electrical Engineering, etc. We present a unified approach to matrix representations of special polynomials in several hypercomplex variables (new Bernoulli, Euler, etc. polynomials), extending the results of H. Malonek, G. Tomaz: Bernoulli polynomials and Pascal matrices in the context of Clifford Analysis, Discrete Appl. Math. 157(4) (2009) 838-847. The hypercomplex version of a new Pascal matrix with block structure, which resembles the ordinary one for polynomials of one variable, will be discussed in detail.
Nodal integration of finite elements has been investigated recently. Compared with full integration it shows better convergence when applied to incompressible media, allows easier remeshing, and greatly reduces the number of material evaluation points, thus improving efficiency. Furthermore, understanding it may help to create new integration schemes in meshless methods as well. The new integration technique requires a nodally averaged deformation gradient. For the tetrahedral element it is possible to formulate a nodal strain which passes the patch test. On the downside, it introduces non-physical low-energy modes. Most of these "spurious modes" are local deformation maps of neighbouring elements. Present stabilization schemes rely on adding a stabilizing potential to the strain energy. This stabilization is discussed within this article. Its drawbacks are easily identified in numerical experiments: nonlinear material laws are not well represented, plastic strains are often underestimated, and geometrically nonlinear stabilization greatly reduces computational efficiency. The article reinterprets nodal integration in terms of imposing a nonconforming C0-continuous strain field on the structure. By doing so, the origins of the spurious modes are discussed and two methods are presented that solve this problem. First, a geometric constraint is formulated and solved using a mixed formulation of Hu-Washizu type. This assumption leads to a consistent representation of the strain energy while eliminating the spurious modes. The solution is exact, but only of theoretical interest since it produces global support. Second, an integration scheme is presented that approximates the stabilization criterion. The latter leads to a highly efficient scheme. It can even be extended to other finite element types such as hexahedra. Numerical efficiency, convergence behaviour, and stability of the new method are validated using linear tetrahedral and hexahedral elements.
A practical framework for generating cross-correlated fields with a specified marginal distribution function, autocorrelation function, and cross-correlation coefficients is presented in the paper. The contribution builds on a recent journal paper [1]. The approach relies on well-known series expansion methods for the simulation of a Gaussian random field. The proposed method requires all cross-correlated fields over the domain to share an identical autocorrelation function, and the cross-correlation structure between each pair of simulated fields to be defined simply by a cross-correlation coefficient. These relations result in specific properties of the eigenvectors of the covariance matrices of the discretized fields over the domain. These properties are used to decompose the eigenproblem, which must normally be solved when computing the series expansion, into two smaller eigenproblems. Such a decomposition represents a significant reduction of the computational effort. Non-Gaussian components of a multivariate random field are simulated via a memoryless transformation of the underlying Gaussian random fields, for which the Nataf model is employed to modify the correlation structure. In this method, the autocorrelation structure of each field is fulfilled exactly, while the cross correlation is only approximated. The associated errors can be computed before performing the simulations, and it is shown that the errors occur especially in the cross correlation between distant points and are negligibly small in practical situations.
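The key computational saving can be sketched as follows: since all fields share one autocorrelation function, a single eigendecomposition of the shared autocovariance serves every field, and the cross correlation enters only through mixing the underlying standard normal variables. Grid, correlation length, and the target coefficient below are illustrative assumptions.

```python
import numpy as np

# Two cross-correlated Gaussian fields sharing one autocorrelation function.
rng = np.random.default_rng(42)

pts = np.linspace(0, 10, 50)
lc = 2.0                                                # correlation length (assumed)
C = np.exp(-np.abs(pts[:, None] - pts[None, :]) / lc)   # shared autocovariance

lam, phi = np.linalg.eigh(C)    # one eigenproblem serves both fields
lam = np.clip(lam, 0, None)     # guard against tiny negative eigenvalues
B = phi * np.sqrt(lam)          # field sample = B @ xi with xi ~ N(0, I)

rho = 0.7                       # target cross-correlation coefficient
n_sim = 20000
xi1 = rng.standard_normal((C.shape[0], n_sim))
xi2 = rho * xi1 + np.sqrt(1 - rho ** 2) * rng.standard_normal(xi1.shape)

f1, f2 = B @ xi1, B @ xi2       # correlated realizations of the two fields

# empirical cross correlation at one point should be close to rho
r = float(np.corrcoef(f1[25], f2[25])[0, 1])
print(r)
```

In this Gaussian sketch the cross correlation at coincident points is reproduced exactly in expectation; the Nataf-model corrections discussed in the abstract become necessary once non-Gaussian marginals are imposed by a memoryless transformation.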
We give a sufficient and a necessary condition for an analytic function "f" on the unit disk "D" with Hadamard gap to belong to a class of weighted logarithmic Bloch spaces, as well as to the corresponding little weighted logarithmic Bloch space, under some conditions imposed on the defined weight function. We also study the relations between the class of weighted logarithmic Bloch functions and some other classes of analytic functions with the help of analytic functions in the Hadamard gap class.
Visual impairment is a common problem worldwide. The projector-based AR technique has the ability to change the appearance of a real object, and it can thereby help to improve visibility for the visually impaired. We propose a new framework for appearance enhancement with a projector-camera system that employs a model predictive controller. This framework enables arbitrary image processing, such as that of photo-retouching software, in the real world, and it helps to improve visibility for the visually impaired. In this article, we show appearance enhancement results for Peli's method and Wolffsohn's method for low vision, and Jefferson's method for color vision deficiencies. Through the experimental results, the potential of our method to enhance the appearance for the visually impaired was confirmed to be comparable to appearance enhancement for digital images and television viewing.
NONZONAL WAVELETS ON S^N
(2010)
In the present article we construct wavelets on a sphere S^n of arbitrary dimension using the approach of approximate identities. There are two equivalent approaches to wavelets. The group-theoretical approach formulates a square-integrability condition for a group acting via a unitary, irreducible representation on the sphere. The connection to the group-theoretical approach will be sketched. The concept of approximate identities uses the same constructions in the background; here we select an appropriate section of dilations and translations in the group acting on the sphere in two steps. First we formulate dilations in terms of approximate identities, and then we introduce translations on the sphere as rotations. This leads to the construction of an orthogonal polynomial system in L²(SO(n+1)). This approach is convenient for constructing concrete wavelets, since the appropriate kernels can be constructed from the heat kernel, leading to the approximate identity of Gauss-Weierstraß. We work out conditions for functions to form a family of wavelets, and subsequently we formulate how zonal wavelets can be constructed from an approximate identity, and the relation to the admissibility of nonzonal wavelets. Finally, we give an example of a nonzonal wavelet on $S^n$, which we obtain from the approximate identity of Gauss-Weierstraß.
In this paper we consider the time independent Klein-Gordon equation on some conformally flat 3-tori with given boundary data. We set up an explicit formula for the fundamental solution. We show that we can represent any solution to the homogeneous Klein-Gordon equation on the torus as finite sum over generalized 3-fold periodic elliptic functions that are in the kernel of the Klein-Gordon operator. Furthermore we prove Cauchy and Green type integral formulas and set up a Teodorescu and Cauchy transform for the toroidal Klein-Gordon operator. These in turn are used to set up explicit formulas for the solution to the inhomogeneous version of the Klein-Gordon equation on the 3-torus.
A UNIFIED APPROACH FOR THE TREATMENT OF SOME HIGHER DIMENSIONAL DIRAC TYPE EQUATIONS ON SPHERES
(2010)
Using Clifford analysis methods, we provide a unified approach to obtain explicit solutions of some partial differential equations combining the n-dimensional Dirac and Euler operators, including generalizations of the classical time-harmonic Maxwell equations. The obtained regular solutions show strong connections between hypergeometric functions and homogeneous polynomials in the kernel of the Dirac operator.
In this paper we present rudiments of a higher dimensional analogue of the Szegö kernel method to compute 3D mappings from elementary domains onto the unit sphere. This is a formal construction which provides us with a good substitute for the classical conformal Riemann mapping. We give explicit numerical examples and discuss a comparison of the results with those obtained alternatively by the Bergman kernel method.
In recent years special hypercomplex Appell polynomials have been introduced by several authors, and their main properties have been studied by different methods and with different objectives. As in the classical theory of Appell polynomials, their generating function is a hypercomplex exponential function. The observation that this generalized exponential function has, for example, a close relationship with Bessel functions confirmed the practical significance of such an approach to special classes of hypercomplex differentiable functions. Its usefulness for combinatorial studies has also been investigated. Moreover, an extension of those ideas led to the construction of complete sets of hypercomplex Appell polynomial sequences. Here we show how this opens the way for a more systematic study of the relation between some classes of Special Functions and Elementary Functions in Hypercomplex Function Theory.
Sand-bentonite mixtures are well recognized as buffer and sealing materials in nuclear waste repository construction. The behaviour of compacted sand-bentonite mixtures needs to be well understood in order to guarantee the safety and efficiency of the barrier construction. This paper presents numerical simulations of a swelling test and a coupled thermo-hydro-mechanical (THM) test on a compacted sand-bentonite mixture in order to reveal the influence of temperature and hydraulic gradients on the distribution of temperature, mechanical stress, and water content in such materials. A sensitivity analysis is carried out to identify the parameters which most influence the response of the numerical model. Results of the back analysis of the model parameters are reported and critically assessed.
Many researchers are working on developing robots into adequate partners, be it at the workplace, at home, or in leisure activities, or on enabling elderly persons to lead a self-determined, independent life. While quite some progress has been made in, e.g., understanding, processing, and expressing speech or emotion, the relations between humans and robots are usually only short-term. In order to build long-term, i.e. social, relations, qualities like empathy, trust building, dependability, non-patronizing behaviour, and others will be required. But these are just terms and as such no adequate starting points for "programming" these capacities, let alone for avoiding the problems and pitfalls in interactions between humans and robots. However, a rich source for doing this is available, until now unused for this purpose: artistic productions, namely literature, theatre plays, not to forget operas, and films, with their multitude of examples. Poets, writers, dramatists, screenwriters, etc. have studied for centuries the facets of interactions between persons, their dynamics, and the related snags. And since we wish for human-robot relations to be master-servant relations (the human obviously being the master), the study of these relations will be prominent. A procedure with four consecutive steps is proposed, namely Selection, Analysis, Categorization, and Integration. Only if we succeed in developing robots which are seen as servants will we be successful in supporting and helping humans through robots.
The laser beam is a small, flexible, and fast polishing tool. With laser radiation it is possible to finish many contours or geometries on quartz glass surfaces in a very short time. The temperature developing during polishing determines the achievable surface smoothing and, as a negative side effect, causes material stresses. To find out which parameters are important for the laser polishing process and the resulting surface roughness, and to estimate the material stresses, temperature simulations and extensive polishing experiments were carried out. During these experiments, starting and machining parameters were varied and temperatures were measured contact-free.
Due to the complex interactions between the ground, the driving machine, the lining tube, and the built environment, the accurate assignment of in-situ system parameters for numerical simulation in mechanized tunneling is always subject to tremendous difficulties. However, the more accurate these parameters are, the more applicable the responses gained from the computations will be. In particular, if the entire length of the tunnel lining is examined, the appropriate selection of the various ground parameters is crucial for the success of a tunnel project and, more importantly, will prevent potential casualties. In this context, methods of system identification for the adaptation of numerical ground models are presented. Both deterministic and probabilistic approaches are considered for typical scenarios representing notable variations or changes in the ground model.