We consider the situation in local public transport (ÖPNV) in which two bus or tram lines share common stops. The aim of our investigation is to find a timetable for both lines that offers passengers as much convenience as possible. The demand structure, i.e. the number of people using the two lines, imposes certain constraints on the headways of the two lines. The remaining degrees of freedom are to be exploited in the sense of this objective. The talk addresses the following questions: By which criteria can "convenience" or "quality of synchronization" be measured? How can the individual "synchronization measures" be computed? How can the remaining degrees of freedom be used to achieve the best possible synchronization? The results are then applied to several examples, and solutions are proposed using the methods provided.
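As an illustration (not part of the talk), a minimal Python sketch of one plausible synchronization measure, the average transfer waiting time at a shared stop, and of using the remaining freedom (the offset between the two lines) to minimize it; the headways and offsets are hypothetical:

```python
from math import gcd

def avg_transfer_wait(h1, h2, offset):
    """Average wait for the next line-2 departure after each line-1 arrival,
    evaluated over one common period (the lcm of the two headways)."""
    period = h1 * h2 // gcd(h1, h2)
    arrivals = range(0, period, h1)                       # line-1 arrivals at the shared stop
    departures = [(offset + k * h2) % period for k in range(period // h2)]
    waits = [min((d - t) % period for d in departures) for t in arrivals]
    return sum(waits) / len(waits)

def best_offset(h1, h2):
    """Exploit the remaining freedom: pick the line-2 offset minimizing the measure."""
    return min(range(h2), key=lambda o: avg_transfer_wait(h1, h2, o))
```

For example, with 10- and 15-minute headways and zero offset the average transfer wait is 5 minutes; shifting one line changes the measure without changing either headway.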

Wolken
(2006)

In the field of refurbishment of old buildings and as-built surveying in civil engineering, it is frequently necessary to update existing plans with respect to the current state of the building or, if these plans are no longer accessible, to produce entirely new plan documents of the as-is state. Laser scanning technology opens up a convenient way to acquire such building data. In this context, the present article introduces approaches to the partial automation of generating a three-dimensional computer model of a building. The result is a volume model in which the geometric and topological information about faces, edges and points is first described in the sense of a B-rep model. The objects of this volume model are analyzed with methods from the field of artificial intelligence and systematically categorized into building-component classes. Knowledge of the component semantics thus makes it possible to derive a building product model from the data and to make it accessible to individual planners, for instance for producing an energy certificate. The article demonstrates the successful use of virtual neural networks in the field of as-built surveying by means of a complex example.

The use of virtual reality techniques in the development of educational applications brings new perspectives to the teaching of subjects related to civil construction in the Civil Engineering domain. In order to obtain models able to visually simulate the construction process of two types of construction work, the research turned to the techniques of geometric modelling and virtual reality. The applications developed for this purpose concern the construction of a cavity wall and a bridge. These models make it possible to view the physical evolution of the work, to follow the planned construction sequence and to visualize details of the form of every component of the works. They also support the study of the type and method of operation of the equipment necessary for these construction procedures. These models have been used to distinct advantage as educational aids in first-degree courses in Civil Engineering. Normally, three-dimensional geometric models used to present architectural and engineering works show only their final form, not allowing the observation of their physical evolution. The visual simulation of the construction process needs to be able to produce changes to the geometry of the project dynamically. In the present study, two engineering construction work models were created, from which it was possible to obtain three-dimensional models corresponding to different states of their form, simulating distinct stages of their construction. Virtual reality technology was applied to the 3D models. Virtual reality capabilities allow the interactive real-time viewing of 3D building models and facilitate the process of visualizing, evaluating and communicating.

We consider a structural truss problem where all of the physical model parameters are uncertain: not just the material values and applied loads, but also the positions of the nodes are assumed to be inexact but bounded and are represented by intervals. Such uncertainty may typically arise from imprecision during the process of manufacturing or construction, or round-off errors. In this case the application of the finite element method results in a system of linear equations with numerous interval parameters which cannot be solved conventionally. Applying a suitable variable substitution, an iteration method for the solution of a parametric system of linear equations is firstly employed to obtain initial bounds on the node displacements. Thereafter, an interval tightening (pruning) technique is applied, firstly on the element forces and secondly on the node displacements, in order to obtain tight guaranteed enclosures for the interval solutions for the forces and displacements.
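A toy illustration of the starting point, interval parameters propagated through a mechanical formula, can be given in Python for a single bar, where the displacement u = FL/(EA) is evaluated with all four parameters as intervals (the numbers are invented; the paper's parametric iteration and pruning techniques are not reproduced here):

```python
class Interval:
    """Minimal closed-interval arithmetic: [lo, hi]."""
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi
    def __mul__(self, other):
        ps = [self.lo * other.lo, self.lo * other.hi,
              self.hi * other.lo, self.hi * other.hi]
        return Interval(min(ps), max(ps))
    def __truediv__(self, other):
        assert other.lo > 0 or other.hi < 0   # divisor must not contain zero
        return self * Interval(1 / other.hi, 1 / other.lo)

# one-bar truss: axial displacement u = F L / (E A), every parameter an interval
F = Interval(9.5e3, 10.5e3)     # load [N]
L = Interval(1.99, 2.01)        # length [m] -- inexact node positions
E = Interval(200e9, 210e9)      # Young's modulus [Pa]
A = Interval(0.98e-4, 1.02e-4)  # cross-section area [m^2]
u = F * L / (E * A)             # guaranteed enclosure of the displacement
```

The resulting interval is a guaranteed but generally pessimistic enclosure; the variable substitution and pruning steps described in the paper serve precisely to tighten such enclosures for full systems.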

In the past, several types of Fourier transforms in Clifford analysis have been studied. In this paper, first an overview of these different transforms is given. Next, a new equation in a Clifford algebra is proposed, the solutions of which will act as kernels of a new class of generalized Fourier transforms. Two solutions of this equation are studied in more detail, namely a vector-valued solution and a bivector-valued solution, as well as the associated integral transforms.

VARIATIONAL POSITING AND SOLUTION OF COUPLED THERMOMECHANICAL PROBLEMS IN A REFERENCE CONFIGURATION
(2015)

A variational formulation of a coupled thermomechanical problem of anisotropic solids for the case of non-isothermal finite deformations in a reference configuration is presented. The formulation of the problem includes: a condition of equilibrium flow of the deformation process in the reference configuration; a coupled heat-conduction equation in variational form, in which the influence of the deformation characteristics of the process on the temperature field is taken into account; tensor-linear constitutive relations for a hypoelastic material; kinematic and evolution relations; and initial and boundary conditions. Based on this formulation, several axisymmetric isothermal and coupled problems of finite deformations of isotropic and anisotropic bodies are solved. The solution of coupled thermomechanical problems for a hollow cylinder in the case of finite deformation showed an essential influence of the coupling on the distributions of temperature, stresses and strains. The obtained solutions show the development of the stress-strain state and the temperature change in axisymmetric bodies in the case of finite deformations.

VARIATION OF ROTATIONAL RESTRAINT IN GRID DECK CONNECTION DUE TO CORROSION DAMAGE AND STRENGTHENING
(2006)

An approach to assessing the rotational restraint of the stringer-to-crossbeam connection in the deck of a 100-year-old steel truss bridge is presented. The sensitivity of the connection's rotational restraint coefficient to corrosion damage and strengthening is analyzed. Two criteria for assessing the rotational restraint coefficient are applied: a static and a kinematic one. The former is based on the bending moment distribution in the considered member, the latter on the member rotation at the given joint. A finite element model built from 2D elements is described: webs and flanges are modeled with shell elements, while the rivets in the connection are modeled with a system of beam and spring elements. The rivet modeling method is verified against T-stub connection test results published in the literature. The FEM analyses proved that the recorded extent of corrosion damage does not alter the initial rotational restraint of the stringer-to-crossbeam connection. Strengthening the stringer midspan influences the midspan bending moment and the stringer end rotation in different ways. Restoring a member's load-bearing capacity usually means strengthening its critical regions (where the highest stress levels occur). This alters the distribution of flexural stiffness over the member length and influences the rotational restraint at its connections to other members. The impact depends on the criterion chosen for assessing the rotational restraint coefficient.

Adopting European environmental protection laws will require sustained efforts by the authorities and communities of Romania; implementing modern solutions will become a fast and effective option for improving existing systems in order to prevent disasters. As part of the urban infrastructure, drainage networks for storm and waste water are included in the plan for promoting systems that protect environmental quality, with the purpose of integrated and adaptive management. The paper presents a distributed control system for the sewer network of the town of Iasi. The unsatisfactory technical state of the current sewer system is described, focusing on objectives related to the implementation of the control system. The proposed distributed control system for the Iasi drainage network is based on hierarchic control theory for diagnosis, sewer planning and management. Two control levels are proposed: coordination and local execution. The configuration of the distributed control system, including data acquisition and conversion equipment, interface characteristics, the local data bus, the data communication network and station configuration, is described in detail. The project aims to be a useful instrument for local authorities in preventing and reducing the impact of future natural disasters on urban areas by means of modern technologies.

Interval analysis extends the concept of computing with real numbers to computing with real intervals. As a consequence, some interesting properties appear, such as the delivery of guaranteed results or confirmed global values. The former property is given in the sense that unknown numerical values are known to lie in a computed interval. The latter property states that the global minimum value, for example, of a given function is also known to be contained in an interval (or a finite set of intervals). Depending upon the amount of computational effort invested in the calculation, we can often find tight bounds on these enclosing intervals. The downside of interval analysis, however, is the mathematically correct, but often very pessimistic size of the interval result. This is particularly due to the so-called dependency effect, which occurs when a single variable is used multiple times in one calculation. When interval analysis is applied to structural analysis problems, the dependency effect has a great influence on the quality of the numerical results. In this paper, a brief background of interval analysis is presented, and it is shown how it can be applied to the solution of structural analysis problems. A discussion of possible improvements as well as an outlook on parallel computing is also given.
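The dependency effect can be demonstrated in a few lines of Python: evaluating x − x with naive interval subtraction does not return the true range {0}, because the two occurrences of x are treated as independent:

```python
def sub(a, b):
    # naive interval subtraction: [a.lo - b.hi, a.hi - b.lo]
    return (a[0] - b[1], a[1] - b[0])

x = (1.0, 2.0)
print(sub(x, x))   # (-1.0, 1.0): mathematically correct but pessimistic; the true range of x - x is {0}
```

Rewriting an expression so that each variable occurs only once, where possible, removes this overestimation.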

Portugal is one of the European countries with the highest spatial and population coverage of its freeway network. The sharp growth of this network in recent years motivates the use of methods of analysis and the evaluation of its quality of service in terms of traffic performance, typically carried out through internationally accepted methodologies, namely that presented in the Highway Capacity Manual (HCM). Lately, the use of microscopic traffic simulation models has become increasingly widespread. These models simulate the movement of individual vehicles, making it possible to perform traffic analyses. The main goal of this study was to verify the possibility of using microsimulation as an auxiliary tool in adapting the HCM 2000 methodology to Portugal. For this purpose, the microscopic simulators AIMSUN and VISSIM were used to simulate traffic circulation on the Portuguese A5 freeway. The results allowed an analysis of the influence of the main geometric and traffic factors involved in the HCM 2000 methodology. In conclusion, the study presents the main advantages and limitations of the microsimulators AIMSUN and VISSIM in modelling traffic circulation on Portuguese freeways. The main limitation is that these microsimulators cannot explicitly simulate some of the factors considered in the HCM 2000 methodology, which invalidates their direct use as a tool for quantifying those effects and, consequently, makes the direct adaptation of this methodology to Portugal impracticable.

The concept of reliability plays a central role in the evaluation of transport networks. From the point of view of users of public transport (ÖPNV), one of the most important criteria for judging the quality of a line network is whether the destination can be reached within a given time with high certainty. In this talk, this notion of reliability is formalized mathematically. First, the usual notion of network reliability in the sense of pairwise connectedness probabilities is discussed. This notion is then extended by considering reliability subject to a maximum admissible travel time. In previous work, the ring-radius structure has proven to be a well-established model for the theoretical description of transport networks. These considerations are now extended by including real transport network structures. The tram network of Krakow serves as a concrete example. In particular, it is investigated what effects a planned extension of the network will have on reliability. This paper is part of the CIVITAS-CARAVEL project: "Clean and better transport in cities". The project has received research funding from the Community's Sixth Framework Programme. The paper reflects only the author's views, and the Community is not liable for any use that may be made of the information contained therein.
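The extended reliability notion, connectivity subject to a maximum admissible travel time, can be sketched in Python by complete state enumeration on a toy network (the edge probabilities and travel times are invented; real networks require more scalable methods):

```python
from itertools import product

# toy network: undirected edges (u, v, working-probability, travel time)
edges = [("s", "a", 0.9, 5), ("a", "t", 0.9, 5), ("s", "t", 0.8, 12)]

def reachable_within(up_edges, src, dst, t_max):
    """Shortest travel time over the working edges (Bellman-Ford style)."""
    dist = {src: 0}
    for _ in range(len(up_edges)):
        for u, v, _, t in up_edges:
            for a, b in ((u, v), (v, u)):
                if a in dist and dist[a] + t < dist.get(b, float("inf")):
                    dist[b] = dist[a] + t
    return dist.get(dst, float("inf")) <= t_max

def time_constrained_reliability(edges, src, dst, t_max):
    """Probability that dst is reachable from src within t_max,
    by enumerating all 2^n working/failed edge states."""
    total = 0.0
    for states in product([0, 1], repeat=len(edges)):
        p, up = 1.0, []
        for on, e in zip(states, edges):
            p *= e[2] if on else 1 - e[2]
            if on:
                up.append(e)
        if reachable_within(up, src, dst, t_max):
            total += p
    return total
```

With a 10-minute limit only the two-hop route (5 + 5) counts, giving 0.9 · 0.9 = 0.81; relaxing the limit to 15 minutes also admits the direct 12-minute edge and the reliability rises to 0.962.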

Fuzzy functions are suitable for dealing with uncertainties and fuzziness in a closed form while maintaining the informational content. This paper tries to understand, elaborate, and explain the problem of interpolating crisp and fuzzy data using continuous fuzzy-valued functions. Two main issues are addressed here. The first covers how the fuzziness induced by the reduction and deficit of information, i.e. the discontinuity of the interpolated points, can be evaluated considering the interpolation method used and the density of the data. The second issue deals with the need to differentiate between impreciseness, and hence fuzziness, only in the interpolated quantity; impreciseness only in the location of the interpolated points; and impreciseness in both the quantity and the location. In this paper, a brief background of the concepts of fuzzy numbers and fuzzy functions is presented. The numerical side of computing with fuzzy numbers is concisely demonstrated. The problems of fuzzy polynomial interpolation, interpolation on meshes and mesh-free fuzzy interpolation are investigated. The integration of the previously noted uncertainty into a coherent fuzzy-valued function is discussed. Several sets of artificial and measured data are used to examine the mentioned fuzzy interpolations.
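As a minimal sketch of one of the cases discussed, impreciseness only in the interpolated quantity, triangular fuzzy values at crisp locations can be interpolated parameter-wise (the data are invented):

```python
def interp_triangular(x, xs, ys):
    """Linear interpolation where the data values ys are triangular fuzzy
    numbers (lo, peak, hi) and the locations xs are crisp and sorted.
    Interpolating each of the three parameters separately is exact here
    because the interpolation weights are non-negative."""
    for (x0, y0), (x1, y1) in zip(zip(xs, ys), zip(xs[1:], ys[1:])):
        if x0 <= x <= x1:
            w = (x - x0) / (x1 - x0)
            return tuple((1 - w) * a + w * b for a, b in zip(y0, y1))
    raise ValueError("x outside data range")
```

Halfway between the fuzzy values (0, 1, 2) and (2, 3, 4) this yields the triangular number (1, 2, 3); more general membership shapes are usually handled via alpha-cuts.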

The execution of project activities generally requires the use of (renewable) resources like machines, equipment or manpower. The resource allocation problem consists in assigning time intervals to the execution of the project activities while taking into account temporal constraints between activities emanating from technological or organizational requirements and costs incurred by the resource allocation. If the total procurement cost of the different renewable resources has to be minimized we speak of a resource investment problem. If the cost depends on the smoothness of the resource utilization over time the underlying problem is called a resource levelling problem. In this paper we consider a new tree-based enumeration method for solving resource investment and resource levelling problems exploiting some fundamental properties of spanning trees. The enumeration scheme is embedded in a branch-and-bound procedure using a workload-based lower bound and a depth first search. Preliminary computational results show that the proposed procedure is promising for instances with up to 30 activities.
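The levelling objective can be illustrated with a tiny Python enumeration (not the paper's tree-based branch-and-bound): start times within time windows are searched exhaustively for the schedule with the smoothest squared resource profile; the instance is invented:

```python
from itertools import product

# hypothetical instance: activities as (duration, resource demand, latest start)
activities = [(2, 2, 2), (2, 1, 2), (1, 3, 3)]
horizon = 5

def profile(starts):
    """Resource utilisation per time slot for the given start times."""
    usage = [0] * horizon
    for (d, r, _), s in zip(activities, starts):
        for t in range(s, s + d):
            usage[t] += r
    return usage

def level(starts):
    # levelling objective: sum of squared utilisation favours smooth profiles
    return sum(u * u for u in profile(starts))

best = min(product(*(range(ls + 1) for _, _, ls in activities)), key=level)
```

Real instances make this enumeration explode combinatorially, which is where the spanning-tree-based enumeration scheme and the workload-based lower bound come in.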

In order to make control decisions, Smart Buildings need to collect data from multiple sources and bring it to a central location, such as the Building Management System (BMS). This needs to be done in a timely and automated fashion. Besides the data gathered from different energy-using elements, information on occupant behaviour is also important for a building's requirement analysis. In this paper, the parameter of Occupant Density was considered to help determine the behaviour of occupants towards a building space. Through this parameter, support for building energy consumption and requirements based on occupant needs and demands was provided. The demonstrator presented provides information on the number of people present in a particular building space at any time, giving the space density. Collections of such density data made over a certain period of time represent occupant behaviour towards the building space, giving its usage patterns. Similarly, inventory items were tracked and monitored for moving out of or being brought into a particular read zone. For both people and inventory items, this was achieved using small, low-cost, passive Ultra-High Frequency (UHF) Radio Frequency Identification (RFID) tags. Occupants were given the tags in the form factor of a credit card to be carried at all times. A central database was built in which occupant and inventory information for a particular building space was maintained for monitoring and central data access.
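A minimal sketch of the density computation, assuming a read log of (timestamp, tag, zone) triples such as RFID readers might deliver (the names and numbers are invented):

```python
from collections import defaultdict

# hypothetical read log: (timestamp in minutes, tag id, read zone)
reads = [(0, "T1", "lab"), (1, "T2", "lab"), (2, "T1", "lab"),
         (61, "T1", "office"), (62, "T3", "lab")]

def density_per_hour(reads):
    """Count distinct tags seen per zone within each one-hour window."""
    buckets = defaultdict(set)
    for t, tag, zone in reads:
        buckets[(zone, t // 60)].add(tag)   # deduplicate repeated reads of a tag
    return {key: len(tags) for key, tags in buckets.items()}
```

Accumulating such counts over days yields the usage patterns described above; the one-hour window is an arbitrary choice for the sketch.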

The aim of our contribution is to clarify the relation between totally regular variables and Appell sequences of hypercomplex holomorphic polynomials (sometimes simply called monogenic power-like functions) in Hypercomplex Function Theory. After their introduction in 2006 by two of the authors of this note on the occasion of the 17th IKM, the latter have been the subject of investigations by different authors with different methods and in various contexts. The former concept, introduced by R. Delanghe in 1970 and later also studied by K. Gürlebeck in 1982 for the case of quaternions, has an obvious relationship with the latter, since it describes a set of linear hypercomplex holomorphic functions all powers of which are also hypercomplex holomorphic. Due to the non-commutative nature of the underlying Clifford algebra, being a totally regular variable or an Appell sequence is not a trivial property, as it is for the integer powers of the complex variable z = x + iy. Simple examples also show that not every totally regular variable and its powers form an Appell sequence, and vice versa. Under a very natural normalization condition, the set of all paravector-valued totally regular variables which are also Appell sequences will be completely characterized. In some sense the result can also be considered as an answer to a remark of K. Habetha in chapter 16, "Function theory in algebras", of the collection Complex Analysis: Methods, Trends, and Applications, Akademie-Verlag Berlin (Eds. E. Lanckau and W. Tutschke), 225-237 (1983), on the use of exact copies of several complex variables for the power series representation of any hypercomplex holomorphic function.
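The Appell property referred to above is modeled on the behaviour of the integer powers of the complex variable, $(z^n)' = n\,z^{n-1}$: a sequence of polynomials $(P_n)$ is an Appell sequence when, with the appropriate (here hypercomplex) derivative $\partial$,

```latex
\partial P_n = n\, P_{n-1}, \qquad n \ge 1 .
```

In the non-commutative Clifford setting this property is no longer automatic for the powers of a given variable, which is exactly the tension the note investigates.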

Digital models of buildings are widely used in civil engineering. In these models, geometric information serves as the leading information. Engineers are accustomed to working with geometric information; for instance, it is state of the art to specify a point by its three coordinates. However, the traditional approaches have disadvantages. Geometric information is over-determined: more geometric information is specified and stored than needed. In addition, engineers already deal with topological information; a denotation of objects in buildings is of a topological nature. The question is whether approaches in which topological information takes the leading role would be more efficient in civil engineering. This paper presents such an approach. Topological information is modelled independently of geometric information and is used for denoting the objects of a building. Geometric information is associated with topological information, so that geometric information "weights" the topology.
The concept presented in this paper has already been used in surveying existing buildings. Experience with this concept showed that the amount of geometric information required for a complete specification of a building could be reduced by a factor of up to 100. Further research will show how this concept can be used in planning processes.
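A minimal Python sketch of the idea, with invented identifiers: the topology (which vertices bound which edges, which edges bound which face) is stored independently, and coordinates merely "weight" the vertices:

```python
# topology: leading information, modelled without any coordinates
vertices = {"P1", "P2", "P3", "P4"}
edges = {"E1": ("P1", "P2"), "E2": ("P2", "P3"),
         "E3": ("P3", "P4"), "E4": ("P4", "P1")}
faces = {"wall": ("E1", "E2", "E3", "E4")}

# geometry: associated to topology afterwards, "weighting" the vertices
coords = {"P1": (0, 0), "P2": (4, 0), "P3": (4, 3), "P4": (0, 3)}

def edge_length(e):
    """Derived geometric quantity: length of a topological edge."""
    (x0, y0), (x1, y1) = (coords[v] for v in edges[e])
    return ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5
```

Because the topology carries the object structure, only the vertex coordinates need to be stored; lengths, areas and adjacencies are derived rather than duplicated.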

Computationally, design activity can be treated as state transition. In stepwise processing, the in-between form states are not easily observed. In this research, a time-based concept is therefore introduced and applied in order to bridge this gap. In architecture, folding is one method of form manipulation, and architects also want to search for alternatives through this operation. The folding operation has to be defined and parameterized before time is involved as a variable of folding. As a result, time-based transformation provides sequential form states and redirects design activity.

The concrete is modeled as a material with damage and plasticity, whereby the viscoplastic and viscoelastic behaviour depends on the rate of the total strains. Due to the damage behaviour, the compliance tensor develops different properties in tension and compression. Various yield surfaces, flow rules and damage rules have been tested with respect to their usability in a concrete model. One three-dimensional yield surface was developed by the author from a failure surface based on the Willam-Warnke five-parameter model. Only one general uniaxial stress-strain relation is used for the numerical control of the yield surface. From that curve, all parameters needed for different concrete strengths and different strain rates can be derived by affine transformations. For the flow rule, a non-associated inelastic potential is used in the compression zone and a Rankine potential in the tension zone. Owing to the time-dependent formulation, the symmetry of the system equations is maintained despite the use of non-associated potentials for the derivation of the inelastic strains. For quasi-static computations, a simple viscoplastic law based on an approach by Perzyna is used. The principle of equality of dissipation power in the uniaxial and triaxial stress states is applied; it is modified by a factor depending on the actual stress ratio, which, in comparison with the Kupfer experiments, yields more realistic strains. The concrete model is implemented in a mixed hybrid finite element. Examples at the structural level are presented for verification of the concrete model.
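The Perzyna idea referred to above, inelastic flow driven by the overstress beyond the yield limit, can be sketched for one dimension in Python (an invented explicit update with made-up parameters, not the paper's three-dimensional model):

```python
def perzyna_step(stress, strain_rate, dt, E, sigma_y, eta):
    """One explicit step of a 1D Perzyna-type viscoplastic law:
    the viscoplastic strain rate is proportional to the overstress
    beyond the yield stress sigma_y, scaled by the viscosity eta."""
    overstress = max(stress - sigma_y, 0.0)
    eps_vp_rate = overstress / eta                    # viscoplastic flow
    return stress + E * (strain_rate - eps_vp_rate) * dt
```

Below yield the response is purely elastic; above yield the overstress relaxes towards the yield surface at a rate set by the viscosity, which is what makes the formulation rate dependent.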

This text proposes a genealogy of biopolitics based on Michel Foucault’s thought, and on an understanding of it as a philosophico-political notion. In order to elaborate this genealogy, the text takes as its starting point not only politics but also life, as the second component of the term. The hypothesis is the following: To understand what biopolitics means, we have to take seriously Foucault’s assertion of an indetermination of life, as the correlate of power and knowledge. This notion emerges in the epistemic break that takes place around 1800 and that entails the opening up of the notion of biopolitics under the name of governmentality, implying that life is not only the object of biopolitics but also serves as its model.

For the assessment of old buildings, thermographic analysis with infrared cameras is nowadays widely employed. Image processing and evaluation can be economically practicable only if the image evaluation can be automated to the largest extent. For that reason, methods of computer vision for evaluating thermal images are presented in this paper. To detect typical elements in thermal images (gray-value images), such as thermal bridges and lintels, methods of digital image processing have been applied, for which numerical procedures are available to transform, modify and encode images. Image processing can be regarded as a multi-stage process: in order to carry out image analysis from image formation through enhancement and segmentation to categorization, appropriate functions must be implemented. For this purpose, different measuring procedures and methods for automated detection and evaluation have been tested.
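The segmentation stage can be illustrated with an elementary Python sketch: thresholding a gray-value (temperature) image yields a mask of candidate thermal bridges; the pixel values are invented:

```python
def warm_regions(img, threshold):
    """Segment a gray-value thermal image: pixels above the threshold are
    treated as candidate thermal bridges; returns a binary mask."""
    return [[1 if px > threshold else 0 for px in row] for row in img]

img = [[18, 19, 25],
       [18, 24, 26],
       [17, 18, 19]]          # toy surface temperatures
mask = warm_regions(img, 22)  # 1 marks unusually warm pixels
```

A real pipeline would precede this with image enhancement and follow it with connected-component analysis and categorization of the detected regions, as outlined in the text.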

The Lucas-Kanade tracker has proven to be an efficient and accurate method for calculating the optical flow. However, this algorithm can reliably track only suitable image features such as corners and edges. Therefore, the optical flow can only be calculated for a few points in each image, resulting in sparse optical flow fields. Accumulating these vectors over time is a suitable method for retrieving a dense motion vector field. However, the accumulation process limits the application of the proposed method to fixed camera setups. Here, a histogram-based approach is favored to allow more than a single typical flow vector per pixel. The resulting vector field can be used to detect roads and prescribed driving directions which constrain object movements. The motion structure can be modeled as a graph: the nodes represent entry and exit points for road users as well as crossings, while the edges represent typical paths.
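The histogram-based accumulation can be sketched in Python: each sparse flow vector votes into a direction histogram at its pixel, so several typical directions can coexist at one location (the binning scheme is an assumption for the sketch):

```python
from collections import Counter
from math import atan2, pi

def accumulate_directions(flows, bins=8):
    """Accumulate sparse flow vectors ((x, y), (u, v)) into per-pixel
    direction histograms; several typical directions can coexist."""
    hists = {}
    for (x, y), (u, v) in flows:
        # map the flow angle in (-pi, pi] onto one of `bins` direction bins
        b = int(((atan2(v, u) + pi) / (2 * pi)) * bins) % bins
        hists.setdefault((x, y), Counter())[b] += 1
    return hists
```

Peaks in a pixel's histogram correspond to the typical driving directions through that location, from which the road graph (entry/exit nodes, typical paths as edges) can be derived.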

It is well known that the solution of the fundamental equations of linear elasticity for a homogeneous isotropic material in the plane stress and plane strain cases can be equivalently reduced to the solution of a biharmonic equation. The discrete version of the theorem of Goursat is used to describe the solution of the discrete biharmonic equation with the help of two discrete holomorphic functions. In order to obtain a Taylor expansion of discrete holomorphic functions, we introduce a basis of discrete polynomials which fulfill the so-called Appell property with respect to the discrete adjoint Cauchy-Riemann operator. All these steps are very important in the field of fracture mechanics, where stress and displacement fields in the neighborhood of singularities caused by cracks and notches have to be calculated with high accuracy. Using the sum representation of holomorphic functions, it seems possible to reproduce the order of the singularity and to determine important mechanical characteristics.
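For orientation, the continuous counterpart of the representation used here is the classical Goursat theorem: every biharmonic function $U$ can be written in terms of two holomorphic functions $\varphi$ and $\chi$,

```latex
\Delta^2 U = 0
\quad\Longrightarrow\quad
U(x,y) = \operatorname{Re}\bigl(\bar{z}\,\varphi(z) + \chi(z)\bigr),
\qquad z = x + \mathrm{i}y .
```

The discrete theory mirrors this: two discrete holomorphic functions, expanded in the discrete Appell basis, represent the solution of the discrete biharmonic equation.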

This article presents the Rigid Finite Element Method applied to the calculation of the deflection of reinforced concrete beams with cracks. Initially, this method was used in the shipbuilding industry; later, it was adapted to the calculation of homogeneous bar structures. In this method, rigid mass discs serve as the element model. In the planar case, three generalized coordinates (two translational and one rotational) correspond to each disc. These discs are connected by elastic ties. The novel idea is to take a discrete crack into account in the Rigid Finite Element Method. It consists in a suitable reduction of the rigidity of the rotational ties located at the spots where cracks occurred. The susceptibility of such a tie results from the flexural deformability of the element and the occurrence of the crack. As part of the numerical analyses, the influence of cracks on the total deflection of the beams was determined. Furthermore, the results of the calculations were compared with the results of an experiment. Overestimation of the calculated deflections relative to the measured deflections was found. The article quantifies this overestimation and describes its causes.

In this paper we present the rudiments of a higher-dimensional analogue of the Szegö kernel method to compute 3D mappings from elementary domains onto the unit sphere. This is a formal construction which provides us with a good substitute for the classical conformal Riemann mapping. We give explicit numerical examples and discuss a comparison of the results with those obtained alternatively by the Bergman kernel method.

In this note, we describe quite explicitly the Howe duality for Hodge systems and connect it with well-known facts of harmonic analysis and Clifford analysis. In Section 2, we briefly recall the Fisher decomposition and the Howe duality for harmonic analysis. In Section 3, the well-known fact that Clifford analysis is a real refinement of harmonic analysis is illustrated by the Fisher decomposition and the Howe duality for the space of spinor-valued polynomials in Euclidean space under the so-called L-action. On the other hand, for Clifford-algebra-valued polynomials we can consider another action, called in Clifford analysis the H-action. In the last section, we recall the Fisher decomposition for the H-action obtained recently. Whereas in Clifford analysis the prominent role is played by the Dirac equation, in this case the basic set of equations is formed by the Hodge system. Moreover, the analysis of Hodge systems can even be viewed as a refinement of Clifford analysis. In this note, we describe the Howe duality for the H-action. In particular, in Proposition 1, we recognize the Howe dual partner of the orthogonal group O(m) in this case as the Lie superalgebra sl(2|1). Furthermore, Theorem 2 gives the corresponding multiplicity-free decomposition with an explicit description of the irreducible pieces.
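The Fisher decomposition recalled in Section 2 states, in the harmonic case, that every homogeneous polynomial of degree $k$ splits into radial factors times harmonic polynomials:

```latex
\mathcal{P}_k(\mathbb{R}^m)
= \bigoplus_{j=0}^{\lfloor k/2 \rfloor} |x|^{2j}\,\mathcal{H}_{k-2j}(\mathbb{R}^m),
```

where $\mathcal{H}_{k}$ denotes the harmonic homogeneous polynomials of degree $k$; the refinements for the L- and H-actions replace the harmonic pieces by monogenic ones and by solutions of the Hodge system, respectively.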

THE FOURIER-BESSEL TRANSFORM
(2010)

In this paper we devise a new multi-dimensional integral transform within the Clifford analysis setting, the so-called Fourier-Bessel transform. It appears that in the two-dimensional case, it coincides with the Clifford-Fourier and cylindrical Fourier transforms introduced earlier. We show that this new integral transform satisfies operational formulae which are similar to those of the classical tensorial Fourier transform. Moreover the L2-basis elements consisting of generalized Clifford-Hermite functions appear to be eigenfunctions of the Fourier-Bessel transform.

We briefly review and use the recent comprehensive research on the manifolds of square roots of −1 in real Clifford geometric algebras Cl(p,q) in order to construct the Clifford Fourier transform. Basically, in the kernel of the complex Fourier transform the complex imaginary unit j is replaced by a square root of −1 in Cl(p,q). The Clifford Fourier transform (CFT) thus obtained generalizes previously known and applied CFTs, which replaced the complex imaginary unit j only by blades (usually pseudoscalars) squaring to −1. A major advantage of real Clifford algebra CFTs is their completely real geometric interpretation. We study (left and right) linearity of the CFT for constant multivector coefficients in Cl(p,q), translation (x-shift) and modulation (ω-shift) properties, and signal dilations. We show an inversion theorem. We establish the CFT of vector differentials, partial derivatives, vector derivatives and spatial moments of the signal. We also derive Plancherel and Parseval identities as well as a general convolution theorem.

This paper describes the application of interval calculus to the calculation of plate deflection, taking into account the inevitable and acceptable tolerances of the input data (input parameters). A simply supported reinforced concrete plate was taken as an example. The plate was loaded by uniformly distributed loads. Several parameters that influence the plate deflection are given as closed intervals. Accordingly, the results are obtained as intervals, so it was possible to follow the direct influence of a change of one or more input parameters on the output values (in our example, the deflection) by using one model and one computing procedure. The described procedure could be applied to any FEM calculation in order to keep calculation tolerances, ISO tolerances, and production tolerances within close (admissible) limits. Wolfram Mathematica was used as the tool for the interval calculations.
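A minimal sketch of the underlying idea, assuming nothing about the paper's Mathematica code: closed intervals propagate input tolerances through a deflection formula, so the result is itself an interval enclosing all admissible outputs. The beam-strip formula and the numbers are illustrative only.

```python
# Minimal interval arithmetic (illustrative sketch, not the paper's implementation).

class Interval:
    def __init__(self, lo, hi):
        self.lo, self.hi = min(lo, hi), max(lo, hi)

    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __mul__(self, other):
        products = [self.lo * other.lo, self.lo * other.hi,
                    self.hi * other.lo, self.hi * other.hi]
        return Interval(min(products), max(products))

    def __truediv__(self, other):
        assert other.lo > 0 or other.hi < 0, "divisor interval must not contain 0"
        return self * Interval(1 / other.lo, 1 / other.hi)

    def __repr__(self):
        return f"[{self.lo:.4g}, {self.hi:.4g}]"

# Midspan deflection of a simply supported strip under uniform load,
# w = 5 q L^4 / (384 E I), with q and E given as tolerance intervals.
q = Interval(9.5e3, 10.5e3)     # load, N/m (about +/-5 % tolerance)
E = Interval(28e9, 32e9)        # Young's modulus, Pa
L, I = 4.0, 8.0e-4              # span (m) and second moment of area (m^4), exact

w = (Interval(5, 5) * q * Interval(L**4, L**4)) / (Interval(384, 384) * E * Interval(I, I))
print(w)  # an interval enclosing every deflection consistent with the tolerances
```

One model, one computation: widening or narrowing any input interval immediately shows its direct influence on the deflection bounds.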

Due to the amount of flow simulation and measurement data, automatic detection, classification and visualization of features is necessary for an inspection. Therefore, many automated feature detection methods have been developed in recent years. However, in most cases only one feature class is visualized afterwards, and many algorithms have problems in the presence of noise or superposition effects. In contrast, image processing and computer vision have robust methods for feature extraction and for the computation of derivatives of scalar fields. Furthermore, interpolation and other filters can be analyzed in detail. An application of these methods to vector fields would provide a solid theoretical basis for feature extraction. The authors suggest Clifford algebra as a mathematical framework for this task. Clifford algebra provides a unified notation for scalars and vectors as well as a multiplication of all basis elements. The Clifford product of two vectors provides the complete geometric information about the relative position of these vectors. Integration of this product results in Clifford correlation and convolution, which can be used for template matching of vector fields. For frequency analysis of vector fields and of the behavior of vector-valued filters, a Clifford Fourier transform has been derived for 2D and 3D. Convolution and other theorems have been proved, and fast algorithms for the computation of the Clifford Fourier transform exist. Therefore, the computation of the Clifford convolution can be accelerated by computing it in the Clifford Fourier domain. Clifford convolution and Fourier transform can be used for a thorough analysis and subsequent visualization of flow fields.
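The geometric information carried by the Clifford product of two vectors can be sketched in 2D: the product uv splits into a scalar part (the dot product, encoding the cosine of the enclosed angle) and a bivector part (the wedge, encoding the signed sine). Summing pointwise products gives a toy Clifford correlation. All names are illustrative, not from the authors' implementation.

```python
# Sketch of the 2D Clifford (geometric) product and a pointwise correlation.

def geometric_product_2d(u, v):
    """Return (scalar, bivector) parts of the Clifford product of 2D vectors."""
    dot = u[0] * v[0] + u[1] * v[1]      # scalar part: relative alignment
    wedge = u[0] * v[1] - u[1] * v[0]    # bivector part: signed enclosed area
    return dot, wedge

def clifford_correlation(field, template):
    """Sum of pointwise Clifford products: a similarity measure for vector fields."""
    s = b = 0.0
    for u, v in zip(field, template):
        ds, db = geometric_product_2d(u, v)
        s += ds
        b += db
    return s, b

# Matching a field against itself gives a pure scalar response
# (every pointwise bivector part vanishes):
f = [(1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
print(clifford_correlation(f, f))  # (4.0, 0.0)
```

A nonzero bivector part of the correlation would indicate a systematic rotation between field and template, which is exactly the extra information the scalar correlation alone cannot provide.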

This paper presents a robust model updating strategy for system identification of wind turbines. To control the updating parameters and to avoid ill-conditioning, a global sensitivity analysis using the elementary effects method is conducted. The formulation of the objective function is based on Müller-Slany's strategy for multi-criteria functions. As a simulation-based optimization, a simulation adapter is developed to interface the simulation software ANSYS and the locally developed optimization software MOPACK. Model updating is first tested on the beam model of the rotor blade. The defect between the numerical model and the reference has been markedly reduced by the process of model updating. The effect of model updating becomes more pronounced in the comparison of the measured and the numerical properties of the wind turbine model. The deviations of the frequencies of the updated model are rather small. The complete comparison, including the free vibration modes by the modal assurance criterion, shows the excellent coincidence of the modal parameters of the updated model with the ones from the measurements. By the successful implementation of model validation via model updating, the applicability and effectiveness of the solution concept has been demonstrated.

Due to the complex interactions between the ground, the driving machine, the lining tube and the built environment, the accurate assignment of in-situ system parameters for numerical simulation in mechanized tunneling is always subject to tremendous difficulties. However, the more accurate these parameters are, the more applicable the responses gained from computations will be. In particular, if the entire length of the tunnel lining is examined, then the appropriate selection of the various kinds of ground parameters is decisive for the success of a tunnel project and, more importantly, will prevent potential casualties. In this context, methods of system identification for the adaptation of numerical simulations of ground models are presented. Hereby, both deterministic and probabilistic approaches are considered for typical scenarios representing notable variations or changes in the ground model.

This contribution will be freewheeling in the domain of signal, image and surface processing and touch briefly upon some topics that have been close to the heart of people in our research group. Much of the research carried out worldwide in this domain over the last 20 years deals with multiresolution. Multiresolution makes it possible to represent a function (in the broadest sense) at different levels of detail. This was applied not only to signals and images but also to the solution of all kinds of complex numerical problems. Since wavelets came into play in the 1980s, this idea has been applied and generalized by many researchers. Therefore we use it as the central idea throughout this text. Wavelets, subdivision and hierarchical bases are the appropriate tools to obtain these multiresolution effects. We shall introduce some of the concepts in a rather informal way and show that the same concepts work in one, two and three dimensions. The applications in the three cases are, however, quite different, and thus one wants to achieve very different goals when dealing with signals, images or surfaces. Because completeness in our treatment is impossible, we have chosen to describe two case studies after introducing some concepts of signal processing. These case studies are still the subject of current research. The first one attempts to solve a problem in image processing: how to approximate an edge in an image efficiently by subdivision. The method is based on normal offsets. The second case is the use of Powell-Sabin splines to give a smooth multiresolution representation of a surface. In this context we also illustrate the general method of constructing a spline wavelet basis using a lifting scheme.
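The multiresolution idea can be illustrated with the simplest wavelet transform, the Haar transform: one analysis step splits a signal into a coarse approximation (pairwise averages) and detail coefficients (pairwise differences), and the two together reconstruct the signal exactly. This is a generic textbook sketch, not code from the contribution.

```python
# One level of a Haar wavelet decomposition: the simplest multiresolution step.

def haar_step(signal):
    """Split a signal of even length into averages (coarse) and differences (detail)."""
    assert len(signal) % 2 == 0
    approx = [(a + b) / 2 for a, b in zip(signal[0::2], signal[1::2])]
    detail = [(a - b) / 2 for a, b in zip(signal[0::2], signal[1::2])]
    return approx, detail

def haar_inverse(approx, detail):
    """Perfect reconstruction from averages and differences."""
    signal = []
    for s, d in zip(approx, detail):
        signal += [s + d, s - d]
    return signal

x = [4.0, 2.0, 5.0, 7.0]
a, d = haar_step(x)
print(a, d)                      # [3.0, 6.0] [1.0, -1.0]
assert haar_inverse(a, d) == x   # no information is lost
```

Iterating `haar_step` on the approximation yields the hierarchy of levels of detail that wavelets, subdivision and hierarchical bases all provide in their own ways.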

This paper deals with the modelling and analysis of masonry vaults. Numerical FEM analyses are performed using the LUSAS code. Two vault typologies are analysed (barrel and cross-ribbed vaults), parametrically varying geometrical proportions and constraints. The proposed model and the developed numerical procedure are implemented in a computer analysis. Numerical applications are developed to assess the effectiveness of the model and the efficiency of the numerical procedure. The main objective of the present paper is the development of a computational procedure which allows defining the 3D structural behaviour of masonry vaults. For each investigated example, the homogenized limit analysis approach has been employed to predict the ultimate load and the failure mechanisms. Finally, both a mesh dependence study and a sensitivity analysis are reported. The sensitivity analysis is conducted by varying the mortar tensile strength and the mortar friction angle over a wide range, with the aim of investigating the influence of the mechanical properties of the joints on the collapse load and failure mechanisms. The proposed computer model is validated by comparison with experimental results available in the literature.

The ride of a tram along its line, defined by a timetable, consists of the travel times between subsequent sections and the time the tram spends at stops. In the paper, statistical data collected in the city of Krakow is presented and evaluated. Under Polish conditions, the time trams spend at stops makes up a remarkable 30 % of the total time of tram line operation. Moreover, this time is characterized by large variability. The time a tram spends at a stop consists of the alighting and boarding time and the time lost at the stop after alighting and boarding have ended but before departure. The alighting and boarding time itself usually depends on the random numbers of alighting and boarding passengers and also on the number of passengers inside the vehicle. The time spent at a stop after alighting and boarding have ended, however, is the effect of certain random events, mainly the impossibility of departing from the stop caused by the lack of priorities for public transport vehicles. The main focus of the talk lies on the description and modelling of these effects. This paper is associated with the CIVITAS-CARAVEL project "Clean and better transport in cities". The project has received research funding from the Community's Sixth Framework Programme. The paper reflects only the author's views and the Community is not liable for any use that may be made of the information contained therein.

The paper presents a linear static analysis of continuous orthotropic thin-walled shell structures simply supported at the transverse ends, with an arbitrarily deformable contour of the cross-section. The external loads can be arbitrary as well. This class of structures includes most bridges, scaffold bridges, some roof structures, etc. A numerical example of a steel continuous structure over five spans with an open contour of the cross-section has been solved. The examination of the structure uses the following two computational models: a prismatic structure consisting of isotropic strips, plates and ribs, considering their real interaction; and a smooth orthotropic plate equivalent to the structure in the first model. The displacements and forces characterizing the stressed and deformed state of the structure have been determined, and the results obtained from the two solutions have been analyzed. The study is made with the force method in combination with the analytical finite strip method (AFSM) in displacements. The basic system is obtained by separating the superstructure from the understructure at the places of the intermediate supports and consists of two parts. The first part is a single-span thin-walled prismatic shell structure; the second part comprises the supports (columns, space frames, etc.). The connection between the superstructure and the intermediate supports is made under arbitrary supporting conditions. The forces at the supporting points in the direction of the removed connections are assumed to be the basic unknowns of the force method. The solution of the superstructure has been accomplished by the AFSM in displacements. The structure is divided in only one (transverse) direction into a finite number of plane strips connected to each other in longitudinal linear nodes. The three displacements of the points on the node lines and the rotation around those lines are assumed to be the basic unknowns in each node.
The boundary conditions of each strip of the basic system correspond to simple support along the transverse ends and restraint along the longitudinal ones. The particular strip of the basic system has been solved by the method of single trigonometric series. The method is reduced to solving a discrete structure in displacements and restoring its continuity at the places of the sections made, with respect to both displacements and forces. The two parts of the basic system have been solved in sequence under the action of unit values of each of the basic unknowns and under the external load. The solution of the support part is accomplished using FEM structural analysis software. The basic unknown forces have been determined from a system of canonical equations, i.e. the conditions of deformation continuity at the places of the connections removed between the superstructure and the intermediate supports. The final displacements and forces at an arbitrary point of the continuous superstructure have been determined using the principle of superposition. The computations have been carried out with software developed in Visual Fortran 5.0 for PC.

In recent years, special hypercomplex Appell polynomials have been introduced by several authors and their main properties have been studied by different methods and with different objectives. As in the classical theory of Appell polynomials, their generating function is a hypercomplex exponential function. The observation that this generalized exponential function has, for example, a close relationship with Bessel functions confirmed the practical significance of such an approach to special classes of hypercomplex differentiable functions. Its usefulness for combinatorial studies has also been investigated. Moreover, an extension of those ideas led to the construction of complete sets of hypercomplex Appell polynomial sequences. Here we show how this opens the way for a more systematic study of the relation between some classes of Special Functions and Elementary Functions in Hypercomplex Function Theory.

What is nowadays called (classic) Clifford analysis consists in the establishment of a function theory for functions belonging to the kernel of the Dirac operator. While such functions can very well describe problems of a particle with internal SU(2)-symmetries, higher order symmetries are beyond this theory. Although many modifications (such as Yang-Mills theory) were suggested over the years, they could not address the principal problem: the need for an n-fold factorization of the d’Alembert operator. In this paper we present the basic tools of a fractional function theory in higher dimensions for the transport operator (α = 1/2), by means of a fractional correspondence to the Weyl relations via fractional Riemann-Liouville derivatives. A Fischer decomposition, fractional Euler and Gamma operators, a monogenic projection, and basic fractional homogeneous powers are constructed.

The aim of this paper is to discuss explicit series constructions for the fundamental solution of the Helmholtz operator on some important examples of non-orientable conformally flat manifolds. In the context of this paper we focus on higher dimensional generalizations of the Klein bottle, which in turn generalize the higher dimensional Möbius strips that we discussed in preceding works. We discuss some basic properties of pinor-valued solutions to the Helmholtz equation on these manifolds.

This research focuses on an approach to describe principles in architectural layout planning within the domain of revitalization. With the aid of mathematical rules, which are executed by a computer, solutions to design problems are generated. Provided that "design" is in principle a combinatorial problem, i.e. a constraint-based search for an overall optimal solution of a problem, an exemplary method will be described to solve such problems in architectural layout planning. To avoid conflicts relating to theoretical subtleties, a customary approach adopted from Operations Research has been chosen in this work. In this approach, design is a synonym for planning, which can be described as a systematic and methodical course of action for the analysis and solution of current or future problems. The planning task is defined as the analysis of a problem with the aim of preparing optimal decisions by the use of mathematical methods. The decision problem of a planning task is represented by an optimization model, and an efficient algorithm is applied in order to aid in finding one or more solutions to the problem. The basic principle underlying the approach presented herein is the understanding of design in terms of searching for solutions that fulfill specific criteria. This search is executed by the use of a constraint programming language.
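The combinatorial character of layout planning can be sketched with a toy constraint-based search: rooms are assigned to slots along a strip so that required adjacencies hold. A real system would use a constraint programming language as described above; the rooms, constraints and brute-force search here are purely illustrative assumptions.

```python
# Toy sketch: layout planning as constraint-based combinatorial search.

from itertools import permutations

rooms = ["kitchen", "dining", "living", "bath"]
# constraint: each pair must occupy neighboring slots on the strip
adjacent_pairs = [("kitchen", "dining"), ("living", "dining")]

def satisfies(layout):
    """Check every required adjacency against the slot positions."""
    pos = {room: i for i, room in enumerate(layout)}
    return all(abs(pos[a] - pos[b]) == 1 for a, b in adjacent_pairs)

# enumerate the whole solution space and keep the feasible layouts
solutions = [layout for layout in permutations(rooms) if satisfies(layout)]
print(len(solutions), solutions[0])
```

Brute force is only viable for toy instances; constraint propagation prunes the search space, which is why the paper relies on a constraint programming language rather than exhaustive enumeration.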

The paper is dedicated to exploring the solvability of the market segmentation problem with the help of linear convolution algorithms. The mathematical formulation of this problem is an interval problem of covering a bipartite graph by stars. Vertices of the first partition correspond to types of commodities, vertices of the second to customer groups. An appropriate method is offered for reducing the interval problem to a two-criterion problem that has one implemented linear convolution algorithm. It is proved that the multicriterion, and consequently the interval, market segmentation problem cannot be solved by means of a linear convolution algorithm.

We consider efficient numerical methods for the solution of partial differential equations with stochastic coefficients or right hand side. The discretization is performed by the stochastic finite element method (SFEM). Separation of spatial and stochastic variables in the random input data is achieved via a Karhunen-Loève expansion or Wiener's polynomial chaos expansion. We discuss solution strategies for the Galerkin system that take advantage of the special structure of the system matrix. For stochastic coefficients linear in a set of independent random variables we employ Krylov subspace recycling techniques after having decoupled the large SFEM stiffness matrix.
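The separation of variables via a Karhunen-Loève expansion can be sketched on the smallest possible case: a field observed at two points whose 2×2 covariance matrix has analytic eigenpairs. The field is then a finite sum of eigenfunctions weighted by independent standard normal variables. This is a generic illustration of the expansion, not the paper's SFEM setting; the covariance and all names are assumptions.

```python
# Sketch of a Karhunen-Loeve expansion for a two-point Gaussian field.
# For C = [[1, rho], [rho, 1]] the eigenpairs are analytic:
# lambda = 1 +/- rho, eigenvectors (1, 1)/sqrt(2) and (1, -1)/sqrt(2).

import math, random

rho = 0.8
eigvals = [1 + rho, 1 - rho]
eigvecs = [(1 / math.sqrt(2), 1 / math.sqrt(2)),
           (1 / math.sqrt(2), -1 / math.sqrt(2))]

def sample_field(rng):
    """a(x_k) = sum_i sqrt(lambda_i) * xi_i * phi_i(x_k), with xi_i iid N(0,1)."""
    xis = [rng.gauss(0.0, 1.0) for _ in eigvals]
    return [sum(math.sqrt(lam) * xi * vec[k]
                for lam, xi, vec in zip(eigvals, xis, eigvecs))
            for k in range(2)]

rng = random.Random(1)
samples = [sample_field(rng) for _ in range(200_000)]
n = len(samples)
# the empirical covariance of the two points should approach rho
c01 = sum(a * b for a, b in samples) / n
print(round(c01, 1))  # 0.8
```

In the SFEM setting the same structure appears with many points and a truncated sum: the eigenproblem of the discretized covariance supplies the deterministic modes, and the random variables carry the stochastic dimension.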

The contribution presents a model that is able to simulate construction duration and cost for a building project. This model predicts a set of expected project costs and a duration schedule depending on input parameters such as production speed, scope of work, time schedule, bonding conditions, and maximum and minimum deviations from the scope of work and production speed. The simulation model is able to calculate, on the basis of an input level of probability, the adequate construction cost and time duration of a project. The reciprocal view aims at finding the adequate level of probability for a given construction cost and activity durations. The interpretive outputs of the application software include the compilation of a presumed dynamic progress chart. This progress chart represents the expected scenario of the development of a building project, with a mapping of potential time dislocations for particular activities. The calculation of the presumed dynamic progress chart is based on an algorithm which calculates mean values as a partial result of the simulated building project. Construction cost and time models are, in many ways, useful tools in project management. Clients are able to make proper decisions about the time and cost schedules of their investments. Consequently, building contractors are able to schedule the predicted project cost and duration before any decision is finalized.
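The mapping from an input level of probability to a duration can be sketched with a small Monte Carlo run: activity durations vary between minimum and maximum deviations from the plan, and the quantile of the simulated totals answers "within how many days do X % of runs finish?". The triangular distributions, the sequential activities and the numbers are illustrative assumptions, not the paper's software.

```python
# Monte Carlo sketch of a duration model with min/max deviations per activity.

import random

# planned durations (days) as (min, most likely, max) per activity;
# activities are assumed strictly sequential for simplicity
activities = [(8, 10, 14), (18, 20, 26), (4, 5, 8)]

def simulate_total(rng):
    """One project realization: sum of randomly drawn activity durations."""
    return sum(rng.triangular(lo, hi, mode) for lo, mode, hi in activities)

rng = random.Random(42)
totals = sorted(simulate_total(rng) for _ in range(10_000))

level = 0.85                       # input level of probability
duration = totals[int(level * len(totals))]
print(f"{level:.0%} of runs finish within {duration:.1f} days")
```

The reciprocal view is the same sorted list read the other way: given a target duration, its rank in `totals` yields the level of probability of meeting it.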

We demonstrate how logical operations can be implemented in ensembles of protoplasmic tubes of acellular slime mold Physarum polycephalum. The tactile response of the protoplasmic tubes is used to actuate analogs of two- and four-input logical gates and memory devices. The slime mold tube logical gates display results of logical operations by blocking flow in mechanically stimulated tube fragments and redirecting the flow to output tube fragments. We demonstrate how XOR and NOR gates are constructed. We also exemplify circuits of hybrid gates and a memory device. The slime mold based gates are non-electronic, simple and inexpensive, and several gates can be realized simultaneously at sites where protoplasmic tubes merge.

A practical framework for generating cross correlated fields with a specified marginal distribution function, an autocorrelation function and cross correlation coefficients is presented in the paper. The contribution promotes a recent journal paper [1]. The approach relies on well known series expansion methods for simulation of a Gaussian random field. The proposed method requires all cross correlated fields over the domain to share an identical autocorrelation function and the cross correlation structure between each pair of simulated fields to be simply defined by a cross correlation coefficient. Such relations result in specific properties of eigenvectors of covariance matrices of discretized field over the domain. These properties are used to decompose the eigenproblem which must normally be solved in computing the series expansion into two smaller eigenproblems. Such decomposition represents a significant reduction of computational effort. Non-Gaussian components of a multivariate random field are proposed to be simulated via memoryless transformation of underlying Gaussian random fields for which the Nataf model is employed to modify the correlation structure. In this method, the autocorrelation structure of each field is fulfilled exactly while the cross correlation is only approximated. The associated errors can be computed before performing simulations and it is shown that the errors happen especially in the cross correlation between distant points and that they are negligibly small in practical situations.

The paper gives the results of scientific research which, being based on probabilistic and statistical modeling, identifies the relationship between certain socio-economic factors and the number of people killed in road accidents in the regions of the Russian Federation. It notes the similarity of processes across various fields in which loss of life occurs. Scientific methods and techniques were used in the process of data processing and in the study findings: a systematic approach, methods of system analysis (algorithmization, mathematical programming) and mathematical statistics. The scientific novelty lies in the formulation, formalization and solution of problems related to the analysis of regional road traffic accidents and its modeling, taking into account the factors of socio-economic impact.

From the passenger's perspective, punctuality is one of the most important features of tram route operation. We present a stochastic simulation model with a special focus on determining important factors of influence. The statistical analysis is based on large samples (sample size nearly 2000) accumulated from comprehensive measurements on eight tram routes in Cracow. For the simulation, we are not only interested in average values but also in stochastic characteristics like the variance and other properties of the distribution. A realization of tram operation is assumed to be a sequence of running times between successive stops and times spent by the tram at the stops, divided into passenger alighting and boarding times and times waiting for the possibility of departure. The running time depends on the kind of track separation (including the priorities at traffic lights), the length of the section and the number of intersections. For every type of section, a linear mixed regression model describes the average running time and its variance as functions of the length of the section and the number of intersections. The regression coefficients are estimated by the iteratively re-weighted least squares method. Alighting and boarding time mainly depends on the type of vehicle, the number of passengers alighting and boarding, and the occupancy of the vehicle. For the distribution of the time waiting for the possibility of departure, suitable distributions like the Gamma and Lognormal distributions are fitted.
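The regression step can be sketched in its simplest form: ordinary least squares fitting average running time against a single predictor, the section length. The paper's model is a linear mixed regression over several predictors estimated by iteratively re-weighted least squares; the one-predictor OLS and the data below are simplifying assumptions for illustration.

```python
# OLS sketch: running time as a linear function of section length.

def ols(xs, ys):
    """Ordinary least squares for y = a + b*x (closed-form formulas)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    a = my - b * mx
    return a, b

lengths = [300, 450, 600, 750, 900]   # section length, m (illustrative)
times = [48, 65, 83, 101, 118]        # measured running time, s (illustrative)

a, b = ols(lengths, times)
print(f"t = {a:.1f} + {b:.3f} * length")  # intercept ~ fixed time, b ~ s per metre
```

The intercept absorbs fixed time costs (acceleration, braking) while the slope captures the per-metre travel rate; intersections would enter as a second predictor in the full model.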

SIMULATION AND MATHEMATICAL OPTIMIZATION OF THE HYDRATION OF CONCRETE FOR AVOIDING THERMAL CRACKS
(2010)

After the mixing of concrete, hardening starts by an exothermic chemical reaction known as hydration. As the reaction rate depends on the temperature, the time in the description of the hydration is replaced by the maturity, which is defined as an integral over a certain function of the temperature. The temperature distribution is governed by the heat equation with a right-hand side depending on the maturity and the temperature itself. We compare the performance of different higher-order time integration schemes with automatic time step control. The simulation of the heat distribution is important because the development of the mechanical properties is driven by the hydration. During this process it is possible that the tensile stresses exceed the tensile strength and cracks occur. The goal is to produce cheap concrete without cracks. Simple crack criteria use only temperature differences; more involved ones are based on thermal stresses. If the criterion predicts cracks, some changes in the input data are needed. This can be interpreted as optimization. The final goal will be to apply model-based optimization (in contrast to simulation-based optimization) to the problem of the hydration of young concrete and the avoidance of cracks. The first step is the simulation of the hydration, on which we focus in this paper.
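The maturity concept can be sketched with a discretized version of the integral: time steps are weighted by an Arrhenius-type rate factor of the temperature, yielding an "equivalent age" at a reference temperature. The activation energy and reference temperature below are typical textbook values, not parameters from the paper.

```python
# Sketch of the maturity (equivalent age) integral with an Arrhenius rate function.

import math

E_A = 33_500.0    # apparent activation energy, J/mol (typical assumed value)
R = 8.314         # universal gas constant, J/(mol K)
T_REF = 293.15    # reference temperature of 20 degC, in K

def rate_factor(temp_c):
    """Arrhenius speed-up of hydration relative to the reference temperature."""
    T = temp_c + 273.15
    return math.exp(-E_A / R * (1.0 / T - 1.0 / T_REF))

def equivalent_age(temps_c, dt_hours):
    """Discretized maturity integral: t_e = sum over steps of k(T) * dt."""
    return sum(rate_factor(t) * dt_hours for t in temps_c)

# 24 h at a constant 20 degC gives exactly 24 equivalent hours,
# while a warmer temperature history matures faster than clock time:
print(equivalent_age([20.0] * 24, 1.0))  # 24.0
print(equivalent_age([35.0] * 24, 1.0))  # > 24
```

In the coupled problem this substitution is what makes the heat equation's right-hand side depend on the temperature history rather than on clock time alone.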

In this paper, three different formulations of a Bernoulli type free boundary problem are discussed. By analyzing the shape Hessian in the case of matching data, well-posed formulations are distinguished from ill-posed ones. A nonlinear Ritz-Galerkin method is applied for discretizing the shape optimization problem. In the case of well-posedness, existence and convergence of the approximate shapes are proven. In combination with a fast boundary element method, efficient first and second order shape optimization algorithms are obtained.

A new application area of software technology is smart living, or sustainable living. Within this area, application platforms are designed and realized with the goal of supporting value added services. In this context, value added services integrate microelectronics, home automation and services to enhance the attractiveness of flats, homes and buildings. Especially real estate companies or service providers dealing with home services are interested in an effective design and management of their services. Service engineering is the approved approach for designing customer oriented service processes. Service engineering consists of several phases, from situation analysis to service creation and service design to service management. This article describes how the service blueprint method can be used to design service processes. Smart living includes all actions to turn a flat into a smart home for living. One special requirement of this application domain is the use of local components (actuators, sensors) within service processes. This article shows how this extended method supports service providers in improving the quality of customer oriented service processes and in deriving the needed interfaces of the involved actors. For the civil engineering process, it will be possible to derive the needed information from a built-in home automation system. The aim is to show how to obtain the smart local components needed to fulfill IT-supported value added services offered later. Value added services focused on inhabitants are grouped into consulting and information, care and supervision, leisure time activities, repairs, mobility and delivery, safety and security, and supply and disposal.

In distributed project organisations and collaboration there is a need for integrating unstructured self-contained text information with structured project data. We consider this a process of text integration in which various text technologies can be used to externalise text content and consolidate it into structured information or flexibly interlink it with corresponding information bases. However, the effectiveness of text technologies and the potential of text integration vary greatly with the type of documents, the project setup and the available background knowledge. The goal of our research is to establish text technologies within collaboration environments to allow for (a) flexibly combining appropriate text and data management technologies, (b) utilising available context information and (c) sharing text information in accordance with the most critical integration tasks. A particular focus is on Semantic Service Environments that leverage Web service and Semantic Web technologies and adequately support the required systems integration and parallel processing of semi-structured and structured information. The paper presents an architecture for text integration that extends Semantic Service Environments with two types of integration services. The backbone of the Information Resource Sharing and Integration Service is a shared environment ontology that consolidates information on the project context and the available model, text and general linguistic resources. It also allows for the configuration of Semantic Text Analysis and Annotation Services to analyse the text documents, as well as for capturing the discovered text information and sharing it through semantic notification and retrieval engines. A particular focus of the paper is the definition of the overall integration process configuring a complementary set of analysis and information sharing components.

SELECTION AND SCALING OF GROUND MOTION RECORDS FOR SEISMIC ANALYSIS USING AN OPTIMIZATION ALGORITHM
(2015)

Nonlinear time history analysis and seismic performance based methods require a set of scaled ground motions. The conventional procedure of ground motion selection is based on matching motion properties, e.g. magnitude, amplitude, fault distance, and fault mechanism. The seismic target spectrum is only used in the scaling process following the random selection process. Therefore, the aim of the paper is to present a procedure to select sets of ground motions from a built database of ground motions. The selection procedure is based on solving an optimization problem using Dijkstra's algorithm to match the selected set of ground motions to a target response spectrum. The selection and scaling procedure for optimized sets of ground motions is presented by examining analyses of nonlinear single degree of freedom systems.
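The scaling step has a simple closed form worth sketching: the scale factor that best matches a record's response spectrum to the target spectrum in the least-squares sense. The paper's selection step via Dijkstra's algorithm is not reproduced here, and the spectral values below are illustrative numbers.

```python
# Least-squares scale factor matching a record's spectrum to a target spectrum.

def optimal_scale(record_sa, target_sa):
    """Minimize sum((s * Sa_i - T_i)^2) over the scalar scale factor s."""
    num = sum(r * t for r, t in zip(record_sa, target_sa))
    den = sum(r * r for r in record_sa)
    return num / den

def misfit(record_sa, target_sa, s):
    """Sum of squared deviations of the scaled record from the target."""
    return sum((s * r - t) ** 2 for r, t in zip(record_sa, target_sa))

# spectral accelerations (g) at common periods, illustrative values only
target = [0.90, 0.75, 0.55, 0.35]
record = [0.50, 0.40, 0.30, 0.20]

s = optimal_scale(record, target)
print(round(s, 3), round(misfit(record, target, s), 4))
```

A selection algorithm can then rank candidate records (or sets of records) by this residual misfit at their individually optimal scale factors.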

In this paper, experimental studies and numerical analyses carried out on a reinforced concrete beam are partially reported. They aimed to apply the rigid finite element method to calculations for reinforced concrete beams using a discrete crack model. Hence, the rotational ductility resulting from crack occurrence had to be determined. A relationship for calculating it in static equilibrium was proposed. Laboratory experiments proved that the dynamic ductility is considerably smaller; therefore, a scaling of the empirical parameter was carried out. Consequently, a formula for its value depending on the reinforcement ratio was obtained.

Many researchers are working on developing robots into adequate partners, be it at the workplace, at home or in leisure activities, or on enabling elderly persons to lead a self-determined, independent life. While quite some progress has been made in e.g. speech or emotion understanding, processing and expressing, the relations between humans and robots are usually only short-term. In order to build long-term, i.e. social, relations, qualities like empathy, trust building, dependability, non-patronizing behavior, and others will be required. But these are just terms, and as such no adequate starting points for "programming" these capacities, let alone for avoiding the problems and pitfalls in interactions between humans and robots. However, a rich source for doing this is available, until now unused for this purpose: artistic productions, namely literature, theater plays, not to forget operas, and films with their multitude of examples. Poets, writers, dramatists, screenwriters, etc. have studied for centuries the facets of interactions between persons, their dynamics, and the related snags. And since we conceive of human-robot relations as master-servant relations - the human obviously being the master - the study of these relations will be prominent. A procedure is proposed, with four consecutive steps, namely Selection, Analysis, Categorization, and Integration. Only if we succeed in developing robots which are seen as servants will we be successful in supporting and helping humans through robots.

RESEARCH OF DEFORMATION OF MULTILAYERED PLATES ON UNDEFORMABLE BASIS BY UNFLEXURAL SPECIFIED MODEL
(2006)

The stress-strain state (SSS) of multilayered plates on an undeformable foundation is investigated. The computational scheme of the transversely loaded plate is formed by attaching the plate symmetrically with respect to its contact surface with the foundation. The plate of double thickness is then loaded symmetrically on both sides with respect to its median surface. This allows modeling only the unflexural deformation, which reduces the number of unknowns and the overall order of differentiation of the resolving system of equations. The developed refined continual model takes into account deformations of transverse shear and transverse compression in a high iterative approximation. Both rigid contact between the foundation and the plate and frictionless shear on the contact surface are considered. Calculations confirm the efficiency of this approach, yielding solutions that are qualitatively and quantitatively close to three-dimensional solutions.

In this paper we consider three different methods for generating monogenic functions. The first one is related to Fueter's well-known approach to the generation of monogenic quaternion-valued functions by means of holomorphic functions, the second one is based on the solution of hypercomplex differential equations, and the third one is a direct series approach based on the use of special homogeneous polynomials. We illustrate the theory by generating three different exponential functions and discuss some of their properties. Partially supported by the R&D unit Matemática e Aplicações (UIMA) of the University of Aveiro, through the Portuguese Foundation for Science and Technology (FCT), co-financed by the European Community fund FEDER.

We investigate aspects of tram-network section reliability, which form part of a model of the reliability of the whole city tram network. One of the main points of interest is the chronological development of disturbances (namely the differences between the departure time provided in the schedule and the real departure time) on subsequent sections during tram line operation. These developments were observed in comprehensive measurements carried out in Krakow during the rebuilding of one of the main transportation nodes (Rondo Mogilskie). The construction activities caused large disturbances in tram line operation, with effects extending to neighboring sections. In a second part, the stochastic character of section running time is analyzed in more detail. Sections with only one beginning stop as well as sections with two or three beginning stops located on different streets of an intersection are taken into consideration. Whether results from sections with two beginning stops can be merged into one set is checked with suitable statistical tests that compare the means of the two samples. Section running time may depend on the gap between two following trams and on the deviation from the schedule; this dependence is described by a multiple regression formula. The main measurements were done in the city center of Krakow in two stages: before and after major changes in the tramway infrastructure.
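The mean-comparison test mentioned above can be sketched briefly. The snippet below uses Welch's t-statistic (one common choice when the two samples may have unequal variances); the helper name `welch_t` and the running-time figures are invented for illustration, not taken from the Krakow measurements.

```python
import math

def welch_t(sample_a, sample_b):
    """Welch's t statistic and approximate degrees of freedom for two
    independent samples with possibly unequal variances."""
    na, nb = len(sample_a), len(sample_b)
    ma = sum(sample_a) / na
    mb = sum(sample_b) / nb
    va = sum((x - ma) ** 2 for x in sample_a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in sample_b) / (nb - 1)
    se2 = va / na + vb / nb
    t = (ma - mb) / math.sqrt(se2)
    # Welch-Satterthwaite approximation of the degrees of freedom
    df = se2 ** 2 / ((va / na) ** 2 / (na - 1) + (vb / nb) ** 2 / (nb - 1))
    return t, df

# hypothetical running times (s) measured from two beginning stops
stop_1 = [112, 118, 109, 121, 115, 117]
stop_2 = [114, 120, 111, 119, 116, 122]
t, df = welch_t(stop_1, stop_2)
```

If |t| stays below the critical value for the computed degrees of freedom, the two samples may be pooled into one set, as described above.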

The theory of regular quaternionic functions of a reduced quaternionic variable is a 3-dimensional generalization of complex analysis. The Moisil-Theodorescu system (MTS) is a regularity condition for such functions depending on the radius vector r = ix+jy+kz seen as a reduced quaternionic variable. The analogues of the main theorems of complex analysis for the MTS in quaternion form are established: Cauchy's theorem, the Cauchy integral formula, Taylor and Laurent series, approximation theorems and properties of Cauchy-type integrals. The analogues of positive powers (inner spherical monogenics) are investigated: the recurrence formulas between the inner spherical monogenics and the explicit formulas are established. Some applications of regular functions in elasticity theory and hydrodynamics are given.

Due to the increasing number of wind energy converters, the accurate assessment of the lifespan of their structural parts and of the entire converter system is becoming more and more important. Lifespan-oriented design, inspections and remedial maintenance are challenging because of the complex dynamic behavior of these systems. Wind energy converters are subjected to stochastic turbulent wind loading, causing corresponding stochastic structural response and vibrations associated with an extreme number of stress cycles (up to 10^9, according to the rotation of the blades). Currently, wind energy converters are designed for a service life of about 20 years. However, this estimation is more or less a rule of thumb, not backed by profound scientific analyses or accurate simulations. By contrast, modern structural health monitoring systems allow an improved identification of deterioration and can thereby drastically advance the lifespan assessment of wind energy converters. In particular, monitoring systems based on artificial intelligence techniques represent a promising approach towards cost-efficient and reliable real-time monitoring. Therefore, an innovative real-time structural health monitoring concept based on software agents is introduced in this contribution. Recently, this concept has also been turned into a real-world monitoring system, developed in a DFG joint research project at the authors' institute at the Ruhr-University Bochum. In this paper, primarily the agent-based development, implementation and application of the monitoring system are addressed, focusing on the real-time monitoring tasks in due detail.

We present StarWatch, our application for real-time analysis of radio astronomical data in a virtual environment. Serving as an interface to radio astronomical databases or being applied to live data from radio telescopes, the application supports various data filters measuring the signal-to-noise ratio (SNR), Doppler drift, the degree of signal localization on the celestial sphere, and other useful tools for signal extraction and classification. Originally designed for the database of narrow-band signals from the SETI Institute (setilive.org), the application has recently been extended for the detection of wide-band periodic signals, necessary for the search for pulsars. We also address the detection of weak signals possessing arbitrary waveforms and present several data filters suitable for this purpose.

Quality is one of the most important properties of a product. Providing optimal quality can reduce costs for rework, scrap, recalls or even legal actions while satisfying customers' demand for reliability. The aim is to achieve "built-in" quality within the product development process (PDP). The common approach for this is robust design optimization (RDO), which uses stochastic values as constraints and/or objectives to obtain a robust and reliable optimal design. In classical approaches, the effort required for the stochastic analysis is multiplied by the complexity of the optimization algorithm. The suggested approach shows that this effort can be reduced enormously by reusing previously obtained data: the support point set of an underlying metamodel is filled iteratively during the ongoing optimization in regions of interest, where necessary. A simple example shows that this is possible without significant loss of accuracy.

The concept of a sensitivity analysis of the limit state of a structure with respect to selected basic variables is presented. The sensitivity is expressed in the form of the probability distribution of the limit state of the structure. The analysis is performed by a problem-oriented Monte Carlo simulation procedure based on the problem-specific definition of the elementary event as a structural limit state; thus the sample space consists of limit states of the structure. A one-dimensional random multiplier defined on this sample space is introduced. This multiplier refers to the dominant basic variable (or group of variables) of the problem. The numerical procedure results in a set of random numbers whose normalized relative histogram is an estimator of the PDF of the limit state of the structure. Estimators of the reliability, or of the probability of failure, are statistical characteristics of this histogram. The procedure is illustrated by a sensitivity analysis of the serviceability limit state of a monumental structure: the colonnade of the Licheń Basilica in central Poland. The limit state of the structure is examined with reference to the horizontal deflection of the upper deck, with wind actions taken as the dominant variables. It is assumed that the wind load intensities acting on the lower and on the upper storey of the colonnade are identically distributed but correlated random variables. Three correlation variants of these variables are considered, and the corresponding limit state histograms are analysed. The paper ends with conclusions referring to the method and some general remarks on fully probabilistic design.
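The core of the described procedure (sampling limit states and estimating the failure probability from the resulting set of numbers) can be sketched as follows. The distributions, the correlation model and all constants below are illustrative assumptions, not the data of the Licheń colonnade.

```python
import random

def simulate_limit_state(n_samples=50_000, seed=42):
    """Crude Monte Carlo sketch: limit state g = capacity - demand, where
    the demand is driven by two correlated wind intensities (lower and
    upper storey). All distributions are illustrative only."""
    rng = random.Random(seed)
    rho = 0.5  # assumed correlation between the two wind intensities
    g_values = []
    for _ in range(n_samples):
        z1 = rng.gauss(0.0, 1.0)
        z2 = rho * z1 + (1 - rho ** 2) ** 0.5 * rng.gauss(0.0, 1.0)
        wind_lower = 1.0 + 0.3 * z1   # identically distributed,
        wind_upper = 1.0 + 0.3 * z2   # correlated wind intensities
        deflection = 0.4 * wind_lower + 0.6 * wind_upper
        g_values.append(1.8 - deflection)  # limit state value
    p_f = sum(1 for g in g_values if g < 0) / n_samples
    return g_values, p_f

g_values, p_f = simulate_limit_state()
```

A normalized relative histogram of `g_values` is then the estimator of the limit-state PDF described above, and `p_f` estimates the probability of failure.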

In times of volatile real estate markets and intense competition, powerful systems for analysis and decision support are indispensable. Decisions on investment strategies and individual investments are usually based on several decision-relevant criteria. Different real estate decision alternatives may well exhibit criteria values that do not identify a particular alternative as consistently better or consistently worse. Classical financial models or widespread qualitative methods such as scoring usually cannot adequately accommodate the given complexity. A further development of real estate decision models is possible by transferring and specifying multi-criteria decision support methods. In particular, the subgroup of outranking methods deals with the stepwise structuring, ordering and prioritization of complex choice alternatives. The specific real estate question considered here is the selection and prioritization of target markets in the tactical portfolio management of an institutional real estate portfolio with an international orientation. The decision problem "prioritization of target markets" is formalized with the ELECTRE method.

Many public buildings hold a high economic savings potential with respect to the relevant energy carriers, heat and electricity. Projects for energetic optimization often pay for themselves within a few years. However, the necessary investment funds are only available to a limited extent. Targeted analyses and potential estimates are required in order to prioritize optional measures. Using a public portfolio, the study outlines the necessary investigation steps. The individual potentials are determined via suitable benchmarks; at the portfolio level, specific potential matrices, among other tools, are used. The more strongly the potential is concentrated in a few buildings, the more important an indicator-based prioritization of measures becomes.

The paper proposes a new method for general 3D measurement and 3D point reconstruction. With its features, the method explicitly aims at practical applications: low technical expense and minimal user interaction, a clear separation of the problem into steps that are solved by simple mathematical methods (direct, stable and optimal in the least-squares sense), and scalability. The method expects the internal and radial distortion parameters of the camera(s) as inputs, and a plane quadrangle with known geometry within the scene. First, for each single picture the 3D position of the reference quadrangle (with respect to each camera coordinate frame) is calculated. These 3D reconstructions of the reference quadrangle are then used to obtain the relative external parameters of each camera with respect to the first one. With known external parameters, triangulation is finally possible. The differences from other known procedures are outlined, with attention to the stable mathematical methods (no use of nonlinear optimization) and the low user interaction combined with good results.

Traffic simulation is a valuable tool for the design and evaluation of road networks. Over the years, the level of detail to which urban and freeway traffic can be simulated has increased steadily, shifting from a merely qualitative macroscopic perspective to a very detailed microscopic view, where the behavior of individual vehicles is emulated realistically. With the improvement of behavioral models, however, the computational complexity has also steadily increased, as more and more aspects of real-life traffic have to be considered by the simulation environment. Despite the constant increase in computing power of modern personal computers, microscopic simulation remains computationally expensive, limiting the maximum network size that can be simulated on a single-processor computer in reasonable time. Parallelization can distribute the computing load from a single computer system to a cluster of several computing nodes. To this end, the existing simulation framework had to be adapted to allow for a distributed approach. As the simulation is ultimately targeted to be executed in real time, incorporating real traffic data, only a spatial partition of the simulation was considered, meaning the road network has to be partitioned into subnets of comparable complexity to ensure homogeneous load balancing. The partition process must also ensure that the division between subnets only occurs in regions where no strong interaction between the separated road segments occurs (i.e. not in the direct vicinity of junctions). In this paper, we describe a new microscopic reasoning voting strategy and discuss how far the increasing computational costs of these more complex behaviors lend themselves to a parallelized approach. We show the parallel architecture employed, the communication between computing units using MPIJava, and the benefits and pitfalls of adapting a single-computer application for use on a multi-node computing cluster.
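The spatial-partition step can be illustrated with a deliberately naive sketch: grow one subnet by breadth-first search until it has absorbed half the nodes, and let the remainder form the second subnet. The helper `bfs_partition` and the toy chain network are hypothetical; a real partitioner would additionally weight segments by simulation cost and keep cuts away from junctions, as discussed above.

```python
from collections import deque

def bfs_partition(adj, seed_node):
    """Naive spatial bisection: grow subnet A by BFS from seed_node until
    it holds half of the nodes; the remaining nodes form subnet B."""
    n = len(adj)
    part_a, frontier, seen = set(), deque([seed_node]), {seed_node}
    while frontier and len(part_a) < n // 2:
        node = frontier.popleft()
        part_a.add(node)
        for nb in adj[node]:
            if nb not in seen:
                seen.add(nb)
                frontier.append(nb)
    part_b = set(adj) - part_a
    return part_a, part_b

# toy road network: a chain of 8 connected segments
adj = {i: [j for j in (i - 1, i + 1) if 0 <= j < 8] for i in range(8)}
a, b = bfs_partition(adj, 0)
```

Because BFS grows a connected region, the single cut between the two subnets falls on one edge of the chain rather than scattering boundary segments across the network.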

In civil engineering it is very difficult and often expensive to excite constructions such as bridges and buildings with an impulse hammer or a shaker. This problem can be avoided with the output-only method, a special feature of stochastic system identification: the permanently present ambient noise (e.g. wind, traffic, waves) is sufficient to excite the structures under their operational conditions. The output-only method estimates the observable part of a state-space model which contains the dynamic characteristics of the measured mechanical system. Because the ambient excitation is assumed to be white, there is no need to measure the input. Another advantage of the output-only method is the possibility of obtaining highly detailed models by a special technique called the polyreference setup. To emulate the availability of a much larger set of sensors, data from varying sensor locations are collected: several successive data sets are recorded with sensors at different locations (moving sensors) and at fixed locations (reference sensors). The covariance functions of the reference sensors serve as the basis for normalizing the moving sensors. The result of the subsequent subspace-based system identification is a highly detailed black-box model that contains the weighting function, including the well-known dynamic parameters: eigenfrequencies and mode shapes of the mechanical system. The emphasis of this lecture is the presentation of an extensive damage detection experiment. A 53-year-old prestressed concrete tied-arch bridge in Hünxe (Germany) was deconstructed in 2005; beforehand, numerous vibration measurements were carried out. The first system modification experiment was an additional support near the bridge bearing of one main girder. In a further experiment, one hanger of one tied arch was cut through as an induced damage. Some first outcomes of the described experiments are presented.

It is well known that complex quaternion analysis plays an important role in the study of higher-order boundary value problems of mathematical physics. Following the ideas developed for real quaternion analysis, the paper deals with certain orthogonal decompositions of the complex quaternion Hilbert space into subspaces of null solutions of a Dirac-type operator with an arbitrary complex potential. We then apply them to related boundary value problems and prove existence and uniqueness as well as explicit representation formulae for the underlying solutions.

This paper deals with the development of a new multi-objective evolution strategy in combination with an integrated pollution-load and water-quality model. The optimization algorithm combines the advantages of the Non-Dominated Sorting Genetic Algorithm and self-adaptive evolution strategies. Identifying a good spread of solutions on the Pareto-optimal front and optimizing a large number of decision variables both demand numerous simulation runs. In addition, statements with regard to the frequency of critical concentrations and peak discharges require continuous long-term simulations. Therefore, a fast-running integrated simulation model is needed that still provides the required precision of the results. For this purpose, a hydrological deterministic pollution-load model has been coupled with a river water-quality model and a rainfall-runoff model; wastewater treatment plants are simulated in a simplified way. The functionality of the optimization and simulation tool has been validated by analyzing a real catchment area including the sewer system, WWTP, water body and natural river basin. For the optimization/rehabilitation of the urban drainage system, both innovative and approved measures have been examined and used as decision variables. Investment costs and river water quality criteria have been used as objective functions.

The sizing of simple resonators like guitar strings or laser mirrors is directly connected to the wavelength and poses no complex optimisation problem. This is not the case with liquid-filled acoustic resonators of non-trivial geometries, where several masses and stiffnesses of the structure and the fluid have to fit together. This creates a scenario of many competing and interacting resonances varying in relative strength and frequency when design parameters change. Hence, the resonator design involves a parameter-tuning problem with many local optima. As its solution, evolutionary algorithms (EA) coupled to a forced-harmonic FE simulation are presented. A new hybrid EA is proposed and compared to two state-of-the-art EAs on selected test problems. The motivating background is the search for better resonators suitable for sonofusion experiments, where extreme states of matter are sought in collapsing cavitation bubbles.
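A minimal self-adaptive (mu+lambda) evolution strategy of the kind compared in the paper can be sketched as follows. This is a generic textbook ES applied to the Rastrigin test function (chosen here because of its many local optima), not the authors' hybrid algorithm or their FE-coupled objective.

```python
import math
import random

def evolution_strategy(f, dim=2, mu=5, lam=20, gens=200, seed=1):
    """Minimal self-adaptive (mu+lambda) evolution strategy minimizing f.
    Each individual carries its own mutation step size sigma."""
    rng = random.Random(seed)
    tau = 1.0 / math.sqrt(2.0 * dim)  # learning rate for sigma
    pop = [([rng.uniform(-5, 5) for _ in range(dim)], 1.0) for _ in range(mu)]
    for _ in range(gens):
        offspring = []
        for _ in range(lam):
            x, sigma = rng.choice(pop)
            s = sigma * math.exp(tau * rng.gauss(0, 1))   # mutate step size
            y = [xi + s * rng.gauss(0, 1) for xi in x]    # mutate position
            offspring.append((y, s))
        # elitist (mu+lambda) selection: keep the mu best of parents+offspring
        pop = sorted(pop + offspring, key=lambda ind: f(ind[0]))[:mu]
    return pop[0]

# Rastrigin function: many local optima, global minimum 0 at the origin
def rastrigin(x):
    return 10 * len(x) + sum(xi * xi - 10 * math.cos(2 * math.pi * xi) for xi in x)

best_x, best_sigma = evolution_strategy(rastrigin)
```

The self-adaptation of `sigma` lets the search shrink its steps as it closes in on an optimum, which is what makes ES variants attractive for multimodal tuning problems like the resonator design above.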

Performing parameter identification prior to numerical simulation is an essential task in geotechnical engineering. However, it has to be kept in mind that the accuracy of the obtained parameters is closely related to the chosen experimental setup, such as the number of sensors as well as their locations. Well-considered sensor positions can increase the quality of the measurements and reduce the number of monitoring points. This paper illustrates the concept by means of a loading device that is used to identify the stiffness and permeability of soft clays. With an initial setup of the measurement devices, the pore water pressure and the vertical displacements are recorded and used to identify the aforementioned parameters. Starting from these identified parameters, the optimal measurement setup is investigated with a method based on global sensitivity analysis. The method yields an optimal sensor layout, assuming three sensors for each measured quantity, and the results are discussed.

The Laguerre polynomials appear naturally in many branches of pure and applied mathematics and mathematical physics. Debnath introduced the Laguerre transform and derived some of its properties; he also discussed applications to the study of heat conduction and to the oscillations of a very long and heavy chain with variable tension. An explicit boundedness result for a class of Laguerre integral transforms is presented.
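As generic background (not Debnath's transform itself), the Laguerre polynomials underlying such transforms are cheap to evaluate via the standard three-term recurrence:

```python
def laguerre(n, x):
    """Laguerre polynomial L_n(x) via the three-term recurrence
    (k+1) L_{k+1}(x) = (2k+1-x) L_k(x) - k L_{k-1}(x),
    starting from L_0(x) = 1 and L_1(x) = 1 - x."""
    if n == 0:
        return 1.0
    prev, cur = 1.0, 1.0 - x
    for k in range(1, n):
        prev, cur = cur, ((2 * k + 1 - x) * cur - k * prev) / (k + 1)
    return cur
```

A quick sanity check is the identity L_n(0) = 1 for every n, which the recurrence reproduces exactly.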

In many industries, companies lose visibility of the human and technical resources of their field service. On the one hand, field service staff are often as free as kings; on the other hand, they do not take part in the daily communication of the central office and suffer from the lack of involvement in the decisions made there. The result is inefficiency, and reproaches in both directions follow. With radio systems and, later, mobile phones, this ditch began to dry up, but the available solutions are still far from productive.

Using a quaternionic reformulation of the electrical impedance equation, we consider a two-dimensional separable-variables conductivity function and, applying two different techniques, we obtain a special class of Vekua equation whose general solution can be approached by virtue of Taylor series in formal powers, for which it is possible to introduce an explicit Bers generating sequence.

ON THE NAVIER-STOKES EQUATION WITH FREE CONVECTION IN STRIP DOMAINS AND 3D TRIANGULAR CHANNELS
(2006)

The Navier-Stokes equations and related ones can be treated very elegantly with the quaternionic operator calculus developed in a series of works by K. Gürlebeck, W. Sprössig and others. This study is extended in this paper. In order to apply the quaternionic operator calculus to solve these types of boundary value problems fully explicitly, one basically needs to evaluate two types of integral operators: the Teodorescu operator and the quaternionic Bergman projector. While the integral kernel of the Teodorescu transform is universal for all domains, the kernel function of the Bergman projector, called the Bergman kernel, depends on the geometry of the domain. With special variants of quaternionic holomorphic multiperiodic functions we obtain explicit formulas for three-dimensional parallel plate channels, rectangular block domains and regular triangular channels. The explicit knowledge of the integral kernels then makes it possible to evaluate the operator equations in order to determine the solutions of the boundary value problem explicitly.

In this paper we consider the time-independent Klein-Gordon equation on some conformally flat 3-tori with given boundary data. We set up an explicit formula for the fundamental solution and show that any solution to the homogeneous Klein-Gordon equation on the torus can be represented as a finite sum over generalized 3-fold periodic elliptic functions that lie in the kernel of the Klein-Gordon operator. Furthermore, we prove Cauchy and Green type integral formulas and set up a Teodorescu and a Cauchy transform for the toroidal Klein-Gordon operator. These in turn are used to set up explicit formulas for the solution to the inhomogeneous version of the Klein-Gordon equation on the 3-torus.

As numerical techniques for solving PDE or integral equations become more sophisticated, the treatment of the generation of the geometric inputs should follow that numerical advancement. This document describes the preparation of CAD data so that they can later be applied to hierarchical BEM or FEM solvers. For the BEM case, the geometric data are described by surfaces which we want to decompose into several curved four-sided patches. We show the treatment of untrimmed and trimmed surfaces. In particular, we provide the prevention of smooth corners, which are bad for diffeomorphism. Additionally, we consider the problem of characterizing whether a Coons map is a diffeomorphism from the unit square onto a planar domain delineated by four given curves. We aim primarily at having not only theoretically correct conditions but also practically efficient methods. For the FEM geometric preparation, we need to decompose a 3D solid into a set of curved tetrahedra. First, we describe a method of decomposition that does not add too many Steiner points (additional points not belonging to the initial boundary nodes of the boundary surface). Then, we provide a methodology for efficiently checking whether a tetrahedral transfinite interpolation is regular, done by a combination of a degree reduction technique and subdivision. Along with the method description, we also report some interesting practical results from real CAD data.
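The Coons map mentioned above has a short closed form: blend the two ruled surfaces in u and v and subtract the bilinear interpolant of the four corners. The sketch below evaluates a bilinearly blended Coons patch; the four boundary curves are illustrative assumptions (a unit square with one curved edge), not data from the paper.

```python
import math

def coons(u, v, bottom, top, left, right):
    """Bilinearly blended Coons map of the unit square onto the domain
    bounded by four planar curves, each a function of a parameter in [0,1].
    Curves must share corners: bottom(0)=left(0), bottom(1)=right(0),
    top(0)=left(1), top(1)=right(1)."""
    bx, by = bottom(u)
    tx, ty = top(u)
    lx, ly = left(v)
    rx, ry = right(v)
    c00, c10 = bottom(0.0), bottom(1.0)
    c01, c11 = top(0.0), top(1.0)
    # ruled surfaces in u and v minus the bilinear corner interpolant
    x = (1 - v) * bx + v * tx + (1 - u) * lx + u * rx \
        - ((1 - u) * (1 - v) * c00[0] + u * (1 - v) * c10[0]
           + (1 - u) * v * c01[0] + u * v * c11[0])
    y = (1 - v) * by + v * ty + (1 - u) * ly + u * ry \
        - ((1 - u) * (1 - v) * c00[1] + u * (1 - v) * c10[1]
           + (1 - u) * v * c01[1] + u * v * c11[1])
    return x, y

# illustrative boundary: a unit square whose top edge bulges upward
bottom = lambda t: (t, 0.0)
top    = lambda t: (t, 1.0 + 0.1 * math.sin(math.pi * t))
left   = lambda t: (0.0, t)
right  = lambda t: (1.0, t)
```

By construction the map interpolates all four boundary curves; whether it is also injective with a nonvanishing Jacobian (i.e. a diffeomorphism) is exactly the question the paper's criteria address.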

Cyber security has become a major concern for users and businesses alike. Cyberstalking and harassment have been identified as a growing anti-social problem. Besides detecting cyberstalking and harassment, there is the need to gather digital evidence, often by the victim. To this end, we provide an overview of and discuss relevant technological means, in particular from text analytics and machine learning, that are capable of addressing the above challenges. We present a framework for the detection of text-based cyberstalking and discuss the role and challenges of some core techniques such as author identification, text classification and personalisation. We then discuss PAN, a network and evaluation initiative that focuses on digital text forensics, in particular author identification.

The paper is devoted to a study of properties of homogeneous solutions of the massless field equation in higher dimensions. We first treat the case of dimension 4. Here we use the two-component spinor language (developed for purposes of general relativity). We describe how massless field operators are related to higher spin analogues of the de Rham sequence, the so-called Bernstein-Gel'fand-Gel'fand (BGG) complexes, and how they are related to twisted Dirac operators. Then we study the similar question in higher (even) dimensions, where we have to use more tools from the representation theory of the orthogonal group. We recall the definition of the massless field equations in higher dimensions and the relations to higher-dimensional conformal BGG complexes. Then we discuss properties of homogeneous solutions of the massless field equation. Using some recent techniques for the decomposition of tensor products of irreducible $Spin(m)$-modules, we are able to add some new results on the structure of the spaces of homogeneous solutions of the massless field equation. In particular, we show that the kernel of the massless field equation in a given homogeneity contains at least one specific irreducible submodule.

The Bernstein polynomials are used for important applications in many branches of mathematics and the other sciences, for instance approximation theory, probability theory, statistics, number theory, the solution of differential equations, numerical analysis, the construction of Bezier curves, q-calculus, operator theory, and applications in computer graphics. The Bernstein polynomials are used to construct Bezier curves. Bezier was an engineer with the Renault car company and set out in the early 1960s to develop a curve formulation which would lend itself to shape design. Engineers may find it most understandable to think of Bezier curves in terms of the center of mass of a set of point masses. Therefore, in this paper, we study generating functions and functional equations for these polynomials. By applying these functions, we investigate the interpolation function and many properties of these polynomials.
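The "center of mass" reading mentioned above translates directly into code: a point on a Bezier curve is the set of control points weighted by the Bernstein basis. A minimal sketch with assumed control points:

```python
from math import comb

def bernstein(n, k, t):
    """Bernstein basis polynomial B_{k,n}(t) = C(n,k) t^k (1-t)^(n-k)."""
    return comb(n, k) * t ** k * (1 - t) ** (n - k)

def bezier(control_points, t):
    """Point on the Bezier curve at t in [0,1]: the 'center of mass' of
    the control points weighted by the Bernstein basis values."""
    n = len(control_points) - 1
    x = sum(bernstein(n, k, t) * px for k, (px, _) in enumerate(control_points))
    y = sum(bernstein(n, k, t) * py for k, (_, py) in enumerate(control_points))
    return x, y

# illustrative cubic control polygon
pts = [(0.0, 0.0), (1.0, 2.0), (3.0, 2.0), (4.0, 0.0)]
```

Since the Bernstein basis is a partition of unity with nonnegative weights, every curve point is a convex combination of the control points, which is exactly the center-of-mass picture.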

The p-Laplace equation is a nonlinear generalization of the Laplace equation. This generalization is often used as a model problem for special types of nonlinearities. The p-Laplace equation can be seen as a bridge between very general nonlinear equations and the linear Laplace equation. The aim of this paper is to solve the p-Laplace equation for 2 < p < 3 and to find strong solutions. The idea is to apply a hypercomplex integral operator and spatial function theoretic methods to transform the p-Laplace equation into the p-Dirac equation. This equation will be solved iteratively by using a fixed point theorem.

Since the 1990s the Pascal matrix, its generalizations and applications have been the focus of a great number of publications. As is well known, the Pascal matrix, the symmetric Pascal matrix and other special matrices of Pascal type play an important role in many scientific areas, among them numerical analysis, combinatorics, number theory, probability, image processing, signal processing, electrical engineering, etc. We present a unified approach to matrix representations of special polynomials in several hypercomplex variables (new Bernoulli, Euler etc. polynomials), extending results of H. Malonek, G. Tomaz: Bernoulli polynomials and Pascal matrices in the context of Clifford Analysis, Discrete Appl. Math. 157(4) (2009) 838-847. The hypercomplex version of a new Pascal matrix with block structure, which resembles the ordinary one for polynomials of one variable, is discussed in detail.
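For reference, the ordinary one-variable Pascal matrices mentioned above are easy to generate, and the classical factorization of the symmetric Pascal matrix S = L L^T (with L the lower triangular Pascal matrix) can be checked directly. The sketch below is standard material, not the paper's hypercomplex block construction.

```python
from math import comb

def pascal_lower(n):
    """Lower triangular Pascal matrix: L[i][j] = C(i, j) for j <= i."""
    return [[comb(i, j) if j <= i else 0 for j in range(n)] for i in range(n)]

def pascal_symmetric(n):
    """Symmetric Pascal matrix: S[i][j] = C(i + j, i)."""
    return [[comb(i + j, i) for j in range(n)] for i in range(n)]

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

n = 4
L = pascal_lower(n)
LT = [[L[j][i] for j in range(n)] for i in range(n)]  # transpose
S = matmul(L, LT)  # Cholesky-like factorization: S = L * L^T
```

The identity S = L L^T is a matrix form of the Vandermonde convolution C(i+j, i) = sum_k C(i, k) C(j, k).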

Sand-bentonite mixtures are well recognized as buffer and sealing material in nuclear waste repository constructions. The behaviour of compacted sand-bentonite mixtures needs to be well understood in order to guarantee the safety and the efficiency of the barrier construction. This paper presents numerical simulations of a swelling test and of a coupled thermo-hydro-mechanical (THM) test on a compacted sand-bentonite mixture in order to reveal the influence of temperature and hydraulic gradients on the distribution of temperature, mechanical stress and water content in such materials. A sensitivity analysis is carried out to identify the parameters that most influence the response of the numerical model. Results of the back analysis of the model parameters are reported and critically assessed.

We present a way of calculating the displacement in bent reinforced concrete bar elements in which a rearrangement of internal forces and a plastic hinge occurred. The described solution is based on Prof. Borcz's mathematical model. It directly takes into consideration the effects connected with the occurrence of the plastic hinge, such as a crack, by means of a differential equation of the axis of the bent reinforced concrete beam. Eurocode 2 makes it possible to consider the influence of the plastic hinge on the calculated values for reinforced concrete structures; this influence can also be assessed using other analytical methods. However, the results obtained by applying Eurocode 2 are higher than those obtained in testing. A comparably large error occurs when the calculations are made by means of Borcz's method, but in the latter case the results depend on the assumptions made beforehand. The method makes it possible to incorporate experimental results through the parameters r1 and r0. When the experimental results are taken into account, the calculations agree well with the actual deflections of the structure.

The article presents an analysis of the stress distribution in a reinforced concrete support beam bracket which is a component of a prefabricated reinforced concrete building. The building structure is a spatial frame in which dilatations were applied. The proper stiffness of the structure is provided by frames with stiff joints, monolithic lift shafts and staircases. The prefabricated slab floors are supported by beam shelves shaped as an inverted letter 'T'; the beams are supported by the column brackets. In order to lower the storey height and at the same time fulfill the architectural demands, the designer reduced the height of the beam in the support zone. The analyzed case refers to the bracket zone, where a slanted crack on the support beam bracket was observed. It could have appeared as a result of exceeding the allowable tension stresses in the reinforced concrete in the bracket zone. It should be noted that the construction solution applied, i.e. the concurrent support of the "undercut" beam on the column bracket, causes a local concentration of stresses in the undercut zone, where the strongest transverse forces and tangent stresses occur concurrently. Additional normal stresses resulting from placing the slab floors on the lower part of the beam shelves add to those described above.

NONZONAL WAVELETS ON S^N
(2010)

In the present article we construct wavelets on an arbitrary n-dimensional sphere S^n using the approach of approximate identities. There are two equivalent approaches to wavelets. The group-theoretical approach formulates a square-integrability condition for a group acting via a unitary, irreducible representation on the sphere; the connection to this approach will be sketched. The concept of approximate identities uses the same constructions in the background: here we select an appropriate section of dilations and translations in the group acting on the sphere in two steps. First we formulate dilations in terms of approximate identities, and then we realize translations on the sphere as rotations. This leads to the construction of an orthogonal polynomial system in L²(SO(n+1)). This approach is convenient for constructing concrete wavelets, since the appropriate kernels can be constructed from the heat kernel, leading to the approximate identity of Gauss-Weierstraß. We work out conditions under which functions form a family of wavelets, subsequently formulate how zonal wavelets can be constructed from an approximate identity, and discuss the relation to the admissibility of nonzonal wavelets. Eventually we give an example of a nonzonal wavelet on $S^n$, which we obtain from the approximate identity of Gauss-Weierstraß.
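For orientation, the kernel behind the construction can be indicated as follows (our notation; normalizations are omitted). The Gauss-Weierstraß approximate identity on $S^n$ is the heat kernel

```latex
W_\rho(\xi,\eta) \;=\; \sum_{k=0}^{\infty} e^{-k(k+n-1)\rho}\, Z_k(\xi,\eta),
\qquad \rho > 0,
```

where $k(k+n-1)$ are the eigenvalues of the Laplace-Beltrami operator on $S^n$ and $Z_k$ is the reproducing (zonal) kernel of the spherical harmonics of degree $k$. As $\rho \to 0$, $W_\rho$ tends weakly to the delta distribution, which is the approximate-identity property from which the wavelet family is derived.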

Nodal integration of finite elements has been investigated recently. Compared with full integration it shows better convergence when applied to incompressible media, allows easier remeshing and greatly reduces the number of material evaluation points, thus improving efficiency. Furthermore, understanding it may help to create new integration schemes in meshless methods as well. The new integration technique requires a nodally averaged deformation gradient. For the tetrahedral element it is possible to formulate a nodal strain which passes the patch test. On the downside, it introduces non-physical low-energy modes. Most of these "spurious modes" are local deformation maps of neighbouring elements. Present stabilization schemes rely on adding a stabilizing potential to the strain energy. This stabilization is discussed within this article. Its drawbacks are easily identified in numerical experiments: nonlinear material laws are not well represented, plastic strains may often be underestimated, and geometrically nonlinear stabilization greatly reduces computational efficiency. The article reinterprets nodal integration in terms of imposing a nonconforming C0-continuous strain field on the structure. By doing so, the origins of the spurious modes are discussed and two methods are presented that solve this problem. First, a geometric constraint is formulated and solved using a mixed formulation of Hu-Washizu type. This assumption leads to a consistent representation of the strain energy while eliminating spurious modes. The solution is exact, but only of theoretical interest since it produces global support. Second, an integration scheme is presented that approximates the stabilization criterion. The latter leads to a highly efficient scheme. It can even be extended to other finite element types such as hexahedral elements. Numerical efficiency, convergence behaviour and stability of the new method are validated using linear tetrahedral and hexahedral elements.
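The nodally averaged strain that this family of schemes builds on can be sketched for linear tetrahedra as follows (a minimal illustration in our own notation, not the paper's implementation): element strains are constant per element and are averaged at each node with the element volumes as weights.

```python
import numpy as np

def tet_strain_and_volume(X, u):
    """Constant small strain and volume of one linear tetrahedron.
    X: (4,3) node coordinates, u: (4,3) node displacements."""
    Jc = (X[1:] - X[0]).T                       # edge vectors as columns
    vol = abs(np.linalg.det(Jc)) / 6.0
    Ji = np.linalg.inv(Jc)                      # rows: gradients of barycentric coords
    grads = np.vstack([-Ji.sum(axis=0), Ji])    # (4,3): shape-function gradients
    H = u.T @ grads                             # displacement gradient
    return 0.5 * (H + H.T), vol                 # symmetric small-strain tensor

def nodal_averaged_strain(coords, elems, disp):
    """Volume-weighted average of the element strains at every node."""
    eps_n = np.zeros((len(coords), 3, 3))
    vol_n = np.zeros(len(coords))
    for el in elems:
        eps, vol = tet_strain_and_volume(coords[el], disp[el])
        for a in el:
            eps_n[a] += vol * eps
            vol_n[a] += vol
    return eps_n / vol_n[:, None, None]
```

For any linear displacement field the averaged nodal strain reproduces the constant exact strain, which is the essence of the patch test mentioned above.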

New foundations for geometric algebra are proposed based upon the existing isomorphisms between geometric and matrix algebras. Each geometric algebra always has a faithful real matrix representation with a periodicity of 8. On the other hand, each matrix algebra is always embedded in a geometric algebra of a convenient dimension. The geometric product is also isomorphic to the matrix product, and many vector transformations such as rotations, axial symmetries and Lorentz transformations can be written in a form isomorphic to a similarity transformation of matrices. We take up the idea that Dirac applied to develop the relativistic electron equation when he took a basis of matrices for the geometric algebra instead of a basis of geometric vectors. Of course, this way of understanding the geometric algebra requires new definitions: the geometric vector space is defined as the algebraic subspace that generates the rest of the matrix algebra by addition and multiplication; isometries are simply defined as the similarity transformations of matrices as shown above; and finally the norm of any element of the geometric algebra is defined as the nth root of the determinant of its representative matrix of order n×n. The main idea of this proposal is an arithmetic point of view consisting of reversing the roles of matrix and geometric algebras, in the sense that geometric algebra is a way of accessing, working with and understanding the most fundamental conception of matrix algebra as the algebra of transformations of multilinear quantities.
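The matrix point of view can be made concrete in a small numerical sketch (our toy example, not the paper's general construction): the plane geometric algebra Cl(2,0) is faithfully represented by real 2×2 matrices, the geometric product becomes the matrix product, a rotation becomes a similarity transformation by a rotor, and the proposed norm is the n-th root of the determinant.

```python
import numpy as np

E   = np.eye(2)
e1  = np.array([[1., 0.], [0., -1.]])   # e1 * e1 = +1
e2  = np.array([[0., 1.], [1.,  0.]])   # e2 * e2 = +1
e12 = e1 @ e2                           # unit bivector, e12 * e12 = -1

def norm(M):
    """Norm of an algebra element: n-th root of |det| of its n x n matrix."""
    return abs(np.linalg.det(M)) ** (1.0 / M.shape[0])

# rotation by theta as a similarity transformation with the rotor
# R = cos(theta/2) - sin(theta/2) * e12
theta = np.pi / 2
R = np.cos(theta / 2) * E - np.sin(theta / 2) * e12
v = 3 * e1 + 4 * e2                     # the vector (3, 4)
w = R @ v @ np.linalg.inv(R)            # (3, 4) rotated by 90 degrees
```

Here norm(v) recovers the Euclidean length 5, and w equals -4*e1 + 3*e2, i.e. the rotated vector (-4, 3).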

We show a close relation of both the Schrödinger equation and the conductivity equation to a Vekua equation of a special form. Under quite general conditions we propose an algorithm for the explicit construction of pseudoanalytic positive formal powers for the Vekua equation, which as a consequence gives us a complete system of solutions for the Schrödinger and the conductivity equations. Besides the construction of complete systems of exact solutions for the above-mentioned second-order equations and the Dirac equation, we discuss some other applications of pseudoanalytic function theory.
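For orientation, the special form in question can be sketched as follows (our notation; normalizations and precise hypotheses are omitted). If $f$ is a positive particular solution of the stationary Schrödinger equation, the associated Vekua equation of the main form reads

```latex
\partial_{\bar z} W \;=\; \frac{\partial_{\bar z} f}{f}\,\overline{W},
\qquad \partial_{\bar z} = \tfrac12\left(\partial_x + i\,\partial_y\right),
```

and its solutions, in particular the formal powers, are related componentwise to solutions of the Schrödinger equation with potential $q = \Delta f / f$ and of the corresponding conductivity equation; for example $W = f$ and $W = i/f$ are immediate solutions of this equation.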

By treating the production process as the central transformation element, the structure of construction production is captured realistically. The integration of a process-oriented cost definition relates the relevant cost parameters and production factors to each other in such a way that they are consistent with the real cost structure and cost dynamics of a construction site. The relationship between construction time and costs is captured and evaluated directly. The high dynamics of construction production between capacity-constrained resources and production processes is accounted for by the pool model and by simulation as the computational method. A simple modelling of cyclically repeating work operations (takt planning) is possible; in the simulation, the takt emerges from the capacity constraints without any user intervention. An optimization method can be used to search automatically for the cheapest or fastest production variant.
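The interplay between a capacity-limited pool and an emerging takt can be sketched in a few lines (a deliberately reduced toy model in our own notation, not the system described here): identical crews form the pool, each task draws the earliest free crew, and time-dependent costs follow directly from the simulated durations.

```python
import heapq

def simulate(tasks, pool_size, hourly_rate):
    """tasks: list of (name, duration); the pool consists of identical crews."""
    free_at = [0.0] * pool_size            # next free time of each crew
    heapq.heapify(free_at)
    finish = {}
    for name, dur in tasks:
        start = heapq.heappop(free_at)     # earliest available crew
        end = start + dur
        heapq.heappush(free_at, end)
        finish[name] = (start, end)
    makespan = max(end for _, end in finish.values())
    cost = makespan * hourly_rate * pool_size   # time-dependent cost share
    return finish, makespan, cost
```

With four equal tasks and two crews, the schedule falls into two identical cycles of two tasks each: the takt emerges from the capacity limit alone, without any user intervention.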

A stress-based remodeling approach is used to investigate the influence of the collagen architecture in human eye tissues on the biomechanical response of the lamina cribrosa, with a particular focus on the stress environment of the nerve fibers. This approach is based on a multi-level biomechanical framework, where the biomechanical properties of eye tissues are derived from a single crimped fibril at the micro-scale via the collagen network of distributed fibrils at the meso-scale to the incompressible and anisotropic soft tissue at the macro-scale. Biomechanically induced remodeling of the collagen network is captured on the meso-scale by allowing for a continuous reorientation of collagen fibrils. To investigate the multi-scale phenomena related to glaucomatous neuropathy, a generalized computational homogenization scheme is applied to a coupled two-scale analysis of the human eye, considering numerical macro- and meso-scale models of the lamina cribrosa.

For many applications, nonuniformly distributed functional data are given, which leads to large-scale scattered data problems. We wish to represent the data in terms of a sparse representation with a minimal number of degrees of freedom. For this, an adaptive scheme is proposed which operates in a coarse-to-fine fashion using a multiscale basis. Specifically, we investigate hierarchical bases using B-splines and spline (pre-)wavelets. At each stage a least-squares approximation of the data is computed. We take into account different requirements arising in large-scale scattered data fitting: we discuss the fast iterative solution of the least-squares systems, regularization of the data, and the treatment of outliers. A particular application concerns the approximate continuation of harmonic functions, an issue arising in geodesy.
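The coarse-to-fine least-squares step can be illustrated in one dimension with the simplest hierarchical basis, piecewise-linear B-splines (hat functions) on uniform knots (the function names and the global refinement loop are ours; the scheme in the paper refines adaptively and additionally handles regularization and outliers):

```python
import numpy as np

def hat_basis(x, knots):
    """Evaluate the hat (linear B-spline) basis on uniform knots at points x."""
    h = knots[1] - knots[0]
    return np.maximum(0.0, 1.0 - np.abs(x[:, None] - knots[None, :]) / h)

def fit_coarse_to_fine(x, y, max_level=6, tol=1e-6):
    """Least-squares fit, doubling the knot density until the maximum
    residual drops below tol (an adaptive scheme would refine locally)."""
    for level in range(1, max_level + 1):
        knots = np.linspace(x.min(), x.max(), 2**level + 1)
        B = hat_basis(x, knots)            # design matrix of the level
        c, *_ = np.linalg.lstsq(B, y, rcond=None)
        if np.max(np.abs(y - B @ c)) < tol:
            break
    return knots, c
```

Because the hat functions reproduce linear data exactly, the loop terminates on the coarsest level for linear test data; for rough data it descends to finer levels.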

Requirements for the reliability and durability of structures and their elements, combined with material economy, have stimulated the improvement of constitutive equations for the description of elasto-plastic deformation processes. This has led to the development of phenomenological modelling of complex phenomena of irreversible deformation, including history-dependent and rate-dependent effects. During the last several decades many works have been devoted to the development of elasto-plastic models in order to better predict material behavior under combined variable thermo-mechanical loading. An increase in the accuracy of stress analysis and safety factors for complex structures with the help of modern finite-element packages (ABAQUS, ANSYS, COSMOS, LS-DYNA, MSC.MARC, MSC.NASTRAN, PERMAS and others) can be achieved only by the use of complex and special variants of plasticity theories which are adequate for the considered loading conditions and based on reliable information about the properties of the materials. The areas of application of the various theories (models) are, as a rule, unknown to the users of finite-element packages, given the existing variety of loading conditions in machine-building designs. At the moment a universal theory of inelasticity is absent, and even the most accomplished theories cannot guarantee an adequate description of deformation processes for an arbitrary structure under a wide range of loading programs. A classifier of materials, loading conditions and effects (phenomena) and a list of basic experiments have been developed by the authors. The use of these classifiers to establish a hierarchy of models is a first step towards introducing multimodel analysis into computational practice. A set of classic and modern inelasticity theories is considered that is applicable to the stress analysis of structures under complex loading programs.
Among them are plastic flow theories with linear and nonlinear isotropic and kinematic hardening, multisurface theories, endochronic theory, holonomic theory, rheologic models, the theory of elasto-plastic processes, slip theory, physical theories (single-crystal and polycrystalline models) and others. The classification of materials groups them by degree of homogeneity, chemical composition, level of strength and plasticity, behavior under cyclic loading, anisotropy of properties in the initial condition, anisotropy of properties during the deformation process, and structural stability. The classification of loading conditions takes into consideration proportional and non-proportional loading, temperature range, combinations of cyclic and monotonous loading, uniaxial, biaxial and complex stress states, the curvature of the strain path, the presence of stress concentrators and the level of strain gradient. A unified general form of constitutive equations based on the concept of internal state variables is presented for all the material models used. The wide range of inelastic material models mentioned above has been implemented in the finite element program PANTOCRATOR developed by the authors (see www.pantocrator.narod.ru for details). The applicability of the different material models is considered both for a material element and for complex structures subjected to complex non-proportional loading.
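As a concrete instance of the simplest entry in such a hierarchy (a textbook sketch in our own notation, not the PANTOCRATOR implementation), one-dimensional rate-independent plasticity with combined linear isotropic and kinematic hardening can be integrated by the classical return-mapping algorithm:

```python
def return_map(strain_path, E=200e3, sy0=250.0, H_iso=1e3, H_kin=2e3):
    """Strain-driven stress update; parameter values are illustrative (MPa-like)."""
    ep, alpha, q = 0.0, 0.0, 0.0   # plastic strain, isotropic and kinematic variables
    stresses = []
    for e in strain_path:
        s_trial = E * (e - ep)                       # elastic predictor
        f = abs(s_trial - q) - (sy0 + H_iso * alpha) # yield function
        if f > 0.0:                                  # plastic corrector
            dg = f / (E + H_iso + H_kin)             # consistency parameter
            n = 1.0 if s_trial - q > 0.0 else -1.0   # flow direction
            ep += dg * n
            alpha += dg
            q += H_kin * dg * n
        stresses.append(E * (e - ep))
    return stresses
```

Extending this update to multiaxial stress states, nonlinear hardening or rate dependence is precisely where the model hierarchy discussed above comes into play.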

MULTI-SITE CONSTRUCTION PROJECT SCHEDULING CONSIDERING RESOURCE MOVING TIME IN DEVELOPING COUNTRIES
(2010)

Under the booming construction demand in developing countries, and particularly in Vietnam, construction contractors often perform multiple concurrent projects in different places. Existing scheduling methods usually assume the resource moving time between activities/projects to be negligible. When multiple projects are deployed in different places far from each other, this assumption has many shortcomings with respect to properly modelling the real-world constraints, especially in developing countries such as Vietnam, whose transportation systems are still backward and of low technical standard. This paper proposes a new algorithm named Multi-Site Construction Project Scheduling (MCOPS). The objective of this algorithm is to minimise the multi-site construction project duration under limited availability of renewable resources (labour, machines and equipment), combined with the moving time of the required resources among activities/projects. Additionally, in order to mitigate the impact of resource moving time on the multi-site project duration, this paper proposes a new priority rule: Minimum Resource Moving Time (MinRMT). The MinRMT is applied to rank the finished activities in a priority order for supplying the released resources to the activities being scheduled. In order to investigate the impact of the resource moving time among activities during the scheduling process, computational experiments were carried out. The results of the MCOPS-based computational experiments showed that the resource moving time among projects significantly impacts the multi-site project durations and cannot be ignored in the multi-site project scheduling process. The efficient application of the MinRMT is also demonstrated by the results of the computational experiments in this paper. 
Though the work in this paper is based on Vietnamese construction conditions, the proposed method can be usefully applied in other developing countries with similar construction conditions.
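A much simplified sketch of the underlying idea (our toy code, not the MCOPS implementation itself): one renewable resource type serves activities at several sites, moving a crew between sites costs travel time, and a MinRMT-like rule prefers the assignment with the smallest moving time.

```python
def schedule(activities, crews, move_time):
    """activities: (name, site, duration) in precedence-feasible order;
    crews: number of identical crews, all starting at site 0;
    move_time: matrix of travel times between sites."""
    state = [(0.0, 0)] * crews              # (free_at, current_site) per crew
    out = {}
    for name, site, dur in activities:
        # earliest possible start; ties broken by smaller moving time (MinRMT-like)
        best = min(range(crews),
                   key=lambda i: (state[i][0] + move_time[state[i][1]][site],
                                  move_time[state[i][1]][site]))
        free, cur = state[best]
        start = free + move_time[cur][site]  # travel before work can begin
        state[best] = (start + dur, site)
        out[name] = (start, start + dur)
    return out
```

Even in tiny two-site examples the travel terms visibly lengthen the schedule, which is the effect the computational experiments quantify.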

Planning and construction processes are characterized by the peculiarity that they need to be designed individually for each project: an individual schedule must be set up for every project. As a basis for a new project, schedules from already finished projects are used, but adaptations are always necessary. In practice, scheduling tools only document a process. Schedules cover a set of activities, their durations and a set of interdependencies between activities; the design of the process is up to the user. It is not necessary to specify each interdependency, and completeness and correctness need to be checked manually. No methodologies are available to guarantee properties such as correctness or completeness. The considerations presented in the paper are based on an approach in which a planning and a construction process, including the interdependencies between planning and construction activities, are regarded as a result: selected information needs to be specified by the user, and a proposal for an order of planning and construction activities is computed. As a consequence, process properties such as correctness and completeness can be guaranteed with respect to the user input. In Germany especially, clients are allowed to modify their requirements at any time, which leads to modifications of the planning and construction processes. This paper presents a mathematical formulation of this problem based on set theory. A complex structure covering objects and relations is set up, and operations are defined that guarantee consistency in the underlying, versioned process description. The presented considerations build on previous work and can be regarded as the next step in a series describing how a suitable concept for handling planning and construction processes in civil engineering can be formed.
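The computation of an activity order from user-specified interdependencies can be sketched as a topological sort with a consistency check (illustrative code in our own naming; the set-theoretic formulation in the paper additionally covers versioning):

```python
from collections import deque

def order_activities(activities, depends_on):
    """activities: list of names; depends_on: pairs (activity, prerequisite).
    Returns a feasible order or raises if the input is inconsistent."""
    indeg = {a: 0 for a in activities}
    succ = {a: [] for a in activities}
    for act, pre in depends_on:
        succ[pre].append(act)
        indeg[act] += 1
    ready = deque(a for a in activities if indeg[a] == 0)
    order = []
    while ready:
        a = ready.popleft()
        order.append(a)
        for b in succ[a]:       # release successors whose prerequisites are done
            indeg[b] -= 1
            if indeg[b] == 0:
                ready.append(b)
    if len(order) != len(activities):
        raise ValueError("inconsistent dependencies (cycle detected)")
    return order
```

A cycle in the user input is reported instead of silently producing an incomplete order, which is how correctness with respect to the user input can be guaranteed.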

Several results concerning the distribution of the headways of buses in the flow behind a traffic signal are presented. The main focus of interest is the description of analytical models, which are verified by the results of Monte Carlo methods. The advantage of analytical models (verified, but not derived, by simulation methods) is their flexibility with respect to possible generalizations; for instance, several random distributions of the flow arriving at the traffic signal can be compared. Attention is directed at the question of how the primary headway H (analyzed in front of the traffic signal) is mapped to the headway H' analyzed behind the traffic signal, and how the random distribution of H is mapped to that of H'. For the traffic flow in front of the traffic signal several models are discussed. The first case considers the situation that buses operate on a common lane with individual motor car traffic and the traffic flow is saturated. In the second situation, buses operate on a separate bus lane. Moreover, a mixed situation is discussed in order to model reality as closely as possible.
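The mapping from H to H' is easy to reproduce with a small Monte Carlo experiment (a deliberately simple sketch with assumed parameters: exponential primary headways, a fixed-cycle signal, zero clearance times):

```python
import random

def headways_behind_signal(n=10000, cycle=60.0, green=30.0, mean_h=90.0, seed=1):
    """Secondary headways H' behind a fixed-cycle signal for exponential
    primary headways H; the green phase occupies [0, green) of each cycle."""
    random.seed(seed)
    t, depart_prev, out = 0.0, None, []
    for _ in range(n):
        t += random.expovariate(1.0 / mean_h)   # primary headway H
        phase = t % cycle
        depart = t if phase < green else t + (cycle - phase)  # wait out the red
        if depart_prev is not None:
            out.append(depart - depart_prev)    # secondary headway H'
        depart_prev = depart
    return out
```

Buses arriving during red depart at the start of the next green, so small secondary headways cluster near zero while the long-run mean headway is preserved.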