Practical examples show that cash flow can be improved and that the total amount of money spent on construction and subsequent use may be cut significantly. The calculation is based on spreadsheets, which are easy to develop on most PCs nowadays. Construction works are a field where the evaluation of cash flow can and should be applied. Decisions about cash flow in construction are decisions with long-term impact and long-term memory: mistakes from the distant past have a massive impact on the present situation and on the far economic future of the activities involved. Two approaches exist: the just-in-time (JIT) approach and the life cycle cost (LCC) approach. The calculation example shows the dynamic results for varying production speed as opposed to a stable flow of production over the duration of the activities. More sophisticated rescheduling towards an optimal solution might yield extra profit in return. The technologies and organizational processes for industrial buildings, railway and road reconstruction, public utilities and housing developments include assembly procedures that are well suited to this purpose, and complex research, development and innovation projects are equally good candidates for this kind of application. Investments by large investors and all publicly invested money may be spent more efficiently if an optimal speed strategy can be calculated.
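To make the spreadsheet idea concrete, here is a minimal Python sketch of the kind of discounted cash flow comparison described above; all figures (costs, durations, income, discount rate) are illustrative assumptions, not data from the paper.

```python
# Hedged sketch: spreadsheet-style cash flow comparison for two production speeds.
# All figures (rates, costs, durations) are illustrative assumptions.

def npv(cash_flows, rate):
    """Net present value of monthly cash flows discounted at a monthly rate."""
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cash_flows))

def construction_cash_flow(total_cost, months, monthly_income_after):
    """Outflows spread over the construction period, followed by operating income."""
    outflows = [-total_cost / months] * months
    inflows = [monthly_income_after] * 24          # two years of use, for illustration
    return outflows + inflows

slow = construction_cash_flow(total_cost=1_200_000, months=12, monthly_income_after=60_000)
fast = construction_cash_flow(total_cost=1_260_000, months=9,  monthly_income_after=60_000)

rate = 0.01  # assumed monthly discount rate
print("NPV slow schedule:", round(npv(slow, rate)))
print("NPV fast schedule:", round(npv(fast, rate)))
```

The faster schedule carries a higher nominal cost but starts the income stream earlier, which is exactly the kind of trade-off a speed strategy has to weigh.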
In this study we introduce the concept of a discrete Laplacian on the plane lattice and consider its iteration dynamical system. First we discuss and prove some basic properties of the dynamical system. Next, by carrying out computer simulations, we show that the following phenomena can be reproduced quite well: (1) the crystals of water, (2) the designs of carpets and embroideries, (3) the time evolution of the numbers of families of extinct animals, and (4) the ecosystems of living things. Hence we may expect that evolution and self-organization can be understood by use of such dynamical systems. We want to stress the following fact: although several well-known chaotic dynamical systems can describe chaotic phenomena, they have difficulties in describing evolution and self-organization.
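As an illustration of the kind of iteration discussed, the following sketch repeatedly applies a discrete Laplacian on a plane lattice to a seed pattern; the 5-point stencil and the update rule are assumptions made for illustration and may differ from the authors' exact system.

```python
# Hedged sketch: iterating a discrete Laplacian on a plane lattice (assumed 5-point stencil
# and update rule u_{k+1} = u_k + eps * L(u_k); the paper's exact scheme may differ).
import numpy as np

def discrete_laplacian(u):
    """5-point discrete Laplacian with zero (Dirichlet) padding at the lattice boundary."""
    p = np.pad(u, 1)
    return p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:] - 4.0 * u

def iterate(u0, steps, eps=0.2):
    """Simple iteration; patterns emerge and spread from the initial seed."""
    u = u0.copy()
    for _ in range(steps):
        u = u + eps * discrete_laplacian(u)
    return u

seed = np.zeros((64, 64))
seed[32, 32] = 1.0                     # single seed cell, e.g. a crystal nucleus
pattern = iterate(seed, steps=100)
print(pattern.shape, pattern.max())
```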
In earlier research, generalized multidimensional Hilbert transforms have been constructed in m-dimensional Euclidean space, in the framework of Clifford analysis. Clifford analysis, centred around the notion of monogenic functions, may be regarded as a direct and elegant generalization to higher dimension of the theory of the holomorphic functions in the complex plane. The considered Hilbert transforms, usually obtained as a part of the boundary value of an associated Cauchy transform in m+1 dimensions, might be characterized as isotropic, since the metric in the underlying space is the standard Euclidean one. In this paper we adopt the idea of a so-called anisotropic Clifford setting, which leads to the introduction of a metric dependent m-dimensional Hilbert transform, showing, at least formally, the same properties as the isotropic one. The Hilbert transform being an important tool in signal analysis, this metric dependent setting has the advantage of allowing the adjustment of the co-ordinate system to possible preferential directions in the signals to be analyzed. A striking result to be mentioned is that the associated anisotropic (m+1)-dimensional Cauchy transform is no longer uniquely determined, but may stem from a diversity of (m+1)-dimensional "mother" metrics.
The paper describes dynamic effects in the silo wall during the outflow of a stored material. The work makes it possible to determine the danger of structural damage due to resonant vibrations and is of practical importance for assessing the influence of cyclic pressures and vibro-creep during prolonged use of a silo. The paper is based on tests of silo walls at semi-technical scale. The model is generally applicable and also allows for the identification of parameters in real-size silos.
The general motivation of this research is to develop software to support the handling of the increased complexity of architectural design. In this paper we describe a system providing general support during the whole process. Instead of only developing design tools, we also address the problem of the operating environment of these tools. We conclude that design tools have to be integrated in an open, modular, distributed, user-friendly and efficient environment. Two major fields have to be addressed: the development of design tools and the realisation of an integrated system as their operation environment. We briefly focus on the latter by discussing known technologies from the field of information technology and from other design disciplines that can be used to realise such an environment. Regarding the former, we state the need for a detailed tool specification. As a solution we suggest a strategy where the tool functions are specified on the basis of a transformation in which a hierarchical process model is mapped onto specifications of different design tools realising appropriate support for all sub-processes of architectural design. Using this strategy, the main steps in developing such a support system are: implementation of a framework as the basis for the integrated design system; deciding whether the tool specifications are already implemented in available tools, in which case these tools can be integrated using known methods for tool coupling; otherwise, development of new design tools according to the framework.
Skinless architecture
(2003)
Scientific colloquium held from 24 to 27 April 2003 at the Bauhaus-Universität Weimar on the theme 'MediumArchitektur - Zur Krise der Vermittlung'
Ideally, multiple computational building evaluation routines (particularly simulation tools) should be coupled in real time to the representational design model to provide timely performance feedback to the system user. In this paper we demonstrate how this can be achieved effectively and conveniently via homology-based mapping. We consider two models as homologous if they entail isomorphic topological information. If the general design representation (i.e., a shared object model) is generated in a manner so as to include both the topological building information and pointers to the semantic information base, it can be used to directly derive the domain representations ('enriched' object models with detailed configurational information and filtered semantic data) needed for evaluation purposes. As a proof of concept, we demonstrate a computational design environment that dynamically links an object-oriented, space-based design model with structurally homologous object models of various simulation routines.
It is not uncommon that analysis and simulation methods are used mainly to evaluate finished designs and to prove their quality, whereas the potential of such methods is to lead or control a design process from the very beginning. We therefore introduce a design method that moves away from a “what-if” forecasting philosophy and increases the focus on backcasting approaches. We use the power of computation by combining sophisticated methods for generating designs with analysis methods to close the gap between analysis and synthesis of designs. For the development of future-oriented computational design support we need to be aware of the human designer's role. A productive combination of the excellence of human cognition with the power of modern computing technology is needed. We call this approach “cognitive design computing”. The computational part aims to mimic the way a designer's brain works by combining state-of-the-art optimization and machine learning approaches with available simulation methods. The cognition part respects the complex nature of design problems by providing models for human-computation interaction. This means that a design problem is distributed between computer and designer. In the context of the conference slogan “back to command”, we ask how we may imagine the command over a cognitive design computing system. We expect that designers will need to let go of control over some parts of the design process to machines, but in exchange they will gain new, powerful command over complex computing processes. This means that designers have to explore the potential of their role as commanders of partially automated design processes. In this contribution we describe an approach for the development of a future cognitive design computing system with a focus on urban design issues. The aim of this system is to enable an urban planner to treat a planning problem as a backcasting problem by defining what performance a design solution should achieve and to automatically query or generate a set of best possible solutions. This kind of computational planning process offers proof that the designer meets the original, explicitly defined design requirements. A key way in which digital tools can support designers is by generating design proposals. Evolutionary multi-criteria optimization methods allow us to explore a multi-dimensional design space and provide a basis for the designer to evaluate contradicting requirements: a task urban planners are faced with frequently. We also reflect on why designers will give more and more control to machines. To this end, we investigate first approaches for learning how designers use computational design support systems in combination with manual design strategies to deal with urban design problems, by employing machine learning methods. By observing how designers work, it is possible to derive more complex artificial solution strategies that can help computers make better suggestions in the future.
Procedures for the construction of general solutions for some classes of partial differential equations (PDEs) are proposed, and a symmetry operator approach to raising the orders of polynomial solutions to linear PDEs is developed. We touch upon an ''operator analytic function theory'' for the solution of frequently occurring classes of equations of mathematical physics whose symmetry operators form a sufficiently large space. MAPLE© package programs for building the operator variables are also elaborated.
The use of process models in the analysis, optimization and simulation of processes has proven to be extremely beneficial in the instances where they could be applied appropriately. However, the Architecture/Engineering/Construction (AEC) industries present unique challenges that complicate the modeling of their processes. A simple engineering process model, based on the specification of Tasks, Datasets, Persons and Tools and certain relations between them, has been developed, and its advantages over conventional techniques have been illustrated. Graph theory is used as the mathematical foundation, mapping Tasks, Datasets, Persons and Tools to vertices and the relations between them to edges, forming a directed graph. The acceptance of process modeling in the AEC industries depends not only on the results it can provide, but also on the ease with which these results can be attained. Specifying a complex AEC process model is a dynamic exercise that is characterized by many modifications over the process model's lifespan. This article looks at reducing specification complexity, reducing the probability of erroneous input and allowing consistent model modification. Furthermore, the problem of resource leveling is discussed. Engineering projects are often executed with limited resources, and determining the impact of such restrictions on the sequence of Tasks is important. Resource leveling concerns itself with these restrictions caused by limited resources. This article looks at using Task shifting strategies to find a near-optimal sequence of Tasks that guarantees consistent Dataset evolution while resolving resource restrictions.
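A minimal sketch of the two ideas above, assuming illustrative task names, durations and a single renewable resource: Tasks form the vertices of a directed structure (predecessor relations standing in for the Dataset flow), and a greedy shifting pass levels the resource profile.

```python
# Hedged sketch: Tasks as vertices of a directed graph (edges given by predecessor
# relations), plus a greedy shift-based resource levelling pass for one renewable resource.
from collections import namedtuple

Task = namedtuple("Task", "name duration demand predecessors")

tasks = [
    Task("design",     3, 2, []),
    Task("approval",   1, 1, ["design"]),
    Task("foundation", 4, 3, ["approval"]),
    Task("structure",  5, 3, ["foundation"]),
    Task("electrical", 2, 3, ["structure"]),
    Task("plumbing",   2, 2, ["structure"]),
]
capacity = 4

def level_by_shifting(tasks, capacity):
    """Schedule tasks (assumed topologically ordered) and shift each start forward
    until the resource profile stays within capacity."""
    finish, profile, schedule = {}, {}, {}
    for t in tasks:
        start = max((finish[p] for p in t.predecessors), default=0)
        while any(profile.get(p, 0) + t.demand > capacity
                  for p in range(start, start + t.duration)):
            start += 1                              # shift the task one period
        for p in range(start, start + t.duration):
            profile[p] = profile.get(p, 0) + t.demand
        schedule[t.name], finish[t.name] = start, start + t.duration
    return schedule

print(level_by_shifting(tasks, capacity))
```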
The concept of a sensitivity analysis of the limit state of a structure with respect to selected basic variables is presented. The sensitivity is presented in the form of the probability distribution of the limit state of the structure. The analysis is performed by a problem-oriented Monte Carlo simulation procedure. The procedure is based on the problem-specific definition of the elementary event as a structural limit state; thus the sample space consists of limit states of the structure. A one-dimensional random multiplier, defined on the sample space, is introduced. This multiplier refers to the dominant basic variable (or group of variables) of the problem. The numerical procedure results in a set of random numbers. The normalized relative histogram of this set is an estimator of the PDF of the limit state of the structure. Estimators of reliability, or of the probability of failure, are statistical characteristics of this histogram. The procedure is illustrated by an example of a sensitivity analysis of the serviceability limit state of a monumental structure: the colonnade of the Licheń Basilica, situated in central Poland. The limit state of the structure is examined with reference to the horizontal deflection of the upper deck. Wind actions are taken as the dominant variables. It is assumed that the wind load intensities acting on the lower and upper storeys of the colonnade are identically distributed but correlated random variables. Three correlation variants of these variables are considered, and the corresponding limit state histograms are analysed. The paper ends with conclusions referring to the method and some general remarks on fully probabilistic design.
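The following sketch, with an assumed placeholder response model and illustrative numbers, shows the flavour of such a simulation: correlated wind intensities are sampled, a limit-state quantity is evaluated per realization, and its histogram and exceedance probability are estimated.

```python
# Hedged sketch: Monte Carlo estimation of the distribution of a limit state driven by
# two correlated wind load intensities; the structural model is replaced by a placeholder
# linear response and all numbers are illustrative.
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
rho = 0.8                                   # assumed correlation between the two storeys
mean, std = 1.0, 0.25
cov = np.array([[std**2, rho * std**2],
                [rho * std**2, std**2]])
w = rng.multivariate_normal([mean, mean], cov, size=n)   # wind intensities, lower/upper storey

deflection = 0.012 * w[:, 0] + 0.020 * w[:, 1]           # placeholder response model [m]
limit = 0.05                                             # assumed admissible deflection [m]

hist, edges = np.histogram(deflection, bins=100, density=True)  # estimator of the limit-state PDF
p_exceed = np.mean(deflection > limit)
print("estimated probability of exceedance:", p_exceed)
```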
VARIATION OF ROTATIONAL RESTRAINT IN GRID DECK CONNECTION DUE TO CORROSION DAMAGE AND STRENGTHENING
(2006)
The approach to the assessment of the rotational restraint of a stringer-to-crossbeam connection in the deck of a 100-year-old steel truss bridge is presented. The sensitivity of the rotational restraint coefficient of the connection to corrosion damage and strengthening is analyzed. Two criteria for the assessment of the rotational restraint coefficient are applied: a static and a kinematic one. The former is based on the bending moment distribution in the considered member, the latter on the member rotation at the given joint. A 2D finite element model is described: webs and flanges are modeled with shell elements, while the rivets in the connection are modeled with a system of beam and spring elements. The method of rivet modeling is verified against T-stub connection test results published in the literature. The FEM analyses proved that the recorded extent of corrosion damage does not alter the initial rotational restraint of the stringer-to-crossbeam connection. Strengthening of the stringer midspan influences the midspan bending moment and the stringer end rotation in different ways. Usually, restoring a member's load-bearing capacity means strengthening its critical regions (where the highest stress levels occur). This alters the flexural stiffness distribution over the member length and influences the rotational restraint at its connection to other members. The impact depends on the criterion chosen for the assessment of the rotational restraint coefficient.
Scientific colloquium held from 24 to 27 April 2003 at the Bauhaus-Universität Weimar on the theme 'MediumArchitektur - Zur Krise der Vermittlung'
For economic, technical or political reasons, about 100 nuclear power plants worldwide have been shut down to date. All these power stations are still awaiting their complete dismantling, which, for a single reactor, costs up to one billion euros and takes up to 15 years. In our contribution we present a resource-constrained project scheduling approach minimizing the total discounted cost of dismantling a nuclear power plant. A project of dismantling a nuclear power plant can be subdivided into a number of disassembly activities. The execution of these activities requires time and scarce resources like manpower, special equipment or storage facilities for the contaminated material arising from the dismantling. Moreover, we have to observe several minimum and maximum time lags (temporal constraints) between the start times of the different activities. Finally, each disassembly activity can be processed in two alternative execution modes, which lead to different disbursements and determine the resource requirements of the considered activity. The optimization problem is to determine a start time and an execution mode for each activity such that the discounted cost of the project is minimal, no temporal constraints are violated, and the activities' resource requirements do not exceed the availability of any scarce resource at any point in time. In our contribution we introduce an appropriate multi-mode project scheduling model with minimum and maximum time lags as well as renewable and cumulative resources for the described optimization problem. Furthermore, we show that the considered optimization problem is NP-hard in the strong sense. For small problem instances, optimal solutions can be obtained from a relaxation-based enumeration approach incorporated into a branch-and-bound algorithm. In order to be able to solve large problem instances, we also propose a truncated version of the devised branch-and-bound algorithm.
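For orientation, a hedged, generic formulation of this type of model is given below; the authors' exact model, in particular the treatment of cumulative resources and disbursement timing, may differ.

```latex
% Generic multi-mode scheduling model with time lags and renewable resources:
% start times S_i, modes m_i, discounted disbursements c_{i,m_i}, discount rate alpha.
\begin{align*}
\min_{S,\,m}\quad & \sum_{i=1}^{n} c_{i,m_i}\, e^{-\alpha S_i} \\
\text{s.t.}\quad  & d^{\min}_{ij} \;\le\; S_j - S_i \;\le\; d^{\max}_{ij}
                    && \text{(minimum/maximum time lags)}\\
                  & \sum_{i \,:\, S_i \le t < S_i + p_{i,m_i}} r_{i,m_i,k} \;\le\; R_k
                    && \text{for all renewable resources } k \text{ and times } t
\end{align*}
```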
Reasonably accurate cost estimation of the structural system is quite desirable at the early stages of the design process of a construction project. However, the numerous interactions among the many cost-variables make the prediction difficult. Artificial neural networks (ANN) and case-based reasoning (CBR) are reported to overcome this difficulty. This paper presents a comparison of CBR and ANN augmented by genetic algorithms (GA) conducted by using spreadsheet simulations. GA was used to determine the optimum weights for the ANN and CBR models. The cost data of twenty-nine actual cases of residential building projects were used as an example application. Two different sets of cases were randomly selected from the data set for training and testing purposes. Prediction rates of 84% in the GA/CBR study and 89% in the GA/ANN study were obtained. The advantages and disadvantages of the two approaches are discussed in the light of the experiments and the findings. It appears that GA/ANN is a more suitable model for this example of cost estimation where the prediction of numerical values is required and only a limited number of cases exist. The integration of GA into CBR and ANN in a spreadsheet format is likely to improve the prediction rates.
Designing a structure follows a pattern of creating a structural design concept, executing a finite element analysis and developing a design model. A project was undertaken to create computer support for executing these tasks within a collaborative environment. This study focuses on developing a software architecture that integrates the various structural design aspects into a seamless functional collaboratory that satisfies engineering practice requirements. The collaboratory is to support both homogeneous collaboration i.e. between users operating on the same model and heterogeneous collaboration i.e. between users operating on different model types. Collaboration can take place synchronously or asynchronously, and the information exchange is done either at the granularity of objects or at the granularity of models. The objective is to determine from practicing engineers which configurations they regard as best and what features are essential for working in a collaborative environment. Based on the suggestions of these engineers a specification of a collaboration configuration that satisfies engineering practice requirements will be developed.
The one-dimensional continuous wavelet transform is a successful tool for signal and image analysis, with applications in physics and engineering. Clifford analysis offers an appropriate framework for taking wavelets to higher dimension. In the usual orthogonal case Clifford analysis focusses on monogenic functions, i.e. null solutions of the rotation invariant vector valued Dirac operator ∂, defined in terms of an orthogonal basis for the quadratic space Rm underlying the construction of the Clifford algebra R0,m. An intrinsic feature of this function theory is that it encompasses all dimensions at once, as opposed to a tensorial approach with products of one-dimensional phenomena. This has allowed for a very specific construction of higher dimensional wavelets and the development of the corresponding theory, based on generalizations of classical orthogonal polynomials on the real line, such as the radial Clifford-Hermite polynomials introduced by Sommen. In this paper, we pass to the Hermitian Clifford setting, i.e. we let the same set of generators produce the complex Clifford algebra C2n (with even dimension), which we equip with a Hermitian conjugation and a Hermitian inner product. Hermitian Clifford analysis then focusses on the null solutions of two mutually conjugate Hermitian Dirac operators which are invariant under the action of the unitary group. In this setting we construct new Clifford-Hermite polynomials, starting in a natural way from a Rodrigues formula which now involves both Dirac operators mentioned. Due to the specific features of the Hermitian setting, four different types of polynomials are obtained, two types of even degree and two types of odd degree. These polynomials are used to introduce a new continuous wavelet transform, after thorough investigation of all necessary properties of the involved polynomials, the mother wavelet and the associated family of wavelet kernels.
In distributed project organisations and collaboration there is a need for integrating unstructured, self-contained text information with structured project data. We consider this a process of text integration in which various text technologies can be used to externalise text content and consolidate it into structured information or flexibly interlink it with corresponding information bases. However, the effectiveness of text technologies and the potential of text integration vary greatly with the type of documents, the project setup and the available background knowledge. The goal of our research is to establish text technologies within collaboration environments so as to allow for (a) flexibly combining appropriate text and data management technologies, (b) utilising available context information and (c) sharing text information in accordance with the most critical integration tasks. A particular focus is on Semantic Service Environments that leverage Web service and Semantic Web technologies and adequately support the required systems integration and parallel processing of semi-structured and structured information. The paper presents an architecture for text integration that extends Semantic Service Environments with two types of integration services. The backbone of the Information Resource Sharing and Integration Service is a shared environment ontology that consolidates information on the project context and the available model, text and general linguistic resources. It also allows for the configuration of Semantic Text Analysis and Annotation Services to analyse the text documents, as well as for capturing the discovered text information and sharing it through semantic notification and retrieval engines. A particular focus of the paper is the definition of the overall integration process, configuring a complementary set of analysis and information sharing components.
This is a paper about knowledge in design and how to elicit knowledge from design processes. The paper is a preparation for an empirical study of interaction in the design process. The reasoning of three authors - Schön, Broadbent and Lundequist - on design processes is presented. They all share a pragmatic perspective and regard the process as an activity without a definite form. Design is seen as an activity of creating models of forms and shapes by addressing expert knowledge in a dialogic way to problematic situations. Due to the pragmatic approach, I find the pragmatist Dewey's understanding of knowledge and elicitation of knowledge appropriate for studying design processes. According to him, it is possible to build up objectified descriptions of experiences, including those based on experiences of an emotional and intuitive nature. There need not be a definite border separating tacit knowledge from explicit knowledge when it comes to the question of the possibility of verbal descriptions. Within pragmatic thinking, tacit knowledge can be articulated. The conclusion is that it is possible to study the tacit knowledge of design processes and to gain some qualitative insights useful for theory building. A study of design processes can look at three different forms of knowledge: it appears as a precognitive understanding of the design situation, as integrated in the design activity - seeing the situation as something known - and in the process of creating something new.
The synchronous distributed processing of common source code in the software development process is supported by well proven methods. The planning process has similarities with the software development process. However, there are no consistent and similarly successful methods for applications in construction projects. A new approach is proposed in this contribution.
Processing technical and environmental data on building materials, components, and systems has become more important during the last few years. Increased sensitivity towards environmental and energy problems has led to the demand for simulation and evaluation of the long-term behavior of buildings. The results of such simulations are expected to enable architects and engineers to develop a broader, interdisciplinary understanding of the impact of their products (buildings) on the environment. However, conducting such evaluations is currently hampered by the lack of comprehensive, up-to-date, and ecologically relevant data on building materials, components, and systems. To address this problem, this paper proposes an approach to deal with the absent or uncertain attributes of building materials, components, and systems. In the past, various information systems have been developed to provide data on a limited set of building materials, including precise values pertaining to some of their characteristics, such as availability, manufacturers, costs, etc. These traditional information systems have difficulty in dealing with uncertain, incomplete and sparse data. However, uncertainty and incompleteness characterize the nature of most of the available, environmentally related characteristics of materials, components, and systems. In this paper, a fuzzy-logic-based augmentation of traditional information systems is proposed to provide management, utilization and manipulation of incomplete and uncertain data.
Major problems in applying selective sensitivity to system identification are the requirement of precise knowledge about the system parameters and the realization of the required system of forces. This work presents a procedure which is able to derive selectively sensitive excitation by iterative experiments. The first step is to determine the selectively sensitive displacement and selectively sensitive force patterns. These values are obtained by introducing the prior information on the system parameters into an optimization which minimizes the sensitivities of the structural response with respect to the unselected parameters while keeping the sensitivities with respect to the selected parameters constant. In a second step the force pattern is used to derive dynamic loads on the tested structure and measurements are carried out. An automatic control ensures the required excitation forces. In a third step, the measured outputs are employed to update the prior information. The strategy is to minimize the difference between a predicted displacement response, formulated as a function of the unknown parameters and the measured displacements, and the selectively sensitive displacement calculated in the first step. With the updated values of the parameters a re-analysis of selective sensitivity is performed, and the experiment is repeated until the displacement responses of the model and the actual structure agree. As an illustration, a simply supported steel beam subjected to harmonic excitation is investigated, demonstrating that the adaptive excitation can be obtained efficiently.
For the dynamic behavior of lightweight structures like thin shells and membranes exposed to fluid flow, the interaction between the two fields is often essential. Computational fluid-structure interaction provides a tool to predict this interaction and to complement or eventually replace expensive experiments. Partitioned analysis techniques enjoy great popularity for the numerical simulation of these interactions. This is due to their computational superiority over simultaneous, i.e. fully coupled, monolithic approaches, as they allow the independent use of suitable discretization methods and modular analysis software. For the fluid, we use GLS-stabilized finite elements on a moving domain based on the incompressible, unsteady Navier-Stokes equations, where the formulation guarantees geometric conservation on the deforming domain. The structure is discretized by nonlinear, three-dimensional shell elements.
Commonly used sequential staggered coupling schemes may exhibit instabilities due to the so-called artificial added mass effect. The best remedy to this problem is to invoke subiterations in order to guarantee kinematic and dynamic continuity across the fluid-structure interface. Since iterative coupling algorithms are computationally very costly, their convergence rate is decisive for their usability. To ensure and accelerate the convergence of this iteration, the updates of the interface position are relaxed. The time-dependent, 'optimal' relaxation parameter is determined automatically, without any user input, by exploiting a gradient method or applying an Aitken iteration scheme.
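A minimal sketch of an Aitken-type relaxation step for the interface displacement update in such a subiteration loop, following the commonly used Delta-squared formula; the starting value of the relaxation factor is an assumption.

```python
# Hedged sketch of an Aitken Delta^2 relaxation for the interface displacement update
# in a partitioned FSI subiteration (notation follows common practice for such schemes).
import numpy as np

def aitken_update(d_old, d_tilde, r_prev, omega_prev):
    """
    d_old:   relaxed interface displacement from the previous subiteration
    d_tilde: new (unrelaxed) displacement returned by the structural solver
    r_prev:  interface residual of the previous subiteration (None in the first one)
    Returns the relaxed displacement, the current residual and the new relaxation factor.
    """
    r = d_tilde - d_old                              # current interface residual
    if r_prev is None:
        omega = 0.5                                  # assumed start value
    else:
        dr = r - r_prev
        omega = -omega_prev * np.dot(r_prev, dr) / np.dot(dr, dr)
    return d_old + omega * r, r, omega
```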
We consider efficient numerical methods for the solution of partial differential equations with stochastic coefficients or right hand side. The discretization is performed by the stochastic finite element method (SFEM). Separation of spatial and stochastic variables in the random input data is achieved via a Karhunen-Loève expansion or Wiener's polynomial chaos expansion. We discuss solution strategies for the Galerkin system that take advantage of the special structure of the system matrix. For stochastic coefficients linear in a set of independent random variables we employ Krylov subspace recycling techniques after having decoupled the large SFEM stiffness matrix.
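For reference, the truncated Karhunen-Loève expansion referred to above has the generic form below, with eigenpairs of the covariance kernel of the random field and uncorrelated random variables; the notation is chosen here for illustration.

```latex
% Truncated Karhunen-Loeve expansion of a random coefficient field a(x, omega):
% (lambda_i, f_i) are eigenpairs of the covariance kernel, xi_i uncorrelated random variables.
a(x,\omega) \;\approx\; a_0(x) \;+\; \sum_{i=1}^{M} \sqrt{\lambda_i}\, f_i(x)\, \xi_i(\omega)
```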
Optimum technological solutions must take into account the entire life cycle of structures, including design procedures as well as quality assurance, inspection, maintenance, and repair strategies. Unfortunately, current design standards do not provide a satisfactory basis to ensure expected structural lifetimes. The latter may vary from only a few years for temporary structures to over a century for bridges, water dams or nuclear repositories. Consistent scientific concepts are urgently required to cover this wide spectrum of lifetimes in structural design and maintenance. This was the motivation for a group of scientists at the Ruhr-University Bochum (RUB) to start a special research program, supported by the German Research Foundation (DFG) within the Collaborative Research Center SFB 398, in 1996. Institutes of the University of Wuppertal and the University of Duisburg-Essen joined the research group. The goal of the Center is to study sources of damage and deterioration in materials and structures, to develop consistent models and simulation methods, to predict structural lifetimes and finally to integrate these predictions into new lifetime-oriented design strategies.
Research activities in our center are organised in three Project Groups as follows:
- Modelling of lifetime effects
- Methods for lifetime-oriented structural analyses
- Future lifespan-oriented design strategies.
For the assessment of old buildings, thermographic analysis with infrared cameras is nowadays employed over a wide range of applications. Image processing and evaluation can be economically practicable only if the image evaluation can be automated to the largest possible extent. For that reason, methods of computer vision are presented in this paper to evaluate thermal images. To detect typical elements in thermal images (i.e. gray-value images), such as thermal bridges and lintels, methods of digital image processing have been applied, for which numerical procedures are available to transform, modify and encode images. At the same time, image processing can be regarded as a multi-stage process. In order to be able to accomplish the process of image analysis from image formation through enhancement and segmentation to categorization, appropriate functions must be implemented. For this purpose, different measuring procedures and methods for automated detection and evaluation have been tested.
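A minimal sketch of one such automated detection step, thresholding a gray-value thermal image and labelling connected warm regions; the image data and the threshold rule are illustrative assumptions.

```python
# Hedged sketch: locating warm regions (potential thermal bridges) in a gray-value
# thermal image by thresholding and connected-component labelling.
import numpy as np
from scipy import ndimage

image = np.random.default_rng(0).normal(20.0, 1.5, size=(240, 320))   # stand-in thermal image [deg C]
image[100:140, 150:160] += 6.0                                         # synthetic warm stripe, e.g. a lintel

mask = image > image.mean() + 2.0 * image.std()        # simple global threshold
labels, n = ndimage.label(mask)                        # segmentation into connected regions
sizes = ndimage.sum(mask, labels, index=range(1, n + 1))
print(f"{n} candidate regions, largest {int(sizes.max())} pixels")
```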
We propose a new approach to the numerical solution of quasi-static elastic-plastic problems based on the Moreau-Yosida theorem. After the time discretization, the problem is expressed as an energy minimization problem for the unknown displacement and plastic strain fields. The dependence of the minimization functional on the displacement is smooth, whereas the dependence on the plastic strain is non-smooth. Besides, there exists an explicit formula for calculating the plastic strain from a given displacement field. This allows us to reformulate the original problem as a minimization problem in the displacement only. Using the Moreau-Yosida theorem from convex analysis, the minimization functional in the displacements turns out to be Frechet-differentiable, although the hidden dependence on the plastic strain is non-differentiable. The second derivative exists everywhere apart from the elastic-plastic interface dividing the elastic and plastic zones of the continuum. This motivates the implementation of a Newton-like method, which converges super-linearly, as can be observed in our numerical experiments.
PKPM series CAD software is an integrated CAD system for building design, which integrates the following parts: architectural design, structural design, building services design, and statistical analysis of quantities and budget. These four parts share the same database with high efficiency. Over 80% of design corporations in China are using PKPM series CAD software. The detailed information and some key modules of PKPM series CAD software are introduced in this paper.
In engineering science the modeling and numerical analysis of complex systems and relations plays an important role. In order to carry out such an investigation, for example a stochastic analysis, in a reasonable computational time, approximation procedures have been developed. A very famous approach is the response surface method, where the relation between input and output quantities is represented, for example, by global polynomials or by local interpolation schemes such as Moving Least Squares (MLS). In recent years artificial neural networks (ANN) have been applied for such purposes as well. Recently an adaptive response surface approach for reliability analyses was proposed, which is very efficient concerning the number of expensive limit state function evaluations. Due to the applied simplex interpolation, the procedure is limited to small dimensions. In this paper this approach is extended to larger dimensions using combined ANN and MLS response surfaces for evaluating the adaptation criterion with only one set of joined limit state points. As adaptation criterion, a combination of the maximum difference in the conditional probabilities of failure and the maximum difference in the approximated radii is applied. Compared to response surfaces on directional samples or to plain directional sampling, the failure probability can be estimated with a much smaller number of limit state points.
Car following models are used to describe the behavior of a number of cars on the road dependent on the distance to the car in front. We introduce a system of ordinary differential equations and perform a theoretical and numerical analysis in order to find solutions that reflect various traffic situations. We present three different variations of the model motivated by reality.
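One classical instance of such a car-following ODE system is the optimal velocity model, shown here for orientation; the model variations examined in the paper may differ.

```latex
% Optimal velocity model (Bando et al.) as an example of a car-following ODE system:
\dot{x}_n(t) = v_n(t), \qquad
\dot{v}_n(t) = a\,\bigl[\,V\!\bigl(x_{n-1}(t) - x_n(t)\bigr) - v_n(t)\,\bigr],
\qquad n = 1,\dots,N
```

Here V is a monotonically increasing optimal velocity function of the headway to the car in front, and a is a sensitivity parameter; depending on these choices the system exhibits stable flow, stop-and-go waves or other traffic situations.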
Image processing has been much inspired by the human vision, in particular with regard to early vision. The latter refers to the earliest stage of visual processing responsible for the measurement of local structures such as points, lines, edges and textures in order to facilitate subsequent interpretation of these structures in higher stages (known as high level vision) of the human visual system. This low level visual computation is carried out by cells of the primary visual cortex. The receptive field profiles of these cells can be interpreted as the impulse responses of the cells, which are then considered as filters. According to the Gaussian derivative theory, the receptive field profiles of the human visual system can be approximated quite well by derivatives of Gaussians. Two mathematical models suggested for these receptive field profiles are on the one hand the Gabor model and on the other hand the Hermite model which is based on analysis filters of the Hermite transform. The Hermite filters are derivatives of Gaussians, while Gabor filters, which are defined as harmonic modulations of Gaussians, provide a good approximation to these derivatives. It is important to note that, even if the Gabor model is more widely used than the Hermite model, the latter offers some advantages like being an orthogonal basis and having better match to experimental physiological data. In our earlier research both filter models, Gabor and Hermite, have been developed in the framework of Clifford analysis. Clifford analysis offers a direct, elegant and powerful generalization to higher dimension of the theory of holomorphic functions in the complex plane. In this paper we expose the construction of the Hermite and Gabor filters, both in the classical and in the Clifford analysis framework. We also generalize the concept of complex Gaussian derivative filters to the Clifford analysis setting. Moreover, we present further properties of the Clifford-Gabor filters, such as their relationship with other types of Gabor filters and their localization in the spatial and in the frequency domain formalized by the uncertainty principle.
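For orientation, the classical one-dimensional forms of the two filter families mentioned are recalled below; the Clifford analysis constructions discussed in the paper generalize these to higher dimensions.

```latex
% Gaussian, Hermite (Gaussian derivative) and Gabor filters in one dimension:
g_\sigma(x) = \frac{1}{\sqrt{2\pi}\,\sigma}\, e^{-x^2/2\sigma^2}, \qquad
h_n(x) = \frac{d^n}{dx^n}\, g_\sigma(x) \quad \text{(Hermite filter)}, \qquad
\gamma_{\sigma,\omega_0}(x) = g_\sigma(x)\, e^{\,i\omega_0 x} \quad \text{(Gabor filter)}
```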
In this paper the application of a two-parameter damage model, based on a non-linear finite element approach, to the analysis of masonry panels is proposed. Masonry is treated as a homogenized material, for which the material characteristics can be defined by using a homogenization technique. Masonry panels subjected to shear loading are studied using the proposed procedure within the framework of three-dimensional analyses. The nonlinear behaviour of masonry can be modelled using concepts of damage theory. In this case an adequate damage function is defined to take into account the different response of masonry under tension and compression states. Cracking can, therefore, be interpreted as a local damage effect, defined by the evolution of known material parameters and by one or several functions which control the onset and evolution of damage. The model takes into account all the important aspects which should be considered in the nonlinear analysis of masonry structures, such as the effect of stiffness degradation due to mechanical effects and the problem of objectivity of the results with respect to the finite element mesh. Finally, the proposed damage model is validated by a comparison with experimental results available in the literature.
Adopting the European laws concerning environmental protection will require sustained efforts by the authorities and communities of Romania; implementing modern solutions will become a fast and effective option for improving the functioning of the systems in order to prevent disasters. As a part of the urban infrastructure, the drainage networks for pluvial and residual waters are included in the plan of promoting systems which protect environmental quality, with the purpose of integrated and adaptive management. The paper presents a distributed control system for the sewer network of the town of Iasi. The unsatisfactory technical state of the existing sewer system is described, focusing on the objectives related to the implementation of the control system. The proposed distributed control system for the Iasi drainage network is based on the implementation of hierarchical control theory for diagnosis, sewer planning and management. Two control levels are proposed: coordination and local execution. The configuration of the distributed control system, including data acquisition and conversion equipment, interface characteristics, the local data bus, the data communication network and the station configuration, is described in detail. The project aims to be a useful instrument for the local authorities in preventing and reducing the impact of future natural disasters on urban areas by means of modern technologies.
The execution of project activities generally requires the use of (renewable) resources like machines, equipment or manpower. The resource allocation problem consists in assigning time intervals to the execution of the project activities while taking into account temporal constraints between activities emanating from technological or organizational requirements and costs incurred by the resource allocation. If the total procurement cost of the different renewable resources has to be minimized we speak of a resource investment problem. If the cost depends on the smoothness of the resource utilization over time the underlying problem is called a resource levelling problem. In this paper we consider a new tree-based enumeration method for solving resource investment and resource levelling problems exploiting some fundamental properties of spanning trees. The enumeration scheme is embedded in a branch-and-bound procedure using a workload-based lower bound and a depth first search. Preliminary computational results show that the proposed procedure is promising for instances with up to 30 activities.
Interval analysis extends the concept of computing with real numbers to computing with real intervals. As a consequence, some interesting properties appear, such as the delivery of guaranteed results or confirmed global values. The former property is given in the sense that unknown numerical values are known to lie in a computed interval. The latter property states that the global minimum value, for example, of a given function is also known to be contained in an interval (or a finite set of intervals). Depending upon the amount of computational effort invested in the calculation, we can often find tight bounds on these enclosing intervals. The downside of interval analysis, however, is the mathematically correct, but often very pessimistic size of the interval result. This is in particular due to the so-called dependency effect, which occurs when a single variable is used multiple times in one calculation. When applying interval analysis to structural analysis problems, this dependency has a great influence on the quality of the numerical results. In this paper, a brief background of interval analysis is presented and it is shown how it can be applied to the solution of structural analysis problems. A discussion of possible improvements as well as an outlook on parallel computing is also given.
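A minimal sketch of the dependency effect mentioned above: with a naive interval subtraction, the expression x - x does not evaluate to zero because the two occurrences of x are treated as independent.

```python
# Hedged sketch: a minimal interval type illustrating the dependency effect.
class Interval:
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi
    def __sub__(self, other):
        # interval subtraction ignores that both operands may be the same variable
        return Interval(self.lo - other.hi, self.hi - other.lo)
    def __repr__(self):
        return f"[{self.lo}, {self.hi}]"

x = Interval(1.0, 2.0)
print(x - x)   # prints [-1.0, 1.0], although the exact result of x - x is always 0
```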
The contribution presents a model that is able to simulate construction duration and cost for a building project. This model predicts a set of expected project costs and a duration schedule depending on input parameters such as production speed, scope of work, time schedule, bonding conditions, and maximum and minimum deviations from the scope of work and production speed. The simulation model is able to calculate, on the basis of an input level of probability, the corresponding construction cost and time duration of a project. The reciprocal view is concerned with finding the adequate level of probability for given construction costs and activity durations. The interpretive outputs of the application software include the compilation of a presumed dynamic progress chart. This progress chart represents the expected scenario of the development of a building project, with the mapping of potential time dislocations for particular activities. The calculation of the presumed dynamic progress chart is based on an algorithm which calculates mean values as a partial result of the simulated building project. Construction cost and time models are, in many ways, useful tools in project management. Clients are able to make proper decisions about the time and cost schedules of their investments. Consequently, building contractors are able to schedule the predicted project cost and duration before any decision is finalized.
A fast solver method called the multigrid preconditioned conjugate gradient method is proposed for the mechanical analysis of heterogeneous materials on the mesoscale. Even small samples of a heterogeneous material such as concrete show a complex geometry of different phases. These materials can be modelled by projection onto a uniform, orthogonal grid of elements. One major problem is that the possible resolution of the concrete specimen is generally restricted due to (a) computation times and, even more critically, (b) memory demand. Iterative solvers can be based on a local, element-based formulation, while orthogonal grids consist of geometrically identical elements. The element-based formulation is short and transparent, and therefore efficient in implementation. A variation of the material properties in elements or integration points is possible. The multigrid method is a fast iterative solver method where, ideally, the computational effort increases only linearly with problem size. This is an optimal property which is almost reached in the implementation presented here; in fact, no other method is known which scales better than linearly. Therefore the multigrid method gains in importance the larger the problem becomes. But for heterogeneous models with very large ratios of Young's moduli the multigrid method slows down considerably by a constant factor. Such large ratios occur in certain heterogeneous solids, as well as in the damage analysis of solids. As a solution to this problem, the multigrid preconditioned conjugate gradient method is proposed. A benchmark highlights the multigrid preconditioned conjugate gradient method as the method of choice for very large ratios of Young's modulus. A proposed modified multigrid cycle shows good results, both as a stand-alone solver and as a preconditioner.
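A minimal sketch of the preconditioned conjugate gradient loop in question; in the method described above the preconditioner application would be one multigrid V-cycle, while here a Jacobi stand-in keeps the sketch self-contained and runnable.

```python
# Hedged sketch: preconditioned conjugate gradients where M_inv(r) would be one multigrid
# V-cycle in the multigrid-preconditioned variant; a Jacobi stand-in is used here.
import numpy as np

def pcg(A, b, M_inv, tol=1e-8, max_iter=500):
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv(r)
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            break
        z = M_inv(r)                     # one V-cycle in the multigrid-preconditioned variant
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

A = np.diag([1.0, 10.0, 100.0]) + 0.1    # small SPD test matrix with a large stiffness contrast
A = (A + A.T) / 2
b = np.ones(3)
print(pcg(A, b, M_inv=lambda r: r / np.diag(A)))
```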
The aim of this paper is to present the so-called discrete-continual boundary element method (DCBEM) of structural analysis. Its field of application comprises buildings, structures, and parts and components of residential, commercial and other structures whose physical and geometrical parameters are invariant in some dimensions. We should mention here in particular such objects as beams, thin-walled bars, strip foundations, plates, shells, deep beams, high-rise buildings, extended buildings, pipelines, rails, dams and others. DCBEM belongs to the group of semianalytical methods. Semianalytical formulations are contemporary mathematical models which are currently becoming feasible due to the substantial speed-up in computer performance. DCBEM is based on the theory of pseudodifferential boundary equations. The corresponding pseudodifferential operators are discretely approximated using Fourier analysis or wavelet analysis. The main advantages of DCBEM over other methods of numerical analysis are a double reduction in the dimension of the problem (the discrete numerical division is applied not to the full region of interest but only to the boundary of the region's cross section; as a matter of fact one solves a one-dimensional problem with a finite step on the boundary of the region), the opportunity to carry out a very detailed analysis of specifically chosen zones, simplified preparation of the initial data, and simple and adaptive algorithms. Two methods to define and conduct a DCBEM analysis have been developed - indirect (IDCBEM) and direct (DDCBEM); the indirect one, as in the boundary element method (BEM), is applied and used a little more often than the direct one.
Unconstrained models are very often found in the broad spectrum of different theories of traffic demand modelling. In these models there are no or only one-sided restrictions influencing the choice of the individual. In traffic demand, however, different decisive dependencies of the traffic volume on the specific conditions of the territorial structure potentials exist. Kirchhoff and Lohse introduced bi- and tri-linearly constrained models to represent these dependencies. In principle, the dependencies are described as hard, elastic and open boundary sum criteria. In this article a model is formulated which departs from these predefined boundary sum criteria and allows a free determination of minimal and maximal boundary sum criteria. The iterative solution algorithm, following a FURNESS procedure, is presented as well. With the approach of freely selectable minimal and maximal boundary sum criteria, the modelling transport planner gains the possibility of representing traffic behaviour even better. Furthermore, all common boundary sum criteria can be calculated with this model. In this way, the often necessary and sensible standard and special cases can also be modelled.
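For orientation, a sketch of the classical FURNESS step (iterative proportional fitting) for hard boundary sums; the model described in the paper generalizes this to freely selectable minimal and maximal boundary sum criteria.

```python
# Hedged sketch: FURNESS (iterative proportional fitting) for a trip matrix with hard
# row/column totals; seed matrix and totals are illustrative.
import numpy as np

def furness(seed, row_totals, col_totals, iters=100, tol=1e-9):
    T = seed.astype(float).copy()
    for _ in range(iters):
        T *= (row_totals / T.sum(axis=1))[:, None]    # balance origins
        T *= (col_totals / T.sum(axis=0))[None, :]    # balance destinations
        if np.allclose(T.sum(axis=1), row_totals, atol=tol):
            break
    return T

seed = np.ones((3, 3))
print(furness(seed, row_totals=np.array([100., 200., 300.]),
              col_totals=np.array([150., 150., 300.])))
```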
The Element-free Galerkin Method has become a very popular tool for the simulation of mechanical problems with moving boundaries. The internally applied Moving Least Squares approximation generally uses Gaussian or cubic weighting functions and has compact support. Due to the approximative character of this method, the obtained shape functions do not fulfill the interpolation condition, which causes additional numerical effort for the imposition of the essential boundary conditions. The application of a singular weighting function, which leads to singular coefficient matrices at the nodes, can solve this problem, but requires a very careful placement of the integration points. Special procedures for the handling of such singular matrices have been proposed in the literature, which require additional numerical effort. In this paper a non-singular weighting function is presented which leads to an exact fulfillment of the interpolation condition. This weighting function leads to regular values of the weights and of the coefficient matrices in the whole interpolation domain, even at the nodes. Furthermore, this function gives much more stable results for varying sizes of the influence radius and for strongly distorted nodal arrangements than classical weighting function types. Nevertheless, for practical applications the results are similar to those obtained with the regularized weighting type presented by the authors in previous publications. Finally, a new concept is presented which enables an efficient analysis of systems with strongly varying node density. In this concept, the nodal influence domains are adapted depending on the nodal configuration by interpolating the influence radius for each direction from the distances to the natural neighbor nodes. This approach requires a Voronoi diagram of the domain, which is available in this study since Delaunay triangles are used as integration background cells. The numerical examples show that this method leads to a more uniform and reduced number of influencing nodes for systems with varying node density than the classical circular influence domains, which means that the small additional numerical effort for interpolating the influence radius leads to a remarkable reduction of the total numerical cost in a linear analysis while obtaining similar results. For nonlinear calculations this advantage would be even more significant.
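For orientation, a classical (regular, non-interpolating) cubic spline MLS weight over the normalized distance s = ||x - x_I|| / r_I is recalled below; the non-singular, interpolating weighting function proposed in the paper modifies this kind of function.

```latex
% Classical cubic spline weighting function used in EFG/MLS approximations:
w(s) =
\begin{cases}
\tfrac{2}{3} - 4s^2 + 4s^3, & 0 \le s \le \tfrac{1}{2},\\[2pt]
\tfrac{4}{3} - 4s + 4s^2 - \tfrac{4}{3}s^3, & \tfrac{1}{2} < s \le 1,\\[2pt]
0, & s > 1.
\end{cases}
```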
ON THE NAVIER-STOKES EQUATION WITH FREE CONVECTION IN STRIP DOMAINS AND 3D TRIANGULAR CHANNELS
(2006)
The Navier-Stokes equations and related ones can be treated very elegantly with the quaternionic operator calculus developed in a series of works by K. Guerlebeck, W. Sproessig and others. This approach is extended in this paper. In order to apply the quaternionic operator calculus to solve these types of boundary value problems fully explicitly, one basically needs to evaluate two types of integral operators: the Teodorescu operator and the quaternionic Bergman projector. While the integral kernel of the Teodorescu transform is universal for all domains, the kernel function of the Bergman projector, called the Bergman kernel, depends on the geometry of the domain. With special variants of quaternionic holomorphic multiperiodic functions we obtain explicit formulas for three-dimensional parallel plate channels, rectangular block domains and regular triangular channels. The explicit knowledge of the integral kernels then makes it possible to evaluate the operator equations in order to determine the solutions of the boundary value problem explicitly.
Interactive visualization based on 3D computer graphics is nowadays an indispensable part of any simulation software used in engineering. Nevertheless, the implementation of such visualization software components is often avoided in research projects because it is a challenging and potentially time-consuming task. In this contribution, a novel Java framework for the interactive visualization of engineering models is introduced. It supports the task of implementing engineering visualization software by providing adequate program logic as well as high-level classes for the visual representation of entities typical of engineering models. The presented framework is built on top of the open source visualization toolkit VTK. In VTK, a visualization model is established by connecting several filter objects in a so-called visualization pipeline. Although designing and implementing a good pipeline layout is demanding, VTK does not directly support the reuse of pipeline layouts. Our framework tailors VTK to engineering applications on two levels. On the first level it adds new, engineering-model-specific filter classes to VTK. On the second level, ready-made pipeline layouts for certain aspects of engineering models are provided. For instance, there is a pipeline class for one-dimensional elements like trusses and beams that is capable of showing the elements along with deformations and member forces. In order to facilitate the implementation of a graphical user interface (GUI) for each pipeline class, there exists a reusable Java Swing GUI component that allows the user to configure the appearance of the visualization model. Because of its flexible structure, the framework can be easily adapted and extended to new problem domains. Currently it is used in (i) an object-oriented p-version finite element code for design optimization, (ii) an agent-based monitoring system for dam structures and (iii) the simulation of destruction processes by controlled explosives based on multibody dynamics. Application examples from all three domains illustrate that the approach presented is powerful as well as versatile.
The paper proposes a new method for general 3D measurement and 3D point reconstruction. With its features, the method explicitly aims at practical applications. These features include low technical expense and minimal user interaction, a clear separation of the problem into steps that are solved by simple mathematical methods (direct, stable and optimal in the least-squares sense), and scalability. The method expects the internal and radial distortion parameters of the camera(s) used as inputs, as well as a plane quadrangle with known geometry within the scene. At first, for each single picture the 3D position of the reference quadrangle (with respect to each camera coordinate frame) is calculated. These 3D reconstructions of the reference quadrangle are then used to obtain the external parameters of each camera relative to the first one. With known external parameters, triangulation is finally possible. The differences from other known procedures are outlined, paying attention to the stable mathematical methods (no use of nonlinear optimization) and the low user interaction, which is achieved together with good results.
We establish the basis of a discrete function theory starting with a Fischer decomposition for difference Dirac operators. Discrete versions of homogeneous polynomials, Euler and Gamma operators are obtained. As a consequence we obtain a Fischer decomposition for the discrete Laplacian. For the sake of simplicity we consider in the first part only Dirac operators which contain only forward or backward finite differences. Of course, these Dirac operators do not factorize the classic discrete Laplacian. Therefore, we will consider a different definition of a difference Dirac operator in the quaternionic case which does factorize the discrete Laplacian.
An analysis of changes in the geometry of a reinforced concrete chimney and their influence on the stresses in the chimney mantle was carried out. All the changes were introduced to a model chimney and compared. The relations between the stresses in the mantle of the chimney and the deformations determined by the change of the chimney's vertical axis geometry were investigated. The vertical axis of the chimney was described by a linear function (corresponding to the real rotation of the chimney together with the foundation) and by a parabolic function (corresponding to the real displacement of the chimney under the influence of horizontal forces - wind). The positive stress pattern in the concrete as well as the negative stress pattern in the reinforcing steel are presented, and the two cases are compared. An analysis of the stress changes in the chimney mantle depending on the modification of the thickness of the mantle (the thickness of the chimney mantle was altered in a linear or an abrupt way) was carried out. The relation between the stresses and the change of the chimney's diameter from the bottom to the top of the chimney was investigated. All the analyses were conducted by means of a specially developed computer program created in the Mathematica environment. The program also makes it possible to control the calculations and to visualize the results at every stage of the calculation process.
Traffic simulation is a valuable tool for the design and evaluation of road networks. Over the years, the level of detail to which urban and freeway traffic can be simulated has increased steadily, shifting from a merely qualitative macroscopic perspective to a very detailed microscopic view, where the behavior of individual vehicles is emulated realistically. With the improvement of behavioral models, however, the computational complexity has also steadily increased, as more and more aspects of real-life traffic have to be considered by the simulation environment. Despite the constant increase in computing power of modern personal computers, microscopic simulation remains computationally expensive, limiting the maximum network size that can be simulated on a single-processor computer in reasonable time. Parallelization can distribute the computing load from a single computer system to a cluster of several computing nodes. To this end, the existing simulation framework had to be adapted to allow for a distributed approach. As the simulation is ultimately targeted to be executed in real time, incorporating real traffic data, only a spatial partition of the simulation was considered, meaning the road network has to be partitioned into subnets of comparable complexity to ensure homogeneous load balancing. The partition process must also ensure that the division between subnets occurs only in regions where no strong interaction between the separated road segments occurs (i.e. not in the direct vicinity of junctions). In this paper, we describe a new microscopic reasoning voting strategy and discuss to what extent the increasing computational costs of these more complex behaviors lend themselves to a parallelized approach. We show the parallel architecture employed, the communication between computing units using MPIJava, and the benefits and pitfalls of adapting a single-computer application for use on a multi-node computing cluster.
The mathematical and technical foundations of optimization have been developed to a large extent. In the design of buildings, however, optimization is rarely applied because of the insufficient adaptation of this method to the needs of building design. The use of design optimization requires the consideration of all relevant objectives in an interactive and multidisciplinary process. Disciplines such as structural, lighting, and thermal engineering, architecture, and economics impose various objectives on the design. A good solution calls for a compromise between these often contradictory objectives. This presentation outlines a method for the application of Multidisciplinary Design Optimization (MDO) as a tool for the design of buildings. An optimization model is established which takes into account the fact that in building design the non-numerical aspects are of greater importance than in other engineering disciplines. A component-based decomposition enables the designer to manage the non-numerical aspects in an interactive design optimization process. A façade example demonstrates how the different disciplines interact and how the components integrate the disciplines in one optimization model. In this grid-based façade example, the materials switch between a discrete number of materials and construction types. For lighting and thermal engineering, architecture, and economics, analysis functions calculate the performance; utility functions serve as an important means of evaluation, since not every increase or decrease of a physical value improves the design. For experimental purposes, a genetic algorithm applied to the exemplary model demonstrates the use of optimization in this design case. A component-based representation first serves to manage non-numerical characteristics such as aesthetics. Furthermore, it complies with the usual fabrication methods in building design and with object-oriented data handling in CAD. Therefore, components provide an important basis for an interactive MDO process in building design.