Containers are not only by far the most important means of transport for the vast majority of goods we deal with every day. Containers have, perhaps because of their plain, clear expressiveness, become the symbol of globalization and of many phenomena associated with this development. Yet it is a thoroughly ambivalent symbol. Containers stand as much for the impressive dynamism of modern capitalism and the optimism underlying it despite all crises as for the fears and objections raised against it; against the indifference of a logistical mode of organization geared purely toward optimization, and against the forced convergence and assimilation of formerly distant parts of the world through the exponential proliferation of transport and communication processes. The focus of this work lies in the 20th century. It examines the (pre)history and theory of the container as a modern cultural technique and a central component of a worldwide logistical system. And it shows the container as an element of a way of thinking and organizing in modular, mobile spatial units that can be transferred to many other areas beyond the transport of goods. To this end, it describes and analyzes "container situations" in fields as diverse as trade and transport, architecture, the sciences, art, and the social realities of migrants and seafarers.
50 Years of Dissertations at the Hochschule für Architektur und Bauwesen and the Bauhaus-Universität
(2005)
This publication documents the dissertations produced at our university over a period of 50 years, numbering 1,100 in total. It thus reviews an important part of the academic history of Weimar and, at the same time, a part of the history of higher education in the GDR. The bibliography provides building blocks for a history of the disciplines and faculties at the Weimar university and, beyond that, for a social history of scientists in Thuringia. For example, many of those now holding leading positions at the university, professors as well as members of the university administration, earned their doctorates in Weimar. The same can be said of people in leading scientific, political, or economic positions in the region and the town. In this respect, the present bibliography may provide an impetus for further research.
The evolution of data exchange and integration standards within the Architectural, Engineering and Construction industry is gradually making the long-held vision of computer-integrated construction a reality. The Industry Foundation Classes and the CIMsteel Integration Standards are two such standards that have seen remarkable success over the past few years. Despite these successes, the standards support the exchange of product data better than they do process data, especially for processes that are only loosely coupled with product models. This paper reports on ongoing research to evaluate the adequacy of the IFC and CIS/2 standards to support process modeling in the steel supply chain. Some initial recommendations are made regarding enhancements to the data standards to better support processes.
A categorical perspective towards aerodynamic models for aeroelastic analyses of bridge decks
(2019)
Reliable modelling in structural engineering is crucial for the serviceability and safety of structures. The huge variety of aerodynamic models for aeroelastic analyses of bridges raises natural questions about their complexity and, consequently, their quality. Moreover, a direct comparison of aerodynamic models is typically either impossible or meaningless, as the models can be based on very different physical assumptions. Therefore, to address the question of the principal comparability and complexity of models, a more abstract approach that accounts for the effect of basic physical assumptions is necessary.
This paper presents an application of a recently introduced category theory-based modelling approach to a diverse set of models from bridge aerodynamics. Initially, the categorical approach is extended to allow an adequate description of aerodynamic models. Complexity of the selected aerodynamic models is evaluated, based on which model comparability is established. Finally, the utility of the approach for model comparison and characterisation is demonstrated on an illustrative example from bridge aeroelasticity. The outcome of this study is intended to serve as an alternative framework for model comparison and impact future model assessment studies of mathematical models for engineering applications.
Methods with convergence order p ≥ 2 (Newton's method, the method of tangent hyperbolas, the method of tangent parabolas, etc.) and their approximate variants are studied. Conditions are presented under which the approximate variants preserve the convergence rate intrinsic to these methods, and some computational aspects (possibilities for organizing parallel computation, globalization of a method, the solution of linear equations versus matrix inversion at every iteration, etc.) are discussed. Polyalgorithmic computational schemes (hybrid methods) combining the best features of various methods are developed, and possibilities for their application to the numerical solution of two-point boundary-value problems in ordinary differential equations and decomposition-coordination problems in convex programming are analyzed.
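As a concrete illustration of the class of methods studied, the following sketch contrasts the classical Newton iteration with an approximate variant in which the exact derivative is replaced by a forward difference. Function names and tolerances are illustrative, not taken from the paper.

```python
# Sketch: Newton's method and an approximate variant that replaces the
# exact derivative with a finite difference.  Illustrative only; the paper
# treats the general class of order p >= 2 methods and their hybrids.

def newton(f, df, x0, tol=1e-12, max_iter=50):
    """Classical Newton iteration x_{k+1} = x_k - f(x_k)/f'(x_k)."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / df(x)
        x -= step
        if abs(step) < tol:
            break
    return x

def newton_fd(f, x0, h=1e-7, tol=1e-12, max_iter=50):
    """Approximate variant: the derivative is a forward difference.
    The fast convergence is preserved as long as h stays small relative
    to the current error -- the kind of condition the paper formalizes."""
    x = x0
    for _ in range(max_iter):
        dfx = (f(x + h) - f(x)) / h
        step = f(x) / dfx
        x -= step
        if abs(step) < tol:
            break
    return x

# Example: solve x**2 - 2 = 0 starting from x0 = 1
root = newton(lambda x: x * x - 2, lambda x: 2 * x, 1.0)
root_fd = newton_fd(lambda x: x * x - 2, 1.0)
```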
Recently, the demand for housing and urban infrastructure has increased, raising the risk to human lives from natural calamities. The demand for occupancy has rapidly increased the rate of construction, while inadequately designed structures remain all the more vulnerable. Buildings constructed before the development of seismic codes are additionally susceptible to earthquake vibrations. Structural collapse causes economic losses as well as losses of human life. Applying rigorous theoretical methods to analyze the behavior of every structure is expensive and time-consuming. Therefore, a rapid vulnerability assessment method for checking structural performance is necessary for future developments. Such a process is known as Rapid Visual Screening (RVS). This technique was developed to identify, inventory, and screen structures that are potentially hazardous. Sometimes, poor construction quality means that some of the required parameters are unavailable; in such cases, the RVS process becomes tedious. Hence, to tackle such situations, multiple-criteria decision-making (MCDM) methods open a new gateway for seismic vulnerability assessment. The different parameters required by RVS can be taken as criteria in MCDM, which evaluates multiple conflicting criteria in decision making and is applied in several fields. This paper aims to bridge the gap between RVS and MCDM. Furthermore, to establish the correlation between these techniques, the methodologies of the Indian, Turkish, and Federal Emergency Management Agency (FEMA) codes have been implemented, and the resulting seismic vulnerability assessments of structures have been observed and compared.
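A weighted-sum score is one of the simplest MCDM aggregations; the sketch below illustrates the general idea of ranking buildings by screening priority. The criteria names, weights, and scores are hypothetical and not taken from any of the codes mentioned above.

```python
# Minimal weighted-sum MCDM sketch for RVS-style prioritization:
# each building is scored on several vulnerability criteria, each
# criterion carries a weight, and the weighted sum ranks the buildings.
# All names and numbers below are illustrative.

criteria_weights = {"soft_storey": 0.35, "age": 0.25,
                    "irregularity": 0.25, "soil_class": 0.15}

buildings = {
    "A": {"soft_storey": 0.9, "age": 0.7, "irregularity": 0.4, "soil_class": 0.6},
    "B": {"soft_storey": 0.2, "age": 0.9, "irregularity": 0.3, "soil_class": 0.5},
}

def vulnerability_score(scores, weights):
    """Weighted sum of the normalized criterion scores."""
    return sum(weights[c] * scores[c] for c in weights)

# Buildings ordered from most to least urgent for retrofitting
ranking = sorted(buildings,
                 key=lambda b: vulnerability_score(buildings[b], criteria_weights),
                 reverse=True)
```

More elaborate MCDM methods (AHP, TOPSIS, etc.) differ mainly in how the weights are elicited and how the aggregation is performed, but the ranking step has this same shape.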
The synchronous distributed processing of common source code in the software development process is supported by well-proven methods. The planning process has similarities with the software development process; however, there are no consistent and similarly successful methods for applications in construction projects. A new approach is proposed in this contribution.
Interactive scientific visualizations are widely used for the visual exploration and examination of physical data resulting from measurements or simulations. Driven by technical advancements of data acquisition and simulation technologies, especially in the geo-scientific domain, large amounts of highly detailed subsurface data are generated. The oil and gas industry is particularly pushing such developments as hydrocarbon reservoirs are increasingly difficult to discover and exploit. Suitable visualization techniques are vital for the discovery of the reservoirs as well as their development and production. However, the ever-growing scale and complexity of geo-scientific data sets result in an expanding disparity between the size of the data and the capabilities of current computer systems with regard to limited memory and computing resources.
In this thesis we present a unified out-of-core data-virtualization system supporting geo-scientific data sets consisting of multiple large seismic volumes and height-field surfaces, wherein each data set may exceed the size of the graphics memory or possibly even the main memory. Current data sets fall within the range of hundreds of gigabytes up to terabytes in size. Through the mutual utilization of memory and bandwidth resources by multiple data sets, our data-management system is able to share and balance limited system resources among different data sets. We employ multi-resolution methods based on hierarchical octree and quadtree data structures to generate level-of-detail working sets of the data stored in main memory and graphics memory for rendering. The working set generation in our system is based on a common feedback mechanism with inherent support for translucent geometric and volumetric data sets. This feedback mechanism collects information about required levels of detail during the rendering process and is capable of directly resolving data visibility without the application of any costly occlusion culling approaches. A central goal of the proposed out-of-core data management system is an effective virtualization of large data sets. Through an abstraction of the level-of-detail working sets, our system allows developers to work with extremely large data sets independent of their complex internal data representations and physical memory layouts.
Based on this out-of-core data virtualization infrastructure, we present distinct rendering approaches for specific visualization problems of large geo-scientific data sets. We demonstrate the application of our data virtualization system and show how multi-resolution data can be treated exactly the same way as regular data sets during the rendering process. An efficient volume ray casting system is presented for the rendering of multiple arbitrarily overlapping multi-resolution volume data sets. Binary space-partitioning volume decomposition of the bounding boxes of the cube-shaped volumes is used to identify the overlapping and non-overlapping volume regions in order to optimize the rendering process. We further propose a ray casting-based rendering system for the visualization of geological subsurface models consisting of multiple very detailed height fields. The rendering of an entire stack of height-field surfaces is accomplished in a single rendering pass using a two-level acceleration structure, which combines a minimum-maximum quadtree for empty-space skipping and sorted lists of depth intervals to restrict ray intersection searches to relevant height fields and depth ranges. Ultimately, we present a unified rendering system for the visualization of entire geological models consisting of highly detailed stacked horizon surfaces and massive volume data. We demonstrate a single-pass ray casting approach facilitating correct visual interaction between distinct translucent model components, while increasing the rendering efficiency by reducing processing overhead of potentially invisible parts of the model. The combination of image-order rendering approaches and the level-of-detail feedback mechanism used by our out-of-core data-management system inherently accounts for occlusions of different data types without the application of costly culling techniques.
The unified out-of-core data-management and virtualization infrastructure considerably facilitates the implementation of complex visualization systems. We demonstrate its applicability for the visualization of large geo-scientific data sets using output-sensitive rendering techniques. As a result, the magnitude and multitude of data sets that can be interactively visualized is significantly increased compared to existing approaches.
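The level-of-detail working-set idea described above can be sketched as follows; the node structure, the per-node detail requirement (standing in for the rendering feedback), and the budget rule are simplified stand-ins for the actual system.

```python
# Sketch of level-of-detail working-set selection on a quadtree: a node is
# refined only while the feedback asks for more detail and the node budget
# (a proxy for limited memory) permits.  Illustrative, not the thesis code.

class QuadNode:
    def __init__(self, level, x, y):
        self.level, self.x, self.y = level, x, y

    def children(self):
        # The four child tiles at the next-finer level
        return [QuadNode(self.level + 1, 2 * self.x + dx, 2 * self.y + dy)
                for dx in (0, 1) for dy in (0, 1)]

def working_set(root, required_level, budget):
    """Collect nodes until the required detail or the budget is reached.

    required_level(node) plays the role of the rendering feedback: it says
    how fine a level this part of the data set should be shown at."""
    selected, frontier = [], [root]
    while frontier:
        node = frontier.pop()
        refine = (node.level < required_level(node)
                  and len(selected) + len(frontier) + 4 <= budget)
        if refine:
            frontier.extend(node.children())
        else:
            selected.append(node)
    return selected

# Uniform refinement to level 2 yields the 16 level-2 tiles.
leaves = working_set(QuadNode(0, 0, 0), lambda n: 2, budget=64)
```

In the real system the required level varies per node (driven by visibility and screen-space error collected during rendering), which is what makes the working set adaptive rather than uniform.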
Developing and emerging tropical Asian countries have encountered fast urban development due to the migration of farmers seeking a better life in the city. This has resulted in a lack of appropriate infrastructure and inadequate social services in many cities. Municipal solid waste management is no exception and is in fact often placed at the bottom of the list of priorities for cities' urban management plans, since laws and regulations must first be formulated and implemented. The problem of unmanaged municipal solid waste leads to air pollution, disease, and soil and water contamination. In tropical climates these problems are compounded by high temperatures, high humidity, heavy rainfall and frequent flooding. Stagnant water and leachate from waste quickly become breeding grounds for insects, rodents and bacteria, creating a health hazard for workers and local populations. Moreover, water and groundwater contamination may lead to serious environmental degradation with direct impacts on water supplies and on the rapid degradation of agricultural products, the backbone of most tropical Asian countries. Many cities still allow or tolerate the dumping of waste in uncontrolled sites, and open burning that disperses particulates most likely containing dioxins and furans. Even with increasingly scarce land availability within or near cities, sanitary landfill is still the most often chosen disposal method around Asia because of its lower cost compared to modern treatment systems. Yet most of these landfill sites have no proper lining, daily covering, methane recovery devices, leachate control systems, or long-term closure and monitoring plans, which implies both short- and long-term hazards. Some municipalities have opted for incineration, which usually entails high operation and maintenance costs because of the need for supplemental fuel and often-inappropriate running conditions.
Although tropical conditions appear to favor certain disposal systems such as composting, appropriate technology needs to be identified in order to reduce operation and maintenance costs while ensuring good-quality outputs; compost plants have often been closed because of poor-quality products due to the high content of plastic and glass particulates in the finished product. Tropical Asian cities are now required to identify affordable and sustainable solutions for managing the increasing amount of waste generated daily, while ensuring minimal environmental impact, social acceptance and minimal land use. The purpose of this dissertation was to develop a user-friendly decision-making tool for public administrators and government officials in tropical Asian developing and emerging cities. The tool was developed from a list of selected decision-making issues necessary for making an informed decision, and is intended to support decision-makers in a preliminary assessment of the most appropriate waste management and treatment system for their municipality. Tropical Asian cities must consider a number of issues when deciding on their waste management plan, such as the continuously changing quantity and composition of waste associated with increasing population and per-capita income, the high humidity levels, and the often-limited financial resources. Other determinant factors include legal, political, institutional, social and technical issues. Furthermore, administrators must appreciate the importance of each stage of waste management, from waste generation, collection and transport through waste characteristics to disposal and treatment. To better understand the complexity of the issues involved in tropical Asian municipalities, the city of Bangkok, Thailand's largest city and capital, was selected as a case study for the management of the 9,000 tonnes of waste it generates daily.
Numerous interviews and meetings, along with the review of documents and reports and site visits, offered an inside view of the tropical city's various decision-making issues concerning its waste management plan, and allowed specific problems encountered by the city's decision-makers to be examined. The review and analysis of the decision-making issues involved in Bangkok's waste management plan showed how the decision-making tool can be used in various tropical Asian cities. In conclusion, waste management in an emerging tropical country involves specific challenges that need to be addressed. Economic, technical and social criteria need to be fully understood to enable government officials to select the most appropriate urban waste management system. Limited budgets, lack of public awareness and poor system management often lead decision-makers to choose what appears to be the best solution in the short term but proves more costly over the years. Weather conditions and the scarcity of land near the city make waste management especially challenging. The decision-making framework offers decision-makers a tool to facilitate the understanding and identification of the key issues necessary for formulating a sustainable urban waste management plan and for selecting a technically, economically and socially acceptable integrated MSW management system. A detailed feasibility study and master plan should follow the preliminary study to define the plant's specifications, location and financing.
The methods currently used for scheduling building processes have some major advantages as well as disadvantages. The main advantages are the arrangement of the tasks of a project in a clear, easily readable form and the calculation of valuable information such as critical paths. The main disadvantage, on the other hand, is the inflexibility of the model caused by its modeling paradigms: small changes to the modeled information strongly influence the whole model and force many more details of the plan to be changed. In this article an approach is introduced that allows the creation of more flexible schedules. It aims at a more robust model that reduces the need to change more than a few pieces of information, while still being able to calculate the important propositions of the known models and to support further valuable conclusions.
Isoparametric finite elements with linear shape functions generally show an overly stiff element behavior, called locking. When structural parts under bending loads are investigated, so-called shear locking appears, because these elements cannot reproduce pure bending modes. Many studies have dealt with the locking problem, and a number of methods to avoid the undesirable effects have been developed. Two well-known methods are the Assumed Natural Strain (ANS) method and the Enhanced Assumed Strain (EAS) method. In this study the EAS method is applied to a four-node plane element with four EAS parameters. The paper describes the well-known linear formulation, its extension to nonlinear materials, and the modeling of material uncertainties with random fields. For nonlinear material behavior the EAS parameters cannot be determined directly. Here the problem is solved by an internal iteration at the element level, which is much more efficient and stable than determination via a global iteration. To verify the deterministic element behavior, the results of common test examples are presented for linear and nonlinear materials. The modeling of material uncertainties is done with point-discretized random fields. To show the applicability of the element to stochastic finite element calculations, Latin Hypercube Sampling was applied to investigate the stochastic hardening behavior of a cantilever beam with nonlinear material. The enhanced linear element can be applied as an alternative to higher-order finite elements, which require more nodes. The presented element formulation can be used in a similar manner to improve stochastic linear solid elements.
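Latin Hypercube Sampling itself is easy to sketch: each of the d dimensions is split into n equal strata, one sample is drawn per stratum, and the strata are shuffled independently per dimension. This is a generic illustration of the sampling scheme, not the code used in the study.

```python
# Minimal Latin Hypercube Sampling sketch: n samples in the unit hypercube
# [0, 1)^d such that every dimension has exactly one sample per stratum.

import random

def latin_hypercube(n, d, rng=None):
    rng = rng or random.Random(0)  # fixed seed keeps the sketch reproducible
    columns = []
    for _ in range(d):
        # one point per stratum [i/n, (i+1)/n), then shuffle the strata
        column = [(i + rng.random()) / n for i in range(n)]
        rng.shuffle(column)
        columns.append(column)
    return list(zip(*columns))  # n points, each a d-tuple

points = latin_hypercube(10, 2)
```

In a stochastic FE setting these unit-cube samples would then be mapped through the inverse CDFs of the material random variables before each deterministic analysis run.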
This paper describes a framework for computer-aided conceptual design of building structures that results from building architectural considerations. The central task that is carried out during conceptual design is the synthesis of the structural system. This paper proposes a methodology for the synthesis of structural solutions. Given the nature of architectural constraints, user-model interactivity is devised as the most suitable computer methodology for driving the structural synthesis process. Taking advantage of the hierarchical organization of the structural system, this research proposes a top-down approach for structural synthesis. Through hierarchical refinement, the approach lends itself to the synthesis of global and local structural solutions. The components required for implementing the proposed methodology are briefly described. The main components have been incorporated in a proof-of-concept prototype that is being tested and validated with actual buildings.
Interactive visualization based on 3D computer graphics is nowadays an indispensable part of any simulation software used in engineering. Nevertheless, the implementation of such visualization software components is often avoided in research projects because it is a challenging and potentially time-consuming task. In this contribution, a novel Java framework for the interactive visualization of engineering models is introduced. It supports the task of implementing engineering visualization software by providing adequate program logic as well as high-level classes for the visual representation of entities typical of engineering models. The presented framework is built on top of the open-source visualization toolkit VTK. In VTK, a visualization model is established by connecting several filter objects in a so-called visualization pipeline. Although designing and implementing a good pipeline layout is demanding, VTK does not directly support the reuse of pipeline layouts. Our framework tailors VTK to engineering applications on two levels. On the first level, it adds new, engineering-model-specific filter classes to VTK. On the second level, ready-made pipeline layouts for certain aspects of engineering models are provided. For instance, there is a pipeline class for one-dimensional elements like trusses and beams that is capable of showing the elements along with deformations and member forces. To facilitate the implementation of a graphical user interface (GUI) for each pipeline class, there is a reusable Java Swing GUI component that allows the user to configure the appearance of the visualization model. Because of its flexible structure, the framework can be easily adapted and extended to new problem domains.
Currently it is used in (i) an object-oriented p-version finite element code for design optimization, (ii) an agent-based monitoring system for dam structures and (iii) the simulation of destruction processes by controlled explosives based on multibody dynamics. Application examples from all three domains illustrate that the approach presented is powerful as well as versatile.
A comprehensive framework for an information management system for construction projects in China has been established through an extensive literature survey and field investigation. It utilizes available information technologies and covers practical management patterns as well as the major aspects of construction project management. It can be used to guide and evaluate the design of information management systems for construction projects, so that such systems are applicable to a wide variety of construction projects and survive changes in project management.
Modern distributed engineering applications are based on complex systems consisting of various subsystems connected through the Internet. Communication and collaboration within the entire system requires reliable and efficient data exchange between the subsystems. Middleware developed over the past years of web evolution provides reliable and efficient data exchange for web applications and can be adopted to solve the data exchange problems of distributed engineering applications. This paper presents a generic approach for reliable and efficient data exchange between engineering devices using existing middleware known from web applications. Different existing middleware solutions are examined with respect to their suitability for engineering applications. A suitable middleware is identified, and a prototype implementation simulating distributed wind farm control is presented and validated using several performance measurements.
A Hybrid Clustering and Classification Technique for Forecasting Short-Term Energy Consumption
(2018)
Electrical energy distribution companies in Iran have to announce their energy demand at least three days ahead of the market opening. Therefore, an accurate load estimate is highly crucial. This research invoked a methodology based on the CRISP data mining process and used SVM, ANN, and CBA-ANN-SVM (a novel hybrid model combining clustering with the widely used ANN and SVM) to predict the short-term electrical energy demand of Bandarabbas. In previous studies, researchers introduced few effective parameters and reported no reasonable error for Bandarabbas power consumption. In this research we tried to recognize all relevant parameters, and with the CBA-ANN-SVM model the error rate has been minimized. After consulting experts in the field of power consumption and plotting daily power consumption for each week, this research found that official holidays and weekends have an impact on power consumption. When the weather gets warmer, the consumption of electrical energy increases due to the use of electrical air conditioners. Consumption patterns in warm and cold months also differ. Analyzing the power consumption of the same month in different years showed high similarity in consumption patterns. Factors with a high impact on power consumption were identified, and statistical methods were utilized to prove their impact. The model was built using SVM, ANN and CBA-ANN-SVM. Since the proposed method (CBA-ANN-SVM) has a low MAPE of 1.474 (4 clusters) and 1.297 (3 clusters) in comparison with SVM (MAPE = 2.015) and ANN (MAPE = 1.790), it was selected as the final model. The final model combines the benefits of both constituent models with those of clustering: a clustering algorithm discovers the data structure and divides the data into several clusters based on their similarities and differences. Because the data inside each cluster are more similar to each other than to the entire data set, modeling within each cluster yields better results.
For future research, we suggest using fuzzy methods, genetic algorithms, or a hybrid of both to forecast each cluster. It is also possible to use fuzzy methods, genetic algorithms, or a hybrid of both without clustering. It is expected that such models will produce better and more accurate results.
This paper presents a hybrid approach to predict the electric energy usage of weather-sensitive loads. The presented method utilizes the clustering paradigm along with ANN and SVM approaches for accurate short-term prediction of electric energy usage, using weather data. Since the methodology invoked in this research is based on CRISP data mining, data preparation received a great deal of attention. Once data pre-processing was done, the underlying pattern of electric energy consumption was extracted by means of machine learning methods to precisely forecast short-term energy consumption. The proposed approach (CBA-ANN-SVM) was applied to real load data and resulted in higher accuracy compared to the existing models.
© 2018 American Institute of Chemical Engineers. Environ Prog, 2018. https://doi.org/10.1002/ep.12934
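The cluster-then-model idea behind CBA-ANN-SVM can be sketched with a per-cluster linear fit standing in for the ANN/SVM stage, so the example stays dependency-free. The data, cluster count, and model choice below are illustrative only.

```python
# Sketch: cluster the inputs (here 1-D temperatures via a tiny k-means),
# then fit a separate model per cluster.  A least-squares line stands in
# for the ANN/SVM predictors used in the paper.

def kmeans_1d(xs, k, iters=20):
    centers = [min(xs) + (max(xs) - min(xs)) * i / (k - 1) for i in range(k)]
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for x in xs:
            groups[min(range(k), key=lambda c: abs(x - centers[c]))].append(x)
        centers = [sum(g) / len(g) if g else centers[i] for i, g in enumerate(groups)]
    return centers

def assign(x, centers):
    return min(range(len(centers)), key=lambda c: abs(x - centers[c]))

def fit_per_cluster(data, centers):
    """Least-squares line load = a*temp + b, fitted separately per cluster."""
    models = {}
    for c in range(len(centers)):
        pts = [(t, y) for t, y in data if assign(t, centers) == c]
        n = len(pts)
        sx = sum(t for t, _ in pts); sy = sum(y for _, y in pts)
        sxx = sum(t * t for t, _ in pts); sxy = sum(t * y for t, y in pts)
        a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
        models[c] = (a, (sy - a * sx) / n)
    return models

# Toy temperature -> load data with different behavior in cold vs. warm regimes
data = ([(t, 100 + 2 * t) for t in (0, 2, 4, 6)]
        + [(t, 50 + 5 * t) for t in (30, 32, 34, 36)])
centers = kmeans_1d([t for t, _ in data], 2)
models = fit_per_cluster(data, centers)
a, b = models[assign(33, centers)]
predicted = a * 33 + b
```

Because each cluster is internally more homogeneous than the full data set, the per-cluster fits recover the two regimes that a single global line would blur, which is exactly the argument made for clustering in the abstract above.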
We apply keyquery-based taxonomy composition to compute a classification system for the CORE dataset, a shared crawl of about 850,000 scientific papers. Keyquery-based taxonomy composition can be understood as a two-phase hierarchical document clustering technique that uses search queries as cluster labels: In the first phase, the document collection is indexed by a reference search engine, and the documents are tagged with the search queries for which they are relevant, their so-called keyqueries. In the second phase, a hierarchical clustering is formed from the keyqueries in an iterative process. We use the explicit topic model ESA as the document retrieval model for indexing the CORE dataset in the reference search engine. Under the ESA retrieval model, documents are represented as vectors of similarities to Wikipedia articles, a methodology proven to be advantageous for text categorization tasks. Our paper presents the generated taxonomy and reports on quantitative properties such as document coverage and processing requirements.
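The ESA representation can be sketched as follows: a document's features are its similarities to a set of reference concept articles, and documents are then compared in that concept space rather than by their surface terms. Here the concepts are toy word sets; real ESA uses tf-idf-weighted Wikipedia articles.

```python
# Sketch of the ESA idea: represent a document as a vector of similarities
# to reference "concept" articles, then compare documents in concept space.
# The concept word sets below are toy stand-ins for Wikipedia articles.

import math

concepts = {
    "Machine learning":  {"model", "training", "classifier", "data"},
    "Civil engineering": {"bridge", "concrete", "structure", "load"},
}

def esa_vector(doc_words, concepts):
    """Length-normalized word overlap with each concept (tf-idf in real ESA)."""
    words = set(doc_words)
    return {name: len(words & cwords) / math.sqrt(len(cwords))
            for name, cwords in concepts.items()}

def cosine(u, v):
    dot = sum(u[k] * v[k] for k in u)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

d1 = esa_vector("training a classifier on labeled data".split(), concepts)
d2 = esa_vector("a model learned from training data".split(), concepts)
sim = cosine(d1, d2)
```

The two toy documents share no salient surface terms beyond "training" and "data", yet their concept vectors are nearly identical, which is the effect that makes ESA useful for categorization.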
A Machine Learning Framework for Assessing Seismic Hazard Safety of Reinforced Concrete Buildings
(2020)
Although averting a seismic disturbance and its physical, social, and economic disruption is practically impossible, advancements in computational science and numerical modeling equip humanity to predict its severity, understand the outcomes, and prepare for post-disaster management. Many buildings amidst developed metropolitan areas are aged and still in service, and were designed before the establishment of national seismic codes or without the benefit of construction regulations. In that case, risk reduction is significant for developing alternatives and designing suitable models to enhance an existing structure's performance. Such models will be able to classify risks and casualties related to possible earthquakes through emergency preparation. Thus, it is crucial to recognize structures that are susceptible to earthquake vibrations and need to be prioritized for retrofitting. However, the behavior of each building under seismic actions cannot be studied through full structural analysis, as this might be unrealistic because of the rigorous computations, long time period, and substantial expenditure required. Therefore, a simple, reliable, and accurate process known as Rapid Visual Screening (RVS) is called for, which serves as a primary screening platform including an optimum number of seismic parameters and predetermined performance damage conditions for structures. In this study, the damage classification technique was studied, and the efficacy of a Machine Learning (ML) method for damage prediction via a Support Vector Machine (SVM) model was explored. The ML model is trained and tested separately on damage data from four different earthquakes, namely in Ecuador, Haiti, Nepal, and South Korea. Each dataset consists of varying amounts of input data and eight performance modifiers.
Based on the study and the results, the ML model using SVM classifies the given input data into the corresponding classes and thereby supports the hazard safety evaluation of buildings.
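The SVM classification step can be illustrated with a toy linear SVM trained by sub-gradient descent on the hinge loss. This is only a minimal stand-in for the paper's model: the real features would be the eight performance modifiers, whereas the two-feature data, labels, and hyper-parameters below are invented for illustration.

```python
# Toy linear SVM (Pegasos-style sub-gradient descent on the hinge loss).
# Data and hyper-parameters are illustrative assumptions, not from the paper.
import random

def train_linear_svm(X, y, lam=0.01, epochs=200, lr=0.1, seed=0):
    """y in {-1, +1}; returns weight vector w and bias b."""
    rng = random.Random(seed)
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        idx = list(range(len(X)))
        rng.shuffle(idx)
        for i in idx:
            margin = y[i] * (sum(wj * xj for wj, xj in zip(w, X[i])) + b)
            if margin < 1:  # hinge-loss sub-gradient step on a violating sample
                w = [wj - lr * (lam * wj - y[i] * xj) for wj, xj in zip(w, X[i])]
                b += lr * y[i]
            else:           # only the regularization term shrinks w
                w = [wj - lr * lam * wj for wj in w]
    return w, b

def predict(w, b, x):
    return 1 if sum(wj * xj for wj, xj in zip(w, x)) + b >= 0 else -1

# toy "damaged" (+1) vs "safe" (-1) samples in a two-feature space
X = [[2.0, 2.5], [2.5, 2.0], [0.5, 0.4], [0.3, 0.8]]
y = [1, 1, -1, -1]
w, b = train_linear_svm(X, y)
```

A production workflow would instead use an established SVM implementation with kernel support and proper cross-validation over the four earthquake datasets.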
This paper focuses on a new three-level discretisation strategy which enables the transition between continuum/structural (I) and structural/black box modelling (II). The transition (I) is realised by means of a model adaptive concept based on an innovative finite element technology. For transition (II) we apply the truncated balanced realisation method (TBR). The latter is an established system-theoretic model reduction technique, here combined with a novel substructure technique. The approach provides a modular concept to facilitate the computational analysis of complex structures. The final goal is to apply the strategy to lifetime estimation.
Tall buildings have become an integral part of cities despite all their pros and cons. Some current tall buildings have several problems because of their unsuitable location; the problems include increasing density, imposing traffic on urban thoroughfares, blocking view corridors, etc. Some of these buildings have destroyed desirable views of the city. In this research, different criteria have been chosen, such as environment, access, social-economic, land-use, and physical context. These criteria and sub-criteria are prioritized and weighted by the analytic network process (ANP) based on experts’ opinions, using Super Decisions V2.8 software. In parallel, layers corresponding to the sub-criteria were built in ArcGIS 10.3, and a locating plan was then created via a weighted overlay (map algebra). In the next step, seven hypothetical tall buildings (20 stories), placed in the best part of the locating plan, were considered to evaluate how much of these hypothetical buildings would be visible (fuzzy visibility) from the streets and open spaces throughout the city. These processes have been modeled in MATLAB, and the final fuzzy visibility plan was created in ArcGIS. Fuzzy visibility results can help city managers and planners to choose which location is suitable for a tall building and how much visibility may be appropriate. The proposed model can locate tall buildings based on technical and visual criteria in the future development of the city, and it can be widely used in any city as long as the criteria and weights are localized.
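The weighted-overlay step combines the criterion layers cell by cell using the ANP-derived weights. The sketch below shows the arithmetic on a toy 2x2 raster; the layer names, scores, and weights are illustrative stand-ins for the ArcGIS layers and expert weights used in the study.

```python
# Weighted overlay (map algebra): each cell's suitability is the
# weight-averaged score of the criterion layers. Toy data only.
def weighted_overlay(layers, weights):
    rows, cols = len(layers[0]), len(layers[0][0])
    return [[sum(w * layer[r][c] for w, layer in zip(weights, layers))
             for c in range(cols)] for r in range(rows)]

access      = [[3, 1], [2, 4]]   # toy criterion scores per raster cell
environment = [[4, 2], [1, 3]]
plan = weighted_overlay([access, environment], [0.6, 0.4])  # ANP-style weights
```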
In earlier research, generalized multidimensional Hilbert transforms have been constructed in m-dimensional Euclidean space, in the framework of Clifford analysis. Clifford analysis, centred around the notion of monogenic functions, may be regarded as a direct and elegant generalization to higher dimension of the theory of the holomorphic functions in the complex plane. The considered Hilbert transforms, usually obtained as a part of the boundary value of an associated Cauchy transform in m+1 dimensions, might be characterized as isotropic, since the metric in the underlying space is the standard Euclidean one. In this paper we adopt the idea of a so-called anisotropic Clifford setting, which leads to the introduction of a metric dependent m-dimensional Hilbert transform, showing, at least formally, the same properties as the isotropic one. The Hilbert transform being an important tool in signal analysis, this metric dependent setting has the advantage of allowing the adjustment of the co-ordinate system to possible preferential directions in the signals to be analyzed. A striking result to be mentioned is that the associated anisotropic (m+1)-dimensional Cauchy transform is no longer uniquely determined, but may stem from a diversity of (m+1)-dimensional "mother" metrics.
A Multi-objective Model for Optimizing Construction Planning of Repetitive Infrastructure Projects
(2004)
This paper presents the development of a model for optimizing resource utilization in repetitive infrastructure projects. The model provides the capability of simultaneously minimizing both project duration and work interruptions for construction crews. The model provides, in a single run, a set of nondominated solutions that represent the tradeoff between these two objectives. The model incorporates a multiobjective genetic algorithm and a scheduling algorithm. The model initially generates a randomly selected set of solutions that evolves to a near-optimal set of tradeoff solutions in subsequent generations. Each solution represents a unique scheduling solution that is associated with a certain project duration and a number of interruption days for the utilized construction crews. As such, the model provides project planners with alternative schedules along with their expected duration and resource utilization efficiency.
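The nondominated set the abstract refers to is the Pareto front over the two objectives. A minimal sketch of extracting it from candidate schedules, with toy (duration, interruption-days) pairs and both objectives minimized:

```python
# Pareto (nondominated) filtering over two minimized objectives.
# Candidate schedule data are illustrative, not from the paper.
def dominates(a, b):
    """True if schedule a is at least as good as b in both objectives
    and differs from b (minimization of both)."""
    return a[0] <= b[0] and a[1] <= b[1] and a != b

def pareto_front(solutions):
    return [s for s in solutions
            if not any(dominates(t, s) for t in solutions)]

# (project duration in days, crew interruption days)
candidates = [(120, 9), (125, 4), (130, 2), (128, 4), (140, 2)]
front = sorted(pareto_front(candidates))
```

In the paper's model the candidates come from a multiobjective genetic algorithm rather than a fixed list, but the nondominance test is the same.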
The Element-free Galerkin Method has become a very popular tool for the simulation of mechanical problems with moving boundaries. The internally applied Moving Least Squares approximation generally uses Gaussian or cubic weighting functions and has compact support. Due to the approximative character of this method, the obtained shape functions do not fulfill the interpolation condition, which causes additional numerical effort for the imposition of the essential boundary conditions. The application of a singular weighting function, which leads to singular coefficient matrices at the nodes, can solve this problem, but requires a very careful placement of the integration points. Special procedures for the handling of such singular matrices have been proposed in the literature, which require additional numerical effort. In this paper a non-singular weighting function is presented which leads to an exact fulfillment of the interpolation condition. This weighting function yields regular values of the weights and the coefficient matrices in the whole interpolation domain, even at the nodes. Furthermore, this function gives much more stable results for varying sizes of the influence radius and for strongly distorted nodal arrangements than classical weighting function types. Nevertheless, for practical applications the results are similar to those obtained with the regularized weighting function type presented by the authors in previous publications. Finally a new concept will be presented which enables an efficient analysis of systems with strongly varying node density. In this concept the nodal influence domains are adapted depending on the nodal configuration by interpolating the influence radius for each direction from the distances to the natural neighbor nodes. This approach requires a Voronoi diagram of the domain, which is available in this study since Delaunay triangles are used as integration background cells.
In the numerical examples it will be shown that this method leads to a more uniform and reduced number of influencing nodes for systems with varying node density than the classical circular influence domains. This means that the small additional numerical effort for interpolating the influence radius leads to a remarkable reduction of the total numerical cost in a linear analysis while obtaining similar results. For nonlinear calculations this advantage would be even more significant.
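The paper's non-singular weighting function is not reproduced here; as a point of reference, the classical cubic spline weight with compact support that the text contrasts against can be sketched as follows (the normalization and radius handling follow standard MLS conventions):

```python
# Classical cubic spline MLS weight with compact support: maximal at the
# node (r = 0) and exactly zero at and beyond the influence radius ri.
def cubic_weight(r, ri):
    """r = distance to the node, ri = influence radius."""
    s = r / ri
    if s > 1.0:
        return 0.0  # outside the influence domain
    if s <= 0.5:
        return 2.0 / 3.0 - 4.0 * s * s + 4.0 * s ** 3
    return 4.0 / 3.0 - 4.0 * s + 4.0 * s * s - 4.0 / 3.0 * s ** 3
```

Because this weight stays finite everywhere, the resulting shape functions do not interpolate at the nodes, which is exactly the deficiency the paper's non-singular formulation addresses.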
For the analysis of arbitrary shell structures discretized by Finite Elements, an efficient numerical simulation strategy with quadratic convergence, including geometrically and physically nonlinear effects, will be presented. In the beginning, a Finite-Rotation shell theory allowing constant shear deformations across the shell thickness is given in an isoparametric formulation. The assumed-strain concept enables the derivation of a locking-free finite element. The Layered Approach will be applied to ensure a sufficiently precise prediction of the propagation of plastic zones even throughout the shell thickness. The Riks-Wempner-Wessels global iteration scheme will be enhanced by a Line-Search procedure to ensure the tracing of nonlinear deformation paths with rather large load steps even in the post-peak range. The elastic-plastic material model includes isotropic hardening. A new Operator-Split return algorithm ensures a highly accurate solution of the initial-value problem even for larger load steps. The combination with consistently linearized constitutive equations ensures quadratic convergence in a close neighbourhood of the exact solution. Finally, several examples will demonstrate the accuracy and numerical efficiency of the developed algorithm.
The K-nearest neighbors (KNN) machine learning algorithm is a well-known non-parametric classification method. However, like other traditional data mining methods, applying it to big data comes with computational challenges. Indeed, KNN determines the class of a new sample based on the class of its nearest neighbors; however, identifying the neighbors in a large amount of data imposes a large computational cost, so that it is no longer applicable by a single computing machine. One of the proposed techniques to make classification methods applicable on large datasets is pruning. LC-KNN is an improved KNN method which first clusters the data into some smaller partitions using the K-means clustering method, and then applies KNN for each new sample on the partition whose center is nearest. However, because the clusters have different shapes and densities, selection of the appropriate cluster is a challenge. In this paper, an approach is proposed to improve the pruning phase of the LC-KNN method by taking these factors into account. The proposed approach helps to choose a more appropriate cluster of data for looking for the neighbors, thus increasing the classification accuracy. The performance of the proposed approach is evaluated on different real datasets. The experimental results show the effectiveness of the proposed approach and its higher classification accuracy and lower time cost in comparison to other recent relevant methods.
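The baseline LC-KNN scheme described above (k-means partitioning, then KNN restricted to the cluster with the nearest center) can be sketched in a few lines. The deterministic seeding, the toy two-cluster data, and the parameter choices are illustrative assumptions; the paper's improved cluster-selection rule is not reproduced.

```python
# LC-KNN sketch: partition with k-means, then answer queries with KNN
# inside the cluster whose center is nearest. Toy data only.
import math
from collections import Counter

def kmeans(items, k, iters=10):
    """Lloyd's k-means on (point, label) pairs, clustering by the points.
    Deterministic seeding: initial centers spread across the data (k >= 2)."""
    centers = [items[i * (len(items) - 1) // (k - 1)][0] for i in range(k)]
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for item in items:
            nearest = min(range(k), key=lambda j: math.dist(item[0], centers[j]))
            groups[nearest].append(item)
        centers = [tuple(sum(p[d] for p, _ in g) / len(g)
                         for d in range(len(centers[j]))) if g else centers[j]
                   for j, g in enumerate(groups)]
    return centers, groups

def lc_knn(query, centers, groups, k=3):
    """KNN restricted to the cluster whose center is nearest (the pruning step)."""
    j = min(range(len(centers)), key=lambda c: math.dist(query, centers[c]))
    nearest = sorted(groups[j], key=lambda item: math.dist(query, item[0]))[:k]
    return Counter(label for _, label in nearest).most_common(1)[0][0]

# toy two-cluster data: (feature point, class label)
data = [((0.0, 0.1), "A"), ((0.2, 0.0), "A"), ((0.1, 0.2), "A"),
        ((5.0, 5.1), "B"), ((5.2, 4.9), "B"), ((4.9, 5.0), "B")]
centers, groups = kmeans(data, k=2)
```

The pruning gain is that each query searches only one partition instead of the full dataset; the paper's contribution is choosing that partition with cluster shape and density in mind rather than by center distance alone.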
The amount of adsorbed styrene acrylate copolymer (SA) particles on cementitious surfaces at the early stage of hydration was quantitatively determined using three different methodological approaches: the depletion method, visible spectrophotometry (VIS) and thermogravimetry coupled with mass spectrometry (TG–MS). Considering the advantages and disadvantages of each method, including the respectively required sample preparation, the results for four polymer-modified cement pastes, varying in polymer content and cement fineness, were evaluated.
In part, significant discrepancies in the adsorption degrees were observed. There is a tendency that significantly lower amounts of adsorbed polymers are identified using TG–MS compared to values determined with the depletion method; spectrophotometrically generated values lay between these extremes. This tendency was found for three of the four cement pastes examined and originates from sample preparation and methodical limitations.
The main influencing factor is the distortion of the polymer concentration in the liquid phase during centrifugation, caused by interactions at the interface between sediment and supernatant. The newly developed method, using TG–MS for the quantification of SA particles, proved suitable for dealing with these issues: instead of the fluid phase, the sediment is examined with regard to its polymer content, on which the influence of centrifugation is considerably lower.
The truss model for predicting the shear resistance of reinforced concrete beams has often been criticized for underestimating the concrete shear strength, especially for beams with low shear reinforcement. Two challenges are commonly encountered in any truss model and are responsible for its inaccurate shear strength prediction: first, the cracking angle is usually assumed empirically, and second, the shear contribution of the arching action is usually neglected. This research introduces a novel approach, using an Artificial Neural Network (ANN) to accurately evaluate the shear cracking angle of reinforced and prestressed concrete beams. The model inputs include the beam geometry, concrete strength, the shear reinforcement ratio and the prestressing stress, if any. ...
Global structural analyses in civil engineering are usually performed assuming linear-elastic material behavior. However, for steel structures, a certain degree of plasticization may be considered depending on the member classification. Corresponding plastic analyses taking material nonlinearities into account are effectively realized using numerical methods. Frequently applied finite elements of two- and three-dimensional models evaluate the plasticity at defined nodes using a yield surface, i.e. by a yield condition, hardening rule, and flow rule. Such calculations entail a large numerical and time-consuming effort, and they do not rely on the theoretical background of beam theory, to which the regulations of standards mainly correspond. For that reason, methods using one-dimensional beam elements combined with cross-sectional analyses are commonly applied for steel members in terms of plastic-zone theories. In these approaches, plasticization is generally assessed by means of axial stress only. In this paper, a more precise numerical representation of combined stress states, i.e. axial and shear stresses, is presented, and results of the proposed approach are validated and discussed.
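One common way to assess a combined axial-and-shear stress state in a cross-section is the von Mises equivalent stress; the sketch below illustrates such a plasticization check. The yield stress value and the stress samples are illustrative assumptions, and the paper's actual formulation may differ.

```python
# Von Mises plasticization check for a combined axial + shear stress state.
# Yield stress and sample stresses are illustrative (units: MPa).
import math

def von_mises_eq(sigma, tau):
    """Equivalent stress for a uniaxial stress sigma plus shear tau."""
    return math.sqrt(sigma ** 2 + 3.0 * tau ** 2)

def is_plastic(sigma, tau, fy=235.0):
    """True once the equivalent stress reaches the yield stress fy."""
    return von_mises_eq(sigma, tau) >= fy

eq = von_mises_eq(200.0, 60.0)  # axial 200 MPa combined with shear 60 MPa
```

The point of the combined check is visible here: neither 200 MPa axial stress nor 60 MPa shear alone reaches a 235 MPa yield stress, but their combination comes much closer than an axial-only assessment would suggest.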
We present a software prototype for fluid flow problems in civil engineering, which combines essential features of Computational Steering approaches with efficient methods for model transfer and high performance computing. The main components of the system are described:
- the modeler, with a focus on the data management of the product model
- the pre-processing and post-processing toolkit
- the simulation kernel based on the Lattice Boltzmann method
- the required hardware for real-time computing
A Product Model of a Road
(1997)
Many errors and delays appear when data is exchanged between particular tasks in the lifecycle of a road. Inter-task connections are therefore of great importance for the quality of the final product. The article describes a product model of a road which is the kernel of an integrated information system intended to support all important stages of the road lifecycle: design, evaluation (through different analysis procedures), construction, and maintenance. Since particular tasks are often executed at different places and in different companies, the interconnections are supported by a special metafile which contains all specific data of the product model. The concept of the integrated system is object and component oriented. Additionally, existing conventional program packages are included to support some common tasks (methods). A conventional relational database system as well as an open spatial database system with the relevant GIS functionality are included to support the data structures of the model.
Information technology plays a key role in the everyday operation of buildings and campuses. Many proprietary technologies and methodologies can assist in effective Building Performance Monitoring (BPM) and efficient managing of building resources. The integration of related tools like energy simulator packages, facility, energy and building management systems, and enterprise resource planning systems is of benefit to BPM. However, the complexity of integrating such domain-specific systems prevents their common usage. Service Oriented Architecture (SOA) has been deployed successfully in many large multinational companies to create integrated and flexible software systems, but so far this methodology has not been applied broadly to the field of BPM. This paper envisions SOA as an effective integration framework for BPM. A service-oriented architecture for the ITOBO framework for sustainable and optimised building operation is proposed, and an implementation of a building performance monitoring system is introduced.
As computer programs become ever more complex, software development has shifted from focusing on programming towards focusing on integration. This paper describes a simulation access language (SimAL) that can be used to access and compose software applications over the Internet. Specifically, the framework is developed for the integration of tools for project management applications. The infrastructure allows users to specify and to use existing heterogeneous tools (e.g., Microsoft Project, Microsoft Excel, Primavera Project Planner, and AutoCAD) for simulation of project scenarios. This paper describes the components of the SimAL language and the implementation efforts required in the development of the SimAL framework. An illustrative example that brings an online weather-forecasting service into project scheduling and management applications demonstrates the use of the simulation language and the infrastructure framework.
A four-node quadrilateral shell element with smoothed membrane-bending based on Mindlin-Reissner theory is proposed. The element is a combination of a plate bending and membrane element. It is based on mixed interpolation where the bending and membrane stiffness matrices are calculated on the boundaries of the smoothing cells while the shear terms are approximated by independent interpolation functions in natural coordinates. The proposed element is robust, computationally inexpensive and free of locking. Since the integration is done on the element boundaries for the bending and membrane terms, the element is more accurate than the MITC4 element for distorted meshes. This will be demonstrated for several numerical examples.
This paper presents a new design environment based on Multi-Agents and Virtual Reality (VR). In this research, a design system with a virtual reality function was developed. The virtual world was realized using GL4Java, liquid crystal shutter glasses, sensor systems, etc. The previously developed Multi-Agent CAD system with product models was integrated with the VR design system. A prototype system was developed for highway steel plate girder bridges and applied to a design problem. The application verified the effectiveness of the developed system.
Superplasticizers are utilized both to improve the fluidity during placement and to reduce the water content of concretes. Both effects also have an impact on the properties of the hardened concrete. As a side effect, the presence of superplasticizers strongly retards the strength development of concretes. This may lead to an economical drawback in concrete manufacturing. The present work is aimed at gaining insights into the causes of the retarding effect of superplasticizers on the hydration of Portland cement. In order to simplify the complex interactions occurring during the hydration of Portland cement, the majority of the work focuses on the interaction of superplasticizer and tricalcium silicate (Ca3SiO5 or C3S, the main compound of Portland cement clinker). The tests are performed in three main parts accompanied by methods such as isothermal conduction calorimetry, electrical conductivity, electron microscopy, ICP-OES, TOC, and analytical ultracentrifugation.
In the first main part, based on the interaction of cations and anionic charges of polymers, the interactions between calcium ions and superplasticizers are investigated. As a main effect, calcium ions are complexed by the functional groups of the polymers (carboxy, sulfonic). Calcium ions may be both dissolved in the aqueous phase and a constituent of particle interfaces. It is furthermore shown that superplasticizers induce the formation of nanoscaled particles which are dispersed in the aqueous phase (cluster formation). Analogous to recent findings in the field of biomineralization, it is reasonable to assume that these nanoparticles influence crystal growth by their assembly process.
Based on the assumption that superplasticizers hinder dissolution, precipitation, or both, and thereby retard cement hydration, their impact on the separate reactions is investigated. In experiments addressing the solubility of C-S-H phases and portlandite, it is shown that complexation of calcium ions in the aqueous phase by functional groups of polymers increases the solubility of portlandite. In contrast, in the case of C-S-H solubility, the complexation of calcium ions in solution leads to a decrease of the calcium ion concentration in the aqueous phase. These effects are explained by differences in the adsorption of polymers on C-S-H phases and portlandite. It is proposed that adsorption is stronger on C-S-H phases than on portlandite due to the increased specific surface area of C-S-H phases. Following that, it is claimed that before polymers are able to adsorb on C-S-H phases, their functional groups must be screened by calcium ions in the aqueous phase. It is further shown that data regarding the impact of superplasticizers on the unconstrained dissolution rate of C3S do not provide a clear relation to the overall retarding effect occurring during the hydration of C3S. Both increased and decreased dissolution rates with respect to the reference sample are detected. If the complexation capability of the superplasticizers is taken into account, a reduced dissolution rate of C3S is also determined. Despite the fact that the global hydration process is accelerated, the addition of calcite leads to a slower dissolution rate. Thus, a hindered unconstrained dissolution of C3S as a possible cause of the retarding effect remains open for discussion. In the last section of this part, the pure crystallization of hydrate phases (C-S-H phases, portlandite) is examined. Results clearly show that superplasticizers prolong the induction time and modify the rate of crystal growth during pure crystallization, in particular due to the complexation of ions in solution.
However, this effect is insufficient to account for the overall retarding effect. Further important factors are the blocking of crystal growth faces by adsorbed polymers and the dispersion of nanoscaled particles, which hinders their agglomeration into crystals.
In the last main part of the work, the previously gathered results are utilized to investigate hydration kinetics. During hydration, dissolution and precipitation occur in parallel. Special attention is paid to the ion composition of the aqueous phase of C3S pastes and suspensions in order to determine the rate-limiting step. Overall, it is concluded that the retarding effect of superplasticizers on the hydration of tricalcium silicate is based on the retardation of the crystallization of hydrate phases (C-S-H phases and portlandite). Here, two effects, the complexation of calcium ions on surfaces and the stabilization of nanoscaled particles, are of major importance. These mechanisms may partly be compensated by template performance and an increase in solubility through the complexation of ions in solution. The decreased dissolution rate of C3S in the presence of superplasticizers during the concurrently occurring hydration process can only be assessed indirectly by means of the development of the ion concentrations in the aqueous phase (reaction path). Whether this observation is the cause or the consequence within the dissolution-precipitation process, and therefore accounts for the retarding effect, remains a topic for further investigations.
Besides these results, it is shown that superplasticizers can be associated chemically with inhibitors because they reduce the frequency factor for ending the induction period. Because the activation energy is widely unaffected, the basic reaction mechanism is shown to persist. Furthermore, a method was developed which permits for the first time the in-situ determination of ion concentrations in the aqueous phase of C3S pastes. It is shown that during C3S hydration the ion concentration in the aqueous phase develops in correspondence with the heat release rate (calorimetry). The method permits the differentiation of the acceleration period into three stages. It is emphasized that the crystallization of the product phases of C3S hydration, namely C-S-H phases and portlandite, is responsible for the end of the induction period.
Rice husk ash (RHA) is classified as a highly reactive pozzolan. It has a very high silica content similar to that of silica fume (SF). Using less expensive and locally available RHA as a mineral admixture in concrete brings ample benefits in cost, in the technical properties of concrete, and for the environment. An experimental study of the effect of RHA blending on workability, strength and durability of high performance fine-grained concrete (HPFGC) is presented. The results show that the addition of RHA to HPFGC significantly improved compressive strength, splitting tensile strength and chloride penetration resistance. Interestingly, the ratio of compressive strength to splitting tensile strength of HPFGC was lower than that of ordinary concrete, especially for the concrete made with 20 % RHA. Compressive strength and splitting tensile strength of HPFGC containing RHA were similar to and slightly higher, respectively, than those of HPFGC containing SF. Chloride penetration resistance of HPFGC containing 10–15 % RHA was comparable with that of HPFGC containing 10 % SF.
In this paper, we explore the relation between electricity consumption in the residential sector and automobile energy consumption in the transportation sector with respect to location within the city, employing a Geographic Information System (GIS). We found that electricity consumption per capita tends to be higher in the city center and lower in the suburbs of Utsunomiya City. It is also noted that there is little difference in total consumption between the city center and the suburbs, even though the density of electric appliances tends to be higher in the small houses of the city center and residential automobile energy consumption is lower in the city center than in the suburbs.
A vast number of existing buildings were constructed before the development and enforcement of seismic design codes and thus run the risk of being severely damaged under seismic excitations. This poses not only a threat to the lives of people but also affects the socio-economic stability of the affected area. Therefore, it is necessary to assess the present vulnerability of such buildings in order to make an educated decision regarding risk mitigation by seismic strengthening techniques such as retrofitting. However, it is not feasible, economically or in terms of time, to inspect, repair, and augment every old building on an urban scale. As a result, reliable rapid screening methods, namely Rapid Visual Screening (RVS), have garnered increasing interest among researchers and decision-makers alike. In this study, the effectiveness of five different Machine Learning (ML) techniques in vulnerability prediction applications has been investigated. The damage data of four different earthquakes, from Ecuador, Haiti, Nepal, and South Korea, have been utilized to train and test the developed models. Eight performance modifiers have been implemented as variables with supervised ML. The investigations in this paper illustrate that the vulnerability classes assessed by the ML techniques were very close to the actual damage levels observed in the buildings.
A Unified Approach for the Treatment of Some Higher Dimensional Dirac Type Equations on Spheres
(2010)
Using Clifford analysis methods, we provide a unified approach to obtain explicit solutions of some partial differential equations combining the n-dimensional Dirac and Euler operators, including generalizations of the classical time-harmonic Maxwell equations. The obtained regular solutions show strong connections between hypergeometric functions and homogeneous polynomials in the kernel of the Dirac operator.
In this article, I show why it is necessary to abolish the use of predictive algorithms in the US criminal justice system at sentencing. After presenting the functioning of these algorithms in their context of emergence, I offer three arguments to demonstrate why their abolition is imperative. First, I show that sentencing based on predictive algorithms induces a process of rewriting the temporality of the judged individual, flattening their life into a present inescapably doomed by its past. Second, I demonstrate that recursive processes, comprising predictive algorithms and the decisions based on their predictions, systematically suppress outliers and progressively transform reality to match predictions. In my third and final argument, I show that decisions made on the basis of predictive algorithms actively perform a biopolitical understanding of justice as management and modulation of risks. In such a framework, justice becomes a means to maintain a perverse social homeostasis that systematically exposes disenfranchised Black and Brown populations to risk.
An architecture of a distributed planning system for the building industry has been developed. The emphasis is on highly collaborative environments in steelwork, timber construction, etc., where designers concurrently handle 3D models. The overall system connects local design systems by the so-called Design Framework DFW. This framework consists of the definition of distributed components and protocols which make collaborative design work. The process of collaborative design has been formalized on an abstract level, and this paper describes how this has been done. An example illustrates the mapping of concrete scenarios of the ‘real design world’ to an abstract scenario level. This work is funded by the Deutsche Forschungsgemeinschaft DFG as part of the project SPP1103 (Meißner et al. 2003).
The paper presents the abstraction of process-relevant information in order to enable workflow management based on semantic data. It is shown for three examples how the standards define the information needed to perform a certain planning activity. The abstraction of process-relevant information is discussed for different granularities of the underlying process model. As one possible application, ProMiSE is introduced, which uses process-relevant data in individual tokens of a Petri-net-based process model.
Acoustic travel-time tomography (ATOM) determines the distribution of the temperature in a propagation medium by measuring the travel times of acoustic signals between transmitters and receivers. To employ ATOM for indoor climate measurements, the impulse responses were measured in the climate chamber lab of the Bauhaus-University Weimar and compared with the theoretical results of its image source model (ISM). A challenging task is distinguishing the reflections of interest in the reflectogram when the sound rays have similar travel times. This paper presents a numerical method to address this problem by finding optimal positions of transmitter and receiver, since these have a direct impact on the distribution of travel times. The optimal positions have the minimum number of simultaneous arrival times within a threshold level. Moreover, for the tomographic reconstruction, when some voxels remain empty of sound rays, the air temperature within those voxels cannot be determined accurately. Based on the presented numerical method, the number of empty tomographic voxels is minimized to ensure the best sound-ray coverage of the room. Subsequently, a spatial temperature distribution is estimated by the simultaneous iterative reconstruction technique (SIRT). The experimental set-up in the climate chamber verifies the simulation results.
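The SIRT step solves the linear travel-time system: with A[i][j] the length of ray i inside voxel j and t[i] the measured travel time, the unknown x[j] is the slowness (inverse sound speed) per voxel, from which temperature follows. The sketch below uses the standard row- and column-normalized SIRT update on an invented 2-voxel, 3-ray system; the empty-voxel handling mirrors the coverage problem the abstract discusses.

```python
# Simultaneous Iterative Reconstruction Technique (SIRT), row/column
# normalized: x <- x + relax * C A^T R (t - A x). Toy system only.
def sirt(A, t, iters=200, relax=1.0):
    m, n = len(A), len(A[0])
    x = [0.0] * n
    row_sums = [sum(A[i]) for i in range(m)]
    col_sums = [sum(A[i][j] for i in range(m)) for j in range(n)]
    for _ in range(iters):
        resid = [t[i] - sum(A[i][j] * x[j] for j in range(n)) for i in range(m)]
        for j in range(n):
            if col_sums[j] == 0:
                continue  # empty voxel: no ray coverage, value stays fixed
            corr = sum(A[i][j] * resid[i] / row_sums[i] for i in range(m))
            x[j] += relax * corr / col_sums[j]
    return x

A = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]  # ray lengths through 2 voxels
t = [2.0, 3.0, 5.0]                        # measured travel times
x = sirt(A, t)                             # recovered slowness per voxel
```

For this consistent toy system the iteration converges to the exact slownesses (2 and 3); in the measured setting the same update averages out noise across all rays crossing a voxel.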
The uncertainty in the construction industry is greater than in other industries. Consequently, most construction projects do not go entirely as planned. The project management plan therefore needs to be adapted repeatedly within the project lifecycle to suit the actual project conditions. Generally, the risks of change in the project management plan are difficult to identify in advance, especially if these risks are caused by unexpected events such as human errors or changes in client preferences. The knowledge acquired from different resources is essential to identify probable deviations as well as to find proper solutions to the faced change risks. Hence, it is necessary to have a knowledge base that contains known solutions for the common exceptional cases that may cause changes in each construction domain. The ongoing research work presented in this paper uses the process modeling technique of Event-driven Process Chains to describe different patterns of structural changes in schedule networks. This results in several so-called “change templates”. Under each template, different types of change risk/response pairs can be categorized and stored in a knowledge base. This knowledge base is described as an ontology model populated with reference construction process data. The implementation of the developed approach can be seen as an iterative scheduling cycle that is repeated within the project lifecycle as new change risks surface. This helps to check the availability of ready solutions in the knowledge base for the situation at hand. Moreover, if a solution is adopted, CPSP (Change Project Schedule Plan), a prototype developed for the purpose of this research work, is used to make the needed structural changes to the schedule network automatically based on the change template.
What-if scenarios can be implemented with the CPSP prototype in the planning phase to study the effect of specific situations without endangering the project objectives. Hence, better designed and more maintainable project schedules can be achieved.
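The lookup cycle described in the two abstracts above can be sketched as a simple map from known change risks to (template, response) pairs, where a matching template triggers a structural edit of the schedule network. All names, templates, and edits below are hypothetical placeholders, not the CPSP prototype or its ontology.

```python
# Hypothetical change-template knowledge base: each known change risk maps
# to an EPC-style template name and a stored response description.
KNOWLEDGE_BASE = {
    "late_material_delivery": ("shift_successors",
                               "re-sequence dependent activities"),
    "design_change_by_client": ("insert_activity",
                                "add rework activity after affected task"),
}

def handle_change_risk(risk, schedule):
    """Return an updated schedule if a ready solution exists, else None."""
    entry = KNOWLEDGE_BASE.get(risk)
    if entry is None:
        return None  # no ready solution: escalate to the planner
    template, response = entry
    if template == "insert_activity":
        # placeholder structural edit: append a rework activity
        schedule = schedule + [f"rework ({response})"]
    elif template == "shift_successors":
        # placeholder structural edit: re-order the successor activities
        schedule = schedule[:1] + sorted(schedule[1:])
    return schedule
```

In the real approach the "schedule" would be a schedule network and the templates would describe Event-driven Process Chain structure changes; the dictionary lookup stands in for the ontology query.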
We present recent developments of adaptive wavelet solvers for elliptic eigenvalue problems. We describe the underlying abstract iteration scheme of the preconditioned perturbed iteration. We apply the iteration to a simple model problem in order to identify the main ideas on which a numerical realization of the abstract scheme is based. This indicates how these concepts carry over to wavelet discretizations. Finally, we present numerical results for the Poisson eigenvalue problem on an L-shaped domain.
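The basic idea behind such eigenvalue iterations can be illustrated by plain inverse iteration on a small symmetric positive definite matrix: each step applies (an approximation of) the inverse operator and renormalizes, converging to the smallest eigenpair. This dense toy sketch is only an assumption about the generic technique; the paper's scheme operates on adaptive wavelet coordinates with a preconditioner in place of the exact solve used here.

```python
# Toy inverse iteration for the smallest eigenpair of a symmetric
# positive definite matrix A (dense, no pivoting; illustrative only).
def smallest_eigenpair(A, n_iter=100):
    n = len(A)

    def solve(M, rhs):
        # naive Gauss-Jordan elimination, standing in for a preconditioner
        M = [row[:] + [r] for row, r in zip(M, rhs)]
        for i in range(n):
            p = M[i][i]
            M[i] = [v / p for v in M[i]]
            for k in range(n):
                if k != i:
                    f = M[k][i]
                    M[k] = [a - f * b for a, b in zip(M[k], M[i])]
        return [M[i][n] for i in range(n)]

    v = [float(i + 1) for i in range(n)]  # arbitrary start vector
    lam = 0.0
    for _ in range(n_iter):
        w = solve(A, v)                    # one inverse-iteration step
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
        # Rayleigh quotient of the current normalized iterate
        lam = sum(v[i] * sum(A[i][j] * v[j] for j in range(n))
                  for i in range(n))
    return lam, v
```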
Two major obstacles to applying selective sensitivity in system identification are the required precise knowledge of the system parameters and the realization of the required system of forces. This work presents a procedure that derives selectively sensitive excitation through iterative experiments. The first step determines the selectively sensitive displacement and force patterns. These are obtained by introducing prior information about the system parameters into an optimization that minimizes the sensitivities of the structural response with respect to the unselected parameters while keeping the sensitivities with respect to the selected parameters constant. In the second step, the force pattern is used to derive dynamic loads on the tested structure and measurements are carried out; an automatic control ensures the required excitation forces. In the third step, the measured outputs are employed to update the prior information. The strategy is to minimize the difference between a predicted displacement response, formulated as a function of the unknown parameters and the measured displacements, and the selectively sensitive displacement calculated in the first step. With the updated parameter values, a re-analysis of the selective sensitivity is performed and the experiment is repeated until the displacement responses of the model and the actual structure coincide. As an illustration, a simply supported steel beam under harmonic excitation is investigated, demonstrating that the adaptive excitation can be obtained efficiently.
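The parameter update in the third step amounts to minimizing the misfit between a predicted and a measured response. A minimal sketch, assuming a hypothetical one-parameter model (not the beam experiment and not the authors' formulation), is a scalar Gauss-Newton loop with a finite-difference sensitivity:

```python
# Illustrative scalar misfit minimization: refine a prior parameter estimate
# theta0 so that predict(theta) matches the measured displacement.
def update_parameter(theta0, predict, measured, n_iter=50, eps=1e-6):
    theta = theta0
    for _ in range(n_iter):
        r = predict(theta) - measured
        # finite-difference sensitivity of the prediction w.r.t. the parameter
        drdtheta = (predict(theta + eps) - predict(theta - eps)) / (2 * eps)
        theta -= r / drdtheta  # Gauss-Newton step for a scalar residual
    return theta
```

In the procedure above this update would feed back into a re-analysis of the selective sensitivity before the next experiment.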
In engineering science, the modeling and numerical analysis of complex systems and relations play an important role. To carry out such an investigation, for example a stochastic analysis, in reasonable computational time, approximation procedures have been developed. A well-known approach is the response surface method, in which the relation between input and output quantities is represented, for example, by global polynomials or by local interpolation schemes such as Moving Least Squares (MLS). In recent years, artificial neural networks (ANN) have been applied for such purposes as well. Recently, an adaptive response surface approach for reliability analyses was proposed that is very efficient with respect to the number of expensive limit state function evaluations. Owing to the simplex interpolation employed, however, that procedure is limited to small dimensions. In this paper, the approach is extended to larger dimensions by using combined ANN and MLS response surfaces to evaluate the adaptation criterion with only one set of joined limit state points. The adaptation criterion combines the maximum difference in the conditional probabilities of failure with the maximum difference in the approximated radii. Compared with response surfaces on directional samples or with plain directional sampling, the failure probability can be estimated with a much smaller number of limit state points.
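The MLS interpolation mentioned above can be sketched in one dimension: at each query point a linear basis is fitted to the support points with distance-dependent weights, so the surface follows the data locally. This is a generic illustrative sketch (Gaussian weight, linear basis), not the combined ANN/MLS surface of the adaptive procedure.

```python
import math

# 1D Moving Least Squares: weighted fit of the basis [1, x] around xq.
def mls_1d(xq, xs, ys, h=1.0):
    # accumulate the weighted normal equations
    s0 = s1 = s2 = t0 = t1 = 0.0
    for x, y in zip(xs, ys):
        w = math.exp(-((x - xq) / h) ** 2)  # Gaussian distance weight
        s0 += w; s1 += w * x; s2 += w * x * x
        t0 += w * y; t1 += w * x * y
    det = s0 * s2 - s1 * s1
    a = (t0 * s2 - t1 * s1) / det  # local intercept
    b = (s0 * t1 - s1 * t0) / det  # local slope
    return a + b * xq              # evaluate the local fit at xq
```

Because the basis is linear, the scheme reproduces linear data exactly; for the reliability application the inputs would be the (high-dimensional) limit state points.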