Refine
Document Type
- Article (1016)
- Conference Proceeding (857)
- Doctoral Thesis (494)
- Master's Thesis (99)
- Part of a Book (50)
- Book (45)
- Report (43)
- Periodical (28)
- Preprint (27)
- Bachelor Thesis (22)
Institute
- Professur Theorie und Geschichte der modernen Architektur (493)
- Professur Informatik im Bauwesen (484)
- Institut für Strukturmechanik (ISM) (344)
- In Zusammenarbeit mit der Bauhaus-Universität Weimar (201)
- Professur Baubetrieb und Bauverfahren (143)
- Institut für Europäische Urbanistik (68)
- Professur Bauphysik (52)
- Professur Stochastik und Optimierung (46)
- Graduiertenkolleg 1462 (42)
- F. A. Finger-Institut für Baustoffkunde (FIB) (38)
Keywords
- Weimar (446)
- Bauhaus-Kolloquium (442)
- Angewandte Mathematik (331)
- Computerunterstütztes Verfahren (289)
- Architektur (246)
- Architektur <Informatik> (201)
- Strukturmechanik (189)
- CAD (184)
- Angewandte Informatik (155)
- Bauhaus (125)
Year of publication
- 2004 (220)
- 2003 (197)
- 2006 (172)
- 1997 (165)
- 2020 (123)
- 2015 (122)
- 2010 (113)
- 2008 (110)
- 2005 (104)
- 2012 (104)
- 2000 (100)
- 2022 (94)
- 2011 (91)
- 2021 (88)
- 2013 (85)
- 2014 (83)
- 2016 (70)
- 2019 (65)
- 2023 (65)
- 1987 (63)
- 1990 (60)
- 2018 (58)
- 2017 (56)
- 2007 (52)
- 1983 (49)
- 2009 (48)
- 1979 (36)
- 1976 (29)
- 2001 (24)
- 2002 (24)
- 1993 (23)
- 1999 (18)
- 1992 (16)
- 1998 (7)
- 2024 (4)
- 1995 (1)
The paper addresses model-based parameter identification and damage localization of elastomechanical systems using input and output measurements in the frequency domain. An adaptation of the Projective Input Residual Method to subsystem damage identification is presented. For this purpose, the projected residuals are adapted with respect to a given subsystem to be analysed. Based on the gradients of these projected subsystem residuals, a damage indicator is introduced which is sensitive to parameter changes and structural damage in this subsystem. Since the computations are carried out with respect to the smaller dimension of a subsystem, this indicator offers a computational performance gain compared to the non-subsystem approach. This gain in efficiency makes the indicator applicable to online monitoring and online damage diagnosis, where continuous and fast data processing is required. The presented application of the indicator to a gantry robot illustrates its ability to detect and locate real damage in a complex structure. Since the system input is often unknown in civil engineering applications, further investigations will focus on the output-only case; generalizing the presented methods to this case will broaden their application spectrum.
The amount of styrene acrylate copolymer (SA) particles adsorbed on cementitious surfaces at the early stage of hydration was quantitatively determined using three different methodological approaches: the depletion method, visible spectrophotometry (VIS), and thermogravimetry coupled with mass spectrometry (TG–MS). Considering the advantages and disadvantages of each method, including the respectively required sample preparation, the results for four polymer-modified cement pastes, varying in polymer content and cement fineness, were evaluated.
Significant discrepancies in the adsorption degrees were observed in some cases. There is a tendency for significantly lower amounts of adsorbed polymer to be identified by TG–MS compared to values determined with the depletion method; spectrophotometrically determined values lie between these extremes. This tendency was found for three of the four cement pastes examined and originates from sample preparation and methodological limitations.
The main influencing factor is the distortion of the polymer concentration in the liquid phase during centrifugation, caused by interactions at the interface between sediment and supernatant. The newly developed method, using TG–MS for the quantification of SA particles, proved suitable for dealing with these issues: instead of the fluid phase, the sediment is examined with regard to its polymer content, on which the influence of centrifugation is considerably lower.
The K-nearest neighbors (KNN) machine learning algorithm is a well-known non-parametric classification method. However, like other traditional data mining methods, applying it to big data comes with computational challenges. KNN determines the class of a new sample based on the classes of its nearest neighbors, but identifying those neighbors in a large amount of data imposes a computational cost so large that a single computing machine can no longer handle it. One technique proposed to make classification methods applicable to large datasets is pruning. LC-KNN is an improved KNN method which first clusters the data into smaller partitions using the K-means clustering method and then, for each new sample, applies KNN on the partition whose center is nearest to the sample. However, because the clusters have different shapes and densities, selecting the appropriate cluster is a challenge. In this paper, an approach is proposed to improve the pruning phase of the LC-KNN method by taking these factors into account. The proposed approach helps to choose a more appropriate cluster of data in which to look for the neighbors, thus increasing the classification accuracy. The performance of the proposed approach is evaluated on different real datasets. The experimental results show the effectiveness of the proposed approach and its higher classification accuracy and lower time cost in comparison to other recent relevant methods.
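A minimal sketch of the LC-KNN baseline described above: K-means partitions the data, and each query is classified by KNN inside the partition whose center is nearest. The paper's improved cluster-selection rule (accounting for cluster shape and density) is not reproduced here; all names are illustrative.

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    # plain Lloyd iteration; X is an (n, d) float array
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return centers, labels

def lc_knn_predict(X, y, centers, labels, query, n_neighbors=5):
    # 1) pruning phase: pick the cluster with the nearest center
    c = np.argmin(((centers - query) ** 2).sum(-1))
    Xc, yc = X[labels == c], y[labels == c]
    # 2) plain KNN majority vote inside the selected cluster only
    nearest = np.argsort(((Xc - query) ** 2).sum(-1))[:n_neighbors]
    vals, counts = np.unique(yc[nearest], return_counts=True)
    return vals[np.argmax(counts)]

# usage: centers, labels = kmeans(X_train, k=10)
#        y_hat = lc_knn_predict(X_train, y_train, centers, labels, x_new)
```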
For the analysis of arbitrary shell structures discretized by finite elements, an efficient numerical simulation strategy with quadratic convergence, including geometrically and physically nonlinear effects, is presented. First, a finite-rotation shell theory allowing constant shear deformations across the shell thickness is given in an isoparametric formulation. The assumed-strain concept enables the derivation of a locking-free finite element. The layered approach is applied to ensure a sufficiently precise prediction of the propagation of plastic zones across the shell thickness. The Riks-Wempner-Wessels global iteration scheme is enhanced by a line-search procedure to ensure the tracing of nonlinear deformation paths with rather large load steps, even in the post-peak range. The elastic-plastic material model includes isotropic hardening. A new operator-split return algorithm yields a highly accurate solution of the initial-value problem even for larger load steps. The combination with consistently linearized constitutive equations ensures quadratic convergence in a close neighbourhood of the exact solution. Finally, several examples demonstrate the accuracy and numerical efficiency of the developed algorithm.
The element-free Galerkin method has become a very popular tool for the simulation of mechanical problems with moving boundaries. The internally applied moving least squares (MLS) approximation generally uses Gaussian or cubic weighting functions and has compact support. Due to the approximative character of this method, the obtained shape functions do not fulfill the interpolation condition, which causes additional numerical effort for the imposition of the essential boundary conditions. The application of a singular weighting function, which leads to singular coefficient matrices at the nodes, can solve this problem, but requires a very careful placement of the integration points. Special procedures for handling such singular matrices have been proposed in the literature, which require additional numerical effort. In this paper a non-singular weighting function is presented which leads to an exact fulfillment of the interpolation condition. This weighting function yields regular values of the weights and the coefficient matrices in the whole interpolation domain, even at the nodes. Furthermore, this function gives much more stable results for varying sizes of the influence radius and for strongly distorted nodal arrangements than classical weighting function types. Nevertheless, for practical applications the results are similar to those obtained with the regularized weighting type presented by the authors in previous publications. Finally, a new concept is presented which enables an efficient analysis of systems with strongly varying node density. In this concept the nodal influence domains are adapted to the nodal configuration by interpolating the influence radius for each direction from the distances to the natural neighbor nodes. This approach requires a Voronoi diagram of the domain, which is available in this study since Delaunay triangles are used as integration background cells. The numerical examples show that this method leads to a more uniform and reduced number of influencing nodes for systems with varying node density than the classical circular influence domains. The small additional numerical effort for interpolating the influence radius thus leads to a remarkable reduction of the total numerical cost of a linear analysis while yielding similar results; for nonlinear calculations this advantage would be even more significant.
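For orientation, a sketch of a standard 1D MLS approximation with a compactly supported weight, as used in the element-free Galerkin method. The paper's non-singular interpolating weight function is not reproduced; a common quartic spline weight stands in, and the basis is linear.

```python
import numpy as np

def quartic_weight(r):
    # compactly supported spline weight: w(0) = 1, w(r >= 1) = 0
    return np.where(r < 1.0, 1.0 - 6.0*r**2 + 8.0*r**3 - 3.0*r**4, 0.0)

def mls_shape_functions(x, nodes, radius):
    """MLS shape functions phi_i(x) for the linear basis p(x) = [1, x]."""
    w = quartic_weight(np.abs(x - nodes) / radius)
    P = np.column_stack([np.ones_like(nodes), nodes])  # basis evaluated at nodes
    A = (w[:, None] * P).T @ P                         # 2 x 2 moment matrix
    # phi = p(x)^T A^{-1} B with B = P^T diag(w); A is regular as long as at
    # least two nodes lie inside the influence radius
    return np.linalg.solve(A, np.array([1.0, x])) @ (P * w[:, None]).T

nodes = np.linspace(0.0, 1.0, 11)
phi = mls_shape_functions(0.34, nodes, radius=0.25)
print(phi.sum())   # partition of unity: sums to 1 within round-off
```

Note that phi evaluated at a node generally differs from the Kronecker delta, which is exactly the missing interpolation property the paper's weighting function restores.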
A Multi-objective Model for Optimizing Construction Planning of Repetitive Infrastructure Projects
(2004)
This paper presents the development of a model for optimizing resource utilization in repetitive infrastructure projects. The model is capable of simultaneously minimizing project duration and work interruptions for construction crews, and it provides, in a single run, a set of nondominated solutions that represent the tradeoff between these two objectives. The model incorporates a multiobjective genetic algorithm and a scheduling algorithm. It initially generates a randomly selected set of solutions that evolves towards a near-optimal set of tradeoff solutions in subsequent generations. Each solution represents a unique schedule associated with a certain project duration and a number of interruption days for the utilized construction crews. As such, the model provides project planners with alternative schedules along with their expected durations and resource utilization efficiencies.
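An illustrative sketch (not the authors' algorithm) of the core notion above: extracting the nondominated set for the two minimization objectives, project duration and crew interruption days, from a pool of candidate schedules given as (duration, interruptions) pairs.

```python
import numpy as np

def nondominated(points):
    """Return the Pareto front for element-wise minimization."""
    pts = np.asarray(points, dtype=float)
    keep = []
    for i, p in enumerate(pts):
        # p is dominated if some point is <= in all objectives and < in one
        dominated = np.any(np.all(pts <= p, axis=1) & np.any(pts < p, axis=1))
        if not dominated:
            keep.append(i)
    return pts[keep]

# e.g. nondominated([(120, 8), (115, 12), (130, 3), (115, 9)])
# keeps (120, 8), (130, 3) and (115, 9); (115, 12) is dominated by (115, 9)
```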
In earlier research, generalized multidimensional Hilbert transforms have been constructed in m-dimensional Euclidean space in the framework of Clifford analysis. Clifford analysis, centred around the notion of monogenic functions, may be regarded as a direct and elegant generalization to higher dimensions of the theory of holomorphic functions in the complex plane. The considered Hilbert transforms, usually obtained as part of the boundary value of an associated Cauchy transform in m+1 dimensions, may be characterized as isotropic, since the metric in the underlying space is the standard Euclidean one. In this paper we adopt the idea of a so-called anisotropic Clifford setting, which leads to the introduction of a metric-dependent m-dimensional Hilbert transform showing, at least formally, the same properties as the isotropic one. As the Hilbert transform is an important tool in signal analysis, this metric-dependent setting has the advantage of allowing the coordinate system to be adjusted to possible preferential directions in the signals to be analyzed. A striking result to be mentioned is that the associated anisotropic (m+1)-dimensional Cauchy transform is no longer uniquely determined, but may stem from a diversity of (m+1)-dimensional "mother" metrics.
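For orientation, a hedged sketch of the isotropic case: in the Clifford-analysis literature the m-dimensional Hilbert transform is commonly written (up to sign and normalization conventions, which vary between sources) as a principal-value convolution with the Cauchy kernel restricted to R^m,

```latex
H[f](\underline{x}) \;=\; \frac{2}{a_{m+1}}\;\mathrm{p.v.}\!\int_{\mathbb{R}^m}
\frac{\underline{x}-\underline{y}}{|\underline{x}-\underline{y}|^{m+1}}\,
f(\underline{y})\,\mathrm{d}V(\underline{y}),
\qquad
a_{m+1}=\frac{2\pi^{(m+1)/2}}{\Gamma\!\big(\tfrac{m+1}{2}\big)},
```

where a_{m+1} is the area of the unit sphere in R^{m+1}; for m = 1 this reduces to the classical Hilbert transform on the real line. The anisotropic transform of the paper replaces the Euclidean metric in this kernel by a general one.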
Tall buildings have become an integral part of cities despite all their pros and cons. Some current tall buildings have several problems because of their unsuitable location; the problems include increasing density, imposing traffic on urban thoroughfares, blocking view corridors, etc. Some of these buildings have destroyed desirable views of the city. In this research, different criteria were chosen, such as environment, access, socio-economics, land use, and physical context. These criteria and sub-criteria were prioritized and weighted by the analytic network process (ANP) based on experts' opinions, using Super Decisions V2.8 software. In parallel, layers corresponding to the sub-criteria were created in ArcGIS 10.3, and a locating plan was then produced via a weighted overlay (map algebra). In the next step, seven hypothetical tall buildings (20 stories), placed in the best part of the locating plan, were considered to evaluate how much of these hypothetical buildings would be visible (fuzzy visibility) from the streets and open spaces throughout the city. These processes were modeled in MATLAB, and the final fuzzy visibility plan was created in ArcGIS. Fuzzy visibility results can help city managers and planners decide which location is suitable for a tall building and how much visibility may be appropriate. The proposed model can locate tall buildings based on technical and visual criteria in the future development of the city, and it can be widely used in any city as long as the criteria and weights are localized.
This paper focuses on a new three-level discretisation strategy which enables the transition between continuum and structural modelling (I) and between structural and black-box modelling (II). Transition (I) is realised by means of a model-adaptive concept based on an innovative finite element technology. For transition (II) we apply the truncated balanced realisation (TBR) method, an established system-theoretical model reduction technique, which is here combined with a novel substructure technique. The approach provides a modular concept to facilitate the computational analysis of complex structures. The final goal is to apply the strategy to lifetime estimation.
A Machine Learning Framework for Assessing Seismic Hazard Safety of Reinforced Concrete Buildings
(2020)
Although averting a seismic disturbance and its physical, social, and economic disruption is practically impossible, advancements in computational science and numerical modeling equip us to predict its severity, understand the outcomes, and prepare for post-disaster management. Many buildings in developed metropolitan areas are aging and still in service; they were designed before national seismic codes were established or before construction regulations were introduced. In that case, risk reduction is significant for developing alternatives and designing suitable models to enhance the performance of existing structures. Such models will be able to classify risks and casualties related to possible earthquakes through emergency preparation. Thus, it is crucial to recognize structures that are susceptible to earthquake vibrations and need to be prioritized for retrofitting. However, the behavior of each building under seismic actions cannot be studied through full structural analysis, as the rigorous computations, long duration, and substantial expenditure make this unrealistic. This calls for a simple, reliable, and accurate process known as Rapid Visual Screening (RVS), which serves as a primary screening platform, including an optimum number of seismic parameters and predetermined performance damage conditions for structures. In this study, the damage classification technique was studied, and the efficacy of the Machine Learning (ML) method in damage prediction via a Support Vector Machine (SVM) model was explored. The ML model is trained and tested separately on damage data from four different earthquakes, namely Ecuador, Haiti, Nepal, and South Korea. Each dataset consists of a varying number of input data and eight performance modifiers. Based on the study and the results, the ML model using SVM classifies the given input data into the corresponding damage classes, demonstrating its performance on hazard safety evaluation of buildings.
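A hedged sketch of the kind of SVM damage classifier described above. The eight "performance modifiers" per building and the damage-grade labels are assumed inputs; the synthetic data, class count, and hyperparameters are illustrative, not taken from the paper.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 8))        # stand-in: 8 performance modifiers per building
y = rng.integers(0, 3, size=500)     # stand-in: 3 damage grades

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = make_pipeline(StandardScaler(),          # scale features before the kernel
                      SVC(kernel="rbf", C=1.0, gamma="scale"))
model.fit(X_tr, y_tr)
print("held-out accuracy:", model.score(X_te, y_te))
```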
We apply keyquery-based taxonomy composition to compute a classification system for the CORE dataset, a shared crawl of about 850,000 scientific papers. Keyquery-based taxonomy composition can be understood as a two-phase hierarchical document clustering technique that utilizes search queries as cluster labels: in the first phase, the document collection is indexed by a reference search engine, and the documents are tagged with the search queries for which they are relevant, their so-called keyqueries. In the second phase, a hierarchical clustering is formed from the keyqueries within an iterative process. We use the explicit topic model ESA as the document retrieval model for indexing the CORE dataset in the reference search engine. Under the ESA retrieval model, documents are represented as vectors of similarities to Wikipedia articles, a methodology proven to be advantageous for text categorization tasks. Our paper presents the generated taxonomy and reports on quantitative properties such as document coverage and processing requirements.
A Hybrid Clustering and Classification Technique for Forecasting Short-Term Energy Consumption
(2018)
Electrical energy distribution companies in Iran have to announce their energy demand at least three days ahead of the market opening; therefore, accurate load estimation is highly crucial. This research used a methodology based on the CRISP data mining process and applied SVM, ANN, and CBA-ANN-SVM (a novel hybrid model of clustering with both widely used ANN and SVM) to predict the short-term electrical energy demand of Bandarabbas. In previous studies, researchers introduced few effective parameters and reported no reasonable error for Bandarabbas power consumption. In this research we tried to identify all relevant parameters, and with the use of the CBA-ANN-SVM model the error rate has been minimized. After consulting with experts in the field of power consumption and plotting daily power consumption for each week, this research showed that official holidays and weekends have an impact on power consumption. When the weather gets warmer, the consumption of electrical energy increases due to the use of electrical air conditioners; consumption patterns in warm and cold months also differ. Analyzing the power consumption of the same month for different years showed high similarity in consumption patterns. Factors with high impact on power consumption were identified, and statistical methods were utilized to prove their impact. Models were built using SVM, ANN, and CBA-ANN-SVM. Since the proposed method (CBA-ANN-SVM) achieves a low MAPE of 1.474 (4 clusters) and 1.297 (3 clusters) in comparison with SVM (MAPE = 2.015) and ANN (MAPE = 1.790), it was selected as the final model. The final model combines the benefits of both base models with those of clustering: a clustering algorithm discovers the data structure and divides the data into several clusters based on their similarities and differences, and because the data inside each cluster are more similar to each other than to the entire dataset, modeling each cluster separately yields better results. For future research, we suggest using fuzzy methods, genetic algorithms, or a hybrid of both to forecast each cluster, or applying them without clustering; such models are expected to produce better and more accurate results.
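A hedged sketch of the cluster-then-predict idea described above: cluster the load data, fit one regressor per cluster, and score with MAPE. The paper's CBA-ANN-SVM combination is not reproduced; an SVR stands in for the per-cluster model, and the feature layout and data are illustrative.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVR

def mape(y_true, y_pred):
    return 100.0 * np.mean(np.abs((y_true - y_pred) / y_true))

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 5))                 # stand-in features (weather, day type, ...)
y = 50 + X @ rng.normal(size=5) + rng.normal(scale=0.5, size=300)  # stand-in load

km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
models = {c: SVR().fit(X[km.labels_ == c], y[km.labels_ == c]) for c in range(3)}

# route each sample to its cluster's model
cluster = km.predict(X)
y_hat = np.array([models[c].predict(x[None, :])[0] for c, x in zip(cluster, X)])
print("in-sample MAPE (%):", round(mape(y, y_hat), 3))
```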
This paper presents a hybrid approach to predict the electric energy usage of weather-sensitive loads. The presented method utilizes the clustering paradigm along with ANN and SVM approaches for accurate short-term prediction of electric energy usage, using weather data. Since the methodology invoked in this research is based on CRISP data mining, data preparation has received a great deal of attention. Once data pre-processing was done, the underlying pattern of electric energy consumption was extracted by means of machine learning methods to precisely forecast short-term energy consumption. The proposed approach (CBA-ANN-SVM) was applied to real load data, resulting in higher accuracy compared to existing models.
© 2018 American Institute of Chemical Engineers. Environmental Progress, 2018. https://doi.org/10.1002/ep.12934
A geometrical inclusion-matrix model for the finite element analysis of concrete at multiple scales
(2003)
This paper introduces a method to generate adequate inclusion-matrix geometries of concrete in two and three dimensions which are independent of any specific numerical discretization. The article starts with an analysis of the shapes of natural aggregates and discusses corresponding mathematical realizations. As a first prototype, a two-dimensional generation of a mesoscale model is introduced. Particle size distribution functions are analysed and prepared for simulating an adequate three-dimensional representation of the aggregates within a concrete structure. A sample geometry of a three-dimensional test cube is generated, and the finite element analysis of its heterogeneous geometry on a uniform mesh is presented. In conclusion, aspects of a multiscale analysis are discussed and possible enhancements are proposed.
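An illustrative "take-and-place" sketch of the 2D mesoscale generation idea: sample circular aggregate radii from a grading curve and place them in a square domain without overlap. The circular shapes, stand-in grading, and rejection rule are simplifications for illustration, not the paper's actual realization.

```python
import numpy as np

rng = np.random.default_rng(42)

def place_aggregates(radii, box=1.0, gap=0.005, max_tries=2000):
    placed = []  # list of (x, y, r)
    for r in sorted(radii, reverse=True):          # place large particles first
        for _ in range(max_tries):
            x, y = rng.uniform(r, box - r, size=2)
            # accept only if the candidate overlaps no placed aggregate
            if all((x-px)**2 + (y-py)**2 >= (r+pr+gap)**2 for px, py, pr in placed):
                placed.append((x, y, r))
                break
    return placed

# stand-in grading: radii between 2 mm and 16 mm in a 100 mm box
radii = 0.002 + (0.016 - 0.002) * rng.random(80) ** 2
print(len(place_aggregates(radii, box=0.1)), "aggregates placed")
```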
Modern distributed engineering applications are based on complex systems consisting of various subsystems connected through the Internet. Communication and collaboration within the entire system requires reliable and efficient data exchange between the subsystems. Middleware developed during the web's evolution over the past years provides reliable and efficient data exchange for web applications and can be adopted to solve the data exchange problems of distributed engineering applications. This paper presents a generic approach for reliable and efficient data exchange between engineering devices using existing middleware known from web applications. Different existing middleware solutions are examined with respect to their suitability for engineering applications. A suitable middleware is identified, and a prototype implementation simulating distributed wind farm control is presented and validated using several performance measurements.
We propose a novel method that applies the light transport matrix for performing an image-based radiometric compensation which accounts for all possible types of light modulation. For practical application the matrix is decomposed into clusters of mutually influencing projector and camera pixels. The compensation is modeled as a linear system that can be solved with respect to the projector patterns. Precomputing the inverse light transport in combination with an efficient implementation on the GPU makes interactive compensation rates possible. Our generalized method unifies existing approaches that address individual problems. Based on examples, we show that it is possible to project corrected images onto complex surfaces such as an inter-reflecting statuette, glossy wallpaper, or through highly-refractive glass. Furthermore, we illustrate that a side-effect of our approach is an increase in the overall sharpness of defocused projections.
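A conceptual numpy sketch of the compensation idea above: given a light transport matrix T and a desired camera image d, solve T p = d for the projector pattern p in a least-squares sense and clamp to the projector's valid range. Sizes are toy values; the paper's clustered decomposition and GPU-based precomputed inverse are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(3)
n_cam, n_proj = 64, 32
T = np.abs(rng.normal(size=(n_cam, n_proj)))   # stand-in light transport matrix
d = rng.uniform(0.2, 0.8, size=n_cam)          # desired camera image (flattened)

p, *_ = np.linalg.lstsq(T, d, rcond=None)      # compensation pattern
p = np.clip(p, 0.0, 1.0)                       # physically realizable intensities
print("residual:", np.linalg.norm(T @ p - d))
```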
A comprehensive framework for information management systems for construction projects in China has been established through an extensive literature survey and field investigation. It utilizes the potential of information technologies and covers practical management patterns as well as the major aspects of construction project management. It can be used to guide and evaluate the design of information management systems for construction projects, in order to make such systems applicable to a wide variety of construction projects and able to survive changes in project management.
Interactive visualization based on 3D computer graphics is nowadays an indispensable part of any simulation software used in engineering. Nevertheless, the implementation of such visualization software components is often avoided in research projects because it is a challenging and potentially time-consuming task. In this contribution, a novel Java framework for the interactive visualization of engineering models is introduced. It supports the task of implementing engineering visualization software by providing adequate program logic as well as high-level classes for the visual representation of entities typical of engineering models. The presented framework is built on top of the open-source visualization toolkit VTK. In VTK, a visualization model is established by connecting several filter objects in a so-called visualization pipeline. Although designing and implementing a good pipeline layout is demanding, VTK does not directly support the reuse of pipeline layouts. Our framework tailors VTK to engineering applications on two levels. On the first level it adds new, engineering-model-specific filter classes to VTK. On the second level, ready-made pipeline layouts for certain aspects of engineering models are provided. For instance, there is a pipeline class for one-dimensional elements like trusses and beams that is capable of showing the elements along with deformations and member forces. To facilitate the implementation of a graphical user interface (GUI) for each pipeline class, there exists a reusable Java Swing GUI component that allows the user to configure the appearance of the visualization model. Because of its flexible structure, the framework can be easily adapted and extended to new problem domains. Currently it is used in (i) an object-oriented p-version finite element code for design optimization, (ii) an agent-based monitoring system for dam structures, and (iii) the simulation of destruction processes by controlled explosives based on multibody dynamics. Application examples from all three domains illustrate that the presented approach is powerful as well as versatile.
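The framework above is Java-based; as a language-neutral illustration of the "reusable pipeline layout" idea, this sketch wraps a small VTK pipeline in a class using VTK's Python bindings. The class name and its API are invented for illustration only and do not reflect the framework's actual classes.

```python
import vtk

class WarpedGeometryPipeline:
    """Ready-made pipeline layout: dataset -> warp by displacement -> mapper/actor."""

    def __init__(self, source, scale=1.0):
        # displace the geometry by its active vector field (e.g. deformations)
        self.warp = vtk.vtkWarpVector()
        self.warp.SetInputConnection(source.GetOutputPort())
        self.warp.SetScaleFactor(scale)
        self.mapper = vtk.vtkDataSetMapper()
        self.mapper.SetInputConnection(self.warp.GetOutputPort())
        self.actor = vtk.vtkActor()
        self.actor.SetMapper(self.mapper)

    def set_scale(self, scale):
        # hook for a GUI slider controlling the deformation magnification
        self.warp.SetScaleFactor(scale)
```

Encapsulating the filter chain once and exposing only configuration hooks is the design choice the paper argues for: the pipeline layout is implemented one time and reused across models.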
This paper describes a framework for computer-aided conceptual design of building structures that proceeds from architectural considerations. The central task carried out during conceptual design is the synthesis of the structural system, and this paper proposes a methodology for the synthesis of structural solutions. Given the nature of architectural constraints, user-model interactivity is devised as the most suitable computational methodology for driving the structural synthesis process. Taking advantage of the hierarchical organization of the structural system, this research proposes a top-down approach to structural synthesis. Through hierarchical refinement, the approach lends itself to the synthesis of global and local structural solutions. The components required for implementing the proposed methodology are briefly described. The main components have been incorporated in a proof-of-concept prototype that is being tested and validated with actual buildings.
Tropical coral reefs, among the world's oldest ecosystems and supporting some of the highest levels of biodiversity on the planet, are currently facing an unprecedented ecological crisis in this massive, human-activity-induced period of extinction. Tropical reefs thus symbolically stand for the destructive effects of human activities on nature [4], [5]. Artificial reefs are excellent examples of how architectural design can be combined with ecosystem regeneration [6], [7], [8]. However, working at the interface between the artificial and the complex, temporal nature of natural systems presents a challenge, inter alia with respect to the B-rep modelling legacy of computational modelling.
The presented doctorate investigates strategies for applying digital practice to realise what is an essential bulwark for retaining reefs in impossibly challenging times. Beyond the main question of integrating computational modelling and high-precision monitoring strategies in artificial coral reef design, this doctorate explores techniques, methods, and linking frameworks to support future research and practice in ecology-led design contexts.
Considering the many existing approaches to artificial coral reef design, one finds that they often fall short in precisely understanding the relationships between architectural and ecological aspects (e.g. how surface design and material composition can foster coral larvae settlement, or how structural three-dimensionality enhances biodiversity) and lack an integrated underwater (UW) monitoring process. Such a process is necessary in order to gather knowledge about the ecosystem and make it available for design, and to learn whether artificial structures contribute to reef regeneration or rather harm the coral reef ecosystem.
For the research, empirical experimental methods were applied: algorithmic coral reef design, high-precision UW monitoring, and computational modelling and simulation, validated through parallel real-world physical experimentation with two Artificial Reef Prototypes (ARPs) in Gili Trawangan, Indonesia (2012–today). Multiple discrete methods and sub-techniques were developed in seventeen computational experiments and applied in a way in which many are cross-valid and integrated in an overall framework that is offered as a significant contribution to the field. Other main contributions include the ecosystem-aware design approach, Key Performance Indicators (KPIs) for coral reef design, algorithmic design and fabrication of Biorock cathodes, new high-precision UW monitoring strategies, long-term real-world constructed experiments, new digital analysis methods, and two new front-end web-based tools for reef design and monitoring. The methodological framework is a finding of the research with many technical components that were tested and combined in this way for the very first time.
In summary, the thesis responds to the urgency and relevance in preserving marine species in tropical reefs during this massive extinction period by offering a differentiated approach towards artificial coral reefs – demonstrating the feasibility of digitally designing such ‘living architecture’ according to multiple context and performance parameters. It also provides an in-depth critical discussion of computational design and architecture in the context of ecosystem regeneration and Planetary Thinking. In that respect, the thesis functions as both theoretical and practical background for computational design, ecology and marine conservation – not only to foster the design of artificial coral reefs technically but also to provide essential criteria and techniques for conceiving them.
Keywords: Artificial coral reefs, computational modelling, high precision underwater monitoring, ecology in design.
Isoparametric finite elements with linear shape functions generally show a too-stiff element behavior, called locking. When investigating structural parts under bending loads, so-called shear locking appears, because these elements cannot reproduce pure bending modes. Many studies have dealt with the locking problem, and a number of methods to avoid the undesirable effects have been developed. Two well-known methods are the Assumed Natural Strain (ANS) method and the Enhanced Assumed Strain (EAS) method. In this study the EAS method is applied to a four-node plane element with four EAS parameters. The paper describes the well-known linear formulation, its extension to nonlinear materials, and the modeling of material uncertainties with random fields. For nonlinear material behavior the EAS parameters cannot be determined directly. Here the problem is solved by an internal iteration at the element level, which is much more efficient and stable than determination via a global iteration. To verify the deterministic element behavior, results of common test examples are presented for linear and nonlinear materials. The modeling of material uncertainties is done with point-discretized random fields. To show the applicability of the element to stochastic finite element calculations, Latin Hypercube Sampling was applied to investigate the stochastic hardening behavior of a cantilever beam with nonlinear material. The enhanced linear element can be applied as an alternative to higher-order finite elements, where more nodes are necessary. The presented element formulation can be used in a similar manner to improve stochastic linear solid elements.
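A minimal sketch of Latin Hypercube Sampling as used above for the stochastic analysis: one stratified sample per equal-probability bin and dimension, randomly permuted across dimensions. The mapping to the material random fields is problem-specific and not shown.

```python
import numpy as np

def latin_hypercube(n_samples, n_dims, rng=None):
    rng = np.random.default_rng(rng)
    # one uniform sample inside each of the n_samples equal-probability bins
    u = (rng.random((n_samples, n_dims)) + np.arange(n_samples)[:, None]) / n_samples
    for j in range(n_dims):                    # shuffle each dimension independently
        u[:, j] = u[rng.permutation(n_samples), j]
    return u                                    # uniform samples in [0, 1)^d

# e.g. 10 samples of 2 lognormal material parameters:
# from scipy.stats import lognorm; x = lognorm(s=0.1).ppf(latin_hypercube(10, 2))
```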
Business and engineering knowledge in AEC/FM is captured mainly implicitly in project and corporate document repositories. Even with the increasing integration of model-based systems into project information spaces, a large percentage of the information exchange will continue to rely on isolated and rather poorly structured text documents. In this paper we propose an approach that enables the use of product model data as a primary source of engineering knowledge to support the externalisation of information from relevant construction documents, to provide domain-specific information retrieval, and to help re-organise and re-contextualise documents in accordance with the user's discipline-specific tasks and information needs. We suggest a retrieval and mining framework combining methods for analysing text documents, filtering product models, and reasoning on Bayesian networks to explicitly represent the content of text repositories in personalisable semantic content networks. We describe the proposed basic network, which can be realised in the short term using minimal product model information, as well as various extensions towards a full-fledged, value-added integration of document-based and model-based information.
The methods currently used for scheduling building processes have major advantages as well as disadvantages. The main advantages are the arrangement of the tasks of a project in a clear, easily readable form and the calculation of valuable information such as critical paths. The main disadvantage, on the other hand, is the inflexibility of the model caused by its modeling paradigms: small changes to the modeled information strongly influence the whole model and force many more details of the plan to be changed. In this article an approach is introduced that allows the creation of more flexible schedules. It aims at a more robust model that reduces the need to change more than a few pieces of information, while still being able to calculate the important propositions of the known models and leading to further valuable conclusions.
A Flexible Model for Incorporating Construction Product Data into Building Information Models
(2006)
When considering the integration of and interoperability between AEC/FM software applications and construction product data, it is essential to investigate the state of the art and conduct an extensive review of the literature on both Building Information Models and electronic product catalogues. Many reasons and key barriers hinder the developed solutions from being implemented. Among the reasons attributed to the failure of many previous research projects to achieve this integration are the proprietary developments of CAD vendors, the fragmented nature of construction product data (i.e. commercial and technical data), prefabrication versus on-site production, marketing strategies and brand naming, the referencing of a product to the data of its constituents, the availability of life-cycle data at a single point in time when it is needed over the whole life cycle of the product, taxonomy problems, and the inability to extract search parameters from the building information model for use in parametric searches. Finally, and most importantly, there is the difficulty of keeping the product data in the building information model consistent and up to date. Hence, there is great potential for construction product data to be integrated into building information models by electronic means in a dynamic and extensible manner that prevents the model from becoming obsolete. The study establishes a solution concept that links continually updated and extensible life-cycle product data to a software-independent building information model (IFC) over the whole life span of the product. As a result, the solution concept achieves a reliable building information model that is capable of overcoming the majority of the barriers mentioned above; in addition, the solution is capable of referencing, retrieving, updating, and merging product data at any point in time. A distributed network application representing all parties involved in the construction product value chain is simulated with real software tools to demonstrate the proof of concept of this research work.
The purpose of this research is to develop a method to retrieve a building name from the impression the building makes. First, images of buildings are registered in a database by questionnaire. Next, images of the target building are compared against the image database by degree of matching, and the building with the highest overall matching degree is retrieved. This system achieved good retrieval results. Moreover, image processing was applied: the image databases were trained by a neural network on the extracted image characteristics, and the resulting image-processing-based retrieval system was examined.
Developing and emerging tropical Asian countries have encountered fast urban development due to the migration of farmers seeking a better life in the city. This has resulted in a lack of appropriate infrastructure and inadequate social services in many cities. Municipal solid waste management is no exception and is in fact often placed at the bottom of the list of priorities in cities' urban management plans, since laws and regulations must first be formulated and implemented. The problem of unmanaged municipal solid waste leads to air pollution, disease, and soil and water contamination. These problems are compounded in tropical climates by high temperature, high humidity, heavy rainfall, and frequent flooding. Stagnant water and leachate from waste quickly become breeding grounds for insects, rodents, and bacteria, creating a health hazard for workers and local populations. Moreover, water and groundwater contamination may lead to serious environmental degradation with direct impacts on water supplies and the fast degradation of agricultural products, the backbone of most tropical Asian countries. Many cities still allow or tolerate dumping of waste in uncontrolled sites, and open burning that disperses particulates most likely containing dioxins and furans. Even with increasingly scarce land availability within or near the cities, sanitary landfill is still the most often chosen disposal method around Asia because of its lower cost when compared to modern treatment systems. Yet most of these landfill sites have no proper lining, daily covering, methane recovery devices, leachate control systems, or long-term closure and monitoring plans, which implies short- and long-term hazards. Some municipalities opted for incineration, which usually entails high operation and maintenance costs because of the need for supplemental fuel and often-inappropriate running conditions. Although tropical conditions appear to favor certain disposal systems such as composting, appropriate technology needs to be identified in order to reduce operation and maintenance costs while ensuring good-quality outputs; compost plants have often been closed because of poor-quality products due to the high content of plastic and glass particulates in the finished product. Tropical Asian cities are now required to identify affordable and sustainable solutions for the management of their increasing amount of daily waste, while ensuring minimal environmental impact, social acceptance, and minimal land use. The purpose of this dissertation was to develop a user-friendly decision-making tool for public administrators and government officials in tropical Asian developing and emerging cities. This tool was developed based on a list of selected decision-making issues necessary for making an informed decision, and is to be used by decision-makers in making a preliminary assessment of the most appropriate waste management and treatment system for their municipality. Tropical Asian cities must consider a number of issues when deciding on their waste management plan, such as the continuously changing quantity and composition of waste associated with increasing population and income per capita, the high humidity levels, and the often-limited financial resources. Other determinant factors include legal, political, institutional, social, and technical issues.
Furthermore, administrators must realize the importance of each stage involved in waste management: waste generation, collection, transport, waste characteristics, disposal, and treatment. To better understand the complexity of the issues involved in tropical Asian municipalities, the city of Bangkok, Thailand's largest city and capital, was selected as a case study for the management of its 9,000 tonnes of waste generated daily. Numerous interviews and meetings, along with the review of documents and reports and site visits, offered an inside view of the tropical city's various decision-making issues concerning its waste management plan, and allowed specific problems encountered by the city's decision-makers to be examined. The review and analysis of the decision-making issues involved in Bangkok's waste management plan showed how the decision-making tool can be used in various tropical Asian cities. In conclusion, waste management in an emerging tropical country involves specific challenges that need to be addressed. Economic, technical, and social criteria need to be fully understood to enable government officials to select the most appropriate urban waste management system. Limited budgets, lack of public awareness, and poor systems management often lead decision-makers to choose what appears to be the best solution in the short term but proves more costly over the years. Weather conditions and scarcity of land near the city make waste management especially challenging. The decision-making framework offers decision-makers a tool to facilitate the understanding and identification of the key issues necessary for formulating a sustainable urban waste management plan and for selecting a technically, economically, and socially acceptable integrated MSW management system. A detailed feasibility study and master plan will follow the preliminary study to define the plant's specifications, location, and financing.
Interactive scientific visualizations are widely used for the visual exploration and examination of physical data resulting from measurements or simulations. Driven by technical advancements of data acquisition and simulation technologies, especially in the geo-scientific domain, large amounts of highly detailed subsurface data are generated. The oil and gas industry is particularly pushing such developments as hydrocarbon reservoirs are increasingly difficult to discover and exploit. Suitable visualization techniques are vital for the discovery of the reservoirs as well as their development and production. However, the ever-growing scale and complexity of geo-scientific data sets result in an expanding disparity between the size of the data and the capabilities of current computer systems with regard to limited memory and computing resources.
In this thesis we present a unified out-of-core data-virtualization system supporting geo-scientific data sets consisting of multiple large seismic volumes and height-field surfaces, wherein each data set may exceed the size of the graphics memory or possibly even the main memory. Current data sets fall within the range of hundreds of gigabytes up to terabytes in size. Through the mutual utilization of memory and bandwidth resources by multiple data sets, our data-management system is able to share and balance limited system resources among different data sets. We employ multi-resolution methods based on hierarchical octree and quadtree data structures to generate level-of-detail working sets of the data stored in main memory and graphics memory for rendering. The working set generation in our system is based on a common feedback mechanism with inherent support for translucent geometric and volumetric data sets. This feedback mechanism collects information about required levels of detail during the rendering process and is capable of directly resolving data visibility without the application of any costly occlusion culling approaches. A central goal of the proposed out-of-core data management system is an effective virtualization of large data sets. Through an abstraction of the level-of-detail working sets, our system allows developers to work with extremely large data sets independent of their complex internal data representations and physical memory layouts.
Based on this out-of-core data virtualization infrastructure, we present distinct rendering approaches for specific visualization problems of large geo-scientific data sets. We demonstrate the application of our data virtualization system and show how multi-resolution data can be treated exactly the same way as regular data sets during the rendering process. An efficient volume ray casting system is presented for the rendering of multiple arbitrarily overlapping multi-resolution volume data sets. Binary space-partitioning volume decomposition of the bounding boxes of the cube-shaped volumes is used to identify the overlapping and non-overlapping volume regions in order to optimize the rendering process. We further propose a ray casting-based rendering system for the visualization of geological subsurface models consisting of multiple very detailed height fields. The rendering of an entire stack of height-field surfaces is accomplished in a single rendering pass using a two-level acceleration structure, which combines a minimum-maximum quadtree for empty-space skipping and sorted lists of depth intervals to restrict ray intersection searches to relevant height fields and depth ranges. Ultimately, we present a unified rendering system for the visualization of entire geological models consisting of highly detailed stacked horizon surfaces and massive volume data. We demonstrate a single-pass ray casting approach facilitating correct visual interaction between distinct translucent model components, while increasing the rendering efficiency by reducing processing overhead of potentially invisible parts of the model. The combination of image-order rendering approaches and the level-of-detail feedback mechanism used by our out-of-core data-management system inherently accounts for occlusions of different data types without the application of costly culling techniques.
The unified out-of-core data-management and virtualization infrastructure considerably facilitates the implementation of complex visualization systems. We demonstrate its applicability for the visualization of large geo-scientific data sets using output-sensitive rendering techniques. As a result, the magnitude and multitude of data sets that can be interactively visualized is significantly increased compared to existing approaches.
For the safe and efficient operation of dams, frequent monitoring and maintenance are required. These are usually expensive, time-consuming, and cumbersome. To alleviate these issues, we propose applying a wave-based scheme for the localization and quantification of damage in dams.
To obtain high-resolution, interpretable images of the damaged regions, we drew inspiration from non-linear full-multigrid methods for inverse problems and applied a new cyclic multi-stage full-waveform inversion (FWI) scheme. Our approach is less susceptible to the stability issues faced by the standard FWI scheme when dealing with ill-posed problems. In this paper, we first selected an optimal acquisition setup and then applied synthetic data to demonstrate the capability of our approach to identify a series of anomalies in dams by a mixture of reflection and transmission tomography. The results were sufficiently robust, showing the prospect of application in the non-destructive testing of dams.
A coupled thermo-hydro-mechanical model of jointed hard rock for compressed air energy storage
(2014)
Renewable energy resources such as wind and solar are intermittent, which causes instability when they are connected to the electricity grid. Compressed air energy storage (CAES) provides an economically and technically viable solution to this problem by utilizing subsurface rock caverns to store electricity generated from renewable sources in the form of compressed air. Though CAES has been used for over three decades, it has been restricted to salt rock or aquifers for reasons of air tightness. In this paper, the technical feasibility of utilizing hard rock for CAES is investigated using coupled thermo-hydro-mechanical (THM) modelling of non-isothermal gas flow. Governing equations are derived from the rules of energy balance, mass balance, and static equilibrium. Cyclic volumetric mass source and heat source models are applied to simulate gas injection and production. The evaluation is carried out for intact rock and for rock with a discrete crack, respectively. In both cases, the heat and pressure losses using air mass control and supplementary air injection are compared.
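A hedged sketch of a cyclic volumetric mass-source signal of the kind used to model daily CAES charge/discharge: injection at a constant positive rate, production at a negative rate, and idle periods in between. All durations and rates are illustrative, not the paper's values.

```python
import numpy as np

def cyclic_mass_source(t_hours, q_inject=1.0, q_produce=-2.0):
    """Mass source rate (illustrative units) over a repeating 24 h cycle."""
    t = np.asarray(t_hours, dtype=float) % 24.0
    rate = np.zeros_like(t)
    rate[(t >= 0.0) & (t < 8.0)] = q_inject     # 8 h compression / injection
    rate[(t >= 12.0) & (t < 16.0)] = q_produce  # 4 h production
    return rate                                 # zero during idle phases

# rates chosen so one cycle is mass-balanced: 8 h * 1.0 + 4 h * (-2.0) = 0
t = np.linspace(0.0, 48.0, 97)                  # two cycles, 30 min resolution
print(cyclic_mass_source(t)[:10])
```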
The contribution focuses on the development of a basic computational scheme that provides a suitable calculation environment for the coupling of analytical near-field solutions with standard numerical procedures in the far field of the singularity. The proposed calculation scheme uses classical methods of complex function theory, which can be generalized to 3-dimensional problems within the framework of hypercomplex analysis. The adapted approach is mainly based on the factorization of the Laplace operator Δ by the Cauchy-Riemann operator D and its conjugate, Δ = D D̄, where exact solutions of the respective differential equation are constructed by using an orthonormal basis of holomorphic and anti-holomorphic functions.
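For reference, a short sketch of the factorization referred to above, in the planar case (Wirtinger calculus) and its standard hypercomplex generalization:

```latex
% Planar case:
\partial_z = \tfrac{1}{2}\left(\partial_x - i\,\partial_y\right),\qquad
\partial_{\bar z} = \tfrac{1}{2}\left(\partial_x + i\,\partial_y\right),\qquad
\Delta = 4\,\partial_z \partial_{\bar z}.

% Hypercomplex generalization with the Cauchy-Riemann operator
% D = \partial_{x_0} + e_1\,\partial_{x_1} + \dots + e_m\,\partial_{x_m}:
D\,\overline{D} \;=\; \overline{D}\,D \;=\; \Delta_{m+1}.
```

Null solutions of D (monogenic functions) thus play the role that holomorphic functions play for the planar Laplace operator, which is what makes the analytic near-field construction possible.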
The search for the best building design requires a concerted design approach covering both structure and foundation, and our work is an application of this approach. Our objective is also to create an interactive tool that can define, at the early design stages, the orientations of structure and foundation systems that best satisfy the client and the architect. While the concerns of these two actors are primarily technical and economic, they also wish to grasp the environmental and social dimensions of their projects. Thus, this approach is based on alternative studies and on a multi-criteria analysis. In this paper, we present the context of our work, the problem formulation that allows a concerted design of structure and foundation systems, and the process for identifying feasible solutions.
The synchronous distributed processing of common source code in the software development process is supported by well-proven methods. The planning process has similarities with the software development process, yet there are no consistent and similarly successful methods for applications in construction projects. A new approach is proposed in this contribution.
Complex vortex flow patterns around bridge piers, especially during floods, cause scour processes that can result in the failure of foundations. Abutment scour is a complex three-dimensional phenomenon that is difficult to predict, especially with traditional formulas obtained using empirical approaches such as regression. This paper presents a test of a standalone Kstar model together with five novel hybrid algorithms, bagging (BA-Kstar), dagging (DA-Kstar), random committee (RC-Kstar), random subspace (RS-Kstar), and weighted instance handler wrapper (WIHW-Kstar), to predict scour depth (ds) under clear-water conditions. The dataset consists of 99 scour depth measurements from flume experiments (Dey and Barbhuiya, 2005) using vertical, semicircular, and 45° wing abutment shapes. Four dimensionless parameters, relative flow depth (h/l), excess abutment Froude number (Fe), relative sediment size (d50/l), and relative submergence (d50/h), were considered for the prediction of relative scour depth (ds/l). A portion of the dataset was used for calibration (70%), and the remainder for model validation. Pearson correlation coefficients helped decide the relevance of the input parameters, and finally four different combinations of input parameters were used. The performance of the models was assessed visually and with quantitative metrics. Overall, the best input combination for the vertical abutment shape is Fe, d50/l, and h/l, while for the semicircular and 45° wing shapes the combination of Fe and d50/l is the most effective. Our results show that incorporating Fe, d50/l, and h/l leads to higher performance, while involving d50/h reduces the models' predictive power for the vertical abutment shape; for the semicircular and 45° wing shapes, involving h/l and d50/h leads to larger errors. WIHW-Kstar provided the highest performance in scour depth prediction around the vertical abutment shape, while the RC-Kstar model outperformed the other models for scour depth prediction around the semicircular and 45° wing shapes.
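A hedged sketch of the bagging-style hybrid (BA-Kstar) idea above: an ensemble of base regressors trained on bootstrap resamples of the four dimensionless inputs. Kstar is not available in scikit-learn, so a KNN regressor stands in for the base learner; the data and hyperparameters are illustrative.

```python
import numpy as np
from sklearn.ensemble import BaggingRegressor
from sklearn.neighbors import KNeighborsRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(7)
X = rng.uniform(size=(99, 4))                  # stand-in for (Fe, d50/l, h/l, d50/h)
y = 0.8*X[:, 0] + 0.3*X[:, 1] + rng.normal(scale=0.05, size=99)  # stand-in ds/l

# 70 % calibration / 30 % validation split, as in the abstract
X_tr, X_te, y_tr, y_te = train_test_split(X, y, train_size=0.7, random_state=0)
model = BaggingRegressor(KNeighborsRegressor(n_neighbors=3),
                         n_estimators=30, random_state=0).fit(X_tr, y_tr)
print("validation R^2:", round(model.score(X_te, y_te), 3))
```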
Recently, the demand for residence in and usage of urban infrastructure has increased, thereby raising the risk to human lives posed by natural calamities. The occupancy demand has rapidly increased the construction rate, while inadequate structural design makes buildings more vulnerable. Buildings constructed before the development of seismic codes are additionally susceptible to earthquake vibrations. Structural collapse causes economic loss as well as loss of human life. Applying different theoretical methods to analyze structural behavior is expensive and time-consuming. Therefore, introducing a rapid vulnerability assessment method to check structural performance is necessary for future developments. The process mentioned above is known as Rapid Visual Screening (RVS). This technique has been developed to identify, inventory, and screen structures that are potentially hazardous. Sometimes, poor construction quality means some of the required parameters are unavailable; in this case, the RVS process becomes tedious. Hence, to tackle such situations, multiple-criteria decision-making (MCDM) methods open a new gateway for seismic vulnerability assessment. The different parameters required by RVS can be incorporated in MCDM, which evaluates multiple conflicting criteria in decision making across several fields. This paper aims to bridge the gap between RVS and MCDM. Furthermore, to define the correlation between these techniques, the methodologies from the Indian, Turkish, and Federal Emergency Management Agency (FEMA) codes have been implemented, and the resulting assessments of seismic vulnerability have been observed and compared.
Determining the earthquake hazard of any settlement is one of the primary studies for reducing earthquake damage, and the earthquake hazard maps used for this purpose must be renewed over time. The Turkey Earthquake Hazard Map has been used instead of the Turkey Earthquake Zones Map since 2019. A probabilistic seismic hazard analysis was performed using these last two maps and different attenuation relationships for Bitlis Province (Eastern Turkey), located in the Lake Van Basin, which has a high seismic risk. The earthquake parameters were determined by considering all districts and neighborhoods in the province. Probabilistic seismic hazard analyses were carried out for these settlements using seismic sources and four different attenuation relationships. The obtained values are compared with the design spectra stated in the last two earthquake maps; significant differences exist between the design spectra obtained for the different exceedance probabilities. In this study, adaptive pushover analyses of sample reinforced concrete buildings were performed using the design ground motion level. Structural analyses were carried out using three different design spectra, as given in the last two seismic design codes, and the mean spectrum obtained from the attenuation relationships. Different design spectra significantly change the target displacements predicted for the performance levels of the buildings.
Due to the importance of identifying crop cultivars, the advancement of accurate cultivar assessment is considered essential. The existing methods for identifying rice cultivars are mainly time-consuming, costly, and destructive; therefore, the development of novel methods is highly beneficial. The aim of the present research is to classify common rice cultivars in Iran based on color, morphologic, and texture properties using artificial intelligence (AI) methods. In doing so, digital images of 13 rice cultivars in Iran in three forms (paddy, brown, and white) are analyzed through pre-processing and segmentation using MATLAB. Ninety-two features, comprising 60 color, 14 morphologic, and 18 texture properties, were extracted for each rice cultivar. In the next step, the normal distribution of the data was evaluated, and significant differences between the features of the cultivars were tested using analysis of variance. In addition, the least significant difference (LSD) test was performed to obtain a more accurate comparison between cultivars. To reduce the data dimensions and focus on the most effective components, principal component analysis (PCA) was employed. Accordingly, the accuracy of rice cultivar separation using discriminant analysis (DA) was 89.2%, 87.7%, and 83.1% for paddy, brown, and white rice, respectively. To identify and classify the desired cultivars, a multilayer perceptron neural network was implemented based on the most effective components. The results showed 100% accuracy of the network in identifying and classifying all the mentioned rice cultivars. Hence, it is concluded that the integration of image processing with pattern recognition methods, such as statistical classification and artificial neural networks, can be used for the identification and classification of rice cultivars.
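A hedged sketch of the PCA + multilayer perceptron stage described above (the original work used MATLAB; scikit-learn is used here). The feature counts match the abstract, but the data, component count, and network size are illustrative stand-ins.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(13 * 30, 92))   # 92 features: 60 color, 14 morphologic, 18 texture
y = np.repeat(np.arange(13), 30)     # 13 cultivar labels, 30 samples each (stand-in)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
clf = make_pipeline(StandardScaler(),
                    PCA(n_components=20),                  # "most effective components"
                    MLPClassifier(hidden_layer_sizes=(32,),
                                  max_iter=1000, random_state=0))
clf.fit(X_tr, y_tr)
print("test accuracy:", round(clf.score(X_te, y_te), 3))
```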
Methods with convergence order p ≥ 2 (Newton's method, the method of tangent hyperbolas, the method of tangent parabolas, etc.) and their approximate variants are studied. Conditions are presented under which the approximate variants preserve the convergence rate intrinsic to these methods, and some computational aspects (possibilities for organizing parallel computation, globalization of a method, solving the linear equations versus inverting the matrix at every iteration, etc.) are discussed. Polyalgorithmic computational schemes (hybrid methods) combining the best features of various methods are developed, and possibilities for applying them to the numerical solution of two-point boundary-value problems in ordinary differential equations and of decomposition-coordination problems in convex programming are analyzed.
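For a scalar equation, the contrast between a second-order and a third-order member of this family can be sketched as follows; the method of tangent hyperbolas is implemented here in its scalar form, commonly known as Halley's method.

    # Newton's method (order 2) versus the method of tangent hyperbolas
    # (Halley's method, order 3) on the scalar equation f(x) = x**3 - 2 = 0.
    # Derivatives are hard-coded for this illustrative example.
    def newton(f, df, x, steps=6):
        for _ in range(steps):
            x -= f(x) / df(x)
        return x

    def halley(f, df, d2f, x, steps=6):
        for _ in range(steps):
            fx, dfx = f(x), df(x)
            x -= 2 * fx * dfx / (2 * dfx**2 - fx * d2f(x))
        return x

    f   = lambda x: x**3 - 2
    df  = lambda x: 3 * x**2
    d2f = lambda x: 6 * x

    print(newton(f, df, 1.0))        # converges quadratically to 2**(1/3)
    print(halley(f, df, d2f, 1.0))   # converges cubically to the same root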
This paper further extends the strain smoothing technique in finite elements to 8-noded hexahedral elements (CS-FEM-H8). The idea behind the present method is similar to that of the cell-based smoothed 4-noded quadrilateral finite elements (CS-FEM-Q4). In CS-FEM, the smoothing domains are created based on elements, and each element can be further subdivided into one or several smoothing cells. It is observed that: 1) the CS-FEM using a single smoothing cell can produce higher stress accuracy, but suffers from insufficient rank and poor displacement accuracy; 2) the CS-FEM using several smoothing cells has proper rank and good displacement accuracy, but lower stress accuracy, especially for nearly incompressible and bending-dominated problems. We therefore propose 1) an extension of strain smoothing to 8-noded hexahedral elements and 2) an alternative CS-FEM formulation that combines the single-smoothing-cell and multi-smoothing-cell variants via a stabilization technique. Several numerical examples are provided to show the reliability and accuracy of the present formulation.
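The core smoothing operation can be sketched in a few lines: within each smoothing cell, the smoothed strain is the volume-weighted average of the compatible strains sampled in that cell. The strain samples and volumes below are random placeholders, not element-level quantities from the paper.

    # Cell-based strain smoothing sketch: the smoothed strain over one
    # smoothing cell is the volume-weighted average of the compatible
    # strain samples in that cell (Voigt notation, 6 components in 3D).
    import numpy as np

    def smoothed_strain(strains, volumes):
        """Volume-weighted average strain over one smoothing cell.

        strains: (n_points, 6) compatible strain samples
        volumes: (n_points,) integration volumes of the samples
        """
        volumes = np.asarray(volumes, dtype=float)
        return (np.asarray(strains).T @ volumes) / volumes.sum()

    # Example: one hexahedral element split into 4 smoothing cells, each
    # carrying two strain samples (placeholder values).
    rng = np.random.default_rng(1)
    for cell in range(4):
        eps = rng.normal(size=(2, 6))
        vol = rng.uniform(0.5, 1.5, size=2)
        print(f"cell {cell}: smoothed strain = {smoothed_strain(eps, vol)}")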
A categorical perspective towards aerodynamic models for aeroelastic analyses of bridge decks
(2019)
Reliable modelling in structural engineering is crucial for the serviceability and safety of structures. The huge variety of aerodynamic models for aeroelastic analyses of bridges raises natural questions about their complexity and, thus, their quality. Moreover, a direct comparison of aerodynamic models is typically either impossible or meaningless, as the models can be based on very different physical assumptions. Therefore, to address the question of the principal comparability and complexity of models, a more abstract approach, accounting for the effect of basic physical assumptions, is necessary.
This paper presents an application of a recently introduced category-theory-based modelling approach to a diverse set of models from bridge aerodynamics. Initially, the categorical approach is extended to allow an adequate description of aerodynamic models. The complexity of the selected aerodynamic models is then evaluated, based on which model comparability is established. Finally, the utility of the approach for model comparison and characterisation is demonstrated on an illustrative example from bridge aeroelasticity. The outcome of this study is intended to serve as an alternative framework for model comparison and to inform future model assessment studies of mathematical models for engineering applications.
The evolution of data exchange and integration standards within the Architectural, Engineering and Construction industry is gradually making the long-held vision of computer-integrated construction a reality. The Industry Foundation Classes (IFC) and the CIMsteel Integration Standards (CIS/2) are two such standards that have seen remarkable success over the past few years. Despite these successes, the standards support the exchange of product data better than process data, especially for processes that are loosely coupled with product models. This paper reports on ongoing research to evaluate the adequacy of the IFC and CIS/2 standards to support process modeling in the steel supply chain. Some initial recommendations are made regarding enhancements to the data standards to better support processes.
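As a small illustration of product versus process data in IFC, the following sketch queries IfcTask (process) and IfcBeam (product) entities, assuming the open-source ifcopenshell library and a hypothetical file name.

    # Querying process-related entities (IfcTask) alongside product
    # entities (IfcBeam) from an IFC model. The file name is hypothetical;
    # ifcopenshell is an open-source IFC toolkit, not part of the paper.
    import ifcopenshell

    model = ifcopenshell.open("steel_supply_chain.ifc")  # hypothetical file

    for task in model.by_type("IfcTask"):
        # Name and Description are optional attributes inherited from IfcRoot.
        print("process:", task.Name, task.Description)

    for beam in model.by_type("IfcBeam"):                # product data, by contrast
        print("product:", beam.GlobalId, beam.Name)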
The primary objective of the initial shape analysis of a cable-stayed bridge is to calculate the initial installation cable tension forces and to evaluate the fabrication camber of the main span and pylon that provide the final longitudinal profile of the bridge at the end of construction. In addition, the initial cable forces corresponding to alterations of the bridge's shape can be obtained from the analysis and used to ensure safety during construction. In this research, we conducted numerical experiments on the initial shape of the Ko-ha bridge, which will be constructed in the near future, using three typical methods: the continuous beam method, the linear truss method, and the IIMF (Introducing Initial Member Force) method.
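The common idea behind such initial shape computations can be sketched with an influence matrix: cable tensions are chosen so that dead-load deflections at control points vanish. The sketch below uses assumed numbers and is a generic illustration, not the IIMF method itself.

    # Influence-matrix sketch of initial cable tension computation: solve
    # A @ T = -d0, where A[i, j] is the deflection at control point i due
    # to a unit tension in cable j, and d0 are dead-load deflections.
    import numpy as np

    A = np.array([[0.8, 0.3, 0.1],
                  [0.3, 0.9, 0.3],
                  [0.1, 0.3, 0.7]])       # assumed influence matrix (m/kN)
    d0 = np.array([-0.12, -0.20, -0.15])  # assumed dead-load deflections (m)

    T = np.linalg.solve(A, -d0)           # initial cable tensions (kN)
    print("initial cable tensions:", T)
    print("residual deflections:", A @ T + d0)  # ~0 by construction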
Paper-based data acquisition and manual transfer between incompatible software or data formats during bridge inspections, as currently practiced, are time-consuming, error-prone, and cumbersome, and lead to information loss. A fully digitized workflow using open data formats would reduce data loss, effort, and the costs of future inspections. On the one hand, existing studies have proposed methods to automate data acquisition and visualization for inspections, but they lack an open standard for making the gathered data available to other processes. On the other hand, several studies discuss data structures for exchanging damage information among different stakeholders, but do not cover the process of automatic data acquisition and transfer. This study focuses on a framework that incorporates automatic damage data acquisition, data transfer, and a damage information model for data exchange, enabling inspectors to use damage data for subsequent analyses and simulations. The proposed framework demonstrates the potential of a comprehensive damage information model with related (semi-)automatic data acquisition and processing.
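As an illustration of what an open, exchangeable damage record might look like, the sketch below defines a hypothetical schema and serializes it to JSON; the field names are assumptions, not the damage information model proposed in the study.

    # Hypothetical damage record for tool-independent exchange between
    # inspection software, serialized to an open format (JSON). The schema
    # is illustrative only.
    import json
    from dataclasses import dataclass, asdict

    @dataclass
    class DamageRecord:
        damage_id: str
        component_guid: str   # e.g. the GUID of the affected IFC element
        damage_type: str      # e.g. "crack", "spalling", "corrosion"
        extent_mm: float
        inspection_date: str  # ISO 8601
        photo_uri: str

    record = DamageRecord(
        damage_id="D-0001",
        component_guid="2O2Fr$t4X7Zf8NOew3FLKr",
        damage_type="crack",
        extent_mm=0.4,
        inspection_date="2021-06-15",
        photo_uri="inspections/2021/D-0001.jpg",
    )
    print(json.dumps(asdict(record), indent=2))  # machine-readable exchange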