In earlier research, generalized multidimensional Hilbert transforms have been constructed in m-dimensional Euclidean space, in the framework of Clifford analysis. Clifford analysis, centred around the notion of monogenic functions, may be regarded as a direct and elegant generalization to higher dimensions of the theory of holomorphic functions in the complex plane. The Hilbert transforms considered so far, usually obtained as part of the boundary value of an associated Cauchy transform in m+1 dimensions, may be characterized as isotropic, since the metric in the underlying space is the standard Euclidean one. In this paper we adopt the idea of a so-called anisotropic Clifford setting, which leads to the introduction of a metric-dependent m-dimensional Hilbert transform showing, at least formally, the same properties as the isotropic one. Since the Hilbert transform is an important tool in signal analysis, this metric-dependent setting has the advantage of allowing the coordinate system to be adjusted to possible preferential directions in the signals to be analyzed. A striking result worth mentioning is that the associated anisotropic (m+1)-dimensional Cauchy transform is no longer uniquely determined, but may stem from a diversity of (m+1)-dimensional "mother" metrics.
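For orientation, the classical one-dimensional Hilbert transform, of which the constructions discussed in this abstract are higher-dimensional, Clifford-analytic analogues, reads:

$$(Hf)(x) \;=\; \frac{1}{\pi}\,\mathrm{p.v.}\!\int_{-\infty}^{\infty} \frac{f(t)}{x-t}\,dt$$

The anisotropic m-dimensional transforms of the paper replace this Cauchy-type kernel by a metric-dependent Clifford kernel; the precise kernel is not reproduced here.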
A Multi-objective Model for Optimizing Construction Planning of Repetitive Infrastructure Projects
(2004)
This paper presents the development of a model for optimizing resource utilization in repetitive infrastructure projects. The model provides the capability of simultaneously minimizing both project duration and work interruptions for construction crews. In a single run, it produces a set of nondominated solutions that represent the tradeoff between these two objectives. The model incorporates a multiobjective genetic algorithm and a scheduling algorithm. It initially generates a randomly selected set of solutions that evolves toward a near-optimal set of tradeoff solutions in subsequent generations. Each solution represents a unique schedule that is associated with a certain project duration and a number of interruption days for the utilized construction crews. As such, the model provides project planners with alternative schedules along with their expected duration and resource utilization efficiency.
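The tradeoff set described above can be illustrated by a minimal nondominated-filtering sketch over (duration, interruption days) pairs; this is illustrative only and not the paper's model, which couples such a filter with a genetic algorithm:

```python
def pareto_front(solutions):
    """Return the nondominated (duration, interruptions) pairs.

    A solution dominates another if it is no worse in both objectives
    and strictly better in at least one (both objectives minimized).
    """
    front = []
    for s in solutions:
        dominated = any(
            o[0] <= s[0] and o[1] <= s[1] and o != s
            for o in solutions
        )
        if not dominated:
            front.append(s)
    return front
```

For example, among the candidate schedules `[(10, 2), (8, 5), (12, 1), (9, 4), (10, 3)]`, only `(10, 3)` is dominated (by `(10, 2)`); the other four form the tradeoff front.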
The Element-free Galerkin Method has become a very popular tool for the simulation of mechanical problems with moving boundaries. The internally applied Moving Least Squares approximation generally uses Gaussian or cubic weighting functions and has compact support. Due to the approximative character of this method, the obtained shape functions do not fulfill the interpolation condition, which causes additional numerical effort for the imposition of the essential boundary conditions. The application of a singular weighting function, which leads to singular coefficient matrices at the nodes, can solve this problem, but requires a very careful placement of the integration points. Special procedures for the handling of such singular matrices have been proposed in the literature, which require additional numerical effort. In this paper a non-singular weighting function is presented, which leads to an exact fulfillment of the interpolation condition. This weighting function yields regular values of the weights and the coefficient matrices in the whole interpolation domain, even at the nodes. Furthermore, this function gives much more stable results for varying sizes of the influence radius and for strongly distorted nodal arrangements than classical weighting function types. Nevertheless, for practical applications the results are similar to those obtained with the regularized weighting type presented by the authors in previous publications. Finally, a new concept is presented which enables an efficient analysis of systems with strongly varying node density. In this concept the nodal influence domains are adapted to the nodal configuration by interpolating the influence radius for each direction from the distances to the natural neighbor nodes. This approach requires a Voronoi diagram of the domain, which is available in this study since Delaunay triangles are used as integration background cells.
In the numerical examples it will be shown that this method leads to a more uniform and reduced number of influencing nodes for systems with varying node density than classical circular influence domains, which means that the small additional numerical effort for interpolating the influence radius leads to a remarkable reduction of the total numerical cost in a linear analysis while obtaining similar results. For nonlinear calculations this advantage would be even more significant.
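For context, one of the classical (non-singular, compactly supported) weighting functions mentioned above, the cubic spline weight commonly used in Moving Least Squares, can be sketched as follows; the paper's own new weighting function is not reproduced here:

```python
def cubic_weight(q):
    """Cubic spline weighting function with compact support.

    q = r / d_i is the distance to the node normalized by its influence
    radius; the weight vanishes for q >= 1 (compact support) and is
    C^2-continuous across the branch point q = 0.5.
    """
    if q <= 0.5:
        return 2.0 / 3.0 - 4.0 * q * q + 4.0 * q ** 3
    if q <= 1.0:
        return 4.0 / 3.0 - 4.0 * q + 4.0 * q * q - (4.0 / 3.0) * q ** 3
    return 0.0
```

Because this weight stays finite everywhere, the resulting MLS shape functions only approximate (rather than interpolate) the nodal values, which is exactly the issue the singular and the proposed non-singular weighting functions address.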
For the analysis of arbitrary shell structures discretized by finite elements, an efficient numerical simulation strategy with quadratic convergence, including geometrically and physically nonlinear effects, is presented. First, a Finite-Rotation shell theory allowing constant shear deformations across the shell thickness is given in an isoparametric formulation. The assumed-strain concept enables the derivation of a locking-free finite element. The Layered Approach is applied to ensure a sufficiently precise prediction of the propagation of plastic zones through the shell thickness. The Riks-Wempner-Wessels global iteration scheme is enhanced by a Line-Search procedure to ensure the tracing of nonlinear deformation paths with rather large load steps, even in the post-peak range. The elastic-plastic material model includes isotropic hardening. A new Operator-Split return algorithm ensures a highly accurate solution of the initial-value problem even for larger load steps. The combination with consistently linearized constitutive equations ensures quadratic convergence in a close neighbourhood of the exact solution. Finally, several examples demonstrate the accuracy and numerical efficiency of the developed algorithm.
The K-nearest neighbors (KNN) machine learning algorithm is a well-known non-parametric classification method. However, like other traditional data mining methods, applying it to big data comes with computational challenges. KNN determines the class of a new sample based on the classes of its nearest neighbors; however, identifying those neighbors in a large amount of data imposes a large computational cost, so that the method is no longer tractable on a single computing machine. One of the techniques proposed to make classification methods applicable to large datasets is pruning. LC-KNN is an improved KNN method which first clusters the data into smaller partitions using the K-means clustering method, and then applies KNN for each new sample on the partition whose center is nearest. However, because the clusters have different shapes and densities, selecting the appropriate cluster is a challenge. In this paper, an approach is proposed to improve the pruning phase of the LC-KNN method by taking these factors into account. The proposed approach helps to choose a more appropriate cluster of data in which to look for the neighbors, thus increasing the classification accuracy. The performance of the proposed approach is evaluated on different real datasets. The experimental results show the effectiveness of the proposed approach and its higher classification accuracy and lower time cost in comparison to other recent relevant methods.
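The baseline LC-KNN pruning idea (restrict the neighbor search to the partition with the nearest center) can be sketched as follows; this is a minimal illustration assuming the k-means partitions are computed beforehand, and it does not include the paper's improved, shape- and density-aware cluster selection:

```python
import math
from collections import Counter

def nearest_centroid(x, centroids):
    # Index of the centroid closest to sample x (Euclidean distance).
    return min(range(len(centroids)),
               key=lambda i: math.dist(x, centroids[i]))

def pruned_knn_classify(x, centroids, partitions, k=3):
    """Classify x by KNN restricted to the partition whose center is nearest.

    partitions[i] is a list of (point, label) pairs assigned to
    centroids[i], e.g. produced beforehand by k-means (assumed, not shown).
    """
    cluster = partitions[nearest_centroid(x, centroids)]
    neighbors = sorted(cluster, key=lambda p: math.dist(x, p[0]))[:k]
    votes = Counter(label for _, label in neighbors)
    return votes.most_common(1)[0][0]
```

The computational saving comes from searching only one partition instead of the full dataset; the accuracy risk, which the paper addresses, is that the nearest-center rule may pick the wrong cluster near cluster boundaries.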
The amount of styrene acrylate copolymer (SA) particles adsorbed on cementitious surfaces at the early stage of hydration was quantitatively determined using three different methodological approaches: the depletion method, visible spectrophotometry (VIS) and thermogravimetry coupled with mass spectrometry (TG-MS). Considering the advantages and disadvantages of each method, including the respectively required sample preparation, the results for four polymer-modified cement pastes, varying in polymer content and cement fineness, were evaluated.
In part, significant discrepancies in the adsorption degrees were observed. There is a tendency for significantly lower amounts of adsorbed polymers to be identified by TG-MS compared to the values determined with the depletion method; spectrophotometrically determined values lay between these extremes. This tendency was found for three of the four cement pastes examined and originates in sample preparation and methodological limitations.
The main influencing factor is the falsification of the polymer concentration in the liquid phase during centrifugation, caused by interactions at the interface between sediment and supernatant. The newly developed method using TG-MS for the quantification of SA particles proved suitable for dealing with these issues: instead of the fluid phase, the sediment is examined with regard to its polymer content, on which the influence of centrifugation is considerably smaller.
This paper deals with model-based parameter identification and damage localization of elastomechanical systems using input and output measurements in the frequency domain. An adaptation of the Projective Input Residual Method to subsystem damage identification is presented. For this purpose the projected residuals were adapted with respect to a given subsystem to be analysed. Based on the gradients of these projected subsystem residuals, a damage indicator was introduced which is sensitive to parameter changes and structural damage in this subsystem. Since the computations are carried out in the smaller dimension of a subsystem, this indicator shows a computational performance gain compared to the non-subsystem approach. This gain in efficiency makes the indicator applicable in online monitoring and online damage diagnosis, where continuous and fast data processing is required. The presented application of the indicator to a gantry robot illustrates its ability to indicate and locate real damage in a complex structure. Since in civil engineering applications the system input is often unknown, further investigations will focus on the output-only case, as the generalization of the presented methods to this case will broaden their application spectrum.
Material failure can be tackled by so-called nonlocal models, which introduce an intrinsic length scale into the formulation and, in the case of material failure, restore the well-posedness of the underlying boundary value problem or initial boundary value problem. Among nonlocal models, peridynamics (PD) has attracted a lot of attention as it allows a natural transition from continuum to discontinuum and thus enables the modeling of discrete cracks without the need to describe and track the crack topology, which has been a major obstacle in traditional discrete crack approaches. This is achieved by replacing the divergence of the Cauchy stress tensor by an integral over so-called bond forces, which account for the interaction of particles. A quasi-continuum approach is then used to calibrate the material parameters of the bond forces, i.e., by equating the PD energy with the energy of a continuum. One major issue for the application of PD to general complex problems is that it is limited to fairly simple material behavior and purely mechanical problems based on explicit time integration. PD has been extended to other applications, though at the cost of its simplicity and ease in modeling material failure. Furthermore, conventional PD suffers from instability and hourglass modes that require stabilization. It also requires the use of constant horizon sizes, which drastically reduces its computational efficiency. The latter issue was resolved by the so-called dual-horizon peridynamics (DH-PD) formulation and the introduction of the duality of horizons.
Within the nonlocal operator method (NOM), the concept of nonlocality is further extended, and the NOM can be considered a generalization of DH-PD. Combined with the energy functionals of various physical models, the nonlocal forms based on the dual-support concept can be derived. In addition, the variation of the energy functional allows implicit formulations of the nonlocal theory. While traditional integral equations are formulated on an integral domain, the dual-support approaches are based on dual integral domains. One prominent feature of the NOM is its compatibility with variational and weighted residual methods. The NOM yields a direct numerical implementation based on the weighted residual method for many physical problems without the need for shape functions; only the definition of the energy or boundary value problem is needed, which drastically facilitates the implementation. The nonlocal operator plays a role equivalent to the derivatives of the shape functions in meshless methods and finite element methods (FEM). Based on the variational principle, the residual and the tangent stiffness matrix can be obtained with ease by a series of matrix multiplications. In addition, the NOM can be used to derive many nonlocal models in strong form.
The principal contributions of this dissertation are the implementation and application of the NOM, together with the development of approaches for dealing with fracture within the NOM, mostly dynamic fracture. The primary coverage and results of the dissertation are as follows:
-The first-order and higher-order implicit NOM and the explicit NOM, including a detailed description of the implementation, are presented. The NOM is based on so-called supports, dual-supports, nonlocal operators, and an operator energy functional ensuring stability. The nonlocal operator is a generalization of the conventional differential operators. Combined with the method of weighted residuals and variational principles, the NOM establishes the residual and tangent stiffness matrix of the operator energy functional through simple matrix operations, without the need for shape functions as in other classical computational methods such as FEM. The NOM requires only the definition of the energy, drastically simplifying its implementation. For the sake of conciseness, the implementation in this chapter is focused on linear elastic solids only, though the NOM can handle more complex nonlinear problems. An explicit nonlocal operator method for the dynamic analysis of elastic solid problems is also presented. The explicit NOM avoids the calculation of the tangent stiffness matrix required in the implicit NOM model. The explicit scheme employs the velocity-Verlet algorithm. The NOM is very flexible and efficient for solving partial differential equations (PDEs), and it is straightforward to extend it to other complicated physical phenomena described by one or a set of PDEs. Several numerical examples are presented to show the capabilities of this method.
-A nonlocal operator method for the dynamic analysis of (thin) Kirchhoff plates is proposed. The nonlocal Hessian operator is derived from a second-order Taylor series expansion. NOM is higher-order continuous, which is exploited for thin plate analysis that requires $C^1$ continuity. The nonlocal dynamic governing formulation and operator energy functional for Kirchhoff plates are derived from a variational principle. The Verlet-velocity algorithm is used for time discretization. After confirming the accuracy of the nonlocal Hessian operator, several numerical examples are simulated by the nonlocal dynamic Kirchhoff plate formulation.
-A nonlocal fracture model is developed and applied to the simulation of quasi-static and dynamic fracture using the NOM. The nonlocal weak form of the phase field and the associated strong form are derived from a variational principle. The NOM requires only the definition of the energy. We present both a nonlocal implicit phase field model and a nonlocal explicit phase field model for fracture; the first approach is better suited for quasi-static fracture problems, while the key application of the latter is dynamic fracture. To demonstrate the performance of the underlying approach, several benchmark examples for quasi-static and dynamic fracture are solved.
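The velocity-Verlet time integration used in the explicit schemes above can be sketched generically as follows; this is a plain one-degree-of-freedom sketch of the integrator, not the dissertation's NOM code:

```python
def velocity_verlet(x, v, accel, dt, steps):
    """Integrate x'' = accel(x) with the velocity-Verlet scheme.

    accel maps position -> acceleration. The scheme updates the position
    with the current acceleration, then averages old and new accelerations
    for the velocity update, which makes it second-order accurate and
    well suited to explicit dynamics.
    """
    a = accel(x)
    for _ in range(steps):
        x = x + v * dt + 0.5 * a * dt * dt
        a_new = accel(x)
        v = v + 0.5 * (a + a_new) * dt
        a = a_new
    return x, v
```

On a harmonic oscillator (accel = -x) the scheme conserves energy to high accuracy over a full period, which is the property that makes it attractive for explicit structural dynamics.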
The truss model for predicting the shear resistance of reinforced concrete beams has often been criticized for underestimating the concrete shear strength, especially for beams with low shear reinforcement. Two challenges are commonly encountered in any truss model and are responsible for its inaccurate shear strength prediction: first, the cracking angle is usually assumed empirically, and second, the shear contribution of the arching action is usually neglected. This research introduces a novel approach, using an Artificial Neural Network (ANN), for accurately evaluating the shear cracking angle of reinforced and prestressed concrete beams. The model inputs include the beam geometry, the concrete strength, the shear reinforcement ratio and the prestressing stress, if any. ...
Global structural analyses in civil engineering are usually performed assuming linear-elastic material behavior. For steel structures, however, a certain degree of plasticization may be considered, depending on the member classification. Corresponding plastic analyses taking material nonlinearities into account are effectively realized using numerical methods. Frequently applied finite elements of two- and three-dimensional models evaluate the plasticity at defined nodes using a yield surface, i.e. by a yield condition, hardening rule, and flow rule. Such calculations involve large numerical effort and computation time, and they do not rest on the theoretical background of beam theory, to which the provisions of design standards mainly correspond. For that reason, methods using (one-dimensional) beam elements combined with cross-sectional analyses are commonly applied to steel members in terms of plastic-zone theory. In these approaches, plasticization is in general assessed by means of the axial stress only. In this paper, a more precise numerical representation of the combined stress state, i.e. axial and shear stresses, is presented, and the results of the proposed approach are validated and discussed.
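The combined axial-and-shear assessment can be illustrated with the classical von Mises equivalent stress for a cross-section fiber; this is a textbook check, shown only to make the "combined stress state" concrete, not the paper's specific formulation:

```python
import math

def von_mises_utilization(sigma_x, tau, f_y):
    """Equivalent (von Mises) stress ratio for combined axial stress
    sigma_x and shear stress tau in a cross-section fiber:
    sqrt(sigma_x^2 + 3*tau^2) / f_y.
    Values >= 1.0 indicate plasticization of the fiber.
    """
    return math.sqrt(sigma_x ** 2 + 3.0 * tau ** 2) / f_y
```

Assessing plasticization by axial stress alone corresponds to dropping the shear term, which is exactly the simplification the combined approach avoids: a fiber in pure shear yields at tau = f_y / sqrt(3), not at tau = f_y.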
The aim of my research is to observe the variation in energy efficiency of a typical multi-story office building under exposure to different climatic conditions. Energy efficiency requirements in building codes or energy standards are among the most important single measures for buildings' energy efficiency. This study is therefore set up to provide a better understanding of how the energy efficiency of a building changes under the effect of adverse to moderate climatic conditions, which have a notable effect on the operation of a building.
This thesis is structured in three balanced conceptual steps. Following the aim of the project, the virtual building model is analyzed under the effect of seven distinct climatic conditions, namely those of New Delhi, Mumbai, Berlin, Lisbon, Copenhagen, Dubai and Montreal. The first task is a complete literature review covering the scope of similar studies, the problems in detail, and the theoretical background of all the concepts that are implemented to obtain the numerical results. This chapter also comprises a detailed study of the climatic conditions of the above-mentioned cities. Different climatic traits such as temperature variations, counts of heating and cooling degree days, relative humidity, temperature range and comfort-zone charts for the specified cities are studied in detail. This study helps in understanding the effect of these adverse to moderate climates on the operation of the building. In the second step, the virtual building model is prepared on the software platform Revit Structures. This virtual building model is not necessarily a complete building, but it has the relevant functionalities of a real building. We perform the energy analysis and the heating and cooling analysis on this virtual building model to study the operational outcome of the building under different climatic conditions in detail. By the end of these two tasks, two outcomes are available: on one hand the literature review, and on the other hand the numerical results. Finally, we present a comparison of the energy efficiency performance of the building under these varying climatic conditions. This is followed by the prediction of the thermal comfort level inside the building, based on Fanger's PMV model. Understanding the literature and the numerical values in detail helps us to predict the thermal comfort level inside the building.
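The heating and cooling degree days mentioned among the climatic traits can be computed from daily mean temperatures as follows; the 18 °C balance point is a common assumption and varies between codes and countries:

```python
def degree_days(daily_mean_temps, base=18.0):
    """Heating and cooling degree days from daily mean temperatures (degC).

    base is the balance-point temperature; each day contributes the
    shortfall below it to HDD and the excess above it to CDD.
    """
    hdd = sum(max(0.0, base - t) for t in daily_mean_temps)
    cdd = sum(max(0.0, t - base) for t in daily_mean_temps)
    return hdd, cdd
```

Summed over a reference year, these two totals give a compact first indication of how heating-dominated (e.g. Montreal, Copenhagen) or cooling-dominated (e.g. Dubai, New Delhi) a location's climate is.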
The conclusion of this master's thesis focuses mainly on possible improvements to the energy efficiency requirements in energy codes, differentiated according to specific locations. The initial aim of the hypothesis, to study the impact of climatic variations on the energy efficiency performance of a building, is fulfilled, but as such topics have very deep and broad roots, there remains considerable scope for further work.
The building sector is responsible for a large share of human environmental impacts. Architects and planners are the key players for reducing the environmental impacts of buildings, as they define them to a large extent. Life Cycle Assessment (LCA) allows for the holistic environmental analysis of a building. However, it is currently not employed to improve the environmental performance of buildings during the design process, although the potential for optimization is greatest there. One main reason is the lack of an adequate means of applying LCA in the architectural design process. As such, the main objective of this thesis is to develop a method for environmental building design optimization that is applicable in the design process. The key concept proposed in this thesis is to combine LCA with parametric design, because it proved to have a high potential for design optimization.
The research approach includes the analysis of the characteristics of LCA for buildings and the architectural design stages to identify the research gap, the establishment of a requirement catalogue, the development of a method based on a digital, parametric model, and an evaluation of the method.
An analysis of currently available approaches for LCA of buildings indicates that they are either holistic but very complex or simple but not holistic. Furthermore, none of them provide the opportunity for optimization in the architectural design process, which is the main research gap. The requirements derived from the analysis have been summarized in the form of a catalogue. This catalogue can be used to evaluate both existing approaches and potential methods developed in the future. In this thesis, it served as a guideline for the development of the parametric method – Parametric Life Cycle Assessment (PLCA). The unique main feature of PLCA is that embodied and operational environmental impact are calculated together. In combination with the self-contained workflow of the method, this provides the basis for holistic, time-efficient environmental design optimization. The application of PLCA to three examples indicated that all established mandatory requirements are met. In all cases, environmental impact could be significantly reduced. In comparison to conventional approaches, PLCA was shown to be much more time-efficient.
PLCA allows architects to focus on their main task of designing the building, and finally makes LCA practically useful as one of several criteria for design optimization. With PLCA, the building design can be time-efficiently optimized from the beginning of the most influential early design stages, which has not been possible until now. PLCA provides a good starting point for further research. In the future, it could be extended by integrating the social and economic aspects of sustainability.
A parametric method for building design optimization based on Life Cycle Assessment - Appendix
(2016)
The building sector is responsible for a large share of human environmental impacts, over which architects and planners have a major influence. The main objective of this thesis is to develop a method for environmental building design optimization based on Life Cycle Assessment (LCA) that is applicable as part of the design process. The research approach includes a thorough analysis of LCA for buildings in relation to the architectural design stages and the establishment of a requirement catalogue. The key concept of the novel method called Parametric Life Cycle Assessment (PLCA) is to combine LCA with parametric design. The application of this method to three examples shows that building designs can be optimized time-efficiently and holistically from the beginning of the most influential early design stages, an achievement which has not been possible until now.
In many engineering applications two or more different interacting systems require the numerical solution of so-called multifield problems. In civil engineering the interaction of fluid and structure plays an important role, e.g. for fabric tensile structures of light and flexible materials, often used for large roof systems, capacious umbrellas or canopies. Whereas powerful numerical simulation techniques have been established in structural engineering as well as in fluid mechanics, only relatively few approaches to simulate the interaction of fluids with civil engineering structures have been presented. To determine the wind loads on complex structures, it is still state of the art to apply semi-empirical, strongly simplifying methods or to perform expensive experiments in wind tunnels. In this paper an approach to coupled fluid-structure simulation is presented for membrane and thin shell structures. The interaction is described by the structural deformation in response to wind forces, resulting in a modification of the fluid flow domain. Besides a realistic determination of the wind loads, information on the structural stability can be obtained. The so-called partitioned solution is based on an iterative frame algorithm integrating different codes for Computational Fluid Dynamics (CFD) and Computational Structural Dynamics (CSD) in an explicit or an implicit time-stepping procedure. All data exchange between the two different applications is performed via a neutral geometric model provided by a coupling interface. A conservative interpolation method is used for the interpolation of the nodal loads. The time-dependent motion of the structure requires a dynamic modification of the different grids and a redefinition of the Navier-Stokes equations in an Arbitrary Lagrangian-Eulerian (ALE) formulation. As an example of the present implementation, results of a coupled fluid-structure simulation for a textile membrane canopy are presented.
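The implicit variant of such a partitioned scheme amounts to a fixed-point iteration over the interface state within each time step. The sketch below shows this idea in its simplest scalar form with constant under-relaxation; the names are illustrative, and the paper's actual algorithm (full CFD/CSD codes coupled via a neutral geometric model) is far more involved:

```python
def implicit_coupling_step(fluid_solve, structure_solve, u0,
                           tol=1e-8, max_iter=50, relax=0.5):
    """One implicit (strongly coupled) time step of a partitioned scheme.

    fluid_solve: interface displacement -> interface load
    structure_solve: interface load -> interface displacement
    Fixed-point iteration with constant under-relaxation (a simple,
    common choice; production codes often use Aitken relaxation).
    """
    u = u0
    for _ in range(max_iter):
        load = fluid_solve(u)          # fluid sees the deformed interface
        u_new = structure_solve(load)  # structure responds to the load
        if abs(u_new - u) < tol:       # interface states agree: converged
            return u_new
        u = u + relax * (u_new - u)    # under-relaxed update
    return u
```

An explicit (loosely coupled) scheme would perform only one such fluid/structure exchange per time step, trading robustness for speed.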