The aim of this study is to control the spurious oscillations that develop around discontinuous solutions of linear and nonlinear wave equations, i.e., hyperbolic partial differential equations (PDEs). Both first-order and second-order (wave) hyperbolic systems are considered. In these systems, even smooth initial conditions or smoothly varying source (load) terms can lead to discontinuous propagating solutions (fronts). For first-order hyperbolic PDEs, the concept of central high-resolution schemes is integrated with multiresolution-based adaptation to properly capture both discontinuous propagating fronts and the effects of fine-scale responses on larger scales in a multiscale manner. This integration leads to using central high-resolution schemes on non-uniform grids; such a simulation is unstable, however, since central schemes were originally developed to work on uniform cells/grids. Hence, the main concern is the stable coupling of central schemes with multiresolution-based cell adapters. Regarding central schemes, the considered approaches are: 1) second-order central and central-upwind schemes; 2) third-order central schemes; 3) third- and fourth-order central weighted essentially non-oscillatory schemes (central-WENO or CWENO); 4) piecewise parabolic methods (PPMs) obtained with two different local stencils. The corresponding (nonlinear) stability conditions of these methods are studied and modified as well. Based on these stability conditions, several limiters are modified or developed: 1) several second-order limiters with the total variation diminishing (TVD) property; 2) second-order uniformly high-order accurate non-oscillatory (UNO) limiters; 3) two third-order nonlinear scaling limiters; 4) two new limiters for PPMs.
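The TVD limiting idea behind several of these schemes can be sketched with the classical minmod slope limiter; this generic example is illustrative and is not one of the modified limiters developed in the thesis:

```python
import numpy as np

def minmod(a, b):
    """Minmod limiter: zero at extrema (a*b <= 0), else the smaller slope."""
    return np.where(a * b <= 0.0, 0.0, np.where(np.abs(a) < np.abs(b), a, b))

def limited_slopes(u):
    """Cell-wise limited slopes for a 1-D array of cell averages."""
    du_left = u[1:-1] - u[:-2]    # backward differences
    du_right = u[2:] - u[1:-1]    # forward differences
    return minmod(du_left, du_right)
```

Near a discontinuity, one of the one-sided differences changes sign and the slope is set to zero, which is what suppresses the spurious oscillations.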
Numerical results show that adaptive solvers lead to cost-effective computations (e.g., in some 1-D problems the number of adapted grid points remains below 200 during the simulation, whereas 2049 points are needed on a uniform grid for the same accuracy). In some cases, fine-scale responses are also confirmed to have considerable effects on larger scales.
In the numerical simulation of nonlinear first-order hyperbolic systems, the two main concerns are convergence and uniqueness. The former is important because of spurious oscillations, numerical dispersion, and numerical dissipation. Convergence of a numerical solution does not guarantee that it is the physical/real one (the uniqueness question); indeed, a nonlinear system can converge to several numerical results, all of which are mathematically valid. In this work, convergence and uniqueness are studied directly on non-uniform grids/cells through the concepts of the local numerical truncation error and the numerical entropy production, respectively. Both concepts have also been used for cell/grid adaptation, and their performance is compared with that of the multiresolution-based method. Several 1-D and 2-D numerical examples are examined to confirm the efficiency of the adaptive solver. The examples involve problems with convex and non-convex fluxes. In the latter case, complex waves develop, and properly capturing the physical solution requires more care; for this purpose, method adaptation (in parallel with cell/grid adaptation) appears essential. This new type of adaptation is also performed in the framework of the multiresolution analysis.
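As a minimal illustration of central-scheme shock capturing for a convex flux, the following sketch advances the inviscid Burgers equation with the first-order Lax-Friedrichs scheme on a uniform periodic grid; it is a generic textbook scheme, not the adaptive high-resolution solver described above:

```python
import numpy as np

def burgers_lax_friedrichs(u, dx, dt, steps):
    """First-order central (Lax-Friedrichs) scheme for u_t + (u^2/2)_x = 0,
    periodic boundaries. Monotone (hence oscillation-free) under the CFL
    condition dt/dx * max|u| <= 1."""
    u = u.copy()
    for _ in range(steps):
        f = 0.5 * u**2                       # Burgers flux
        up, um = np.roll(u, -1), np.roll(u, 1)
        fp, fm = np.roll(f, -1), np.roll(f, 1)
        u = 0.5 * (up + um) - dt / (2 * dx) * (fp - fm)
    return u
```

Being monotone, the scheme respects a discrete maximum principle and conserves the cell averages exactly, though it is far more dissipative than the second- and higher-order schemes considered in the thesis.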
Regarding second-order hyperbolic PDEs (mechanical waves), the regularization concept is used to cure artificial (numerical) oscillation effects, especially for high-gradient or discontinuous solutions. The oscillations are removed by the regularization concept acting as a post-processor. Simulations are performed directly on the second-order form of the wave equations. It is possible to rewrite second-order wave equations as a system of first-order equations and then simulate the new system with high-resolution schemes; however, this approach increases the number of variables (especially for 3-D problems).
The numerical discretization is performed with compact finite difference (FD) formulations having desired features, e.g., spectral-like or optimized-error properties. These FD methods are developed to handle high-frequency waves (such as waves near earthquake sources). The performance of several regularization approaches is studied, both theoretically and numerically; finally, a proper regularization approach controlling the Gibbs phenomenon is recommended.
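The compact FD idea can be illustrated with the classical fourth-order Padé scheme on a periodic grid; the dense solve below stands in for the tridiagonal solver one would use in practice, and the spectral-like optimized schemes mentioned above differ only in their coefficients:

```python
import numpy as np

def compact_derivative(f, h):
    """Fourth-order Pade (compact FD) first derivative on a periodic grid.

    Solves the implicit relation
        (1/4) f'_{i-1} + f'_i + (1/4) f'_{i+1} = (3/2)(f_{i+1}-f_{i-1})/(2h),
    which couples the derivatives at neighbouring points and yields better
    resolution of high wavenumbers than an explicit stencil of equal width.
    """
    n = len(f)
    A = np.eye(n) + 0.25 * (np.eye(n, k=1) + np.eye(n, k=-1))
    A[0, -1] = A[-1, 0] = 0.25           # periodic wrap-around
    rhs = 1.5 * (np.roll(f, -1) - np.roll(f, 1)) / (2 * h)
    return np.linalg.solve(A, rhs)
```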
Finally, numerical results are provided to confirm the efficiency of the numerical solvers enhanced by the regularization concept. In this part, shock-like responses due to local and abrupt changes of physical properties, as well as stress wave propagation in stochastic-like domains, are studied.
This dissertation examines the action resources of civil-society actors in planning processes around inner-city planning procedures. The theoretical framework is formed by Pierre Bourdieu's forms of capital, which, together with Dieter Läpple's Matrixraum, are merged and operationalized into a new field concept, the 'Raumfeld' (spatial field). It is a qualitative study situated between urban sociology and urban studies. The extension of Berlin's Mauerpark and the development area "So! Berlin" in Berlin were chosen as case studies.
The landscape of the Central Argentine Dairy Basin: the imprint of production on the territory
(2022)
In recent times, the study of landscape heritage has gained value as an alternative lens for rethinking regional development, especially from the point of view of territorial planning. In this sense, the Central Argentine Dairy Basin (CADB) is presented as a space where the traces of different human projects have accumulated over centuries of occupation, and these traces can be read as heritage. The impact of dairy farming and other productive activities has shaped the configuration of its landscape. The main hypothesis assumed that a cultural landscape has formed in the CADB, that its configuration has depended to a great extent on the history of productive activities and their deployment over the territory, and that this same history holds the keys to its alternatives.
The thesis approached the object of study through descriptive and cartographic methods that placed the narration of the history of the territory and the resources of the landscape at the discursive axis. A series of intentional readings of the territory and its constituent parts weighed the layers of data that have accumulated on it in the form of landscape traces, aided by complementary analytical dimensions (natural, sociocultural, productive, planning). Furthermore, historical sources were cross-referenced to allow the construction of the territorial narrative and the detection of the origin of the landscape components. Meticulous cartographic work also helped to spatialize the set of phenomena and elements studied, resulting in a multiscalar reading.
Material failure can be tackled by so-called nonlocal models, which introduce an intrinsic length scale into the formulation and, in the case of material failure, restore the well-posedness of the underlying boundary value problem or initial boundary value problem. Among nonlocal models, peridynamics (PD) has attracted a lot of attention, as it allows the natural transition from continuum to discontinuum and thus the modeling of discrete cracks without the need to describe and track the crack topology, which has been a major obstacle in traditional discrete crack approaches. This is achieved by replacing the divergence of the Cauchy stress tensor with an integral over so-called bond forces, which account for the interaction of particles. A quasi-continuum approach is then used to calibrate the material parameters of the bond forces, i.e., the PD energy is equated with the energy of a continuum. One major issue for the application of PD to general complex problems is that it is limited to fairly simple material behavior and purely mechanical problems based on explicit time integration. PD has been extended to other applications, but at the cost of its simplicity and ease in modeling material failure. Furthermore, conventional PD suffers from instability and hourglass modes that require stabilization. It also requires constant horizon sizes, which drastically reduces its computational efficiency. The latter issue was resolved by the so-called dual-horizon peridynamics (DH-PD) formulation and the introduction of the duality of horizons.
Within the nonlocal operator method (NOM), the concept of nonlocality is further extended; the NOM can be considered a generalization of DH-PD. Combined with the energy functionals of various physical models, the nonlocal forms based on the dual-support concept can be derived. In addition, the variation of the energy functional allows implicit formulations of the nonlocal theory. While traditional integral equations are formulated over a single integral domain, the dual-support approaches are based on dual integral domains. One prominent feature of the NOM is its compatibility with variational and weighted residual methods. The NOM yields a direct numerical implementation based on the weighted residual method for many physical problems without the need for shape functions; only the definition of the energy or the boundary value problem is needed, which drastically simplifies the implementation. The nonlocal operator plays a role equivalent to that of the shape-function derivatives in meshless methods and the finite element method (FEM). Based on the variational principle, the residual and the tangent stiffness matrix can be obtained with ease by a series of matrix multiplications. In addition, the NOM can be used to derive many nonlocal models in strong form.
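As an illustration of how a nonlocal operator can replace shape-function derivatives, the following sketch builds a first-order nonlocal gradient from a weighted shape tensor over a particle's support; this is one common Taylor-series/least-squares construction, not the exact dual-support formulation of the thesis:

```python
import numpy as np

def nonlocal_gradient(points, u, i, support, weight=lambda r: 1.0):
    """Sketch of a first-order nonlocal gradient operator at particle i.

    support : indices of the neighbours j in the support of particle i.
    The shape tensor K = sum_j w_ij r_ij (x) r_ij makes the operator exact
    for linear fields u(x) = a + b.x by construction.
    """
    dim = points.shape[1]
    xi = points[i]
    K = np.zeros((dim, dim))          # shape tensor
    g = np.zeros(dim)                 # weighted field differences
    for j in support:
        r = points[j] - xi            # bond vector
        w = weight(np.linalg.norm(r))
        K += w * np.outer(r, r)
        g += w * (u[j] - u[i]) * r
    return np.linalg.solve(K, g)
```

Because the operator is assembled from particle positions alone, no mesh or shape functions are needed, mirroring the role the nonlocal operators play in the NOM.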
The principal contributions of this dissertation are the implementation and application of the NOM, as well as the development of approaches for handling fracture within the NOM, mostly dynamic fracture. The primary coverage and results of the dissertation are as follows:
-The first- and higher-order implicit NOM and the explicit NOM, including a detailed description of the implementation, are presented. The NOM is based on so-called supports, dual-supports, nonlocal operators, and an operator energy functional ensuring stability. The nonlocal operator is a generalization of the conventional differential operators. Combined with the method of weighted residuals and variational principles, the NOM establishes the residual and the tangent stiffness matrix of the operator energy functional through simple matrix operations, without the shape functions required by classical computational methods such as FEM. The NOM only requires the definition of the energy, which drastically simplifies its implementation. For the sake of conciseness, the implementation in this chapter focuses on linear elastic solids only, though the NOM can handle more complex nonlinear problems. An explicit nonlocal operator method for the dynamic analysis of elastic solids is also presented; it avoids the computation of the tangent stiffness matrix required by the implicit NOM and uses the velocity-Verlet algorithm for time integration. The NOM is flexible and efficient for solving partial differential equations (PDEs), and it is straightforward to extend it to other complex physical phenomena described by one or a set of PDEs. Several numerical examples are presented to show the capabilities of the method.
-A nonlocal operator method for the dynamic analysis of (thin) Kirchhoff plates is proposed. The nonlocal Hessian operator is derived from a second-order Taylor series expansion. The NOM is higher-order continuous, which is exploited for thin plate analysis requiring $C^1$ continuity. The nonlocal dynamic governing equations and the operator energy functional for Kirchhoff plates are derived from a variational principle. The velocity-Verlet algorithm is used for time discretization. After confirming the accuracy of the nonlocal Hessian operator, several numerical examples are simulated with the nonlocal dynamic Kirchhoff plate formulation.
-A nonlocal fracture model is developed and applied to the simulation of quasi-static and dynamic fracture using the NOM. The nonlocal weak form of the phase field and the associated strong form are derived from a variational principle; the NOM requires only the definition of the energy. Both a nonlocal implicit and a nonlocal explicit phase-field model for fracture are presented; the former is better suited for quasi-static fracture problems, while the key application of the latter is dynamic fracture. To demonstrate the performance of the approach, several benchmark examples for quasi-static and dynamic fracture are solved.
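The velocity-Verlet time stepping used by the explicit dynamic formulations above follows the standard scheme, sketched generically here for an abstract acceleration function:

```python
import numpy as np

def velocity_verlet(x, v, accel, dt, steps):
    """Standard velocity-Verlet integrator: second-order accurate and
    symplectic, so the energy error stays bounded over long runs."""
    a = accel(x)
    for _ in range(steps):
        x = x + dt * v + 0.5 * dt**2 * a   # position update
        a_new = accel(x)                    # forces at the new position
        v = v + 0.5 * dt * (a + a_new)      # velocity update (averaged)
        a = a_new
    return x, v
```

In the explicit NOM, `accel` would return the nonlocal internal forces divided by the particle masses; here it is left abstract.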
Analysis of Functionally Graded Porous Materials Using Deep Energy Method and Analytical Solution
(2022)
Porous materials are an emerging branch of engineering materials composed of two phases: a solid matrix and a fluid (liquid or gas) filling the pores. Pores can be distributed within the solid matrix with different shapes and sizes. Porous materials are lightweight and flexible, have higher resistance to crack propagation, and possess specific thermal, mechanical, and magnetic properties. These properties make them attractive for engineering structures such as beams. They are widely used in solid mechanics and have recently been considered by many researchers as a good replacement for classical materials. The production of lightweight materials has advanced because these properties can be exploited. Various types of porous material arise naturally or are made artificially for specific applications, such as bones and foams. As in functionally graded materials, the pore distribution pattern can be uniform or non-uniform. Biot's theory is a well-developed framework for studying the behavior of poroelastic materials; it describes the interaction between the fluid and solid phases of a fluid-saturated porous medium.
Functionally graded porous materials (FGPM) are widely used in modern industries such as aerospace, automotive, and biomechanics. These advanced materials have specific properties compared to materials with a classical structure: they are extremely light while retaining strength in mechanical and high-temperature environments. FGPMs are characterized by a gradual variation of material parameters over the volume. Although these materials occur naturally, it is also possible to design and manufacture them for a specific application. Therefore, many studies have analyzed the mechanical and thermal properties of FGPM structures, especially beams.
Biot was the pioneer in formulating the linear elasticity and thermoelasticity equations of porous materials. Since then, Biot's formulation has been developed into a branch of continuum mechanics named poroelasticity. Accurately analyzing the behavior of these materials is complicated by the shape of the pores, their distribution in the material, and the behavior of the fluid (or gas) saturating the pores. Indeed, most engineering structures made of FGPM have nonlinear governing equations, which are difficult to solve.
The main purpose of this dissertation is the analysis of porous materials in engineering structures. For this purpose, the complex equations of porous materials are simplified and applied to engineering problems, so that the effect of all parameters of porous materials on the behavior of the engineering structure can be investigated.
The effects of the important parameters of porous materials on beam behavior are investigated, including pore compressibility, porosity distribution, thermal expansion of the fluid within the pores, the interaction of stresses between pores and the material matrix due to temperature increase, pore size, material thickness, and fluid-saturated versus unsaturated conditions.
Two methods, the deep energy method and an exact analytical solution, are used to reduce the modeling assumptions, increase accuracy and processing speed, and make the analysis applicable to engineering structures. Both methods handle the nonlinear, complex equations of porous materials.
To increase the accuracy of the analysis and to study the effect of shear forces, the Timoshenko and Reddy beam theories are used. In addition, neural networks such as residual and fully connected networks are designed to achieve high accuracy with less processing time than other computational methods.
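The energy-minimization principle behind the deep energy method can be sketched without a neural network: below, a Ritz-type sine expansion for a bar (EA = 1, unit distributed load, fixed ends — all hypothetical choices) is "trained" by gradient descent on the potential energy, mirroring how a network's weights would be optimized:

```python
import numpy as np

# Bar on [0,1], u(0) = u(1) = 0, EA = 1, load f = 1.
# Trial field u(x) = sum_k c_k sin(k pi x); the potential energy reduces to
#   Pi(c) = sum_k [ (k pi)^2 c_k^2 / 4 - b_k c_k ],  b_k = int_0^1 sin(k pi x) dx.
K_MODES = 5
k = np.arange(1, K_MODES + 1)
b = (1 - np.cos(k * np.pi)) / (k * np.pi)       # load vector

def energy(c):
    """Potential energy of the trial field with coefficients c."""
    return np.sum(0.25 * (k * np.pi) ** 2 * c ** 2 - b * c)

c = np.zeros(K_MODES)
for _ in range(2000):
    grad = 0.5 * (k * np.pi) ** 2 * c - b       # dPi/dc
    c -= 0.01 * grad                            # gradient-descent "training"

u_mid = np.sum(c * np.sin(k * np.pi * 0.5))     # displacement at x = 0.5
```

The exact solution of this toy problem is u(x) = x(1-x)/2, so the midpoint displacement should approach 0.125; the deep energy method replaces the sine basis with a neural network but minimizes the same functional.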
The Finite Element Method (FEM) is widely used in engineering for solving Partial Differential Equations (PDEs) over complex geometries. To this end, the FEM software must be provided with a geometric model, which is typically constructed in Computer-Aided Design (CAD) software. However, FEM and CAD use different mathematical descriptions of the geometry, so a mesh suitable for FEM must be generated from the CAD model. This procedure is not trivial and can be time-consuming. The issue becomes more significant for shape and topology optimization problems, which involve evolving the geometry iteratively; the computational cost associated with mesh generation is therefore greatly increased for this type of application.
The main goal of this work is to investigate the integration of CAD and CAE in shape and topology optimization. To this end, numerical tools that close the gap between design and analysis are presented. The specific objectives of this work are listed below:
• Automate the sensitivity analysis in an isogeometric framework for applications in shape optimization; applications to linear elasticity are considered.
• Develop a methodology that provides a direct link between the CAD model and the analysis mesh, so that the sensitivity analysis can be performed in terms of the design variables of the design model.
• Develop an isogeometric method for shape and topology optimization that takes advantage of Non-Uniform Rational B-Splines (NURBS) with higher continuity as basis functions.
Isogeometric Analysis (IGA) is a framework designed to integrate design and analysis in engineering problems. Its fundamental idea is to use the same basis functions that model the geometry, usually NURBS, to approximate the solution fields. The advantage of integrating design and analysis is two-fold. First, the analysis stage is more accurate, since the system of PDEs is solved on the exact CAD model rather than on an approximated geometry; moreover, the direct link between the design and analysis discretizations makes it possible to implement efficient sensitivity analysis methods. Second, the computational time is significantly reduced because the mesh generation process can be avoided.
Sensitivity analysis is essential for solving optimization problems when gradient-based optimization algorithms are employed. Automatic differentiation can compute exact gradients, automatically by tracking the algebraic operations performed on the design variables. For the automation of the sensitivity analysis, an isogeometric framework is used. Here, the analysis mesh is obtained after carrying out successive refinements, while retaining the coarse geometry for the domain design. An automatic differentiation (AD) toolbox is used to perform the sensitivity analysis. The AD toolbox takes the code for computing the objective and constraint functions as input. Then, using a source code transformation approach, it outputs a code for computing the objective and constraint functions, and their sensitivities as well. The sensitivities obtained from the sensitivity propagation method are compared with analytical sensitivities, which are computed using a full isogeometric approach.
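The principle behind such exact gradient computation can be illustrated with forward-mode automatic differentiation via dual numbers; note this is an operator-overloading sketch, whereas the toolbox described here uses source code transformation:

```python
class Dual:
    """Minimal forward-mode AD: carry a value and its derivative ("dot")
    through every algebraic operation performed on the design variable."""
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val + o.val, self.dot + o.dot)
    __radd__ = __add__
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        # product rule: (uv)' = u'v + uv'
        return Dual(self.val * o.val, self.val * o.dot + self.dot * o.val)
    __rmul__ = __mul__

def grad(f, x):
    """Exact derivative of f at x; no finite-difference step size involved."""
    return f(Dual(x, 1.0)).dot
```

Unlike finite differences, the result is exact to machine precision, which is why AD gradients can be compared directly against analytical sensitivities.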
The computational efficiency of AD is comparable to that of analytical sensitivities. However, the memory requirements are larger for AD. Therefore, AD is preferable if the memory requirements are satisfied. Automatic sensitivity analysis demonstrates its practicality since it simplifies the work of engineers and designers.
Complex geometries with sharp edges and/or holes cannot easily be described with NURBS. One solution is the use of unstructured meshes. Simplex-elements (triangles and tetrahedra for two and three dimensions respectively) are particularly useful since they can automatically parameterize a wide variety of domains. In this regard, unstructured Bézier elements, commonly used in CAD, can be employed for the exact modelling of CAD boundary representations. In two dimensions, the domain enclosed by NURBS curves is parameterized with Bézier triangles. To describe exactly the boundary of a two-dimensional CAD model, the continuity of a NURBS boundary representation is reduced to C^0. Then, the control points are used to generate a triangulation such that the boundary of the domain is identical to the initial CAD boundary representation. Thus, a direct link between the design and analysis discretizations is provided and the sensitivities can be propagated to the design domain.
In three dimensions, the initial CAD boundary representation is given as a collection of NURBS surfaces that enclose a volume. Using a mesh generator (Gmsh), a tetrahedral mesh is obtained. The original surface is reconstructed by modifying the location of the control points of the tetrahedral mesh using Bézier tetrahedral elements and a point inversion algorithm. This method offers the possibility of computing the sensitivity analysis using the analysis mesh. Then, the sensitivities can be propagated into the design discretization. To reuse the mesh originally generated, a moving Bézier tetrahedral mesh approach was implemented.
A gradient-based optimization algorithm is employed together with a sensitivity propagation procedure for the shape optimization cases. The proposed shape optimization approaches are used to solve some standard benchmark problems in structural mechanics. The results obtained show that the proposed approach can compute accurate gradients and evolve the geometry towards optimal solutions. In three dimensions, the moving mesh approach results in faster convergence in terms of computational time and avoids remeshing at each optimization step.
For considering topological changes in a CAD-based framework, an isogeometric phase-field based shape and topology optimization is developed. In this case, the diffuse interface of a phase-field variable over a design domain implicitly describes the boundaries of the geometry. The design variables are the local values of the phase-field variable. The descent direction to minimize the objective function is found using the sensitivities of the objective function with respect to the design variables. The evolution of the phase field is determined by solving the time-dependent Allen-Cahn equation.
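An explicit Euler update of a one-dimensional Allen-Cahn equation with a double-well potential (hypothetical scaling) illustrates the phase-field evolution; the isogeometric discretization used here is of course richer than this finite-difference sketch:

```python
import numpy as np

def allen_cahn_step(phi, dx, dt, eps):
    """One explicit Euler step of phi_t = eps^2 * phi_xx - (phi^3 - phi),
    periodic boundaries. The two phases phi = +/-1 are stable equilibria
    of the double-well term, and the eps^2 diffusion smears the interface
    over a width proportional to eps."""
    lap = (np.roll(phi, -1) - 2 * phi + np.roll(phi, 1)) / dx**2
    return phi + dt * (eps**2 * lap - (phi**3 - phi))
```

In the optimization setting, the right-hand side is augmented with the sensitivity of the objective, so the interface migrates toward the optimal layout.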
Especially for topology optimization problems that require C^1 continuity, such as flexoelectric structures, the isogeometric phase-field method is of great advantage. NURBS can achieve the desired continuity more efficiently than the traditionally employed basis functions. The robustness of the method is demonstrated for different geometries, boundary conditions, and material configurations. The applications illustrate that, compared to piezoelectricity, the electrical performance of flexoelectric microbeams is greater under bending; in contrast, the electrical power of a structure under compression is greater with piezoelectricity.
Finite Element Simulations of dynamically excited structures are mainly influenced by the mass, stiffness, and damping properties of the system, as well as external loads. The prediction quality of dynamic simulations of vibration-sensitive components depends significantly on the use of appropriate damping models. Damping phenomena have a decisive influence on the vibration amplitude and the frequencies of the vibrating structure. However, developing realistic damping models is challenging due to the multiple sources that cause energy dissipation, such as material damping, different types of friction, or various interactions with the environment.
This thesis focuses on thermoelastic damping, which is the main cause of material damping in homogeneous materials. The effect is caused by temperature changes due to mechanical strains. In vibrating structures, temperature gradients arise in adjacent tension and compression areas. Depending on the vibration frequency, they result in heat flows, leading to increased entropy and the irreversible transformation of mechanical energy into thermal energy.
The central objective of this thesis is the development of efficient simulation methods to incorporate thermoelastic damping in finite element analyses based on modal superposition. The thermoelastic loss factor is derived from the structure's mechanical mode shapes and eigenfrequencies. In subsequent analyses that are performed in the time and frequency domain, it is applied as modal damping.
Two approaches are developed to determine the thermoelastic loss in thin-walled plate structures as well as in three-dimensional solid structures. The realistic representation of the dissipation effects is verified by comparing the simulation results with experimentally determined data. To this end, an experimental setup is developed that measures material damping while excluding other sources of energy dissipation.
The three-dimensional solid approach is based on determining the generated entropy, and hence the heat generated per vibration cycle, which measures the thermoelastic loss in relation to the total strain energy. For thin plate structures, the amount of bending energy in a modal deformation is calculated and summarized in the so-called Modal Bending Factor (MBF). The thermoelastic loss is highest in a state of pure bending; the MBF therefore enables a quantitative classification of mode shapes with respect to their thermoelastic damping potential.
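For orientation, the frequency dependence of thermoelastic damping is classically described by Zener's standard model, which produces a Debye peak; this generic background formula is not the entropy- or MBF-based approach developed in the thesis:

```python
def zener_loss_factor(omega, tau, delta_e):
    """Classical Zener thermoelastic loss factor
        Q^-1 = delta_e * (omega*tau) / (1 + (omega*tau)^2),
    where tau is the thermal relaxation time of the cross-section and
    delta_e the relaxation strength. Peaks at omega*tau = 1 with value
    delta_e / 2 (Debye peak)."""
    wt = omega * tau
    return delta_e * wt / (1.0 + wt**2)
```

At very low or very high frequencies the vibration is nearly isothermal or nearly adiabatic, respectively, and little energy is dissipated; maximum damping occurs when the vibration period matches the thermal relaxation time.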
The results of the developed simulations are in good agreement with the experimental results and are appropriate to predict thermoelastic loss factors. Both approaches are based on modal superposition with the advantage of a high computational efficiency. Overall, the modeling of thermoelastic damping represents an important component in a comprehensive damping model, which is necessary to perform realistic simulations of vibration processes.
In recent years, lightweight materials such as polymeric nanotube composites (PNCs) have been studied and developed due to their excellent physical and chemical properties. Structures composed of these composite materials are widely used in aerospace engineering, automotive components, and electrical devices. The outstanding mechanical, thermal, and electrical properties of carbon nanotubes (CNTs) make them an ideal filler for enhancing the corresponding properties of polymer materials. The heat transfer of composite materials has very promising engineering applications in many fields, especially electronic devices and energy storage equipment. It is essential in high-energy-density systems, since electronic components need heat dissipation functionality; in other words, in electronic devices the generated heat should ideally be dissipated by light and small heat sinks.
Polymeric composites consist of fillers embedded in a polymer matrix; the former significantly affect the overall (macroscopic) performance of the material. Common carbon-based fillers include single-walled carbon nanotubes (SWCNT), multi-walled carbon nanotubes (MWCNT), carbon nanobuds (CNB), fullerene, and graphene. Additives inside the matrix have become a popular subject for researchers. Extraordinary characteristics, such as high load-bearing performance, lightweight design, excellent chemical resistance, easy processing, and heat transfer, make the design of PNCs flexible. Because different fillers reinforce the composite differently, the material offers a high degree of design freedom and can be tailored to the needs of specific applications. As already stated, our research focus is on SWCNT-enhanced PNCs. Since experiments are time-consuming, sometimes expensive, and cannot shed light on phenomena taking place, for instance, at the interfaces/interphases of composites, they are often complemented by theoretical and computational analysis.
While most studies are based on deterministic approaches, there is a comparatively smaller number of stochastic methods accounting for uncertainties in the input parameters. In deterministic models, the output is fully determined by the parameter values and the initial conditions. However, uncertainties in input parameters such as the aspect ratio, the volume fraction, and the thermal properties of fiber and matrix need to be taken into account for reliable predictions. In this research, a stochastic multiscale method is provided to study the influence of numerous uncertain input parameters on the thermal conductivity of the composite. To this end, a hierarchical multiscale method based on computational homogenization is presented to predict the macroscopic thermal conductivity from the fine-scale structure. In order to study the inner mechanism, we use the finite element method and employ surrogate models to conduct a Global Sensitivity Analysis (GSA). The GSA quantifies the influence of the conductivity of the fiber and the matrix, the Kapitza resistance, the volume fraction, and the aspect ratio on the macroscopic conductivity; to this end, we compute first-order and total-effect sensitivity indices with different surrogate models.
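The first-order Sobol indices used in such a GSA can be estimated with a pick-freeze Monte Carlo sketch; the linear `model` below is a hypothetical stand-in for the homogenized conductivity, chosen so the exact indices are known:

```python
import numpy as np

rng = np.random.default_rng(0)

def model(x):
    """Hypothetical stand-in for the homogenised conductivity: y = 4*x1 + x2,
    so the exact first-order Sobol indices are 16/17 and 1/17."""
    return 4.0 * x[:, 0] + x[:, 1]

def first_order_sobol(f, dim, n=200000):
    """Pick-freeze estimator of first-order Sobol indices, U(0,1) inputs.
    S_i = Cov(f(A), f(C_i)) / Var(f(A)), where C_i equals an independent
    sample B except that column i is 'frozen' to the values from A."""
    A = rng.random((n, dim))
    B = rng.random((n, dim))
    yA = f(A)
    var = yA.var()
    S = np.empty(dim)
    for i in range(dim):
        C = B.copy()
        C[:, i] = A[:, i]                     # freeze input i
        yC = f(C)
        S[i] = (np.mean(yA * yC) - yA.mean() * yC.mean()) / var
    return S
```

In practice the expensive homogenization model is replaced by a cheap surrogate before such sampling-based estimators become affordable, which is exactly the role of the surrogate models in this study.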
As stochastic multiscale models are computationally expensive, surrogate approaches are commonly exploited. With the emergence of high-performance computing and artificial intelligence, machine learning (ML) has become a popular modeling tool for numerous applications. ML is commonly used in regression and maps data through specific rules and algorithms to build input-output models. It is particularly useful for nonlinear input-output relationships when sufficient data is available. ML has also been used in the design of new materials and in multiscale analysis. For instance, artificial neural networks and integrated learning seem ideally suited for such a task: they can theoretically approximate any nonlinear relationship through the connection of neurons. Such mapping relationships are employed to carry out data-driven simulations of inputs and outputs in stochastic modeling.
This research aims to develop stochastic multi-scale computational models of PNCs in heat transfer. The multi-scale stochastic modeling with uncertainty analysis and machine learning methods consists of the following components:
- Uncertainty Analysis. A surrogate-based global sensitivity analysis is coupled with a hierarchical multi-scale method employing computational homogenization. The effect of the conductivity of the fibers and the matrix, the Kapitza resistance, the volume fraction, and the aspect ratio on the 'macroscopic' conductivity of the composite is systematically studied. All selected surrogate models consistently lead to the conclusion that the most influential input parameter is the aspect ratio, followed by the volume fraction; the Kapitza resistance has no significant effect on the thermal conductivity of the PNCs. The most accurate surrogate model in terms of the R² value is the moving least squares (MLS) model.
- Hybrid Machine Learning Algorithms. A combination of an artificial neural network (ANN) and particle swarm optimization (PSO) is applied to estimate the relationship between variable input and output parameters. The ANN models the composite, while PSO improves the prediction performance through an optimized global minimum search. The thermal conductivity of the fibers and the matrix, the Kapitza resistance, the volume fraction, and the aspect ratio are selected as input parameters; the output is the macroscopic (homogenized) thermal conductivity of the composite. The results show that PSO significantly improves the predictive ability of this hybrid intelligent algorithm, which outperforms traditional neural networks.
- Stochastic Integrated Machine Learning. A stochastic, integrated machine-learning-based multiscale approach for predicting the macroscopic thermal conductivity of PNCs is developed. Seven types of machine learning models are exploited: Multivariate Adaptive Regression Splines (MARS), Support Vector Machine (SVM), Regression Tree (RT), Bagging Tree (Bag), Random Forest (RF), Gradient Boosting Machine (GBM), and Cubist. They serve as components of the stochastic modeling, constructing the relationship between the uncertain input variables and the macroscopic thermal conductivity of the PNCs. Particle Swarm Optimization (PSO) is used for hyper-parameter tuning to find globally optimal values, leading to a significant reduction in computational cost. The advantages and disadvantages of the various methods are also analyzed in terms of computing time and model complexity, leading to a recommendation on the applicability of the different models.
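As a minimal illustration of the global-minimum search that PSO contributes in both hybrid approaches above, the following sketch implements a basic particle swarm. The objective function and all parameter values are illustrative assumptions, not taken from the thesis; in the actual use case the objective would be the validation loss of an ANN or of the tree-based ensembles:

```python
import numpy as np

def pso_minimize(f, bounds, n_particles=30, n_iter=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm optimizer: each particle tracks its personal
    best, and the swarm shares one global best (the global-minimum search)."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, dtype=float).T
    d = lo.size
    x = rng.uniform(lo, hi, (n_particles, d))          # initial positions
    v = np.zeros((n_particles, d))                     # initial velocities
    pbest, pbest_val = x.copy(), np.array([f(p) for p in x])
    g = pbest[pbest_val.argmin()].copy()               # global best position
    for _ in range(n_iter):
        r1, r2 = rng.random((2, n_particles, d))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)                     # keep particles in bounds
        vals = np.array([f(p) for p in x])
        better = vals < pbest_val
        pbest[better], pbest_val[better] = x[better], vals[better]
        g = pbest[pbest_val.argmin()].copy()
    return g, pbest_val.min()

# Illustrative stand-in objective (sphere function); a real run would wrap
# the surrogate model's hyperparameters instead.
best_x, best_val = pso_minimize(lambda p: np.sum(p ** 2), bounds=[(-5, 5)] * 3)
print(best_x, best_val)
```

Because the update needs only objective evaluations, no gradients or sensitivities are required, which is what makes PSO attractive for hyper-parameter tuning.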
The detailed structural analysis of thin-walled circular pipe members often requires shell- or solid-based finite element methods. Although these methods provide a very good approximation of the deformations, they require a high degree of discretization, which causes high computational costs. The analysis of thin-walled circular pipe members based on classical beam theories, on the other hand, is easy to implement and needs much less computation time; however, such theories are limited in their ability to approximate the deformations, as they cannot represent the deformation of the cross-section.
This dissertation focuses on the Generalized Beam Theory (GBT), which is both accurate and efficient in analyzing thin-walled members. The theory is based on a separation of variables in which the displacement field is expressed as a combination of predetermined deformation modes related to the cross-section and unknown amplitude functions defined on the beam's longitudinal axis. Although GBT was initially developed for long straight members, through the consideration of complementary deformation modes, which amend the null transverse extension and null shear membrane strain assumptions of classical GBT, problems involving short members, pipe bends, and geometric nonlinearity can also be analyzed. In this dissertation, the GBT formulation for the analysis of these problems is developed, and the application and capabilities of the method are illustrated using several numerical examples. Furthermore, the displacement and stress field results of these examples are verified against an equivalent refined shell-based finite element model.
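In the usual GBT notation (a sketch using standard symbols; the thesis's own symbols may differ), with $x$ the longitudinal coordinate and $s$ the cross-section mid-line coordinate, the separation of variables for the warping, tangential, and normal displacement components reads:

```latex
u(x,s) = \sum_{k=1}^{n} u_k(s)\,\phi_k'(x), \qquad
v(x,s) = \sum_{k=1}^{n} v_k(s)\,\phi_k(x), \qquad
w(x,s) = \sum_{k=1}^{n} w_k(s)\,\phi_k(x)
```

Here $(u_k, v_k, w_k)$ are the predetermined cross-section deformation modes and $\phi_k$ the unknown amplitude functions; in classical GBT the warping component is tied to $\phi_k'$ by the null membrane shear strain assumption, which the complementary modes mentioned above relax.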
The developed static and dynamic GBT formulations for curved thin-walled circular pipes are based on the linear kinematic description of curved shell theory. In these formulations, the complex behavior of pipe bends caused by the strong coupling of longitudinal bending, warping, and cross-sectional ovalization is handled precisely through the derivation of coupling tensors between the considered GBT deformation modes. Similarly, the geometrically nonlinear GBT analysis is formulated for thin-walled circular pipes based on the nonlinear membrane kinematic equations. Here, the initial linear and quadratic stress and displacement tangent stiffness matrices are built using the third- and fourth-order GBT deformation mode coupling tensors.
Longitudinally, the coupled GBT element stiffness and mass matrices are formulated using a beam-based finite element formulation. Furthermore, the formulated GBT elements are tested for shear and membrane locking, and the limitations of the formulations regarding membrane locking are discussed.
Isogeometric analysis (IGA) is a numerical method for solving partial differential equations (PDEs), introduced with the aim of integrating finite element analysis with computer-aided design systems. The main idea of the method is to use the same spline basis functions that describe the geometry in CAD systems for the approximation of solution fields in the finite element method (FEM). Originally, NURBS, the standard technology employed in CAD systems, were adopted as basis functions in IGA, but several variants of IGA use other technologies such as T-splines, PHT-splines, and subdivision surfaces as basis functions. In general, IGA offers two key advantages over classical FEM: (i) by describing the CAD geometry exactly using smooth, high-order spline functions, the mesh generation process is simplified and the interoperability between CAD and FEM is improved; (ii) IGA can be viewed as a high-order finite element method that offers basis functions with high inter-element continuity and can therefore provide a primal variational formulation of high-order PDEs in a straightforward fashion. The main goal of this thesis is to further advance isogeometric analysis by exploiting these major advantages, namely precise geometric modeling and the use of smooth high-order splines as basis functions, and to develop robust computational methods for problems with complex geometry and/or complex multi-physics.
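Since the spline basis is the common ingredient of all the IGA variants listed above, a minimal Cox-de Boor evaluation (a generic sketch, not code from the thesis) illustrates the kind of basis functions involved:

```python
def bspline_basis(i, p, U, t):
    """Cox-de Boor recursion: value of the i-th degree-p B-spline N_{i,p}
    over knot vector U, evaluated at parameter t."""
    if p == 0:
        return 1.0 if U[i] <= t < U[i + 1] else 0.0
    val = 0.0
    if U[i + p] > U[i]:                      # skip zero-width knot spans
        val += (t - U[i]) / (U[i + p] - U[i]) * bspline_basis(i, p - 1, U, t)
    if U[i + p + 1] > U[i + 1]:
        val += (U[i + p + 1] - t) / (U[i + p + 1] - U[i + 1]) * bspline_basis(i + 1, p - 1, U, t)
    return val

# Open (clamped) knot vector, degree 2: four basis functions with C^1
# continuity across the interior knot at 0.5 -- the smoothness IGA exploits.
U, p = [0, 0, 0, 0.5, 1, 1, 1], 2
n_basis = len(U) - p - 1
vals = [bspline_basis(i, p, U, 0.25) for i in range(n_basis)]
print(vals, "sum =", sum(vals))   # the basis forms a partition of unity
```

NURBS then arise by attaching a weight to each basis function and normalizing, which is what allows conic sections (circles, spheres) to be represented exactly.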
As the first contribution of this thesis, we leverage the precise geometric modeling of isogeometric analysis and propose a new method for its coupling with meshfree discretizations. We exploit the strengths of both methods by using IGA to provide a smooth, geometrically-exact surface discretization of the problem domain boundary, while the Reproducing Kernel Particle Method (RKPM) discretization is used to provide the volumetric discretization of the domain interior. The coupling strategy is based upon the higher-order consistency or reproducing conditions that are directly imposed in the physical domain. The resulting coupled method enjoys several favorable features: (i) it preserves the geometric exactness of IGA, (ii) it circumvents the need for global volumetric parameterization of the problem domain, (iii) it achieves arbitrary-order approximation accuracy while preserving higher-order smoothness of the discretization. Several numerical examples are solved to show the optimal convergence properties of the coupled IGA–RKPM formulation, and to demonstrate its effectiveness in constructing volumetric discretizations for complex-geometry objects.
As for the next contribution, we exploit the smooth, high-order spline basis functions of IGA to solve high-order surface PDEs governing the morphological evolution of vesicles. These governing equations often consist of geometric PDEs, high-order PDEs on stationary or evolving surfaces, or a combination thereof. We propose an isogeometric formulation for solving these PDEs. In the context of geometric PDEs, we consider phase-field approximations of mean curvature flow and Willmore flow problems and numerically study the convergence behavior of isogeometric analysis for these problems. As a model problem for high-order PDEs on stationary surfaces, we consider the Cahn–Hilliard equation on a sphere, where the surface is modeled using a phase-field approach. As for high-order PDEs on evolving surfaces, a phase-field model of a deforming multi-component vesicle, which consists of two fourth-order nonlinear PDEs, is solved using isogeometric analysis in a primal variational framework. Through several numerical examples in 2D, 3D, and axisymmetric 3D settings, we show the robustness of IGA for solving the considered phase-field models.
Finally, we present a monolithic, implicit formulation based on isogeometric analysis and generalized-alpha time integration for simulating the hydrodynamics of vesicles according to a phase-field model. Compared to earlier works, the number of equations of the phase-field model that need to be solved is reduced by leveraging the high continuity of NURBS functions, and the algorithm is extended to 3D settings. We use the residual-based variational multiscale (RBVMS) method for solving the Navier–Stokes equations, while the remaining PDEs of the phase-field model are treated using standard Galerkin-based IGA. We introduce the resistive immersed surface (RIS) method into the formulation, which can be employed for an implicit description of complex geometries using a diffuse-interface approach. The implementation highlights the robustness of the RBVMS method for Navier–Stokes equations of incompressible flows with non-trivial localized forcing terms, including the bending and tension forces of the vesicle. The potential of the phase-field model and isogeometric analysis for accurate simulation of a variety of fluid-vesicle interaction problems in 2D and 3D is demonstrated.
In this thesis, a new approach is developed for shape optimization of time-harmonic wave propagation (Helmholtz equation) in acoustic problems. The approach is introduced for problems of different dimensionality: 2D, 3D axisymmetric, and fully 3D problems. The boundary element method (BEM) is coupled with isogeometric analysis (IGA), forming the so-called IGABEM, which speeds up meshing and gives higher accuracy in comparison with standard BEM. BEM is superior for handling unbounded domains, as it models only the inner boundaries and avoids the truncation error present in the finite element method (FEM), since BEM solutions satisfy the Sommerfeld radiation condition automatically. Moreover, BEM reduces the space dimension by one: from a volumetric three-dimensional problem to a surface two-dimensional problem, or from a surface two-dimensional problem to a perimeter one-dimensional problem. Non-uniform rational B-spline (NURBS) basis functions are used in an isogeometric setting to describe both the CAD geometries and the physical fields.
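In standard notation (acoustic pressure $p$, wavenumber $k$, dimension $d$; a generic sketch rather than the thesis's own formulation), the exterior Helmholtz problem and the conventional boundary integral equation read:

```latex
% Helmholtz equation and Sommerfeld radiation condition
\Delta p + k^2 p = 0 \ \text{in } \Omega, \qquad
\lim_{r\to\infty} r^{\frac{d-1}{2}}\left(\frac{\partial p}{\partial r} - \mathrm{i}\,k\,p\right) = 0,
% conventional boundary integral equation (CBIE) with jump term c(x)
c(\mathbf{x})\,p(\mathbf{x})
 + \int_{\Gamma}\frac{\partial G(\mathbf{x},\mathbf{y})}{\partial n_{\mathbf{y}}}\,p(\mathbf{y})\,\mathrm{d}\Gamma_{\mathbf{y}}
 = \int_{\Gamma} G(\mathbf{x},\mathbf{y})\,\frac{\partial p(\mathbf{y})}{\partial n_{\mathbf{y}}}\,\mathrm{d}\Gamma_{\mathbf{y}}
 + p_{\mathrm{inc}}(\mathbf{x}),
\qquad
G(\mathbf{x},\mathbf{y}) = \frac{e^{\mathrm{i}k r}}{4\pi r},\quad r = |\mathbf{x}-\mathbf{y}| \ \text{(3D)}.
```

Because the Green's function $G$ itself satisfies the Sommerfeld condition, any boundary-integral representation built from it inherits the correct far-field behavior, which is why no domain truncation is needed.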
IGABEM is coupled with a gradient-free optimization method, Particle Swarm Optimization (PSO), for structural shape optimization problems. PSO is straightforward to apply since it does not require any sensitivity analysis, although this comes at some additional computational cost. Coupling IGA with optimization enables the NURBS basis functions to represent all three models, the shape design, analysis, and optimization models, by defining a set of control points that serve both as control variables and as optimization parameters, which enables an easy transition between the three models.
Acoustic shape optimization for various frequencies in different media is performed with PSO, and the results are compared with benchmark solutions from the literature for problems of different dimensionality, demonstrating the efficiency of the proposed approach with the following remarks:
- In 2D problems, two BEM methods are used: the conventional isogeometric boundary element method (IGABEM) and the eXtended IGABEM (XIBEM), enriched with a partition-of-unity expansion using a set of plane waves. The results are generally in good agreement with the literature, with some computational advantage for XIBEM, which allows coarser meshes.
- In 3D axisymmetric problems, the three-dimensional problem is reduced in BEM from a surface integral to a combination of two 1D integrals. The first is a line integral similar to a two-dimensional BEM problem; the second is performed over the angle of revolution. Discretization is applied only to the former. This leads to significant computational savings and, consequently, better treatment of higher frequencies than full three-dimensional models.
- In fully 3D problems, a detailed comparison between two BEM methods, the conventional boundary integral equation (CBIE) and Burton-Miller (BM), is provided, including the computational cost. The proposed models are enhanced with a modified collocation scheme with offsets to the Greville abscissae to avoid placing collocation points at corners. Placing collocation points on smooth surfaces enables accurate evaluation of normals for the BM formulation, allows straightforward prediction of jump terms, and avoids singularities in the $\mathcal{O}(1/r)$ integrals, eliminating the need for polar integration. Furthermore, no additional special treatment is required for the hyper-singular integral when collocating on highly distorted elements, such as those containing sphere poles. The obtained results indicate that CBIE with PSO is a feasible alternative (except for a small number of fictitious frequencies) which is easier to implement. Furthermore, BM provides an outstanding treatment of the complicated geometry of mufflers with internally extended inlet/outlet tubes as an interior 3D Helmholtz acoustic problem, instead of using mixed or dual BEM.
In the world of objects, the key ranks among humanity's oldest and most important utilitarian objects. The life and values of civilized cultures have been shaped in part by this object. This thesis focuses on the transformation of a universally familiar product of today's material world.
The objects of the material world are increasingly being dematerialized, and users' relationship to these objects is losing its force. The central object of observation and research is the key as an artifact of the real material world, viewed across time and in relation to users' needs and the decisive technologies.
Based on structured data, measures for user-oriented product design are to be identified. Often, an adequate understanding of users and their requirements is lacking, preventing them from being incorporated into the product development process.
The subject of this thesis is the analysis of user generations in the context of technologies, with a view to developing a conceptual design basis. This research demonstrates how an understanding of the historical investigations, the findings of the conducted studies, and a practice-oriented development process can generate a lasting and well-founded basis for the user-oriented design of future ideas and approaches. The thesis pursues the goal of enabling product design to be considered and applied beneficially.
Due to the significant number of immigrants in Europe, and especially in Germany, integration is an ongoing subject of debate. Since the 1970s, with the emergence of discussions on 'place,' it has also been recognized that the immigrant experience is tied to location. Nevertheless, due to the challenges in capturing the relevance of place to migration, there is a gap in understanding the role of the migrant's geography of experiences and its outcomes (Phillips & Robinson, 2015).
This research aims to investigate the extent to which both the process of objective integration and the socio-spatial practices of high-skilled Iranian immigrants in Berlin outline and influence their sense of belonging to Berlin as the new "home." An embedded mixed-method design was employed for this study. The quantitative analysis, using Pearson's correlation technique, measured the strength of the association between Iranians' settlement distribution and the characteristics of Berlin's districts. It provides contextual data for a deeper understanding of the case study's interaction with place; the units of place are intended to demonstrate the case study's presence and possible interaction with the places around their settlement location, which in part shape their perception. The qualitative analysis comprises ethnographic fieldwork and semi-structured in-depth interviews with a homogeneous sample of Iranian immigrants in Berlin, providing data on individual and ethnic behaviors and trajectories and analyzing the complex interactions between the immigrant experience and the role of place.
This research reveals that highly skilled Iranian immigrants integrate successfully in objective terms. With regard to their sense of belonging, however, it illustrates the following: the socio-ethnic culture of the case study plays a central role in the denotation of home and belonging; the immigrants' efforts toward upward mobility overshadow their attempts to shape social and spatial interactions with Berliners and with Berlin itself, which manifests in both their perception and their use of urban space; and finally, identification practices and boundary-making, as acts of reassurance and self-protection against being generalized with adjacent nationalities (evident in the intersection of the demographic settlement distribution of Iranians in Berlin with the city's ethnic diversity), impact their sense of belonging and place-making.
The Genesis of the Thuringian Art Association Landscape in the Late 19th and Early 20th Centuries
(2023)
At the beginning of the 19th century, Germany had no public museums dedicated to the presentation of art that could offer a space to contemporary artistic production. The enjoyment of painting and the fine arts was thus reserved for those social estates granted this privilege through power, prestige, and noble birth. Emerging as a new expression of the awakening bourgeoisie, the nascent art associations represented, for the first time, an effective institution that wrested the claim to primacy over art and culture from the nobility and offered opportunities for public art education and presentation. Between 1860 and 1945, a multitude of art associations, artists' leagues, and art societies arose on the territory of today's Thuringia, promoting contemporary art and local artists. After the end of the Second World War, however, these patrons disappeared entirely from the scene and their work fell into oblivion. The history of these art associations and leagues has so far received little attention in art-historical scholarship. This dissertation therefore traces the constitutions, structures, effectiveness, and advocacy for (modern) art of 15 art associations, art societies, and artists' associations, providing long-overdue insights into these institutions' contributions to cultural life. This survey makes a first foray into the still largely unknown landscape of art associations in Thuringia, especially in the context of the national framework. The historical account, the presentation of exhibition practice, and the financial scope of action are intended to serve as direct points of comparison and facilitate the primary juxtaposition.
Startup companies face the challenge of developing an entrepreneurial offering while simultaneously building a brand identity. In doing so, they are confronted with financial, personnel, and structural particularities, as well as with a lack of recognition and customer uncertainty. By analyzing brand-building strategy practices, this dissertation aims to contribute to the scholarly debate on brand building in startup companies and to derive practical implications for prospective founders and academic startup incubators.
The focus is on founding projects and startups from the university environment in the pre-founding or early founding phase. The Strategy-as-Practice (SAP) approach is applied to analyze brand-building strategy practices. A qualitative research methodology is used to reconstruct the patterns of action of specific groups of actors in this specific context. Observations, interviews, and documents from the field serve as the basis for the research.
The results of this work provide comprehensive insights into brand-building practice in startup companies. Based on these findings, a model was developed that describes components which together form a framework for brand building in emergent strategy contexts such as company founding. This model makes it possible to engage with brand development in the early founding phase against the background of startup-specific conditions and to implement it systematically in the founding project.
Driven by socio-political demands, many industries are currently rethinking their practices with regard to efficiency and ecology, but also digitalization and Industry 4.0. In this respect, the construction industry is still at an early stage compared to industries such as IT, automotive, or mechanical engineering.
Yet the potential for savings and optimization is particularly high in the construction industry, owing to the large quantities of materials processed. The international debate on resources and climate is increasingly prompting the development and testing of new concepts in cement and concrete production as well. On the one hand, intensive research and development is under way in the field of alternative, climate-friendly cements. On the other hand, innovative material-saving concepts are being tested on the concrete production side, as the current development of 3D printing with concrete shows.
Owing to the high demands on the design, quality, and longevity of structures, precast concrete elements often have advantages over in-situ concrete. High surface quality and durability, as well as uniformity and weather-independent production, are characteristics repeatedly mentioned in connection with precast elements. It is therefore essential that the concrete production process in the precast plant also be critically examined, so that a more efficient and more sustainable production of precast concrete elements becomes possible.
In the production of concrete elements in the precast plant, particular attention is paid to optimizing early strength development. High early strengths are a prerequisite for a high-frequency formwork cycle, enabling work in two or three shifts. To ensure high early strengths, highly reactive cements are often used in combination with high cement contents in the concrete and/or heat treatment. Under this premise, ecologically sustainable concrete production with a reduced CO2 footprint is not possible.
This thesis introduces a new method for accelerating concrete, in which the constituents cement and water (the cement suspension) are pretreated with ultrasound. The starting point is previous research on the influence of ultrasound on the hydration of cement and of its main constituent tricalcium silicate (C3S), which is deepened in this work. In addition, the production of concrete with ultrasound is examined at pilot-plant scale. The experience gained was used to further develop the ultrasonic concrete mixing system and to use it for industrial concrete production for the first time.
In this work, the effects of ultrasound on the hydration of C3S are first investigated in greater depth and at a fundamental level. This was done by measuring the electrical conductivity, analyzing the ion concentration (ICP-OES), thermal analysis, measuring the BET surface area, and optical evaluation by scanning electron microscopy (SEM). The focus is on the first hours of hydration, i.e., the period most strongly influenced by the ultrasonic treatment.
The investigations show that the accelerating effect of ultrasound in dilute C3S suspensions (water/solid ratio = 50) depends strongly on the portlandite concentration of the solution: the lower the portlandite concentration, the greater the acceleration. Complementary analyses of the ion concentration of the solution and of the hydrated C3S show that first hydrate phases are present immediately after sonication (after about 15 minutes of hydration). The acceleration initiated by ultrasound is strongest in the first 24 hours and then gradually subsides. The investigations conclude with experiments on C3S pastes (water/solid ratio = 0.50), which confirm the observations on the dilute suspensions and show an earlier appearance and a larger proportion of C-S-H phases as a result of sonication. It is concluded that the C-S-H phases generated immediately by ultrasound serve as nucleation seeds during the subsequent reaction, so that ultrasound can be regarded as an in-situ nucleation technique. Optically, the C-S-H phases of the sonicated pastes not only appear much earlier but are also smaller and finely distributed over the C3S surface. This effect is likewise considered beneficial for the subsequent regular development of the microstructure.
In the next step, the focus of the investigations is therefore extended from the C3S model system to Portland cement. The question addressed is how a change in the composition of the cement suspension (w/c ratio, superplasticizer content) or a change in the ultrasonic energy input affects the flow properties and the hardening behavior.
To consider the influence of several factors simultaneously, models describing the behavior of the individual factors are built using statistical design of experiments. The flow properties were characterized by the slump flow and flow table spread of the cement suspensions. The acceleration of hardening was quantified by determining the time of initial setting of the cement suspension.
The results of these investigations clearly show that the flow properties and the initial setting time do not change linearly with increasing ultrasonic energy input. Particularly for the workability of Portland cement suspensions, there is a specific energy input up to which the slump flow and spread increase. Beyond this point, defined as the critical energy input, the slump flow and spread decrease again. The occurrence of this point depends strongly on the w/c ratio: as the w/c ratio decreases, the energy input that yields an improvement of the flow properties is reduced. At very low w/c ratios (< 0.35), no improvement can be observed at all.
If superplasticizer is added to the cement suspension before sonication, the properties of the suspension can be influenced substantially. In sonicated suspensions containing superplasticizer, the superplasticizer-induced retardation of initial setting could be markedly reduced, depending on the energy input. Furthermore, the energy input required to shorten the initial setting time by a fixed amount is considerably lower for suspensions with superplasticizer.
Based on the observations on cement suspensions, the influence of ultrasound is divided into a dispersing and an accelerating effect. At high w/c ratios, the dispersing effect of ultrasound dominates and the initial setting time is moderately shortened. At lower w/c ratios, the accelerating effect dominates, while no effect, or even a negative effect, on workability can be observed.
Im nächsten Schritt werden die Untersuchungen auf den Betonmaßstab mit Hilfe einer Technikumsanlage erweitert und der Einfluss eines zweistufigen Mischens (also dem Herstellen einer Zementsuspension im ersten Schritt und dem darauffolgenden Vermischen mit der Gesteinskörnung im zweiten Schritt) mit Ultraschall auf die Frisch- und Festbetoneigenschaften betrachtet. Durch die Anlagentechnik, die mit der Beschallung größerer Mengen Zementsuspension einhergeht, kommen weitere Einflussfaktoren auf die Zementsuspension hinzu (z. B. Pumpgeschwindigkeit, Temperatur, Druck). Im Rahmen der Untersuchungen wurde eine Betonrezeptur mit und ohne Ultraschall hergestellt und die Frisch- und Festbetoneigenschaften verglichen. Darüber hinaus wurde ein umfangreiches Untersuchungsprogramm zur Ermittlung wesentlicher Dauerhaftigkeitsparameter durchgeführt. Aufbauend auf den Erfahrungen mit der Technikumsanlage wurde das Ultraschall-Vormischsystem in mehreren Stufen weiterentwickelt und abschließend in einem Betonwerk zur Betonproduktion verwendet.
Die Untersuchungen am Beton zeigen eine deutliche Steigerung der Frühdruckfestigkeiten des Portlandzementbetons. Hierbei kann die zum Entschalen von Betonbauteilen notwendige Druckfestigkeit von 15 MPa deutlich früher erreicht werden. Das Ausbreitmaß der Betone (w/z-Wert = 0,47) wird infolge der Beschallung leicht reduziert, was sich mit den Ergebnissen aus den Untersuchungen an reinen Zementsuspensionen deckt. Bei Applikation eines Überdruckes in der Beschallkammer oder einer Kühlung der Suspension während der Beschallung, kann das Ausbreitmaß leicht gesteigert werden. Allerdings werden die hohen Frühdruckfestigkeiten der ungekühlten beziehungsweise drucklosen Variante nicht mehr erreicht.
In den Untersuchungen kann gezeigt werden, dass das Potential durch die Ultraschall-Beschleunigung genutzt werden kann, um entweder die Festigkeitsklasse des Zementes leitungsneutral zu reduzieren (von CEM I 52,5 R auf CEM I 42,5 R) oder eine 4-stündige Wärmebehandlung vollständig zu substituieren. Die Dauerhaftigkeit der Betone wird dabei nicht negativ beeinflusst. In den Untersuchungen zum Sulfat-, Karbonatisierung-, Chlorideindring- oder Frost/Tauwiderstand kann weder ein positiver noch ein negativer Einfluss durch die Beschallung abgeleitet werden. Ebenso kann in einer Untersuchung zur Alkali-Kieselsäure-Reaktion kein negativer Einfluss durch die Ultraschallbehandlung beobachtet werden.
In den darauf aufbauenden Untersuchungen wird die Anlagentechnik weiterentwickelt, um die Ultraschallbehandlung stärker an eine reale Betonproduktion anzupassen. In der ersten Iterationsstufe wird das in den Betonuntersuchungen verwendete Anlagenkonzept 1 modifiziert (von der In-line-Beschallung zur Batch-Beschallung) und als Analgenkonzept 2 für weitere Untersuchungen genutzt. Hierbei wird eine neue Betonrezeptur mit höherem w/z-Wert (0,52) verwendet, wobei die Druckfestigkeiten ebenfalls deutlich gesteigert werden können. Im Gegensatz zum ersten Beton, wird das Ausbreitmaß dieser Betonzusammensetzung gesteigert, was zur Reduktion von Fließmittel genutzt wird. Dies deckt sich ebenfalls mit den Beobachtungen an reinen Portlandzementsuspensionen, wo eine deutliche Verbesserung der Fließfähigkeit bei höheren w/z-Werten beschrieben wird.
For this concrete mix, a comparison is made with a commercially available hardening accelerator (synthetic C-S-H seeds). The accelerating effect of the two technologies proves comparable. Combining both technologies leads to a further marked increase in early strength, so that a synergistic effect can be assumed.
The last iteration stage, plant concept 3, describes how the mixing system was developed further significantly within a university spin-off and used for the first time for concrete production in a precast plant. In this further development of the ultrasonic mixing system the focus is placed on practicability, and it is shown that the ultrasound-assisted mixing system can markedly accelerate compressive-strength development at plant scale as well. This establishes the precondition for the ecologically sustainable optimization of a precast concrete under real production conditions.
Knowledge of the actual state of a construction project is a core competence of a managing construction contractor. The deliberate handling of information and its efficient use are decisive success factors for realizing construction projects on time, on budget, and to the required quality.
Although these success factors are well known, cost and schedule overruns in construction projects are no rarity; rather the opposite is the case.
Forward-looking digitalization projects, however, give cause for hope. One example is the Stufenplan Digitales Planen und Bauen (phased plan for digital planning and building), launched by the Federal Ministry of Transport and Digital Infrastructure as early as December 2015. Its task is to introduce the methodology of Building Information Modeling (BIM) across the infrastructure sector and thus to advance digitalization in Germany, since successful construction projects work with continuous information flows. Since then, a multitude of digitalization projects with the same goals has emerged. Demonstrably, however, deficits are increasingly evident: progress is distributed very heterogeneously and cannot be stated in general terms for the industry.
Using an international literature review and an empirical study in the form of expert interviews, the actual state of digitalization and BIM application in road construction was examined for the controlling process of construction-progress assessment (Bauleistungsfeststellung). The collected data were processed and then subjected to a software-supported content analysis. Combined with the results of the literature review, the requirements for this controlling process were derived. On this basis, a model in the sense of systems theory for optimizing progress assessment was developed. The subject of this thesis is the integration of the model-based way of working into a construction company's progress-assessment processes. Its foundation is the objective evaluation of the degree of completion (construction progress) by means of aerial images. Their algorithm-based evaluation and the systematic identification of construction progress, integrated into the progress-assessment process, form a newly developed overall system resulting in an optimized model.
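The core idea of quantifying the degree of completion can be illustrated with a minimal sketch. It assumes, purely for illustration, that the aerial-image evaluation has already yielded measured as-built quantities per work section; the thesis's actual algorithm and data structures are not specified here, so all names and numbers below are hypothetical.

```python
# Hypothetical sketch: degree of completion as the ratio of measured
# (image-derived) quantity to planned quantity, aggregated over work
# sections weighted by their planned quantities. Illustrative only.

def completion_degree(measured_m2, planned_m2):
    """Degree of completion of one work section, clamped to 0..1."""
    if planned_m2 <= 0:
        raise ValueError("planned quantity must be positive")
    return min(1.0, measured_m2 / planned_m2)

def project_progress(sections):
    """Quantity-weighted overall progress across all work sections."""
    total_planned = sum(s["planned"] for s in sections)
    done = sum(completion_degree(s["measured"], s["planned"]) * s["planned"]
               for s in sections)
    return done / total_planned

# Assumed example data (quantities in m², not from the thesis):
sections = [
    {"name": "asphalt base course", "planned": 1200.0, "measured": 900.0},
    {"name": "surface course", "planned": 800.0, "measured": 0.0},
]
progress = project_progress(sections)  # 900 / 2000 = 0.45
```

Weighting by planned quantity keeps a small, finished section from dominating the overall figure, which matches the objectivity goal stated above.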
The developed model was examined and validated with regard to applicability and practical relevance using selected example scenarios.
Audiovisual Cut-Up
(2023)
This research names and analyzes an aesthetic phenomenon of audiovisual works with very fast cuts. The term "audiovisual cut-up" is proposed to designate this phenomenon. A wide range of audiovisual works from different contexts that meet the formal criterion of extremely short shots is analyzed, including the author's own research contributions. The tools and technologies that made the novel aesthetic possible are presented; often these were created by the artists themselves, as ready-made solutions were not available. Audiovisual cut-ups are systematized and situated according to context and medium. Observations follow on how audio and video differ in character, what the smallest perceptible unit is, and what role latency and anticipation play in the perception of audiovisual media.
Three main theses are put forward: 1. The audiovisual cut-up has the power to reveal tiny, previously masked details; it can thereby condense the source material but also manipulate its content. 2. Technical developments brought the audiovisual cut-up into being. 3. Today the aesthetic is established as a stylistic device in the toolbox of audiovisual design.
The most fundamental understanding of the hybridization methodology takes the form of stable but dynamic notions, accumulated over time in the memory of individuals. Schematized and abstracted, the representation of the hybrids needs to be reproduced and reused in order to reconstruct and bring back other memories. Reinvented or reused hybrids can support access to the social, traditional, and religious understanding of nations. In this manner, they take the form of the messenger / the mediator, an innate equivalent to the use of mental places in the art of memory: we remember mythology in order to remember other things.
From the perspective of individual memory, or of a group's collective memory, the act of recollection is assumed to be an individual act, biologically based in the brain, but by definition conditioned by social collectives. Following Halbwachs, this thesis does not recognize a dichotomy between individual and collective memory as two different types of remembering. Conversely, the collective is thought of as inherent to individual thought, questioning perspectives that regard individual recollection as isolated from social settings. The individual places himself in relation to the group and makes use of the collective frameworks of thought when he localizes and reconstructs the past, whether in private or in social settings. The frameworks of social relations, of time, and of space are constructs originating in social interaction and distributed in the memory of the group members. The individual has his own perspective on the collective frameworks of the group, and the group's collective frameworks can be regarded as a common denominator of the individual outlooks on the framework.
In acts of remembering, the individual may actualize the depicted symbols in memory, but he could also employ percepts from the environment. The latter have been referred to as material or external frameworks of memory, suggesting their similar role as catalysts for processes of remembrance, such as that of the hybrids in my paintings. It is only with reference to the hybrids, which work as messengers / mediators with a dual nature and communicate between the past and the present, the internal and the external space, that individual memory and group memory come into focus.
The exhibition at the Egyptian Museum in Leipzig is my practical method of creating a communicative memory, using hybrids as mediators in cultural transmission: when the act refers to the informal, everyday situations in which group members search for the past, it takes place in communicative memory. As explained in chapter one, the exhibition at the Egyptian Museum in Leipzig is an act of remembering in search of the past with the support of my paintings, which can then be considered part of the cultural memory.
In addition to the theoretical framework summarized above, I have applied my hypothesis practically in the form of the public exhibition and shared the methodology with public audiences from Cairo / Egypt and Leipzig / Germany in the form of visual art workshops and open discussions. I have also offered an analytical description of the meaning of the hybrids in my artwork as mediators and messengers for the purpose of cultural transmission, as well as in relation to other artists' work and their use of a similar concept.
By using my hybrid creatures in my visual artwork, I am creating a bridge: mediators that represent both the past and the present, what we remember of the past, and how we understand the past. As explained in chapter two, the hybridization methodology, in terms of a double membership represented in different cultures (Cairo / Egypt and Leipzig / Germany), can provide a framework that allows artistic discussion and can be individually interpreted, so that individual cultures and individual memory can become transparent without losing their identities and turn into communicative memory. This transmission through the hybridization approach was explicitly clarified with the support of Krämer's hypothesis. The practical attempt was examined by creating a relationship between the witness (me as an artist) and the audience (the exhibition visitors), to cross space and time, not to bridge differences, but rather to represent the contrasts transparently.
The kin-making proposition is adopted by many academics and scholars in modern society and theoretical research; the topic was present in the roots of the ancient Egyptian mindset and is supported theoretically by similar understandings such as Haraway's definition of kin-making. The practical implementation of kin-making can be observed in many of my artworks and was analyzed visually and artistically in chapter three.
The outcome of my practical project was tested by using the hybrids in my paintings as mediators: it opened a communicative artistic discussion. This methodology offered a possible path of communication through paintings and visual analysis, and allowed for relativity through the viewer's own interpretation of the images.
A Model of Demand-Oriented Service Provision in FM Based on Sensor Technologies and BIM
(2023)
While digitalization in construction receives ever greater attention, particularly in the planning and erection phases of buildings, the digital potential in facility management is exploited far less than it could be. Given that the operation of buildings accounts for a substantial share of life-cycle costs, a focus on digital processes in building operation is necessary. In facility management, services are frequently provided either activity-oriented, i.e. at static intervals, or demand-oriented. Both modes of service provision show deficits, for example because activities are performed at defined intervals without any actual need, or because existing needs remain unidentified for lack of means of demand determination. The definition and determination of a demand for service provision in particular is often subjective. Moreover, service providers are often not involved in early phases of building planning and receive the data and information required for their services only shortly before the building to be operated is commissioned.
Current approaches of Building Information Modeling (BIM) and the increasing availability of sensor technologies in buildings offer opportunities to remedy the deficits outlined above.
This thesis therefore develops data models and methods that can trigger building-management services in an objectified and automated way, using BIM-based database structures together with evaluation and decision methodologies. The focus of the work is on the facility service of cleaning and care within infrastructural facility management.
An extensive review of established norms and standards as well as publicly available service tenders forms the basis for defining the information required for service provision. The identified static building and process information is structured in a relational database model, which, following a presentation of measurands and a description of the procedure for selecting suitable sensors for demand detection, is extended by sensor information. To be able to use readings from different sensors, including those already existing in buildings, for service triggering, a normalization methodology is implemented in the database model. In this way, the demand for service provision can be determined from threshold values. Linking methods for combining different applications are also integrated into the database model. In addition to the direct triggering of required activities, the developed model enables the opportune triggering of services, i.e. service provision before the demand actually arises. In this way, similar activities or activities located spatially close to one another can usefully be performed early, saving the service provider travel distance. The thesis also describes the algorithms required for evaluation, decision-making, and order monitoring.
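The interplay of normalization, thresholds, and opportune triggering described above can be sketched minimally. This is not the thesis's database model: the 0..1 demand scale, the threshold values, the zone grouping, and all task names are illustrative assumptions.

```python
# Hypothetical sketch: heterogeneous sensor readings are normalized to a
# common 0..1 demand scale and compared against a threshold; near-due
# tasks in a zone that must be visited anyway are triggered "opportunely".

def normalize(value, min_val, max_val):
    """Map a raw sensor reading onto a common 0..1 demand scale."""
    return max(0.0, min(1.0, (value - min_val) / (max_val - min_val)))

def due_tasks(sensors, threshold=0.8, opportune_margin=0.2):
    """Return (due, opportune) task sets from normalized demands."""
    demands = {s["task"]: (normalize(s["value"], s["min"], s["max"]), s["zone"])
               for s in sensors}
    due = {t for t, (d, _) in demands.items() if d >= threshold}
    due_zones = {z for t, (d, z) in demands.items() if t in due}
    # Opportune triggering: a task below the threshold but within the
    # margin, located in a zone already being serviced, is brought forward
    # to save the provider an extra trip.
    opportune = {t for t, (d, z) in demands.items()
                 if t not in due and z in due_zones
                 and d >= threshold - opportune_margin}
    return due, opportune

# Assumed example readings (units and scales are illustrative):
sensors = [
    {"task": "clean_floor_A", "value": 85, "min": 0, "max": 100, "zone": "A"},
    {"task": "empty_bins_A",  "value": 70, "min": 0, "max": 100, "zone": "A"},
    {"task": "clean_floor_B", "value": 30, "min": 0, "max": 100, "zone": "B"},
]
due, opportune = due_tasks(sensors)
# clean_floor_A is due; empty_bins_A is near-due in the same zone and is
# triggered opportunely; clean_floor_B stays untouched.
```

The normalization step is what allows readings from arbitrary, pre-existing sensors to feed one common decision rule, which mirrors the role the thesis assigns to its normalization methodology.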
The developed model of demand-oriented service provision is validated in a relational database; simulations of different building-operation scenarios show that demands can be determined on the basis of sensor technologies and that services can be triggered opportunely, commissioned, and documented.