Refine
Document Type
- Conference Proceeding (38)
- Article (22)
- Doctoral Thesis (16)
- Part of a Book (10)
- Bachelor Thesis (8)
- Master's Thesis (3)
- Book (2)
- Periodical (2)
- Study Thesis (2)
- Habilitation (1)
Institute
- Graduiertenkolleg 1462 (19)
- In Zusammenarbeit mit der Bauhaus-Universität Weimar (19)
- Institut für Strukturmechanik (ISM) (18)
- Universitätsbibliothek (11)
- Professur Bauphysik (8)
- Professur Informatik in der Architektur (6)
- F. A. Finger-Institut für Baustoffkunde (FIB) (4)
- An-Institute (2)
- Institut für Europäische Urbanistik (2)
- Professur Bauchemie und Polymere Werkstoffe (2)
- Professur Stochastik und Optimierung (2)
- Geschichte und Theorie der Visuellen Kommunikation (1)
- Institut für Bauinformatik, Mathematik und Bauphysik (IBMB) (1)
- Junior-Professur Computational Architecture (1)
- Junior-Professur Psychophysiologie und Wahrnehmung (1)
- Professur Angewandte Mathematik (1)
- Professur Baubetrieb und Bauverfahren (1)
- Professur Baumechanik (1)
- Professur Content Management und Webtechnologien (1)
- Professur Entwerfen und Baugestaltung (1)
- Professur Grundbau (1)
- Professur Informatik im Bauwesen (1)
- Professur Medienmanagement (1)
- Professur Medienphilosophie (1)
- Professur Soziologie und Sozialgeschichte der Stadt (1)
- Professur Stahlbau (1)
- Professur Theorie und Geschichte der modernen Architektur (1)
- Professur Tragwerkslehre (1)
Keywords
- Angewandte Mathematik (50)
- Angewandte Informatik (36)
- Computerunterstütztes Verfahren (36)
- Strukturmechanik (14)
- Elektronisches Buch (7)
- Bauphysik (3)
- Bibliothek (3)
- E-Book-Reader (3)
- Urheberrecht (3)
- Architektur (2)
Year of publication
- 2012 (105)
An analytical molecular mechanics model for the elastic properties of crystalline polyethylene
(2012)
We present an analytical model for the elastic properties of crystalline polyethylene based on a molecular mechanics approach. Along the polymer chain direction, the united-atom (UA) CH2-CH2 bond-stretching and angle-bending potentials are replaced with equivalent Euler-Bernoulli beams. Between any two polymer chains, explicit formulae are derived for the van der Waals interactions, represented by linear springs of different stiffness. The nine independent elastic constants are then evaluated systematically using these formulae. The analytical model is finally validated against the present united-atom molecular dynamics (MD) simulations and against all-atom molecular dynamics results available in the literature. The established analytical model provides an efficient route to the mechanical characterization of crystalline polymers and related materials.
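The beam substitution follows the usual molecular structural mechanics ansatz, equating the strain energies of the harmonic bond potentials with those of an equivalent circular beam. A minimal sketch with illustrative force constants (the function name and parameter values are not from the paper):

```python
import math

def equivalent_beam(k_r, k_theta, L):
    """Map harmonic bond force constants to an equivalent circular
    Euler-Bernoulli beam (molecular structural mechanics ansatz):
        E*A / L = k_r      (axial stretching)
        E*I / L = k_theta  (angle bending)
    For a circular cross-section A = pi*d^2/4 and I = pi*d^4/64,
    which gives the diameter d = 4*sqrt(k_theta / k_r)."""
    d = 4.0 * math.sqrt(k_theta / k_r)
    A = math.pi * d ** 2 / 4.0
    E = k_r * L / A
    return d, E
```

For polyethylene, k_r and k_theta would be taken from the united-atom force field; the derived diameter d and modulus E then parametrize the beam elements along the chain.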
The analysis of the response of complex structural systems requires the description of the material constitutive relations by an appropriate material model. The level of abstraction of such a model may strongly affect the quality of the prognosis for the whole structure. It is therefore necessary to describe the material as exactly, but also as simply, as possible. All phenomena of crystalline materials such as steel that affect the behavior of a structure rely on physical effects interacting over spatial scales from the subatomic to the macroscopic range. Nevertheless, if the material is microscopically heterogeneous, it may be appropriate for civil engineering purposes to use phenomenological models. Although widely applied, such models are insufficient for steel with microscopic characteristics such as texture, which typically occurs in hot-rolled steel members or in the heat-affected zones of welded joints. Texture manifests itself in crystalline materials as a regular crystallographic structure and crystallite orientation, which influences the macroscopic material properties. The analysis of the structural response of a material with texture (e.g. rolled steel or the heat-affected zone of a welded joint) therefore requires the phenomenological material description at the macroscopic scale to be enriched with microscopic information. This paper introduces an enrichment approach for material models based on a hierarchical multiscale methodology: the grain texture is described on a mesoscopic scale and coupled with the macroscopic constitutive relations by means of homogenization. Given the variety of available homogenization methods, the question of assessing the coupling quality arises. The applicability of the method and the effect of the coupling method on the reliability of the response are demonstrated with an example.
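The homogenization step that couples the scales can be illustrated by the elementary Voigt and Reuss bounds for a multi-phase aggregate. This is a minimal sketch of the idea only, not the coupling scheme assessed in the paper:

```python
def voigt_reuss_bounds(E_phases, volume_fractions):
    """Elementary bounds on the effective Young's modulus of a
    multi-phase aggregate: Voigt (iso-strain, arithmetic mean of the
    phase moduli) and Reuss (iso-stress, harmonic mean)."""
    assert abs(sum(volume_fractions) - 1.0) < 1e-12
    E_voigt = sum(f * E for f, E in zip(volume_fractions, E_phases))
    E_reuss = 1.0 / sum(f / E for f, E in zip(volume_fractions, E_phases))
    return E_voigt, E_reuss
```

Any admissible homogenization of the mesoscopic texture model must deliver an effective stiffness between these two bounds, which already gives a coarse plausibility check for the coupling quality.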
The quality of the cladding panels has a significant effect on the fire resistance of metal-stud wall constructions. This work therefore investigated the influence of additives in gypsum boards with regard to a possible improvement of this property.
For this purpose, special test specimens adapted to the respective test conditions were produced using a wide variety of additives. Their effects were assessed in particular by means of the following five criteria:
1) the time of the temperature rise after dehydration of the specimen,
2) the maximum temperature on the rear face of the board,
3) the size and number of the cracks,
4) the stability of the board after the thermal exposure,
5) the shortening of prismatic specimens.
Of particular importance was the characterization of the effects of a simulated fire exposure of 970 °C over 90 minutes on laboratory gypsum boards. The temperature change on the rear face of the board was recorded continuously over the entire test period. The cohesion of the boards after the thermal exposure was evaluated quantitatively, for the first time, via the number and size of the cracks formed in the specimens. The cracking is caused by the reduction of the specimen volume as the water of crystallization is driven off. Since this parameter cannot be determined in the board test, the length-change behavior of prisms after a 90-minute heat treatment at 1000 °C in a muffle furnace was additionally determined.
The addition of 80 g/m2 of glass fibers and 7.75 % limestone powder proved particularly beneficial for the behavior of gypsum boards under fire exposure. This improvement is attributable above all to the higher stability and lower shrinkage of the gypsum board.
Based on the results obtained at laboratory scale, formulation proposals for improving the fire resistance of gypsum boards under practical conditions were developed. The required large-format boards were produced on the production line of Knauf Gips KG. Installed as a wall construction with double-layer cladding, these boards successfully passed a full-scale test. A smaller deflection of the wall construction, a reduced volume loss of the boards, and an increased board stability demonstrate the improved properties of this modified fire-protection board.
Further investigations showed that it is irrelevant whether the boards are produced from natural gypsum or FGD gypsum, or with a high or a low weight per unit area. The clearly best result, with a fire resistance of 118 minutes, was achieved by a wall construction made of fire-protection boards based on a stucco gypsum from 100 % FGD gypsum, with 83.9 g/m2 of glass fibers, 1 % vermiculite, a weight per unit area of 10.77 kg/m2, and a board thickness of 12.5 mm.
The target fire resistance of 120 minutes for double-layer cladding without insulation could be reached in the future if the volume reduction can be compensated even better and the board stability increased further. One possibility is to substitute the cardboard facings on both sides with a glass-fiber fleece wrapping. A W112 wall construction without insulation, whose gypsum core is reinforced with glass fibers, then achieves a fire resistance of well over 120 minutes.
The thesis »Anachronismen: Historiografie und Kino« (Anachronisms: Historiography and Cinema) starts from an initially simple observation: almost whenever historians engage with history films, one finds loud complaints about the films' numerous and avoidable anachronisms, which are said to disqualify them as serious historiographic contributions.
From here, the thesis pursues a threefold project: first, to gain, through a critical analysis of texts in the theory of history, some indications of the status of anachronisms for modern Western historiography; second, to examine what role anachronisms play for the history film; and third, to investigate from there the epistemic potential of anachronistic history cinema.
One of the main theses guiding the view of both the films and the theoretical texts is that anachronisms are exactly those points at which the media of any historiography become observable. The thesis observes and describes these media of cinematographic historiography with the help of some theoretical considerations from Actor-Network Theory (ANT).
The thesis is organized in four chapters, each centered on the discussion of an ANT concept and the analysis of one history film. The films examined include Shutter Island (Martin Scorsese, 2010), Chronik der Anna Magdalena Bach (Jean-Marie Straub/Danièle Huillet, 1968), Cleopatra (Joseph L. Mankiewicz, 1963), and Caravaggio (Derek Jarman, 1986). The thesis also comments on theoretical texts on historiography and anachronisms by Walter Benjamin, Leo Bersani, Georges Didi-Huberman, Siegfried Kracauer, Friedrich Meinecke, Friedrich Nietzsche, Jacques Rancière, Leopold Ranke, Paul Ricœur, Georg Simmel, Hayden White, and others.
Gaze-based human-computer interaction has been a research topic for over a quarter of a century. Throughout this time, the main scenario for gaze interaction has been helping handicapped people to communicate and interact with their environment. With the rapid development of mobile and wearable display technologies, a new field of application for gaze interaction has appeared, opening new research questions.
This thesis investigates the feasibility of mobile gaze-based interaction, studying in depth the use of pie menus as a generic and robust widget for gaze interaction, as well as visual and perceptual issues of head-mounted (wearable) optical see-through displays.
It reviews conventional gaze-based selection methods and investigates in detail the use of pie menus for gaze control, studying and discussing layout issues, selection methods, and applications. Results show that pie menus can accommodate up to six items in width and multiple depth layers, allowing fast and accurate navigation through hierarchical levels by using or combining multiple selection methods. Based on these results, several text entry methods built on pie menus are proposed: character-by-character text entry, text entry with bigrams, and text entry with bigrams derived from word prediction. These, together with possible selection methods, are examined in a longitudinal study. The data showed large advantages of the bigram entry methods over single-character text entry in speed and accuracy. Participants preferred the novel saccade-based selection method (selecting by borders) over the conventional and well-established dwell-time method.
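The bigram-based entry can be sketched with a toy next-character model (a hypothetical helper, not the thesis software): after each keystroke, the most frequent followers of the last typed character would fill the six pie-menu slots.

```python
from collections import Counter, defaultdict

def bigram_suggestions(corpus, last_char, k=6):
    """Toy bigram model: return the k characters that most frequently
    follow `last_char` in `corpus` -- e.g. to fill the six slots of a
    pie menu after each keystroke."""
    follow = defaultdict(Counter)
    for a, b in zip(corpus, corpus[1:]):
        follow[a][b] += 1
    return [c for c, _ in follow[last_char].most_common(k)]
```

A real system would train the counts on a large text corpus and, for the word-prediction variant, re-rank the followers by the completions of the current word.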
On the one hand, pie menus proved to be a feasible and robust widget, which may enable the efficient use of mobile eye-tracking systems that are not accurate enough for controlling elements of a conventional interface. On the other hand, visual perception on mobile display technologies needs to be examined in order to determine whether the above results transfer to mobile devices.
Optical see-through devices enable observers to see additional information embedded in real environments. There is already some evidence of increased visual load on such systems. We investigated participants' visual performance in visual search tasks and dual tasks, presenting visual stimuli on the optical see-through device, on a computer screen only, and on both devices simultaneously. Results showed that switching between the presentation devices (i.e., perceiving information simultaneously from both devices) produced costs in visual performance. The implications of these costs, and of further perceptual and technical factors, for mobile gaze-based interaction are discussed, and solutions are proposed.
Volume rendering is a visualization technique for displaying various spatial measurement and simulation data in a descriptive, interactive graphical way. This contribution presents a method for overlaying several volume datasets with an architectural surface model. This complex rendering computation is carried out with hardware-accelerated shaders on the graphics card. The implemented software prototype "VolumeRendering" is presented. Besides the interactive rendering method, emphasis was placed on user-friendly operation; the goal was to enable specialist planners to evaluate the volume data easily. Overlaying, for example, the results of different measurement methods with a surface model yields synergies and new evaluation possibilities. Finally, the application of the software prototype is illustrated with examples from an interdisciplinary research project.
This paper presents a novel numerical procedure based on the framework of isogeometric analysis for the static, free-vibration, and buckling analysis of laminated composite plates using the first-order shear deformation theory. The isogeometric approach utilizes non-uniform rational B-splines to implement quadratic, cubic, and quartic elements. A shear-locking problem still exists in the stiffness formulation; it can be significantly alleviated by a stabilization technique. Several numerical examples are presented to show the performance of the method, and the results obtained are compared with other available results.
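The underlying B-spline basis can be evaluated with the Cox-de Boor recursion; a minimal textbook sketch, not the paper's implementation (a NURBS basis additionally weights and normalizes these functions):

```python
def bspline_basis(i, p, u, knots):
    """Cox-de Boor recursion: value of the i-th B-spline basis
    function of degree p at parameter u for the given knot vector
    (half-open interval convention for the degree-0 functions)."""
    if p == 0:
        return 1.0 if knots[i] <= u < knots[i + 1] else 0.0
    left = 0.0
    if knots[i + p] != knots[i]:
        left = (u - knots[i]) / (knots[i + p] - knots[i]) \
            * bspline_basis(i, p - 1, u, knots)
    right = 0.0
    if knots[i + p + 1] != knots[i + 1]:
        right = (knots[i + p + 1] - u) / (knots[i + p + 1] - knots[i + 1]) \
            * bspline_basis(i + 1, p - 1, u, knots)
    return left + right
```

On the open knot vector [0, 0, 0, 1, 2, 3, 3, 3] the five quadratic functions form a partition of unity inside the parameter domain, which is the property the isogeometric elements rely on.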
A simple multiscale analysis framework for heterogeneous solids based on a computational homogenization technique is presented. The macroscopic strain is kinematically linked to the boundary displacement of a circular or spherical representative volume that contains the microscopic information of the material. The macroscopic stress is obtained from an energy principle between the macroscopic and the microscopic scale. The new method is applied to several standard examples to demonstrate the accuracy and consistency of the proposed method.
Meshfree methods (MMs) such as the element-free Galerkin (EFG) method have gained popularity because of some advantages over other numerical methods such as the finite element method (FEM). A group of problems that has attracted a great deal of attention from the EFG community is the treatment of large deformations and of strong discontinuities such as cracks. One efficient way to model cracks is to add special enrichment functions to the standard shape functions, as in the extended FEM within the FEM context, or in the cracking particles method based on the EFG method. It is well known that explicit time integration in dynamic applications is conditionally stable. Furthermore, in enriched methods the critical time step may tend to very small values, leading to computationally expensive simulations. In this work, we study the stability of enriched MMs and propose two mass-lumping strategies. We then show that the critical time step for enriched MMs based on lumped mass matrices is of the same order as the critical time step of MMs without enrichment. Moreover, we show that, in contrast to the extended FEM, even with a consistent mass matrix the critical time step does not vanish when the crack directly crosses a node.
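For explicit central-difference integration, the critical time step is set by the highest discrete eigenfrequency, dt_crit = 2/omega_max; with a lumped (diagonal) mass matrix this is straightforward to evaluate. A minimal sketch for a generic stiffness/lumped-mass pair (illustrative only, not the enriched formulation of the paper):

```python
import numpy as np

def critical_time_step(K, m_lumped):
    """Critical time step of explicit central-difference integration,
    dt_crit = 2 / omega_max, where omega_max^2 is the largest eigenvalue
    of K x = omega^2 M x with a diagonal (lumped) mass matrix M.
    With M diagonal, M^{-1/2} K M^{-1/2} is symmetric, so the standard
    symmetric eigensolver applies."""
    inv_sqrt = 1.0 / np.sqrt(np.asarray(m_lumped, dtype=float))
    A = np.asarray(K, dtype=float) * np.outer(inv_sqrt, inv_sqrt)
    omega_max = np.sqrt(np.max(np.linalg.eigvalsh(A)))
    return 2.0 / omega_max
```

For a uniform bar of three unit elements fixed at both ends (two free nodes, K = [[2, -1], [-1, 2]], unit nodal masses), omega_max = sqrt(3) and dt_crit = 2/sqrt(3). Enrichment enlarges K without adding lumped mass in a matching way, which is why the unmodified critical step can collapse.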
A concept of non-commutative Galois extension is introduced, and binary and ternary extensions are chosen. Non-commutative Galois extensions of the Nonion algebra and of su(3) are constructed. Ternary and binary Clifford analyses are then introduced for non-commutative Galois extensions, and the corresponding Dirac operators are associated with them.
The aim of this study is to show an application of model robustness measures to soil-structure interaction (henceforth SSI) models. Model robustness is a measure of the ability of a model to provide useful answers for input parameters which typically have a wide range in geotechnical engineering. The calculation of SSI is a major problem in geotechnical engineering, and several different models exist for its estimation; these can be separated into analytical, semi-analytical, and numerical methods. This paper focuses on numerical SSI models, specifically macro-element-type models and more advanced finite element models that describe the contact with continuum or interface elements. A brief description of the models used is given in the paper. Following this description, the SSI problem considered is introduced: a statically loaded shallow foundation with an inclined load. The different partial models capturing the SSI effects are assessed with different robustness measures in a numerical application. The paper investigates whether these measures can be used to assess the model quality of SSI partial models. A variance-based and a mathematical robustness approach are applied. These robustness measures are used in a framework that also accommodates computationally expensive models. Finally, the results show that combining robustness approaches with other model-quality indicators (e.g., model sensitivity or model reliability) can lead to a unified model-quality assessment for SSI models.
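One elementary variance-based robustness indicator rates a model as robust when the scatter of its response remains small under scattered inputs. A Monte-Carlo sketch of this idea (an assumed indicator for illustration; the measures used in the paper may be defined differently):

```python
import random
import statistics

def variance_robustness(model, sample_inputs, n=1000, seed=0):
    """Monte-Carlo estimate of a simple variance-based robustness
    indicator R = 1 / (1 + CoV), where CoV is the coefficient of
    variation of the model output under random inputs.  R -> 1 for an
    output insensitive to input scatter, R -> 0 for a sensitive one."""
    rng = random.Random(seed)
    outputs = [model(sample_inputs(rng)) for _ in range(n)]
    mean = statistics.fmean(outputs)
    cov = statistics.pstdev(outputs) / abs(mean)
    return 1.0 / (1.0 + cov)
```

Because only repeated model evaluations are needed, the same scheme works for expensive finite element SSI models, e.g. by running it on a surrogate of the partial model.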
Increasingly powerful hardware and software allow for the numerical simulation of complex physical phenomena with high levels of detail. In light of this development, the definition of numerical models for the Finite Element Method (FEM) has become the bottleneck of the simulation process. Characteristic features of model generation are large manual effort and a decoupling of the geometric and the numerical model. In the highly probable case of design revisions, all steps of model preprocessing and mesh generation have to be repeated. This includes the idealization and approximation of a geometric model as well as the definition of boundary conditions and model parameters. Design variants leading to more resource-efficient structures might hence be disregarded due to limited budgets and constrained time frames.
A potential solution to the above problem is the concept of Isogeometric Analysis (IGA). The core idea of this method is to employ a geometric model directly for numerical simulations, which makes it possible to circumvent model transformations and the accompanying data losses. The basis for this method are geometric models described in terms of Non-uniform rational B-Splines (NURBS). This class of piecewise continuous rational polynomial functions is ubiquitous in computer graphics and Computer-Aided Design (CAD). It allows a wide range of geometries to be described with a compact mathematical representation. The shape of an object results from the interpolation of a set of control points by means of the NURBS functions, providing efficient representations for curves, surfaces, and solid bodies alike. Existing software applications, however, only support the modeling and manipulation of the former two. The description of three-dimensional solid bodies consequently requires significant manual effort, essentially ruling out the setup of complex models.
This thesis proposes a procedural approach for the generation of volumetric NURBS models. That is, a model is not described in terms of its data structures but as a sequence of modeling operations applied to a simple initial shape; in a sense, this describes the "evolution" of the geometric model under that sequence of operations. To adapt this concept to NURBS geometries, only a compact set of commands is necessary, which, in turn, can be adapted from existing algorithms. A model can then be treated in terms of interpretable model parameters. This leads to an abstraction from its data structures, and model variants can be set up by varying the governing parameters.
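The operation-sequence idea can be sketched as a replayable command history (a toy framework with hypothetical operations, not the thesis implementation): a variant is produced by editing a recorded parameter and re-evaluating the history.

```python
class ProceduralModel:
    """Stores a model as a sequence of named (operation, parameters)
    entries; the geometry is obtained by replaying the sequence on a
    simple initial shape, and variants by editing a parameter."""

    def __init__(self, initial_shape):
        self.initial_shape = initial_shape
        self.history = []                       # (name, operation, params)

    def apply(self, name, operation, **params):
        self.history.append((name, operation, params))

    def set_parameter(self, name, key, value):
        for entry_name, _, params in self.history:
            if entry_name == name:
                params[key] = value

    def evaluate(self):
        shape = self.initial_shape
        for _, operation, params in self.history:
            shape = operation(shape, **params)
        return shape
```

With, say, a control-point list as shape and a scale(points, factor) operation, changing the recorded factor and calling evaluate() yields a new model variant without touching any geometric data structures — exactly the abstraction the procedural approach aims for.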
The proposed concept complements existing template modeling approaches: templates can not only be defined in terms of modeling commands but can also serve as input geometry for said operations. Such templates, arranged in a nested hierarchy, provide an elegant model representation. They offer adaptivity on each tier of the model hierarchy and allow complex models to be created from only a few model parameters. This is demonstrated for volumetric fluid domains used in the simulation of vertical-axis wind turbines. Starting from a template representation of airfoil cross-sections, the complete "negative space" around the rotor blades can be described by a small set of model parameters, and model variants can be set up in a fraction of a second.
NURBS models offer high geometric flexibility, allowing a given shape to be represented in different ways. Different model instances can exhibit varying suitability for numerical analyses. For their assessment, Finite Element mesh quality metrics are considered. These metrics are based on purely geometric criteria and allow the identification of model degenerations commonly used to achieve certain geometric features. They can be used to decide on model adaptions and provide a measure of their efficacy. Unfortunately, they do not reveal a relation between mesh distortion and the ill-conditioning of the equation systems resulting from the numerical model.
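A typical purely geometric metric of this kind is the scaled Jacobian; a minimal sketch for a planar quadrilateral (one common metric, not necessarily the exact set used in the thesis):

```python
import math

def scaled_jacobian_quad(corners):
    """Scaled-Jacobian quality of a planar quadrilateral given as four
    corner points in counter-clockwise order: at each corner the 2D
    cross product of the two adjacent edges is divided by the product
    of their lengths, and the minimum over the corners is returned
    (1.0 for rectangles, near 0 or negative for degenerate elements)."""
    quality = 1.0
    for i in range(4):
        px, py = corners[i]
        ax, ay = corners[(i + 1) % 4][0] - px, corners[(i + 1) % 4][1] - py
        bx, by = corners[(i - 1) % 4][0] - px, corners[(i - 1) % 4][1] - py
        cross = ax * by - ay * bx                 # 2D cross product
        quality = min(quality, cross / (math.hypot(ax, ay) * math.hypot(bx, by)))
    return quality
```

Applied per knot span of a NURBS patch, such a metric flags degenerated regions, but, as noted above, it says nothing directly about the conditioning of the resulting equation system.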
The Bernstein polynomials have important applications in many branches of mathematics and the other sciences, for instance approximation theory, probability theory, statistics, number theory, the solution of differential equations, numerical analysis, the construction of Bezier curves, q-calculus, operator theory, and computer graphics. The Bernstein polynomials are used to construct Bezier curves. Bezier was an engineer with the Renault car company who set out in the early 1960s to develop a curve formulation that would lend itself to shape design. Engineers may find it most understandable to think of Bezier curves in terms of the center of mass of a set of point masses. In this paper we therefore study generating functions and functional equations for these polynomials. By applying these functions, we investigate an interpolation function and many properties of these polynomials.
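The center-of-mass reading is direct to code: the Bernstein values B_{k,n}(t) act as the masses attached to the control points. A minimal sketch:

```python
from math import comb

def bernstein(n, k, t):
    """k-th Bernstein basis polynomial of degree n at t in [0, 1]."""
    return comb(n, k) * t ** k * (1.0 - t) ** (n - k)

def bezier_point(control_points, t):
    """Evaluate a Bezier curve as the 'center of mass' of its control
    points, with the Bernstein values B_{k,n}(t) acting as the masses
    (they are non-negative and sum to one)."""
    n = len(control_points) - 1
    x = sum(bernstein(n, k, t) * px for k, (px, _) in enumerate(control_points))
    y = sum(bernstein(n, k, t) * py for k, (_, py) in enumerate(control_points))
    return (x, y)
```

At the parameter ends the curve interpolates the end control points, and at t = 0.5 the quadratic curve through (0,0), (1,2), (2,0) passes through (1,1).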
The concept of isogeometric analysis, in which the functions used to describe the geometry in CAD software also approximate the unknown fields in numerical simulations, has received great attention in recent years. The method has the potential to have a profound impact on engineering design, since the task of meshing, which in some cases adds significant overhead, is circumvented. Much of the research effort has focused on finite element implementations of the isogeometric concept, but little has so far been published on its application to the Boundary Element Method (BEM). The current paper proposes an isogeometric BEM, which we term IGABEM, applied to two-dimensional elastostatic problems using Non-Uniform Rational B-Splines (NURBS). We find it a natural fit with the isogeometric concept, since both the NURBS approximation and the BEM deal with quantities entirely on the boundary. The method is verified against analytical solutions, where it is seen that superior accuracy is achieved over a conventional quadratic isoparametric BEM implementation.
Radio discussion on bauhaus.fm, 5 November 2012.
Harald S. Liehr is an editor and head of the Weimar branch of the Böhlau publishing house (Vienna / Cologne / Weimar); Dr. Frank Simon-Ritz is the director of the university library of the Bauhaus-Universität Weimar.
The questions were asked by René Tauschke and Jean-Marie Schaldach.
In crowdsourcing-based systems, quality assurance of user-generated content is of great importance for maintaining usability. The educational building-physics game "BuildVille" uses a crowdsourcing approach for its quiz application: the quiz questions are generated by the users themselves. This work is intended to ensure that faulty questions, and questions entered by mistake or for fun, are detected as early as possible and are corrected or excluded from further distribution. To this end, existing crowdsourcing-based systems are analyzed with respect to the quality-assurance measures they implement, and a concept for the QA measures in BuildVille is developed.
Modern digital material approaches for the visualization and simulation of heterogeneous materials make it possible to investigate the behavior of complex multiphase materials, with their physically nonlinear material response, at various scales. However, these computational techniques require extensive hardware resources with respect to computing power and main memory to numerically solve large-scale discretized models in 3D. Since the number of degrees of freedom may quickly grow into the tens of millions, the limited hardware resources have to be utilized as efficiently as possible to allow the numerical algorithms to execute in minimal computation time. In the field of computational mechanics, various methods and algorithms can lead to an optimized runtime behavior of nonlinear simulation models; several such approaches are proposed and investigated in this thesis.
Today, the numerical simulation of damage effects in heterogeneous materials is performed by the adaptation of multiscale methods. A consistent modeling in three-dimensional space with an appropriate discretization resolution on each scale (based on a hierarchical or concurrent multiscale model), however, still poses computational challenges with respect to the convergence behavior, the scale transition, or the solver performance of the weakly coupled problems. The computational efficiency and the distribution among available hardware resources (often based on a parallel hardware architecture) can be improved significantly. In the past years, high-performance computing (HPC) and graphics processing unit (GPU) based computation techniques were established for the investigation of scientific objectives. Their application results in the modification of existing computational methods and the development of new ones for the numerical implementation, which makes it possible to take advantage of massively clustered computer hardware resources. In the field of numerical simulation in materials science, e.g. the investigation of damage effects in multiphase composites, the suitability of such models is often restricted by the number of degrees of freedom (d.o.f.s) in the three-dimensional spatial discretization. This proves difficult for the type of implementation method used for the nonlinear simulation procedure and, simultaneously, has a great influence on memory demand and computational time.
In this thesis, a hybrid discretization technique has been developed for the three-dimensional discretization of a three-phase material which respects the numerical efficiency of nonlinear (damage) simulations of these materials. The increase in computational efficiency is enabled by the improved scalability of the numerical algorithms. Consequently, substructuring methods for partitioning the hybrid mesh were implemented, tested, and adapted to the HPC computing framework, using several hundred CPU (central processing unit) nodes for building the finite element assembly. A memory-efficient, iterative, and parallelized equation solver, combined with a special preconditioning technique for solving the underlying equation system, was modified and adapted to enable combined CPU- and GPU-based computations.
Hence, the author recommends applying the substructuring method to hybrid meshes; it respects the different material phases and their mechanical behavior and makes it possible to split the structure into elastic and inelastic parts. The consideration of the nonlinear material behavior, specified for the corresponding phase, is then limited to the inelastic domains only, which reduces the computing time of the nonlinear procedure. Due to the high numerical effort of such simulations, an alternative approach to nonlinear finite element analysis, based on sequentially linear analysis, was implemented with respect to scalable HPC. The incremental-iterative procedure of the nonlinear step in finite element analysis (FEA) was replaced by a sequence of linear FE analyses performed whenever damage occurred in critical regions, known in the literature as the saw-tooth approach. As a result, qualitative (smeared) crack initiation in 3D multiphase specimens has been simulated efficiently.
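The saw-tooth idea can be illustrated with a toy model of parallel brittle springs sharing one displacement, where each damage "event" triggers one new linear analysis (the thesis applies gradual stiffness reductions to 3D finite element models, not this one-step removal):

```python
def sequentially_linear(k, strengths):
    """Toy sequentially linear ('saw-tooth') analysis: parallel brittle
    springs under a shared displacement.  Each linear analysis is pushed
    until one spring reaches its strength; that spring is then removed
    (stiffness set to zero) and the next linear analysis starts.
    Returns the peak total force of every linear step."""
    k = list(k)
    f = list(strengths)
    peaks = []
    alive = [i for i in range(len(k)) if k[i] > 0.0]
    while alive:
        i = min(alive, key=lambda j: f[j] / k[j])   # first spring to fail
        u = f[i] / k[i]                              # failure displacement
        peaks.append(u * sum(k[j] for j in alive))   # total force at failure
        k[i] = 0.0                                   # brittle 'crack'
        alive.remove(i)
    return peaks
```

Every iteration of the loop corresponds to one well-conditioned linear solve, which is what makes the approach attractive for scalable HPC: no Newton iterations, only repeated assembly and linear solution with locally reduced stiffness.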
6 Summary and Outlook
The hydrothermally induced phase transformation of ATZ ceramics could be characterized and quantified with depth-averaged and depth-resolved methods.
The time- and temperature-dependent aging kinetics of ATZ were investigated at nine temperature levels in a range from 50 °C to 134 °C, and the kinetic parameters were determined numerically. For 3Y-TZP this procedure was applied at three temperatures in the range from 70 °C to 134 °C. Owing to the Arrhenius behavior of the transformation kinetics, the time course of the isothermal, hydrothermally induced phase transformation at body temperature could be simulated. The simulation serves to assess the long-term stability of medical implants made of ATZ or 3Y-TZP. The investigations were carried out in water and in water vapor or water-vapor-saturated air. The long-term simulation for 3Y-TZP was verified with investigations of explants.
Compared with 3Y-TZP, ATZ shows a higher aging stability with respect to the temporal evolution of the monoclinic phase. With regard to the surface hardness, which is strongly affected by the phase transformation, ATZ proves more stable than 3Y-TZP over a long aging period. Up to a monoclinic content of 40 %, ATZ shows a clear hardness advantage over 3Y-TZP; in the long-term simulation for water storage this corresponds to about 35 years. This is particularly beneficial for wear couples such as the artificial hip joint.
Wear investigations with a newly developed ball-on-disc geometry with linear kinematics, modeled on the hip joint, demonstrate the favorable wear properties of ATZ in the form of very low removal rates and an intact surface after 720,000 completed cycles. A hardening of the surface by up to 8 % due to the wear loading was even detected.
In the depth-averaged characterization of hydrothermal aging it was found for both material types that, besides temperature, the rate of the phase transformation depends markedly on the change of the H2O molar concentration at the ceramic surface, as evidenced by the different activation energies for water and water-vapor storage. The activation energy Ea of the hydrothermal phase transformation was determined using the Arrhenius relation; for ATZ it is 102 kJ/mol for water-vapor storage and 92 kJ/mol for water storage. For Y-TZP the activation energy is 114 kJ/mol for water-vapor storage and 102 kJ/mol for water storage. The resulting pre-exponential factor k0 differs by one order of magnitude between water and water-vapor storage, which points to a slightly different thermally activated overall process.
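With the activation energies quoted above, the Arrhenius relation lets the accelerated tests be extrapolated to body temperature; a minimal sketch (the 92 kJ/mol value is the ATZ water-storage figure from the text; the pre-exponential factor k0 cancels in the ratio):

```python
import math

R = 8.314  # universal gas constant in J/(mol*K)

def arrhenius_rate(T_celsius, Ea, k0=1.0):
    """Arrhenius relation k = k0 * exp(-Ea / (R*T)), Ea in J/mol."""
    return k0 * math.exp(-Ea / (R * (T_celsius + 273.15)))

def deceleration_factor(Ea, T_test=134.0, T_body=37.0):
    """Factor by which the transformation slows down when going from an
    accelerated test at T_test to body temperature; k0 cancels out."""
    return arrhenius_rate(T_test, Ea) / arrhenius_rate(T_body, Ea)
```

For Ea = 92 kJ/mol, the transformation at 37 °C runs three to four orders of magnitude slower than at 134 °C, which is what makes simulated decades of in-vivo aging accessible from accelerated tests lasting only hours or days.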
The Avrami exponent n, which can provide an indication of the nucleation mechanism and its geometric order, showed no significant dependence on temperature or on the surrounding medium. It is, however, time-dependent and falls with increasing aging time, i.e. with increasing monoclinic content, from about 4 to 0.5, which points to a decreasing nucleation rate. In combination with further investigations by independent, partly depth-resolving methods such as GIXRD, NRA and Knoop microhardness measurements, the aging mechanism, and its temporal and spatial course, can be described by the three stages A, B and C:
A 0-5 wt.% m-ZrO2: quasi-homogeneous nucleation at preferred sites such as grain edges and grain corners (n ≈ 4), water transport probably via grain-boundary diffusion, hardening of the surface
B 5-40 wt.% m-ZrO2: nucleation at the grain-boundary faces up to nucleation saturation (n ≈ 2), the monoclinic surface layer grows linearly in time, convective water transport through microcracks, distinct loss of surface hardness
C ≥ 40 wt.% m-ZrO2: growth of the monoclinic crystallites from the grain-boundary faces into the tetragonal crystallites under heavy twinning (n ≈ 0.5), decrease of the tetragonal crystallite size, severe microcracking, dramatic drop in surface hardness
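The kinetics summarized by the Avrami exponent follow the Johnson-Mehl-Avrami (JMAK) form X(t) = X_sat * (1 - exp(-(k*t)^n)); below is a minimal sketch with illustrative, non-fitted parameters (only the saturation content of 75 % is taken from the text):

```python
import math

def jmak(t, k, n, x_sat=0.75):
    """Transformed monoclinic fraction after aging time t
    (JMAK/Avrami kinetics). x_sat = 0.75 is the saturation
    content of the monoclinic phase reported in the text."""
    return x_sat * (1.0 - math.exp(-(k * t) ** n))

def local_exponent(t, k, n, dt=1e-6):
    """Local Avrami exponent: slope of ln(-ln(1 - X/x_sat))
    versus ln(t), recovered numerically."""
    y = lambda tt: math.log(-math.log(1.0 - jmak(tt, k, n) / 0.75))
    return (y(t + dt) - y(t)) / (math.log(t + dt) - math.log(t))
```

For a single-mechanism JMAK process the local exponent is constant; the observed drop from about 4 to 0.5 is what signals the change of mechanism across stages A to C.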
The crystallite size of the monoclinic phase in ATZ remains at 30 ± 5 nm over all three stages. Growth of the crystallites is mechanically hindered; smaller monoclinic crystallites are thermodynamically unstable in ATZ. The crystallite size of the tetragonal phase decreases very slowly in stages A and B and very rapidly in stage C, down to 25 nm. At this crystallite size the tetragonal phase is thermodynamically stable with respect to the monoclinic phase. After the reaction has run to completion, these residual tetragonal crystallites account for a fraction of 7 wt.%. The saturation content of the monoclinic phase in both materials was 75 % of the ZrO2 phase, independent of temperature and surrounding medium.
In stage C the residual tetragonal phase is strongly oriented, which highlights the geometric constraints of the hydrothermally induced phase transformation. The monoclinic phase is strongly oriented towards m(1 1 1) throughout the aging process, which is associated with a preferred flipping direction of the c-axis towards the free surface.
Depth-resolved phase analysis made it possible to study the growth of the monoclinic surface layer from the surface into the bulk. In stage B the layer growth rate is independent of time and depth; it is constant, with pronounced Arrhenius behavior (temperature dependence). The activation energy of the layer growth rate km is of the same order of magnitude as that of the transformation constant k.
The transformation zone thus advances into the bulk at constant speed, leaving behind a branched micro- and nanocrack system. FESEM images confirm the presence of a porous surface layer through which water can penetrate almost unhindered.
NRA investigations point to grain-boundary diffusion in stage A and confirm convective transport of water to the transformation zone in stage B. Diffusion via oxygen vacancies in the lattice could not be demonstrated on 8YSZ samples. In the branched crack and pore system of the aged surface layer, however, water is transported back to the surface as soon as the samples are removed from the hydrothermal atmosphere, stored in air, or transferred into the high-vacuum chamber of the NRA apparatus.
Microstructural investigations on specially developed wear couples showed, after 720,000 cycles, surface properties similar to those of aging stage A. It can therefore be assumed that, for reasons of stability, stages B and C cannot exist in the tribological contact zone, and that simultaneous hydrothermal and tribological loading constitutes a stationary aging and wear process. Quasi-plastic deformation of the monoclinic and tetragonal crystallites markedly reduces the wear rate and the wear debris of a hard/hard ATZ couple, so that ATZ is a thoroughly suitable material for hip endoprosthetics which, on the basis of the data obtained in this work, can remain stable over an implantation period of more than 15 years.
BAUHAUS ISOMETRY AND FIELDS
(2012)
While integration increases through networking, segregation advances too. Most of us fix our minds on special topics, yet we also rely on our intuition. We sometimes wait for an inflow of new ideas or valuable information that we hold in high esteem, although we are not entirely conscious of its origin. We may even say that the most precious intuitions are rooted in deep, subconscious, collective layers of the mind. Take as a simple example the emergence of orientation in paleolithic events and its relation to the dihedral symmetry of the compass. Consider also the extension of this algebraic matter into the operational structures of the mind on the one hand, and into the algebra of geometry, Clifford algebra as we call it today, on the other. Culture and mind, and even the individual act of creation, may be connected with transient events that are subconscious and inaccessible to cognition in principle. Other events causative for our work may be merely invisible to us, though in principle they should prove attainable. In that case we are simply ignorant of the whole creative process. Sometimes we begin to use unusual tools or turn into handicraft enthusiasts; then our small institutes turn into workshops and factories. All this is indeed joining with the Bauhaus and its spirit. We shall go into this together, and we shall present a record of this session.
The topic of structural robustness is covered extensively in the current structural engineering literature, and a few evaluation methods already exist. Since these methods are based on different evaluation approaches, comparison is difficult. All the approaches have one thing in common, however: they need a structural model that represents the structure to be evaluated. As the structural model is the basis of the robustness evaluation, the question arises whether the quality of the chosen structural model influences the estimated structural robustness index. This paper shows what robustness means in structural engineering and gives an overview of existing assessment methods. One is the reliability-based robustness index, which uses the reliability indices of an intact and a damaged structure. The second is the risk-based robustness index, which estimates structural robustness from direct and indirect risk. The paper describes how these approaches to the evaluation of structural robustness work and which parameters are used. Since both approaches need a structural model for the estimation of the structural behavior and the probability of failure, the quality of the chosen structural model must be considered: the model has to represent the structure and the input factors, and it must reflect the damages that occur. Using the example of two different model qualities, it is shown that the model choice strongly influences the quality of the robustness index.
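The two indices can be sketched as follows (a minimal illustration; the formulas follow common definitions from the robustness literature and are not necessarily the exact expressions used in the paper):

```python
def reliability_based_index(beta_intact, beta_damaged):
    """Redundancy-type robustness index built from reliability indices.
    A common form: beta_intact / (beta_intact - beta_damaged);
    requires beta_damaged < beta_intact, larger values = more robust."""
    return beta_intact / (beta_intact - beta_damaged)

def risk_based_index(direct_risk, indirect_risk):
    """Risk-based robustness index: fraction of total risk that is direct.
    1.0 = fully robust (no indirect consequences); values near 0
    indicate that damage triggers disproportionate follow-up risk."""
    return direct_risk / (direct_risk + indirect_risk)
```

Both indices take the structural model's outputs (reliability indices or risks) as inputs, which is exactly why the model quality propagates into the robustness estimate.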
Texts from the web can be reused individually or in large quantities. The former is called text reuse and the latter language reuse. We first present a comprehensive overview of the different ways in which text and language are reused today, and how exactly information retrieval technologies can be applied in this respect. The remainder of the thesis then deals with specific retrieval tasks. In general, our contributions consist of models and algorithms, their evaluation, and, for that purpose, large-scale corpus construction.
The thesis divides into two parts. The first part introduces technologies for text reuse detection, and our contributions are as follows: (1) A unified view of projecting-based and embedding-based fingerprinting for near-duplicate detection, and the first evaluation of fingerprint algorithms on Wikipedia revision histories as a new, large-scale corpus of near-duplicates. (2) A new retrieval model for the quantification of cross-language text similarity, which gets by without parallel corpora. We have evaluated the model against other models on many different pairs of languages. (3) An evaluation framework for text reuse and particularly plagiarism detectors, which consists of tailored detection performance measures and a large-scale corpus of automatically generated and manually written plagiarism cases, the latter obtained via crowdsourcing. This framework has been successfully applied to evaluate many different state-of-the-art plagiarism detection approaches within three international evaluation competitions.
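A fingerprinting scheme of the projecting kind can be sketched as follows (a generic shingling illustration, not the specific algorithms evaluated in the thesis; parameter choices are assumptions):

```python
import hashlib

def fingerprint(text, k=3, n=8):
    """Shingling-style fingerprint: hash every k-word shingle and keep
    the n smallest hashes (a min-wise selection heuristic). Two texts
    are near-duplicate candidates if their fingerprints overlap."""
    words = text.lower().split()
    shingles = {' '.join(words[i:i + k]) for i in range(len(words) - k + 1)}
    hashes = sorted(int(hashlib.md5(s.encode()).hexdigest(), 16)
                    for s in shingles)
    return set(hashes[:n])

def resemblance(fp_a, fp_b):
    """Jaccard overlap of two fingerprints, in [0, 1]."""
    return len(fp_a & fp_b) / len(fp_a | fp_b)
```

Because the fingerprint is a small fixed-size set, pairwise comparison over a large corpus (such as revision histories) stays cheap.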
The second part introduces technologies that solve three retrieval tasks based on language reuse, and our contributions are as follows: (4) A new model for the comparison of textual and non-textual web items across media, which exploits web comments as a source of information about the topic of an item. In this connection, we identify web comments as a largely neglected information source and introduce the rationale of comment retrieval. (5) Two new algorithms for query segmentation, which exploit web n-grams and Wikipedia as a means of discerning the user intent of a keyword query. Moreover, we crowdsource a new corpus for the evaluation of query segmentation which surpasses existing corpora by two orders of magnitude. (6) A new writing assistance tool called Netspeak, which is a search engine for commonly used language. Netspeak indexes the web in the form of web n-grams as a source of writing examples and implements a wildcard query processor on top of it.
Thermochemical heat storage via reversible salt hydration is a promising route for storing low-temperature heat such as solar energy.
Investigations on magnesium sulfate hydrates show that magnesium sulfate heptahydrate dehydrated at 130 °C does not regain its thermodynamically stable final state during the reaction with gaseous water. To overcome this kinetic inhibition and to characterize the influence of different pore spaces on the hydration and sorption of the magnesium sulfate, the salt was introduced into carrier materials based on open-pore glasses with average pore diameters of 4 nm to 1.4 µm, and these composite materials were investigated. It was found that every salt-related heat of sorption in the pore space is higher than that of the unsupported salt and increases further with decreasing pore radius.
Furthermore, parts of the magnesium sulfate were substituted with salts of low deliquescence humidity in order to increase the water uptake and thus the heat storage capacity. This constitutes a new route for producing composite materials by which properties such as deliquescence humidity and desorption temperature can be adjusted and adapted to the sorption conditions of a storage unit. Magnesium chloride and lithium chloride were investigated as low-deliquescence additives, and an increase in the heat of sorption and the water uptake with increasing chloride fraction was found. Owing to the lower deliquescence humidity of lithium chloride compared with magnesium chloride, higher heats of sorption were achieved at equal mass ratios. Investigations of zinc sulfate in combination with chlorides attest to this salt's good suitability as an active material for heat storage, especially at lower dehydration temperatures.
In summary, the heat storage capacities can be controlled via the pore size into which the salt is introduced and via the chosen mixture composition. The measured heats of sorption, particularly at low sorption temperatures and high humidities, lead to the conclusion that the use of salt mixtures as the active component in composite materials is a suitable route for the thermochemical storage of solar heat (≤ 130 °C).
Metakaolin made from kaolin is used around the world but rarely in Vietnam, where abundant deposits of kaolin are found. The first studies on producing metakaolin were conducted with high-quality Vietnamese kaolins. The results showed the potential to produce metakaolin and the effect it has on the strength development of mortars and concretes. However, the utilisation of low-quality kaolin for producing Vietnamese metakaolin has not been studied so far.
The objectives of this study were to produce a good quality metakaolin made from low quality Vietnamese kaolin and to facilitate the utilisation of Vietnamese metakaolin in composite cements.
In order to reach these goals, the optimal thermal conversion of Vietnamese kaolin into metakaolin was determined through numerous investigations, using the analysis results of DSC/TGA, XRD and CSI. During calcination in the range of 500-800 °C for 1-5 hours, the calcined kaolin was also monitored for mass loss, BET surface, PSD and density, as well as for the presence of residual water. A good correlation was found between residual water and BET surface.
The pozzolanic activity of the metakaolin was tested by various methods, namely the saturated lime method, the mCh method and the TGA-CaO method. The results of the study showed which method is the most suitable for characterising the real activity of metakaolin and reaches the best agreement with concrete performance. Furthermore, the pozzolanic activity results obtained with these methods were analysed and compared with each other with respect to the BET surface.
The properties of Vietnamese metakaolin were established through investigations of water demand, setting time, spread-flowability and strength. It is concluded that, depending on the intended use of the composite cement and the curing conditions, each Vietnamese metakaolin can be used appropriately to produce (1) a composite cement with a low water demand, (2) a high-strength composite cement, (3) a composite cement that aims to reduce CO2 emissions and to improve the economics of cement products, or (4) a high-performance mortar.
The durability of metakaolin mortar was tested successfully to find the metakaolin content needed to resist ASR, sulfate and sulfuric acid attacks.
Methods for model quality assessment aim to find the most appropriate model, with respect to accuracy and computational effort, for the structural system under investigation. Model error estimation techniques can be applied for this purpose when kinematical models are investigated. These are counted among the class of white-box models, meaning that the model hierarchy, and therewith the best model, is known. This thesis gives an overview of discretisation error estimators. Deduced from these, methods for model error estimation are presented. Their general goal is to predict the inaccuracies introduced by using the simpler model without knowing the solution of a more complex model. This information can be used to steer an adaptive process. Techniques for linear and non-linear problems as well as for global and goal-oriented errors are introduced. The estimation of the error in local quantities is realised by solving a dual problem, which serves as a weight for the primal error. So far, such techniques have mainly been applied in material modelling and for dimensional adaptivity. Within the scope of this thesis, available model error estimators are adapted for application to kinematical models. Their applicability is tested regarding the question of whether a geometrically non-linear calculation is necessary or not. The analysis is limited to non-linear estimators due to the structure of the underlying differential equations. These methods often involve simplifications, e.g. linearisations. It is investigated to what extent such assumptions lead to meaningful results when applied to kinematical models.
This paper presents a novel numerical procedure for computing limit and shakedown loads of structures using a node-based smoothed FEM in combination with a primal–dual algorithm. An associated primal–dual form based on the von Mises yield criterion is adopted. The primal-dual algorithm together with a Newton-like iteration are then used to solve this associated primal–dual form to determine simultaneously both approximate upper and quasi-lower bounds of the plastic collapse limit and the shakedown limit. The present formulation uses only linear approximations and its implementation into finite element programs is quite simple. Several numerical examples are given to show the reliability, accuracy, and generality of the present formulation compared with other available methods.
We present an extended finite element formulation for the dynamic fracture of piezoelectric materials. The method is developed in the context of linear elastic fracture mechanics and is applied to mode-I and mixed-mode fracture of quasi-steady cracks. An implicit time integration scheme is exploited. The results are compared with results obtained with the boundary element method and show excellent agreement.
Monogenic functions play a role in quaternion analysis similar to that of holomorphic functions in complex analysis. A holomorphic function with non-vanishing complex derivative is a conformal mapping. It is well known that in R^(n+1), n ≥ 2, the set of conformal mappings is restricted to the Möbius transformations only and that the Möbius transformations are not monogenic. The paper deals with a locally geometric mapping property of a subset of monogenic functions with non-vanishing hypercomplex derivatives (named M-conformal mappings). It is proved that M-conformal mappings orthogonal to all monogenic constants admit a certain change of solid angles and, vice versa, that this change can characterize such mappings. In addition, we determine planes in which those mappings behave like conformal mappings in the complex plane.
Architectural design is a creative process that produces a solution which did not previously exist in this form and functionality. The result of an architectural design is an original whose creation requires a creative component. This creative process cannot be systematised, nor can it be made repeatable as a method. Within architectural teaching, however, conveying methods for design development is an essential aspect. The design presented here aims to show that the view of using only intuitive methods as a design basis is opposed by the view of applying a regulated method for design and form finding.
Such a regulated method is here referred to as a design grammar.
In the 1950s the composer and architect Iannis Xenakis created two revolutionary works: the composition Metastaseis and the Philips Pavilion for the World's Fair in Brussels. Based on these works, a method is presented that transforms musical parameters into architectural parameters.
This method forms the basis of an exact spatial transformation model derived from mathematical functions. The transformation model shows a strong resemblance to the architecture of the pavilion.
A numerical analysis of the mode of deformation of the main load-bearing components of a typical frame sloping shaft headgear was performed. The analysis used a design model consisting of plane and solid finite elements, modelled in the program «LIRA». From the numerical results, the regularities of the local stress distribution under a guide pulley bearing were revealed, and the parameters of the plane stress state under both emergency and normal working loads were determined. Based on the numerical simulation, guidelines were established to improve the construction of the joints of guide pulleys resting on sub-pulley frame-type structures. Overall, the results obtained are a basis for improving the engineering procedures for designing steel structures of shaft sloping headgear.
The particular aggressiveness of highly concentrated magnesium sulfate solutions acting on concrete has been known for many decades. In addition to the sulfate, the magnesium also attacks the hardened cement paste. At high solution concentrations the magnesium attack even dominates over the sulfate attack. Magnesium contents below 300 mg/l in groundwater, however, have so far been regarded as non-aggressive. In field-exposure and laboratory tests it was nevertheless found that, even at practice-relevant magnesium (<300 mg/l) and sulfate contents (1,500 mg/l), the magnesium led to a marked intensification of the sulfate attack at low temperatures. This intensification occurred in mortars and concretes in which an increased sulfate resistance was to be achieved by partially replacing cement with 20 % fly ash added to a CEM II/A-LL, in accordance with the fly ash provision of EN 206-1/DIN 1045-2.
With a partial cement replacement by 30 % fly ash, a distinct improvement of the sulfate resistance was achieved even in magnesium-containing sulfate solutions. Mortars with HS cement as the binder showed no damage at all. The damage was caused by a combination of several influences. On the one hand, the sulfate resistance of the cement/fly-ash system was weakened by the insufficient reaction of the fly ash due to the low storage temperature. On the other hand, the action of the magnesium in the surface zone presumably destabilised the C-S-H phases, which promoted thaumasite formation there. In addition, the consumption of portlandite and the drop in pH in the surface zone hindered the pozzolanic reaction of the fly ash.
The curing of a pavement concrete is of particular importance for achieving a high freeze-thaw and de-icing salt resistance of the finished pavement. In exposed-aggregate concrete construction, curing takes place in several steps. A first curing step protects the concrete against evaporation until the retarded surface mortar is brushed off. This is followed by the second curing step, usually the spraying-on of a liquid curing compound.
The second curing step is decisive for the freeze-thaw and de-icing salt resistance of the pavement. Within a research project it was therefore investigated to what extent optimising the second curing step can increase the freeze-thaw and de-icing salt resistance of exposed-aggregate concrete surfaces, particularly when cements containing ground granulated blast-furnace slag are used. Even a single wet-curing treatment produced a markedly higher resistance of the exposed-aggregate concrete to freeze-thaw attack with de-icing salt.
MODEL DESCRIBING STATIC AND DYNAMIC DISPLACEMENTS OF SILO WALLS DURING THE FLOW OF LOOSE MATERIAL
(2012)
Correct evaluation of wall displacements is a key matter when designing silos. The issue is important from the standpoints of both the design engineer (load-bearing capacity of structures) and the end user (durability of structures). Commonplace methods of silo design mainly focus on satisfying limit states of load-bearing capacity, and current standards fail to specify methods of dynamic displacement analysis. Measurements of the stress acting on silo walls prove that the actual stress is the sum of static and dynamic stresses. Janssen derived a differential equation describing the state of static equilibrium in the cross-section of a silo; by solving this equation, the static pressure of the granular solid on the silo walls can be determined. Equations of motion were determined from the equilibrium equations of the featured objects. The general solution describing the dynamic stresses was presented as a parametric model. This paper presents particular integrals of the differential equation, which enable analysing displacements and vibrations for different rigidities of the silo walls and for different types of granular solid and flow rates.
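The static part of the pressure, i.e. Janssen's solution, can be sketched as follows (a minimal illustration with assumed material parameters; symbol names are hypothetical):

```python
import math

def janssen_wall_pressure(z, gamma, mu, K, R_h):
    """Horizontal wall pressure at depth z from Janssen's static
    equilibrium equation.
    gamma: bulk unit weight [kN/m^3], mu: wall friction coefficient,
    K: lateral pressure ratio, R_h: hydraulic radius A/U [m]."""
    sigma_v = gamma * R_h / (mu * K) * (1.0 - math.exp(-mu * K * z / R_h))
    return K * sigma_v  # horizontal pressure on the wall [kPa]

# for deep silos the pressure saturates at gamma * R_h / mu,
# in contrast to the linear hydrostatic increase of a liquid
```

The dynamic stresses measured during discharge are then superimposed on this static distribution, which is what the parametric model in the paper addresses.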
In this paper, experimental studies and numerical analyses carried out on reinforced concrete beams are partially reported. They aimed to apply the rigid finite element method to calculations for reinforced concrete beams using a discrete crack model. Hence the rotational ductility resulting from crack occurrence had to be determined, and a relationship for calculating it in static equilibrium was proposed. Laboratory experiments proved that dynamic ductility is considerably smaller; therefore the empirical parameter was rescaled. Consequently, a formula for its value depending on the reinforcement ratio was obtained.
DISCRETE CRACK MODEL OF BORCZ FOR CALCULATING THE DEFLECTIONS OF BENDING REINFORCED CONCRETE BEAM
(2012)
In the design of reinforced concrete beams loaded by a bending moment, it is assumed that the structure can be used at a level of load at which local discontinuities, i.e. cracks, are present. Designing the element demands checking two limit states of the construction: load capacity and usability. The serviceability limit states also include the deflection of the element. Deflections of reinforced concrete beams with cracks are based on the actual rigidity of the element. After cracking there is a local change in the rigidity of the beam. The rigidity varies along the element's length, and due to the heterogeneous structure of concrete it is not possible to describe these changes exactly. Most standard methods tend to simplify the calculations and take the average value of the beam's rigidity over its entire length; the rigidity then depends on the level of the maximum load of the beam. Experimental research verifies this value by inserting coefficients into the formulas used in the theory of elasticity and describes the changes in rigidity along the beam's length more precisely. The authors either take the change of rigidity into consideration depending on the level of maximum load (continuum models) or localize the changes in rigidity in the area of the cracks (discrete models). This paper presents one of the discrete models. It is distinguished by the fact that the left side of the differential equation, which depends on the rigidity, is constant, while all effects associated with the cracks are taken as an external load and placed on the right side of the equation. This allows the description to be generalized. The paper presents a particular integral of the differential equation, which allows analyzing the displacements and vibrations for different rigidities of the element.
This paper presents a methodology for uncertainty quantification in cyclic creep analysis. Several models, namely the BP model, the Whaley and Neville model, the MC90 modified for cyclic loading, and the Hyperbolic function modified for cyclic loading, are used for uncertainty quantification. Three types of uncertainty are included in the Uncertainty Quantification (UQ): (i) natural variability in loading and material properties; (ii) data uncertainty due to measurement errors; and (iii) modelling uncertainty and errors during cyclic creep analysis. Because all types of uncertainty are considered, a measure of the total variation of the model response is achieved. The study finds that the BP, modified Hyperbolic and modified MC90 are the best performing models for cyclic creep prediction, in that order. Further, global Sensitivity Analysis (SA), considering both uncorrelated and correlated parameters, is used to quantify the contribution of each source of uncertainty to the overall prediction uncertainty and to identify the important parameters. Errors in determining the input quantities and in the model itself can produce significant changes in predicted creep values. The influence of the variability of the input random quantities on the cyclic creep was studied by means of stochastic uncertainty and sensitivity analysis, namely the Gartner et al. method and the Saltelli et al. method. All input imperfections were considered to be random quantities. The Latin Hypercube Sampling (LHS) numerical simulation method (a Monte Carlo type method) was used. The stochastic sensitivity analysis found that the cyclic creep deformation variability is most sensitive to the elastic modulus of concrete, compressive strength, mean stress, cyclic stress amplitude, and number of cycles, in that order.
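Latin Hypercube Sampling as used here can be sketched in a few lines (a generic illustration on the unit hypercube, not the authors' implementation):

```python
import random

def latin_hypercube(n_samples, n_vars, rng=random.Random(42)):
    """Latin Hypercube Sampling on [0,1)^n_vars: each variable's range
    is split into n_samples equal strata, exactly one sample falls in
    each stratum, and the strata are randomly permuted per variable."""
    samples = [[0.0] * n_vars for _ in range(n_samples)]
    for j in range(n_vars):
        strata = list(range(n_samples))
        rng.shuffle(strata)  # random pairing of strata across variables
        for i in range(n_samples):
            samples[i][j] = (strata[i] + rng.random()) / n_samples
    return samples
```

Compared with plain Monte Carlo, the stratification guarantees coverage of each input's full range with far fewer model evaluations, which matters when each sample requires a creep analysis.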
In this paper we review two distinct complete orthogonal systems of monogenic polynomials over 3D prolate spheroids. The underlying functions take values either in the reduced or the full quaternions (identified with R^3 and R^4, respectively) and are generally assumed to be null-solutions of the well-known Riesz and Moisil-Théodoresco systems in R^3. This is done in the spaces of square-integrable functions over R and H. The representations of these polynomials are given explicitly. Additionally, we show that these polynomial functions play an important role in defining the Szegö kernel function over the surface of 3D spheroids. As a concrete application, we derive the explicit expression of the monogenic Szegö kernel function over 3D prolate spheroids.
Due to the complex interactions between the ground, the driving machine, the lining tube and the built environment, the accurate assignment of in-situ system parameters for numerical simulation in mechanized tunneling is always subject to tremendous difficulties. However, the more accurate these parameters are, the more applicable the computed responses will be. In particular, if the entire length of the tunnel lining is examined, the appropriate selection of the various ground parameters is decisive for the success of a tunnel project and, more importantly, will prevent potential casualties. In this context, methods of system identification for the adaptation of numerically simulated ground models are presented. Both deterministic and probabilistic approaches are considered for typical scenarios representing notable variations or changes in the ground model.
Civil engineers take advantage of models to design reliable structures. In order to fulfill the design goal with a certain amount of confidence, the utilized models should be able to predict the probable structural behavior under the expected loading schemes. A major challenge is therefore to find models that provide less uncertain and more robust responses. The problem becomes twofold when the model under study is a global model comprised of different interacting partial models. This study aims at the model quality evaluation of global models, with a focus on frame-wall systems as the case study. The paper presents the results of the first step taken toward accomplishing this goal. To start the model quality evaluation of the global frame-wall system, the main element (i.e. the wall) was studied through nonlinear static and dynamic analysis using two different modeling approaches: the fiber section model and the Multiple-Vertical-Line-Element Model (MVLEM). The influence of the wall aspect ratio (H/L) and the axial load on the response of the models was studied. The results from the nonlinear static and dynamic analyses of both models are presented and compared. The models produced quite different responses in the range of low-aspect-ratio walls under large axial loads, due to the different contribution of shear deformations to the top displacement. In the studied cases, the results imply that careful attention should be paid to the quality evaluation of wall models, especially when wall models whose response is sensitive to shear deformations are to be coupled to other partial models such as a moment frame or a soil-footing substructure. In this case, even a high-quality wall model would not result in a high-quality coupled system, since it fails to interact properly with the rest of the system.
Experimental investigation of a method by Brehm (2011) for the optimal positioning of reference sensors in experimental modal analysis with output-only methods. Investigation of the influence of the reference sensor positioning and number, and of the positioning of the roving sensors, using the Stochastic Subspace method for the evaluation of the output-only measurement data.