To fulfil safety requirements, changes in the static and/or dynamic behaviour of a structure must be analysed with great care. These changes are often caused by a local reduction of stiffness due to irregularities in the structure, such as cracks. For simple structures such an analysis can be performed directly by solving the equations of motion, but for more complex structures a different, usually numerical, approach must be applied. The problem of incorporating a crack into the model of structural behaviour has been studied by many authors, who have usually modelled the crack as a massless rotational spring of suitable stiffness placed on the beam at the location where the crack occurs. Recently, the numerical procedure for computing the stiffness matrix of a beam element with a single transverse crack has been replaced by an element stiffness matrix written in fully symbolic form. A detailed comparison of the results obtained using 200 2D finite elements with those obtained with a single cracked beam element has confirmed the usefulness of such an element.
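The rotational-spring idealisation described above can be sketched in a few lines. The following is a minimal illustration, not the paper's implementation: a cantilever of two Euler-Bernoulli beam elements with a massless rotational spring of stiffness k_r at midspan, checked against the closed-form tip deflection. All numerical values are hypothetical unit quantities.

```python
import numpy as np

def beam_k(EI, L):
    """Euler-Bernoulli beam element stiffness, DOFs [v1, th1, v2, th2]."""
    c = EI / L**3
    return c * np.array([[ 12,    6*L, -12,    6*L],
                         [6*L,  4*L*L, -6*L, 2*L*L],
                         [-12,   -6*L,  12,   -6*L],
                         [6*L,  2*L*L, -6*L, 4*L*L]])

EI, L, P, k_r = 1.0, 1.0, 1.0, 10.0   # hypothetical unit values

# Cantilever of two elements; the crack at midspan is a massless rotational
# spring k_r joining two independent rotations th1a and th1b that share the
# same translation v1.
# DOF numbering: 0 v0, 1 th0 (clamped), 2 v1, 3 th1a, 4 th1b, 5 v2, 6 th2
K = np.zeros((7, 7))
ke = beam_k(EI, L)
for dofs in ([0, 1, 2, 3], [2, 4, 5, 6]):
    K[np.ix_(dofs, dofs)] += ke
K[np.ix_([3, 4], [3, 4])] += k_r * np.array([[1, -1], [-1, 1]])

free = [2, 3, 4, 5, 6]                 # clamped DOFs 0, 1 removed
f = np.zeros(7); f[5] = P              # transverse tip load
u = np.linalg.solve(K[np.ix_(free, free)], f[free])
tip = u[free.index(5)]

# Closed form: intact cantilever deflection plus the crack rotation M/k_r
# (moment at the crack is P*L) times the lever arm L to the tip.
exact = P*(2*L)**3/(3*EI) + (P*L)*L/k_r
print(tip, exact)
```

Because Hermite-cubic beam elements are nodally exact for point loads, the two values agree to machine precision, which makes the spring model easy to verify.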
As computer programs become ever more complex, software development has shifted from focusing on programming towards focusing on integration. This paper describes a simulation access language (SimAL) that can be used to access and compose software applications over the Internet. Specifically, the framework is developed for the integration of tools for project management applications. The infrastructure allows users to specify and to use existing heterogeneous tools (e.g., Microsoft Project, Microsoft Excel, Primavera Project Planner, and AutoCAD) for the simulation of project scenarios. This paper describes the components of the SimAL language and the implementation effort required in the development of the SimAL framework. An illustrative example, bringing an on-line weather-forecasting service into project scheduling and management, is provided to demonstrate the use of the simulation language and the infrastructure framework.
A four-node quadrilateral shell element with smoothed membrane-bending based on Mindlin-Reissner theory is proposed. The element is a combination of a plate bending and membrane element. It is based on mixed interpolation where the bending and membrane stiffness matrices are calculated on the boundaries of the smoothing cells while the shear terms are approximated by independent interpolation functions in natural coordinates. The proposed element is robust, computationally inexpensive and free of locking. Since the integration is done on the element boundaries for the bending and membrane terms, the element is more accurate than the MITC4 element for distorted meshes. This will be demonstrated for several numerical examples.
This paper presents a new design environment based on Multi-Agents and Virtual Reality (VR). In this research, a design system with a virtual reality function was developed. The virtual world was realized using GL4Java, liquid-crystal shutter glasses, sensor systems and related hardware. The Multi-Agent CAD system with product models, which had been developed previously, was integrated with the VR design system. A prototype system was developed for highway steel plate girder bridges and applied to a design problem. The application verified the effectiveness of the developed system.
Superplasticizers are utilized both to improve fluidity during placement and to reduce the water content of concrete. Both effects also have an impact on the properties of the hardened concrete. As a side effect, superplasticizers strongly retard the strength development of concrete, which may be an economical drawback in concrete manufacturing. The present work aims at gaining insight into the causes of the retarding effect of superplasticizers on the hydration of Portland cement. To simplify the complex interactions occurring during the hydration of Portland cement, the majority of the work focuses on the interaction of superplasticizers with tricalcium silicate (Ca3SiO5 or C3S, the main compound of Portland cement clinker). The tests are performed in three main parts, supported by methods such as isothermal conduction calorimetry, electrical conductivity, electron microscopy, ICP-OES, TOC, and analytical ultracentrifugation.
In the first main part, the interactions between calcium ions and superplasticizers are investigated, based on the interaction of cations with the anionic charges of the polymers. The main effect is that calcium ions are complexed by the functional groups of the polymers (carboxy, sulfonic). The complexed calcium ions may be dissolved in the aqueous phase as well as being constituents of particle interfaces. Besides these effects, it is furthermore shown that superplasticizers induce the formation of nanoscaled particles which are dispersed in the aqueous phase (cluster formation). Analogous to recent findings in the field of biomineralization, it is reasonable to assume that these nanoparticles influence crystal growth through their assembly process.
Based on the assumption that superplasticizers hinder dissolution, precipitation or both, and thereby retard cement hydration, their impact on the separate reactions is investigated. In experiments addressing the solubility of C-S-H phases and portlandite, it is shown that the complexation of calcium ions in the aqueous phase by the functional groups of the polymers increases the solubility of portlandite. In contrast, in the case of C-S-H solubility, the complexation of calcium ions in solution leads to a decrease of the calcium ion concentration in the aqueous phase. These effects are explained by differences in the adsorption of polymers on C-S-H phases and portlandite. It is proposed that adsorption is stronger on C-S-H phases than on portlandite owing to the higher specific surface area of the C-S-H phases. Following from this, it is argued that before polymers can adsorb on C-S-H phases, their functional groups must be screened by calcium ions in the aqueous phase. It is further shown that data on the impact of superplasticizers on the unconstrained dissolution rate of C3S do not provide a clear relation to the overall retarding effect occurring during the hydration of C3S. Both increased and decreased dissolution rates with respect to the reference sample are detected. If the complexation capability of the superplasticizers is taken into account, a reduced dissolution rate of C3S is also determined. Although the global hydration process is accelerated, the addition of calcite leads to a slower dissolution rate. Thus, hindered unconstrained dissolution of C3S as a possible cause of the retarding effect remains open for discussion. In the last section of this part, the pure crystallization of the hydrate phases (C-S-H phases, portlandite) is examined. The results clearly show that superplasticizers prolong the induction time and modify the rate of crystal growth during pure crystallization, in particular owing to the complexation of ions in solution.
This effect alone, however, is insufficient to account for the overall retardation. Further important factors are the blocking of crystal growth faces by adsorbed polymers and the dispersion of nanoscaled particles, which hinders their agglomeration into crystals.
In the last main part of the work, the previously gathered results are utilized to investigate hydration kinetics, during which dissolution and precipitation occur in parallel. Special attention is paid to the ion composition of the aqueous phase of C3S pastes and suspensions in order to determine the rate-limiting step. All in all, it is concluded that the retarding effect of superplasticizers on the hydration of tricalcium silicate is based on the retardation of the crystallization of the hydrate phases (C-S-H phases and portlandite). Two effects, the complexation of calcium ions on surfaces and the stabilization of nanoscaled particles, are of major importance here. These mechanisms may be partly compensated by template effects and by the increase in solubility due to the complexation of ions in solution. The decreased dissolution rate of C3S in the presence of superplasticizers during the concurrently occurring hydration process can only be assessed indirectly by means of the development of the ion concentrations in the aqueous phase (reaction path). Whether this observation is a cause or a consequence within the dissolution-precipitation process, and therefore accounts for the retarding effect, remains a topic for further investigation.
Besides these results, it is shown that superplasticizers can be associated chemically with inhibitors because they reduce the frequency factor for ending the induction period. Since the activation energy is largely unaffected, the basic reaction mechanism is shown to be preserved. Furthermore, a method was developed which permits, for the first time, the in-situ determination of ion concentrations in the aqueous phase of C3S pastes. It is shown that during C3S hydration the ion concentration in the aqueous phase develops in correspondence with the heat release rate (calorimetry). The method permits the differentiation of the acceleration period into three stages. It is emphasized that the crystallization of the product phases of C3S hydration, namely C-S-H phases and portlandite, is responsible for the end of the induction period.
Rice husk ash (RHA) is classified as a highly reactive pozzolan. It has a very high silica content, similar to that of silica fume (SF). Using less expensive and locally available RHA as a mineral admixture in concrete brings ample benefits in terms of cost, the technical properties of the concrete and the environment. An experimental study of the effect of RHA blending on the workability, strength and durability of high performance fine-grained concrete (HPFGC) is presented. The results show that the addition of RHA to HPFGC significantly improved the compressive strength, splitting tensile strength and chloride penetration resistance. Interestingly, the ratio of compressive strength to splitting tensile strength of HPFGC was lower than that of ordinary concrete, especially for the concrete made with 20 % RHA. The compressive strength of HPFGC containing RHA was similar to, and the splitting tensile strength slightly higher than, that of HPFGC containing SF. The chloride penetration resistance of HPFGC containing 10–15 % RHA was comparable with that of HPFGC containing 10 % SF.
In this paper, we explore the relation between electricity consumption in the residential sector and automobile energy consumption in the transportation sector according to location within the city, employing a Geographic Information System (GIS). We found that per-capita electricity consumption in Utsunomiya City tends to be higher in the city centre and lower in the suburbs. It is also noted that there is little difference in total consumption between the city centre and the suburbs, despite the fact that the density of electric appliances tends to be higher in the small houses of the city centre and residential automobile energy consumption is lower in the city centre than in the suburbs.
The construction industry is a supporting industry in China. IT (information technology), including computer technology and communication technology, is regarded as the most important means to upgrade the construction industry, and research projects were therefore organized by the Chinese government to further the application of IT in the construction industry. This study originated from one of these projects and is aimed at grasping the general situation of the application of IT in the construction industry. A questionnaire was designed for the survey, which used a stratified proportional sampling method and was carried out with the help of a government agency. This study can not only provide a sound foundation for the government to make relevant policies, but also offer a reference for firms in the construction industry applying IT in their business. This paper presents the preliminary results of the survey.
Long-span bridges are prone to wind-induced vibrations. Therefore, a reliable representation of the aerodynamic forces acting on a bridge deck is of major significance for the design of such structures. This paper presents a systematic study of the two-dimensional (2D) fluid-structure interaction of a bridge deck under smooth and turbulent wind conditions. Aerodynamic forces are modeled by two approaches: a computational fluid dynamics (CFD) model and six semi-analytical models. The vortex particle method is utilized for the CFD model, and the free-stream turbulence is introduced by seeding vortex particles upstream of the deck with prescribed spectral characteristics. The employed semi-analytical models are based on the quasi-steady and linear unsteady assumptions and on aerodynamic coefficients obtained from CFD analyses.
The underlying assumptions of the semi-analytical aerodynamic models are used to interpret the results of buffeting forces and aeroelastic response due to free-stream turbulence in comparison with the CFD model. Extensive discussions are provided to analyze the effect of linear fluid memory and quasi-steady nonlinearity from a CFD perspective. The outcome of the analyses indicates that the fluid memory is a governing effect in the buffeting forces and aeroelastic response, while the effect of the nonlinearity is overestimated by the quasi-steady models. Finally, flutter analyses are performed and the obtained critical velocities are compared with wind tunnel results, followed by a brief examination of the post-flutter behavior. The results of this study provide a deeper understanding of the extent to which the applied models are able to replicate the physical processes of fluid-structure interaction phenomena in bridge aerodynamics and aeroelasticity.
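As a point of orientation for the quasi-steady class of models discussed above, a standard linearized quasi-steady buffeting lift formula can be evaluated directly from turbulence time series. This is a generic textbook form, not necessarily the exact formulation of any of the paper's six semi-analytical models, and all coefficient values below are illustrative:

```python
import numpy as np

# Linearized quasi-steady buffeting lift per unit span (a common sketch):
#   L_b(t) = 0.5*rho*U^2*B * ( 2*C_L*u(t)/U + (dC_L/dalpha + C_D)*w(t)/U )
# where u, w are the longitudinal and vertical turbulence components.
rho, U, B = 1.25, 30.0, 10.0        # hypothetical air density, mean wind speed, deck width
C_L, C_D, dC_L = 0.1, 0.08, 4.5     # hypothetical static coefficients (e.g. from CFD)

rng = np.random.default_rng(0)
u = 0.10 * U * rng.standard_normal(1000)   # synthetic longitudinal turbulence
w = 0.05 * U * rng.standard_normal(1000)   # synthetic vertical turbulence

L_b = 0.5 * rho * U**2 * B * (2*C_L*u/U + (dC_L + C_D)*w/U)
print(L_b.std())
```

The fluid-memory effect highlighted in the study is precisely what this formula omits: the force responds instantaneously to u and w, whereas unsteady models filter them through aerodynamic admittance.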
A vast number of existing buildings were constructed before the development and enforcement of seismic design codes, and thus run the risk of being severely damaged under seismic excitation. This poses not only a threat to the lives of people but also affects the socio-economic stability of the affected area. Therefore, it is necessary to assess the present vulnerability of such buildings to make an educated decision regarding risk mitigation by seismic strengthening techniques such as retrofitting. However, it is not feasible, in economic terms or in time, to inspect, repair, and augment every old building on an urban scale. As a result, reliable rapid screening methods, such as Rapid Visual Screening (RVS), have garnered increasing interest among researchers and decision-makers alike. In this study, the effectiveness of five different Machine Learning (ML) techniques for vulnerability prediction has been investigated. The damage data of four different earthquakes, from Ecuador, Haiti, Nepal, and South Korea, have been utilized to train and test the developed models. Eight performance modifiers have been implemented as variables in a supervised ML setting. The investigations in this paper illustrate that the vulnerability classes assessed by the ML techniques were very close to the actual damage levels observed in the buildings.
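A minimal supervised-classification sketch in the spirit of the study: buildings described by performance-modifier scores, labelled with a damage grade, classified from labelled examples. The data are synthetic, the damage-grade rule is a toy, and a k-nearest-neighbour classifier merely stands in for the five ML techniques, which the abstract does not enumerate:

```python
import numpy as np

rng = np.random.default_rng(1)
n_modifiers = 8                            # the study uses eight performance modifiers
X_train = rng.random((200, n_modifiers))   # synthetic modifier scores per building
# Toy damage grade: 1 if the summed modifier scores exceed half the maximum.
y_train = (X_train.sum(axis=1) > n_modifiers / 2).astype(int)

def knn_predict(x, X, y, k=5):
    """Majority vote among the k nearest training buildings (Euclidean distance)."""
    d = np.linalg.norm(X - x, axis=1)
    nearest = y[np.argsort(d)[:k]]
    return int(np.bincount(nearest).argmax())

x_new = rng.random(n_modifiers)            # a newly screened building
grade = knn_predict(x_new, X_train, y_train)
print(grade)
```

Replacing `knn_predict` with any trained classifier leaves the RVS workflow unchanged: score the modifiers, predict the grade, prioritize retrofitting.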
In recent decades, a multitude of concepts and models were developed to understand, assess and predict muscular mechanics in the context of physiological and pathological events.
Most of these models are highly specialized and designed to selectively address fields in, e.g., medicine, sports science, forensics, product design or CGI; their data are often not transferable to other ranges of application. A single universal model covering the details of biochemical and neural processes, as well as the development of internal and external force and motion patterns and appearance, would not be practical with regard to the diversity of the questions to be investigated and the task of finding answers efficiently. With reasonable limitations, though, a generalized approach is feasible.
The objective of the work at hand was to develop a model for muscle simulation which covers the phenomenological aspects and is thus universally applicable in domains where specialized models have been utilized up to now. This includes investigations of active and passive motion, the structural interaction of muscles within the body and with external elements, for example in crash scenarios, but also research topics such as the verification of in vivo experiments and parameter identification. For this purpose, elements for the simulation of incompressible deformations were studied, adapted and implemented into the finite element code SLang. Various anisotropic, visco-elastic muscle models were developed or enhanced. The applicability was demonstrated on the basis of several examples, and a general basis for the implementation of further material models was developed and elaborated.
A Unified Approach for the Treatment of Some Higher Dimensional Dirac Type Equations on Spheres (2010)
Using Clifford analysis methods, we provide a unified approach to obtain explicit solutions of some partial differential equations combining the n-dimensional Dirac and Euler operators, including generalizations of the classical time-harmonic Maxwell equations. The obtained regular solutions show strong connections between hypergeometric functions and homogeneous polynomials in the kernel of the Dirac operator.
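For orientation, the two operators combined in these equations have the following standard forms in Clifford analysis. These are the generic Euclidean definitions; the spherical variants treated in the paper may carry additional terms:

```latex
D = \sum_{i=1}^{n} e_i \,\frac{\partial}{\partial x_i},
\qquad
E = \sum_{i=1}^{n} x_i \,\frac{\partial}{\partial x_i},
```

where $e_1,\dots,e_n$ generate the Clifford algebra with $e_i e_j + e_j e_i = -2\delta_{ij}$, and functions satisfying $Df = 0$ are the monogenic functions whose homogeneous polynomial components are mentioned above.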
The development of a life-cycle-structured cooperation platform is described, which is based on an integrated process- and goal-oriented project model. Furthermore, the structure of a life-cycle-oriented object structure model and its implementation in the platform are documented. The complete conceptual model is described, which represents the basis of a life-cycle-oriented structuring of the planning object and supports the thematic classification of the object and project management data.
The paper describes a concept for the step-by-step computer-aided capture and representation of geometric building data in the context of planning-oriented building surveying. Selected aspects of the concept have been implemented and tested as prototypes. The process of step-by-step capture and representation is determined by the order in which the user experiences the building: only the information that the user knows (can see) or can reasonably deduce is represented. In addition, approaches to the flexible combination of different measuring techniques and geometric abstractions are described, based upon geodetic adjustment computation.
This paper is concerned with the numerical treatment of quasilinear elliptic partial differential equations. In order to solve the given equation we propose to use a Galerkin approach, but, in contrast to conventional finite element discretizations, we work with trial spaces that, not only exhibit the usual approximation and good localization properties, but, in addition, lead to expansions of any element in the underlying Hilbert spaces in terms in multiscale or wavelet bases with certain stability properties. Specifically, we select as trial spaces a nested sequence of spaces from an appropriate biorthogonal multiscale analysis. This gives rise to a nonlinear discretized system. To overcome the problems of nonlinearity, we make use of the machinery of interpolating wavelets to obtain knot oriented quadrature rules. Finally, Newton's method is applied to approximate the solution in the given ansatz space. The results of some numerical experiments with different biorthogonal systems, confirming the applicability of our scheme, are presented.
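The Newton outer loop described above can be sketched on a model problem. In the following illustration a uniform finite-difference grid stands in for the biorthogonal wavelet trial spaces of the paper, and the quasilinear equation is the toy problem -u'' + u^3 = f on (0,1) with homogeneous boundary conditions; the Newton iteration itself is the same in both settings:

```python
import numpy as np

n = 100
h = 1.0 / n
x = np.linspace(0, 1, n + 1)[1:-1]          # interior nodes
# Second-difference matrix approximating -u''.
A = (np.diag(2 * np.ones(n - 1))
     - np.diag(np.ones(n - 2), 1)
     - np.diag(np.ones(n - 2), -1)) / h**2
# Manufactured right-hand side so that u(x) = sin(pi*x) is the exact solution.
f = np.pi**2 * np.sin(np.pi * x) + np.sin(np.pi * x)**3

# Newton's method on the discretized nonlinear system F(u) = A u + u^3 - f = 0.
u = np.zeros(n - 1)
for _ in range(20):
    F = A @ u + u**3 - f
    J = A + np.diag(3 * u**2)               # Jacobian of F
    du = np.linalg.solve(J, -F)
    u += du
    if np.linalg.norm(du) < 1e-12:
        break

err = np.max(np.abs(u - np.sin(np.pi * x)))
print(err)
```

With a wavelet discretization the dense solve would be replaced by the multiscale machinery (knot-oriented quadrature, preconditioned iterative solvers); the quadratic convergence of the outer Newton iteration is unaffected by that choice.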
Reasons for the demolition of the bridge; the structure to be demolished; requirements for the demolition (according to the tender, from a structural point of view, for reasons of occupational safety); demolition of the superstructure and the deck slab (demolition of the superstructure, the problem of the fixed point between arch crown and deck, demolition of the slab-and-beam deck and pier walls); demolition of the arches (demolition scaffolding, crown opening, arch deconstruction, monitoring system). Never before had an arch bridge in Germany been demolished in this way. Decisive for the success of the difficult deconstruction of the Teufelstalbrücke was the sense of responsibility of all involved, combined with the necessary willingness to take risks but also with the avoidance of unnecessary residual risks from the outset. Risk minimization was supported by the good organization of the project, regular joint site inspections with representatives of the institutions responsible for safety (Amt für Arbeitsschutz Gera, Bau-Berufsgenossenschaft, health and safety coordinator), and the technical equipment. All this ultimately led to the success of an undertaking that required going beyond the range of previous experience.
Low-impact demolition and deconstruction technologies were investigated specifically for the sandwich panels of the external façades of East German prefabricated buildings (Plattenbauten), which in most cases contain mineral wool (trade name: Kamilit) in their core; the investigations were accompanied by hazardous-substance measurements. Proposals are made for demolition technologies to be preferred in the future.
Why are housing cooperatives repeatedly named in current debates as central actors of public-interest-oriented housing provision, although they contribute little to the creation of new affordable housing? Why does the majority of housing cooperatives fight tooth and nail against the reintroduction of a law on housing non-profit status (Wohnungsgemeinnützigkeit), although it was precisely this law that allowed them to grow in the 20th century into companies that are large by international comparison? Are housing cooperatives clientelistic, barely democratic and only half-decommodified market participants, or an important part of housing provision for the lower middle class? Whoever seeks answers to these and other questions, and can bear nuance in answering them, should read Joscha Metzer's dissertation "Genossenschaften und die Wohnungsfrage".
Object-oriented applications in engineering consist of structured sets whose elements are objects. Manifold dependencies exist between the objects. These relations are only partially known at the time an application is developed. It must therefore be possible to create and delete relations between objects at run time of the application. Because of the high computational effort, the object base of an application is updated with delay. An object-oriented application is formally described as a system on the basis of systems theory. Attributes, objects and object sets are introduced as the elements of the system. The algorithms implemented in the methods of the objects determine the binding relation of the system. On the basis of graph theory, the order in which the object base is updated is computed. ...
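The graph-theoretic update-order computation mentioned above (the order in which a delayed object base is brought up to date) can be sketched as a topological sort of the dependency graph, so that every object is recomputed only after all objects it depends on. The object names below are illustrative, not from the work:

```python
from collections import defaultdict, deque

def update_order(deps):
    """deps maps each object to the objects it depends on (Kahn's algorithm)."""
    indeg = {obj: 0 for obj in deps}
    users = defaultdict(list)                  # reverse edges: source -> dependants
    for obj, sources in deps.items():
        for s in sources:
            users[s].append(obj)
            indeg[obj] += 1
    queue = deque(obj for obj, d in indeg.items() if d == 0)
    order = []
    while queue:
        obj = queue.popleft()
        order.append(obj)
        for u in users[obj]:
            indeg[u] -= 1
            if indeg[u] == 0:
                queue.append(u)
    if len(order) != len(deps):
        raise ValueError("cyclic dependency: no valid update order")
    return order

order = update_order({"load": [], "section": [],
                      "beam": ["load", "section"], "check": ["beam"]})
print(order)
```

Because relations may be created and deleted at run time, the sort must be repeated (or incrementally maintained) whenever the dependency graph changes.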
In this article, I show why it is necessary to abolish the use of predictive algorithms in the US criminal justice system at sentencing. After presenting the functioning of these algorithms in their context of emergence, I offer three arguments to demonstrate why their abolition is imperative. First, I show that sentencing based on predictive algorithms induces a process of rewriting the temporality of the judged individual, flattening their life into a present inescapably doomed by its past. Second, I demonstrate that recursive processes, comprising predictive algorithms and the decisions based on their predictions, systematically suppress outliers and progressively transform reality to match predictions. In my third and final argument, I show that decisions made on the basis of predictive algorithms actively perform a biopolitical understanding of justice as management and modulation of risks. In such a framework, justice becomes a means to maintain a perverse social homeostasis that systematically exposes disenfranchised Black and Brown populations to risk.
The AEC industry is conscious of the potential arising from the use of mobile computer systems to increase productivity by streamlining business processes. Discussions are no longer about whether to use a mobile computing solution, but rather about how it should be used. However, the implementation of this new technology in Architecture, Engineering and Construction (AEC) and Facility Management (FM) practice has been very slow and should be improved. One way to encourage and ease the use of mobile computer systems in AEC is a more process-oriented usability and context appropriateness of mobile computing solutions. Context-sensitivity is a crucial feature to be taken into account in further research in the area of Mobile Computing. Context-sensitive mobile IT solutions depend on two features: (1) flexible definitions of (construction) processes describing the context and (2) tools for flexible, multi-dimensional information management representing the context. It is on this premise that the authors propose the n-dimensional data management approach for the implementation of mobile computing solutions. In this paper, we analyse working scenarios in the AEC and FM sector, defining context aspects which are transformed and formalized as dimension hierarchies of the envisaged context model.
For decades in Germany, historical research on dictatorial urban design in the first half of the 20th century focused on the National Socialist period. Studies on the urban design practices of other dictatorships remained an exception. This has changed. Meanwhile, the urban production practices of the Mussolini, Stalin, Salazar, Hitler and Franco dictatorships have become the subject of comprehensive research projects. Recently, a research group that studies dictatorial urban design in 20th century Europe has emerged at the Bauhaus-Institut für Geschichte und Theorie der Architektur und der Planung. The group is already able to refer to various research results.
Part of the research group’s self-conception is the assumption that the urban design practices of the named dictatorships can only be properly understood from a European perspective. The dictatorships influenced one another substantially. Furthermore, the specificities of the practices of each dictatorship can only be discerned if one can compare them to those of the other dictatorships. This approach requires strict adherence to the research methods of planning history and urban design theory. At the same time, these methods must be opened to include those of general historical studies.
With this symposium, the research group aims to further qualify this European perspective. The aim is to compile an inventory of the various national historiographies on the topic of “urban design and dictatorship”. This inventory should offer an overview of historical research on urban design at the general national level as well as at the level of particular urban design projects, persons or topics.
The symposium took place in Weimar, November 21-22, 2013. It was organized by Harald Bodenschatz, Piero Sassi and Max Welch Guerra and funded by the DAAD (German Academic Exchange Service).
This work presents methods for estimating the load-bearing capacity of transverse connections to beams of solid and glued-laminated timber. The capacity of these connections is not limited solely by the capacity of the mechanical fasteners themselves; in this work, the fastener capacity is considered sufficient a priori. It can be determined, for example, according to the theory of JOHANSEN. Particularly for connections placed below the neutral axis of a beam, the loads introduced by the fasteners generate stresses that govern the capacity of the connection. Estimating the capacity on the basis of stresses has methodological weaknesses for this problem. Timber members may contain cracks under service conditions. With the methods of linear-elastic fracture mechanics, the capacity of cracked members can be assessed. Because of the large number of possible configurations, only connections made with dowel-type fasteners are considered. Numerical methods are applied to determine fracture-mechanics parameters. Important parameters of these connections are investigated and evaluated with regard to their consideration in the computational model. To verify the computational model, comparisons are made with experimental investigations by other researchers. The use of different failure criteria is discussed. Finally, a formal relation for estimating the capacity of individual fasteners is derived. Furthermore, the capacity of connections common in practice is presented by means of simple relations.
An architecture of a distributed planning system for the building industry has been developed. The emphasis is on highly collaborative environments in steelwork, timber construction, etc., where designers concurrently handle 3D models. The overall system connects local design systems through the so-called Design Framework (DFW). This framework consists of the definition of distributed components and protocols which make collaborative design work. The process of collaborative design has been formalized on an abstract level; this paper describes how this has been done. An example is given to illustrate the mapping of concrete scenarios of the ‘real design world’ to an abstract scenario level. This work is funded by the Deutsche Forschungsgemeinschaft (DFG) as part of the project SPP1103 (Meißner et al. 2003).
The paper presents the abstraction of process-relevant information in order to enable workflow management based on semantic data. It is shown for three examples how the standards define the information needed to perform a certain planning activity. The abstraction of process-relevant information is discussed for different granularities of the underlying process model. As one possible application, ProMiSE is introduced, which uses process-relevant data in individual tokens of a Petri-net-based process model.
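The Petri-net mechanics underlying such a process model can be sketched minimally: places hold tokens, and a transition fires when every input place holds a token, consuming them and producing tokens on the output places. This sketch uses plain (indistinguishable) tokens; ProMiSE's individual tokens additionally carry data. The place and transition names are illustrative:

```python
def enabled(marking, transition):
    """A transition is enabled if every input place holds at least one token."""
    return all(marking.get(p, 0) >= 1 for p in transition["inputs"])

def fire(marking, transition):
    """Fire a transition: consume one token per input place, produce on outputs."""
    m = dict(marking)
    for p in transition["inputs"]:
        m[p] -= 1
    for p in transition["outputs"]:
        m[p] = m.get(p, 0) + 1
    return m

# A toy planning activity: checking a design requires the design and a standard.
check_design = {"inputs": ["design_ready", "standard_available"],
                "outputs": ["design_checked"]}
m0 = {"design_ready": 1, "standard_available": 1}
assert enabled(m0, check_design)
m1 = fire(m0, check_design)
print(m1)
```

In a workflow setting, the markings encode which planning activities are currently executable, which is exactly what the semantic process data is abstracted for.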
This contribution is based on the approaches and results of the research project 'Process-Oriented Networking of Engineering Planning Using the Example of Geotechnics', funded by the DFG within the priority program SPP 1103 'Network-based Cooperative Planning Processes in Structural Engineering'. The goal of the project, carried out jointly with the Institute of Numerical Methods and Informatics in Civil Engineering at TU Darmstadt, is the development of a network-based cooperation platform to support geotechnical engineering planning. The project therefore concentrates on modeling and coordinating the planning processes of structural-engineering projects against the background of highly division-of-labor project work in a distributed computing environment. The contribution presents the abstraction of process patterns in the building planning process as a basis for dynamic process modeling in a cooperation model. The aim is to abstract a component-related catalog of process patterns by identifying the planning and coordination processes associated with the design and dimensioning of a structural component. In each building planning process, the individual process patterns are integrated dynamically into the current process model via suitable coupling mechanisms, so that the changes in the construction and in the composition of the planning team that are typical of building planning processes can be taken into account in the process model. To this end, the contribution presents the results obtained so far from the analysis of the planning process of a large inner-city construction project, which serves as a reference object, and of typical planning scenarios in geotechnics. Subsequently, fundamentals and methodological approaches for modeling processes with colored Petri nets with individual tokens are presented.
Using examples of component-oriented process patterns, the functionality of the process patterns is explained, both individually and in their interaction.
The technique of Acoustic travel-time TOMography (ATOM) allows the distribution of air temperature throughout an entire room to be measured from the determined sound travel times of early reflections, currently up to second-order reflections. The number of detected early reflections in the room impulse response (RIR), which correspond to the desired sound paths inside the room, has a significant impact on the resolution of the reconstructed temperatures. This study investigates the possibility of utilizing an array of directional sound sources for ATOM measurements instead of the single omnidirectional loudspeaker used in previous studies [1–3]. The developed measurement setup consists of two directional sound sources placed near the edge of the floor in the climate chamber of the Bauhaus-University Weimar and one omnidirectional receiver at the center of the room near the ceiling. In order to compensate for the reduced number of sound paths when using directional sound sources, it is proposed to take high-energy early reflections up to third order into account. For this purpose, the simulated travel times up to third-order image sources were implemented in the image source model (ISM) algorithm, by which these early reflections can be detected effectively for air temperature reconstructions. To minimize the uncertainties of travel-time estimation due to the positioning of the sound transducers inside the room, measurements were conducted to determine the exact emitting point of the utilized sound source, i.e., its acoustic center (AC). For these measurements, three types of excitation signals (MLS, linear and logarithmic chirp signals) with various frequency ranges were used, considering that the acoustic center of a sound source is a frequency-dependent parameter [4].
Furthermore, measurements were conducted to determine an optimum excitation signal for the given conditions of the ATOM measurement setup, which correspondingly defines an optimum method for the RIR estimation. Finally, the uncertainty of the measuring system utilizing an array of directional sound sources was analyzed.
Acoustic travel-time tomography (ATOM) determines the temperature distribution in a propagation medium by measuring the travel times of acoustic signals between transmitters and receivers. To employ ATOM for indoor climate measurements, impulse responses have been measured in the climate chamber lab of the Bauhaus-University Weimar and compared with the theoretical results of its image source model (ISM). A challenging task is distinguishing the reflections of interest in the reflectogram when the sound rays have similar travel times. This paper presents a numerical method to address this problem by finding optimal positions of transmitter and receiver, since these have a direct impact on the distribution of travel times. The optimal positions have the minimum number of simultaneous arrival times within a threshold level. Moreover, in the tomographic reconstruction, voxels that remain empty of sound rays lead to inaccurate determination of the air temperature within those voxels. Based on the presented numerical method, the number of empty tomographic voxels is minimized to ensure the best sound-ray coverage of the room. Subsequently, the spatial temperature distribution is estimated by the simultaneous iterative reconstruction technique (SIRT). The experimental setup in the climate chamber verifies the simulation results.
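The core physical relation behind such reconstructions can be sketched in a few lines: the speed of sound in air scales with the square root of the absolute temperature, so a measured travel time along a known path yields the mean temperature along that path. The path length, travel time, and dry-air constant below are illustrative assumptions, not values from the study; the actual setup additionally accounts for humidity and inverts many paths jointly over voxels.

```python
# Sketch: recover the mean air temperature along one sound path from its
# measured travel time, via c = c0 * sqrt(T / 273.15) for dry air.

C0 = 331.6  # approx. speed of sound in dry air at 0 degC, in m/s

def temperature_from_travel_time(path_length_m, travel_time_s):
    """Mean absolute temperature (K) along one transmitter-receiver path."""
    c = path_length_m / travel_time_s          # effective sound speed
    return 273.15 * (c / C0) ** 2

# A 5 m path traversed at ~343 m/s corresponds to roughly room temperature:
T = temperature_from_travel_time(5.0, 5.0 / 343.0)
print(round(T - 273.15, 1))  # ~19.1 degC
```

A full tomographic reconstruction such as SIRT then distributes these path-averaged temperatures over the voxels each ray crosses.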
Expert systems integrating fuzzy reasoning techniques represent a powerful tool to support practicing engineers during the early stages of structural design. In this context, fuzzy models have proved very suitable for the representation of complex design knowledge; however, their definition is a laborious task. This paper introduces an approach for the design and optimization of fuzzy systems based upon Genetic Programming. To keep the emerging fuzzy systems transparent, a new framework for the definition of linguistic variables is also introduced.
The uncertainty in the construction industry is greater than in other industries; consequently, most construction projects do not go entirely as planned. The project management plan therefore needs to be adapted repeatedly within the project life cycle to suit the actual project conditions. Generally, the risks of change in the project management plan are difficult to identify in advance, especially if they are caused by unexpected events such as human errors or changes in client preferences. Knowledge acquired from different sources is essential to identify probable deviations as well as to find proper solutions to the change risks faced. Hence, it is necessary to have a knowledge base that contains known solutions for the common exceptional cases that may cause changes in each construction domain. The ongoing research work presented in this paper uses the process modeling technique of Event-driven Process Chains to describe different patterns of structural changes in schedule networks. This results in several so-called "change templates". Under each template, different types of change risk/response pairs can be categorized and stored in a knowledge base. This knowledge base is described as an ontology model populated with reference construction process data. The implementation of the developed approach can be seen as an iterative scheduling cycle that is repeated within the project life cycle as new change risks surface, which helps to check the availability of ready solutions in the knowledge base for the situation at hand. Moreover, if a solution is adopted, CPSP ("Change Project Schedule Plan"), a prototype developed for the purpose of this research work, is used to make the needed structural changes to the schedule network automatically based on the change template.
What-if scenarios can be implemented with the CPSP prototype in the planning phase to study the effect of specific situations without endangering the project objectives. Hence, better designed and more maintainable project schedules can be achieved.
The numerical simulation of damage using phenomenological models on the macroscale has been state of the art for many decades. However, such models are not able to capture the complex nature of damage, which proceeds simultaneously on multiple length scales. Furthermore, these phenomenological models usually contain damage parameters that are not physically interpretable; consequently, a reasonable experimental determination of these parameters is often impossible. In the last twenty years, the ongoing advance in computational capacities has provided new opportunities for increasingly detailed studies of microstructural damage behavior. Today, multiphase models with several million degrees of freedom enable the numerical simulation of micro-damage phenomena in naturally heterogeneous materials. Therewith, multiscale concepts can be applied to the numerical investigation of the complex nature of damage. The presented thesis contributes to a hierarchical multiscale strategy for the simulation of brittle intergranular damage in polycrystalline materials, for example aluminum. The aim is the numerical investigation of physical damage phenomena on the atomistic microscale and the integration of this physically based information into damage models on the continuum meso- and macroscale. Therefore, numerical methods for the damage analysis on the micro- and mesoscale, including the scale transfer, are presented, and the transition to the macroscale is discussed. The investigation of brittle intergranular damage on the microscale is realized by applying the nonlocal Quasicontinuum method, which fully describes the material behavior by atomistic potential functions but reduces the number of atomic degrees of freedom by introducing kinematic couplings.
Since this promising method is applied only by a limited group of researchers to special problems, necessary improvements have been realized in a custom parallelized implementation of the 3D nonlocal Quasicontinuum method. The aim of this implementation was to develop and combine robust and efficient algorithms for general use of the Quasicontinuum method, and thereby to allow the atomistic damage analysis of arbitrary grain boundary configurations. The implementation is applied in analyses of brittle intergranular damage in ideal and nonideal grain boundary models of FCC aluminum, considering arbitrary misorientations. From the microscale simulations, traction-separation laws are derived, which describe grain boundary decohesion on the mesoscale. Traction-separation laws are part of cohesive zone models used to simulate brittle interface decohesion in heterogeneous polycrystal structures. 2D and 3D mesoscale models are presented, which are able to reproduce crack initiation and propagation along cohesive interfaces in polycrystals. An improved Voronoi algorithm is developed in 2D to generate polycrystal material structures based on arbitrary grain-size distribution functions; the new model is more flexible in representing realistic grain-size distributions. Further improvements of the 2D model are realized by implementing and applying an orthotropic material model with a Hill plasticity criterion to the grains. The 2D and 3D polycrystal models are applied to analyze crack initiation and propagation in statically loaded aluminum samples on the mesoscale without the need to define initial damage.
We present recent developments of adaptive wavelet solvers for elliptic eigenvalue problems. We describe the underlying abstract iteration scheme of the preconditioned perturbed iteration. We apply the iteration to a simple model problem in order to identify the main ideas which a numerical realization of the abstract scheme is based upon. This indicates how these concepts carry over to wavelet discretizations. Finally we present numerical results for the Poisson eigenvalue problem on an L-shaped domain.
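As a much-simplified stand-in for the wavelet scheme (plain inverse power iteration on a finite-difference discretization, not the paper's adaptive method), the following sketch computes the smallest eigenpair of a 1D Dirichlet Laplacian, the simplest instance of the class of elliptic eigenvalue problems discussed; grid size and iteration count are arbitrary choices.

```python
import numpy as np

# Discrete 1D Laplacian on (0, 1) with homogeneous Dirichlet BCs.
n = 50
h = 1.0 / (n + 1)
A = (np.diag(2.0 * np.ones(n))
     - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / h**2

# Inverse power iteration: repeatedly solve A v_{k+1} = v_k and normalize;
# the iterates converge to the eigenvector of the smallest eigenvalue.
v = np.ones(n)
for _ in range(100):
    v = np.linalg.solve(A, v)
    v /= np.linalg.norm(v)
lam = v @ A @ v                    # Rayleigh quotient

print(round(lam, 2))               # close to pi^2 ~ 9.87
```

The abstract scheme in the paper plays a similar role, but with a wavelet discretization, preconditioning, and adaptive refinement replacing the dense solve.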
Major problems in applying selective sensitivity to system identification are the required precise knowledge of the system parameters and the realization of the required system of forces. This work presents a procedure that derives selectively sensitive excitation through iterative experiments. The first step is to determine the selectively sensitive displacement and force patterns. These values are obtained by introducing prior information on the system parameters into an optimization that minimizes the sensitivities of the structural response with respect to the unselected parameters while keeping the sensitivities with respect to the selected parameters constant. In a second step, the force pattern is used to derive dynamic loads on the tested structure, and measurements are carried out; an automatic control ensures the required excitation forces. In a third step, the measured outputs are employed to update the prior information. The strategy is to minimize the difference between a predicted displacement response, formulated as a function of the unknown parameters and the measured displacements, and the selectively sensitive displacement calculated in the first step. With the updated parameter values, a re-analysis of selective sensitivity is performed, and the experiment is repeated until the displacement responses of the model and the actual structure agree. As an illustration, a simply supported steel beam under harmonic excitation is investigated, demonstrating that the adaptive excitation can be obtained efficiently.
The nonlinear behavior of concrete can be attributed to the propagation of microcracks within the heterogeneous internal material structure. In this thesis, a mesoscale model is developed which allows for the explicit simulation of these microcracks. Consequently, the actual physical phenomena causing the complex nonlinear macroscopic behavior of concrete can be represented using rather simple material formulations. On the mesoscale, the numerical model explicitly resolves the components of the internal material structure. For concrete, a three-phase model consisting of aggregates, mortar matrix and interfacial transition zone is proposed. Based on prescribed grading curves, an efficient algorithm for the generation of three-dimensional aggregate distributions using ellipsoids is presented. In the numerical model, tensile failure of the mortar matrix is described using a continuum damage approach. In order to reduce spurious mesh sensitivities, introduced by the softening behavior of the matrix material, nonlocal integral-type material formulations are applied. The propagation of cracks at the interface between aggregates and mortar matrix is represented in a discrete way using a cohesive crack approach. The iterative solution procedure is stabilized using a new path-following constraint within the framework of load-displacement-constraint methods which allows for an efficient representation of snap-back phenomena. In several examples, the influence of the randomly generated heterogeneous material structure on the stochastic scatter of the results is analyzed. Furthermore, the ability of mesoscale models to represent size effects is investigated. Mesoscale simulations require the discretization of the internal material structure. Compared to simulations on the macroscale, the numerical effort and the memory demand increase dramatically. Due to the complexity of the numerical model, mesoscale simulations are, in general, limited to small specimens.
In this thesis, an adaptive heterogeneous multiscale approach is presented which allows for the incorporation of mesoscale models within nonlinear simulations of concrete structures. In heterogeneous multiscale models, only critical regions, i.e. regions in which damage develops, are resolved on the mesoscale, whereas undamaged or sparsely damaged regions are modeled on the macroscale. A crucial point in simulations with heterogeneous multiscale models is the coupling of sub-domains discretized on different length scales. The sub-domains differ not only in the size of the finite elements but also in the constitutive description. In this thesis, different methods for the coupling of non-matching discretizations - constraint equations, the mortar method and the Arlequin method - are investigated and their application to heterogeneous multiscale models is presented. Another important point is the detection of critical regions. An adaptive solution procedure allowing the transfer of macroscale sub-domains to the mesoscale is proposed. In this context, several indicators which trigger the model adaptation are introduced. Finally, the application of the proposed adaptive heterogeneous multiscale approach in nonlinear simulations of concrete structures is presented.
The advent of high-performance mobile phones has opened up the opportunity to develop new context-aware applications for everyday life. In particular, applications for context-aware information retrieval in conjunction with image-based object recognition have become a focal area of recent research. In this thesis we introduce an adaptive mobile museum guidance system that allows visitors in a museum to identify exhibits by taking a picture with their mobile phone. Besides approaches to object recognition, we present different adaptation techniques that improve classification performance. After providing a comprehensive background of context-aware mobile information systems in general, we present an on-device object recognition algorithm and show how its classification performance can be improved by capturing multiple images of a single exhibit. To accomplish this, we combine the classification results of the individual pictures and consider the perspective relations among the retrieved database images. In order to identify multiple exhibits in pictures we present an approach that uses the spatial relationships among the objects in images. They make it possible to infer and validate the locations of undetected objects relative to the detected ones and additionally improve classification performance. To cope with environmental influences, we introduce an adaptation technique that establishes ad-hoc wireless networks among the visitors’ mobile devices to exchange classification data. This ensures constant classification rates under varying illumination levels and changing object placement. Finally, in addition to localization using RF-technology, we present an adaptation technique that uses user-generated spatio-temporal pathway data for person movement prediction. Based on the history of previously visited exhibits, the algorithm determines possible future locations and incorporates these predictions into the object classification process. 
This increases classification performance and offers benefits comparable to traditional localization approaches but without the need for additional hardware. Through multiple field studies and laboratory experiments we demonstrate the benefits of each approach and show how they influence the overall classification rate.
One major research focus in the materials science and engineering community in the past decade has been to obtain a more fundamental understanding of the phenomenon of material failure. Such an understanding is critical for engineers and scientists developing new materials with higher strength and toughness, developing robust designs against failure, or for those concerned with an accurate estimate of a component's design life. Defects like cracks and dislocations evolve at nano scales and influence the macroscopic properties such as strength, toughness and ductility of a material. In engineering applications, the global response of the system is often governed by the behaviour at the smaller length scales. Hence, the sub-scale behaviour must be computed accurately for good predictions of the full-scale behaviour.
Molecular Dynamics (MD) simulations promise to reveal the fundamental mechanics of material failure by modeling the atom-to-atom interactions. Since atomistic dimensions are of the order of Angstroms (Å), approximately 85 billion atoms are required to model a 1 μm³ volume of copper. Pure atomistic models are therefore prohibitively expensive for everyday engineering computations involving macroscopic cracks and shear bands, which are much larger than the atomistic length and time scales. To reduce the computational effort, multiscale methods are required that are able to couple a continuum description of the structure with an atomistic description. In such paradigms, cracks and dislocations are explicitly modeled at the atomistic scale, whilst a self-consistent continuum model is used elsewhere.
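The 85-billion-atom figure can be verified with a quick back-of-the-envelope computation from standard crystallographic data (FCC copper, lattice constant about 3.615 Å):

```python
# Atoms in 1 cubic micrometre of copper: FCC has 4 atoms per unit cell.
A = 3.615e-10          # Cu lattice constant in metres (~3.615 Angstrom)
atoms_per_cell = 4     # FCC unit cell
atoms_per_m3 = atoms_per_cell / A**3
atoms_per_um3 = atoms_per_m3 * (1e-6)**3   # scale to a 1 um^3 volume

print(f"{atoms_per_um3:.2e}")  # ~8.5e10, i.e. about 85 billion atoms
```

This is exactly the scale mismatch that motivates coupling atomistic regions to a continuum model.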
Many multiscale methods for fracture have been developed for "fictitious" materials based on "simple" potentials such as the Lennard-Jones potential. Moreover, multiscale methods for evolving cracks are rare, and efficient methods to coarse-grain the fine-scale defects are missing. Existing multiscale methods for fracture do not adaptively adjust the fine-scale domain as the crack propagates; most methods only "enlarge" the fine-scale domain and therefore drastically increase the computational cost. Adaptive adjustment requires the fine-scale domain to be refined and coarsened. One of the major difficulties in multiscale methods for fracture is to upscale fracture-related material information from the fine scale to the coarse scale, in particular for complex crack problems. Most of the existing approaches have therefore been applied to examples with comparatively few macroscopic cracks.
Key contributions
The bridging scale method is enhanced using the phantom node method so that cracks can be modeled at the coarse scale. To ensure self-consistency in the bulk, a virtual atom cluster is devised that provides the response of the intact material at the coarse scale. A molecular statics model is employed at the fine scale, where crack propagation is modeled by naturally breaking the bonds. The fine-scale and coarse-scale models are coupled by enforcing displacement boundary conditions on the ghost atoms. An energy criterion is used to detect the crack-tip location, and adaptive refinement and coarsening schemes are developed and applied during crack propagation. The results were observed to be in excellent agreement with pure atomistic simulations. The developed method is one of the first adaptive multiscale methods for fracture.
A robust and simple three-dimensional coarse-graining technique to convert a given atomistic region into an equivalent coarse region in the context of multiscale fracture has been developed; it is the first of its kind. The technique can be applied to identify and upscale defects such as cracks, dislocations and shear bands. It has been applied to estimate equivalent coarse-scale models of several complex fracture patterns obtained from pure atomistic simulations. The upscaled fracture patterns agree well with the actual fracture patterns, and the error in the potential energy between the pure atomistic and the coarse-grained model was observed to be acceptable.
A first meshless adaptive multiscale method for fracture has been developed. The phantom node method is replaced by a meshless differential reproducing kernel particle method, which is comparatively more expensive but allows for a more "natural" coupling between the two scales due to the meshless interpolation functions; the higher-order continuity is also beneficial. The centro-symmetry parameter is used to detect the crack-tip location. The developed multiscale method is employed to study complex crack propagation, and the results were observed to be in excellent agreement with pure atomistic simulations.
The developed multiscale methods are applied to study fracture in practical materials such as graphene and graphene on a silicon surface. Bond stretching and bond reorientation were observed to be the main mechanisms of crack growth in graphene. The influence of the time step on the crack propagation was studied using two different time steps. Pure atomistic simulations of fracture in graphene on a silicon surface are presented, and details of the three-dimensional multiscale method to study this fracture problem are discussed.
In engineering science, the modeling and numerical analysis of complex systems and relations plays an important role. In order to carry out such an investigation, for example a stochastic analysis, in reasonable computational time, approximation procedures have been developed. A well-known approach is the response surface method, where the relation between input and output quantities is represented, for example, by global polynomials or by local interpolation schemes such as Moving Least Squares (MLS). In recent years, artificial neural networks (ANN) have been applied for such purposes as well. Recently, an adaptive response surface approach for reliability analyses was proposed, which is very efficient with respect to the number of expensive limit-state function evaluations; due to the applied simplex interpolation, however, the procedure is limited to small dimensions. In this paper, this approach is extended to larger dimensions by using combined ANN and MLS response surfaces to evaluate the adaptation criterion with only one set of joined limit-state points. The adaptation criterion combines the maximum difference in the conditional probabilities of failure with the maximum difference in the approximated radii. Compared to response surfaces on directional samples or to plain directional sampling, the failure probability can be estimated with a much smaller number of limit-state points.
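A minimal sketch of the MLS interpolation mentioned above, in one dimension with a linear basis and Gaussian weights; the support radius and the sample data are illustrative assumptions, not values from the paper:

```python
import numpy as np

def mls_approx(x_eval, x_data, y_data, h=0.2):
    """Moving Least Squares fit at x_eval from scattered (x_data, y_data):
    a locally weighted linear regression re-solved at each evaluation point."""
    # Gaussian weights of all support points relative to the evaluation point
    w = np.exp(-((x_data - x_eval) / h) ** 2)
    # linear basis p(x) = [1, x]; solve the weighted normal equations
    P = np.vstack([np.ones_like(x_data), x_data]).T
    A = P.T @ (w[:, None] * P)
    b = P.T @ (w * y_data)
    coeff = np.linalg.solve(A, b)
    return coeff[0] + coeff[1] * x_eval

# toy "response" samples, standing in for expensive limit-state evaluations
x = np.linspace(0.0, 1.0, 11)
y = x ** 2
print(round(mls_approx(0.45, x, y), 3))
```

Because the weighted fit is recomputed at every evaluation point, MLS follows local trends in the data, which is why it serves well as a local response surface.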
In this paper, an adaptive heterogeneous multiscale model, which couples two substructures with different length scales into one numerical model, is introduced for the simulation of damage in concrete. In the presented approach, the initiation, propagation and coalescence of microcracks are simulated using a mesoscale model, which explicitly represents the heterogeneous material structure of concrete. The mesoscale model is restricted to the damaged parts of the structure, whereas the undamaged regions are simulated on the macroscale. As a result, an adaptive enlargement of the mesoscale model during the simulation is necessary. In the first part of the paper, the generation of the heterogeneous mesoscopic structure of concrete, the finite element discretization of the mesoscale model, the applied isotropic damage model and the cohesive zone model are briefly introduced. Furthermore, the mesoscale simulation of a uniaxial tension test on a concrete prism is presented, and the obtained numerical results are compared to experimental results. The second part focuses on the adaptive heterogeneous multiscale approach. Indicators for the model adaptation and for the coupling between the different numerical models are introduced. The transfer from the macroscale to the mesoscale and the adaptive enlargement of the mesoscale substructure are presented in detail. A nonlinear simulation of a realistic structure using an adaptive heterogeneous multiscale model is presented at the end of the paper to show the applicability of the proposed approach to large-scale structures.
We present an enhancement towards adaptive video training for PhoneGuide, a digital museum guidance system for ordinary camera-equipped mobile phones. It enables museum visitors to identify exhibits by capturing photos of them. In this article, a combined solution of object recognition and pervasive tracking is extended to a client-server system for improving data acquisition and for supporting scale-invariant object recognition.
This thesis focuses on the optimization of free-form adaptive fiber-composite shell structures on the basis of a design method developed here, which rests on an integrated parametric model. The transfer of adaptive, nature-inspired processes offers a far-reaching source of inspiration. Using smart materials, adaptive structures can be realized as material-saving, slender load-bearing structures. Compliance with the ultimate and serviceability limit states is not ensured by the cross-sectional dimensions alone; the required component stiffness can instead be achieved by supplying activation energy (operational energy). In this way, the energy bound in the component dimensions (embodied energy) can be minimized. The developed design method enables the design and optimization of material-minimized shell structures in a multi-stage process, covering, from a structural-engineering perspective, numerical form finding, structural analysis, and actuator and sensor positioning. In addition, sustainability analyses based on a life-cycle assessment are carried out. Because the criteria differ but influence one another, an optimization has to be performed. The thesis presents an approach for defining admissible life-cycle-assessment indicators for smart materials based on the energy difference between a passive and an adaptive structure. These indicators allow future smart materials to be developed under the aspect of holistic sustainability. The generality of the design method and its transferability to other structural systems in construction, and in particular to other material configurations, is demonstrated by means of several examples.
A framed-tube system with multiple internal tubes is analysed using an orthotropic box-beam analogy in which each tube is individually modelled by a box beam that accounts for the flexural and shear deformations as well as the shear-lag effects. A simple numerical modeling technique is proposed for estimating the shear-lag phenomenon in tube structures with multiple internal tubes. The proposed method idealizes the framed-tube structure as equivalent multiple tubes, each composed of four equivalent orthotropic plate panels. The numerical analysis is based on the minimum potential energy principle in conjunction with the variational approach. The shear-lag phenomenon in such structures is studied taking into account the additional bending moments in the tubes, and a detailed numerical analysis of the additional bending moment is carried out. A moment factor is further introduced to characterize the shear-lag phenomenon along with the additional moment.
It is widely accepted that most people spend the majority of their lives indoors. Most individuals do not realize that while indoors, roughly half of the heat exchange affecting their thermal comfort is in the form of thermal infrared radiation. We show that while researchers have been aware of its thermal comfort significance over the past century, systemic error has crept into the most common evaluation techniques, preventing adequate characterization of the radiant environment. Measuring and characterizing radiant heat transfer is a critical component of both building energy efficiency and occupant thermal comfort and productivity. Globe thermometers are typically used to measure mean radiant temperature (MRT), a commonly used metric for accounting for the radiant effects of an environment at a point in space. In this paper we extend previous field work to a controlled laboratory setting to (1) rigorously demonstrate that the existing correction factors in Standard 55 of the American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE) and in ISO 7726 for using globe thermometers to quantify MRT are not sufficient; (2) develop a correction that improves the use of globe thermometers and addresses problems in the current standards; and (3) show that mean radiant temperature measured with ping-pong-ball-sized globe thermometers is not reliable due to a stochastic convective bias. We also provide an analysis of the maximum precision of the globe sensors themselves, an analysis missing from the contemporary literature.
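For context, the forced-convection correction in ISO 7726 that such studies examine converts a globe thermometer reading to MRT as sketched below; the input values are illustrative, not from the study:

```python
# ISO 7726 forced-convection formula for mean radiant temperature (MRT)
# from a globe thermometer: the convective term corrects the globe reading
# for heat exchanged with the moving air.

def mrt_iso7726(t_globe_c, t_air_c, v_air, diameter=0.15, emissivity=0.95):
    """MRT in degC from globe temperature (degC), air temperature (degC),
    air speed (m/s), globe diameter (m) and globe emissivity."""
    h = 1.1e8 * v_air ** 0.6 / (emissivity * diameter ** 0.4)
    mrt_k4 = (t_globe_c + 273.0) ** 4 + h * (t_globe_c - t_air_c)
    return mrt_k4 ** 0.25 - 273.0

# A standard 150 mm globe reading 25 degC in 23 degC air moving at 0.2 m/s:
print(round(mrt_iso7726(25.0, 23.0, 0.2), 1))  # ~26.8 degC
```

The paper's critique is precisely that corrections of this form, with fixed coefficients, do not adequately capture the convective behavior of real (and especially small) globes.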
Search engines are very good at answering queries that look for facts. Still, information needs that concern forming opinions on a controversial topic or making a decision remain a challenge for search engines. Since they are optimized to retrieve satisfying answers, search engines might emphasize a specific stance on a controversial topic in their ranking, amplifying bias in society in an undesired way. Argument retrieval systems support users in forming opinions about controversial topics by retrieving arguments for a given query. In this thesis, we address challenges in argument retrieval systems that concern integrating them in search engines, developing generalizable argument mining approaches, and enabling frame-guided delivery of arguments.
Adapting argument retrieval systems to search engines should start by identifying and analyzing information needs that look for arguments. To identify questions that look for arguments, we develop a two-step annotation scheme that first determines whether the context of a question is controversial and, if so, assigns the question one of several types: factual, method, or argumentative. Using this annotation scheme, we create a question dataset from the logs of a major search engine and use it to analyze the characteristics of argumentative questions. The analysis shows that the proportion of argumentative questions on controversial topics is substantial and that they mainly ask for reasons and predictions. The dataset is further used to develop a classifier that uniquely maps questions to the question types, reaching a convincing F1-score of 0.78.
While the web offers an invaluable source of argumentative content to respond to argumentative questions, it is characterized by multiple genres (e.g., news articles and social fora). Exploiting the web as a source of arguments relies on developing argument mining approaches that generalize over genre. To this end, we approach the problem of how to extract argument units in a genre-robust way. Our experiments on argument unit segmentation show that transfer across genres is rather hard to achieve using existing sequence-to-sequence models.
Another property of text over which argument mining approaches should generalize is topic. Since new topics on which these approaches have not been trained appear daily, argument mining approaches should be developed in a topic-generalizable way. Towards this goal, we analyze the coverage of 31 argument corpora across topics using three topic ontologies. The analysis shows that the topics covered by existing argument corpora are biased toward a small subset of easily accessible controversial topics, hinting at the inability of existing approaches to generalize across topics. In addition to corpus construction standards, fostering topic generalizability requires a careful formulation of argument mining tasks. Same side stance classification is a reformulation of stance classification that makes it less dependent on the topic. First experiments on this task show promising results in generalizing across topics.
To be effective at persuading their audience, users of an argument retrieval system should select arguments from the retrieved results based on what frame they emphasize of a controversial topic. An open challenge is to develop an approach to identify the frames of an argument. To this end, we define a frame as a subset of arguments that share an aspect. We operationalize this model via an approach that identifies and removes the topic of arguments before clustering them into frames. We evaluate the approach on a dataset that covers 12,326 frames and show that identifying the topic of an argument and removing it helps to identify its frames.
The laser beam is a small, flexible, and fast polishing tool. Laser radiation makes it possible to finish many contours and geometries on quartz glass surfaces in a very short time. The temperature that develops during polishing determines the achievable surface smoothing and, as a negative side effect, causes material stresses. To identify which parameters govern the laser polishing process and the resulting surface roughness, and to estimate material stresses, temperature simulations and extensive polishing experiments were carried out. In these experiments, starting and machining parameters were varied and temperatures were measured without contact. The accuracy of the thermal and mechanical simulation was improved by means of advanced FE analysis.
Ever since data processing, in all its complexity, turned to the topic of Computer Integrated Manufacturing, production planning and control has been among the areas in which computer support seemed most urgent. Later, comprehensive business solutions emerged that are (to this day rather imprecisely) referred to as Enterprise Resource Planning (ERP) systems and whose logistics modules also cover production planning functions. All known MRP, PPS, and ERP systems are based on successive planning. Advanced Planning and Scheduling (APS) systems have attracted growing interest since about 1995. Besides demand planning, production planning and scheduling, distribution planning, transportation planning, and supply chain planning, APS systems are expected to provide solutions for the number and locations of production sites and distribution warehouses, the assignment to production sites, capacity determination for workers and equipment per site, inventory per part and warehouse, determination of the required means of transport and the frequency of their use, the assignment of warehouses to production sites and of markets to warehouses, and more. In other words, APS systems complement ERP solutions, use the data already available in the ERP system, and require novel algorithms and (meta-)heuristics. The talk presents and discusses models and real-time algorithms for optimizing logistics for processes with short-term requirements, geographically distributed production, warehousing of raw, intermediate, and end products, and changing transport conditions, from the perspective of practical implementation and application in the form of an ASP solution.
This thesis presents new interactive visualization techniques and systems intended to support users with real-world decisions such as selecting a product from a large variety of similar offerings, finding appropriate wording as a non-native speaker, and assessing an alleged case of plagiarism.
The Product Explorer is a significantly improved interactive Parallel Coordinates display for facilitating the product selection process in cases where many attributes and numerous alternatives have to be considered. A novel visual representation for categorical and ordered data with only few occurring values, the so-called extended areas, in combination with cubic curves for connecting the parallel axes, are crucial for providing an effective overview of the entire dataset and to facilitate the tracing of individual products. The visual query interface supports users in quickly narrowing down the product search to a small subset or even a single product. The scalability of the approach towards a large number of attributes and products is enhanced by the possibility of setting some constraints on final attributes and, therefore, reducing the number of considered attributes and data items. Furthermore, an attribute repository allows users to focus on the most important attributes at first and to bring in additional criteria for product selection later in the decision process. A user study confirmed that the Product Explorer is indeed an excellent tool for its intended purpose for casual users.
The Wordgraph is a layered graph visualization for the interactive exploration of search results for complex keywords-in-context queries. The system relies on the Netspeak web service and is designed to support non-native speakers in finding customary phrases. Uncertainties about the commonness of phrases are expressed with the help of wildcard-based queries. The visualization presents the alternatives for the wildcards in a multi-column layout: one column per wildcard with the other query fragments in between. The Wordgraph visualization displays the sorted results for all wildcards at once by appropriately arranging the words of each column. A user study confirmed that this is a significant advantage over simple textual result lists. Furthermore, visual interfaces to filter, navigate, and expand the graph allow interactive refinement and expansion of wildcard-containing queries.
Furthermore, this thesis presents an advanced visual analysis tool for assessing and presenting alleged cases of plagiarism and provides a three-level approach for exploring the so-called finding spots in their context. The overview shows the relationship of the entire suspicious document to the set of source documents. An intermediate glyph-based view reveals the structural and textual differences and similarities of a set of finding spots and their corresponding source text fragments. Finally, the actual text fragments of a finding spot can be shown in a side-by-side view with a novel structured wrapping of both the source and the suspicious text. The three levels of detail are tied together by versatile navigation and selection operations. Reviews with plagiarism experts confirm that this tool can effectively support their workflow and provides a significant improvement over existing static visualizations for assessing and presenting plagiarism cases.
The three main contributions of this research have a lot in common aside from being carefully designed and scientifically grounded solutions to real-world decision problems. The first two visualizations facilitate the decision for a single possibility out of many alternatives, whereas the latter two deal with text at varying levels of detail. All visual representations are clearly structured based on horizontal and vertical layers contained in a single view, and they all employ edges for depicting the most important relationships between attributes, words, or levels of detail. A detailed analysis in the context of the established decision-making literature reveals that important steps of common decision models are well supported by the three visualization systems presented in this thesis.
For the dynamic behavior of lightweight structures such as thin shells and membranes exposed to fluid flow, the interaction between the two fields is often essential. Computational fluid-structure interaction provides a tool to predict this interaction and to complement, or eventually replace, expensive experiments. Partitioned analysis techniques enjoy great popularity for the numerical simulation of these interactions. This is due to their computational superiority over simultaneous, i.e. fully coupled monolithic, approaches, as they allow the independent use of suitable discretization methods and modular analysis software. For the fluid we use GLS-stabilized finite elements on a moving domain, based on the incompressible unsteady Navier-Stokes equations, where the formulation guarantees geometric conservation on the deforming domain. The structure is discretized by nonlinear, three-dimensional shell elements.
Commonly used sequential staggered coupling schemes may exhibit instabilities due to the so-called artificial added mass effect. The best remedy to this problem is to invoke subiterations that guarantee kinematic and dynamic continuity across the fluid-structure interface. Since iterative coupling algorithms are computationally very costly, their convergence rate is decisive for their usability. To ensure and accelerate the convergence of this iteration, the updates of the interface position are relaxed. The time-dependent 'optimal' relaxation parameter is determined automatically, without any user input, by exploiting a gradient method or applying an Aitken iteration scheme.
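The Aitken scheme referred to above is a standard scalar update rule for the relaxation parameter, driven by successive interface residuals. A minimal sketch, assuming the fluid and structure solves are wrapped in a single fixed-point map over the interface position (the function name, initial relaxation factor, and tolerance are illustrative, not the authors' implementation):

```python
import numpy as np

def aitken_fsi_iteration(solve_fluid_then_structure, d0, omega0=0.5,
                         tol=1e-10, max_iter=100):
    """Subiteration with dynamic Aitken relaxation of the interface update.
    `solve_fluid_then_structure` maps an interface position d to a new
    proposal d_tilde (one fluid solve followed by one structure solve)."""
    d = d0.copy()
    r_prev = None
    omega = omega0
    for k in range(max_iter):
        d_tilde = solve_fluid_then_structure(d)
        r = d_tilde - d                      # interface residual
        if np.linalg.norm(r) < tol:
            return d, k
        if r_prev is not None:
            dr = r - r_prev
            omega = -omega * (r_prev @ dr) / (dr @ dr)  # Aitken update
        d = d + omega * r                    # relaxed interface update
        r_prev = r
    return d, max_iter
```

On a toy fixed-point map whose unrelaxed iteration diverges (e.g. d ↦ -1.5 d + 2.5), the Aitken-relaxed iteration converges in a few steps, illustrating why the relaxation is essential when the artificial added mass effect makes the plain staggered scheme unstable.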
This ethnographic study reports on emerging work processes and practices observed in the AEC (Architecture/Engineering/Construction) Global Teamwork program: what people experience when interacting with and through collaboration technologies, why they practice the way they do, and how the practice fits into the environment and changes work patterns. It presents the experience of two typical yet extreme high-performance AEC teamwork cases in adopting and adapting to collaboration technologies, and how these technologies impact their work processes in practice. The findings illustrate the importance of collaboration technologies in cross-disciplinary, global teamwork. Observations indicate that high-performance teams that use collaboration technologies effectively exhibit collaboration readiness at an early stage and manage to define a "third way" to meet the demands of the cross-disciplinary, multicultural, and geographically distributed AEC workspace. The observations and implications serve as the blueprint for yearly innovations and improvements to the design of the AEC Global Teamwork program.
Aerodynamic Analysis of Slender Vertical Structure and Response Control with Tuned Mass Damper
(2015)
The analysis of vortex-induced vibration has gained increasing interest in the practical field of civil engineering. The phenomenon often occurs in long, slender vertical structures such as high-rise buildings, towers, chimneys, or bridge pylons, resulting in unfavourable responses that might lead to the collapse of the structure. It appears when the frequency of the vortex shedding produced in the wake of the body meets the natural frequency of the structure. Even though this phenomenon does not necessarily generate a divergent amplitude response, the structure may still fail due to fatigue damage.
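The lock-in condition described above is conventionally estimated from the Strouhal relation f_s = St · U / d: the critical mean wind speed is the one at which the shedding frequency matches a natural frequency. A minimal sketch under that textbook assumption (St ≈ 0.2 is a typical value for circular cross-sections, not a value taken from this thesis):

```python
def critical_wind_speed(f_n, d, strouhal=0.2):
    """Mean wind speed U at which the vortex-shedding frequency
    f_s = St * U / d equals the natural frequency f_n (lock-in).

    f_n: natural frequency [Hz], d: across-wind dimension [m]."""
    return f_n * d / strouhal
```

For example, a structure with a 1 Hz bending mode and a 2 m diameter would, under these assumptions, experience lock-in around a mean wind speed of 10 m/s.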
To reduce the effect of vortex-induced vibration, engineers widely use passive vibration control systems. This thesis studies the effect of a tuned mass damper. The objective is to simulate the effect of a tuned mass damper in reducing unfavourable responses due to vortex-induced vibration, starting with validation of the numerical model against a wind tunnel test report. The reference structure used in the thesis is the Stonecutters Bridge, Hong Kong.
A numerical solver for computational fluid dynamics named VXflow, developed by Morgenthal [6], is utilized for the wind and structure simulation. The comparison between the numerical model and the wind tunnel results shows a maximum tip displacement difference of 10% for the model of the fully erected freestanding tower. The tuned mass damper (TMD) model itself is built separately in the finite element software SOFiSTiK, and the effective damping obtained from this model is then applied to the modal input data of the VXflow simulation. With a single TMD with a mass ratio of 0.5% relative to the modal mass of the first bending mode, the maximum tip displacement is reduced by 67% on average.
Considering construction limitations and the robustness of the TMD, the effects of multiple TMDs within a structure are also studied. An uncoupled procedure of applying aeroelastic loads obtained from VXflow within the finite element software SOFiSTiK is used to determine the optimum distribution and optimum mass ratio of multiple tuned mass dampers. The remaining TMD properties are calculated with Den Hartog's formula. The results are as follows: the peak displacement in the case of multiple TMDs distributed with polynomial spacing achieves 7.8% more reduction than that with equal spacing. The optimum tuned mass damper mass is achieved with a mass ratio of 1.25% of the modal mass of the first bending mode in the across-wind direction.
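Den Hartog's classical formulas, cited above for deriving the remaining TMD properties from the chosen mass ratio, give the optimal frequency ratio and damping ratio of an absorber attached to an undamped primary system. A minimal sketch (the function layout is illustrative; the thesis' actual parameter values are not reproduced here):

```python
import math

def den_hartog_tmd(m_modal, f_structure, mu):
    """Optimal TMD parameters per Den Hartog's classical formulas
    for an undamped primary structure under harmonic excitation.

    m_modal: modal mass of the targeted mode [kg],
    f_structure: natural frequency of that mode [Hz],
    mu: mass ratio m_tmd / m_modal."""
    m_tmd = mu * m_modal                       # absorber mass
    f_tmd = f_structure / (1.0 + mu)           # optimal tuning f_opt = 1/(1+mu)
    zeta = math.sqrt(3.0 * mu / (8.0 * (1.0 + mu) ** 3))  # optimal damping ratio
    omega_tmd = 2.0 * math.pi * f_tmd
    k_tmd = m_tmd * omega_tmd ** 2             # spring stiffness [N/m]
    c_tmd = 2.0 * zeta * m_tmd * omega_tmd     # dashpot coefficient [Ns/m]
    return m_tmd, f_tmd, k_tmd, c_tmd, zeta
```

For the 1.25% mass ratio mentioned above, these formulas yield an absorber tuned slightly below the structural frequency with a damping ratio of roughly 7%.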
Stonecutters and Sutong Bridge have pushed the world record for main span length of cable-stayed bridges to over 1000m. The design of these bridges, both located in typhoon prone regions, is strongly influenced by wind effects during their erection. Rigorous wind tunnel test programmes have been devised and executed to determine the aerodynamic behaviour of the structures in the most critical erection conditions. Testing was augmented by analytical and numerical analyses to verify the safety of the structures throughout construction and to ensure that no serviceability problems would affect the erection process. This paper outlines the wind properties assumed for the bridge sites, the experimental test programme with some of its results, the dynamic properties of the bridges during free cantilevering erection and the assessment of their aerodynamic performance. Along the way, it discusses the similarities and some revealing differences between the two bridges in terms of their dynamic response to wind action.
The accurate representation of aerodynamic forces is essential for a safe, yet reasonable design of long-span bridges subjected to wind effects. In this paper, a novel extension of the Pseudo-three-dimensional Vortex Particle Method (Pseudo-3D VPM) is presented for Computational Fluid Dynamics (CFD) buffeting analysis of line-like structures. This extension entails an introduction of free-stream turbulent fluctuations, based on velocity-based turbulence generation. The aerodynamic response of a long-span bridge is obtained by subjecting the 3D dynamic representation of the structure to correlated free-stream turbulence in two-dimensional (2D) fluid planes, which are positioned along the bridge deck. The span-wise correlation of the free-stream turbulence between the 2D fluid planes is established based on Taylor's hypothesis of frozen turbulence. Moreover, the application of the laminar Pseudo-3D VPM is extended to multimode flutter analysis. Finally, the structural response from the Pseudo-3D flutter and buffeting analyses is verified against the response computed using the semi-analytical linear unsteady model in the time domain. Notable merits of the turbulent Pseudo-3D VPM with respect to the linear unsteady model are the consideration of 2D aerodynamic nonlinearity, nonlinear fluid memory, vortex shedding, and local non-stationary turbulence effects in the aerodynamic forces. The good agreement of the responses of the two models in the 3D analyses demonstrates the applicability of the Pseudo-3D VPM for aeroelastic analyses of line-like structures under turbulent and laminar free-stream conditions.
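Taylor's hypothesis of frozen turbulence, used above to correlate the 2D fluid planes, states that turbulent fluctuations are convected past downstream points essentially unchanged at the mean speed U, so a plane a distance dx downstream sees the same fluctuation series delayed by dx/U. A minimal sketch of that delay relation (the function name and the rounding to whole time steps are illustrative simplifications):

```python
import numpy as np

def frozen_turbulence_shift(u_plane, dx, U, dt):
    """Taylor's hypothesis: the velocity fluctuation series sampled at an
    upstream plane reappears at a plane dx downstream, delayed by the
    convection time dx / U (here rounded to whole samples of step dt)."""
    lag = int(round(dx / (U * dt)))
    return np.roll(u_plane, lag)
```

Under this relation, the span-wise correlation between planes is fully determined by the single-plane turbulence signal and the convection delay, which is what makes the pseudo-3D decomposition into independent 2D fluid planes tractable.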
In this work, practice-based research is conducted to rethink the understanding of aesthetics, especially in relation to current media art. Granted, we live in times when technologies merge with living organisms, but we also live in times that provide unlimited resources of knowledge and maker tools. I raise the question: In what way does the hybridization of living organisms and non-living technologies affect art audiences in the culture that may be defined as Maker culture? My hypothesis is that active participation of an audience in an artwork is inevitable for experiencing the artwork itself, while also suggesting that the impact of the umwelt changes the perception of an artwork. I emphasize artistic projects that unfold through mutual interaction among diverse peers, including humans, non-human organisms, and machines. In my thesis, I pursue collaborative scenarios that lead to the realization of artistic ideas: (1) the development of ideas by others influenced by me and (2) the materialization of my own ideas influenced by others. By developing the scenarios of collaborative work as an artistic experience, I conclude that the role of an artist in Maker culture is to mediate different types of knowledge and different positions, whereas the role of the audience is to actively engage in the artwork itself. At the same time, aesthetics as experience is triggered by the other, including living and non-living actors. It is intended that the developed methodologies could be further adapted in artistic practices, philosophy, anthropology, and environmental studies.
Scientific colloquium held from 27 to 30 June 1996 at the Bauhaus-Universität Weimar on the topic 'Techno-Fiction. On the Critique of Technological Utopias'.
The current situation in structural design is characterized by the cooperative interaction of a large number of specialists from various disciplines (architecture, structural engineering, etc.) in temporary project consortia. Coordinating the resulting complex, dynamic, and interconnected planning processes frequently leads to planning deficiencies and losses in quality. This article shows how agent technology can provide approaches to improving this planning situation. To this end, an agent model for networked cooperative structural design is presented and demonstrated using the design of a pedestrian arch bridge. The agent model captures (1) the participating specialist planners and organizations, (2) the structure-specific planning processes, (3) the associated (partial) product models, and (4) the engineering software used. From this, three submodels are derived: (1) an agent-based cooperation model, (2) agent-based product model integration, and (3) a model for agent-based software integration. The article focuses on the presentation of the agent-based cooperation model.
Aktionsräume in Dresden
(2012)
This study examines the activity spaces of respondents in Dresden by means of a standardized survey (n=360). The activities underlying the activity spaces are differentiated into shopping for daily needs, going out (e.g., to a café, pub, or restaurant), outdoor recreation (e.g., going for walks, using green spaces), and private socializing (e.g., parties, visiting relatives/friends). The activity radius is differentiated into the respondent's own neighbourhood, adjacent neighbourhoods, and the rest of the city. To form a comprehensive indicator of a respondent's average activity radius from the four activities considered, a model for an activity-radius score is developed. The study concludes that respondents' age has a significant, albeit small, influence on the activity radius. Net household income has a significant (with reservations), likewise small, influence on respondents' everyday activities.
Within Collaborative Research Centre 524, 'Materials and Structures for the Revitalization of Buildings', the primary aim of subproject D2, 'Planning-relevant digital building survey and information system', is the development of methods and techniques for capturing as-built data on site, or by evaluating existing documentation, and for their direct integration into a building model. [15] The project establishes foundations regarding aspects of use by specialist planners and the scientific evaluation of methodological procedures in building surveys, incorporating software-based methods. This includes matters of structuring, the elaboration of systematics for the essential sets of information and data, the derivation of methods for non-destructive capture, and the representation of planning-relevant building information in digital systems. In building surveying, geodetic techniques such as tacheometry, photogrammetry, and handheld laser distance measurement have long been employed alongside traditional methods. In as-built surveying practice, tacheometry is currently the geodetic surveying technique most frequently used for interior and exterior building surveys. [9] [3] Starting from the current situation in building surveying, it is shown to what extent, given the current state of the art, the tacheometers used in geodesy can be employed directly in building surveys. A further focus describes the concept of a computer-supported building survey system based on reflectorless tacheometric instruments. The concept addresses not only the measurement itself but adequately supports the entire building survey process, from the first walk-through to the constructional structuring. Finally, prospective possibilities in building surveying are discussed.
We present an algebraically extended 2D image representation in this paper. In order to obtain more degrees of freedom, a 2D image is embedded into a certain geometric algebra. Combining methods of differential geometry, tensor algebra, monogenic signal and quadrature filter, the novel 2D image representation can be derived as the monogenic extension of a curvature tensor. The 2D spherical harmonics are employed as basis functions to construct the algebraically extended 2D image representation. From this representation, the monogenic signal and the monogenic curvature signal for modeling intrinsically one and two dimensional (i1D/i2D) structures are obtained as special cases. Local features of amplitude, phase and orientation can be extracted at the same time in this unique framework. Compared with the related work, our approach has the advantage of simultaneous estimation of local phase and orientation. The main contribution is the rotationally invariant phase estimation, which enables phase-based processing in many computer vision tasks.
Development of an algorithm for a nonlinear material law for fully automatic crack-propagation simulation using the meshfree methods developed at the Institute of Structural Mechanics. Following continuum plasticity, and using a work-based formulation that combines the Mode I and Mode IIa fracture energies for sensitive structures with a non-associated flow rule, the crack displacement quantities (crack opening width and crack sliding) are determined iteratively. This makes it possible to represent the dilatancy effect as well as the interlocked contact surface and the resulting increased shear resistance. Implementation relies on the very efficient implicit closest point projection iteration scheme based on a 3D contact formulation (contact elements). A 2D implementation was carried out in the research software SLang of the Institute of Structural Mechanics at Bauhaus-Universität Weimar. The model characteristics were verified with significant loading states. Two application examples of crack-propagation computation employ the implemented material law, and investigations regarding the material parameters were carried out.
It is a picture from bygone days: a knowledge-hungry student, in search of sound scholarly information, makes their way to the most sacred place of all books, the university library. For some time now, however, students have no longer been found only in libraries but, increasingly, on the Internet. There they search for and find digital books, so-called e-books.
How can the transformation brought about by the arrival of the e-book in the established research system be described, what consequences can be drawn from it, and will everything eventually become digital, even the library? An eleven-member team of experts from Germany and Switzerland explores these questions during the two-day conference.
The Weimar E-DOC days address the change in the institutional fabric surrounding the digital book. Traditionally, publishers and libraries have been important components of the supply of knowledge in study and teaching. With the advent of the e-book, however, research is shifting more and more to the Internet. The search engine Google appears as a new competitor to classical library research. Publishers, too, must respond more strongly to the new challenges of a digital book market.
In cooperation with the university library and the master's programme in Media Management, students, researchers, librarians, and publishers discuss how the e-book is changing our engagement with literature. The conference proceedings compile all perspectives and results for later reference.
Alles Heritage?
(2016)
The broadening of the concept of the monument has led to an expansion of remembering, protecting, preserving, and handing down into all areas of life. Today, not only barns, petrol stations, and large housing estates are listed as part of the historical heritage; cultural practices and customs are also declared 'intangible' world heritage. The consequence of this development, criticized as 'monument inflation', is intensified competition for attention and funding. The latter is reflected not least in an increasingly crowd-pleasing staging of heritage, promoted above all by the tourism industry.
In the age of the 'heritage industry' (Robert Hewison, 1987), cultural assets not only constitute an important locational factor; 'heritage' itself is increasingly constructed by means of international charters, declarations, plaques, and social media campaigns. This occurs predominantly within an anglophone discourse which, strictly speaking, cannot be connected to the German-language discussions shaped by the history of concepts and ideas. There, a concept of heritage that is comprehensive in a similar sense can already be noted for the Heimatschutz movement, yet a professionally differentiated monument preservation as we know it today struggles to integrate such a universal concept. While 'heritagization' by international organizations leads to a shift of focus from built monuments to the general preservation of cultural heritage (including the intangible, see for instance the Burra Charter), the monument and heritage discourse in the German-speaking countries has so far remained clearly concentrated on built monuments and urban ensembles. The latter is also evident in the run-up to the European Cultural Heritage Year 2018, which in Germany, unlike in other European countries, is largely carried by monument protection organizations.
The turn towards heritage can likewise be observed in new research fields and training paths in monument preservation. Today, 'heritage tourism' and 'dark heritage' are investigated as specific forms of 'monument use', and, complementing the classical disciplines of art history, architecture, and planning, 'heritage management' and 'heritage studies' form undergraduate degree programmes. The latter now also holds for the German-speaking countries. The path thus leads away from specialized connoisseurship towards the all-rounder, with new emphases on marketing, administration, and communication. With regard to socio-cultural developments, it becomes apparent that the concept of heritage is used largely affirmatively, above all in economic and political discourse. Heritage thus goes hand in hand with a certain moral and missionary impetus, tied to a (cultural) politics of 'identity formation'. In times when 'identity' once again functions as a political catchword in societal discourse, it seems all the more important to reflect critically on the scholarly engagement with heritage, its underlying conceptual frameworks, and its prescriptive programmes.
In general, the research project, particularly through the interviews with university representatives, confirmed that high-quality teaching and research require high-quality space in sufficient quantity.
One goal of the research is the development of models for the allocation and management of space resources in universities. Starting from accounts and experiences of space management in companies, other areas of public administration, and research institutions, possible management procedures for universities were examined. A management model for universities was developed that responds to both internal and externally effective framework conditions.
Die hochschulinterne Flächenallokation wird zum einen maßgeblich von externen Rahmen-bedingungen und zum zweiten von internen Prozessen, Abläufen und Strukturen beeinflusst. Die Kenntnis dieser Bedingungen wird als Voraussetzung für die Benennung von Erfolgsfak-toren für die Implementation neuer Steuerungsmodelle angenommen. Analysiert wurden daher die liegenschaftspolitischen und die organisatorischen Rahmenbedingungen sowie die steuerungsrelevanten Eigenschaften der Flächen selber.
Introduction:
Art and the art world have changed considerably in recent decades and will in all likelihood change even more rapidly and profoundly in the future. My dissertation analyses the current state of the art world and the consequences to be drawn from it for the expected development, particularly with regard to the training of artists at art academies. There, in my view, the professional aspects of the artistic field (inside and outside the academy) should be explained and taught more intensively.
The thesis focuses on the following four aspects and their interconnections: the artist, the world of work, training, and the net and networking.
These findings are based on my research into the four main topics in the course of my teaching and my own artistic practice in recent years; they reflect that work, are meant at the same time to serve as an example of its application, and offer an overview of how it plays out in practice.
Note
The files presented here (in five parts) constitute the digital publication of my dissertation within the doctoral programme "Kunst und Design" at the Bauhaus-Universität Weimar.
This publication is open source and will be developed further in an open, collaborative process. The current version will always be available at: http://phd.nts.is Further formats for download can be found there as well, along with the complete (markdown-formatted) source text.
(For copyright and licensing reasons, some images have been omitted from this version of the image plates. The printed edition contains all image plates and is available in the library of the Bauhaus-Universität.)
Parts:
- Thesis paper
- PhD dissertation
- Image plates
- The 5-Year Plan
- KIOSK09 catalogue
This thesis presents a transport-planning study of the traffic access to the Klinikum Bad Hersfeld hospital. On the theoretical basis of the transport planning process, the planning procedure was described methodically. A status analysis of the concrete case, the traffic access to the Klinikum Bad Hersfeld, was then carried out in close cooperation with the residents of the adjacent housing area, the city of Bad Hersfeld, the police and the hospital. Within this analysis a system of objectives was established and a deficiency analysis performed. On the basis of the findings, various proposed solutions were then discussed. Finally, a preferred variant was identified by means of an evaluation system and a simple ranking procedure, and its advantages and disadvantages were described.
The thesis identifies the main reasons why low-fired gypsum binders rich in beta-hemihydrate (known industrially as stucco plaster) often exhibit very different properties.
The proportion of hemihydrate formed from the strongly hygroscopic anhydrite III (A III) through reaction with atmospheric moisture represents a considerable influence that has so far gone entirely unnoticed. This hemihydrate formed from A III shows different surface properties and a reaction behaviour that deviates from freshly calcined beta-hemihydrate.
It becomes apparent how far-reaching the influence of physico-chemical surface processes such as adsorption and condensation is. These processes not only dissipate the surface energy of the particles but also reduce the heat of hydration; physical processes thus have thermodynamic effects. The acting and resulting parameters of ageing interact in a highly complex manner, as follows:
The dominant binder properties, setting behaviour and water demand, change through ageing both because of phase transformations and because of changes in the crystallites. The change in surface characteristics is equally influential. The effect of ageing on reactivity goes well beyond the decomposition of anhydrite III, the depletion of material capable of setting, and the accelerating effect of ageing dihydrate. The growth of the hemihydrate crystallites and the decrease in internal energy, together with the energetically favourable spontaneous loading of the crystal-lattice channels of the smallest anhydrite III crystallites with water vapour, must be highlighted as the principal causes of the loss of reactivity due to ageing. The decrease in specific surface area and surface energy also affects the dissolution and hydration processes. The anhydrite II crystallized on the surface of anhydrite III continues to inhibit dissolution even after the transformation of A III into hemihydrate. This effect is cancelled or diminished by the ageing-related formation of dihydrate, which sets in under persistent exposure to moisture. Although dihydrate is known for its accelerating effect, ageing dihydrate develops only a weak nucleating effect because of its particular formation within the condensed water layer, which is only a few molecular layers thick.
A key finding concerns the bonding character of the superstoichiometric water. A purely physical bond can be demonstrated in this respect. The water designated in the thesis as more strongly adsorptively bound occurs, apart from the free moisture, exclusively in the presence of hemihydrate. This relationship is established for the first time and is explained by the higher surface energy of hemihydrate resulting from its crystal chemistry.
Money is a subject that leaves none of us indifferent, affecting our everyday practical lives as well as society as a whole. Starting from various aspects, this publication of lectures of the University of Erfurt, in cooperation with the Fachhochschule Erfurt, illuminates the full range of money's facets. Economists have their say as much as ecologists, historians give an insight into the history of the monetary economy, and sociologists show the effects of money on lifestyles. What role does money play in mathematics teaching, and what is the "Bürgergeld" about? Hans Tietmeyer closes the volume with an insight into the history and political context of the European monetary union, offering the reader interesting and surprising insights into a field that is not only pecuniary.
Microelectronics and microsystems technology, combined with information and communication technology, now make it possible to bring computing power and communication capability into our private and professional environments in the smallest formats, with minimal energy consumption and at favourable prices. Examples are the notebook PC, the PDA, the mobile phone and the in-car navigation system. Electronics embedded in components, devices and systems has likewise become a matter of course; familiar examples from building services are microprocessors in heating and alarm systems as well as in components such as smoke and motion detectors. We are approaching what a few years ago was still called a vision: computing power available everywhere (ubiquitous computing) and an everyday environment permeated by information processing (pervasive computing). If the building-services (TGA) components, like the larger computer components (e.g. PCs, servers), are linked via data interfaces into spatially distributed networks (e.g. Internet, intranet) and programmed with adequate cross-system intelligence (software), novel functionalities can emerge in the respective application environment (ambient intelligence, AmI, [1]). For buildings and rooms in particular, this offers a great opportunity to overcome the separation of trades that has so far stood in the way of a holistic system design encompassing architecture, building physics, building services (TGA) and building automation (GA). Systemically integrated "smart areas" (after Prof. Becker, FH Biberach) arise for various application purposes.
Examples of AmI solutions in the real-estate sector discussed in this paper are room systems for the automatic and reliable detection of emergencies, e.g. in nursing homes; room systems in offices or hotels that automatically adapt climate and lighting to the use and the user; and electronic assistance for the construction and operation of buildings. At the Duisburg inHaus innovation centre for intelligent room and building systems of the Fraunhofer-Gesellschaft, first solutions based on this novel approach have been conceived, developed and tested in recent years. After briefly outlining the ambient-intelligence approach, the paper uses examples to describe possibilities for transferring this new technology to rooms and buildings. It closes with a summary and an assessment of the future potential of ambient intelligence in rooms and buildings.
The dissertation "Ambiguität im zeitgenössischen Film – Flugversuche" (Ambiguity in Contemporary Film – Test Flights) traces a popular narrative tendency in cinema, namely ambiguity, and maps out its dramaturgical potential as well as its ethical (or micropolitical) implications. To describe typical patterns in the perception of ambiguous film narratives that operate already at the pre-conscious level of affect, I draw on concepts from Alfred North Whitehead's process philosophy and on their more recent reformulations by Gilles Deleuze and Brian Massumi. Starting from Alejandro González Iñárritu's "Babel" (2006), the first part takes the reader on a virtual round trip through selected film examples anchored culturally in present-day Japan. In the second part I describe and reflect on my methodological approach in the first phases of developing the material for a suggestive feature-film project, and contextualize it with interviews with contemporary auteur filmmakers who develop similar narrative modes.
Low-skilled labor makes up a significant part of the construction sector, performing daily production tasks that do not require specific technical knowledge or certified skills. Today, the construction market demands increasing skill levels. Many jobs that were once undertaken by low-skilled or unskilled labor now demand some kind of formal skills. The number of jobs requiring low-skilled labor is continually decreasing due to technological advancement and globalization; jobs that previously required little or no training now require skilled people to perform the tasks appropriately. The study aims at improving the employability of less skilled manpower by finding ways to instruct them in performing construction tasks. A review of existing task-instruction methodologies in construction, and of the gaps within them, warrants an appropriate way to train and instruct low-skilled workers for construction tasks. The idea is to ensure the required construction quality with technological and didactic aids that seem particularly purposeful for preparing potential workers for construction tasks without exposing them to existing communication barriers. A BIM-based technology is considered promising, along with the integration of visual directives/animations to elaborate the construction tasks scheduled to be carried out on site.
American images of Utopia
(1997)
Scientific colloquium held 27-30 June 1996 at the Bauhaus-Universität in Weimar on the topic "Techno-Fiction. Zur Kritik der technologischen Utopien" (Techno-Fiction: on the critique of technological utopias).
This cumulative dissertation discusses - by the example of four subsequent publications - the various layers of a tangible interaction framework, which has been developed in conjunction with an electronic musical instrument with a tabletop tangible user interface. Based on the experiences that have been collected during the design and implementation of that particular musical application, this research mainly concentrates on the definition of a general-purpose abstraction model for the encapsulation of physical interface components that are commonly employed in the context of an interactive surface environment. Along with a detailed description of the underlying abstraction model, this dissertation also describes an actual implementation in the form of a detailed protocol syntax, which constitutes the common element of a distributed architecture for the construction of surface-based tangible user interfaces. The initial implementation of the presented abstraction model within an actual application toolkit is comprised of the TUIO protocol and the related computer-vision based object and multi-touch tracking software reacTIVision, along with its principal application within the Reactable synthesizer. The dissertation concludes with an evaluation and extension of the initial TUIO model, by presenting TUIO2 - a next generation abstraction model designed for a more comprehensive range of tangible interaction platforms and related application scenarios.
Numerical simulation of physical phenomena, such as electromagnetics, structural mechanics and fluid mechanics, is essential for the cost- and time-efficient development of mechanical products of high quality. It allows the behavior of a product or a system to be investigated long before the first prototype is manufactured.
This thesis addresses the simulation of contact mechanics. Mechanical contacts appear in nearly every product of mechanical engineering; gearboxes, roller bearings, valves and pumps are only some examples. Simulating these systems, not only for the maximal/minimal stresses and strains but also for the stress distribution in the case of tribo-contacts, is a challenging task from a numerical point of view.
Classical procedures like the Finite Element Method suffer from the nonsmooth representation of contact surfaces with discrete Lagrange elements. On the one hand, an error due to the approximate description of the surface is introduced. On the other hand, it is difficult to attain a robust contact search because surface normals cannot be described in a unique form at element edges.
This thesis therefore introduces a novel approach, an adaptive isogeometric contact formulation based on polynomial Splines over hierarchical T-meshes (PHT-Splines), for the approximate solution of the nonlinear contact problem. It provides a more accurate, robust and efficient solution compared to conventional methods. During the development of this method the focus was laid on the solution of static, frictionless contact problems in 2D and 3D in which the structures undergo small deformations.
The mathematical description of the problem entails a system of partial differential equations and boundary conditions which model the linear elastic behaviour of continua. Additionally, it comprises side conditions, the Karush-Kuhn-Tucker conditions, which prevent the contacting structures from non-physical penetration. The mathematical model must be transformed into its integral form for approximation of the solution. Employing a penalty method, contact constraints are incorporated by adding the resulting equations in weak form to the overall set of equations. For an efficient spatial discretization of the bulk, and especially of the contact boundary of the structures, the principle of Isogeometric Analysis (IGA) is applied. Isogeometric finite element methods provide several advantages over conventional finite element discretizations. Surface approximation with Non-Uniform Rational B-Splines (NURBS) allows a robust numerical solution of the contact problem with high accuracy, based on an exact geometry description including surface smoothness.
The numerical evaluation of the contact integral is challenging due to the generally non-conforming meshes of the contacting structures. In this work the highly accurate Mortar Method is applied in the isogeometric setting for the evaluation of contact contributions. This leads to an algebraic system of equations that is linearized and solved in sequential steps, a procedure known as the Newton-Raphson Method. Based on numerical examples, the advantages of the isogeometric approach with classical refinement strategies, like p- and h-refinement, are shown, and the influence of relevant algorithmic parameters on the approximate solution of the contact problem is verified. One drawback of the spline approximation of stresses, though, is its lack of accuracy at the contact edge, where the structures change their boundary from contact to no contact and where the solution features a kink. The approximation with smooth spline functions yields numerical artefacts in the form of non-physical oscillations.
This property of the numerical solution is not only a drawback for the simulation of, e.g., tribological contacts; it also negatively influences the convergence of iterative solution procedures. Hence, the NURBS-discretized geometries are transformed to Polynomial Splines over Hierarchical T-meshes (PHT-Splines) for local refinement along contact edges, in order to reduce the artefact of pressure oscillations. NURBS have a tensor-product structure, which does not allow refining only certain parts of the geometrical domain while leaving other parts unchanged. Due to the Bézier extraction underlying the transformation from NURBS to PHT-Splines, the connected mesh structure is broken up into separate elements. This allows an efficient local refinement along the contact edge.
Before single elements are refined in a hierarchical form with cross-insertion, existing basis functions must be modified or eliminated. This process of truncation assures the local and global linear independence of the refined basis, which is needed for a unique approximate solution. The contact boundary is a priori unknown, so local refinement along the contact edge, especially for 3D problems, is not straightforward. In this work the use of an a posteriori error estimation procedure, the Super Convergent Recovery Solution Based Error Estimation Scheme, together with the Dörfler marking method, is suggested for the spatial search of the contact edge.
Numerical examples show that the developed method improves the quality of solutions along the contact edge significantly compared to NURBS based approximate solutions. Also, the error in maximum contact pressures, which correlates with the pressure artefacts, is minimized by the adaptive local refinement.
In a final step the practicability of the developed solution algorithm is verified by an industrial application: The highly loaded mechanical contact between roller and cam in the drive train of a high-pressure fuel pump is considered.
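The penalty regularization of the Karush-Kuhn-Tucker non-penetration conditions, solved with Newton-Raphson iteration as described above, can be illustrated on a deliberately tiny model. The following is a 1D sketch under assumed parameters, not the thesis code: a linear spring pushed by a force toward a rigid wall.

```python
# Minimal 1D sketch (illustrative, not the thesis implementation): a linear
# spring of stiffness k is pushed by a force F toward a rigid wall at gap g.
# The KKT non-penetration condition u <= g is regularized with a penalty
# stiffness eps, and the nonlinear residual is solved with Newton-Raphson.
def solve_penalty_contact(k=1.0, F=2.0, g=1.0, eps=1e6, tol=1e-10, max_iter=50):
    u = 0.0
    for _ in range(max_iter):
        pen = max(u - g, 0.0)                # penetration (contact active if > 0)
        R = k * u - F + eps * pen            # equilibrium residual
        if abs(R) < tol:
            break
        K = k + (eps if pen > 0.0 else 0.0)  # consistent tangent stiffness
        u -= R / K                           # Newton update
    p = eps * max(u - g, 0.0)                # contact pressure (penalty approx.)
    return u, p
```

With the defaults, the unconstrained solution u = F/k = 2 violates the gap g = 1, so contact becomes active; the converged contact force approaches F - k*g = 1 while the residual penetration scales with 1/eps, which is the characteristic trade-off of the penalty method.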
In the field of engineering, surrogate models are commonly used to approximate the behavior of a physical phenomenon in order to reduce computational costs. Generally, a surrogate model is created from a set of training data, where a typical method for the statistical design is Latin hypercube sampling (LHS). Even though a space-filling distribution of the training data is reached, the sampling process takes no information on the underlying behavior of the physical phenomenon into account, and new data cannot be sampled in the same distribution if the approximation quality is not sufficient. Therefore, in this study we present a novel adaptive sampling method based on a specific surrogate model, least-squares support vector regression. The adaptive sampling method generates training data based on the uncertainty in the local prognosis capabilities of the surrogate model: areas of higher uncertainty require more sample data. The approach offers a cost-efficient calculation due to the properties of least-squares support vector regression. The opportunities of the adaptive sampling method are demonstrated in comparison with LHS on different analytical examples. Furthermore, the adaptive sampling method is applied to the calculation of global sensitivity values according to Sobol, where it shows faster convergence than the LHS method. With the applications in this paper it is shown that the presented adaptive sampling method improves the estimation of global sensitivity values, hence noticeably reducing the overall computational costs.
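The loop described here — fit the surrogate, locate the region of highest local uncertainty, sample there — can be sketched as follows. The LS-SVR dual system is standard; the leave-nearest-out uncertainty proxy and all parameter values are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def rbf(A, B, sigma):
    # RBF kernel matrix between two 1D point sets
    d2 = (A[:, None] - B[None, :]) ** 2
    return np.exp(-d2 / (2.0 * sigma ** 2))

def lssvr_fit(x, y, gamma=100.0, sigma=0.2):
    # LS-SVR dual problem: solve [[0, 1^T], [1, K + I/gamma]] [b; a] = [0; y]
    n = x.size
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = rbf(x, x, sigma) + np.eye(n) / gamma
    sol = np.linalg.solve(A, np.concatenate(([0.0], y)))
    return sol[1:], sol[0]                      # dual weights a, bias b

def lssvr_predict(xs, x, a, b, sigma=0.2):
    return rbf(xs, x, sigma) @ a + b

def adaptive_sample(f, x_train, candidates, n_add, sigma=0.2):
    # Repeatedly add the candidate point with the largest local uncertainty,
    # measured as the prediction change when the nearest training point is
    # left out (a cheap leave-one-out-style proxy; an assumption of this sketch).
    x = x_train.copy()
    for _ in range(n_add):
        a, b = lssvr_fit(x, f(x), sigma=sigma)
        full = lssvr_predict(candidates, x, a, b, sigma)
        unc = np.empty_like(candidates)
        for i, c in enumerate(candidates):
            xr = np.delete(x, np.argmin(np.abs(x - c)))
            ar, br = lssvr_fit(xr, f(xr), sigma=sigma)
            unc[i] = abs(full[i] - lssvr_predict(np.array([c]), xr, ar, br, sigma)[0])
        x = np.append(x, candidates[np.argmax(unc)])
    return x
```

Because LS-SVR reduces to one linear solve per fit, the repeated refits inside the uncertainty loop stay cheap, which is the cost argument made in the abstract.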
The authors' own research in applied unicriterial and multicriterial optimisation of bar structures, together with an analysis of the available literature on structural synthesis, allows an attempt to be presented here to define a general algorithm for formulating a structural optimisation problem. The practical value of such an algorithm lies, in the authors' opinion, in enabling a designer to correctly create a mathematical model of a synthesis problem, independently of the known mathematical methods employed in the search for an unconditional extremum of a function of several variables. The proposed algorithm is not a ready-to-use tool for solving all optimisation problems; rather, it constitutes an easily extensible theoretical basis. This basis should allow a designer to create a proper set of compromises on the way to constructing a mathematical model of a specific optimisation problem. The algorithm presented in the paper is constructed as a sequence of successive questions which the designer answers yes or no, combined with a set of selections from a knowledge base consisting of the components of an optimisation problem. The order of the questions adopted in the algorithm is subjective, but it is supported by the authors' experience both in applied optimisation and in the design of structures such as trusses and frames.
In a historical perspective, the relationship between digital media and the museum environment is marked by the role of museums as example use cases for the application of digital media. Today this exceptional, often technology-oriented use has changed, and digital media have instead become an integral part of mediation strategies in the museum environment. Alongside this shift, not only an increasing professionalization of application development but also a growing demand for new content can be observed. Comparable to its role as the main cost factor in the media industry, the production of content is becoming a challenge for museums. In particular, small and medium-scale European museums with limited funding and often low staffing levels face this new demand and therefore look for alternative production resources. While productive user contributions can be seen as such an alternative resource, user contributions are at the same time a manifestation of a different mode of interacting with content. In contrast to the dominantly passive role of audiences as receivers of information, productive contributions emerge as a mode of content exploration and thereby become influential for museum mediation strategies. As applications of user contributions in museums and cultural heritage are currently rather seldom, a broader perspective on user contributions becomes necessary to understand their specific challenges, opportunities and limitations. Productive user contributions can be found in a growing number of applications on the Internet, where they either complement or fully substitute corporate content production processes. While Wikipedia, an online encyclopedia written entirely by a group of users and open to contributions by all its users, is one of the most prominent examples of this practice, several more applications have emerged or are being developed.
In consequence, user contributions are about to become a powerful source for the production of content in digital media environments.
The application of a recent method using formal power series is proposed. It is based on a new representation for solutions of Sturm-Liouville equations and is used to calculate the transmittance and reflectance coefficients of finite inhomogeneous layers with high accuracy and efficiency. By tailoring the refractive index profile defining the inhomogeneous medium, it is possible to develop important applications such as optical filters. A number of profiles were evaluated, and some of them were then selected in order to improve their characteristics via modification of their profiles.
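The power-series representation itself is beyond a short sketch, but the quantities it computes — reflectance and transmittance of a layered medium — can be illustrated with the textbook characteristic-matrix method for a stack of homogeneous layers at normal incidence; a graded index profile can be approximated by staircase slicing. This is a standard reference technique offered for orientation, not the method of the abstract.

```python
import numpy as np

# Standard transfer-matrix calculation (not the power-series method of the
# abstract): reflectance R and transmittance T of a stack of homogeneous
# dielectric layers at normal incidence, surrounded by media n_in and n_out.
def stack_RT(n_layers, d_layers, lam, n_in=1.0, n_out=1.0):
    M = np.eye(2, dtype=complex)
    for n, d in zip(n_layers, d_layers):
        delta = 2.0 * np.pi * n * d / lam          # phase thickness of the layer
        Ml = np.array([[np.cos(delta), 1j * np.sin(delta) / n],
                       [1j * n * np.sin(delta), np.cos(delta)]])
        M = M @ Ml                                  # accumulate characteristic matrix
    B, C = M @ np.array([1.0, n_out])
    r = (n_in * B - C) / (n_in * B + C)             # amplitude reflection coefficient
    t = 2.0 * n_in / (n_in * B + C)                 # amplitude transmission coefficient
    R = abs(r) ** 2
    T = (n_out / n_in) * abs(t) ** 2                # valid for lossless real indices
    return R, T
```

For a single quarter-wave layer of index 1.5 in air this reproduces the classical result R = ((n0*ns - n1^2)/(n0*ns + n1^2))^2, and energy conservation R + T = 1 holds for lossless stacks.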
The conceptual structure of an application that can support the structural analysis task in a distributed collaboratory is described in (van Rooyen and Olivier 2004). The application described there has a standalone component for executing the finite element method on a local workstation in the absence of network access. This application is comparable to current, local-workstation-based finite element packages. However, it differs fundamentally from standard packages, since the application itself, and its objects, are adapted to support distributed execution of the analysis task. Basic aspects of an object-oriented framework for the development of applications which can be used in similar distributed collaboratories are described in this paper. An important feature of this framework is its application-centred design. This means that an application can contain any number of engineering models, where the models are formed by collections of objects according to semantic views within the application. This is achieved through the very flexible classes Application and Model, which are described in detail. The advantages of the application-centred design approach are demonstrated with reference to the design of steel structures, where the finite element analysis model, the member design model and the connection design model interact to provide the required functionality.
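The application-centred idea — one Application holding the shared engineering objects, with each Model defining a semantic view onto them — might be sketched roughly as below. All class internals and method names are illustrative assumptions; the paper's actual Application and Model classes are considerably richer.

```python
# Hedged sketch of an application-centred design (names are illustrative,
# not the paper's API): the Application owns all engineering objects, and
# each Model is a semantic view selecting the objects relevant to it.
class Application:
    def __init__(self):
        self.objects = []   # all engineering objects of the application
        self.models = {}    # named semantic views

    def add_object(self, obj):
        self.objects.append(obj)
        return obj

    def add_model(self, name, predicate):
        model = Model(self, predicate)
        self.models[name] = model
        return model

class Model:
    def __init__(self, app, predicate):
        self.app = app
        self.predicate = predicate  # semantic view: which objects belong here

    def members(self):
        # a model does not copy objects; it filters the shared object pool
        return [o for o in self.app.objects if self.predicate(o)]
```

The point of the design is that the analysis model, the member design model and the connection design model all view the same shared objects, so changes made through one model are immediately visible to the others.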
The promise of lower costs for sensors that can be used for construction inspection means that inspectors will continue to have new choices to consider in creating inspection plans. However, these emerging inspection methods can require different activities, resources, and decisions such that it can be difficult to compare the emerging methods with other methods that satisfy the same inspection needs. Furthermore, the context in which inspection is performed can significantly influence how well certain inspection methods are suited for a given set of goals for inspection. Context information, such as weather, security, and the regulatory environment, can be used to understand what information about a component should be collected and how an inspection should be performed. The research described in this paper is aimed at developing an approach for comparing and selecting inspection plans. This approach consists of (1) refinement of given goals for inspection, if necessary, in order to address any additional information needs due to a given context and in order to reach a level of detail that can be addressed by an inspection activity; (2) development of constraints to describe how an inspection should be achieved; (3) matching of goals to available inspection methods, and generation of activities and resource plans in order to address the goals; and (4) selection of an inspection plan from among the possible plans that have been identified. The authors illustrate this approach with observations made at a local construction site.
Recent research shows that current learning strategies in the construction industry have not been effective in implementing lean principles in construction. With that in mind, the researchers set out to investigate an alternative learning strategy to promote learning at the international level. A web-based environment was developed for this project with the intent of promoting learning and knowledge exchange on the theory and practice of "process transparency" across different countries.
There are many construction projects in China, and large volumes of documents are exchanged among the multiple parties in a project, including the owner, the contractor and the engineer. Based on previous studies, an approach to utilizing the exchanged documents is established using data warehouse technology, and a prototype system called EXPLYZER is developed. The approach and the prototype system are verified through their application in a construction project. It is concluded that the approach can support decision-making in project management.
In the context of finite element model updating using vibration test data, natural frequencies and mode shapes are used as validation criteria. Consequently, the order of natural frequencies and mode shapes is important. As only limited spatial information is available and noise is present in the measurements, the automatic selection of the most likely numerical mode shape corresponding to a measured mode shape is a difficult task. The most common criterion to indicate corresponding mode shapes is the modal assurance criterion. Unfortunately, this criterion fails in certain cases. In this paper, the purely mathematical modal assurance criterion will be enhanced by additional physical information of the numerical model in terms of modal strain energies. A numerical example and a benchmark study with real measured data are presented to show the advantages of the enhanced energy-based criterion in comparison to the traditional modal assurance criterion.
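The modal assurance criterion referred to here is the standard correlation measure between mode shape vectors, MAC(i,j) = |φ_i^H φ_j|^2 / ((φ_i^H φ_i)(φ_j^H φ_j)); a minimal implementation (the energy-based enhancement of the paper is not reproduced here):

```python
import numpy as np

# Standard modal assurance criterion between two sets of mode shapes,
# given as the columns of Phi_a and Phi_b. Entries near 1 indicate
# well-correlated shape pairs, entries near 0 uncorrelated ones.
# The MAC is insensitive to the (arbitrary) scaling of each mode shape.
def mac_matrix(Phi_a, Phi_b):
    num = np.abs(Phi_a.conj().T @ Phi_b) ** 2
    den = np.outer(np.sum(np.abs(Phi_a) ** 2, axis=0),
                   np.sum(np.abs(Phi_b) ** 2, axis=0))
    return num / den
```

Mode pairing then amounts to picking, for each measured shape, the numerical shape with the largest MAC value — exactly the step that the abstract notes can fail when distinct modes produce similarly high MAC values at few sensor locations.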
The computational costs of newly developed numerical simulation methods play a critical role in their acceptance in both academic and industrial use. Normally, refining a method in the area of interest reduces the computational cost. This is unfortunately not true for most nonlocal simulations, since refinement typically increases the size of the material-point neighborhood; reducing the discretization size while keeping the neighborhood size therefore often requires extra consideration. Peridynamics (PD) is a newly developed numerical method of nonlocal nature. Its straightforward integral equation of motion allows dynamic problems to be simulated without any extra considerations. The formation of cracks and their propagation is natural to peridynamics: discontinuities are a result of the simulation and do not demand any post-processing. As with other nonlocal methods, PD is considered an expensive method, and refining the nodal spacing while keeping the neighborhood size (i.e., the horizon radius) constant gives rise to several nonphysical phenomena.
This research aims to reduce the peridynamic computational and implementation costs. A novel refinement approach is introduced. The proposed approach takes advantage of the PD flexibility in choosing the shape of the horizon by introducing multiple domains (with no intersections) to the nodes of the refinement zone. It is shown that no ghost forces are created when changing the horizon sizes in both subdomains. The approach is applied to both bond-based and state-based peridynamics and verified for a simple wave propagation refinement problem, illustrating the efficiency of the method. Further development of the method for higher dimensions proves to have a direct relationship with the mesh sensitivity of PD. A method for resolving the mesh sensitivity of PD is introduced, and its application is examined by solving a crack propagation problem similar to those reported in the literature.
A new software architecture is proposed, considering both academic and industrial use. The available simulation tools for employing PD are surveyed, and their advantages and drawbacks are addressed. The challenges of implementing any node-based nonlocal method, while maximizing the software's openness to further development and modification, are discussed and addressed. A software package named Relation-Based Simulator (RBS) is developed to examine the proposed architecture. The capabilities of RBS are explored by simulating three distinct models. RBS is publicly available and open to further development. The industrial acceptance of RBS is tested by benchmarking its performance on one Mac and two Linux distributions.
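The bond-based PD force computation described above can be sketched in one dimension. This is a minimal illustration under stated assumptions, not the thesis implementation: the micromodulus c, the uniform grid, and the function names are all invented for the example.

```python
import numpy as np

def pd_internal_force(u, x, horizon, c):
    """1D bond-based peridynamic internal force density: a sum of
    pairwise bond forces over each node's horizon (illustrative sketch)."""
    n = len(x)
    dx = x[1] - x[0]                     # uniform nodal spacing
    m = int(round(horizon / dx))         # neighbours per side
    f = np.zeros(n)
    for i in range(n):
        for j in range(max(0, i - m), min(n, i + m + 1)):
            if j == i:
                continue
            xi = x[j] - x[i]             # reference bond
            eta = u[j] - u[i]            # relative displacement
            s = (abs(xi + eta) - abs(xi)) / abs(xi)   # bond stretch
            f[i] += c * s * np.sign(xi + eta) * dx    # force * nodal volume
    return f
```

For a uniform stretch field, interior nodes with full horizons receive zero net force; near the boundaries the truncated horizon produces the well-known surface effect that motivates the refinement considerations discussed above.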
This paper examines the impact of information technology (IT) utilization on construction firm performance. Based on empirical data collected from 74 US construction firms, the analyses provide evidence that IT has a positive impact on overall firm performance, schedule performance, and cost performance. Firm performance is a composite score of several performance metrics: schedule performance, cost performance, customer satisfaction, safety performance, and profit. No relationship is found between IT utilization and customer satisfaction, safety, or profit, although this may be due to limitations of the study, given the strong correlations between IT utilization and cost and schedule performance. The empirical evidence of a positive association between performance and IT use provided by this research is significant to both construction practice and the research literature. This evidence should encourage firms to adopt and invest in IT tools.
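A composite performance score of the kind mentioned above can be illustrated with a small sketch. The equal-weight aggregation of z-scored metrics below is an assumption for illustration only; the paper's actual weighting scheme is not specified here.

```python
import statistics

def composite_score(metrics, weights=None):
    """Equal-weight composite of z-scored performance metrics
    (illustrative). `metrics` maps metric name -> one value per firm;
    returns one composite score per firm."""
    names = list(metrics)
    n_firms = len(metrics[names[0]])
    if weights is None:
        weights = {k: 1.0 / len(names) for k in names}
    z = {}
    for k in names:
        mu = statistics.fmean(metrics[k])
        sd = statistics.pstdev(metrics[k])
        z[k] = [(v - mu) / sd for v in metrics[k]]   # standardize each metric
    return [sum(weights[k] * z[k][i] for k in names) for i in range(n_firms)]
```

Standardizing before averaging keeps metrics with different units (days, dollars, incident counts) from dominating the composite.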
Classical Internet of Things routing and wireless sensor networks can provide more precise monitoring of the covered area thanks to the higher number of deployed nodes. Because of the limitations of the shared transfer medium, many nodes in the network are prone to collisions during simultaneous transmissions. Medium access control protocols are usually practical only in networks with low traffic that are not subjected to external noise from adjacent frequencies. There are preventive, detection, and control solutions to congestion management in the network, all of which are the focus of this study. In the congestion prevention phase, the proposed method chooses the next hop of the path using a fuzzy decision-making system to distribute network traffic over optimal paths. In the congestion detection phase, a dynamic approach to queue management is designed to detect congestion in the least amount of time and prevent collisions. In the congestion control phase, a back-pressure method based on the quality of the queue is used to decrease the probability of routing through a pre-congested node. The main goals of this study are to balance energy consumption across network nodes, reduce the rate of lost packets, and increase the quality of service in routing. Simulation results show that the proposed Congestion Control Fuzzy Decision Making (CCFDM) method is more capable of improving routing parameters than recent algorithms.
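Fuzzy next-hop selection of the kind used in the congestion prevention phase can be sketched as follows. The membership functions, the two inputs (residual energy and queue occupancy), and the single rule are illustrative assumptions, not the CCFDM rule base.

```python
def fuzzy_next_hop(neighbors):
    """Choose the next hop with one Mamdani-style rule (illustrative):
    prefer HIGH residual energy AND LOW queue occupancy.
    `neighbors` maps node id -> (energy, queue), both scaled to [0, 1]."""
    def high(x):   # ramp membership for 'high'
        return max(0.0, min(1.0, (x - 0.3) / 0.6))
    def low(x):    # ramp membership for 'low'
        return max(0.0, min(1.0, (0.7 - x) / 0.6))
    # fuzzy AND via min; the best-supported neighbor wins
    scored = {nid: min(high(e), low(q)) for nid, (e, q) in neighbors.items()}
    return max(scored, key=scored.get)
```

Using min for the conjunction means a neighbor with a nearly full queue scores poorly no matter how much energy it has left, which is how such a rule steers traffic away from pre-congested nodes.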
Since the Industrial Revolution in the 1700s, the high emission of gaseous wastes into the atmosphere from the use of fossil fuels has caused a general increase in temperatures globally. To combat this environmental imbalance, the demand for renewable energy sources is increasing. Dams play a major role in the generation of "green" energy. However, these structures require frequent and strict monitoring to ensure safe and efficient operation. To tackle the challenges faced in the application of conventional dam monitoring techniques, this work proposes the inverse analysis of numerical models to identify damaged regions in the dam. Using a dynamic coupled hydro-mechanical Extended Finite Element Method (XFEM) model and a global optimization strategy, damage (a crack) in the dam is identified. By employing seismic waves to probe the dam structure, more detailed information on the distribution of heterogeneous materials and damaged regions is obtained through the Full Waveform Inversion (FWI) method. FWI is based on a local optimization strategy and is therefore highly dependent on the starting model. A variety of data acquisition setups are investigated, and an optimal setup is proposed. The effect of different starting models and of noise in the measured data on the damage identification is considered. Combining the starting-model independence of the global optimization strategy of the dynamic coupled hydro-mechanical XFEM method with the detailed output of the local optimization strategy of the FWI method, an enhanced Full Waveform Inversion is proposed for the structural analysis of dams.
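The local-optimization character of FWI, and hence its starting-model dependence, can be illustrated with a toy 1D example: a least-squares waveform misfit minimized by gradient descent on a single wave speed. The forward model, geometry, and all numbers below are invented for illustration and have nothing to do with the dam model of the text.

```python
import math

def forward(v, L=100.0, f=5.0, nt=200, dt=0.002):
    """Toy forward model: a sinusoid delayed by the travel time L / v."""
    tau = L / v
    return [math.sin(2.0 * math.pi * f * (i * dt - tau)) for i in range(nt)]

def misfit(v, d_obs):
    """Least-squares waveform misfit J(v) = 1/2 * sum (d_syn - d_obs)^2."""
    return 0.5 * sum((s - o) ** 2 for s, o in zip(forward(v), d_obs))

def invert(v0, d_obs, steps=300, lr=2000.0, h=1e-3):
    """Local gradient descent on J; the result depends on the start v0."""
    v = v0
    for _ in range(steps):
        # central finite-difference gradient of the misfit
        grad = (misfit(v + h, d_obs) - misfit(v - h, d_obs)) / (2.0 * h)
        v -= lr * grad
    return v
```

Starting close enough to the true speed, the descent recovers it; a start more than half a wavelength of travel time away would lock onto the wrong local minimum (cycle skipping), which is exactly why FWI needs a good starting model.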
This paper presents an evaluation system for steel structures of hydroelectric power stations, including hydraulic gates and penstocks, based on Fault Tree Analysis (FTA) and performance maps. The system consists of four modules: fault tree diagrams, performance maps, design and analysis systems, and engineering databases. These modules are integrated by appropriate hyperlinks so that the user can work with the system easily and seamlessly. The developed system was applied to several illustrative example cases, which showed that the methodology and system work well; users found the system useful and effective for their maintenance tasks at power stations.
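Probability propagation through a fault tree can be sketched with the standard AND/OR gate formulas for independent basic events. The example tree and the failure probabilities below are hypothetical, not taken from the paper.

```python
def gate_and(probs):
    """AND gate: all independent basic events must occur."""
    p = 1.0
    for q in probs:
        p *= q
    return p

def gate_or(probs):
    """OR gate: at least one independent basic event occurs,
    P = 1 - prod(1 - p_i)."""
    p = 1.0
    for q in probs:
        p *= (1.0 - q)
    return 1.0 - p

# Hypothetical two-level tree: TOP = (E1 AND E2) OR E3
top_event = gate_or([gate_and([0.1, 0.2]), 0.05])
```

Nesting these two gate functions evaluates a full tree bottom-up, giving the top-event probability that an FTA-based maintenance system would report.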
An Experimental Study on Hydro-Mechanical Characteristics of Compacted Bentonite-Sand Mixtures
(2005)
Nuclear and hazardous waste disposal has become a universal issue, and problems related to the final disposal of these wastes, including finding a suitable site, the natural and engineered barriers used, construction of the repository, and long-term performance assessment, have gained increasing attention all over the world. High-level radioactive and hazardous wastes are required to be buried in deep geological repositories. In Germany, ongoing research has assessed the suitability of the country's salt-stone formations as host rock candidates for its nuclear waste repository. Bentonite-based materials have been proposed as sealing and buffer elements for the nuclear waste repository. Several hydro-mechanical processes will take place in the field and influence the behaviour of the sealing and buffer elements of the repository. In this dissertation, a study on the hydro-mechanical characterisation of bentonite-sand mixtures is presented. Mixtures of a calcium-type bentonite, named Calcigel, and quartz sand were used in the investigation. Series of experiments were conducted, including basic and physico-chemical characterisation, microstructure and fabric studies, suction and swelling pressure measurements, wetting and drying tests, one-dimensional compression-rebound tests, one-dimensional cyclic wetting-drying tests under constant vertical stress, and saturated permeability tests. The experimental data obtained are analysed, and several characteristics of the material are brought out in this dissertation. Conclusions regarding the basic behaviour of the materials are drawn based on the results of the microstructure and fabric studies. Factors influencing the magnitude of suction and swelling pressure of the materials are outlined and discussed. The suction-induced compression and rebound characteristics of the material are described. The wetting and drying behaviour as influenced by the material boundary conditions is discussed.
Permeability characteristics of the materials are examined based on several available permeability models. Several mechanical and hydraulic parameters that can be used in modelling with some available constitutive modelling approaches are derived from the experimental data. Finally, conclusions regarding the hydro-mechanical characteristics of the materials are drawn and suggestions for future studies are made.
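One widely used permeability model of the kind referred to above is the Kozeny-Carman relation, in which saturated conductivity scales with the void ratio function e^3 / (1 + e). A minimal sketch, with the reference values as placeholders (the function name and the numbers are not from the dissertation):

```python
def kozeny_carman_k(e, k_ref, e_ref):
    """Scale a measured saturated conductivity k_ref at void ratio e_ref
    to another void ratio e using the Kozeny-Carman void ratio function
    f(e) = e^3 / (1 + e)."""
    f = lambda void_ratio: void_ratio ** 3 / (1.0 + void_ratio)
    return k_ref * f(e) / f(e_ref)
```

For instance, a hypothetical reference conductivity of 1e-11 m/s measured at e = 0.8 extrapolates to a larger value at e = 1.0, reflecting the strong sensitivity of conductivity to void ratio that such models capture.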
The initial shear modulus, Gmax, of soil is an important parameter for a variety of geotechnical design applications. This modulus is typically associated with shear strain levels of about 5*10^-3 % and below. The critical role of soil stiffness at small strains in the design and analysis of geotechnical infrastructure is now widely accepted.
Gmax is a key parameter in small-strain dynamic analyses, such as those used to predict soil behaviour or soil-structure interaction during earthquakes, explosions, or machine and traffic vibrations, where it is necessary to know how the shear modulus degrades from its small-strain value as the level of shear strain increases. Gmax can be equally important for small-strain cyclic situations, such as those caused by wind or wave loading, as well as for small-strain static situations. Gmax may also serve as an indirect indicator of various soil parameters, as it in many cases correlates well with other soil properties such as density and sample disturbance. In recent years, a technique using bender elements was developed to investigate the small-strain shear modulus Gmax.
The objective of this thesis is to study the initial shear stiffness of various sands with different void ratios, densities, and grain size distributions under dry and saturated conditions, and then to compare empirical equations for predicting Gmax, as well as results from other testing devices, with the bender element results of this study.
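The bender element evaluation of Gmax rests on two standard relations: the shear wave velocity Vs = L / t from the travel length and travel time, and Gmax = rho * Vs^2. A minimal sketch (the choice of tip-to-tip distance as the travel length is one common convention, and the numeric values in the usage note are hypothetical):

```python
def gmax_from_bender(travel_length_m, travel_time_s, density_kg_m3):
    """Small-strain shear modulus from a bender element measurement:
    Vs = L / t and Gmax = rho * Vs**2 (result in Pa)."""
    vs = travel_length_m / travel_time_s          # shear wave velocity
    return density_kg_m3 * vs ** 2
```

For example, a 0.1 m tip-to-tip length, a 0.5 ms first arrival, and a bulk density of 1800 kg/m3 give Vs = 200 m/s and Gmax = 72 MPa; the main practical difficulty lies in picking the arrival time, not in the formula.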
The modeling of crack propagation in plain and reinforced concrete structures is still an active field of research. If a macroscopic description of the cohesive cracking process of concrete is applied, generally the Fictitious Crack Model is utilized, where a force transmission across micro cracks is assumed. In most applications of this concept the cohesive model represents the relation between the normal crack opening and the normal stress, which is mostly defined as an exponential softening function, independently of the shear stresses in the tangential direction. The cohesive forces are then calculated only from the normal stresses. Carol et al. (1997) developed an improved model using a coupled relation between normal and shear damage based on an elasto-plastic constitutive formulation. This model is based on a hyperbolic yield surface depending on the normal and shear stresses and on the tensile and shear strength. It also represents the effect of shear-traction-induced crack opening. Due to the elasto-plastic formulation, where the inelastic crack opening is represented by plastic strains, this model is limited to applications with monotonic loading. In order to enable its application to cases with unloading and reloading, the existing model is extended in this study by a combined plastic-damage formulation, which enables the modeling of crack opening and crack closure. Furthermore, the corresponding algorithmic implementation using a return mapping approach is presented, and the model is verified by means of several numerical examples. Finally, an investigation concerning the identification of the model parameters by means of neural networks is presented. In this analysis, an inverse approximation of the model parameters is performed by using a given set of points of the load-displacement curves as input values and the model parameters as output terms.
It will be shown that the elasto-plastic model parameters can be identified well with this approach, although this requires a large number of simulations.
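The exponential softening function mentioned for the normal direction is commonly written as sigma(w) = f_t * exp(-f_t * w / G_f), normalized so that the area under the softening curve equals the fracture energy G_f. A minimal sketch with hypothetical parameter values in the test:

```python
import math

def cohesive_stress(w, f_t, G_f):
    """Exponential softening of the Fictitious Crack Model (normal
    direction): stress transmitted across the crack at opening w.
    f_t is the tensile strength, G_f the fracture energy; the
    normalization makes the dissipated energy equal to G_f."""
    return f_t * math.exp(-f_t * w / G_f)
```

At w = 0 the full tensile strength f_t is transmitted, and the stress decays to f_t / e at the characteristic opening w = G_f / f_t, which gives a quick sanity check on any implementation.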