Schwerpunkt Kulturtechnik
(2010)
Media theory and historical media studies have for some time been taking a step that, in Germany at least, partly transforms them into historical and systematic research on cultural techniques. It is possible that the media, as the reference point of a scientific paradigm currently in the process of conquering the research and teaching structures of this country, are already in a state of mere afterlife. With this, at least those parts of media research come into their own that, since the institutionalization of media studies, have had to realize that the media they had been dealing with since the 1980s fit only awkwardly into the framework of the media of media studies. It therefore seems as if the concept of cultural techniques captures something that has been a specific feature of the emerging German media studies since the 1980s, a feature that estranged it from Anglo-American media studies as much as from communication studies or even sociology, which, under the spell of the Enlightenment and of the concept of society, wanted to think about media only under the aspect of the public sphere. What formed in the 1980s under the title of discourse and media analysis did not primarily aim at a media theory or at the history of individual media, which had long since become identity-founding for their own research disciplines (photography, film, television, radio), but at a history of literature, of the mind, of the soul, and of the senses, which was to be taken away from literary studies, philosophy, psychology, and aesthetics in order to be staged in another arena: that of the media, and at present that of cultural techniques.
But since it was not the media themselves that stood at the focus of this discovery, but rather a recontextualization of the traditional objects of the humanities, more precisely an "expulsion of the spirit from the humanities" (Friedrich Kittler), something other came into view from the outset than those media which journalism and communication studies, mass-media research, or the disciplines devoted to individual media designated as their primary fields of investigation.
Schwerpunkt Medienphilosophie
(2010)
The prominently and polemically voiced view that media philosophy is a transitory affair is probably quite accurate. Media philosophy itself has never claimed otherwise. And precisely for this reason, namely because of its provisional character, media philosophy is so important. It may indeed appear as a new, fashionable subdiscipline of philosophy. But it does so because it poses a very serious challenge to philosophy. How and when it passes away again depends on what it accomplishes. For in its self-understanding, media philosophy is a fundamentally operative and operational enterprise. Hence its great proximity to, and vital interest in, cultural techniques and their study.
It is interested in interventions of every kind, and is itself one. It partakes, by no means only metaphorically, in the material body of philosophy, in which philosophy itself, always keeping close to the concept, takes no interest and need not. The material body of philosophy has always included the writing hand, perhaps the foremost medium of the philosophical intervention, and its tool, the writing instrument it guides. As media philosophy, philosophy devotes itself to the gestures it performs in the world and to the operations it carries out on things and with their help.
Fuzzy functions are suitable for dealing with uncertainty and fuzziness in a closed form while maintaining the informational content. This paper seeks to understand, elaborate, and explain the problem of interpolating crisp and fuzzy data using continuous fuzzy-valued functions. Two main issues are addressed. The first covers how the fuzziness induced by the reduction and deficit of information, i.e., the discontinuity between the interpolated points, can be evaluated, taking into account the interpolation method used and the density of the data. The second issue deals with the need to differentiate between impreciseness, and hence fuzziness, only in the interpolated quantity; impreciseness only in the location of the interpolated points; and impreciseness in both the quantity and the location. A brief background on the concepts of fuzzy numbers and fuzzy functions is presented, and the numerical side of computing with fuzzy numbers is concisely demonstrated. The problems of fuzzy polynomial interpolation, interpolation on meshes, and mesh-free fuzzy interpolation are investigated. The integration of the previously noted uncertainty into a coherent fuzzy-valued function is discussed. Several sets of artificial and original measured data are used to examine the fuzzy interpolation methods described.
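As a minimal illustration of the underlying idea (not the paper's formulation), a fuzzy value can be represented by its alpha-cuts, i.e., nested intervals; because piecewise-linear interpolation uses nonnegative weights, the lower and upper bounds of each cut can be interpolated independently. The triangular representation and the helper names below are assumptions made for this sketch.

```python
import numpy as np

def tri_alpha_cut(a, m, b, alpha):
    """Alpha-cut [lo, hi] of a triangular fuzzy number (a, m, b)."""
    return (a + alpha * (m - a), b - alpha * (b - m))

def fuzzy_linear_interp(x, xs, fuzzy_vals, alpha):
    """Piecewise-linear interpolation of triangular fuzzy values at crisp
    locations xs. Since the interpolation weights are nonnegative, the
    lower and upper alpha-cut bounds may be interpolated separately."""
    los = [tri_alpha_cut(*f, alpha)[0] for f in fuzzy_vals]
    his = [tri_alpha_cut(*f, alpha)[1] for f in fuzzy_vals]
    return np.interp(x, xs, los), np.interp(x, xs, his)

xs = [0.0, 1.0, 2.0]
vals = [(0.9, 1.0, 1.1), (1.8, 2.0, 2.3), (2.5, 3.0, 3.5)]
lo, hi = fuzzy_linear_interp(0.5, xs, vals, alpha=0.0)  # widest (support) cut
```

At alpha = 1 the cuts collapse to the cores and the scheme reduces to ordinary crisp interpolation; decreasing alpha widens the interpolated interval, reflecting the growing impreciseness of the data.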
We give a sufficient and a necessary condition for an analytic function f on the unit disk D with Hadamard gaps to belong to a class of weighted logarithmic Bloch spaces, as well as to the corresponding little weighted logarithmic Bloch space, under certain conditions imposed on the weight function. We also study the relations between the class of weighted logarithmic Bloch functions and some other classes of analytic functions with the help of analytic functions in the Hadamard gap class.
In this dissertation, a new and original biaxial device for testing unsaturated soil was designed and developed. A study of the mechanical behaviour of unsaturated sand under plane-strain conditions using the new device is presented. The tests were mainly conducted on Hostun sand specimens. A series of experiments was carried out, including basic characterisation, soil water characteristic curves, and biaxial compression tests on dry, saturated, and unsaturated sand. A set of bearing capacity tests of a strip model footing on unsaturated sand was performed. Additionally, since the presence of fines (i.e., clay) influences the behaviour of soils, soil water characteristic tests were also performed on sand-kaolin mixture specimens.
Visual impairment is a common problem worldwide. Projector-based AR techniques can change the appearance of real objects and can thus help to improve visibility for the visually impaired. We propose a new framework for appearance enhancement with a projector camera system that employs a model predictive controller. This framework enables arbitrary image processing in the real world, comparable to photo-retouching software, and helps to improve visibility for the visually impaired. In this article, we show appearance enhancement results for Peli's method and Wolffsohn's method for low vision, and for Jefferson's method for colour vision deficiencies. The experimental results confirm that our method can enhance the appearance for the visually impaired in the same way as appearance enhancement of digital images and television viewing.
Digital bookmarks, full-text search, and multimedia content: the media revolution triggered by the Internet at the end of the 20th century did not leave the book untouched. The spread of the World Wide Web, parallel to the rapid development of computer technology, made the digitization of the book possible and gave rise to the e-book as a new form of publication. For roughly ten years, books have been available not only in print but also electronically, which has brought considerable changes for the book industry and for readers. Modern reading devices, also called e-readers, allow an entire library to be stored on a single mobile device. The individual e-book is in no way inferior to the printed book in reading quality, and it additionally permits the insertion of electronic notes and bookmarks, full-text search for particular words, and the combination of text with image, sound, and video. Nevertheless, the e-book has not yet been a success story in Germany since its emergence. High prices for the reading devices in particular still keep many readers from using e-books. For many people, the printed book is still too firmly a part of everyday life for them to exchange it for the e-book. This situation raises several questions: Will the e-book establish itself as a medium and replace the printed book in the long term? Can the e-book even be understood as a new medium alongside newspapers, radio, television, and the book? And what changes would the mass distribution of electronic books bring with it?
On 25 March 2010, the Professur Baubetrieb und Bauverfahren, as part of its annual series of construction-management conferences, held a one-day workshop together with the working group "Unikatprozesse" of the section "Simulation in Produktion und Logistik" (SPL) within the Arbeitsgemeinschaft Simulation (ASIM), entitled "Modellierung von Prozessen zur Fertigung von Unikaten" (Modelling of processes for the production of one-of-a-kind products). Many construction processes are characterized by their one-of-a-kind nature. Such unique products are marked by prototypical singularity, individuality, diverse boundary conditions, and a low degree of standardization and repetition. This makes realistic modelling for the simulation of so-called one-of-a-kind processes difficult. The majority of the conference contributions reproduced in this volume are devoted to this peculiarity.
We investigate aspects of tram-network section reliability, which operates as part of a reliability model of the whole city tram network. One of the main points of interest is the chronological development of disturbances (namely the differences between the scheduled time of departure and the real time of departure) on subsequent sections during tram line operation. These developments were observed in comprehensive measurements carried out in Krakow during the rebuilding of one of the main transportation nodes (Rondo Mogilskie). The construction activities caused large disturbances in tram line operation, with effects extending to neighbouring sections. In the second part, the stochastic character of the section running time is analyzed in more detail. We consider sections with only one beginning stop as well as sections with two or three beginning stops located on different streets of an intersection. Whether the results from sections with two beginning stops can be combined into one data set is checked with suitable statistical tests comparing the means of the two samples. The section running time may depend on the gap between two successive trams and on the deviation from the schedule; this dependence is described by a multiple regression formula. The main measurements were carried out in the city centre of Krakow in two stages: before and after major changes in the tramway infrastructure.
From the passenger's perspective, punctuality is one of the most important features of tram route operation. We present a stochastic simulation model with special focus on determining the important factors of influence. The statistical analysis is based on large samples (the sample size is nearly 2000) accumulated from comprehensive measurements on eight tram routes in Cracow. For the simulation, we are interested not only in average values but also in stochastic characteristics such as the variance and other properties of the distribution. A realization of tram operation is assumed to be a sequence of running times between successive stops and times spent by the tram at the stops, the latter divided into passenger alighting and boarding times and times spent waiting for the possibility of departure. The running time depends on the kind of track separation (including priorities at traffic lights), the length of the section, and the number of intersections. For every type of section, a linear mixed regression model describes the average running time and its variance as functions of the length of the section and the number of intersections. The regression coefficients are estimated by the iteratively re-weighted least squares method. The alighting and boarding time mainly depends on the type of vehicle, the number of passengers alighting and boarding, and the occupancy of the vehicle. For the distribution of the time spent waiting for the possibility of departure, suitable distributions such as the Gamma distribution and the Lognormal distribution are fitted.
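The last step, fitting a Gamma distribution to waiting-time samples, can be sketched with a simple method-of-moments estimator on synthetic data (not the Cracow measurements); in practice one would typically use maximum likelihood, e.g. via `scipy.stats.gamma.fit`.

```python
import numpy as np

def fit_gamma_moments(samples):
    """Method-of-moments fit of a Gamma(shape k, scale theta) distribution:
    mean = k * theta, variance = k * theta**2."""
    m, v = np.mean(samples), np.var(samples)
    theta = v / m          # scale estimate
    k = m / theta          # shape estimate
    return k, theta

# Synthetic waiting times [s]; the true parameters here are assumptions.
rng = np.random.default_rng(0)
waits = rng.gamma(shape=2.0, scale=15.0, size=5000)
k_hat, theta_hat = fit_gamma_moments(waits)
```

With a few thousand samples, the estimates land close to the generating parameters, which is the kind of sanity check one would run before fitting field data.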
Models in the context of engineering can be classified into process-based and data-based models. Whereas a process-based model describes the problem by an explicit formulation, a data-based model is often used where no such mapping can be found due to the high complexity of the problem. An Artificial Neural Network (ANN) is a data-based model which is able to "learn" a mapping from a set of training patterns. This paper deals with the application of ANNs in time-dependent bathymetric models. A bathymetric model is a geometric representation of the sea bed. Typically, a bathymetry is measured and afterwards described by a finite set of measured data; measuring at different time steps leads to a time-dependent bathymetric model. To obtain a continuous surface, the measured data have to be interpolated by some interpolation method. Unlike explicitly given interpolation methods, the presented time-dependent bathymetric model using an ANN trains the approximated surface in space and time in an implicit way. The ANN is trained with topographic measurement data consisting of the location (x, y), the time t, and the corresponding height. In other words, the ANN is trained to reproduce the mapping h = f(x, y, t), and afterwards it is able to approximate the topographic height for a given location and date. In a further step, this model is extended to take meteorological parameters into account, which leads to a model of more predictive character.
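The mapping h = f(x, y, t) can be sketched with a deliberately small network trained on synthetic data. Everything below (the invented seabed function, the single hidden layer, plain full-batch gradient descent) is an illustrative assumption, not the authors' model or data.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "bathymetry" for illustration only: a slowly migrating bedform.
def seabed(x, y, t):
    return np.sin(x - 0.3 * t) * np.cos(y) - 2.0

X = rng.uniform(0.0, 3.0, size=(2000, 3))    # columns: x, y, t
h = seabed(X[:, 0], X[:, 1], X[:, 2])        # "measured" heights

# One hidden layer of tanh units, trained by full-batch gradient descent.
W1 = rng.normal(0.0, 0.5, (3, 32)); b1 = np.zeros(32)
W2 = rng.normal(0.0, 0.5, (32, 1)); b2 = np.zeros(1)

def forward(X):
    return (np.tanh(X @ W1 + b1) @ W2 + b2).ravel()

mse_before = np.mean((forward(X) - h) ** 2)
lr = 0.05
for _ in range(2000):
    H = np.tanh(X @ W1 + b1)
    err = (H @ W2 + b2).ravel() - h
    gW2 = H.T @ err[:, None] / len(X)
    gb2 = np.array([err.mean()])
    dH = (err[:, None] @ W2.T) * (1.0 - H ** 2)   # backprop through tanh
    gW1 = X.T @ dH / len(X)
    gb1 = dH.mean(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2

mse_after = np.mean((forward(X) - h) ** 2)
```

After training, `forward` plays the role of the implicit surface: it returns an approximate height for any queried location and date, including times between the measurement campaigns.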
The treatment of geometric singularities in the solution of boundary value problems of elastostatics places increased demands on the mathematical modelling of the boundary value problem and requires specially adapted computational methods for an efficient evaluation. This thesis is concerned with the systematic generalization of the method of complex stress functions to three-dimensional space, the emphasis lying primarily on the foundation of the mathematical method, with particular attention to its practical applicability. The theoretical framework is provided by the theory of quaternion-valued functions. Accordingly, the class of monogenic functions is used as a basis in order, in the first part of the thesis, to prove a spatial analogue of Goursat's representation theorem and to construct generalized Kolosov-Muskhelishvili formulae. In view of the manifold fields of application of the method, the second part of the thesis deals with the local and global approximation of monogenic functions. To this end, complete orthogonal systems of monogenic spherical functions are constructed, on the basis of which novel representations of the canonical series expansions (Taylor, Fourier, Laurent) are defined. In analogy to the complex power and Laurent series based on the holomorphic powers of z, these monogenic orthogonal series generalize all essential properties with respect to the hypercomplex derivative and the monogenic primitive. Finally, the qualitative and numerical properties of the developed function-theoretic methods are evaluated by means of representative examples. In this context, some further fields of application within spatial function theory are considered which require the special structural properties of the monogenic power and Laurent series expansions.
In this paper, the influence of changes in the mean wind velocity, the power-law coefficient of the wind profile, the drag coefficient of the terrain, and the structural stiffness is investigated on structural models of different complexity. The paper gives a short introduction to wind profile models and to the approach by A. G. Davenport for computing the structural response to wind-induced vibrations. First, this approach is illustrated with a simple example (a skyscraper). This simple example allows the reader to study the differences in the variance when one of the above-mentioned parameters is changed and to see the influence of structural models of different complexity on the result. Furthermore, an approach for estimating the required level of discretization is given. With this knowledge, the design methodology for structural models can be based on a deeper understanding of the different behaviour of the individual models.
Euclidean Clifford analysis is a higher dimensional function theory offering a refinement of classical harmonic analysis. The theory is centered around the concept of monogenic functions, i.e. null solutions of a first order vector valued rotation invariant differential operator called the Dirac operator, which factorizes the Laplacian. More recently, Hermitean Clifford analysis has emerged as a new and successful branch of Clifford analysis, offering yet a refinement of the Euclidean case; it focusses on the simultaneous null solutions, called Hermitean (or h-) monogenic functions, of two Hermitean Dirac operators which are invariant under the action of the unitary group. In Euclidean Clifford analysis, the Clifford-Cauchy integral formula has proven to be a cornerstone of the function theory, as is the case for the traditional Cauchy formula for holomorphic functions in the complex plane. Previously, a Hermitean Clifford-Cauchy integral formula has been established by means of a matrix approach. This formula reduces to the traditional Martinelli-Bochner formula for holomorphic functions of several complex variables when taking functions with values in an appropriate part of complex spinor space. This means that the theory of Hermitean monogenic functions should also encompass other results of several variable complex analysis as special cases. Here we elaborate further on the obtained results and refine them, considering fundamental solutions, Borel-Pompeiu representations and the Teodorescu inversion, each of them being developed at different levels, including the global level, handling vector variables, vector differential operators and the Clifford geometric product, as well as the blade level, where variables and differential operators act by means of the dot and wedge products. A rich world of results reveals itself, indeed including well-known formulae from the theory of several complex variables.
In the context of finite element model updating using output-only vibration test data, natural frequencies and mode shapes are used as validation criteria. Consequently, the correct pairing of experimentally obtained and numerically derived natural frequencies and mode shapes is important. In many cases, only limited spatial information is available and noise is present in the measurements. Therefore, the automatic selection of the most likely numerical mode shape corresponding to a particular experimentally identified mode shape can be a difficult task. The most common criterion for indicating corresponding mode shapes is the modal assurance criterion. Unfortunately, this criterion fails in certain cases and is not reliable for automatic approaches. In this paper, the purely mathematical modal assurance criterion will be enhanced by additional physical information from the numerical model in terms of modal strain energies. A numerical example and a benchmark study with experimental data are presented to show the advantages of the proposed energy-based criterion in comparison to the traditional modal assurance criterion.
In the context of finite element model updating using vibration test data, natural frequencies and mode shapes are used as validation criteria. Consequently, the correct ordering of natural frequencies and mode shapes is important. As only limited spatial information is available and noise is present in the measurements, the automatic selection of the most likely numerical mode shape corresponding to a measured mode shape is a difficult task. The most common criterion for indicating corresponding mode shapes is the modal assurance criterion. Unfortunately, this criterion fails in certain cases. In this paper, the purely mathematical modal assurance criterion is enhanced by additional physical information from the numerical model in terms of modal strain energies. A numerical example and a benchmark study with real measured data are presented to show the advantages of the enhanced energy-based criterion in comparison to the traditional modal assurance criterion.
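The traditional criterion that both abstracts enhance is standard and compact enough to state in code. A minimal real-valued sketch follows (the proposed strain-energy weighting itself is not reproduced here; the mode-shape vectors are hypothetical):

```python
import numpy as np

def mac(phi_a, phi_e):
    """Modal Assurance Criterion between an analytical and an experimental
    mode shape: MAC = |phi_a^T phi_e|^2 / ((phi_a^T phi_a) (phi_e^T phi_e)).
    Returns 1 for identical shapes (up to scaling), 0 for orthogonal ones."""
    phi_a = np.asarray(phi_a, dtype=float)
    phi_e = np.asarray(phi_e, dtype=float)
    return (phi_a @ phi_e) ** 2 / ((phi_a @ phi_a) * (phi_e @ phi_e))

phi_num = np.array([0.31, 0.59, 0.81, 0.95])  # hypothetical numerical shape
phi_exp = 1.7 * phi_num                       # same shape, different scaling
```

Because the MAC is invariant to scaling, `mac(phi_num, phi_exp)` is 1 even though the vectors differ in amplitude; its failure cases arise when distinct modes produce similar shapes at the few measured degrees of freedom, which is what the energy-based extension addresses.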
The advent of high-performance mobile phones has opened up the opportunity to develop new context-aware applications for everyday life. In particular, applications for context-aware information retrieval in conjunction with image-based object recognition have become a focal area of recent research. In this thesis we introduce an adaptive mobile museum guidance system that allows visitors in a museum to identify exhibits by taking a picture with their mobile phone. Besides approaches to object recognition, we present different adaptation techniques that improve classification performance. After providing a comprehensive background of context-aware mobile information systems in general, we present an on-device object recognition algorithm and show how its classification performance can be improved by capturing multiple images of a single exhibit. To accomplish this, we combine the classification results of the individual pictures and consider the perspective relations among the retrieved database images. In order to identify multiple exhibits in pictures we present an approach that uses the spatial relationships among the objects in images. They make it possible to infer and validate the locations of undetected objects relative to the detected ones and additionally improve classification performance. To cope with environmental influences, we introduce an adaptation technique that establishes ad-hoc wireless networks among the visitors’ mobile devices to exchange classification data. This ensures constant classification rates under varying illumination levels and changing object placement. Finally, in addition to localization using RF-technology, we present an adaptation technique that uses user-generated spatio-temporal pathway data for person movement prediction. Based on the history of previously visited exhibits, the algorithm determines possible future locations and incorporates these predictions into the object classification process. 
This increases classification performance and offers benefits comparable to traditional localization approaches but without the need for additional hardware. Through multiple field studies and laboratory experiments we demonstrate the benefits of each approach and show how they influence the overall classification rate.
A UNIFIED APPROACH FOR THE TREATMENT OF SOME HIGHER DIMENSIONAL DIRAC TYPE EQUATIONS ON SPHERES
(2010)
Using Clifford analysis methods, we provide a unified approach to obtain explicit solutions of some partial differential equations combining the n-dimensional Dirac and Euler operators, including generalizations of the classical time-harmonic Maxwell equations. The obtained regular solutions show strong connections between hypergeometric functions and homogeneous polynomials in the kernel of the Dirac operator.
The application of a recent method based on formal power series is proposed. The method rests on a new representation for solutions of Sturm-Liouville equations and is used to calculate the transmittance and reflectance coefficients of finite inhomogeneous layers with high accuracy and efficiency. By tailoring the refractive-index profile defining the inhomogeneous medium, it is possible to develop important applications such as optical filters. A number of profiles were evaluated; some of them were then selected in order to improve their characteristics by modifying their profiles.
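For orientation, the crisp homogeneous-slab case that any inhomogeneous-layer method must reproduce can be computed with the standard characteristic-matrix method of thin-film optics. This is a textbook baseline, not the paper's power-series approach; the function name and the numeric indices are assumptions.

```python
import numpy as np

def slab_RT(n1, n2, n3, d, wavelength):
    """Reflectance and transmittance of a single homogeneous layer
    (index n2, thickness d) between media n1 and n3 at normal incidence,
    via the standard characteristic (transfer) matrix."""
    delta = 2.0 * np.pi * n2 * d / wavelength           # phase thickness
    M = np.array([[np.cos(delta), 1j * np.sin(delta) / n2],
                  [1j * n2 * np.sin(delta), np.cos(delta)]])
    B, C = M @ np.array([1.0, n3])                      # [B, C] = M [1, n3]^T
    r = (n1 * B - C) / (n1 * B + C)                     # amplitude reflection
    t = 2.0 * n1 / (n1 * B + C)                         # amplitude transmission
    return abs(r) ** 2, (n3 / n1) * abs(t) ** 2

R, T = slab_RT(1.0, 1.5, 1.0, d=0.07, wavelength=0.6)   # glass sheet in air
```

For lossless (real) indices, R + T = 1 holds identically, and a quarter-wave layer reproduces the closed-form value R = ((n2^2 - n1 n3)/(n2^2 + n1 n3))^2, which makes such cases convenient correctness checks for more general solvers.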
MICROPLANE MODEL WITH INITIAL AND DAMAGE-INDUCED ANISOTROPY APPLIED TO TEXTILE-REINFORCED CONCRETE
(2010)
The presented material model reproduces the anisotropic characteristics of textile-reinforced concrete in a smeared manner. This includes both the initial anisotropy introduced by the textile reinforcement and the anisotropic damage evolution reflecting the fine patterns of crack bridges. The model is based on the microplane approach. The direction-dependent decomposition of the material response onto oriented microplanes provides a flexible way to introduce the initial anisotropy. The microplanes oriented in a yarn direction are associated with modified damage laws that reflect the tension-stiffening effect due to the multiple cracking of the matrix along the yarn.
In this paper we consider the time independent Klein-Gordon equation on some conformally flat 3-tori with given boundary data. We set up an explicit formula for the fundamental solution. We show that we can represent any solution to the homogeneous Klein-Gordon equation on the torus as finite sum over generalized 3-fold periodic elliptic functions that are in the kernel of the Klein-Gordon operator. Furthermore we prove Cauchy and Green type integral formulas and set up a Teodorescu and Cauchy transform for the toroidal Klein-Gordon operator. These in turn are used to set up explicit formulas for the solution to the inhomogeneous version of the Klein-Gordon equation on the 3-torus.
Since 1969, calculations of the total costs of road traffic on federal trunk roads in the Federal Republic of Germany, and of their allocation among road users, have been carried out continuously. The results of the road cost calculations of 2002 and 2007 form the basis of the mileage-based user charge meanwhile introduced on the German motorway network for trucks with a permissible gross weight of at least twelve tonnes. This implements the requirement of EU Directive 1999/62/EC, according to which average road user charges should be oriented towards the costs of constructing, operating and extending the road network concerned. EU Directive 2006/38/EC announces the further development of the calculation of road user charges: in future, external costs are also to be included in the calculation. A first step towards accounting for these external costs was taken with the preparation of a handbook within an EU research project. Owing to the differing framework conditions in the EU member states, the handbook contains no exact calculation rules; instead, it presents various methodological approaches from previous studies on external costs, gives recommendations on the choice of method, and provides estimates of the magnitude of the external costs. The studies on the external costs of transport carried out in Europe in recent years follow similar procedures which, however, in the view of the author of the present work, exhibit critical aspects, above all regarding the type of cost accounting and the cost rates used. In the present dissertation, an alternative calculation methodology is therefore developed for determining section-, vehicle-class- and mileage-related external costs for motorways, and it is applied to a selected example network.
In doing so, the approach deviates in several essential points from the procedure predominantly chosen in current studies, in order to present a different perspective. The present work thus contributes substantially to extending the state of knowledge on methods for calculating the external costs of road traffic. The calculation methodology developed here is moreover intended as the basis for a procedure applicable in practice, and is therefore easily transferable to the entire German motorway network. The sections correspond to the segments between two motorway junctions. A distinction is made between the two vehicle classes "trucks with a permissible gross weight of 12 t or more" and "other vehicles". Although at present a user charge is levied only on trucks with a permissible gross weight of 12 t or more, the methodology developed makes it possible to state mileage-related external costs for all motor vehicles. The inclusion of external benefits is briefly discussed in this context; the focus, however, lies on the external costs. The work first presents definitions of essential terminology insofar as these appear necessary for understanding the subsequent discussion and the specification of the foundations of the developed calculation methodology. This discussion and specification covers the type of cost accounting, the valuation procedures for determining the monetary values, the discount rate, the cost categories to be considered, the quantity structure, and the allocation calculation. The cost categories considered are then presented in detail on the basis of existing studies and the author's own considerations, and the monetary values are determined. In addition, the allocation calculation and the quantity structure to be used in the calculation are presented separately for each category. The developed calculation methodology is then applied to an example network (the Thuringian motorway network).
Besides presenting the study area, calculating the external costs and presenting the disaggregated results, the division of the example network into different price categories on the basis of the section-related results is discussed; on this basis, the external costs could be internalized via road user charges. Within a sensitivity analysis, individual assumptions of the calculation methodology and cost rates of the monetary values are varied. The effects of these variations are again demonstrated on the example network, for which the cost calculations are repeated. Finally, open questions and recommendations for further investigations are named.
Public Private Partnership (PPP) is increasingly establishing itself as an alternative procurement route for the public sector. In the hospital sector there is initial experience with PPP; in contrast to other public sectors, however, it cannot yet be said to be established. In many hospitals there is uncertainty about this new organizational concept. What lies behind this term, which is sometimes used synonymously with "privatization"? Starting from this question, the present work shows that PPP, correctly applied, represents an alternative to the sale of a public hospital. PPP is an instrument by which private know-how and capital are made usable for the public hospital owner. In contrast to a material privatization, the public ownership of the hospital is retained. The framework conditions of the healthcare system confront public hospitals in particular with great challenges. The situation is increasingly marked by scarce funds, a backlog of refurbishment, and steadily growing competition for patients. The federal government's reform efforts to reduce healthcare expenditure have led, over the past decades, to ever new legal regulations at ever shorter intervals. The last major step in this development so far is the conversion of hospital remuneration to DRG case-based flat rates. The effects are felt especially in public hospitals. Loss-making institutions that were previously supported by subsidies are no longer kept "artificially alive". All hospitals receive a performance-oriented remuneration, largely independent of their hospital-specific costs. These developments have further intensified hospitals' efforts to optimize their internal service processes.
Here, the services connected with the building stock are of particular importance. Owing to high investment costs and substantial expenditure during the use phase, the non-medical services in a hospital account for a considerable share of total costs. Almost one third of hospital costs is not directly related to the healing process. In Germany, this share of the non-medical processes amounts to around 18 billion euros annually. The optimization potential of the non-medical service area, which also comprises construction and real-estate services, is often still underestimated and in most cases not yet exhausted. Its financial significance alone calls for intensified scientific study, a need that has so far been insufficiently met. By investigating the applicability of PPP to hospital real estate, the present work seeks to contribute to closing this gap. This procurement route, novel for the German hospital sector, shows a way in which efficiency potentials can be tapped sustainably in the non-medical services, thereby contributing to the economic success of the hospital as a whole.
In the past, several types of Fourier transforms in Clifford analysis have been studied. In this paper, first an overview of these different transforms is given. Next, a new equation in a Clifford algebra is proposed, the solutions of which will act as kernels of a new class of generalized Fourier transforms. Two solutions of this equation are studied in more detail, namely a vector-valued solution and a bivector-valued solution, as well as the associated integral transforms.
THE FOURIER-BESSEL TRANSFORM
(2010)
In this paper we devise a new multi-dimensional integral transform within the Clifford analysis setting, the so-called Fourier-Bessel transform. It appears that in the two-dimensional case, it coincides with the Clifford-Fourier and cylindrical Fourier transforms introduced earlier. We show that this new integral transform satisfies operational formulae which are similar to those of the classical tensorial Fourier transform. Moreover the L2-basis elements consisting of generalized Clifford-Hermite functions appear to be eigenfunctions of the Fourier-Bessel transform.
This paper describes the application of interval calculus to the calculation of plate deflection, taking into account the inevitable and acceptable tolerances of the input data (input parameters). A simply supported reinforced concrete plate loaded by a uniformly distributed load was taken as an example. Several parameters that influence the plate deflection are given as closed intervals. Accordingly, the results are obtained as intervals, so it was possible to follow the direct influence of a change of one or more input parameters on the output values (in our example, the deflection) using one model and one computing procedure. The described procedure could be applied to any FEM calculation in order to keep calculation tolerances, ISO tolerances, and production tolerances within admissible limits. Wolfram Mathematica was used as the tool for the interval calculations.
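The paper's Mathematica code is not reproduced here; a minimal sketch of the idea in Python, using the classical Kirchhoff formula for the center deflection of a simply supported square plate and exploiting monotonicity to evaluate interval bounds (the coefficient and all numbers are illustrative assumptions, not the paper's data):

```python
# Center deflection of a simply supported square plate under uniform load q:
#   w = alpha * q * a**4 / D,   D = E * h**3 / (12 * (1 - nu**2))
ALPHA = 0.00406  # deflection coefficient for a square Kirchhoff plate

def plate_deflection(q, a, E, h, nu=0.3):
    D = E * h**3 / (12.0 * (1.0 - nu**2))   # flexural rigidity
    return ALPHA * q * a**4 / D

def deflection_interval(q_iv, a_iv, E_iv, h_iv):
    """Interval bounds (lo, hi) for w; since w is monotone in each parameter
    (increasing in q and a, decreasing in E and h), the bounds follow from
    evaluating at the corresponding interval endpoints."""
    w_lo = plate_deflection(q_iv[0], a_iv[0], E_iv[1], h_iv[1])
    w_hi = plate_deflection(q_iv[1], a_iv[1], E_iv[0], h_iv[0])
    return w_lo, w_hi
```

For monotone response functions this endpoint evaluation gives tight interval results; general interval arithmetic would additionally have to guard against the dependency problem.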
One of the main focuses of recent Chinese urban development is the creation and retrofitting of public spaces driven by market forces and demand. However, research concerning human and cultural influences on the shaping of public spaces has been scant. Many planning aspects remain undefined and ambiguous, both institutionally and legislatively. This is an explanatory study addressing the interactions, incorporations and interrelationships between the lived environment and its peoples. It is knowledge-seeking and normative. Theoretically, public space in a Chinese context is conceptualized; empirically, a selected case is examined. The research develops a comparatively complete understanding of China's planning evolution and ongoing practices. Data collection emphasizes the concepts of 'people' and 'space'. First-hand data is derived from intensive fieldwork and from observatory and participatory documentation. The ample, detailed and authentic empirical data empowers space syntax as a strong analysis tool for decoding how human activities influence public space. The findings fall into two interdependent categories. Firstly, the study discloses the studied settlement as a generic, organic and incremental development model. Its growth and built environment are evolutionary and incremental, based on its intrinsic traditions, life values and available resources. As a self-sustaining settlement, it exhibits certain vernacular traits of spatial development arising from lifestyles and cultural practices. Its spatial articulation appears as a process parallel to socio-economic transitions. Secondly, crucial planning aspects are theoretically summarized to address the existing gap between current planning methodology and practice.
It pinpoints several particularly significant issues, namely: the disintegration of the land use system and urban planning; the absence of urban design in the planning system; the loss of a human-responsive environment resulting from standardized planning; and the under-estimation of heritage in urban development. The research challenges present Chinese planning laws and regulations through the study of urban public space, and identifies points of leverage for planning and development. Planning would thus be able to empower inhabitants to make decisions throughout the process of shaping and sustaining their space. It therefore discusses not only legislative issues concerning land use planning, urban design and heritage conservation; it also leads to a pivotal proposal, namely the integration of people and their social spaces into the formulation of a new spatial strategy. It aims to inform policymakers of the social values and cultural practices underpinning the reconfiguration of postmodern Chinese spatiality. It proposes that the social context endemic to communities should be integrated as a crucial tool in spatial strategy design, so as to strengthen spatial attributes and improve quality of life.
On the mechanisms of shrinkage reducing admixtures in self consolidating mortars and concretes
(2010)
Self Consolidating Concrete – a dream has come true!(?) Self Consolidating Concrete (SCC) is mainly characterised by its special rheological properties. Without any vibration this concrete can be placed and compacted under its own weight, without segregation or bleeding. The use of such concrete can increase productivity on construction sites and enable the use of a higher degree of well-distributed reinforcement for thin-walled structural members. This new technology also reduces health risks since, in contrast to the traditional handling of concrete, the emission of noise and vibration is substantially decreased. The specific mix design for self consolidating concretes was introduced around the 1980s in Japan. In comparison to normal vibrated concrete, an increased paste volume enables a good distribution of aggregates within the paste matrix, minimising the influence of aggregate friction on the flow properties of the concrete. The introduction of inert and/or pozzolanic additives as part of the paste provides the required excess paste volume without using disproportionately high amounts of plain cement. Due to further developments of concrete admixtures such as superplasticizers, the cement paste can gain self-levelling properties without causing segregation of aggregates. Whereas SCC differs from normal vibrated concrete in its fresh attributes, it should reach similar properties in the hardened state. Due to the increased paste volume it usually shows higher shrinkage. Furthermore, owing to strength requirements, SCC is often produced at low water-to-cement ratios and hence may additionally suffer from autogenous shrinkage. This means that cracking caused by drying or autogenous shrinkage is a real risk for SCC and can compromise its durability, as cracks may serve as ingression paths for gases and salts or might permit leaching.
For the time being, SCC still exhibits increased shrinkage and cracking probability and hence may be discarded in many practical applications. This can be overcome by a better understanding of these mechanisms and of the ways to mitigate them; it is a target of this thesis to contribute to that understanding. How to cope with the increased shrinkage of SCC? In general, engineers face severe problems related to shrinkage and cracking. Even for normal and high performance concrete containing moderate amounts of binder, much effort has been put into counteracting shrinkage and avoiding cracking. So far, these efforts have resulted in the knowledge of how to distribute cracks rather than how to avoid them. The most efficient way to decrease shrinkage turned out to be reducing the cement content of the concrete down to a minimum but still sufficient amount. For SCC this obviously seems to contradict the requirement of a high paste volume. Indeed, the potential for shrinkage reduction is limited to small-range modifications of the mix design following two major concepts. The first is the reduction of the required paste volume by optimising the aggregate grading curve. The second involves high-volume substitution of cement, preferentially using inert mineral additives. The optimization of grading curves is limited by several severe practical issues. Problems start with the availability of sufficiently fractionated aggregates. Usually attempts fail because of the enormous effort of composing application-optimized grading curves or mix designs. For durability reasons, the substitution rate for cement is limited depending on the application purpose and on the environmental exposure of the hardened concrete. In the early 1980s, Shrinkage Reducing Admixtures (SRA) were introduced to counteract the drying shrinkage of concrete. The first publications explicitly dealing with SRA go back to Goto and Sato (Japan).
They were published in 1983, which is also the time when the SCC concept was introduced. SRA-modified concretes showed a substantial reduction of free drying shrinkage, contributing to crack prevention or at least to a significant decrease of crack width in situations of restrained drying shrinkage. Will shrinkage reducing admixtures contribute to a broader application of SCC? Within the last three decades, performance tests on several types of concrete have proved the efficiency of shrinkage reducing admixtures. So, at least in terms of shrinkage and cracking, concretes in general and SCC in particular can benefit from SRA application. But "one man's meat is another man's poison", and with respect to the long-term performance of SRA-modified concretes there are still several issues to be clarified. One of these concerns the impact of SRAs on cement hydration. It is therefore important to know whether changes in the hydrated phase composition induced by SRA result in undesired properties or decreased durability. Another issue is that the long-term shrinkage reduction has to be evaluated. For example, one can wonder whether SRA leaching may diminish or even eliminate the long-term shrinkage reduction, and whether the release of admixtures could be a severe environmental issue. It should also be noted that the basic mechanism, i.e. the physical impact of SRA, as well as its implementation in recent models for the shrinkage of concrete, is still being discussed. The present thesis tries to shed light on the role of SRA in self consolidating concrete, focusing on the three questions outlined above: the basic mechanisms with respect to cement hydration, the physical impact on shrinkage, and the sustainability of SRA application. Which contributions result from this study? Based on an extensive patent search, commercial SRAs could be identified as synergistic mixtures of non-ionic surfactants and glycols. This turns out to be most important information for more than one reason and is the subject of chapter 4.
An abundant literature exists on the properties of these non-ionic surfactants, and from this rich pool of information the behaviour of SRAs and their interactions in cementitious systems were better understood through this thesis. For example, it could be anticipated how SRAs behave in strong electrolytes and how surface activity, i.e. surface tension, and interparticle forces might be affected. The synergy effect of enhanced performance induced by the presence of additional glycol in SRAs could be derived from the literature on the co-surfactant nature of glycols. Generally, it can now be said that glycols ensure that the non-ionic surfactant is properly distributed onto the paste interfaces to efficiently reduce the surface tension. In the literature, the impact of organic matter on cement hydration has been extensively studied for other admixtures such as superplasticizers. From there, the main impact factors related to the nature of these molecules could be identified. In addition, here again, the literature on non-ionic surfactants provides sufficient information to anticipate possible interactions of SRA with cement hydration. All in all, the extensive study of the nature of non-ionic surfactants, presented in chapter 4, provides a fundamental understanding of the behaviour of SRAs in cement paste. Taking a step further to relate this to the impact on drying and shrinkage required reviewing recent models for the drying and shrinkage of cement paste, as presented in chapter 3. There, it is shown that the macroscopic thermodynamics of open pore systems can be successfully applied to predict drying-induced deformation, but that the surface activity of SRA still has to be implemented to explain the shrinkage reduction it causes.
Because of severe issues concerning the importance of capillary pressure for shrinkage, a new macroscopic thermodynamic model was derived in a way that meets the requirements for properly incorporating the surface activity of SRA. This is the subject of chapter 5, where, based on theoretical considerations, the broader impact of SRA on drying cementitious matter could be outlined. In a next step, cement paste was treated as a deformable, open, drying pore system. Thereby, the drying phenomena of SRA-modified mortars and concretes observed by other authors could be retrieved. This phenomenological consistency of the model constitutes an important contribution towards the understanding of SRA mechanisms. Another main contribution of this work came from introducing an artificial pore system, denominated the normcube. Using this model system, it could be shown how the evolution of the interfacial area and its properties interact in the presence of SRAs and how this impacts the drying characteristics. In chapter 7, the surface activity of commercial SRAs in aqueous solution and synthetic pore solution was investigated. This shows how the electrolyte concentration of synthetic pore solution impacts the phase behaviour of SRA and, conversely, how the presence of SRA impacts the aqueous electrolyte solution. Whilst electrolytes enhance the self-aggregation of SRAs into micelles and liquid crystals, the presence of SRAs leads to the precipitation of minerals such as syngenite and mirabilite. Moreover, electrolyte solutions containing SRAs show limited miscibility, or rather miscibility gaps, where the liquid separates into an isotropic micellar solution and a surfactant-rich reverse micellar solution. The investigation of the surface activity and phase behaviour of SRA unravelled another important contribution: from macroscopic surface tension measurements, a relationship between the excess surface concentration of SRA, the bulk concentration of SRA and the exposed interfacial area could be derived.
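The thermodynamic relation that commonly underlies such surface-tension-based derivations is the Gibbs adsorption isotherm, here sketched in its dilute-solution form together with a surfactant mass balance (a textbook sketch, not necessarily the thesis' exact formulation):

```latex
% Gibbs adsorption isotherm: surface excess concentration \Gamma of the
% surfactant in terms of the surface tension \gamma and bulk concentration c
\Gamma = -\frac{1}{RT}\,\frac{\partial \gamma}{\partial \ln c}
% A mass balance couples the bulk concentration to the exposed
% interfacial area A of the pore system:
n_{\mathrm{total}} = c\,V_{\mathrm{liquid}} + \Gamma\,A
```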
Based on this, it is now possible to predict the actual surface tension of the pore fluid in the course of drying once the evolution of the internal interfacial area is known. This is used later in this thesis to describe the specific drying and shrinkage behaviour of SRA-modified pastes and mortars. Calorimetric studies on normal Portland cement and composite binders revealed that SRA alone show only a minor impact on hydration kinetics. In the presence of superplasticizer, however, the cement hydration can be significantly decelerated. The delaying impact of SRA could be related to a selective deceleration of the silicate phase hydration. Moreover, it could be shown that portlandite precipitation in the presence of SRA is changed, turning the compact habitus into more or less layered structures. Thereby the specific surface increases, causing the amount of physically bound water to increase, which in turn reduces the maximum degree of hydration achievable in sealed systems. Extensive phase analysis shows that the hydrated phase composition of SRA-modified binders remains almost unaffected. The appearance of a temporary mineral phase could be detected by environmental scanning electron microscopy. As could be shown for synthetic pore solutions, syngenite precipitates during the early hydration stages and is later consumed in the course of the aluminate hydration, i.e. when sulphates are depleted. Moreover, for some SRAs, the salting-out phenomena supposed to be enhanced in strong electrolytes could also be shown to take place. The resulting organic precipitates could be identified by SEM-EDX in cement paste and by X-ray diffraction on solid residues of synthetic pore solution. The presence of SRAs could also be shown to impact the microstructure of well-cured cement paste. Based on nitrogen adsorption measurements and mercury intrusion porosimetry, the amount of small pores is seen to increase with SRA dosage, whilst the overall porosity remains unchanged.
The question regarding the sustainability of SRA application is the subject of chapter 10. By means of leaching studies it could be shown that SRA can be leached significantly. The mechanism could be identified as a diffusion process, and a range of effective diffusion coefficients could be estimated. Thereby, the leaching of SRA can now be estimated for real structural members. However, while the admixture can be leached to a high extent in tank tests, the leaching rates in practical applications can be assumed to be low because of the much reduced contact with water. This could be proven by quantifying the admixture loss during long-term drying and rewetting cycles. Despite a loss of admixture, the shrinkage reduction is hardly impacted. Moreover, the cyclic tests revealed that the total deformations in the presence of SRA remain low due to a lower extent of irreversible shrinkage deformations. Another important contribution towards a better understanding of the working mechanism of SRA in drying and shrinkage came from the same leaching tests. A significant fraction of SRA is found to be immobile and does not diffuse during leaching. This fraction of SRA is probably strongly associated with cement phases such as the calcium silicate hydrates or portlandite. Based on these findings, it is now also possible to quantify the amount of admixture active at the interfaces. This means that the evolution of the surface tension in the course of drying can be approximated, which is a fundamental requirement for modeling shrinkage in the presence of SRA. The last experimental chapter of this study focuses on the working mechanism and impact of SRA on drying and shrinkage. Based on the thermodynamics of the open deformable pore system introduced in chapter 5, energy balances are set up using desorption and shrinkage isotherms of actual samples. Information on the distribution of SRA in the hydrated paste is used to estimate the actual surface tensions of the pore solution.
In other words, this is the first time that the surface activity of the SRA in the course of drying is fully accounted for. From the energy balances, the evolution and properties of the internal interface are then obtained. This made it possible to explain why SRAs impact drying and shrinkage and in which specific range of relative humidity they are active. Summarising the findings of this thesis, it can be said that the understanding of the impact of SRAs on hydration, drying and shrinkage was advanced. Many of the new insights came from the careful investigation of the theory of non-ionic surfactants, something that the cement community had generally overlooked up to now.
NONZONAL WAVELETS ON S^N
(2010)
In the present article we construct wavelets on an arbitrary-dimensional sphere S^n using the approach of approximate identities. There are two equivalent approaches to wavelets. The group-theoretical approach formulates a square-integrability condition for a group acting via a unitary, irreducible representation on the sphere; the connection to this approach will be sketched. The concept of approximate identities uses the same constructions in the background: here we select an appropriate section of dilations and translations in the group acting on the sphere in two steps. First we formulate dilations in terms of approximate identities, and then we realize translations on the sphere as rotations. This leads to the construction of an orthogonal polynomial system in L²(SO(n+1)). This approach is convenient for constructing concrete wavelets, since the appropriate kernels can be constructed from the heat kernel, leading to the approximate identity of Gauss-Weierstraß. We work out conditions for functions to form a family of wavelets; subsequently we formulate how zonal wavelets can be constructed from an approximate identity, and the relation to the admissibility of nonzonal wavelets. Eventually we give an example of a nonzonal wavelet on S^n, which we obtain from the approximate identity of Gauss-Weierstraß.
In nonlinear simulations the loading is, in general, applied in an incremental way. Path-following algorithms are used to trace the equilibrium path during the failure process. Standard displacement controlled solution strategies fail if snap-back phenomena occur. In this contribution, a path-following algorithm based on the dissipation of the inelastic energy is presented which allows for the simulation of snap-backs. Since the constraint is defined in terms of the internal energy, the algorithm is not restricted to continuum damage models. Furthermore, no a priori knowledge about the final damage distribution is required. The performance of the proposed algorithm is illustrated using nonlinear mesoscale simulations.
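The constraint equation is not spelled out in the abstract; one established form of such an energy-release (dissipation-based) path control, following Gutiérrez-type formulations, reads as follows (a sketch, not necessarily the authors' exact constraint):

```latex
% Incremental equilibrium with load factor \lambda and reference load \hat{f}:
K\,\Delta u = \Delta\lambda\,\hat{f}
% Dissipation-based path constraint: instead of a displacement or arc-length
% increment, the increment of released energy \Delta\tau is prescribed
g(\Delta u, \Delta\lambda) = \tfrac{1}{2}\left(\lambda_0\,\hat{f}^{\,T}\Delta u
  - \Delta\lambda\,\hat{f}^{\,T} u_0\right) - \Delta\tau = 0
```

Because the control quantity is monotonically increasing during failure, snap-backs in load and displacement can both be traced.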
SIMULATION AND MATHEMATICAL OPTIMIZATION OF THE HYDRATION OF CONCRETE FOR AVOIDING THERMAL CRACKS
(2010)
After mixing of concrete, the hardening starts by an exothermic chemical reaction known as hydration. As the reaction rate depends on the temperature, the time in the description of the hydration is replaced by the maturity, which is defined as an integral of a certain function of the temperature. The temperature distribution is governed by the heat equation with a right-hand side depending on the maturity and on the temperature itself. We compare the performance of different higher-order time integration schemes with automatic time step control. The simulation of the heat distribution is important, as the development of the mechanical properties is driven by the hydration. During this process it is possible that the tensile stresses exceed the tensile strength and cracks occur. The goal is to produce cheap concrete without cracks. Simple crack criteria use only temperature differences; more involved ones are based on thermal stresses. If the criterion predicts cracks, changes in the input data are needed. This can be interpreted as optimization. The final goal will be to apply model-based optimization (in contrast to simulation-based optimization) to the hydration of young concrete and the avoidance of cracks. The first step is the simulation of the hydration, on which we focus in this paper.
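The maturity concept described above can be sketched as an equivalent-age integral with an Arrhenius-type rate factor; the activation energy and reference temperature below are typical assumed values, not those of the paper:

```python
import math

E_OVER_R = 4000.0   # activation energy / gas constant [K], assumed typical value
T_REF = 293.15      # reference temperature [K] (20 degrees C)

def arrhenius_rate(T):
    """Reaction rate factor relative to the reference temperature."""
    return math.exp(E_OVER_R * (1.0 / T_REF - 1.0 / T))

def maturity(times, temps):
    """Equivalent age: trapezoidal integration of the Arrhenius factor
    over a sampled temperature history (times in hours, temps in K)."""
    m = 0.0
    for i in range(1, len(times)):
        dt = times[i] - times[i - 1]
        m += 0.5 * (arrhenius_rate(temps[i - 1]) + arrhenius_rate(temps[i])) * dt
    return m
```

At the reference temperature the maturity equals the elapsed time; warmer histories accumulate maturity faster, which is exactly the substitution of time by maturity that the hydration model relies on.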
Verkehrsmengenrisiko bei PPP-Projekten im Straßensektor - Determinanten effizienter Risikoallokation
(2010)
Despite extensive worldwide experience with Public Private Partnership projects in the road sector, dealing with the traffic volume risk remains a challenge for the project participants. This work is therefore devoted to the essential question of an efficient allocation of this risk, which plays a decisive role in the overall economic success of a road concession project. First, the characteristics of the traffic volume risk and its numerous influencing factors are examined. Subsequently, the contract models used in practice for managing road infrastructure are presented, and it is analysed how the traffic volume risk is distributed among the contracting parties in the individual models. On this basis, a criteria-based analytical framework is developed that assesses the efficiency of different risk allocations between the contracting parties. It takes into account both the efficiency-relevant characteristics of the potential risk bearers of a PPP project and the efficiency-relevant effects of the different contract models. From the findings of this analysis, recommendations for action and contract design concerning the handling of the traffic volume risk are finally derived.
An introduction is given to Clifford Analysis over pseudo-Euclidean spaces of arbitrary signature, called for short Ultrahyperbolic Clifford Analysis (UCA). UCA is regarded as a function theory of Clifford-valued functions satisfying a first-order partial differential equation involving a vector-valued differential operator, called a Dirac operator. The formulation of UCA presented here pays special attention to its geometrical setting. This makes it possible to identify tensors which qualify as geometrically invariant Dirac operators and to take a position on the naturalness of contravariant and covariant versions of such a theory. In addition, a formal method is described to construct the general solution of the aforementioned equation in the context of covariant UCA.
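As a standard point of reference (one common convention, not a formula taken from this work): on a pseudo-Euclidean space with orthonormal basis $e_1,\dots,e_n$ satisfying $e_i e_j + e_j e_i = 2\,\varepsilon_i\,\delta_{ij}$, $\varepsilon_i = \pm 1$, the Dirac operator and its square read

```latex
D = \sum_{j=1}^{n} e_j \,\partial_{x_j},
\qquad
D^2 = \sum_{j=1}^{n} \varepsilon_j \,\partial_{x_j}^{2},
```

so that solutions of $Du = 0$ factorize the ultrahyperbolic operator, which motivates the name UCA.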
Buildings can be divided into various types and described by a huge number of parameters. Within the life cycle of a building, especially during the design and construction phases, many engineers with different points of view, proprietary applications and data formats are involved. The collaboration of all participating engineers is characterised by a high amount of communication. Due to these aspects, a homogeneous building model for all engineers is not feasible. The status quo in civil engineering is the segmentation of the complete model into partial models. Currently, the interdependencies of these partial models are not in the focus of available engineering solutions. This paper addresses the problem of coupling partial models in civil engineering. According to the state of the art, applications and partial models are formulated with the object-oriented method. Although this method directly solves basic communication problems such as subclass coupling, it was found that many relevant coupling problems remain to be solved. Therefore, it is necessary to analyse and classify the relevant coupling types in building modelling. Coupling in computer science refers to the relationship between modules and their mutual interaction and can be divided into different coupling types. The coupling types differ in the degree to which the coupled modules rely upon each other. This is exemplified by a general reference example from civil engineering. A uniform formulation of coupling patterns is described analogously to design patterns, a common methodology in software engineering. Design patterns are templates describing a general reusable solution to a commonly occurring problem; a template is independent of the programming language and the operating system. These coupling patterns are selected according to the specific problems of building modelling. A specific meta-model for coupling problems in civil engineering is introduced. In our meta-model, the coupling patterns are a semantic description of a specific coupling design.
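To make the notion of a coupling pattern concrete, here is a minimal, hypothetical sketch of loose coupling between two partial models through a mediator (analogous to the Mediator design pattern); the model names and data keys are purely illustrative and not taken from the paper.

```python
class Mediator:
    """Routes change notifications between registered partial models,
    so no partial model references another one directly."""
    def __init__(self):
        self.models = []

    def register(self, model):
        self.models.append(model)

    def notify(self, sender, key, value):
        for m in self.models:
            if m is not sender:
                m.receive(sender.name, key, value)

class PartialModel:
    """A partial model that publishes its changes via the mediator
    (loose coupling between discipline-specific models)."""
    def __init__(self, name, mediator):
        self.name = name
        self.data = {}
        self.mediator = mediator
        mediator.register(self)

    def update(self, key, value):
        self.data[key] = value
        self.mediator.notify(self, key, value)

    def receive(self, sender_name, key, value):
        # Record the change of a coupled partial model.
        self.data[f"{sender_name}.{key}"] = value

hub = Mediator()
arch = PartialModel("architecture", hub)
struct = PartialModel("structure", hub)
arch.update("wall_thickness", 0.3)
print(struct.data)  # {'architecture.wall_thickness': 0.3}
```

The point of such a pattern is that the degree of mutual reliance between the coupled modules is reduced to a shared notification protocol.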
Reducing energy consumption is one of the major challenges for the present day and will continue to be one for future generations. The emerging EU directives relating to energy (the EU EPBD and the EU Directive on Emissions Trading) now place demands on building owners to rate the energy performance of their buildings for efficient energy management. Moreover, European legislation (Directive 2006/32/EC) requires facility managers to reduce building energy consumption and operational costs. Sophisticated building services systems integrating off-the-shelf building management components are currently available; however, this ad-hoc combination presents many difficulties to building owners in managing and upgrading these systems. This paper addresses the need for integration concepts, holistic monitoring and analysis methodologies, life-cycle oriented decision support and sophisticated control strategies through the seamless integration of people, ICT devices and computational resources, introducing a newly developed integrated system architecture. The concept was first applied to a residential building, and the results were analysed to improve current building conditions.
In the last two decades, many cities have faced changes in their economic basis and have therefore adopted an entrepreneurial approach in the municipal administration, accompanied by city marketing strategies. Brazilian cities have also adopted this approach, as in the case of Florianópolis. Florianópolis has promoted advertising campaigns on the natural resources of the Island of Santa Catarina as well as on its quality of life in comparison to other cities. However, partly due to such campaigns, it has experienced great demographic growth and, consequently, infrastructural and social problems. Nevertheless, it seems to have a good image within the national urban scenario and has commonly been considered an “urban consumption dream” for many Brazilians. This paradoxical situation is the reason why it has been chosen as the research object of this dissertation. Thus, the questions of this research are: is there a gap between the promise and the performance of the city of Florianópolis? If so, can tourists and residents recognize it? And finally, how can this gap be demonstrated? Accordingly, the main objective of this research is to propose a conformity assessment approach applicable to cities, by which the content of city advertisement campaigns can be compared to the city's performance indicators and the satisfaction degree of its consumers. This approach is therefore composed of different methods: literature and legislation reviews, semi-structured and structured interviews with experts and inhabitants, an urban centrality development analysis, a qualitative discourse analysis of advertising material (including images), a qualitative content analysis of newspaper reports and a questionnaire survey.
Finally, the theses are: yes, there is a gap between the promise and the performance of Florianópolis; this promise is the result of city marketing campaigns which advertise the city's natural features while hiding its urban aspects, supported by some political and private actors mainly interested in the development of tourism and the real estate market in the city; this gap has already been recognized by tourists and, more intensively, by residents; the selected methods worked as a kind of conformity assessment for cities and tourist destinations; and last but not least, since there is a gap, it designates the practice of “make-up urbanism”. Limitations of the research are the short time frame covered by the analysis and the small, non-representative samples. Its relevance lies, on the one hand, in the attempt to fill two disciplinary lacunas: a conformity assessment approach for cities, and the creation of knowledge about Florianópolis and its presentation at an international level. On the other hand, the transfer of this approach to other cities would help to explain a common contemporary urban phenomenon and appeal for more ethical conduct and transparency in the practices of city marketing.
This thesis focuses on the cryptanalysis and the design of block ciphers and hash functions. The thesis starts with an overview of methods for the cryptanalysis of block ciphers which are based on differential cryptanalysis. We explain these concepts and also several combinations of these attacks. We propose new attacks on reduced versions of ARIA and AES. Furthermore, we analyze the strength of the internal block ciphers of hash functions. We propose the first attacks that break the internal block ciphers of Tiger, HAS-160, and a reduced-round version of SHACAL-2. The last part of the thesis is concerned with the analysis and the design of cryptographic hash functions. We adapt a block cipher attack called the slide attack to the scenario of hash function cryptanalysis. We then use this new method to attack different variants of GRINDAHL and RADIOGATUN. Finally, we propose a new hash function called TWISTER, which was designed and submitted for the SHA-3 competition. TWISTER was accepted for round one of this competition. Our approach follows a new strategy for designing a cryptographic hash function. We also describe several attacks on TWISTER and discuss the security issues concerning these attacks on TWISTER.
In this paper we present the rudiments of a higher-dimensional analogue of the Szegö kernel method to compute 3D mappings from elementary domains onto the unit sphere. This is a formal construction which provides us with a good substitute for the classical conformal Riemann mapping. We give explicit numerical examples and compare the results with those obtained alternatively by the Bergman kernel method.
Besides home entertainment and business presentations, video projectors are powerful tools for modulating images spatially as well as temporally. The re-emerging need for stereoscopic displays increases the demand for low-latency projectors, and recent advances in LED technology also offer high modulation frequencies. Combining such high-frequency illumination modules with synchronized, fast cameras makes it possible to develop specialized high-speed illumination systems for visual effects production. In this thesis we present different systems for using spatially as well as temporally modulated illumination in combination with a synchronized camera to simplify the requirements of standard digital video composition techniques for film and television productions and to offer new possibilities for visual effects generation. After an overview of the basic terminology and a summary of related methods, we discuss and give examples of how modulated light can be applied in a scene recording context to enable a variety of effects which cannot be realized using standard methods, such as virtual studio technology or chroma keying. We propose using high-frequency, synchronized illumination which, in addition to providing illumination, is modulated in terms of intensity and wavelength to encode technical information for visual effects generation. This is carried out in such a way that the technical components do not influence the final composite and are also not visible to observers on the film set. Using this approach we present a real-time flash keying system for the generation of perspectively correct augmented composites by projecting imperceptible markers for optical camera tracking. Furthermore, we present a system which enables the generation of various digital video compositing effects outside of completely controlled studio environments, such as virtual studios.
A third temporal keying system is presented that aims to overcome the constraints of traditional chroma keying in terms of color spill and color dependency. ...
Using a quaternionic reformulation of the electrical impedance equation, we consider a two-dimensional conductivity function in separable variables and, proposing two different techniques, we obtain a special class of Vekua equation whose general solution can be approximated by means of Taylor series in formal powers, for which it is possible to introduce an explicit Bers generating sequence.
Within the scheduling of construction projects, different, partly conflicting objectives have to be considered. The specification of an efficient construction schedule is a challenging task, which leads to an NP-hard multi-criteria optimization problem. In the past decades, so-called metaheuristics have been developed for scheduling problems to find near-optimal solutions in reasonable time. This paper presents a Simulated Annealing concept to determine near-optimal construction schedules. Simulated Annealing is a well-known metaheuristic optimization approach for solving complex combinatorial problems. To enable dealing with several optimization objectives, the Pareto optimization concept is applied. Thus, the optimization result is a set of Pareto-optimal schedules, which can be analyzed in order to select exactly one practicable and reasonable schedule. A flexible constraint-based simulation approach is used to generate possible neighboring solutions very quickly during the optimization process. The essential aspects of the developed Pareto Simulated Annealing concept are presented in detail.
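The core acceptance rule of Simulated Annealing can be sketched briefly. The following is a minimal single-objective illustration on a toy activity-ordering problem (the Pareto extension of the paper is not reproduced here); durations, weights and cooling parameters are made up for the example.

```python
import math
import random

def simulated_annealing(cost, neighbor, x0, t0=10.0, cooling=0.995, steps=5000):
    """Generic Simulated Annealing: always accept improving moves and
    accept worse moves with probability exp(-delta/T), so the search
    can escape local optima while the temperature T is cooled down."""
    random.seed(0)  # deterministic for the example
    x, fx, t = x0, cost(x0), t0
    best, fbest = x, fx
    for _ in range(steps):
        y = neighbor(x)
        fy = cost(y)
        if fy < fx or random.random() < math.exp((fx - fy) / t):
            x, fx = y, fy
            if fx < fbest:
                best, fbest = x, fx
        t *= cooling
    return best, fbest

# Toy schedule: order four activities to minimise the total
# weighted completion time (illustrative data).
dur = [3, 1, 4, 2]
wgt = [1, 4, 2, 3]

def cost(order):
    t, total = 0, 0
    for job in order:
        t += dur[job]
        total += wgt[job] * t
    return total

def neighbor(order):
    i, j = random.sample(range(len(order)), 2)
    y = list(order)
    y[i], y[j] = y[j], y[i]
    return y

best, value = simulated_annealing(cost, neighbor, list(range(4)))
```

For a multi-criteria variant, as in the paper, the scalar acceptance test is replaced by a Pareto-dominance based rule and an archive of non-dominated schedules is maintained.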
In this paper three different formulations of a Bernoulli-type free boundary problem are discussed. By analyzing the shape Hessian in the case of matching data, a distinction is made between well-posed and ill-posed formulations. A nonlinear Ritz-Galerkin method is applied to discretize the shape optimization problem. In the well-posed case, the existence and convergence of the approximate shapes are proven. In combination with a fast boundary element method, efficient first- and second-order shape optimization algorithms are obtained.
After the excited palaver about the computer as a 'medium' and the accompanying academic rhetoric on the Internet, the question of what media philosophy can achieve is raised once again - in this contribution as a media-anthropological self-assurance: which technical transgressions define the newness of our situation?
This cumulative dissertation investigates aspects of consumer decision making in hedonic contexts and its implications for the marketing of media goods through a series of three empirical studies. All three studies take place within a common theoretical framework of decision making models, applying parts of the framework in novel ways to solve real-world marketing research problems (studies 1 and 2) and examining theoretical relationships between variables within the framework (study 3). One notable way in which the studies differ is their theoretical treatment of the hedonic component of decision making, i.e. the role and conceptualization of emotions.
Planning and construction processes are characterized by the peculiarity that they need to be designed individually for each project. It is necessary to set up an individual schedule for each project. As a basis for a new project, schedules from already finished projects are used, but adaptations are always necessary. In practice, scheduling tools only document a process. Schedules cover a set of activities, their durations and a set of interdependencies between activities. The design of a process is up to the user. It is not necessary to specify each interdependency, and completeness and correctness need to be checked manually; no methodologies are available to guarantee such properties. The considerations presented in this paper are based on an approach in which a planning and a construction process, including the interdependencies between planning and construction activities, are regarded as a result: selected information needs to be specified by the user, and a proposal for an order of planning and construction activities is computed. As a consequence, process properties such as correctness and completeness can be guaranteed with respect to the user input. Especially in Germany, clients are allowed to modify their requirements at any time. This leads to modifications in the planning and construction processes. This paper presents a mathematical formulation of this problem based on set theory. A complex structure covering objects and relations is set up, and operations are defined that guarantee consistency in the underlying, versioned process description. The presented considerations build on previous work and can be regarded as the next step in a series describing how a suitable concept for handling planning and construction processes in civil engineering can be formed.
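The paper formalises the ordering of activities with set theory; as a minimal sketch of the underlying idea, an activity order that respects a given set of interdependencies can be computed by a topological sort (Kahn's algorithm), which also detects inconsistent, cyclic specifications. The activity names below are illustrative, not taken from the paper.

```python
from collections import deque

def activity_order(activities, depends_on):
    """Kahn's algorithm: return an order in which every activity
    appears after all activities it depends on; raise ValueError if
    the interdependencies contain a cycle (inconsistent process)."""
    indeg = {a: 0 for a in activities}
    succ = {a: [] for a in activities}
    for a, deps in depends_on.items():
        for d in deps:
            succ[d].append(a)
            indeg[a] += 1
    ready = deque(a for a in activities if indeg[a] == 0)
    order = []
    while ready:
        a = ready.popleft()
        order.append(a)
        for b in succ[a]:
            indeg[b] -= 1
            if indeg[b] == 0:
                ready.append(b)
    if len(order) != len(activities):
        raise ValueError("cyclic interdependencies: no consistent order")
    return order

acts = ["design", "permit", "excavation", "foundation"]
deps = {"permit": ["design"], "excavation": ["permit"],
        "foundation": ["excavation", "design"]}
print(activity_order(acts, deps))
# ['design', 'permit', 'excavation', 'foundation']
```

Because the order is computed from the specified interdependencies, its correctness and completeness hold with respect to the user input, mirroring the guarantee discussed above.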
The present article proposes an alternative way to compute the torsional stiffness based on three-dimensional continuum mechanics instead of applying a specific theory of torsion. A thin, representative beam slice is discretized by solid finite elements. Adequate boundary and coupling conditions are integrated into the numerical model to obtain a proper answer on the torsion behaviour, that is, on the shear center, shear stress and torsional stiffness. This finite element approach only includes general assumptions of beam torsion which are independent of the cross-section geometry. These assumptions essentially are: no in-plane deformation, constant torsion and free warping. Thus it is possible to achieve numerical solutions of high accuracy for arbitrary cross-sections. Due to the direct link to three-dimensional continuum mechanics, it is possible to extend the range of torsion analysis to sections which are composed of different materials or even to heterogeneous beams at a high scale of resolution. A brief validation study follows, in which the results are compared to analytical solutions.
Numerical simulation of thermo-hygral alkali-silica reaction model in concrete at the mesoscale
(2010)
This research aims to model alkali-silica reaction (ASR) gel expansion in concrete under the influence of hygral and thermal loading, based on experimental results. ASR provokes a heterogeneous expansion in concrete, leading to dimensional changes and eventually the premature failure of the concrete structure. This can result in map cracking on the concrete surface, which decreases the concrete stiffness. Factors that influence ASR are parameters such as the cement alkalinity, the amount of deleterious silica in the aggregate used and the concrete porosity, as well as external factors like temperature, humidity and external sources of alkali from the ingress of deicing salts. The uncertainties of these influential factors make ASR a difficult phenomenon to treat; hence my approach is to solve the problem using stochastic modelling: a numerical simulation of a concrete cross-section integrating experimental results from the Finger-Institute for Building Materials Science at the Bauhaus-Universität Weimar. The problem is formulated as a multi-field problem, combining heat transfer, fluid transfer and a reaction rate model with the mechanical stress field. The simulation is performed as a mesoscale model considering aggregates and the mortar matrix. The reaction rate model is calibrated using experimental results on concrete expansion due to ASR gained from concrete prism tests. Expansive strain values for transient environmental conditions are determined from calculations based on the reaction rate model. The results of these models make it possible to predict the rate of ASR expansion and the crack propagation that may arise.
The evident advances in the computational power of digital computers enable the modeling of the total system of structures. Such modeling demands compatible representations of the couplings of different structural subsystems. Therefore, models of the dynamic interaction between the vehicle and the bridge, and models of a bridge bearing, a coupling element between the bridge's superstructure and substructure, are of interest and discussed within this paper. The vehicle-bridge interaction may be described as a function connecting two sets of behavior; in this case, the coupling is embodied by mutual parameters that affect both systems, such as the frequency content of the bridge and the vehicle. The bridge bearings, in contrast, are elements used specifically to couple; in such elements, the deformation and the transferred loads are used to characterize the coupling. The nature of these couplings and their influence on the bridge response is different. However, the need to assess the amount of dynamic response transferred by or within these couplings is a common argument.
Tests on Polymer Modified Cement Concrete (PCC) have shown significantly large creep deformations. The reasons for this, as well as additional material phenomena, are explained in the following paper. Existing creep models developed for standard concrete are studied with respect to their ability to determine the time-dependent deformations of PCC. These models are: model B3 by Bažant and Baweja, the models according to Model Code 90 and ACI 209, and model GL2000 by Gardner and Lockman. The calculated creep strains are compared to existing experimental data for PCC and the differences are pointed out. Furthermore, an optimization of the model parameters is performed to fit the models to the experimental data and thus achieve a better model prognosis.
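The parameter optimisation mentioned above can be illustrated in miniature. The following sketch fits a simple power-law creep curve to synthetic data by linear least squares on log-transformed values; the power law and the data are stand-ins for illustration only, not the B3, MC90, ACI 209 or GL2000 formulations used in the paper.

```python
import math

def fit_power_law(times, strains):
    """Fit eps(t) = a * t**b by linear least squares on
    log-transformed data: log(eps) = log(a) + b*log(t)."""
    xs = [math.log(t) for t in times]
    ys = [math.log(e) for e in strains]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    a = math.exp(my - b * mx)
    return a, b

# Synthetic "measured" creep strains following eps = 0.002 * t**0.3
# at loading ages of 1, 7, 28, 90 and 365 days (illustrative data).
times = [1, 7, 28, 90, 365]
strains = [0.002 * t ** 0.3 for t in times]
a, b = fit_power_law(times, strains)
print(round(a, 4), round(b, 2))  # 0.002 0.3
```

For the real creep models, which are nonlinear in their parameters, an iterative optimiser takes the place of the closed-form least-squares step, but the objective (minimising the misfit to measured creep strains) is the same.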
In order to make control decisions, smart buildings need to collect data from multiple sources and bring them to a central location, such as the Building Management System (BMS). This needs to be done in a timely and automated fashion. Besides the data gathered from different energy-using elements, information on occupant behaviour is also important for a building's requirement analysis. In this paper, the parameter of occupant density was considered to help determine the behaviour of occupants towards a building space. Through this parameter, support for building energy consumption and requirements based on occupant needs and demands was provided. The demonstrator presented provides information on the number of people present in a particular building space at any time, giving the space density. Collections of such density data made over a certain period of time represent occupant behaviour towards the building space, revealing its usage patterns. Similarly, inventory items were tracked and monitored for moving out of or being brought into a particular read zone. For both people and inventory items, this was achieved using small, low-cost, passive Ultra-High Frequency (UHF) Radio Frequency Identification (RFID) tags. Occupants were given the tags in the form factor of a credit card, to be carried at all times. A central database was built in which occupant and inventory information for a particular building space was maintained for monitoring and central data access.
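The aggregation step from raw tag reads to occupant density can be sketched simply. This is a hypothetical illustration of the idea, not the paper's database schema; tag identifiers, zone names and the hourly granularity are made up.

```python
from collections import defaultdict

def density_per_hour(reads):
    """Aggregate raw RFID reads (tag_id, zone, hour) into occupant
    density per zone and hour; duplicate reads of the same tag in
    the same zone and hour are counted once."""
    seen = defaultdict(set)
    for tag, zone, hour in reads:
        seen[(zone, hour)].add(tag)
    return {key: len(tags) for key, tags in seen.items()}

reads = [
    ("tag-01", "lab", 9), ("tag-02", "lab", 9),
    ("tag-01", "lab", 9),          # duplicate read, same hour
    ("tag-01", "office", 10), ("tag-03", "lab", 10),
]
print(density_per_hour(reads))
# {('lab', 9): 2, ('office', 10): 1, ('lab', 10): 1}
```

Accumulating such per-hour densities over weeks yields the usage patterns of a space that the requirement analysis described above relies on.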