Radical pedagogy directs its attention both to the everyday techniques of pedagogical practice (techniques for activating a space of encounter, techniques for caring for the work and for one another, techniques of cross-cultural listening, techniques for attending to the more-than) and to techniques for "crossing the threshold". Crossing the threshold is bound up with the mode of accommodation that makes it possible to value learning in all its manifestations.
Multi-user projection systems provide a coherent 3D interaction space for multiple co-located users that facilitates mutual awareness, full-body interaction, and the coordination of activities. The users perceive the shared scene from their respective viewpoints and can directly interact with the 3D content.
This thesis reports on novel interaction patterns for collaborative 3D interaction for local and distributed user groups based on such multi-user projection environments. A particular focus of our developments lies in the provision of multiple independent interaction territories in our workspaces and their tight integration into collaborative workflows. The motivation for such multi-focus workspaces is grounded in research on social cooperation patterns, specifically in the requirement for supporting phases of loose and tight collaboration and the emergence of dedicated working territories for private usage and public exchange. We realized independent interaction territories in the form of handheld virtual viewing windows and multiple co-located hardware displays in a joint workspace. They provide independent views of a shared virtual environment and serve as access points for the exploration and manipulation of the 3D content. Their tight integration into our workspace supports fluent transitions between individual work and joint user engagement. The different affordances of the various displays in an exemplary workspace consisting of a large 3D wall, a 3D tabletop, and handheld virtual viewing windows promote different usage scenarios, for instance views from an egocentric perspective, miniature scene representations, close-up views, or storage and transfer areas. This work shows that such a versatile workspace can make the cooperation of multiple people on joint tasks more effective, e.g. by parallelizing activities, distributing subtasks, and providing mutual support.
In order to create, manage, and share virtual viewing windows, this thesis presents the interaction technique of Photoportals, a tangible interface based on the metaphor of digital photography. They serve as configurable viewing territories and enable the individual examination of scene details as well as the immediate sharing of the prepared views. Photoportals are specifically designed to complement other interface facets and provide extended functionality for scene navigation, object manipulation, and for the creation of temporal recordings of activities in the virtual scene.
A further objective of this work is the realization of a coherent interaction space for direct 3D input across the independent interaction territories in multi-display setups. This requires the simultaneous consideration of user input in several potential interaction windows as well as configurable disambiguation schemes for the implicit selection of distinct interaction contexts. We generalized the required implementation structures into a high-level software pattern and demonstrated its versatility by means of various multi-context 3D interaction tools.
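The multi-context input handling described above can be sketched roughly as follows. This is an invented illustration, not the thesis's actual software pattern: an input event is offered to every interaction window that spatially contains it, and a configurable disambiguation rule (here simply "highest priority wins") selects the receiving context. All class and field names are assumptions.

```python
# Toy sketch of multi-context 3D input dispatch: each interaction window
# claims events that fall inside its bounds, and a disambiguation rule
# picks one window when several overlap (names invented for illustration).
from dataclasses import dataclass

@dataclass
class Window:
    name: str
    bounds: tuple  # (xmin, xmax, ymin, ymax) in workspace coordinates
    priority: int

    def contains(self, x, y):
        xmin, xmax, ymin, ymax = self.bounds
        return xmin <= x <= xmax and ymin <= y <= ymax

def dispatch(windows, x, y):
    # offer the event to all containing windows, then disambiguate
    hits = [w for w in windows if w.contains(x, y)]
    return max(hits, key=lambda w: w.priority).name if hits else None

windows = [Window("wall", (0, 10, 0, 10), 1), Window("portal", (2, 4, 2, 4), 2)]
print(dispatch(windows, 3, 3))  # → portal (overlap resolved by priority)
print(dispatch(windows, 8, 8))  # → wall
```

A real implementation would disambiguate with configurable schemes (e.g. view-ray depth or recency of use) rather than a fixed priority field.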
Additionally, this work tackles specific problems related to group navigation in multi-user projection systems. Joint navigation of a co-located group of users can lead to unintentional collisions when passing narrow scene sections. In this context, we suggest various solutions that prevent individual collisions during group navigation and discuss their effect on the perceived integrity of the travel group and the 3D scene. For collaboration scenarios involving distributed user groups, we furthermore explored different configurations for joint and individual travel.
Last but not least, this thesis provides detailed information and implementation templates for the realization of the proposed interaction techniques and collaborative workspaces in scenegraph-based VR systems. These contributions to the abstraction of specific interaction patterns, such as group navigation and multi-window interaction, facilitate their reuse in other virtual reality systems and their adaptation to further collaborative scenarios.
Due to the importance of identifying crop cultivars, the advancement of accurate assessment of cultivars is considered essential. The existing methods for identifying rice cultivars are mainly time-consuming, costly, and destructive; the development of novel methods is therefore highly beneficial. The aim of the present research is to classify common rice cultivars in Iran based on color, morphologic, and texture properties using artificial intelligence (AI) methods. In doing so, digital images of 13 rice cultivars in Iran, in the three forms of paddy, brown, and white rice, are analyzed through pre-processing and segmentation using MATLAB. Ninety-two properties, including 60 color, 14 morphologic, and 18 texture properties, were identified for each rice cultivar. In the next step, the normal distribution of the data was evaluated, and the possibility of observing a significant difference between all properties of the cultivars was studied using analysis of variance. In addition, the least significant difference (LSD) test was performed to obtain a more accurate comparison between cultivars. To reduce data dimensions and focus on the most effective components, principal component analysis (PCA) was employed. Accordingly, the accuracy of rice cultivar separation was calculated for paddy, brown rice, and white rice using discriminant analysis (DA) as 89.2%, 87.7%, and 83.1%, respectively. To identify and classify the desired cultivars, a multilayer perceptron neural network was implemented based on the most effective components. The results showed 100% accuracy of the network in identifying and classifying all of the mentioned rice cultivars. Hence, it is concluded that the combination of image processing and pattern recognition methods, such as statistical classification and artificial neural networks, can be used for the identification and classification of rice cultivars.
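The classification stage can be sketched in miniature. Hand-made three-component feature vectors stand in for the 92 color/morphology/texture properties, and a nearest-centroid rule stands in for the DA/MLP classifiers of the study; all names and numbers below are illustrative assumptions, not the paper's data.

```python
# Minimal sketch of cultivar classification: tiny invented feature vectors
# per cultivar, a per-class centroid, and a nearest-centroid decision rule
# standing in for the discriminant analysis / MLP stages of the abstract.
import math

def centroid(vectors):
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def classify(sample, centroids):
    # assign the sample to the cultivar with the nearest feature centroid
    return min(centroids, key=lambda c: math.dist(sample, centroids[c]))

training = {
    "cultivar_A": [[0.80, 0.10, 0.30], [0.82, 0.12, 0.28]],
    "cultivar_B": [[0.20, 0.55, 0.70], [0.18, 0.60, 0.72]],
}
centroids = {name: centroid(vecs) for name, vecs in training.items()}
print(classify([0.79, 0.11, 0.31], centroids))  # → cultivar_A
```

In the study itself the features would first be projected onto principal components before classification; that step is omitted here for brevity.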
Recently, the demand for housing and urban infrastructure has increased, raising the risk that natural calamities pose to human lives. Occupancy demand has rapidly increased the construction rate, while inadequately designed structures remain highly vulnerable. Buildings constructed before the development of seismic codes are additionally susceptible to earthquake vibrations. Structural collapse causes economic losses as well as loss of life. Applying detailed theoretical methods to analyze structural behavior is expensive and time-consuming. Therefore, introducing a rapid vulnerability assessment method to check structural performance is necessary for future developments; such a method is known as Rapid Visual Screening (RVS). This technique was created to identify, inventory, and screen structures that are potentially hazardous. When poor construction quality obscures some of the required parameters, however, the RVS process becomes tedious. To tackle such situations, multiple-criteria decision-making (MCDM) methods open a new gateway for seismic vulnerability assessment: the different parameters required by RVS can be incorporated into MCDM, which evaluates multiple conflicting criteria in decision making across many fields. This paper aims to bridge the gap between RVS and MCDM. Furthermore, to define the correlation between these techniques, the methodologies of the Indian, Turkish, and Federal Emergency Management Agency (FEMA) codes were implemented. The resulting seismic vulnerability assessments were observed and compared.
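The core MCDM idea behind such screening can be sketched as a weighted aggregation of normalized criterion ratings. The criteria names, weights, and ratings below are invented for illustration and are not taken from the FEMA, Indian, or Turkish codes discussed above.

```python
# Hedged sketch of weighted-sum MCDM scoring for rapid screening:
# each criterion gets a normalized rating in [0, 1] (1 = most vulnerable)
# and a weight; the weighted sum gives an overall vulnerability score.
def vulnerability_score(ratings, weights):
    assert abs(sum(weights.values()) - 1.0) < 1e-9  # weights must sum to 1
    return sum(weights[c] * ratings[c] for c in weights)

# illustrative criteria and weights (not from any actual code)
weights = {"soft_storey": 0.4, "age": 0.3, "irregularity": 0.3}
building = {"soft_storey": 1.0, "age": 0.5, "irregularity": 0.0}
print(round(vulnerability_score(building, weights), 2))  # → 0.55
```

Real MCDM methods (e.g. AHP or TOPSIS) derive the weights from pairwise comparisons rather than fixing them by hand; the aggregation step, however, has this general shape.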
A Machine Learning Framework for Assessing Seismic Hazard Safety of Reinforced Concrete Buildings (2020)
Although averting a seismic disturbance and its physical, social, and economic disruption is practically impossible, advancements in computational science and numerical modeling can equip humanity to predict its severity, understand the outcomes, and prepare for post-disaster management. Many buildings amidst developed metropolitan areas are aging yet still in service. These buildings were also designed before national seismic codes were established, or without construction regulations. In that case, risk reduction is significant for developing alternatives and designing suitable models to enhance the performance of existing structures. Such models can classify risks and casualties related to possible earthquakes and support emergency preparation. Thus, it is crucial to recognize structures that are susceptible to earthquake vibrations and need to be prioritized for retrofitting. However, the behavior of each building under seismic actions cannot be studied through full structural analysis, as this would be unrealistic because of the rigorous computations, long duration, and substantial expenditure. This calls for a simple, reliable, and accurate process known as Rapid Visual Screening (RVS), which serves as a primary screening platform, including an optimum number of seismic parameters and predetermined performance damage conditions for structures. In this study, the damage classification technique was studied, and the efficacy of the Machine Learning (ML) method in damage prediction via a Support Vector Machine (SVM) model was explored. The ML model was trained and tested separately on damage data from four different earthquakes, namely Ecuador, Haiti, Nepal, and South Korea. Each dataset consists of a varying number of input data and eight performance modifiers. Based on the study and the results, the ML model using SVM classifies the given input data into the corresponding classes and accomplishes the hazard safety evaluation of buildings.
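The damage-classification step can be sketched with a toy linear classifier. A perceptron stands in here for the SVM of the abstract (the decision-boundary idea is the same, though the training objective differs), and the two "performance modifier" features and their values are invented for illustration.

```python
# Toy stand-in for the SVM damage classifier: a perceptron separating
# "damaged" (1) from "undamaged" (0) buildings using two invented
# performance-modifier features; all numbers are illustrative.
def train_perceptron(data, epochs=20, lr=0.1):
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, y in data:
            pred = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
            err = y - pred  # update weights only on misclassification
            w = [w[0] + lr * err * x[0], w[1] + lr * err * x[1]]
            b += lr * err
    return w, b

def predict(w, b, x):
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

data = [([0.9, 0.8], 1), ([0.8, 0.9], 1), ([0.1, 0.2], 0), ([0.2, 0.1], 0)]
w, b = train_perceptron(data)
print(predict(w, b, [0.85, 0.75]))  # → 1 (classified as damaged)
```

An SVM would additionally maximize the margin of this separating line, which is what makes it robust on the small, imbalanced damage datasets the study describes.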
Tall buildings have become an integral part of cities despite all their pros and cons. Some current tall buildings have several problems because of their unsuitable location; the problems include increasing density, imposing traffic on urban thoroughfares, blocking view corridors, etc. Some of these buildings have destroyed desirable views of the city. In this research, different criteria have been chosen, such as environment, access, social-economic, land-use, and physical context. These criteria and sub-criteria were prioritized and weighted by the analytic network process (ANP) based on experts' opinions, using Super Decisions V2.8 software. In parallel, layers corresponding to the sub-criteria were made in ArcGIS 10.3, and a locating plan was then created via a weighted overlay (map algebra). In the next step, seven hypothetical tall buildings (20 stories), in the best part of the locating plan, were considered to evaluate how much of these hypothetical buildings would be visible (fuzzy visibility) from the street and open spaces throughout the city. These processes were modeled with MATLAB software, and the final fuzzy visibility plan was created in ArcGIS. The fuzzy visibility results can help city managers and planners to choose which location is suitable for a tall building and how much visibility may be appropriate. The proposed model can locate tall buildings based on technical and visual criteria in the future development of the city, and it can be widely used in any city as long as the criteria and weights are localized.
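The weighted-overlay (map algebra) step combines the criterion layers cell by cell. Below is a minimal sketch on 2×2 toy grids; the layer names, weights, and values are invented, not the study's actual ANP weights.

```python
# Sketch of weighted overlay: each raster layer is a small grid of
# normalized suitability scores, combined cell-wise with criterion weights
# (layers and weights invented for illustration).
def weighted_overlay(layers, weights):
    grid = next(iter(layers.values()))
    rows, cols = len(grid), len(grid[0])
    return [[sum(weights[k] * layers[k][r][c] for k in layers)
             for c in range(cols)] for r in range(rows)]

layers = {
    "access":      [[0.9, 0.2], [0.4, 0.8]],
    "environment": [[0.5, 0.6], [0.9, 0.1]],
}
weights = {"access": 0.6, "environment": 0.4}
plan = weighted_overlay(layers, weights)
print(round(plan[0][0], 2))  # → 0.74 (most suitable cell in this toy grid)
```

In the actual workflow the same operation runs over full-resolution GIS rasters, and the weights come from the ANP comparisons rather than being set by hand.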
The K-nearest neighbors (KNN) machine learning algorithm is a well-known non-parametric classification method. However, like other traditional data mining methods, applying it to big data comes with computational challenges. Indeed, KNN determines the class of a new sample based on the classes of its nearest neighbors, but identifying those neighbors in a large amount of data imposes a computational cost too large for a single computing machine. One of the proposed techniques to make classification methods applicable to large datasets is pruning. LC-KNN is an improved KNN method which first clusters the data into smaller partitions using the K-means clustering method, and then applies KNN for each new sample only on the partition whose center is nearest. However, because the clusters have different shapes and densities, selecting the appropriate cluster is a challenge. In this paper, an approach is proposed to improve the pruning phase of the LC-KNN method by taking these factors into account. The proposed approach helps to choose a more appropriate cluster of data in which to look for the neighbors, thus increasing the classification accuracy. The performance of the proposed approach is evaluated on different real datasets. The experimental results show the effectiveness of the proposed approach and its higher classification accuracy and lower time cost in comparison to other recent relevant methods.
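The baseline LC-KNN pruning idea can be sketched compactly: the data are pre-partitioned into clusters (centers computed offline, e.g. by k-means), and KNN then searches only the cluster whose center is nearest to the query. The data points and cluster assignments below are illustrative.

```python
# Minimal sketch of LC-KNN pruning: pick the cluster with the nearest
# center, then run plain KNN inside that cluster only (toy data).
import math
from collections import Counter

def lc_knn(query, clusters, k=3):
    # clusters: list of (center, [(point, label), ...])
    center, members = min(clusters, key=lambda c: math.dist(query, c[0]))
    nearest = sorted(members, key=lambda m: math.dist(query, m[0]))[:k]
    return Counter(label for _, label in nearest).most_common(1)[0][0]

clusters = [
    ([1.0, 1.0], [([0.9, 1.1], "a"), ([1.1, 0.9], "a"), ([1.2, 1.2], "b")]),
    ([5.0, 5.0], [([4.9, 5.1], "b"), ([5.2, 4.8], "b"), ([5.0, 5.0], "b")]),
]
print(lc_knn([1.05, 1.0], clusters))  # → a
```

The paper's contribution targets exactly the weak point of the `min(...)` line above: when clusters differ in shape and density, the nearest *center* is not always the most appropriate cluster to search.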
The amount of adsorbed styrene acrylate copolymer (SA) particles on cementitious surfaces at the early stage of hydration was quantitatively determined using three different methodological approaches: the depletion method, visible spectrophotometry (VIS), and thermogravimetry coupled with mass spectrometry (TG–MS). Considering the advantages and disadvantages of each method, including the respectively required sample preparation, the results for four polymer-modified cement pastes, varying in polymer content and cement fineness, were evaluated.
In part, significant discrepancies in the adsorption degrees were observed. There is a tendency for significantly lower amounts of adsorbed polymers to be identified using TG–MS compared to values determined with the depletion method; spectrophotometrically determined values lay between these extremes. This tendency was found for three of the four cement pastes examined and originates from sample preparation and methodical limitations.
The main influencing factor is the distortion of the polymer concentration in the liquid phase during centrifugation, caused by interactions at the interface between sediment and supernatant. The newly developed method, using TG–MS for the quantification of SA particles, proved to be suitable for dealing with these issues: instead of the fluid phase, the sediment is examined with regard to the polymer content, on which the influence of centrifugation is considerably smaller.
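The arithmetic behind the depletion method is simple: the adsorbed amount is inferred from the drop in polymer concentration in the pore solution. The sketch below uses invented values, not the study's measurements.

```python
# Worked sketch of the depletion method's arithmetic: adsorbed polymer
# per gram of cement is inferred from the concentration drop in the
# liquid phase (all numbers illustrative).
def adsorbed_polymer_mg_per_g(c0, c_eq, volume_ml, cement_g):
    # c0, c_eq: polymer concentration before/after contact, in mg/ml
    return (c0 - c_eq) * volume_ml / cement_g

print(adsorbed_polymer_mg_per_g(c0=10.0, c_eq=7.0,
                                volume_ml=50.0, cement_g=100.0))  # → 1.5
```

The paper's point is that `c_eq` itself is distorted by centrifugation, which is why the sediment-side TG–MS quantification is less sensitive to that error.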
In recent decades, a multitude of concepts and models have been developed to understand, assess and predict muscular mechanics in the context of physiological and pathological events.
Most of these models are highly specialized and designed to selectively address fields in, e.g., medicine, sports science, forensics, product design or CGI; their data are often not transferable to other fields of application. A single universal model, which covers the details of biochemical and neural processes as well as the development of internal and external force and motion patterns and appearance, would not be practical with regard to the diversity of the questions to be investigated and the need to find answers efficiently. With reasonable limitations, though, a generalized approach is feasible.
The objective of the work at hand was to develop a model for muscle simulation which covers the phenomenological aspects and is thus universally applicable in domains where specialized models have been used until now. This includes investigations of active and passive motion, structural interaction of muscles within the body and with external elements, for example in crash scenarios, but also research topics like the verification of in vivo experiments and parameter identification. For this purpose, elements for the simulation of incompressible deformations were studied, adapted and implemented into the finite element code SLang. Various anisotropic, visco-elastic muscle models were developed or enhanced. The applicability was demonstrated on the basis of several examples, and a general framework for the implementation of further material models was developed and elaborated.
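The passive visco-elastic behavior mentioned above can be illustrated in one dimension by a Kelvin–Voigt element, a drastic simplification I am introducing for intuition only; the thesis's actual models are anisotropic finite-element formulations. Parameters and numbers below are arbitrary.

```python
# One-dimensional Kelvin-Voigt sketch of a passive visco-elastic element:
# stress = E * strain + eta * strain_rate, stepped explicitly in time
# (parameters illustrative, not from the thesis).
def kelvin_voigt_stress(E, eta, strain, strain_rate):
    return E * strain + eta * strain_rate

E, eta = 100.0, 5.0        # stiffness and viscosity (arbitrary units)
dt, strain = 0.01, 0.0
history = []
for _ in range(3):         # ramp the strain and record the stress
    strain_rate = 0.2
    strain += strain_rate * dt
    history.append(kelvin_voigt_stress(E, eta, strain, strain_rate))
print([round(s, 3) for s in history])  # → [1.2, 1.4, 1.6]
```

The viscous term contributes a constant offset during the ramp, which is the hallmark of rate-dependent (visco-elastic) response as opposed to a purely elastic element.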
Classical Internet of Things routing and wireless sensor networks can provide more precise monitoring of the covered area thanks to the higher number of deployed nodes. Because of the limitations of the shared transfer medium, many nodes in the network are prone to collisions during simultaneous transmissions. Medium access control protocols are usually practical only in networks with low traffic that are not subjected to external noise from adjacent frequencies. There are preventive, detection and control solutions to congestion management in the network, all of which are the focus of this study. In the congestion prevention phase, the proposed method chooses the next step of the path using a fuzzy decision-making system to distribute network traffic via optimal paths. In the congestion detection phase, a dynamic approach to queue management was designed to detect congestion in the least amount of time and prevent collisions. In the congestion control phase, the back-pressure method was used, based on the quality of the queue, to decrease the probability of a link in the pathway passing through the pre-congested node. The main goals of this study are to balance energy consumption across network nodes, to reduce the rate of lost packets, and to increase the quality of service in routing. Simulation results showed that the proposed Congestion Control Fuzzy Decision Making (CCFDM) method was more capable of improving routing parameters than recent algorithms.
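The fuzzy next-hop choice in the congestion-prevention phase can be sketched roughly as follows; the per-neighbor inputs (residual energy, queue load), the triangular membership functions, and the Mamdani-style min-combination are illustrative assumptions, not the CCFDM rule base itself.

```python
# Hypothetical sketch of fuzzy next-hop selection for congestion prevention.
# Inputs per neighbor (residual energy, queue load) are assumed scaled to [0, 1].
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def desirability(energy, queue_load):
    """Fuzzy score: prefer high residual energy AND a lightly loaded queue."""
    high_energy = tri(energy, 0.3, 1.0, 1.7)    # peaks at full energy
    low_load = tri(queue_load, -0.7, 0.0, 0.7)  # peaks at an empty queue
    return min(high_energy, low_load)           # Mamdani-style conjunction

def next_hop(neighbors):
    """neighbors: node -> (residual energy, queue load); pick the best node."""
    return max(neighbors, key=lambda n: desirability(*neighbors[n]))

candidates = {"A": (0.9, 0.2), "B": (0.5, 0.1), "C": (0.95, 0.8)}
print(next_hop(candidates))  # A: well charged and lightly loaded
```

Ranking candidates by the minimum of the two memberships means a node must score well on both criteria; a nearly full queue (node C) disqualifies a node however much energy it has.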
Self-healing materials have recently become more popular due to their capability to autonomously and autogenously repair damage in cementitious materials. The concept of self-healing gives the damaged material the ability to recover its stiffness, in contrast to a material not subjected to healing, which, once damaged, cannot sustain further loading due to stiffness degradation. Numerical modeling of self-healing materials is still in its infancy. Multiple experimental studies in the literature describe the self-healing behavior of cementitious materials; however, few numerical investigations have been undertaken.
The thesis presents an analytical framework of self-healing and super healing materials based on continuum damage-healing mechanics. Through this framework, we aim to describe the recovery and strengthening of material stiffness and strength. A simple damage healing law is proposed and applied on concrete material. The proposed damage-healing law is based on a new time-dependent healing variable. The damage-healing model is applied on isotropic concrete material at the macroscale under tensile load. Both autonomous and autogenous self-healing mechanisms are simulated under different loading conditions. These two mechanisms are denoted in the present work by coupled and uncoupled self-healing mechanisms, respectively. We assume in the coupled self-healing that the healing occurs at the same time with damage evolution, while we assume in the uncoupled self-healing that the healing occurs when the material is deformed and subjected to a rest period (damage is constant). In order to describe both coupled and uncoupled healing mechanisms, a one-dimensional element is subjected to different types of loading history.
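The uncoupled (rest-period) mechanism can be illustrated with a minimal numerical sketch; the exponential healing variable h(t) = 1 − exp(−t/τ), the value of τ, and the stiffness-recovery law below are illustrative assumptions, not the thesis's exact formulation.

```python
import math

# Toy sketch of uncoupled healing: damage is frozen during the rest period
# while a time-dependent healing variable h(t) removes part of it.
def healing_variable(t, tau=5.0):
    """h grows from 0 (no healing) toward 1 (full healing) over a rest period."""
    return 1.0 - math.exp(-t / tau)

def effective_stiffness(E0, damage, h):
    """Healing removes a fraction h of the accumulated damage."""
    return E0 * (1.0 - damage * (1.0 - h))

E0, d = 30e9, 0.4            # illustrative virgin stiffness (Pa), frozen damage
for t in (0.0, 5.0, 20.0):   # rest-period durations
    print(t, effective_stiffness(E0, d, healing_variable(t)))
```

As the rest period grows, the effective stiffness climbs monotonically from the damaged value E0(1 − d) back toward E0, which is the recovery behavior the damage-healing law is meant to capture.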
In the same context, a derivation of the nonlinear self-healing theory is given, and a comparison of linear and nonlinear damage-healing models is carried out using both coupled and uncoupled self-healing mechanisms. The nonlinear healing theory includes generalized nonlinear and quadratic healing models. The healing efficiency is studied by varying the values of the healing rest period and the parameter describing the material characteristics. In addition, a theoretical formulation of different self-healing variables is presented for both isotropic and anisotropic materials. The healing variables are defined based on the recovery in elastic modulus, shear modulus, Poisson's ratio, and bulk modulus. The evolution of the healing variable calculated based on cross-section, as a function of the healing variable calculated based on elastic stiffness, is presented under both the hypothesis of elastic strain equivalence and that of elastic energy equivalence. The components of the fourth-rank healing tensor are also obtained in the cases of isotropic elasticity, plane stress and plane strain.
Recent research revealed that self-healing also presents a crucial solution for the strengthening of materials. This new concept has been termed "super healing". Once the stiffness of the material is recovered, further healing results in a strengthening of the material. In the present thesis, a new theory of super healing materials is defined for the isotropic and anisotropic cases using sound mathematical and mechanical principles, which are applied in linear and nonlinear super healing theories. Additionally, the link of the proposed theory with the theory of undamageable materials is outlined. In order to describe the super healing efficiency in the linear and nonlinear theories, the ratio of effective stress to nominal stress is calculated as a function of the super healing variable. In addition, the hypotheses of elastic strain and elastic energy equivalence are applied. In the same context, a new super healing matrix in plane strain is proposed based on continuum damage-healing mechanics.
In the present work, we also focus on numerical modeling of the impact behavior of reinforced concrete slabs using the commercial finite element package Abaqus/Explicit. Plain and reinforced concrete slabs of unconfined compressive strength 41 MPa are simulated under the impact of an ogive-nosed hard projectile. The constitutive material modeling of the concrete and the steel reinforcement bars is performed using the Johnson-Holmquist-2 damage model and the Johnson-Cook plasticity model, respectively. Damage diameters and residual velocities obtained by the numerical model are compared with the experimental results, and the effects of steel reinforcement and projectile diameter are studied.
This dissertation examines artworks that focus on the everyday object. This art form grew out of the socio-cultural developments of the twentieth century, which brought fundamental changes in the relationship between people and things.
These developments led to a broad artistic turn toward objects and to a unique culmination of diverse engagements with them as subjects fit for art, through which the new world of things was explored; the resulting artworks mirror these developments.
The dissertation also places the things themselves in focus. Four aspects of things (materiality, functionality, representationality and relationality) are considered separately and situated within the theoretical discourse of the twentieth century in order to better understand them as part of lived reality, from which the aesthetic gaze cannot be separated. Drawing on the artistic positions of Robert Rauschenberg, Christo and Jeanne-Claude, Daniel Spoerri and Arman, as well as Claes Oldenburg, the various aspects of things are examined more closely, analyzing how they are addressed in the artworks and what relevance they hold for the experience of their reception.
The correlation of these two levels - things as a constitutive component of social space and things as elements in artworks - which stands at the center of this study, makes it possible to reassess the artistic turn toward things in the 1960s. It also yields a more differentiated picture of the art of that period and of things in art in general.
The economic losses from earthquakes tend to hit the national economy considerably; therefore, models capable of estimating the vulnerability and losses from future earthquakes are highly consequential for emergency planners aiming at risk mitigation. This demands a mass prioritization filtering of structures to identify vulnerable buildings for retrofitting purposes. Applying advanced structural analysis to each building to study its earthquake response is impractical due to complex calculations, long computational times, and exorbitant cost. This exhibits the need for a fast, reliable, and rapid method, commonly known as Rapid Visual Screening (RVS). The method serves as a preliminary screening platform, using an optimal number of seismic parameters of the structure and predefined output damage states. In this study, the efficacy of applying Machine Learning (ML) to damage prediction through a Support Vector Machine (SVM) model as the damage classification technique is investigated. The developed model was trained and examined on damage data from the 1999 Düzce earthquake in Turkey, where the building data consist of 22 performance modifiers used with supervised machine learning.
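The classification idea can be sketched with a linear SVM trained by the Pegasos sub-gradient method; the two toy features (standing in for performance modifiers such as storey count), the labels, and all hyperparameters are illustrative assumptions, not the study's 22-modifier model.

```python
import random

# Illustrative linear SVM (Pegasos sub-gradient descent) on toy data where
# +1 = "damaged" and -1 = "undamaged"; not the study's actual model or data.
def train_linear_svm(X, y, lam=0.01, epochs=300, seed=0):
    rng = random.Random(seed)
    X = [list(x) + [1.0] for x in X]      # augment with a constant bias feature
    w = [0.0] * len(X[0])
    t = 0
    for _ in range(epochs):
        for i in rng.sample(range(len(X)), len(X)):
            t += 1
            eta = 1.0 / (lam * t)
            margin = y[i] * sum(wj * xj for wj, xj in zip(w, X[i]))
            w = [(1.0 - eta * lam) * wj for wj in w]   # regularization shrink
            if margin < 1.0:                           # hinge-loss sub-gradient
                w = [wj + eta * y[i] * xj for wj, xj in zip(w, X[i])]
    return w

def predict(w, x):
    score = sum(wj * xj for wj, xj in zip(w, list(x) + [1.0]))
    return 1 if score >= 0 else -1

X = [(3, 3), (4, 3), (3, 4), (0, 0), (1, 0), (0, 1)]
y = [1, 1, 1, -1, -1, -1]
w = train_linear_svm(X, y)
print([predict(w, x) for x in X])
```

In practice a library implementation (e.g. a kernelized SVM) would be used on the full 22-dimensional modifier vectors; the sketch only shows the hinge-loss training principle behind the classifier.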
Abstract In the first part of this research, the utilization of tuned mass dampers in the vibration control of tall buildings during earthquake excitations is studied. The main issues such as optimizing the parameters of the dampers and studying the effects of frequency content of the target earthquakes are addressed.
The non-dominated sorting genetic algorithm method is improved by upgrading its generic operators and is utilized to develop a framework for determining the optimum placement and parameters of dampers in tall buildings. A case study is presented in which the optimal placement and properties of dampers are determined for a model of a tall building under different earthquake excitations through computer simulations.
In the second part, a novel framework for brain-learning-based intelligent seismic control of smart structures is developed. In this approach, a deep neural network learns how to improve structural responses during earthquake excitations using feedback control.
A reinforcement learning method is improved and utilized to develop a framework for training the deep neural network as an intelligent controller. The efficiency of the developed framework is examined through two case studies: a single-degree-of-freedom system and a high-rise building under different earthquake excitation records.
The results show that the controller gradually develops an optimum control policy to reduce the vibrations of a structure under earthquake excitation through a cyclical process of actions and observations.
It is shown that the controller efficiently improves the structural responses under new earthquake excitations for which it was not trained. Moreover, the controller shows stable performance under uncertainties.
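The cyclical action-observation learning can be illustrated with a contextual-bandit simplification; the two-state displacement encoding, the reward shape, and all numbers below are hypothetical toy choices, far simpler than the deep-network controller described above.

```python
import random

# Contextual-bandit toy of the action/observation cycle: the "state" is just
# the sign of the structure's displacement, and the controller learns which
# force direction earns the best (least negative) reward.
STATES, ACTIONS = ("neg", "pos"), (-1, 0, 1)

def reward(state, action):
    """Highest reward when the control force opposes the displacement."""
    disp = -1.0 if state == "neg" else 1.0
    return -abs(disp + 0.5 * action)

def train(episodes=2000, alpha=0.1, seed=1):
    rng = random.Random(seed)
    Q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
    for _ in range(episodes):
        s, a = rng.choice(STATES), rng.choice(ACTIONS)   # pure exploration
        Q[(s, a)] += alpha * (reward(s, a) - Q[(s, a)])  # running value estimate
    return Q

Q = train()
policy = {s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in STATES}
print(policy)  # the learned policy pushes against the displacement
```

Even this minimal loop shows the mechanism the abstract describes: value estimates improve with repeated action-observation cycles until the greedy policy opposes the vibration.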
This study reconstructs a history of the problems of simultaneity between seeing and hearing, or the synchronicity of image and sound, up to the emergence of sound film. Lines are drawn between discursive configurations and medial arrangements in which the relationship of seeing and hearing, or image and sound, appears as a temporal one - in which seeing and hearing, image and sound merge between multiplicity and unity, fall apart, interact, become redundant or specific, complement, dominate, miss, displace or split one another…
In the history of cinema, sound film is not merely a supplement. It rather resembles the appearance of a specter that had long, perhaps always, accompanied the knowledge and techniques of separating the senses. The arrival of sound film was also an early occasion for a far-reaching discourse about what audiovision could and should actually be. More generally still, sound film can be regarded as one of the first major projects of the convergence of technical media, which today - especially in view of the computer - appears as a decisive aspect of media history.
The lines of the problems of simultaneity/non-simultaneity at the intersections of knowledge, technology and aesthetics are traced through three fields in particular:
1) The history of intermodality with regard to the question of simultaneity and non-simultaneity as a problem and object of science since the nineteenth century, chiefly in two areas: as a source of error in the astronomical observatory in the measurement, determination and standardization of space and time, which points to individual deviations of intermodal perception and, as the problem of the "personal equation", had a career far beyond astronomy; and as a hot zone of experiments in the psychology of perception and their apparatuses since the mid-nineteenth century, which, with the concept of "complication", raised questions about a synthesis of the sense perceptions and thus, ultimately, about the self-presence of the human being.
2) A history of the technical problem of coupling and synchronizing auditory and visual time media - such as the phonograph and film. Two time-critical relations escalate here: on the one hand, between the discrete, intermittent movement of film and the steady, continuous movement of the phonograph; on the other, with respect to the point - where and when - at which the audiovisual presence of cinema comes into being, or is missed.
3) A history of film theory and aesthetics in which, with the establishment of sound film around 1930, the question arose of what this new medium was and what was to be done with it. These negotiations ranged between the stated goal of a specific illusion or presence of sound film through synchronicity on the one hand and, on the other, the demand for "asynchronism" as a critical method, arising from the suspicion of deception through synchronicity.
Starting from the thesis that the senses were divided up in the nineteenth century, heterogeneous things will then occur simultaneously at some point in these arrangements. At which point? And what do these (non-)simultaneities mean? What is at stake here - very generally speaking - are the possibilities of an audiovisually shared or divided world and present.
This article focuses on further developments of the background-oriented schlieren (BOS) technique to visualize convective indoor air flow, which is usually characterized by very small density gradients. Since light rays deflect when passing through fluids of different densities, BOS can detect the resulting refractive index gradients as an integration along a line of sight. In this paper, the BOS technique is used to yield a two-dimensional visualization of small density gradients. The novelty of the described method is the implementation of a highly sensitive BOS setup to visualize the ascending thermal plume from a heated thermal manikin with temperature differences as small as 1 K. To guarantee steady boundary conditions, the thermal manikin was seated in a climate laboratory. For the experimental investigations, a high-resolution DSLR camera was used, capturing a large field of view with sufficient detail accuracy. Several parameters, such as various backgrounds, focal lengths, room air temperatures, and distances between the object of investigation, camera, and structured background, were tested to find the most suitable parameters to visualize convective indoor air flow. Besides these measurements, this paper presents the analysis method using cross-correlation algorithms and, finally, the results of visualizing the convective indoor air flow with BOS. The highly sensitive BOS setup presented in this article complements the commonly used invasive methods that strongly influence weak air flows.
The ruin of the Barfüßerkirche in Erfurt is one of the last reminders of the destruction of the city in the Second World War. To this day it is used temporarily and seasonally for cultural purposes. In a study project in the 2019 summer semester at the Bauhaus-Universität Weimar, supervised by the Chair of Heritage Conservation and Architectural History and supported by the Initiativkreis Barfüßerkirche, use concepts for a museum of medieval art and for a conference venue were investigated. The present volume documents the 14 student designs produced for building onward at the Barfüßerkirche.
Against the backdrop of steadily growing demand for concrete and ambitious targets for reducing the CO2 emitted in cement production, calcined clays are currently considered the most promising technical innovation among sustainable binder concepts. By exploiting their pozzolanicity, a substantial share of the clinker component in cement is to be replaced, while the energy required for their activation is comparatively low. Key advantages of clays are their nearly unlimited worldwide availability and the extremely low raw-material-related CO2 emissions during calcination. Difficulties on the path to implementation, however, lie in the versatility of the system, which is characterized by a high variety of raw clays and, consequently, of their thermal behavior. Correspondingly difficult is the transfer of experience with already established calcined clays such as metakaolin, which is distinguished by high purity, an elaborate processing chain and accordingly high reactivity. The aim of this work is therefore to extend the existing state of knowledge to other, economically relevant clays and to establish their suitability for use in concrete.
In a multi-stage work program, it was investigated to what extent industrially exploitable clays can be activated and which properties result for cement and concrete. The order established there - kaolinite > montmorillonite > illite - describes both the reactivity of the calcination products and, inversely, the level of the optimal calcination temperature. The character of the resulting meta-phases also changes along this sequence, from X-ray amorphous and highly reactive to glassy and sluggish. Despite this ranking, even the illite exhibited a pozzolanicity comparable to that of hard-coal fly ash. This was subsequently confirmed in parameter studies in which the influences of raw-material quality, calcination, processing and cement on the reactivity yield were evaluated. The range of achievable qualities is immense and culminates in strongly differing effects on the hardened-concrete properties. Here, the pore refinement typical of pozzolans was particularly noticeable, so that many transport-dependent damage mechanisms were suppressed. Other damaging exposures, such as frost attack, could be controlled by additional measures such as air entrainment. The main drawbacks are the poor workability of kaolinitic meta-clays and the pronounced carbonation tendency typical of pozzolans.
The key result of this work is that even clays previously regarded as low-grade with respect to their activation potential can develop usable pozzolanic properties. Even a heavily contaminated illitic clay can reach the quality of fly ash. With increasing clay-mineral content, and in the presence of thermally less stable clay minerals such as montmorillonite and kaolinite, the spectrum of usable pozzolanicities extends up to highly reactive metakaolin quality. Good to very good concrete properties can thus be achieved, matching the performance of established composite materials. The prerequisites for an extensive use of the considerable quantities of clay in cement and concrete are therefore in place. Clays can accordingly make an effective contribution to greater sustainability in building-material production worldwide.
Experimente lernen, Techniken tauschen. Ein spekulatives Handbuch (Learning Experiments, Exchanging Techniques. A Speculative Handbook)
The speculative handbook offers a wide range of techniques for radical learning and teaching. It comprises concrete instructions, experiences and theoretical reflections. The texts contribute to conceiving a form of mediation that (re)introduces collective experimentation.
Learning and unlearning take place in the seminar room, in workshops, at festivals, in corridors, in parks and in the city. Texts and instructions cover, among other things: film essays, collages, bank robberies, the University of the Dead, wild writing, conceptual speed dating, neurodiverse learning, thinking in formats, the Theater of Care, the writing laboratory and the body strike.
When it comes to monitoring huge structures, the main issues are limited time, high costs and how to deal with the large amount of data. To reduce and manage these, methods from the field of optimal design of experiments are useful and supportive. Having optimal experimental designs at hand before conducting any measurements leads to a highly informative measurement concept, in which the sensor positions are optimized with respect to minimal errors in the structure's models. To reduce computational time, a combined approach using the Fisher information matrix and the mean-squared error in a two-step procedure is proposed under consideration of different error types. The error descriptions contain random/aleatoric and systematic/epistemic portions. Applying this combined approach to a finite element model using artificial acceleration time-series data with artificially added errors yields the optimized sensor positions. These findings are compared with results from laboratory experiments on the modeled structure, a tower-like structure represented by a hollow pipe acting as a cantilever beam. Conclusively, the combined approach leads to a sound experimental design that gives a good estimate of the structure's behavior and model parameters without the need for preliminary measurements for model updating.
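The first step of such a procedure, ranking candidate sensor positions by the information they contribute, can be sketched with the Fisher information matrix alone; the mode-shape ordinates and the exhaustive two-sensor search below are illustrative toy values, not the paper's cantilever model.

```python
from itertools import combinations

# Hypothetical mode-shape ordinates (two modes) at 4 candidate sensor heights
# on a cantilever; the values are illustrative, not from the paper's model.
PHI = [(0.10, 0.35), (0.35, 0.95), (0.65, 0.60), (1.00, -1.00)]

def fim_det(rows):
    """Determinant of the 2x2 Fisher information matrix Phi_s^T Phi_s."""
    a = sum(r[0] * r[0] for r in rows)
    b = sum(r[0] * r[1] for r in rows)
    c = sum(r[1] * r[1] for r in rows)
    return a * c - b * b

def best_sensor_pair(phi):
    """Exhaustively pick the 2 positions maximizing the FIM determinant."""
    return max(combinations(range(len(phi)), 2),
               key=lambda idx: fim_det([phi[i] for i in idx]))

print(best_sensor_pair(PHI))
```

Maximizing det(FIM) is one standard criterion (D-optimality) for sensor placement; in the paper's two-step procedure this information-based ranking would then be refined against the mean-squared error under the different error types.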
Viewed against its social and ecological consequences, the settlement and housing policy practiced in Germany in 2020 leaves a bitter aftertaste. Rich and poor continue to drift apart, and historically and systemically rooted path dependencies still stand in the way of a targeted ecological transformation of the way urban development and housing policy are shaped. These dependencies become visible only through an integrated consideration of social and economic aspects, and they point to one of the original questions of left-wing social research: the relationship between property and justice.
Three main findings emerge: first, the discourse on protecting the climate and biodiversity directly touches the parameters of density, mixed use and land consumption; second, land consumption rises relative to individually available capital, especially in owner-occupied housing compared with rental housing; and third, the share of ownership grows with the advancing financialization of the housing market, so that the risk of social and ecological crises intensifies.
Prediction of the groundwater nitrate concentration is of utmost importance for pollution control and water resource management. This research aims to model the spatial groundwater nitrate concentration in the Marvdasht watershed, Iran, using several artificial intelligence methods: support vector machine (SVM), Cubist, random forest (RF), and Bayesian artificial neural network (Bayesian-ANN) machine learning models. For this purpose, 11 independent variables affecting groundwater nitrate changes were prepared for the study area: elevation, slope, plan curvature, profile curvature, rainfall, piezometric depth, distance from the river, distance from residential areas, sodium (Na), potassium (K), and topographic wetness index (TWI). Nitrate levels were also measured in 67 wells and used as the dependent variable for modeling. Data were divided into training (70%) and testing (30%) sets. The coefficient of determination (R2), mean absolute error (MAE), root mean square error (RMSE), and Nash–Sutcliffe efficiency (NSE) were used to evaluate the performance of the models. The results of modeling the groundwater nitrate concentration showed that the RF model (R2 = 0.89, RMSE = 4.24, NSE = 0.87) performs better than the Cubist (R2 = 0.87, RMSE = 5.18, NSE = 0.81), SVM (R2 = 0.74, RMSE = 6.07, NSE = 0.74), and Bayesian-ANN (R2 = 0.79, RMSE = 5.91, NSE = 0.75) models. Zoning of the groundwater nitrate concentration showed that the northern parts of the study area, which are agricultural, have the highest nitrate levels.
The most important cause of nitrate pollution in these areas is agricultural activity and the use of groundwater from wells close to agricultural land to irrigate crops; the indiscriminate use of chemical fertilizers means that fertilizer is washed out by irrigation or rainwater, penetrates the groundwater, and pollutes the aquifer.
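The four evaluation criteria named above are standard and can be written out directly; these are the usual definitions (with R2 taken here as the squared Pearson correlation), not code from the study, and the observation/prediction values are purely illustrative.

```python
import math

def rmse(obs, pred):
    """Root mean square error."""
    return math.sqrt(sum((o - p) ** 2 for o, p in zip(obs, pred)) / len(obs))

def mae(obs, pred):
    """Mean absolute error."""
    return sum(abs(o - p) for o, p in zip(obs, pred)) / len(obs)

def nse(obs, pred):
    """Nash-Sutcliffe efficiency: 1 is perfect, <= 0 is no better than the mean."""
    mean_o = sum(obs) / len(obs)
    return 1.0 - (sum((o - p) ** 2 for o, p in zip(obs, pred))
                  / sum((o - mean_o) ** 2 for o in obs))

def r2(obs, pred):
    """Coefficient of determination as the squared Pearson correlation."""
    n = len(obs)
    mo, mp = sum(obs) / n, sum(pred) / n
    cov = sum((o - mo) * (p - mp) for o, p in zip(obs, pred))
    return cov ** 2 / (sum((o - mo) ** 2 for o in obs)
                       * sum((p - mp) ** 2 for p in pred))

# hypothetical nitrate observations (mg/L) against model predictions
obs = [12.0, 9.5, 14.0, 11.0]
pred = [11.5, 10.0, 13.2, 11.4]
print(rmse(obs, pred), mae(obs, pred), nse(obs, pred), r2(obs, pred))
```

Note that R2 as a squared correlation can be high even for a biased model, while NSE penalizes bias; reporting both, as the study does, guards against that gap.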
This research aims to model soil temperature (ST) using the machine learning models of the multilayer perceptron (MLP) algorithm and the support vector machine (SVM) in hybrid form with the firefly optimization algorithm, i.e., MLP-FFA and SVM-FFA. Measured ST and meteorological parameters of the Tabriz and Ahar weather stations over the period 2013–2015 are used for training and testing of the studied models with one- and two-day lags. To obtain conclusive results for the validation of the proposed hybrid models, the error metrics are benchmarked over an independent testing period; Taylor diagrams were also utilized for that purpose. The results showed that, in the case of a one-day lag, except for predicting ST at 5 cm below the soil surface (ST5cm) at Tabriz station, MLP-FFA produced superior results compared with the MLP, SVM, and SVM-FFA models. For the two-day lag, MLP-FFA showed increased accuracy in predicting ST5cm and ST20cm at Tabriz station and ST10cm at Ahar station in comparison with SVM-FFA. Additionally, for all of the prescribed models, the performance of the MLP-FFA and SVM-FFA hybrid models in the testing phase was meaningfully superior to that of the classical MLP and SVM models.
Wind effects can be critical for the design of lifelines such as long-span bridges. The existence of a significant number of aerodynamic force models used to assess the performance of bridges poses an important question regarding their comparison and validation. This study utilizes a unified set of metrics for a quantitative comparison of time-histories in bridge aerodynamics with a host of characteristics. Accordingly, nine comparison metrics are included to quantify the discrepancies in local and global signal features such as phase, time-varying frequency and magnitude content, probability density, nonstationarity, and nonlinearity. Among these, seven metrics available in the literature are introduced after recasting them for time-histories associated with bridge aerodynamics. Two additional metrics are established to overcome the shortcomings of the existing metrics. The performance of the comparison metrics is first assessed using generic signals with prescribed signal features. Subsequently, the metrics are applied to a practical example from bridge aerodynamics to quantify the discrepancies in the aerodynamic forces and response based on numerical and semi-analytical aerodynamic models. In this context, it is demonstrated how a discussion based on the set of comparison metrics presented here can aid a model evaluation by offering deeper insight. The outcome of the study is intended to provide a framework for the quantitative comparison and validation of aerodynamic models based on the underlying physics of fluid-structure interaction. Immediate further applications are expected in the comparison of time-histories simulated by data-driven approaches.
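Two of the simplest magnitude-type comparison metrics, discrepancies in peak and in RMS amplitude, can be sketched as follows; the normalization by the reference signal is one common convention, and the sine test signals are purely illustrative, not the nine metrics defined in the paper.

```python
import math

def peak_metric(ref, sig):
    """Relative discrepancy in peak absolute amplitude (0 = identical peaks)."""
    peak_ref = max(abs(x) for x in ref)
    peak_sig = max(abs(x) for x in sig)
    return abs(peak_ref - peak_sig) / peak_ref

def rms_metric(ref, sig):
    """Relative discrepancy in root-mean-square amplitude."""
    rms = lambda s: math.sqrt(sum(x * x for x in s) / len(s))
    return abs(rms(ref) - rms(sig)) / rms(ref)

# two hypothetical force histories: the second underestimates amplitude by 10%
t = [i * 0.01 for i in range(1000)]
ref = [math.sin(2.0 * math.pi * x) for x in t]
sig = [0.9 * math.sin(2.0 * math.pi * x) for x in t]
print(peak_metric(ref, sig), rms_metric(ref, sig))
```

Both metrics report roughly 0.1 for a uniform 10% amplitude error while staying blind to phase shifts, which is exactly why a full comparison framework needs additional phase, frequency, and probability-density metrics alongside magnitude ones.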
Despite digitization and platformization, mass media and established media companies still play a crucial role in the provision of journalistic content in democratic societies. Competition is one key driver of (media) company behavior and is considered to have an impact on the media’s performance. However, theory and empirical research are ambiguous about the relationship. The objective of this article is to empirically analyze the effect of competition on media performance in a cross-national context. We assessed media performance of media companies as the importance of journalistic goals within their stated corporate goal system. We conducted a content analysis of letters to the shareholders in annual reports of more than 50 media companies from 2000 to 2014 to operationalize journalistic goal importance. When employing a fixed effects regression analysis, as well as a fuzzy set qualitative comparative analysis, results suggest that competition has a positive effect on the importance of journalistic goals, while the existence of a strong public service media sector appears to have the effect of “crowding out” commercial media companies.
This study applies reliability analysis to address the mechanical behaviour issues present in the current structural design of fabric structures. Purely predictive material models are highly desirable to facilitate an optimized design scheme and to significantly reduce the time and cost of the design stage, for example that of experimental characterization.
The present study examines three major tasks applied to proposed weave structures for woven fabric composites: a) single-objective optimization, b) sensitivity analyses, and c) multi-objective optimization. For the single-objective optimization task, the first goal is to optimize the elastic properties of the proposed complex weave structure on a unit-cell basis using periodic boundary conditions.
We predict the geometric characteristics related to the skewness of woven fabric composites via an evolutionary algorithm (EA) and a parametric study. We also demonstrate the effect of complex weave structures on the fray tendency in woven fabric composites via a tightness evaluation. We utilize a procedure that does not require a numerical averaging process for evaluating the elastic properties of woven fabric composites. The fray tendency and skewness of woven fabrics depend upon the behaviour of the floats, which is related to the weave factor. The results of this study may suggest a broader view for further research into the effects of complex weave structures, or may provide an alternative to the fray and skewness problems of current weave structures in woven fabric composites.
A comprehensive study is developed on the complex weave structure model, which adopts the dry woven fabric of the most promising pattern from the single-objective optimization and incorporates the uncertainty parameters of woven fabric composites. The comprehensive study covers regression-based and variance-based sensitivity analyses. The goal of the second task is to introduce the fabric uncertainty parameters and to elaborate how they can be incorporated into finite element models of macroscopic material parameters, such as the elastic modulus and shear modulus of dry woven fabric subjected to uniaxial and biaxial deformations. Significant correlations in the study would indicate the need for a thorough investigation of woven fabric composites under uncertainty parameters. The study described here could serve as an alternative for identifying effective material properties without prolonged time consumption and expensive experimental tests.
The last part focuses on a hierarchical stochastic multi-scale optimization approach (fine-scale and coarse-scale optimization) under geometrical uncertainty parameters for hybrid composites with a complex weave structure. The fine-scale optimization determines the best lamina pattern maximizing the macroscopic elastic properties, conducted by the EA under the following uncertain mesoscopic parameters: yarn spacing, yarn height, yarn width, and misalignment of the yarn angle. The coarse-scale optimization optimizes the stacking sequence of a symmetric hybrid laminated composite plate with uncertain mesoscopic parameters by employing the ant colony algorithm (ACO). The objective functions of the coarse-scale optimization are to minimize the cost (C) and weight (W) of the hybrid laminated composite plate, with the fundamental frequency and the buckling load factor as design constraints.
Based on the uncertainty criteria of the design parameters, the appropriate variation required for the structural design standards can be evaluated using the reliability tool, and then an optimized design decision in consideration of cost can be subsequently determined.
The effect of urban form on energy consumption has been the subject of various studies around the world. Having examined the effect of buildings on energy consumption, these studies indicate that the physical form of a city has a notable impact on the amount of energy consumed in its spaces. The present study identified the variables that affect energy consumption in residential buildings and analyzed their effects on energy consumption in four neighborhoods in Tehran: Apadana, Bimeh, Ekbatan-phase I, and Ekbatan-phase II. After extracting the variables, their effects were estimated with statistical methods, and the results were compared with land surface temperature (LST) remote sensing data derived from Landsat 8 satellite images taken in the winter of 2019. The results showed that physical variables, such as the size of buildings, population density, vegetation cover, texture concentration, and surface color, have the greatest impact on energy usage. For the Apadana neighborhood, the factors with the strongest effect on energy consumption were found to be the size of buildings and the population density. For the other neighborhoods, however, a third factor was also recognized to have a significant effect on energy consumption: the type of buildings for Bimeh, texture concentration for Ekbatan-I, and the orientation of buildings for Ekbatan-II.
Cooling Performance of a Novel Circulatory Flow Concentric Multi-Channel Heat Sink with Nanofluids
(2020)
Heat rejection from electronic devices such as processors necessitates a high heat removal rate. The present study focuses on a novel liquid-cooled heat sink geometry made from four channels (width 4 mm and depth 3.5 mm) configured in a concentric shape with alternate flow passages (slots with a 3 mm gap). The cooling performance of the heat sink was tested under simulated controlled conditions. The lower bottom surface of the heat sink was heated under a constant heat flux condition based on dissipated powers of 50 W and 70 W. The computations were carried out for different volume fractions of nanoparticles, namely 0.5% to 5%, with water as the base fluid, at flow rates of 30 to 180 mL/min. The results showed a higher rate of heat rejection from the nanofluid-cooled heat sink compared with water. The enhancement in performance was analyzed with the help of the difference between the nanofluid outlet temperature and the water outlet temperature under similar operating conditions. The enhancement was ~2% for 0.5% volume fraction nanofluids and ~17% for a 5% volume fraction.
Coronary Artery Disease Diagnosis: Ranking the Significant Features Using a Random Trees Model
(2020)
Heart disease is one of the most common diseases among middle-aged citizens. Among the vast number of heart diseases, coronary artery disease (CAD) is considered a common cardiovascular disease with a high death rate. The most popular tool for diagnosing CAD is medical imaging, e.g., angiography. However, angiography is known for being costly and is also associated with a number of side effects. Hence, the purpose of this study is to increase the accuracy of coronary heart disease diagnosis by selecting significant predictive features in order of their ranking. We propose an integrated method using machine learning: the random trees (RTs) model, the C5.0 decision tree, the support vector machine (SVM), and the Chi-squared automatic interaction detection (CHAID) decision tree. The proposed method shows promising results, and the study confirms that the RTs model outperforms the other models.
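The ranking idea behind tree-based models can be sketched with a single-split Gini importance, the building block of importance scores in random trees. The toy patient records and feature names below are invented for illustration and are not the study's data or its RTs model:

```python
def gini(labels):
    """Gini impurity of a list of 0/1 class labels."""
    n = len(labels)
    p1 = sum(labels) / n
    return 1.0 - p1 ** 2 - (1.0 - p1) ** 2

def split_importance(values, labels):
    """Best Gini impurity reduction achievable by a single threshold split."""
    parent = gini(labels)
    best = 0.0
    for t in sorted(set(values))[:-1]:  # thresholds between observed values
        left = [y for x, y in zip(values, labels) if x <= t]
        right = [y for x, y in zip(values, labels) if x > t]
        child = (len(left) * gini(left) + len(right) * gini(right)) / len(labels)
        best = max(best, parent - child)
    return best

# Toy records: [age, blood_pressure, cholesterol]; label 1 = CAD present.
X = [[45, 120, 180], [62, 150, 260], [58, 145, 240],
     [39, 115, 170], [66, 118, 175], [50, 130, 200]]
y = [0, 1, 1, 0, 0, 0]
features = ["age", "blood_pressure", "cholesterol"]
importances = {f: split_importance([row[i] for row in X], y)
               for i, f in enumerate(features)}
ranking = sorted(features, key=importances.get, reverse=True)
```

A full random-trees model averages such impurity reductions over many randomized trees, but the per-feature ranking principle is the same.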
This study investigates the performance of two systems: personalized ventilation (PV) and ductless personalized ventilation (DPV). Even though the literature indicates a compelling performance of PV, it is not often used in practice due to its impracticality. Therefore, the present study assesses the possibility of replacing the inflexible PV with DPV in office rooms equipped with displacement ventilation (DV) in the summer season. Numerical simulations were utilized to evaluate the inhaled concentration of pollutants when PV and DPV are used. The systems were compared in a simulated office with two occupants: a susceptible occupant and a source occupant. Three types of pollution were simulated: exhaled infectious air, dermally emitted contamination, and room contamination from a passive source. Results indicated that PV improved the inhaled air quality regardless of the location of the pollution source; a higher PV supply flow rate positively impacted the inhaled air quality. Contrarily, the performance of DPV was highly sensitive to the source location and the personalized flow rate. A higher DPV flow rate tends to decrease the inhaled air quality due to increased mixing of pollutants in the room. Moreover, both systems achieved better results when the personalized system of the source occupant was switched off.
This thesis explores how cultural heritage plays a role in the development of urban identity by engaging both actively and passively with memory, i.e. remembering and forgetting. I argue that architectural heritage is a medium whose presentation is shaped by specific cultural and social decisions, and that it reflects the values and interests of its period. Through the process of remembering and forgetting, the meanings between inhabitants and objects in the urban environment are practiced and created.
To enable research through narrative observation, cultural tourism management is chosen as the main research object, as it reflects the changing interaction between architectural heritage and urban identity. In identifying the role of heritage management, the definition of social resilience and the prospects of cultural heritage as a means of social resilience are addressed. The case region of the research is East Germany; accordingly, the study examines the distinct approaches and objectives of heritage management under the different political systems across the German reunification process.
The framework is based on various theoretical paradigms to investigate the broad research questions: 1) What is the role of historic urban quarters in the revitalisation of East German towns? 2) How was the transition handled by cultural heritage management? 3) How did policy affect residents' lives?
The case study is applied at the macro level (city level: Gotha and Eisenach) and the micro level (object level: specific heritage sites) to analyse the performance of selective remembering and the making of tourist destinations by giving significance to specific heritage. By means of site observations, archival research, qualitative interviews, photographs, and discourse analysis of printed tourism materials, the study demonstrates that certain sites and characteristics of the city enable the creation and focusing of messages, which aids social resilience.
Combining theory and empirical studies, this thesis attempts to widen the academic discussion regarding the practice of remembering and forgetting driven by cultural heritage. The thesis argues for cultural heritage tourism as an element of social resilience, and one that embraces the historic and cultural identity of the inhabitants.
Personalized ventilation (PV) can improve thermal comfort as well as the quality of inhaled air by supplying fresh air separately to each workstation. This contribution investigates the effect of PV on the thermal comfort of occupants under summer boundary conditions. Two approaches for evaluating the cooling effect of PV were examined, based on (1) the equivalent temperature and (2) thermal sensation. The evaluation draws on data measured in a climate chamber as well as numerically simulated data. Before the simulations were carried out, the numerical model was first validated against the measured data. The results show that the approach based on thermal sensation may be more appropriate for evaluating the cooling effect of PV, as it better accounts for the complex physiological factors involved.
Der perfekte Bankraub
(2020)
Financial independence, being able to survive, being a superhero or a pop star, the adrenaline rush, lifelong complicity and eternal romantic attachment, conspiracy, victorious outwitting, techniques of deception: the fantasies that cluster around the idea of the bank robbery are as varied as the people who have them. A bank heist is probably the dream of many in view of the increasing precarization of personal economies and, at the same time or precisely because of this, a spectacularized, almost pop-cultural event that is well documented in the media and illustrated and spun further in countless films.
This expert report deals with the innovation landscape of German journalism. Innovation is understood as an essential prerequisite for developing viable solutions to the current problems of journalism. At the heart of the report is the question of how innovation policy in journalism, i.e. public-sector support for innovation, can be designed to function effectively. The report follows the innovation-systems approach, which identifies the problems, barriers, and obstacles that fundamentally stand in the way of the innovative capacity of journalism in Germany.
In this research, an attempt was made to reduce the dimension of wavelet-ANFIS/ANN (artificial neural network / adaptive neuro-fuzzy inference system) models toward reliable forecasts and reduced computational cost. To this end, principal component analysis (PCA) was performed on the input time series, decomposed by a discrete wavelet transform, to feed the ANN/ANFIS models. The models were applied to forecasting dissolved oxygen (DO) in rivers, an important variable affecting aquatic life and water quality. The current values of DO, water surface temperature, salinity, and turbidity were considered as input variables to forecast DO three time steps ahead. The results of the study revealed that PCA can be employed as a powerful tool for dimension reduction of input variables and also for detecting their inter-correlation. Results of the PCA-wavelet-ANN models are compared with those obtained from wavelet-ANN models, with the former having the advantage of less computational time than the latter. For ANFIS models, PCA is all the more beneficial, as it prevents wavelet-ANFIS models from creating too many rules, which deteriorates their efficiency. Moreover, manipulating the wavelet-ANFIS models using PCA leads to a significant decrease in computational time. Finally, it was found that the PCA-wavelet-ANN/ANFIS models can provide reliable forecasts of dissolved oxygen as an important water quality indicator in rivers.
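The dimension-reduction step can be sketched with a bare-bones PCA via eigendecomposition of the covariance matrix. The synthetic inputs and the 95% variance threshold below are illustrative choices, not the study's actual wavelet sub-bands or settings:

```python
import numpy as np

def pca_reduce(X, var_ratio=0.95):
    """Project X onto the fewest principal components explaining var_ratio of the variance."""
    Xc = X - X.mean(axis=0)
    eigvals, eigvecs = np.linalg.eigh(np.cov(Xc, rowvar=False))
    order = np.argsort(eigvals)[::-1]                    # sort descending
    eigvals, eigvecs = eigvals[order], eigvecs[:, order]
    explained = np.cumsum(eigvals) / eigvals.sum()
    k = int(np.searchsorted(explained, var_ratio)) + 1   # smallest k reaching the ratio
    return Xc @ eigvecs[:, :k], k

rng = np.random.default_rng(0)
# Eight correlated inputs (think wavelet sub-bands) driven by two latent factors.
latent = rng.normal(size=(200, 2))
loadings = np.array([[1.0, 1.0, 1.0, 1.0, 0.0, 0.0, 0.0, 0.0],
                     [0.0, 0.0, 0.0, 0.0, 1.0, 1.0, 1.0, 1.0]])
X = latent @ loadings + 0.01 * rng.normal(size=(200, 8))
Z, k = pca_reduce(X)
```

With two latent drivers and near-negligible noise, PCA collapses the eight inputs to two components, which is exactly the compression exploited before feeding the ANN/ANFIS models.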
Discrete function theory in the higher-dimensional setting has been in active development for many years. However, available results focus on studying the discrete setting for such canonical domains as the half-space, while the case of bounded domains has generally remained unconsidered. Therefore, this paper presents the extension of the higher-dimensional function theory to the case of arbitrary bounded domains in Rn. Along the way, a discrete Stokes formula, a discrete Borel–Pompeiu formula, as well as discrete Hardy spaces for general bounded domains are constructed. Finally, several discrete Hilbert problems are considered.
Recent earthquakes have proven that several existing buildings, particularly in developing countries, are not safe from earthquake damage. A variety of statistical and machine-learning approaches have been proposed to identify vulnerable buildings for the prioritization of retrofitting. The present work aims to investigate earthquake susceptibility through the combination of six building performance variables that can be used to obtain an optimal prediction of the damage state of reinforced concrete buildings using an artificial neural network (ANN). In this regard, a multi-layer perceptron network is trained and optimized using a database of 484 damaged buildings from the Düzce earthquake in Turkey. The results demonstrate the feasibility and effectiveness of the selected ANN approach in classifying concrete structural damage, which can be used as a preliminary assessment technique to identify vulnerable buildings in disaster risk-management programs.
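The classifier can be illustrated with a minimal one-hidden-layer perceptron trained by gradient descent on synthetic two-class data. The architecture, data, and labels below are illustrative stand-ins, not the study's network or the Düzce database:

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic stand-in: 6 performance variables per building, binary damage label.
X = rng.normal(size=(120, 6))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float).reshape(-1, 1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One hidden tanh layer with 8 units, full-batch gradient descent.
W1 = rng.normal(scale=0.5, size=(6, 8)); b1 = np.zeros(8)
W2 = rng.normal(scale=0.5, size=(8, 1)); b2 = np.zeros(1)
lr = 0.5
losses = []
for _ in range(300):
    h = np.tanh(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2)
    losses.append(float(np.mean(-y * np.log(p) - (1 - y) * np.log(1 - p))))
    # Backpropagation for binary cross-entropy with sigmoid output.
    dz2 = (p - y) / len(X)
    dW2 = h.T @ dz2; db2 = dz2.sum(axis=0)
    dh = dz2 @ W2.T * (1 - h ** 2)       # tanh derivative
    dW1 = X.T @ dh; db1 = dh.sum(axis=0)
    W2 -= lr * dW2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1
accuracy = float(np.mean((p > 0.5) == (y > 0.5)))
```

The training loss falls monotonically on this separable toy set; real damage data would of course need train/test splits and hyperparameter tuning as in the study.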
Earthquakes are among the most devastating natural disasters, causing severe economic, environmental, and social destruction. Earthquake safety assessment and building hazard monitoring can contribute greatly to urban sustainability through identification of, and insight into, optimum materials and structures. While the vulnerability of structures mainly depends on the structural resistance, the safety assessment of buildings can be highly challenging. In this paper, we consider the Rapid Visual Screening (RVS) method, a qualitative procedure for estimating structural scores for buildings suitable for medium- to high-seismic cases. This paper presents an overview of the common RVS methods, i.e., FEMA P-154, IITK-GGSDMA, and EMPI. To examine their accuracy and validity, a practical comparison is performed between their assessments and the observed damage of reinforced concrete buildings from a street survey in the Bingöl region, Turkey, after the 1 May 2003 earthquake. The results demonstrate that RVS methods are a vital tool for preliminary damage estimation. Furthermore, the comparative analysis showed that FEMA P-154 produces an assessment that overestimates damage states and is not economically viable, while EMPI and IITK-GGSDMA provide more accurate and more practical estimates, respectively.
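The scoring logic shared by these RVS procedures can be sketched as a basic structural score adjusted by vulnerability modifiers and compared against a cutoff (FEMA P-154, for instance, commonly uses a cutoff score of 2.0). The basic score and modifier values below are invented for illustration, not taken from any of the three methods:

```python
def rvs_score(basic_score, modifiers, cutoff=2.0):
    """Return the final structural score and whether detailed evaluation is needed."""
    final = basic_score + sum(modifiers)   # modifiers are typically negative penalties
    return final, final < cutoff

# Hypothetical mid-rise concrete frame: soft-story and poor-soil penalties.
score, needs_evaluation = rvs_score(3.0, [-1.0, -0.6])
```

A building whose final score falls below the cutoff is flagged for a detailed engineering evaluation, which is how RVS prioritizes retrofitting candidates.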
Conventional superplasticizers based on polycarboxylate ether (PCE) show an intolerance to clay minerals due to intercalation of their polyethylene glycol (PEG) side chains into the interlayers of the clay mineral. An intolerance to very basic media is also known. This makes PCE an unsuitable choice as a superplasticizer for geopolymers. Bio-based superplasticizers derived from starch showed comparable effects to PCE in a cementitious system. The aim of the present study was to determine if starch superplasticizers (SSPs) could be a suitable additive for geopolymers by carrying out basic investigations with respect to slump, hardening, compressive and flexural strength, shrinkage, and porosity. Four SSPs were synthesized, differing in charge polarity and specific charge density. Two conventional PCE superplasticizers, differing in terms of molecular structure, were also included in this study. The results revealed that SSPs improved the slump of a metakaolin-based geopolymer (MK-geopolymer) mortar while the PCE investigated showed no improvement. The impact of superplasticizers on early hardening (up to 72 h) was negligible. Less linear shrinkage over the course of 56 days was seen for all samples in comparison with the reference. Compressive strengths of SSP specimens tested after 7 and 28 days of curing were comparable to the reference, while PCE led to a decline. The SSPs had a small impact on porosity with a shift to the formation of more gel pores while PCE caused an increase in porosity. Throughout this research, SSPs were identified as promising superplasticizers for MK-geopolymer mortar and concrete.
Die derzeitige Wohnungskrise hat eine sozial-ökologische Kernproblematik. Dabei ist die sozial ungerechte und ökologisch problematische Verteilung von Wohnfläche meist unsichtbar und wird weder in wissenschaftlichen noch in aktivistischen Kontexten ausreichend als Frage der Flächengerechtigkeit problematisiert. Denn Wohnraum und Fläche in einer Stadt sind keine endlos verfügbaren Güter: Wenn einige Menschen auf viel Raum leben, bleibt für andere Menschen weniger Fläche übrig. Und die Menschen, die am wenigstens für eine Verknappung von Wohnraum verantwortlich sind, leiden am meisten darunter. Dieser Artikel arbeitet zunächst den Begriff der Wohnflächengerechtigkeit heraus, wobei auf die Ungleichverteilung von Wohnfläche und deren gesellschaftliche Implikationen unter derzeitigen Wohnungsverteilungsmechanismen Bezug genommen wird. Anschließend wird der Verbrauch von (Wohn-)Fläche aus ökologischer Perspektive problematisiert. Der Artikel diskutiert scheinbare und transformationsorientierte Lösungs- und Handlungsansätze. Abschließend fordert er in der kritischen Stadtforschung und in aktivistischen Kontexten eine stärkere Debatte um eine Wohnflächengerechtigkeit, deren Verwirklichung gleichermaßen eine soziale wie ökologische Dimension hat.
Energy-Efficient Method for Wireless Sensor Networks Low-Power Radio Operation in Internet of Things
(2020)
Radio operation in wireless sensor networks (WSN) in Internet of Things (IoT) applications is the most common source of power consumption. Consequently, recognizing and controlling the factors affecting radio operation can be valuable for managing node power consumption. Among the essential factors affecting radio operation, the time spent checking the radio is of utmost importance for monitoring power consumption, as it can lead to false wake-ups or idle listening in radio duty cycles. ContikiMAC is a low-power radio duty-cycling protocol in Contiki OS that, in wake-up mode, periodically performs a clear channel assessment (CCA) to check the radio status. This paper presents a detailed analysis of the radio wake-up time factors of ContikiMAC. Furthermore, we propose a lightweight CCA (LW-CCA) as an extension to ContikiMAC that reduces the radio duty cycles spent in false wake-ups and idle listening through a dynamic received signal strength indicator (RSSI) status check time. Simulation results in the Cooja simulator show that LW-CCA reduces node energy consumption by about 8% while maintaining up to 99% of the packet delivery rate (PDR).
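The energy argument can be illustrated with a toy duty-cycle model in which a shorter, dynamically adapted check time replaces a fixed one. All timings, currents, and the wake-up rate below are invented for illustration, not measured ContikiMAC or LW-CCA parameters:

```python
def radio_energy(wakeups_per_s, check_time_s, rx_current_a, voltage_v, hours=1.0):
    """Energy (J) spent on periodic channel checks over the given duration."""
    on_time_s = wakeups_per_s * check_time_s * 3600 * hours
    return on_time_s * rx_current_a * voltage_v

# Fixed CCA: radio on 2 ms per check; dynamic check: 1.6 ms on average,
# because the RSSI sampling window adapts to channel conditions.
fixed = radio_energy(8, 0.0020, 0.020, 3.0)
dynamic = radio_energy(8, 0.0016, 0.020, 3.0)
saving = (fixed - dynamic) / fixed
```

Even small per-check savings compound over thousands of daily wake-ups, which is why trimming the CCA window is an effective lever on node lifetime.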
Shorter product life cycles and the faster market penetration of product technologies call for adaptive, high-performance production plants. Adaptivity allows the production plant to be adjusted to new products, while the plant's performance ensures that sufficient products can be manufactured in a short time and at low cost. Adaptivity can be achieved by modularizing the production plant. Today, however, every adaptation requires manual effort, e.g. to adapt proprietary signals or higher-level functions. This reduces the performance of the plant.
The goal of this work is to ensure interoperability with respect to the use of information in modular production plants. To this end, information is described by semantic models. This enables uniform access to information, and higher-level functions gain access to all information of the production modules, regardless of a module's type, manufacturer, and age. The manual effort involved in adapting the modular production system is thus eliminated, increasing the plant's performance and reducing downtimes.
After determining the requirements for a modeling formalism, potential formalisms were compared against these requirements. OWL DL emerged as a suitable formalism and was used to create the semantic model in this work. A semantic model was created exemplarily for the three use cases of interaction, orchestration, and diagnosis. The generality of the model was assessed by comparing the modeling elements of the different use cases. It was shown that a general model for technical use cases is achievable and requires only a few hundred terms.
To evaluate the created models, a versatile production system of the SmartFactoryOWL was used, on which the use cases were implemented. For this purpose, a runtime environment was created that combines the semantic models of the individual modules into an overall model, transfers data from the plant into the model, and provides an interface for the services. The services realize higher-level functions and use the information of the semantic model. In all three use cases, the semantic models were merged correctly, and with the information they contain, the task of each use case could be solved without additional manual effort.
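Uniform, vendor-independent information access through a semantic model can be sketched with a tiny triple store queried by pattern matching. The module names, properties, and values below are invented examples, not the SmartFactoryOWL vocabulary or an OWL DL implementation:

```python
# Triples (subject, predicate, object) describing two production modules
# from different vendors through one shared vocabulary.
triples = {
    ("ModuleA", "hasType", "FillingStation"),
    ("ModuleA", "hasVendor", "VendorX"),
    ("ModuleA", "providesSignal", "FillLevel"),
    ("ModuleB", "hasType", "CappingStation"),
    ("ModuleB", "hasVendor", "VendorY"),
    ("ModuleB", "providesSignal", "TorqueValue"),
}

def query(s=None, p=None, o=None):
    """Return all triples matching the pattern; None acts as a wildcard."""
    return [(ts, tp, to) for ts, tp, to in triples
            if (s is None or ts == s)
            and (p is None or tp == p)
            and (o is None or to == o)]

# A higher-level diagnosis service asks for all signals, vendor-independently.
signals = sorted(o for _, _, o in query(p="providesSignal"))
```

Because the service queries the shared vocabulary rather than proprietary signal names, adding a module from a new vendor requires no change to the service, which is the interoperability gain the thesis targets.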
As part of this dissertation, a toolbox model for transdisciplinary water resources management was developed. The model provides the methodological framework for managing water resources sustainably and transdisciplinarily.
The concept of sustainability and a concretization of the sustainable management of global water resources appear unmanageable and suggest the demand for a new universal formula. The global importance of water resources, the region-specific characteristics of the natural water balance and of anthropogenic use, the time scale, and the contextualization across all affected and neighboring disciplines point to the complexity of the topic. A systematization of the water resources planning process becomes necessary, through which a holistic approach with a strategy development for region-specific key problems can be pursued. The aim of this work is to develop a strategy for systematization according to these demands and to provide a toolbox model as a planning instrument for transdisciplinary water resources management.
The toolbox model provides the conceptual framework for managing water resources with the application of transdisciplinary research methods. Key challenges in applying the transdisciplinary method are the implementation of different scale ranges, dealing with the complexity of data, maintaining transparency and objectivity, and enabling a planning process that is transferable to other regions.
The theoretical foundations of scientific research on sustainability have their origins in the biological and geographical disciplines. The interlocking of natural-spatial relationships and the influence of anthropogenic use and technical innovation on the natural balance are at the core of causally integrated thinking and understanding. The approach of integrated water resources management (IWRM) incorporates economic and socio-economic goals into the planning process for ecologically sustainable water management. The Water Framework Directive (EU-WRRL) is a directive oriented toward water-body ecology that provides for the integration of various stakeholders into the planning process. The concept of novel sanitation systems is based on material flows between competing fields of action, such as waste management, resource management, and agriculture.
The integrated approaches lack an overarching common target strategy, a so-called phase zero. This phase zero, the learning of all relevant, competing, and harmonizing fields of action within a planning horizon, is made possible by a transdisciplinary perspective. While the integral perspective focuses on discipline-oriented cooperation, the transdisciplinary perspective demands problem-oriented cooperation between stakeholders (Werlen 2015). The existing concepts and guidelines for the sustainable management of water resources are established and evaluated; according to the literature, further development toward the perspective of transdisciplinarity is required. The toolbox model for integral water resources management is a planning tool consisting of instruments for applying scientific methods, and their compilation implements the method of transdisciplinary research. The tool for establishing the relevant fields of action comprises the characterization of a study area and planning framework and the causal linking of the management concept with competing as well as mutually supporting stakeholders. With the tool of contextualization and indicator compilation, a stepwise, scale-independent assessment of the environmental state is carried out for target prioritization. The toolbox model thereby addresses the problem of complexity and data availability. Using the ABC method, the assessment variables are structured across different scales and data resources (A = initial detection, B = indicator values, C = model/index). The ABC method enables planning even on an uncertain and incomplete data basis, can be extended at any time, and thus offers operative knowledge generation during the design process.
For the assessment and prioritization tool, the algorithm of composite programming is applied. This multi-objective planning method meets the requirements of permanent extensibility and of transparent, objective decision-making, and the complexity of transdisciplinary water resources management can be systematized by it. The main result of this work is the successful development and application of the toolbox model for transdisciplinary water resources management in the study area of the city of Darkhan in Mongolia, whose particular hydrological and structural situation makes the relevance of a sustainable management concept clear. Within the cross-sectional module of the MoMo project, a data basis suitable for the toolbox model was compiled. Planning-relevant fields of action were developed in a workshop with various stakeholders. As a result, the systematics of a target tree with main targets and subordinate sub-targets was established as the basis for prioritization according to the holistic demands of transdisciplinary research. Indicators were developed to measure the extent to which sub-targets have been achieved or action is required; the indicator compilation was carried out exemplarily for the field of urban water management across all scales of the ABC system. The comprehensive data basis generated in the BMBF MoMo project allowed the toolbox model to be applied and evaluated with different quantitative and qualitative data input. Various combinations of A (initial detection), B (indicator values), and C (model/index) as the basis for prioritization with composite programming enabled the execution and assessment of the transdisciplinary planning tool. The rankings of sub-targets determined with different assessment variants showed similar tendencies. This indicates that, for future applications of the toolbox model, operative knowledge generation, i.e. the stepwise addition of newly determined, more reliable data, works. Difficult data availability or a scientific analysis still in progress should not be obstacles to a stepwise, extensible target prioritization and action planning. Despite the complexity of the transdisciplinary approach, applying the toolbox model enables efficient, goal-oriented prioritization of action. Efficiency is achieved through resource-saving, flexible, goal-focused data acquisition; time and costs in the planning process can be saved. The resulting prioritization of recommended actions is adapted individually to the character of the study area, which is considered promising with regard to its effect.
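Composite programming aggregates normalized indicators hierarchically with weighted Lp norms. The indicator values, weights, and exponents below are illustrative inventions, not those of the Darkhan case study:

```python
def composite(values, weights, p):
    """Weighted Lp composite of normalized indicators (0 = best, 1 = worst)."""
    assert abs(sum(weights) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(w * v ** p for w, v in zip(weights, values)) ** (1.0 / p)

# First level: aggregate indicators within two hypothetical fields of action.
water_quality = composite([0.2, 0.6], [0.5, 0.5], p=2)
sanitation = composite([0.8, 0.4], [0.7, 0.3], p=2)
# Second level: combine the fields into one priority score per sub-target.
priority = composite([water_quality, sanitation], [0.4, 0.6], p=2)
```

Ranking sub-targets by such priority scores is extensible by design: a newly available indicator simply enters the first aggregation level with its weight, which matches the toolbox model's stepwise knowledge generation.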
Marine macroalgae possess promising properties and constituents for use as an energy source, as food, or as a raw material for pharmaceuticals. However, the fluctuating quantity and quality of macroalgae growing in their natural environment reduce their usability and make it difficult to access high-priced market segments. Moreover, expanding cultivation in marine and coastal aquaculture in Europe currently holds little promise, since promising areas are already designated for fishing or as recreational or nature conservation areas. In this work, a closed photobioreactor system for macroalgae cultivation is therefore developed, which provides comprehensive control of the abiotic cultivation parameters and effective treatment of the culture medium in order to enable location-independent algae production. To assess the overall concept of cultivation and utilization (material or energetic), the specific growth rates and methane formation potentials of the algae species Ulva intestinalis, Fucus vesiculosus, and Palmaria palmata are determined in practical experiments.
As a result, for the current state of development of the cultivation plant, a positive balance is obtained for the material utilization of Ulva intestinalis and a negative balance for the energetic utilization of all algae species investigated. Even in an optimal scenario, in which the stocking densities and growth rates of the algae are increased, the energy balance remains negative. However, the financial revenue from selling the algae as a product amounts to €460,869 per year for Ulva intestinalis, €4,010 for Fucus vesiculosus, and €16,913 for Palmaria palmata. Consequently, material utilization of the cultivated green alga Ulva intestinalis in particular should be pursued, and the productivity of the cultivation plant should be increased in line with the optimal scenario.
Matthias Bernt and Andrej Holm rightly point out that research on East German cities is needed as a conceptually independent field, one that centers the specific spatialization of the profound societal transformation process after 1990. They regard the field of housing in particular as productive for gaining insight into the structure and effects of this process. However, they remain vague about how such housing research specifically directed at East Germany should be conceived, and about how the particularities and parallels of East German developments should be related to the transformations of housing and urban development policy in West Germany as well as internationally.
The longitudinal dispersion coefficient (LDC) plays an important role in modeling the transport of pollutants and sediment in natural rivers. As a result of transportation processes, the concentration of pollutants changes along the river. Various studies have been conducted to provide simple equations for estimating LDC. In this study, machine learning methods, namely support vector regression, Gaussian process regression, M5 model tree (M5P) and random forest, and multiple linear regression were examined in predicting the LDC in natural streams. Data sets from 60 rivers around the world with different hydraulic and geometric features were gathered to develop models for LDC estimation. Statistical criteria, including correlation coefficient (CC), root mean squared error (RMSE) and mean absolute error (MAE), were used to scrutinize the models. The LDC values estimated by these models were compared with the corresponding results of common empirical models. The Taylor chart was used to evaluate the models and the results showed that among the machine learning models, M5P had superior performance, with CC of 0.823, RMSE of 454.9 and MAE of 380.9. The model of Sahay and Dutta, with CC of 0.795, RMSE of 460.7 and MAE of 306.1, gave more precise results than the other empirical models. The main advantage of M5P models is their ability to provide practical formulae. In conclusion, the results proved that the developed M5P model with simple formulations was superior to other machine learning models and empirical models; therefore, it can be used as a proper tool for estimating the LDC in rivers.
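The three evaluation criteria used above can be written out explicitly. The observed and predicted values below are toy numbers, not LDC data from the study:

```python
def metrics(obs, pred):
    """Correlation coefficient, RMSE, and MAE of predictions against observations."""
    n = len(obs)
    mo, mp = sum(obs) / n, sum(pred) / n
    cov = sum((o - mo) * (p - mp) for o, p in zip(obs, pred))
    so = sum((o - mo) ** 2 for o in obs) ** 0.5
    sp = sum((p - mp) ** 2 for p in pred) ** 0.5
    cc = cov / (so * sp)                                        # Pearson correlation
    rmse = (sum((o - p) ** 2 for o, p in zip(obs, pred)) / n) ** 0.5
    mae = sum(abs(o - p) for o, p in zip(obs, pred)) / n
    return cc, rmse, mae

cc, rmse, mae = metrics([10.0, 20.0, 30.0, 40.0], [12.0, 18.0, 33.0, 41.0])
```

Note that RMSE penalizes large errors more heavily than MAE, while CC measures linear agreement independently of scale; reporting all three, as the study does, guards against a model looking good on only one of them.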
Piping erosion is one form of water erosion that leads to significant changes in the landscape and environmental degradation. In the present study, we evaluated piping erosion modeling in the Zarandieh watershed of Markazi province in Iran based on random forest (RF), support vector machine (SVM), and Bayesian generalized linear model (Bayesian GLM) machine learning algorithms. For this goal, due to the importance of various geo-environmental and soil properties in the evolution and creation of piping erosion, 18 variables were considered for modeling the piping erosion susceptibility in the Zarandieh watershed. A total of 152 points of piping erosion were recognized in the study area and divided into training (70%) and validation (30%) sets for modeling. The area under the curve (AUC) was used to assess the efficiency of the RF, SVM, and Bayesian GLM models. The piping erosion susceptibility results indicated that all three models had high efficiency in the testing step, with AUC values of 0.90 for RF, 0.88 for SVM, and 0.87 for Bayesian GLM. Altitude, pH, and bulk density were the variables with the greatest influence on piping erosion susceptibility in the Zarandieh watershed. This result indicates that geo-environmental and soil chemical variables are accountable for the expansion of piping erosion in the Zarandieh watershed.
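The AUC used to rank the three classifiers has a simple probabilistic reading: the probability that a randomly chosen mapped erosion point receives a higher susceptibility score than a randomly chosen non-erosion point. A small sketch of that statistic (the scores below are toy values, not the study's model outputs):

```python
def auc(pos_scores, neg_scores):
    # AUC = P(score of a positive > score of a negative); ties count 0.5
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))

# toy susceptibility scores for mapped piping-erosion points (positives)
# and sampled non-erosion points (negatives)
pos = [0.9, 0.8, 0.75, 0.6]
neg = [0.7, 0.4, 0.3, 0.2]
print(auc(pos, neg))
```

A value of 1.0 would mean the model ranks every erosion point above every non-erosion point; 0.5 is chance level, so AUCs of 0.87 to 0.90 indicate strong separation.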
In this study, the machine learning methods of artificial neural networks (ANNs), least squares support vector machines (LSSVM), and neuro-fuzzy systems are used to advance prediction models for the thermal performance of a photovoltaic-thermal solar collector (PV/T). In the proposed models, the inlet temperature, flow rate, heat, solar radiation, and sun heat have been considered as the input variables. The data set has been extracted through experimental measurements from a novel solar collector system. Different analyses are performed to examine the credibility of the introduced models and evaluate their performance. The proposed LSSVM model outperformed the ANFIS and ANN models. The LSSVM model is reported to be suitable when laboratory measurements are costly and time-consuming, or when obtaining such values requires sophisticated interpretation.
Experimente lernen, Techniken tauschen
Ein spekulatives Handbuch
This speculative handbook offers a wide range of techniques for radical learning and teaching. It comprises concrete instructions, experiences, and theoretical reflections. The texts contribute to conceiving a form of mediation that (re)introduces shared experimentation.
Learning and unlearning take place in the seminar room, in workshops, at festivals, in corridors, parks, and the city. Texts and instructions cover, among other things: film essays, collages, bank robberies, the University of the Dead, wild writing, conceptual speed dating, neurodiverse learning, format thinking, the Theater of Care, the writing lab, and the body strike.
Calculating the solubility of hydrocarbon components of natural gases is known as one of the important issues for operational work in petroleum and chemical engineering. In this work, a novel solubility estimation tool has been proposed for hydrocarbon gases (including methane, ethane, propane, and butane) in aqueous electrolyte solutions based on the extreme learning machine (ELM) algorithm. Comparing the ELM outputs with a comprehensive real databank of 1175 solubility points yielded R-squared values of 0.985 and 0.987 for the training and testing phases, respectively. Furthermore, the visual comparison of estimated and actual hydrocarbon solubility confirmed the ability of the proposed solubility model. Additionally, a sensitivity analysis was performed on the input variables of the model to identify their impact on hydrocarbon solubility. Such a comprehensive and reliable study can help engineers and scientists to successfully determine important thermodynamic properties, which are key factors in optimizing and designing different industrial units such as refineries and petrochemical plants.
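The reported R-squared values (0.985 for training, 0.987 for testing) quantify how much of the variance in the measured solubilities the ELM reproduces. The metric itself is easy to state; a minimal sketch (the ELM model and the 1175-point databank are not reproduced here, and the numbers below are invented):

```python
def r_squared(actual, predicted):
    # coefficient of determination: 1 - SS_res / SS_tot
    mean = sum(actual) / len(actual)
    ss_res = sum((a - p) ** 2 for a, p in zip(actual, predicted))
    ss_tot = sum((a - mean) ** 2 for a in actual)
    return 1.0 - ss_res / ss_tot

# toy methane solubilities (mole fraction) vs. hypothetical model estimates
actual = [1.0e-3, 1.5e-3, 2.2e-3, 3.0e-3]
predicted = [1.1e-3, 1.4e-3, 2.3e-3, 2.9e-3]
print(r_squared(actual, predicted))
```

An R-squared near 1 on the held-out testing points, as reported, is what justifies using the fitted model as an estimation tool rather than the raw databank.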
Starting from the manifold appropriation of peasant clothing by the state during socialism in Romania, this thesis interrogates the 'being-made' of folk costumes along the discourses operating in the period under study, such as the process of modernization or the emphasis on national values. The artistic research relies on simulacra (Roland Barthes). The aim was to appropriate traditional formats of preparing and disseminating knowledge, including strategies that operate on the level of images and language, in order to enable a re-reading both of 'folk costume' under socialism and of its counterparts after 1989.
This paper reports the formation and structure of fast-setting geopolymers activated by three sodium silicate solutions with different moduli (1.6, 2.0 and 2.4) and a berlinite-type aluminum orthophosphate. By varying the concentration of the aluminum orthophosphate, different Si/Al ratios were established (6, 3 and 2). The reaction kinetics of the binders were determined by isothermal calorimetric measurements at 20 °C. X-ray diffraction analysis as well as nuclear magnetic resonance (NMR) measurements were performed on the binders to determine differences in structure upon varying the alkalinity of the sodium silicate solutions and the Si/Al ratio. The calorimetric results indicated that the higher the alkalinity of the sodium silicate solution, the higher the solubility and degree of conversion of the aluminum orthophosphate. The results of X-ray diffraction and Rietveld analysis, as well as the NMR measurements, confirmed the assumption from the calorimetric experiments that the aluminum orthophosphate first dissolved and then polycondensed into an amorphous aluminosilicate network. The different amounts of amorphous phases formed as a function of the alkalinity of the sodium silicate solution indicate that tetrahydroxoaluminate species were formed during the dissolution of the aluminum orthophosphate, which reduce the pH value. This led to no further dissolution of the aluminum orthophosphate, which remained unreacted.
The subject of this thesis is based on a phenomenological observation of international photographic positions of the self-portrait which, since the 1960s, have exhibited related content, similar pictorial-aesthetic features, and similar processes of photographic production. What the artists discussed in this thesis have in common is that their becoming-image takes place on their own bodies and is tied to a course of action characterized by movement. Each visual language displays an ephemeral aesthetic in which the notion of letting go, in both its physical and philosophical sense, plays a role. The artistic positions that are the subject of this Ph.D. thesis comprise works by Bas Jan Ader (1942 – 1975), Francesca Woodman (1958 – 1981), Bernhard (1937 – 2011) and Anna Blume (*1937), Antoine d'Agata (*1961), and Tom Pope (*1986).
For this paper, the problem of energy/voltage management in photovoltaic (PV)/battery systems was studied, and a new fractional-order control system based on type-3 (T3) fuzzy logic systems (FLSs) was developed. New fractional-order learning rules are derived for the tuning of T3-FLSs such that stability is ensured. In addition, using fractional-order calculus, robustness was studied with respect to dynamic uncertainties, perturbations of irradiation and temperature, and abrupt faults in output loads, and new compensators were subsequently proposed. In several examinations under difficult operating conditions, such as random temperature, variable irradiation, and abrupt changes in output load, the capability of the proposed controller was verified. In addition, in comparison with other methods, such as the proportional-integral-derivative (PID) controller, sliding mode controller (SMC), passivity-based control (PBC), and linear quadratic regulator (LQR), the superiority of the suggested method was demonstrated.
While Public-Private Partnership (PPP) is widely adopted across various sectors, the question arises why it is so meagrely utilised in the housing sector. This paper therefore gauges the perspective of stakeholders in the building industry towards the application of PPP in various building sectors, including housing. It assesses the performance reliability of PPP for housing by drawing possible take-aways from other sectors. Key stakeholders in the industry bear a high responsibility for informed understanding and decision-making. To this end, a two-tier investigation comprising surveys and expert interviews was conducted with several stakeholders in the PPP industry in Europe, involving the public sector, the private sector, consultants, as well as other community/user representatives.
The survey results demonstrated the success rate of PPPs, the major factors important for PPPs, such as profitability or end-user acceptability, the prevalent practices and trends in the PPP world, and majority support for the suitability of PPP for housing. The interviews added more detailed dimensions to the understanding of the PPP industry and its functioning, enabling the formation of a comprehensive outlook. The results present the perspectives, approaches, and experiences of stakeholders with PPP practices, current trends and scenarios, and their take on PPP in housing. They should aid in understanding the challenges prevalent in the PPP approach as applied to housing, and enable policymakers and industry stakeholders to make provisions for higher uptake to accelerate housing provision.
In a systematic interpretation of Vilém Flusser's work, this thesis proposes to understand Flusser's approach as a media-philosophical one, insofar as it places the "how" of the media-philosophical question at its center. Media do not become an essential component of Flusser's philosophy only when he explicitly makes them the object of his investigations of contemporary culture and society or of historical retrospectives; thinking always takes place in media or medial practices, and is not merely (co-)shaped by them: without media there would be no thinking, and conversely, philosophy changes with each new medium. Starting from concepts, or rather figures of thought, that address not only the "what" of the topic under discussion but also the "how" of reflection itself, the "upheaval in the structure of thinking" is understood both as a description of media upheavals, with the leap into the universe of computation as its vanishing point, and as the enactment of the current change in the "method of thinking". Flusser's attempts at a reflection that is no longer structured by the medium of writing, but gives validity both to old media such as the image (that is, practices of depicting, representing, imagining, etc.) and to new media (computation), lead to a contradictory diagnosis of the new universe of computation (in other words, of technical images): on the one hand, a cybernetically inspired vision of freely modelable reality(ies); on the other, the dystopia of a world in which apparatuses dominate thinking, perception, and action. The thesis shows how Flusser arrives at this aporia of media reflection, which remains virulent far beyond Flusser's work, and how, starting from his figure of the gesture, it could be resolved in the sense of a performative media reflection.
In recent years, substantial attention has been devoted to thermoelastic multifield problems and their numerical analysis. Thermoelasticity is one of the important categories of multifield problems which deals with the effect of mechanical and thermal disturbances on an elastic body. In other words, thermoelasticity encompasses the phenomena that describe the elastic and thermal behavior of solids and their interactions under thermo-mechanical loadings. Since providing an analytical solution for general coupled thermoelasticity problems is mathematically complicated, the development of alternative numerical solution techniques seems essential.
Due to the nature of numerical analysis methods, presence of error in results is inevitable, therefore in any numerical simulation, the main concern is the accuracy of the approximation. There are different error estimation (EE) methods to assess the overall quality of numerical approximation. In many real-life numerical simulations, not only the overall error, but also the local error or error in a particular quantity of interest is of main interest. The error estimation techniques which are developed to evaluate the error in the quantity of interest are known as “goal-oriented” error estimation (GOEE) methods.
This project, for the first time, investigates the classical a posteriori error estimation and goal-oriented a posteriori error estimation in 2D/3D thermoelasticity problems. Generally, the a posteriori error estimation techniques can be categorized into two major branches of recovery-based and residual-based error estimators. In this research, application of both recovery- and residual-based error estimators in thermoelasticity are studied. Moreover, in order to reduce the error in the quantity of interest efficiently and optimally in 2D and 3D thermoelastic problems, goal-oriented adaptive mesh refinement is performed.
As the first application category, error estimation in classical thermoelasticity (CTE) is investigated. In the first step, an rh-adaptive thermo-mechanical formulation based on goal-oriented error estimation is proposed. The developed goal-oriented error estimation relies on different stress recovery techniques, i.e., the superconvergent patch recovery (SPR), L2-projection patch recovery (L2-PR), and weighted superconvergent patch recovery (WSPR). Moreover, a new adaptive refinement strategy (ARS) is presented that minimizes the error in a quantity of interest and refines the discretization such that the error is equally distributed in the refined mesh. The method is validated by numerous numerical examples for which an analytical or reference solution is available.
After investigating error estimation in classical thermoelasticity and evaluating the quality of the presented error estimators, we extended the application of the developed goal-oriented error estimation and the associated adaptive refinement technique to classical fully coupled dynamic thermoelasticity. In this part, we present an adaptive method for coupled dynamic thermoelasticity problems based on goal-oriented error estimation. We use dimensionless variables in the finite element formulation, and for the time integration we employ the acceleration-based Newmark-β method. The SPR, L2-PR, and WSPR recovery methods are exploited to estimate the error in the quantity of interest (QoI). By using adaptive refinement in space, the error in the quantity of interest is minimized: the discretization is refined such that the error is equally distributed in the refined mesh. We demonstrate the efficiency of this method by numerous numerical examples.
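The acceleration-based Newmark time integration mentioned above can be illustrated on a single-degree-of-freedom oscillator. The sketch below uses the average-acceleration parameters (β = 1/4, γ = 1/2); it is only a scalar analogue of the coupled thermoelastic finite element formulation, with invented system values, not the thesis's solver:

```python
import math

def newmark_sdof(m, c, k, f, u0, v0, dt, steps, beta=0.25, gamma=0.5):
    # Newmark time stepping for  m*a + c*v + k*u = f(t),
    # solving for the new displacement, then updating acceleration and velocity
    u, v = u0, v0
    a = (f(0.0) - c * v - k * u) / m  # consistent initial acceleration
    keff = k + gamma / (beta * dt) * c + m / (beta * dt * dt)
    for n in range(steps):
        t1 = (n + 1) * dt
        rhs = (f(t1)
               + m * (u / (beta * dt * dt) + v / (beta * dt)
                      + (0.5 / beta - 1.0) * a)
               + c * (gamma / (beta * dt) * u + (gamma / beta - 1.0) * v
                      + dt * (0.5 * gamma / beta - 1.0) * a))
        u_new = rhs / keff
        a_new = ((u_new - u) / (beta * dt * dt) - v / (beta * dt)
                 - (0.5 / beta - 1.0) * a)
        v_new = v + dt * ((1.0 - gamma) * a + gamma * a_new)
        u, v, a = u_new, v_new, a_new
    return u, v

# undamped free vibration: m=1, k=4 (omega=2), u(0)=1  ->  u(t) = cos(2t)
u, v = newmark_sdof(1.0, 0.0, 4.0, lambda t: 0.0, 1.0, 0.0, 1e-3, 3142)
print(u)  # close to cos(2 * 3.142), i.e. about 1.0 after one full period
```

The average-acceleration variant is unconditionally stable and introduces no algorithmic damping, which is why the computed amplitude stays at the exact value over a full period of the test oscillator.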
After studying the recovery-based error estimators, we investigated residual-based error estimation in thermoelasticity. In the last part of this research, we present a 3D adaptive method for thermoelastic problems based on goal-oriented error estimation, where the error is measured with respect to a pointwise quantity of interest. We developed a method for a posteriori error estimation and mesh adaptation based on the dual weighted residual (DWR) method, which relies on duality principles and involves the solution of an adjoint problem. Here, we consider the application of the derived estimator and mesh refinement to two-/three-dimensional (2D/3D) thermo-mechanical multifield problems. In this study, the goal is given by singular pointwise functions, such as the point value or the derivative of the point value at a specific point of interest (PoI). An adaptive algorithm has been adopted to refine the mesh so as to minimize the error in the quantity of interest.
The mesh adaptivity procedure based on the DWR method is performed by adaptive local h-refinement/coarsening with allowed hanging nodes. According to the proposed DWR method, the error contribution of each element is evaluated. In the refinement process, the contribution of each element to the goal error is considered as the mesh refinement criterion.
In this study, we substantiate the accuracy and performance of this method by several numerical examples with available analytical solutions. Here, 2D and 3D problems under thermo-mechanical loadings are considered as benchmark problems. To show how accurately the derived estimator captures the exact error in the evaluation of the pointwise quantity of interest, in all examples, considering the analytical solutions, the goal error effectivity index as a standard measure of the quality of an estimator is calculated. Moreover, in order to demonstrate the efficiency of the proposed method and show the optimal behavior of the employed refinement method, the results of different conventional error estimators and refinement techniques (e.g., global uniform refinement, Kelly, and weighted Kelly techniques) are used for comparison.
Hans Ruin: Being with the Dead—Burial, ancestral politics, and the roots of historical consciousness
(2020)
How can society be thought of as something in which the living and the dead interact throughout history? In Being with the Dead. Burial, Ancestral Politics, and the Roots of Historical Consciousness, Hans Ruin turns to the relationship between the living and the dead as well as to 'historical consciousness'. He draws on the expression 'being with the dead' (Mitsein mit dem Toten), an existential-ontological term that Martin Heidegger (1962: 282) coined rather en passant and which has so far received hardly any consideration. For Ruin, it now forms the starting point for his "expanded phenomenological social ontology" (p. XI). By illuminating history and historical consciousness through the category 'being with the dead,' he gains remarkable insights into the meaning of ancestrality. Concerning 'necropolitics,' Ruin shows that the political space includes the living as well as the dead, and how both constitute it. The foci of his considerations are the human sciences, above all sociology, anthropology, archaeology, philology, and history. Ruin's book aims at a "metacritical thanatology," which he elaborates as "an exploration of the social ontology of being with the dead mediated through critical analyses of the human-historical sciences themselves" (p. XII). In a total of seven chapters, he succeeds impressively in emphasizing the political and ethical importance of a scholarly gaze that cultivates the interaction of the living and the dead.
The book takes up the close interconnection of industrialization and urbanization that has decisively shaped Europe's cities and their urban planning history over the past 250 years or so. This also raises a wide range of questions and tasks for heritage conservation.
The habilitation thesis contributes to recognizing and preserving the values of historical industrial complexes for urban planning history and the cityscape. How can we survey industrial urban landscapes? How do we design adaptive reuses and conversions in a manner appropriate to their heritage status, and integrate aspects of sustainable urban development within a heritage management framework?
Im Workshop des Sinnlichen
(2020)
The following fictitious situation is meant to mark a problem that is discussed in this contribution and that, in the author's view, represents a phenomenon widespread at art schools. In a mentoring workshop, a group of students presents working material from their current project in order to discuss it with the group afterwards. The aim of this course format is to accompany the students in the development of their artistic practices and not to overdetermine these practices by craft-based criteria, but rather to follow their own logic, to pursue the aesthetic potentials inherent in them, and to create an awareness of the contexts and discourses in which they are situated. The students presenting their project today have brought a photo album into which, following the arrangement logic of holiday pictures, they have glued a series of analog photographs: dozens of snapshots of a distant island emerging behind the sea surface that occupies the foreground of the image.
Image Analysis Using Human Body Geometry and Size Proportion Science for Action Classification
(2020)
Gestures are one of the basic modes of human communication and are usually used to represent different actions. Automatic recognition of these actions forms the basis for solving more complex problems like human behavior analysis, video surveillance, event detection, and sign language recognition. Action recognition from images is a challenging task, as key information like temporal data, object trajectories, and optical flow is not available in still images. Measuring the size of different regions of the human body, i.e., step size, arm span, and the lengths of the arm, forearm, and hand, however, provides valuable clues for the identification of human actions. In this article, a framework for the classification of human actions is presented in which humans are detected and localized through faster region-convolutional neural networks, followed by morphological image processing techniques. Furthermore, geometric features are extracted from the human blob and incorporated into classification rules for six human actions, i.e., standing, walking, single-hand side wave, single-hand top wave, both-hands side wave, and both-hands top wave. The performance of the proposed technique has been evaluated using precision, recall, omission error, and commission error. The proposed technique has been comparatively analyzed in terms of overall accuracy with existing approaches, showing that it performs well in contrast to its counterparts.
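The idea of rule-based classification over geometric blob features can be sketched in a few lines. The two rules and all thresholds below are hypothetical placeholders for illustration, not the paper's actual rules; image coordinates are assumed, so a smaller y means higher in the frame:

```python
def classify_pose(box_w, box_h, left_arm_y, right_arm_y, head_y):
    # toy rules on a detected human bounding box and arm landmark heights
    # (aspect-ratio threshold of 2.0 and the wave rules are invented)
    aspect = box_h / box_w
    both_up = left_arm_y < head_y and right_arm_y < head_y
    one_up = (left_arm_y < head_y) != (right_arm_y < head_y)
    if both_up:
        return "both hands top wave"
    if one_up:
        return "single-hand top wave"
    # no raised arm: distinguish upright stance from a wider walking blob
    return "standing" if aspect > 2.0 else "walking"

print(classify_pose(40, 100, 10, 15, 30))  # both arms above the head
```

Real rules of this kind would be calibrated against measured body-proportion ratios rather than fixed pixel thresholds, which is precisely the role of the size-proportion features described in the abstract.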
Rapid Visual Screening (RVS) is a procedure that estimates structural scores for buildings and prioritizes their retrofit and upgrade requirements. Despite the speed and simplicity of RVS, many of the collected parameters are non-commensurable and include subjectivity due to visual observation. This can cause uncertainties in the evaluation, which motivates the use of a fuzzy-based method. This study proposes a novel RVS methodology based on the interval type-2 fuzzy logic system (IT2FLS) to set the priority of vulnerable buildings for detailed assessment while covering uncertainties and minimizing their effects during evaluation. The proposed method estimates the vulnerability of a building, in terms of a damage index, considering the number of stories, age of the building, plan irregularity, vertical irregularity, building quality, and peak ground velocity as inputs, with a single output variable. The applicability of the proposed method has been investigated using a post-earthquake damage database of reinforced concrete buildings from the Bingöl and Düzce earthquakes in Turkey.
The Theater of Care is conceived as a three-phase model:
Phase 1 is about establishing a laboratory situation in which prefigurative forms of living, thinking, and working can be tested in the present, in a discursive and playful manner, and transferred into constituent processes;
Phase 2 is about the collective conception, preparation, and realization of a public Aufstellung (staging) that concludes the laboratory and is based on the collective experiences and results of the laboratories. In this way, societal resonances can be generated and the constituent processes provisionally instituted.
Phase 3, finally, is about transferring all substantive insights and aesthetic experiments into a repeatable theater production / performance. The necessary preconditions and conditions for these formats, which we call Vorstellungen (performances), are not relevant to the question discussed here (indeed, they would exceed the available scope), so we leave this complex aside at this point.
Körperstreik
(2020)
This contribution links the discussion of the post-political city with the growing scholarly and activist engagement with the Anthropocene, a concept describing the ecological and socio-political implications of human action on the Earth's surface. Using three selected case studies, we explore how artistic positions problematize the specifically anthropogenic, that is, human-made, crisis of urban air pollution. In the context of a potential advance of post-politics, we discuss how the ambivalent discourse of the Anthropocene favors depoliticization on the one hand and, on the other, opens up new possibilities for repoliticizing global environmental challenges.
The main objective of this thesis was to realize a continuous coupling between the analytical and numerical solutions of boundary value problems with singularities. The interpolation-based coupling method achieves global C0 continuity. For this purpose, a special finite element (coupling element) is used that guarantees continuity of the solution both with the analytical element and with the ordinary CST elements.
Although the interpolation-based coupling method is applicable for an arbitrary number of nodes on the interface ΓAD, examination of the interpolation matrix and numerical simulations showed that it is ill-conditioned. To overcome the resulting numerical instabilities, an approximation-based coupling method was developed and investigated. The stability of this method was then assessed by examining the Gram matrix of the basis system used on the two intervals [−π,π] and [−2π,2π]. The Gram matrix on the interval [−2π,2π] exhibited a more favorable condition number as a function of the number of coupling nodes on the interface. To rule out the associated numerical instabilities, the basis system is orthogonalized on both intervals using the Gram-Schmidt orthogonalization procedure. On the interval [−2π,2π], the orthogonal basis system can be written with explicit formulas. The method of consistent sampling, which is frequently used in communications engineering, was employed to realize the approximation-based coupling. A limitation of this method is that the number of sampling basis functions must equal the number of reconstruction basis functions. As a consequence, the introduced basis system (with 2n basis functions) can only be used with n basis functions.
To solve this problem, an alternative basis system (variant 2) was presented. Using this basis system, however, requires a transformation matrix M, and when the basis system is orthogonalized on the interval [−π,π], the derivation of this matrix can be complicated and laborious. The shape functions were then derived for both variants and plotted (for n = 5), and it was shown that these functions fulfill the requirements on shape functions and can thus be used for the FE approximation.
Numerical simulations carried out with variant 1 (with orthogonalization on the interval [−2π,2π]) were used to examine the fundamental questions (for example: continuity of the deformations on the interface ΓAD, stresses on the analytical domain).
In this study, a new approach based on intelligent systems and machine learning algorithms is introduced for solving singular multi-pantograph differential equations (SMDEs). For the first time, a type-2 fuzzy logic based approach is formulated to find an approximate solution. The rules of the suggested type-2 fuzzy logic system (T2-FLS) are optimized by the square root cubature Kalman filter (SCKF) such that the proposed fitness function is minimized. Furthermore, the stability and boundedness of the estimation error are proved by a novel approach based on the Lyapunov theorem. The accuracy and robustness of the suggested algorithm are verified by several statistical examinations. It is shown that the suggested method yields an accurate solution with rapid convergence and a low computational cost.
The contribution explores the migratory situation on the Balkans and more specifically in the so-called Refugee District in Belgrade from a spatial perspective. By visualizing the areas of tensions in the Refugee District, the city of Belgrade, Serbia and Europe it aims to disentangle the political and socio-spatial levels that lead to the stuck situation of in-betweenness at the gates of the European Union.
Acoustic travel-time TOMography (ATOM) allows the measurement and reconstruction of air temperature distributions. Due to limiting factors, such as the challenge of estimating the travel times of early reflections in the room impulse response, which heavily depend on the positions of the transducers inside the measurement area, ATOM is applied mainly outdoors. To apply ATOM in buildings, this paper presents a numerical method to optimize the positions of the transducers. This optimization avoids reflection overlaps, leading to distinguishable travel times in the impulse-response reflectogram. To increase the accuracy of the measured temperature within the tomographic voxels, an additional objective is added to the proposed numerical method to minimize the number of sound-path-free voxels, ensuring the best sound-ray coverage of the room. Subsequently, an experimental set-up was used to verify the proposed numerical method. The results indicate the positive impact of the optimized transducer positions on the distribution of ATOM temperatures.
Medien/Denken/Um/Formatieren
(2020)
The media-studies practice of Um/Formatieren (re/formatting) was developed and tested in the 2017/18 winter semester with forty bachelor's students from media studies, theatre studies, and gender studies. The seminar dealt with concrete standard media formats, such as the carte de visite, the vinyl record, 16 mm or 35 mm film stock, digital file formats such as JPEG, GIF, and MP3, the video cassette, and paper formats such as DIN A0 or A8.
This contribution presents measurements and calculations that precisely describe the temperature development in concrete cylinders under cyclic loading. The measurements were performed on a test stand, the calculations in the FEM program ANSYS. The temperature measurements made it possible to validate the simulations of the temperature development of the concrete cylinders for the concrete mixture used. The investigations support the conclusion that, under cyclic specimen loading and the accompanying specimen strain, energy is dissipated, and that this dissipation is chiefly responsible for heating the specimen.
Evaporation is a very important process and one of the most critical factors in agricultural, hydrological, and meteorological studies. Due to the interactions of multiple climatic factors, evaporation is a complex and nonlinear phenomenon to model; thus, machine learning methods have gained popularity in this realm. In the present study, four machine learning methods, Gaussian Process Regression (GPR), K-Nearest Neighbors (KNN), Random Forest (RF), and Support Vector Regression (SVR), were used to predict pan evaporation (PE). Meteorological data, including PE, temperature (T), relative humidity (RH), wind speed (W), and sunshine hours (S), were collected from 2011 through 2017. The accuracy of the studied methods was determined using the statistical indices of Root Mean Squared Error (RMSE), correlation coefficient (R), and Mean Absolute Error (MAE). Furthermore, Taylor diagrams were utilized for evaluating the accuracy of the models. The results showed that at the Gonbad-e Kavus, Gorgan, and Bandar Torkman stations, GPR (RMSE of 1.521, 1.244, and 1.254 mm/day), KNN (RMSE of 1.991, 1.775, and 1.577 mm/day), RF (RMSE of 1.614, 1.337, and 1.316 mm/day), and SVR (RMSE of 1.55, 1.262, and 1.275 mm/day) all performed appropriately in estimating PE values. GPR with the input parameters T, W, and S for the Gonbad-e Kavus station, and GPR with the input parameters T, RH, W, and S for the Gorgan and Bandar Torkman stations, gave the most accurate predictions and are proposed for precise estimation of PE. The findings of the current study indicate that PE values may be accurately estimated from a few easily measured meteorological parameters.
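The error metrics named in the abstract above are standard; as an illustration only (the observed and predicted values below are made up, not the study's data), they can be computed as:

```python
import math

def rmse(obs, pred):
    """Root Mean Squared Error."""
    return math.sqrt(sum((o - p) ** 2 for o, p in zip(obs, pred)) / len(obs))

def mae(obs, pred):
    """Mean Absolute Error."""
    return sum(abs(o - p) for o, p in zip(obs, pred)) / len(obs)

def corr(obs, pred):
    """Pearson correlation coefficient R."""
    n = len(obs)
    mo, mp = sum(obs) / n, sum(pred) / n
    cov = sum((o - mo) * (p - mp) for o, p in zip(obs, pred))
    so = math.sqrt(sum((o - mo) ** 2 for o in obs))
    sp = math.sqrt(sum((p - mp) ** 2 for p in pred))
    return cov / (so * sp)

# Hypothetical daily PE values in mm/day, purely illustrative.
obs  = [3.1, 4.2, 5.0, 6.3, 2.8]
pred = [3.0, 4.5, 4.8, 6.0, 3.1]
rmse_val, mae_val, r_val = rmse(obs, pred), mae(obs, pred), corr(obs, pred)
```

A lower RMSE/MAE and an R close to 1 indicate a better fit, which is how the four methods are ranked in the study.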
How can journalistic offerings be financed sustainably? This remains the core question for media companies and journalistic start-ups when developing and building viable digital business models.
The authors of this volume provide a broad overview of the state of knowledge on paid content, platforms, and willingness to pay in journalism, and open up innovative perspectives on novel platform models as well as on the motives and needs of users of digital journalistic content. Based on empirical research, they derive recommendations for the user-centered design of paid-content offerings and new perspectives on willingness to pay in digital journalism, relevant for research and media practice alike.
Unmanned aircraft systems (UAS) show large potential for the construction industry. Their use in condition assessment has increased significantly, due to technological and computational progress. UAS play a crucial role in developing a digital maintenance strategy for infrastructure, saving cost and effort, while increasing safety and reliability. Part of that strategy are automated visual UAS inspections of the building’s condition. The resulting images can automatically be analyzed to identify and localize damages to the structure that have to be monitored. Further interest in parts of a structure can arise from events like accidents or collisions. Areas of low interest exist, where low resolution monitoring is sufficient.
From different requirements for resolution, different levels of detail can be derived. They require special image acquisition parameters that differ mainly in the distance between camera and structure. Areas with a higher level of detail require a smaller distance to the object, producing more images. This work proposes a multi-scale flight path planning procedure, enabling higher resolution requirements for areas of special interest, while reducing the number of required images to a minimum. Careful selection of the camera positions maintains the complete coverage of the structure, while achieving the required resolution in all areas. The result is an efficient UAS inspection, reducing effort for the maintenance of infrastructure.
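The link between camera-to-structure distance and achievable resolution described above follows the pinhole-camera relation; a minimal sketch, with a hypothetical sensor (focal length, pixel size, and target resolutions are assumed numbers, not taken from the paper):

```python
def required_distance(gsd_mm, focal_length_mm, pixel_size_mm):
    """Camera-to-surface distance that yields a target ground sampling
    distance (GSD = surface size imaged by one pixel), from the pinhole
    relation GSD / distance = pixel_size / focal_length."""
    return gsd_mm * focal_length_mm / pixel_size_mm

# Hypothetical camera: 16 mm lens, 4.4 um pixels.
d_coarse = required_distance(gsd_mm=1.0, focal_length_mm=16.0, pixel_size_mm=0.0044)  # low-interest area
d_fine   = required_distance(gsd_mm=0.2, focal_length_mm=16.0, pixel_size_mm=0.0044)  # damage hotspot
```

The finer level of detail forces a smaller stand-off distance, which is why multi-scale planning places more, closer camera positions only where the higher resolution is actually required.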
Rechargeable lithium-ion batteries (LIBs) play a very significant role in power supply and storage. In recent decades, LIBs have attracted tremendous attention in mobile communication, portable electronics, and electric vehicles. Furthermore, global warming has become a worldwide issue due to the ongoing production of greenhouse gases, motivating solutions such as renewable energy sources, of which solar and wind energy are the most important. As these technologies progress, they will require batteries to store the produced power and balance power generation and consumption. Nowadays, rechargeable batteries such as LIBs are considered one of the best solutions: they provide high specific energy and high rate performance, while their rate of self-discharge is low.
The performance of LIBs can be improved by modifying battery characteristics. The size of the solid particles in the electrodes affects the specific energy and the cyclability of batteries, and it influences the lithium content of the electrode, which is a vital parameter for the capacity and capability of a battery. There are different sources of heat generation in LIBs, such as the heat produced during electrochemical reactions and the heat due to the internal resistance of the battery. The size of the electrode's electroactive particles directly affects the heat produced in the battery. It will be shown that smaller solid particles enhance the thermal characteristics of LIBs.
Thermal issues such as overheating, temperature maldistribution in the battery, and thermal runaway have limited the applications of LIBs. Such thermal challenges reduce the life cycle of LIBs and may lead to dangerous conditions such as fire or even explosion. However, recent advances in the fabrication of advanced materials such as graphene and carbon nanotubes, with extraordinary thermal conductivity and electrical properties, offer new opportunities to enhance LIB performance. Since experimental work is expensive, our objective is to use computational methods to investigate the thermal issues in LIBs. Dissipating the heat produced in the battery can improve the cyclability and specific capacity of LIBs. In real applications, LIB packs consisting of several battery cells are used as the power source. It is therefore worthwhile to investigate the thermal characteristics of battery packs under charging/discharging cycles at different applied current rates. To remove the produced heat, batteries can be surrounded by materials with high thermal conductivity. Paraffin wax absorbs large amounts of energy because of its high latent heat; this absorption occurs at nearly constant temperature during the phase change. Moreover, the thermal conductivity of paraffin can be enhanced with nano-materials such as graphene, CNTs, and fullerene to form a nano-composite medium. Improving the thermal conductivity of LIBs increases the heat dissipation from the batteries, which is a vital issue in battery thermal management systems. The application of two-dimensional (2D) materials has been on the rise since the exfoliation of graphene from bulk graphite. 2D materials are single layers with nanometer-scale thickness that show superior thermal, mechanical, and optoelectronic properties. They are potential candidates for energy storage and supply, particularly as electrode materials in lithium-ion batteries.
The high thermal conductivity of graphene and graphene-like materials can play a significant role in the thermal management of batteries. However, defects always exist in nano-materials, since there is no ideal fabrication process. Among the most important defects are nano-cracks, which can dramatically weaken the mechanical properties of a material. Newly synthesized crystalline carbon nitride with the stoichiometry C3N has attracted much attention due to its extraordinary mechanical and thermal properties. Another nano-material is phagraphene, whose anisotropic mechanical characteristics are ideal for the production of nanocomposites.
It shows ductile fracture behavior when subjected to uniaxial loading. It is worthwhile to investigate its thermo-mechanical properties in the pristine and defective states. We hope that the findings of our work will not only be useful for experimental and theoretical research but also help in the design of advanced electrodes for LIBs.
Material properties play a critical role in the manufacturing of durable products. Estimating precise characteristics at different scales requires complex and expensive experimental measurements. Computational methods can potentially provide a platform to determine the fundamental properties before the final experiment. Multi-scale computational modeling covers various time and length scales, including the nano, micro, meso, and macro scales. These scales can be modeled separately or in correlation with coarser scales. Depending on the scales of interest, the right selection of multi-scale methods leads to reliable results at an affordable computational cost. The present dissertation deals with problems at various length and time scales using computational methods, including density functional theory (DFT), molecular mechanics (MM), molecular dynamics (MD), and finite element (FE) methods.
Physical and chemical interactions at lower scales determine the properties at coarser scales. Modeling particle interactions and exploring fundamental properties are significant challenges of computational science. Fine-scale models need more computational effort due to the large number of interacting atoms/particles. To deal with this problem and treat a fine-scale (nano) problem as a coarse-scale (macro) problem, we extended an atomic-continuum framework. The discrete atomic models are solved as continuum problems using the computationally efficient FE method. The MM or force-field method approximates a solution on the atomic scale based on a set of assumptions: atoms and bonds are modeled as harmonic oscillators, i.e., a system of masses and springs, and the negative gradient of the potential energy equals the force on each atom. In this way, each bond's total potential energy, including bonded and non-bonded contributions, is simulated as an equivalent structural strain energy. Finally, the chemical nature of the atomic bond is modeled as a piezoelectric beam element that is solved by the FE method.
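The mass-spring analogy described above can be written down directly; a minimal sketch of the bond-stretch term (the force constant and equilibrium length are illustrative values, not taken from the dissertation):

```python
def harmonic_bond_energy(r, r0, k):
    """Harmonic bond-stretch potential U = 0.5 * k * (r - r0)**2,
    the mass-spring model used in molecular mechanics."""
    return 0.5 * k * (r - r0) ** 2

def harmonic_bond_force(r, r0, k):
    """Force as the negative gradient of the potential: F = -dU/dr = -k * (r - r0)."""
    return -k * (r - r0)

# Illustrative numbers: a bond stretched 0.2 units past equilibrium
# experiences a restoring force pulling it back.
f = harmonic_bond_force(r=1.2, r0=1.0, k=500.0)
```

In the FE analogy, this strain energy is what gets mapped onto the stiffness of an equivalent beam element.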
Exploring novel materials with unique properties is in demand for various industrial applications. During the last decade, many two-dimensional (2D) materials have been synthesized and have shown outstanding properties. Investigating the probable defects arising during the formation/fabrication process and studying their strength under severe service conditions are critical tasks for exploring performance prospects. We studied various defects, including nano-crack, notch, and point-vacancy (Stone-Wales) defects, employing MD analysis. Classical MD has been used to simulate a considerable number of molecules at the micro- and meso-scales. Pristine and defective nanosheet structures were considered under uniaxial tensile loading at various temperatures using the open-source LAMMPS code. The results were visualized with the open-source software OVITO and VMD.
Quantum-based first-principles calculations are conducted at the electronic scale and are known as the most accurate ab initio methods. However, they are computationally too expensive to apply to large systems. We used density functional theory (DFT) to estimate the mechanical and electrochemical response of the 2D materials. The many-body Schrödinger equation describes the motion and interactions of the solid-state particles: the solid is described as a system of positive nuclei and negative electrons, all interacting electromagnetically, where the wave function describes the quantum state of the set of particles. However, dealing with the 3N spatial coordinates of the electrons and nuclei and the N spin coordinates of the electrons makes the governing equation unsolvable for more than a few interacting atoms. Assumptions and theories such as the Born-Oppenheimer approximation, the Hartree-Fock mean field, and the Hohenberg-Kohn theorems are needed to treat this equation. First, the Born-Oppenheimer approximation reduces it to the electronic coordinates only. Then Kohn and Sham, building on the Hartree-Fock and Hohenberg-Kohn theories, assumed an equivalent fictitious system of non-interacting electrons, expressed as a functional of the electron density, whose ground-state energy equals that of the interacting system. The exchange-correlation energy functional is responsible for the equivalence of the two systems. Its exact form is not known; however, there are widely used approximations such as the local density approximation (LDA), the generalized gradient approximation (GGA), and hybrid functionals (e.g., B3LYP). In our study, DFT was performed using the VASP code within the GGA/PBE approximation, and visualization/post-processing of the results was realized via the open-source software VESTA.
Extensive DFT calculations were conducted to assess the prospects of 2D nanomaterials as anode/cathode electrode materials for batteries. The performance of metal-ion batteries strongly depends on the design of novel electrode materials. Two-dimensional (2D) materials have attracted remarkable interest for use as electrodes in battery cells due to their excellent properties. Desirable battery energy storage systems (BESS) must offer high energy density, safe operation, and efficient production costs. Batteries are used in electronic devices and provide a solution to environmental issues by storing the intermittent energy generated from renewable wind or solar power plants. Therefore, exploring optimal electrode materials can improve storage capacity and charging/discharging rates, leading to the design of advanced batteries.
Our results across multiple scales highlight not only the efficiency of the proposed and employed methods but also the promising prospects of recently synthesized nanomaterials and their application as anode materials. First, a novel approach was developed for modeling a 1D nanotube as a continuum piezoelectric beam element; the results converged and matched closely with those from experiments and other, more complex models. Then the mechanical properties of nanosheets were estimated, and the failure-mechanism results provide a useful guide for further use in prospective applications. Our results give a comprehensive and useful picture of the mechanical properties of nanosheets with and without defects. Finally, the mechanical and electrochemical properties of several 2D nanomaterials were explored for the first time; their performance as anode materials illustrates high potential for manufacturing super-stretchable, ultrahigh-capacity battery energy storage systems (BESS). Our results exhibited better performance in comparison to the available commercial anode materials.
This study aims to evaluate a new approach in modeling gully erosion susceptibility (GES) based on a deep learning neural network (DLNN) model and an ensemble particle swarm optimization (PSO) algorithm with DLNN (PSO-DLNN), comparing these approaches with common artificial neural network (ANN) and support vector machine (SVM) models in Shirahan watershed, Iran. For this purpose, 13 independent variables affecting GES in the study area, namely, altitude, slope, aspect, plan curvature, profile curvature, drainage density, distance from a river, land use, soil, lithology, rainfall, stream power index (SPI), and topographic wetness index (TWI), were prepared. A total of 132 gully erosion locations were identified during field visits. To implement the proposed model, the dataset was divided into the two categories of training (70%) and testing (30%). The results indicate that the area under the curve (AUC) value from receiver operating characteristic (ROC) considering the testing datasets of PSO-DLNN is 0.89, which indicates superb accuracy. The rest of the models achieve optimal accuracy and have similar results to the PSO-DLNN model; the AUC values from ROC of DLNN, SVM, and ANN for the testing datasets are 0.87, 0.85, and 0.84, respectively. The ensemble with the PSO algorithm thus increased the efficiency of GES prediction. Therefore, it can be concluded that the DLNN model and its ensemble with the PSO algorithm can be used as a novel and practical method to predict gully erosion susceptibility, which can help planners and managers to manage and reduce the risk of this phenomenon.
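The AUC values reported above can be read as the probability that a randomly chosen gully location is ranked above a randomly chosen non-gully location; a minimal sketch via the rank-sum (Mann-Whitney) identity, using toy labels and scores rather than the study's data:

```python
def roc_auc(labels, scores):
    """AUC of the ROC curve: probability that a random positive sample
    receives a higher score than a random negative sample (ties count 0.5)."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy susceptibility scores: 1 = observed gully, 0 = no gully.
auc = roc_auc([0, 0, 1, 1, 0, 1], [0.2, 0.4, 0.8, 0.6, 0.7, 0.9])
```

An AUC of 0.5 corresponds to random ranking, 1.0 to perfect separation, which places the reported 0.84-0.89 values in the "very good to superb" range commonly cited for susceptibility maps.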
In conjunction with improved methods of monitoring damage and degradation processes, interest in the reliability assessment of reinforced concrete bridges has been increasing in recent years. Automated image-based inspections of the structural surface provide valuable data for extracting quantitative information about deterioration, such as crack patterns. However, the knowledge gain results from processing this information in a structural context, i.e. relating the damage artifacts to building components; this enables the transformation to structural analysis. The approach sets two further requirements: availability of structural bridge information and standardized storage for interoperability with subsequent analysis tools. Since the large datasets involved can only be processed efficiently in an automated manner, this work targets the implementation of the complete workflow from damage and building data to structural analysis. First, domain concepts are derived from the back-end tasks: structural analysis, damage modeling, and life-cycle assessment. The common interoperability format, the Industry Foundation Classes (IFC), and the processes in these domains are further assessed. The need for user-controlled interpretation steps is identified, and the developed prototype thus allows interaction at subsequent model stages. The latter has the advantage that interpretation steps can be separated individually into either a structural analysis model or a damage information model, or a combination of both. This approach to damage information processing from the perspective of structural analysis is then validated in different case studies.
Welfare‐state transformation and entrepreneurial urban politics in Western welfare states since the late 1970s have yielded converging trends in the transformation of the dominant Fordist paradigm of social housing in terms of its societal function and institutional and spatial form. In this article I draw from a comparative case study on two cities in Germany to show that the resulting new paradigm is simultaneously shaped by the idiosyncrasies of the country's national housing regime and local housing policies. While German governments have successively limited the societal function of social housing as a legitimate instrument only for addressing exceptional housing crises, local policies on providing and organizing social housing within this framework display significant variation. However, planning and design principles dominating the spatial forms of social housing have been congruent. They may be interpreted as both an expression of the marginalization of social housing within the restructured welfare housing regime and a tool of its implementation according to the logics of entrepreneurial urban politics.
Securing medical care in the Federal Republic of Germany has been discussed increasingly in recent years. Ensuring needs-based medical care is particularly problematic in rural regions, and it has long since become an issue for society as a whole. Lena Wild's bachelor's thesis examines the interface between spatial planning and medical-care planning, guided by the question: "To what extent do current plans and funding programs make a positive contribution to securing medical care in rural Thuringia?"
In addition to a literature and data analysis, the thesis focuses on expert interviews with various municipal and medical actors from selected districts in Thuringia.
After situating medical care in the context of spatial planning, an overview of the German health system and the particularities of eastern Germany is given. Instruments for steering medical care are then examined in more detail, focusing on statutory needs-based planning of physician services as the central steering instrument, on funding opportunities in Thuringia, and on how they work. The thesis then addresses rural areas and their challenges for securing medical care, as well as the communication and cooperation among the actors.
The thesis is to be understood as an excursion from spatial planning into a sectoral planning discipline. Medical-care planning and spatial planning interact with one another. Knowing the instruments and mechanisms of the other discipline creates added value for achieving common goals, such as comprehensive medical care and, with it, sustainable spatial development.
The performance of ductless personalized ventilation (DPV) was compared to the performance of a typical desk fan since they are both stand-alone systems that allow the users to personalize their indoor environment. The two systems were evaluated using a validated computational fluid dynamics (CFD) model of an office room occupied by two users. To investigate the impact of DPV and the fan on the inhaled air quality, two types of contamination sources were modelled in the domain: an active source and a passive source. Additionally, the influence of the compared systems on thermal comfort was assessed using the coupling of CFD with the comfort model developed by the University of California, Berkeley (UCB model). Results indicated that DPV performed generally better than the desk fan. It provided better thermal comfort and showed a superior performance in removing the exhaled contaminants. However, the desk fan performed better in removing the contaminants emitted from a passive source near the floor level. This indicates that the performance of DPV and desk fans depends highly on the location of the contamination source. Moreover, the simulations showed that both systems increased the spread of exhaled contamination when used by the source occupant.
Cities without growth: a vision that has so far been barely imaginable. Yet climate change, the waste of resources, growing social inequalities, and many other future threats fundamentally call into question growth as the cure-all it has been. How do we want to live together today and tomorrow? How do we shape a good life for everyone in the city? While these questions are already being tentatively answered in individual niches, comprehensive designs and transformation approaches that outline a fundamentally different, solidary city are still lacking. The project Postwachstumsstadt (post-growth city) dares this attempt.
This book brings together conceptual and pragmatic aspects from various areas of urban politics that chart and connect new paths. The contributions discuss urban growth crises, transformative planning, and conflicts over the power to shape the city. Not least, they also pose the question of the role of urban utopias anew. The aim is to spark a long-overdue debate on how the necessary urban turnarounds can be realized locally through a social-ecological reorientation.
This paper proposes a practice-theoretical journalism research approach for an alternate and innovative perspective of digital journalism’s current empirical challenges. The practice-theoretical approach is introduced by demonstrating its explanatory power in relation to demarcation problems, technological changes, economic challenges and challenges to journalism’s legitimacy. Its respective advantages in dealing with these problems are explained and then compared to established journalism theories. The particular relevance of the theoretical perspective is due to (1) its central decision to observe journalistic practices, (2) the transgression of conventional journalistic boundaries, (3) the denaturalization of journalistic norms and laws, (4) the explicit consideration of a material, socio-technical dimension of journalism, (5) a focus on the conflicting relationship between journalistic practices and media management practices, and (6) prioritizing order generation over stability.
The assessment of wind-induced vibrations is considered vital for the design of long-span bridges. The aim of this research is to develop a methodological framework for robust and efficient prediction strategies for complex aerodynamic phenomena using hybrid models that employ numerical analyses as well as meta-models. Here, an approach to predict motion-induced aerodynamic forces is developed using artificial neural network (ANN). The ANN is implemented in the classical formulation and trained with a comprehensive dataset which is obtained from computational fluid dynamics forced vibration simulations. The input to the ANN is the response time histories of a bridge section, whereas the output is the motion-induced forces. The developed ANN has been tested for training and test data of different cross section geometries which provide promising predictions. The prediction is also performed for an ambient response input with multiple frequencies. Moreover, the trained ANN for aerodynamic forcing is coupled with the structural model to perform fully-coupled fluid--structure interaction analysis to determine the aeroelastic instability limit. The sensitivity of the ANN parameters to the model prediction quality and the efficiency has also been highlighted. The proposed methodology has wide application in the analysis and design of long-span bridges.
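As a toy stand-in for the input/output mapping described above (untrained random weights and assumed layer sizes, not the paper's trained network), the prediction step of such an ANN is a simple forward pass from a window of response samples to a force value:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical one-hidden-layer network: 8 response-history samples in,
# one motion-induced force value out. Weights here are random, i.e. untrained.
W1 = rng.normal(size=(16, 8)) * 0.1
b1 = np.zeros(16)
W2 = rng.normal(size=(1, 16)) * 0.1
b2 = np.zeros(1)

def predict_force(response_window):
    """Forward pass: section response time-history window -> force."""
    h = np.tanh(W1 @ response_window + b1)   # hidden layer
    return (W2 @ h + b2)[0]                  # scalar force output

# Feed one period of a sinusoidal (e.g. heave) response as input.
force = predict_force(np.sin(np.linspace(0.0, 2.0 * np.pi, 8)))
```

In the paper's setting, the trained counterpart of this mapping replaces the CFD forcing inside the coupled fluid-structure iteration, which is where the computational savings come from.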
A novel combination of the ant colony optimization algorithm (ACO) and computational fluid dynamics (CFD) data is proposed for modeling multiphase chemical reactors. The proposed intelligent model presents a probabilistic computational strategy for predicting various levels of three-dimensional bubble column reactor (BCR) flow. The results demonstrate an enhanced communication between the ant colony prediction and the CFD data in different sections of the BCR.
In this paper, an artificial neural network is implemented to predict the thermal conductivity ratio of TiO2-Al2O3/water nanofluid. TiO2-Al2O3/water, an innovative type of nanofluid, was synthesized by the sol-gel method. The results indicated that a concentration of 1.5 vol.% enhanced the thermal conductivity by up to 25%. It was shown that the heat transfer coefficient increased linearly with nanoparticle concentration, but its variation with temperature was nonlinear. It should be noted that an increase in concentration may cause the particles to agglomerate, which reduces the thermal conductivity. An increase in temperature also increases the thermal conductivity, due to increased Brownian motion and particle collisions. The artificial neural network predicts the thermal conductivity of the TiO2-Al2O3/water nanofluid as a function of volumetric concentration and temperature. For this prediction, SOM (self-organizing map) and BP-LM (Back-Propagation Levenberg-Marquardt) algorithms were used. Based on the results obtained, these algorithms can be considered an exceptional tool for predicting thermal conductivity. The correlation coefficients were 0.938 and 0.98 for the SOM and BP-LM algorithms, respectively, which is highly acceptable.
Pressure fluctuations beneath hydraulic jumps can endanger the stability of stilling basins. This paper deals with the mathematical modeling of laboratory-scale experiments to estimate extreme pressures. Experiments were carried out on a smooth stilling basin beneath free hydraulic jumps downstream of an Ogee spillway. From the probability distribution of the measured instantaneous pressures, pressures with different probabilities could be determined. It was verified that the maximum pressure fluctuations, and the negative pressures, occur near the spillway toe, while the minimum pressure fluctuations occur downstream of the hydraulic jump. Cumulative curves of the pressure data were assessed for characteristic points along the basin and for different Froude numbers. To benchmark the results, dimensionless statistical parameters, including the mean pressure (P*m), the standard deviation of the pressure fluctuations (σ*X), pressures with different non-exceedance probabilities (P*k%), and the statistical coefficient of the probability distribution (Nk%), were assessed. It was found that an existing method can be used to interpret the present data, and the pressure distribution in similar conditions, using new second-order fractional relationships for σ*X and Nk%. The values of the Nk% coefficient indicated a single mean value for each probability.
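A pressure with a given non-exceedance probability is an empirical quantile of the measured record; a minimal nearest-rank sketch (synthetic samples, not the experimental data, and one of several common quantile conventions):

```python
def pressure_with_probability(samples, k):
    """Pressure P_k% with non-exceedance probability k (in percent),
    read off the empirical distribution via the nearest-rank rule."""
    s = sorted(samples)
    idx = max(0, min(len(s) - 1, int(round(k / 100.0 * len(s))) - 1))
    return s[idx]

# Synthetic instantaneous pressures (arbitrary units): P_1% approximates
# the extreme negative side, P_99% the extreme positive side.
record = list(range(1, 101))
p_low, p_median, p_high = (pressure_with_probability(record, k) for k in (1, 50, 99))
```

Real records would use many thousands of samples per measurement point, so the tails (P_1%, P_99%) become stable enough to fit the dimensionless relationships mentioned above.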
Schreibwalzer: Canonical texts in particular are surrounded by an aura of untouchability, and it seems as though one needs a certain kind of legitimation to approach them critically or to place one's own thoughts on the same level. In the prelude, we physically feel our way up to these heavyweights, and beyond, with scissors, pencil, and glue stick; with respect, but without intimidation.
A new large‐field, high‐sensitivity, single‐mirror coincident schlieren optical instrument has been installed at the Bauhaus‐Universität Weimar for the purpose of indoor air research. Its performance is assessed by the non‐intrusive measurement of the thermal plume of a heated manikin. The schlieren system produces excellent qualitative images of the manikin's thermal plume and also quantitative data, especially schlieren velocimetry of the plume's velocity field that is derived from the digital cross‐correlation analysis of a large time sequence of schlieren images. The quantitative results are compared with thermistor and hot‐wire anemometer data obtained at discrete points in the plume. Good agreement is obtained, once the differences between path‐averaged schlieren data and planar anemometry data are reconciled.
Radikale Planung
(2020)
The spread of mobile smartphones, and especially their ubiquitous localization technologies, is lastingly changing how we navigate in space. Parallel to the rapid development of everyday carried devices, the longer-running development of virtual-reality technology is transitioning into an extended, augmented mixed reality. Within this field of tension, this thesis examines the extent to which directional, binaurally rendered stereophony can influence human movement in space, and explores the potentials of rediscovering a technique that has been known for a relatively long time. As part of this work, the author developed a binaural mobile application for directional stereophony with which virtual moving or static audio hotspots can be placed in space. Thus a virtual or actual sound can be located in space to the left or right of a person, or 30 meters in front of them. Through real-time binaural rendering of the sound sources over stereo headphones, these spatially located sounds can be perceived three-dimensionally with two ears, similar to spatial vision with two eyes. Using several localized sound sources as a soundscape creates an augmented auditory reality that extends physical reality. The position and navigation of the user are influenced interactively and cybernetically through binaural volume modulation (loudness increases as the distance to the source decreases) and stereo panning with time-delay modulation (the direction is located spatially left-right-front via a stereo signal on both ears).
Guided by their interest in the audible virtual sound sources, users navigate through a dynamically generated, three-dimensional acoustic space that is at once a virtual and a cybernetic space, since the representation of the sounds is adapted to the users' movement and orientation. This thesis examines whether human movement can be influenced by (virtual) sounds and how large or measurable that influence is. Not all artistic, architectural, and philosophical questions can be addressed within this text, although they remain of interest as questions of spatial theory. The main subject of this work is to investigate whether direction-bound stereophony can make a relevant contribution to human navigation, mainly on foot, in urban areas, predominantly outdoors. The first part, "Space and Sound," discusses spatial-theoretical considerations of human movement in space, conceptions of space, spatial sounds and sound perception, the development of stereophonic apparatus, and aspects of augmented audio reality. The second part presents three demonstrators as application scenarios and three outdoor evaluations. The tests examine whether the method is suitable for pedestrian navigation and to what extent users' movement behavior can be influenced. The evaluations show that stereophonic sounds are fundamentally suitable as a navigation system, since a large majority of participants easily found the acoustically marked targets. A clear influence on movement patterns is also evident, although it depends on individual interests and preferences.
Finally, the results of the studies are discussed in the context of the theories presented, and the potential of stereophonic applications is addressed in an outlook. When designing, producing, and deploying mobile systems, the differing mental and spatial models and conceptions of developers and users must be taken into account. Since a comprehensive transdisciplinary treatment requires clear terminology, arguments for a spatial-theoretical vocabulary are discussed. These are highly relevant for a design-oriented use of direction-bound stereophony, especially in the context of mobile navigation through acoustically augmented spaces.
What you are about to read is the very last issue of the ZMK. Since our overall research enterprise, the IKKM, has to cease all of its activities with the end of its twelve years of funding by the German federal government, the ZMK will also come to an end. Its last topic, Schalten und Walten, has also been the subject of the concluding biannual conference of the IKKM, and we hope it will be a fitting topic under which to sum up the IKKM's research on Operative Ontologies.
Although this final issue is in English, we decided to leave its title in German: Schalten und Walten. As is the case with the name of the IKKM (Internationales Kolleg für Kulturtechnikforschung und Medienphilosophie), the term seems untranslatable to us, and not only for the poetic reason of the rhyming sound of the words. Switching and Ruling might be accepted as English versions, but an unbridgeable difference remains. In German, Schalten und Walten is a common and quite widespread idiom found in everyday life. Whoever, the idiom stipulates, is able to exercise Schalten und Walten has the power to act, freedom of decision, and power of disposition.
Although the two terms are mentioned together and belong together in the German expression Schalten und Walten, they are nevertheless complements to each other. Both refer to the exercise and existence of domination, disposal, or power, but they designate two quite different modes of being. Schalten is not so much sheer command over something as government or management. It is linked to control, intervention, and change; in short, it is operative and goes along with distinctive measures and cause-and-effect relations. The English equivalent switching reflects this more or less adequately.