Object-Oriented Damage Information Modeling Concepts and Implementation for Bridge Inspection
(2022)
Bridges are designed to last for more than 50 years and consume up to 50% of their life-cycle costs during their operation phase. Several inspection and assessment actions are executed during this period. Bridge and damage information must be gathered, digitized, and exchanged between different stakeholders. Currently, inspection and assessment practices rely on paper-based data collection and exchange, which is time-consuming and error-prone and leads to loss of information. Storing and exchanging damage and building information in a digital format may lower costs and errors during inspection and assessment and support future needs, for example, immediate simulations for performance assessment, automated maintenance planning, and mixed-reality inspections. This study focuses on a concept for modeling damage information to support bridge reviews and structural analysis. Starting from the definition of multiple use cases and related requirements, the data model for damage information is defined independently of the subsequent implementation. In the next step, the implementation via an established standard is explained. Functional tests aim to identify problems in the concept and implementation. To show the capability of the final model, two example use cases are illustrated: the inspection review of the entire bridge and a finite-element analysis of a single component. The main results are the definition of the necessary damage data, an object-oriented damage model that supports multiple use cases, and the implementation of the model in a standard. Furthermore, the tests have shown that the standard is suitable for delivering damage information; however, several software programs lack a proper implementation of the standard.
Quantification of cracks in concrete thin sections considering current methods of image analysis
(2022)
In this work, image analysis is used to quantify cracks in concrete thin sections via modern image processing. Thin sections were impregnated with a yellow epoxy resin to increase the contrast between voids and other phases of the concrete. By means of several pre-processing steps, machine learning, and Python scripts, cracks can be quantified over an area of up to 40 cm². As a result, the crack area, lengths, and widths were estimated automatically within a single workflow. Crack patterns caused by freeze-thaw damage were investigated. To compare the inner degradation of the investigated thin sections, the crack density was used. For validation of the automatically determined results, cracks in the thin sections were measured manually in two different ways. On the one hand, the presented work shows that the width of cracks can be determined pixelwise, thus providing a plot of the width distribution. On the other hand, the automatically measured crack lengths differ from the manually measured ones.
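The pixelwise width idea can be sketched in a few lines: on a binary crack mask, twice the Euclidean distance from each crack pixel to the nearest background pixel is a common morphological proxy for local crack width. This minimal sketch uses SciPy's distance transform on a synthetic mask; the actual workflow described above additionally involves pre-processing and machine learning.

```python
import numpy as np
from scipy import ndimage

# Synthetic binary crack mask: a 3-pixel-wide vertical "crack"
# in a 20 x 20 image (illustrative data, not from the study).
mask = np.zeros((20, 20), dtype=bool)
mask[2:18, 9:12] = True

# Pixelwise width proxy: twice the Euclidean distance from each
# crack pixel to the nearest background pixel.
dist = ndimage.distance_transform_edt(mask)
widths = 2.0 * dist[mask]

# Crack area (pixels) and a simple crack density (area fraction).
area_px = int(mask.sum())          # 16 rows * 3 columns = 48
density = area_px / mask.size      # 48 / 400 = 0.12

print(area_px, density, widths.max())
```

With real data, the pixel widths would be converted to physical units via the image scale before plotting the width distribution.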
In this work, we present a deep collocation method (DCM) for three-dimensional potential problems in non-homogeneous media. This approach utilizes a physics-informed neural network with material transfer learning, reducing the solution of the non-homogeneous partial differential equations to an optimization problem. We tested different configurations of the physics-informed neural network, including smooth activation functions, sampling methods for collocation point generation, and combined optimizers. A material transfer learning technique is utilized for non-homogeneous media with different material gradations and parameters, which enhances the generality and robustness of the proposed method. In order to identify the most influential parameters of the network configuration, we carried out a global sensitivity analysis. Finally, we provide a convergence proof of our DCM. The approach is validated through several benchmark problems, also testing different material variations.
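The collocation principle behind the DCM can be illustrated without a neural network: choose a trial function with free parameters, enforce the PDE residual at collocation points, and minimize. The sketch below replaces the deep network with a polynomial basis (an assumption for brevity, not the method of the paper) and solves a 1D Poisson problem by least squares.

```python
import numpy as np

# Collocation sketch: solve u''(x) = -pi^2 sin(pi x), u(0) = u(1) = 0,
# whose exact solution is u(x) = sin(pi x).
xs = np.linspace(0.05, 0.95, 19)   # interior collocation points
deg = 8

# Trial functions phi_k(x) = x(1-x)x^k satisfy the boundary conditions.
def phi(k, x):
    return x * (1 - x) * x**k

def d2phi(k, x):
    # second derivative of x^(k+1) - x^(k+2)
    return (k + 1) * k * x**(k - 1) - (k + 2) * (k + 1) * x**k

# Enforce the PDE residual at the collocation points, least squares.
A = np.column_stack([d2phi(k, xs) for k in range(deg)])
b = -np.pi**2 * np.sin(np.pi * xs)
c, *_ = np.linalg.lstsq(A, b, rcond=None)

u_mid = sum(c[k] * phi(k, 0.5) for k in range(deg))
print(abs(u_mid - 1.0))  # small: exact u(0.5) = sin(pi/2) = 1
```

In the DCM proper, the trial function is a neural network and the residual minimization is the training loop; the collocation structure is the same.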
In machine learning, if the training data is independently and identically distributed as the test data, then a trained model can make accurate predictions for new data samples. Conventional machine learning depends strongly on massive amounts of domain-specific training data to understand their latent patterns. In contrast, domain adaptation and transfer learning are sub-fields of machine learning concerned with solving the inescapable problem of insufficient training data by relaxing the domain-dependence hypothesis. In this contribution, we address this issue and, by combining both methods in a novel way, develop a computationally efficient and practical algorithm to solve boundary value problems based on nonlinear partial differential equations. We adopt a meshfree analysis framework to integrate the prevailing geometric modelling techniques based on NURBS and present an enhanced deep collocation approach that also plays an important role in the accuracy of the solutions. We start with a brief introduction on how these methods expand upon this framework. We observe excellent agreement between these methods and show how fine-tuning a pre-trained network to a specialized domain can lead to outstanding performance compared to existing approaches. As a proof of concept, we illustrate the performance of our proposed model on several benchmark problems.
In the issue marking the tenth anniversary of sub\urban, with the thematic focus "sub\x: Verortungen, Entortungen" ("locations, dislocations"), we publish a debate that departs from the textual discussions previously conducted in this section of our journal. In the run-up to planning our anniversary issue, we asked the current members of our scientific advisory board to discuss two fundamental questions of critical urban research in short contributions: What is the city? What is critique?
The call to place the concepts of city and critique at the centre of a debate offers a great opportunity to reach an understanding, well beyond conceptual clarifications of our common object of work (which can themselves be very fruitful), about the function we perform in society when we practice, research, and teach spatial planning. Since in the Federal Republic of Germany there is not only a great need for public planning but also considerable demand for it, and since the planning-related sciences enjoy an overall stable institutional standing, we run the risk of neglecting the socio-political legitimation of the profession and its science, of treating it as a given. After all, we hardly ever have to justify ourselves.
This article focuses on further developments of the background-oriented schlieren (BOS) technique to visualize convective indoor air flow, which is usually characterized by very small density gradients. Since light rays are deflected when passing through fluids of different densities, BOS can detect the resulting refractive index gradients as an integration along a line of sight. In this paper, the BOS technique is used to yield a two-dimensional visualization of small density gradients. The novelty of the described method is the implementation of a highly sensitive BOS setup to visualize the ascending thermal plume from a heated thermal manikin with temperature differences of as little as 1 K. To guarantee steady boundary conditions, the thermal manikin was seated in a climate laboratory. For the experimental investigations, a high-resolution DSLR camera was used, capturing a large field of view with sufficient detail accuracy. Several parameters, such as various backgrounds, focal lengths, room air temperatures, and distances between the object of investigation, the camera, and the structured background, were tested to find the most suitable settings for visualizing convective indoor air flow. Besides these measurements, this paper presents the analysis method, which uses cross-correlation algorithms, and finally the results of visualizing convective indoor air flow with BOS. The highly sensitive BOS setup presented in this article complements the commonly used invasive methods, which strongly influence weak air flows.
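The cross-correlation step of a BOS evaluation can be sketched on a synthetic signal pair: the apparent displacement of a background feature is the lag that maximizes the correlation between the reference and the distorted image. The 1D example below is illustrative only; real BOS evaluations correlate 2D interrogation windows.

```python
import numpy as np

# Reference background signal with a single feature at index 30.
ref = np.zeros(64)
ref[30] = 1.0

# "Distorted" signal: the feature appears shifted by 3 pixels,
# as caused by refraction through a density gradient.
shifted = np.roll(ref, 3)

# Full cross-correlation; zero lag sits at index len(ref) - 1.
corr = np.correlate(shifted, ref, mode="full")
shift = int(corr.argmax()) - (len(ref) - 1)
print(shift)  # 3
```

In practice, sub-pixel accuracy is obtained by interpolating around the correlation peak, and the displacement field is then related to the refractive index gradients.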
The fracture of microcapsules is an important issue for releasing the healing agent that heals cracks in encapsulation-based self-healing concrete. Capsular clustering, generated during the concrete mixing process, is considered one of the critical factors in the fracture mechanism. Since the literature lacks studies on this issue, self-healing concrete cannot be designed without an appropriate modelling strategy. In this paper, the effects of microcapsule size and clustering on fractured microcapsules are studied computationally. A simple 2D computational modelling approach is developed based on the eXtended Finite Element Method (XFEM) and a cohesive surface technique. The proposed model shows that microcapsule size and clustering play significant roles in governing the load-carrying capacity and the crack propagation pattern, and in determining whether the microcapsule will fracture or debond from the concrete matrix. The larger the microcapsule's circumferential contact length, the higher the load-carrying capacity. When it is less than 25% of the microcapsule circumference, the microcapsule is more likely to debond from the concrete. The greater the core/shell ratio (i.e., the smaller the shell thickness), the greater the likelihood of the microcapsule being fractured.
Operator Calculus Approach to Comparison of Elasticity Models for Modelling of Masonry Structures
(2022)
The solution of any engineering problem starts with a modelling process aimed at formulating a mathematical model that describes the problem under consideration with sufficient precision. Because of the heterogeneity of modern engineering applications, mathematical modelling nowadays ranges from remarkably precise micro- and even nano-modelling of materials to macro-modelling, which is more appropriate for practical engineering computations. In the field of masonry structures, a macro-model of the material can be constructed based on various elasticity theories, such as classical elasticity, micropolar elasticity, and Cosserat elasticity. Evidently, a different macro-behaviour is expected depending on the specific theory used in the background. Although there have been several theoretical studies of different elasticity theories in recent years, it is still not well understood how the modelling assumptions of different elasticity theories influence the modelling results for masonry structures. Therefore, a rigorous approach to the comparison of different three-dimensional elasticity models, based on quaternionic operator calculus, is proposed in this paper. Three elasticity models are described, and spatial boundary value problems for these models are discussed. In particular, explicit representation formulae for their solutions are constructed. Using these representation formulae, explicit estimates for the solutions obtained by the different elasticity theories are then derived. Finally, several numerical examples are presented, which indicate a practical difference between the solutions.
Multi-criteria decision analysis (MCDA) is an established methodology for supporting the decision-making of multi-objective problems. In most cases, conducting an MCDA requires a set of objectives (SOO), which consists of a hierarchical structure of objectives, criteria, and indicators. The development of an SOO is usually based on moderated development processes requiring high organizational and cognitive effort from all stakeholders involved. This article proposes elementary interactions as the key paradigm of an algorithm-driven development process for an SOO that requires little moderation effort. Elementary interactions are self-contained information requests that may be answered with little cognitive effort. The pairwise comparison of elements in the well-known analytic hierarchy process (AHP) is an example of an elementary interaction. Each elementary interaction in the presented development process contributes to the stepwise development of an SOO. Based on the hypothesis that an SOO may be developed exclusively using elementary interactions (EIs), a concept for a multi-user platform is proposed. Essential components of the platform are a Model Aggregator, an Elementary Interaction Stream Generator, a Participant Manager, and a Discussion Forum. While the latter component serves the professional exchange among participants, the first three components are intended to be automatable by algorithms. The proposed platform concept has been partly evaluated in an explorative validation study demonstrating the general functionality of the algorithms outlined. In summary, the suggested platform concept demonstrates the potential to ease SOO development processes: it does not restrict the application domain, it is intended to work with little administration and moderation effort, and it supports the further development of an existing SOO in the event of changes in external conditions.
The algorithm-driven development of SOOs proposed in this article may ease the development of MCDA applications and, thus, may have a positive effect on the spread of MCDA applications.
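As a concrete example of the elementary interaction mentioned above, a filled pairwise comparison matrix can be turned into priority weights. The sketch below uses the row geometric-mean method, one standard AHP prioritization technique; the matrix values are illustrative, not taken from the study.

```python
import numpy as np

# Pairwise comparison matrix on Saaty's 1-9 scale (illustrative values):
# A[i, j] answers the elementary interaction "how much more important
# is objective i than objective j?". Reciprocity: A[j, i] = 1 / A[i, j].
A = np.array([
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 2.0],
    [1/5, 1/2, 1.0],
])

# Row geometric means, normalized to priority weights summing to 1.
gm = np.prod(A, axis=1) ** (1.0 / A.shape[0])
weights = gm / gm.sum()
print(weights)  # descending: objective 0 dominates
```

The eigenvector method is the other common choice; for consistent matrices both yield the same weights.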
Acoustic travel-time TOMography (ATOM) allows the measurement and reconstruction of air temperature distributions. Due to limiting factors, such as the challenge of estimating the travel times of early reflections in the room impulse response, which depend heavily on the positions of the transducers inside the measurement area, ATOM has mainly been applied outdoors. To apply ATOM inside buildings, this paper presents a numerical method for optimizing the transducer positions. This optimization avoids reflection overlaps, leading to distinguishable travel times in the impulse-response reflectogram. To increase the accuracy of the measured temperature within the tomographic voxels, an additional objective is added to the proposed numerical method to minimize the number of sound-path-free voxels, ensuring the best sound-ray coverage of the room. Subsequently, an experimental set-up was used to verify the proposed numerical method. The results indicate the positive impact of the optimized transducer positions on the reconstructed ATOM temperature distributions.
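The measurement principle of ATOM reduces to a simple relation: the speed of sound in dry air is approximately c ≈ 20.05·√T m/s (T in kelvin), so a measured travel time over a known path length yields the path-averaged temperature. A minimal sketch with illustrative values:

```python
import math

# ATOM principle: air temperature from an acoustic travel time.
L = 5.0                 # known path length in m (illustrative)
T_true = 293.15         # 20 degrees C, used to synthesize a travel time

# Synthetic travel time from c = 20.05 * sqrt(T).
t = L / (20.05 * math.sqrt(T_true))

# Inversion: T = (L / (20.05 * t))^2.
T_est = (L / (20.05 * t)) ** 2
print(round(T_est - 273.15, 2))  # 20.0
```

Tomographic reconstruction combines many such path averages over crossing sound rays to resolve the spatial temperature distribution per voxel, which is why sound-ray coverage matters.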
Data acquisition systems and methods to capture high-resolution images or reconstruct 3D point clouds of existing structures are an effective way to document their as-is condition. These methods enable a detailed analysis of building surfaces, providing precise 3D representations. However, for condition assessment and documentation, damage is mainly annotated in 2D representations, such as images, orthophotos, or technical drawings, which do not allow for the application of a 3D workflow or automated comparisons of multitemporal datasets. The available software for building heritage data management and analysis offers a wide range of annotation and evaluation functions, but lacks integrated post-processing methods and systematic workflows. This article presents novel methods developed to facilitate such automated 3D workflows and validates them on a small historic church building in Thuringia, Germany. Post-processing steps using photogrammetric 3D reconstruction data along with imagery were implemented, demonstrating the possibilities of integrating 2D annotations into 3D documentation. Furthermore, the application of voxel-based methods to the dataset enables the evaluation of geometrical changes of multitemporal annotations in different states and their assignment to elements of scans or building models. The proposed workflow also highlights the potential of these methods for condition assessment and the planning of restoration work, as well as the possibility of representing the analysis results in standardised building model formats.
This dataset consists mainly of two subsets. The first subset includes measurement and simulation data used to validate the simulation tool ENVI-met. The measurements were conducted at the campus of the Bauhaus-University Weimar in Weimar, Germany and consisted of recording exterior air temperature, globe temperature, relative humidity, and wind velocity at 1.5 m at four points on four different days. After the measurements, the geometry of the campus was modelled and meshed; the simulations were conducted using the weather data of the measurement days with the aim of investigating the accuracy of the model.
The second subset consists of ENVI-met simulation data on the potential of facade greening to improve the outdoor environment and the indoor air temperature during heatwaves in Central European cities. The data consist of the boundary conditions and the simulation output of two simulation models: with and without facade greening. The geometry of the models corresponds to a residential district in Stuttgart, Germany. The simulation output consists of exterior air temperature, mean radiant temperature, relative humidity, and wind velocity at 12 different probe points in the model, in addition to the indoor air temperature of an exemplary building. The dataset presents both vertical profiles of the probed parameters and the time series output of the five-day simulation duration. Both data subsets correspond to the investigations presented in the co-submitted article [1].
It is widely accepted that most people spend the majority of their lives indoors. Most individuals do not realize that while indoors, roughly half of the heat exchange affecting their thermal comfort is in the form of thermal infrared radiation. We show that while researchers have been aware of its significance for thermal comfort over the past century, systemic error has crept into the most common evaluation techniques, preventing adequate characterization of the radiant environment. Measuring and characterizing radiant heat transfer is a critical component of both building energy efficiency and occupant thermal comfort and productivity. Globe thermometers are typically used to measure mean radiant temperature (MRT), a commonly used metric for accounting for the radiant effects of an environment at a point in space. In this paper we extend previous field work to a controlled laboratory setting to (1) rigorously demonstrate that the existing correction factors in the American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE) Standard 55 and ISO 7726 for quantifying MRT with globe thermometers are not sufficient; (2) develop a correction that improves the use of globe thermometers and addresses the problems in the current standards; and (3) show that mean radiant temperature measured with ping-pong-ball-sized globe thermometers is not reliable due to a stochastic convective bias. We also provide an analysis of the maximum precision of the globe sensors themselves, a piece missing from the contemporary literature in this domain.
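For reference, the forced-convection globe correction discussed above (the one whose adequacy the paper questions) can be written down directly. The sketch below follows the commonly cited ISO 7726 form for a standard 150 mm globe; treat the constants as the standard's, not the paper's proposed correction.

```python
# ISO 7726 forced-convection correction for a globe thermometer.
# t_g: globe temperature (deg C), t_a: air temperature (deg C),
# v_a: air speed (m/s), D: globe diameter (m), eps: globe emissivity.
def mean_radiant_temperature(t_g, t_a, v_a, D=0.15, eps=0.95):
    return ((t_g + 273.0) ** 4
            + 1.1e8 * v_a ** 0.6 * (t_g - t_a) / (eps * D ** 0.4)
            ) ** 0.25 - 273.0

# With still air and t_g == t_a, the MRT equals the globe temperature.
print(round(mean_radiant_temperature(25.0, 25.0, 0.0), 2))
```

The correction grows with air speed and shrinks with globe diameter (the D**0.4 term), which is why small ping-pong-ball globes amplify the convective bias the paper identifies.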
The seismic vulnerability assessment of existing reinforced concrete (RC) buildings is a significant input for disaster mitigation plans and rescue services. Different countries have developed various Rapid Visual Screening (RVS) techniques and methodologies to deal with the devastating consequences of earthquakes for the structural characteristics of buildings and for human casualties. Artificial intelligence (AI) methods, such as machine learning (ML) algorithms, are increasingly used in various scientific and technical applications. Investigations into using these techniques in civil engineering applications have shown encouraging results and reduced human intervention, including uncertainties and biased judgment. In this study, several well-known non-parametric algorithms are investigated for RVS using a dataset covering different earthquakes. Moreover, the methodology enables examining a building's vulnerability based on factors related to the building's importance and exposure. In addition, a web-based application built on Django is introduced. The interface is designed to ease seismic vulnerability investigation in real time. The concept was validated using two case studies, and the achieved results showed the proposed approach's potential efficiency.
In Germany, bridges have an average age of 40 years. A bridge consumes between 0.4% and 2% of its construction cost per year over its entire life cycle. This means that up to 80% of the construction cost is additionally needed for operation, inspection, maintenance, and demolition. Current practices rely either on paper-based inspections or on abstract specialist software. Every application in the inspection and maintenance sector uses its own data model for structures, inspections, defects, and maintenance. As a result, data and properties have to be transferred manually, or a converter is necessary for every data exchange between two applications. To overcome this issue, an adequate model standard for inspections, damage, and maintenance is necessary. Modern 3D models may serve as a single source of truth, as suggested by the Building Information Modeling (BIM) concept. Further, these models offer a clear visualization of the built infrastructure and improve not only the planning and construction phases but also the operation phase of construction projects. BIM is established mostly in the Architecture, Engineering, and Construction (AEC) sector for planning and constructing new buildings. Currently, BIM does not cover the whole life cycle of a building; in particular, it does not cover inspection and maintenance. Creating damage models requires the building model first, because a defect depends on the building component, its properties, and its material. Hence, a building information model is necessary to draw meaningful conclusions from damage information. This paper analyzes the requirements arising from practice and the research that has been done on modeling damage and related information for bridges. With a view to damage categories and use cases related to inspection and maintenance, the scientific literature is discussed and synthesized. Finally, research gaps and needs are identified and discussed.
Paper-based data acquisition and manual transfer between incompatible software or data formats during bridge inspections, as currently practiced, are time-consuming, error-prone, cumbersome, and lead to information loss. A fully digitized workflow using open data formats would reduce data loss, effort, and the costs of future inspections. On the one hand, existing studies have proposed methods to automate data acquisition and visualization for inspections; these studies lack an open standard to make the gathered data available to other processes. On the other hand, several studies discuss data structures for exchanging damage information among different stakeholders; however, those studies do not cover the process of automatic data acquisition and transfer. This study focuses on a framework that incorporates automatic damage data acquisition and transfer as well as a damage information model for data exchange. This enables inspectors to use damage data for subsequent analyses and simulations. The proposed framework shows the potential of a comprehensive damage information model and of related (semi-)automatic data acquisition and processing.
In this paper, we present an open-source code for the first-order and higher-order nonlocal operator method (NOM), including a detailed description of the implementation. The NOM is based on so-called support, dual-support, nonlocal operators, and an operator energy functional ensuring stability. The nonlocal operator is a generalization of the conventional differential operators. Combined with the method of weighted residuals and variational principles, the NOM establishes the residual and the tangent stiffness matrix of the operator energy functional through simple matrix operations, without the need for shape functions as in other classical computational methods such as the FEM. The NOM only requires the definition of the energy, which drastically simplifies its implementation. The implementation in this paper focuses on linear elastic solids for the sake of conciseness, though the NOM can handle more complex nonlinear problems. The NOM is flexible and efficient for solving partial differential equations (PDEs), and it is easy for readers to use the NOM and extend it to solve other complicated physical phenomena described by one or a set of PDEs. Finally, we present some classical benchmark problems, including the cantilever beam and the plate-with-a-hole problem, and we also extend the method to complicated problems, including phase-field fracture modeling and gradient elasticity materials.
Carrier-bound titanium dioxide catalysts were used in a photocatalytic ozonation reactor for the degradation of micro-pollutants in real wastewater. A photocatalytic immersion rotary-body reactor with a disk diameter of 36 cm was used and irradiated with UV-A light-emitting diodes. The rotating disks were covered with catalysts based on stainless-steel grids coated with titanium dioxide. Ozone was dosed through the liquid phase via an external enrichment and supply system transverse to the flow direction. The influence of irradiation power and ozone dose on the degradation rate of photocatalytic ozonation was investigated. In addition, the performance of the individual processes, photocatalysis and ozonation, was studied. The degradation kinetics of the parent compounds were determined using liquid chromatography tandem mass spectrometry. First-order kinetics were determined for photocatalysis and photocatalytic ozonation. A maximum reaction rate of the reactor was determined, which could be achieved by both photocatalysis and photocatalytic ozonation. At an ozone dosage of 0.4 mg/mg DOC, the maximum reaction rate could be achieved using 75% of the irradiation power required for photocatalysis alone, allowing increases in the energy efficiency of photocatalytic wastewater treatment processes. The process of photocatalytic ozonation is suitable for removing a wide spectrum of micro-pollutants from wastewater.
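The first-order kinetics mentioned above correspond to ln(C/C0) = -kt, so the rate constant follows from a linear fit of log-concentration over time. A minimal sketch with synthetic data (k_true = 0.05 per minute, not values from the study):

```python
import numpy as np

# Synthetic first-order decay data: C(t) = C0 * exp(-k t), k = 0.05 / min.
t = np.array([0.0, 10.0, 20.0, 30.0, 40.0])   # minutes
C = 1.0 * np.exp(-0.05 * t)                   # concentrations

# Rate constant from a linear fit of ln(C/C0) = -k t.
k = -np.polyfit(t, np.log(C / C[0]), 1)[0]
print(round(k, 4))  # 0.05
```

With measured data, the goodness of the linear fit is also a quick check that the degradation really follows first-order kinetics.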
In this study, we propose a nonlocal operator method (NOM) for the dynamic analysis of (thin) Kirchhoff plates. The nonlocal Hessian operator is derived based on a second-order Taylor series expansion. The NOM does not require any shape functions or associated derivatives, as 'classical' approaches such as the FEM do, which drastically facilitates the implementation. Furthermore, the NOM is higher-order continuous, which is exploited for thin plate analysis requiring C1 continuity. The nonlocal dynamic governing formulation and the operator energy functional for Kirchhoff plates are derived from a variational principle. The velocity Verlet algorithm is used for the time discretization. After confirming the accuracy of the nonlocal Hessian operator, several numerical examples are simulated with the nonlocal dynamic Kirchhoff plate formulation.
We present a stochastic deep collocation method (DCM) based on neural architecture search (NAS) and transfer learning for heterogeneous porous media. We first carry out a sensitivity analysis to determine the key hyper-parameters of the network in order to reduce the search space, and subsequently employ hyper-parameter optimization to obtain the final parameter values. The presented NAS-based DCM also saves the weights and biases of the most favorable architectures, which are then used in the fine-tuning process. We further employ transfer learning techniques to drastically reduce the computational cost. The presented DCM is then applied to the stochastic analysis of heterogeneous porous material. To this end, a three-dimensional stochastic flow model is built, providing a benchmark for the simulation of groundwater flow in highly heterogeneous aquifers. The performance of the presented NAS-based DCM is verified in different dimensions using the method of manufactured solutions. We show that it significantly outperforms finite difference methods in both accuracy and computational cost.
This paper presents a numerical analysis of the discrete fundamental solution of the discrete Laplace operator on a rectangular lattice. Additionally, to provide estimates in interior and exterior domains, two different regularisations of the discrete fundamental solution are considered. Estimates for the absolute difference and l_p-estimates are constructed for both regularisations. Thus, this work extends the classical results of discrete potential theory to the case of a rectangular lattice and serves as a basis for a future convergence analysis of the method of discrete potentials on rectangular lattices.
The spread of breathing air when playing wind instruments and singing was investigated and visualized using two methods: (1) schlieren imaging with a schlieren mirror and (2) background-oriented schlieren (BOS). These methods visualize airflow via the density gradients in transparent media. The playing of professional woodwind and brass instrument players, as well as professionally trained classical singers, was investigated to estimate the spread distances of the breathing air. For a better comparison and consistent measurement series, a single high note, a single low note, and an extract of a musical piece were investigated. Additionally, anemometry was used to determine the velocity of the spreading breathing air and the extent to which it is quantifiable. The results showed that the ejected airflow from the examined instruments and singers did not exceed a spreading range of 1.2 m into the room. However, the differences between the various instruments have to be considered to properly assess the spread of the breathing air. These findings help to estimate the risk of cross-infection for wind instrument players and singers and to develop efficacious safety precautions, which is essential during critical health periods such as the current COVID-19 pandemic.
In the wake of the news industry's digitization, novel organizations that differ considerably from traditional media firms in terms of their functional roles and organizational practices of media work are emerging. One new type is the field repair organization, which is characterized by supporting high-quality media work to compensate for the deficits (such as those resulting from cost savings and layoffs) that have become apparent in legacy media today. From a practice-theoretical research perspective and based on semi-structured interviews, virtual field observations, and document analysis, we have conducted a single case study on Science Media Center Germany (SMC), a unique non-profit news start-up launched in 2016 in Cologne, Germany. Our findings show that, in addition to field repair activities, SMC aims to facilitate progress and innovation in the field, which we refer to as field advancement. This helps to uncover emerging needs and anticipate problems before they intensify or even occur, proactively providing products and tools for future journalism. This article contributes to our understanding of novel media organizations with distinct functions in the news industry, allowing for advancements in theory on media work and the organization of journalism in times of digital upheaval.
According to Eurocode, the computation of bending strength for steel cantilever beams is a straightforward process. The approach is based on an Ayrton-Perry formula adaptation of buckling curves for steel members in compression, which involves the computation of an elastic critical buckling load to account for the instability. NCCI documents offer a simplified formula to determine the critical bending moment for cantilever beams with symmetric cross-section. Besides the NCCI recommendations, other approaches, e.g. research literature or finite element analysis, may be employed to determine critical buckling loads. However, in certain cases they render different results. The present paper summarizes and compares the abovementioned analytical and numerical approaches for determining critical loads, and it exemplarily analyses the corresponding cantilever beam capacities using numerical approaches based on plastic zone theory (GMNIA).
Global structural analyses in civil engineering are usually performed assuming linear-elastic material behavior. However, for steel structures, a certain degree of plasticization may be considered depending on the member classification. Corresponding plastic analyses taking material nonlinearities into account are effectively realized using numerical methods. Frequently applied finite elements of two- and three-dimensional models evaluate the plasticity at defined nodes using a yield surface, i.e. by a yield condition, hardening rule, and flow rule. Such calculations involve considerable numerical effort and computation time, and they do not rely on the theoretical background of beam theory, to which the regulations of standards mainly correspond. For that reason, methods using one-dimensional beam elements combined with cross-sectional analyses are commonly applied to steel members in terms of plastic zone theory. In these approaches, plasticization is generally assessed by means of axial stress only. In this paper, a more precise numerical representation of combined stress states, i.e. axial and shear stresses, is presented, and the results of the proposed approach are validated and discussed.
In critical urban studies, the thesis of the post-democratic city is currently being taken up time and again and closely linked to processes of neoliberalization. Starting from a critical discussion of the conceptual approaches of Colin Crouch and Jacques Rancière, this contribution examines the substance of the two conceptualizations in a concrete historical analysis, based on the history of municipal self-government in Frankfurt am Main. In doing so, it points to the different analytical depth of the two concepts. Contrary to the assumption prevailing in Crouch's work that a democratic form of urban governance existed before the neoliberal city, the article draws on Rancière's argument on democracy to emphasize that Fordism can by no means be characterized as more egalitarian, inclusive, or democratic. Rather, we argue that the Fordist city, albeit for different reasons, was fundamentally no less post-democratic than the neoliberal city of the present, and that democratic moments are most likely to be found in the ruptures and fissures of the social conflicts of the 1970s and 1980s.
In essayistic form, the text follows a walk through the political center of Brasília, Brazil. The focus lies on the design of the ground plane. How is the "drawing-board" capital designed in the horizontal dimension? What do the representative squares of a city look like that was built primarily for cars? The investigative gaze rests on the experienced current state, which is reflected associatively against findings from the author's research in Germany. "Mächtiger Boden" ("Powerful Ground") was written as a satellite to the author's current research during a stay in Brazil.
A realistic uncertainty description incorporating aleatoric and epistemic uncertainties can be formulated within the framework of polymorphic uncertainty, which is computationally demanding. Utilizing a domain decomposition approach for random-field-based uncertainty models, the proposed level-based sampling method can reduce these computational costs significantly and shows good agreement with a standard sampling technique. While 2-level configurations tend to become unstable with decreasing sampling density, 3-level setups show encouraging results for the investigated reliability analysis of a structural unit square.
Bolted connections are commonly used in steel construction. The load-bearing behavior of bolted fittings has been studied extensively in various research activities, and for practical applications the bearing capacity of bolted connections can be assessed well by standard regulations. With regard to tensile loading, the nut does not have a strong influence on the resistance, since failure occurs in the bolt due to the higher material strength of the nut. In some applications, so-called "blind holes" are used to connect plated components. In a manner of speaking, the nut is replaced by the "outer" plate with a prefabricated hole and thread, into which the bolt can be screwed and tightened. In such connections, the limit load capacity cannot be assessed from the bolt resistance alone, since the threaded hole in the base material strongly influences the structural behavior. In this context, the available screw-in depth of the blind hole is of fundamental importance. The German National Annex of EN 1993-1-8 provides information on the depth necessary to transfer the full tensile capacity of the bolt. However, some connections do not allow such depths to be fabricated. In these cases, the capacity of the connection is unclear and not specified. In this paper, first experiments on corresponding connections with different screw-in depths are presented and compared to limit load capacities according to the standard.
Polylactic acid (PLA) is a highly applicable material in 3D printing due to significant features such as its deformability and affordable cost. To improve end-use quality, it is important to enhance the quality of fused filament fabrication (FFF)-printed PLA objects. The purpose of this investigation was to increase the toughness and reduce the production cost of FFF-printed tensile test samples with the desired part thickness. To avoid printing numerous redundant samples, the response surface method (RSM) was used. A statistical analysis was performed with extruder temperature (ET), infill percentage (IP), and layer thickness (LT) as controlled factors. An artificial neural network (ANN) and an ANN-genetic algorithm hybrid (ANN-GA) were further developed to estimate the dependent variables toughness, part thickness, and production cost. Results were evaluated by correlation coefficient and RMSE values. According to the modeling results, ANN-GA as a hybrid machine learning (ML) technique improved the modeling accuracy by about 7.5, 11.5, and 4.5% for toughness, part thickness, and production cost, respectively, in comparison with the single ANN method. The optimization results further confirm that the optimized specimen is cost-effective and able to undergo comparatively large deformation, which supports the usability of printed PLA objects.
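The genetic-algorithm half of an ANN-GA hybrid like the one described above can be sketched as follows. The `fitness` surrogate (standing in for the trained ANN), the process bounds, and all GA settings below are invented for illustration and are not the study's calibrated models or process window.

```python
import random

# Illustrative process window for the three controlled factors.
BOUNDS = {"ET": (190.0, 230.0), "IP": (20.0, 100.0), "LT": (0.1, 0.4)}

def fitness(ind):
    # Fictitious surrogate for the trained ANN predicting toughness;
    # peaks at ET=215, IP=80, LT=0.2 (maximum value 0).
    et, ip, lt = ind["ET"], ind["IP"], ind["LT"]
    return -((et - 215) / 40) ** 2 - ((ip - 80) / 80) ** 2 - ((lt - 0.2) / 0.3) ** 2

def evolve(pop_size=40, generations=60, seed=1):
    rng = random.Random(seed)
    rand_ind = lambda: {k: rng.uniform(*b) for k, b in BOUNDS.items()}
    pop = [rand_ind() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]          # elitist selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            child = {k: rng.choice((a[k], b[k])) for k in BOUNDS}   # crossover
            k = rng.choice(list(BOUNDS))                            # mutate one gene
            lo, hi = BOUNDS[k]
            child[k] = min(hi, max(lo, child[k] + rng.gauss(0, 0.05 * (hi - lo))))
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
```

In the real workflow, the GA would search over the ANN's inputs (and, in some ANN-GA variants, its weights); the sketch only shows the select/crossover/mutate loop on the three factors.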
This dataset presents the numerical analysis of the heat and moisture transport through a facade equipped with a living wall system designated for greywater treatment. While such greening systems provide many environmental benefits, they involve pumping large quantities of water onto the wall assembly, which can increase the risk of moisture in the wall as well as impaired energetic performance due to increased thermal conductivity with increased moisture content in the building materials. This dataset was acquired through numerical simulation using the coupling of two simulation tools, namely Envi-Met and Delphin. This coupling was used to include the complex role the plants play in shaping the near-wall environmental parameters in the hygrothermal simulations. Four different wall assemblies were investigated, each assembly was assessed twice: with and without the living wall. The presented data include the input and output parameters of the simulations, which were presented in the co-submitted article [1].
This study demonstrates the application and combination of multiple imaging techniques [light microscopy, micro-X-ray computed tomography (μ-CT), scanning electron microscopy (SEM), and focused ion beam nano-tomography (FIB-nT)] for analysing the microstructure of hydrated alite across multiple scales. However, comparison with mercury intrusion porosimetry (MIP) makes it obvious that the imaged 3D volumes and 2D images do not sufficiently overlap at certain scales to allow a continuous quantification of the pore size distribution (PSD). This can be overcome by improving the resolution and increasing the measured volume. Furthermore, the results show that the fibrous morphology of the calcium silicate hydrate (C-S-H) phases is preserved during FIB-nT, which is a requirement for the characterisation of nano-scale porosity. Finally, it was shown that combining FIB-nT with energy-dispersive X-ray spectroscopy (EDX) data facilitates the phase segmentation of an 11 × 11 × 7.7 μm³ volume of hydrated alite.
Clinker burning is the step with the greatest influence on cement quality during the production process. Appropriate characterisation for quality control and decision-making is therefore critical for maintaining stable production, but also for the development of alternative cements. Scanning electron microscopy (SEM) in combination with energy-dispersive X-ray spectroscopy (EDX) delivers spatially resolved phase and chemical information for cement clinker. These data can be used to quantify phase fractions and the chemical composition of the identified phases.
The contribution aims to provide an overview of phase fraction quantification by semi-automatic phase segmentation using high-resolution backscattered electron (BSE) images and lower-resolution EDX element maps. To this end, an image analysis tool was developed that uses state-of-the-art algorithms for pixel-wise image segmentation and labelling in combination with a decision tree that allows searching for specific clinker phases. Results show that this tool can be applied to segment sub-micron-scale clinker phases and to obtain a quantification of all phase fractions. In addition, a statistical evaluation of the data is implemented within the tool to reveal whether the imaged area is representative of all clinker phases.
Bauhaus visiting professor Mirjam Wenzel lectured on 30 June 2021 in the Audimax of the Bauhaus-Universität Weimar on the genesis and conception of Jewish museums. She addressed the extent to which these museums are particularly relevant to current social and political questions. Prof. Wenzel's second public lecture at the Bauhaus-Universität Weimar outlined the potential of cultural institutions in times of socio-political change in general, and the significance of Jewish museums in the face of verbal and physical violence against Jews in particular.
In this article, I show why it is necessary to abolish the use of predictive algorithms in the US criminal justice system at sentencing. After presenting the functioning of these algorithms in their context of emergence, I offer three arguments to demonstrate why their abolition is imperative. First, I show that sentencing based on predictive algorithms induces a process of rewriting the temporality of the judged individual, flattening their life into a present inescapably doomed by its past. Second, I demonstrate that recursive processes, comprising predictive algorithms and the decisions based on their predictions, systematically suppress outliers and progressively transform reality to match predictions. In my third and final argument, I show that decisions made on the basis of predictive algorithms actively perform a biopolitical understanding of justice as management and modulation of risks. In such a framework, justice becomes a means to maintain a perverse social homeostasis that systematically exposes disenfranchised Black and Brown populations to risk.
Marine macroalgae such as Ulva intestinalis have promising properties as feedstock for cosmetics and pharmaceuticals. However, since the quantity and quality of naturally grown algae vary widely, their exploitability is reduced, especially for producers in high-priced markets. Moreover, the expansion of marine or shore-based cultivation systems is unlikely in Europe, since promising sites lie in fishing zones, recreational areas, or nature reserves. The aim was therefore to develop a closed photobioreactor system enabling full control of abiotic environmental parameters and effective reconditioning of the cultivation medium in order to produce marine macroalgae at sites distant from the shore. To assess the feasibility and functionality of the chosen technological concept, a prototype plant was implemented in central Germany, a site distant from the sea. Using a newly developed submersible LED light source, cultivation experiments with Ulva intestinalis led to growth rates of 7.72 ± 0.04 % day⁻¹ in a cultivation cycle of 28 days. Based on the space demand of the production system, this results in a fresh-mass productivity of 3.0 kg m⁻², or 1.1 kg m⁻² per year. Considering also the ratio of biomass to energy input, amounting to 2.76 g kWh⁻¹, significant future improvements of the developed photobioreactor system should include the optimization of growth parameters and the reduction of the system's overall energy demand.
Scaling of concrete due to salt frost attack is an important durability issue in moderate and cold climates. The actual damage mechanism is still not completely understood. Two recent damage theories—the glue spall theory and the cryogenic suction theory—offer plausible but conflicting explanations for the salt frost scaling mechanism. The present study deals with the cryogenic suction theory, which assumes that freezing concrete can take up unfrozen brine from a partly frozen deicing solution during salt frost attack. According to the model hypothesis, the resulting saturation of the concrete surface layer intensifies the ice formation in this layer and causes salt frost scaling. In this study, an experimental technique was developed that makes it possible to quantify to what extent brine uptake can increase ice formation in hardened cement paste (used as a model material for concrete). The experiments were carried out with low-temperature differential scanning calorimetry, where specimens were subjected to freeze–thaw cycles while in contact with NaCl brine. Results showed that the ice content in the specimens increased with subsequent freeze–thaw cycles due to brine uptake at temperatures below 0 °C. At the same time, the ability of the hardened cement paste to bind chlorides from the absorbed brine affected the freezing/melting behavior of the pore solution and the magnitude of the ice content.
The derivation of nonlocal strong forms for many physical problems remains cumbersome in traditional methods. In this paper, we apply the variational principle/weighted residual method based on the nonlocal operator method to derive nonlocal forms for elasticity, thin plates, gradient elasticity, electro-magneto-elasticity, and the phase-field fracture method. The nonlocal governing equations are expressed as an integral form on the support and dual-support. The first example shows that nonlocal elasticity has the same form as dual-horizon non-ordinary state-based peridynamics. The derivation is simple and general, and it can efficiently convert many local physical models into their corresponding nonlocal forms. In addition, a criterion based on the instability of the nonlocal gradient is proposed for fracture modelling in linear elasticity. Several numerical examples are presented to validate the nonlocal elasticity and nonlocal thin plate formulations.
Within the scope of the literature, the influence of openings in infill walls that are bounded by a reinforced concrete frame and excited by seismic drift forces in both the in-plane and out-of-plane direction is still uncharted. Therefore, a 3D micromodel was developed and subsequently calibrated to gain more insight into the topic. The micromodels were calibrated against their equivalent physical test specimens: in-plane and out-of-plane drift-driven tests on frames with and without infill walls and openings, as well as out-of-plane bending tests of masonry walls. The micromodels were rectified based on their behavior and damage states. As a result of the calibration process, it was found that the micromodels were sensitive to some parameters and insensitive to others with regard to the model's behavior and computational stability. Even within the same material model, some parameters had more effect when attributed to concrete rather than to masonry. Generally, the in-plane behavior of infilled frames was found to be largely governed by the interface material model. The out-of-plane masonry wall simulations were governed by the tensile strength of both the interface and the masonry material model, whereas the out-of-plane drift-driven test was governed by the concrete material properties.
Antimicrobial resistance (AMR) is identified by the World Health Organization (WHO) as one of the top ten threats to public health worldwide. In addition to public health, AMR also poses a major threat to food security and economic development. Current sanitation systems contribute to the emergence and spread of AMR and lack effective AMR mitigation measures. This study assesses source separation of blackwater as a mitigation measure against AMR. A source-separation-modified combined sanitation system with separate collection of blackwater and graywater is conceptually described. Measures taken at the source, such as the separate collection and discharge of material flows, have so far not been considered on a load balance basis, i.e., they have not yet been evaluated for their effectiveness. The sanitation system described is compared with a combined system and a separate system regarding AMR emissions by means of simulation. AMR is represented in the simulation model by one proxy parameter each for antibiotics (sulfamethoxazole), antibiotic-resistant bacteria (extended-spectrum beta-lactamase E. coli), and antibiotic resistance genes (blaTEM). The simulation results suggest that the source-separation-based sanitation system reduces emissions of antibiotic-resistant bacteria and antibiotic resistance genes into the aquatic environment by more than six orders of magnitude compared to combined systems. Sulfamethoxazole emissions can be reduced by 75.5% by keeping blackwater separate from graywater and treating it sufficiently. In summary, sanitation systems incorporating source separation are, to date, among the most effective means of preventing the emission of AMR into the aquatic environment.
Electric trains are considered one of the most eco-friendly and safest means of transportation. Catenary poles are used worldwide to support overhead power lines for electric trains. The performance of the catenary poles has an extensive influence on the integrity of the train systems and, consequently, the connected human services. Developing structural health monitoring (SHM) systems that provide the instantaneous status of catenary poles in service has thus become essential, making decisions on keeping or repairing damaged poles more feasible. This study develops a data-driven, model-free approach for status monitoring of cantilever structures, focusing on pre-stressed, spun-cast ultrahigh-strength concrete catenary poles installed along high-speed train tracks. The proposed approach combines multiple damage features into a unified damage index, which leads to straightforward interpretation and comparison of the output. Besides, it distinguishes between multiple damage scenarios of the poles, whether caused by material degradation of the concrete or by cracks that may propagate during the life span of the structure. Moreover, using a logistic function to classify the integrity of the structure avoids the expensive learning step of existing damage detection approaches, namely modern machine and deep learning methods. The findings of this study look promising for application to other types of cantilever structures, such as poles supporting power transmission lines, antenna masts, chimneys, and wind turbines.
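The idea of aggregating several damage features into one index and classifying it with a logistic function, as described in the abstract above, can be sketched as follows. The feature names, weights, and logistic parameters are invented for illustration; they are not the study's calibrated values.

```python
import math

def unified_damage_index(features, weights):
    # Weighted aggregate of normalized damage features (each in [0, 1]).
    return sum(w * f for f, w in zip(features, weights)) / sum(weights)

def damage_probability(index, midpoint=0.5, steepness=12.0):
    # Logistic mapping: close to 0 for healthy poles, close to 1 for damaged ones.
    return 1.0 / (1.0 + math.exp(-steepness * (index - midpoint)))

# Hypothetical feature vectors, e.g. frequency shift, MAC drop, curvature change.
healthy = [0.05, 0.10, 0.08]
damaged = [0.70, 0.55, 0.80]
weights = [1.0, 0.8, 1.2]

p_h = damage_probability(unified_damage_index(healthy, weights))
p_d = damage_probability(unified_damage_index(damaged, weights))
```

No training step is needed: once the index and the two logistic parameters are fixed, classification is a single closed-form evaluation, which is the cost advantage the abstract contrasts with learned classifiers.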
Personalized ventilation (PV) is a means of delivering conditioned outdoor air into the breathing zone of the occupants. This study aims to qualitatively investigate the personalized flows using two methods of visualization: (1) schlieren imaging using a large schlieren mirror and (2) thermography using an infrared camera. While the schlieren imaging was used to render the velocity and mass transport of the supplied flow, thermography was implemented to visualize the air temperature distribution induced by the PV. Both studies were conducted using a thermal manikin to simulate an occupant facing a PV outlet. As a reference, the flow supplied by an axial fan and a cased axial fan was also visualized with the schlieren system and compared to the flow supplied by PV. Schlieren visualization results indicate that the steady, low-turbulence flow supplied by PV was able to penetrate the thermal convective boundary layer encasing the manikin's body, providing clean air for inhalation. In contrast, the axial fan diffused the supplied air over a large target area with high turbulence intensity; it only disturbed the convective boundary layer rather than destroying it. The cased fan supplied a flow with a reduced target area, which allowed more air to be supplied into the breathing zone compared to the fan. The results of thermography visualization showed that the supplied cool air from PV penetrated the corona-shaped thermal boundary layer. Furthermore, the supplied air cooled the surface temperature of the face, which indicates the large impact of PV on local thermal sensation and comfort.
A vast number of existing buildings were constructed before the development and enforcement of seismic design codes and run the risk of being severely damaged under seismic excitation. This poses not only a threat to human life but also affects the socio-economic stability of the affected area. Therefore, it is necessary to assess the present vulnerability of such buildings to make an educated decision regarding risk mitigation by seismic strengthening techniques such as retrofitting. However, it is not feasible, in terms of cost or time, to inspect, repair, and augment every old building on an urban scale. As a result, reliable rapid screening methods, such as Rapid Visual Screening (RVS), have garnered increasing interest among researchers and decision-makers alike. In this study, the effectiveness of five different machine learning (ML) techniques in vulnerability prediction applications has been investigated. Damage data from four different earthquakes, in Ecuador, Haiti, Nepal, and South Korea, have been utilized to train and test the developed models. Eight performance modifiers have been implemented as variables with supervised ML. The investigations in this paper illustrate that the vulnerability classes assessed by the ML techniques were very close to the actual damage levels observed in the buildings.
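The supervised-learning setup the abstract describes — predicting a damage class from building performance modifiers — can be sketched with a minimal classifier. The k-nearest-neighbour model, the modifier vectors, and the damage labels below are toy stand-ins for illustration; they are not the study's trained models or earthquake data.

```python
import math
from collections import Counter

# Hypothetical training set: (performance modifiers, damage class).
# Modifiers here: [storeys, soft storey (0/1), plan irregularity (0/1), age].
train = [
    ([2, 0, 0, 10], "slight"),
    ([3, 0, 1, 25], "slight"),
    ([5, 1, 1, 40], "severe"),
    ([6, 1, 0, 55], "severe"),
]

def predict(x, k=3):
    # Classify by majority vote among the k nearest training buildings.
    nearest = sorted(train, key=lambda t: math.dist(t[0], x))[:k]
    return Counter(label for _, label in nearest).most_common(1)[0][0]

print(predict([5, 1, 1, 45]))  # → severe
```

In practice the features would be normalized and the model chosen among the five ML techniques by cross-validation; the sketch only shows the input/output shape of such a screening model.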
The growing complexity of modern practical problems puts high demands on mathematical modelling. Given that various models can be used to model one physical phenomenon, the role of model comparison and model choice is becoming particularly important. Methods for model comparison and model choice typically used in practical applications nowadays are computation-based, and thus time-consuming and computationally costly. Therefore, it is necessary to develop other approaches for working abstractly, i.e., without computations, with mathematical models. An abstract description of mathematical models can be achieved with the help of abstract mathematics, implying a formalisation of models and the relations between them. In this paper, a category-theory-based approach to mathematical modelling is proposed. In this approach, mathematical models are formalised in the language of categories, relations between the models are formally defined, and several practically relevant properties are introduced on the level of categories. Finally, an illustrative example is presented, underlining how the category-theory-based approach can be used in practice. Furthermore, all constructions presented in this paper are also discussed from a modelling point of view by making explicit the link to concrete modelling scenarios.
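The basic ingredients of such a formalisation — models as objects, model relations as morphisms, with identities and composition — can be made concrete in a toy sketch. The "models" below are just labelled placeholders, not the paper's constructions.

```python
class Morphism:
    """A relation between two models, e.g. a model reduction map."""
    def __init__(self, src, dst, name):
        self.src, self.dst, self.name = src, dst, name

    def __matmul__(self, other):
        # Composition self ∘ other, defined only when the codomain of
        # `other` matches the domain of `self`.
        assert other.dst == self.src, "morphisms not composable"
        return Morphism(other.src, self.dst, f"{self.name}∘{other.name}")

def identity(obj):
    return Morphism(obj, obj, "id")

# Objects: a fine continuum model, a reduced beam model, a surrogate model.
f = Morphism("3D-continuum", "beam", "reduce")
g = Morphism("beam", "surrogate", "approximate")

h = g @ f
print(h.name)  # → approximate∘reduce

# Identity law: composing with the identity changes nothing structurally.
assert (f @ identity(f.src)).dst == f.dst
```

The point of the categorical view is that statements like "model B refines model A" become checkable structural facts about morphisms, independent of any numerical computation.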
The contribution explores the migratory situation in the Balkans, and more specifically in the so-called Refugee District in Belgrade, from a spatial perspective. By visualizing the areas of tension in the Refugee District, the city of Belgrade, Serbia, and Europe, it aims to disentangle the political and socio-spatial levels that lead to the stuck situation of in-betweenness at the gates of the European Union.
As a result of international refugee movements along the so-called Balkan route, a so-called Refugee District has emerged in Serbia's capital Belgrade in recent years. In the context of migration and flight, numerous fields of tension become visible at different spatial and political levels. For refugees, these create a situation marked by standstill, hopelessness, control, danger, and displacement. However, the complexity and diversity of the various actors who exert influence on the situation of refugees on the Balkan route also give rise to niches, forms of resistance, and the possibility of (new) alliances. In this way, a collective practice of non-movement emerges, in resistance against oppression and for global freedom of movement.
In this paper we present the theoretical background for a coupled analytical–numerical approach to model crack propagation in two-dimensional bounded domains. The goal of the coupled analytical–numerical approach is to obtain the correct solution behaviour near the crack tip with the help of an analytical solution constructed using tools of complex function theory, and to couple it continuously with the finite element solution in the region far from the singularity. In this way, crack propagation can be modelled without remeshing. Possible directions of crack growth can be calculated through the minimization of the total energy, composed of the potential energy and the dissipated energy based on the energy release rate. Within this setting, an analytical solution of a mixed boundary value problem based on complex analysis and conformal mapping techniques is presented for a circular region containing an arbitrary crack path. More precisely, the linear elastic problem is transformed into a Riemann–Hilbert problem in the unit disk for holomorphic functions. Utilising the advantages of the analytical solution in the region near the crack tip, the total energy can be evaluated within short computation times for various crack kink angles and lengths, leading to a potentially efficient way of computing the minimization procedure. To this end, the paper presents a general strategy of the new coupled approach for crack propagation modelling. Additionally, we also discuss obstacles in the way of the practical realisation of this strategy.
In this work, extensive reactive molecular dynamics simulations are conducted to analyze nanopore creation by nanoparticle impact on single-layer molybdenum disulfide (MoS2) with 1T and 2H phases. We also compare the results with a graphene monolayer. In our simulations, the nanosheets are exposed to a spherical rigid carbon projectile with high initial velocities ranging from 2 to 23 km/s. Results for the three different structures are compared to identify the most critical factors in the perforation and resistance force during impact. To analyze the perforation and impact resistance, the kinetic energy and displacement time history of the projectile, as well as the perforation resistance force, are investigated.
Interestingly, although the elastic modulus and tensile strength of graphene are almost five times higher than those of MoS2, the results demonstrate that the 1T and 2H MoS2 phases are more resistant to impact loading and perforation than graphene. For the MoS2 nanosheets, we find that the 2H phase is more resistant to impact loading than its 1T counterpart.
Our reactive molecular dynamics results highlight that, in addition to strength and toughness, the atomic structure is another crucial factor that can contribute substantially to the impact resistance of 2D materials. The obtained results can be useful to guide experimental setups for nanopore creation in MoS2 or other 2D lattices.
Discrete function theory in the higher-dimensional setting has been in active development for many years. However, the available results focus on studying the discrete setting for such canonical domains as the half-space, while the case of bounded domains has generally remained unconsidered. Therefore, this paper presents the extension of the higher-dimensional function theory to the case of arbitrary bounded domains in Rn. In this way, a discrete Stokes formula, a discrete Borel–Pompeiu formula, as well as discrete Hardy spaces for general bounded domains are constructed. Finally, several discrete Hilbert problems are considered.
Conventional superplasticizers based on polycarboxylate ether (PCE) show an intolerance to clay minerals due to intercalation of their polyethylene glycol (PEG) side chains into the interlayers of the clay mineral. An intolerance to very basic media is also known. This makes PCE an unsuitable choice as a superplasticizer for geopolymers. Bio-based superplasticizers derived from starch showed comparable effects to PCE in a cementitious system. The aim of the present study was to determine if starch superplasticizers (SSPs) could be a suitable additive for geopolymers by carrying out basic investigations with respect to slump, hardening, compressive and flexural strength, shrinkage, and porosity. Four SSPs were synthesized, differing in charge polarity and specific charge density. Two conventional PCE superplasticizers, differing in terms of molecular structure, were also included in this study. The results revealed that SSPs improved the slump of a metakaolin-based geopolymer (MK-geopolymer) mortar while the PCE investigated showed no improvement. The impact of superplasticizers on early hardening (up to 72 h) was negligible. Less linear shrinkage over the course of 56 days was seen for all samples in comparison with the reference. Compressive strengths of SSP specimens tested after 7 and 28 days of curing were comparable to the reference, while PCE led to a decline. The SSPs had a small impact on porosity with a shift to the formation of more gel pores while PCE caused an increase in porosity. Throughout this research, SSPs were identified as promising superplasticizers for MK-geopolymer mortar and concrete.
The amount of adsorbed styrene acrylate copolymer (SA) particles on cementitious surfaces at the early stage of hydration was quantitatively determined using three different methodological approaches: the depletion method, visible spectrophotometry (VIS), and thermogravimetry coupled with mass spectrometry (TG–MS). Considering the advantages and disadvantages of each method, including the respective sample preparation required, the results for four polymer-modified cement pastes, varying in polymer content and cement fineness, were evaluated.
Significant discrepancies in the adsorption degrees were observed in some cases. Considerably lower amounts of adsorbed polymer tended to be identified using TG–MS compared to values determined with the depletion method, while spectrophotometrically generated values lay between these extremes. This tendency was found for three of the four cement pastes examined and originates in sample preparation and methodological limitations.
The main influencing factor is the distortion of the polymer concentration in the liquid phase during centrifugation, caused by interactions at the interface between sediment and supernatant. The newly developed method using TG–MS for the quantification of SA particles proved suitable for dealing with these issues: instead of the fluid phase, the sediment is examined with regard to its polymer content, on which the influence of centrifugation is considerably lower.
When it comes to the monitoring of large structures, the main issues are limited time, high costs, and handling the large amount of data. Methods from the field of optimal design of experiments are useful and supportive in reducing and managing them. Having optimal experimental designs at hand before conducting any measurements leads to a highly informative measurement concept in which the sensor positions are optimized with respect to minimal errors in the structure's model. To reduce computational time, a combined two-step approach using the Fisher information matrix and the mean-squared error is proposed under consideration of different error types. The error descriptions contain random/aleatoric and systematic/epistemic portions. Applying this combined approach to a finite element model using artificial acceleration time measurement data with artificially added errors leads to the optimized sensor positions. These findings are compared to results from laboratory experiments on the modeled structure, a tower-like structure represented by a hollow pipe acting as a cantilever beam. Conclusively, the combined approach yields a sound experimental design that provides a good estimate of the structure's behavior and model parameters without the need for preliminary measurements for model updating.
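The core of the sensor-placement idea can be sketched in a few lines. The following toy example (with made-up mode shapes and noise assumptions, not the paper's finite element model) selects the pair of candidate positions that maximizes the determinant of the Fisher information matrix, i.e., the D-optimality criterion:

```python
import math
from itertools import combinations

# Hypothetical mode shapes of a cantilever evaluated at normalized
# positions x in (0, 1]; purely illustrative, not the paper's model.
def mode_shapes(x):
    return [x ** 2, math.sin(1.875 * x)]  # rough stand-ins for modes 1 and 2

candidates = [i / 10 for i in range(1, 11)]  # candidate sensor positions

def fim_det(positions):
    # Fisher information matrix F = S^T S for the sensitivity matrix S,
    # assuming unit-variance, uncorrelated measurement noise.
    f11 = f12 = f22 = 0.0
    for x in positions:
        s1, s2 = mode_shapes(x)
        f11 += s1 * s1
        f12 += s1 * s2
        f22 += s2 * s2
    return f11 * f22 - f12 * f12  # determinant = D-optimality criterion

# pick the two-sensor layout with maximal information content
best = max(combinations(candidates, 2), key=fim_det)
```

In the paper's setting the sensitivities come from the finite element model and the error model carries both aleatoric and epistemic portions; the greedy pairwise search here is only the simplest possible stand-in.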
Few studies have investigated how search behavior affects complex writing tasks. We analyze a dataset of 150 long essays whose authors searched the ClueWeb09 corpus for source material, while all querying, clicking, and writing activity was meticulously recorded. We model the effect of search and writing behavior on essay quality using path analysis. Since the boil-down and build-up writing strategies identified in previous research have been found to affect search behavior, we model each writing strategy separately. Our analysis shows that the search process contributes significantly to essay quality through both direct and mediated effects, while the author's writing strategy moderates this relationship. Our models explain 25–35% of the variation in essay quality through rather simple search and writing process characteristics alone, a fact that has implications for how search engines could personalize result pages for writing tasks. Authors' writing strategies and associated searching patterns differ, producing differences in essay quality. In a nutshell: essay quality improves if search and writing strategies harmonize: build-up writers benefit from focused, in-depth querying, while boil-down writers fare better with a broader and shallower querying strategy.
This article focuses on the research and development of new cellulose ether derivatives as innovative superplasticizers for mortar systems. Several synthetic strategies have been pursued to obtain new compounds and study their properties in cementitious systems as new bio-based additives. The new water-soluble admixtures were synthesized using a complex carboxymethylcellulose-based backbone that was first hydrolyzed and then sulfo-ethylated in the presence of sodium vinyl sulphonate. Starting with a complex biopolymer that is widely known as a thickening agent was very challenging; only by varying the hydrolysis times and temperatures of the reactions could the intended goal be achieved. The obtained derivatives showed different molecular weights (Mw) and anionic charges on their backbones. An improvement in shear stress and dynamic viscosity values of CEM II 42.5R cement was observed with the samples obtained with a longer time of higher-temperature hydrolysis and sulfo-ethylation. Investigations into the chemical nature of the pore solution, calorimetric studies, and adsorption experiments clearly showed the ability of the carboxymethyl cellulose superplasticizer (CMC SP) to interact with cement grains and influence hydration processes within a 48-h time window, causing a delay in hydration reactions in the samples. The fluidity of the cementitious matrices was ascertained through slump tests, and preliminary studies of the mechanical and flexural strength of the hardened mortar formulated with the new ecological additives yielded promising values. Finally, computed tomography (CT) images completed the investigation of the pore network structure of the hardened specimens, highlighting their promising pore structure.
Compiling and disseminating information about incidents and disasters are key to disaster management and relief. But due to inherent limitations of the acquisition process, the required information is often incomplete or missing altogether. To fill these gaps, citizen observations spread through social media are widely considered to be a promising source of relevant information, and many studies propose new methods to tap this resource. Yet, the overarching question of whether and under which circumstances social media can supply relevant information (both qualitatively and quantitatively) still remains unanswered. To shed some light on this question, we review 37 disaster and incident databases covering 27 incident types, compile a unified overview of the contained data and their collection processes, and identify the missing or incomplete information. The resulting data collection reveals six major use cases for social media analysis in incident data collection: (1) impact assessment and verification of model predictions, (2) narrative generation, (3) recruiting citizen volunteers, (4) supporting weakly institutionalized areas, (5) narrowing surveillance areas, and (6) reporting triggers for periodical surveillance. Furthermore, we discuss the benefits and shortcomings of using social media data for closing information gaps related to incidents and disasters.
Entrepreneurship and start-up activities are seen as a key response to recent upheavals in the media industry: Newly founded ventures can act as important drivers for industry transformation and renewal, pioneering new products, business models, and organizational designs (e.g. Achtenhagen, 2017; Buschow & Laugemann, 2020).
In principle, media students represent a crucial population of nascent entrepreneurs: individuals who will likely become founders of start-ups (Casero-Ripollés et al., 2016). However, their willingness to start a new business is generally considered to be rather low (Goyanes, 2015), and for journalism students, the idea of innovation tends to be conservative, following traditional norms and professional standards (Singer & Broersma, 2020). In a sample of Spanish journalism students, López-Meri et al. (2020) found that one of the main barriers to entrepreneurial intentions is that students feel they lack knowledge and training in entrepreneurship.
In the last 10 years, a wide variety of entrepreneurship education courses have been set up in media departments of colleges and universities worldwide.
These programs have been designed to sensitize and prepare communications, media, and journalism students to think and act entrepreneurially (e.g. Caplan et al., 2020; Ferrier, 2013; Ferrier & Mays, 2017; Hunter & Nel, 2011). Entrepreneurial competencies and practices not only play a crucial role for start-ups but, in times of digital transformation, are increasingly sought after by legacy media companies as well (Küng, 2015).
At the Department of Journalism and Communication Research, Hanover University of Music, Drama and Media, Germany, we have been addressing these developments with the “Media Entrepreneurship” program. The course, established in 2013, aims to provide fundamental knowledge of entrepreneurship as well as to promote students' entrepreneurial thinking and behavior. This article presents the pedagogical approach of the program and investigates learning outcomes. By outlining and evaluating the Media Entrepreneurship program, this article aims to promote good practices of entrepreneurship education in communications, media, and journalism, and to reflect on the limitations of such programs.
Chemical glass frosting processes are widely used to create visually attractive glass surfaces. A commonly used frosting bath mainly contains ammonium bifluoride (NH4HF2) mixed with hydrochloric acid (HCl). The frosting process consists of several baths: first, a preliminary bath cleans the object; second, the frosting bath etches the rough, light-scattering structure into the glass surface; finally, washing baths clean the frosted object. This is where the constituents of the preceding steps accumulate and have to be filtered from the sewage. In the present contribution, phosphoric acid (H3PO4) was used as a substitute for HCl to reduce the amount of ammonium (NH4+) and chloride (Cl−) dissolved in the waste water. In combination with magnesium carbonate (MgCO3), it allows the precipitation of ammonium from the sewage as ammonium magnesium phosphate (MgNH4PO4). However, a trivial replacement of HCl by H3PO4 within the frosting process causes extensive frosting errors, such as inhomogeneous size distributions of the structures or domains that are not fully covered by these structures. By modifying the preliminary bath composition, it was possible to improve the frosting result considerably. To determine the optimal composition of the preliminary bath, a semi-automatic evaluation method was developed that makes an objective comparison of the resulting surface quality possible.
Biofeedback constitutes a well-established, non-invasive method to voluntarily interfere with emotional processing by means of cognitive strategies. However, treatment durations exhibit strong inter-individual variations, and first successes can often be achieved only after a large number of sessions. Sham feedback constitutes a rather untapped approach, providing feedback that does not correspond to the participant's actual state. The current study aims to gain insights into the mechanisms of sham feedback processing in order to support new techniques in biofeedback therapy. We carried out two experiments and applied different types of sham feedback to skin conductance responses and pupil size changes during affective processing. Results indicate that standardized but context-sensitive sham signals based on skin conductance responses exert a stronger influence on emotional regulation than individual sham feedback from ongoing pupil dynamics. Also, sham feedback should forego unnatural signal behavior to avoid irritation and skepticism among participants. Altogether, a reasonable combination of stimulus features and sham feedback characteristics makes it possible to considerably reduce the actual bodily responsiveness within a single session.
This study proposes an efficient Bayesian, frequency-based damage identification approach to identify damage in cantilever structures with an acceptable error rate, even at high noise levels. The catenary poles of electric high-speed train systems were selected as a realistic case study to cover the objectives of this study. Compared to other frequency-based damage detection approaches described in the literature, the proposed approach can efficiently detect damage in cantilever structures at higher levels of damage identification, namely identifying both the damage location and severity, using a low-cost structural health monitoring (SHM) system with a limited number of sensors, for example, accelerometers. The integration of Bayesian inference as a stochastic framework makes it possible to utilize the benefit of data fusion in merging the informative data from multiple damage features, which increases the quality and accuracy of the results. The findings provide the decision-maker with the information required to manage maintenance, repair, or replacement procedures.
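To illustrate the kind of inference involved (a toy sketch, not the paper's actual pole model), the following code places a flat-prior Bayesian grid posterior over damage location and severity, using two synthetic natural-frequency ratios as damage features:

```python
import math

# Illustrative forward model: how two natural-frequency ratios might
# drop with damage severity and location (0 = clamped end, 1 = tip).
# The coefficients are made up for this sketch.
def freq_ratios(loc, sev):
    return (1.0 - 0.10 * sev * (1.0 - loc),  # mode 1: sensitive near the base
            1.0 - 0.06 * sev * loc)          # mode 2: sensitive near the tip

locations = [i / 10 for i in range(1, 10)]
severities = [i / 10 for i in range(0, 10)]

observed = freq_ratios(0.3, 0.5)  # synthetic "measured" features
sigma = 0.005                     # assumed feature noise level

def log_likelihood(loc, sev):
    # Gaussian likelihood of the observed features given (loc, sev)
    return sum(-(o - m) ** 2 / (2 * sigma ** 2)
               for o, m in zip(observed, freq_ratios(loc, sev)))

# With a flat prior, the posterior maximum is the likelihood maximum.
best = max(((l, s) for l in locations for s in severities),
           key=lambda p: log_likelihood(*p))
```

In the study, several damage features are fused within the same Bayesian framework; here the fusion simply appears as the sum of per-feature log-likelihoods.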
One of the most important subjects of hydraulic engineering is the reliable estimation of the transverse distribution of bed and wall shear stresses in rectangular channels. This study makes use of the Tsallis entropy, genetic programming (GP), and adaptive neuro-fuzzy inference system (ANFIS) methods to assess the shear stress distribution (SSD) in the rectangular channel.
To evaluate the results of the Tsallis entropy, GP, and ANFIS models, laboratory observations were used in which shear stress was measured using an optimized Preston tube, yielding the SSD for various aspect ratios in the rectangular channel. To investigate the shear stress percentage, 10 data series with a total of 112 different data points were used. The results of the sensitivity analysis show that the most influential parameter for the SSD in a smooth rectangular channel is the dimensionless parameter B/H, where B is the transverse coordinate and H is the flow depth. With the parameters (b/B) and (B/H) for the bed and (z/H) and (B/H) for the wall as inputs, the GP model performed better than the others. Based on the analysis, it can be concluded that the GP and ANFIS algorithms are more effective in estimating shear stress in smooth rectangular channels than the Tsallis entropy-based equations.
While Public-Private Partnership (PPP) is widely adopted across various sectors, its meagre utilisation in the housing sector raises a question. This paper therefore gauges the perspective of stakeholders in the building industry towards the application of PPP in various building sectors, including housing. It assesses the performance reliability of PPP for housing by learning possible take-aways from other sectors. The key stakeholders in the industry bear a high responsibility for informed understanding and decision-making. To this end, a two-tier investigation comprising surveys and expert interviews was conducted with several stakeholders in the PPP industry in Europe, involving the public sector, the private sector, consultants, as well as other community/user representatives.
The survey results demonstrated the success rate of PPPs, the major factors important for PPPs such as profitability or end-user acceptability, the prevalent practices and trends in the PPP world, and majority support for the suitability of PPP for housing. The interviews added more detailed dimensions to the understanding of the PPP industry and its functioning, enabling the formation of a comprehensive outlook. The results present the perspectives, approaches, and experiences of stakeholders with PPP practices, current trends and scenarios, and their take on PPP in housing. They should aid in understanding the challenges of applying the PPP approach to housing and enable policymakers and industry stakeholders to make provisions for higher uptake to accelerate housing provision.
Broadband dielectric measurement methods based on a vector network analyzer coupled with a coaxial transmission line cell (CC) and an open-ended coaxial probe (OC) are briefly reviewed, by which the dielectric behaviors of two practical geomaterials in the frequency range of 1 MHz to 3 GHz are investigated. Kaolin after modified compaction with different water contents is measured using the CC. The results are consistent with a previous study on standardized compacted kaolin and suggest that the dielectric properties at frequencies below 100 MHz are not only a function of water content but also of other soil state parameters, including dry density. The hydration process of a commercial grout is monitored in real time using the OC. It is found that the time-dependent dielectric properties can accurately reveal the different stages of the hydration process. These measurement results demonstrate the practicability of the introduced methods in determining the dielectric properties of soft geomaterials.
The development of a hydro-mechanically coupled Coupled-Eulerian–Lagrangian (CEL) method and its application to the back-analysis of vibratory pile driving model tests in water-saturated sand is presented. The predicted pile penetration using this approach is in good agreement with the results of the model tests as well as with fully Lagrangian simulations. In terms of pore water pressure, however, the results of the CEL simulation show a slightly worse accordance with the model tests compared to the Lagrangian simulation. Some shortcomings of the hydro-mechanically coupled CEL method in case of frictional contact problems and pore fluids with high bulk modulus are discussed. Lastly, the CEL method is applied to the simulation of vibratory driving of open-profile piles under partially drained conditions to study installation-induced changes in the soil state. It is concluded that the proposed method is capable of realistically reproducing the most important mechanisms in the soil during the driving process despite its addressed shortcomings.
The current housing crisis has a social-ecological core problem. Yet the socially unjust and ecologically problematic distribution of living space is mostly invisible and is insufficiently problematized as a question of spatial justice in both academic and activist contexts. Housing and land in a city are not endlessly available goods: when some people live on a lot of space, less space remains for others. And the people who are least responsible for the shortage of housing suffer the most from it. This article first develops the concept of living-space justice (Wohnflächengerechtigkeit), referring to the unequal distribution of living space and its societal implications under current housing allocation mechanisms. The consumption of (living) space is then problematized from an ecological perspective. The article discusses both apparent and transformation-oriented solutions and courses of action. Finally, it calls for a stronger debate on living-space justice in critical urban studies and in activist contexts, the realization of which has a social as well as an ecological dimension.
Matthias Bernt and Andrej Holm rightly point out that research on East German cities is needed as a conceptually independent field, one that centers the specific spatialization of the profound societal transformation process after 1990. They regard the field of housing in particular as productive for gaining insight into the structure and effects of this process. However, they remain vague about how such housing research specifically directed at East Germany should be conceptualized, and about how the particularities and parallels of East German developments should be related to the transformations of housing and urban development policy in West Germany, and also internationally.
This contribution connects the discussion of the post-political city with the growing academic and activist engagement with the Anthropocene, a concept describing the ecological and socio-political implications of human action on the Earth's surface. Based on three selected case studies, we explore how the specifically anthropogenic, i.e. human-made, crisis of urban air pollution is problematized in artistic positions. In the context of the potential advance of post-politics, we discuss how the ambivalent discourse of the Anthropocene favors depoliticization on the one hand and, on the other, opens up new possibilities for the repoliticization of global environmental challenges.
Prediction of the groundwater nitrate concentration is of utmost importance for pollution control and water resource management. This research aims to model the spatial groundwater nitrate concentration in the Marvdasht watershed, Iran, using several artificial intelligence methods: support vector machine (SVM), Cubist, random forest (RF), and Bayesian artificial neural network (Bayesian-ANN) machine learning models. For this purpose, 11 independent variables affecting groundwater nitrate changes, including elevation, slope, plan curvature, profile curvature, rainfall, piezometric depth, distance from the river, distance from residential areas, sodium (Na), potassium (K), and the topographic wetness index (TWI), were prepared for the study area. Nitrate levels were also measured in 67 wells and used as the dependent variable for modeling. Data were divided into the two categories of training (70%) and testing (30%). The evaluation criteria coefficient of determination (R2), mean absolute error (MAE), root mean square error (RMSE), and Nash–Sutcliffe efficiency (NSE) were used to evaluate the performance of the models. The results showed that the RF model (R2 = 0.89, RMSE = 4.24, NSE = 0.87) outperformed the Cubist (R2 = 0.87, RMSE = 5.18, NSE = 0.81), SVM (R2 = 0.74, RMSE = 6.07, NSE = 0.74), and Bayesian-ANN (R2 = 0.79, RMSE = 5.91, NSE = 0.75) models. The zoning of groundwater nitrate concentration showed that the northern parts of the study area have the highest nitrate levels, which are higher in these agricultural areas than elsewhere.
The most important cause of nitrate pollution in these areas is agricultural activity and the use of groundwater from wells close to agricultural areas to irrigate crops. This has encouraged the indiscriminate use of chemical fertilizers, which are washed out by irrigation or rainwater, penetrate the groundwater, and pollute the aquifer.
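The evaluation criteria named above are straightforward to compute; the following sketch evaluates R2, RMSE, and NSE on hypothetical observed vs. predicted nitrate values (illustrative numbers, not the study's data):

```python
import math

def r2(obs, pred):
    # squared Pearson correlation between observed and predicted values
    n = len(obs)
    mo, mp = sum(obs) / n, sum(pred) / n
    cov = sum((o - mo) * (p - mp) for o, p in zip(obs, pred))
    vo = sum((o - mo) ** 2 for o in obs)
    vp = sum((p - mp) ** 2 for p in pred)
    return cov * cov / (vo * vp)

def rmse(obs, pred):
    # root mean square error
    return math.sqrt(sum((o - p) ** 2 for o, p in zip(obs, pred)) / len(obs))

def nse(obs, pred):
    # Nash-Sutcliffe efficiency: 1 minus error variance over data variance
    mo = sum(obs) / len(obs)
    return 1 - (sum((o - p) ** 2 for o, p in zip(obs, pred))
                / sum((o - mo) ** 2 for o in obs))

obs = [12.0, 30.5, 22.1, 45.0, 18.3]   # hypothetical measured nitrate (mg/L)
pred = [13.1, 28.9, 24.0, 43.2, 17.5]  # hypothetical model predictions
```

NSE penalizes bias as well as scatter, which is why it is reported alongside R2 in the comparison above.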
Tall buildings have become an integral part of cities despite all their pros and cons. Some current tall buildings have several problems because of their unsuitable location; the problems include increased density, additional traffic on urban thoroughfares, blocked view corridors, etc. Some of these buildings have destroyed desirable views of the city. In this research, different criteria have been chosen, such as environment, access, social-economic factors, land use, and physical context. These criteria and sub-criteria are prioritized and weighted by the analytic network process (ANP) based on experts' opinions, using Super Decisions V2.8 software. In parallel, layers corresponding to the sub-criteria were created in ArcGIS 10.3, and a locating plan was then produced via a weighted overlay (map algebra). In the next step, seven hypothetical tall buildings (20 stories) in the best part of the locating plan were considered to evaluate how much of these hypothetical buildings would be visible (fuzzy visibility) from streets and open spaces throughout the city. These processes were modeled in MATLAB, and the final fuzzy visibility plan was created in ArcGIS. The fuzzy visibility results can help city managers and planners to choose which location is suitable for a tall building and how much visibility may be appropriate. The proposed model can locate tall buildings based on technical and visual criteria in the future development of the city, and it can be widely used in any city as long as the criteria and weights are localized.
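The weighted-overlay step can be illustrated with a minimal sketch (placeholder rasters and weights, not the study's ANP results): each output cell is the weighted sum of the normalized criterion rasters.

```python
# Toy weighted overlay (map algebra): combine normalized criterion
# rasters with ANP-derived weights. Values here are illustrative only.
def weighted_overlay(rasters, weights):
    rows, cols = len(rasters[0]), len(rasters[0][0])
    return [[sum(w * r[i][j] for r, w in zip(rasters, weights))
             for j in range(cols)]
            for i in range(rows)]

access   = [[0.2, 0.8], [0.5, 1.0]]  # suitability scores in [0, 1]
land_use = [[0.9, 0.4], [0.6, 0.3]]
weights  = [0.6, 0.4]                # ANP weights, must sum to 1

plan = weighted_overlay([access, land_use], weights)
```

In the study this combination is done in ArcGIS over many sub-criterion layers; the cell-wise arithmetic is the same.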
This study aims to evaluate a new approach to modeling gully erosion susceptibility (GES) based on a deep learning neural network (DLNN) model and an ensemble of the particle swarm optimization (PSO) algorithm with DLNN (PSO-DLNN), comparing these approaches with common artificial neural network (ANN) and support vector machine (SVM) models in the Shirahan watershed, Iran. For this purpose, 13 independent variables affecting GES in the study area, namely, altitude, slope, aspect, plan curvature, profile curvature, drainage density, distance from a river, land use, soil, lithology, rainfall, stream power index (SPI), and topographic wetness index (TWI), were prepared. A total of 132 gully erosion locations were identified during field visits. To implement the proposed model, the dataset was divided into the two categories of training (70%) and testing (30%). The results indicate that the area under the curve (AUC) value from the receiver operating characteristic (ROC) for the testing dataset of PSO-DLNN is 0.89, which indicates excellent accuracy. The remaining models also achieve good accuracy, with results similar to the PSO-DLNN model: the AUC values from ROC of DLNN, SVM, and ANN for the testing datasets are 0.87, 0.85, and 0.84, respectively. The ensemble with PSO increased the efficiency of the proposed model in predicting GES. Therefore, it can be concluded that the DLNN model and its ensemble with the PSO algorithm can be used as a novel and practical method to predict gully erosion susceptibility, which can help planners and managers to manage and reduce the risk of this phenomenon.
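As a minimal illustration of the PSO component (here minimizing a simple test function rather than tuning DLNN weights, so all parameters are illustrative):

```python
import random

# Bare-bones particle swarm optimization: each particle tracks its own
# best position (pbest) and is attracted to the swarm's best (gbest).
def pso(f, dim=2, particles=20, iters=200, seed=1):
    rnd = random.Random(seed)
    pos = [[rnd.uniform(-5, 5) for _ in range(dim)] for _ in range(particles)]
    vel = [[0.0] * dim for _ in range(particles)]
    pbest = [p[:] for p in pos]
    gbest = min(pbest, key=f)[:]
    for _ in range(iters):
        for i in range(particles):
            for d in range(dim):
                vel[i][d] = (0.7 * vel[i][d]                                  # inertia
                             + 1.5 * rnd.random() * (pbest[i][d] - pos[i][d])  # cognitive
                             + 1.5 * rnd.random() * (gbest[d] - pos[i][d]))    # social
                pos[i][d] += vel[i][d]
            if f(pos[i]) < f(pbest[i]):
                pbest[i] = pos[i][:]
                if f(pos[i]) < f(gbest):
                    gbest = pos[i][:]
    return gbest

sphere = lambda p: sum(x * x for x in p)  # toy objective with minimum at 0
best = pso(sphere)
```

In the PSO-DLNN ensemble, the objective would instead score a candidate network configuration on the training data; the swarm mechanics are unchanged.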
For this paper, the problem of energy/voltage management in photovoltaic (PV)/battery systems was studied, and a new fractional-order control system on the basis of type-3 (T3) fuzzy logic systems (FLSs) was developed. New fractional-order learning rules are derived for the tuning of T3-FLSs such that stability is ensured. In addition, using fractional-order calculus, robustness was studied versus dynamic uncertainties, perturbations of irradiation and temperature, and abrupt faults in output loads, and, subsequently, new compensators were proposed. In several examinations under difficult operating conditions, such as random temperature, variable irradiation, and abrupt changes in output load, the capability of the designed controller was verified. In addition, its superiority was demonstrated in comparison with other methods, such as proportional-integral-derivative (PID), sliding mode control (SMC), passivity-based control (PBC), and linear quadratic regulator (LQR) approaches.
In this study, a new approach based on intelligent systems and machine learning algorithms is introduced for solving singular multi-pantograph differential equations (SMDEs). For the first time, a type-2 fuzzy logic based approach is formulated to find an approximate solution. The rules of the suggested type-2 fuzzy logic system (T2-FLS) are optimized by the square root cubature Kalman filter (SCKF) such that the proposed fitness function is minimized. Furthermore, the stability and boundedness of the estimation error are proved by a novel approach on the basis of the Lyapunov theorem. The accuracy and robustness of the suggested algorithm are verified by several statistical examinations. It is shown that the suggested method results in an accurate solution with rapid convergence and a lower computational cost.
Piping erosion is one form of water erosion that leads to significant changes in the landscape and environmental degradation. In the present study, we evaluated piping erosion modeling in the Zarandieh watershed of Markazi province in Iran based on random forest (RF), support vector machine (SVM), and Bayesian generalized linear model (Bayesian GLM) machine learning algorithms. Due to the importance of various geo-environmental and soil properties in the evolution and creation of piping erosion, 18 variables were considered for modeling the piping erosion susceptibility in the Zarandieh watershed. A total of 152 points of piping erosion were recognized in the study area and divided into training (70%) and validation (30%) sets for modeling. The area under the curve (AUC) was used to assess the efficiency of the RF, SVM, and Bayesian GLM models. The piping erosion susceptibility results indicated that all three models had high efficiency in the testing step, with AUC values of 0.9 for RF, 0.88 for SVM, and 0.87 for Bayesian GLM. Altitude, pH, and bulk density were the variables with the greatest influence on piping erosion susceptibility in the Zarandieh watershed. This result indicates that geo-environmental and soil chemical variables are responsible for the expansion of piping erosion in the Zarandieh watershed.
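The AUC used for validation can be computed directly as a rank statistic: it equals the probability that a randomly chosen erosion point receives a higher susceptibility score than a randomly chosen non-erosion point. A sketch with made-up scores:

```python
# AUC as the Mann-Whitney statistic: fraction of (positive, negative)
# pairs ranked correctly, counting ties as half.
def auc(pos_scores, neg_scores):
    wins = sum((p > n) + 0.5 * (p == n)
               for p in pos_scores for n in neg_scores)
    return wins / (len(pos_scores) * len(neg_scores))

pos = [0.91, 0.84, 0.77, 0.66, 0.58]  # scores at observed piping points
neg = [0.40, 0.52, 0.61, 0.35, 0.20]  # scores at non-erosion points

score = auc(pos, neg)  # 24 of 25 pairs correctly ordered -> 0.96
```

This pairwise formulation is equivalent to integrating the ROC curve, which is how the 0.9/0.88/0.87 values above would be obtained from the validation points.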
This paper reports the formation and structure of fast-setting geopolymers activated using three sodium silicate solutions with different moduli (1.6, 2.0, and 2.4) and a berlinite-type aluminum orthophosphate. By varying the concentration of the aluminum orthophosphate, different Si/Al ratios were established (6, 3, and 2). The reaction kinetics of the binders were determined by isothermal calorimetric measurements at 20 °C. X-ray diffraction analysis as well as nuclear magnetic resonance (NMR) measurements were performed on the binders to determine structural differences caused by varying the alkalinity of the sodium silicate solutions and the Si/Al ratio. The calorimetric results indicated that the higher the alkalinity of the sodium silicate solution, the higher the solubility and degree of conversion of the aluminum orthophosphate. The results of the X-ray diffraction and Rietveld analyses, as well as the NMR measurements, confirmed the assumption from the calorimetric experiments that the aluminum orthophosphate dissolved first and that a polycondensation to an amorphous aluminosilicate network then occurred. The different amounts of amorphous phases formed as a function of the alkalinity of the sodium silicate solution indicate that tetrahydroxoaluminate species were formed during the dissolution of the aluminum orthophosphate, which reduced the pH value. This prevented further dissolution of the aluminum orthophosphate, which remained unreacted.
Despite digitization and platformization, mass media and established media companies still play a crucial role in the provision of journalistic content in democratic societies. Competition is one key driver of (media) company behavior and is considered to have an impact on the media’s performance. However, theory and empirical research are ambiguous about the relationship. The objective of this article is to empirically analyze the effect of competition on media performance in a cross-national context. We assessed media performance of media companies as the importance of journalistic goals within their stated corporate goal system. We conducted a content analysis of letters to the shareholders in annual reports of more than 50 media companies from 2000 to 2014 to operationalize journalistic goal importance. When employing a fixed effects regression analysis, as well as a fuzzy set qualitative comparative analysis, results suggest that competition has a positive effect on the importance of journalistic goals, while the existence of a strong public service media sector appears to have the effect of “crowding out” commercial media companies.
The Marmara Region (NW Turkey) has experienced significant earthquakes (M > 7.0) to date. A destructive earthquake is also expected in the region. To determine the effect of the specific design spectrum, eleven provinces located in the region were chosen according to the Turkey Earthquake Building Code updated in 2019. Additionally, the differences between the previous and updated regulations of the country were investigated. Peak Ground Acceleration (PGA) and Peak Ground Velocity (PGV) were obtained for each province by using earthquake ground motion levels with 2%, 10%, 50%, and 68% probability of exceedance in 50-year periods. The PGA values in the region range from 0.16 to 0.7 g for earthquakes with a return period of 475 years. For each province, a sample of a reinforced-concrete building having two different numbers of stories with the same ground and structural characteristics was chosen. Static adaptive pushover analyses were performed for the sample reinforced-concrete building using each province’s design spectrum. The variations in the earthquake and structural parameters were investigated according to different geographical locations. It was determined that the site-specific design spectrum significantly influences target displacements for performance-based assessments of buildings due to seismicity characteristics of the studied geographic location.
A Machine Learning Framework for Assessing Seismic Hazard Safety of Reinforced Concrete Buildings
(2020)
Although averting a seismic event and its physical, social, and economic disruption is practically impossible, advances in computational science and numerical modeling allow us to predict its severity, understand its outcomes, and prepare for post-disaster management. Many buildings in developed metropolitan areas are aging and still in service. These buildings were often designed before national seismic codes were established or before construction regulations were introduced. In such cases, risk reduction hinges on developing alternatives and designing suitable models to enhance the performance of existing structures. Such models can classify the risks and casualties associated with possible earthquakes and thus support emergency preparation. It is therefore crucial to recognize structures that are susceptible to earthquake vibrations and need to be prioritized for retrofitting. However, the behavior of every building under seismic actions cannot be studied through full structural analysis, which is often unrealistic because of rigorous computations, long run times, and substantial expenditure. This calls for a simple, reliable, and accurate process known as Rapid Visual Screening (RVS), which serves as a primary screening platform based on an optimum number of seismic parameters and predetermined performance damage conditions for structures. In this study, the damage classification technique was studied, and the efficacy of the Machine Learning (ML) approach to damage prediction via a Support Vector Machine (SVM) model was explored. The ML model is trained and tested separately on damage data from four different earthquakes, namely Ecuador, Haiti, Nepal, and South Korea. Each dataset consists of a varying number of input records and eight performance modifiers.
Based on the study and its results, the SVM-based ML model classifies the given input data into the corresponding damage classes and thus supports the hazard safety evaluation of buildings.
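The classification step described above can be sketched in a few lines. The following is a minimal illustration only: the eight performance modifiers are replaced by synthetic random features, the two damage classes are generated artificially, and a linear SVM trained with the Pegasos subgradient method stands in for the authors' actual model and survey data.

```python
import numpy as np

def train_linear_svm(X, y, lam=0.01, epochs=200, seed=0):
    """Minimal linear SVM trained with the Pegasos subgradient method.
    X: (n, d) features, y: labels in {-1, +1}. Returns weights (bias absorbed)."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    Xb = np.hstack([X, np.ones((n, 1))])      # append constant column for the bias
    w = np.zeros(d + 1)
    t = 0
    for _ in range(epochs):
        for i in rng.permutation(n):
            t += 1
            eta = 1.0 / (lam * t)             # Pegasos step size schedule
            if y[i] * (Xb[i] @ w) < 1:        # hinge-loss margin violated
                w = (1 - eta * lam) * w + eta * y[i] * Xb[i]
            else:
                w = (1 - eta * lam) * w
    return w

def predict(w, X):
    Xb = np.hstack([X, np.ones((X.shape[0], 1))])
    return np.sign(Xb @ w)

# Synthetic stand-in for the eight performance modifiers:
# class +1 = "damaged", class -1 = "safe"; not real earthquake data.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(1.0, 0.5, (100, 8)), rng.normal(-1.0, 0.5, (100, 8))])
y = np.array([1] * 100 + [-1] * 100)

w = train_linear_svm(X, y)
accuracy = (predict(w, X) == y).mean()
```

On well-separated synthetic clusters like these, the classifier recovers the labels almost perfectly; real damage data is far noisier, which is why the paper evaluates four separate earthquake datasets.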
Novel sanitation systems aim at a resource-oriented recovery of wastewater. This is to be achieved by collecting wastewater streams separately. In the professional communities of water management and spatial planning, novel sanitation systems are regarded as a suitable approach for securing wastewater disposal in rural areas in the future. Although the practicality of these systems has been demonstrated in research projects, a variety of risks have so far hindered wastewater utilities from introducing resource-oriented wastewater management. Starting from an analysis of the contexts surrounding the implementation of a novel sanitation system in rural Thuringia, this article examines how the introduction of resource-oriented system approaches can be supported at the state level with the instruments of wastewater management. The central elements of the article are a presentation of the main transformation risks related to the introduction of innovative solutions, an explanation of the specific wastewater management instruments, and a description of governance approaches that can promote the introduction of novel sanitation systems. The results show that novel sanitation systems become feasible through the strategic use of these instruments, although the water sector, by extending its previous system boundaries, depends on cooperation with other areas of public services.
Personalized ventilation (PV) can improve thermal comfort as well as the quality of inhaled air by supplying fresh air separately to each workplace. This article investigates the effect of PV on the thermal comfort of occupants under summer boundary conditions. Two approaches for evaluating the cooling effect of PV were examined, based on (1) the equivalent temperature and (2) thermal sensation. The evaluation draws on data measured in a climate chamber as well as numerically simulated data. Before the simulations were carried out, the numerical model was first validated against the measured data. The results show that the approach based on thermal sensation may be more suitable for evaluating the cooling effect of PV, since it better accounts for the complex physiological factors involved.
Recently, the demand for residence and usage of urban infrastructure has increased, thereby raising the risk to human lives from natural calamities. The occupancy demand has rapidly increased the construction rate, while the inadequate design of structures makes them more vulnerable. Buildings constructed before the development of seismic codes are additionally susceptible to earthquake vibrations. Structural collapse causes economic losses as well as setbacks for human lives. Applying different theoretical methods to analyze structural behavior is expensive and time-consuming. Therefore, introducing a rapid vulnerability assessment method to check structural performance is necessary for future developments. Such a process is known as Rapid Visual Screening (RVS). This technique was developed to identify, inventory, and screen structures that are potentially hazardous. Sometimes, poor construction quality means that some of the required parameters are unavailable; in this case, the RVS process becomes tedious. Hence, to tackle such situations, multiple-criteria decision-making (MCDM) methods open a new gateway for seismic vulnerability assessment. The different parameters required by RVS can be taken up in MCDM, which evaluates multiple conflicting criteria in decision making across several fields. This paper aims to bridge the gap between RVS and MCDM. Furthermore, to define the correlation between these techniques, the methodologies from the Indian, Turkish, and Federal Emergency Management Agency (FEMA) codes have been implemented, and the resulting seismic vulnerability assessments of structures have been observed and compared.
This research aims to model soil temperature (ST) using machine learning models, namely the multilayer perceptron (MLP) algorithm and the support vector machine (SVM), in hybrid form with the firefly optimization algorithm, i.e., MLP-FFA and SVM-FFA. Measured ST and meteorological parameters of the Tabriz and Ahar weather stations over the period 2013–2015 are used for training and testing of the studied models with delays of one and two days. To ascertain conclusive results for the validation of the proposed hybrid models, the error metrics are benchmarked in an independent testing period, and Taylor diagrams are utilized for that purpose. The results showed that, in the case of a one-day delay, except for predicting ST at 5 cm below the soil surface (ST5cm) at the Tabriz station, MLP-FFA produced superior results compared with the MLP, SVM, and SVM-FFA models. For a two-day delay, MLP-FFA showed increased accuracy in predicting ST5cm and ST20cm at the Tabriz station and ST10cm at the Ahar station in comparison with SVM-FFA. Additionally, for all of the prescribed models, the performance of the MLP-FFA and SVM-FFA hybrid models in the testing phase was found to be meaningfully superior to that of the classical MLP and SVM models.
Image Analysis Using Human Body Geometry and Size Proportion Science for Action Classification
(2020)
Gestures are one of the basic modes of human communication and are usually used to represent different actions. Automatic recognition of these actions forms the basis for solving more complex problems such as human behavior analysis, video surveillance, event detection, and sign language recognition. Action recognition from images is a challenging task, as key information such as temporal data, object trajectories, and optical flow is not available in still images. Measuring the size of different regions of the human body, i.e., step size, arm span, and the lengths of the arm, forearm, and hand, provides valuable clues for identifying human actions. In this article, a framework for the classification of human actions is presented in which humans are detected and localized through faster region-based convolutional neural networks followed by morphological image processing techniques. Furthermore, geometric features are extracted from the human blob and incorporated into classification rules for six human actions: standing, walking, single-hand side wave, single-hand top wave, both-hands side wave, and both-hands top wave. The performance of the proposed technique has been evaluated using precision, recall, omission error, and commission error, and compared in terms of overall accuracy with existing approaches, showing that it performs well in contrast to its counterparts.
This paper proposes a practice-theoretical journalism research approach for an alternate and innovative perspective of digital journalism’s current empirical challenges. The practice-theoretical approach is introduced by demonstrating its explanatory power in relation to demarcation problems, technological changes, economic challenges and challenges to journalism’s legitimacy. Its respective advantages in dealing with these problems are explained and then compared to established journalism theories. The particular relevance of the theoretical perspective is due to (1) its central decision to observe journalistic practices, (2) the transgression of conventional journalistic boundaries, (3) the denaturalization of journalistic norms and laws, (4) the explicit consideration of a material, socio-technical dimension of journalism, (5) a focus on the conflicting relationship between journalistic practices and media management practices, and (6) prioritizing order generation over stability.
Auguste Rodins Weimarer Eva
(2018)
In this research, an attempt was made to reduce the dimension of wavelet-ANFIS/ANN (artificial neural network/adaptive neuro-fuzzy inference system) models toward reliable forecasts and reduced computational cost. To this end, principal component analysis (PCA) was performed on the input time series, decomposed by a discrete wavelet transform, to feed the ANN/ANFIS models. The models were applied to forecasting dissolved oxygen (DO) in rivers, an important variable affecting aquatic life and water quality. The current values of DO, water surface temperature, salinity, and turbidity were considered as input variables to forecast DO three time steps ahead. The results revealed that PCA can be employed as a powerful tool for reducing the dimension of the input variables and for detecting their inter-correlation. The results of the PCA-wavelet-ANN models are compared with those of the wavelet-ANN models, the former having the advantage of lower computational time. For the ANFIS models, PCA is even more beneficial, as it prevents the wavelet-ANFIS models from creating too many rules, which deteriorates their efficiency. Moreover, manipulating the wavelet-ANFIS models with PCA leads to a significant decrease in computational time. Finally, it was found that the PCA-wavelet-ANN/ANFIS models can provide reliable forecasts of dissolved oxygen as an important water quality indicator in rivers.
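The decompose-reduce-forecast pipeline described above can be sketched as follows. This is an illustration under stated assumptions only: a one-level Haar transform stands in for the paper's discrete wavelet transform, ordinary least squares stands in for the ANN/ANFIS forecaster, and the DO-like series is synthetic.

```python
import numpy as np

def haar_dwt(x):
    """One-level Haar DWT: returns approximation and detail coefficients."""
    x = x[: len(x) // 2 * 2]
    a = (x[0::2] + x[1::2]) / np.sqrt(2)
    d = (x[0::2] - x[1::2]) / np.sqrt(2)
    return a, d

def pca_reduce(F, k):
    """Project the centered feature matrix F onto its first k principal components."""
    Fc = F - F.mean(axis=0)
    U, S, Vt = np.linalg.svd(Fc, full_matrices=False)
    return Fc @ Vt[:k].T

# Synthetic DO-like series: seasonal cycle plus noise (not real river data).
rng = np.random.default_rng(0)
t = np.arange(600)
do = 8 + np.sin(2 * np.pi * t / 50) + 0.1 * rng.normal(size=t.size)

win, horizon = 16, 3
rows, targets = [], []
for i in range(len(do) - win - horizon):
    a, d = haar_dwt(do[i : i + win])          # wavelet features of each window
    rows.append(np.concatenate([a, d]))
    targets.append(do[i + win + horizon - 1]) # value three steps ahead
F, y = np.array(rows), np.array(targets)

Z = pca_reduce(F, k=4)                        # dimension reduction: 16 -> 4 features
A = np.hstack([Z, np.ones((Z.shape[0], 1))])  # OLS as a stand-in for the ANN step
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
rmse = np.sqrt(np.mean((A @ coef - y) ** 2))
```

For this near-periodic signal, the reduced four-component representation retains the predictive subspace, so the forecast error stays close to the noise floor; the same mechanics carry over when the regression head is an ANN or ANFIS.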
This study investigates the performance of two systems: personalized ventilation (PV) and ductless personalized ventilation (DPV). Even though the literature indicates a compelling performance of PV, it is not often used in practice due to its impracticality. Therefore, the present study assesses the possibility of replacing the inflexible PV with DPV in office rooms equipped with displacement ventilation (DV) in the summer season. Numerical simulations were utilized to evaluate the inhaled concentration of pollutants when PV and DPV are used. The systems were compared in a simulated office with two occupants: a susceptible occupant and a source occupant. Three types of pollution were simulated: exhaled infectious air, dermally emitted contamination, and room contamination from a passive source. Results indicated that PV improved the inhaled air quality regardless of the location of the pollution source; a higher PV supply flow rate positively impacted the inhaled air quality. Contrarily, the performance of DPV was highly sensitive to the source location and the personalized flow rate. A higher DPV flow rate tends to decrease the inhaled air quality due to increased mixing of pollutants in the room. Moreover, both systems achieved better results when the personalized system of the source occupant was switched off.
Welfare‐state transformation and entrepreneurial urban politics in Western welfare states since the late 1970s have yielded converging trends in the transformation of the dominant Fordist paradigm of social housing in terms of its societal function and institutional and spatial form. In this article I draw from a comparative case study on two cities in Germany to show that the resulting new paradigm is simultaneously shaped by the idiosyncrasies of the country's national housing regime and local housing policies. While German governments have successively limited the societal function of social housing as a legitimate instrument only for addressing exceptional housing crises, local policies on providing and organizing social housing within this framework display significant variation. However, planning and design principles dominating the spatial forms of social housing have been congruent. They may be interpreted as both an expression of the marginalization of social housing within the restructured welfare housing regime and a tool of its implementation according to the logics of entrepreneurial urban politics.
A new large‐field, high‐sensitivity, single‐mirror coincident schlieren optical instrument has been installed at the Bauhaus‐Universität Weimar for the purpose of indoor air research. Its performance is assessed by the non‐intrusive measurement of the thermal plume of a heated manikin. The schlieren system produces excellent qualitative images of the manikin's thermal plume and also quantitative data, especially schlieren velocimetry of the plume's velocity field that is derived from the digital cross‐correlation analysis of a large time sequence of schlieren images. The quantitative results are compared with thermistor and hot‐wire anemometer data obtained at discrete points in the plume. Good agreement is obtained, once the differences between path‐averaged schlieren data and planar anemometry data are reconciled.
The economic losses from earthquakes can hit a national economy considerably; therefore, models capable of estimating the vulnerability and losses of future earthquakes are highly consequential for emergency planners concerned with risk mitigation. This demands a mass prioritization filtering of structures to identify vulnerable buildings for retrofitting purposes. Applying advanced structural analysis to each building to study its earthquake response is impractical due to complex calculations, long computational times, and exorbitant cost. This exhibits the need for a fast, reliable, and rapid method, commonly known as Rapid Visual Screening (RVS). The method serves as a preliminary screening platform, using an optimum number of seismic parameters of the structure and predefined output damage states. In this study, the efficacy of the Machine Learning (ML) application in damage prediction through a Support Vector Machine (SVM) model as the damage classification technique has been investigated. The developed model was trained and examined on damage data from the 1999 Düzce Earthquake in Turkey, where the building data consist of 22 performance modifiers implemented with supervised machine learning.
Wind effects can be critical for the design of lifelines such as long-span bridges. The existence of a significant number of aerodynamic force models, used to assess the performance of bridges, poses an important question regarding their comparison and validation. This study utilizes a unified set of metrics for a quantitative comparison of time-histories in bridge aerodynamics with a host of characteristics. Accordingly, nine comparison metrics are included to quantify the discrepancies in local and global signal features such as phase, time-varying frequency and magnitude content, probability density, nonstationarity and nonlinearity. Among these, seven metrics available in the literature are introduced after recasting them for time-histories associated with bridge aerodynamics. Two additional metrics are established to overcome the shortcomings of the existing metrics. The performance of the comparison metrics is first assessed using generic signals with prescribed signal features. Subsequently, the metrics are applied to a practical example from bridge aerodynamics to quantify the discrepancies in the aerodynamic forces and response based on numerical and semi-analytical aerodynamic models. In this context, it is demonstrated how a discussion based on the set of comparison metrics presented here can aid a model evaluation by offering deeper insight. The outcome of the study is intended to provide a framework for quantitative comparison and validation of aerodynamic models based on the underlying physics of fluid-structure interaction. Immediate further applications are expected for the comparison of time-histories that are simulated by data-driven approaches.
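The idea of quantifying discrepancies in signal magnitude and phase can be illustrated with two generic metrics; these are simplified stand-ins for the nine metrics of the study, not the authors' definitions, and the sine signals are synthetic.

```python
import numpy as np

def magnitude_metric(ref, sig):
    """Relative RMS discrepancy between two time histories (0 = identical)."""
    return np.linalg.norm(sig - ref) / np.linalg.norm(ref)

def phase_metric(ref, sig, dt=1.0):
    """Time lag (in units of dt) that maximizes the cross-correlation,
    i.e. a crude measure of the phase discrepancy between two signals."""
    c = np.correlate(sig - sig.mean(), ref - ref.mean(), mode="full")
    lag = np.argmax(c) - (len(ref) - 1)
    return lag * dt

# Two synthetic 1 Hz "aerodynamic response" histories: identical except for
# a 0.1 s time shift (placeholder data, not bridge simulation output).
t = np.linspace(0, 10, 1000)
ref = np.sin(2 * np.pi * 1.0 * t)
shifted = np.sin(2 * np.pi * 1.0 * (t - 0.1))

m = magnitude_metric(ref, ref)              # 0 for identical signals
lag = phase_metric(ref, shifted, dt=t[1] - t[0])
```

A pointwise magnitude metric alone would heavily penalize the shifted signal even though its content is identical, which is precisely why the paper combines metrics for phase, frequency content, magnitude, and other features.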
The settlement and housing policy practiced in Germany in 2020 leaves a bitter aftertaste in view of its social and ecological consequences. Rich and poor continue to drift apart, and historically and systemically rooted path dependencies still stand in the way of a targeted ecological transformation of the way urban development and housing policy are shaped. These dependencies become visible only through an integrated consideration of social and economic aspects, and they point to one of the original questions of left-wing social research: the examination of the relationship between property and justice.
The result is three key findings: first, the discourse on protecting the climate and biodiversity directly touches the parameters of density, mixed use, and land consumption; second, land consumption rises relative to the amount of individually available capital, particularly in owner-occupied housing compared with rental housing; and third, the share of ownership grows as the financialization of the housing market advances, so that the risk of social and ecological crises intensifies.
The latest earthquakes have proven that many existing buildings, particularly in developing countries, are not protected against earthquake damage. A variety of statistical and machine-learning approaches have been proposed to identify vulnerable buildings for the prioritization of retrofitting. The present work investigates earthquake susceptibility through the combination of six building performance variables that can be used to obtain an optimal prediction of the damage state of reinforced concrete buildings using an artificial neural network (ANN). To this end, a multi-layer perceptron network is trained and optimized using a database of 484 buildings damaged in the Düzce earthquake in Turkey. The results demonstrate the feasibility and effectiveness of the selected ANN approach for classifying concrete structural damage, which can serve as a preliminary assessment technique to identify vulnerable buildings in disaster risk-management programs.
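A multi-layer perceptron classifier of the kind described above can be sketched from scratch. This is a minimal illustration, not the paper's network or data: the six performance variables are replaced by synthetic features, and a single tanh hidden layer is trained by full-batch gradient descent on the cross-entropy loss.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_mlp(X, y, hidden=8, lr=0.5, epochs=500, seed=0):
    """Tiny one-hidden-layer perceptron for binary damage classification."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    W1 = rng.normal(0, 0.5, (d, hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(0, 0.5, hidden);      b2 = 0.0
    for _ in range(epochs):
        H = np.tanh(X @ W1 + b1)            # hidden-layer activations
        p = sigmoid(H @ W2 + b2)            # predicted P(damaged)
        g = (p - y) / n                     # gradient of loss w.r.t. output logit
        W2 = W2 - lr * (H.T @ g); b2 = b2 - lr * g.sum()
        gH = np.outer(g, W2) * (1 - H ** 2) # backprop through tanh
        W1 = W1 - lr * (X.T @ gH); b1 = b1 - lr * gH.sum(axis=0)
    return W1, b1, W2, b2

def predict(params, X):
    W1, b1, W2, b2 = params
    return (sigmoid(np.tanh(X @ W1 + b1) @ W2 + b2) > 0.5).astype(int)

# Synthetic stand-in for the six performance variables; 1 = damaged, 0 = safe.
rng = np.random.default_rng(2)
X = np.vstack([rng.normal(0.8, 0.6, (150, 6)), rng.normal(-0.8, 0.6, (150, 6))])
y = np.array([1] * 150 + [0] * 150)
acc = (predict(train_mlp(X, y), X) == y).mean()
```

In practice, the training set would be the 484 surveyed buildings, the class boundary would be far less clean, and held-out evaluation would be essential; the sketch only shows the mechanics.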
The performance of ductless personalized ventilation (DPV) was compared to the performance of a typical desk fan since they are both stand-alone systems that allow the users to personalize their indoor environment. The two systems were evaluated using a validated computational fluid dynamics (CFD) model of an office room occupied by two users. To investigate the impact of DPV and the fan on the inhaled air quality, two types of contamination sources were modelled in the domain: an active source and a passive source. Additionally, the influence of the compared systems on thermal comfort was assessed using the coupling of CFD with the comfort model developed by the University of California, Berkeley (UCB model). Results indicated that DPV performed generally better than the desk fan. It provided better thermal comfort and showed a superior performance in removing the exhaled contaminants. However, the desk fan performed better in removing the contaminants emitted from a passive source near the floor level. This indicates that the performance of DPV and desk fans depends highly on the location of the contamination source. Moreover, the simulations showed that both systems increased the spread of exhaled contamination when used by the source occupant.
In this paper, an artificial neural network is implemented to predict the thermal conductivity ratio of TiO2-Al2O3/water nanofluid. TiO2-Al2O3/water, as an innovative type of nanofluid, was synthesized by the sol-gel method. The results indicated that 1.5 vol.% of nanoparticles enhanced the thermal conductivity by up to 25%. The heat transfer coefficient increased linearly with nanoparticle concentration, but its variation with temperature was nonlinear. It should be noted that an increase in concentration may cause the particles to agglomerate, which reduces the thermal conductivity. An increase in temperature also increases the thermal conductivity, due to stronger Brownian motion and more frequent particle collisions. To predict the thermal conductivity of the TiO2-Al2O3/water nanofluid as a function of volumetric concentration and temperature, SOM (self-organizing map) and BP-LM (Back-Propagation Levenberg-Marquardt) algorithms were used. Based on the results obtained, these algorithms can be considered an exceptional tool for predicting thermal conductivity. The correlation coefficients were 0.938 and 0.98 for the SOM and BP-LM algorithms, respectively, which is highly acceptable.
Why Do Digital Native News Media Fail? An Investigation of Failure in the Early Start-Up Phase
(2020)
Digital native news media have great potential for improving journalism. Theoretically, they can be the sites where new products, novel revenue streams and alternative ways of organizing digital journalism are discovered, tested, and advanced. In practice, however, the situation appears to be more complicated. Besides the normal pressures facing new businesses, entrepreneurs in digital news are faced with specific challenges. Against the background of general and journalism specific entrepreneurship literature, and in light of a practice–theoretical approach, this qualitative case study research on 15 German digital native news media outlets empirically investigates what barriers curb their innovative capacity in the early start-up phase. In the new media organizations under study here, there are—among other problems—a high degree of homogeneity within founding teams, tensions between journalistic and economic practices, insufficient user orientation, as well as a tendency for organizations to be underfinanced. The patterns of failure investigated in this study can raise awareness, help news start-ups avoid common mistakes before actually entering the market, and help industry experts and investors to realistically estimate the potential of new ventures within the digital news industry.
Pressure fluctuations beneath hydraulic jumps can endanger the stability of stilling basins. This paper deals with the mathematical modeling of laboratory-scale experimental results to estimate extreme pressures. Experiments were carried out on a smooth stilling basin beneath free hydraulic jumps downstream of an Ogee spillway. From the probability distribution of the measured instantaneous pressures, pressures with different probabilities could be determined. It was verified that the maximum pressure fluctuations, and the negative pressures, occur near the spillway toe, while the minimum pressure fluctuations occur downstream of the hydraulic jumps. The cumulative curves of the pressure data were assessed for the characteristic points along the basin and for different Froude numbers. To benchmark the results, the dimensionless forms of the statistical parameters were assessed, including the mean pressures (P*m), the standard deviations of the pressure fluctuations (σ*X), the pressures with different non-exceedance probabilities (P*k%), and the statistical coefficient of the probability distribution (Nk%). It was found that an existing method can be used to interpret the present data, and the pressure distribution under similar conditions, using new second-order fractional relationships for σ*X and Nk%. The values of the Nk% coefficient indicated a single mean value for each probability.
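The statistical quantities above can be computed from a pressure record with empirical quantiles. The sketch below uses a synthetic Gaussian record in place of real transducer data, and it assumes the common reading of the coefficient as Nk% = (Pk% − Pm)/σX; both the data and that relation are illustrative assumptions, not the paper's measurements or definition.

```python
import numpy as np

# Synthetic instantaneous-pressure record (kPa) standing in for a transducer
# signal beneath a hydraulic jump; real records are generally non-Gaussian,
# but the quantile mechanics are identical.
rng = np.random.default_rng(0)
p = rng.normal(loc=20.0, scale=5.0, size=20_000)

p_mean = p.mean()            # mean pressure P_m
sigma_x = p.std()            # standard deviation of the fluctuations

# Pressures with k% non-exceedance probability, e.g. P_1%, P_50%, P_99%
P = {k: np.quantile(p, k / 100) for k in (1, 50, 99)}

# Assumed relation: P_k% = P_m + N_k% * sigma_X  =>  N_k% = (P_k% - P_m) / sigma_X
N = {k: (P[k] - p_mean) / sigma_x for k in (1, 50, 99)}
```

For a Gaussian record, N99% ≈ 2.33 and N1% ≈ −2.33; deviations of the measured Nk% from these values are one way the non-Gaussian character of jump pressures shows up.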
Along with environmental pollution, urban planning has been connected to public health. Research indicates that the quality of built environments plays an important role in reducing mental disorders and improving overall health. The structure and shape of the city are considered among the factors influencing happiness and health in urban communities and the type of daily activities of citizens. The aim of this study was to promote physical activity through the main structure of the city via urban design, such that the main form and morphology of the city encourage citizens to move around and be physically active within it. Based on the literature review, functional, physical, cultural-social, and perceptual-visual features are regarded as the most important and effective criteria for increasing physical activity in urban spaces. The environmental quality of urban spaces and their role in citizens' physical activity were assessed using a questionnaire and the analytical network process (ANP) of structural equation modeling. Further, the space syntax method was utilized to evaluate the role of the spatial integration of urban spaces in improving physical activity. Based on the results, consideration of functional diversity, spatial flexibility and integration, security, and the aesthetic and visual quality of urban spaces plays an important role in improving the physical health of citizens. More physical activity, including motivation for walking and a sense of public health and happiness, was observed in streets with higher linkage and space syntax indexes relative to their surrounding fabric.