A new large‐field, high‐sensitivity, single‐mirror coincident schlieren optical instrument has been installed at the Bauhaus‐Universität Weimar for the purpose of indoor air research. Its performance is assessed by the non‐intrusive measurement of the thermal plume of a heated manikin. The schlieren system produces excellent qualitative images of the manikin's thermal plume and also quantitative data, especially schlieren velocimetry of the plume's velocity field that is derived from the digital cross‐correlation analysis of a large time sequence of schlieren images. The quantitative results are compared with thermistor and hot‐wire anemometer data obtained at discrete points in the plume. Good agreement is obtained, once the differences between path‐averaged schlieren data and planar anemometry data are reconciled.
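The core of the schlieren velocimetry mentioned above is digital cross-correlation of successive images: the correlation peak between two frames gives the displacement of the refractive disturbance, and dividing by the frame interval yields velocity. A minimal illustrative sketch of that correlation step on synthetic data (not the instrument's actual processing pipeline) could look like this:

```python
import numpy as np

def shift_by_correlation(frame_a, frame_b):
    """Estimate the integer pixel shift between two image windows via
    FFT-based cross-correlation (the core step of schlieren velocimetry)."""
    a = frame_a - frame_a.mean()
    b = frame_b - frame_b.mean()
    # Circular cross-correlation computed in the Fourier domain
    corr = np.fft.ifft2(np.fft.fft2(a) * np.conj(np.fft.fft2(b))).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Wrap shifts larger than half the window into negative values
    shift = [p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape)]
    return tuple(shift)

# Synthetic test: shift a random pattern by (3, -2) pixels
rng = np.random.default_rng(0)
img = rng.random((64, 64))
shifted = np.roll(img, shift=(3, -2), axis=(0, 1))
print(shift_by_correlation(shifted, img))  # (3, -2)
```

In practice this is applied window-by-window over a large time sequence of schlieren images, with sub-pixel peak interpolation, to recover the plume's velocity field.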
Wind effects can be critical for the design of lifelines such as long-span bridges. The existence of a significant number of aerodynamic force models, used to assess the performance of bridges, poses an important question regarding their comparison and validation. This study utilizes a unified set of metrics for a quantitative comparison of time-histories in bridge aerodynamics with a host of characteristics. Accordingly, nine comparison metrics are included to quantify the discrepancies in local and global signal features such as phase, time-varying frequency and magnitude content, probability density, nonstationarity and nonlinearity. Among these, seven metrics available in the literature are introduced after recasting them for time-histories associated with bridge aerodynamics. Two additional metrics are established to overcome the shortcomings of the existing metrics. The performance of the comparison metrics is first assessed using generic signals with prescribed signal features. Subsequently, the metrics are applied to a practical example from bridge aerodynamics to quantify the discrepancies in the aerodynamic forces and response based on numerical and semi-analytical aerodynamic models. In this context, it is demonstrated how a discussion based on the set of comparison metrics presented here can aid a model evaluation by offering deeper insight. The outcome of the study is intended to provide a framework for quantitative comparison and validation of aerodynamic models based on the underlying physics of fluid-structure interaction. Immediate further applications are expected for the comparison of time-histories that are simulated by data-driven approaches.
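Two of the simpler discrepancy measures in such a metric set, one for magnitude and one for phase, can be sketched as follows. The function names and synthetic signals are illustrative, not the paper's actual metric definitions:

```python
import numpy as np

def magnitude_metric(ref, cmp):
    """Relative RMS discrepancy between two time-histories (0 = identical)."""
    return np.sqrt(np.mean((ref - cmp) ** 2)) / np.sqrt(np.mean(ref ** 2))

def phase_metric(ref, cmp, dt):
    """Time lag (seconds) maximizing the cross-correlation: a simple
    surrogate for a phase-discrepancy metric."""
    corr = np.correlate(cmp, ref, mode="full")
    lag = np.argmax(corr) - (len(ref) - 1)
    return lag * dt

t = np.linspace(0, 10, 1001)
dt = t[1] - t[0]
ref = np.sin(2 * np.pi * 1.0 * t)                  # "reference" response
cmp = 0.9 * np.sin(2 * np.pi * 1.0 * (t - 0.05))   # attenuated, delayed model output
print(magnitude_metric(ref, cmp))
print(phase_metric(ref, cmp, dt))  # ~0.05 s delay
```

Metrics of this kind quantify one local signal feature each, which is why a whole set of them is needed to compare aerodynamic force time-histories meaningfully.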
Evaporation is a very important process; it is one of the most critical factors in agricultural, hydrological, and meteorological studies. Due to the interactions of multiple climatic factors, evaporation is considered a complex and nonlinear phenomenon to model. Thus, machine learning methods have gained popularity in this realm. In the present study, four machine learning methods, namely Gaussian Process Regression (GPR), K-Nearest Neighbors (KNN), Random Forest (RF), and Support Vector Regression (SVR), were used to predict pan evaporation (PE). Meteorological data, including PE, temperature (T), relative humidity (RH), wind speed (W), and sunshine hours (S), were collected from 2011 through 2017. The accuracy of the studied methods was determined using the statistical indices of Root Mean Squared Error (RMSE), correlation coefficient (R), and Mean Absolute Error (MAE). Furthermore, Taylor charts were utilized to evaluate the accuracy of the models. The results showed that at the Gonbad-e Kavus, Gorgan, and Bandar Torkman stations, GPR with RMSE of 1.521 mm/day, 1.244 mm/day, and 1.254 mm/day, KNN with RMSE of 1.991 mm/day, 1.775 mm/day, and 1.577 mm/day, RF with RMSE of 1.614 mm/day, 1.337 mm/day, and 1.316 mm/day, and SVR with RMSE of 1.55 mm/day, 1.262 mm/day, and 1.275 mm/day performed appropriately in estimating PE values. It was found that GPR for Gonbad-e Kavus Station with input parameters of T, W, and S, and GPR for the Gorgan and Bandar Torkman stations with input parameters of T, RH, W, and S, produced the most accurate predictions and were proposed for precise estimation of PE. The findings of the current study indicate that PE values may be accurately estimated with a few easily measured meteorological parameters.
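A comparison of the four regressor families named above can be sketched with scikit-learn. The model names follow the abstract; the synthetic data, hyperparameters, and variable names are illustrative stand-ins, not the study's actual data or tuning:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.neighbors import KNeighborsRegressor
from sklearn.ensemble import RandomForestRegressor
from sklearn.svm import SVR
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error, mean_absolute_error

rng = np.random.default_rng(1)
# Synthetic stand-ins for T, RH, W, S and the target PE
X = rng.random((300, 4))
y = 3 * X[:, 0] - 1.5 * X[:, 1] + 0.5 * X[:, 2] + 0.1 * rng.normal(size=300)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

models = {
    "GPR": GaussianProcessRegressor(),
    "KNN": KNeighborsRegressor(),
    "RF": RandomForestRegressor(random_state=0),
    "SVR": SVR(),
}
results = {}
for name, model in models.items():
    pred = model.fit(X_tr, y_tr).predict(X_te)
    rmse = mean_squared_error(y_te, pred) ** 0.5
    mae = mean_absolute_error(y_te, pred)
    r = np.corrcoef(y_te, pred)[0, 1]
    results[name] = rmse
    print(f"{name}: RMSE={rmse:.3f} MAE={mae:.3f} R={r:.3f}")
```

The same three indices (RMSE, MAE, R) then feed a station-by-station ranking of the models, as done in the study.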
Due to the importance of identifying crop cultivars, the advancement of accurate assessment of cultivars is considered essential. The existing methods for identifying rice cultivars are mainly time-consuming, costly, and destructive. Therefore, the development of novel methods is highly beneficial. The aim of the present research is to classify common rice cultivars in Iran based on color, morphologic, and texture properties using artificial intelligence (AI) methods. In doing so, digital images of 13 rice cultivars in Iran in three forms, paddy, brown, and white, are analyzed through pre-processing and segmentation using MATLAB. Ninety-two features, comprising 60 color, 14 morphologic, and 18 texture properties, were identified for each rice cultivar. In the next step, the normal distribution of the data was evaluated, and the possibility of observing a significant difference between all features of the cultivars was studied using analysis of variance. In addition, the least significant difference (LSD) test was performed to obtain a more accurate comparison between cultivars. To reduce data dimensions and focus on the most effective components, principal component analysis (PCA) was employed. Accordingly, the accuracy of rice cultivar separation was calculated for paddy, brown rice, and white rice using discriminant analysis (DA), which was 89.2%, 87.7%, and 83.1%, respectively. To identify and classify the desired cultivars, a multilayer perceptron neural network was implemented based on the most effective components. The results showed 100% accuracy of the network in identifying and classifying all mentioned rice cultivars. Hence, it is concluded that the integrated method of image processing and pattern recognition, such as statistical classification and artificial neural networks, can be used for the identification and classification of rice cultivars.
The effect of urban form on energy consumption has been the subject of various studies around the world. Having examined the effect of buildings on energy consumption, these studies indicate that the physical form of a city has a notable impact on the amount of energy consumed in its spaces. The present study identified the variables that affected energy consumption in residential buildings and analyzed their effects on energy consumption in four neighborhoods in Tehran: Apadana, Bimeh, Ekbatan-phase I, and Ekbatan-phase II. After extracting the variables, their effects are estimated with statistical methods, and the results are compared with the land surface temperature (LST) remote sensing data derived from Landsat 8 satellite images taken in the winter of 2019. The results showed that physical variables, such as the size of buildings, population density, vegetation cover, texture concentration, and surface color, have the greatest impacts on energy usage. For the Apadana neighborhood, the factors with the most potent effect on energy consumption were found to be the size of buildings and the population density. However, for other neighborhoods, in addition to these two factors, a third factor was also recognized to have a significant effect on energy consumption. This third factor for the Bimeh, Ekbatan-I, and Ekbatan-II neighborhoods was the type of buildings, texture concentration, and orientation of buildings, respectively.
Calculating the solubility of the hydrocarbon components of natural gases is known as one of the important issues in operational work in petroleum and chemical engineering. In this work, a novel solubility estimation tool is proposed for hydrocarbon gases—including methane, ethane, propane, and butane—in aqueous electrolyte solutions, based on the extreme learning machine (ELM) algorithm. Comparing the ELM outputs with a comprehensive real databank containing 1175 solubility points yielded R-squared values of 0.985 and 0.987 for the training and testing phases, respectively. Furthermore, the visual comparison of estimated and actual hydrocarbon solubility confirmed the ability of the proposed solubility model. Additionally, a sensitivity analysis was performed on the input variables of the model to identify their impacts on hydrocarbon solubility. Such a comprehensive and reliable study can help engineers and scientists to successfully determine important thermodynamic properties, which are key factors in optimizing and designing different industrial units such as refineries and petrochemical plants.
Carrier-bound titanium dioxide catalysts were used in a photocatalytic ozonation reactor for the degradation of micro-pollutants in real wastewater. A photocatalytic immersion rotary body reactor with a 36-cm disk diameter was used and was irradiated using UV-A light-emitting diodes. The rotating disks were covered with catalysts based on stainless steel grids coated with titanium dioxide. The dosing of ozone was carried out through the liquid phase via an external enrichment and supply system transverse to the flow direction. The influence of irradiation power and ozone dose on the degradation rate for photocatalytic ozonation was investigated. In addition, the performances of the individual processes, photocatalysis and ozonation, were studied. The degradation kinetics of the parent compounds were determined using liquid chromatography tandem mass spectrometry. First-order kinetics were determined for photocatalysis and photocatalytic ozonation. A maximum reaction rate of the reactor was determined, which could be achieved by both photocatalysis and photocatalytic ozonation. At a dosage of 0.4 mg/mg DOC, the maximum reaction rate could be achieved using 75% of the irradiation power required for photocatalysis alone, allowing increases in the energetic efficiency of photocatalytic wastewater treatment processes. The process of photocatalytic ozonation is suitable to remove a wide spectrum of micro-pollutants from wastewater.
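The first-order kinetics mentioned above imply ln(c/c0) = -k t, so the rate constant k falls out of a linear regression of log-concentration against time. A short sketch on synthetic concentration data (the numbers are illustrative, not the study's measurements):

```python
import numpy as np

# Illustrative concentration-time series, normalized to c/c0
t = np.array([0.0, 5.0, 10.0, 20.0, 30.0])   # time, min
c = np.array([1.0, 0.61, 0.37, 0.14, 0.05])  # c/c0

# First-order kinetics: ln(c/c0) = -k t, so k is minus the slope
k = -np.polyfit(t, np.log(c), 1)[0]
print(k)  # ~0.1 per minute for this synthetic series
```

Comparing such fitted rate constants between photocatalysis, ozonation, and their combination is how the degradation performance of the processes is quantified.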
A vast number of existing buildings were constructed before the development and enforcement of seismic design codes and therefore run the risk of being severely damaged under seismic excitations. This poses not only a threat to human life but also affects the socio-economic stability of the affected area. Therefore, it is necessary to assess the present vulnerability of such buildings to make an educated decision regarding risk mitigation by seismic strengthening techniques such as retrofitting. However, it is not feasible, in terms of both cost and time, to inspect, repair, and augment every old building on an urban scale. As a result, reliable rapid screening methods, such as Rapid Visual Screening (RVS), have garnered increasing interest among researchers and decision-makers alike. In this study, the effectiveness of five different Machine Learning (ML) techniques in vulnerability prediction applications has been investigated. The damage data of four different earthquakes, from Ecuador, Haiti, Nepal, and South Korea, have been utilized to train and test the developed models. Eight performance modifiers have been implemented as variables with supervised ML. The investigations in this paper illustrate that the vulnerability classes assessed by the ML techniques were very close to the actual damage levels observed in the buildings.
The growing complexity of modern practical problems puts high demand on mathematical modelling. Given that various models can be used to model one physical phenomenon, the role of model comparison and model choice is becoming particularly important. Methods for model comparison and model choice typically used in practical applications nowadays are computation-based, and thus time-consuming and computationally costly. Therefore, it is necessary to develop other approaches to working abstractly, i.e., without computations, with mathematical models. An abstract description of mathematical models can be achieved with the help of abstract mathematics, implying a formalisation of models and the relations between them. In this paper, a category theory-based approach to mathematical modelling is proposed. In this way, mathematical models are formalised in the language of categories, relations between the models are formally defined, and several practically relevant properties are introduced at the level of categories. Finally, an illustrative example is presented, underlining how the category theory-based approach can be used in practice. Furthermore, all constructions presented in this paper are also discussed from a modelling point of view by making explicit the link to concrete modelling scenarios.
The derivation of nonlocal strong forms for many physical problems remains cumbersome in traditional methods. In this paper, we apply the variational principle/weighted residual method based on the nonlocal operator method to derive nonlocal forms for elasticity, thin plates, gradient elasticity, electro-magneto-elasticity, and the phase-field fracture method. The nonlocal governing equations are expressed as an integral form on support and dual-support. The first example shows that the nonlocal elasticity has the same form as dual-horizon non-ordinary state-based peridynamics. The derivation is simple and general, and it can efficiently convert many local physical models into their corresponding nonlocal forms. In addition, a criterion based on the instability of the nonlocal gradient is proposed for fracture modelling in linear elasticity. Several numerical examples are presented to validate the nonlocal elasticity and the nonlocal thin plate.
Personalized ventilation (PV) is a means of delivering conditioned outdoor air into the breathing zone of the occupants. This study aims to qualitatively investigate the personalized flows using two methods of visualization: (1) schlieren imaging using a large schlieren mirror and (2) thermography using an infrared camera. While the schlieren imaging was used to render the velocity and mass transport of the supplied flow, thermography was implemented to visualize the air temperature distribution induced by the PV. Both studies were conducted using a thermal manikin to simulate an occupant facing a PV outlet. As a reference, the flow supplied by an axial fan and a cased axial fan was also visualized with the schlieren system and compared to the flow supplied by PV. The schlieren visualization results indicate that the steady, low-turbulence flow supplied by PV was able to penetrate the thermal convective boundary layer encasing the manikin's body, providing clean air for inhalation. Contrarily, the axial fan diffused the supplied air over a large target area with high turbulence intensity; it only disturbed the convective boundary layer rather than destroying it. The cased fan supplied a flow with a reduced target area, which allowed supplying more air into the breathing zone compared to the open fan. The results of the thermography visualization showed that the supplied cool air from the PV penetrated the corona-shaped thermal boundary layer. Furthermore, the supplied air cooled the surface temperature of the face, which indicates the large impact of PV on local thermal sensation and comfort.
Electric trains are considered one of the most eco-friendly and safest means of transportation. Catenary poles are used worldwide to support overhead power lines for electric trains. The performance of the catenary poles has an extensive influence on the integrity of the train systems and, consequently, the connected human services. Nowadays, it is essential to develop structural health monitoring (SHM) systems that provide the instantaneous status of catenary poles in service, making the decision-making processes to keep or repair damaged poles more feasible. This study develops a data-driven, model-free approach for status monitoring of cantilever structures, focusing on pre-stressed, spun-cast ultrahigh-strength concrete catenary poles installed along high-speed train tracks. The proposed approach evaluates multiple damage features in a unified damage index, which leads to straightforward interpretation and comparison of the output. Besides, it distinguishes between multiple damage scenarios of the poles, whether caused by material degradation of the concrete or by cracks that may propagate during the life span of the structure. Moreover, using a logistic function to classify the integrity of the structure avoids the expensive learning step of existing damage detection approaches, namely, using modern machine and deep learning methods. The findings of this study look very promising for application to other types of cantilever structures, such as the poles that support power transmission lines, antenna masts, chimneys, and wind turbines.
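The combination of several damage features into one index, mapped to a damage probability by a logistic function, can be sketched as follows. The features, weights, threshold, and steepness here are illustrative placeholders, not the paper's calibrated values:

```python
import numpy as np

def damage_probability(features, weights, threshold=0.5, steepness=10.0):
    """Weighted unified damage index passed through a logistic function."""
    index = np.dot(weights, features) / np.sum(weights)
    return 1.0 / (1.0 + np.exp(-steepness * (index - threshold)))

# Three normalized, hypothetical features, e.g. frequency shift,
# mode-shape change, and damping change
healthy = np.array([0.05, 0.10, 0.08])
damaged = np.array([0.70, 0.85, 0.60])
w = np.array([1.0, 1.0, 1.0])
print(damage_probability(healthy, w))  # close to 0
print(damage_probability(damaged, w))  # close to 1
```

Because the logistic mapping has only a handful of parameters, no training dataset of labeled damage cases is needed, which is the advantage over learning-based classifiers noted in the abstract.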
In this article, I show why it is necessary to abolish the use of predictive algorithms in the US criminal justice system at sentencing. After presenting the functioning of these algorithms in their context of emergence, I offer three arguments to demonstrate why their abolition is imperative. First, I show that sentencing based on predictive algorithms induces a process of rewriting the temporality of the judged individual, flattening their life into a present inescapably doomed by its past. Second, I demonstrate that recursive processes, comprising predictive algorithms and the decisions based on their predictions, systematically suppress outliers and progressively transform reality to match predictions. In my third and final argument, I show that decisions made on the basis of predictive algorithms actively perform a biopolitical understanding of justice as management and modulation of risks. In such a framework, justice becomes a means to maintain a perverse social homeostasis that systematically exposes disenfranchised Black and Brown populations to risk.
Scaling of concrete due to salt frost attack is an important durability issue in moderate and cold climates. The actual damage mechanism is still not completely understood. Two recent damage theories—the glue spall theory and the cryogenic suction theory—offer plausible but conflicting explanations for the salt frost scaling mechanism. The present study deals with the cryogenic suction theory, which assumes that freezing concrete can take up unfrozen brine from a partly frozen deicing solution during salt frost attack. According to the model hypothesis, the resulting saturation of the concrete surface layer intensifies the ice formation in this layer and causes salt frost scaling. In this study, an experimental technique was developed that makes it possible to quantify to what extent brine uptake can increase ice formation in hardened cement paste (used as a model material for concrete). The experiments were carried out with low-temperature differential scanning calorimetry, where specimens were subjected to freeze–thaw cycles while in contact with NaCl brine. Results showed that the ice content in the specimens increased with subsequent freeze–thaw cycles due to the brine uptake at temperatures below 0 °C. The ability of the hardened cement paste to bind chlorides from the absorbed brine at the same time affected the freezing/melting behavior of the pore solution and the magnitude of the ice content.
Marine macroalgae such as Ulva intestinalis have promising properties as feedstock for cosmetics and pharmaceuticals. However, since the quantity and quality of naturally grown algae vary widely, their exploitability is reduced – especially for producers in high-priced markets. Moreover, the expansion of marine or shore-based cultivation systems is unlikely in Europe, since promising sites either lie in fishing zones, recreational areas, or natural reserves. The aim was therefore to develop a closed photobioreactor system enabling full control of abiotic environmental parameters and an effective reconditioning of the cultivation medium in order to produce marine macroalgae at sites distant from the shore. To assess the feasibility and functionality of the chosen technological concept, a prototypal plant has been implemented in central Germany – a site distant from the sea. Using a newly developed, submersible LED light source, cultivation experiments with Ulva intestinalis led to growth rates of 7.72 ± 0.04 % day−1 in a cultivation cycle of 28 days. Based on the space demand of the production system, this results in fresh mass productivity of 3.0 kg m−2, respectively, of 1.1 kg m−2 per year. Also considering the ratio of biomass to energy input amounting to 2.76 g kWh−1, significant future improvements of the developed photobioreactor system should include the optimization of growth parameters, and the reduction of the system’s overall energy demand.
Antimicrobial resistance (AMR) is identified by the World Health Organization (WHO) as one of the top ten threats to public health worldwide. In addition to public health, AMR also poses a major threat to food security and economic development. Current sanitation systems contribute to the emergence and spread of AMR and lack effective AMR mitigation measures. This study assesses source separation of blackwater as a mitigation measure against AMR. A source-separation-modified combined sanitation system with separate collection of blackwater and graywater is conceptually described. Measures taken at the source, such as the separate collection and discharge of material flows, have so far not been considered on a load balance basis, i.e., they have not yet been evaluated for their effectiveness. The sanitation system described is compared with a combined system and a separate system regarding AMR emissions by means of simulation. AMR is represented in the simulation model by one proxy parameter each for antibiotics (sulfamethoxazole), antibiotic-resistant bacteria (extended-spectrum beta-lactamase E. coli), and antibiotic resistance genes (blaTEM). The simulation results suggest that the source-separation-based sanitation system reduces emissions of antibiotic-resistant bacteria and antibiotic resistance genes into the aquatic environment by more than six logarithm steps compared to combined systems. Sulfamethoxazole emissions can be reduced by 75.5% by keeping blackwater separate from graywater and treating it sufficiently. In summary, sanitation systems incorporating source separation are, to date, among the most effective means of preventing the emission of AMR into the aquatic environment.
Bolted connections are commonly used in steel construction. The load-bearing behavior of bolted fittings has been studied extensively in various research activities, and the bearing capacity of bolted connections can be assessed well by standard regulations for practical applications. With regard to tensile loading, the nut does not have a strong influence on the resistance, since failure occurs in the bolt due to the higher material strength of the nut. In some applications, so-called “blind holes” are used to connect plated components. In a manner of speaking, the nut is replaced by the “outer” plate with a prefabricated hole and thread, into which the bolt can be screwed and tightened. In such connections, the limit load capacity cannot be assessed by the bolt resistance alone, since the threaded hole in the base material has a strong influence on the structural behavior. In this context, the available screw-in depth of the blind hole is of fundamental importance. The German National Annex of EN 1993-1-8 provides information on the depth necessary to transfer the full tensile capacity of the bolt. However, some connections do not allow such depths to be fabricated. In these cases, the capacity of the connection is unclear and not specified. In this paper, first experiments on corresponding connections with different screw-in depths are presented and compared to the limit load capacities according to the standard.
Besides their multiple known benefits regarding urban microclimate, living walls can be used as decentralized stand-alone systems to treat greywater locally at the buildings. While this offers numerous environmental advantages, it can have a considerable impact on the hygrothermal performance of the facade as such systems involve bringing large quantities of water onto the facade. As it is difficult to represent complex entities such as plants in the typical simulation tools used for heat and moisture transport, this study suggests a new approach to tackle this challenge by coupling two tools: ENVI-Met and Delphin. ENVI-Met was used to simulate the impact of the plants to determine the local environmental parameters at the living wall. Delphin, on the other hand, was used to conduct the hygrothermal simulations using the local parameters calculated by ENVI-Met. Four wall constructions were investigated in this study: an uninsulated brick wall, a precast concrete plate, a sandy limestone wall, and a double-shell wall. The results showed that the living wall improved the U-value, the exterior surface temperature, and the heat flux through the wall. Moreover, the living wall did not increase the risk of moisture in the wall during winter and eliminated the risk of condensation.
Few studies have investigated how search behavior affects complex writing tasks. We analyze a dataset of 150 long essays whose authors searched the ClueWeb09 corpus for source material, while all querying, clicking, and writing activity was meticulously recorded. We model the effect of search and writing behavior on essay quality using path analysis. Since the boil-down and build-up writing strategies identified in previous research have been found to affect search behavior, we model each writing strategy separately. Our analysis shows that the search process contributes significantly to essay quality through both direct and mediated effects, while the author's writing strategy moderates this relationship. Our models explain 25–35% of the variation in essay quality through rather simple search and writing process characteristics alone, a fact that has implications for how search engines could personalize result pages for writing tasks. Authors' writing strategies and associated searching patterns differ, producing differences in essay quality. In a nutshell: essay quality improves if search and writing strategies harmonize—build-up writers benefit from focused, in-depth querying, while boil-down writers fare better with a broader and shallower querying strategy.
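The direct-versus-mediated effect decomposition at the heart of path analysis can be illustrated with ordinary least squares on synthetic data. The variable roles (X = a search feature, M = a mediator, Y = essay quality) mirror the setup above, but all data and coefficients here are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 500
X = rng.normal(size=n)                       # search-behavior feature
M = 0.6 * X + rng.normal(scale=0.5, size=n)  # mediator influenced by X
Y = 0.3 * X + 0.5 * M + rng.normal(scale=0.5, size=n)  # essay quality

def ols(y, *cols):
    """Least-squares slopes (intercept dropped) of y on the given columns."""
    A = np.column_stack([np.ones(len(y)), *cols])
    return np.linalg.lstsq(A, y, rcond=None)[0][1:]

a, = ols(M, X)               # path X -> M
c_direct, b = ols(Y, X, M)   # direct path X -> Y, and path M -> Y
print(c_direct)              # direct effect, ~0.3
print(a * b)                 # mediated (indirect) effect, ~0.3
```

Full path analysis fits such a system of equations jointly and tests the path coefficients, but the decomposition of a total effect into direct and indirect parts is exactly this.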
In this paper we present a theoretical background for a coupled analytical–numerical approach to model a crack propagation process in two-dimensional bounded domains. The goal of the coupled analytical–numerical approach is to obtain the correct solution behaviour near the crack tip with the help of an analytical solution constructed using tools of complex function theory, and to couple it continuously with the finite element solution in the region far from the singularity. In this way, crack propagation can be modelled without remeshing. Possible directions of crack growth can be calculated through the minimization of the total energy composed of the potential energy and the dissipated energy based on the energy release rate. Within this setting, an analytical solution of a mixed boundary value problem based on complex analysis and conformal mapping techniques is presented for a circular region containing an arbitrary crack path. More precisely, the linear elastic problem is transformed into a Riemann–Hilbert problem in the unit disk for holomorphic functions. Utilising the advantages of the analytical solution in the region near the crack tip, the total energy can be evaluated within short computation times for various crack kink angles and lengths, leading to a potentially efficient way of computing the minimization procedure. To this end, the paper presents a general strategy of the new coupled approach for crack propagation modelling. Additionally, we also discuss obstacles in the way of a practical realisation of this strategy.
In this work, extensive reactive molecular dynamics simulations are conducted to analyze nanopore creation by nanoparticle impact on single-layer molybdenum disulfide (MoS2) with 1T and 2H phases. We also compare the results with a graphene monolayer. In our simulations, the nanosheets are exposed to a spherical rigid carbon projectile with high initial velocities ranging from 2 to 23 km/s. Results for the three different structures are compared to examine the most critical factors in the perforation and the resistance force during impact. To analyze the perforation and impact resistance, the kinetic energy and displacement time history of the projectile, as well as the perforation resistance force on the projectile, are investigated.
Interestingly, although the elastic modulus and tensile strength of graphene are almost five times higher than those of MoS2, the results demonstrate that the 1T and 2H MoS2 phases are more resistant to impact loading and perforation than graphene. For the MoS2 nanosheets, we find that the 2H phase is more resistant to impact loading than the 1T counterpart.
Our reactive molecular dynamics results highlight that, in addition to strength and toughness, the atomic structure is another crucial factor that can contribute substantially to the impact resistance of 2D materials. The obtained results can be useful to guide experimental setups for nanopore creation in MoS2 or other 2D lattices.
As a result of international refugee movements along the so-called Balkan route, a so-called Refugee District has emerged in Serbia's capital Belgrade in recent years. In the context of migration and flight, numerous fields of tension become visible at various spatial and political levels. For refugees, these create a situation characterized by standstill, hopelessness, control, danger, and displacement. However, the complexity and diversity of the various actors who exert influence on the situation of refugees on the Balkan route also give rise to niches, forms of resistance, and the possibility of (new) alliances. In this way, a collective practice of non-movement emerges, in resistance against oppression and for global freedom of movement.
According to the Eurocode, the computation of bending strength for steel cantilever beams is a straightforward process. The approach is based on an Ayrton-Perry adaptation of the buckling curves for steel members in compression, which involves the computation of an elastic critical buckling load to account for instability. NCCI documents offer a simplified formula to determine the critical bending moment for cantilever beams with symmetric cross-sections. Besides the NCCI recommendations, other approaches, e.g. from the research literature or finite element analysis, may be employed to determine critical buckling loads. However, in certain cases they render different results. The present paper summarizes and compares the abovementioned analytical and numerical approaches for determining critical loads and exemplarily analyses corresponding cantilever beam capacities using numerical approaches based on plastic zone theory (GMNIA).
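For context, the basic elastic critical moment expression for lateral-torsional buckling of a doubly symmetric I-section in the standard fork-support case can be evaluated as below. The moment-distribution factor C1 depends on the loading and support conditions, and it is precisely for cantilevers that the NCCI and literature approaches propose different values, which is the discrepancy the paper examines. The section properties are nominal catalogue values assumed for an IPE 300; treat the whole computation as an illustrative sketch:

```python
import math

# Basic case: M_cr = C1 * (pi^2 E Iz / L^2) * sqrt(Iw/Iz + L^2 G It / (pi^2 E Iz))
E, G = 210e3, 80.77e3                   # steel, N/mm^2
Iz, It, Iw = 6.04e6, 2.01e5, 1.26e11    # IPE 300 (assumed): mm^4, mm^4, mm^6
L, C1 = 5000.0, 1.0                     # span in mm; C1 = 1.0 (uniform moment)

Ncr_z = math.pi**2 * E * Iz / L**2      # weak-axis flexural buckling load
M_cr = C1 * Ncr_z * math.sqrt(Iw / Iz + L**2 * G * It / (math.pi**2 * E * Iz))
print(M_cr / 1e6)                       # critical moment in kNm
```

Replacing C1 (and, for cantilevers, the effective length assumptions) is where the simplified NCCI formula, literature solutions, and finite element results start to diverge.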
The spread of breathing air when playing wind instruments and singing was investigated and visualized using two methods: (1) schlieren imaging with a schlieren mirror and (2) background-oriented schlieren (BOS). These methods visualize airflow by visualizing density gradients in transparent media. The playing of professional woodwind and brass instrumentalists, as well as of professionally trained classical singers, was investigated to estimate the spread distances of the breathing air. For better comparison and consistent measurement series, a single high note, a single low note, and an extract of a musical piece were investigated. Additionally, anemometry was used to determine the velocity of the spreading breathing air and the extent to which it is quantifiable. The results showed that the ejected airflow from the examined instruments and singers did not exceed a spreading range of 1.2 m into the room. However, differences between the various instruments have to be considered to properly assess the spread of the breathing air. The findings discussed below help to estimate the risk of cross-infection for wind instrument players and singers and to develop efficacious safety precautions, which is essential during critical health periods such as the current COVID-19 pandemic.
This paper presents numerical analysis of the discrete fundamental solution of the discrete Laplace operator on a rectangular lattice. Additionally, to provide estimates in interior and exterior domains, two different regularisations of the discrete fundamental solution are considered. Estimates for the absolute difference and lp-estimates are constructed for both regularisations. Thus, this work extends the classical results in the discrete potential theory to the case of a rectangular lattice and serves as a basis for future convergence analysis of the method of discrete potentials on rectangular lattices.
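To make the rectangular-lattice setting concrete, the following sketch applies a five-point discrete Laplacian with distinct spacings h1 and h2 in the two lattice directions; the spacings and the quadratic test field are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def discrete_laplace(u, h1, h2):
    """Five-point discrete Laplacian on a rectangular lattice with
    spacings h1 (rows) and h2 (columns); interior nodes only."""
    return ((u[2:, 1:-1] - 2 * u[1:-1, 1:-1] + u[:-2, 1:-1]) / h1**2
            + (u[1:-1, 2:] - 2 * u[1:-1, 1:-1] + u[1:-1, :-2]) / h2**2)

# Sanity check: for u(x, y) = x^2 + y^2 the exact Laplacian is 4,
# and the five-point formula reproduces quadratics exactly.
h1, h2 = 0.1, 0.05
x = np.arange(0, 1 + h1 / 2, h1)
y = np.arange(0, 1 + h2 / 2, h2)
X, Y = np.meshgrid(x, y, indexing="ij")
lap = discrete_laplace(X**2 + Y**2, h1, h2)
```

The second difference of a quadratic is exact for any spacing, so `lap` equals 4 up to round-off on the anisotropic lattice as well.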
Global structural analyses in civil engineering are usually performed considering linear-elastic material behavior. However, for steel structures, a certain degree of plasticization depending on the member classification may be considered. Corresponding plastic analyses taking material nonlinearities into account are effectively realized using numerical methods. Frequently applied finite elements of two- and three-dimensional models evaluate the plasticity at defined nodes using a yield surface, i.e. by a yield condition, hardening rule, and flow rule. Corresponding calculations involve considerable numerical effort and computation time, and they do not rely on the theoretical background of beam theory, to which the regulations of standards mainly correspond. For that reason, methods using one-dimensional beam elements combined with cross-sectional analyses are commonly applied to steel members in terms of plastic zone theories. In these approaches, plasticization is generally assessed by means of axial stress only. In this paper, a more precise numerical representation of combined stress states, i.e. axial and shear stresses, is presented, and the results of the proposed approach are validated and discussed.
Polylactic acid (PLA) is a widely applicable material used in 3D printers due to significant features such as its deformability and affordable cost. To improve end-use quality, it is important to enhance the quality of fused filament fabrication (FFF)-printed objects in PLA. The purpose of this investigation was to boost toughness and reduce the production cost of FFF-printed tensile test samples with the desired part thickness. To avoid numerous redundant printed samples, the response surface method (RSM) was used. A statistical analysis was performed considering extruder temperature (ET), infill percentage (IP), and layer thickness (LT) as controlled factors. An artificial neural network (ANN) and a hybrid ANN-genetic algorithm (ANN-GA) were further developed to estimate the dependent variables toughness, part thickness, and production cost. Results were evaluated by correlation coefficient and RMSE values. According to the modeling results, the ANN-GA as a hybrid machine learning (ML) technique could enhance the modeling accuracy by about 7.5, 11.5, and 4.5% for toughness, part thickness, and production cost, respectively, in comparison with the single ANN method. On the other hand, the optimization results confirm that the optimized specimen is cost-effective and able to undergo comparatively large deformation, which enables the usability of printed PLA objects.
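As a minimal illustration of the response-surface step, the sketch below fits a full second-order model in ET, IP, and LT by least squares; the factor ranges, coefficients, and noise level are synthetic assumptions, not the study's measurements:

```python
import numpy as np

# Synthetic factor settings: extruder temperature, infill percentage,
# layer thickness -- the ranges are illustrative assumptions.
rng = np.random.default_rng(0)
X = rng.uniform([190, 20, 0.1], [230, 80, 0.3], size=(30, 3))

def design_matrix(X):
    """Full quadratic RSM model: intercept, linear terms,
    two-factor interactions, and pure quadratic terms."""
    et, ip, lt = X.T
    return np.column_stack([
        np.ones(len(X)), et, ip, lt,
        et * ip, et * lt, ip * lt,
        et**2, ip**2, lt**2,
    ])

# Synthetic toughness response generated from made-up coefficients.
beta_true = np.array([5.0, 0.02, 0.1, -3.0, 1e-4, 0.01, 0.05,
                      -1e-5, -1e-3, 2.0])
y = design_matrix(X) @ beta_true + rng.normal(0.0, 0.01, len(X))

# Least-squares fit of the response surface and its in-sample RMSE.
beta_hat, *_ = np.linalg.lstsq(design_matrix(X), y, rcond=None)
rmse = np.sqrt(np.mean((design_matrix(X) @ beta_hat - y) ** 2))
```

The fitted surface can then be searched (e.g. by a genetic algorithm, as in the hybrid ANN-GA step) for factor settings that trade off toughness against production cost.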
Clinker burning is the step with the strongest influence on cement quality during the production process. Appropriate characterisation for quality control and decision-making is therefore critical, both for maintaining stable production and for the development of alternative cements. Scanning electron microscopy (SEM) in combination with energy dispersive X-ray spectroscopy (EDX) delivers spatially resolved phase and chemical information for cement clinker. These data can be used to quantify the phase fractions and the chemical composition of the identified phases.
The contribution aims to provide an overview of phase fraction quantification by semi-automatic phase segmentation using high-resolution backscattered electron (BSE) images and lower-resolved EDX element maps. To this end, a tool for image analysis was developed that uses state-of-the-art algorithms for pixel-wise image segmentation and labelling, in combination with a decision tree that allows searching for specific clinker phases. Results show that this tool can be applied to segment sub-micron scale clinker phases and to quantify all phase fractions. In addition, a statistical evaluation of the data is implemented within the tool to reveal whether the imaged area is representative of all clinker phases.
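A much simplified sketch of decision-tree-style, pixel-wise phase labelling might look as follows; the grey-value thresholds, the Ca/Si criterion, and the phase names are illustrative assumptions, not the calibrated rules of the developed tool:

```python
import numpy as np

# Synthetic stand-in data: a BSE grey-value image and a coarse Ca/Si
# ratio map (values are illustrative, not calibrated to real clinker).
rng = np.random.default_rng(1)
bse = rng.uniform(0, 255, size=(64, 64))
ca_si = rng.uniform(1.0, 4.0, size=(64, 64))

def label_phases(bse, ca_si):
    """Pixel-wise decision-tree-style labelling:
    0 = pores (dark), 1 = alite-like (bright, high Ca/Si),
    2 = belite-like (bright, lower Ca/Si), 3 = interstitial/other."""
    labels = np.full(bse.shape, 3, dtype=int)
    labels[bse < 50] = 0                       # dark pixels: pores
    bright = bse >= 120                        # bright pixels: silicates
    labels[bright & (ca_si > 2.5)] = 1
    labels[bright & (ca_si <= 2.5)] = 2
    return labels

labels = label_phases(bse, ca_si)
# Phase fractions as area fractions of the imaged field.
fractions = np.bincount(labels.ravel(), minlength=4) / labels.size
```

A real workflow would combine many more features per pixel and add the statistical representativeness check described above, but the fraction bookkeeping is the same.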
This study demonstrates the application and combination of multiple imaging techniques [light microscopy, micro-X-ray computer tomography (μ-CT), scanning electron microscopy (SEM) and focussed ion beam nano-tomography (FIB-nT)] for the analysis of the microstructure of hydrated alite across multiple scales. However, by comparing findings with mercury intrusion porosimetry (MIP), it becomes obvious that the imaged 3D volumes and 2D images do not sufficiently overlap at certain scales to allow a continuous quantification of the pore size distribution (PSD). This can be overcome by improving the resolution and increasing the measured volume. Furthermore, results show that the fibrous morphology of calcium-silicate-hydrate (C-S-H) phases is preserved during FIB-nT, which is a prerequisite for the characterisation of nano-scale porosity. Finally, it was proven that the combination of FIB-nT with energy-dispersive X-ray spectroscopy (EDX) data facilitates the phase segmentation of an 11 × 11 × 7.7 μm³ volume of hydrated alite.
Within the scope of the literature, the influence of openings in infill walls that are bounded by a reinforced concrete frame and excited by seismic drift forces in both the in-plane and out-of-plane directions is still uncharted. Therefore, a 3D micromodel was developed and subsequently calibrated to gain more insight into the topic. The micromodels were calibrated against their equivalent physical test specimens of in-plane and out-of-plane drift-driven tests on frames with and without infill walls and openings, as well as out-of-plane bending tests of masonry walls. The micromodels were rectified based on their behavior and damage states. As a result of the calibration process, it was found that the micromodels were sensitive to some parameters and insensitive to others with regard to behavior and computational stability. It was found that, even within the same material model, some parameters had a greater effect when attributed to concrete than to masonry. Generally, the in-plane behavior of infilled frames was found to be largely governed by the interface material model. The out-of-plane masonry wall simulations were governed by the tensile strength of both the interface and the masonry material model. Yet the out-of-plane drift-driven test was governed by the concrete material properties.
Realistic uncertainty description incorporating aleatoric and epistemic uncertainties can be achieved within the framework of polymorphic uncertainty, which is computationally demanding. Utilizing a domain decomposition approach for random-field-based uncertainty models, the proposed level-based sampling method can reduce these computational costs significantly and shows good agreement with a standard sampling technique. While 2-level configurations tend to become unstable with decreasing sampling density, 3-level setups show encouraging results for the investigated reliability analysis of a structural unit square.
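A common way to realize such random-field-based uncertainty models is a truncated Karhunen-Loeve expansion; the sketch below samples one Gaussian field realization on a coarse unit-square grid (the exponential covariance, correlation length, and truncation order are illustrative assumptions, not the configuration used in the study):

```python
import numpy as np

# Discretize the unit square and build an exponential covariance matrix.
n = 16  # grid points per side (coarse, for illustration)
xs = np.linspace(0, 1, n)
X, Y = np.meshgrid(xs, xs, indexing="ij")
pts = np.column_stack([X.ravel(), Y.ravel()])
corr_len, sigma = 0.3, 1.0
dists = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
cov = sigma**2 * np.exp(-dists / corr_len)

# Truncated Karhunen-Loeve expansion: keep the leading modes.
n_modes = 25
eigval, eigvec = np.linalg.eigh(cov)
order = np.argsort(eigval)[::-1][:n_modes]
lam, phi = eigval[order], eigvec[:, order]

# One realization of the Gaussian random field on the unit square.
rng = np.random.default_rng(0)
xi = rng.standard_normal(n_modes)
field = ((phi * np.sqrt(np.maximum(lam, 0.0))) @ xi).reshape(n, n)
```

Sampling many such realizations (here one per standard-normal vector `xi`) is exactly the step whose cost the level-based method aims to reduce.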
Bauhaus guest professor Mirjam Wenzel lectured on 30 June 2021 in the Audimax of the Bauhaus-Universität Weimar on the genesis and conception of Jewish museums. In doing so, she addressed the ways in which these museums are particularly relevant to current social and political questions. Prof. Wenzel's second public lecture at the Bauhaus-Universität Weimar outlined the potential of cultural institutions in times of socio-political change in general, and the significance of Jewish museums in the face of verbal and physical violence against Jews in particular.
Compiling and disseminating information about incidents and disasters are key to disaster management and relief. But due to inherent limitations of the acquisition process, the required information is often incomplete or missing altogether. To fill these gaps, citizen observations spread through social media are widely considered to be a promising source of relevant information, and many studies propose new methods to tap this resource. Yet, the overarching question of whether and under which circumstances social media can supply relevant information (both qualitatively and quantitatively) still remains unanswered. To shed some light on this question, we review 37 disaster and incident databases covering 27 incident types, compile a unified overview of the contained data and their collection processes, and identify the missing or incomplete information. The resulting data collection reveals six major use cases for social media analysis in incident data collection: (1) impact assessment and verification of model predictions, (2) narrative generation, (3) recruiting citizen volunteers, (4) supporting weakly institutionalized areas, (5) narrowing surveillance areas, and (6) reporting triggers for periodical surveillance. Furthermore, we discuss the benefits and shortcomings of using social media data for closing information gaps related to incidents and disasters.
Chemical glass frosting processes are widely used to create visually attractive glass surfaces. A commonly used frosting bath mainly contains ammonium bifluoride (NH4HF2) mixed with hydrochloric acid (HCl). The frosting process consists of several baths: first, a preliminary bath to clean the object; second, the frosting bath, which etches the rough, light-scattering structure into the glass surface; and finally, the washing baths to clean the frosted object. This is where the constituents of the preceding steps accumulate and have to be filtered from the sewage. In the present contribution, phosphoric acid (H3PO4) was used as a substitute for HCl to reduce the amount of ammonium (NH4+) and chloride (Cl−) dissolved in the waste water. In combination with magnesium carbonate (MgCO3), it allows the precipitation of ammonium within the sewage as ammonium magnesium phosphate (MgNH4PO4). However, a trivial replacement of HCl by H3PO4 within the frosting process causes extensive frosting errors, such as inhomogeneous size distributions of the structures or domains that are not fully covered by these structures. By modifying the composition of the preliminary bath, it was possible to improve the frosting result considerably. To determine the optimal composition of the preliminary bath, a semi-automatic evaluation method was developed that renders an objective comparison of the resulting surface quality possible.
Entrepreneurship and start-up activities are seen as a key response to recent upheavals in the media industry: Newly founded ventures can act as important drivers for industry transformation and renewal, pioneering new products, business models, and organizational designs (e.g. Achtenhagen, 2017; Buschow & Laugemann, 2020).
In principle, media students represent a crucial population of nascent entrepreneurs: individuals who will likely become founders of start-ups (Casero-Ripollés et al., 2016). However, their willingness to start a new business is generally considered to be rather low (Goyanes, 2015), and for journalism students, the idea of innovation tends to be conservative, following traditional norms and professional standards (Singer & Broersma, 2020). In a sample of Spanish journalism students, López-Meri et al. (2020) found that one of the main barriers to entrepreneurial intentions is that students feel they lack knowledge and training in entrepreneurship.
In the last 10 years, a wide variety of entrepreneurship education courses have been set up in media departments of colleges and universities worldwide.
These programs have been designed to sensitize and prepare communications, media and journalism students to think and act entrepreneurially (e.g. Caplan et al., 2020; Ferrier, 2013; Ferrier & Mays, 2017; Hunter & Nel, 2011). Entrepreneurial competencies and practices not only play a crucial role for start-ups but, in times of digital transformation, are increasingly sought after by legacy media companies as well (Küng, 2015).
At the Department of Journalism and Communication Research, Hanover University of Music, Drama and Media, Germany, we have been addressing these developments with the “Media Entrepreneurship” program. The course, established in 2013, aims to provide fundamental knowledge of entrepreneurship as well as to promote students' entrepreneurial thinking and behavior. This article presents the pedagogical approach of the program and investigates learning outcomes. By outlining and evaluating the Media Entrepreneurship program, this article aims to promote good practices of entrepreneurship education in communications, media and journalism, and to reflect on the limitations of such programs.
This article focuses on the research and development of new cellulose ether derivatives as innovative superplasticizers for mortar systems. Several synthetic strategies have been pursued to obtain new compounds and to study their properties in cementitious systems as new bio-based additives. The new water-soluble admixtures were synthesized using a complex carboxymethylcellulose-based backbone that was first hydrolyzed and then sulfo-ethylated in the presence of sodium vinyl sulphonate. Starting with a complex biopolymer that is widely known as a thickening agent was very challenging; the aimed-for properties were achieved only by varying the hydrolysis times and temperatures of the reactions. The obtained derivatives showed different molecular weights (Mw) and anionic charges on their backbones. An improvement in the shear stress and dynamic viscosity values of CEM II 42.5R cement was observed for the samples obtained with longer hydrolysis at higher temperature and sulfo-ethylation. Investigations into the chemical nature of the pore solution, calorimetric studies and adsorption experiments clearly showed the ability of the carboxymethyl cellulose superplasticizer (CMC SP) to interact with cement grains and influence hydration processes within a 48-h time window, causing a delay in the hydration reactions of the samples. The fluidity of the cementitious matrices was ascertained through slump tests, and preliminary studies of the mechanical and flexural strength of hardened mortar formulated with the new ecological additives were carried out. Finally, computed tomography (CT) images completed the investigation of the pore network structure of the hardened specimens, highlighting their promising pore structure.
The development of a hydro-mechanically coupled Coupled-Eulerian-Lagrangian (CEL) method and its application to the back-analysis of vibratory pile driving model tests in water-saturated sand is presented. The predicted pile penetration using this approach is in good agreement with the results of the model tests as well as with fully Lagrangian simulations. In terms of pore water pressure, however, the results of the CEL simulation show a slightly worse accordance with the model tests compared to the Lagrangian simulation. Some shortcomings of the hydro-mechanically coupled CEL method in the case of frictional contact problems and pore fluids with high bulk modulus are discussed. Lastly, the CEL method is applied to the simulation of vibratory driving of open-profile piles under partially drained conditions to study installation-induced changes in the soil state. It is concluded that the proposed method is capable of realistically reproducing the most important mechanisms in the soil during the driving process despite its addressed shortcomings.
One of the most important subjects of hydraulic engineering is the reliable estimation of the transverse distribution in the rectangular channel of bed and wall shear stresses. This study makes use of the Tsallis entropy, genetic programming (GP) and adaptive neuro-fuzzy inference system (ANFIS) methods to assess the shear stress distribution (SSD) in the rectangular channel.
To evaluate the results of the Tsallis entropy, GP and ANFIS models, laboratory observations were used in which shear stress was measured using an optimized Preston tube; this was then used to measure the SSD at various aspect ratios in the rectangular channel. To investigate the shear stress percentage, 10 data series with a total of 112 different data points were used. The results of the sensitivity analysis show that the most influential parameter for the SSD in a smooth rectangular channel is the dimensionless parameter B/H, where B is the transverse coordinate and H the flow depth. With the parameters (b/B) and (B/H) for the bed and (z/H) and (B/H) for the wall as inputs, the GP model performed better than the others. Based on the analysis, it can be concluded that the use of GP and ANFIS algorithms is more effective for estimating shear stress in smooth rectangular channels than the Tsallis entropy-based equations.
This study proposes an efficient Bayesian, frequency-based damage identification approach to identify damages in cantilever structures with an acceptable error rate, even at high noise levels. The catenary poles of electric high-speed train systems were selected as a realistic case study to cover the objectives of this study. Compared to other frequency-based damage detection approaches described in the literature, the proposed approach is efficiently able to detect damages in cantilever structures to higher levels of damage detection, namely identifying both the damage location and severity using a low-cost structural health monitoring (SHM) system with a limited number of sensors; for example, accelerometers. The integration of Bayesian inference, as a stochastic framework, in the proposed approach, makes it possible to utilize the benefit of data fusion in merging the informative data from multiple damage features, which increases the quality and accuracy of the results. The findings provide the decision-maker with the information required to manage the maintenance, repair, or replacement procedures.
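The Bayesian, frequency-based idea can be sketched with a toy grid posterior: a one-parameter forward model maps damage severity to a natural frequency, and noisy frequency measurements are fused through a Gaussian likelihood. All numbers below (intact frequency, noise level, true severity) are illustrative assumptions, not values from the catenary pole case study:

```python
import numpy as np

def frequency(severity):
    """Toy forward model: first natural frequency of a cantilever whose
    stiffness is reduced by `severity` (0 = intact). Since f ~ sqrt(k),
    we take f = f0 * sqrt(1 - severity) with an assumed f0 = 2.5 Hz."""
    f0 = 2.5
    return f0 * np.sqrt(1.0 - severity)

# Grid of candidate damage severities and a flat prior.
grid = np.linspace(0.0, 0.8, 81)
prior = np.ones_like(grid) / len(grid)

# Simulated noisy frequency measurements for a true severity of 0.3.
rng = np.random.default_rng(0)
true_sev, noise = 0.3, 0.02
measured = frequency(true_sev) + rng.normal(0.0, noise, size=10)

# Gaussian log-likelihood, fused over all measurements (data fusion),
# then normalized to a posterior over the severity grid.
loglik = -0.5 * ((measured[None, :] - frequency(grid)[:, None]) / noise) ** 2
post = prior * np.exp(loglik.sum(axis=1))
post /= post.sum()
estimate = grid[np.argmax(post)]
```

In the study, the forward model is a structural model of the pole and the features come from several sensors, but the fusion step is the same product of likelihoods.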
Biofeedback constitutes a well-established, non-invasive method to voluntarily influence emotional processing by means of cognitive strategies. However, treatment durations exhibit strong inter-individual variations, and first successes can often be achieved only after a large number of sessions. Sham feedback constitutes a rather untapped approach, providing feedback that does not correspond to the participant's actual state. The current study aims to gain insights into the mechanisms of sham feedback processing in order to support new techniques in biofeedback therapy. We carried out two experiments and applied different types of sham feedback based on skin conductance responses and pupil size changes during affective processing. Results indicate that standardized but context-sensitive sham signals based on skin conductance responses exert a stronger influence on emotional regulation than individual sham feedback from ongoing pupil dynamics. Also, sham feedback should forego unnatural signal behavior to avoid irritation and skepticism among participants. Altogether, a reasonable combination of stimulus features and sham feedback characteristics makes it possible to considerably reduce actual bodily responsiveness within a single session.
This paper outlines an important step in characterizing a novel field of robotic construction research in which a cable-driven parallel robot is used to extrude cementitious material in three-dimensional space, thus offering a comprehensive new approach to computational design and construction and to robotic fabrication at larger scales. Developed by the Faculty of Art and Design at Bauhaus-University Weimar (Germany), the Faculty of Architecture at the University of Applied Sciences Dortmund (Germany) and the Chair of Mechatronics at the University of Duisburg-Essen (Germany), this approach offers unique advantages over existing additive manufacturing methods: the system is easily transportable and scalable, it does not require additional formwork or scaffolding, and it offers digital integration and informational oversight across the entire design and building process. This paper considers 1) key research components of cable-robotic 3D printing (such as computational design, material exploration, and robotic control), and 2) the integration of these parameters into a unified design and building process. The demonstration of the approach at full scale is of particular interest.
This article aims to develop a social theory of violence that emphasizes the role of the third party as well as the communication between the involved subjects. For this purpose, Teresa Koloma Beck's essay 'The Eye of the Beholder: Violence as a Social Process', which adopts a social-constructivist perspective, is taken as a starting point. On the one hand, the basic concepts and the benefits of this approach are presented. On the other hand, social-theoretical problems of this approach are revealed. These deficits are counteracted by expanding Koloma Beck's approach with a communicative-constructivist framework. Thus, the role of communicative action and the 'objectification of violence' is emphasized. These aspects affect the perception, judgement and (de-)legitimation of violence phenomena and the emergence of a 'knowledge of violence'. Communicative actions and objectifications form a key to understanding violent interactions and the link between the micro and macro levels. Finally, the methodological consequences for violence research and Communicative Constructivism are discussed. Furthermore, possible research fields are outlined that open up by looking at communicative action and the objectifications within the 'triads of violence'.
Among the various ventures of socially engaged "counter-science" that appeared on the scene in the Federal Republic of Germany around 1980 was the Berliner Wissenschaftsladen e. V. ("science shop"), founded in 1982 and known for short as WILAB, a kind of "alternative" spin-off of the Technische Universität Berlin. This contribution situates the founding of the "shop" in the context of contemporary advances in (regional) research and technology policy. It shows how the deindustrializing island city, through countervailing "innovation policy", even came to play a certain pioneering role: innovations visible beyond the city limits, such as the start-up fair BIG TECH or the Berlin Innovation and Start-up Centre (BIG), opened in 1983 as the first "incubator" [sic] in the Federal Republic, were the work of TU Berlin's technology transfer office, TU-transfer, launched in 1977/78.
In other words, one increasingly encountered conditions here that were less and less compatible with the dreams of a "critical", non-heteronomous (counter-)science. In latent contrast to the historiographical prominence of the science-critical zeitgeist, ventures committed to "alternative" objectives, such as WILAB, led a relatively marginalized niche existence. Nevertheless, the aim pursued at WILAB, unpromising as it was in this light, of initiating a different, namely "more humane", information technology sheds an instructive light on the rise of "entrepreneurial" science in the Federal Republic around 1980.
Object-Oriented Damage Information Modeling Concepts and Implementation for Bridge Inspection
(2022)
Bridges are designed to last for more than 50 years and consume up to 50% of their life-cycle costs during their operation phase. Several inspections and assessment actions are executed during this period. Bridge and damage information must be gathered, digitized, and exchanged between different stakeholders. Currently, the inspection and assessment practices rely on paper-based data collection and exchange, which is time-consuming and error-prone, and leads to loss of information. Storing and exchanging damage and building information in a digital format may lower costs and errors during inspection and assessment and support future needs, for example, immediate simulations regarding performance assessment, automated maintenance planning, and mixed reality inspections. This study focused on the concept for modeling damage information to support bridge reviews and structural analysis. Starting from the definition of multiple use cases and related requirements, the data model for damage information is defined independently from the subsequent implementation. In the next step, the implementation via an established standard is explained. Functional tests aim to identify problems in the concept and implementation. To show the capability of the final model, two example use cases are illustrated: the inspection review of the entire bridge and a finite-element analysis of a single component. Main results are the definition of necessary damage data, an object-oriented damage model, which supports multiple use cases, and the implementation of the model in a standard. Furthermore, the tests have shown that the standard is suitable to deliver damage information; however, several software programs lack proper implementation of the standard.
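An object-oriented damage model of the kind described can be sketched with plain data classes; the class and attribute names below are illustrative assumptions and do not reproduce the property sets of the implemented standard:

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class Damage:
    """Illustrative damage object; attribute names are assumptions,
    not the standard's property set names."""
    damage_type: str            # e.g. "crack", "spalling"
    location: Tuple[float, float]  # local coordinates on the component
    extent_mm: float            # characteristic size, e.g. crack width

@dataclass
class Component:
    name: str
    damages: List[Damage] = field(default_factory=list)

@dataclass
class Bridge:
    name: str
    components: List[Component] = field(default_factory=list)

    def inspection_review(self):
        """Aggregate all damages for a whole-bridge inspection review,
        one of the two example use cases."""
        return [(c.name, d.damage_type, d.extent_mm)
                for c in self.components for d in c.damages]

girder = Component("main girder", [Damage("crack", (1.2, 0.3), 0.4)])
bridge = Bridge("demo bridge", [girder])
report = bridge.inspection_review()
```

Because damages are attached to components, the same objects can feed a component-level finite-element analysis, the second use case in the study.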
Quantification of cracks in concrete thin sections considering current methods of image analysis
(2022)
Image analysis is used in this work to quantify cracks in concrete thin sections via modern image processing. Thin sections were impregnated with a yellow epoxy resin to increase the contrast between voids and the other phases of the concrete. By means of different pre-processing steps, machine learning, and Python scripts, cracks can be quantified in an area of up to 40 cm². As a result, the crack area, lengths, and widths are estimated automatically within a single workflow. Crack patterns caused by freeze-thaw damage were investigated. To compare the inner degradation of the investigated thin sections, the crack density was used. For validation of the automatically determined results, cracks in the thin sections were measured manually in two different ways. The presented work shows that the width of cracks can be determined pixelwise, thus providing a width distribution; however, the automatically measured crack lengths differ from the manually measured ones.
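The pixelwise width measurement can be illustrated on a synthetic binary crack mask; the mask geometry and the assumed pixel size are made-up values, and a real workflow would measure widths normal to the crack skeleton rather than per image row:

```python
import numpy as np

# Synthetic binary crack mask (1 = crack pixel): a vertical crack whose
# width grows along its length -- a stand-in for a segmented image.
mask = np.zeros((100, 60), dtype=int)
for row in range(100):
    w = 2 + row // 25            # width grows from 2 to 5 pixels
    mask[row, 30:30 + w] = 1

pixel_size_um = 5.0  # assumed image resolution in micrometres/pixel

# Pixelwise width: run length of crack pixels per row (valid here
# because the synthetic crack is vertical).
widths_px = mask.sum(axis=1)
widths_um = widths_px * pixel_size_um

crack_area_um2 = mask.sum() * pixel_size_um**2
crack_length_um = (widths_px > 0).sum() * pixel_size_um
# Width distribution, as plotted in the described workflow.
hist, edges = np.histogram(widths_um, bins=[0, 12.5, 17.5, 22.5, 27.5])
```

On this mask the four width classes (10, 15, 20, 25 µm) each cover a quarter of the crack length, which the histogram recovers exactly.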
In this paper, we present an open-source code for the first-order and higher-order nonlocal operator method (NOM), including a detailed description of the implementation. The NOM is based on so-called supports, dual-supports, nonlocal operators, and an operator energy functional ensuring stability. The nonlocal operator is a generalization of the conventional differential operators. Combined with the method of weighted residuals and variational principles, the NOM establishes the residual and tangent stiffness matrix of the operator energy functional through simple matrix operations, without the need for shape functions as in other classical computational methods such as FEM. The NOM only requires the definition of the energy, drastically simplifying its implementation. The implementation in this paper is focused on linear elastic solids for the sake of conciseness, though the NOM can handle more complex nonlinear problems. The NOM is very flexible and efficient for solving partial differential equations (PDEs), and it is also quite easy for readers to use the NOM and extend it to solve other complicated physical phenomena described by one or a set of PDEs. Finally, we present some classical benchmark problems, including the cantilever beam and the plate-with-a-hole problem, and we also extend the method to complicated problems, including phase-field fracture modeling and gradient elasticity materials.
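The first-order nonlocal operator idea, a gradient recovered from a support of neighbouring points via a weighted Taylor expansion, can be sketched as follows; the weight function and the random support are simplified assumptions, and this is not the full NOM implementation:

```python
import numpy as np

def nonlocal_gradient(x_i, x_support, u_i, u_support):
    """First-order nonlocal gradient at x_i from a support of
    neighbouring points, via weighted least squares on the Taylor
    expansion u_j ~ u_i + grad(u) . (x_j - x_i)."""
    r = x_support - x_i                  # relative positions in support
    w = np.exp(-np.sum(r**2, axis=1))    # assumed weight function
    A = (w[:, None] * r).T @ r           # weighted moment matrix
    b = (w * (u_support - u_i)) @ r
    return np.linalg.solve(A, b)

# Verification on a linear field u = 2x + 3y, whose gradient the
# least-squares operator reproduces exactly.
rng = np.random.default_rng(0)
x_i = np.array([0.5, 0.5])
x_support = x_i + 0.1 * rng.standard_normal((12, 2))
u = lambda p: 2 * p[:, 0] + 3 * p[:, 1]
grad = nonlocal_gradient(x_i, x_support, u(x_i[None])[0], u(x_support))
```

For a linear field the Taylor residual vanishes at the true gradient, so `grad` equals (2, 3) up to round-off regardless of the support layout.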
In this study, we propose a nonlocal operator method (NOM) for the dynamic analysis of (thin) Kirchhoff plates. The nonlocal Hessian operator is derived based on a second-order Taylor series expansion. Unlike 'classical' approaches such as FEM, the NOM does not require any shape functions and associated derivatives, which drastically facilitates the implementation. Furthermore, the NOM is higher-order continuous, which is exploited for thin plate analysis requiring C1 continuity. The nonlocal dynamic governing formulation and the operator energy functional for Kirchhoff plates are derived from a variational principle. The velocity Verlet algorithm is used for the time discretization. After confirming the accuracy of the nonlocal Hessian operator, several numerical examples are simulated with the nonlocal dynamic Kirchhoff plate formulation.
We present a stochastic deep collocation method (DCM) based on neural architecture search (NAS) and transfer learning for heterogeneous porous media. We first carry out a sensitivity analysis to determine the key hyper-parameters of the network to reduce the search space and subsequently employ hyper-parameter optimization to finally obtain the parameter values. The presented NAS-based DCM also saves the weights and biases of the most favorable architectures, which are then used in the fine-tuning process. We also employ transfer learning techniques to drastically reduce the computational cost. The presented DCM is then applied to the stochastic analysis of heterogeneous porous material. Therefore, a three-dimensional stochastic flow model is built, providing a benchmark for the simulation of groundwater flow in highly heterogeneous aquifers. The performance of the presented NAS-based DCM is verified in different dimensions using the method of manufactured solutions. We show that it significantly outperforms finite difference methods in both accuracy and computational cost.
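The verification by the method of manufactured solutions can be illustrated with a minimal one-dimensional example: a finite-difference Poisson solver is checked against a manufactured solution and its second-order convergence rate is recovered. The solver and the manufactured solution are illustrative, not the DCM or the paper's benchmark:

```python
import numpy as np

def solve_poisson_fd(n):
    """Solve -u'' = f on (0, 1) with u(0) = u(1) = 0 by second-order
    finite differences, for the manufactured solution u(x) = sin(pi x),
    which implies f(x) = pi^2 sin(pi x). Returns the max-norm error."""
    h = 1.0 / (n + 1)
    x = np.linspace(h, 1 - h, n)      # interior nodes
    f = np.pi**2 * np.sin(np.pi * x)
    A = (np.diag(np.full(n, 2.0))
         - np.diag(np.ones(n - 1), 1)
         - np.diag(np.ones(n - 1), -1)) / h**2
    u = np.linalg.solve(A, f)
    return np.max(np.abs(u - np.sin(np.pi * x)))

# Halving the mesh width (n = 20 -> 41 gives h = 1/21 -> 1/42) should
# reduce the error by about a factor of four for an O(h^2) scheme.
e1, e2 = solve_poisson_fd(20), solve_poisson_fd(41)
rate = np.log2(e1 / e2)
```

The same recipe, comparing a computed field against a chosen analytical one with a matching source term, is what "verified using the method of manufactured solutions" refers to, here applied to a finite-difference scheme for illustration.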
In machine learning, if the training data are independently and identically distributed as the test data, a trained model can make accurate predictions for new data samples. Conventional machine learning depends strongly on massive amounts of domain-specific training data to capture their latent patterns. In contrast, domain adaptation and transfer learning are sub-fields of machine learning concerned with solving the inescapable problem of insufficient training data by relaxing the domain dependence hypothesis. In this contribution, we address this issue and, by combining both methods in a novel way, develop a computationally efficient and practical algorithm to solve boundary value problems based on nonlinear partial differential equations. We adopt a meshfree analysis framework to integrate the prevailing geometric modelling techniques based on NURBS and present an enhanced deep collocation approach that also plays an important role in the accuracy of solutions. We start with a brief introduction on how these methods expand upon this framework. We observe an excellent agreement between these methods and show how fine-tuning a pre-trained network to a specialized domain may lead to outstanding performance compared to existing approaches. As proof of concept, we illustrate the performance of our proposed model on several benchmark problems.
In this work, we present a deep collocation method (DCM) for three-dimensional potential problems in non-homogeneous media. This approach utilizes a physics-informed neural network with material transfer learning, reducing the solution of the non-homogeneous partial differential equations to an optimization problem. We tested different configurations of the physics-informed neural network, including smooth activation functions, sampling methods for collocation point generation, and combined optimizers. A material transfer learning technique is utilized for non-homogeneous media with different material gradations and parameters, which enhances the generality and robustness of the proposed method. In order to identify the most influential parameters of the network configuration, we carried out a global sensitivity analysis. Finally, we provide a convergence proof of our DCM. The approach is validated through several benchmark problems, also testing different material variations.