This work studies the structure and properties of gypsum compositions modified with a man-made modifier based on metallurgical dust and multi-walled carbon nanotubes. The results show that changing the structure of the hardened gypsum increases the bending and compressive strength by 70.5% and 138%, respectively, improves the water resistance, and raises the softening factor to 0.85. Modifying the gypsum composition with the complex additive leads to the formation of amorphous structures based on calcium hydrosilicates on the surface of the primary gypsum crystallohydrates, which bond the gypsum crystals and reduce the access of water.
Maturation and Structure Formation Processes in Binders with Aqueous Alkali-Silicate Solutions
(2017)
Maturation and structure formation processes in silicate and aluminosilicate binders (e.g. for coating materials ...) can lead, through restricted deformation, to crack formation and loss of strength, and thus to loss of durability. These processes are evaluated for silicate materials, with an outlook on aluminosilicate binders.
Welche Zukünfte?
(2017)
The fire resistance of concrete members is controlled by the temperature distribution within the considered cross section. The thermal analysis can be performed with the advanced temperature-dependent physical properties provided by EN 1992-1-2. However, the recalculation of laboratory tests on columns from TU Braunschweig shows that there are deviations between the calculated and measured temperatures. It can therefore be assumed that the mathematical formulation of these thermal properties could be improved. A sensitivity analysis is performed to identify the governing parameters of the temperature calculation, and a nonlinear optimization method is used to enhance the formulation of the thermal properties. The proposed simplified properties are partly validated by the recalculation of measured temperatures of concrete columns. These first results show that the scatter of the differences between calculated and measured temperatures can be reduced by the proposed simple model for the thermal analysis of concrete.
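As an illustration of the kind of nonlinear optimization described above: the thermal conductivity of concrete in EN 1992-1-2 is a simple polynomial in θ/100 (the upper-limit curve is λ = 2 − 0.2451(θ/100) + 0.0107(θ/100)² W/(m·K)), and its coefficients can be refitted to measured data by least squares. The sketch below uses synthetic "measurements" and does not reproduce the authors' actual procedure.

```python
import numpy as np
from scipy.optimize import least_squares

def conductivity(theta, a, b, c):
    # Polynomial form of the EN 1992-1-2 thermal conductivity curve, W/(m K)
    return a + b * (theta / 100.0) + c * (theta / 100.0) ** 2

# EN 1992-1-2 upper-limit coefficients serve as the starting point
p0 = np.array([2.0, -0.2451, 0.0107])

theta = np.linspace(20.0, 1200.0, 60)  # temperature range covered by the code curve
# Synthetic "measured" conductivities with noise (stand-in for test data)
lam_meas = conductivity(theta, 1.9, -0.22, 0.009) + 0.02 * np.random.randn(theta.size)

res = least_squares(lambda p: conductivity(theta, *p) - lam_meas, p0)
print("refitted coefficients:", res.x)
```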
Tensile and compressive strain can greatly affect the thermal conductivity of graphene nanoribbons (GNRs). However, the effect of shear strain, which is also one of the main strain effects, has not yet been studied systematically. In this work, we employ reverse nonequilibrium molecular dynamics (RNEMD) to systematically study the thermal conductivity of GNRs (with a model size of 4 nm × 15 nm) under shear strain. Our studies show that the thermal conductivity of GNRs is not sensitive to shear strain, decreasing by only 12–16% before the pristine structure breaks. Furthermore, the phonon frequencies and the changes in the micro-structure of the GNRs, such as bond angles and bond lengths, are analyzed to explain this tendency. The results show that the main influence of shear strain is on the in-plane phonon density of states (PDOS), whose G band (higher-frequency peaks) moves to lower frequencies, thus decreasing the thermal conductivity. The unique thermal properties of GNRs under shear strain suggest great potential for graphene nanodevices as well as for thermal management and thermoelectric applications.
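In RNEMD, the conductivity itself follows from Fourier's law once the imposed heat flux and the resulting steady temperature gradient are known, κ = J / (dT/dx). A minimal sketch with entirely illustrative numbers (not values from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
J = 5.0e9                                    # imposed heat flux, W/m^2 (made up)
x = np.linspace(0.0, 15e-9, 30)              # bin positions along the 15 nm ribbon, m
T = 300.0 + 2.0e9 * x + rng.normal(0.0, 0.5, x.size)  # noisy bin temperatures, K

dT_dx = np.polyfit(x, T, 1)[0]               # slope of the linear temperature profile
print("kappa =", J / dT_dx, "W/(m K)")       # kappa = J / (dT/dx)
```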
The essay "Zwei Pater noster-Vertonungen von Melchior Vulpius und die Missa V. vocum super Pater noster qui es in Coelis Melchioris Vulpii von Georg Vintz" discusses and compares Pater noster settings from the OPUSCULUM NOVUM SELECTISSIMARVUM CANTIONVM SACRARVM (Erfurt, 1610) and from the PARS PRIMA CANTIONVM SACRARVM (Jena, 1602) by the Weimar cantor and composer Melchior Vulpius (1570-1615). It also examines how the Naumburg organist Georg Vintz (ca. 1580 - ca. 1635) used and reworked one of these motets as the basis for the parody mass "MISSA V. VOCUM, Super Pater noster [...] Melchioris Vulpii".
The essay is based on a lecture given at a conference on Vulpius held in Meiningen in 2015. The conference proceedings, edited by Maren Goltz, are expected to appear in 2018. Since the volume does not offer enough space for extensive musical examples, the compositions discussed are made available in full as score and audio files via this permalink.
One of the frequently examined design principle recommendations in multimedia learning is the personalization principle. Based on empirical evidence, this principle states that using personalised messages in multimedia learning is more beneficial than using formal language (e.g. using 'you' instead of 'the'). Although there is evidence that these slight changes in language style affect learning, motivation, and perceived cognitive load, it remains unclear (1) whether the positive effects of personalised language transfer to all kinds of learning content (e.g. specific, potentially aversive health issues) and (2) which processes (e.g. attention allocation) underlie the personalization effect. German university students (N = 37) learned symptoms and causes of cerebral haemorrhages with either a formal or a personalised version of the learning material. The analysis revealed results comparable to the few existing previous studies, indicating an inverted personalization effect for potentially aversive learning material. This effect was specifically revealed in a decreased average fixation duration and a lower number of fixations exclusively on the images in the personalised compared to the formal version. These results can be seen as indicators of an inverted effect of personalization at the level of visual attention.
The idea of a new municipalism is being widely discussed in left social movements in Europe and beyond. Municipalist movements aim to take over or influence local governments in order to (re)orient local institutions toward the common good, to create a new relationship between local governments and social movements, and thus to democratize from below the way politics is made and to change institutional framework conditions. They emerge in reaction to the current economic and political crisis, as do new right-wing and right-wing populist movements, to which they see themselves as a counterpart. They want to confront the multiple urban crisis with courage and concrete utopias rather than with the fear and fearmongering of right-wing movements. To this end, more than 600 representatives of these municipalist movements met in June 2017 at the invitation of Barcelona en Comú.
This article analyzes how the world art exhibition documenta 14 engaged with public space in Kassel. Conceived as a critique of global injustice, this year's documenta did not address the local circumstances in Kassel but instead used the city as a stage. Rather than engaging with the concrete processes on site, as the exhibition did in Athens, it broke with the documenta's tradition of seeking to contribute to the social development of the city.
Elitenkritik, populare Bündnisse und inklusive Solidarität. Interview zur Debatte um Linkspopulismus
(2018)
In the current economic and political crisis, debates on left strategies are booming again. Particularly controversial are proposals that advocate a left populism as an alternative to the right-wing political project and to neoliberalism, and as a strategy of transformation toward a socialist society. With their book Ein unanständiges Angebot? Mit linkem Populismus gegen Eliten und Rechte (2017), Thomas Goes and Violetta Bock have presented a programmatic review of existing concepts of left populism together with their own vision of how a left populism can succeed, thereby fueling the debate on left populism in Germany. In this interview they are asked about their positions and the controversies surrounding the book. The interview is intended as an opening for a debate; responses to the positions presented, and connections to urban issues and urban social movements, are very welcome.
Following the restructuring of the power industry, electricity supply to end-use customers has undergone fundamental changes. In the restructured power system, some of the responsibilities of the vertically integrated distribution companies have been assigned to network managers and retailers. Under the new situation, retailers are in charge of providing electrical energy to electricity consumers who have signed contracts with them. Retailers usually procure the required energy at a variable price from wholesale electricity markets, forward contracts with energy producers, or distributed energy generators, and sell it at a fixed retail price to their clients. Retailers implement different strategies to reduce the potential financial losses and risks associated with the uncertain nature of wholesale spot electricity market prices and the electrical load of the consumers. In this paper, the strategic behavior of retailers in implementing forward contracts, distributed energy sources, and demand-response programs with the aim of increasing their profit and reducing their risk, while keeping their retail prices as low as possible, is investigated. For this purpose, the risk management problem of retailer companies participating in wholesale electricity markets is modeled through a bi-level programming approach, and a comprehensive framework for retail electricity pricing, considering customers' constraints, is provided. In the first level of the proposed bi-level optimization problem, the retailer maximizes its expected profit for a given risk level of profit variability, while in the second level, the customers minimize their consumption costs. The problem is formulated as a mixed-integer programming (MIP) problem and can be efficiently solved using available commercial solvers. The simulation results on a test case confirm the effectiveness of the proposed demand-response program based on a dynamic pricing approach in reducing the retailer's risk and increasing its profit.
In this paper, the decision-making problem of retailers under a dynamic pricing approach for demand-response integration has been investigated. The retailer was assumed to rely on forward contracts, DGs, and the spot electricity market to supply the required active and reactive power of its customers. To verify the effectiveness of the proposed model, four schemes for the retailer's scheduling problem are considered, and the resulting profit under each scheme is analyzed and compared. The simulation results on a test case indicate that providing more options for the retailer to buy the required power of its customers, and increasing its flexibility in buying energy from the spot electricity market, reduces the retailer's risk and increases its profit. From the customers' perspective, the retailer's access to different power supply sources may also lead to a reduction in retail electricity prices, since the retailer would be able to decrease its electricity selling price without losing its profitability, with the aim of attracting more customers. In this work, the conditional value-at-risk (CVaR) measure is used for considering and quantifying risk in the decision-making problems. Among all the options available to the retailer to optimize its profit and risk, demand-response programs are the most beneficial for both the retailer and its customers. The simulation results on the case study prove that implementing a dynamic pricing approach on retail electricity prices to integrate demand-response programs can successfully motivate customers to shift their flexible demand from peak-load hours to mid-load and low-load hours. Comparing the simulation results of the third and fourth schemes evidences the impact of DRPs and customers' load shifting on the reduction of the retailer's risk, as well as the reduction of the retailer's payments to contract holders, DG owners, and the spot electricity market. Furthermore, the numerical results point to the potential of reducing average retail prices by up to 8% under demand-response activation. Consequently, it provides a win-win solution for both the retailer and its customers.
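The CVaR measure named above has a compact empirical form: for a confidence level β, it is the expected profit over the worst (1 − β) share of scenarios. A minimal sketch, with hypothetical Monte Carlo scenario profits standing in for the paper's price/load samples:

```python
import numpy as np

def cvar(profits, beta=0.95):
    """Conditional value-at-risk of a profit distribution:
    mean profit over the worst (1 - beta) fraction of scenarios."""
    sorted_p = np.sort(profits)                          # ascending: worst first
    tail = max(1, int(np.ceil((1.0 - beta) * len(sorted_p))))
    return sorted_p[:tail].mean()

rng = np.random.default_rng(0)
profits = rng.normal(1000.0, 250.0, size=5000)           # hypothetical scenario profits
print("expected profit:", profits.mean())
print("CVaR(0.95):", cvar(profits, 0.95))
```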
Management strategies for sustainable sugarcane production need to deal with the increasing complexity and variability of the whole sugar system. Moreover, they need to accommodate the multiple goals of different industry sectors and the wider community. Traditional disciplinary approaches are unable to provide integrated management solutions, and an approach based on whole-systems analysis is essential to bring about beneficial change to industry and the community. The application of this approach to water management, environmental management, and cane supply management is outlined; the literature indicates that the application of the extreme learning machine (ELM) has never been explored in this realm. Consequently, the leading objective of the current research was to fill this gap by applying ELM to build a swift and accurate data-driven model for crop production. The key learning has been the need for innovation in the technical aspects of system function, underpinned by modelling of sugarcane growth. The current study therefore attempts to establish an integrated ELM model to predict the final growth of sugarcane. Prediction results were evaluated and further compared with artificial neural network (ANN) and genetic programming models. The accuracy of the ELM model is assessed using the statistical indicators root mean square error (RMSE), Pearson coefficient (r), and coefficient of determination (R2), with promising results of 0.8, 0.47, and 0.89, respectively. The results also show better generalization ability in addition to a faster learning curve. Thus, the proficiency of the ELM for further work on the advancement of prediction models for sugarcane growth was confirmed.
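Since ELM recurs in several of the following abstracts, a minimal sketch of the algorithm itself may help: hidden-layer weights are drawn at random and never trained, and only the output weights are solved for in closed form via the Moore-Penrose pseudo-inverse. The data below are synthetic placeholders, not the crop dataset.

```python
import numpy as np

def elm_fit(X, y, n_hidden=50, seed=0):
    """Extreme learning machine: random hidden layer, least-squares output weights."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))   # random input weights (fixed)
    b = rng.normal(size=n_hidden)                 # random biases (fixed)
    H = np.tanh(X @ W + b)                        # hidden-layer activations
    beta = np.linalg.pinv(H) @ y                  # output weights via pseudo-inverse
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Toy usage with synthetic data (placeholders for the paper's growth predictors)
X = np.random.rand(200, 4)
y = X @ np.array([1.0, -2.0, 0.5, 3.0]) + 0.1 * np.random.randn(200)
W, b, beta = elm_fit(X, y)
print("train RMSE:", np.sqrt(np.mean((elm_predict(X, W, b, beta) - y) ** 2)))
```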
The production of a desired product needs an effective use of the experimental model. The present study proposes an extreme learning machine (ELM) and a support vector machine (SVM) integrated with the response surface methodology (RSM) to solve the complexity in optimization and prediction of the ethyl ester and methyl ester production process. The novel hybrid models of ELM-RSM and ELM-SVM are further used as a case study to estimate the yield of methyl and ethyl esters through a trans-esterification process from waste cooking oil (WCO) based on American Society for Testing and Materials (ASTM) standards. The results of the prediction phase were also compared with artificial neural networks (ANNs) and adaptive neuro-fuzzy inference system (ANFIS), which were recently developed by the second author of this study. Based on the results, an ELM with a correlation coefficient of 0.9815 and 0.9863 for methyl and ethyl esters, respectively, had a high estimation capability compared with that for SVM, ANNs, and ANFIS. Accordingly, the maximum production yield was obtained in the case of using ELM-RSM of 96.86% for ethyl ester at a temperature of 68.48 °C, a catalyst value of 1.15 wt. %, mixing intensity of 650.07 rpm, and an alcohol to oil molar ratio (A/O) of 5.77; for methyl ester, the production yield was 98.46% at a temperature of 67.62 °C, a catalyst value of 1.1 wt. %, mixing intensity of 709.42 rpm, and an A/O of 6.09. Therefore, ELM-RSM increased the production yield by 3.6% for ethyl ester and 3.1% for methyl ester, compared with those for the experimental data.
Biodiesel, as the main alternative fuel to diesel fuel which is produced from renewable and available resources, improves the engine emissions during combustion in diesel engines. In this study, the biodiesel is produced initially from waste cooking oil (WCO). The fuel samples are applied in a diesel engine and the engine performance has been considered from the viewpoint of exergy and energy approaches. Engine tests are performed at a constant 1500 rpm speed with various loads and fuel samples. The obtained experimental data are also applied to develop an artificial neural network (ANN) model. Response surface methodology (RSM) is employed to optimize the exergy and energy efficiencies. Based on the results of the energy analysis, optimal engine performance is obtained at 80% of full load in presence of B10 and B20 fuels. However, based on the exergy analysis results, optimal engine performance is obtained at 80% of full load in presence of B90 and B100 fuels. The optimum values of exergy and energy efficiencies are in the range of 25–30% of full load, which is the same as the calculated range obtained from mathematical modeling.
As part of an international research project funded by the European Union, capillary glasses for facades are being developed that exploit energy storage by means of fluids flowing through the capillaries. To meet the highest visual demands, acrylate adhesives and EVA films are tested as possible bonding materials for the glass setup. Non-destructive methods in particular (visual analysis, analysis of birefringent properties, and computed tomographic data) are applied to evaluate failure patterns as well as the long-term behavior under climatic influences. The experimental investigations are presented after different loading periods, providing information on failure development. In addition, detailed information and scientific findings on the application of computed tomographic analyses are presented.
Thermal building simulation based on BIM models. Thermal-energetic simulations are used to estimate the heating demand of buildings and districts. These simulations are based on building models containing geometrical and physical information. The creation of the geometrical model is usually based on existing construction plans or in-situ assessments, which demands a comparatively large effort of investigation and modeling. Alterations later applied to the structure require manual changes to the related model, which increases the effort further. The physical model represents the set of parameters and boundary conditions given by material properties, location, and environmental influences on the building. The link between both models is realized within the corresponding simulation software and is usually not transferable to other software products. By applying Building Information Modeling (BIM), simulation data can be stored consistently and exchanged with other applications via interfaces. To this end, a method is presented that enables thermal-energetic simulations, including subsequent evaluations, based on the standardized exchange format Industry Foundation Classes (IFC). All geometrical and physical parameters are extracted directly from a building model that is kept up to date over its entire life cycle and are transferred to the simulation. This accelerates the simulation process with regard to building modeling and after later structural changes. The developed method relies on simple modeling conventions for the creation of the building information model and ensures complete transferability of all input and output data.
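To make the IFC extraction step concrete, the hedged sketch below uses the open-source ifcopenshell library to read a hypothetical IFC file and query thermal properties of walls. Entity and property-set names follow the IFC schema, but whether a given model carries them depends on how it was authored; this illustrates the idea, not the authors' implementation.

```python
import ifcopenshell
import ifcopenshell.util.element as element

model = ifcopenshell.open("building.ifc")      # hypothetical IFC file

for wall in model.by_type("IfcWall"):
    psets = element.get_psets(wall)            # all property sets attached to the wall
    common = psets.get("Pset_WallCommon", {})
    # U-value, if the model author filled it in
    print(wall.GlobalId, common.get("ThermalTransmittance"))
```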
The Local Governance of Arrival in Leipzig: Housing of Asylum-Seeking Persons as a Contested Field
(2018)
The article examines how the German city of Leipzig governs the housing of asylum seekers. Leipzig was a frontrunner in organizing the decentralized accommodation of asylum seekers when it adopted its accommodation concept in 2012. This concept aimed at integrating asylum-seeking persons into the regular housing market at an early stage of arrival. Since then, however, the city of Leipzig has faced more and more challenges in implementing the concept. This is partly due to the increasingly tight situation on the housing market while the number of people seeking protection has increased, and partly due to discriminatory and xenophobic attitudes on the side of house owners and managers. Therefore, we argue that the so-called refugee crisis of 2015-2016 has to be seen in close interaction with a growing general housing shortage in Leipzig, as in many other large European cities. Furthermore, we understand the municipal governing of housing as a contested field with regard to its entanglement of diverse federal levels and policy scales, the diversity of stakeholders involved, and its dynamic change over the last years. We analyze this contested field against the current context of arrival and dynamic urban growth on a local level. Based on empirical qualitative research conducted by us in 2016, Leipzig's local specifics are investigated under the umbrella of our conceptual framework of the Governance of Arrival. The issues of a strained housing market and the integration of asylum seekers into it do not apply only to Leipzig but shed light on similar developments in other European cities.
The late 1960s and especially the 1970s were a high phase of tenant protests in the Federal Republic of Germany. This contribution pursues the thesis that the crisis of Fordist housing provision in the 1960s, or rather the strategies implemented by policymakers to resolve this crisis, enabled a class alliance in housing-related protests, and that this class alliance split up over the course of the 1970s and 1980s, leading to the containment of the protest within the emerging neoliberal project. In the following, I first describe the housing question of 1968 as a crisis of Fordist housing production and thus the material basis of the class alliance. I then illustrate the class alliance through protests in the three fields of mass housing construction, redevelopment areas, and squatting, and trace its splitting. Finally, I ask what can be learned from this history today.
Performance assessment of a ductless personalized ventilation system using a validated CFD model
(2018)
The aim of this study is twofold: to validate a computational fluid dynamics (CFD) model, and then to use the validated model to evaluate the performance of a ductless personalized ventilation (DPV) system. To validate the numerical model, a series of measurements was conducted in a climate chamber equipped with a thermal manikin. Various turbulence models, settings, and options were tested; simulation results were compared to the measured data to determine the turbulence model and solver settings that achieve the best agreement between the measured and simulated values. Subsequently, the validated CFD model was used to evaluate the thermal environment and indoor air quality in a room equipped with a DPV system combined with displacement ventilation. Results from the numerical model were then used to quantify thermal sensation and comfort using the UC Berkeley thermal comfort model.
This study aims to develop an approach to couple a computational fluid dynamics (CFD) solver to the University of California, Berkeley (UCB) thermal comfort model to accurately evaluate thermal comfort. The coupling was made using an iterative JavaScript to automatically transfer data for each individual segment of the human body back and forth between the CFD solver and the UCB model until reaching convergence defined by a stopping criterion. The location from which data are transferred to the UCB model was determined using a new approach based on the temperature difference between subsequent points on the temperature profile curve in the vicinity of the body surface. This approach was used because the microclimate surrounding the human body differs in thickness depending on the body segment and the surrounding environment. To accurately simulate the thermal environment, the numerical model was validated beforehand using experimental data collected in a climate chamber equipped with a thermal manikin. Furthermore, an example of the practical implementations of this coupling is reported in this paper through radiant floor cooling simulation cases, in which overall and local thermal sensation and comfort were investigated using the coupled UCB model.
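The iterative coupling described above lends itself to a compact fixed-point sketch. The authors scripted the exchange in JavaScript against their CFD solver; the Python sketch below only illustrates the loop structure, with toy surrogate functions standing in for the CFD run and the UCB physiology.

```python
import numpy as np

def run_cfd(skin_temp):
    # Toy surrogate for the CFD step: near-body air temperature and velocity
    # as simple functions of the current skin temperatures (placeholder only).
    return 0.5 * skin_temp + 12.0, np.full_like(skin_temp, 0.1)

def run_comfort_model(air_temp, velocity):
    # Toy surrogate for the UCB physiology: skin temperature responds to the
    # local microclimate of each segment (placeholder only).
    return 30.0 + 0.12 * air_temp - 5.0 * velocity

def couple(n_segments=16, tol=0.05, max_iter=20):
    skin = np.full(n_segments, 34.0)                  # initial guess per body segment, degC
    for it in range(max_iter):
        air, vel = run_cfd(skin)                      # environment near each segment
        new_skin = run_comfort_model(air, vel)        # physiological response
        if np.max(np.abs(new_skin - skin)) < tol:     # stopping criterion
            return new_skin, it
        skin = new_skin
    return skin, max_iter

skin, iters = couple()
print("converged after", iters, "iterations; mean skin temp:", round(skin.mean(), 2))
```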
Auguste Rodins Weimarer Eva
(2018)
A broadband soil dielectric spectra retrieval approach (1 MHz-2 GHz) has been implemented for a layered half-space. The inversion kernel consists of a two-port transmission-line forward model in the frequency domain and a constitutive material equation based on a power-law soil mixture rule (Complex Refractive Index Model, CRIM). The spatially distributed retrieval of broadband dielectric spectra was achieved with a global optimization approach based on a Shuffled Complex Evolution (SCE) algorithm using the full set of scattering parameters. For each layer, the broadband dielectric spectra were retrieved together with the corresponding parameters thickness, porosity, water saturation, and electrical conductivity of the aqueous pore solution. For the validation of the approach, a coaxial transmission-line cell measured with a network analyzer was used. The possibilities and limitations of the inverse parameter estimation were numerically analyzed in four scenarios. Expected and retrieved layer thicknesses, soil properties, and broadband dielectric spectra were in reasonable agreement in each scenario. Hence, the model is suitable for the estimation of inhomogeneous material parameter distributions. Moreover, the proposed frequency-domain approach allows an automatic adaptation of layer number and thickness or regular grids in time and/or space.
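The CRIM mixing rule at the core of the forward model is short enough to state directly: the α-power of the effective permittivity is the volume-weighted sum of the α-powers of the phase permittivities, with α = 0.5. A minimal sketch with illustrative real-valued phase permittivities (the paper works with frequency-dependent complex spectra):

```python
def crim_permittivity(porosity, saturation,
                      eps_solid=4.7, eps_water=80.0, eps_air=1.0, alpha=0.5):
    """Complex Refractive Index Model: power-law mixing of phase permittivities,
    weighted by the volume fractions of solid, pore water, and pore air."""
    return ((1.0 - porosity) * eps_solid ** alpha
            + porosity * saturation * eps_water ** alpha
            + porosity * (1.0 - saturation) * eps_air ** alpha) ** (1.0 / alpha)

print(crim_permittivity(porosity=0.4, saturation=0.6))
```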
Wireless sensor networks have attracted great attention for applications in structural health monitoring due to their ease of use, flexibility of deployment, and cost-effectiveness. This paper presents a software framework for WiFi-based wireless sensor networks composed of low-cost mass market single-board computers. A number of specific system-level software components were developed to enable robust data acquisition, data processing, sensor network communication, and timing with a focus on structural health monitoring (SHM) applications. The framework was validated on Raspberry Pi computers, and its performance was studied in detail. The paper presents several characteristics of the measurement quality such as sampling accuracy and time synchronization and discusses the specific limitations of the system. The implementation includes a complementary smartphone application that is utilized for data acquisition, visualization, and analysis. A prototypical implementation further demonstrates the feasibility of integrating smartphones as data acquisition nodes into the network, utilizing their internal sensors. The measurement system was employed in several monitoring campaigns, three of which are documented in detail. The suitability of the system is evaluated based on comparisons of target quantities with reference measurements. The results indicate that the presented system can robustly achieve a measurement performance commensurate with that required in many typical SHM tasks such as modal identification. As such, it represents a cost-effective alternative to more traditional monitoring solutions.
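As a flavor of what a system-level acquisition component on such a single-board computer might look like, here is a hedged sketch of a fixed-rate sampling loop that schedules against the monotonic clock (so sleep jitter does not accumulate) and streams timestamped samples to an aggregation node. The host, port, and sensor callback are hypothetical; this is not the framework's actual code.

```python
import time, json, socket, random

def acquire(read_sensor, rate_hz=100.0, duration_s=1.0,
            host="192.168.0.10", port=5005):
    """Fixed-rate acquisition streaming timestamped JSON samples over UDP."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    period = 1.0 / rate_hz
    next_t = time.monotonic()
    for _ in range(int(duration_s * rate_hz)):
        sample = {"t": time.time(), "v": read_sensor()}  # wall-clock stamp (NTP-disciplined)
        sock.sendto(json.dumps(sample).encode(), (host, port))
        next_t += period                                 # drift-free schedule point
        time.sleep(max(0.0, next_t - time.monotonic()))

acquire(lambda: random.gauss(0.0, 1.0))                  # toy accelerometer stand-in
```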
Urban planners are often challenged with the task of developing design solutions which must meet multiple, and often contradictory, criteria. In this paper, we investigated the trade-offs between the social, psychological, and energy potential of the fundamental elements of urban form: the street network and the building massing. Since formal methods to evaluate urban form from the psychological and social point of view are not readily available, we developed a methodological framework to quantify these criteria as the first contribution of this paper. To evaluate the psychological potential, we conducted a three-tiered empirical study starting from real-world environments and then abstracting them to virtual environments. In each context, the implicit (physiological) response and explicit (subjective) response of pedestrians were measured. To quantify the social potential, we developed a street-network centrality-based measure of social accessibility.
For the energy potential, we created an energy model to analyze the impact of pure geometric form on the energy demand of the building stock. The second contribution of this work is a method to identify distinct clusters of urban form and, for each, explore the trade-offs between the select design criteria. We applied this method to two case studies identifying nine types of urban form and their respective potential trade-offs, which are directly applicable for the assessment of strategic decisions regarding urban form during the early planning stages.
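A centrality-based accessibility measure of the kind named above can be prototyped in a few lines with networkx; closeness centrality on a toy, distance-weighted street graph is shown purely as an illustration (the paper's actual measure may differ).

```python
import networkx as nx

# Toy street graph: nodes are intersections, edge weights are walking distances (m).
G = nx.Graph()
G.add_weighted_edges_from([
    ("a", "b", 120), ("b", "c", 80), ("c", "d", 150),
    ("b", "d", 200), ("d", "e", 90),
])

# Closeness centrality as a simple proxy for how accessible a location is.
centrality = nx.closeness_centrality(G, distance="weight")
print(sorted(centrality.items(), key=lambda kv: -kv[1]))
```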
A categorical perspective towards aerodynamic models for aeroelastic analyses of bridge decks
(2019)
Reliable modelling in structural engineering is crucial for the serviceability and safety of structures. The huge variety of aerodynamic models for aeroelastic analyses of bridges poses natural questions about their complexity and, thus, their quality. Moreover, a direct comparison of aerodynamic models is typically either not possible or not meaningful, as the models can be based on very different physical assumptions. Therefore, to address the question of the principal comparability and complexity of models, a more abstract approach, accounting for the effect of basic physical assumptions, is necessary.
This paper presents an application of a recently introduced category theory-based modelling approach to a diverse set of models from bridge aerodynamics. Initially, the categorical approach is extended to allow an adequate description of aerodynamic models. Complexity of the selected aerodynamic models is evaluated, based on which model comparability is established. Finally, the utility of the approach for model comparison and characterisation is demonstrated on an illustrative example from bridge aeroelasticity. The outcome of this study is intended to serve as an alternative framework for model comparison and impact future model assessment studies of mathematical models for engineering applications.
Estimating the solubility of carbon dioxide in ionic liquids, using reliable models, is of paramount importance from both environmental and economic points of view. In this regard, the current research aims at evaluating the performance of two data-driven techniques, namely multilayer perceptron (MLP) and gene expression programming (GEP), for predicting the solubility of carbon dioxide (CO2) in ionic liquids (ILs) as a function of pressure, temperature, and four thermodynamic parameters of the ionic liquid. To develop the above techniques, 744 experimental data points derived from the literature, covering 13 ILs, were used (80% of the points for training and 20% for validation). Two backpropagation-based methods, namely Levenberg-Marquardt (LM) and Bayesian Regularization (BR), were applied to optimize the MLP algorithm. Various statistical and graphical assessments were applied to check the credibility of the developed techniques. The results were then compared with those calculated using the Peng-Robinson (PR) or Soave-Redlich-Kwong (SRK) equations of state (EoS). The highest coefficient of determination (R2 = 0.9965) and the lowest root mean square error (RMSE = 0.0116) were recorded for the MLP-LMA model on the full dataset (with a negligible difference to the MLP-BR model). The comparison of results from this model with the widely applied thermodynamic equation-of-state models revealed slightly better performance, but the EoS approaches also performed well, with R2 from 0.984 up to 0.996. Lastly, the newly established correlation based on the GEP model exhibited very satisfactory results, with overall values of R2 = 0.9896 and RMSE = 0.0201.
For 50 years, explanatory approaches to gentrification have been debated. For much longer, investment-seeking capital has wandered from one place to another, leaving behind investment ruins on the one hand and people who lose their homes through displacement on the other. Only much more recently has the term gentrification been taken up here and there by social movements concerned with the latter phenomenon.
This contribution is not about the scholarly debate on explanatory approaches to gentrification, nor about the scholarly relevance of the term, but about its role and function in social movements.
FCS-MBFLEACH: Designing an Energy-Aware Fault Detection System for Mobile Wireless Sensor Networks
(2019)
Wireless sensor networks (WSNs) consist of large numbers of sensor nodes densely distributed over a geographical region to monitor, identify, and analyze physical events. The crucial challenge in wireless sensor networks is the strong dependence of the sensor nodes on limited, non-rechargeable battery power to exchange information wirelessly, which makes managing and monitoring these nodes for abnormal changes very difficult. Such anomalies appear as faults, including hardware and software faults and attacks by intruders, all of which affect the completeness of the data collected by wireless sensor networks. Hence, crucial measures should be taken to detect faults in the network early, despite the limitations of the sensor nodes. Machine learning methods offer solutions that can be used to detect sensor node faults in the network. The purpose of this study is to use several classification methods to compute the fault detection accuracy with different densities under two scenarios in regions of interest, such as MB-FLEACH, one-class support vector machine (SVM), fuzzy one-class SVM, or a combination of SVM and FCS-MBFLEACH methods. It should be noted that, to date, no super cluster head (SCH) selection has been performed to detect node faults in the network. The simulation outcomes demonstrate that the FCS-MBFLEACH method has the best performance in terms of fault detection accuracy, false-positive rate (FPR), average remaining energy, and network lifetime compared to the other classification methods.
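One-class SVM, one of the classifiers compared above, is readily available in scikit-learn. The hedged sketch below trains it on synthetic "healthy" sensor readings (temperature and battery level as made-up features) and flags drifted readings as faults; it illustrates the technique, not the paper's pipeline.

```python
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(1)
normal = rng.normal(loc=[25.0, 0.6], scale=[1.0, 0.05], size=(500, 2))  # healthy readings
faulty = rng.normal(loc=[40.0, 0.2], scale=[2.0, 0.05], size=(20, 2))   # drifted sensor

clf = OneClassSVM(kernel="rbf", nu=0.05, gamma="scale").fit(normal)
print("share flagged as faulty:", (clf.predict(faulty) == -1).mean())
```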
Rapid advancements of modern technologies put high demands on mathematical modelling of engineering systems. Typically, systems are no longer "simple" objects, but rather coupled systems involving multiphysics phenomena, the modelling of which involves coupling of models that describe different phenomena. After constructing a mathematical model, it is essential to analyse the correctness of the coupled models and to detect modelling errors compromising the final modelling result. Broadly, there are two classes of modelling errors: (a) errors related to abstract modelling, e.g., conceptual errors concerning the coherence of a model as a whole and (b) errors related to concrete modelling or instance modelling, e.g., questions of approximation quality and implementation. Instance modelling errors, on the one hand, are relatively well understood. Abstract modelling errors, on the other, are not appropriately addressed by modern modelling methodologies. The aim of this paper is to initiate a discussion on abstract approaches and their usability for mathematical modelling of engineering systems with the goal of making it possible to catch conceptual modelling errors early and automatically by computer-assisted tools. To that end, we argue that it is necessary to identify and employ suitable mathematical abstractions to capture an accurate conceptual description of the process of modelling engineering systems.
This study aims to evaluate a new approach in modeling gully erosion susceptibility (GES) based on a deep learning neural network (DLNN) model and an ensemble particle swarm optimization (PSO) algorithm with DLNN (PSO-DLNN), comparing these approaches with common artificial neural network (ANN) and support vector machine (SVM) models in Shirahan watershed, Iran. For this purpose, 13 independent variables affecting GES in the study area, namely, altitude, slope, aspect, plan curvature, profile curvature, drainage density, distance from a river, land use, soil, lithology, rainfall, stream power index (SPI), and topographic wetness index (TWI), were prepared. A total of 132 gully erosion locations were identified during field visits. To implement the proposed model, the dataset was divided into the two categories of training (70%) and testing (30%). The results indicate that the area under the curve (AUC) value from receiver operating characteristic (ROC) considering the testing datasets of PSO-DLNN is 0.89, which indicates superb accuracy. The rest of the models are associated with optimal accuracy and have similar results to the PSO-DLNN model; the AUC values from ROC of DLNN, SVM, and ANN for the testing datasets are 0.87, 0.85, and 0.84, respectively. The efficiency of the proposed model in terms of prediction of GES was increased. Therefore, it can be concluded that the DLNN model and its ensemble with the PSO algorithm can be used as a novel and practical method to predict gully erosion susceptibility, which can help planners and managers to manage and reduce the risk of this phenomenon.
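The AUC scores reported above come from standard ROC analysis, which scikit-learn exposes directly; a minimal sketch with made-up test labels and susceptibility scores:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

y_true = np.array([0, 0, 1, 1, 0, 1, 0, 1])                    # gully / non-gully labels
y_score = np.array([0.1, 0.3, 0.8, 0.6, 0.2, 0.9, 0.4, 0.7])   # model susceptibility scores
print("AUC:", roc_auc_score(y_true, y_score))
```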
When it comes to monitoring of huge structures, main issues are limited time, high costs and how to deal with the big amount of data. In order to reduce and manage them, respectively, methods from the field of optimal design of experiments are useful and supportive. Having optimal experimental designs at hand before conducting any measurements is leading to a highly informative measurement concept, where the sensor positions are optimized according to minimal errors in the structures’ models. For the reduction of computational time a combined approach using Fisher Information Matrix and mean-squared error in a two-step procedure is proposed under the consideration of different error types. The error descriptions contain random/aleatoric and systematic/epistemic portions. Applying this combined approach on a finite element model using artificial acceleration time measurement data with artificially added errors leads to the optimized sensor positions. These findings are compared to results from laboratory experiments on the modeled structure, which is a tower-like structure represented by a hollow pipe as the cantilever beam. Conclusively, the combined approach is leading to a sound experimental design that leads to a good estimate of the structure’s behavior and model parameters without the need of preliminary measurements for model updating.
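For the Fisher-information half of the two-step procedure, a common concrete realization is a greedy search that selects sensor positions maximizing the log-determinant of the information matrix built from candidate mode-shape rows. The sketch below is such a generic D-optimal greedy selection on a random mode-shape matrix; it illustrates the principle, not the authors' exact algorithm.

```python
import numpy as np

def greedy_sensor_placement(Phi, n_sensors, noise_var=1.0):
    """Greedily pick rows of the mode-shape matrix Phi that maximize
    log det of the Fisher information F = Phi_s^T Phi_s / noise_var."""
    chosen, candidates = [], list(range(Phi.shape[0]))
    for _ in range(n_sensors):
        best, best_val = None, -np.inf
        for c in candidates:
            rows = chosen + [c]
            F = Phi[rows].T @ Phi[rows] / noise_var
            val = np.linalg.slogdet(F + 1e-12 * np.eye(F.shape[0]))[1]
            if val > best_val:
                best, best_val = c, val
        chosen.append(best)
        candidates.remove(best)
    return chosen

Phi = np.random.randn(30, 3)        # 30 candidate positions, 3 identified modes
print("selected sensor rows:", greedy_sensor_placement(Phi, 4))
```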
The amount of styrene-acrylate copolymer (SA) particles adsorbed on cementitious surfaces at an early stage of hydration was quantitatively determined using three different methodological approaches: the depletion method, visible spectrophotometry (VIS), and thermogravimetry coupled with mass spectrometry (TG-MS). Considering the advantages and disadvantages of each method, including the respectively required sample preparation, the results for four polymer-modified cement pastes, varying in polymer content and cement fineness, were evaluated.
In part, significant discrepancies in the adsorption degrees were observed. There is a tendency for significantly lower amounts of adsorbed polymer to be identified using TG-MS compared to values determined with the depletion method; spectrophotometrically generated values lie between these extremes. This tendency was found for three of the four cement pastes examined and originates from the sample preparation and methodological limitations.
The main influencing factor is the distortion of the polymer concentration in the liquid phase during centrifugation, caused by interactions at the interface between sediment and supernatant. The newly developed method using TG-MS for the quantification of SA particles proved suitable for dealing with these issues: instead of the fluid phase, the sediment is examined with regard to its polymer content, on which the influence of centrifugation is considerably lower.
Acoustic travel-time TOMography (ATOM) allows the measurement and reconstruction of air temperature distributions. Due to limiting factors, such as the challenge of estimating the travel times of early reflections in the room impulse response, which depend heavily on the positions of the transducers inside the measurement area, ATOM is applied mainly outdoors. To apply ATOM in buildings, this paper presents a numerical method to optimize the positions of the transducers. This optimization avoids reflection overlaps, leading to distinguishable travel times in the impulse-response reflectogram. To increase the accuracy of the measured temperature within tomographic voxels, an additional objective is added to the proposed numerical method to minimize the number of sound-path-free voxels, ensuring the best sound-ray coverage of the room. Subsequently, an experimental set-up was used to verify the proposed numerical method. The results indicate the positive impact of the optimal transducer positions on the distribution of ATOM temperatures.
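The physical link that makes ATOM work is that the speed of sound in air grows with temperature, approximately c = 20.05·√T m/s with T in kelvin, so a measured travel time t over a known path length L yields the path-averaged temperature T = (L / (20.05·t))². A minimal sketch with illustrative numbers:

```python
import numpy as np

def path_temperature(length_m, travel_time_s):
    """Path-averaged air temperature from an acoustic travel time,
    using the dry-air approximation c = 20.05 * sqrt(T) m/s."""
    c = length_m / travel_time_s
    return (c / 20.05) ** 2                   # kelvin

L = 4.0                                       # transducer separation, m (illustrative)
t = L / (20.05 * np.sqrt(293.15))             # synthetic travel time at 20 degC
print("retrieved temperature:", path_temperature(L, t) - 273.15, "degC")
```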
Discrete function theory in higher dimensions has been in active development for many years. However, available results focus on the discrete setting for canonical domains such as the half-space, while the case of bounded domains has generally remained unconsidered. Therefore, this paper extends the higher-dimensional function theory to the case of arbitrary bounded domains in Rn. Along the way, a discrete Stokes formula, a discrete Borel-Pompeiu formula, and discrete Hardy spaces for general bounded domains are constructed. Finally, several discrete Hilbert problems are considered.
This article focuses on further developments of the background-oriented schlieren (BOS) technique to visualize convective indoor air flow, which is usually characterized by very small density gradients. Since light rays deflect when passing through fluids of different densities, BOS can detect the resulting refractive-index gradients as an integration along a line of sight. In this paper, the BOS technique is used to yield a two-dimensional visualization of small density gradients. The novelty of the described method is the implementation of a highly sensitive BOS setup to visualize the ascending thermal plume from a heated thermal manikin with temperature differences of as little as 1 K. To guarantee steady boundary conditions, the thermal manikin was seated in a climate laboratory. For the experimental investigations, a high-resolution DSLR camera was used, capturing a large field of view with sufficient detail accuracy. Several parameters such as different backgrounds, focal lengths, room air temperatures, and distances between the object of investigation, the camera, and the structured background were tested to find the most suitable parameters for visualizing convective indoor air flow. Besides these measurements, this paper presents the analysis method using cross-correlation algorithms and, finally, the results of visualizing convective indoor air flow with BOS. The highly sensitive BOS setup presented in this article complements the commonly used invasive methods that strongly influence weak air flows.
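The cross-correlation step named above follows the same window-matching idea as PIV: the apparent shift of a background patch is the location of the peak of the 2-D cross-correlation between the reference and the distorted image. A self-contained sketch with a synthetic pixel shift (not the authors' evaluation code):

```python
import numpy as np
from scipy.signal import correlate2d

def patch_shift(ref, distorted):
    """Estimate the apparent background shift of one interrogation window
    from the peak of the 2-D cross-correlation (the core of BOS evaluation)."""
    corr = correlate2d(distorted - distorted.mean(), ref - ref.mean(), mode="same")
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    center = np.array(corr.shape) // 2        # zero-lag position of the "same" output
    return np.array(peak) - center            # (dy, dx) in pixels

ref = np.random.rand(32, 32)                        # structured background patch
shifted = np.roll(ref, shift=(1, 2), axis=(0, 1))   # synthetic 1 px down, 2 px right
print(patch_shift(ref, shifted))                    # expected: [1 2]
```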
The contribution explores the migratory situation on the Balkans and more specifically in the so-called Refugee District in Belgrade from a spatial perspective. By visualizing the areas of tensions in the Refugee District, the city of Belgrade, Serbia and Europe it aims to disentangle the political and socio-spatial levels that lead to the stuck situation of in-betweenness at the gates of the European Union.
Novel sanitary systems aim at a resource-oriented recycling of wastewater, to be achieved by separately collecting wastewater part-streams. In the professional communities of water management and spatial planning, novel sanitary systems are considered a suitable approach for securing wastewater disposal in rural areas in the future. Although the practicality of these systems has been demonstrated in research projects, manifold risks have so far made it difficult for wastewater utilities to introduce resource-oriented wastewater management. Starting from an examination of the contexts of implementing a novel sanitary system in rural Thuringia, this contribution addresses the question of how the introduction of resource-oriented system approaches can be supported at the state level with the instruments of wastewater management. Central elements of the contribution are a presentation of the main transformation risks related to the introduction of innovative approaches, an explanation of the specific wastewater-management instruments, and a description of governance approaches that can promote the introduction of novel sanitary systems. As a result, the feasibility of novel sanitary systems through the strategic use of these instruments becomes clear, even though the extension of the previous system boundaries makes water management dependent on cooperation with other sectors of public services.
A Machine Learning Framework for Assessing Seismic Hazard Safety of Reinforced Concrete Buildings
(2020)
Although averting a seismic disturbance and its physical, social, and economic disruption is practically impossible, advancements in computational science and numerical modeling equip us to predict its severity, understand the outcomes, and prepare for post-disaster management. Many buildings in developed metropolitan areas are aging and still in service; they were designed before national seismic codes were established or without construction regulations. In that case, risk reduction is significant for developing alternatives and designing suitable models to enhance the performance of existing structures. Such models can classify risks and casualties related to possible earthquakes for emergency preparation. Thus, it is crucial to recognize structures that are susceptible to earthquake vibrations and need to be prioritized for retrofitting. However, the behavior of each building under seismic actions cannot be studied through full structural analysis, as this is often unrealistic because of the rigorous computations, long duration, and substantial expenditure involved. This calls for a simple, reliable, and accurate process known as Rapid Visual Screening (RVS), which serves as a primary screening platform, including an optimum number of seismic parameters and predetermined performance damage conditions for structures. In this study, the damage classification technique was studied, and the efficacy of the Machine Learning (ML) method for damage prediction via a Support Vector Machine (SVM) model was explored. The ML model is trained and tested separately on damage data from four different earthquakes, namely in Ecuador, Haiti, Nepal, and South Korea. Each dataset consists of a varying number of input data and eight performance modifiers. Based on the study and the results, the SVM-based ML model classifies the given input data into the corresponding damage classes, accomplishing the hazard safety evaluation of buildings.
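The SVM classification step maps RVS performance modifiers to damage grades; a hedged scikit-learn sketch with an entirely synthetic stand-in for such a table (the feature semantics and labels below are made up, not the paper's data):

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
X = rng.random((484, 8))                       # 8 performance modifiers per building
y = (X[:, :3].sum(axis=1) * 1.5).astype(int)   # toy damage grades 0..4

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = SVC(kernel="rbf", C=10.0, gamma="scale").fit(X_tr, y_tr)
print("hold-out accuracy:", clf.score(X_te, y_te))
```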
Personalized ventilation (PV) can improve thermal comfort as well as the quality of the inhaled air by supplying fresh air separately to each workplace. This contribution investigates the effect of PV on the thermal comfort of users under summer boundary conditions. Two approaches for evaluating the cooling effect of PV were examined, based on (1) the equivalent temperature and (2) thermal sensation. The evaluation relies on data measured in a climate chamber as well as numerically simulated data. Before the simulations were carried out, the numerical model was first validated against the measured data. The results show that the approach based on thermal sensation can be more meaningful for evaluating the cooling effect of PV, as it better accounts for the complex physiological factors.
The latest earthquakes have proven that several existing buildings, particularly in developing countries, are not secured from damages of earthquake. A variety of statistical and machine-learning approaches have been proposed to identify vulnerable buildings for the prioritization of retrofitting. The present work aims to investigate earthquake susceptibility through the combination of six building performance variables that can be used to obtain an optimal prediction of the damage state of reinforced concrete buildings using artificial neural network (ANN). In this regard, a multi-layer perceptron network is trained and optimized using a database of 484 damaged buildings from the Düzce earthquake in Turkey. The results demonstrate the feasibility and effectiveness of the selected ANN approach to classify concrete structural damage that can be used as a preliminary assessment technique to identify vulnerable buildings in disaster risk-management programs.
The settlement and housing policy practiced in Germany in 2020 leaves a bitter aftertaste in view of its effects on the social and ecological situation. Rich and poor are drifting further apart, and historically and systemically conditioned path dependencies still stand in the way of a targeted ecological transformation of the way urban development and housing policy are shaped. These dependencies become visible only through an integrated consideration of social and economic aspects, and they point to one of the original questions of left social research: the examination of the relationship between property and justice.
Three main findings result: first, the discourse on protecting the climate and biodiversity directly touches the parameters of density, mixed use, and land take; second, land take rises relatively with increasing individually available capital, and especially in owner-occupied housing compared with rental housing; and third, the ownership share grows with the advancing financialization of the housing market, so that the risk of social and ecological crises intensifies.
In this paper, an artificial neural network is implemented to predict the thermal conductivity ratio of TiO2-Al2O3/water nanofluid. TiO2-Al2O3/water, as an innovative type of nanofluid, was synthesized by the sol-gel method. The results indicated that a nanoparticle loading of 1.5 vol.% enhanced the thermal conductivity by up to 25%. It was shown that the heat transfer coefficient increased linearly with increasing nanoparticle concentration, but its variation with temperature was nonlinear. It should be noted that an increase in concentration may cause the particles to agglomerate, which reduces the thermal conductivity. An increase in temperature also increases the thermal conductivity, due to increased Brownian motion and collisions of particles. To predict the thermal conductivity of the TiO2-Al2O3/water nanofluid as a function of volumetric concentration and temperature, SOM (self-organizing map) and BP-LM (back propagation Levenberg-Marquardt) algorithms were used. Based on the results obtained, these algorithms can be considered an exceptional tool for predicting thermal conductivity. Additionally, the correlation coefficient values were equal to 0.938 and 0.98 when implementing the SOM and BP-LM algorithms, respectively, which is highly acceptable.
The performance of ductless personalized ventilation (DPV) was compared to the performance of a typical desk fan since they are both stand-alone systems that allow the users to personalize their indoor environment. The two systems were evaluated using a validated computational fluid dynamics (CFD) model of an office room occupied by two users. To investigate the impact of DPV and the fan on the inhaled air quality, two types of contamination sources were modelled in the domain: an active source and a passive source. Additionally, the influence of the compared systems on thermal comfort was assessed using the coupling of CFD with the comfort model developed by the University of California, Berkeley (UCB model). Results indicated that DPV performed generally better than the desk fan. It provided better thermal comfort and showed a superior performance in removing the exhaled contaminants. However, the desk fan performed better in removing the contaminants emitted from a passive source near the floor level. This indicates that the performance of DPV and desk fans depends highly on the location of the contamination source. Moreover, the simulations showed that both systems increased the spread of exhaled contamination when used by the source occupant.
The K-nearest neighbors (KNN) machine learning algorithm is a well-known non-parametric classification method. However, like other traditional data mining methods, applying it to big data comes with computational challenges. Indeed, KNN determines the class of a new sample based on the class of its nearest neighbors; however, identifying the neighbors in a large amount of data imposes a high computational cost, so that it is no longer applicable on a single computing machine. One of the techniques proposed to make classification methods applicable to large datasets is pruning. LC-KNN is an improved KNN method which first clusters the data into smaller partitions using the K-means clustering method, and then applies KNN for each new sample on the partition whose center is nearest. However, because the clusters have different shapes and densities, selecting the appropriate cluster is a challenge. In this paper, an approach is proposed to improve the pruning phase of the LC-KNN method by taking these factors into account. The proposed approach helps to choose a more appropriate cluster of data in which to look for the neighbors, thus increasing the classification accuracy. The performance of the proposed approach is evaluated on different real datasets. The experimental results show the effectiveness of the proposed approach and its higher classification accuracy and lower time cost in comparison to other recent relevant methods.
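The two-stage idea is straightforward to sketch. Assuming scikit-learn, the hedged toy version below clusters with k-means and runs KNN only inside the partition whose center is nearest to the query; the paper's refined cluster-selection rule, which also accounts for cluster shape and density, is not reproduced here.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neighbors import KNeighborsClassifier

def lc_knn_predict(km, X, y, query, k=3):
    """KNN restricted to the k-means partition whose center is nearest the query."""
    c = km.predict(query.reshape(1, -1))[0]
    mask = km.labels_ == c
    knn = KNeighborsClassifier(n_neighbors=min(k, int(mask.sum())))
    knn.fit(X[mask], y[mask])
    return knn.predict(query.reshape(1, -1))[0]

rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 4))
y = (X[:, 0] > 0).astype(int)
km = KMeans(n_clusters=5, n_init=10, random_state=0).fit(X)  # cluster once, reuse per query
print(lc_knn_predict(km, X, y, rng.normal(size=4)))
```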
In this study, the machine learning methods of artificial neural networks (ANNs), least squares support vector machines (LSSVM), and neuro-fuzzy inference (ANFIS) are used to advance prediction models for the thermal performance of a photovoltaic-thermal solar collector (PV/T). In the proposed models, the inlet temperature, flow rate, heat, solar radiation, and sun heat have been considered as the input variables. The data set has been extracted through experimental measurements from a novel solar collector system. Different analyses are performed to examine the credibility of the introduced models and evaluate their performance. The proposed LSSVM model outperformed the ANFIS and ANN models. The LSSVM model is reported to be suitable when laboratory measurements are costly and time-consuming, or when obtaining such values requires sophisticated interpretation.
A novel combination of the ant colony optimization algorithm (ACO) and computational fluid dynamics (CFD) data is proposed for modeling multiphase chemical reactors. The proposed intelligent model presents a probabilistic computational strategy for predicting various levels of three-dimensional bubble column reactor (BCR) flow. The results demonstrate an enhanced agreement between the ant colony prediction and the CFD data in different sections of the BCR.
Recently, the demand for residence and usage of urban infrastructure has increased, raising the risk to human lives from natural calamities. The occupancy demand has rapidly increased the construction rate, while inadequately designed structures are more vulnerable. Buildings constructed before the development of seismic codes have an additional susceptibility to earthquake vibrations. Structural collapse causes economic losses as well as setbacks for human lives. Applying different theoretical methods to analyze structural behavior is expensive and time-consuming. Therefore, introducing a rapid vulnerability assessment method to check structural performance is necessary for future developments. The process mentioned earlier is known as Rapid Visual Screening (RVS). This technique has been developed to identify, inventory, and screen structures that are potentially hazardous. Sometimes, poor construction quality does not provide some of the required parameters; in this case, the RVS process turns into a tedious scenario. Hence, to tackle such a situation, multiple-criteria decision-making (MCDM) methods for seismic vulnerability assessment open a new gateway. The different parameters required by RVS can be taken into account in MCDM. MCDM evaluates multiple conflicting criteria in decision making in several fields. This paper aims to bridge the gap between RVS and MCDM. Furthermore, to define the correlation between these techniques, the methodologies from the Indian, Turkish, and Federal Emergency Management Agency (FEMA) codes have been implemented. The effects of the seismic vulnerability of structures have been observed and compared.
This research aims to model soil temperature (ST) using the machine learning models of the multilayer perceptron (MLP) algorithm and the support vector machine (SVM) in hybrid form with the firefly optimization algorithm, i.e. MLP-FFA and SVM-FFA. In the current study, measured ST and meteorological parameters of the Tabriz and Ahar weather stations over the period 2013-2015 are used for training and testing of the studied models with delays of one and two days. To ascertain conclusive results for validation of the proposed hybrid models, the error metrics are benchmarked in an independent testing period. Moreover, Taylor diagrams were utilized for that purpose. The obtained results showed that, in the case of a one-day delay, except in predicting ST at 5 cm below the soil surface (ST5cm) at Tabriz station, MLP-FFA produced superior results compared with the MLP, SVM, and SVM-FFA models. However, for a two-day delay, MLP-FFA indicated increased accuracy in predicting ST5cm and ST20cm of Tabriz station and ST10cm of Ahar station in comparison with SVM-FFA. Additionally, for all of the prescribed models, the performance of the MLP-FFA and SVM-FFA hybrid models in the testing phase was found to be meaningfully superior to the classical MLP and SVM models.
Energy‐Efficient Method for Wireless Sensor Networks Low‐Power Radio Operation in Internet of Things
(2020)
The radio operation in wireless sensor networks (WSN) in Internet of Things (IoT) applications is the most common source of power consumption. Consequently, recognizing and controlling the factors affecting radio operation can be valuable for managing node power consumption. Among the essential factors affecting radio operation, the time spent checking the radio is of utmost importance for monitoring power consumption, as it can lead to false WakeUps or idle listening in radio duty cycles. ContikiMAC is a low-power radio duty-cycle protocol in Contiki OS that uses, in WakeUp mode, a clear channel assessment (CCA) for checking radio status periodically. This paper presents a detailed analysis of the radio WakeUp time factors of ContikiMAC. Furthermore, we propose a lightweight CCA (LW-CCA) as an extension to ContikiMAC to reduce the radio duty cycles in false WakeUps and idle listening through a dynamic received signal strength indicator (RSSI) status check time. The simulation results in the Cooja simulator show that LW-CCA reduces energy consumption in nodes by about 8% while maintaining up to 99% of the packet delivery rate (PDR).
The classical Internet of Things routing and wireless sensor networks can provide more precise monitoring of the covered area due to the higher number of utilized nodes. Because of the limitations of the shared transfer medium, many nodes in the network are prone to collision in simultaneous transmissions. Medium access control protocols are usually more practical in networks with low traffic that are not subjected to external noise from adjacent frequencies. There are preventive, detection, and control solutions for congestion management in the network, all of which are the focus of this study. In the congestion prevention phase, the proposed method chooses the next step of the path using a fuzzy decision-making system to distribute network traffic via optimal paths. In the congestion detection phase, a dynamic approach to queue management was designed to detect congestion in the least amount of time and prevent collision. In the congestion control phase, the back-pressure method was used based on the quality of the queue to decrease the probability of linking into the pathway from the pre-congested node. The main goals of this study are to balance energy consumption among network nodes, reduce the rate of lost packets, and increase the quality of service in routing. Simulation results showed that the proposed Congestion Control Fuzzy Decision Making (CCFDM) method was more capable of improving routing parameters than recent algorithms.
Coronary Artery Disease Diagnosis: Ranking the Significant Features Using a Random Trees Model
(2020)
Heart disease is one of the most common diseases among middle-aged citizens. Among the vast number of heart diseases, coronary artery disease (CAD) is considered a common cardiovascular disease with a high death rate. The most popular tool for diagnosing CAD is medical imaging, e.g., angiography. However, angiography is known for being costly and is also associated with a number of side effects. Hence, the purpose of this study is to increase the accuracy of coronary heart disease diagnosis by selecting significant predictive features in order of their ranking. In this study, we propose an integrated method using machine learning. The machine learning methods of random trees (RTs), the C5.0 decision tree, support vector machines (SVM), and the Chi-squared automatic interaction detection (CHAID) decision tree are used. The proposed method shows promising results, and the study confirms that the RTs model outperforms the other models.
The longitudinal dispersion coefficient (LDC) plays an important role in modeling the transport of pollutants and sediment in natural rivers. As a result of transportation processes, the concentration of pollutants changes along the river. Various studies have been conducted to provide simple equations for estimating LDC. In this study, machine learning methods, namely support vector regression, Gaussian process regression, M5 model tree (M5P) and random forest, and multiple linear regression were examined in predicting the LDC in natural streams. Data sets from 60 rivers around the world with different hydraulic and geometric features were gathered to develop models for LDC estimation. Statistical criteria, including correlation coefficient (CC), root mean squared error (RMSE) and mean absolute error (MAE), were used to scrutinize the models. The LDC values estimated by these models were compared with the corresponding results of common empirical models. The Taylor chart was used to evaluate the models and the results showed that among the machine learning models, M5P had superior performance, with CC of 0.823, RMSE of 454.9 and MAE of 380.9. The model of Sahay and Dutta, with CC of 0.795, RMSE of 460.7 and MAE of 306.1, gave more precise results than the other empirical models. The main advantage of M5P models is their ability to provide practical formulae. In conclusion, the results proved that the developed M5P model with simple formulations was superior to other machine learning models and empirical models; therefore, it can be used as a proper tool for estimating the LDC in rivers.
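M5P, the model tree used in the study, is a Weka algorithm without a direct scikit-learn equivalent; as a loose, hedged stand-in, a shallow regression tree at least illustrates how tree models yield the explicit, human-readable prediction rules ("practical formulae") the abstract highlights. Feature names and data below are invented.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor, export_text

rng = np.random.default_rng(2)
X = rng.uniform(size=(60, 3))                 # hypothetical hydraulic/geometric features
ldc = 100 + 500 * X[:, 0] * X[:, 1] + rng.normal(scale=10, size=60)

tree = DecisionTreeRegressor(max_depth=2, random_state=0).fit(X, ldc)
print(export_text(tree, feature_names=["width_depth_ratio", "velocity_ratio", "sinuosity"]))
```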
Temporary changes in precipitation may lead to sustained and severe droughts or massive floods in different parts of the world. Knowing the variation in precipitation can effectively help decision-makers in water resources management. Large-scale circulation drivers have a considerable impact on precipitation in different parts of the world. In this research, the impact of the El Niño-Southern Oscillation (ENSO), the Pacific Decadal Oscillation (PDO), and the North Atlantic Oscillation (NAO) on seasonal precipitation over Iran was investigated. For this purpose, 103 synoptic stations with at least 30 years of data were utilized. The Spearman correlation coefficient between the indices in the previous 12 months and seasonal precipitation was calculated, and the meaningful correlations were extracted. Then, the month in which each of these indices has the highest correlation with seasonal precipitation was determined. Finally, the overall amount of increase or decrease in seasonal precipitation due to each of these indices was calculated. The results indicate that the Southern Oscillation Index (SOI), NAO, and PDO have the greatest impact on seasonal precipitation, in that order. Additionally, these indices have the highest impact on precipitation in winter, autumn, spring, and summer, respectively. The SOI has a contrasting impact on winter precipitation compared to the PDO and NAO, while in the other seasons each index has its own distinct impact on seasonal precipitation. Generally, all indices in different phases may decrease seasonal precipitation by up to 100%. However, seasonal precipitation may also increase by more than 100% in different seasons due to the impact of these indices. The results of this study can be used effectively in water resources management and especially in dam operation.
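A compact sketch of the lag analysis described, assuming SciPy: correlate a monthly index series at lags of 1 to 12 months with one seasonal precipitation value per year and report the lag with the strongest Spearman correlation. The series here are synthetic.

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(3)
index = rng.normal(size=240)                                       # 20 years of monthly index values
precip = np.roll(index, 3)[::12] + rng.normal(scale=0.3, size=20)  # one seasonal value per year

best = max(
    ((lag, *spearmanr(np.roll(index, lag)[::12], precip)) for lag in range(1, 13)),
    key=lambda t: abs(t[1]),
)
print("strongest lag: %d months, rho = %.2f, p = %.3f" % best)
```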
Hydrological drought forecasting plays a substantial role in water resources management, as hydrological drought strongly affects water allocation and hydropower generation. In this research, short-term hydrological drought was forecasted based on the hybridization of novel nature-inspired optimization algorithms with Artificial Neural Networks (ANN). For this purpose, the Standardized Hydrological Drought Index (SHDI) and the Standardized Precipitation Index (SPI) were calculated for one, three, and six aggregated months. Then, three states were proposed for SHDI forecasting, and 36 input-output combinations were extracted based on cross-correlation analysis. In the next step, newly proposed optimization algorithms, including the Grasshopper Optimization Algorithm (GOA), the Salp Swarm Algorithm (SSA), Biogeography-Based Optimization (BBO), and Particle Swarm Optimization (PSO), were hybridized with the ANN for SHDI forecasting, and the results were compared to the conventional ANN. Results indicated that the hybridized models outperformed the conventional ANN, with PSO performing better than the other optimization algorithms. The best models forecasted SHDI1 with R2 = 0.68 and RMSE = 0.58, SHDI3 with R2 = 0.81 and RMSE = 0.45, and SHDI6 with R2 = 0.82 and RMSE = 0.40.
In Germany, bridges have an average age of 40 years. A bridge consumes between 0.4% and 2% of its construction cost per year over its entire life cycle. This means that up to 80% of the construction cost is additionally needed for operation, inspection, maintenance, and demolition. Current practices rely either on paper-based inspections or on abstract specialist software. Every application in the inspection and maintenance sector uses its own data model for structures, inspections, defects, and maintenance. Due to this, data and properties have to be transferred manually; otherwise, a converter is necessary for every data exchange between two applications. To overcome this issue, an adequate model standard for inspections, damage, and maintenance is necessary. Modern 3D models may serve as a single source of truth, as has been suggested in the Building Information Modeling (BIM) concept. Further, these models offer a clear visualization of the built infrastructure and improve not only the planning and construction phases, but also the operation phase of construction projects. BIM is established mostly in the Architecture, Engineering, and Construction (AEC) sector to plan and construct new buildings. Currently, BIM does not cover the whole life cycle of a building, especially not inspection and maintenance. Creating damage models requires the building model first, because a defect depends on the building component, its properties, and its material. Hence, a building information model is necessary to obtain meaningful conclusions from damage information. This paper analyzes the requirements that arise from practice and the research that has been done on modeling damage and related information for bridges. With a look at damage categories and use cases related to inspection and maintenance, scientific literature is discussed and synthesized. Finally, research gaps and needs are identified and discussed.
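A hypothetical minimal data model illustrating the linkage the paper argues for: a damage record becomes meaningful only when tied to the building component on which it occurs. All class and field names below are invented for illustration, not a proposed standard.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Component:
    guid: str                 # stable component ID (in practice, e.g., an IFC GlobalId)
    component_type: str       # "girder", "pylon", ...
    material: str             # "C35/45 concrete", ...

@dataclass
class Damage:
    category: str             # "crack", "spalling", "corrosion", ...
    extent_mm: float          # measured size
    inspected_on: str         # ISO date of the inspection
    component: Component      # link to the damaged component, not a free-text note

@dataclass
class Inspection:
    bridge_id: str
    findings: List[Damage] = field(default_factory=list)

girder = Component("girder-01", "girder", "C35/45 concrete")
report = Inspection("B-042", [Damage("crack", 0.3, "2020-05-14", girder)])
print(report)
```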
This contribution links the discussion of the post-political city with the growing scholarly and activist engagement with the Anthropocene, a concept that describes the ecological and socio-political implications of human action on the Earth's surface. Using three selected case studies, we explore how the specifically anthropogenic, that is, human-made, crisis of urban air pollution is problematized in artistic positions. In the context of the potential advance of post-politics, we discuss how the ambivalent discourse of the Anthropocene favors depoliticization on the one hand and, on the other, opens up new possibilities for the repoliticization of global environmental challenges.
Conventional superplasticizers based on polycarboxylate ether (PCE) show an intolerance to clay minerals due to intercalation of their polyethylene glycol (PEG) side chains into the interlayers of the clay mineral. An intolerance to very basic media is also known. This makes PCE an unsuitable choice as a superplasticizer for geopolymers. Bio-based superplasticizers derived from starch showed comparable effects to PCE in a cementitious system. The aim of the present study was to determine if starch superplasticizers (SSPs) could be a suitable additive for geopolymers by carrying out basic investigations with respect to slump, hardening, compressive and flexural strength, shrinkage, and porosity. Four SSPs were synthesized, differing in charge polarity and specific charge density. Two conventional PCE superplasticizers, differing in terms of molecular structure, were also included in this study. The results revealed that SSPs improved the slump of a metakaolin-based geopolymer (MK-geopolymer) mortar while the PCE investigated showed no improvement. The impact of superplasticizers on early hardening (up to 72 h) was negligible. Less linear shrinkage over the course of 56 days was seen for all samples in comparison with the reference. Compressive strengths of SSP specimens tested after 7 and 28 days of curing were comparable to the reference, while PCE led to a decline. The SSPs had a small impact on porosity with a shift to the formation of more gel pores while PCE caused an increase in porosity. Throughout this research, SSPs were identified as promising superplasticizers for MK-geopolymer mortar and concrete.
The current housing crisis has a socio-ecological core problem. The socially unjust and ecologically problematic distribution of living space is mostly invisible and is insufficiently problematized as a question of spatial justice in both academic and activist contexts. Housing and land in a city are not endlessly available goods: when some people live on a lot of space, less space remains for others. And the people who are least responsible for the shortage of housing suffer the most from it. This article first develops the concept of living-space justice, referring to the unequal distribution of living space and its societal implications under current housing allocation mechanisms. The consumption of (living) space is then problematized from an ecological perspective. The article discusses ostensible as well as transformation-oriented solutions and courses of action. Finally, it calls for a stronger debate on living-space justice in critical urban research and in activist contexts, the realization of which has a social as well as an ecological dimension.
Matthias Bernt and Andrej Holm rightly point out that research on East German cities is needed as a conceptually independent field, one that places the specific spatialization of the profound societal transformation process after 1990 at its center. They regard the field of housing in particular as productive for gaining insight into the structure and effects of this process. However, they remain vague about how such housing research specifically directed at East Germany should be conceived, and about how the particularities and parallels of East German developments are to be related to the transformations of housing and urban development policy in West Germany, as well as internationally.
Prediction of the groundwater nitrate concentration is of utmost importance for pollution control and water resource management. This research aims to model the spatial groundwater nitrate concentration in the Marvdasht watershed, Iran, based on several artificial intelligence methods: the support vector machine (SVM), Cubist, random forest (RF), and Bayesian artificial neural network (Bayesian-ANN) machine learning models. For this purpose, 11 independent variables affecting groundwater nitrate changes, including elevation, slope, plan curvature, profile curvature, rainfall, piezometric depth, distance from the river, distance from residential areas, sodium (Na), potassium (K), and the topographic wetness index (TWI), were prepared for the study area. Nitrate levels were also measured in 67 wells and used as the dependent variable for modeling. Data were divided into two categories, training (70%) and testing (30%), for modeling. The evaluation criteria of the coefficient of determination (R2), mean absolute error (MAE), root mean square error (RMSE), and Nash-Sutcliffe efficiency (NSE) were used to evaluate the performance of the models. The results of modeling the susceptibility of groundwater nitrate concentration showed that the RF (R2 = 0.89, RMSE = 4.24, NSE = 0.87) model performed better than the Cubist (R2 = 0.87, RMSE = 5.18, NSE = 0.81), SVM (R2 = 0.74, RMSE = 6.07, NSE = 0.74), and Bayesian-ANN (R2 = 0.79, RMSE = 5.91, NSE = 0.75) models. The results of groundwater nitrate concentration zoning showed that the northern parts of the study area have the highest nitrate levels, which are higher in these agricultural areas than elsewhere. The most important cause of nitrate pollution in these areas is agricultural activity and the use of groundwater for irrigation in wells close to agricultural areas; indiscriminately applied chemical fertilizers are washed out by irrigation or rainwater, penetrate the groundwater, and pollute the aquifer.
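A hedged sketch of the reported workflow, assuming scikit-learn: a 70/30 split, a random forest regressor, and the four evaluation criteria used in the study (R2, MAE, RMSE, NSE). Predictors and nitrate values are synthetic placeholders for the 67-well data.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score
from sklearn.model_selection import train_test_split

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 - sum((obs - sim)^2) / sum((obs - mean(obs))^2)."""
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

rng = np.random.default_rng(4)
X = rng.normal(size=(67, 11))                                      # 11 predictors at 67 wells
y = 20 + 3 * X[:, 0] - 2 * X[:, 4] + rng.normal(scale=2, size=67)  # synthetic nitrate levels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
rf = RandomForestRegressor(n_estimators=300, random_state=0).fit(X_tr, y_tr)
pred = rf.predict(X_te)
print("R2=%.2f  RMSE=%.2f  MAE=%.2f  NSE=%.2f" % (
    r2_score(y_te, pred),
    mean_squared_error(y_te, pred) ** 0.5,
    mean_absolute_error(y_te, pred),
    nse(y_te, pred),
))
```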
Image Analysis Using Human Body Geometry and Size Proportion Science for Action Classification
(2020)
Gestures are one of the basic modes of human communication and are usually used to represent different actions. Automatic recognition of these actions forms the basis for solving more complex problems like human behavior analysis, video surveillance, event detection, and sign language recognition. Action recognition from images is a challenging task, as key information like temporal data, object trajectory, and optical flow is not available in still images, while measuring the size of different regions of the human body, i.e., step size, arm span, and the length of the arm, forearm, and hand, provides valuable clues for identifying human actions. In this article, a framework for the classification of human actions is presented, in which humans are detected and localized through faster region-convolutional neural networks followed by morphological image processing techniques. Furthermore, geometric features are extracted from the human blob and incorporated into classification rules for six human actions: standing, walking, single-hand side wave, single-hand top wave, both-hands side wave, and both-hands top wave. The performance of the proposed technique has been evaluated using precision, recall, omission error, and commission error. The proposed technique has been comparatively analyzed in terms of overall accuracy with existing approaches, showing that it performs well in contrast to its counterparts.
This paper proposes a practice-theoretical journalism research approach for an alternate and innovative perspective of digital journalism’s current empirical challenges. The practice-theoretical approach is introduced by demonstrating its explanatory power in relation to demarcation problems, technological changes, economic challenges and challenges to journalism’s legitimacy. Its respective advantages in dealing with these problems are explained and then compared to established journalism theories. The particular relevance of the theoretical perspective is due to (1) its central decision to observe journalistic practices, (2) the transgression of conventional journalistic boundaries, (3) the denaturalization of journalistic norms and laws, (4) the explicit consideration of a material, socio-technical dimension of journalism, (5) a focus on the conflicting relationship between journalistic practices and media management practices, and (6) prioritizing order generation over stability.
This study investigates the performance of two systems: personalized ventilation (PV) and ductless personalized ventilation (DPV). Even though the literature indicates a compelling performance of PV, it is not often used in practice due to its impracticality. Therefore, the present study assesses the possibility of replacing the inflexible PV with DPV in office rooms equipped with displacement ventilation (DV) in the summer season. Numerical simulations were utilized to evaluate the inhaled concentration of pollutants when PV and DPV are used. The systems were compared in a simulated office with two occupants: a susceptible occupant and a source occupant. Three types of pollution were simulated: exhaled infectious air, dermally emitted contamination, and room contamination from a passive source. Results indicated that PV improved the inhaled air quality regardless of the location of the pollution source; a higher PV supply flow rate positively impacted the inhaled air quality. Contrarily, the performance of DPV was highly sensitive to the source location and the personalized flow rate. A higher DPV flow rate tends to decrease the inhaled air quality due to increased mixing of pollutants in the room. Moreover, both systems achieved better results when the personalized system of the source occupant was switched off.
In this research, an attempt was made to reduce the dimension of wavelet-ANFIS/ANN (artificial neural network / adaptive neuro-fuzzy inference system) models toward reliable forecasts as well as to decrease computational cost. In this regard, principal component analysis was performed on the input time series, decomposed by a discrete wavelet transform, to feed the ANN/ANFIS models. The models were applied to forecasting dissolved oxygen (DO) in rivers, an important variable affecting aquatic life and water quality. The current values of DO, water surface temperature, salinity, and turbidity were considered as input variables to forecast DO three time steps ahead. The results of the study revealed that PCA can be employed as a powerful tool for the dimension reduction of input variables and also to detect the inter-correlation of input variables. Results of the PCA-wavelet-ANN models are compared with those obtained from wavelet-ANN models, where the former have the advantage of less computational time than the latter. Dealing with ANFIS models, PCA is more beneficial in preventing wavelet-ANFIS models from creating too many rules, which deteriorates the efficiency of ANFIS models. Moreover, manipulating the wavelet-ANFIS models using PCA leads to a significant decrease in computational time. Finally, it was found that the PCA-wavelet-ANN/ANFIS models can provide reliable forecasts of dissolved oxygen as an important water quality indicator in rivers.
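A rough sketch of the PCA-wavelet-ANN chain described, assuming the PyWavelets and scikit-learn packages: decompose each input window with a discrete wavelet transform, stack the sub-band coefficients as features, compress them with PCA, and feed an MLP that forecasts DO a few steps ahead. The series, wavelet, and horizon are placeholders.

```python
import numpy as np
import pywt
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(5)
t = np.arange(512)
do = np.sin(t / 20) + 0.1 * rng.normal(size=512)           # synthetic dissolved-oxygen series

window, horizon = 32, 3
X, y = [], []
for i in range(len(do) - window - horizon):
    coeffs = pywt.wavedec(do[i:i + window], "db4", level=2)  # approximation + detail bands
    X.append(np.concatenate(coeffs))
    y.append(do[i + window + horizon - 1])
X, y = np.array(X), np.array(y)

pca = PCA(n_components=8).fit(X[:-50])                     # dimension reduction, fit on training part
X_red = pca.transform(X)
model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=3000, random_state=0)
model.fit(X_red[:-50], y[:-50])
print("test R2:", model.score(X_red[-50:], y[-50:]))
```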
Why Do Digital Native News Media Fail? An Investigation of Failure in the Early Start-Up Phase
(2020)
Digital native news media have great potential for improving journalism. Theoretically, they can be the sites where new products, novel revenue streams and alternative ways of organizing digital journalism are discovered, tested, and advanced. In practice, however, the situation appears to be more complicated. Besides the normal pressures facing new businesses, entrepreneurs in digital news are faced with specific challenges. Against the background of general and journalism specific entrepreneurship literature, and in light of a practice–theoretical approach, this qualitative case study research on 15 German digital native news media outlets empirically investigates what barriers curb their innovative capacity in the early start-up phase. In the new media organizations under study here, there are—among other problems—a high degree of homogeneity within founding teams, tensions between journalistic and economic practices, insufficient user orientation, as well as a tendency for organizations to be underfinanced. The patterns of failure investigated in this study can raise awareness, help news start-ups avoid common mistakes before actually entering the market, and help industry experts and investors to realistically estimate the potential of new ventures within the digital news industry.
Earthquakes are among the most devastating natural disasters, causing severe economic, environmental, and social destruction. Earthquake safety assessment and building hazard monitoring can contribute greatly to urban sustainability through the identification of, and insight into, optimum materials and structures. While the vulnerability of structures mainly depends on structural resistance, the safety assessment of buildings can be highly challenging. In this paper, we consider the Rapid Visual Screening (RVS) method, which is a qualitative procedure for estimating structural scores for buildings, suitable for medium- to high-seismicity cases. This paper presents an overview of the common RVS methods, i.e., FEMA P-154, IITK-GGSDMA, and EMPI. To examine their accuracy and for validation, a practical comparison is performed between their assessments and the observed damage of reinforced concrete buildings from a street survey in the Bingöl region, Turkey, after the 1 May 2003 earthquake. The results demonstrate that the application of RVS methods for preliminary damage estimation is a vital tool. Furthermore, the comparative analysis showed that FEMA P-154 produces an assessment that overestimates damage states and is not economically viable, while EMPI and IITK-GGSDMA provide more accurate and practical estimations, respectively.
Rapid Visual Screening (RVS) is a procedure that estimates structural scores for buildings and prioritizes their retrofit and upgrade requirements. Despite the speed and simplicity of RVS, many of the collected parameters are non-commensurable and include subjectivity due to visual observation. This can cause uncertainties in the evaluation, which motivates the use of a fuzzy-based method. This study aims to propose a novel RVS methodology based on the interval type-2 fuzzy logic system (IT2FLS) to set the priority of vulnerable buildings to undergo detailed assessment while covering uncertainties and minimizing their effects during evaluation. The proposed method estimates the vulnerability of a building, in terms of a Damage Index, considering the number of stories, age of the building, plan irregularity, vertical irregularity, building quality, and peak ground velocity as inputs with a single output variable. The applicability of the proposed method has been investigated using a post-earthquake damage database of reinforced concrete buildings from the Bingöl and Düzce earthquakes in Turkey.
Pressure fluctuations beneath hydraulic jumps can endanger the stability of stilling basins. This paper deals with the mathematical modeling of the results of laboratory-scale experiments to estimate extreme pressures. Experiments were carried out on a smooth stilling basin beneath free hydraulic jumps downstream of an Ogee spillway. From the probability distribution of measured instantaneous pressures, pressures with different probabilities could be determined. It was verified that the maximum pressure fluctuations, and the negative pressures, are located near the spillway toe, while the minimum pressure fluctuations are located downstream of the hydraulic jumps. It was possible to assess the cumulative curves of pressure data related to the characteristic points along the basin for different Froude numbers. To benchmark the results, the dimensionless forms of the statistical parameters, including the mean pressure (P*m), the standard deviation of the pressure fluctuations (σ*X), pressures with different non-exceedance probabilities (P*k%), and the statistical coefficient of the probability distribution (Nk%), were assessed. It was found that an existing method can be used to interpret the present data, and the pressure distribution under similar conditions, by using new second-order fractional relationships for σ*X and Nk%. The values of the Nk% coefficient indicated a single mean value for each probability.
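The statistical treatment lends itself to a small numerical illustration: given an instantaneous pressure record, pressures with a given non-exceedance probability follow from the empirical percentiles, and a standardized coefficient can be formed from the sample mean and standard deviation. The record below is synthetic, and the coefficient is written in the generic standardized form, not the paper's fitted relationships.

```python
import numpy as np

rng = np.random.default_rng(6)
p = 12.0 + 3.0 * rng.normal(size=20000)       # instantaneous pressure record [kPa]

for k in (1, 5, 50, 95, 99):                  # non-exceedance probabilities [%]
    p_k = np.percentile(p, k)
    n_k = (p_k - p.mean()) / p.std()          # standardized coefficient (P_k - mean) / sigma
    print(f"P{k:>2}% = {p_k:6.2f} kPa,  N = {n_k:5.2f}")
```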
Cooling Performance of a Novel Circulatory Flow Concentric Multi-Channel Heat Sink with Nanofluids
(2020)
Heat rejection from electronic devices such as processors necessitates a high heat removal rate. The present study focuses on a liquid-cooled novel heat sink geometry made from four channels (width 4 mm and depth 3.5 mm) configured in a concentric shape with alternate flow passages (slots of 3 mm gap). In this study, the cooling performance of the heat sink was tested under simulated controlled conditions. The lower bottom surface of the heat sink was heated under a constant heat flux condition based on dissipated powers of 50 W and 70 W. The computations were carried out for different volume fractions of nanoparticles, namely 0.5% to 5%, with water as the base fluid, at flow rates of 30 to 180 mL/min. The results showed a higher rate of heat rejection from the nanofluid-cooled heat sink compared with water. The enhancement in performance was analyzed with the help of the temperature difference between the nanofluid outlet temperature and the water outlet temperature under similar operating conditions. The enhancement was ~2% for 0.5% volume fraction nanofluids and ~17% for a 5% volume fraction.
Along with environmental pollution, urban planning has been connected to public health. Research indicates that the quality of built environments plays an important role in reducing mental disorders and improving overall health. The structure and shape of the city are considered among the factors influencing happiness and health in urban communities and the type of daily activities of citizens. The aim of this study was to promote physical activity in the main structure of the city via urban design, in such a way that the main form and morphology of the city encourage citizens to move around and be physically active within the city. Based on the literature review, functional, physical, cultural-social, and perceptual-visual features are regarded as the most important and effective criteria for increasing physical activity in urban spaces. The environmental quality of urban spaces and their role in the physical activities of citizens were assessed using a questionnaire and the analytical network process (ANP) with structural equation modeling. Furthermore, the space syntax method was utilized to evaluate the role of the spatial integration of urban spaces in improving physical activity. Based on the results, consideration of functional diversity, spatial flexibility and integration, security, and the aesthetic and visual quality of urban spaces plays an important role in improving the physical health of citizens. Furthermore, more physical activity, including motivation for walking and a sense of public health and happiness, was observed in streets with higher linkage and space syntax indexes with respect to their surrounding texture.
While Public-Private Partnership (PPP) is widely adopted across various sectors, it raises a question on its meagre utilisation in the housing sector. This paper, therefore, gauges the perspective of the stakeholders in the building industry towards the application of PPP in various building sectors together with housing. It assesses the performance reliability of PPP for housing by learning possible take-aways from other sectors. The role of key stakeholders in the industry becomes highly responsible for an informed understanding and decision-making. To this end, a two-tier investigation was conducted including surveys and expert interviews, with several stakeholders in the PPP industry in Europe, involving the public sector, private sector, consultants, as well as other community/user representatives.
The survey results demonstrated the success rate of PPPs, the major factors important for PPPs, such as profitability and end-user acceptability, the prevalent practices and trends in the PPP world, and a majority of support expressed in favour of the suitability of PPP for housing. The interviews added more detailed dimensions to the understanding of the PPP industry and its functioning, enabling the formation of a comprehensive outlook. The results present the perspectives, approaches, and experiences of stakeholders regarding PPP practices, current trends, and scenarios, and their take on PPP in housing. This should aid in understanding the challenges prevalent in the PPP approach for implementation in housing and enable policymakers and industry stakeholders to make provisions for higher uptake to accelerate housing provision.
Despite digitization and platformization, mass media and established media companies still play a crucial role in the provision of journalistic content in democratic societies. Competition is one key driver of (media) company behavior and is considered to have an impact on the media’s performance. However, theory and empirical research are ambiguous about the relationship. The objective of this article is to empirically analyze the effect of competition on media performance in a cross-national context. We assessed media performance of media companies as the importance of journalistic goals within their stated corporate goal system. We conducted a content analysis of letters to the shareholders in annual reports of more than 50 media companies from 2000 to 2014 to operationalize journalistic goal importance. When employing a fixed effects regression analysis, as well as a fuzzy set qualitative comparative analysis, results suggest that competition has a positive effect on the importance of journalistic goals, while the existence of a strong public service media sector appears to have the effect of “crowding out” commercial media companies.
In this study, a new approach based on intelligent systems and machine learning algorithms is introduced for solving singular multi-pantograph differential equations (SMDEs). For the first time, a type-2 fuzzy logic based approach is formulated to find an approximated solution. The rules of the suggested type-2 fuzzy logic system (T2-FLS) are optimized by the square root cubature Kalman filter (SCKF) such that the proposed fitness function is minimized. Furthermore, the stability and boundedness of the estimation error are proved by a novel approach based on the Lyapunov theorem. The accuracy and robustness of the suggested algorithm are verified by several statistical examinations. It is shown that the suggested method yields an accurate solution with rapid convergence and a lower computational cost.
Tall buildings have become an integral part of cities despite all their pros and cons. Some current tall buildings have several problems because of their unsuitable location; the problems include increasing density, imposing traffic on urban thoroughfares, and blocking view corridors. Some of these buildings have destroyed desirable views of the city. In this research, different criteria were chosen, such as environment, access, social-economic factors, land-use, and physical context. These criteria and sub-criteria were prioritized and weighted by the analytic network process (ANP) based on experts' opinions, using Super Decisions V2.8 software. In parallel, layers corresponding to the sub-criteria were built in ArcGIS 10.3, and a locating plan was then created via a weighted overlay (map algebra). In the next step, seven hypothetical tall buildings (20 stories), in the best part of the locating plan, were considered to evaluate how much of these hypothetical buildings would be visible (fuzzy visibility) from the streets and open spaces throughout the city. These processes were modeled in MATLAB, and the final fuzzy visibility plan was created with ArcGIS. Fuzzy visibility results can help city managers and planners to choose which location is suitable for a tall building and how much visibility may be appropriate. The proposed model can locate tall buildings based on technical and visual criteria in the future development of the city, and it can be widely used in any city as long as the criteria and weights are localized.
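The weighted-overlay step (normally performed in ArcGIS) reduces to simple raster algebra, which the toy sketch below illustrates with NumPy: normalize each criterion layer to [0, 1], scale it by its ANP weight, and sum the layers into one suitability surface. Layer names and weights are invented.

```python
import numpy as np

rng = np.random.default_rng(7)
layers = {                                        # criterion rasters on a common grid
    "access": rng.random((100, 100)),
    "land_use": rng.random((100, 100)),
    "environment": rng.random((100, 100)),
}
weights = {"access": 0.5, "land_use": 0.3, "environment": 0.2}  # hypothetical ANP weights

def normalize(a):
    return (a - a.min()) / (a.max() - a.min())

suitability = sum(w * normalize(layers[name]) for name, w in weights.items())
print("most suitable cell:", np.unravel_index(suitability.argmax(), suitability.shape))
```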
Piping erosion is one form of water erosion that leads to significant changes in the landscape and environmental degradation. In the present study, we evaluated piping erosion modeling in the Zarandieh watershed of Markazi province in Iran based on random forest (RF), support vector machine (SVM), and Bayesian generalized linear model (Bayesian GLM) machine learning algorithms. To this end, given the importance of various geo-environmental and soil properties in the evolution and creation of piping erosion, 18 variables were considered for modeling the piping erosion susceptibility in the Zarandieh watershed. A total of 152 points of piping erosion were recognized in the study area and divided into training (70%) and validation (30%) sets for modeling. The area under the curve (AUC) was used to assess the efficiency of the RF, SVM, and Bayesian GLM models. The piping erosion susceptibility results indicated that all three models had high efficiency in the testing step, with AUC values of 0.9 for RF, 0.88 for SVM, and 0.87 for Bayesian GLM. Altitude, pH, and bulk density were the variables with the greatest influence on piping erosion susceptibility in the Zarandieh watershed. This result indicates that geo-environmental and soil chemical variables are accountable for the expansion of piping erosion in the Zarandieh watershed.
For this paper, the problem of energy/voltage management in photovoltaic (PV)/battery systems was studied, and a new fractional-order control system based on type-3 (T3) fuzzy logic systems (FLSs) was developed. New fractional-order learning rules are derived for the tuning of T3-FLSs such that stability is ensured. In addition, using fractional-order calculus, robustness was studied versus dynamic uncertainties, perturbations of irradiation and temperature, and abrupt faults in output loads, and new compensators were subsequently proposed. In several examinations under difficult operating conditions, such as random temperature, variable irradiation, and abrupt changes in output load, the capability of the proposed controller was verified. In addition, the superiority of the suggested method was demonstrated in comparison with other methods, such as the proportional-integral-derivative (PID) controller, the sliding mode controller (SMC), passivity-based control systems (PBC), and the linear quadratic regulator (LQR).
This paper reports the formation and structure of fast-setting geopolymers activated using three sodium silicate solutions with different moduli (1.6, 2.0 and 2.4) and a berlinite-type aluminum orthophosphate. By varying the concentration of the aluminum orthophosphate, different Si/Al ratios were established (6, 3 and 2). The reaction kinetics of the binders were determined by isothermal calorimetric measurements at 20 °C. X-ray diffraction analysis as well as nuclear magnetic resonance (NMR) measurements were performed on the binders to determine differences in structure upon varying the alkalinity of the sodium silicate solutions and the Si/Al ratio. The calorimetric results indicated that the higher the alkalinity of the sodium silicate solution, the higher the solubility and degree of conversion of the aluminum orthophosphate. The results of X-ray diffraction and Rietveld analysis, as well as the NMR measurements, confirmed the assumption from the calorimetric experiments that the aluminum orthophosphate first dissolved and then polycondensation to an amorphous aluminosilicate network occurred. The different amounts of amorphous phases formed as a function of the alkalinity of the sodium silicate solution indicate that tetrahydroxoaluminate species were formed during the dissolution of the aluminum orthophosphate, which reduce the pH value. This led to no further dissolution of the aluminum orthophosphate, which remained unreacted.
The Marmara Region (NW Turkey) has experienced significant earthquakes (M > 7.0) to date. A destructive earthquake is also expected in the region. To determine the effect of the specific design spectrum, eleven provinces located in the region were chosen according to the Turkey Earthquake Building Code updated in 2019. Additionally, the differences between the previous and updated regulations of the country were investigated. Peak Ground Acceleration (PGA) and Peak Ground Velocity (PGV) were obtained for each province by using earthquake ground motion levels with 2%, 10%, 50%, and 68% probability of exceedance in 50-year periods. The PGA values in the region range from 0.16 to 0.7 g for earthquakes with a return period of 475 years. For each province, a sample of a reinforced-concrete building having two different numbers of stories with the same ground and structural characteristics was chosen. Static adaptive pushover analyses were performed for the sample reinforced-concrete building using each province’s design spectrum. The variations in the earthquake and structural parameters were investigated according to different geographical locations. It was determined that the site-specific design spectrum significantly influences target displacements for performance-based assessments of buildings due to seismicity characteristics of the studied geographic location.
Welfare‐state transformation and entrepreneurial urban politics in Western welfare states since the late 1970s have yielded converging trends in the transformation of the dominant Fordist paradigm of social housing in terms of its societal function and institutional and spatial form. In this article I draw from a comparative case study on two cities in Germany to show that the resulting new paradigm is simultaneously shaped by the idiosyncrasies of the country's national housing regime and local housing policies. While German governments have successively limited the societal function of social housing as a legitimate instrument only for addressing exceptional housing crises, local policies on providing and organizing social housing within this framework display significant variation. However, planning and design principles dominating the spatial forms of social housing have been congruent. They may be interpreted as both an expression of the marginalization of social housing within the restructured welfare housing regime and a tool of its implementation according to the logics of entrepreneurial urban politics.
The economic losses from earthquakes tend to hit the national economy considerably; therefore, models that are capable of estimating the vulnerability and losses of future earthquakes are highly consequential for emergency planners with the purpose of risk mitigation. This demands a mass prioritization filtering of structures to identify vulnerable buildings for retrofitting purposes. The application of advanced structural analysis to each building to study its earthquake response is impractical due to complex calculations, long computational times, and exorbitant cost. This highlights the need for a fast and reliable method, commonly known as Rapid Visual Screening (RVS). The method serves as a preliminary screening platform, using an optimum number of seismic parameters of the structure and predefined output damage states. In this study, the efficacy of the application of Machine Learning (ML) in damage prediction through a Support Vector Machine (SVM) model as the damage classification technique has been investigated. The developed model was trained and examined based on damage data from the 1999 Düzce Earthquake in Turkey, where the building data consist of 22 performance modifiers that have been implemented with supervised machine learning.
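A hedged sketch of the supervised SVM setup described, assuming scikit-learn: 22 performance modifiers per building as inputs and observed damage states as labels. The data are synthetic stand-ins for the Düzce survey database.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(8)
X = rng.normal(size=(500, 22))                     # 22 performance modifiers per building
y = np.digitize(X @ rng.normal(size=22), [-2, 2])  # three synthetic damage states

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
print("5-fold accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```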
A new large‐field, high‐sensitivity, single‐mirror coincident schlieren optical instrument has been installed at the Bauhaus‐Universität Weimar for the purpose of indoor air research. Its performance is assessed by the non‐intrusive measurement of the thermal plume of a heated manikin. The schlieren system produces excellent qualitative images of the manikin's thermal plume and also quantitative data, especially schlieren velocimetry of the plume's velocity field that is derived from the digital cross‐correlation analysis of a large time sequence of schlieren images. The quantitative results are compared with thermistor and hot‐wire anemometer data obtained at discrete points in the plume. Good agreement is obtained, once the differences between path‐averaged schlieren data and planar anemometry data are reconciled.
Wind effects can be critical for the design of lifelines such as long-span bridges. The existence of a significant number of aerodynamic force models, used to assess the performance of bridges, poses an important question regarding their comparison and validation. This study utilizes a unified set of metrics for a quantitative comparison of time-histories in bridge aerodynamics with a host of characteristics. Accordingly, nine comparison metrics are included to quantify the discrepancies in local and global signal features such as phase, time-varying frequency and magnitude content, probability density, nonstationarity and nonlinearity. Among these, seven metrics available in the literature are introduced after recasting them for time-histories associated with bridge aerodynamics. Two additional metrics are established to overcome the shortcomings of the existing metrics. The performance of the comparison metrics is first assessed using generic signals with prescribed signal features. Subsequently, the metrics are applied to a practical example from bridge aerodynamics to quantify the discrepancies in the aerodynamic forces and response based on numerical and semi-analytical aerodynamic models. In this context, it is demonstrated how a discussion based on the set of comparison metrics presented here can aid a model evaluation by offering deeper insight. The outcome of the study is intended to provide a framework for quantitative comparison and validation of aerodynamic models based on the underlying physics of fluid-structure interaction. Immediate further applications are expected for the comparison of time-histories that are simulated by data-driven approaches.
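Two generic discrepancy measures in the spirit of the metric set discussed, as a minimal illustration (these are not the paper's nine metrics): a relative magnitude error and a phase lag estimated from the peak of the cross-correlation of two time-histories.

```python
import numpy as np

dt = 0.01
t = np.arange(0.0, 20.0, dt)
a = np.sin(2 * np.pi * 0.5 * t)                       # reference time-history
b = 0.9 * np.sin(2 * np.pi * 0.5 * (t - 0.1))         # compared time-history

mag_err = np.linalg.norm(b - a) / np.linalg.norm(a)   # relative magnitude discrepancy
xcorr = np.correlate(b - b.mean(), a - a.mean(), mode="full")
lag = (np.argmax(xcorr) - (len(t) - 1)) * dt          # phase discrepancy in seconds
print(f"magnitude error = {mag_err:.3f}, phase lag = {lag:.2f} s")
```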
Evaporation is a very important process and one of the most critical factors in agricultural, hydrological, and meteorological studies. Due to the interactions of multiple climatic factors, evaporation is a complex and nonlinear phenomenon to model. Thus, machine learning methods have gained popularity in this realm. In the present study, four machine learning methods, Gaussian Process Regression (GPR), K-Nearest Neighbors (KNN), Random Forest (RF), and Support Vector Regression (SVR), were used to predict pan evaporation (PE). Meteorological data, including PE, temperature (T), relative humidity (RH), wind speed (W), and sunny hours (S), were collected from 2011 through 2017. The accuracy of the studied methods was determined using the statistical indices of Root Mean Squared Error (RMSE), correlation coefficient (R), and Mean Absolute Error (MAE). Furthermore, Taylor charts were utilized for evaluating the accuracy of the mentioned models. The results of this study showed that at the Gonbad-e Kavus, Gorgan, and Bandar Torkman stations, GPR with RMSE of 1.521 mm/day, 1.244 mm/day, and 1.254 mm/day, KNN with RMSE of 1.991 mm/day, 1.775 mm/day, and 1.577 mm/day, RF with RMSE of 1.614 mm/day, 1.337 mm/day, and 1.316 mm/day, and SVR with RMSE of 1.55 mm/day, 1.262 mm/day, and 1.275 mm/day had appropriate performance in estimating PE values. It was found that GPR for Gonbad-e Kavus Station with input parameters of T, W, and S, and GPR for the Gorgan and Bandar Torkman stations with input parameters of T, RH, W, and S, had the most accurate predictions and were proposed for precise estimation of PE. The findings of the current study indicate that PE values may be accurately estimated with a few easily measured meteorological parameters.
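For one of the four compared models, a minimal Gaussian-process-regression sketch, assuming scikit-learn, with the RMSE/R/MAE criteria used in the study; the meteorological inputs (T, RH, W, S) are synthetic.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(9)
X = rng.normal(size=(400, 4))                         # T, RH, W, S (standardized)
pe = 4 + 1.5 * X[:, 0] - 0.8 * X[:, 1] + rng.normal(scale=0.5, size=400)

gpr = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
gpr.fit(X[:300], pe[:300])
pred = gpr.predict(X[300:])
err = pe[300:] - pred
print("RMSE = %.3f mm/day,  R = %.3f,  MAE = %.3f" % (
    np.sqrt(np.mean(err ** 2)),
    np.corrcoef(pe[300:], pred)[0, 1],
    np.mean(np.abs(err)),
))
```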
Due to the importance of identifying crop cultivars, the advancement of accurate assessment methods is considered essential. The existing methods for identifying rice cultivars are mainly time-consuming, costly, and destructive; therefore, the development of novel methods is highly beneficial. The aim of the present research is to classify common rice cultivars in Iran based on color, morphologic, and texture properties using artificial intelligence (AI) methods. In doing so, digital images of 13 rice cultivars in Iran, in the three forms of paddy, brown, and white rice, are analyzed through pre-processing and segmentation using MATLAB. Ninety-two features, including 60 color, 14 morphologic, and 18 texture properties, were extracted for each rice cultivar. In the next step, the normal distribution of the data was evaluated, and the possibility of observing a significant difference between all features of the cultivars was studied using analysis of variance. In addition, the least significant difference (LSD) test was performed to obtain a more accurate comparison between cultivars. To reduce the data dimensions and focus on the most effective components, principal component analysis (PCA) was employed. Accordingly, the accuracy of rice cultivar separation was calculated for paddy, brown rice, and white rice using discriminant analysis (DA), yielding 89.2%, 87.7%, and 83.1%, respectively. To identify and classify the desired cultivars, a multilayer perceptron neural network was implemented based on the most effective components. The results showed 100% accuracy of the network in identifying and classifying all of the mentioned rice cultivars. Hence, it is concluded that the integration of image processing and pattern recognition methods, such as statistical classification and artificial neural networks, can be used for the identification and classification of rice cultivars.
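A compact sketch of the dimension-reduction and discriminant-analysis chain, assuming scikit-learn: PCA over the 92 image-derived features, followed by linear discriminant analysis over the 13 cultivar labels. The features are simulated; the study additionally trains an MLP on the leading components.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(10)
X = rng.normal(size=(13 * 30, 92))            # 30 samples per cultivar, 92 features each
y = np.repeat(np.arange(13), 30)              # 13 rice cultivar labels
X += y[:, None] * 0.15                        # give the classes some separation

clf = make_pipeline(PCA(n_components=20), LinearDiscriminantAnalysis())
print("cross-validated accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```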
The effect of urban form on energy consumption has been the subject of various studies around the world. Having examined the effect of buildings on energy consumption, these studies indicate that the physical form of a city has a notable impact on the amount of energy consumed in its spaces. The present study identified the variables that affected energy consumption in residential buildings and analyzed their effects on energy consumption in four neighborhoods in Tehran: Apadana, Bimeh, Ekbatan-phase I, and Ekbatan-phase II. After extracting the variables, their effects are estimated with statistical methods, and the results are compared with the land surface temperature (LST) remote sensing data derived from Landsat 8 satellite images taken in the winter of 2019. The results showed that physical variables, such as the size of buildings, population density, vegetation cover, texture concentration, and surface color, have the greatest impacts on energy usage. For the Apadana neighborhood, the factors with the most potent effect on energy consumption were found to be the size of buildings and the population density. However, for other neighborhoods, in addition to these two factors, a third factor was also recognized to have a significant effect on energy consumption. This third factor for the Bimeh, Ekbatan-I, and Ekbatan-II neighborhoods was the type of buildings, texture concentration, and orientation of buildings, respectively.
Calculating the solubility of the hydrocarbon components of natural gas is an important issue in operational work in petroleum and chemical engineering. In this work, a novel solubility estimation tool is proposed for hydrocarbon gases, including methane, ethane, propane, and butane, in aqueous electrolyte solutions, based on the extreme learning machine (ELM) algorithm. Comparing the ELM outputs with a comprehensive experimental databank of 1175 solubility points yielded R-squared values of 0.985 and 0.987 for the training and testing phases, respectively. Furthermore, a visual comparison of the estimated and actual hydrocarbon solubilities confirmed the ability of the proposed solubility model. Additionally, a sensitivity analysis was performed on the input variables of the model to identify their impact on hydrocarbon solubility. Such a comprehensive and reliable study can help engineers and scientists determine important thermodynamic properties, which are key factors in optimizing and designing industrial units such as refineries and petrochemical plants.
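The core of an ELM is simple enough to show in full: hidden-layer weights are drawn at random and never trained, and the output weights are obtained in closed form via a pseudo-inverse. The sketch below is a generic minimal ELM regressor, not the paper's model; the three input columns are synthetic stand-ins for whatever thermodynamic inputs the authors used.

```python
# Minimal ELM regressor: random hidden layer, least-squares output weights.
import numpy as np

class ELMRegressor:
    def __init__(self, n_hidden=50, seed=0):
        self.n_hidden = n_hidden
        self.rng = np.random.default_rng(seed)

    def fit(self, X, y):
        self.W = self.rng.standard_normal((X.shape[1], self.n_hidden))
        self.b = self.rng.standard_normal(self.n_hidden)
        H = np.tanh(X @ self.W + self.b)        # random hidden-layer features
        self.beta = np.linalg.pinv(H) @ y       # closed-form output weights
        return self

    def predict(self, X):
        return np.tanh(X @ self.W + self.b) @ self.beta

rng = np.random.default_rng(2)
X = rng.random((1175, 3))                       # stand-in for the 1175 data points
y = np.sin(3 * X[:, 0]) + X[:, 1] - 0.5 * X[:, 2]
model = ELMRegressor().fit(X[:900], y[:900])
pred = model.predict(X[900:])
ss_res = np.sum((y[900:] - pred) ** 2)
ss_tot = np.sum((y[900:] - y[900:].mean()) ** 2)
print("test R^2:", 1 - ss_res / ss_tot)
```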
Carrier-bound titanium dioxide catalysts were used in a photocatalytic ozonation reactor for the degradation of micro-pollutants in real wastewater. A photocatalytic immersion rotary-body reactor with a disk diameter of 36 cm was used and irradiated with UV-A light-emitting diodes. The rotating disks were covered with catalysts based on stainless-steel grids coated with titanium dioxide. Ozone was dosed through the liquid phase via an external enrichment and supply system transverse to the flow direction. The influence of irradiation power and ozone dose on the degradation rate of photocatalytic ozonation was investigated, and the performance of the individual processes, photocatalysis and ozonation, was also studied. The degradation kinetics of the parent compounds were determined using liquid chromatography tandem mass spectrometry. First-order kinetics were observed for both photocatalysis and photocatalytic ozonation, and a maximum reaction rate of the reactor was determined that could be reached by either process. At an ozone dosage of 0.4 mg/mg DOC, this maximum reaction rate was achieved with 75% of the irradiation power required for photocatalysis alone, allowing an increase in the energetic efficiency of photocatalytic wastewater treatment. Photocatalytic ozonation is thus suitable for removing a wide spectrum of micro-pollutants from wastewater.
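For a first-order degradation C(t) = C0·exp(-k·t), the rate constant k follows from a linear regression of ln(C/C0) against time. The sketch below shows that fit on made-up illustrative concentrations, not the study's measurements.

```python
# Sketch: extracting a first-order rate constant from decay data.
import numpy as np

t = np.array([0.0, 5.0, 10.0, 20.0, 30.0])       # treatment time in min (hypothetical)
C = np.array([1.00, 0.72, 0.52, 0.27, 0.14])     # normalized concentration C/C0 (hypothetical)

k = -np.polyfit(t, np.log(C), 1)[0]              # k = negative slope of ln(C/C0) vs t
print(f"first-order rate constant k = {k:.4f} 1/min")
```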
A vast number of existing buildings were constructed before the development and enforcement of seismic design codes and therefore run the risk of being severely damaged by seismic excitations. This not only threatens human life but also affects the socio-economic stability of the affected area. It is therefore necessary to assess the present vulnerability of such buildings in order to make informed decisions on risk mitigation through seismic strengthening techniques such as retrofitting. However, it is not feasible, in terms of either cost or time, to inspect, repair, and strengthen every old building on an urban scale. As a result, reliable rapid screening methods, such as Rapid Visual Screening (RVS), have garnered increasing interest among researchers and decision-makers alike. In this study, the effectiveness of five different Machine Learning (ML) techniques for vulnerability prediction was investigated. Damage data from four earthquakes, in Ecuador, Haiti, Nepal, and South Korea, were used to train and test the developed models, with eight performance modifiers serving as input variables for the supervised ML models. The investigations in this paper illustrate that the vulnerability classes assessed by the ML techniques were very close to the actual damage levels observed in the buildings.
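A supervised comparison of this kind can be sketched as follows. The five classifiers chosen here are common examples and are not claimed to be the ones the paper evaluated; the eight-feature records and damage-grade labels are synthetic.

```python
# Hedged sketch: comparing five supervised classifiers on synthetic records
# with eight performance-modifier features and discrete damage-grade labels.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(3)
X = rng.random((800, 8))                                 # eight performance modifiers
y = (X.sum(axis=1) > 4).astype(int) + (X[:, 0] > 0.8)    # synthetic damage grades 0-2

classifiers = {
    "RF": RandomForestClassifier(random_state=0),
    "SVM": SVC(),
    "KNN": KNeighborsClassifier(),
    "Tree": DecisionTreeClassifier(random_state=0),
    "NB": GaussianNB(),
}
for name, clf in classifiers.items():
    print(name, cross_val_score(clf, X, y, cv=5).mean())
```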
The growing complexity of modern practical problems places high demands on mathematical modelling. Given that one physical phenomenon can be modelled in several ways, model comparison and model choice are becoming particularly important. The methods for model comparison and model choice typically used in practice today are computation-based, and thus time-consuming and computationally costly. It is therefore necessary to develop approaches that work with mathematical models abstractly, i.e., without computations. Such an abstract description of mathematical models can be achieved with the help of abstract mathematics, implying a formalisation of models and of the relations between them. In this paper, a category-theory-based approach to mathematical modelling is proposed: mathematical models are formalised in the language of categories, relations between models are formally defined, and several practically relevant properties are introduced at the level of categories. Finally, an illustrative example shows how the category-theory-based approach can be used in practice. All constructions presented in this paper are also discussed from a modelling point of view, making explicit the link to concrete modelling scenarios.
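To make the categorical viewpoint concrete for readers unfamiliar with it, here is a toy illustration of our own, not the paper's formalism: models are treated as objects, relations between models as morphisms, and composition must satisfy the usual identity and associativity laws.

```python
# Toy sketch: models as objects, model relations as composable morphisms.
from dataclasses import dataclass
from typing import Callable

@dataclass(frozen=True)
class Morphism:
    source: str                       # name of the source model
    target: str                       # name of the target model
    map: Callable[[float], float]     # how quantities are translated between models

def compose(g: Morphism, f: Morphism) -> Morphism:
    assert f.target == g.source, "morphisms must be composable"
    return Morphism(f.source, g.target, lambda x: g.map(f.map(x)))

def identity(model: str) -> Morphism:
    return Morphism(model, model, lambda x: x)

# Hypothetical relations: coarsening a fine model, then non-dimensionalising it.
f = Morphism("fine_model", "coarse_model", lambda x: round(x, 1))
g = Morphism("coarse_model", "dimensionless_model", lambda x: x / 10.0)
print(compose(g, f).map(3.14159))                      # -> 0.31
print(compose(identity("coarse_model"), f).map(3.14159))  # identity law: same as f
```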
The derivation of nonlocal strong forms for many physical problems remains cumbersome with traditional methods. In this paper, we apply the variational principle/weighted residual method based on the nonlocal operator method to derive the nonlocal forms of elasticity, thin-plate, gradient elasticity, electro-magneto-elasticity, and phase-field fracture formulations. The nonlocal governing equations are expressed as integral forms on the support and dual-support. The first example shows that nonlocal elasticity has the same form as dual-horizon non-ordinary state-based peridynamics. The derivation is simple and general, and it can efficiently convert many local physical models into their corresponding nonlocal forms. In addition, a criterion based on the instability of the nonlocal gradient is proposed for fracture modelling in linear elasticity. Several numerical examples are presented to validate the nonlocal elasticity and nonlocal thin-plate formulations.
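As orientation for the nonlocal gradient mentioned above, a common shape-tensor construction for a scalar field u reads as follows; this is a generic sketch, and the paper's exact operator definitions may differ.

```latex
% Generic nonlocal gradient over the support S_x with weight function w:
\[
  \tilde{\nabla} u(\mathbf{x})
  = \mathbf{K}^{-1} \int_{S_{\mathbf{x}}} w(\mathbf{r})\,
    \bigl(u(\mathbf{x}') - u(\mathbf{x})\bigr)\, \mathbf{r}\, \mathrm{d}V_{\mathbf{x}'},
  \qquad
  \mathbf{K} = \int_{S_{\mathbf{x}}} w(\mathbf{r})\,
    \mathbf{r} \otimes \mathbf{r}\, \mathrm{d}V_{\mathbf{x}'},
  \qquad
  \mathbf{r} = \mathbf{x}' - \mathbf{x}.
\]
```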