Tensile and compressive strains can greatly affect the thermal conductivity of graphene nanoribbons (GNRs). However, the effect of shear strain, which is also one of the main strain modes, has not yet been studied systematically. In this work, we employ reverse nonequilibrium molecular dynamics (RNEMD) to systematically study the thermal conductivity of GNRs (with a model size of 4 nm × 15 nm) under shear strain. Our studies show that the thermal conductivity of GNRs is not sensitive to shear strain, decreasing by only 12–16% before the pristine structure breaks. Furthermore, the phonon frequencies and the changes in the microstructure of the GNRs, such as bond angles and bond lengths, are analyzed to explain this trend. The results show that shear strain mainly influences the in-plane phonon density of states (PDOS), whose G band (higher-frequency peaks) shifts toward lower frequencies, thereby decreasing the thermal conductivity. The unique thermal properties of GNRs under shear strain suggest great potential for graphene nanodevices as well as for thermal management and thermoelectric applications.
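In RNEMD of the Müller-Plathe type, a heat flux is imposed by exchanging kinetic energy between a hot and a cold slab, and the thermal conductivity follows from Fourier's law once the steady-state temperature gradient is known. The following is a minimal sketch of that post-processing step only; all numerical values are illustrative placeholders, not data from the study.

```python
# Minimal sketch of the Fourier-law post-processing step in RNEMD:
# kappa = J / (dT/dx), with the imposed flux J = E / (2 * A * t).
# The factor 2 accounts for heat flowing in both directions of the
# periodic simulation box. All numbers below are illustrative.

def thermal_conductivity(energy_transferred, time, area, dT_dx):
    """Return kappa in W/(m K), assuming SI units throughout."""
    flux = energy_transferred / (2.0 * area * time)  # J = E / (2 A t)
    return flux / dT_dx                              # Fourier's law

# Illustrative values (SI units)
E = 1.0e-16              # J exchanged between hot and cold slabs
t = 1.0e-9               # s of accumulated simulation time
A = 4e-9 * 0.335e-9      # cross-section: 4 nm width x graphene thickness
grad = 1.0e9             # K/m temperature gradient along the ribbon

kappa = thermal_conductivity(E, t, A, grad)
```

The same expression applies unchanged under shear, since only the steady-state gradient and the exchanged energy enter the estimate.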
The essay "Zwei Pater noster-Vertonungen von Melchior Vulpius und die Missa V. vocum super Pater noster qui es in Coelis Melchioris Vulpii von Georg Vintz" discusses and compares Pater noster settings from the OPUSCULUM NOVUM SELECTISSIMARVM CANTIONVM SACRARVM (Erfurt, 1610) and from the PARS PRIMA CANTIONVM SACRARVM (Jena, 1602) by the Weimar cantor and composer Melchior Vulpius (1570–1615). It also examines how the Naumburg organist Georg Vintz (ca. 1580–ca. 1635) used and reworked one of these motets as the basis for the parody mass "MISSA V. VOCUM, Super Pater noster [...] Melchioris Vulpii".
The essay is based on a lecture given at a conference on Vulpius held in Meiningen in 2015. The conference proceedings, edited by Maren Goltz, are expected to appear in 2018. Since the volume does not offer enough space for extensive musical examples, the compositions discussed are made available in full as score and audio files via this permalink.
One of the frequently examined design recommendations in multimedia learning is the personalization principle. Based on empirical evidence, this principle states that using personalized messages in multimedia learning is more beneficial than using formal language (e.g., using 'you' instead of 'the'). Although there is evidence that these slight changes in language style affect learning, motivation, and perceived cognitive load, it remains unclear (1) whether the positive effects of personalized language transfer to all kinds of learning content (e.g., specific, potentially aversive health issues) and (2) which processes (e.g., attention allocation) underlie the personalization effect. German university students (N = 37) learned the symptoms and causes of cerebral haemorrhages with either a formal or a personalized version of the learning material. The analysis revealed results comparable to the few existing previous studies, indicating an inverted personalization effect for potentially aversive learning material. This effect showed specifically in a decreased average fixation duration and a decreased number of fixations exclusively on the images in the personalized compared with the formal version. These results can be seen as indicators of an inverted effect of personalization at the level of visual attention.
The idea of a new municipalism is being widely discussed in left-wing social movements in Europe and beyond. Municipalist movements strive to take over or influence municipal governments in order to (re)orient local institutions toward the common good, to create a new relationship between municipal governments and social movements, and thereby to democratize the way politics is made from below and to change institutional frameworks. They emerge in reaction to the current economic and political crisis, just like the new right-wing and right-wing populist movements to which they see themselves as a counterpart. They want to confront the multiple urban crises with courage and concrete utopias rather than with the fear and fearmongering of right-wing movements. To this end, more than 600 representatives of these municipalist movements met in June 2017 at the invitation of Barcelona en Comú.
This article analyzes how the world art exhibition documenta 14 engaged with public space in Kassel. Conceived as a critique of global injustice, this year's documenta did not address the local conditions in Kassel but instead used the city as a stage. By not engaging with the concrete processes on site, as the exhibition did in Athens, it broke with the documenta tradition of seeking to contribute to the social development of the city.
Critique of Elites, Popular Alliances, and Inclusive Solidarity. An Interview on the Left-Populism Debate
(2018)
In the current economic and political crisis, debates about left-wing strategies are booming again. Particularly controversial are proposals that propagate a left populism as an alternative to the right-wing political project and to neoliberalism, and as a transformation strategy toward a socialist society. With their book Ein unanständiges Angebot? Mit linkem Populismus gegen Eliten und Rechte (2017), Thomas Goes and Violetta Bock have presented a programmatic review of existing concepts of left populism as well as their own vision of how a left populism can succeed, thereby fueling the debate on left populism in Germany. In this interview, they are asked about their positions and the controversies surrounding the book. The interview is intended as an opening for a debate. Responses to the positions presented, and connections to urban issues and urban social movements, are very welcome.
Following the restructuring of the power industry, electricity supply to end-use customers has undergone fundamental changes. In the restructured power system, some of the responsibilities of the vertically integrated distribution companies have been assigned to network managers and retailers. Under the new situation, retailers are in charge of providing electrical energy to electricity consumers who have signed contracts with them. Retailers usually procure the required energy at a variable price from wholesale electricity markets, forward contracts with energy producers, or distributed energy generators, and sell it at a fixed retail price to their clients. Retailers implement different strategies to reduce the potential financial losses and risks associated with the uncertain nature of wholesale spot electricity market prices and of the consumers' electrical load. This paper investigates the strategic behavior of retailers in implementing forward contracts, distributed energy sources, and demand-response programs with the aim of increasing their profit and reducing their risk, while keeping their retail prices as low as possible. For this purpose, the risk management problem of retailer companies collaborating with wholesale electricity markets is modeled through a bi-level programming approach, and a comprehensive framework for retail electricity pricing that considers customers' constraints is provided. In the first level of the proposed bi-level optimization problem, the retailer maximizes its expected profit for a given risk level of profit variability, while in the second level, the customers minimize their consumption costs. The proposed problem is formulated as a mixed-integer programming (MIP) problem and can be efficiently solved using available commercial solvers.
The simulation results on a test case confirm the effectiveness of the proposed demand-response program, based on a dynamic pricing approach, in reducing the retailer's risk and increasing its profit.
In this paper, the decision-making problem of retailers under a dynamic pricing approach for demand-response integration has been investigated. The retailer was assumed to rely on forward contracts, DGs, and the spot electricity market to supply the required active and reactive power of its customers. To verify the effectiveness of the proposed model, four schemes for the retailer's scheduling problem are considered, and the resulting profit under each scheme is analyzed and compared. The simulation results on a test case indicate that providing more options for the retailer to buy the required power of its customers, and increasing its flexibility in buying energy from the spot electricity market, reduces the retailer's risk and increases its profit. From the customers' perspective, the retailer's access to different power supply sources may also lead to a reduction in retail electricity prices, since the retailer would be able to decrease its electricity selling price to the customers, with the aim of attracting more customers, without losing its profitability. In this work, the conditional value at risk (CVaR) measure is used for considering and quantifying risk in the decision-making problems. Among all the possible options available to the retailer for optimizing its profit and risk, demand-response programs are the most beneficial for both the retailer and its customers. The simulation results on the case study prove that implementing a dynamic pricing approach on retail electricity prices to integrate demand-response programs can successfully encourage customers to shift their flexible demand from peak-load hours to mid-load and low-load hours. Comparing the simulation results of the third and fourth schemes evidences the impact of DRPs and customers' load shifting on the reduction of the retailer's risk, as well as the reduction of the retailer's payments to contract holders, DG owners, and the spot electricity market.
Furthermore, the numerical results point to the potential of reducing average retail prices by up to 8% under demand-response activation. Consequently, the approach provides a win–win solution for both the retailer and its customers.
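The CVaR measure mentioned above quantifies risk as the expected profit over the worst-case tail of the scenario distribution. A minimal sketch of that computation follows; the scenario profits are illustrative numbers, not data from the paper's case study.

```python
# Minimal sketch of the conditional value-at-risk (CVaR) measure used
# to quantify risk: for confidence level alpha, CVaR is the expected
# profit over the worst (1 - alpha) share of equally likely scenarios.
# The scenario profits below are illustrative, not data from the paper.

def cvar(profits, alpha=0.95):
    """Average profit of the worst (1 - alpha) fraction of scenarios."""
    ordered = sorted(profits)                          # worst outcomes first
    tail = max(1, int(round(len(ordered) * (1.0 - alpha))))
    worst = ordered[:tail]
    return sum(worst) / len(worst)

# 20 equally likely profit scenarios (arbitrary monetary units)
scenarios = [120, 95, 130, 80, 60, 140, 110, 90, 100, 125,
             70, 135, 85, 105, 115, 150, 65, 145, 75, 55]

risk_95 = cvar(scenarios, alpha=0.95)  # mean of the single worst scenario
risk_80 = cvar(scenarios, alpha=0.80)  # mean of the worst 4 scenarios
```

In a bi-level model such as the one described, a term like this would enter the retailer's upper-level objective, trading expected profit against the CVaR of profit variability.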
Management strategies for sustainable sugarcane production need to deal with the increasing complexity and variability of the whole sugar system. Moreover, they need to accommodate the multiple goals of different industry sectors and the wider community. Traditional disciplinary approaches are unable to provide integrated management solutions, and an approach based on whole-systems analysis is essential to bring about beneficial change to industry and the community. The application of this approach to water management, environmental management, and cane supply management is outlined; the literature indicates that the application of the extreme learning machine (ELM) has never been explored in this realm. Consequently, the leading objective of the current research was to fill this gap by applying the ELM to build a fast and accurate data-driven model of crop production. The key insight has been the need for innovation in the technical aspects of system function, underpinned by modelling of sugarcane growth. Therefore, the current study attempts to establish an integrated ELM model to predict the final growth of sugarcane. Prediction results were evaluated and further compared with artificial neural network (ANN) and genetic programming models. The accuracy of the ELM model is assessed using the statistical indicators root mean square error (RMSE), Pearson coefficient (r), and coefficient of determination (R2), with promising results of 0.8, 0.47, and 0.89, respectively. The results also show better generalization ability in addition to a faster learning curve. Thus, the proficiency of the ELM for further work on advancing prediction models for sugarcane growth was confirmed with promising results.
The production of a desired product requires effective use of the experimental model. The present study proposes an extreme learning machine (ELM) and a support vector machine (SVM) integrated with the response surface methodology (RSM) to address the complexity of optimizing and predicting the ethyl ester and methyl ester production process. The novel hybrid models of ELM-RSM and ELM-SVM are further used in a case study to estimate the yield of methyl and ethyl esters produced via trans-esterification from waste cooking oil (WCO), based on American Society for Testing and Materials (ASTM) standards. The results of the prediction phase were also compared with artificial neural networks (ANNs) and an adaptive neuro-fuzzy inference system (ANFIS), which were recently developed by the second author of this study. Based on the results, the ELM, with correlation coefficients of 0.9815 and 0.9863 for methyl and ethyl esters, respectively, had a higher estimation capability than the SVM, ANNs, and ANFIS. Accordingly, using ELM-RSM, the maximum production yield for ethyl ester was 96.86% at a temperature of 68.48 °C, a catalyst value of 1.15 wt. %, a mixing intensity of 650.07 rpm, and an alcohol-to-oil molar ratio (A/O) of 5.77; for methyl ester, the production yield was 98.46% at a temperature of 67.62 °C, a catalyst value of 1.1 wt. %, a mixing intensity of 709.42 rpm, and an A/O of 6.09. Therefore, ELM-RSM increased the production yield by 3.6% for ethyl ester and 3.1% for methyl ester compared with the experimental data.
Biodiesel, the main alternative to diesel fuel, is produced from renewable and widely available resources and improves engine emissions during combustion in diesel engines. In this study, biodiesel is first produced from waste cooking oil (WCO). The fuel samples are applied in a diesel engine, and engine performance is examined from the viewpoints of exergy and energy. Engine tests are performed at a constant speed of 1500 rpm with various loads and fuel samples. The obtained experimental data are also used to develop an artificial neural network (ANN) model. Response surface methodology (RSM) is employed to optimize the exergy and energy efficiencies. Based on the results of the energy analysis, optimal engine performance is obtained at 80% of full load with the B10 and B20 fuels. However, based on the exergy analysis results, optimal engine performance is obtained at 80% of full load with the B90 and B100 fuels. The optimum values of exergy and energy efficiencies are in the range of 25–30% of full load, which is the same as the range obtained from mathematical modeling.
As part of an international research project funded by the European Union, capillary glasses for facades are being developed that exploit energy storage by means of fluids flowing through the capillaries. To meet the highest visual demands, acrylate adhesives and EVA films are tested as possible bonding materials for the glass setup. Primarily non-destructive methods (visual analysis, analysis of birefringent properties, and computed tomography data) are applied to evaluate failure patterns as well as the long-term behavior under climatic influences. The experimental investigations are presented after different loading periods, providing information on the development of failures. In addition, detailed information and scientific findings on the application of computed tomography analyses are presented.
Thermal building simulation based on BIM models. Thermal-energetic simulations are used to estimate the heating demand of buildings and districts. These simulations are based on building models containing geometrical and physical information. The creation of the geometrical model is usually based on existing construction plans or on-site assessments, which demands a comparatively large investigation and modeling effort. Alterations later applied to the structure require manual changes to the related model, which increases the effort further. The physical model represents the set of parameters and boundary conditions determined by material properties, location, and environmental influences on the building. The link between both models is realized within the corresponding simulation software and is usually not transferable to other software products. By applying Building Information Modeling (BIM), simulation data can be stored consistently and exchanged with other software via interfaces. To this end, a method is presented that enables thermal-energetic simulations, including subsequent evaluations, based on the standardized exchange format Industry Foundation Classes (IFC). Geometrical and physical parameters are extracted directly from a building model that is kept up to date over its entire life cycle and are passed to the simulation. This accelerates the simulation process with regard to building modeling and after later structural changes. The developed method relies on simple modeling conventions for the creation of the building information model and ensures complete transferability of the input and output values.
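At its core, such a simulation combines geometry (envelope areas from the geometric model) with physics (U-values from the physical model) to estimate heat demand. The following is a deliberately minimal steady-state sketch of that combination, not the paper's method; all component areas and U-values are illustrative.

```python
# Minimal sketch of the kind of calculation a thermal-energetic
# simulation performs: steady-state transmission heat loss
# Q = sum(U_i * A_i) * dT, with envelope areas from the geometric model
# and U-values from the physical model. All values are illustrative.

def transmission_loss(components, t_inside, t_outside):
    """Heat loss in W for a list of (area_m2, u_value_W_per_m2K) pairs."""
    dT = t_inside - t_outside
    return sum(area * u for area, u in components) * dT

envelope = [
    (120.0, 0.28),   # exterior walls
    (80.0,  0.20),   # roof
    (25.0,  1.10),   # windows
    (80.0,  0.35),   # ground slab
]

q_loss = transmission_loss(envelope, t_inside=20.0, t_outside=-5.0)  # W
```

In a BIM-based workflow, the `envelope` list would be populated automatically from the IFC model rather than entered by hand, which is precisely the manual effort the presented method removes.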
The Local Governance of Arrival in Leipzig: Housing of Asylum-Seeking Persons as a Contested Field
(2018)
The article examines how the German city of Leipzig governs the housing of asylum seekers. Leipzig was a frontrunner in organizing decentralized accommodation for asylum seekers when it adopted its accommodation concept in 2012. This concept aimed at integrating asylum-seeking persons into the regular housing market at an early stage of arrival. Since then, however, the city of Leipzig has faced growing challenges in implementing the concept. This is due in part to the increasingly tight housing market at a time when the number of people seeking protection increased, and in part to discriminatory and xenophobic attitudes on the side of house owners and managers. We therefore argue that the so-called refugee crisis of 2015–2016 has to be seen in close interaction with a growing general housing shortage in Leipzig, as in many other large European cities. Furthermore, we understand the municipal governance of housing as a contested field, given its entanglement of diverse federal levels and policy scales, the diversity of stakeholders involved, and its dynamic change over recent years. We analyze this contested field against the current context of arrival and dynamic urban growth at the local level. Based on empirical qualitative research we conducted in 2016, Leipzig's local specifics are investigated under the umbrella of our conceptual framework of the Governance of Arrival. The issues of a strained housing market and the integration of asylum seekers into it do not apply only to Leipzig but shed light on similar developments in other European cities.
The late 1960s and especially the 1970s were a high phase of tenant protests in West Germany. This contribution pursues the thesis that the crisis of Fordist housing provision in the 1960s, and the strategies implemented by policymakers to resolve it, enabled a class alliance in housing-related protests, and that this class alliance split up over the course of the 1970s and 1980s, which led to the protest's containment within the emerging neoliberal project. In the following, I first describe the housing question of 1968 as a crisis of Fordist housing production and thus as the material basis of the class alliance. I then illustrate the class alliance, and trace its breakup, through protests in three areas: mass housing construction, urban renewal areas, and squatting. Finally, I ask what can be learned from this history today.
Performance assessment of a ductless personalized ventilation system using a validated CFD model
(2018)
The aim of this study is twofold: to validate a computational fluid dynamics (CFD) model and then to use the validated model to evaluate the performance of a ductless personalized ventilation (DPV) system. To validate the numerical model, a series of measurements was conducted in a climate chamber equipped with a thermal manikin. Various turbulence models, settings, and options were tested, and simulation results were compared with the measured data to determine the turbulence model and solver settings that achieve the best agreement between measured and simulated values. Subsequently, the validated CFD model was used to evaluate the thermal environment and indoor air quality in a room equipped with a DPV system combined with displacement ventilation. Results from the numerical model were then used to quantify thermal sensation and comfort using the UC Berkeley thermal comfort model.
This study aims to develop an approach for coupling a computational fluid dynamics (CFD) solver to the University of California, Berkeley (UCB) thermal comfort model to accurately evaluate thermal comfort. The coupling was implemented with an iterative JavaScript routine that automatically transfers data for each individual segment of the human body back and forth between the CFD solver and the UCB model until reaching convergence, defined by a stopping criterion. The location from which data are transferred to the UCB model was determined using a new approach based on the temperature difference between subsequent points on the temperature profile curve in the vicinity of the body surface. This approach was used because the microclimate surrounding the human body differs in thickness depending on the body segment and the surrounding environment. To accurately simulate the thermal environment, the numerical model was validated beforehand using experimental data collected in a climate chamber equipped with a thermal manikin. Furthermore, an example of the practical implementation of this coupling is reported in this paper through radiant-floor cooling simulation cases, in which overall and local thermal sensation and comfort were investigated using the coupled UCB model.
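The coupling described above is essentially a fixed-point iteration: each solver consumes the other's output until the exchanged quantity stops changing by more than a stopping criterion. The sketch below shows only that control loop; the two linear stand-in functions are hypothetical placeholders for the real CFD and UCB model calls, not their actual interfaces.

```python
# Minimal sketch of an iterative two-solver coupling loop with a
# convergence (stopping) criterion. The linear stand-in functions are
# placeholders for the real CFD solver and UCB comfort model calls.

def cfd_step(skin_temp):
    # placeholder for the CFD solver: returns local air temperature (degC)
    return 0.5 * skin_temp + 12.0

def comfort_step(air_temp):
    # placeholder for the UCB model: returns updated skin temperature (degC)
    return 0.4 * air_temp + 20.0

def couple(skin_temp=34.0, tol=1e-6, max_iter=100):
    """Iterate until the exchanged value changes by less than tol."""
    for iteration in range(1, max_iter + 1):
        air = cfd_step(skin_temp)
        new_skin = comfort_step(air)
        if abs(new_skin - skin_temp) < tol:   # stopping criterion
            return new_skin, iteration
        skin_temp = new_skin
    raise RuntimeError("coupling did not converge")

skin, n_iter = couple()
```

In the study, this exchange happens per body segment, with the transfer location chosen from the near-wall temperature profile; the loop structure is the same.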
Auguste Rodin's Weimar Eva
(2018)
A broadband soil dielectric spectrum retrieval approach (1 MHz–2 GHz) has been implemented for a layered half-space. The inversion kernel consists of a two-port transmission-line forward model in the frequency domain and a constitutive material equation based on a power-law soil mixture rule (Complex Refractive Index Model, CRIM). The spatially distributed retrieval of broadband dielectric spectra was achieved with a global optimization approach based on a Shuffled Complex Evolution (SCE) algorithm using the full set of scattering parameters. For each layer, the broadband dielectric spectrum was retrieved together with the corresponding layer thickness, porosity, water saturation, and electrical conductivity of the aqueous pore solution. For validation of the approach, a coaxial transmission-line cell measured with a network analyzer was used. The possibilities and limitations of the inverse parameter estimation were numerically analyzed in four scenarios. Expected and retrieved layer thicknesses, soil properties, and broadband dielectric spectra were in reasonable agreement in each scenario. Hence, the model is suitable for estimating inhomogeneous material parameter distributions. Moreover, the proposed frequency-domain approach allows an automatic adaptation of the layer number and thickness or of regular grids in time and/or space.
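The CRIM rule used as the constitutive equation mixes the phase permittivities by a power law, eps_eff^alpha = sum_i v_i * eps_i^alpha with alpha = 0.5. A minimal sketch follows; the phase permittivities are typical textbook values, not parameters from the study, and frequency dependence is ignored here.

```python
# Minimal sketch of the CRIM power-law mixing rule:
# eps_eff^alpha = v_solid*eps_solid^alpha + v_water*eps_water^alpha
#                 + v_air*eps_air^alpha, with alpha = 0.5.
# Phase permittivities are typical textbook values, not study parameters.

def crim_permittivity(porosity, saturation, eps_solid=5.0,
                      eps_water=80.0, eps_air=1.0, alpha=0.5):
    """Effective relative permittivity of a three-phase soil."""
    v_solid = 1.0 - porosity
    v_water = porosity * saturation
    v_air = porosity * (1.0 - saturation)
    mix = (v_solid * eps_solid ** alpha
           + v_water * eps_water ** alpha
           + v_air * eps_air ** alpha)
    return mix ** (1.0 / alpha)

eps_dry = crim_permittivity(porosity=0.4, saturation=0.0)
eps_wet = crim_permittivity(porosity=0.4, saturation=1.0)
```

In the inversion, porosity and saturation are the unknowns per layer; the SCE optimizer adjusts them until the forward-modeled scattering parameters match the measured ones.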
Wireless sensor networks have attracted great attention for applications in structural health monitoring (SHM) due to their ease of use, flexibility of deployment, and cost-effectiveness. This paper presents a software framework for WiFi-based wireless sensor networks composed of low-cost, mass-market single-board computers. A number of specific system-level software components were developed to enable robust data acquisition, data processing, sensor network communication, and timing, with a focus on SHM applications. The framework was validated on Raspberry Pi computers, and its performance was studied in detail. The paper presents several characteristics of the measurement quality, such as sampling accuracy and time synchronization, and discusses the specific limitations of the system. The implementation includes a complementary smartphone application used for data acquisition, visualization, and analysis. A prototypical implementation further demonstrates the feasibility of integrating smartphones as data acquisition nodes into the network, utilizing their internal sensors. The measurement system was employed in several monitoring campaigns, three of which are documented in detail. The suitability of the system is evaluated based on comparisons of target quantities with reference measurements. The results indicate that the presented system can robustly achieve a measurement performance commensurate with that required in many typical SHM tasks, such as modal identification. As such, it represents a cost-effective alternative to more traditional monitoring solutions.
Urban planners are often challenged with the task of developing design solutions which must meet multiple, and often contradictory, criteria. In this paper, we investigated the trade-offs between social, psychological, and energy potential of the fundamental elements of urban form: the street network and the building massing. Since formal mehods to evaluate urban form from the psychological and social point of view are not readily available, we developed a methodological framework to quantify these criteria as the first contribution in this paper. To evaluate the psychological potential, we conducted a three-tiered empirical study starting from real world environments and then abstracting them to virtual environments. In each context, the implicit (physiological) response and explicit (subjective) response of pedestrians were measured. To quantify the social potential, we developed a street network centrality-based measure of social accessibility.
For the energy potential, we created an energy model to analyze the impact of pure geometric form on the energy demand of the building stock. The second contribution of this work is a method to identify distinct clusters of urban form and, for each, explore the trade-offs between the selected design criteria. We applied this method to two case studies identifying nine types of urban form and their respective potential trade-offs, which are directly applicable for the assessment of strategic decisions regarding urban form during the early planning stages.
A categorical perspective towards aerodynamic models for aeroelastic analyses of bridge decks
(2019)
Reliable modelling in structural engineering is crucial for the serviceability and safety of structures. A huge variety of aerodynamic models for aeroelastic analyses of bridges poses natural questions on their complexity and thus, quality. Moreover, a direct comparison of aerodynamic models is typically either not possible or senseless, as the models can be based on very different physical assumptions. Therefore, to address the question of principal comparability and complexity of models, a more abstract approach, accounting for the effect of basic physical assumptions, is necessary.
This paper presents an application of a recently introduced category theory-based modelling approach to a diverse set of models from bridge aerodynamics. Initially, the categorical approach is extended to allow an adequate description of aerodynamic models. Complexity of the selected aerodynamic models is evaluated, based on which model comparability is established. Finally, the utility of the approach for model comparison and characterisation is demonstrated on an illustrative example from bridge aeroelasticity. The outcome of this study is intended to serve as an alternative framework for model comparison and impact future model assessment studies of mathematical models for engineering applications.
Estimating the solubility of carbon dioxide in ionic liquids, using reliable models, is of paramount importance from both environmental and economic points of view. In this regard, the current research aims at evaluating the performance of two data-driven techniques, namely multilayer perceptron (MLP) and gene expression programming (GEP), for predicting the solubility of carbon dioxide (CO2) in ionic liquids (ILs) as a function of pressure, temperature, and four thermodynamic parameters of the ionic liquid. To develop the above techniques, 744 experimental data points derived from the literature including 13 ILs were used (80% of the points for training and 20% for validation). Two backpropagation-based methods, namely Levenberg–Marquardt (LM) and Bayesian Regularization (BR), were applied to optimize the MLP algorithm. Various statistical and graphical assessments were applied to check the credibility of the developed techniques. The results were then compared with those calculated using Peng–Robinson (PR) or Soave–Redlich–Kwong (SRK) equations of state (EoS). The highest coefficient of determination (R2 = 0.9965) and the lowest root mean square error (RMSE = 0.0116) were recorded for the MLP-LMA model on the full dataset (with a negligible difference to the MLP-BR model). The comparison of results from this model with the vastly applied thermodynamic equation of state models revealed slightly better performance, but the EoS approaches also performed well with R2 from 0.984 up to 0.996. Lastly, the newly established correlation based on the GEP model exhibited very satisfactory results with overall values of R2 = 0.9896 and RMSE = 0.0201.
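The model ranking in this abstract rests on two metrics, R² and RMSE. As a minimal illustrative sketch of how those two numbers relate to the residuals (the solubility values below are made up for illustration, not the paper's experimental data):

```python
import math

def r2_rmse(y_true, y_pred):
    """Coefficient of determination and root mean square error."""
    n = len(y_true)
    mean_y = sum(y_true) / n
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean_y) ** 2 for t in y_true)
    return 1.0 - ss_res / ss_tot, math.sqrt(ss_res / n)

# Hypothetical CO2-solubility values (mole fraction), for illustration
# only -- not the 744-point dataset used in the paper.
y_true = [0.10, 0.25, 0.40, 0.55, 0.70]
y_pred = [0.12, 0.24, 0.41, 0.53, 0.71]
r2, rmse = r2_rmse(y_true, y_pred)
```

A high R² with a low RMSE, as reported for the MLP-LMA model, indicates that the residual spread is small relative to the spread of the data itself.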
Explanatory approaches to gentrification have been debated for fifty years. For far longer, investment-seeking capital has moved from one place to another, leaving behind investment ruins on the one hand and, on the other, people who lose their homes through displacement. Only far more recently has the term gentrification been taken up here and there by social movements that engage with the latter phenomenon.
This contribution is concerned neither with the scholarly debate on explanatory approaches to gentrification nor with the scientific relevance of the term, but with its role and function in social movements.
FCS-MBFLEACH: Designing an Energy-Aware Fault Detection System for Mobile Wireless Sensor Networks
(2019)
Wireless sensor networks (WSNs) include large-scale sensor nodes that are densely distributed over a geographical region that is completely randomized for monitoring, identifying, and analyzing physical events. The crucial challenge in wireless sensor networks is the very high dependence of the sensor nodes on limited battery power to exchange information wirelessly, as well as the non-rechargeable batteries of the wireless sensor nodes, which make the management and monitoring of these nodes in terms of abnormal changes very difficult. These anomalies arise from faults, including hardware and software faults, as well as attacks by intruders, all of which affect the comprehensiveness of the data collected by wireless sensor networks. Hence, crucial measures should be taken to detect faults in the network early, despite the limitations of the sensor nodes. Machine learning methods offer solutions that can be used to detect sensor node faults in the network. The purpose of this study is to use several classification methods, namely MB-FLEACH, one-class support vector machine (SVM), fuzzy one-class SVM, and the combined FCS-MBFLEACH method, to compute the fault detection accuracy with different node densities under two scenarios in regions of interest. It should be noted that, in studies so far, no super cluster head (SCH) selection has been performed to detect node faults in the network. The simulation outcomes demonstrate that the FCS-MBFLEACH method has the best performance in terms of fault detection accuracy, false-positive rate (FPR), average remaining energy, and network lifetime compared to the other classification methods.
Rapid advancements of modern technologies put high demands on mathematical modelling of engineering systems. Typically, systems are no longer “simple” objects, but rather coupled systems involving multiphysics phenomena, the modelling of which involves coupling of models that describe different phenomena. After constructing a mathematical model, it is essential to analyse the correctness of the coupled models and to detect modelling errors compromising the final modelling result. Broadly, there are two classes of modelling errors: (a) errors related to abstract modelling, e.g., conceptual errors concerning the coherence of a model as a whole, and (b) errors related to concrete modelling or instance modelling, e.g., questions of approximation quality and implementation. Instance modelling errors, on the one hand, are relatively well understood. Abstract modelling errors, on the other, are not appropriately addressed by modern modelling methodologies. The aim of this paper is to initiate a discussion on abstract approaches and their usability for mathematical modelling of engineering systems, with the goal of making it possible to catch conceptual modelling errors early and automatically by computer-assisted tools. To that end, we argue that it is necessary to identify and employ suitable mathematical abstractions to capture an accurate conceptual description of the process of modelling engineering systems.
This study aims to evaluate a new approach in modeling gully erosion susceptibility (GES) based on a deep learning neural network (DLNN) model and an ensemble particle swarm optimization (PSO) algorithm with DLNN (PSO-DLNN), comparing these approaches with common artificial neural network (ANN) and support vector machine (SVM) models in Shirahan watershed, Iran. For this purpose, 13 independent variables affecting GES in the study area, namely, altitude, slope, aspect, plan curvature, profile curvature, drainage density, distance from a river, land use, soil, lithology, rainfall, stream power index (SPI), and topographic wetness index (TWI), were prepared. A total of 132 gully erosion locations were identified during field visits. To implement the proposed model, the dataset was divided into the two categories of training (70%) and testing (30%). The results indicate that the area under the curve (AUC) value from receiver operating characteristic (ROC) considering the testing datasets of PSO-DLNN is 0.89, which indicates superb accuracy. The rest of the models are associated with optimal accuracy and have similar results to the PSO-DLNN model; the AUC values from ROC of DLNN, SVM, and ANN for the testing datasets are 0.87, 0.85, and 0.84, respectively. The ensemble with the PSO algorithm thus increased the efficiency of GES prediction. Therefore, it can be concluded that the DLNN model and its ensemble with the PSO algorithm can be used as a novel and practical method to predict gully erosion susceptibility, which can help planners and managers to manage and reduce the risk of this phenomenon.
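The AUC values reported above can be read as the probability that a randomly chosen gully location receives a higher susceptibility score than a randomly chosen non-gully location. A minimal sketch of that computation via the Mann–Whitney statistic (the scores below are hypothetical, not the study's data):

```python
def auc(labels, scores):
    """AUC via the Mann-Whitney statistic: the probability that a random
    positive (gully) site scores higher than a random negative site."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical susceptibility scores for six test locations (1 = gully).
labels = [1, 1, 1, 0, 0, 0]
scores = [0.9, 0.8, 0.4, 0.5, 0.3, 0.2]
```

Here `auc(labels, scores)` is 8/9 ≈ 0.89: of the nine positive/negative pairs, only one is ranked the wrong way round.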
When it comes to monitoring large structures, the main issues are limited time, high costs, and how to deal with the large amount of data. Methods from the field of optimal design of experiments are useful and supportive for reducing and managing these. Having optimal experimental designs at hand before conducting any measurements leads to a highly informative measurement concept in which the sensor positions are optimized with respect to minimal errors in the structures' models. For the reduction of computational time, a combined approach using the Fisher Information Matrix and the mean-squared error in a two-step procedure is proposed under the consideration of different error types. The error descriptions contain random/aleatoric and systematic/epistemic portions. Applying this combined approach to a finite element model using artificial acceleration time measurement data with artificially added errors leads to the optimized sensor positions. These findings are compared to results from laboratory experiments on the modeled structure, a tower-like structure represented by a hollow pipe acting as a cantilever beam. Conclusively, the combined approach leads to a sound experimental design that provides a good estimate of the structure's behavior and model parameters without the need for preliminary measurements for model updating.
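One common way to use the Fisher Information Matrix for sensor placement is greedy D-optimal selection: repeatedly add the candidate position that most increases det(JᵀJ). The sketch below (hypothetical sensitivity values, a two-parameter model, and uncorrelated homoscedastic noise assumed; not the paper's two-step procedure) illustrates the idea:

```python
# det of a 2x2 matrix and the Fisher Information Matrix J^T J for a
# two-parameter model, assuming uncorrelated, homoscedastic noise.
def det2(m):
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

def fim(rows):
    g = [[0.0, 0.0], [0.0, 0.0]]
    for r in rows:
        for i in range(2):
            for j in range(2):
                g[i][j] += r[i] * r[j]
    return g

def greedy_select(J, k):
    """D-optimal greedy placement: seed with the single most informative
    candidate (largest row norm), then repeatedly add the candidate that
    most increases det(FIM)."""
    chosen = [max(range(len(J)), key=lambda i: J[i][0] ** 2 + J[i][1] ** 2)]
    remaining = [i for i in range(len(J)) if i not in chosen]
    while len(chosen) < k:
        best = max(remaining,
                   key=lambda i: det2(fim([J[j] for j in chosen] + [J[i]])))
        chosen.append(best)
        remaining.remove(best)
    return sorted(chosen)

# Hypothetical sensitivities of two model parameters at four candidate
# sensor positions along the cantilever (illustrative numbers only).
J = [[0.2, 0.1], [0.9, 0.1], [0.1, 0.8], [0.5, 0.5]]
positions = greedy_select(J, 2)
```

The greedy criterion favors positions whose sensitivity rows are both large and complementary, which is why positions 1 and 2 (each sensitive to a different parameter) are chosen here.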
The amount of styrene acrylate copolymer (SA) particles adsorbed on cementitious surfaces at the early stage of hydration was quantitatively determined using three different methodological approaches: the depletion method, visible spectrophotometry (VIS), and thermogravimetry coupled with mass spectrometry (TG–MS). Considering the advantages and disadvantages of each method, including the sample preparation each requires, the results for four polymer-modified cement pastes, varying in polymer content and cement fineness, were evaluated.
In part, significant discrepancies in the adsorption degrees were observed. There is a tendency that significantly lower amounts of adsorbed polymers were identified using TG–MS compared to values determined with the depletion method, with spectrophotometrically generated values lying in between these extremes. This tendency was found for three of the four cement pastes examined and originates in sample preparation and methodical limitations.
The main influencing factor is the distortion of the polymer concentration in the liquid phase during centrifugation, caused by interactions at the interface between sediment and supernatant. The newly developed method, using TG–MS for the quantification of SA particles, proved suitable for dealing with these issues: instead of the fluid phase, the sediment is examined with regard to the polymer content, on which the influence of centrifugation is considerably lower.
Acoustic travel-time TOMography (ATOM) allows the measurement and reconstruction of air temperature distributions. Due to limiting factors, such as the challenge of travel-time estimation of the early reflections in the room impulse response, which heavily depends on the position of transducers inside the measurement area, ATOM is applied mainly outdoors. To apply ATOM in buildings, this paper presents a numerical solution to optimize the positions of transducers. This optimization avoids reflection overlaps, leading to distinguishable travel-times in the impulse response reflectogram. To increase the accuracy of the measured temperature within tomographic voxels, an additional function is incorporated into the proposed numerical method to minimize the number of sound-path-free voxels, ensuring the best sound-ray coverage of the room. Subsequently, an experimental set-up has been performed to verify the proposed numerical method. The results indicate the positive impact of the optimal positions of transducers on the distribution of ATOM-temperatures.
Discrete function theory in the higher-dimensional setting has been in active development for many years. However, available results focus on the discrete setting for such canonical domains as the half-space, while the case of bounded domains has generally remained unconsidered. Therefore, this paper presents an extension of the higher-dimensional function theory to the case of arbitrary bounded domains in Rn. Along the way, a discrete Stokes formula, a discrete Borel–Pompeiu formula, as well as discrete Hardy spaces for general bounded domains are constructed. Finally, several discrete Hilbert problems are considered.
This article focuses on further developments of the background-oriented schlieren (BOS) technique to visualize convective indoor air flow, which is usually defined by very small density gradients. Since the light rays deflect when passing through fluids with different densities, BOS can detect the resulting refractive index gradients as integration along a line of sight. In this paper, the BOS technique is used to yield a two-dimensional visualization of small density gradients. The novelty of the described method is the implementation of a highly sensitive BOS setup to visualize the ascending thermal plume from a heated thermal manikin with temperature differences of minimum 1 K. To guarantee steady boundary conditions, the thermal manikin was seated in a climate laboratory. For the experimental investigations, a high-resolution DSLR camera was used, capturing a large field of view with sufficient detail accuracy. Several parameters such as various backgrounds, focal lengths, room air temperatures, and distances between the object of investigation, camera, and structured background were tested to find the most suitable parameters to visualize convective indoor air flow. Besides these measurements, this paper presents the analyzing method using cross-correlation algorithms and finally the results of visualizing the convective indoor air flow with BOS. The highly sensitive BOS setup presented in this article complements the commonly used invasive methods that highly influence weak air flows.
The contribution explores the migratory situation in the Balkans, and more specifically in the so-called Refugee District in Belgrade, from a spatial perspective. By visualizing the areas of tension in the Refugee District, the city of Belgrade, Serbia, and Europe, it aims to disentangle the political and socio-spatial levels that lead to the stuck situation of in-betweenness at the gates of the European Union.
Novel sanitation systems aim at the resource-oriented recovery of wastewater, to be achieved through the separate collection of wastewater streams. Among professionals in water management and spatial planning, novel sanitation systems are regarded as a suitable approach to securing wastewater disposal in rural areas in the future. Although the practicality of these systems has been demonstrated in research projects, manifold risks have so far made it difficult for wastewater utilities to introduce resource-oriented wastewater management. Starting from an investigation of the contexts of implementing a novel sanitation system in rural Thuringia, this contribution examines how the introduction of resource-oriented system approaches can be supported at the state level with the existing instruments of wastewater management. The central elements of the contribution are a presentation of the main transformation risks involved in introducing innovative solutions, an explanation of the specific wastewater-management instruments, and an outline of governance approaches that can promote the introduction of novel sanitation systems. As a result, the feasibility of novel sanitation systems through the strategic use of these instruments becomes evident, although the extension of previous system boundaries makes the water sector dependent on cooperation with other areas of public services.
A Machine Learning Framework for Assessing Seismic Hazard Safety of Reinforced Concrete Buildings
(2020)
Although averting a seismic disturbance and its physical, social, and economic disruption is practically impossible, advancements in computational science and numerical modeling equip humanity to predict its severity, understand the outcomes, and prepare for post-disaster management. Many aging buildings in developed metropolitan areas are still in service, and many were designed before national seismic codes were established or without construction regulations. Risk reduction is therefore significant for developing alternatives and designing suitable models to enhance the performance of existing structures. Such models will be able to classify risks and casualties related to possible earthquakes through emergency preparation. Thus, it is crucial to recognize structures that are susceptible to earthquake vibrations and need to be prioritized for retrofitting. However, studying the behavior of each building under seismic actions through full structural analysis is often unrealistic because of the rigorous computations, long duration, and substantial expenditure involved. This calls for a simple, reliable, and accurate process known as Rapid Visual Screening (RVS), which serves as a primary screening platform, including an optimum number of seismic parameters and predetermined performance damage conditions for structures. In this study, the damage classification technique was studied, and the efficacy of the Machine Learning (ML) method in damage prediction via a Support Vector Machine (SVM) model was explored. The ML model is trained and tested separately on damage data from four different earthquakes, namely Ecuador, Haiti, Nepal, and South Korea. Each dataset consists of a varying number of input data points and eight performance modifiers.
Based on the study and the results, the ML model using SVM classifies the given input data into the belonging damage classes and thereby accomplishes hazard safety evaluation of buildings.
Personalized ventilation (PV) can improve thermal comfort as well as the quality of the inhaled air by supplying fresh air separately to each workplace. This contribution investigates the effect of PV on the thermal comfort of users under summer boundary conditions. Two approaches for evaluating the cooling effect of PV were examined: one based on (1) the equivalent temperature and one based on (2) thermal sensation. The evaluation draws on data measured in a climate chamber as well as numerically simulated data. Before the simulations were carried out, the numerical model was first validated against the measured data. The results show that the approach based on thermal sensation may be more suitable for evaluating the cooling effect of PV, since it better accounts for the complex physiological factors involved.
The latest earthquakes have proven that several existing buildings, particularly in developing countries, are not safe from earthquake damage. A variety of statistical and machine-learning approaches have been proposed to identify vulnerable buildings for the prioritization of retrofitting. The present work aims to investigate earthquake susceptibility through the combination of six building performance variables that can be used to obtain an optimal prediction of the damage state of reinforced concrete buildings using an artificial neural network (ANN). In this regard, a multi-layer perceptron network is trained and optimized using a database of 484 damaged buildings from the Düzce earthquake in Turkey. The results demonstrate the feasibility and effectiveness of the selected ANN approach to classify concrete structural damage, which can be used as a preliminary assessment technique to identify vulnerable buildings in disaster risk-management programs.
In view of its effects on the social and ecological situation, the settlement and housing policy practiced in Germany in 2020 leaves a bitter aftertaste. Rich and poor are drifting further apart, and historically and systemically conditioned path dependencies still stand in the way of a targeted ecological transformation of the way urban development and housing policy are shaped. These dependencies become visible only through an integrated consideration of social and economic aspects, and they point to one of the original questions of left-wing social research: the examination of the relationship between property and justice.
Three key findings emerge: first, the discourse on protecting the climate and biodiversity directly touches on the parameters of density, mixed use, and land consumption; second, land consumption rises relative to increasing individually available capital, particularly in owner-occupied housing as compared with rental housing; and third, the share of ownership grows with the advancing financialization of the housing market, so that the risk of social and ecological crises intensifies.
In this paper, an artificial neural network is implemented to predict the thermal conductivity ratio of TiO2-Al2O3/water nanofluid. TiO2-Al2O3/water, as an innovative type of nanofluid, was synthesized by the sol–gel method. The results indicated that 1.5 vol.% of nanofluid enhanced the thermal conductivity by up to 25%. It was shown that the heat transfer coefficient increased linearly with nanoparticle concentration, but its variation with temperature was nonlinear. It should be noted that an increase in concentration may cause the particles to agglomerate, which reduces the thermal conductivity. An increase in temperature also increases the thermal conductivity, due to stronger Brownian motion and more frequent particle collisions. To predict the thermal conductivity of the TiO2-Al2O3/water nanofluid as a function of volumetric concentration and temperature, SOM (self-organizing map) and BP-LM (Back Propagation–Levenberg–Marquardt) algorithms were used. Based on the results obtained, these algorithms can be considered an exceptional tool for predicting thermal conductivity. Additionally, the correlation coefficient values were equal to 0.938 and 0.98 when implementing the SOM and BP-LM algorithms, respectively, which is highly acceptable.
The performance of ductless personalized ventilation (DPV) was compared to the performance of a typical desk fan since they are both stand-alone systems that allow the users to personalize their indoor environment. The two systems were evaluated using a validated computational fluid dynamics (CFD) model of an office room occupied by two users. To investigate the impact of DPV and the fan on the inhaled air quality, two types of contamination sources were modelled in the domain: an active source and a passive source. Additionally, the influence of the compared systems on thermal comfort was assessed using the coupling of CFD with the comfort model developed by the University of California, Berkeley (UCB model). Results indicated that DPV performed generally better than the desk fan. It provided better thermal comfort and showed a superior performance in removing the exhaled contaminants. However, the desk fan performed better in removing the contaminants emitted from a passive source near the floor level. This indicates that the performance of DPV and desk fans depends highly on the location of the contamination source. Moreover, the simulations showed that both systems increased the spread of exhaled contamination when used by the source occupant.
The K-nearest neighbors (KNN) machine learning algorithm is a well-known non-parametric classification method. However, like other traditional data mining methods, applying it on big data comes with computational challenges. Indeed, KNN determines the class of a new sample based on the class of its nearest neighbors; however, identifying the neighbors in a large amount of data imposes a computational cost so large that the method is no longer feasible on a single computing machine. One of the proposed techniques to make classification methods applicable on large datasets is pruning. LC-KNN is an improved KNN method which first clusters the data into smaller partitions using the K-means clustering method and then applies KNN for each new sample on the partition whose center is nearest. However, because the clusters have different shapes and densities, selection of the appropriate cluster is a challenge. In this paper, an approach has been proposed to improve the pruning phase of the LC-KNN method by taking these factors into account. The proposed approach helps to choose a more appropriate cluster of data in which to look for the neighbors, thus increasing the classification accuracy. The performance of the proposed approach is evaluated on different real datasets. The experimental results show the effectiveness of the proposed approach and its higher classification accuracy and lower time cost in comparison to other recent relevant methods.
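The LC-KNN baseline described above can be sketched in a few lines: cluster the data with k-means, then answer each query by running KNN only inside the cluster whose center is nearest. This is a minimal toy illustration of that pruning idea (tiny made-up data, not the paper's improved cluster-selection criterion):

```python
import math
import random
from collections import Counter

def kmeans(points, k, iters=25, seed=1):
    """Plain k-means on small 2-D data; returns centers and partitions."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    groups = [[] for _ in range(k)]
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:
            i = min(range(k), key=lambda c: math.dist(p, centers[c]))
            groups[i].append(p)
        centers = [tuple(sum(x) / len(g) for x in zip(*g)) if g else centers[i]
                   for i, g in enumerate(groups)]
    return centers, groups

def lc_knn(query, centers, groups, labels, k=3):
    """LC-KNN pruning step: run KNN only inside the cluster whose
    center is nearest to the query, not over the full dataset."""
    c = min(range(len(centers)), key=lambda i: math.dist(query, centers[i]))
    neighbours = sorted(groups[c], key=lambda p: math.dist(query, p))[:k]
    return Counter(labels[p] for p in neighbours).most_common(1)[0][0]

# Two hypothetical, well-separated classes (toy data, not a real benchmark).
cls_a = [(0.0, 0.0), (0.2, 0.1), (0.1, 0.3), (0.3, 0.2)]
cls_b = [(5.0, 5.0), (5.2, 5.1), (4.9, 5.3), (5.1, 4.8)]
labels = {**{p: "A" for p in cls_a}, **{p: "B" for p in cls_b}}
centers, groups = kmeans(cls_a + cls_b, k=2)
```

The speed-up comes from the neighbor search touching only one partition; the paper's contribution addresses the failure mode where the nearest center belongs to a cluster whose shape or density makes it the wrong place to search.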
In this study, machine learning methods of artificial neural networks (ANNs), least squares support vector machines (LSSVM), and neuro-fuzzy are used for advancing prediction models for thermal performance of a photovoltaic-thermal solar collector (PV/T). In the proposed models, the inlet temperature, flow rate, heat, solar radiation, and the sun heat have been considered as the input variables. The dataset was extracted through experimental measurements on a novel solar collector system. Different analyses are performed to examine the credibility of the introduced models and evaluate their performances. The proposed LSSVM model outperformed the ANFIS and ANN models. The LSSVM model is particularly suitable when laboratory measurements are costly and time-consuming, or when obtaining such values requires sophisticated interpretation.
A novel combination of the ant colony optimization algorithm (ACO) and computational fluid dynamics (CFD) data is proposed for modeling multiphase chemical reactors. The proposed intelligent model presents a probabilistic computational strategy for predicting various levels of three-dimensional bubble column reactor (BCR) flow. The results prove an enhanced communication between ant colony prediction and CFD data in different sections of the BCR.
Recently, the demand for residence and usage of urban infrastructure has increased, raising the risk to human lives from natural calamities. The occupancy demand has rapidly increased the construction rate, while inadequately designed structures remain highly vulnerable. Buildings constructed before the development of seismic codes are additionally susceptible to earthquake vibrations. Structural collapse causes economic losses as well as setbacks for human lives. Applying detailed theoretical methods to analyze the behavior of every structure is expensive and time-consuming. Therefore, a rapid vulnerability assessment method to check structural performance is necessary for future developments. This process is known as Rapid Visual Screening (RVS), a technique developed to identify, inventory, and screen structures that are potentially hazardous. Sometimes, poor construction quality means that some of the required parameters are unavailable; in such cases, the RVS process becomes tedious. Hence, to tackle such situations, multiple-criteria decision-making (MCDM) methods open a new gateway for seismic vulnerability assessment: the parameters required by RVS can be taken as criteria in MCDM, which evaluates multiple conflicting criteria in decision making across several fields. This paper aims to bridge the gap between RVS and MCDM. Furthermore, to define the correlation between these techniques, the methodologies of the Indian, Turkish, and Federal Emergency Management Agency (FEMA) codes have been implemented. The effects of seismic vulnerability of structures have been observed and compared.
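To make the RVS-as-MCDM idea concrete, one of the simplest MCDM techniques is simple additive weighting (SAW): normalized criterion scores are combined with expert weights and buildings are ranked by the result. The criteria, weights, and building values below are hypothetical illustrations, not taken from the Indian, Turkish, or FEMA methodologies:

```python
# Simple additive weighting (SAW), one of the basic MCDM techniques:
# criterion values normalized to [0, 1] are combined with expert
# weights; higher scores mean higher priority for detailed assessment.
criteria = ["soft_storey", "vertical_irregularity", "plan_irregularity", "age"]
weights = [0.4, 0.3, 0.2, 0.1]

def saw_score(values, weights):
    assert abs(sum(weights) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(v * w for v, w in zip(values, weights))

buildings = {
    "B1": [1.0, 0.5, 0.0, 0.8],  # soft storey present, older building
    "B2": [0.0, 0.2, 0.4, 0.3],
}
ranked = sorted(buildings, key=lambda b: saw_score(buildings[b], weights),
                reverse=True)
```

The ranking directly yields a retrofitting priority list; more elaborate MCDM methods differ mainly in how the weights are elicited and how conflicting criteria are aggregated.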
This research aims to model soil temperature (ST) using machine learning models of the multilayer perceptron (MLP) algorithm and support vector machine (SVM) in hybrid form with the Firefly optimization algorithm, i.e. MLP-FFA and SVM-FFA. In the current study, measured ST and meteorological parameters of Tabriz and Ahar weather stations for the period 2013–2015 are used for training and testing of the studied models with delays of one and two days. To ascertain conclusive results for validation of the proposed hybrid models, the error metrics are benchmarked in an independent testing period. Moreover, Taylor diagrams were utilized for that purpose. The obtained results showed that, in the case of a one-day delay, except in predicting ST at 5 cm below the soil surface (ST5cm) at Tabriz station, MLP-FFA produced superior results compared with the MLP, SVM, and SVM-FFA models. However, for a two-day delay, MLP-FFA indicated increased accuracy in predicting ST5cm and ST20cm of Tabriz station and ST10cm of Ahar station in comparison with SVM-FFA. Additionally, for all of the prescribed models, the performance of the MLP-FFA and SVM-FFA hybrid models in the testing phase was found to be meaningfully superior to the classical MLP and SVM models.
Energy‐Efficient Method for Wireless Sensor Networks Low‐Power Radio Operation in Internet of Things
(2020)
The radio operation in wireless sensor networks (WSN) in Internet of Things (IoT) applications is the most common source of power consumption. Consequently, recognizing and controlling the factors affecting radio operation can be valuable for managing node power consumption. Among the essential factors affecting radio operation, the time spent checking the radio is of utmost importance for monitoring power consumption, as it can lead to false WakeUps or idle listening in radio duty cycles. ContikiMAC is a low-power radio duty-cycle protocol in Contiki OS that, in WakeUp mode, periodically performs a clear channel assessment (CCA) to check the radio status. This paper presents a detailed analysis of the radio WakeUp time factors of ContikiMAC. Furthermore, we propose a lightweight CCA (LW-CCA) as an extension to ContikiMAC that reduces the radio duty cycles spent in false WakeUps and idle listening through a dynamic received signal strength indicator (RSSI) status check time. The simulation results in the Cooja simulator show that LW-CCA reduces node energy consumption by about 8% while maintaining up to 99% of the packet delivery rate (PDR).
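To see why shortening the channel check matters, a back-of-the-envelope duty-cycle model is helpful: the average current is dominated by the fraction of time the radio is listening. All currents and timings below are hypothetical illustration values, not measured ContikiMAC or LW-CCA figures:

```python
# Average radio current for a duty-cycled node: the radio wakes every
# `period_s` seconds for a channel check of length `cca_s` and sleeps
# otherwise. All currents and timings are hypothetical, not measured
# ContikiMAC figures.
def avg_current_ma(period_s, cca_s, i_listen_ma, i_sleep_ma):
    duty = cca_s / period_s
    return duty * i_listen_ma + (1.0 - duty) * i_sleep_ma

baseline = avg_current_ma(0.125, 0.002, 20.0, 0.005)   # standard CCA window
shortened = avg_current_ma(0.125, 0.001, 20.0, 0.005)  # shorter RSSI check
saving = 1.0 - shortened / baseline
```

Even small reductions in the per-wakeup check time translate directly into a lower average current, which is the lever LW-CCA pulls by adapting the RSSI check time dynamically.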
The classical Internet of Things routing and wireless sensor networks can provide more precise monitoring of the covered area due to the higher number of utilized nodes. Because of the limitations of the shared transfer medium, many nodes in the network are prone to collisions in simultaneous transmissions. Medium access control protocols are usually more practical in networks with low traffic that are not subjected to external noise from adjacent frequencies. There are preventive, detection, and control solutions for congestion management in the network, all of which are the focus of this study. In the congestion prevention phase, the proposed method chooses the next hop of the path using a fuzzy decision-making system to distribute network traffic via optimal paths. In the congestion detection phase, a dynamic approach to queue management was designed to detect congestion in the least amount of time and prevent collisions. In the congestion control phase, the back-pressure method was used, based on the quality of the queue, to decrease the probability of including a pre-congested node in the pathway. The main goals of this study are to balance energy consumption among network nodes, reduce the rate of lost packets, and increase the quality of service in routing. Simulation results proved that the proposed Congestion Control Fuzzy Decision Making (CCFDM) method was more capable of improving routing parameters than recent algorithms.