Auguste Rodins Weimarer Eva
(2018)
In this research, an attempt was made to reduce the dimension of wavelet-ANFIS/ANN (adaptive neuro-fuzzy inference system/artificial neural network) models toward reliable forecasts as well as to decrease computational cost. In this regard, principal component analysis (PCA) was performed on the input time series decomposed by a discrete wavelet transform to feed the ANN/ANFIS models. The models were applied for dissolved oxygen (DO) forecasting in rivers, which is an important variable affecting aquatic life and water quality. The current values of DO, water surface temperature, salinity, and turbidity were considered as the input variables to forecast DO three time steps ahead. The results of the study revealed that PCA can be employed as a powerful tool for dimension reduction of input variables and also to detect inter-correlation of input variables. Results of the PCA-wavelet-ANN models are compared with those obtained from wavelet-ANN models, where the former have the advantage of less computational time than the latter. Dealing with ANFIS models, PCA is more beneficial in preventing wavelet-ANFIS models from creating too many rules, which deteriorates the efficiency of the ANFIS models. Moreover, manipulating the wavelet-ANFIS models utilizing PCA leads to a significant decrease in computational time. Finally, it was found that the PCA-wavelet-ANN/ANFIS models can provide reliable forecasts of dissolved oxygen as an important water quality indicator in rivers.
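The dimension-reduction step described in this abstract, feeding only the leading principal components of the wavelet sub-series to the forecasting model, can be sketched in a few lines. The data below are synthetic stand-ins, not the study's river measurements, and the component count is an arbitrary choice:

```python
import numpy as np

def pca_reduce(X, n_components):
    """Project a feature matrix X (samples x features) onto its
    leading principal components via SVD."""
    Xc = X - X.mean(axis=0)            # center each feature
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T    # scores in the reduced space

# Synthetic stand-in for wavelet sub-series of DO, temperature,
# salinity and turbidity: 200 samples, 12 strongly correlated features.
rng = np.random.default_rng(0)
base = rng.normal(size=(200, 3))
X = base @ rng.normal(size=(3, 12)) + 0.05 * rng.normal(size=(200, 12))

Z = pca_reduce(X, n_components=3)      # reduced inputs for the ANN/ANFIS
print(Z.shape)
```

The reduced matrix `Z` would then replace the full set of decomposed sub-series as network input, which is what cuts the computational cost.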
This study investigates the performance of two systems: personalized ventilation (PV) and ductless personalized ventilation (DPV). Even though the literature indicates a compelling performance of PV, it is not often used in practice due to its impracticality. Therefore, the present study assesses the possibility of replacing the inflexible PV with DPV in office rooms equipped with displacement ventilation (DV) in the summer season. Numerical simulations were utilized to evaluate the inhaled concentration of pollutants when PV and DPV are used. The systems were compared in a simulated office with two occupants: a susceptible occupant and a source occupant. Three types of pollution were simulated: exhaled infectious air, dermally emitted contamination, and room contamination from a passive source. Results indicated that PV improved the inhaled air quality regardless of the location of the pollution source, and a higher PV supply flow rate positively impacted the inhaled air quality. By contrast, the performance of DPV was highly sensitive to the source location and the personalized flow rate: a higher DPV flow rate tends to decrease the inhaled air quality due to increased mixing of pollutants in the room. Moreover, both systems achieved better results when the personalized system of the source occupant was switched off.
Welfare‐state transformation and entrepreneurial urban politics in Western welfare states since the late 1970s have yielded converging trends in the transformation of the dominant Fordist paradigm of social housing in terms of its societal function and institutional and spatial form. In this article I draw from a comparative case study on two cities in Germany to show that the resulting new paradigm is simultaneously shaped by the idiosyncrasies of the country's national housing regime and local housing policies. While German governments have successively limited the societal function of social housing as a legitimate instrument only for addressing exceptional housing crises, local policies on providing and organizing social housing within this framework display significant variation. However, planning and design principles dominating the spatial forms of social housing have been congruent. They may be interpreted as both an expression of the marginalization of social housing within the restructured welfare housing regime and a tool of its implementation according to the logics of entrepreneurial urban politics.
A new large‐field, high‐sensitivity, single‐mirror coincident schlieren optical instrument has been installed at the Bauhaus‐Universität Weimar for the purpose of indoor air research. Its performance is assessed by the non‐intrusive measurement of the thermal plume of a heated manikin. The schlieren system produces excellent qualitative images of the manikin's thermal plume and also quantitative data, especially schlieren velocimetry of the plume's velocity field that is derived from the digital cross‐correlation analysis of a large time sequence of schlieren images. The quantitative results are compared with thermistor and hot‐wire anemometer data obtained at discrete points in the plume. Good agreement is obtained, once the differences between path‐averaged schlieren data and planar anemometry data are reconciled.
The economic losses from earthquakes can hit national economies considerably; therefore, models capable of estimating the vulnerability and losses from future earthquakes are highly consequential for emergency planners aiming at risk mitigation. This demands mass prioritization filtering of structures to identify vulnerable buildings for retrofitting purposes. Applying advanced structural analysis to each building to study its earthquake response is impractical due to complex calculations, long computational time, and exorbitant cost. This highlights the need for a fast and reliable method, commonly known as Rapid Visual Screening (RVS). The method serves as a preliminary screening platform, using an optimum number of seismic parameters of the structure and predefined output damage states. In this study, the efficacy of a Machine Learning (ML) application in damage prediction was investigated through a Support Vector Machine (SVM) model as the damage classification technique. The developed model was trained and examined based on damage data from the 1999 Düzce Earthquake in Turkey, where the building data consist of 22 performance modifiers that were implemented with supervised machine learning.
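The classification step described here, an SVM trained on building performance modifiers to predict damage states, can be illustrated with a minimal sketch. The features, labels, and decision rule below are synthetic placeholders, not the Düzce database:

```python
import numpy as np
from sklearn.svm import SVC

# Hypothetical stand-in for a damage database: each row is a building
# described by numeric performance modifiers, each label a damage state
# (0 = none/light, 1 = moderate/severe). Feature meanings and the
# labeling rule are assumptions for illustration only.
rng = np.random.default_rng(1)
n = 300
X = rng.normal(size=(n, 5))                    # 5 toy performance modifiers
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # synthetic damage rule

split = 240
clf = SVC(kernel="rbf", C=1.0, gamma="scale")
clf.fit(X[:split], y[:split])                  # supervised training
accuracy = clf.score(X[split:], y[split:])     # held-out evaluation
print(round(accuracy, 2))
```

In the study's setting the 22 modifiers would take the place of the toy columns, and the observed damage states the place of the synthetic labels.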
Wind effects can be critical for the design of lifelines such as long-span bridges. The existence of a significant number of aerodynamic force models, used to assess the performance of bridges, poses an important question regarding their comparison and validation. This study utilizes a unified set of metrics for a quantitative comparison of time-histories in bridge aerodynamics with a host of characteristics. Accordingly, nine comparison metrics are included to quantify the discrepancies in local and global signal features such as phase, time-varying frequency and magnitude content, probability density, nonstationarity and nonlinearity. Among these, seven metrics available in the literature are introduced after recasting them for time-histories associated with bridge aerodynamics. Two additional metrics are established to overcome the shortcomings of the existing metrics. The performance of the comparison metrics is first assessed using generic signals with prescribed signal features. Subsequently, the metrics are applied to a practical example from bridge aerodynamics to quantify the discrepancies in the aerodynamic forces and response based on numerical and semi-analytical aerodynamic models. In this context, it is demonstrated how a discussion based on the set of comparison metrics presented here can aid a model evaluation by offering deeper insight. The outcome of the study is intended to provide a framework for quantitative comparison and validation of aerodynamic models based on the underlying physics of fluid-structure interaction. Immediate further applications are expected for the comparison of time-histories that are simulated by data-driven approaches.
The settlement and housing policy practiced in Germany in 2020 leaves a bitter aftertaste in view of its effects on the social and ecological situation. Rich and poor continue to drift apart, and historically and systemically conditioned path dependencies still stand in the way of a targeted ecological transformation of the way urban development and housing policy are shaped. These dependencies only become visible through an integrated consideration of social and economic aspects, and they point to one of the original questions of left-wing social research: the examination of the relationship between property and justice.
Three main findings emerge: first, the discourse on protecting the climate and biodiversity directly touches the parameters of density, mixed use, and land consumption; second, land consumption rises relative to the amount of individually available capital, particularly in owner-occupied housing compared with rental housing; and third, the share of home ownership grows with the advancing financialization of the housing market, so that the risk of social and ecological crises intensifies.
Recent earthquakes have proven that many existing buildings, particularly in developing countries, are not adequately protected against earthquake damage. A variety of statistical and machine-learning approaches have been proposed to identify vulnerable buildings for the prioritization of retrofitting. The present work aims to investigate earthquake susceptibility through the combination of six building performance variables that can be used to obtain an optimal prediction of the damage state of reinforced concrete buildings using an artificial neural network (ANN). In this regard, a multi-layer perceptron network is trained and optimized using a database of 484 damaged buildings from the Düzce earthquake in Turkey. The results demonstrate the feasibility and effectiveness of the selected ANN approach to classify concrete structural damage, which can be used as a preliminary assessment technique to identify vulnerable buildings in disaster risk-management programs.
The performance of ductless personalized ventilation (DPV) was compared to the performance of a typical desk fan since they are both stand-alone systems that allow the users to personalize their indoor environment. The two systems were evaluated using a validated computational fluid dynamics (CFD) model of an office room occupied by two users. To investigate the impact of DPV and the fan on the inhaled air quality, two types of contamination sources were modelled in the domain: an active source and a passive source. Additionally, the influence of the compared systems on thermal comfort was assessed using the coupling of CFD with the comfort model developed by the University of California, Berkeley (UCB model). Results indicated that DPV performed generally better than the desk fan. It provided better thermal comfort and showed a superior performance in removing the exhaled contaminants. However, the desk fan performed better in removing the contaminants emitted from a passive source near the floor level. This indicates that the performance of DPV and desk fans depends highly on the location of the contamination source. Moreover, the simulations showed that both systems increased the spread of exhaled contamination when used by the source occupant.
In this paper, an artificial neural network is implemented to predict the thermal conductivity ratio of TiO2-Al2O3/water nanofluid. TiO2-Al2O3/water, as an innovative type of nanofluid, was synthesized by the sol–gel method. The results indicated that the nanofluid at 1.5 vol.% enhanced the thermal conductivity by up to 25%. It was shown that the heat transfer coefficient increased linearly with nanoparticle concentration, but its variation with temperature was nonlinear. It should be noted that an increase in concentration may cause the particles to agglomerate, which reduces the thermal conductivity. An increase in temperature also increases the thermal conductivity, due to increased Brownian motion and collision of particles. To predict the thermal conductivity of TiO2-Al2O3/water nanofluid as a function of volumetric concentration and temperature, SOM (self-organizing map) and BP-LM (back-propagation Levenberg-Marquardt) algorithms were used. Based on the results obtained, these algorithms can be considered an exceptional tool for predicting thermal conductivity. Additionally, the correlation coefficient values were 0.938 and 0.98 for the SOM and BP-LM algorithms, respectively, which is highly acceptable.
Why Do Digital Native News Media Fail? An Investigation of Failure in the Early Start-Up Phase
(2020)
Digital native news media have great potential for improving journalism. Theoretically, they can be the sites where new products, novel revenue streams and alternative ways of organizing digital journalism are discovered, tested, and advanced. In practice, however, the situation appears to be more complicated. Besides the normal pressures facing new businesses, entrepreneurs in digital news are faced with specific challenges. Against the background of general and journalism specific entrepreneurship literature, and in light of a practice–theoretical approach, this qualitative case study research on 15 German digital native news media outlets empirically investigates what barriers curb their innovative capacity in the early start-up phase. In the new media organizations under study here, there are—among other problems—a high degree of homogeneity within founding teams, tensions between journalistic and economic practices, insufficient user orientation, as well as a tendency for organizations to be underfinanced. The patterns of failure investigated in this study can raise awareness, help news start-ups avoid common mistakes before actually entering the market, and help industry experts and investors to realistically estimate the potential of new ventures within the digital news industry.
Pressure fluctuations beneath hydraulic jumps can endanger the stability of stilling basins. This paper deals with the mathematical modeling of the results of laboratory-scale experiments to estimate extreme pressures. Experiments were carried out on a smooth stilling basin underneath free hydraulic jumps downstream of an Ogee spillway. From the probability distribution of the measured instantaneous pressures, pressures with different probabilities could be determined. It was verified that the maximum pressure fluctuations, and the negative pressures, are located near the spillway toe, while the minimum pressure fluctuations are located downstream of the hydraulic jumps. It was possible to assess the cumulative curves of pressure data related to the characteristic points along the basin and different Froude numbers. To benchmark the results, the dimensionless forms of statistical parameters, including mean pressures (P*m), the standard deviations of pressure fluctuations (σ*X), pressures with different non-exceedance probabilities (P*k%), and the statistical coefficient of the probability distribution (Nk%), were assessed. It was found that an existing method can be used to interpret the present data, and the pressure distribution in similar conditions, by using new second-order fractional relationships for σ*X and Nk%. The values of the Nk% coefficient indicated a single mean value for each probability.
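A pressure with a given non-exceedance probability, such as the P*k% values mentioned above, is simply a quantile of the measured pressure record. A minimal sketch on a synthetic record (the Gaussian shape and the kPa values are assumptions, not the experimental data) is:

```python
import numpy as np

# Synthetic stand-in for an instantaneous pressure record at one tap
# beneath the jump; distribution and magnitudes are illustrative only.
rng = np.random.default_rng(2)
pressures = rng.normal(loc=10.0, scale=3.0, size=5000)

# The pressure with non-exceedance probability k% is the k-th
# percentile of the record.
p1, p50, p99 = (np.percentile(pressures, k) for k in (1, 50, 99))
print(f"P_1% = {p1:.2f}, P_50% = {p50:.2f}, P_99% = {p99:.2f} (kPa)")
```

Repeating this per measurement tap and Froude number yields the cumulative curves the abstract refers to.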
Along with environmental pollution, urban planning has been connected to public health. Research indicates that the quality of built environments plays an important role in reducing mental disorders and improving overall health. The structure and shape of the city are considered among the factors influencing happiness and health in urban communities, as well as the type of daily activities of citizens. The aim of this study was to promote physical activity through the main structure of the city via urban design, such that the main form and morphology of the city encourage citizens to move around and be physically active within the city. Based on the literature review, functional, physical, cultural-social, and perceptual-visual features are regarded as the most important and effective criteria for increasing physical activity in urban spaces. The environmental quality of urban spaces and their role in the physical activities of citizens were assessed using a questionnaire, the analytical network process (ANP), and structural equation modeling. Further, the space syntax method was utilized to evaluate the role of the spatial integration of urban spaces in improving physical activity. Based on the results, consideration of functional diversity, spatial flexibility and integration, security, and the aesthetic and visual quality of urban spaces plays an important role in improving the physical health of citizens in urban spaces. Further, more physical activity, including motivation for walking and a sense of public health and happiness, was observed in streets with higher linkage and space syntax indexes with their surrounding texture.
Cooling Performance of a Novel Circulatory Flow Concentric Multi-Channel Heat Sink with Nanofluids
(2020)
Heat rejection from electronic devices such as processors necessitates a high heat removal rate. The present study focuses on a novel liquid-cooled heat sink geometry made from four channels (width 4 mm and depth 3.5 mm) configured in a concentric shape with alternate flow passages (slot of 3 mm gap). In this study, the cooling performance of the heat sink was tested under simulated controlled conditions. The lower bottom surface of the heat sink was heated at a constant heat flux condition based on dissipated power of 50 W and 70 W. The computations were carried out for different volume fractions of nanoparticles, namely 0.5% to 5%, with water as the base fluid, at flow rates of 30 to 180 mL/min. The results showed a higher rate of heat rejection from the nanofluid-cooled heat sink compared with water. The enhancement in performance was analyzed with the help of the difference between the nanofluid outlet temperature and the water outlet temperature under similar operating conditions. The enhancement was ~2% for a 0.5% volume fraction of nanofluids and ~17% for a 5% volume fraction.
Rapid Visual Screening (RVS) is a procedure that estimates structural scores for buildings and prioritizes their retrofit and upgrade requirements. Despite the speed and simplicity of RVS, many of the collected parameters are non-commensurable and include subjectivity due to visual observations. This might cause uncertainties in the evaluation, which emphasizes the use of a fuzzy-based method. This study aims to propose a novel RVS methodology based on the interval type-2 fuzzy logic system (IT2FLS) to set the priority of vulnerable buildings to undergo detailed assessment while covering uncertainties and minimizing their effects during evaluation. The proposed method estimates the vulnerability of a building, in terms of a Damage Index, considering the number of stories, age of the building, plan irregularity, vertical irregularity, building quality, and peak ground velocity as inputs with a single output variable. The applicability of the proposed method has been investigated using a post-earthquake damage database of reinforced concrete buildings from the Bingöl and Düzce earthquakes in Turkey.
Earthquakes are among the most devastating natural disasters, causing severe economic, environmental, and social destruction. Earthquake safety assessment and building hazard monitoring can contribute greatly to urban sustainability through the identification of, and insight into, optimum materials and structures. While the vulnerability of structures mainly depends on structural resistance, the safety assessment of buildings can be highly challenging. In this paper, we consider the Rapid Visual Screening (RVS) method, a qualitative procedure for estimating structural scores for buildings that is suitable for medium- to high-seismicity cases. This paper presents an overview of the common RVS methods, i.e., FEMA P-154, IITK-GGSDMA, and EMPI. To examine their accuracy and validity, a practical comparison is performed between their assessments and the observed damage of reinforced concrete buildings from a street survey in the Bingöl region, Turkey, after the 1 May 2003 earthquake. The results demonstrate that the application of RVS methods for preliminary damage estimation is a vital tool. Furthermore, the comparative analysis showed that FEMA P-154 produces an assessment that overestimates damage states and is not economically viable, while EMPI and IITK-GGSDMA provide more accurate and practical estimations, respectively.
In this study, the machine learning methods of artificial neural networks (ANNs), least squares support vector machines (LSSVM), and neuro-fuzzy models are used to develop prediction models for the thermal performance of a photovoltaic-thermal solar collector (PV/T). In the proposed models, the inlet temperature, flow rate, heat, solar radiation, and sun heat have been considered as the input variables. The data set was extracted through experimental measurements from a novel solar collector system. Different analyses are performed to examine the credibility of the introduced models and evaluate their performance. The proposed LSSVM model outperformed the ANFIS and ANN models. The LSSVM model is reported to be suitable when laboratory measurements are costly and time-consuming, or when obtaining such values requires sophisticated interpretation.
A novel combination of the ant colony optimization algorithm (ACO) and computational fluid dynamics (CFD) data is proposed for modeling multiphase chemical reactors. The proposed intelligent model presents a probabilistic computational strategy for predicting various levels of three-dimensional bubble column reactor (BCR) flow. The results show enhanced agreement between the ant colony predictions and the CFD data in different sections of the BCR.
The K-nearest neighbors (KNN) machine learning algorithm is a well-known non-parametric classification method. However, like other traditional data mining methods, applying it to big data comes with computational challenges. Indeed, KNN determines the class of a new sample based on the class of its nearest neighbors; however, identifying the neighbors in a large amount of data imposes a large computational cost, so that it is no longer feasible on a single computing machine. One of the proposed techniques to make classification methods applicable to large datasets is pruning. LC-KNN is an improved KNN method which first clusters the data into smaller partitions using the K-means clustering method, and then applies KNN for each new sample on the partition whose center is nearest. However, because the clusters have different shapes and densities, selecting the appropriate cluster is a challenge. In this paper, an approach is proposed to improve the pruning phase of the LC-KNN method by taking these factors into account. The proposed approach helps to choose a more appropriate cluster of data in which to look for the neighbors, thus increasing the classification accuracy. The performance of the proposed approach is evaluated on different real datasets. The experimental results show the effectiveness of the proposed approach and its higher classification accuracy and lower time cost in comparison to other recent relevant methods.
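The LC-KNN baseline described in this abstract, K-means partitioning followed by KNN inside the partition whose center is nearest to the query, can be sketched as follows. The blob data, cluster count, and neighbor count are illustrative assumptions, not the paper's setup:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neighbors import KNeighborsClassifier

# Three well-separated synthetic blobs, 200 labeled points each.
rng = np.random.default_rng(3)
X = np.vstack([rng.normal(c, 0.5, size=(200, 2)) for c in (0.0, 4.0, 8.0)])
y = np.repeat([0, 1, 2], 200)

# Pruning phase: partition the training set with K-means.
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)

# Pre-fit one KNN per partition so each query touches only a fraction of X.
knns = {}
for c in range(3):
    mask = km.labels_ == c
    knns[c] = KNeighborsClassifier(n_neighbors=5).fit(X[mask], y[mask])

def lc_knn_predict(x):
    c = km.predict(x.reshape(1, -1))[0]    # nearest cluster center
    return knns[c].predict(x.reshape(1, -1))[0]

pred = lc_knn_predict(np.array([4.1, 3.9]))
print(pred)
```

The paper's contribution targets the `km.predict` step: choosing the partition by center distance alone ignores cluster shape and density, which is exactly the weakness the improved pruning addresses.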
Hydrological drought forecasting plays a substantial role in water resources management, as hydrological drought strongly affects water allocation and hydropower generation. In this research, short-term hydrological drought was forecasted based on the hybridization of novel nature-inspired optimization algorithms with Artificial Neural Networks (ANN). For this purpose, the Standardized Hydrological Drought Index (SHDI) and the Standardized Precipitation Index (SPI) were calculated for one, three, and six aggregated months. Then, three states were proposed for SHDI forecasting, and 36 input-output combinations were extracted based on cross-correlation analysis. In the next step, newly proposed optimization algorithms, including the Grasshopper Optimization Algorithm (GOA), Salp Swarm Algorithm (SSA), Biogeography-Based Optimization (BBO), and Particle Swarm Optimization (PSO), were hybridized with the ANN and utilized for SHDI forecasting, and the results were compared to the conventional ANN. Results indicated that the hybridized models outperformed the conventional ANN, with PSO performing better than the other optimization algorithms. The best models forecasted SHDI1 with R2 = 0.68 and RMSE = 0.58, SHDI3 with R2 = 0.81 and RMSE = 0.45, and SHDI6 with R2 = 0.82 and RMSE = 0.40.
Classical Internet of Things routing and wireless sensor networks can provide more precise monitoring of the covered area due to the higher number of utilized nodes. Because of limitations in the shared transfer medium, many nodes in the network are prone to collisions during simultaneous transmissions. Medium access control protocols are usually more practical in networks with low traffic that are not subjected to external noise from adjacent frequencies. There are preventive, detection, and control solutions to congestion management in the network, all of which are the focus of this study. In the congestion prevention phase, the proposed method chooses the next step of the path using a fuzzy decision-making system to distribute network traffic via optimal paths. In the congestion detection phase, a dynamic approach to queue management was designed to detect congestion in the least amount of time and prevent collisions. In the congestion control phase, the back-pressure method was used based on the quality of the queue to decrease the probability of routing through the pre-congested node. The main goals of this study are to balance energy consumption across network nodes, reduce the rate of lost packets, and increase the quality of service in routing. Simulation results proved that the proposed Congestion Control Fuzzy Decision Making (CCFDM) method was more capable of improving routing parameters than recent algorithms.
Energy‐Efficient Method for Wireless Sensor Networks Low‐Power Radio Operation in Internet of Things
(2020)
Radio operation in wireless sensor networks (WSN) in Internet of Things (IoT) applications is the most common source of power consumption. Consequently, recognizing and controlling the factors affecting radio operation can be valuable for managing node power consumption. Among the essential factors affecting radio operation, the time spent checking the radio is of utmost importance for monitoring power consumption, as it can lead to false WakeUps or idle listening in radio duty cycles. ContikiMAC is a low-power radio duty-cycle protocol in Contiki OS that uses WakeUp mode as a clear channel assessment (CCA) for checking radio status periodically. This paper presents a detailed analysis of the radio WakeUp time factors of ContikiMAC. Furthermore, we propose a lightweight CCA (LW-CCA) as an extension to ContikiMAC that reduces radio duty cycles during false WakeUps and idle listening through a dynamic received signal strength indicator (RSSI) status check time. Simulation results in the Cooja simulator show that LW-CCA reduces node energy consumption by about 8% while maintaining up to 99% of the packet delivery rate (PDR).
Coronary Artery Disease Diagnosis: Ranking the Significant Features Using a Random Trees Model
(2020)
Heart disease is one of the most common diseases in middle-aged citizens. Among the vast number of heart diseases, coronary artery disease (CAD) is considered as a common cardiovascular disease with a high death rate. The most popular tool for diagnosing CAD is the use of medical imaging, e.g., angiography. However, angiography is known for being costly and also associated with a number of side effects. Hence, the purpose of this study is to increase the accuracy of coronary heart disease diagnosis through selecting significant predictive features in order of their ranking. In this study, we propose an integrated method using machine learning. The machine learning methods of random trees (RTs), decision tree of C5.0, support vector machine (SVM), and decision tree of Chi-squared automatic interaction detection (CHAID) are used in this study. The proposed method shows promising results and the study confirms that the RTs model outperforms other models.
The longitudinal dispersion coefficient (LDC) plays an important role in modeling the transport of pollutants and sediment in natural rivers. As a result of transportation processes, the concentration of pollutants changes along the river. Various studies have been conducted to provide simple equations for estimating LDC. In this study, machine learning methods, namely support vector regression, Gaussian process regression, M5 model tree (M5P) and random forest, and multiple linear regression were examined in predicting the LDC in natural streams. Data sets from 60 rivers around the world with different hydraulic and geometric features were gathered to develop models for LDC estimation. Statistical criteria, including correlation coefficient (CC), root mean squared error (RMSE) and mean absolute error (MAE), were used to scrutinize the models. The LDC values estimated by these models were compared with the corresponding results of common empirical models. The Taylor chart was used to evaluate the models and the results showed that among the machine learning models, M5P had superior performance, with CC of 0.823, RMSE of 454.9 and MAE of 380.9. The model of Sahay and Dutta, with CC of 0.795, RMSE of 460.7 and MAE of 306.1, gave more precise results than the other empirical models. The main advantage of M5P models is their ability to provide practical formulae. In conclusion, the results proved that the developed M5P model with simple formulations was superior to other machine learning models and empirical models; therefore, it can be used as a proper tool for estimating the LDC in rivers.
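The statistical criteria used above to scrutinize the models (CC, RMSE, and MAE) are straightforward to compute; a minimal sketch with illustrative observed and predicted LDC values (not the study's data) is:

```python
import numpy as np

def cc(obs, pred):
    """Pearson correlation coefficient between observations and predictions."""
    return np.corrcoef(obs, pred)[0, 1]

def rmse(obs, pred):
    """Root mean squared error."""
    return np.sqrt(np.mean((obs - pred) ** 2))

def mae(obs, pred):
    """Mean absolute error."""
    return np.mean(np.abs(obs - pred))

# Toy observed vs. predicted LDC values (m^2/s); numbers are illustrative.
obs = np.array([120.0, 350.0, 80.0, 540.0, 210.0])
pred = np.array([140.0, 310.0, 95.0, 500.0, 230.0])

print(round(cc(obs, pred), 3), round(rmse(obs, pred), 1), round(mae(obs, pred), 1))
```

Computing the same three criteria for each machine learning and empirical model on a common test set is what makes the Taylor-chart comparison in the study possible.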
Temporary changes in precipitation may lead to sustained and severe drought or massive floods in different parts of the world. Knowing the variation in precipitation can effectively help water resources decision-makers in water resources management. Large-scale circulation drivers have a considerable impact on precipitation in different parts of the world. In this research, the impact of the El Niño-Southern Oscillation (ENSO), Pacific Decadal Oscillation (PDO), and North Atlantic Oscillation (NAO) on seasonal precipitation over Iran was investigated. For this purpose, 103 synoptic stations with at least 30 years of data were utilized. The Spearman correlation coefficient between the indices in the previous 12 months and seasonal precipitation was calculated, and the meaningful correlations were extracted. Then, the month in which each of these indices has the highest correlation with seasonal precipitation was determined. Finally, the overall amount of increase or decrease in seasonal precipitation due to each of these indices was calculated. Results indicate that the Southern Oscillation Index (SOI), NAO, and PDO have the greatest impact on seasonal precipitation, in that order. Additionally, these indices have the highest impact on precipitation in winter, autumn, spring, and summer, respectively. SOI affects winter precipitation differently from the PDO and NAO, while in the other seasons each index has its own particular impact on seasonal precipitation. Generally, all indices in different phases may decrease the seasonal precipitation by up to 100%. However, the seasonal precipitation may increase by more than 100% in different seasons due to the impact of these indices. The results of this study can be used effectively in water resources management and especially in dam operation.
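The core computation in this abstract, a Spearman correlation between a lagged climate index and seasonal precipitation, can be sketched without any special libraries. The series below are synthetic, and the tie-free ranking shortcut assumes continuous data:

```python
import numpy as np

def spearman(x, y):
    """Spearman rank correlation: Pearson correlation of the ranks
    (no tie handling, which is fine for continuous measurements)."""
    rx = np.argsort(np.argsort(x))
    ry = np.argsort(np.argsort(y))
    return np.corrcoef(rx, ry)[0, 1]

# Toy lagged index series (a hypothetical stand-in for, e.g., SOI)
# versus 30 years of seasonal precipitation totals at one station.
rng = np.random.default_rng(4)
index = rng.normal(size=30)
precip = 50.0 - 20.0 * index + rng.normal(scale=5.0, size=30)

rho = spearman(index, precip)
print(round(rho, 2))
```

Repeating this for every station, season, and lag month, and keeping only the significant values, yields the correlation maps the study builds on.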
Due to the importance of identifying crop cultivars, the development of accurate methods for cultivar assessment is considered essential. The existing methods for identifying rice cultivars are mainly time-consuming, costly, and destructive. Therefore, the development of novel methods is highly beneficial. The aim of the present research is to classify common rice cultivars in Iran based on color, morphologic, and texture properties using artificial intelligence (AI) methods. In doing so, digital images of 13 rice cultivars in Iran in three forms (paddy, brown, and white) are analyzed through pre-processing and segmentation using MATLAB. Ninety-two features, comprising 60 color, 14 morphologic, and 18 texture properties, were extracted for each rice cultivar. In the next step, the normal distribution of the data was evaluated, and significant differences between the features of the cultivars were tested using analysis of variance. In addition, the least significant difference (LSD) test was performed to obtain a more accurate comparison between cultivars. To reduce the data dimensions and focus on the most effective components, principal component analysis (PCA) was employed. Accordingly, the accuracy of rice cultivar separation was calculated for paddy, brown rice, and white rice using discriminant analysis (DA), yielding 89.2%, 87.7%, and 83.1%, respectively. To identify and classify the desired cultivars, a multilayer perceptron neural network was implemented based on the most effective components. The results showed 100% accuracy of the network in identifying and classifying all mentioned rice cultivars. Hence, it is concluded that the integration of image processing and pattern recognition methods, such as statistical classification and artificial neural networks, can be used for the identification and classification of rice cultivars.
For 50 years, explanatory approaches to gentrification have been the subject of debate. For far longer, investment-seeking capital has wandered from one place to another, leaving behind investment ruins on the one hand and, on the other, people who lose their homes through displacement. Only much more recently has the term gentrification been taken up here and there by social movements concerned with the latter phenomenon.
This contribution is concerned neither with the academic debate over explanatory approaches to gentrification nor with the academic relevance of the term, but rather with its role and function in social movements.
Calculating the solubility of the hydrocarbon components of natural gases is known as one of the important issues for operational work in petroleum and chemical engineering. In this work, a novel solubility estimation tool is proposed for hydrocarbon gases (including methane, ethane, propane, and butane) in aqueous electrolyte solutions, based on the extreme learning machine (ELM) algorithm. Comparing the ELM outputs with a comprehensive real databank containing 1175 solubility points yielded R-squared values of 0.985 and 0.987 for the training and testing phases, respectively. Furthermore, a visual comparison of estimated and actual hydrocarbon solubility confirmed the ability of the proposed solubility model. Additionally, a sensitivity analysis was performed on the input variables of the model to identify their impacts on hydrocarbon solubility. Such a comprehensive and reliable study can help engineers and scientists to successfully determine important thermodynamic properties, which are key factors in optimizing and designing different industrial units such as refineries and petrochemical plants.
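The ELM at the core of the study is algorithmically simple: a single hidden layer whose weights are random and fixed, with only the output weights solved in closed form by least squares. A minimal sketch follows; the solubility databank is not public, so a smooth synthetic function stands in for solubility as a function of temperature, pressure, and ionic strength:

```python
# Minimal extreme learning machine (ELM) regressor: random hidden layer,
# output weights via the Moore-Penrose pseudoinverse.
import numpy as np

class ELMRegressor:
    def __init__(self, n_hidden=50, seed=0):
        self.n_hidden = n_hidden
        self.rng = np.random.default_rng(seed)

    def _hidden(self, X):
        return np.tanh(X @ self.W + self.b)  # random nonlinear feature map

    def fit(self, X, y):
        self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))
        self.b = self.rng.normal(size=self.n_hidden)
        H = self._hidden(X)
        self.beta = np.linalg.pinv(H) @ y    # closed-form least squares
        return self

    def predict(self, X):
        return self._hidden(X) @ self.beta

# toy stand-in for solubility as a function of three inputs
rng = np.random.default_rng(1)
X = rng.uniform(0, 1, size=(300, 3))
y = np.sin(3 * X[:, 0]) + X[:, 1] * X[:, 2]

elm = ELMRegressor(n_hidden=80).fit(X[:240], y[:240])
pred = elm.predict(X[240:])
r2 = 1 - np.sum((y[240:] - pred) ** 2) / np.sum((y[240:] - y[240:].mean()) ** 2)
```

Because training reduces to one linear solve, ELMs are typically orders of magnitude faster to train than backpropagation networks, which is the practical appeal cited by studies of this kind.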
Rapid advancements of modern technologies put high demands on the mathematical modelling of engineering systems. Typically, systems are no longer “simple” objects, but rather coupled systems involving multiphysics phenomena, the modelling of which involves coupling of models that describe the different phenomena. After constructing a mathematical model, it is essential to analyse the correctness of the coupled models and to detect modelling errors compromising the final modelling result. Broadly, there are two classes of modelling errors: (a) errors related to abstract modelling, e.g., conceptual errors concerning the coherence of a model as a whole, and (b) errors related to concrete modelling or instance modelling, e.g., questions of approximation quality and implementation. Instance modelling errors, on the one hand, are relatively well understood. Abstract modelling errors, on the other, are not appropriately addressed by modern modelling methodologies. The aim of this paper is to initiate a discussion on abstract approaches and their usability for the mathematical modelling of engineering systems, with the goal of making it possible to catch conceptual modelling errors early and automatically by computer-assisted tools. To that end, we argue that it is necessary to identify and employ suitable mathematical abstractions to capture an accurate conceptual description of the process of modelling engineering systems.
The effect of urban form on energy consumption has been the subject of various studies around the world. Having examined the effect of buildings on energy consumption, these studies indicate that the physical form of a city has a notable impact on the amount of energy consumed in its spaces. The present study identified the variables that affect energy consumption in residential buildings and analyzed their effects on energy consumption in four neighborhoods in Tehran: Apadana, Bimeh, Ekbatan-phase I, and Ekbatan-phase II. After extracting the variables, their effects were estimated with statistical methods, and the results were compared with land surface temperature (LST) remote sensing data derived from Landsat 8 satellite images taken in the winter of 2019. The results showed that physical variables, such as the size of buildings, population density, vegetation cover, texture concentration, and surface color, have the greatest impacts on energy usage. For the Apadana neighborhood, the factors with the strongest effect on energy consumption were found to be the size of buildings and the population density. For the other neighborhoods, in addition to these two factors, a third factor was also recognized to have a significant effect on energy consumption: the type of buildings for Bimeh, texture concentration for Ekbatan-I, and the orientation of buildings for Ekbatan-II.
Evaporation is a very important process and one of the most critical factors in agricultural, hydrological, and meteorological studies. Due to the interactions of multiple climatic factors, evaporation is considered a complex and nonlinear phenomenon to model. Thus, machine learning methods have gained popularity in this realm. In the present study, four machine learning methods, Gaussian Process Regression (GPR), K-Nearest Neighbors (KNN), Random Forest (RF), and Support Vector Regression (SVR), were used to predict pan evaporation (PE). Meteorological data, including PE, temperature (T), relative humidity (RH), wind speed (W), and sunshine hours (S), were collected from 2011 through 2017. The accuracy of the studied methods was determined using the statistical indices of Root Mean Squared Error (RMSE), correlation coefficient (R), and Mean Absolute Error (MAE). Furthermore, Taylor charts were utilized for evaluating the accuracy of the models. The results showed that at the Gonbad-e Kavus, Gorgan, and Bandar Torkman stations, GPR (RMSE of 1.521, 1.244, and 1.254 mm/day), KNN (1.991, 1.775, and 1.577 mm/day), RF (1.614, 1.337, and 1.316 mm/day), and SVR (1.55, 1.262, and 1.275 mm/day) all performed appropriately in estimating PE values. It was found that GPR for the Gonbad-e Kavus station, with input parameters T, W, and S, and GPR for the Gorgan and Bandar Torkman stations, with input parameters T, RH, W, and S, gave the most accurate predictions and were proposed for precise estimation of PE. The findings of the current study indicate that PE values may be accurately estimated with a few easily measured meteorological parameters.
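The best-performing configuration reported above, Gaussian process regression on the meteorological inputs, can be sketched as follows. The station records are not reproduced here, so a toy pan-evaporation relation generates synthetic data; the kernel choice (RBF plus a noise term) is an assumption for illustration:

```python
# Sketch: GPR on meteorological inputs (T, RH, W, S) for pan evaporation,
# evaluated with RMSE as in the study.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(7)
n = 400
T = rng.uniform(10, 40, n)    # air temperature (deg C)
RH = rng.uniform(20, 90, n)   # relative humidity (%)
W = rng.uniform(0, 10, n)     # wind speed (m/s)
S = rng.uniform(4, 13, n)     # sunshine hours
X = np.column_stack([T, RH, W, S])
# toy pan-evaporation relation plus noise (mm/day)
pe = 0.2 * T - 0.03 * RH + 0.3 * W + 0.2 * S + rng.normal(0, 0.5, n)

Xs = StandardScaler().fit_transform(X)
kernel = RBF(length_scale=1.0) + WhiteKernel(noise_level=0.25)
gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True, random_state=0)
gpr.fit(Xs[:300], pe[:300])
pred = gpr.predict(Xs[300:])
rmse = float(np.sqrt(np.mean((pe[300:] - pred) ** 2)))
```

The `WhiteKernel` term lets the GP estimate the observation noise level itself, which is convenient for noisy field measurements of this kind.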
Estimating the solubility of carbon dioxide in ionic liquids using reliable models is of paramount importance from both environmental and economic points of view. In this regard, the current research aims at evaluating the performance of two data-driven techniques, namely multilayer perceptron (MLP) and gene expression programming (GEP), for predicting the solubility of carbon dioxide (CO2) in ionic liquids (ILs) as a function of pressure, temperature, and four thermodynamic parameters of the ionic liquid. To develop these techniques, 744 experimental data points covering 13 ILs were derived from the literature (80% of the points for training and 20% for validation). Two backpropagation-based methods, namely Levenberg–Marquardt (LM) and Bayesian Regularization (BR), were applied to optimize the MLP algorithm. Various statistical and graphical assessments were applied to check the credibility of the developed techniques. The results were then compared with those calculated using the Peng–Robinson (PR) and Soave–Redlich–Kwong (SRK) equations of state (EoS). The highest coefficient of determination (R2 = 0.9965) and the lowest root mean square error (RMSE = 0.0116) were recorded for the MLP-LM model on the full dataset (with a negligible difference to the MLP-BR model). The comparison of results from this model with the widely applied equation-of-state models revealed slightly better performance, although the EoS approaches also performed well, with R2 from 0.984 up to 0.996. Lastly, the newly established correlation based on the GEP model exhibited very satisfactory results, with overall values of R2 = 0.9896 and RMSE = 0.0201.
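A scikit-learn sketch of the MLP workflow above is given below on synthetic stand-in data. Note that scikit-learn trains MLPs with L-BFGS or Adam; the Levenberg–Marquardt and Bayesian-regularization trainers used in the study are features of other toolboxes, so L-BFGS stands in here:

```python
# Sketch: MLP regression of CO2 solubility from six inputs, scored with
# the R2/RMSE criteria used in the study. Data are synthetic.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(3)
n = 744  # same sample count as the study
# inputs: pressure, temperature, four thermodynamic IL descriptors (synthetic)
X = rng.uniform(0, 1, size=(n, 6))
y = 0.5 * X[:, 0] - 0.3 * X[:, 1] + 0.2 * X[:, 2] * X[:, 3] + rng.normal(0, 0.01, n)

# 80/20 split, as in the study
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
mlp = make_pipeline(StandardScaler(),
                    MLPRegressor(hidden_layer_sizes=(20, 20), solver="lbfgs",
                                 max_iter=5000, random_state=0))
mlp.fit(X_tr, y_tr)
pred = mlp.predict(X_te)
r2 = r2_score(y_te, pred)
rmse = float(np.sqrt(np.mean((y_te - pred) ** 2)))
```

Standardizing the inputs before the network, as the pipeline does, matters particularly here because pressure and temperature live on very different scales.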
FCS-MBFLEACH: Designing an Energy-Aware Fault Detection System for Mobile Wireless Sensor Networks
(2019)
Wireless sensor networks (WSNs) consist of large numbers of sensor nodes that are densely and randomly distributed over a geographical region for monitoring, identifying, and analyzing physical events. The crucial challenge in wireless sensor networks is the sensor nodes' very high dependence on limited, non-rechargeable battery power for exchanging information wirelessly, which makes the management and monitoring of these nodes with regard to abnormal changes very difficult. These anomalies arise from faults, including hardware and software faults and attacks by intruders, all of which affect the completeness of the data collected by wireless sensor networks. Hence, crucial measures should be taken to detect faults in the network early, despite the limitations of the sensor nodes. Machine learning methods offer solutions that can be used to detect sensor node faults in the network. The purpose of this study is to use several classification methods to compute the fault detection accuracy at different densities under two scenarios in regions of interest, using MB-FLEACH, one-class support vector machine (SVM), fuzzy one-class SVM, and the combined FCS-MBFLEACH method. It should be noted that, to date, no study has performed super cluster head (SCH) selection to detect node faults in the network. The simulation outcomes demonstrate that the FCS-MBFLEACH method has the best performance in terms of fault detection accuracy, false-positive rate (FPR), average remaining energy, and network lifetime compared to the other classification methods.
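One of the classifiers compared above, the one-class SVM, can be illustrated on its own: it is trained only on normal sensor readings and then flags readings that fall outside the learned region as faults. The readings below are synthetic (temperature/humidity pairs standing in for WSN telemetry):

```python
# Sketch: one-class SVM fault detection for sensor readings, reporting the
# fault detection rate and false-positive rate used as criteria in the study.
import numpy as np
from sklearn.svm import OneClassSVM
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
normal = rng.normal(loc=[25.0, 50.0], scale=[1.0, 3.0], size=(500, 2))  # healthy nodes
faulty = rng.normal(loc=[60.0, 5.0], scale=[5.0, 2.0], size=(20, 2))    # stuck/drifted nodes

scaler = StandardScaler().fit(normal)
# nu bounds the fraction of training points treated as outliers
ocsvm = OneClassSVM(kernel="rbf", nu=0.05, gamma="scale").fit(scaler.transform(normal))

pred_normal = ocsvm.predict(scaler.transform(normal))  # +1 = inlier, -1 = fault
pred_faulty = ocsvm.predict(scaler.transform(faulty))
fault_detection_rate = float(np.mean(pred_faulty == -1))
false_positive_rate = float(np.mean(pred_normal == -1))
```

The trade-off the study measures between detection accuracy and FPR is controlled here mainly by the `nu` parameter: raising it catches more faults at the cost of more false alarms.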
Wireless sensor networks have attracted great attention for applications in structural health monitoring due to their ease of use, flexibility of deployment, and cost-effectiveness. This paper presents a software framework for WiFi-based wireless sensor networks composed of low-cost mass market single-board computers. A number of specific system-level software components were developed to enable robust data acquisition, data processing, sensor network communication, and timing with a focus on structural health monitoring (SHM) applications. The framework was validated on Raspberry Pi computers, and its performance was studied in detail. The paper presents several characteristics of the measurement quality such as sampling accuracy and time synchronization and discusses the specific limitations of the system. The implementation includes a complementary smartphone application that is utilized for data acquisition, visualization, and analysis. A prototypical implementation further demonstrates the feasibility of integrating smartphones as data acquisition nodes into the network, utilizing their internal sensors. The measurement system was employed in several monitoring campaigns, three of which are documented in detail. The suitability of the system is evaluated based on comparisons of target quantities with reference measurements. The results indicate that the presented system can robustly achieve a measurement performance commensurate with that required in many typical SHM tasks such as modal identification. As such, it represents a cost-effective alternative to more traditional monitoring solutions.
Urban planners are often challenged with the task of developing design solutions which must meet multiple, and often contradictory, criteria. In this paper, we investigated the trade-offs between the social, psychological, and energy potential of the fundamental elements of urban form: the street network and the building massing. Since formal methods to evaluate urban form from the psychological and social point of view are not readily available, we developed a methodological framework to quantify these criteria as the first contribution of this paper. To evaluate the psychological potential, we conducted a three-tiered empirical study starting from real-world environments and then abstracting them to virtual environments. In each context, the implicit (physiological) and explicit (subjective) responses of pedestrians were measured. To quantify the social potential, we developed a street network centrality-based measure of social accessibility.
For the energy potential, we created an energy model to analyze the impact of pure geometric form on the energy demand of the building stock. The second contribution of this work is a method to identify distinct clusters of urban form and, for each, explore the trade-offs between the select design criteria. We applied this method to two case studies identifying nine types of urban form and their respective potential trade-offs, which are directly applicable for the assessment of strategic decisions regarding urban form during the early planning stages.
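A street-network centrality measure of the kind used above for social accessibility can be sketched with networkx. The graph here is a toy 5x5 grid of intersections, and closeness centrality stands in for the paper's specific (unspecified) centrality formulation:

```python
# Sketch: closeness centrality on a toy street network as a proxy for
# "social accessibility" of each intersection.
import networkx as nx

G = nx.grid_2d_graph(5, 5)              # toy orthogonal street network
closeness = nx.closeness_centrality(G)  # inverse mean shortest-path distance

# the most "socially accessible" intersection is the most central one
most_accessible = max(closeness, key=closeness.get)
```

On a real case study, the graph would be built from the street segments of the study area (e.g., with edge weights equal to segment lengths), and the per-node scores would feed the trade-off analysis alongside the psychological and energy criteria.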
A categorical perspective towards aerodynamic models for aeroelastic analyses of bridge decks
(2019)
Reliable modelling in structural engineering is crucial for the serviceability and safety of structures. The huge variety of aerodynamic models for aeroelastic analyses of bridges poses natural questions about their complexity and, thus, their quality. Moreover, a direct comparison of aerodynamic models is typically either not possible or not meaningful, as the models can be based on very different physical assumptions. Therefore, to address the question of principal comparability and complexity of models, a more abstract approach, accounting for the effect of basic physical assumptions, is necessary.
This paper presents an application of a recently introduced category theory-based modelling approach to a diverse set of models from bridge aerodynamics. Initially, the categorical approach is extended to allow an adequate description of aerodynamic models. Complexity of the selected aerodynamic models is evaluated, based on which model comparability is established. Finally, the utility of the approach for model comparison and characterisation is demonstrated on an illustrative example from bridge aeroelasticity. The outcome of this study is intended to serve as an alternative framework for model comparison and impact future model assessment studies of mathematical models for engineering applications.
Performance assessment of a ductless personalized ventilation system using a validated CFD model
(2018)
The aim of this study is twofold: to validate a computational fluid dynamics (CFD) model, and then to use the validated model to evaluate the performance of a ductless personalized ventilation (DPV) system. To validate the numerical model, a series of measurements was conducted in a climate chamber equipped with a thermal manikin. Various turbulence models, settings, and options were tested; the simulation results were compared to the measured data to determine the turbulence model and solver settings that achieve the best agreement between the measured and simulated values. Subsequently, the validated CFD model was used to evaluate the thermal environment and indoor air quality in a room equipped with a DPV system combined with displacement ventilation. Results from the numerical model were then used to quantify thermal sensation and comfort using the UC Berkeley thermal comfort model.
This study aims to develop an approach to couple a computational fluid dynamics (CFD) solver to the University of California, Berkeley (UCB) thermal comfort model to accurately evaluate thermal comfort. The coupling was made using an iterative JavaScript routine that automatically transfers data for each individual segment of the human body back and forth between the CFD solver and the UCB model until reaching convergence, defined by a stopping criterion. The location from which data are transferred to the UCB model was determined using a new approach based on the temperature difference between subsequent points on the temperature profile curve in the vicinity of the body surface. This approach was used because the microclimate surrounding the human body differs in thickness depending on the body segment and the surrounding environment. To accurately simulate the thermal environment, the numerical model was validated beforehand using experimental data collected in a climate chamber equipped with a thermal manikin. Furthermore, an example of the practical implementation of this coupling is reported in this paper through radiant floor cooling simulation cases, in which overall and local thermal sensation and comfort were investigated using the coupled UCB model.
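The structure of the iterative coupling described above, transfer segment data, solve each side, check a stopping criterion, can be sketched generically. The two solver functions below are simple stand-ins (the real coupling drives an external CFD solver and the UCB model); only the loop structure is the point:

```python
# Sketch of a segment-wise iterative coupling loop with a convergence
# stopping criterion. The "CFD" and "comfort model" steps are stand-ins.
import numpy as np

N_SEGMENTS = 16  # number of body segments exchanged per iteration (assumed)

def cfd_step(skin_temp):
    # stand-in: local air temperature near each segment reacts to skin temp
    return 24.0 + 0.3 * (skin_temp - 34.0)

def comfort_model_step(air_temp):
    # stand-in: skin temperature relaxes toward the local air temperature
    return 34.0 + 0.5 * (air_temp - 26.0)

skin = np.full(N_SEGMENTS, 34.0)  # initial skin temperatures (deg C)
for iteration in range(100):
    air = cfd_step(skin)                  # transfer skin temps -> CFD
    new_skin = comfort_model_step(air)    # transfer air temps -> UCB model
    if np.max(np.abs(new_skin - skin)) < 1e-4:  # stopping criterion
        break
    skin = new_skin
converged = np.max(np.abs(new_skin - skin)) < 1e-4
```

Because each stand-in response is a contraction, the fixed-point loop converges in a handful of iterations; in a real CFD coupling, under-relaxation of the transferred values is the usual safeguard when the exchange would otherwise oscillate.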
The Local Governance of Arrival in Leipzig: Housing of Asylum-Seeking Persons as a Contested Field
(2018)
The article examines how the German city of Leipzig governs the housing of asylum seekers. Leipzig was a frontrunner in organizing the decentralized accommodation of asylum seekers when it adopted its accommodation concept in 2012. This concept aimed at integrating asylum-seeking persons into the regular housing market at an early stage of arrival. Since then, however, the city of Leipzig has faced more and more challenges in implementing the concept. This is partly due to the increasingly tight situation on the housing market as the number of people seeking protection has increased, and partly due to discriminatory and xenophobic attitudes on the side of house owners and managers. Therefore, we argue that the so-called refugee crisis of 2015–2016 has to be seen in close interaction with a growing general housing shortage in Leipzig, as in many other large European cities. Furthermore, we understand the municipal governing of housing as a contested field with regard to its entanglement of diverse federal levels and policy scales, the diversity of stakeholders involved, and its dynamic change over recent years. We analyze this contested field against the current context of arrival and dynamic urban growth on a local level. Based on empirical qualitative research that we conducted in 2016, Leipzig's local specifics are investigated under the umbrella of our conceptual framework of Governance of Arrival. The issues of a strained housing market and the integration of asylum seekers into it do not apply only to Leipzig, but shed light on similar developments in other European cities.
Thermal building simulation based on BIM models. Thermal-energetic simulations are used to estimate the heating demand of buildings and districts. These simulations are based on building models containing geometrical and physical information. The geometrical model is usually created from existing construction plans or in-situ assessments, which demands considerable investigation and modeling effort. Later structural alterations to the building must often be incorporated into the model manually, which increases the effort further. The physical model comprises the parameters and boundary conditions determined by material properties, location, and environmental influences. The link between the two models is realized within the respective simulation software and is usually not transferable to other software products. By applying Building Information Modeling (BIM), simulation data can be stored consistently and exchanged with other applications via interfaces. A method is therefore presented that enables thermal-energetic simulations, including subsequent evaluations, based on the standardized exchange format Industry Foundation Classes (IFC). Geometrical and physical parameters are extracted directly from a building model that is kept up to date over its entire life cycle and passed to the simulation. This accelerates the simulation process with regard to building modeling and to adjustments after later structural changes. The developed method relies on simple modeling conventions for the creation of the building information model and ensures complete transferability of all input and output values.
The late 1960s and, above all, the 1970s were a high point of tenant protests in West Germany. This contribution pursues the thesis that the crisis of Fordist housing provision in the 1960s, or rather the strategies implemented by policymakers to resolve this crisis, enabled a class alliance in housing-related protests, and that this class alliance split apart over the course of the 1970s and 1980s, leading to the containment of the protest within the emerging neoliberal project. In the following, I first describe the housing question of 1968 as a crisis of Fordist housing production, and thus the material basis of the class alliance. Building on this, I illustrate the class alliance through protests in three areas, mass housing construction, redevelopment areas, and squatting, and trace its disintegration. Finally, I ask what can be learned from this history today.
Overheating is a major problem in many modern buildings due to the use of lightweight constructions with low heat storage capacity. A possible answer to this problem is the incorporation of phase change materials (PCMs), thereby increasing the thermal mass of a building. These materials change their state of aggregation within a defined temperature range. PCMs useful for buildings show a phase transition from solid to liquid and vice versa, and their latent heat increases the effective thermal mass. A modified gypsum plaster and a salt mixture were chosen as two materials for the study of their impact on room temperature reduction. For realistic investigations, test rooms were erected in which measurements were carried out under different conditions, such as temporary air change, alternating internal heat gains, or cloud cover. The experimental data were finally reproduced using a mathematical model.
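Why a PCM adds thermal mass can be made concrete with the effective-heat-capacity idea: spreading the latent heat over the melting range makes the material store far more heat per kelvin inside that range than outside it. The numbers below are illustrative, not the study's measured data:

```python
# Toy "effective heat capacity" of a PCM: sensible heat everywhere, plus
# latent heat spread over the melting range.
def effective_heat_capacity(T, cp=1000.0, latent=120_000.0, T_melt=(23.0, 26.0)):
    """Effective heat capacity in J/(kg K) at temperature T (deg C).

    cp     - sensible specific heat capacity
    latent - latent heat of fusion in J/kg
    T_melt - (onset, end) of the phase-change range
    """
    t0, t1 = T_melt
    if t0 <= T <= t1:
        return cp + latent / (t1 - t0)  # latent heat spread over the range
    return cp

# outside the melting range the PCM behaves like ordinary material ...
c_out = effective_heat_capacity(20.0)
# ... inside it, heat storage per kelvin is many times higher
c_in = effective_heat_capacity(24.0)
```

With these illustrative values, the effective capacity inside the melting range is 41 times the sensible value, which is exactly the mechanism that damps room-temperature peaks in the test rooms.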
The human body is surrounded by a micro‐climate which results from its convective release of heat. In this study, the air temperature and flow velocity of this micro‐climate were measured in a climate chamber at various room temperatures, using a thermal manikin simulating the heat release of the human being. Different techniques (Particle Streak Tracking, thermography, anemometry, and thermistors) were used for measurement and visualization. The manikin surface temperature was adjusted to the particular indoor climate based on simulations with a thermoregulation model (UCBerkeley Thermal Comfort Model). We found that generally, the micro‐climate is thinner at the lower part of the torso, but expands going up. At the head, there is a relatively thick thermal layer, which results in an ascending plume above the head. However, the micro‐climate shape strongly depends not only on the body segment, but also on boundary conditions: the higher the temperature difference between the surface temperature of the manikin and the air temperature, the faster the air flow in the micro‐climate. Finally, convective heat transfer coefficients strongly increase with falling room temperature, while radiative heat transfer coefficients decrease. The type of body segment strongly influences the convective heat transfer coefficient, while only minimally influencing the radiative heat transfer coefficient.
Producing a desired product efficiently requires effective use of the experimental model. The present study proposes an extreme learning machine (ELM) and a support vector machine (SVM) integrated with response surface methodology (RSM) to handle the complexity of optimizing and predicting the ethyl ester and methyl ester production process. The novel hybrid models of ELM-RSM and ELM-SVM are further used as a case study to estimate the yield of methyl and ethyl esters produced through a transesterification process from waste cooking oil (WCO), based on American Society for Testing and Materials (ASTM) standards. The results of the prediction phase were also compared with artificial neural networks (ANNs) and an adaptive neuro-fuzzy inference system (ANFIS), which were recently developed by the second author of this study. Based on the results, the ELM, with correlation coefficients of 0.9815 and 0.9863 for methyl and ethyl esters, respectively, had a higher estimation capability than the SVM, ANNs, and ANFIS. Accordingly, the maximum production yield obtained using ELM-RSM was 96.86% for ethyl ester at a temperature of 68.48 °C, a catalyst value of 1.15 wt. %, a mixing intensity of 650.07 rpm, and an alcohol-to-oil molar ratio (A/O) of 5.77; for methyl ester, the production yield was 98.46% at a temperature of 67.62 °C, a catalyst value of 1.1 wt. %, a mixing intensity of 709.42 rpm, and an A/O of 6.09. Therefore, ELM-RSM increased the production yield by 3.6% for ethyl ester and 3.1% for methyl ester compared with the experimental data.
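The response-surface step of the hybrid approach above can be sketched on its own: fit a quadratic surface to yield observations over the process factors, then locate its maximum. The study optimizes over four factors on ELM/SVM predictions; two factors and toy data keep the sketch small:

```python
# Sketch: fit a quadratic response surface over temperature and catalyst
# loading, then maximize the fitted yield within the design range.
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from scipy.optimize import minimize

rng = np.random.default_rng(0)
temp = rng.uniform(50, 80, 60)   # reaction temperature (deg C)
cat = rng.uniform(0.5, 1.5, 60)  # catalyst loading (wt.%)
# toy yield surface with a maximum near (68 degC, 1.1 wt.%)
yield_pct = 97 - 0.02 * (temp - 68) ** 2 - 40 * (cat - 1.1) ** 2 \
            + rng.normal(0, 0.2, 60)

X = np.column_stack([temp, cat])
rsm = make_pipeline(PolynomialFeatures(degree=2), LinearRegression()).fit(X, yield_pct)

# maximize the fitted surface (minimize its negative) within the design range
res = minimize(lambda x: -rsm.predict(x.reshape(1, -1))[0],
               x0=[65.0, 1.0], bounds=[(50, 80), (0.5, 1.5)])
best_temp, best_cat = res.x
best_yield = -res.fun
```

In the hybrid scheme, the `yield_pct` observations would be replaced by ELM predictions over a designed grid of factor settings, which is what lets RSM search regions not covered by experiments.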
Management strategies for sustainable sugarcane production need to deal with the increasing complexity and variability of the whole sugar system. Moreover, they need to accommodate the multiple goals of different industry sectors and the wider community. Traditional disciplinary approaches are unable to provide integrated management solutions, and an approach based on whole-systems analysis is essential to bring about beneficial change for industry and the community. The application of this approach to water management, environmental management, and cane supply management is outlined; the literature indicates that the extreme learning machine (ELM) has never been explored in this realm. Consequently, the leading objective of the current research was to fill this gap by applying the ELM to build a fast and accurate data-driven model of crop production. A key lesson has been the need for innovation in the technical aspects of system function, underpinned by modelling of sugarcane growth. The current study therefore attempts to establish an integrated ELM model to predict the final growth of sugarcane. Prediction results were evaluated and compared with artificial neural network (ANN) and genetic programming models. The accuracy of the ELM model was assessed using the statistical indicators Root Mean Square Error (RMSE), Pearson coefficient (r), and coefficient of determination (R2), with promising results of 0.8, 0.47, and 0.89, respectively. The results also show better generalization ability as well as faster learning. Thus, the ELM proved suitable for further work on developing a prediction model for sugarcane growth.
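The three accuracy statistics reported above (RMSE, Pearson r, R2) have standard definitions, which the following sketch computes from scratch. The observed/predicted series are invented placeholders, not sugarcane growth data.

```python
import math

# Standard definitions of the three accuracy statistics named in the
# abstract. The obs/pred values below are hypothetical placeholders.

def rmse(obs, pred):
    return math.sqrt(sum((o - p) ** 2 for o, p in zip(obs, pred)) / len(obs))

def pearson_r(obs, pred):
    n = len(obs)
    mo, mp = sum(obs) / n, sum(pred) / n
    cov = sum((o - mo) * (p - mp) for o, p in zip(obs, pred))
    so = math.sqrt(sum((o - mo) ** 2 for o in obs))
    sp = math.sqrt(sum((p - mp) ** 2 for p in pred))
    return cov / (so * sp)

def r_squared(obs, pred):
    mo = sum(obs) / len(obs)
    ss_res = sum((o - p) ** 2 for o, p in zip(obs, pred))
    ss_tot = sum((o - mo) ** 2 for o in obs)
    return 1.0 - ss_res / ss_tot

obs = [1.0, 2.0, 3.0, 4.0]
pred = [2.0, 3.0, 4.0, 5.0]  # constant +1 offset
# rmse -> 1.0, pearson_r -> 1.0, r_squared -> 0.2
```

The offset example illustrates why the three indicators are reported together: a systematic bias leaves r at 1.0 while RMSE and R2 both flag the error.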
Occupant needs with regard to residential buildings are not well known due to a lack of representative scientific studies. To address this lack of data, a large-scale study was carried out using a Post Occupancy Evaluation of 1,416 building occupants. Several criteria describing the needs of occupants were evaluated with regard to their subjective level of relevance. Additionally, we investigated the degree to which deficiencies subjectively exist, and the degree to which occupants were able to accept them. From the data obtained, a hierarchy of criteria was created. It was found that building occupants ranked the physiological needs of air quality and thermal comfort highest. Health hazards such as mould and contaminated building materials were unacceptable to occupants, while other deficiencies were more likely to be tolerated. Occupant satisfaction was also investigated. We found that most occupants can be classified as satisfied, although some differences do exist between populations. To explain the relationship between the constructs of what we call relevance, acceptance, deficiency, and satisfaction, we then created an explanatory model. Using correlation and regression analysis, the validity of the model was confirmed against the collected data. The results of the study are relevant both for shaping further research and for providing guidance on how to maximize tenant satisfaction in real estate management.
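The validation step above rests on ordinary correlation and regression between construct scores. As a minimal sketch of that step, the following fits a one-variable least-squares regression; the rating data are invented placeholders, not results from the 1,416-occupant survey.

```python
# Simple ordinary-least-squares regression of one survey construct on
# another, sketching the correlation/regression validation step.
# The rating data below are invented placeholders, not survey results.

def ols(x, y):
    """Return (slope, intercept) minimizing squared residuals."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    return slope, my - slope * mx

# Hypothetical example: satisfaction ratings rising with perceived air quality
air_quality = [1.0, 2.0, 3.0, 4.0, 5.0]
satisfaction = [1.5, 2.5, 3.5, 4.5, 5.5]
slope, intercept = ols(air_quality, satisfaction)  # -> (1.0, 0.5)
```

A significantly positive slope between a need's fulfilment and overall satisfaction is the kind of evidence such an analysis would offer for the explanatory model.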
A broadband soil dielectric spectra retrieval approach (1 MHz–2 GHz) has been implemented for a layered half space. The inversion kernel consists of a two-port transmission line forward model in the frequency domain and a constitutive material equation based on a power-law soil mixture rule (Complex Refractive Index Model, CRIM). The spatially distributed retrieval of broadband dielectric spectra was achieved with a global optimization approach based on a Shuffled Complex Evolution (SCE) algorithm using the full set of scattering parameters. For each layer, the broadband dielectric spectra were retrieved together with the corresponding parameters: thickness, porosity, water saturation, and electrical conductivity of the aqueous pore solution. For validation of the approach, a coaxial transmission line cell measured with a network analyzer was used. The possibilities and limitations of the inverse parameter estimation were analyzed numerically in four scenarios. Expected and retrieved layer thicknesses, soil properties, and broadband dielectric spectra in each scenario were in reasonable agreement. Hence, the model is suitable for estimating inhomogeneous material parameter distributions. Moreover, the proposed frequency-domain approach allows an automatic adaptation of layer number and thickness or regular grids in time and/or space.
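The CRIM mixture rule used as the constitutive equation states that the square root of the effective permittivity is the volume-weighted sum of the square roots of the phase permittivities. A sketch for a three-phase soil (solid, water, air) follows; the phase permittivities are typical textbook values, not the study's calibrated parameters.

```python
import math

# Sketch of the CRIM (Complex Refractive Index Model) power-law mixing
# rule: sqrt(eps_eff) = sum_i v_i * sqrt(eps_i), with exponent 0.5.
# Phase permittivities are typical textbook values (assumptions), not
# parameters from the study.

def crim_permittivity(porosity, saturation,
                      eps_solid=5.0, eps_water=80.0, eps_air=1.0):
    v_solid = 1.0 - porosity                  # solid-grain volume fraction
    v_water = porosity * saturation           # water-filled pore space
    v_air = porosity * (1.0 - saturation)     # air-filled pore space
    root = (v_solid * math.sqrt(eps_solid)
            + v_water * math.sqrt(eps_water)
            + v_air * math.sqrt(eps_air))
    return root ** 2

# Hypothetical half-saturated soil layer with 40% porosity
eps_eff = crim_permittivity(porosity=0.4, saturation=0.5)
```

In the retrieval described above, porosity, water saturation, and conductivity per layer are the unknowns the SCE optimizer adjusts until the forward-modelled scattering parameters match the measured ones.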