The K-nearest neighbors (KNN) algorithm is a well-known non-parametric classification method. However, like other traditional data mining methods, applying it to big data poses computational challenges. KNN determines the class of a new sample from the classes of its nearest neighbors, but identifying those neighbors in a large dataset imposes a computational cost too great for a single machine. One technique for making classification methods applicable to large datasets is pruning. LC-KNN is an improved KNN method that first clusters the data into smaller partitions using K-means and then, for each new sample, applies KNN within the partition whose center is nearest to the sample. However, because the clusters have different shapes and densities, selecting the appropriate cluster is a challenge. In this paper, we propose an approach that improves the pruning phase of LC-KNN by taking these factors into account. The proposed approach helps to choose a more appropriate cluster in which to search for neighbors, thereby increasing classification accuracy. Its performance is evaluated on several real datasets. The experimental results show the effectiveness of the proposed approach and its higher classification accuracy and lower time cost compared to other recent relevant methods.
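The two-stage idea, partitioning with K-means and then searching for neighbors only in the nearest partition, can be sketched in a few lines. This is a minimal illustration of plain LC-KNN, not the cluster-selection improvement proposed in the paper; the function names and toy data are our own.

```python
import numpy as np

def kmeans(X, k, iters=20, seed=0):
    """Plain Lloyd's algorithm: returns cluster centers and point labels."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)].copy()
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return centers, labels

def lc_knn_predict(X, y, centers, labels, x_new, n_neighbors=3):
    """Prune: search for neighbors only inside the partition whose center is nearest."""
    c = int(np.argmin(((centers - x_new) ** 2).sum(-1)))
    Xc, yc = X[labels == c], y[labels == c]
    nearest = np.argsort(((Xc - x_new) ** 2).sum(-1))[:n_neighbors]
    vals, counts = np.unique(yc[nearest], return_counts=True)
    return int(vals[np.argmax(counts)])
```

With two well-separated groups of points, a query near either group is classified by its local neighbors only, so the distance computations scale with the partition size rather than the full dataset.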
Hydrological drought forecasting plays a substantial role in water resources management, as hydrological drought strongly affects water allocation and hydropower generation. In this research, short-term hydrological drought was forecast by hybridizing novel nature-inspired optimization algorithms with Artificial Neural Networks (ANN). For this purpose, the Standardized Hydrological Drought Index (SHDI) and the Standardized Precipitation Index (SPI) were calculated at aggregation scales of one, three, and six months. Three states were then proposed for SHDI forecasting, and 36 input-output combinations were extracted based on cross-correlation analysis. In the next step, newly proposed optimization algorithms, including the Grasshopper Optimization Algorithm (GOA), Salp Swarm Algorithm (SSA), Biogeography-Based Optimization (BBO), and Particle Swarm Optimization (PSO), were hybridized with the ANN for SHDI forecasting, and the results were compared with those of a conventional ANN. Results indicated that the hybridized models outperformed the conventional ANN, with PSO performing better than the other optimization algorithms. The best models forecast SHDI1 with R2 = 0.68 and RMSE = 0.58, SHDI3 with R2 = 0.81 and RMSE = 0.45, and SHDI6 with R2 = 0.82 and RMSE = 0.40.
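The standardized indices used above are computed by aggregating a monthly series over the chosen time scale and transforming it to standard-score form. The sketch below uses a plain z-score; the operational SPI/SHDI additionally fits a probability distribution (commonly a gamma distribution) before standardizing, which is omitted here, so treat this as a simplified stand-in.

```python
import numpy as np

def standardized_index(monthly, window):
    """Rolling `window`-month aggregation followed by standardization.

    Simplified stand-in for SPI/SHDI: a z-score replaces the
    distribution-fitting step of the operational indices.
    """
    agg = np.convolve(monthly, np.ones(window), mode="valid")
    return (agg - agg.mean()) / agg.std()
```

For a series of n months and a 3-month scale, this yields n - 2 index values with zero mean and unit standard deviation over the record.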
Classical Internet of Things routing and wireless sensor networks can monitor the covered area more precisely as the number of deployed nodes increases. Because of the limitations of the shared transfer medium, many nodes in the network are prone to collisions during simultaneous transmissions. Medium access control protocols are usually practical only in low-traffic networks that are not subjected to external noise from adjacent frequencies. Congestion management comprises prevention, detection, and control solutions, all of which are addressed in this study. In the congestion prevention phase, the proposed method chooses the next hop of the path using a fuzzy decision-making system to distribute network traffic via optimal paths. In the congestion detection phase, a dynamic queue management approach was designed to detect congestion in the least amount of time and prevent collisions. In the congestion control phase, the back-pressure method was used, based on queue quality, to reduce the probability of routing through a pre-congested node. The main goals of this study are to balance energy consumption across network nodes, reduce the packet loss rate, and increase the quality of service in routing. Simulation results showed that the proposed Congestion Control Fuzzy Decision Making (CCFDM) method improves routing parameters more effectively than recent algorithms.
Energy‐Efficient Method for Wireless Sensor Networks Low‐Power Radio Operation in Internet of Things
(2020)
Radio operation in wireless sensor networks (WSN) for Internet of Things (IoT) applications is the most common source of power consumption. Consequently, recognizing and controlling the factors affecting radio operation is valuable for managing node power consumption. Among the essential factors, the time spent checking the radio is of utmost importance, as it can lead to false wake-ups or idle listening in radio duty cycles. ContikiMAC is a low-power radio duty-cycle protocol in Contiki OS that periodically performs a clear channel assessment (CCA) in WakeUp mode to check the radio status. This paper presents a detailed analysis of the radio wake-up time factors of ContikiMAC. Furthermore, we propose a lightweight CCA (LW-CCA) as an extension to ContikiMAC that reduces the radio duty cycles spent in false wake-ups and idle listening by using a dynamic received signal strength indicator (RSSI) status check time. Simulation results in the Cooja simulator show that LW-CCA reduces node energy consumption by about 8% while maintaining up to 99% of the packet delivery rate (PDR).
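The cost of periodic channel checks can be made concrete with a back-of-the-envelope energy model. The function below and the current/timing figures in the example are illustrative assumptions for a generic low-power transceiver, not measurements from the paper.

```python
def cca_listen_energy_mj(checks_per_s, check_duration_s, rx_current_a,
                         voltage_v, period_s):
    """Energy (mJ) spent on periodic CCA checks over `period_s` seconds.

    Illustrative model: radio-on time x receive current x supply voltage.
    """
    on_time_s = checks_per_s * check_duration_s * period_s
    return rx_current_a * voltage_v * on_time_s * 1000.0
```

At 8 checks per second of 192 microseconds each, with an assumed 18.8 mA receive current at 3 V, a node spends roughly 0.087 mJ per second just probing the channel; this is exactly the budget that a shorter dynamic RSSI check time reduces.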
Coronary Artery Disease Diagnosis: Ranking the Significant Features Using a Random Trees Model
(2020)
Heart disease is one of the most common diseases among middle-aged people. Among the vast number of heart diseases, coronary artery disease (CAD) is a common cardiovascular disease with a high death rate. The most popular tool for diagnosing CAD is medical imaging, e.g., angiography; however, angiography is costly and associated with a number of side effects. Hence, the purpose of this study is to increase the accuracy of coronary heart disease diagnosis by selecting significant predictive features and ranking them. We propose an integrated method using machine learning, employing random trees (RTs), the C5.0 decision tree, support vector machine (SVM), and the Chi-squared automatic interaction detection (CHAID) decision tree. The proposed method shows promising results, and the study confirms that the RTs model outperforms the other models.
The longitudinal dispersion coefficient (LDC) plays an important role in modeling the transport of pollutants and sediment in natural rivers. As a result of transport processes, the concentration of pollutants changes along the river. Various studies have been conducted to provide simple equations for estimating the LDC. In this study, machine learning methods, namely support vector regression, Gaussian process regression, the M5 model tree (M5P) and random forest, together with multiple linear regression, were examined for predicting the LDC in natural streams. Data sets from 60 rivers around the world with different hydraulic and geometric features were gathered to develop models for LDC estimation. Statistical criteria, including the correlation coefficient (CC), root mean squared error (RMSE) and mean absolute error (MAE), were used to evaluate the models. The LDC values estimated by these models were compared with the corresponding results of common empirical models. A Taylor chart was used to assess the models, and the results showed that among the machine learning models, M5P had superior performance, with a CC of 0.823, RMSE of 454.9 and MAE of 380.9. The model of Sahay and Dutta, with a CC of 0.795, RMSE of 460.7 and MAE of 306.1, gave more precise results than the other empirical models. The main advantage of M5P models is their ability to provide practical formulae. In conclusion, the results showed that the developed M5P model, with its simple formulations, was superior to the other machine learning models and the empirical models; it can therefore be used as a proper tool for estimating the LDC in rivers.
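The three evaluation criteria above are straightforward to compute; a minimal sketch (function names are ours):

```python
import numpy as np

def cc(obs, pred):
    """Pearson correlation coefficient between observed and predicted values."""
    return float(np.corrcoef(obs, pred)[0, 1])

def rmse(obs, pred):
    """Root mean squared error."""
    return float(np.sqrt(np.mean((np.asarray(obs) - np.asarray(pred)) ** 2)))

def mae(obs, pred):
    """Mean absolute error."""
    return float(np.mean(np.abs(np.asarray(obs) - np.asarray(pred))))
```

Note that CC measures linear association only: a prediction offset by a constant still scores CC = 1, which is why RMSE and MAE are reported alongside it.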
Temporary changes in precipitation may lead to sustained and severe droughts or massive floods in different parts of the world. Knowing the variation in precipitation can effectively help decision-makers in water resources management. Large-scale circulation drivers have a considerable impact on precipitation worldwide. In this research, the impact of the El Niño-Southern Oscillation (ENSO), Pacific Decadal Oscillation (PDO), and North Atlantic Oscillation (NAO) on seasonal precipitation over Iran was investigated. For this purpose, 103 synoptic stations with at least 30 years of data were utilized. The Spearman correlation coefficient between the indices in the previous 12 months and seasonal precipitation was calculated, and the meaningful correlations were extracted. Then, the month in which each index has the highest correlation with seasonal precipitation was determined. Finally, the overall increase or decrease in seasonal precipitation due to each index was calculated. Results indicate that the Southern Oscillation Index (SOI), NAO, and PDO have the greatest impact on seasonal precipitation, in that order, and that these indices affect precipitation most strongly in winter, autumn, spring, and summer, respectively. SOI affects winter precipitation in the opposite direction to the PDO and NAO, while in the other seasons each index has its own distinct impact on seasonal precipitation. Generally, all indices in different phases may decrease seasonal precipitation by up to 100%; however, seasonal precipitation may also increase by more than 100% in different seasons under the influence of these indices. The results of this study can be used effectively in water resources management and especially in dam operation.
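The lag-selection step, correlating a driver index at each of the preceding months with seasonal precipitation and keeping the strongest association, can be sketched as follows. The Spearman coefficient is computed from ranks (without tie correction); all names and the toy series are illustrative.

```python
import numpy as np

def spearman(a, b):
    """Spearman rank correlation (distinct values assumed; no tie correction)."""
    ra = np.argsort(np.argsort(a)).astype(float)
    rb = np.argsort(np.argsort(b)).astype(float)
    return float(np.corrcoef(ra, rb)[0, 1])

def best_lag(index, precip, max_lag=3):
    """Return the lag (in months) with the largest |rho|, plus all scores."""
    index, precip = np.asarray(index, float), np.asarray(precip, float)
    scores = {lag: spearman(index[:-lag], precip[lag:])
              for lag in range(1, max_lag + 1)}
    return max(scores, key=lambda lag: abs(scores[lag])), scores
```

If precipitation tracks the index with a two-month delay, the lag-2 correlation is perfect and that lag is selected.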
Due to the importance of identifying crop cultivars, advancing the accurate assessment of cultivars is considered essential. Existing methods for identifying rice cultivars are mainly time-consuming, costly, and destructive, so the development of novel methods is highly beneficial. The aim of the present research is to classify common rice cultivars in Iran based on color, morphologic, and texture properties using artificial intelligence (AI) methods. To this end, digital images of 13 Iranian rice cultivars in three forms (paddy, brown, and white) were analyzed through pre-processing and segmentation in MATLAB. Ninety-two features, comprising 60 color, 14 morphologic, and 18 texture properties, were identified for each cultivar. Next, the normality of the data was evaluated, and significant differences between cultivars across all features were examined using analysis of variance. In addition, the least significant difference (LSD) test was performed for a more precise comparison between cultivars. To reduce the data dimensionality and focus on the most effective components, principal component analysis (PCA) was employed. The accuracy of cultivar separation was then calculated using discriminant analysis (DA), yielding 89.2%, 87.7%, and 83.1% for paddy, brown rice, and white rice, respectively. To identify and classify the desired cultivars, a multilayer perceptron neural network was implemented based on the most effective components; it achieved 100% accuracy in identifying and classifying all the mentioned cultivars. Hence, it is concluded that integrating image processing with pattern recognition methods, such as statistical classification and artificial neural networks, can be used for identifying and classifying rice cultivars.
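The reduce-then-discriminate pipeline can be illustrated with a few lines of linear algebra. A nearest-class-centroid rule stands in here for the discriminant analysis and neural network used in the study; the data and function names are toy assumptions.

```python
import numpy as np

def pca_fit(X, n_components):
    """Fit PCA via SVD of the centered data; returns mean and components."""
    mu = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    return mu, Vt[:n_components]

def project(X, mu, comps):
    """Project samples onto the principal components."""
    return (X - mu) @ comps.T

def nearest_mean_classify(Z_train, y_train, Z_new):
    """Simple stand-in for discriminant analysis: nearest class centroid."""
    classes = np.unique(y_train)
    means = np.array([Z_train[y_train == c].mean(axis=0) for c in classes])
    d = ((Z_new[:, None] - means[None]) ** 2).sum(-1)
    return classes[np.argmin(d, axis=1)]
```

With 92 extracted features per grain, the same three calls would reduce the feature matrix to its leading components and assign each new sample to the closest cultivar centroid in that reduced space.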
For fifty years, competing explanations of gentrification have been debated. For far longer, investment-seeking capital has moved from one place to another, leaving behind investment ruins on the one hand and people who lose their homes through displacement on the other. Only much more recently has the term gentrification been taken up, here and there, by social movements concerned with the latter phenomenon.
This contribution is concerned neither with the scholarly debate over explanations of gentrification nor with the scholarly relevance of the term, but with its role and function in social movements.
Calculating the solubility of hydrocarbon components of natural gases is an important issue in petroleum and chemical engineering operations. In this work, a novel solubility estimation tool is proposed for hydrocarbon gases—including methane, ethane, propane, and butane—in aqueous electrolyte solutions, based on the extreme learning machine (ELM) algorithm. Comparing the ELM outputs with a comprehensive real databank of 1175 solubility points yielded R-squared values of 0.985 and 0.987 for the training and testing phases, respectively. Furthermore, a visual comparison of estimated and actual hydrocarbon solubilities confirmed the ability of the proposed solubility model. Additionally, a sensitivity analysis was performed on the model's input variables to identify their impact on hydrocarbon solubility. Such a comprehensive and reliable study can help engineers and scientists determine important thermodynamic properties, which are key factors in optimizing and designing industrial units such as refineries and petrochemical plants.
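The core of an extreme learning machine is compact: hidden-layer weights are drawn at random and only the output weights are solved for by least squares. The sketch below is a generic single-output ELM fitted to a toy curve, not the solubility model itself; all names and data are illustrative.

```python
import numpy as np

def elm_fit(X, y, n_hidden=30, seed=0):
    """Random tanh hidden layer; output weights solved by least squares."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W + b)           # random feature matrix
    beta = np.linalg.pinv(H) @ y     # least-squares output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta
```

Because only `beta` is trained, fitting reduces to a single pseudo-inverse, which is what makes ELMs fast compared with iteratively trained networks.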
Rapid advancements of modern technologies put high demands on mathematical modelling of engineering systems. Typically, systems are no longer "simple" objects, but rather coupled systems involving multiphysics phenomena, the modelling of which involves coupling of models that describe different phenomena. After constructing a mathematical model, it is essential to analyse the correctness of the coupled models and to detect modelling errors compromising the final modelling result. Broadly, there are two classes of modelling errors: (a) errors related to abstract modelling, e.g., conceptual errors concerning the coherence of a model as a whole, and (b) errors related to concrete modelling or instance modelling, e.g., questions of approximation quality and implementation. Instance modelling errors, on the one hand, are relatively well understood. Abstract modelling errors, on the other, are not appropriately addressed by modern modelling methodologies. The aim of this paper is to initiate a discussion on abstract approaches and their usability for mathematical modelling of engineering systems, with the goal of making it possible to catch conceptual modelling errors early and automatically by computer-assisted tools. To that end, we argue that it is necessary to identify and employ suitable mathematical abstractions to capture an accurate conceptual description of the process of modelling engineering systems.
The effect of urban form on energy consumption has been the subject of various studies around the world. Having examined the effect of buildings on energy consumption, these studies indicate that the physical form of a city has a notable impact on the amount of energy consumed in its spaces. The present study identified the variables that affected energy consumption in residential buildings and analyzed their effects on energy consumption in four neighborhoods in Tehran: Apadana, Bimeh, Ekbatan-phase I, and Ekbatan-phase II. After extracting the variables, their effects are estimated with statistical methods, and the results are compared with the land surface temperature (LST) remote sensing data derived from Landsat 8 satellite images taken in the winter of 2019. The results showed that physical variables, such as the size of buildings, population density, vegetation cover, texture concentration, and surface color, have the greatest impacts on energy usage. For the Apadana neighborhood, the factors with the most potent effect on energy consumption were found to be the size of buildings and the population density. However, for other neighborhoods, in addition to these two factors, a third factor was also recognized to have a significant effect on energy consumption. This third factor for the Bimeh, Ekbatan-I, and Ekbatan-II neighborhoods was the type of buildings, texture concentration, and orientation of buildings, respectively.
Evaporation is a very important process; it is one of the most critical factors in agricultural, hydrological, and meteorological studies. Due to the interactions of multiple climatic factors, evaporation is considered a complex and nonlinear phenomenon to model. Thus, machine learning methods have gained popularity in this realm. In the present study, four machine learning methods, Gaussian Process Regression (GPR), K-Nearest Neighbors (KNN), Random Forest (RF) and Support Vector Regression (SVR), were used to predict pan evaporation (PE). Meteorological data, including PE, temperature (T), relative humidity (RH), wind speed (W), and sunny hours (S), were collected from 2011 through 2017. The accuracy of the studied methods was determined using the statistical indices of Root Mean Squared Error (RMSE), correlation coefficient (R) and Mean Absolute Error (MAE). Furthermore, Taylor charts were utilized to evaluate the accuracy of the mentioned models. The results showed that at the Gonbad-e Kavus, Gorgan and Bandar Torkman stations, GPR with RMSE of 1.521 mm/day, 1.244 mm/day, and 1.254 mm/day, KNN with RMSE of 1.991 mm/day, 1.775 mm/day, and 1.577 mm/day, RF with RMSE of 1.614 mm/day, 1.337 mm/day, and 1.316 mm/day, and SVR with RMSE of 1.55 mm/day, 1.262 mm/day, and 1.275 mm/day performed well in estimating PE values. GPR for Gonbad-e Kavus Station with input parameters of T, W and S, and GPR for the Gorgan and Bandar Torkman stations with input parameters of T, RH, W and S, had the most accurate predictions and were proposed for precise estimation of PE. The findings of the current study indicate that PE values may be accurately estimated with a few easily measured meteorological parameters.
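As an illustration of one of the four methods, a bare-bones K-nearest-neighbors regressor looks like this; it is not the tuned model or station data from the study, and the names are ours.

```python
import numpy as np

def knn_regress(X_train, y_train, X_new, k=3):
    """Predict each new point as the mean target of its k nearest training points."""
    X_train, X_new = np.asarray(X_train, float), np.asarray(X_new, float)
    d = ((X_train[None] - X_new[:, None]) ** 2).sum(-1)  # squared distances
    nearest = np.argsort(d, axis=1)[:, :k]
    return np.asarray(y_train, float)[nearest].mean(axis=1)
```

For the meteorological setting above, the feature columns would be T, RH, W, and S; a one-dimensional toy example suffices to show the averaging behavior.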
Estimating the solubility of carbon dioxide in ionic liquids, using reliable models, is of paramount importance from both environmental and economic points of view. In this regard, the current research aims at evaluating the performance of two data-driven techniques, namely multilayer perceptron (MLP) and gene expression programming (GEP), for predicting the solubility of carbon dioxide (CO2) in ionic liquids (ILs) as a function of pressure, temperature, and four thermodynamical parameters of the ionic liquid. To develop the above techniques, 744 experimental data points derived from the literature, covering 13 ILs, were used (80% of the points for training and 20% for validation). Two backpropagation-based methods, namely Levenberg–Marquardt (LM) and Bayesian Regularization (BR), were applied to optimize the MLP algorithm. Various statistical and graphical assessments were applied to check the credibility of the developed techniques. The results were then compared with those calculated using the Peng–Robinson (PR) and Soave–Redlich–Kwong (SRK) equations of state (EoS). The highest coefficient of determination (R2 = 0.9965) and the lowest root mean square error (RMSE = 0.0116) were recorded for the MLP-LM model on the full dataset (with a negligible difference to the MLP-BR model). The comparison of results from this model with the vastly applied thermodynamic equation-of-state models revealed slightly better performance, but the EoS approaches also performed well, with R2 from 0.984 up to 0.996. Lastly, the newly established correlation based on the GEP model exhibited very satisfactory results, with overall values of R2 = 0.9896 and RMSE = 0.0201.
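A multilayer perceptron of the kind used here reduces to a few matrix operations. The sketch below trains a one-hidden-layer network by plain gradient descent on a synthetic curve; the Levenberg–Marquardt and Bayesian-regularization optimizers of the study are not reproduced, and all data and names are our own.

```python
import numpy as np

def train_mlp(X, y, hidden=8, lr=0.1, steps=500, seed=0):
    """One-hidden-layer tanh MLP trained with plain gradient descent on MSE."""
    rng = np.random.default_rng(seed)
    W1 = rng.normal(0.0, 0.5, (X.shape[1], hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(0.0, 0.5, (hidden, 1));          b2 = np.zeros(1)
    for _ in range(steps):
        h = np.tanh(X @ W1 + b1)                 # forward pass
        p = (h @ W2 + b2)[:, 0]
        err = ((p - y) / len(X))[:, None]        # gradient of half-MSE w.r.t. p
        dh = err @ W2.T * (1 - h ** 2)           # backprop through tanh
        W2 -= lr * (h.T @ err); b2 -= lr * err.sum(0)
        W1 -= lr * (X.T @ dh);  b1 -= lr * dh.sum(0)
    return lambda Xn: (np.tanh(Xn @ W1 + b1) @ W2 + b2)[:, 0]
```

Setting `steps=0` returns the untrained network, which makes it easy to verify that training actually reduces the fitting error on the same data.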
FCS-MBFLEACH: Designing an Energy-Aware Fault Detection System for Mobile Wireless Sensor Networks
(2019)
Wireless sensor networks (WSNs) comprise large numbers of sensor nodes densely and randomly distributed over a geographical region for monitoring, identifying, and analyzing physical events. The crucial challenge in wireless sensor networks is the nodes' heavy dependence on limited, non-rechargeable battery power for wireless information exchange, which makes managing and monitoring these nodes for abnormal changes very difficult. Such anomalies arise from faults, including hardware and software failures, anomalous behavior, and attacks by intruders, all of which affect the integrity of the data collected by the network. Hence, crucial measures should be taken to detect faults in the network early, despite the limitations of the sensor nodes. Machine learning offers solutions that can be used to detect sensor node faults in the network. The purpose of this study is to use several classification methods, namely MB-FLEACH, one-class support vector machine (SVM), fuzzy one-class classification, and a combination of SVM and FCS-MBFLEACH, to compute fault detection accuracy at different node densities under two scenarios in regions of interest. It should be noted that, to date, no super cluster head (SCH) selection has been performed to detect node faults in the network. The simulation outcomes demonstrate that the FCS-MBFLEACH method has the best performance in terms of fault detection accuracy, false-positive rate (FPR), average remaining energy, and network lifetime compared to the other classification methods.
Wireless sensor networks have attracted great attention for applications in structural health monitoring due to their ease of use, flexibility of deployment, and cost-effectiveness. This paper presents a software framework for WiFi-based wireless sensor networks composed of low-cost mass market single-board computers. A number of specific system-level software components were developed to enable robust data acquisition, data processing, sensor network communication, and timing with a focus on structural health monitoring (SHM) applications. The framework was validated on Raspberry Pi computers, and its performance was studied in detail. The paper presents several characteristics of the measurement quality such as sampling accuracy and time synchronization and discusses the specific limitations of the system. The implementation includes a complementary smartphone application that is utilized for data acquisition, visualization, and analysis. A prototypical implementation further demonstrates the feasibility of integrating smartphones as data acquisition nodes into the network, utilizing their internal sensors. The measurement system was employed in several monitoring campaigns, three of which are documented in detail. The suitability of the system is evaluated based on comparisons of target quantities with reference measurements. The results indicate that the presented system can robustly achieve a measurement performance commensurate with that required in many typical SHM tasks such as modal identification. As such, it represents a cost-effective alternative to more traditional monitoring solutions.
Urban planners are often challenged with the task of developing design solutions which must meet multiple, and often contradictory, criteria. In this paper, we investigated the trade-offs between the social, psychological, and energy potential of the fundamental elements of urban form: the street network and the building massing. Since formal methods to evaluate urban form from the psychological and social point of view are not readily available, we developed a methodological framework to quantify these criteria as the first contribution in this paper. To evaluate the psychological potential, we conducted a three-tiered empirical study starting from real-world environments and then abstracting them to virtual environments. In each context, the implicit (physiological) response and explicit (subjective) response of pedestrians were measured. To quantify the social potential, we developed a street network centrality-based measure of social accessibility.
For the energy potential, we created an energy model to analyze the impact of pure geometric form on the energy demand of the building stock. The second contribution of this work is a method to identify distinct clusters of urban form and, for each, explore the trade-offs between the select design criteria. We applied this method to two case studies identifying nine types of urban form and their respective potential trade-offs, which are directly applicable for the assessment of strategic decisions regarding urban form during the early planning stages.
A categorical perspective towards aerodynamic models for aeroelastic analyses of bridge decks
(2019)
Reliable modelling in structural engineering is crucial for the serviceability and safety of structures. A huge variety of aerodynamic models for aeroelastic analyses of bridges poses natural questions on their complexity and thus, quality. Moreover, a direct comparison of aerodynamic models is typically either not possible or senseless, as the models can be based on very different physical assumptions. Therefore, to address the question of principal comparability and complexity of models, a more abstract approach, accounting for the effect of basic physical assumptions, is necessary.
This paper presents an application of a recently introduced category theory-based modelling approach to a diverse set of models from bridge aerodynamics. Initially, the categorical approach is extended to allow an adequate description of aerodynamic models. Complexity of the selected aerodynamic models is evaluated, based on which model comparability is established. Finally, the utility of the approach for model comparison and characterisation is demonstrated on an illustrative example from bridge aeroelasticity. The outcome of this study is intended to serve as an alternative framework for model comparison and impact future model assessment studies of mathematical models for engineering applications.
Performance assessment of a ductless personalized ventilation system using a validated CFD model
(2018)
The aim of this study is twofold: to validate a computational fluid dynamics (CFD) model, and then to use the validated model to evaluate the performance of a ductless personalized ventilation (DPV) system. To validate the numerical model, a series of measurements was conducted in a climate chamber equipped with a thermal manikin. Various turbulence models, settings, and options were tested; simulation results were compared to the measured data to determine the turbulence model and solver settings that achieve the best agreement between the measured and simulated values. The validated CFD model was then used to evaluate the thermal environment and indoor air quality in a room equipped with a DPV system combined with displacement ventilation. Results from the numerical model were then used to quantify thermal sensation and comfort using the UC Berkeley thermal comfort model.
This study aims to develop an approach to couple a computational fluid dynamics (CFD) solver to the University of California, Berkeley (UCB) thermal comfort model to accurately evaluate thermal comfort. The coupling was made using an iterative JavaScript routine that automatically transfers data for each individual segment of the human body back and forth between the CFD solver and the UCB model until reaching convergence defined by a stopping criterion. The location from which data are transferred to the UCB model was determined using a new approach based on the temperature difference between subsequent points on the temperature profile curve in the vicinity of the body surface. This approach was used because the microclimate surrounding the human body differs in thickness depending on the body segment and the surrounding environment. To accurately simulate the thermal environment, the numerical model was validated beforehand using experimental data collected in a climate chamber equipped with a thermal manikin. Furthermore, an example of the practical implementations of this coupling is reported in this paper through radiant floor cooling simulation cases, in which overall and local thermal sensation and comfort were investigated using the coupled UCB model.
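The back-and-forth exchange until a stopping criterion is met is a fixed-point iteration. The generic loop below illustrates the pattern (in Python rather than the JavaScript used in the study); both step functions, the state representation, and the tolerance are placeholders.

```python
def couple(cfd_step, comfort_step, state, tol=1e-6, max_iter=100):
    """Iterate two solvers on a shared state vector until the largest
    per-component change falls below `tol` (the stopping criterion)."""
    for i in range(1, max_iter + 1):
        new_state = comfort_step(cfd_step(state))
        if max(abs(a - b) for a, b in zip(new_state, state)) < tol:
            return new_state, i
        state = new_state
    return state, max_iter
```

With a contractive update the loop converges geometrically; for example, the map x -> x/2 + 1 settles at its fixed point 2 after about twenty iterations at this tolerance.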
The Local Governance of Arrival in Leipzig: Housing of Asylum-Seeking Persons as a Contested Field
(2018)
The article examines how the German city of Leipzig governs the housing of asylum seekers. Leipzig was a frontrunner in organizing the decentralized accommodation of asylum seekers when it adopted its accommodation concept in 2012. This concept aimed at integrating asylum-seeking persons into the regular housing market at an early stage of arrival. Since then, however, the city of Leipzig has faced more and more challenges in implementing the concept. This is partly due to the increasingly tight housing market at a time when the number of people seeking protection increased, and partly due to discriminatory and xenophobic attitudes on the part of house owners and managers. Therefore, we argue that the so-called refugee crisis of 2015–2016 has to be seen in close interaction with a growing general housing shortage in Leipzig, as in many other large European cities. Furthermore, we understand the municipal governing of housing as a contested field, given its entanglement of diverse federal levels and policy scales, the diversity of stakeholders involved, and its dynamic change over recent years. We analyze this contested field against the current context of arrival and dynamic urban growth on the local level. Based on qualitative empirical research we conducted in 2016, Leipzig's local specifics are investigated under the umbrella of our conceptual framework of the Governance of Arrival. The issues of a strained housing market and the integration of asylum seekers into it are not unique to Leipzig but shed light on similar developments in other European cities.
Thermal building simulation based on BIM models. Thermal-energetic simulations are used to estimate the heating demand of buildings and districts. These simulations are based on building models containing geometric and physical information. The geometric model is usually created from existing construction plans or in-situ assessments, which demands a comparatively large investigation and modeling effort. Alterations later applied to the structure require manual changes to the model, which increases the effort further. The physical model represents the set of parameters and boundary conditions determined by material properties, location, and environmental influences on the building. The link between the two models is realized within the corresponding simulation software and is usually not transferable to other software products. By applying Building Information Modeling (BIM), simulation data can be stored consistently and exchanged with other applications through interfaces. To this end, a method is presented that enables thermal-energetic simulations, including subsequent evaluations, based on the standardized exchange format Industry Foundation Classes (IFC). Geometric and physical parameters are extracted directly from a building model that is kept up to date over its entire life cycle and are passed to the simulation. This accelerates the simulation process with respect to building modeling and after later structural changes. The developed method relies on simple modeling conventions in the creation of the building information model and ensures complete transferability of the input and output values.
The late 1960s and especially the 1970s were a high point of tenant protests in West Germany. This article pursues the thesis that the crisis of Fordist housing provision in the 1960s, or rather the solution strategies that politics implemented in response to it, enabled a class alliance in housing-related protests, and that this class alliance split up over the course of the 1970s and 1980s, which led to the protest being absorbed into the emerging neoliberal project. In the following, I first describe the housing question of 1968 as a crisis of Fordist housing production and thus as the material basis of the class alliance. I then illustrate the class alliance, and trace its break-up, using protests in three fields: mass housing construction, urban renewal areas, and squatting. Finally, I ask what can be learned from this history today.
Overheating is a major problem in many modern buildings due to the use of lightweight constructions with low heat-storage capacity. A possible answer to this problem is the incorporation of phase change materials (PCM), which increases the thermal mass of a building. These materials change their state of aggregation within a defined temperature range; PCM useful for buildings show a phase transition from solid to liquid and vice versa, and the latent heat increases the thermal mass. A modified gypsum plaster and a salt mixture were chosen to study their impact on reducing room temperature. For realistic investigations, test rooms were erected in which measurements were carried out under different conditions such as temporary air change, alternating internal heat gains, or cloud cover. The experimental data was finally reproduced by means of a mathematical model.
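For readers unfamiliar with how latent heat enters such a mathematical model: a common approach in thermal building simulation is the apparent-heat-capacity method, where the latent heat is spread over the melting range. A minimal sketch, with all material values (specific heat, latent heat, melting point) as illustrative placeholders rather than data from the study:

```python
def apparent_heat_capacity(T, c_p=1000.0, latent=150e3, T_melt=24.0, dT=2.0):
    """Apparent specific heat [J/(kg K)] of a PCM-enhanced material.

    Outside the melting range, only the sensible heat c_p acts; inside the
    range [T_melt - dT/2, T_melt + dT/2], the latent heat is assumed to be
    released/absorbed uniformly, raising the apparent capacity by latent/dT.
    All numbers are hypothetical placeholders, not measured material data.
    """
    if T_melt - dT / 2 <= T <= T_melt + dT / 2:
        return c_p + latent / dT
    return c_p
```

Within the melting range the apparent capacity is dominated by the latent term, which is exactly why PCM raises the effective thermal mass of a lightweight construction.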
The human body is surrounded by a micro‐climate which results from its convective release of heat. In this study, the air temperature and flow velocity of this micro‐climate were measured in a climate chamber at various room temperatures, using a thermal manikin simulating the heat release of the human being. Different techniques (Particle Streak Tracking, thermography, anemometry, and thermistors) were used for measurement and visualization. The manikin surface temperature was adjusted to the particular indoor climate based on simulations with a thermoregulation model (UCBerkeley Thermal Comfort Model). We found that generally, the micro‐climate is thinner at the lower part of the torso, but expands going up. At the head, there is a relatively thick thermal layer, which results in an ascending plume above the head. However, the micro‐climate shape strongly depends not only on the body segment, but also on boundary conditions: the higher the temperature difference between the surface temperature of the manikin and the air temperature, the faster the air flow in the micro‐climate. Finally, convective heat transfer coefficients strongly increase with falling room temperature, while radiative heat transfer coefficients decrease. The type of body segment strongly influences the convective heat transfer coefficient, while only minimally influencing the radiative heat transfer coefficient.
Producing a desired product requires effective use of the experimental model. The present study proposes an extreme learning machine (ELM) and a support vector machine (SVM) integrated with the response surface methodology (RSM) to handle the complexity of optimizing and predicting the ethyl ester and methyl ester production process. The novel hybrid models ELM-RSM and ELM-SVM are then used in a case study to estimate the yield of methyl and ethyl esters produced from waste cooking oil (WCO) through a transesterification process, based on American Society for Testing and Materials (ASTM) standards. The predictions were also compared with artificial neural networks (ANNs) and an adaptive neuro-fuzzy inference system (ANFIS), recently developed by the second author of this study. Based on the results, the ELM, with correlation coefficients of 0.9815 and 0.9863 for methyl and ethyl esters respectively, had a higher estimation capability than SVM, ANNs, and ANFIS. Accordingly, with ELM-RSM the maximum production yield for ethyl ester was 96.86% at a temperature of 68.48 °C, a catalyst value of 1.15 wt. %, a mixing intensity of 650.07 rpm, and an alcohol-to-oil molar ratio (A/O) of 5.77; for methyl ester, the production yield was 98.46% at a temperature of 67.62 °C, a catalyst value of 1.1 wt. %, a mixing intensity of 709.42 rpm, and an A/O of 6.09. ELM-RSM thus increased the production yield by 3.6% for ethyl ester and 3.1% for methyl ester compared with the experimental data.
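For readers unfamiliar with the ELM family of models used in this and the following abstract: an ELM is a single-hidden-layer network whose input weights are drawn at random and never trained; only the output weights are fitted, by a linear least-squares solve. A minimal sketch (not the authors' implementation; layer size and activation are arbitrary choices):

```python
import numpy as np

def elm_fit(X, y, n_hidden=50, seed=0):
    """Train a basic ELM: random hidden layer, least-squares output weights."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))   # random input weights (fixed)
    b = rng.normal(size=n_hidden)                 # random biases (fixed)
    H = np.tanh(X @ W + b)                        # hidden-layer activations
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)  # only these weights are fitted
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta
```

Because training reduces to one least-squares solve, the ELM is orders of magnitude faster to fit than a back-propagation-trained ANN, which is the "swift" property the sugarcane study below relies on.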
Management strategies for sustainable sugarcane production need to deal with the increasing complexity and variability of the whole sugar system. Moreover, they need to accommodate the multiple goals of different industry sectors and the wider community. Traditional disciplinary approaches are unable to provide integrated management solutions, and an approach based on whole-systems analysis is essential to bring about beneficial change for industry and the community. The application of this approach to water management, environmental management, and cane supply management is outlined; the literature indicates that the extreme learning machine (ELM) has never been explored in this realm. The leading objective of the current research was therefore to fill this gap by applying ELM to build a swift and accurate data-driven model of crop production. The key insight has been the need for innovation in the technical aspects of system function, underpinned by modeling of sugarcane growth. The current study therefore attempts to establish an integrated ELM model to predict the final growth of sugarcane. Prediction results were evaluated and compared with artificial neural network (ANN) and genetic programming models. The accuracy of the ELM model is measured by the statistical indicators Root Mean Square Error (RMSE), Pearson coefficient (r), and coefficient of determination (R2), with promising results of 0.8, 0.47, and 0.89, respectively. The results also show better generalization ability as well as a faster learning curve. Thus, the suitability of the ELM for further work on prediction models of sugarcane growth was confirmed with promising results.
Occupant needs with regard to residential buildings are not well known due to a lack of representative scientific studies. To address this lack of data, a large-scale study was carried out using a post-occupancy evaluation of 1,416 building occupants. Several criteria describing the needs of occupants were evaluated with regard to their subjective level of relevance. Additionally, we investigated the degree to which deficiencies subjectively exist and the degree to which occupants were able to accept them. From the data obtained, a hierarchy of criteria was created. Building occupants ranked the physiological needs of air quality and thermal comfort the highest. Health hazards such as mould and contaminated building materials were unacceptable to occupants, while other deficiencies were more likely to be tolerated. Occupant satisfaction was also investigated; we found that most occupants can be classified as satisfied, although some differences exist between populations. To explain the relationship between the constructs of what we call relevance, acceptance, deficiency, and satisfaction, we then created an explanatory model, whose validity was confirmed by correlation and regression analysis of the collected data. The results of the study are relevant both for shaping further research and for providing guidance on how to maximize tenant satisfaction in real estate management.
A broadband soil dielectric spectra retrieval approach (1 MHz–2 GHz) has been implemented for a layered half space. The inversion kernel consists of a two-port transmission line forward model in the frequency domain and a constitutive material equation based on a power-law soil mixture rule (Complex Refractive Index Model, CRIM). The spatially distributed retrieval of broadband dielectric spectra was achieved with a global optimization approach based on a Shuffled Complex Evolution (SCE) algorithm using the full set of scattering parameters. For each layer, the broadband dielectric spectra were retrieved together with the corresponding parameters: thickness, porosity, water saturation, and electrical conductivity of the aqueous pore solution. For the validation of the approach, a coaxial transmission line cell measured with a network analyzer was used. The possibilities and limitations of the inverse parameter estimation were numerically analyzed in four scenarios. Expected and retrieved layer thicknesses, soil properties, and broadband dielectric spectra were in reasonable agreement in each scenario. Hence, the model is suitable for estimating inhomogeneous material parameter distributions. Moreover, the proposed frequency domain approach allows an automatic adaptation of layer number and thickness, or regular grids in time and/or space.
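The CRIM power-law mixture rule named above combines the permittivities of the soil phases, weighted by their volume fractions, under an exponent of 1/2 (the "refractive index" form). A minimal real-valued sketch; the permittivity values are illustrative placeholders, and the actual retrieval uses complex, frequency-dependent permittivities:

```python
def crim_permittivity(porosity, saturation,
                      eps_solid=5.0, eps_water=80.0, eps_air=1.0, alpha=0.5):
    """Effective permittivity of a three-phase soil via the CRIM mixing rule:

        eps_eff**alpha = sum_i f_i * eps_i**alpha,  alpha = 0.5

    Volume fractions: solid = 1 - porosity, water = porosity * saturation,
    air = porosity * (1 - saturation). Phase permittivities are placeholders.
    """
    f_solid = 1.0 - porosity
    f_water = porosity * saturation
    f_air = porosity * (1.0 - saturation)
    eps_eff = (f_solid * eps_solid**alpha
               + f_water * eps_water**alpha
               + f_air * eps_air**alpha) ** (1.0 / alpha)
    return eps_eff
```

Because water dominates the mixture (eps_water ≈ 80 versus ≈ 5 for minerals), the effective permittivity is a sensitive proxy for water saturation, which is what makes the inversion of porosity and saturation from dielectric spectra feasible.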
Critique of Elites, Popular Alliances, and Inclusive Solidarity. An Interview on the Left-Populism Debate
(2018)
In the current economic and political crisis, debates about left strategies are booming again. Particularly controversial are proposals that promote a left populism as an alternative to the right-wing political project and to neoliberalism, and as a strategy of transformation towards a socialist society. With their book Ein unanständiges Angebot? Mit linkem Populismus gegen Eliten und Rechte (2017), Thomas Goes and Violetta Bock have presented a programmatic review of existing concepts of left populism together with their own idea of how a left populism can succeed, thereby fueling the debate on left populism in Germany. In this interview, they are asked about their positions and the controversies surrounding the book. The interview is intended to open a debate; responses to the positions presented, and connections to urban issues and urban social movements, are very welcome.
Following the restructuring of the power industry, electricity supply to end-use customers has undergone fundamental changes. In the restructured power system, some responsibilities of the formerly vertically integrated distribution companies have been assigned to network managers and retailers. Under the new situation, retailers are in charge of providing electrical energy to the consumers who have signed contracts with them. Retailers usually procure the required energy at a variable price from wholesale electricity markets, forward contracts with energy producers, or distributed energy generators, and sell it at a fixed retail price to their clients. Retailers implement different strategies to reduce the potential financial losses and risks associated with the uncertain nature of wholesale spot electricity market prices and of the consumers’ electrical load. This paper investigates the strategic behavior of retailers in using forward contracts, distributed energy sources, and demand-response programs to increase their profit and reduce their risk while keeping retail prices as low as possible. For this purpose, the risk management problem of retailer companies participating in wholesale electricity markets is modeled through a bi-level programming approach, and a comprehensive framework for retail electricity pricing that considers customers’ constraints is provided. In the first level of the proposed bi-level optimization problem, the retailer maximizes its expected profit for a given risk level of profit variability, while in the second level the customers minimize their consumption costs. The problem is formulated as a mixed-integer programming (MIP) problem and can be efficiently solved using available commercial solvers. The simulation results on a test case confirm the effectiveness of the proposed demand-response program, based on a dynamic pricing approach, in reducing the retailer’s risk and increasing its profit.
In this paper, the decision-making problem of retailers under a dynamic pricing approach for demand-response integration has been investigated. The retailer was assumed to rely on forward contracts, DGs, and the spot electricity market to supply the required active and reactive power of its customers. To verify the effectiveness of the proposed model, four schemes for the retailer’s scheduling problem are considered, and the resulting profit under each scheme is analyzed and compared. The simulation results on a test case indicate that giving the retailer more options to buy the required power of its customers, and increasing its flexibility in buying energy from the spot electricity market, reduces the retailer’s risk and increases its profit. From the customers’ perspective, the retailers’ access to different power supply sources may also lead to a reduction in retail electricity prices, since the retailer would be able to decrease its electricity selling price without losing its profitability, with the aim of attracting more customers. In this work, the conditional value at risk (CVaR) measure is used for considering and quantifying risk in the decision-making problems. Among all the options available to the retailer to optimize its profit and risk, demand-response programs are the most beneficial for both the retailer and its customers. The simulation results on the case study show that implementing a dynamic pricing approach on retail electricity prices to integrate demand-response programs can successfully induce customers to shift their flexible demand from peak-load hours to mid-load and low-load hours. Comparing the simulation results of the third and fourth schemes evidences the impact of DRPs and customers’ load shifting on the reduction of the retailer’s risk, as well as the reduction of the retailer’s payments to contract holders, DG owners, and the spot electricity market.
Furthermore, the numerical results point to the potential of reducing average retail prices by up to 8% under demand-response activation. Consequently, the approach provides a win-win solution for both the retailer and its customers.
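The CVaR measure used above has a simple scenario-based interpretation: it is the expected loss in the worst (1 − α) tail of the loss distribution. A minimal sketch of that computation (a generic estimator, not the paper's bi-level formulation, where CVaR enters as auxiliary variables in the MIP):

```python
import numpy as np

def cvar_loss(losses, alpha=0.95):
    """CVaR at level alpha: mean loss over the worst (1 - alpha) tail.

    `losses` holds one loss value per scenario (e.g. negative retailer
    profit under one spot-price realization).
    """
    losses = np.sort(np.asarray(losses, dtype=float))
    var = np.quantile(losses, alpha)   # value at risk (tail threshold)
    tail = losses[losses >= var]       # scenarios at or beyond the threshold
    return tail.mean()
```

Unlike plain value at risk, CVaR accounts for how bad the tail scenarios actually are, and it is coherent and representable with linear constraints, which is why it fits naturally into the retailer's MIP.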
As part of an international research project funded by the European Union, capillary glasses for facades are being developed that exploit thermal storage by means of fluids flowing through the capillaries. To meet the highest visual demands, acrylate adhesives and EVA films are tested as possible bonding materials for the glass setup. Non-destructive methods in particular (visual analysis, analysis of birefringent properties, and computed tomographic data) are applied to evaluate failure patterns as well as the long-term behavior under climatic influences. The experimental investigations are presented after different loading periods, providing information on failure development. In addition, detailed information and scientific findings on the application of computed tomographic analyses are presented.
Biodiesel, the main alternative to diesel fuel, is produced from renewable and available resources and improves engine emissions during combustion in diesel engines. In this study, biodiesel is first produced from waste cooking oil (WCO). The fuel samples are used in a diesel engine, and the engine performance is assessed from both exergy and energy viewpoints. Engine tests are performed at a constant speed of 1500 rpm with various loads and fuel samples. The experimental data obtained are also used to develop an artificial neural network (ANN) model. Response surface methodology (RSM) is employed to optimize the exergy and energy efficiencies. Based on the energy analysis, optimal engine performance is obtained at 80% of full load with the B10 and B20 fuels. Based on the exergy analysis, however, optimal engine performance is obtained at 80% of full load with the B90 and B100 fuels. The optimum values of exergy and energy efficiencies lie in the range of 25–30% of full load, matching the range obtained from the mathematical modeling.
One of the frequently examined design recommendations in multimedia learning is the personalization principle. Based on empirical evidence, this principle states that using personalized messages in multimedia learning is more beneficial than using formal language (e.g. using ‘you’ instead of ‘the’). Although there is evidence that these slight changes in language style affect learning, motivation, and perceived cognitive load, it remains unclear (1) whether the positive effects of personalized language transfer to all kinds of learning material (e.g. specific, potentially aversive health issues) and (2) what the underlying processes (e.g. attention allocation) of the personalization effect are. German university students (N = 37) learned symptoms and causes of cerebral haemorrhages with either a formal or a personalized version of the learning material. The analysis revealed results comparable to the few existing previous studies, indicating an inverted personalization effect for potentially aversive learning material. This effect showed specifically in a decreased average fixation duration and a lower number of fixations exclusively on the images in the personalized compared with the formal version. This result can be seen as an indicator of an inverted effect of personalization at the level of visual attention.
This article analyzes how the world art exhibition documenta 14 engaged with public space in Kassel. Conceived as a critique of global injustice, this year’s documenta did not address the local circumstances in Kassel and instead used the city as a stage. By failing to engage with the concrete processes on site, as the exhibition did in Athens, it broke with the documenta tradition of seeking to contribute to the social development of the city.
The idea of a new municipalism is being widely discussed in left social movements in Europe and beyond. Municipalist movements aim to take over or influence local governments in order to (re)orient local institutions towards the common good, to create a new relationship between municipal governments and social movements, and thereby to democratize the way politics is made from below and to change institutional frameworks. They emerge in reaction to the current economic and political crisis, just like the new right-wing and right-wing populist movements to which they see themselves as a counterpart. They want to confront the multiple urban crisis with courage and concrete utopias rather than with fear and fearmongering, as movements of the right do. To this end, more than 600 representatives of these municipalist movements met in June 2017 at the invitation of Barcelona en Comú.
Which Futures?
(2017)
We propose an enhanced iterative scheme for the precise reconstruction of piezoelectric material parameters from electric impedance and mechanical displacement measurements. It is based on finite-element simulations of the full three-dimensional piezoelectric equations, combined with an inexact Newton or nonlinear Landweber iterative inversion scheme. We apply our method to two piezoelectric materials and test its performance. For the first material, the manufacturer provides a full data set; for the second one, no material data set is available. For both cases, our inverse scheme, using electric impedance measurements as input data, performs well.
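The nonlinear Landweber scheme named above is a gradient-type fixed-point iteration for the inverse problem F(x) = y: at each step, the residual is pulled back through the adjoint (transposed Jacobian) of the forward operator. A minimal sketch on a toy two-parameter problem; the paper's version operates on full 3-D finite-element simulations of the piezoelectric equations, and the step size and iteration count here are arbitrary:

```python
import numpy as np

def landweber(F, J, y, x0, omega=0.1, n_iter=500):
    """Nonlinear Landweber iteration: x <- x - omega * J(x)^T (F(x) - y).

    F : forward operator, F(x) -> measurement vector
    J : Jacobian of F at x (here assumed available in closed form)
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iter):
        r = F(x) - y           # data misfit
        x = x - omega * J(x).T @ r
    return x
```

In the material-parameter setting, F maps the parameter set to simulated impedances and displacements, and the Jacobian-transpose product is what the finite-element machinery must supply; an inexact Newton variant replaces the fixed-step update with an approximately solved linearized system per step.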
Maturation and Structure Formation Processes in Binders with Aqueous Alkali-Silicate Solutions
(2017)
Maturation and structure formation processes in silicate and aluminosilicate binders (e.g. for coating materials ...) can lead to crack formation through restricted deformation and loss of strength, and thus to a loss of durability. These processes are evaluated for silicate materials, with an outlook on aluminosilicate binders.