Keywords
- Maschinelles Lernen (15)
- Machine learning (13)
- Deep learning (5)
- OA-Publikationsfonds2020 (5)
- big data (5)
- machine learning (5)
- OA-Publikationsfonds2018 (4)
- artificial intelligence (3)
- Biodiesel (2)
- Intelligente Stadt (2)
- Internet of things (2)
- OA-Publikationsfonds2019 (2)
- Solar (2)
- artificial neural networks (2)
- clustering (2)
- data science (2)
- extreme learning machine (2)
- mathematical modeling (2)
- random forest (2)
- smart cities (2)
- urban morphology (2)
- wireless sensor networks (2)
- ANN modeling (1)
- Algorithmus (1)
- Artificial Intelligence (1)
- Bodentemperatur (1)
- Bubble column reactor (1)
- CFD (1)
- ContikiMAC (1)
- Data Mining (1)
- Defect generation (1)
- ELM (1)
- Electric Energy Consumption (1)
- Erdbeben (1)
- Erneuerbare Energien (1)
- Fernerkundung (1)
- Fluid (1)
- Fotovoltaik (1)
- Funktechnik (1)
- Gaussian process regression (1)
- Gesundheitsinformationssystem (1)
- Gesundheitswesen (1)
- Hydrological drought (1)
- IOT (1)
- Internet der Dinge (1)
- Internet der dinge (1)
- Internet of Things (1)
- K-nearest neighbors (1)
- KNN (1)
- Kühlkörper (1)
- Künstliche Intelligenz (1)
- Land surface temperature (1)
- M5 model tree (1)
- Machine Learning (1)
- Membrane contactors (1)
- Modeling (1)
- Molecular Liquids (1)
- Morphologie (1)
- Nachhaltigkeit (1)
- Nanomaterials (1)
- Nanostrukturiertes Material (1)
- Nasskühlung (1)
- Neuronales Netz (1)
- Oberflächentemperatur (1)
- Optimierung (1)
- Perovskite (1)
- Prediction (1)
- RSSI (1)
- Renewable energy (1)
- Risikomanagement (1)
- Sensor (1)
- Simulation (1)
- Solar cells (1)
- Solarzelle (1)
- Sustainable production (1)
- Time-dependent (1)
- Tsallis entropy (1)
- Vernetzung (1)
- adaptive neuro-fuzzy inference system (ANFIS) (1)
- ant colony optimization algorithm (ACO) (1)
- artificial neural networks (ANN) (1)
- back-pressure (1)
- biodiesel (1)
- buildings (1)
- classification (1)
- classifier (1)
- clear channel assessments (1)
- cluster density (1)
- cluster shape (1)
- computation (1)
- computational fluid dynamics (CFD) (1)
- computational hydraulics (1)
- congestion control (1)
- coronary artery disease (1)
- demand response programs (1)
- diesel engines (1)
- duty-cycles (1)
- earthquake (1)
- earthquake safety assessment (1)
- energy consumption (1)
- energy efficiency (1)
- energy, exergy (1)
- ensemble model (1)
- estimation (1)
- extreme events (1)
- extreme pressure (1)
- firefly optimization algorithm (1)
- flow pattern (1)
- fog computing (1)
- food informatics (1)
- forecasting (1)
- forward contracts (1)
- fuzzy decision making (1)
- genetic programming (1)
- growth mode (1)
- health (1)
- health informatics (1)
- heart disease diagnosis (1)
- heat sink (1)
- hybrid machine learning (1)
- hybrid machine learning model (1)
- hydraulic jump (1)
- hydrology (1)
- image processing (1)
- industry 4.0 (1)
- least square support vector machine (LSSVM) (1)
- longitudinal dispersion coefficient (1)
- mitigation (1)
- nanofluid (1)
- natural hazard (1)
- neural networks (NNs) (1)
- optimization (1)
- photovoltaic-thermal (PV/T) (1)
- physical activities (1)
- precipitation (1)
- prediction (1)
- predictive model (1)
- public health (1)
- public space (1)
- rapid visual screening (1)
- received signal strength indicator (1)
- reinforcement learning (1)
- remote sensing (1)
- residential buildings (1)
- response surface methodology (1)
- retailer (1)
- rice (1)
- risk management (1)
- rivers (1)
- seasonal precipitation (1)
- seismic assessment (1)
- signal processing (1)
- smart sensors (1)
- smooth rectangular channel (1)
- soil temperature (1)
- spatial analysis (1)
- spatiotemporal database (1)
- spearman correlation coefficient (1)
- standard deviation of pressure fluctuations (1)
- statistical coefficient of the probability distribution (1)
- stilling basin (1)
- stochastic programming (1)
- sugarcane (1)
- support vector machine (1)
- support vector machine (SVM) (1)
- support vector regression (1)
- sustainability (1)
- urban health (1)
- urban sustainability (1)
- wireless sensor network (1)
This research aims to model soil temperature (ST) using two machine learning models, a multilayer perceptron (MLP) and a support vector machine (SVM), each hybridized with the firefly optimization algorithm (FFA) to give MLP-FFA and SVM-FFA. Measured ST and meteorological parameters from the Tabriz and Ahar weather stations over the period 2013–2015 are used for training and testing of the studied models with lags of one and two days. To reach conclusive results for validating the proposed hybrid models, the error metrics are benchmarked in an independent testing period, and Taylor diagrams are utilized for the same purpose. The results showed that, in the case of a one-day lag, MLP-FFA produced superior results compared with the MLP, SVM, and SVM-FFA models, except in predicting ST at 5 cm below the soil surface (ST5cm) at the Tabriz station. For a two-day lag, MLP-FFA showed higher accuracy than SVM-FFA in predicting ST5cm and ST20cm at the Tabriz station and ST10cm at the Ahar station. Additionally, for all of the prescribed models, the performance of the MLP-FFA and SVM-FFA hybrid models in the testing phase was meaningfully superior to that of the classical MLP and SVM models.
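The MLP-FFA and SVM-FFA hybrids use the firefly algorithm to tune model parameters. As a hedged illustration of the optimizer alone (not the paper's code; the population size, coefficients, and the toy quadratic objective standing in for a model's validation error are all assumptions), a minimal firefly search over a two-dimensional box might look like:

```python
import math
import random

def firefly_minimize(objective, bounds, n_fireflies=15, n_iter=80,
                     beta0=1.0, gamma=0.01, alpha=0.2, seed=42):
    """Minimize `objective` over the box `bounds` with a basic firefly search.

    Brighter (lower-cost) fireflies attract dimmer ones; attractiveness
    decays with squared distance as beta0 * exp(-gamma * r^2).
    """
    rng = random.Random(seed)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_fireflies)]
    cost = [objective(x) for x in pop]
    for _ in range(n_iter):
        for i in range(n_fireflies):
            for j in range(n_fireflies):
                if cost[j] < cost[i]:  # firefly j is brighter: move i toward j
                    r2 = sum((a - b) ** 2 for a, b in zip(pop[i], pop[j]))
                    beta = beta0 * math.exp(-gamma * r2)
                    for d, (lo, hi) in enumerate(bounds):
                        step = beta * (pop[j][d] - pop[i][d]) \
                               + alpha * (rng.random() - 0.5) * (hi - lo)
                        pop[i][d] = min(hi, max(lo, pop[i][d] + step))
                    cost[i] = objective(pop[i])
        alpha *= 0.97  # cool the random walk over iterations
    best = min(range(n_fireflies), key=lambda k: cost[k])
    return pop[best], cost[best]

# Toy stand-in for a model's validation error over two hyperparameters;
# the true minimum sits at (3, -1).
best_x, best_f = firefly_minimize(lambda x: (x[0] - 3) ** 2 + (x[1] + 1) ** 2,
                                  bounds=[(-5, 5), (-5, 5)])
```

In the hybrid models, `objective` would instead retrain the MLP or SVM with the candidate parameters and return its validation error.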
Biodiesel, the main alternative to diesel fuel, is produced from renewable and readily available resources and improves engine emissions during combustion in diesel engines. In this study, biodiesel is first produced from waste cooking oil (WCO). The fuel samples are then applied in a diesel engine, and the engine performance is evaluated from both energy and exergy viewpoints. Engine tests are performed at a constant speed of 1500 rpm with various loads and fuel samples. The experimental data are also used to develop an artificial neural network (ANN) model, and response surface methodology (RSM) is employed to optimize the energy and exergy efficiencies. Based on the results of the energy analysis, optimal engine performance is obtained at 80% of full load with the B10 and B20 fuels; based on the exergy analysis, however, it is obtained at 80% of full load with the B90 and B100 fuels. The optimum values of the energy and exergy efficiencies lie in the range of 25–30% of full load, matching the range obtained from mathematical modeling.
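For context, the energy and exergy efficiencies optimized in such studies have textbook definitions: brake power divided by the fuel energy rate, and by the fuel exergy rate, respectively. A minimal sketch with illustrative numbers rather than the study's measurements (the chemical-exergy factor phi ≈ 1.06 is a typical approximation for diesel-like fuels, an assumption here):

```python
def energy_efficiency(brake_power_kw, fuel_flow_kg_s, lhv_kj_kg):
    """Brake thermal (energy) efficiency: useful power over fuel energy rate."""
    return brake_power_kw / (fuel_flow_kg_s * lhv_kj_kg)

def exergy_efficiency(brake_power_kw, fuel_flow_kg_s, lhv_kj_kg, phi=1.06):
    """Exergy efficiency: fuel chemical exergy approximated as phi * LHV."""
    return brake_power_kw / (fuel_flow_kg_s * lhv_kj_kg * phi)

# Illustrative operating point, not data from the study:
eta = energy_efficiency(10.0, 0.00065, 42_500)   # ~0.362
psi = exergy_efficiency(10.0, 0.00065, 42_500)   # slightly lower than eta
```

Because the fuel exergy exceeds its LHV, the exergy efficiency is always somewhat below the energy efficiency for the same operating point.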
We propose two different time-dependent modeling approaches for the variation of the device characteristics of perovskite solar cells under stress conditions. The first follows the Sah-Noyce-Shockley (SNS) model, based on Shockley–Read–Hall recombination/generation across the depletion width of a p-n junction; the second is based on the thermionic emission model for Schottky diodes. Both approaches are connected to time through time-dependent defect generation in the depletion width (W) of the junction. We fitted the two models to experimental perovskite solar cell data reported in the literature and found that each model offers a superior explanation for the degradation of particular device metrics, e.g. current density and efficiency, over time under stress conditions. Nevertheless, the Sah-Noyce-Shockley model is more reliable than thermionic emission, at least for solar cells.
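The link between defect generation and degradation can be sketched numerically: if the trap density in the depletion width grows with stress time, the Shockley–Read–Hall lifetime shortens and the SNS-type recombination current prefactor rises. The parameter values below are illustrative order-of-magnitude assumptions, not the fitted values from the study:

```python
Q = 1.602e-19  # elementary charge, C

def recombination_current(t_hours, n_i=1e5, width_cm=4e-5,
                          sigma_v=1e-8, n_t0=1e12, gen_rate=1e10):
    """Prefactor of the SNS recombination current, J ~ q * n_i * W / (2 * tau),
    with 1/tau = sigma_v * N_t(t) and a linearly growing trap density
    N_t(t) = n_t0 + gen_rate * t (all densities in cm^-3, W in cm)."""
    n_t = n_t0 + gen_rate * t_hours
    inv_tau = sigma_v * n_t  # shorter carrier lifetime as traps accumulate
    return Q * n_i * width_cm * inv_tau / 2.0
```

The monotone rise of this term with stress time is what maps defect generation onto the observed decay of current density and efficiency.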
Following the restructuring of the power industry, electricity supply to end-use customers has undergone fundamental changes. In the restructured power system, some of the responsibilities of the vertically integrated distribution companies have been assigned to network managers and retailers. Under the new arrangement, retailers are in charge of providing electrical energy to the electricity consumers who have signed contracts with them. Retailers usually procure the required energy at a variable price from wholesale electricity markets, forward contracts with energy producers, or distributed energy generators, and sell it at a fixed retail price to their clients. Retailers implement different strategies to reduce the potential financial losses and risks associated with the uncertain wholesale spot electricity market prices and consumer electrical load. In this paper, the strategic behavior of retailers in using forward contracts, distributed energy sources, and demand-response programs to increase their profit and reduce their risk, while keeping retail prices as low as possible, is investigated. For this purpose, the risk management problem of retailer companies participating in wholesale electricity markets is modeled through a bi-level programming approach, and a comprehensive framework for retail electricity pricing that accounts for customers' constraints is provided. In the first level of the proposed bi-level optimization problem, the retailer maximizes its expected profit for a given risk level of profit variability, while in the second level the customers minimize their consumption costs. The problem is formulated as a mixed-integer programming (MIP) problem and can be solved efficiently with available commercial solvers.
The simulation results on a test case confirm the effectiveness of the proposed dynamic-pricing-based demand-response program in reducing the retailer's risk and increasing its profit.
In this paper, the decision-making problem of retailers under a dynamic pricing approach for demand response integration has been investigated. The retailer was assumed to rely on forward contracts, DGs, and the spot electricity market to supply the required active and reactive power of its customers. To verify the effectiveness of the proposed model, four schemes for the retailer's scheduling problem are considered, and the resulting profit under each scheme is analyzed and compared. The simulation results on a test case indicate that giving the retailer more options for buying the required power of its customers, and more flexibility in buying energy from the spot electricity market, reduces the retailer's risk and increases its profit. From the customers' perspective, the retailer's access to different power supply sources may also lead to a reduction in retail electricity prices, since the retailer is able to decrease its electricity selling price without losing profitability, with the aim of attracting more customers. In this work, the conditional value at risk (CVaR) measure is used for considering and quantifying risk in the decision-making problem. Among all the options available to the retailer for optimizing its profit and risk, demand response programs are the most beneficial for both the retailer and its customers. The simulation results on the case study show that applying the dynamic pricing approach to retail electricity prices to integrate demand response programs can successfully induce customers to shift their flexible demand from peak-load hours to mid-load and low-load hours. Comparing the simulation results of the third and fourth schemes evidences the impact of DRPs and customers' load shifting on the reduction of the retailer's risk, as well as the reduction of the retailer's payments to contract holders, DG owners, and the spot electricity market.
Furthermore, the numerical results point to a potential reduction in average retail prices of up to 8% under demand response activation. Consequently, the approach provides a win-win solution for both the retailer and its customers.
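The CVaR measure used above has a simple sample form: the expected profit over the worst (1 − α) fraction of scenarios. A minimal sketch with made-up profit scenarios (the data and the α level are illustrative, not from the case study):

```python
def cvar(profits, alpha=0.95):
    """Conditional value-at-risk of a profit sample: the mean profit over
    the worst (1 - alpha) tail of scenarios (lower values mean more risk)."""
    ordered = sorted(profits)  # worst scenarios first
    n_tail = max(1, int(round(len(ordered) * (1 - alpha))))
    tail = ordered[:n_tail]
    return sum(tail) / len(tail)

# Hypothetical profit scenarios for a retailer (arbitrary units):
scenarios = [120, 95, 80, 130, 60, 110, 40, 125, 100, 90,
             115, 70, 135, 105, 85, 55, 140, 98, 102, 88]
risk = cvar(scenarios, alpha=0.90)  # mean of the two worst scenarios
```

In the bi-level model, a constraint on this quantity is what trades expected profit against the severity of the worst-case outcomes.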
One of the most important subjects in hydraulic engineering is the reliable estimation of the transverse distribution of bed and wall shear stresses in rectangular channels. This study uses the Tsallis entropy, genetic programming (GP), and adaptive neuro-fuzzy inference system (ANFIS) methods to assess the shear stress distribution (SSD) in the rectangular channel.
To evaluate the results of the Tsallis entropy, GP, and ANFIS models, laboratory observations were used in which shear stress was measured with an optimized Preston tube; these measurements give the SSD at various aspect ratios in the rectangular channel. To investigate the shear stress percentage, 10 data series with a total of 112 different data points were used. The sensitivity analysis shows that the most influential parameter for the SSD in a smooth rectangular channel is the dimensionless ratio B/H, where B is the channel width and H the flow depth. With (b/B) and (B/H) as inputs for the bed, and (z/H) and (B/H) for the wall, the GP model performed better than the others. Based on the analysis, it can be concluded that the GP and ANFIS algorithms are more effective at estimating shear stress in smooth rectangular channels than the Tsallis entropy-based equations.
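For reference, the Tsallis entropy underlying the entropy-based equations is S_q = (1 − Σ p_i^q)/(q − 1), which recovers the Shannon entropy in the limit q → 1. A small numeric sketch (the probability vector is illustrative, not the study's distribution):

```python
import math

def tsallis_entropy(p, q):
    """S_q = (1 - sum p_i^q) / (q - 1); tends to the Shannon entropy as q -> 1."""
    if abs(q - 1.0) < 1e-9:
        # q = 1 limit: Shannon entropy (natural log convention)
        return -sum(pi * math.log(pi) for pi in p if pi > 0)
    return (1.0 - sum(pi ** q for pi in p)) / (q - 1.0)

p = [0.5, 0.3, 0.2]
s_near_1 = tsallis_entropy(p, 1.000001)  # approaches the Shannon value
shannon = tsallis_entropy(p, 1.0)
```

Entropy-based SSD methods maximize such a functional subject to flow constraints; the nonextensivity index q is the extra degree of freedom relative to the Shannon case.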
The longitudinal dispersion coefficient (LDC) plays an important role in modeling the transport of pollutants and sediment in natural rivers. As a result of transportation processes, the concentration of pollutants changes along the river. Various studies have been conducted to provide simple equations for estimating LDC. In this study, machine learning methods, namely support vector regression, Gaussian process regression, M5 model tree (M5P) and random forest, and multiple linear regression were examined in predicting the LDC in natural streams. Data sets from 60 rivers around the world with different hydraulic and geometric features were gathered to develop models for LDC estimation. Statistical criteria, including correlation coefficient (CC), root mean squared error (RMSE) and mean absolute error (MAE), were used to scrutinize the models. The LDC values estimated by these models were compared with the corresponding results of common empirical models. The Taylor chart was used to evaluate the models and the results showed that among the machine learning models, M5P had superior performance, with CC of 0.823, RMSE of 454.9 and MAE of 380.9. The model of Sahay and Dutta, with CC of 0.795, RMSE of 460.7 and MAE of 306.1, gave more precise results than the other empirical models. The main advantage of M5P models is their ability to provide practical formulae. In conclusion, the results proved that the developed M5P model with simple formulations was superior to other machine learning models and empirical models; therefore, it can be used as a proper tool for estimating the LDC in rivers.
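The statistical criteria used to scrutinize the models (CC, RMSE, MAE) have standard definitions; a self-contained sketch with toy observed/predicted values (not the study's river data):

```python
import math

def cc(obs, pred):
    """Pearson correlation coefficient between observed and predicted values."""
    n = len(obs)
    mo, mp = sum(obs) / n, sum(pred) / n
    num = sum((o - mo) * (p - mp) for o, p in zip(obs, pred))
    den = math.sqrt(sum((o - mo) ** 2 for o in obs)
                    * sum((p - mp) ** 2 for p in pred))
    return num / den

def rmse(obs, pred):
    """Root mean squared error; penalizes large errors quadratically."""
    return math.sqrt(sum((o - p) ** 2 for o, p in zip(obs, pred)) / len(obs))

def mae(obs, pred):
    """Mean absolute error; robust to a few large outliers."""
    return sum(abs(o - p) for o, p in zip(obs, pred)) / len(obs)

obs = [10.0, 20.0, 30.0, 40.0]
pred = [12.0, 18.0, 33.0, 39.0]
```

Reporting all three together, as the study does, guards against a model that scores well on correlation but poorly on absolute error or vice versa.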
Cooling Performance of a Novel Circulatory Flow Concentric Multi-Channel Heat Sink with Nanofluids
(2020)
Heat rejection from electronic devices such as processors necessitates a high heat removal rate. The present study focuses on a novel liquid-cooled heat sink geometry made from four channels (width 4 mm, depth 3.5 mm) configured in a concentric shape with alternate flow passages (slots with a 3 mm gap). The cooling performance of the heat sink was tested under simulated controlled conditions. The bottom surface of the heat sink was heated at a constant heat flux corresponding to dissipated powers of 50 W and 70 W. The computations were carried out for nanoparticle volume fractions from 0.5% to 5%, with water as the base fluid, at flow rates of 30 to 180 mL/min. The results showed a higher rate of heat rejection from the nanofluid-cooled heat sink than with water. The enhancement in performance was analyzed via the difference between the nanofluid outlet temperature and the water outlet temperature under similar operating conditions; it was ~2% for a 0.5% nanoparticle volume fraction and ~17% for a 5% volume fraction.
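The study reports measured enhancement; as a rough cross-check, one can estimate how much a given nanoparticle volume fraction raises the coolant's thermal conductivity with the classical Maxwell mixture model. This is a generic sketch, not the paper's method, and the water and alumina conductivities below are typical handbook values, not values from the study:

```python
def maxwell_k_eff(k_fluid, k_particle, phi):
    """Maxwell model for the effective thermal conductivity of a dilute
    nanoparticle suspension (phi = particle volume fraction, valid for
    small phi, roughly below ~0.05)."""
    num = k_particle + 2 * k_fluid + 2 * phi * (k_particle - k_fluid)
    den = k_particle + 2 * k_fluid - phi * (k_particle - k_fluid)
    return k_fluid * num / den

# Water base fluid (~0.613 W/m K) with alumina-like particles (~40 W/m K):
k_05 = maxwell_k_eff(0.613, 40.0, 0.005)  # 0.5 % volume fraction
k_50 = maxwell_k_eff(0.613, 40.0, 0.05)   # 5 % volume fraction
```

The conductivity gain alone grows only a few percent per volume-fraction step, which is why heat sink studies also attribute enhancement to flow and geometry effects rather than conductivity alone.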
Classical Internet of Things routing and wireless sensor networks can provide more precise monitoring of the covered area thanks to the higher number of deployed nodes. Because of the limitations of the shared transfer medium, however, many nodes in the network are prone to collisions during simultaneous transmissions. Medium access control protocols are usually practical only in networks with low traffic that are not subject to external noise from adjacent frequencies. Congestion management in the network comprises prevention, detection, and control solutions, all of which are the focus of this study. In the congestion prevention phase, the proposed method chooses the next hop of the path using a fuzzy decision-making system to distribute network traffic over optimal paths. In the congestion detection phase, a dynamic queue management approach was designed to detect congestion in the least amount of time and prevent collisions. In the congestion control phase, a back-pressure method based on queue quality was used to decrease the probability of routing through a pre-congested node. The main goals of this study are to balance energy consumption across network nodes, reduce the rate of lost packets, and increase the quality of service in routing. Simulation results showed that the proposed Congestion Control Fuzzy Decision Making (CCFDM) method improved routing parameters more than recent algorithms.
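The fuzzy next-hop choice in the congestion prevention phase can be sketched generically: score each candidate node by fuzzy memberships of its state variables and combine them with a fuzzy AND. The membership shapes, state variables, and candidate values below are assumptions for illustration, not CCFDM's actual rule base:

```python
def triangular(x, a, b, c):
    """Triangular membership function peaking at b on the interval [a, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def next_hop_score(residual_energy, queue_occupancy):
    """Fuzzy preference for a candidate next hop: high residual energy
    AND a short queue (min acts as the fuzzy AND operator)."""
    energy_high = triangular(residual_energy, 0.3, 1.0, 1.7)   # peak at full energy
    queue_short = triangular(queue_occupancy, -0.7, 0.0, 0.7)  # peak at empty queue
    return min(energy_high, queue_short)

# Hypothetical candidates: (residual energy, queue occupancy), both in [0, 1].
candidates = {"n1": (0.9, 0.2), "n2": (0.6, 0.1), "n3": (0.95, 0.8)}
best = max(candidates, key=lambda n: next_hop_score(*candidates[n]))
```

The fuzzy AND keeps a node with a nearly full queue from being chosen no matter how much energy it has left, which is exactly the congestion-avoidance behavior the prevention phase needs.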
Coronary Artery Disease Diagnosis: Ranking the Significant Features Using a Random Trees Model
(2020)
Heart disease is one of the most common diseases among middle-aged citizens. Among the vast number of heart diseases, coronary artery disease (CAD) is a common cardiovascular disease with a high death rate. The most popular tool for diagnosing CAD is medical imaging, e.g., angiography. However, angiography is costly and associated with a number of side effects. Hence, the purpose of this study is to increase the accuracy of coronary heart disease diagnosis by selecting significant predictive features in order of their ranking. We propose an integrated machine learning method using random trees (RTs), a C5.0 decision tree, a support vector machine (SVM), and a Chi-squared automatic interaction detection (CHAID) decision tree. The proposed method shows promising results, and the study confirms that the RTs model outperforms the other models.
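Ranking predictive features is the core of the method, and tree models typically rank them by impurity reduction. A minimal stand-in (not the paper's RTs implementation; the labels and binary features are toy data) that ranks features by information gain:

```python
import math

def entropy(labels):
    """Shannon entropy (bits) of a label sequence."""
    n = len(labels)
    counts = {}
    for y in labels:
        counts[y] = counts.get(y, 0) + 1
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def information_gain(feature, labels):
    """Entropy reduction from splitting the labels on a feature's values;
    the same impurity idea tree ensembles use to rank predictors."""
    n = len(labels)
    gain = entropy(labels)
    for value in set(feature):
        subset = [y for x, y in zip(feature, labels) if x == value]
        gain -= (len(subset) / n) * entropy(subset)
    return gain

# Toy diagnosis-style data: feature "a" separates the labels perfectly,
# feature "b" is pure noise.
labels = [1, 1, 1, 0, 0, 0]
features = {"a": [1, 1, 1, 0, 0, 0], "b": [1, 0, 1, 0, 1, 0]}
ranking = sorted(features,
                 key=lambda f: information_gain(features[f], labels),
                 reverse=True)
```

Forest-style models average this kind of score over many randomized trees, which makes the resulting feature ranking far more stable than a single-tree ranking.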
Earthquakes are among the most devastating natural disasters, causing severe economic, environmental, and social destruction. Earthquake safety assessment and building hazard monitoring can contribute substantially to urban sustainability through identification of, and insight into, optimal materials and structures. While the vulnerability of a structure mainly depends on its structural resistance, the safety assessment of buildings can be highly challenging. In this paper, we consider the Rapid Visual Screening (RVS) method, a qualitative procedure for estimating structural scores for buildings that is suitable for medium- to high-seismicity settings. The paper presents an overview of the common RVS methods, i.e., FEMA P-154, IITK-GGSDMA, and EMPI. To examine their accuracy and validity, a practical comparison is performed between their assessments and the observed damage of reinforced concrete buildings from a street survey in the Bingöl region, Turkey, after the 1 May 2003 earthquake. The results demonstrate that RVS methods are a vital tool for preliminary damage estimation. Furthermore, the comparative analysis shows that FEMA P-154 overestimates damage states and is not economically viable, while EMPI and IITK-GGSDMA provide more accurate and more practical estimates, respectively.
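RVS methods such as FEMA P-154 reduce to a simple arithmetic screen: a basic structural score plus applicable score modifiers, compared against a cutoff (2.0 in FEMA P-154) below which a detailed seismic evaluation is recommended. A sketch with illustrative values, not data from the Bingöl survey:

```python
def rvs_final_score(basic_score, modifiers, cutoff=2.0):
    """FEMA P-154-style screen: final score S = basic structural score plus
    the applicable score modifiers; S below the cutoff flags the building
    for a detailed seismic evaluation."""
    s = basic_score + sum(modifiers)
    return s, s < cutoff

# Illustrative inputs (hypothetical building, not from the survey):
score, needs_detailed_review = rvs_final_score(
    basic_score=2.5,
    modifiers=[-0.5, -0.4],  # e.g. vertical irregularity, unfavorable soil
)
```

Because the modifiers are additive, the screen is fast enough for street surveys, which is exactly the sidewalk-level use case the comparison in the paper evaluates.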