Filter
Document type
- Article (scientific) (254)
Institute
- Institut für Strukturmechanik (ISM) (254)
Keywords
- Angewandte Mathematik (183)
- Strukturmechanik (183)
- Stochastik (40)
- Maschinelles Lernen (26)
- OA-Publikationsfonds2020 (19)
- Machine learning (15)
- machine learning (8)
- Deep learning (6)
- Erdbeben (6)
- OA-Publikationsfonds2022 (6)
- big data (6)
- Wärmeleitfähigkeit (5)
- Finite-Elemente-Methode (4)
- rapid visual screening (4)
- Neuronales Netz (3)
- OA-Publikationsfonds2018 (3)
- OA-Publikationsfonds2021 (3)
- Optimierung (3)
- artificial intelligence (3)
- artificial neural networks (3)
- damaged buildings (3)
- earthquake safety assessment (3)
- random forest (3)
- support vector machine (3)
- Biodiesel (2)
- Bruchmechanik (2)
- Elastizität (2)
- Fahrleitung (2)
- Fehlerabschätzung (2)
- Fotovoltaik (2)
- Fuzzy-Logik (2)
- Intelligente Stadt (2)
- Internet of things (2)
- Künstliche Intelligenz (2)
- OA-Publikationsfonds2019 (2)
- Schaden (2)
- Tragfähigkeit (2)
- Transfer learning (2)
- Vulnerability assessment (2)
- buildings (2)
- data science (2)
- earthquake (2)
- extreme learning machine (2)
- mathematical modeling (2)
- smart cities (2)
- soft computing techniques (2)
- urban morphology (2)
- variational principle (2)
- vulnerability assessment (2)
- wireless sensor networks (2)
- 3D printing (1)
- 3D reinforced concrete buildings (1)
- 3D-Druck (1)
- ANN modeling (1)
- Activation function (1)
- Adaptive Pushover (1)
- Algorithmus (1)
- Arc-direct energy deposition (1)
- Artificial Intelligence (1)
- Artificial neural network (1)
- Batterie (1)
- Baustahl (1)
- Bayes-Verfahren (1)
- Bayesian inference (1)
- Beam-to-column connection; semi-rigid; flush end-plate connection; moment-rotation curve (1)
- Beton (1)
- Bildanalyse (1)
- Bodenmechanik (1)
- Bodentemperatur (1)
- Bornitrid (1)
- Bubble column reactor (1)
- Building safety assessment (1)
- Catenary poles (1)
- Chirurgie (1)
- Collocation method (1)
- ContikiMAC (1)
- Damage accumulation (1)
- Damm (1)
- Defekt (1)
- Design Spectra (1)
- Domain Adaptation (1)
- Dreidimensionales Modell (1)
- Druckluft (1)
- Dual phase steel (1)
- Dual-support (1)
- ELM (1)
- Earthquake (1)
- Elektrostatische Welle (1)
- Empire XPU 8.01 (1)
- Energieeffizienz (1)
- Energiespeicherung (1)
- Erdbeben (1)
- Erdbebensicherheit (1)
- Erneuerbare Energien (1)
- Fachwerkbau (1)
- Fahrleitungsmast (1)
- Fatigue life (1)
- Fernerkundung (1)
- Feststoff (1)
- Fluid (1)
- Funktechnik (1)
- Fuzzy Logic (1)
- Fuzzy-Regelung (1)
- Gaussian process regression (1)
- Gebäude (1)
- Geoinformatik (1)
- Geometrie (1)
- Gesundheitsinformationssystem (1)
- Gesundheitswesen (1)
- Graphen (1)
- Graphene (1)
- Grundwasser (1)
- Größenverhältnis (1)
- High-speed electric train (1)
- Holzkonstruktion (1)
- Hydrological drought (1)
- IOT (1)
- Implicit (1)
- Infrastructures (1)
- Internet der Dinge (1)
- Internet der dinge (1)
- Internet of Things (1)
- K-nearest neighbors (1)
- KNN (1)
- Kaverne (1)
- Kollokationsmethode (1)
- Körper (1)
- Kühlkörper (1)
- Land surface temperature (1)
- Lebenszyklus (1)
- Loading sequence (1)
- M5 model tree (1)
- MATLAB (1)
- MDLSM method (1)
- Machine Learning (1)
- Marmara Region (1)
- Materialverhalten (1)
- Matlab (1)
- Mechanische Eigenschaft (1)
- Mensch (1)
- Mikrokapsel (1)
- Mild steel (1)
- MoS2 (1)
- Model-free status monitoring (1)
- Modellierung (1)
- Molekülstruktur (1)
- Morphologie (1)
- Multi-criteria decision making (1)
- Multi-objective Evolutionary Optimization, Elitist Non- Dominated Sorting Evolution Strategy (ENSES), Sandwich Structure, Pareto-Optimal Solutions, Evolutionary Algorithm (1)
- NURBS (1)
- NURBS geometry (1)
- Nachhaltigkeit (1)
- Nanofluid (1)
- Nanomaterials (1)
- Nanomechanik (1)
- Nanopore (1)
- Nanoporöser Stoff (1)
- Nanoribbons, thermal conductivity (1)
- Nanostrukturiertes Material (1)
- Nasskühlung (1)
- Naturkatastrophe (1)
- Navier–Stokes equations (1)
- Neuronales Lernen (1)
- Nitratbelastung (1)
- Nonlocal operator method (1)
- OA-Publikationsfonds2023 (1)
- Oberflächentemperatur (1)
- Operator energy functional (1)
- Peridynamik (1)
- Physikalische Eigenschaft (1)
- Polymere (1)
- Polymorphie (1)
- Potential problem (1)
- RSSI (1)
- Rainflow counting algorithm (1)
- Rapid Visual Screening (1)
- Renewable energy (1)
- Riss (1)
- Rissausbreitung (1)
- Rotorblatt (1)
- SHM (1)
- Schadensakkumulation (1)
- Schadenserkennung (1)
- Schubspannung (1)
- Schwellenwert (1)
- Schwingung (1)
- Seismic risk (1)
- Sensor (1)
- Sigmoid function (1)
- Solar (1)
- Spannungs-Dehnungs-Beziehung (1)
- Stahlbau (1)
- Stahlbetonkonstruktion (1)
- Steifigkeit (1)
- Stiffness matrix (1)
- Stoffeigenschaft (1)
- Stress-strain curve (1)
- Strukturanalyse (1)
- Stütze (1)
- Sustainability (1)
- Sustainable production (1)
- TPOGS (1)
- Taylor series expansion (1)
- Thermal conductivity (1)
- Träger (1)
- Tsallis entropy (1)
- Variational principle (1)
- Vernetzung (1)
- Vulnerability (1)
- Werkstoff (1)
- Wind load (1)
- Windkraftwerk (1)
- Windlast (1)
- action recognition (1)
- adaptive neuro-fuzzy inference system (ANFIS) (1)
- adaptive pushover (1)
- ant colony optimization algorithm (ACO) (1)
- artificial neural network (1)
- back-pressure (1)
- battery (1)
- biodiesel (1)
- capsular clustering (1)
- circumferential contact length (1)
- classification (1)
- classifier (1)
- clear channel assessments (1)
- cluster density (1)
- cluster shape (1)
- clustering (1)
- computation (1)
- computational fluid dynamics (CFD) (1)
- computational hydraulics (1)
- congestion control (1)
- coronary artery disease (1)
- crack detection (1)
- damage identification (1)
- dams (1)
- deep learning (1)
- deep learning neural network (1)
- diesel engines (1)
- dimensionality reduction (1)
- dual-support (1)
- duty-cycles (1)
- earthquake damage (1)
- earthquake vulnerability assessment (1)
- electromagnetic waves (1)
- energy consumption (1)
- energy efficiency (1)
- energy form (1)
- energy, exergy (1)
- ensemble model (1)
- estimation (1)
- explicit time integration (1)
- extreme events (1)
- extreme pressure (1)
- firefly optimization algorithm (1)
- flow pattern (1)
- fog computing (1)
- food informatics (1)
- fractional-order control (1)
- full-waveform inversion (1)
- fused filament fabrication (1)
- fuzzy decision making (1)
- genetic programming (1)
- geoinformatics (1)
- ground structure (1)
- ground water contamination (1)
- growth mode (1)
- gully erosion susceptibility (1)
- health (1)
- health informatics (1)
- heart disease diagnosis (1)
- heat sink (1)
- human blob (1)
- human body proportions (1)
- hybrid machine learning (1)
- hybrid machine learning model (1)
- hybride Werkstoffe (1)
- hydraulic jump (1)
- hydrological model (1)
- hydrology (1)
- image processing (1)
- industry 4.0 (1)
- inverse analysis (1)
- least square support vector machine (LSSVM) (1)
- longitudinal dispersion coefficient (1)
- microcapsule (1)
- mitigation (1)
- molecular dynamics (1)
- nanofluid (1)
- nanoreinforced composites (1)
- natural hazard (1)
- neural architecture search (1)
- neural networks (NNs) (1)
- nonlocal Hessian operator (1)
- nonlocal operator method (1)
- numerical modelling (1)
- operator energy functional (1)
- particle swarm optimization (1)
- peridynamics (1)
- photovoltaic (1)
- photovoltaic-thermal (PV/T) (1)
- physical activities (1)
- polymorphe Unschärfemodellierung (1)
- precipitation (1)
- prediction (1)
- predictive model (1)
- principal component analysis (1)
- public health (1)
- public space (1)
- randomized spectral representation (1)
- rapid assessment (1)
- rapid classification (1)
- received signal strength indicator (1)
- reinforcement learning (1)
- remote sensing (1)
- residential buildings (1)
- response surface methodology (1)
- rice (1)
- rivers (1)
- rule based classification (1)
- seasonal precipitation (1)
- seismic assessment (1)
- seismic hazard analysis (1)
- seismic risk estimation (1)
- seismic vulnerability (1)
- self-healing concrete (1)
- signal processing (1)
- site-specific spectrum (1)
- smart sensors (1)
- smooth rectangular channel (1)
- soil temperature (1)
- spatial analysis (1)
- spatiotemporal database (1)
- spearman correlation coefficient (1)
- square root cubature Kalman filter (1)
- standard deviation of pressure fluctuations (1)
- statistical analysis (1)
- statistical coefficient of the probability distribution (1)
- stilling basin (1)
- sugarcane (1)
- supervised learning (1)
- support vector regression (1)
- sustainability (1)
- three-dimensional truss structures (1)
- topology optimization (1)
- type-3 fuzzy systems (1)
- urban health (1)
- urban sustainability (1)
- vibration-based damage identification (1)
- vibration-based methodology (1)
- water quality (1)
- wave propagation (1)
- wavelet transform (1)
- weak form (1)
- wind turbine rotor blades (1)
- wireless sensor network (1)
We investigate the thermal conductivity of armchair and zigzag MoS2 nanoribbons by combining the non-equilibrium Green's function approach with the first-principles method. A strong orientation dependence is observed in the thermal conductivity: at room temperature it is about 673.6 Wm−1 K−1 in the armchair nanoribbon and 841.1 Wm−1 K−1 in the zigzag nanoribbon. By calculating the Caroli transmission, we show that the underlying mechanism for this strong orientation dependence is the smaller number of phonon transport channels in the armchair MoS2 nanoribbon in the frequency range of [150, 200] cm−1. Through the scaling of the phonon dispersion, we further illustrate that the thermal conductivity calculated for the MoS2 nanoribbon is essentially consistent with the superior thermal conductivity found for graphene.
Cooling Performance of a Novel Circulatory Flow Concentric Multi-Channel Heat Sink with Nanofluids
(2020)
Heat rejection from electronic devices such as processors necessitates a high heat removal rate. The present study focuses on a liquid-cooled novel heat sink geometry made from four channels (width 4 mm and depth 3.5 mm) configured in a concentric shape with alternate flow passages (slot of 3 mm gap). In this study, the cooling performance of the heat sink was tested under simulated controlled conditions. The lower bottom surface of the heat sink was heated at a constant heat flux condition based on dissipated power of 50 W and 70 W. The computations were carried out for different volume fractions of nanoparticles, namely 0.5% to 5%, with water as the base fluid, at flow rates of 30 to 180 mL/min. The results showed a higher rate of heat rejection from the nanofluid-cooled heat sink compared with water. The enhancement in performance was analyzed with the help of the difference between the nanofluid outlet temperature and the water outlet temperature under similar operating conditions. The enhancement was ~2% for a 0.5% volume fraction of nanofluids and ~17% for a 5% volume fraction.
This paper presents several aspects of characterization of welding heat source parameters in Goldak’s double ellipsoidal model using Sysweld simulation of welding of two overlapping beads on a substrate steel plate. The overlap percentages ranged from 40% to 80% in increments of 10%. The new material properties of the fused metal were characterized using Weldware and their continuous cooling transformation curves. The convective and radiative heat transfer coefficients as well as the cooling time t8/5 were estimated using numerical formulations from relevant standards. The effects of the simulation geometry and mesh discretization were evaluated in terms of the factor F provided in Sysweld. Eventually, the parameters of Goldak’s double ellipsoidal heat source model were determined for the welding simulation of overlapping beads on the plate and the simulated bead geometry, extent of the molten pool and the HAZ were compared with the macrographs of cross-sections of the experimental weldments. The results showed excellent matching, thus verifying this methodology for determination of welding heat source parameters.
The longitudinal dispersion coefficient (LDC) plays an important role in modeling the transport of pollutants and sediment in natural rivers. As a result of transportation processes, the concentration of pollutants changes along the river. Various studies have been conducted to provide simple equations for estimating LDC. In this study, machine learning methods, namely support vector regression, Gaussian process regression, M5 model tree (M5P) and random forest, and multiple linear regression were examined in predicting the LDC in natural streams. Data sets from 60 rivers around the world with different hydraulic and geometric features were gathered to develop models for LDC estimation. Statistical criteria, including correlation coefficient (CC), root mean squared error (RMSE) and mean absolute error (MAE), were used to scrutinize the models. The LDC values estimated by these models were compared with the corresponding results of common empirical models. The Taylor chart was used to evaluate the models and the results showed that among the machine learning models, M5P had superior performance, with CC of 0.823, RMSE of 454.9 and MAE of 380.9. The model of Sahay and Dutta, with CC of 0.795, RMSE of 460.7 and MAE of 306.1, gave more precise results than the other empirical models. The main advantage of M5P models is their ability to provide practical formulae. In conclusion, the results proved that the developed M5P model with simple formulations was superior to other machine learning models and empirical models; therefore, it can be used as a proper tool for estimating the LDC in rivers.
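The statistical criteria used above to rank the LDC models (CC, RMSE, MAE) can be sketched in a few lines. The function below is an illustrative implementation, not code from the study.

```python
import math

def regression_metrics(obs, pred):
    """Correlation coefficient (CC), root mean squared error (RMSE)
    and mean absolute error (MAE) between observed and predicted values,
    the three criteria used to scrutinize the LDC models."""
    n = len(obs)
    mo = sum(obs) / n
    mp = sum(pred) / n
    cov = sum((o - mo) * (p - mp) for o, p in zip(obs, pred))
    so = math.sqrt(sum((o - mo) ** 2 for o in obs))
    sp = math.sqrt(sum((p - mp) ** 2 for p in pred))
    cc = cov / (so * sp)
    rmse = math.sqrt(sum((o - p) ** 2 for o, p in zip(obs, pred)) / n)
    mae = sum(abs(o - p) for o, p in zip(obs, pred)) / n
    return cc, rmse, mae
```

A model whose predictions are a constant offset from the observations scores CC = 1 but non-zero RMSE and MAE, which is why the study reports all three.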
Tall buildings have become an integral part of cities despite all their pros and cons. Some current tall buildings have several problems because of their unsuitable location; the problems include increasing density, imposing traffic on urban thoroughfares, blocking view corridors, etc. Some of these buildings have destroyed desirable views of the city. In this research, different criteria have been chosen, such as environment, access, social-economic, land-use, and physical context. These criteria and sub-criteria are prioritized and weighted by the analytic network process (ANP) based on experts’ opinions, using Super Decisions V2.8 software. In parallel, layers corresponding to the sub-criteria were built in ArcGIS 10.3, and a locating plan was then created via a weighted overlay (map algebra). In the next step, seven hypothetical tall buildings (20 stories) in the best part of the locating plan were considered to evaluate how much of these hypothetical buildings would be visible (fuzzy visibility) from the street and open spaces throughout the city. These processes were modeled in MATLAB, and the final fuzzy visibility plan was created in ArcGIS. Fuzzy visibility results can help city managers and planners to choose which location is suitable for a tall building and how much visibility may be appropriate. The proposed model can locate tall buildings based on technical and visual criteria in the future development of the city, and it can be widely used in any city as long as the criteria and weights are localized.
The seismic vulnerability assessment of existing reinforced concrete (RC) buildings is a significant source of disaster mitigation plans and rescue services. Different countries have evolved various Rapid Visual Screening (RVS) techniques and methodologies to deal with the devastating consequences of earthquakes on the structural characteristics of buildings and human casualties. Artificial intelligence (AI) methods, such as machine learning (ML) algorithm-based methods, are increasingly used in various scientific and technical applications. The investigation toward using these techniques in civil engineering applications has shown encouraging results and reduced human intervention, including uncertainties and biased judgment. In this study, several known non-parametric algorithms are investigated toward RVS using a dataset assembled from different earthquakes. Moreover, the methodology encourages the possibility of examining the buildings’ vulnerability based on the factors related to the buildings’ importance and exposure. In addition, a web-based application built on Django is introduced. The interface is designed to ease the seismic vulnerability investigation in real time. The concept was validated using two case studies, and the achieved results showed the proposed approach’s potential efficiency.
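A minimal sketch of one such non-parametric algorithm, a k-nearest-neighbours vote over building features, may make the screening idea concrete. The feature vectors and damage-grade labels below are hypothetical; the study's actual RVS feature set is not reproduced here.

```python
import math
from collections import Counter

def knn_classify(train, labels, x, k=3):
    """Classify x by majority vote among its k nearest training points.

    A generic non-parametric classifier of the kind screened for RVS;
    features and labels here are purely illustrative.
    """
    nearest = sorted(range(len(train)),
                     key=lambda i: math.dist(train[i], x))[:k]
    votes = Counter(labels[i] for i in nearest)
    return votes.most_common(1)[0][0]
```

With two made-up feature dimensions (say, storey count and irregularity score), `knn_classify(train, labels, (0.2, 0.5))` returns the damage grade most common among the three closest surveyed buildings.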
One of the most important subjects of hydraulic engineering is the reliable estimation of the transverse distribution in the rectangular channel of bed and wall shear stresses. This study makes use of the Tsallis entropy, genetic programming (GP) and adaptive neuro-fuzzy inference system (ANFIS) methods to assess the shear stress distribution (SSD) in the rectangular channel.
To evaluate the results of the Tsallis entropy, GP and ANFIS models, laboratory observations were used in which shear stress was measured using an optimized Preston tube. This was then used to measure the SSD at various aspect ratios in the rectangular channel. To investigate the shear stress percentage, 10 data series with a total of 112 different data points were used. The results of the sensitivity analysis show that the most influential parameter for the SSD in a smooth rectangular channel is the dimensionless parameter B/H, where B is the channel width and H is the flow depth. With the parameters (b/B) and (B/H) for the bed and (z/H) and (B/H) for the wall as inputs, the GP model performed better than the others. Based on the analysis, it can be concluded that the use of GP and ANFIS algorithms is more effective in estimating shear stress in smooth rectangular channels than the Tsallis entropy-based equations.
The study presents a Machine Learning (ML)-based framework designed to forecast the stress-strain relationship of arc-direct energy deposited mild steel. Based on microstructural characteristics previously extracted using microscopy and X-ray diffraction, approximately 1000 new parameter sets are generated by applying the Latin Hypercube Sampling Method (LHSM). For each parameter set, a Representative Volume Element (RVE) is synthetically created via Voronoi Tessellation. Input raw data for ML-based algorithms comprises these parameter sets or RVE images, while output raw data includes their corresponding stress-strain relationships calculated after a Finite Element (FE) procedure. Input data undergoes preprocessing involving standardization, feature selection, and image resizing. Similarly, the stress-strain curves, initially unsuitable for training traditional ML algorithms, are preprocessed using cubic splines and occasionally Principal Component Analysis (PCA). The latter part of the study focuses on employing multiple ML algorithms, utilizing two main models. The first model predicts stress-strain curves based on microstructural parameters, while the second model does so solely from RVE images. The most accurate prediction yields a Root Mean Squared Error of around 5 MPa, approximately 1% of the yield stress. This outcome suggests that ML models offer precise and efficient methods for characterizing dual-phase steels, establishing a framework for accurate results in material analysis.
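The Latin Hypercube Sampling step described above can be sketched as follows. This is a generic LHS routine with illustrative parameter bounds, not the study's actual sampler.

```python
import random

def latin_hypercube(n_samples, bounds, rng=None):
    """Latin Hypercube Sampling: each parameter range is split into
    n_samples equal strata, exactly one point is drawn per stratum,
    and the strata are shuffled independently per dimension.

    bounds is a list of (lo, hi) tuples, one per parameter; the bounds
    used here are illustrative, not the study's microstructural ranges.
    """
    rng = rng or random.Random(0)
    dims = len(bounds)
    samples = [[0.0] * dims for _ in range(n_samples)]
    for d, (lo, hi) in enumerate(bounds):
        strata = list(range(n_samples))
        rng.shuffle(strata)
        width = (hi - lo) / n_samples
        for i, s in enumerate(strata):
            # one uniform draw inside the assigned stratum
            samples[i][d] = lo + (s + rng.random()) * width
    return samples
```

Unlike plain Monte Carlo, every one-dimensional stratum is guaranteed to be hit exactly once, which is what makes ~1000 parameter sets cover the microstructural space efficiently.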
Polylactic acid (PLA) is a highly applicable material that is used in 3D printers due to some significant features such as its deformation property and affordable cost. For improvement of the end-use quality, it is of significant importance to enhance the quality of fused filament fabrication (FFF)-printed objects in PLA. The purpose of this investigation was to boost toughness and to reduce the production cost of the FFF-printed tensile test samples with the desired part thickness. To remove the need for numerous and idle printing samples, the response surface method (RSM) was used. Statistical analysis was performed to deal with this concern by considering extruder temperature (ET), infill percentage (IP), and layer thickness (LT) as controlled factors. The artificial intelligence methods of artificial neural network (ANN) and ANN-genetic algorithm (ANN-GA) were further developed to estimate the toughness, part thickness, and production-cost-dependent variables. Results were evaluated by correlation coefficient and RMSE values. According to the modeling results, ANN-GA as a hybrid machine learning (ML) technique could enhance the accuracy of modeling by about 7.5, 11.5, and 4.5% for toughness, part thickness, and production cost, respectively, in comparison with those for the single ANN method. On the other hand, the optimization results confirm that the optimized specimen is cost-effective and able to comparatively undergo deformation, which enables the usability of printed PLA objects.
In this research, an attempt was made to reduce the dimension of wavelet-ANFIS/ANN (artificial neural network/adaptive neuro-fuzzy inference system) models toward reliable forecasts as well as to decrease computational cost. In this regard, principal component analysis was performed on the input time series decomposed by a discrete wavelet transform to feed the ANN/ANFIS models. The models were applied for dissolved oxygen (DO) forecasting in rivers, which is an important variable affecting aquatic life and water quality. The current values of DO, water surface temperature, salinity, and turbidity were considered as the input variables to forecast DO three time steps ahead. The results of the study revealed that PCA can be employed as a powerful tool for dimension reduction of input variables and also to detect inter-correlation of input variables. Results of the PCA-wavelet-ANN models are compared with those obtained from wavelet-ANN models, while the former have the advantage of less computational time than the latter. Dealing with ANFIS models, PCA is more beneficial to avoid wavelet-ANFIS models creating too many rules, which deteriorates the efficiency of the ANFIS models. Moreover, manipulating the wavelet-ANFIS models utilizing PCA leads to a significant decrease in computational time. Finally, it was found that the PCA-wavelet-ANN/ANFIS models can provide reliable forecasts of dissolved oxygen as an important water quality indicator in rivers.
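The PCA-based dimension reduction applied before the ANN/ANFIS stage can be sketched as below, assuming a plain covariance eigen-decomposition; the study's exact preprocessing pipeline is not reproduced.

```python
import numpy as np

def pca_reduce(X, k):
    """Project the rows of X onto the first k principal components.

    In the setting above, the columns of X would be wavelet sub-series
    of the input variables; here X is any numeric matrix.
    """
    Xc = X - X.mean(axis=0)
    # eigen-decomposition of the covariance matrix; eigh returns
    # eigenvalues in ascending order, so reverse to get the top k
    cov = np.cov(Xc, rowvar=False)
    vals, vecs = np.linalg.eigh(cov)
    order = np.argsort(vals)[::-1][:k]
    return Xc @ vecs[:, order]
```

Feeding the k score columns instead of the full decomposed series is what shrinks both the ANN input layer and the ANFIS rule base.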
We conducted extensive molecular dynamics simulations to investigate the thermal conductivity of polycrystalline hexagonal boron-nitride (h-BN) films. To this aim, we constructed large atomistic models of polycrystalline h-BN sheets with random and uniform grain configuration. By performing equilibrium molecular dynamics (EMD) simulations, we investigated the influence of the average grain size on the thermal conductivity of polycrystalline h-BN films at various temperatures. Using the EMD results, we constructed finite element models of polycrystalline h-BN sheets to probe the thermal conductivity of samples with larger grain sizes. Our multiscale investigations not only provide a general viewpoint regarding the heat conduction in h-BN films but also propose that polycrystalline h-BN sheets present high thermal conductivity comparable to monocrystalline sheets.
For this paper, the problem of energy/voltage management in photovoltaic (PV)/battery systems was studied, and a new fractional-order control system on the basis of type-3 (T3) fuzzy logic systems (FLSs) was developed. New fractional-order learning rules are derived for tuning of T3-FLSs such that stability is ensured. In addition, using fractional-order calculus, the robustness was studied versus dynamic uncertainties, perturbation of irradiation and temperature, and abrupt faults in output loads, and, subsequently, new compensators were proposed. In several examinations under difficult operation conditions, such as random temperature, variable irradiation, and abrupt changes in output load, the capability of the proposed controller was verified. In addition, in comparison with other methods, such as proportional-integral-derivative (PID), sliding mode controller (SMC), passivity-based control systems (PBC), and linear quadratic regulator (LQR), the superiority of the suggested method was demonstrated.
In this study, a new approach based on intelligent systems and machine learning algorithms is introduced for solving singular multi-pantograph differential equations (SMDEs). For the first time, a type-2 fuzzy logic based approach is formulated to find an approximate solution. The rules of the suggested type-2 fuzzy logic system (T2-FLS) are optimized by the square root cubature Kalman filter (SCKF) such that the proposed fitness function is minimized. Furthermore, the stability and boundedness of the estimation error are proved by a novel approach on the basis of the Lyapunov theorem. The accuracy and robustness of the suggested algorithm are verified by several statistical examinations. It is shown that the suggested method results in an accurate solution with rapid convergence and a lower computational cost.
Biodiesel, the main alternative to diesel fuel, is produced from renewable and available resources and improves engine emissions during combustion in diesel engines. In this study, the biodiesel is produced initially from waste cooking oil (WCO). The fuel samples are applied in a diesel engine and the engine performance has been considered from the viewpoint of exergy and energy approaches. Engine tests are performed at a constant speed of 1500 rpm with various loads and fuel samples. The obtained experimental data are also applied to develop an artificial neural network (ANN) model. Response surface methodology (RSM) is employed to optimize the exergy and energy efficiencies. Based on the results of the energy analysis, optimal engine performance is obtained at 80% of full load in the presence of B10 and B20 fuels. However, based on the exergy analysis results, optimal engine performance is obtained at 80% of full load in the presence of B90 and B100 fuels. The optimum values of exergy and energy efficiencies are in the range of 25–30% of full load, which is the same as the calculated range obtained from mathematical modeling.
This research aims to model soil temperature (ST) using the machine learning models of multilayer perceptron (MLP) algorithm and support vector machine (SVM) in hybrid form with the firefly optimization algorithm, i.e. MLP-FFA and SVM-FFA. In the current study, measured ST and meteorological parameters of the Tabriz and Ahar weather stations in the period 2013–2015 are used for training and testing of the studied models with one and two days as a delay. To ascertain conclusive results for validation of the proposed hybrid models, the error metrics are benchmarked in an independent testing period. Moreover, Taylor diagrams were utilized for that purpose. Obtained results showed that, in the case of a one-day delay, except in predicting ST at 5 cm below the soil surface (ST5cm) at Tabriz station, MLP-FFA produced superior results compared with the MLP, SVM, and SVM-FFA models. However, for a two-day delay, MLP-FFA indicated increased accuracy in predicting ST5cm and ST20cm at Tabriz station and ST10cm at Ahar station in comparison with SVM-FFA. Additionally, for all of the prescribed models, the performance of the MLP-FFA and SVM-FFA hybrid models in the testing phase was found to be meaningfully superior to the classical MLP and SVM models.
Pressure fluctuations beneath hydraulic jumps potentially endanger the stability of stilling basins. This paper deals with the mathematical modeling of the results of laboratory-scale experiments to estimate the extreme pressures. Experiments were carried out on a smooth stilling basin underneath free hydraulic jumps downstream of an Ogee spillway. From the probability distribution of measured instantaneous pressures, pressures with different probabilities could be determined. It was verified that maximum pressure fluctuations, and the negative pressures, are located at the positions near the spillway toe. Also, minimum pressure fluctuations are located downstream of the hydraulic jumps. It was possible to assess the cumulative curves of pressure data related to the characteristic points along the basin, and different Froude numbers. To benchmark the results, the dimensionless forms of statistical parameters, including mean pressures (P*m), the standard deviations of pressure fluctuations (σ*X), pressures with different non-exceedance probabilities (P*k%), and the statistical coefficient of the probability distribution (Nk%), were assessed. It was found that an existing method can be used to interpret the present data, and the pressure distribution in similar conditions, by using new second-order fractional relationships for σ*X and Nk%. The values of the Nk% coefficient indicated a single mean value for each probability.
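The non-exceedance pressures P_k% referred to above can be read directly off the empirical distribution of a measured pressure record. The percentile routine below is an illustrative simplification of that step, not the paper's actual procedure.

```python
def nonexceedance_pressure(samples, k):
    """Pressure with non-exceedance probability k percent, taken as the
    empirical k-th percentile of the measured record (linear
    interpolation between closest ranks)."""
    xs = sorted(samples)
    pos = (k / 100) * (len(xs) - 1)
    lo = int(pos)
    hi = min(lo + 1, len(xs) - 1)
    return xs[lo] + (pos - lo) * (xs[hi] - xs[lo])
```

Subtracting the record mean and dividing by the standard deviation of the fluctuations would then give the dimensionless P*k% form discussed in the abstract.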
Hydrological drought forecasting plays a substantial role in water resources management. Hydrological drought strongly affects water allocation and hydropower generation. In this research, short-term hydrological drought was forecasted based on the hybridization of novel nature-inspired optimization algorithms with Artificial Neural Networks (ANN). For this purpose, the Standardized Hydrological Drought Index (SHDI) and the Standardized Precipitation Index (SPI) were calculated for one, three, and six aggregated months. Then, three states were proposed for SHDI forecasting, and 36 input-output combinations were extracted based on cross-correlation analysis. In the next step, the newly proposed optimization algorithms, including the Grasshopper Optimization Algorithm (GOA), Salp Swarm Algorithm (SSA), Biogeography-Based Optimization (BBO), and Particle Swarm Optimization (PSO), hybridized with the ANN, were utilized for SHDI forecasting, and the results were compared to the conventional ANN. Results indicated that the hybridized models outperformed the conventional ANN. PSO performed better than the other optimization algorithms. The best models forecasted SHDI1 with R2 = 0.68 and RMSE = 0.58, SHDI3 with R2 = 0.81 and RMSE = 0.45, and SHDI6 with R2 = 0.82 and RMSE = 0.40.
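A standardized index of the SPI/SHDI kind can be sketched as below. Note this is a deliberate simplification: operational SPI fits a gamma distribution to the aggregated sums before transforming to standard-normal quantiles, whereas here plain z-scores stand in for that step.

```python
import statistics

def standardized_index(series, window):
    """Simplified standardized drought index: aggregate the series over a
    rolling window (e.g. 1, 3 or 6 months), then convert the rolling sums
    to z-scores. Negative values indicate drier-than-normal periods."""
    sums = [sum(series[i:i + window])
            for i in range(len(series) - window + 1)]
    mu = statistics.mean(sums)
    sd = statistics.stdev(sums)
    return [(s - mu) / sd for s in sums]
```

Applied to monthly precipitation this plays the role of SPI3 or SPI6; applied to streamflow it corresponds to the SHDI aggregation described above.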
Calculating the solubility of hydrocarbon components of natural gases is known as one of the important issues for operational works in petroleum and chemical engineering. In this work, a novel solubility estimation tool has been proposed for hydrocarbon gases—including methane, ethane, propane, and butane—in aqueous electrolyte solutions based on the extreme learning machine (ELM) algorithm. Comparing the ELM outputs with a comprehensive real databank of 1175 solubility points yielded R-squared values of 0.985 and 0.987 for the training and testing phases, respectively. Furthermore, the visual comparison of estimated and actual hydrocarbon solubility confirmed the ability of the proposed solubility model. Additionally, a sensitivity analysis was employed on the input variables of the model to identify their impacts on hydrocarbon solubility. Such a comprehensive and reliable study can help engineers and scientists to successfully determine the important thermodynamic properties, which are key factors in optimizing and designing different industrial units such as refineries and petrochemical plants.
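The extreme learning machine named above admits a very compact sketch: hidden-layer weights are drawn at random and only the output weights are solved for in closed form, with no backpropagation. The routine below is a generic illustration fitted to a toy function, not the study's solubility model.

```python
import numpy as np

def train_elm(X, y, n_hidden=100, seed=0):
    """Extreme learning machine for regression: random input weights and
    biases, tanh hidden layer, analytic least-squares solve for the
    output weights."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W + b)                      # random hidden features
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)
    return W, b, beta

def elm_predict(X, W, b, beta):
    """Evaluate the trained ELM on new inputs."""
    return np.tanh(X @ W + b) @ beta
```

Because training reduces to one linear solve, ELMs are orders of magnitude faster to fit than iteratively trained networks, which is their usual selling point for property-estimation tasks like this one.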
We present an extended finite element formulation for dynamic fracture of piezoelectric materials. The method is developed in the context of linear elastic fracture mechanics. It is applied to mode-I and mixed-mode fracture for quasi-steady cracks. An implicit time integration scheme is exploited. The results are compared to results obtained with the boundary element method and show excellent agreement.
This paper extends further the strain smoothing technique in finite elements to 8-noded hexahedral elements (CS-FEM-H8). The idea behind the present method is similar to the cell-based smoothed 4-noded quadrilateral finite elements (CS-FEM-Q4). In CS-FEM, the smoothing domains are created based on elements, and each element can be further subdivided into one or several smoothing cells. It is observed that: 1) the CS-FEM using a single smoothing cell can produce higher stress accuracy, but insufficient rank and poor displacement accuracy; 2) the CS-FEM using several smoothing cells has proper rank and good displacement accuracy, but lower stress accuracy, especially for nearly incompressible and bending-dominated problems. We therefore propose 1) an extension of strain smoothing to 8-noded hexahedral elements and 2) an alternative CS-FEM form, which combines the single-smoothing-cell formulation with the multi-smoothing-cell one via a stabilization technique. Several numerical examples are provided to show the reliability and accuracy of the present formulation.
This paper presents a novel numerical procedure for computing limit and shakedown loads of structures using a node-based smoothed FEM in combination with a primal–dual algorithm. An associated primal–dual form based on the von Mises yield criterion is adopted. The primal–dual algorithm, together with a Newton-like iteration, is then used to solve this form and to determine simultaneously both approximate upper and quasi-lower bounds of the plastic collapse limit and the shakedown limit. The present formulation uses only linear approximations, and its implementation into finite element programs is quite simple. Several numerical examples are given to show the reliability, accuracy, and generality of the present formulation compared with other available methods.
In this work, extensive reactive molecular dynamics simulations are conducted to analyze nanopore creation by nanoparticle impact on single-layer molybdenum disulfide (MoS2) in its 1T and 2H phases. We also compare the results with a graphene monolayer. In our simulations, the nanosheets are exposed to a spherical rigid carbon projectile with high initial velocities ranging from 2 to 23 km/s. Results for the three structures are compared to identify the most critical factors governing perforation and resistance force during the impact. To analyze the perforation and impact resistance, the kinetic energy and displacement time history of the projectile, as well as the perforation resistance force, are investigated.
Interestingly, although the elastic modulus and tensile strength of graphene are almost five times higher than those of MoS2, the results demonstrate that the 1T and 2H MoS2 phases are more resistant to impact loading and perforation than graphene. For the MoS2 nanosheets, we find that the 2H phase is more resistant to impact loading than its 1T counterpart.
Our reactive molecular dynamics results highlight that, in addition to strength and toughness, the atomic structure is another crucial factor that can contribute substantially to the impact resistance of 2D materials. The obtained results can help guide experimental setups for nanopore creation in MoS2 and other 2D lattices.
Estimating the solubility of carbon dioxide in ionic liquids, using reliable models, is of paramount importance from both environmental and economic points of view. In this regard, the current research evaluates the performance of two data-driven techniques, namely multilayer perceptron (MLP) and gene expression programming (GEP), for predicting the solubility of carbon dioxide (CO2) in ionic liquids (ILs) as a function of pressure, temperature, and four thermodynamic parameters of the ionic liquid. To develop these techniques, 744 experimental data points derived from the literature, covering 13 ILs, were used (80% of the points for training and 20% for validation). Two backpropagation-based methods, namely Levenberg–Marquardt (LM) and Bayesian Regularization (BR), were applied to optimize the MLP algorithm. Various statistical and graphical assessments were applied to check the credibility of the developed techniques. The results were then compared with those calculated using the Peng–Robinson (PR) and Soave–Redlich–Kwong (SRK) equations of state (EoS). The highest coefficient of determination (R2 = 0.9965) and the lowest root mean square error (RMSE = 0.0116) were recorded for the MLP-LM model on the full dataset (with a negligible difference to the MLP-BR model). Comparing this model with the widely applied thermodynamic equation-of-state models revealed slightly better performance, although the EoS approaches also performed well, with R2 from 0.984 up to 0.996. Lastly, the newly established correlation based on the GEP model exhibited very satisfactory results, with overall values of R2 = 0.9896 and RMSE = 0.0201.
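To illustrate the equation-of-state side of the comparison, the sketch below evaluates the standard Peng–Robinson compressibility factor for pure CO2. This is a textbook pure-component calculation, not the CO2/IL mixture model used in the study; the chosen temperature and pressure are illustrative only.

```python
import numpy as np

R = 0.0831446  # universal gas constant, L·bar/(mol·K)

def pr_z_factor(T, P, Tc, Pc, omega):
    """Vapour-phase compressibility factor Z from the Peng-Robinson EoS
    (pure component; solves the standard cubic in Z)."""
    kappa = 0.37464 + 1.54226 * omega - 0.26992 * omega**2
    alpha = (1.0 + kappa * (1.0 - np.sqrt(T / Tc))) ** 2
    a = 0.45724 * R**2 * Tc**2 / Pc * alpha     # attraction parameter
    b = 0.07780 * R * Tc / Pc                   # co-volume
    A = a * P / (R * T) ** 2
    B = b * P / (R * T)
    # Z^3 - (1-B) Z^2 + (A - 3B^2 - 2B) Z - (AB - B^2 - B^3) = 0
    coeffs = [1.0, -(1.0 - B), A - 3.0 * B**2 - 2.0 * B, -(A * B - B**2 - B**3)]
    roots = np.roots(coeffs)
    real = roots.real[np.abs(roots.imag) < 1e-8]
    return real.max()  # largest real root corresponds to the vapour phase

# CO2 critical properties: Tc = 304.13 K, Pc = 73.77 bar, acentric factor 0.224
Z = pr_z_factor(300.0, 1.0, 304.13, 73.77, 0.224)
```

At 300 K and 1 bar the gas is nearly ideal, so Z should fall just below unity; the cubic must be re-solved at each state point, which is the workflow the data-driven models are benchmarked against.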
We present a stochastic deep collocation method (DCM) based on neural architecture search (NAS) and transfer learning for heterogeneous porous media. We first carry out a sensitivity analysis to determine the key hyper-parameters of the network and reduce the search space, and subsequently employ hyper-parameter optimization to obtain the final parameter values. The NAS-based DCM also saves the weights and biases of the most favorable architectures, which are then reused in the fine-tuning process; the transfer learning step drastically reduces the computational cost. The presented DCM is then applied to the stochastic analysis of heterogeneous porous material. To this end, a three-dimensional stochastic flow model is built, providing a benchmark for the simulation of groundwater flow in highly heterogeneous aquifers. The performance of the NAS-based DCM is verified in different dimensions using the method of manufactured solutions. We show that it significantly outperforms finite difference methods in both accuracy and computational cost.
The derivation of nonlocal strong forms for many physical problems remains cumbersome with traditional methods. In this paper, we apply the variational principle/weighted residual method, based on the nonlocal operator method, to derive nonlocal forms for elasticity, thin plates, gradient elasticity, electro-magneto-elasticity, and the phase-field fracture method. The nonlocal governing equations are expressed in integral form on the support and dual support. The first example shows that the nonlocal elasticity takes the same form as dual-horizon non-ordinary state-based peridynamics. The derivation is simple and general, and it can efficiently convert many local physical models into their corresponding nonlocal forms. In addition, a criterion based on the instability of the nonlocal gradient is proposed for fracture modelling in linear elasticity. Several numerical examples are presented to validate the nonlocal elasticity and the nonlocal thin plate formulations.
The K-nearest neighbors (KNN) machine learning algorithm is a well-known non-parametric classification method. However, like other traditional data mining methods, applying it to big data comes with computational challenges. KNN determines the class of a new sample from the classes of its nearest neighbors, but identifying those neighbors in a large amount of data imposes a computational cost so large that the method is no longer practical on a single computing machine. One of the proposed techniques to make classification methods applicable to large datasets is pruning. LC-KNN is an improved KNN method that first clusters the data into smaller partitions using K-means clustering and then, for each new sample, applies KNN only on the partition whose center is nearest to the sample. However, because the clusters have different shapes and densities, selecting the appropriate cluster is a challenge. In this paper, an approach is proposed to improve the pruning phase of the LC-KNN method by taking these factors into account. The proposed approach helps to choose a more appropriate cluster in which to search for the neighbors, thus increasing the classification accuracy. The performance of the proposed approach is evaluated on different real datasets. The experimental results show its effectiveness, with higher classification accuracy and lower time cost in comparison to other recent relevant methods.
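The baseline LC-KNN pipeline that the paper improves upon can be sketched as follows (a minimal illustration with plain Lloyd iterations, not the paper's refined cluster-selection rule): K-means partitions the data once, and each query is answered by KNN restricted to the cluster whose center is nearest.

```python
import numpy as np

def lc_knn_predict(X, y, x_new, n_clusters=3, k=3, seed=0):
    """LC-KNN sketch: K-means partitions the data; KNN then runs only inside
    the cluster whose center is nearest to the query (the pruning step)."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), n_clusters, replace=False)]
    for _ in range(20):  # plain Lloyd iterations
        labels = np.argmin(((X[:, None] - centers) ** 2).sum(-1), axis=1)
        for c in range(n_clusters):
            if np.any(labels == c):
                centers[c] = X[labels == c].mean(axis=0)
    c = np.argmin(((centers - x_new) ** 2).sum(-1))   # pick nearest cluster only
    Xc, yc = X[labels == c], y[labels == c]
    nn = np.argsort(((Xc - x_new) ** 2).sum(-1))[:k]  # KNN inside that cluster
    vals, counts = np.unique(yc[nn], return_counts=True)
    return vals[np.argmax(counts)]                    # majority vote

# two well-separated blobs as a toy dataset
X = np.vstack([np.zeros((10, 2)), np.full((10, 2), 5.0)])
y = np.array([0] * 10 + [1] * 10)
pred = lc_knn_predict(X, y, np.array([4.8, 5.1]), n_clusters=2)
```

The speed-up comes from the line that selects a single cluster: the neighbor search touches only one partition instead of the whole dataset, which is exactly where a poor cluster choice (irregular shapes and densities) can cost accuracy.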
In this paper, an artificial neural network is implemented to predict the thermal conductivity ratio of TiO2-Al2O3/water nanofluid as a function of volumetric concentration and temperature. TiO2-Al2O3/water, an innovative type of nanofluid, was synthesized by the sol–gel method. The results indicated that, at 1.5 vol.%, the nanofluid's thermal conductivity was enhanced by up to 25%. It was shown that the heat transfer coefficient increased linearly with nanoparticle concentration, but its variation with temperature was nonlinear. It should be noted that increasing the concentration may cause the particles to agglomerate, which in turn reduces the thermal conductivity. An increase in temperature also increases the thermal conductivity, due to intensified Brownian motion and particle collisions. For predicting the thermal conductivity, SOM (self-organizing map) and BP-LM (backpropagation Levenberg–Marquardt) algorithms were used. Based on the results obtained, these algorithms can be considered an exceptional tool for predicting thermal conductivity. The correlation coefficient values were 0.938 and 0.98 for the SOM and BP-LM algorithms, respectively, which is highly acceptable.
Image Analysis Using Human Body Geometry and Size Proportion Science for Action Classification
(2020)
Gestures are one of the basic modes of human communication and are usually used to represent different actions. Automatic recognition of these actions forms the basis for solving more complex problems such as human behavior analysis, video surveillance, event detection, and sign language recognition. Action recognition from still images is a challenging task, as key information such as temporal data, object trajectories, and optical flow is not available. Measuring the size of different regions of the human body, i.e., step size, arm span, and the lengths of the arm, forearm, and hand, nevertheless provides valuable clues for identifying human actions. In this article, a framework for the classification of human actions is presented in which humans are detected and localized through faster region-based convolutional neural networks followed by morphological image processing techniques. Furthermore, geometric features are extracted from the human blob and incorporated into classification rules for six human actions, i.e., standing, walking, single-hand side wave, single-hand top wave, both-hands side wave, and both-hands top wave. The performance of the proposed technique has been evaluated using precision, recall, omission error, and commission error. A comparative analysis in terms of overall accuracy shows that the proposed technique performs well in contrast to existing approaches.
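The rule-based stage described above can be pictured with a toy decision function over blob geometry. All feature names and thresholds below are hypothetical illustrations of how geometric proportions feed classification rules; they are not the values or features used in the paper.

```python
def classify_action(features):
    """Toy rule-based action classifier over human-blob geometry.
    Feature names and thresholds are hypothetical, for illustration only."""
    aspect = features["blob_height"] / features["blob_width"]  # tallness of the blob
    hands_up = features["n_hands_above_head"]
    hands_side = features["n_hands_beyond_shoulders"]
    if hands_up == 2:
        return "both hands top wave"
    if hands_up == 1:
        return "single-hand top wave"
    if hands_side == 2:
        return "both hands side wave"
    if hands_side == 1:
        return "single-hand side wave"
    # a walking stride widens the blob, lowering its height/width ratio
    return "walking" if aspect < 2.5 else "standing"

pose = {"blob_height": 180, "blob_width": 50,
        "n_hands_above_head": 0, "n_hands_beyond_shoulders": 2}
action = classify_action(pose)
```

The appeal of such rules is interpretability: each branch maps directly to a body-proportion cue, which is why the detection and morphology stages only need to deliver a clean blob and a few landmark counts.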
Realistic uncertainty description incorporating aleatoric and epistemic uncertainties can be achieved within the framework of polymorphic uncertainty, which is computationally demanding. Utilizing a domain decomposition approach for random-field-based uncertainty models, the proposed level-based sampling method reduces these computational costs significantly and shows good agreement with a standard sampling technique. While 2-level configurations tend to become unstable with decreasing sampling density, 3-level setups show encouraging results for the investigated reliability analysis of a structural unit square.
Evaporation is a very important process and one of the most critical factors in agricultural, hydrological, and meteorological studies. Due to the interaction of multiple climatic factors, evaporation is a complex and nonlinear phenomenon to model; thus, machine learning methods have gained popularity in this realm. In the present study, four machine learning methods, namely Gaussian Process Regression (GPR), K-Nearest Neighbors (KNN), Random Forest (RF), and Support Vector Regression (SVR), were used to predict pan evaporation (PE). Meteorological data, including PE, temperature (T), relative humidity (RH), wind speed (W), and sunshine hours (S), were collected from 2011 through 2017. The accuracy of the studied methods was assessed using the statistical indices of root mean squared error (RMSE), correlation coefficient (R), and mean absolute error (MAE); in addition, Taylor charts were utilized to evaluate the accuracy of the models. The results showed that at the Gonbad-e Kavus, Gorgan, and Bandar Torkman stations, GPR with RMSE of 1.521, 1.244, and 1.254 mm/day, KNN with RMSE of 1.991, 1.775, and 1.577 mm/day, RF with RMSE of 1.614, 1.337, and 1.316 mm/day, and SVR with RMSE of 1.55, 1.262, and 1.275 mm/day performed appropriately in estimating PE values. GPR with input parameters T, W, and S for the Gonbad-e Kavus station, and GPR with input parameters T, RH, W, and S for the Gorgan and Bandar Torkman stations, yielded the most accurate predictions and are proposed for precise estimation of PE. The findings of the current study indicate that PE values may be accurately estimated with a few easily measured meteorological parameters.
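The three accuracy indices used above are standard and easy to reproduce; the sketch below computes them on a small hypothetical set of observed and simulated pan-evaporation values (illustrative numbers, not data from the study).

```python
import numpy as np

def rmse_mae_r(obs, sim):
    """Accuracy indices used in the study: RMSE, MAE, and Pearson's R."""
    err = sim - obs
    rmse = np.sqrt(np.mean(err ** 2))       # root mean squared error
    mae = np.mean(np.abs(err))              # mean absolute error
    r = np.corrcoef(obs, sim)[0, 1]         # correlation coefficient
    return rmse, mae, r

# hypothetical daily pan-evaporation values, mm/day
obs = np.array([2.0, 3.5, 5.0, 6.5, 8.0])
sim = np.array([2.2, 3.3, 5.1, 6.2, 8.3])
rmse, mae, r = rmse_mae_r(obs, sim)
```

RMSE penalizes large errors more heavily than MAE, while R measures only how well the simulated series tracks the observed one, which is why studies like this report all three together.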
A novel combination of the ant colony optimization (ACO) algorithm and computational fluid dynamics (CFD) data is proposed for modeling multiphase chemical reactors. The proposed intelligent model presents a probabilistic computational strategy for predicting various levels of three-dimensional bubble column reactor (BCR) flow. The results demonstrate good agreement between the ant colony predictions and the CFD data in different sections of the BCR.