Institut für Strukturmechanik (ISM)
Because identifying crop cultivars is important, the development of accurate assessment methods is essential. Existing methods for identifying rice cultivars are mostly time-consuming, costly, and destructive, so the development of novel methods is highly beneficial. The aim of the present research is to classify common rice cultivars in Iran based on color, morphologic, and texture properties using artificial intelligence (AI) methods. To this end, digital images of 13 rice cultivars in Iran, in the three forms of paddy, brown, and white rice, are analyzed through pre-processing and segmentation using MATLAB. Ninety-two features, including 60 color, 14 morphologic, and 18 texture properties, were extracted for each rice cultivar. In the next step, the normality of the data was evaluated, and whether significant differences exist between the features of the cultivars was studied using analysis of variance. In addition, the least significant difference (LSD) test was performed to obtain a more accurate comparison between cultivars. To reduce the data dimensionality and focus on the most effective components, principal component analysis (PCA) was employed. Accordingly, the accuracy of rice cultivar separation calculated with discriminant analysis (DA) was 89.2%, 87.7%, and 83.1% for paddy, brown rice, and white rice, respectively. To identify and classify the desired cultivars, a multilayer perceptron neural network was implemented based on the most effective components. The results showed 100% accuracy of the network in identifying and classifying all of the mentioned rice cultivars. Hence, it is concluded that integrating image processing with pattern recognition methods, such as statistical classification and artificial neural networks, can be used for the identification and classification of rice cultivars.
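The dimensionality-reduction step described above can be sketched with a plain-NumPy PCA via the SVD of the centered feature matrix; the data below is a synthetic stand-in with correlated columns, not the paper's rice measurements:

```python
import numpy as np

def pca_reduce(features, n_components):
    """Project feature vectors onto their leading principal components.

    features: (n_samples, n_features) array, one row per grain image.
    Returns the reduced (n_samples, n_components) representation.
    """
    centered = features - features.mean(axis=0)
    # SVD of the centered data yields the principal directions in vt.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt[:n_components].T

# Hypothetical example: 6 samples whose 4 "color/texture" features are
# pairwise correlated, so 2 components capture almost all variance.
rng = np.random.default_rng(0)
base = rng.normal(size=(6, 2))
data = np.hstack([base, base + 0.01 * rng.normal(size=(6, 2))])
reduced = pca_reduce(data, 2)
```

The reduced components would then feed a classifier (DA or an MLP) as in the study.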
Identification of modal parameters of a space frame structure is a complex assignment due to a large number of degrees of freedom, close natural frequencies, and different vibrating mechanisms. Research has been carried out on the modal identification of rather simple truss structures. So far, less attention has been given to complex three-dimensional truss structures. This work develops a vibration-based methodology for determining modal information of three-dimensional space truss structures. The method uses a relatively complex space truss structure for its verification. Numerical modelling of the system gives modal information about the expected vibration behaviour. The identification process involves closely spaced modes that are characterised by local and global vibration mechanisms. To distinguish between local and global vibrations of the system, modal strain energies are used as an indicator. The experimental validation, which incorporated a modal analysis employing the stochastic subspace identification method, has confirmed that considering relatively high model orders is required to identify specific mode shapes. Especially in the case of the determination of local deformation modes of space truss members, higher model orders have to be taken into account than in the modal identification of most other types of structures.
In this study, the machine learning methods of artificial neural networks (ANNs), least squares support vector machines (LSSVM), and adaptive neuro-fuzzy inference systems (ANFIS) are used to advance prediction models for the thermal performance of a photovoltaic-thermal solar collector (PV/T). In the proposed models, the inlet temperature, flow rate, heat, solar radiation, and sun heat are considered as input variables. The data set has been extracted through experimental measurements from a novel solar collector system. Different analyses are performed to examine the credibility of the introduced models and evaluate their performance. The proposed LSSVM model outperformed the ANFIS and ANN models. The LSSVM model is recommended when laboratory measurements are costly and time-consuming, or when obtaining such values requires sophisticated interpretation.
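As an illustration of LSSVM regression, a minimal self-contained version with an RBF kernel is sketched below; the kernel width, regularization value, and the sine-shaped stand-in data are illustrative assumptions, not the collector measurements:

```python
import numpy as np

def lssvm_fit_predict(X, y, X_new, gamma=100.0, sigma=0.2):
    """Minimal least-squares SVM regression sketch (RBF kernel).

    Solves the standard LSSVM linear system
        [0   1^T        ] [b    ]   [0]
        [1   K + I/gamma] [alpha] = [y]
    then predicts f(x) = sum_i alpha_i k(x, x_i) + b.
    """
    def kernel(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * sigma ** 2))

    n = len(y)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = kernel(X, X) + np.eye(n) / gamma
    sol = np.linalg.solve(A, np.concatenate([[0.0], y]))
    b, alpha = sol[0], sol[1:]
    return kernel(X_new, X) @ alpha + b

# Hypothetical stand-in data: one input variable, smooth target response.
X = np.linspace(0, 1, 20).reshape(-1, 1)
y = np.sin(2 * np.pi * X[:, 0])
pred = lssvm_fit_predict(X, y, X)
```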
One of the most important renewable energy technologies in use today is the wind power turbine. In this paper, we are interested in identifying the operating status of wind turbines, especially rotor blades, by means of multiphysical models. Testing mechanical structures with ultrasonic-based methods is state-of-the-art technology. However, due to the density of the material and the required high resolution, the testing is performed with high-frequency waves, which cannot penetrate the structure in depth. Therefore, there is a need to adopt techniques from the fields of multiphysical model-based inversion schemes or data-driven structural health monitoring. Before investing effort in the development of such approaches, further insights and approaches are necessary to make the techniques applicable to structures such as wind power plants (blades). Among the expected developments, further acceleration of the so-called “forward codes” for a more efficient implementation of the wave equation could be envisaged. Here, we employ electromagnetic waves for the early detection of cracks. Because in many practical situations it is not possible to apply techniques from tomography (characterized by multiple source-sensor pairs), we focus on the question of whether the existence of cracks can be determined using only one source for the emitted waves.
For the safe and efficient operation of dams, frequent monitoring and maintenance are required. These are usually expensive, time-consuming, and cumbersome. To alleviate these issues, we propose applying a wave-based scheme for the localization and quantification of damage in dams.
To obtain high-resolution “interpretable” images of the damaged regions, we drew inspiration from non-linear full-multigrid methods for inverse problems and applied a new cyclic multi-stage full-waveform inversion (FWI) scheme. Our approach is less susceptible to the stability issues faced by the standard FWI scheme when dealing with ill-posed problems. In this paper, we first selected an optimal acquisition setup and then applied synthetic data to demonstrate the capability of our approach in identifying a series of anomalies in dams by a mixture of reflection and transmission tomography. The results were sufficiently robust, demonstrating the prospects of applying the approach in the non-destructive testing of dams.
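A “forward code” of the kind iterated inside FWI can be sketched as a second-order finite-difference solver for the 1-D acoustic wave equation; the grid, time step, and source term below are illustrative choices, not the dam model from the study:

```python
import numpy as np

def wave1d(velocity, source, dt, dx, n_steps):
    """Explicit 1-D acoustic wave forward solver sketch (2nd-order FD).

    Discretizes u_tt = c^2 u_xx with central differences and fixed ends.
    velocity: c(x) per grid node; source: additive term injected at node 1.
    """
    n = velocity.size
    u_prev, u = np.zeros(n), np.zeros(n)
    r2 = (velocity * dt / dx) ** 2   # squared Courant number per node
    for step in range(n_steps):
        u_next = np.zeros(n)
        u_next[1:-1] = (2 * u[1:-1] - u_prev[1:-1]
                        + r2[1:-1] * (u[2:] - 2 * u[1:-1] + u[:-2]))
        u_next[1] += source(step * dt) * dt ** 2
        u_prev, u = u, u_next
    return u

# Stability requires the Courant number c*dt/dx <= 1 (here 0.8).
c = np.full(200, 1.0)
u = wave1d(c, lambda t: np.sin(40 * t) if t < 0.5 else 0.0,
           dt=0.004, dx=0.005, n_steps=200)
```

In an FWI loop, such a solver is run repeatedly while the velocity model is updated to reduce the misfit with recorded data.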
Because an optimization that starts from a randomly selected structure generally does not guarantee reasonable optimality, the use of a systematic approach, named the ground structure, is widely accepted in the design of steel truss and frame structures. In the case of reinforced concrete (RC) structural optimization, however, because of the orthogonal orientation of structural members, randomly chosen or architect-sketched framing is used. Such a one-time fixed layout, in addition to lacking a systematic approach, does not necessarily guarantee optimality. In this study, an approach is developed for generating a candidate ground structure to be used for cost or weight minimization of 3D RC building structures with included slabs. A multiobjective function at the floor optimization stage and a single objective function at the frame optimization stage are considered. A particle swarm optimization (PSO) method is employed for selecting the optimal ground structure. This method enables generating a simple, yet viable, real-world representation of a topologically preoptimized ground structure while considering both structural and main architectural requirements. This is supported by a case study for different floor domain sizes.
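A minimal PSO loop of the kind employed for selecting the ground structure might look as follows; the inertia/acceleration weights and the quadratic stand-in objective are illustrative, not the study's cost function:

```python
import numpy as np

def pso_minimize(f, bounds, n_particles=20, n_iter=100, seed=0):
    """Minimal particle swarm optimization sketch.

    f: objective (e.g. structural cost/weight); bounds: (low, high) arrays.
    Returns the best position found by the swarm.
    """
    rng = np.random.default_rng(seed)
    low, high = map(np.asarray, bounds)
    dim = low.size
    pos = rng.uniform(low, high, size=(n_particles, dim))
    vel = np.zeros_like(pos)
    pbest = pos.copy()
    pbest_val = np.array([f(p) for p in pos])
    gbest = pbest[pbest_val.argmin()].copy()
    w, c1, c2 = 0.7, 1.5, 1.5   # inertia and acceleration weights (assumed)
    for _ in range(n_iter):
        r1, r2 = rng.random((2, n_particles, dim))
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, low, high)
        val = np.array([f(p) for p in pos])
        improved = val < pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], val[improved]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest

# Toy stand-in objective: quadratic "cost" with optimum at (1, 2).
best = pso_minimize(lambda x: (x[0] - 1) ** 2 + (x[1] - 2) ** 2,
                    (np.array([-5.0, -5.0]), np.array([5.0, 5.0])))
```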
Electric trains are considered one of the most eco-friendly and safest means of transportation. Catenary poles are used worldwide to support overhead power lines for electric trains. The performance of the catenary poles has an extensive influence on the integrity of the train systems and, consequently, the connected human services. It has become essential to develop structural health monitoring (SHM) systems that provide the instantaneous status of catenary poles in service, making the decision to keep or repair damaged poles more feasible. This study develops a data-driven, model-free approach for status monitoring of cantilever structures, focusing on pre-stressed, spun-cast ultrahigh-strength concrete catenary poles installed along high-speed train tracks. The proposed approach evaluates multiple damage features in a unified damage index, which leads to straightforward interpretation and comparison of the output. Besides, it distinguishes between multiple damage scenarios of the poles, whether caused by material degradation of the concrete or by cracks that may propagate during the life span of the structure. Moreover, using a logistic function to classify the integrity of the structure avoids the expensive learning step of existing damage detection approaches, namely modern machine and deep learning methods. The findings of this study look very promising when applied to other types of cantilever structures, such as poles that support power transmission lines, antenna masts, chimneys, and wind turbines.
This study proposes an efficient Bayesian, frequency-based damage identification approach to identify damages in cantilever structures with an acceptable error rate, even at high noise levels. The catenary poles of electric high-speed train systems were selected as a realistic case study to cover the objectives of this study. Compared to other frequency-based damage detection approaches described in the literature, the proposed approach is efficiently able to detect damages in cantilever structures to higher levels of damage detection, namely identifying both the damage location and severity using a low-cost structural health monitoring (SHM) system with a limited number of sensors; for example, accelerometers. The integration of Bayesian inference, as a stochastic framework, in the proposed approach, makes it possible to utilize the benefit of data fusion in merging the informative data from multiple damage features, which increases the quality and accuracy of the results. The findings provide the decision-maker with the information required to manage the maintenance, repair, or replacement procedures.
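The Bayesian-inference step can be illustrated with a grid posterior over candidate (location, severity) pairs under a Gaussian measurement-noise model; the three-mode sensitivity matrix below is a hypothetical toy model, not the pole's actual frequency response:

```python
import numpy as np

def posterior_over_damage(measured, predict, candidates, sigma):
    """Grid-based Bayesian inference sketch for damage identification.

    measured: observed natural-frequency vector.
    predict(c): model frequencies for damage candidate c (location, severity).
    candidates: list of candidate damage states; sigma: noise std dev.
    Returns normalized posterior probabilities (uniform prior assumed).
    """
    log_like = np.array([
        -0.5 * np.sum(((measured - predict(c)) / sigma) ** 2)
        for c in candidates
    ])
    # Subtract the max log-likelihood for numerical stability.
    w = np.exp(log_like - log_like.max())
    return w / w.sum()

# Hypothetical 3-mode model: damage at location i with severity s lowers
# each baseline frequency by s * S[i] (per-location sensitivities, assumed).
baseline = np.array([10.0, 25.0, 40.0])
S = np.array([[0.8, 0.1, 0.3], [0.2, 0.9, 0.4]])
predict = lambda c: baseline - c[1] * S[c[0]]
cands = [(i, s) for i in range(2) for s in (0.5, 1.0, 2.0)]
truth = (1, 1.0)
post = posterior_over_damage(predict(truth), predict, cands, sigma=0.05)
```

With data from multiple features, the same likelihoods can simply be multiplied, which is the data-fusion benefit mentioned above.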
The node-moving and multistage node-enrichment adaptive refinement procedures are extended to the mixed discrete least squares meshless (MDLSM) method for efficient analysis of elasticity problems. In the MDLSM formulation, a mixed formulation is adopted to avoid second-order differentiation of the shape functions and to obtain displacements and stresses simultaneously. In the refinement procedures, a robust error estimator is used that is based on the value of the least-squares residual functional of the governing differential equations and their boundary conditions at the nodal points; it is inherently available from the MDLSM formulation and can efficiently identify zones with higher numerical errors. The results are compared with the refinement procedures in the irreducible formulation of the discrete least squares meshless (DLSM) method and show the accuracy and efficiency of the proposed procedures. The comparison of the error norms and convergence rates also demonstrates the fidelity of the proposed adaptive refinement procedures in the MDLSM method.
Energy‐Efficient Method for Wireless Sensor Networks Low‐Power Radio Operation in Internet of Things
(2020)
The radio operation in wireless sensor networks (WSN) in Internet of Things (IoT) applications is the most common source of power consumption. Consequently, recognizing and controlling the factors affecting radio operation can be valuable for managing node power consumption. Among the essential factors affecting radio operation, the time spent checking the radio is of utmost importance for power consumption, as it can lead to false wake-ups or idle listening in radio duty cycles. ContikiMAC is a low-power radio duty-cycle protocol in Contiki OS that, in WakeUp mode, performs a clear channel assessment (CCA) to check the radio status periodically. This paper presents a detailed analysis of the radio wake-up time factors of ContikiMAC. Furthermore, we propose a lightweight CCA (LW-CCA) as an extension to ContikiMAC that reduces radio duty cycles during false wake-ups and idle listening through a dynamic received signal strength indicator (RSSI) status check time. Simulation results in the Cooja simulator show that LW-CCA reduces node energy consumption by about 8% while maintaining up to 99% of the packet delivery rate (PDR).
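The energy argument behind LW-CCA can be illustrated with a back-of-the-envelope duty-cycle model (not Contiki code); all power and timing numbers below are assumed for illustration only:

```python
def cca_energy(wakeups, t_check, p_radio):
    """Illustrative energy model for periodic CCA wake-ups.

    Energy (J) = number of wake-ups * radio-on check time (s) * radio power (W).
    """
    return wakeups * t_check * p_radio

# Assumed numbers: 8 wake-ups over one second, radio draws 60 mW while on.
# A fixed CCA holds the radio on for 2 ms per check; a dynamic RSSI check
# that aborts clearly idle checks early averages 0.5 ms per check.
standard = cca_energy(wakeups=8, t_check=0.002, p_radio=60e-3)
lightweight = cca_energy(wakeups=8, t_check=0.0005, p_radio=60e-3)
saving = 1 - lightweight / standard
```

The model only captures the idle-check term; real savings (about 8% in the paper) are smaller because reception and transmission dominate the total budget.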
This work describes an algorithm and corresponding software for incorporating general nonlinear multiple-point equality constraints in an implicit sparse direct solver. It is shown that direct addressing of sparse matrices is possible in general circumstances, circumventing the traditional linear or binary search for introducing (generalized) constituents to a sparse matrix. Nested and arbitrarily interconnected multiple-point constraints are introduced by processing multiplicative constituents with a built-in topological ordering of the resulting directed graph. A classification of discretization methods is performed, and some re-classified problems are described and solved under the proposed perspective. The dependence relations between solution methods, algorithms, and constituents become apparent. Fracture algorithms can be naturally cast in this framework. Solutions based on control equations are also directly incorporated as equality constraints. We show that arbitrary constituents can be used as long as the resulting directed graph is acyclic. It is also shown that graph partitions and orderings should be performed in the innermost part of the algorithm, a fact with some peculiar consequences. The core of our implicit code is described, specifically new algorithms for direct access of sparse matrices (by means of the clique structure) and general constituent processing. It is demonstrated that the graph structures of the second derivatives of the equality constraints are cliques (or pseudo-elements) and are naturally included as such. A complete algorithm is presented that allows full automation of equality constraints, avoiding the need for pre-sorting. Verification applications in four distinct areas are shown: single and multiple rigid body dynamics, solution control, and computational fracture.
Prediction of the groundwater nitrate concentration is of utmost importance for pollution control and water resource management. This research aims to model the spatial groundwater nitrate concentration in the Marvdasht watershed, Iran, using several artificial intelligence methods: support vector machine (SVM), Cubist, random forest (RF), and Bayesian artificial neural network (Bayesian-ANN) machine learning models. For this purpose, 11 independent variables affecting groundwater nitrate changes, namely elevation, slope, plan curvature, profile curvature, rainfall, piezometric depth, distance from the river, distance from residential areas, sodium (Na), potassium (K), and topographic wetness index (TWI), were prepared for the study area. Nitrate levels were also measured in 67 wells and used as the dependent variable for modeling. The data were divided into two categories, training (70%) and testing (30%), for modeling. The coefficient of determination (R2), mean absolute error (MAE), root mean square error (RMSE), and Nash–Sutcliffe efficiency (NSE) were used as criteria to evaluate the performance of the models. The results of modeling the groundwater nitrate concentration showed that the RF model (R2 = 0.89, RMSE = 4.24, NSE = 0.87) performs better than the Cubist (R2 = 0.87, RMSE = 5.18, NSE = 0.81), SVM (R2 = 0.74, RMSE = 6.07, NSE = 0.74), and Bayesian-ANN (R2 = 0.79, RMSE = 5.91, NSE = 0.75) models. Zoning of the groundwater nitrate concentration showed that the northern, agricultural parts of the study area have the highest nitrate levels.
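The NSE and RMSE criteria used above are easy to state directly; the observation/prediction vectors below are hypothetical, not the Marvdasht data:

```python
import numpy as np

def nse(observed, simulated):
    """Nash-Sutcliffe efficiency: 1 - SSE / variance of the observations.

    NSE = 1 for a perfect model; NSE <= 0 means no better than the mean.
    """
    observed, simulated = np.asarray(observed), np.asarray(simulated)
    return 1 - np.sum((observed - simulated) ** 2) / np.sum(
        (observed - observed.mean()) ** 2)

def rmse(observed, simulated):
    """Root mean square error."""
    observed, simulated = np.asarray(observed), np.asarray(simulated)
    return float(np.sqrt(np.mean((observed - simulated) ** 2)))

# Hypothetical nitrate measurements (mg/L) vs. model predictions.
obs = np.array([12.0, 18.0, 25.0, 31.0, 40.0])
sim = np.array([13.0, 17.0, 24.0, 33.0, 38.0])
```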
The most important causes of nitrate pollution in these areas are agricultural activities, the use of groundwater to irrigate crops, and the proximity of wells to agricultural areas. The indiscriminate use of chemical fertilizers means that irrigation water or rainwater washes the fertilizers into the groundwater and pollutes the aquifer.
This study aims to evaluate a new approach to modeling gully erosion susceptibility (GES) based on a deep learning neural network (DLNN) model and an ensemble of the particle swarm optimization (PSO) algorithm with DLNN (PSO-DLNN), comparing these approaches with common artificial neural network (ANN) and support vector machine (SVM) models in the Shirahan watershed, Iran. For this purpose, 13 independent variables affecting GES in the study area, namely altitude, slope, aspect, plan curvature, profile curvature, drainage density, distance from a river, land use, soil, lithology, rainfall, stream power index (SPI), and topographic wetness index (TWI), were prepared. A total of 132 gully erosion locations were identified during field visits. To implement the proposed model, the dataset was divided into the two categories of training (70%) and testing (30%). The results indicate that the area under the curve (AUC) value from the receiver operating characteristic (ROC) for the testing dataset of PSO-DLNN is 0.89, which indicates superb accuracy. The remaining models also achieve good accuracy, with results similar to the PSO-DLNN model: the AUC values from ROC of DLNN, SVM, and ANN for the testing datasets are 0.87, 0.85, and 0.84, respectively. Ensembling with the PSO algorithm thus increased the predictive efficiency of the model. Therefore, it can be concluded that the DLNN model and its ensemble with the PSO algorithm can be used as a novel and practical method to predict gully erosion susceptibility, helping planners and managers to manage and reduce the risk of this phenomenon.
Piping erosion is one form of water erosion that leads to significant landscape changes and environmental degradation. In the present study, we evaluated piping erosion modeling in the Zarandieh watershed of Markazi province in Iran based on random forest (RF), support vector machine (SVM), and Bayesian generalized linear model (Bayesian GLM) machine learning algorithms. Given the importance of various geo-environmental and soil properties in the evolution and creation of piping erosion, 18 variables were considered for modeling the piping erosion susceptibility in the Zarandieh watershed. A total of 152 points of piping erosion were recognized in the study area and divided into training (70%) and validation (30%) sets for modeling. The area under the curve (AUC) was used to assess the efficiency of the RF, SVM, and Bayesian GLM models. The piping erosion susceptibility results indicated that all three models had high efficiency in the testing step, with AUC values of 0.9 for RF, 0.88 for SVM, and 0.87 for Bayesian GLM. Altitude, pH, and bulk density were the variables with the greatest influence on piping erosion susceptibility in the Zarandieh watershed. This result indicates that geo-environmental and soil chemical variables are accountable for the expansion of piping erosion in the Zarandieh watershed.
The point collocation method of finite spheres (PCMFS) is used to model the hyperelastic response of soft biological tissue in real time within the framework of virtual surgery simulation. The proper orthogonal decomposition (POD) model order reduction (MOR) technique was used to obtain a reduced-order model of the problem, minimizing computational cost. The PCMFS is a physics-based meshfree numerical technique for real-time simulation of surgical procedures in which the approximation functions are applied directly to the strong form of the boundary value problem without the need for integration, increasing computational efficiency. Since computational speed plays a significant role in the simulation of surgical procedures, the proposed technique is able to model realistic nonlinear behavior of organs in real time. Numerical results demonstrate the effectiveness of the new methodology through a comparison between full and reduced analyses for several nonlinear problems. The proposed technique achieves good agreement with the full model while significantly reducing computational and data storage costs.
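The POD step can be sketched as an SVD of a snapshot matrix; the random snapshot data below is synthetic and only meant to show that the leading modes recover a low-dimensional solution space:

```python
import numpy as np

def pod_basis(snapshots, n_modes):
    """Proper orthogonal decomposition sketch via the SVD of a snapshot matrix.

    snapshots: (n_dofs, n_snapshots) array of solution states.
    Returns the leading n_modes orthonormal basis vectors as columns.
    """
    u, _, _ = np.linalg.svd(snapshots, full_matrices=False)
    return u[:, :n_modes]

# Hypothetical snapshots that live (up to tiny noise) in a 2-D subspace.
rng = np.random.default_rng(1)
modes = rng.normal(size=(50, 2))
coeffs = rng.normal(size=(2, 20))
snaps = modes @ coeffs + 1e-8 * rng.normal(size=(50, 20))
basis = pod_basis(snaps, 2)
# Reduced-order reconstruction: project the snapshots onto the POD basis.
recon = basis @ (basis.T @ snaps)
```

In an MOR setting, the governing equations are then solved for the few basis coefficients instead of all degrees of freedom.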
In the context of finite element model updating using output-only vibration test data, natural frequencies and mode shapes are used as validation criteria. Consequently, the correct pairing of experimentally obtained and numerically derived natural frequencies and mode shapes is important. In many cases, only limited spatial information is available and noise is present in the measurements. Therefore, the automatic selection of the most likely numerical mode shape corresponding to a particular experimentally identified mode shape can be a difficult task. The most common criterion for indicating corresponding mode shapes is the modal assurance criterion. Unfortunately, this criterion fails in certain cases and is not reliable for automatic approaches. In this paper, the purely mathematical modal assurance criterion will be enhanced by additional physical information from the numerical model in terms of modal strain energies. A numerical example and a benchmark study with experimental data are presented to show the advantages of the proposed energy-based criterion in comparison to the traditional modal assurance criterion.
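The modal assurance criterion itself is a one-line formula, MAC = |φaᵀφb|² / ((φaᵀφa)(φbᵀφb)); a minimal implementation with toy mode shapes:

```python
import numpy as np

def mac(phi_a, phi_b):
    """Modal assurance criterion between two real mode-shape vectors.

    MAC = 1 for identical (or scaled) shapes, 0 for orthogonal shapes.
    """
    num = np.abs(phi_a @ phi_b) ** 2
    return float(num / ((phi_a @ phi_a) * (phi_b @ phi_b)))

# Toy shapes: phi2 is orthogonal to phi1 (dot product is zero).
phi1 = np.array([1.0, 2.0, 3.0])
phi2 = np.array([3.0, 0.0, -1.0])
```

The energy-based enhancement proposed in the paper augments this purely vector-based measure with modal strain energies from the numerical model.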
This paper proposes an adaptive atomistic–continuum numerical method for quasi-static crack growth. The phantom node method is used to model the crack in the continuum region, and a molecular statics model is used near the crack tip. To ensure self-consistency in the bulk, a virtual atom cluster is used to model the material of the coarse scale. The coupling between the coarse scale and fine scale is realized through ghost atoms. The ghost atom positions are interpolated from the coarse scale solution and enforced as boundary conditions on the fine scale. The fine scale region is adaptively enlarged as the crack propagates, and the region behind the crack tip is adaptively coarsened. An energy criterion is used to detect the crack tip location. The triangular lattice in the fine scale region corresponds to the lattice structure of the (111) plane of an FCC crystal. The Lennard-Jones potential is used to model the atom–atom interactions. The method is implemented in two dimensions. The results are compared to pure atomistic simulations; they show excellent agreement.
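The Lennard-Jones pair potential used for the atom-atom interactions has the standard form V(r) = 4ε((σ/r)¹² − (σ/r)⁶), with its minimum of depth −ε at r = 2^(1/6)σ (reduced units below):

```python
def lennard_jones(r, epsilon=1.0, sigma=1.0):
    """Lennard-Jones pair potential V(r) = 4*eps*((sigma/r)^12 - (sigma/r)^6).

    Strongly repulsive for r < sigma, weakly attractive beyond the minimum.
    """
    sr6 = (sigma / r) ** 6
    return 4.0 * epsilon * (sr6 ** 2 - sr6)

# The potential crosses zero at r = sigma and has its minimum at 2^(1/6)*sigma.
r_min = 2.0 ** (1.0 / 6.0)
```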
In machine learning, if the training data are independently and identically distributed as the test data, a trained model can make accurate predictions for new samples. Conventional machine learning depends strongly on massive amounts of domain-specific training data to learn their latent patterns. In contrast, domain adaptation and transfer learning are sub-fields of machine learning concerned with overcoming the inescapable problem of insufficient training data by relaxing the domain-dependence hypothesis. In this contribution, we address this issue and, by combining both methods in a novel way, develop a computationally efficient and practical algorithm to solve boundary value problems based on nonlinear partial differential equations. We adopt a meshfree analysis framework to integrate the prevailing geometric modelling techniques based on NURBS and present an enhanced deep collocation approach that also plays an important role in the accuracy of the solutions. We start with a brief introduction of how these methods expand upon this framework. We observe excellent agreement between the methods and show how fine-tuning a pre-trained network to a specialized domain may lead to outstanding performance compared to existing approaches. As proof of concept, we illustrate the performance of our proposed model on several benchmark problems.
A phantom-node method is developed for three-node shell elements to describe cracks. This method can treat arbitrary cracks independently of the mesh. The crack may cut elements completely or partially. Elements are overlapped at the position of the crack, and they are partially integrated to implement the discontinuous displacement across the crack. To handle the element containing a crack tip, a new kinematical relation between the overlapped elements is developed. No enrichment function is needed for the discontinuous displacement field. Several numerical examples are presented to illustrate the proposed method.
This paper presents a strain smoothing procedure for the extended finite element method (XFEM). The resulting “edge-based” smoothed extended finite element method (ESm-XFEM) is tailored to linear elastic fracture mechanics and, in this context, to outperform the standard XFEM. In the XFEM, the displacement-based approximation is enriched by the Heaviside and asymptotic crack tip functions within the framework of partition of unity. This eliminates the need for mesh alignment with the crack and for re-meshing as the crack evolves. Edge-based smoothing (ES) relies on a generalized smoothing operation over smoothing domains associated with the edges of simplex meshes, and produces a softening effect leading to a close-to-exact stiffness, “super-convergence” and “ultra-accurate” solutions. The present method takes advantage of both the ES-FEM and the XFEM. Thanks to strain smoothing, the need to subdivide elements intersected by discontinuities and to integrate the (singular) derivatives of the approximation functions is suppressed, as interior integration is transformed into boundary integration. Numerical examples show that the proposed method significantly improves the accuracy of stress intensity factors and achieves a near-optimal convergence rate in the energy norm, even without geometrical enrichment or blending correction.
Bolted connections are widely employed in structures like transmission poles, wind turbines, and television (TV) towers. The behaviour of bolted connections is often complex and plays a significant role in the overall dynamic characteristics of the structure. The goal of this work is to conduct a fatigue lifecycle assessment of such a bolted connection block of a 193 m tall TV tower, for which 205 days of real measurement data have been obtained from the installed monitoring devices. Based on the recorded data, the best-fit stochastic wind distribution for 50 years, the decisive wind action, and the locations to carry out the fatigue analysis have been decided. A 3D beam model of the entire tower is developed to extract the nodal forces corresponding to the connection block location under various mean wind speeds, which is later coupled with a detailed complex finite element model of the connection block, with over three million degrees of freedom, for acquiring stress histories on some pre-selected bolts. The random stress histories are analysed using the rainflow counting algorithm (RCA) and the damage is estimated using Palmgren-Miner's damage accumulation law. A modification is proposed to integrate the loading sequence effect into the RCA, which otherwise is ignored, and the differences between the two RCAs are investigated in terms of the accumulated damage.
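The damage-accumulation step of the fatigue assessment above can be illustrated in a few lines of Python: given rainflow-counted cycles as (stress range, count) pairs, Palmgren-Miner's rule sums n_i/N_i against an S-N curve. The Basquin-type curve N = C·S^(−m) and the constants below are purely illustrative, not the values used for the tower's bolts:

```python
def miner_damage(cycles, C=1e12, m=3.0):
    """Palmgren-Miner linear damage sum D = sum(n_i / N_i).

    cycles: iterable of (stress_range, count) pairs, e.g. from rainflow counting.
    N_i is taken from an assumed Basquin-type S-N curve N = C * S**(-m).
    Failure is predicted when D reaches 1.
    """
    return sum(n / (C * S ** (-m)) for S, n in cycles)

# two illustrative stress-range bins from a counted stress history
D = miner_damage([(100.0, 1000), (50.0, 20000)])
```

A loading-sequence modification such as the one proposed in the paper would reorder or reweight the counted cycles before this summation; the plain Miner sum itself is sequence-independent.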
Temporary changes in precipitation may lead to sustained and severe drought or massive floods in different parts of the world. Knowing the variation in precipitation can effectively help decision-makers in water resources management. Large-scale circulation drivers have a considerable impact on precipitation in different parts of the world. In this research, the impact of the El Niño-Southern Oscillation (ENSO), the Pacific Decadal Oscillation (PDO), and the North Atlantic Oscillation (NAO) on seasonal precipitation over Iran was investigated. For this purpose, 103 synoptic stations with at least 30 years of data were utilized. The Spearman correlation coefficient between the indices in the previous 12 months and seasonal precipitation was calculated, and the meaningful correlations were extracted. Then, the month in which each of these indices has the highest correlation with seasonal precipitation was determined. Finally, the overall amount of increase or decrease in seasonal precipitation due to each of these indices was calculated. Results indicate that the Southern Oscillation Index (SOI), the NAO, and the PDO have the greatest impact on seasonal precipitation, in that order. Additionally, these indices have their highest impact on precipitation in winter, autumn, spring, and summer, respectively. The SOI has a contrasting impact on winter precipitation compared to the PDO and NAO, while in the other seasons each index has its own distinct impact on seasonal precipitation. Generally, all indices in different phases may decrease seasonal precipitation by up to 100%. However, seasonal precipitation may increase by more than 100% in different seasons due to the impact of these indices. The results of this study can be used effectively in water resources management, especially in dam operation.
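The correlation measure used in the study above is the Spearman rank coefficient, i.e., the Pearson correlation of the ranks. A small NumPy sketch (synthetic index and precipitation values, no tie handling) computes it directly:

```python
import numpy as np

def spearman_rho(x, y):
    """Spearman rank correlation (assumes no ties): Pearson correlation of the ranks."""
    rx = np.argsort(np.argsort(x)).astype(float)
    ry = np.argsort(np.argsort(y)).astype(float)
    rx -= rx.mean()
    ry -= ry.mean()
    return float((rx @ ry) / np.sqrt((rx @ rx) * (ry @ ry)))

# e.g. a monthly climate index vs. the following season's precipitation (synthetic values)
index = np.array([-1.2, 0.4, 0.9, -0.3, 1.5, -0.8])
precip = np.array([40.0, 95.0, 120.0, 60.0, 150.0, 55.0])
rho = spearman_rho(index, precip)
```

Because it operates on ranks, the coefficient is insensitive to monotone transformations of either variable, which suits skewed precipitation data.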
The production of a desired product requires effective use of the experimental model. The present study proposes an extreme learning machine (ELM) and a support vector machine (SVM) integrated with the response surface methodology (RSM) to address the complexity of optimizing and predicting the ethyl ester and methyl ester production process. The novel hybrid models ELM-RSM and ELM-SVM are further used in a case study to estimate the yield of methyl and ethyl esters through a trans-esterification process from waste cooking oil (WCO) based on American Society for Testing and Materials (ASTM) standards. The results of the prediction phase were also compared with artificial neural networks (ANNs) and an adaptive neuro-fuzzy inference system (ANFIS), which were recently developed by the second author of this study. Based on the results, the ELM, with correlation coefficients of 0.9815 and 0.9863 for methyl and ethyl esters, respectively, had a higher estimation capability than the SVM, ANNs, and ANFIS. Accordingly, using ELM-RSM, the maximum production yield was 96.86% for ethyl ester at a temperature of 68.48 °C, a catalyst value of 1.15 wt. %, a mixing intensity of 650.07 rpm, and an alcohol-to-oil molar ratio (A/O) of 5.77; for methyl ester, the production yield was 98.46% at a temperature of 67.62 °C, a catalyst value of 1.1 wt. %, a mixing intensity of 709.42 rpm, and an A/O of 6.09. Therefore, ELM-RSM increased the production yield by 3.6% for ethyl ester and 3.1% for methyl ester compared with the experimental data.
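The core of an extreme learning machine is compact enough to sketch: hidden-layer weights are drawn at random and never trained, and only the linear output weights are fitted by least squares. The NumPy toy below fits a synthetic 1-D function, standing in for the yield-vs-conditions surface; all sizes and scales are illustrative:

```python
import numpy as np

def elm_train(X, y, n_hidden=50, seed=0):
    """Extreme learning machine: random tanh hidden layer, least-squares output weights."""
    rng = np.random.default_rng(seed)
    W = rng.normal(scale=4.0, size=(X.shape[1], n_hidden))  # random, never trained
    b = rng.normal(scale=4.0, size=n_hidden)
    H = np.tanh(X @ W + b)                                  # random feature map
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)            # only the output layer is fitted
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# toy regression target standing in for the yield surface
X = np.linspace(0.0, 1.0, 200).reshape(-1, 1)
y = np.sin(2.0 * np.pi * X[:, 0])
W, b, beta = elm_train(X, y)
err = np.max(np.abs(elm_predict(X, W, b, beta) - y))
```

The single linear solve is what gives the ELM its very fast training compared with backpropagation-based ANNs.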
The current study attempts to establish an adequate classification for semi-rigid beam-to-column connections by investigating strength, stiffness, and ductility. For this purpose, an experimental test was carried out to investigate the moment-rotation (M-theta) characteristics of flush end-plate (FEP) connections, including variable parameters such as the size and number of bolts, the thickness of the end-plate, and the size of the beams and columns. The initial elastic stiffness and ultimate moment capacity of the connections were determined by an extensive analytical procedure following the methods prescribed by the ANSI/AISC 360-10 and Eurocode 3 Part 1-8 specifications. The behaviour of beams with partially restrained or semi-rigid connections was also studied using classical analysis methods. The results confirmed that the thicknesses of the column flange and end-plate substantially govern the initial rotational stiffness of flush end-plate connections. The results also clearly showed that EC3 provides a more reliable classification index for FEP connections. The findings of this study make a significant contribution to the current literature, as the actual response characteristics of such connections are non-linear; such semi-rigid behaviour should therefore be incorporated into analysis and design methods.
The effect of urban form on energy consumption has been the subject of various studies around the world. Having examined the effect of buildings on energy consumption, these studies indicate that the physical form of a city has a notable impact on the amount of energy consumed in its spaces. The present study identified the variables that affect energy consumption in residential buildings and analyzed their effects on energy consumption in four neighborhoods in Tehran: Apadana, Bimeh, Ekbatan-phase I, and Ekbatan-phase II. After extracting the variables, their effects were estimated with statistical methods, and the results were compared with land surface temperature (LST) remote sensing data derived from Landsat 8 satellite images taken in the winter of 2019. The results showed that physical variables, such as the size of buildings, population density, vegetation cover, texture concentration, and surface color, have the greatest impact on energy usage. For the Apadana neighborhood, the factors with the strongest effect on energy consumption were found to be the size of buildings and the population density. For the other neighborhoods, a third factor was also recognized to have a significant effect on energy consumption: the type of buildings for Bimeh, texture concentration for Ekbatan-I, and the orientation of buildings for Ekbatan-II.
Along with environmental pollution, urban planning has been connected to public health. Research indicates that the quality of built environments plays an important role in reducing mental disorders and improving overall health. The structure and shape of the city are considered among the factors influencing happiness and health in urban communities and the types of citizens' daily activities. The aim of this study was to promote physical activity within the main structure of the city via urban design, so that the form and morphology of the city encourage citizens to move around and be physically active within it. Based on a literature review, functional, physical, cultural-social, and perceptual-visual features are regarded as the most important and effective criteria for increasing physical activity in urban spaces. The environmental quality of urban spaces and their role in the physical activity of citizens were assessed using a questionnaire and the analytical network process (ANP) within structural equation modeling. Further, the space syntax method was utilized to evaluate the role of the spatial integration of urban spaces in improving physical activity. Based on the results, consideration of functional diversity, spatial flexibility and integration, security, and the aesthetic and visual quality of urban spaces plays an important role in improving the physical health of citizens in urban spaces. Moreover, more physical activity, including motivation for walking and a sense of public health and happiness, was observed in streets with higher linkage and space syntax indexes with their surrounding texture.
Management strategies for sustainable sugarcane production need to deal with the increasing complexity and variability of the whole sugar system. Moreover, they need to accommodate the multiple goals of different industry sectors and the wider community. Traditional disciplinary approaches are unable to provide integrated management solutions, and an approach based on whole-systems analysis is essential to bring about beneficial change for industry and the community. The application of this approach to water management, environmental management, and cane supply management is outlined; the literature indicates that the extreme learning machine (ELM) has never been explored in this realm. Consequently, the leading objective of the current research was to fill this gap by applying the ELM to build a swift and accurate data-driven model of crop production. The key learning has been the need for innovation in the technical aspects of system function, underpinned by modelling of sugarcane growth. The current study is therefore an attempt to establish an integrated ELM model to predict the final growth of sugarcane. Prediction results were evaluated and compared with artificial neural network (ANN) and genetic programming models. The accuracy of the ELM model is quantified using the statistical indicators Root Mean Square Error (RMSE), Pearson coefficient (r), and coefficient of determination (R2), with promising results of 0.8, 0.47, and 0.89, respectively. The results also show better generalization ability in addition to a faster learning curve. Thus, the proficiency of the ELM for further work on the advancement of prediction models for sugarcane growth was confirmed.
In this work, we present a deep collocation method (DCM) for three-dimensional potential problems in non-homogeneous media. This approach utilizes a physics-informed neural network with material transfer learning reducing the solution of the non-homogeneous partial differential equations to an optimization problem. We tested different configurations of the physics-informed neural network including smooth activation functions, sampling methods for collocation points generation and combined optimizers. A material transfer learning technique is utilized for non-homogeneous media with different material gradations and parameters, which enhance the generality and robustness of the proposed method. In order to identify the most influential parameters of the network configuration, we carried out a global sensitivity analysis. Finally, we provide a convergence proof of our DCM. The approach is validated through several benchmark problems, also testing different material variations.
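The deep collocation method minimises the PDE residual at scattered collocation points, with a neural network as the trial function. A minimal classical analogue (a fixed sine basis instead of a network, and a 1-D Poisson problem instead of the paper's 3-D non-homogeneous setting) shows the residual-minimisation idea:

```python
import numpy as np

# Solve u'' = f on (0,1) with u(0) = u(1) = 0 by enforcing the PDE residual
# at collocation points, using sine basis functions that satisfy the BCs exactly.
f = lambda x: -np.pi ** 2 * np.sin(np.pi * x)  # manufactured source; exact u = sin(pi x)
K = 5
x = np.linspace(0.05, 0.95, 20)                # collocation points
# column k holds phi_k''(x) = -(k*pi)^2 sin(k*pi*x)
A = np.column_stack([-(k * np.pi) ** 2 * np.sin(k * np.pi * x) for k in range(1, K + 1)])
c, *_ = np.linalg.lstsq(A, f(x), rcond=None)   # minimize the residual at the points

u = lambda xx: sum(c[k - 1] * np.sin(k * np.pi * xx) for k in range(1, K + 1))
```

In the DCM the basis coefficients are replaced by network weights and the least-squares solve by gradient-based optimisation of the same residual loss, which is what makes transfer learning across material gradations possible.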
The fracture of microcapsules is an important issue for releasing the healing agent that heals cracks in encapsulation-based self-healing concrete. Capsular clustering, generated during the concrete mixing process, is considered one of the critical factors in the fracture mechanism. Since there is a lack of studies in the literature regarding this issue, the design of self-healing concrete cannot be made without an appropriate modelling strategy. In this paper, the effects of microcapsule size and clustering on fractured microcapsules are studied computationally. A simple 2D computational modelling approach is developed based on the eXtended Finite Element Method (XFEM) and a cohesive surface technique. The proposed model shows that microcapsule size and clustering play significant roles in governing the load-carrying capacity and the crack propagation pattern, and determine whether the microcapsule will fracture or debond from the concrete matrix. The longer the microcapsule's circumferential contact length, the higher the load-carrying capacity; when it is less than 25% of the microcapsule circumference, debonding of the microcapsule from the concrete becomes more likely. The greater the core/shell ratio (i.e., the smaller the shell thickness), the greater the likelihood of microcapsules being fractured.
Determining the earthquake hazard of any settlement is one of the primary studies for reducing earthquake damage. Therefore, earthquake hazard maps used for this purpose must be renewed over time. The Turkey Earthquake Hazard Map has been used instead of the Turkey Earthquake Zones Map since 2019. A probabilistic seismic hazard analysis was performed using these last two maps and different attenuation relationships for Bitlis Province (Eastern Turkey), which is located in the Lake Van Basin, an area of high seismic risk. The earthquake parameters were determined by considering all districts and neighborhoods in the province. Probabilistic seismic hazard analyses were carried out for these settlements using seismic sources and four different attenuation relationships. The obtained values are compared with the design spectra stated in the last two earthquake maps. Significant differences exist between the design spectra obtained for the different exceedance probabilities. In this study, adaptive pushover analyses of sample reinforced concrete buildings were performed using the design ground motion level. Structural analyses were carried out using three different design spectra, as given in the last two seismic design codes, and the mean spectrum obtained from the attenuation relationships. Different design spectra significantly change the target displacements predicted for the performance levels of the buildings.
Recently, the demand for residence and usage of urban infrastructure has increased, raising the risk to human lives from natural calamities. The occupancy demand has rapidly increased the construction rate, while inadequately designed structures are more vulnerable. Buildings constructed before the development of seismic codes are additionally susceptible to earthquake vibrations. Structural collapse causes economic losses as well as setbacks for human lives. Applying different theoretical methods to analyze structural behavior is expensive and time-consuming. Therefore, introducing a rapid vulnerability assessment method to check structural performance is necessary for future developments. This process is known as Rapid Visual Screening (RVS). The technique has been developed to identify, inventory, and screen structures that are potentially hazardous. Sometimes, poor construction quality does not provide some of the required parameters; in this case, the RVS process becomes tedious. Hence, to tackle such situations, multiple-criteria decision-making (MCDM) methods for seismic vulnerability assessment open a new gateway. The different parameters required by RVS can be taken into account in MCDM, which evaluates multiple conflicting criteria in decision making across several fields. This paper aims to bridge the gap between RVS and MCDM. Furthermore, to define the correlation between these techniques, the methodologies from the Indian, Turkish, and Federal Emergency Management Agency (FEMA) codes have been implemented, and the effects on the assessed seismic vulnerability of the structures have been observed and compared.
A Machine Learning Framework for Assessing Seismic Hazard Safety of Reinforced Concrete Buildings
(2020)
Although averting a seismic disturbance and its physical, social, and economic disruption is practically impossible, advancements in computational science and numerical modeling equip humanity to predict its severity, understand the outcomes, and prepare for post-disaster management. Many buildings amidst developed metropolitan areas are aged and still in service; they were designed before national seismic codes were established or without construction regulations. In that case, risk reduction is significant for developing alternatives and designing suitable models to enhance the performance of existing structures. Such models will be able to classify risks and casualties related to possible earthquakes through emergency preparation. Thus, it is crucial to recognize structures that are susceptible to earthquake vibrations and need to be prioritized for retrofitting. However, the behavior of each building under seismic actions cannot be studied through full structural analysis, as this would be unrealistic because of the rigorous computations, long duration, and substantial expenditure. This calls for a simple, reliable, and accurate process known as Rapid Visual Screening (RVS), which serves as a primary screening platform, including an optimum number of seismic parameters and predetermined performance damage conditions for structures. In this study, the damage classification technique was studied, and the efficacy of the Machine Learning (ML) method in damage prediction via a Support Vector Machine (SVM) model was explored. The ML model is trained and tested separately on damage data from four different earthquakes, namely Ecuador, Haiti, Nepal, and South Korea. Each dataset consists of varying numbers of input data and eight performance modifiers. Based on the study and the results, the ML model using SVM classifies the given input data into the corresponding classes and accomplishes the performance evaluation of the hazard safety of buildings.
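As a hedged sketch of the classification step with scikit-learn, using synthetic stand-ins for the eight performance modifiers and two damage classes (the real features and labels come from the earthquake damage databases, not from this toy data):

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# synthetic stand-ins: 8 performance modifiers per building, two damage classes
X_light = rng.normal(loc=-1.0, size=(40, 8))    # "light damage" cluster (illustrative)
X_severe = rng.normal(loc=1.0, size=(40, 8))    # "severe damage" cluster (illustrative)
X = np.vstack([X_light, X_severe])
y = np.array([0] * 40 + [1] * 40)               # 0 = light, 1 = severe

# RBF-kernel SVM, the standard choice for such tabular damage data
clf = SVC(kernel="rbf", C=1.0).fit(X, y)
train_acc = clf.score(X, y)
```

In practice the model would be evaluated on a held-out test split per earthquake, as the study describes, rather than on the training data.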
A vast number of existing buildings were constructed before the development and enforcement of seismic design codes and thus run the risk of being severely damaged under seismic excitations. This poses not only a threat to human life but also affects the socio-economic stability of the affected area. Therefore, it is necessary to assess the present vulnerability of such buildings to make an educated decision regarding risk mitigation by seismic strengthening techniques such as retrofitting. However, it is not feasible, economically or in a timely manner, to inspect, repair, and augment every old building on an urban scale. As a result, reliable rapid screening methods, namely Rapid Visual Screening (RVS), have garnered increasing interest among researchers and decision-makers alike. In this study, the effectiveness of five different Machine Learning (ML) techniques in vulnerability prediction applications has been investigated. Damage data from four different earthquakes, in Ecuador, Haiti, Nepal, and South Korea, have been utilized to train and test the developed models. Eight performance modifiers have been implemented as variables with supervised ML. The investigations in this paper illustrate that the vulnerability classes assessed by the ML techniques were very close to the actual damage levels observed in the buildings.
Rapid Visual Screening (RVS) is a procedure that estimates structural scores for buildings and prioritizes their retrofit and upgrade requirements. Despite the speed and simplicity of RVS, many of the collected parameters are non-commensurable and include subjectivity due to visual observation. This can cause uncertainties in the evaluation, which motivates the use of a fuzzy-based method. This study proposes a novel RVS methodology based on an interval type-2 fuzzy logic system (IT2FLS) to set the priority of vulnerable buildings for detailed assessment while covering uncertainties and minimizing their effects during evaluation. The proposed method estimates the vulnerability of a building, in terms of a damage index, considering the number of stories, age of the building, plan irregularity, vertical irregularity, building quality, and peak ground velocity as inputs, with a single output variable. The applicability of the proposed method has been investigated using a post-earthquake damage database of reinforced concrete buildings from the Bingöl and Düzce earthquakes in Turkey.
Earthquakes are among the most devastating natural disasters, causing severe economic, environmental, and social destruction. Earthquake safety assessment and building hazard monitoring can contribute greatly to urban sustainability through identification of, and insight into, optimum materials and structures. While the vulnerability of structures mainly depends on structural resistance, the safety assessment of buildings can be highly challenging. In this paper, we consider the Rapid Visual Screening (RVS) method, a qualitative procedure for estimating structural scores for buildings that is suitable for medium- to high-seismicity cases. This paper presents an overview of the common RVS methods, i.e., FEMA P-154, IITK-GGSDMA, and EMPI. To examine their accuracy and validity, a practical comparison is performed between their assessments and the observed damage of reinforced concrete buildings from a street survey in the Bingöl region, Turkey, after the 1 May 2003 earthquake. The results demonstrate that the application of RVS methods for preliminary damage estimation is a vital tool. Furthermore, the comparative analysis showed that FEMA P-154 produces an assessment that overestimates damage states and is not economically viable, while EMPI and IITK-GGSDMA provide more accurate and practical estimations, respectively.
The economic losses from earthquakes tend to affect the national economy considerably; therefore, models capable of estimating the vulnerability and losses of future earthquakes are highly consequential for emergency planners aiming at risk mitigation. This demands a mass prioritization filtering of structures to identify vulnerable buildings for retrofitting purposes. The application of advanced structural analysis to each building to study its earthquake response is impractical due to complex calculations, long computational times, and exorbitant cost. This exhibits the need for a fast, reliable, and rapid method, commonly known as Rapid Visual Screening (RVS). The method serves as a preliminary screening platform, using an optimum number of seismic parameters of the structure and predefined output damage states. In this study, the efficacy of a Machine Learning (ML) application in damage prediction, through a Support Vector Machine (SVM) model as the damage classification technique, has been investigated. The developed model was trained and examined based on damage data from the 1999 Düzce Earthquake in Turkey, where the building data consist of 22 performance modifiers that have been implemented with supervised machine learning.
Recent earthquakes have shown that many existing buildings, particularly in developing countries, are not protected from earthquake damage. A variety of statistical and machine-learning approaches have been proposed to identify vulnerable buildings for the prioritization of retrofitting. The present work investigates earthquake susceptibility through the combination of six building performance variables that can be used to obtain an optimal prediction of the damage state of reinforced concrete buildings using an artificial neural network (ANN). In this regard, a multi-layer perceptron network is trained and optimized using a database of 484 damaged buildings from the Düzce earthquake in Turkey. The results demonstrate the feasibility and effectiveness of the selected ANN approach for classifying concrete structural damage, which can be used as a preliminary assessment technique to identify vulnerable buildings in disaster risk-management programs.
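A multi-layer perceptron of the kind described reduces to a few lines of NumPy. The sketch below trains a one-hidden-layer network on synthetic two-class data; the real model uses the six performance variables and the 484 Düzce buildings, and the architecture and learning rate here are illustrative:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-np.clip(z, -30.0, 30.0)))

def train_mlp(X, y, n_hidden=8, lr=0.5, epochs=2000, seed=0):
    """One-hidden-layer perceptron, sigmoid output, batch gradient descent on cross-entropy."""
    rng = np.random.default_rng(seed)
    W1 = rng.normal(scale=0.5, size=(X.shape[1], n_hidden))
    b1 = np.zeros(n_hidden)
    W2 = rng.normal(scale=0.5, size=n_hidden)
    b2 = 0.0
    for _ in range(epochs):
        H = np.tanh(X @ W1 + b1)
        p = sigmoid(H @ W2 + b2)                 # predicted P(damaged)
        g = (p - y) / len(y)                     # dLoss/dlogit for cross-entropy
        gH = np.outer(g, W2) * (1.0 - H ** 2)    # backprop through tanh (uses old W2)
        W2 -= lr * (H.T @ g)
        b2 -= lr * g.sum()
        W1 -= lr * (X.T @ gH)
        b1 -= lr * gH.sum(axis=0)
    return W1, b1, W2, b2

def mlp_predict(X, W1, b1, W2, b2):
    H = np.tanh(X @ W1 + b1)
    return (sigmoid(H @ W2 + b2) > 0.5).astype(int)

# synthetic stand-in for "undamaged" vs "damaged" buildings with 6 features
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(-1.0, 1.0, (60, 6)), rng.normal(1.0, 1.0, (60, 6))])
y = np.array([0] * 60 + [1] * 60)
params = train_mlp(X, y)
acc = (mlp_predict(X, *params) == y).mean()
```

The study's actual network is trained and optimized on the real database with a multi-class damage-state output rather than this binary toy.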
Coronary Artery Disease Diagnosis: Ranking the Significant Features Using a Random Trees Model
(2020)
Heart disease is one of the most common diseases in middle-aged citizens. Among the vast number of heart diseases, coronary artery disease (CAD) is considered as a common cardiovascular disease with a high death rate. The most popular tool for diagnosing CAD is the use of medical imaging, e.g., angiography. However, angiography is known for being costly and also associated with a number of side effects. Hence, the purpose of this study is to increase the accuracy of coronary heart disease diagnosis through selecting significant predictive features in order of their ranking. In this study, we propose an integrated method using machine learning. The machine learning methods of random trees (RTs), decision tree of C5.0, support vector machine (SVM), and decision tree of Chi-squared automatic interaction detection (CHAID) are used in this study. The proposed method shows promising results and the study confirms that the RTs model outperforms other models.
Classical Internet of Things routing and wireless sensor networks can provide more precise monitoring of the covered area due to the higher number of utilized nodes. Because of the limitations of the shared transfer medium, many nodes in the network are prone to collision during simultaneous transmissions. Medium access control protocols are usually more practical in networks with low traffic that are not subjected to external noise from adjacent frequencies. There are preventive, detection, and control solutions for congestion management in the network, all of which are the focus of this study. In the congestion prevention phase, the proposed method chooses the next hop of the path using a fuzzy decision-making system to distribute network traffic via optimal paths. In the congestion detection phase, a dynamic approach to queue management was designed to detect congestion in the least amount of time and prevent collisions. In the congestion control phase, the back-pressure method was used, based on the quality of the queue, to decrease the probability of linking into the pathway through a pre-congested node. The main goals of this study are to balance energy consumption among network nodes, reduce the rate of lost packets, and increase the quality of service in routing. Simulation results showed that the proposed Congestion Control Fuzzy Decision Making (CCFDM) method was more capable of improving routing parameters than recent algorithms.
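The congestion-prevention idea, scoring candidate next hops with fuzzy memberships, can be sketched in plain Python. The membership shapes and the two inputs used here (residual energy and free queue fraction) are illustrative, not the paper's actual rule base:

```python
def tri(x, a, b, c):
    """Triangular membership function peaking at b, zero outside (a, c)."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def next_hop_score(energy, queue_free):
    """Fuzzy suitability of a candidate next hop; both inputs normalized to [0, 1]."""
    high_energy = tri(energy, 0.3, 1.0, 1.7)   # peaks at full residual energy
    low_load = tri(queue_free, 0.3, 1.0, 1.7)  # peaks at an empty queue
    return min(high_energy, low_load)          # Mamdani-style fuzzy AND

# candidate neighbors: (residual energy, free queue fraction) - illustrative values
candidates = {"A": (0.9, 0.8), "B": (0.5, 0.95), "C": (0.95, 0.3)}
best = max(candidates, key=lambda n: next_hop_score(*candidates[n]))
```

Node C has the most energy but a nearly full queue, so the fuzzy AND penalizes it, which is exactly the traffic-spreading behavior the prevention phase aims for.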
In this study, an application of evolutionary multi-objective optimization algorithms to the optimization of sandwich structures is presented. The solution strategy, known as the Elitist Non-Dominated Sorting Evolution Strategy (ENSES), employs Evolution Strategies (ES) as the Evolutionary Algorithm (EA) within the elitist Non-dominated Sorting Genetic Algorithm (NSGA-II) procedure. Evolutionary algorithms are a suitable approach for multi-objective optimization problems because they are inspired by natural evolution, which is closely linked to Artificial Intelligence (AI) techniques, and elitism has proven to be an important factor in improving evolutionary multi-objective search. To evaluate the performance of ENSES, well-known case studies of sandwich structures are reconsidered. For Case 1, the goals of the multi-objective optimization are minimization of the deflection and the weight of the sandwich structure; the length and the core and skin thicknesses are the design variables. For Case 2, the objective functions are the fabrication cost, the beam weight, and the end deflection of the sandwich structure; there are four design variables, i.e., the weld height, the weld length, the beam depth, and the beam width. Numerical results are presented in terms of Pareto-optimal solutions for both cases.
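The ranking step at the heart of NSGA-II, extracting the non-dominated front, is compact enough to sketch (minimisation assumed; crowding distance and the ES variation operators are omitted, and the objective values below are illustrative):

```python
def dominates(p, q):
    """p dominates q if p is no worse in every objective and strictly better in at least one."""
    return all(a <= b for a, b in zip(p, q)) and any(a < b for a, b in zip(p, q))

def pareto_front(points):
    """First non-dominated front of a list of objective-value tuples."""
    return [p for p in points if not any(dominates(q, p) for q in points)]

# illustrative (weight, deflection) objective values for five candidate designs
designs = [(1.0, 5.0), (2.0, 3.0), (3.0, 4.0), (4.0, 1.0), (5.0, 5.0)]
front = pareto_front(designs)
```

NSGA-II repeats this sorting on the remaining points to build successive fronts, then uses elitism and crowding distance to select the next generation.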
The Marmara Region (NW Turkey) has experienced significant earthquakes (M > 7.0) to date. A destructive earthquake is also expected in the region. To determine the effect of the specific design spectrum, eleven provinces located in the region were chosen according to the Turkey Earthquake Building Code updated in 2019. Additionally, the differences between the previous and updated regulations of the country were investigated. Peak Ground Acceleration (PGA) and Peak Ground Velocity (PGV) were obtained for each province by using earthquake ground motion levels with 2%, 10%, 50%, and 68% probability of exceedance in 50-year periods. The PGA values in the region range from 0.16 to 0.7 g for earthquakes with a return period of 475 years. For each province, a sample of a reinforced-concrete building having two different numbers of stories with the same ground and structural characteristics was chosen. Static adaptive pushover analyses were performed for the sample reinforced-concrete building using each province’s design spectrum. The variations in the earthquake and structural parameters were investigated according to different geographical locations. It was determined that the site-specific design spectrum significantly influences target displacements for performance-based assessments of buildings due to seismicity characteristics of the studied geographic location.
The distinguishing structural feature of single-layered black phosphorus is its puckered structure, which leads to many novel physical properties. In this work, we first present a new parameterization of the Stillinger–Weber potential for single-layered black phosphorus. In doing so, we reveal the importance of a cross-pucker interaction term in capturing its unique mechanical properties, such as a negative Poisson's ratio. In particular, we show that the cross-pucker interaction enables the pucker to act as a re-entrant hinge, which expands in the lateral direction when it is stretched in the longitudinal direction. As a consequence, single-layered black phosphorus has a negative Poisson's ratio in the direction perpendicular to the atomic plane. As an additional demonstration of the impact of the cross-pucker interaction, we show that it is also the key factor that enables capturing the edge stress-induced bending of single-layered black phosphorus that has been reported in ab initio calculations.
The lattice dynamics properties of twisting bilayer graphene are investigated. Large jumps in the inter-layer potential occur at twisting angles θ=0° and 60°, implying the stability of the Bernal-stacking structure and the instability of the AA-stacking structure, while a long plateau over [8, 55]° indicates the ease of twisting bilayer graphene in this wide angle range. Significant frequency shifts are observed for the z-breathing mode around θ=0° and 60°, while the frequency is constant over the wide range [8, 55]°. Using the z-breathing mode, a mechanical nanoresonator is proposed that operates at a robust resonant frequency in the terahertz range.
We perform both classical molecular dynamics simulations and beam model calculations to investigate the Young's modulus of kinked silicon nanowires (KSiNWs). The Young's modulus is found to be highly sensitive to the arm length of the kink and is essentially inversely proportional to the arm length. The mechanism underlying the size dependence is found to be the interplay between the kink angle potential and the arm length potential, where we obtain an analytic relationship between the Young's modulus and the arm length of the KSiNW. Our results provide insight into the application of this novel building block in nanomechanical devices.
The upper limits of the thermal conductivity and the mechanical strength are predicted for the polyethylene chain by performing ab initio calculations and applying the quantum mechanical non-equilibrium Green's function approach. Specifically, there are two main findings from our calculations: (1) the thermal conductivity can reach a high value of 310 Wm−1 K−1 in a 100 nm polyethylene chain at room temperature, and the thermal conductivity increases with the length of the chain; (2) the Young's modulus of the polyethylene chain is as high as 374.5 GPa, and the polyethylene chain can sustain 32.85%±0.05% (ultimate) strain before undergoing a structural phase transition into gaseous ethylene.