### Refine

#### Has Fulltext

- yes (124)

#### Document Type

- Article (50)
- Doctoral Thesis (41)
- Conference Proceeding (23)
- Master's Thesis (5)
- Preprint (4)
- Habilitation (1)

#### Institute

- Institut für Strukturmechanik (124)

#### Keywords

- Computerunterstütztes Verfahren (22)
- Maschinelles Lernen (19)
- Architektur <Informatik> (17)
- Finite-Elemente-Methode (14)
- Machine learning (13)
- Angewandte Informatik (12)
- Angewandte Mathematik (12)
- CAD (10)
- machine learning (8)
- Computer Science Models in Engineering; Multiscale and Multiphysical Models; Scientific Computing (7)
- Erdbeben (7)
- Optimierung (6)
- Wärmeleitfähigkeit (6)
- Deep learning (5)
- big data (5)
- Building Information Modeling (4)
- Modellierung (4)
- Neuronales Netz (4)
- finite element method (4)
- rapid visual screening (4)
- Batterie (3)
- Data, information and knowledge modeling in civil engineering; Function theoretic methods and PDE in engineering sciences; Mathematical methods for (robotics and) computer vision; Numerical modeling in engineering; Optimization in engineering applications (3)
- Isogeometric Analysis (3)
- Mehrskalenmodell (3)
- Simulation (3)
- Strukturmechanik (3)
- earthquake (3)
- earthquake safety assessment (3)
- random forest (3)
- support vector machine (3)
- Abaqus (2)
- Artificial neural network (2)
- Beton (2)
- Biodiesel (2)
- Bridges (2)
- Dynamik (2)
- FEM (2)
- Fluid (2)
- Fotovoltaik (2)
- Fracture mechanics (2)
- Fuzzy-Logik (2)
- Intelligente Stadt (2)
- Internet of things (2)
- Isogeometrische Analyse (2)
- Künstliche Intelligenz (2)
- Mechanische Eigenschaft (2)
- Mehrgitterverfahren (2)
- Mehrskalenanalyse (2)
- Modalanalyse (2)
- Multiscale modeling (2)
- NURBS (2)
- Nanostrukturiertes Material (2)
- Nichtlineare Finite-Elemente-Methode (2)
- Optimization (2)
- Polymere (2)
- Staumauer (2)
- Strukturdynamik (2)
- Unsicherheit (2)
- artificial intelligence (2)
- artificial neural networks (2)
- buildings (2)
- clustering (2)
- continuum mechanics (2)
- damaged buildings (2)
- data science (2)
- extreme learning machine (2)
- mathematical modeling (2)
- multiphase (2)
- optimization (2)
- reinforcement learning (2)
- smart cities (2)
- soft computing techniques (2)
- urban morphology (2)
- vulnerability assessment (2)
- wireless sensor networks (2)
- 2D/3D Adaptive Mesh Refinement (1)
- ANN modeling (1)
- Adaptives Verfahren (1)
- Aerodynamic Stability (1)
- Aerodynamic derivatives (1)
- Aerodynamik (1)
- Akkumulator (1)
- Algorithmus (1)
- Artificial Intelligence (1)
- Autogenous (1)
- Autonomous (1)
- B-Spline (1)
- B-Spline Finite Elemente (1)
- B-spline (1)
- Battery (1)
- Battery development (1)
- Bayes (1)
- Bayes neuronale Netze (1)
- Bayes-Verfahren (1)
- Bayesian method (1)
- Bayesian neural networks (1)
- Beam-to-column connection; semi-rigid; flush end-plate connection; moment-rotation curve (1)
- Berechnung (1)
- Beschleunigungsmessung (1)
- Beschädigung (1)
- Bildanalyse (1)
- Biomechanics (1)
- Biomechanik (1)
- Bodentemperatur (1)
- Bornitrid (1)
- Bridge (1)
- Bridge aerodynamics (1)
- Bruch (1)
- Bruchmechanik (1)
- Brustkorb (1)
- Brücke (1)
- Brückenbau (1)
- Bubble column reactor (1)
- CFD (1)
- Carbon nanotubes (1)
- Chirurgie (1)
- Computersimulation (1)
- Concrete (1)
- ContikiMAC (1)
- Continuous-Time Markov Chain (1)
- Control system (1)
- Cost-Benefit Analysis (1)
- Damage (1)
- Damage identification (1)
- Damping (1)
- Dams (1)
- Data, information and knowledge modeling in civil engineering (1)
- Deal ii C++ code (1)
- Diskontinuumsmechanik (1)
- Diskrete-Elemente-Methode (1)
- Dissertation (1)
- Druckluft (1)
- ELM (1)
- Earthquake (1)
- Elastizität (1)
- Electrochemical properties (1)
- Elektrochemische Eigenschaft (1)
- Elektrode (1)
- Elektrodenmaterial (1)
- Energieeffizienz (1)
- Energiespeichersystem (1)
- Energiespeicherung (1)
- Entropie (1)
- Erdbebensicherheit (1)
- Erneuerbare Energien (1)
- Fehlerabschätzung (1)
- Fernerkundung (1)
- Festkörpermechanik (1)
- Feststoff (1)
- Fiber Reinforced Composite (1)
- Finite Element Method (1)
- Finite Element Model (1)
- Flattern (1)
- Flexoelectricity (1)
- Fluid-Structure Interaction (1)
- Flutter (1)
- Fracture (1)
- Full waveform inversion (1)
- Function theoretic methods and PDE in engineering sciences (1)
- Funktechnik (1)
- Fuzzy Logic (1)
- Fuzzy-Regelung (1)
- Gaussian process regression (1)
- Gebäude (1)
- Geoinformatik (1)
- Geometric Modeling (1)
- Geometrie (1)
- Geometry Independent Field Approximation (1)
- Geschwindigkeit (1)
- Gesundheitsinformationssystem (1)
- Gesundheitswesen (1)
- Gewebeverbundwerkstoff (1)
- Goal-oriented A Posteriori Error Estimation (1)
- Graphen (1)
- Grundwasser (1)
- Größenverhältnis (1)
- HPC (1)
- Healing (1)
- High-speed railway bridge (1)
- Hochbau (1)
- Homogenisieren (1)
- Homogenisierung (1)
- Homogenization (1)
- Hydrological drought (1)
- IOT (1)
- Incompressibility (1)
- Infrastructures (1)
- Ingenieurwissenschaften (1)
- Instandhaltung (1)
- Internet der Dinge (1)
- Internet der dinge (1)
- Internet of Things (1)
- Inverse analysis (1)
- Inverse problems (1)
- Isogeometric Analysis (1)
- K-nearest neighbors (1)
- KNN (1)
- Kaverne (1)
- Keramik (1)
- Kirchhoff-Love theory (1)
- Klüftung (1)
- Kohlenstoff Nanoröhre (1)
- Konjugierte-Gradienten-Methode (1)
- Kontinuierliche Simul (1)
- Kontinuumsmechanik (1)
- Kosten-Nutzen-Analyse (1)
- Körper (1)
- Kühlkörper (1)
- Land surface temperature (1)
- Local maximum entropy approximants (1)
- Lufttemperatur (1)
- Lösungsverfahren (1)
- M5 model tree (1)
- MDLSM method (1)
- Machine Learning (1)
- Markov-Kette mit stetiger Zeit (1)
- Marmara Region (1)
- Maschinenbau (1)
- Mass Tuned Damper (1)
- Material (1)
- Materialversagen (1)
- Mathematical methods for (robotics and) computer vision (1)
- Mechanical properties (1)
- Mechanik (1)
- Membrane contactors (1)
- Mensch (1)
- Mesh Refinement (1)
- Meso-Scale (1)
- Messtechnik (1)
- Mikro-Scale (1)
- Model assessment (1)
- Modellbildung (1)
- Modellkalibrierung (1)
- Modezuordnung (1)
- Molecular Liquids (1)
- Monte-Carlo-Integration (1)
- Monte-Carlo-Simulation (1)
- Morphologie (1)
- Motion-induced forces (1)
- Multi-criteria decision making (1)
- Multi-objective Evolutionary Optimization, Elitist Non- Dominated Sorting Evolution Strategy (ENSES), Sandwich Structure, Pareto-Optimal Solutions, Evolutionary Algorithm (1)
- Multi-scale modeling (1)
- Muscle model (1)
- Muskel (1)
- Nachhaltigkeit (1)
- Nanocomposite materials (1)
- Nanofluid (1)
- Nanomaterial (1)
- Nanomaterials (1)
- Nanomechanical Resonators (1)
- Nanomechanik (1)
- Nanoribbons, thermal conductivity (1)
- Nanostructures (1)
- Nanoverbundstruktur (1)
- Nasskühlung (1)
- Naturkatastrophe (1)
- Nitratbelastung (1)
- Numerical modeling in engineering (1)
- Numerische Berechnung (1)
- Numerische Mathematik (1)
- Oberflächentemperatur (1)
- Operante Konditionierung (1)
- Operational modal analysis (1)
- Optimization in engineering applications (1)
- Optimization problems (1)
- PU Enrichment method (1)
- Parameteridentification (1)
- Passive damper (1)
- Phase field model (1)
- Phase-field model (1)
- Phase-field modeling (1)
- Phasenfeldmodell (1)
- Physics informed neural network (1)
- Physikalische Eigenschaft (1)
- Piezoelectricity (1)
- Polykristall (1)
- Polymer nanocomposites (1)
- Polymers (1)
- Polynomial Splines over Hierarchical T-meshes (1)
- Railway bridges (1)
- Rapid Visual Screening (1)
- Recovery Based Error Estimator (1)
- Referenzfläche (1)
- Rehabilitation (1)
- Reliability Analysis (1)
- Reliability Theory (1)
- Renewable energy (1)
- Resonator (1)
- Riss (1)
- Rissausbreitung (1)
- Schaden (1)
- Schadensdetektionsverfahren (1)
- Schadensmechanik (1)
- Schubspannung (1)
- Schwingung (1)
- Schwingungsanalyse (1)
- Schwingungsdämpfer (1)
- Schädigung (1)
- Schätztheorie (1)
- Selbstheilung (1)
- Semi-active damper (1)
- Sensitivity (1)
- Sensitivitätsanalyse (1)
- Sensor (1)
- Simulationsprozess (1)
- Solar (1)
- Sprödbruch (1)
- Stabilität (1)
- Stahlbau (1)
- Standsicherheit (1)
- Staudamm (1)
- Steifigkeit (1)
- Stochastic Subspace Identification (1)
- Stochastic analysis (1)
- Strukturoptimierung (1)
- Strömungsmechanik (1)
- Stütze (1)
- Super Healing (1)
- Surface effects (1)
- Sustainability (1)
- Sustainable production (1)
- System Identification (1)
- Systemidentifikation (1)
- Talsperre (1)
- Thermal Fluid-Structure Interaction (1)
- Thermal conductivity (1)
- Thermoelasticity (1)
- Thermoelastizität (1)
- Thin shell (1)
- Thorax (1)
- Tragfähigkeit (1)
- Träger (1)
- Uncertainty (1)
- Uncertainty analysis (1)
- Verbundwerkstoff (1)
- Vernetzung (1)
- Vortex Induced Vibration (1)
- Vulnerability (1)
- Vulnerability assessment (1)
- Wasserbau (1)
- Wave propagation (1)
- Wechselwirkung (1)
- Werkstoff (1)
- Wind Energy (1)
- Wind Turbines (1)
- Windenergie (1)
- Windturbine (1)
- XFEM (1)
- Zementbeton (1)
- Zustandsraummodell (1)
- Zuverlässigkeitsanalyse (1)
- Zuverlässigkeitstheorie (1)
- action recognition (1)
- adaptive neuro-fuzzy inference system (ANFIS) (1)
- adaptive pushover (1)
- adaptive simulation (1)
- ant colony optimization algorithm (ACO) (1)
- artificial neural network (1)
- atomistic simulation methods (1)
- automatic modal analysis (1)
- back-pressure (1)
- battery (1)
- biodiesel (1)
- brittle fracture (1)
- buckling (1)
- building information modelling (1)
- ceramics (1)
- classification (1)
- classifier (1)
- clear channel assessments (1)
- cluster density (1)
- cluster shape (1)
- composite (1)
- computation (1)
- computational fluid dynamics (CFD) (1)
- concrete (1)
- congestion control (1)
- conjugate gradient method (1)
- continuum damage mechanics (1)
- coronary artery disease (1)
- crack (1)
- crack identification (1)
- cylindrical shell structures (1)
- damage (1)
- dams (1)
- deep learning neural network (1)
- deep neural network (1)
- diesel engines (1)
- dimensionality reduction (1)
- diskontinuum mechanics (1)
- dissimilarity measures (1)
- domain decomposition (1)
- duty-cycles (1)
- earthquake damage (1)
- earthquake vulnerability assessment (1)
- effective properties (1)
- energy consumption (1)
- energy efficiency (1)
- energy, exergy (1)
- ensemble model (1)
- estimation (1)
- extreme events (1)
- extreme pressure (1)
- finite element (1)
- firefly optimization algorithm (1)
- flow pattern (1)
- fog computing (1)
- food informatics (1)
- fractional-order control (1)
- fuzzy decision making (1)
- genetic algorithm (1)
- geoinformatics (1)
- grid-based (1)
- ground water contamination (1)
- growth mode (1)
- gully erosion susceptibility (1)
- health (1)
- health informatics (1)
- heart disease diagnosis (1)
- heat sink (1)
- heterogeneous material (1)
- high-performance computing (1)
- human blob (1)
- human body proportions (1)
- hybrid machine learning (1)
- hybrid machine learning model (1)
- hydraulic jump (1)
- hydrological model (1)
- hydrology (1)
- image processing (1)
- industry 4.0 (1)
- intergranular damage (1)
- isogeometric analysis (1)
- isogeometric methods (1)
- jointed rock (1)
- least square support vector machine (LSSVM) (1)
- level set method (1)
- longitudinal dispersion coefficient (1)
- material failure (1)
- matrix-free (1)
- mehrphasig (1)
- mitigation (1)
- modal analysis (1)
- modal parameter estimation (1)
- modal tracking (1)
- mode pairing (1)
- model updating (1)
- mortar method (1)
- multigrid (1)
- multigrid method (1)
- multiscale (1)
- multiscale method (1)
- nanocomposite (1)
- nanofluid (1)
- nanoreinforced composites (1)
- natural hazard (1)
- neural networks (NNs) (1)
- optimal sensor positions (1)
- optimale Sensorpositionierung (1)
- parameter identification (1)
- particle swarm optimization (1)
- passive control (1)
- phase field (1)
- photovoltaic (1)
- photovoltaic-thermal (PV/T) (1)
- physical activities (1)
- precipitation (1)
- prediction (1)
- predictive model (1)
- principal component analysis (1)
- public health (1)
- public space (1)
- quasicontinuum method (1)
- received signal strength indicator (RSSI) (1)
- recovery-based and residual-based error estimators (1)
- remote sensing (1)
- residential buildings (1)
- response surface methodology (1)
- rice (1)
- rivers (1)
- rule based classification (1)
- scalable smeared crack analysis (1)
- scale transition (1)
- seasonal precipitation (1)
- seismic assessment (1)
- seismic control (1)
- seismic hazard analysis (1)
- seismic risk estimation (1)
- seismic vulnerability (1)
- signal processing (1)
- site-specific spectrum (1)
- smart sensors (1)
- soil temperature (1)
- solver (1)
- spatial analysis (1)
- spatiotemporal database (1)
- spearman correlation coefficient (1)
- square root cubature Kalman filter (1)
- standard deviation of pressure fluctuations (1)
- statistical analysis (1)
- statistical coefficient of the probability distribution (1)
- stilling basin (1)
- stochastic (1)
- stochastic subspace identification (1)
- structural control (1)
- structural dynamics (1)
- sugarcane (1)
- support vector regression (1)
- sustainability (1)
- tall buildings (1)
- thermal conductivity (1)
- tuned mass damper (1)
- tuned mass dampers (1)
- type-3 fuzzy systems (1)
- urban health (1)
- urban sustainability (1)
- water quality (1)
- wavelet transform (1)
- wireless sensor network (1)
- woven composites (1)

This thesis presents the advances and applications of phase field modeling in fracture analysis. In this approach, the sharp crack surface topology in a solid is approximated by a diffusive crack zone governed by a scalar auxiliary variable. The uniqueness of phase field modeling is that the crack paths are determined automatically as part of the solution; no interface tracking is required. The damage parameter varies continuously over the domain. But this flexibility comes with associated difficulties: (1) a very fine spatial discretization is required to represent the sharp local gradients correctly; (2) this fine discretization results in high computational cost; (3) higher-order derivatives of the field must be computed to achieve improved convergence rates; and (4) conventional numerical integration techniques suffer from the curse of dimensionality. As a consequence, the practical applicability of phase field models is severely limited.
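The diffusive approximation can be made concrete with the known optimal 1D crack profiles: d(x) = e^(-|x|/l0) for the second-order model and d(x) = (1 + |x|/l0) e^(-|x|/l0) for the fourth-order one. A minimal numpy sketch (function and variable names are illustrative, not taken from the thesis):

```python
import numpy as np

def d_second_order(x, l0):
    """Optimal 1D crack profile of the second-order phase-field model."""
    return np.exp(-np.abs(x) / l0)

def d_fourth_order(x, l0):
    """Optimal 1D crack profile of the fourth-order model (C1 at the crack)."""
    r = np.abs(x) / l0
    return (1.0 + r) * np.exp(-r)

x = np.linspace(-2.0, 2.0, 2001)
dx = x[1] - x[0]
for l0 in (0.5, 0.25, 0.125):
    width = dx * d_second_order(x, l0).sum()   # ~ 2*l0: zone shrinks with l0
    print(f"l0 = {l0}: diffusive zone width ~ {width:.3f}")
```

As the length scale l0 tends to zero the profiles converge to a sharp crack, which is why a very fine mesh is needed around the crack path.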
The research presented in this thesis addresses the difficulties of conventional numerical integration techniques for phase field modeling in quasi-static brittle fracture analysis. The first method relies on polynomial splines over hierarchical T-meshes (PHT-splines) in the framework of isogeometric analysis (IGA). An adaptive h-refinement scheme is developed based on the variational energy formulation of phase field modeling. The fourth-order phase field model provides increased regularity in the exact solution of the phase field equation and improved convergence rates for numerical solutions on a coarser discretization, compared to the second-order model. However, second-order derivatives of the phase field are required in the fourth-order model. Hence, basis functions with at least C1 continuity are essential, which is achieved using hierarchical cubic B-splines in IGA. PHT-splines enable the refinement to remain local at singularities and high gradients, consequently reducing the computational cost greatly. Unfortunately, when modeling complex geometries, multiple parameter spaces (patches) are joined together to describe the physical domain, and there is typically a loss of continuity at the patch boundaries. This decrease of smoothness is dictated by the geometry description, where C0 parameterizations are normally used to deal with kinks and corners in the domain. Hence, the application of the fourth-order model is severely restricted. To overcome the high computational cost of the second-order model, we develop a dual-mesh adaptive h-refinement approach. This approach uses a coarser discretization for the elastic field and a finer discretization for the phase field, with independent refinement strategies for each field.
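The element-marking step of adaptive h-refinement can be illustrated with a toy 1D loop. The thesis drives refinement by a variational-energy indicator; the sketch below substitutes a simple phase-field threshold criterion purely for illustration:

```python
import numpy as np

def refine_by_phase_field(nodes, d_of_x, threshold=0.2):
    """Bisect every element whose midpoint phase-field value exceeds the
    threshold; a toy stand-in for marking elements near the crack."""
    new_nodes = [nodes[0]]
    for a, b in zip(nodes[:-1], nodes[1:]):
        mid = 0.5 * (a + b)
        if d_of_x(mid) > threshold:
            new_nodes.append(mid)   # split the element in two
        new_nodes.append(b)
    return np.array(new_nodes)

# crack centred at x = 0 with length scale l0 = 0.1
d = lambda x: np.exp(-abs(x) / 0.1)
mesh = np.linspace(-1.0, 1.0, 11)        # coarse uniform mesh
for _ in range(4):                        # a few refinement sweeps
    mesh = refine_by_phase_field(mesh, d)
print(len(mesh), np.diff(mesh).min())     # refinement stays local to the crack
```

Elements far from the crack keep their coarse size, which is the point of local refinement: the cost concentrates where the gradients are.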
The next contribution is based on physics-informed deep neural networks. The network is trained by minimizing the variational energy of the system described by general non-linear partial differential equations while respecting any given law of physics, hence the name physics-informed neural network (PINN). The developed approach needs only a set of points to define the geometry, contrary to conventional mesh-based discretization techniques. The concept of 'transfer learning' is integrated with the developed PINN approach to improve the computational efficiency of the network at each displacement step. This approach allows numerically stable crack growth even with larger displacement steps. An adaptive h-refinement scheme based on the generation of additional quadrature points in the damage zone is developed in this framework. For all the developed methods, displacement-controlled loading is considered. The accuracy and efficiency of both methods are studied numerically, showing that the developed methods are powerful and computationally efficient tools for accurately predicting fractures.
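The payoff of warm-starting the network parameters between displacement steps (the 'transfer learning' idea) can be mimicked with a toy quadratic stand-in for the variational energy; everything here is illustrative, not the thesis's PINN:

```python
import numpy as np

def minimize(grad, theta0, lr=0.1, tol=1e-8, max_iter=10000):
    """Plain gradient descent; returns the minimiser and iteration count."""
    theta = theta0.copy()
    for it in range(max_iter):
        g = grad(theta)
        if np.linalg.norm(g) < tol:
            break
        theta = theta - lr * g
    return theta, it

# toy quadratic 'energy' whose minimiser tracks the imposed displacement u_bar
make_grad = lambda u_bar: (lambda th: th - u_bar)
loads = np.linspace(0.0, 1.0, 11)           # displacement-controlled steps

theta = np.zeros(3)                          # 'network parameters' (toy)
warm_iters = []
for u in loads:
    theta, it = minimize(make_grad(u * np.ones(3)), theta)   # warm start
    warm_iters.append(it)

cold_iters = [minimize(make_grad(u * np.ones(3)), np.zeros(3))[1] for u in loads]
print(sum(warm_iters), sum(cold_iters))      # warm starting needs fewer iterations
```

Because the minimiser moves only slightly between load steps, starting each minimization from the previous solution converges in far fewer iterations than retraining from scratch.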

The application of the concept of information entropy, together with the principle of maximum entropy, to open channel flow is essentially based on physical considerations of the problem at hand. This paper is a discussion of the paper by Yeganeh and Heidari (2020), who proposed a new approach for measuring the vertical distribution of streamwise velocity in open channels. The discussers argue that this approach is conceptually incorrect and thus leads to a physically unrealistic situation. In addition, the discussers found some incorrect mathematical expressions (assumed to be typographical errors) in the paper, and also point out that the authors did not cite some of the original papers on the topic.

Prediction of the groundwater nitrate concentration is of utmost importance for pollution control and water resource management. This research aims to model the spatial groundwater nitrate concentration in the Marvdasht watershed, Iran, using several artificial intelligence methods: support vector machine (SVM), Cubist, random forest (RF), and Bayesian artificial neural network (Bayesian-ANN) machine learning models. For this purpose, 11 independent variables affecting groundwater nitrate changes were prepared for the study area: elevation, slope, plan curvature, profile curvature, rainfall, piezometric depth, distance from the river, distance from residential areas, sodium (Na), potassium (K), and topographic wetness index (TWI). Nitrate levels were also measured in 67 wells and used as the dependent variable for modeling. The data were divided into the two categories of training (70%) and testing (30%). The evaluation criteria coefficient of determination (R2), mean absolute error (MAE), root mean square error (RMSE), and Nash–Sutcliffe efficiency (NSE) were used to evaluate the performance of the models. The results showed that the RF model (R2 = 0.89, RMSE = 4.24, NSE = 0.87) outperforms the Cubist (R2 = 0.87, RMSE = 5.18, NSE = 0.81), SVM (R2 = 0.74, RMSE = 6.07, NSE = 0.74), and Bayesian-ANN (R2 = 0.79, RMSE = 5.91, NSE = 0.75) models. The zoning of groundwater nitrate concentration showed that the northern, agricultural parts of the study area have the highest nitrate levels.
The most important cause of nitrate pollution in these areas is agricultural activity and the use of groundwater from wells close to agricultural areas for irrigation: indiscriminately applied chemical fertilizers are washed out by irrigation or rainwater, penetrate the groundwater, and pollute the aquifer.
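The evaluation criteria named above follow directly from their standard definitions; a short numpy sketch (the sample values in the demo call are invented for illustration):

```python
import numpy as np

def regression_metrics(y_obs, y_pred):
    """R2, MAE, RMSE and Nash–Sutcliffe efficiency (NSE), the criteria
    used to compare the nitrate models."""
    y_obs, y_pred = np.asarray(y_obs, float), np.asarray(y_pred, float)
    resid = y_obs - y_pred
    ss_res = np.sum(resid ** 2)
    ss_tot = np.sum((y_obs - y_obs.mean()) ** 2)
    r2 = np.corrcoef(y_obs, y_pred)[0, 1] ** 2   # squared Pearson correlation
    mae = np.mean(np.abs(resid))
    rmse = np.sqrt(np.mean(resid ** 2))
    nse = 1.0 - ss_res / ss_tot                  # NSE = 1 for a perfect fit
    return r2, mae, rmse, nse

y_obs = np.array([3.1, 4.0, 5.2, 6.3, 7.1])      # illustrative observations
y_pred = np.array([3.0, 4.2, 5.0, 6.5, 7.0])     # illustrative predictions
print(regression_metrics(y_obs, y_pred))
```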

Tall buildings have become an integral part of cities despite all their pros and cons. Some current tall buildings have several problems because of their unsuitable location; the problems include increasing density, imposing traffic on urban thoroughfares, blocking view corridors, etc. Some of these buildings have destroyed desirable views of the city. In this research, different criteria have been chosen, such as environment, access, socio-economic factors, land-use, and physical context. These criteria and sub-criteria are prioritized and weighted by the analytic network process (ANP) based on experts’ opinions, using Super Decisions V2.8 software. In parallel, layers corresponding to the sub-criteria were created in ArcGIS 10.3, and a locating plan was then produced via a weighted overlay (map algebra). In the next step, seven hypothetical tall buildings (20 stories each), placed in the best part of the locating plan, were considered to evaluate how much of these hypothetical buildings would be visible (fuzzy visibility) from the streets and open spaces throughout the city. These processes were modeled in MATLAB, and the final fuzzy visibility plan was created in ArcGIS. The fuzzy visibility results can help city managers and planners to choose which location is suitable for a tall building and how much visibility may be appropriate. The proposed model can locate tall buildings based on technical and visual criteria in the future development of the city, and it can be widely used in any city as long as the criteria and weights are localized.
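The weighted-overlay step (map algebra) amounts to a weighted sum of criterion rasters. A small numpy sketch with hypothetical 2x2 layers and ANP-style weights (all names and values invented for illustration):

```python
import numpy as np

def weighted_overlay(layers, weights):
    """Map-algebra weighted overlay: each layer is a suitability raster
    scaled to 0..1 and the criterion weights must sum to one."""
    stack = np.stack([np.asarray(l, float) for l in layers])
    w = np.asarray(weights, float)
    assert np.isclose(w.sum(), 1.0), "criterion weights must sum to 1"
    return np.tensordot(w, stack, axes=1)

access   = np.array([[0.9, 0.4], [0.2, 0.8]])   # hypothetical rasters
land_use = np.array([[0.5, 0.6], [0.3, 0.9]])
environ  = np.array([[0.7, 0.1], [0.6, 0.5]])
plan = weighted_overlay([access, land_use, environ], [0.5, 0.3, 0.2])
print(plan)   # higher cells are better candidate sites
```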

This study aims to evaluate a new approach to modeling gully erosion susceptibility (GES) based on a deep learning neural network (DLNN) model and an ensemble of the particle swarm optimization (PSO) algorithm with the DLNN (PSO-DLNN), comparing these approaches with common artificial neural network (ANN) and support vector machine (SVM) models in the Shirahan watershed, Iran. For this purpose, 13 independent variables affecting GES in the study area, namely, altitude, slope, aspect, plan curvature, profile curvature, drainage density, distance from a river, land use, soil, lithology, rainfall, stream power index (SPI), and topographic wetness index (TWI), were prepared. A total of 132 gully erosion locations were identified during field visits. To implement the proposed model, the dataset was divided into the two categories of training (70%) and testing (30%). The results indicate that the area under the curve (AUC) value from the receiver operating characteristic (ROC) for the testing dataset of PSO-DLNN is 0.89, which indicates excellent accuracy. The remaining models also achieve good accuracy, with results close to those of the PSO-DLNN model; the AUC values from the ROC of DLNN, SVM, and ANN for the testing datasets are 0.87, 0.85, and 0.84, respectively. Ensembling with the PSO algorithm thus increased the predictive efficiency of the DLNN model for GES. Therefore, it can be concluded that the DLNN model and its ensemble with the PSO algorithm can be used as a novel and practical method to predict gully erosion susceptibility, which can help planners and managers to manage and reduce the risk of this phenomenon.
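A minimal particle swarm optimizer of the kind ensembled with the DLNN can be sketched as follows; here it minimizes a toy sphere objective rather than a network loss, and all parameter values are generic defaults, not those of the study:

```python
import numpy as np

def pso(f, dim, n_particles=30, iters=300, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm optimiser (global-best topology)."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (n_particles, dim))     # positions
    v = np.zeros_like(x)                           # velocities
    pbest, pbest_val = x.copy(), np.array([f(p) for p in x])
    g = pbest[pbest_val.argmin()].copy()           # global best
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = x + v
        vals = np.array([f(p) for p in x])
        better = vals < pbest_val
        pbest[better], pbest_val[better] = x[better], vals[better]
        g = pbest[pbest_val.argmin()].copy()
    return g, pbest_val.min()

best, val = pso(lambda p: np.sum(p ** 2), dim=2)   # toy sphere objective
print(best, val)
```

In the ensemble setting, the objective f would be the validation loss of the DLNN as a function of its hyperparameters.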

For this paper, the problem of energy/voltage management in photovoltaic (PV)/battery systems was studied, and a new fractional-order control system on the basis of type-3 (T3) fuzzy logic systems (FLSs) was developed. New fractional-order learning rules are derived for the tuning of T3-FLSs such that stability is ensured. In addition, using fractional-order calculus, the robustness was studied with respect to dynamic uncertainties, perturbations of irradiation and temperature, and abrupt faults in output loads, and, subsequently, new compensators were proposed. In several examinations under difficult operating conditions, such as random temperature, variable irradiation, and abrupt changes in output load, the capability of the proposed controller was verified. In addition, its superiority was demonstrated in comparison with other methods, such as proportional-integral-derivative (PID), sliding mode control (SMC), passivity-based control (PBC), and linear quadratic regulator (LQR) systems.
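Fractional-order operators of the kind used in such controllers are commonly discretized with the Grünwald–Letnikov scheme, D^a f(t) ~ h^(-a) * sum_k (-1)^k C(a,k) f(t - k h). A numpy sketch of the approximation itself (not the paper's controller):

```python
import numpy as np

def gl_fractional_derivative(f_vals, alpha, h):
    """Grünwald–Letnikov approximation of the order-alpha fractional
    derivative on a uniform grid (values f_vals, spacing h)."""
    n = len(f_vals)
    w = np.empty(n)
    w[0] = 1.0
    for k in range(1, n):                  # recursive binomial weights
        w[k] = w[k - 1] * (1.0 - (alpha + 1.0) / k)
    out = np.empty(n)
    for i in range(n):
        out[i] = np.dot(w[: i + 1], f_vals[i::-1]) / h ** alpha
    return out

t = np.linspace(0.0, 1.0, 1001)
h = t[1] - t[0]
d_half = gl_fractional_derivative(t, 0.5, h)   # D^0.5 of f(t) = t
# analytic value at t = 1 is 2/sqrt(pi)
print(d_half[-1], 2 / np.sqrt(np.pi))
```

For alpha = 1 the weights collapse to a first difference, so the scheme recovers the ordinary derivative.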

In this study, a new approach based on intelligent systems and machine learning algorithms is introduced for solving singular multi-pantograph differential equations (SMDEs). For the first time, a type-2 fuzzy logic based approach is formulated to find an approximate solution. The rules of the suggested type-2 fuzzy logic system (T2-FLS) are optimized by the square root cubature Kalman filter (SCKF) such that the proposed fitness function is minimized. Furthermore, the stability and boundedness of the estimation error are proven by a novel approach based on the Lyapunov theorem. The accuracy and robustness of the suggested algorithm are verified by several statistical examinations. It is shown that the suggested method yields an accurate solution with rapid convergence and a lower computational cost.
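A pantograph equation, y'(t) = a*y(t) + b*y(q*t) with 0 < q < 1, contains a delayed argument q*t that always lies in the already-computed past, so even a naive time stepper can interpolate the stored history. The sketch below is an explicit-Euler baseline for context, not the T2-FLS method of the paper:

```python
import numpy as np

def solve_pantograph(a, b, q, y0, t_end, n):
    """Explicit-Euler solution of y'(t) = a*y(t) + b*y(q*t), 0 < q < 1,
    interpolating the computed history at the delayed argument q*t."""
    t = np.linspace(0.0, t_end, n + 1)
    h = t[1] - t[0]
    y = np.empty(n + 1)
    y[0] = y0
    for i in range(n):
        y_delayed = np.interp(q * t[i], t[: i + 1], y[: i + 1])
        y[i + 1] = y[i] + h * (a * y[i] + b * y_delayed)
    return t, y

# sanity check: with b = 0 the equation reduces to y' = a*y
t, y = solve_pantograph(a=-1.0, b=0.0, q=0.5, y0=1.0, t_end=1.0, n=2000)
print(y[-1], np.exp(-1.0))
```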

Piping erosion is one form of water erosion that leads to significant changes in the landscape and environmental degradation. In the present study, we evaluated piping erosion modeling in the Zarandieh watershed of Markazi province in Iran based on random forest (RF), support vector machine (SVM), and Bayesian generalized linear model (Bayesian GLM) machine learning algorithms. To this end, given the importance of various geo-environmental and soil properties in the evolution and creation of piping erosion, 18 variables were considered for modeling the piping erosion susceptibility in the Zarandieh watershed. A total of 152 points of piping erosion were recognized in the study area and divided into training (70%) and validation (30%) sets for modeling. The area under the curve (AUC) was used to assess the efficiency of the RF, SVM, and Bayesian GLM models. The piping erosion susceptibility results indicated that all three models had high efficiency in the testing step, with AUC values of 0.90 for RF, 0.88 for SVM, and 0.87 for Bayesian GLM. Altitude, pH, and bulk density were the variables with the greatest influence on piping erosion susceptibility in the Zarandieh watershed. This result indicates that geo-environmental and soil chemical variables are accountable for the expansion of piping erosion in the Zarandieh watershed.
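The AUC used above equals the Mann–Whitney rank statistic: the probability that a randomly chosen erosion site scores higher than a randomly chosen non-site. A small numpy sketch (tied scores are not handled; the demo data are invented):

```python
import numpy as np

def roc_auc(labels, scores):
    """AUC via the rank-sum (Mann–Whitney) statistic; assumes no tied
    scores (this sketch skips tie-averaging of ranks)."""
    labels = np.asarray(labels, bool)
    scores = np.asarray(scores, float)
    order = scores.argsort()
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)
    n_pos = labels.sum()
    n_neg = len(labels) - n_pos
    return (ranks[labels].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

labels = [1, 1, 1, 0, 0, 0]            # 1 = observed piping-erosion point
scores = [0.9, 0.8, 0.4, 0.7, 0.3, 0.2]   # model susceptibility scores
print(roc_auc(labels, scores))
```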

In recent decades, a multitude of concepts and models were developed to understand, assess and predict muscular mechanics in the context of physiological and pathological events.
Most of these models are highly specialized and designed to selectively address fields in, e.g., medicine, sports science, forensics, product design or CGI; their data are often not transferable to other areas of application. A single universal model which covers the details of biochemical and neural processes as well as the development of internal and external force and motion patterns and appearance would not be practical, given the diversity of the questions to be investigated and the need to find answers efficiently. With reasonable limitations, though, a generalized approach is feasible.
The objective of the work at hand was to develop a model for muscle simulation which covers the phenomenological aspects and is thus universally applicable in domains where specialized models have been utilized up to now. This includes investigations of active and passive motion, the structural interaction of muscles within the body and with external elements (for example in crash scenarios), but also research topics like the verification of in vivo experiments and parameter identification. For this purpose, elements for the simulation of incompressible deformations were studied, adapted and implemented into the finite element code SLang. Various anisotropic, visco-elastic muscle models were developed or enhanced. The applicability was demonstrated on the basis of several examples, and a general basis for the implementation of further material models was developed and elaborated.
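Phenomenological muscle descriptions of this kind are often written in Hill-type form, F = F_max * (a * f_l * f_v + f_p). The sketch below uses textbook shape functions with invented constants, purely to illustrate the structure; it is not one of the constitutive models developed in the thesis:

```python
import numpy as np

def hill_muscle_force(l_rel, v_rel, act, f_max=1000.0):
    """Generic Hill-type model: F = f_max * (act * f_l * f_v + f_p).
    l_rel: length / optimal length; v_rel: shortening velocity / v_max
    (restricted here to 0 <= v_rel <= 1); act: activation in [0, 1].
    Shape functions and constants are illustrative textbook choices."""
    f_l = np.exp(-((l_rel - 1.0) / 0.45) ** 2)            # active force-length
    f_v = max(0.0, (1.0 - v_rel) / (1.0 + 4.0 * v_rel))   # Hill force-velocity
    f_p = np.expm1(5.0 * (l_rel - 1.0)) if l_rel > 1.0 else 0.0  # passive
    return f_max * (act * f_l * f_v + f_p)

# isometric force peaks at the optimal fibre length
print(hill_muscle_force(1.0, 0.0, act=1.0))   # -> 1000.0
print(hill_muscle_force(1.2, 0.0, act=1.0))   # stretched: passive force adds
```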

In recent years, substantial attention has been devoted to thermoelastic multifield problems and their numerical analysis. Thermoelasticity is one of the important categories of multifield problems which deals with the effect of mechanical and thermal disturbances on an elastic body. In other words, thermoelasticity encompasses the phenomena that describe the elastic and thermal behavior of solids and their interactions under thermo-mechanical loadings. Since providing an analytical solution for general coupled thermoelasticity problems is mathematically complicated, the development of alternative numerical solution techniques seems essential.
Due to the nature of numerical analysis methods, the presence of error in the results is inevitable; therefore, in any numerical simulation, the main concern is the accuracy of the approximation. There are different error estimation (EE) methods to assess the overall quality of a numerical approximation. In many real-life numerical simulations, not only the overall error but also the local error, or the error in a particular quantity of interest, is of main interest. The error estimation techniques developed to evaluate the error in the quantity of interest are known as “goal-oriented” error estimation (GOEE) methods.
This project, for the first time, investigates the classical a posteriori error estimation and goal-oriented a posteriori error estimation in 2D/3D thermoelasticity problems. Generally, the a posteriori error estimation techniques can be categorized into two major branches of recovery-based and residual-based error estimators. In this research, application of both recovery- and residual-based error estimators in thermoelasticity are studied. Moreover, in order to reduce the error in the quantity of interest efficiently and optimally in 2D and 3D thermoelastic problems, goal-oriented adaptive mesh refinement is performed.
As the first application category, the error estimation in classical thermoelasticity (CTE) is investigated. In the first step, an rh-adaptive thermo-mechanical formulation based on goal-oriented error estimation is proposed. The developed goal-oriented error estimation relies on different stress recovery techniques, i.e., the superconvergent patch recovery (SPR), L2-projection patch recovery (L2-PR), and weighted superconvergent patch recovery (WSPR). Moreover, a new adaptive refinement strategy (ARS) is presented that minimizes the error in a quantity of interest and refines the discretization such that the error is equally distributed in the refined mesh. The method is validated by numerous numerical examples where an analytical or reference solution is available.
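Superconvergent patch recovery can be illustrated in 1D: linear-element gradients are sampled at the element midpoints (the superconvergent points) and a patchwise least-squares fit is evaluated at the nodes. A numpy sketch of this 1D special case (not the 2D/3D stress recovery of the thesis):

```python
import numpy as np

def spr_nodal_gradient(nodes, u):
    """1D superconvergent patch recovery: the piecewise-constant FE
    gradient is sampled at element midpoints and a linear fit over each
    two-element patch is evaluated at the shared node; boundary nodes
    reuse the fit of the adjacent patch."""
    mids = 0.5 * (nodes[:-1] + nodes[1:])
    grads = np.diff(u) / np.diff(nodes)      # element-wise FE gradient
    rec = np.empty_like(nodes)
    for i in range(1, len(nodes) - 1):
        coef = np.polyfit(mids[i - 1 : i + 1], grads[i - 1 : i + 1], 1)
        rec[i] = np.polyval(coef, nodes[i])
    rec[0] = np.polyval(np.polyfit(mids[:2], grads[:2], 1), nodes[0])
    rec[-1] = np.polyval(np.polyfit(mids[-2:], grads[-2:], 1), nodes[-1])
    return rec

nodes = np.linspace(0.0, 1.0, 6)
u = nodes ** 2                    # nodal interpolant of u(x) = x^2
print(spr_nodal_gradient(nodes, u))   # recovers u'(x) = 2x exactly at nodes
```

The recovered gradient is one order more accurate than the raw FE gradient, which is what makes the difference between the two usable as an error estimate.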
After investigating error estimation in classical thermoelasticity and evaluating the quality of the presented error estimators, we extended the application of the developed goal-oriented error estimation and the associated adaptive refinement technique to classical fully coupled dynamic thermoelasticity. In this part, we present an adaptive method for coupled dynamic thermoelasticity problems based on goal-oriented error estimation. We use dimensionless variables in the finite element formulation, and for the time integration we employ the acceleration-based Newmark-β method. In this part, the SPR, L2-PR, and WSPR recovery methods are exploited to estimate the error in the quantity of interest (QoI). By using
adaptive refinement in space, the error in the quantity of interest is minimized. Therefore, the discretization is refined such that the error is equally distributed in the refined mesh. We demonstrate the efficiency of this method by numerous numerical examples.
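The acceleration-based Newmark scheme can be sketched for a single degree of freedom; the average-acceleration variant (beta = 1/4, gamma = 1/2) shown here is undamped and unforced for brevity, a simplification of the coupled thermoelastic setting:

```python
import numpy as np

def newmark_sdof(m, k, u0, v0, dt, n, beta=0.25, gamma=0.5):
    """Acceleration-based Newmark time integration for an undamped,
    unforced single-DOF system m*u'' + k*u = 0."""
    u, v, a = np.empty(n + 1), np.empty(n + 1), np.empty(n + 1)
    u[0], v[0] = u0, v0
    a[0] = -k * u0 / m
    s = m + beta * dt ** 2 * k                  # effective 'mass'
    for i in range(n):
        u_pred = u[i] + dt * v[i] + (0.5 - beta) * dt ** 2 * a[i]
        v_pred = v[i] + (1.0 - gamma) * dt * a[i]
        a[i + 1] = -k * u_pred / s              # solve for the new acceleration
        u[i + 1] = u_pred + beta * dt ** 2 * a[i + 1]
        v[i + 1] = v_pred + gamma * dt * a[i + 1]
    return u, v

# one full period of a unit oscillator: the response returns to u0
m_, k_ = 1.0, (2 * np.pi) ** 2                  # natural period T = 1
u, v = newmark_sdof(m_, k_, u0=1.0, v0=0.0, dt=1e-3, n=1000)
print(u[-1])   # close to 1.0: average acceleration is unconditionally stable
```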
After studying the recovery-based error estimators, we investigated residual-based error estimation in thermoelasticity. In the last part of this research, we present a 3D adaptive method for thermoelastic problems based on goal-oriented error estimation, where the error is measured with respect to a pointwise quantity of interest. We developed a method for a posteriori error estimation and mesh adaptation based on the dual weighted residual (DWR) method, which relies on duality principles and requires the solution of an adjoint problem. Here, we consider the application of the derived estimator and mesh refinement to two-/three-dimensional (2D/3D) thermo-mechanical multifield problems. In this study, the goal is given by singular pointwise functions, such as the point value or point-value derivative at a specific point of interest (PoI). An adaptive algorithm is adopted to refine the mesh so as to minimize the error in the quantity of interest.
The mesh adaptivity procedure based on the DWR method is performed by adaptive local h-refinement/coarsening with hanging nodes allowed. According to the proposed DWR method, the error contribution of each element is evaluated. In the refinement process, the contribution of each element to the goal error serves as the mesh refinement criterion.
In this study, we substantiate the accuracy and performance of this method by several numerical examples with available analytical solutions. Here, 2D and 3D problems under thermo-mechanical loadings are considered as benchmark problems. To show how accurately the derived estimator captures the exact error in the evaluation of the pointwise quantity of interest, the goal-error effectivity index, a standard measure of the quality of an estimator, is calculated in all examples using the analytical solutions. Moreover, to demonstrate the efficiency of the proposed method and the optimal behavior of the employed refinement method, the results of conventional error estimators and refinement techniques (e.g., global uniform refinement, the Kelly, and the weighted Kelly techniques) are used for comparison.
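The marking logic described above — using each element's contribution to the goal error as the refinement criterion — can be sketched as follows. This is a generic fixed-fraction (Dörfler-type) strategy, not the thesis implementation, and the error indicator values are made-up numbers.

```python
# Illustrative sketch of DWR-driven element marking: each element carries an
# estimated contribution to the goal error, and the smallest set of elements
# accounting for a chosen fraction of the total error is marked for
# h-refinement. Indicator values below are invented.

def mark_for_refinement(indicators, fraction=0.5):
    """indicators maps element id -> |local DWR error contribution|."""
    total = sum(indicators.values())
    marked, accumulated = [], 0.0
    for elem, eta in sorted(indicators.items(), key=lambda kv: -kv[1]):
        if accumulated >= fraction * total:
            break
        marked.append(elem)
        accumulated += eta
    return marked

indicators = {0: 0.02, 1: 0.40, 2: 0.05, 3: 0.30, 4: 0.08}
marked = mark_for_refinement(indicators, fraction=0.5)  # -> [1, 3]
```

Marking a fixed error fraction rather than a fixed element count is what drives the error toward an equal distribution over the refined mesh.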

The Marmara Region (NW Turkey) has experienced significant earthquakes (M > 7.0) to date. A destructive earthquake is also expected in the region. To determine the effect of the specific design spectrum, eleven provinces located in the region were chosen according to the Turkey Earthquake Building Code updated in 2019. Additionally, the differences between the previous and updated regulations of the country were investigated. Peak Ground Acceleration (PGA) and Peak Ground Velocity (PGV) were obtained for each province by using earthquake ground motion levels with 2%, 10%, 50%, and 68% probability of exceedance in 50-year periods. The PGA values in the region range from 0.16 to 0.7 g for earthquakes with a return period of 475 years. For each province, a sample of a reinforced-concrete building having two different numbers of stories with the same ground and structural characteristics was chosen. Static adaptive pushover analyses were performed for the sample reinforced-concrete building using each province’s design spectrum. The variations in the earthquake and structural parameters were investigated according to different geographical locations. It was determined that the site-specific design spectrum significantly influences target displacements for performance-based assessments of buildings due to seismicity characteristics of the studied geographic location.

A Machine Learning Framework for Assessing Seismic Hazard Safety of Reinforced Concrete Buildings
(2020)

Although averting a seismic disturbance and its physical, social, and economic disruption is practically impossible, advancements in computational science and numerical modeling equip us to predict its severity, understand the outcomes, and prepare for post-disaster management. Many aging buildings exist amidst developed metropolitan areas and are still in service. These buildings were also designed before national seismic codes were established or construction regulations introduced. In that case, risk reduction is significant for developing alternatives and designing suitable models to enhance the performance of existing structures. Such models will be able to classify risks and casualties related to possible earthquakes through emergency preparation. Thus, it is crucial to recognize structures that are susceptible to earthquake vibrations and need to be prioritized for retrofitting. However, the behavior of each building under seismic actions cannot be studied through full structural analysis, as this is unrealistic because of the rigorous computations, long duration, and substantial expenditure involved. This calls for a simple, reliable, and accurate process known as Rapid Visual Screening (RVS), which serves as a primary screening platform, including an optimum number of seismic parameters and predetermined performance damage conditions for structures. In this study, the damage classification technique was studied, and the efficacy of the Machine Learning (ML) method in damage prediction via a Support Vector Machine (SVM) model was explored. The ML model is trained and tested separately on damage data from four different earthquakes, namely Ecuador, Haiti, Nepal, and South Korea. Each dataset consists of a varying number of input data and eight performance modifiers.
Based on the study and the results, the ML model using SVM classifies the given input data into their respective damage classes and performs well in the seismic hazard safety evaluation of buildings.
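For intuition, here is a hedged sketch of a linear SVM trained with the Pegasos sub-gradient method on invented, linearly separable toy data. The study's actual model used eight performance modifiers per building and real post-earthquake damage labels (and presumably a library implementation); nothing below is taken from it.

```python
import random

# Minimal linear SVM (Pegasos sub-gradient training) on toy two-class data.
# Features and labels are invented for illustration only.

def pegasos_train(data, lam=0.01, epochs=200, seed=0):
    """data: list of (features, label) with label in {-1, +1}."""
    rng = random.Random(seed)
    dim = len(data[0][0])
    w = [0.0] * dim
    t = 0
    for _ in range(epochs):
        for x, y in rng.sample(data, len(data)):   # shuffled pass
            t += 1
            eta = 1.0 / (lam * t)
            margin = y * sum(wi * xi for wi, xi in zip(w, x))
            # regularization shrinkage, then push toward margin violators
            w = [(1 - eta * lam) * wi for wi in w]
            if margin < 1:
                w = [wi + eta * y * xi for wi, xi in zip(w, x)]
    return w

def predict(w, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) >= 0 else -1

# toy "damaged vs. undamaged" data with two hypothetical features
data = [([2.0, 2.5], 1), ([1.8, 2.9], 1), ([2.4, 2.2], 1),
        ([-1.5, -2.0], -1), ([-2.2, -1.1], -1), ([-1.9, -2.4], -1)]
w = pegasos_train(data)
```

With clearly separated classes the learned weight vector classifies both sides correctly after a few epochs.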

Recently, the demand for residence in and usage of urban infrastructure has increased, thereby elevating the risk to human lives from natural calamities. The occupancy demand has rapidly increased the construction rate, whereas inadequate structural design makes buildings more vulnerable. Buildings constructed before the development of seismic codes are additionally susceptible to earthquake vibrations. Structural collapse causes economic loss as well as setbacks for human lives. Applying different theoretical methods to analyze structural behavior is expensive and time-consuming. Therefore, introducing a rapid vulnerability assessment method to check structural performance is necessary for future developments. This process is known as Rapid Visual Screening (RVS). The technique was created to identify, inventory, and screen structures that are potentially hazardous. Sometimes, poor construction quality means that some of the required parameters are unavailable; in this case, the RVS process becomes tedious. Hence, to tackle such a situation, multiple-criteria decision-making (MCDM) methods open a new avenue for seismic vulnerability assessment. The different parameters required by RVS can be incorporated into MCDM, which evaluates multiple conflicting criteria in decision making across several fields. This paper aims to bridge the gap between RVS and MCDM. Furthermore, to define the correlation between these techniques, the methodologies from the Indian, Turkish, and Federal Emergency Management Agency (FEMA) codes have been implemented, and the resulting seismic vulnerability assessments of structures have been observed and compared.
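The simplest way RVS parameters enter an MCDM scheme is a weighted-sum aggregation. The sketch below is purely illustrative: the criteria names, weights, and scores are hypothetical and are not taken from the FEMA, Indian, or Turkish code procedures discussed above.

```python
# Hedged illustration of a weighted-sum MCDM score over RVS-style criteria.
# All criteria, weights, and per-building scores are invented.

def weighted_sum_score(scores, weights):
    """Normalize the weights and return the aggregate vulnerability score."""
    total_w = sum(weights.values())
    return sum(scores[c] * weights[c] / total_w for c in scores)

criteria_weights = {"soft_storey": 0.30, "plan_irregularity": 0.20,
                    "construction_year": 0.25, "soil_class": 0.25}
building_a = {"soft_storey": 0.9, "plan_irregularity": 0.4,
              "construction_year": 0.8, "soil_class": 0.6}
building_b = {"soft_storey": 0.2, "plan_irregularity": 0.3,
              "construction_year": 0.4, "soil_class": 0.5}

# rank the buildings from most to least vulnerable
rank = sorted(["A", "B"], key=lambda b: -weighted_sum_score(
    building_a if b == "A" else building_b, criteria_weights))
```

Real MCDM methods (e.g., AHP or TOPSIS) differ mainly in how the weights are derived and how alternatives are compared, but the prioritization output has this same shape.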

This research models soil temperature (ST) using the machine learning models of the multilayer perceptron (MLP) algorithm and the support vector machine (SVM), in hybrid form with the Firefly optimization algorithm, i.e., MLP-FFA and SVM-FFA. In the current study, measured ST and meteorological parameters of the Tabriz and Ahar weather stations over the period 2013–2015 are used for training and testing of the studied models with delays of one and two days. To obtain conclusive results for validating the proposed hybrid models, the error metrics are benchmarked in an independent testing period; moreover, Taylor diagrams were utilized for that purpose. The results showed that, in the case of a one-day delay, except in predicting ST at 5 cm below the soil surface (ST5cm) at Tabriz station, MLP-FFA produced superior results compared with the MLP, SVM, and SVM-FFA models. For a two-day delay, MLP-FFA showed increased accuracy in predicting ST5cm and ST20cm at Tabriz station and ST10cm at Ahar station in comparison with SVM-FFA. Additionally, for all of the prescribed models, the performance of the MLP-FFA and SVM-FFA hybrid models in the testing phase was meaningfully superior to that of the classical MLP and SVM models.
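The benchmarking step mentioned above rests on the three statistics a Taylor diagram displays. The following is a hedged sketch of those statistics; the "soil temperature" numbers are invented, not data from the Tabriz or Ahar stations.

```python
import math

# Sketch of the error metrics behind a Taylor diagram: correlation
# coefficient, ratio of standard deviations, and centered RMS difference
# between predicted and observed series. Sample values are invented.

def taylor_stats(obs, pred):
    n = len(obs)
    mo, mp = sum(obs) / n, sum(pred) / n
    so = math.sqrt(sum((o - mo) ** 2 for o in obs) / n)
    sp = math.sqrt(sum((p - mp) ** 2 for p in pred) / n)
    corr = sum((o - mo) * (p - mp) for o, p in zip(obs, pred)) / (n * so * sp)
    crmsd = math.sqrt(sum(((p - mp) - (o - mo)) ** 2
                          for o, p in zip(obs, pred)) / n)
    return corr, sp / so, crmsd

obs = [14.2, 15.0, 15.8, 16.5, 17.1, 16.8, 16.0, 15.1]   # invented ST values
pred = [14.0, 15.2, 15.5, 16.9, 17.0, 16.5, 16.2, 15.0]
corr, std_ratio, crmsd = taylor_stats(obs, pred)
```

A point on a Taylor diagram is placed using exactly this triple: the radial distance is the standard-deviation ratio, the azimuth encodes the correlation, and the distance to the reference point is the centered RMS difference.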

Image Analysis Using Human Body Geometry and Size Proportion Science for Action Classification
(2020)

Gestures are one of the basic modes of human communication and are usually used to represent different actions. Automatic recognition of these actions forms the basis for solving more complex problems such as human behavior analysis, video surveillance, event detection, and sign language recognition. Action recognition from images is a challenging task, as key information such as temporal data, object trajectory, and optical flow is not available in still images. Measuring the size of different regions of the human body, i.e., step size, arm span, and the lengths of the arm, forearm, and hand, provides valuable clues for the identification of human actions. In this article, a framework for the classification of human actions is presented in which humans are detected and localized through faster region-convolutional neural networks followed by morphological image processing techniques. Furthermore, geometric features are extracted from the human blob and incorporated into classification rules for six human actions, i.e., standing, walking, single-hand side wave, single-hand top wave, both-hands side wave, and both-hands top wave. The performance of the proposed technique has been evaluated using precision, recall, omission error, and commission error. The proposed technique has been comparatively analyzed in terms of overall accuracy with existing approaches, showing that it performs well in contrast to its counterparts.

Self-healing materials have recently become more popular due to their capability to autonomously and autogenously repair damage in cementitious materials. The concept of self-healing gives the damaged material the ability to recover its stiffness, which distinguishes it from a material that undergoes no healing: once such a material is damaged, it cannot sustain loading due to stiffness degradation. Numerical modeling of self-healing materials is still in its infancy. Numerous experimental studies in the literature describe the self-healing behavior of cementitious materials, but few numerical investigations have been undertaken.
The thesis presents an analytical framework for self-healing and super healing materials based on continuum damage-healing mechanics. Through this framework, we aim to describe the recovery and strengthening of material stiffness and strength. A simple damage-healing law is proposed and applied to concrete. The proposed law is based on a new time-dependent healing variable. The damage-healing model is applied to isotropic concrete at the macroscale under tensile load. Both autonomous and autogenous self-healing mechanisms are simulated under different loading conditions; these two mechanisms are denoted in the present work as coupled and uncoupled self-healing, respectively. In coupled self-healing, we assume that healing occurs simultaneously with damage evolution, while in uncoupled self-healing, we assume that healing occurs when the material is deformed and then subjected to a rest period during which the damage is constant. To describe both mechanisms, a one-dimensional element is subjected to different types of loading history.
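The uncoupled case can be pictured with a toy one-dimensional history: damage is held constant during a rest period while a time-dependent healing variable recovers part of the stiffness. The exponential healing law and all parameters below are assumptions for illustration, not the thesis model.

```python
# Illustrative sketch (not the thesis model) of an uncoupled damage-healing
# rest period: damage d stays constant while a healing variable h evolves as
# h' = (1 - h) / tau, restoring effective stiffness E = E0 * (1 - d*(1 - h)).
# All parameter values are invented.

def simulate(d_load=0.4, tau=5.0, t_rest=20.0, dt=0.1, E0=30.0):
    """Return the effective-stiffness history over the rest period."""
    h = 0.0
    history = [E0 * (1.0 - d_load * (1.0 - h))]
    t = 0.0
    while t < t_rest:
        h += dt * (1.0 - h) / tau     # explicit Euler step of the healing law
        t += dt
        history.append(E0 * (1.0 - d_load * (1.0 - h)))
    return history

E_hist = simulate()   # stiffness rises monotonically from 18 toward 30
```

The history is monotonically increasing and approaches the virgin stiffness E0 as the rest period grows, which is the qualitative signature of the uncoupled mechanism described above.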
In the same context, the derivation of a nonlinear self-healing theory is given, and a comparison of linear and nonlinear damage-healing models is carried out using both coupled and uncoupled self-healing mechanisms. The nonlinear healing theory includes generalized nonlinear and quadratic healing models. The healing efficiency is studied by varying the values of the healing rest period and the parameter describing the material characteristics. In addition, the theoretical formulation of different self-healing variables is presented for both isotropic and anisotropic materials. The healing variables are defined based on the recovery in elastic modulus, shear modulus, Poisson's ratio, and bulk modulus. The evolution of the healing variable calculated based on cross-section, as a function of the healing variable calculated based on elastic stiffness, is presented under both the hypothesis of elastic strain equivalence and that of elastic energy equivalence. The components of the fourth-rank healing tensor are also obtained in the cases of isotropic elasticity, plane stress, and plane strain.
Recent research revealed that self-healing also presents a crucial solution for the strengthening of materials; this new concept has been termed "super healing". Once the stiffness of the material is recovered, further healing can strengthen the material. In the present thesis, a new theory of super healing materials is defined for the isotropic and anisotropic cases using sound mathematical and mechanical principles, which are applied in linear and nonlinear super healing theories. Additionally, the link between the proposed theory and the theory of undamageable materials is outlined. To describe the super healing efficiency in the linear and nonlinear theories, the ratio of effective stress to nominal stress is calculated as a function of the super healing variable. In addition, the hypotheses of elastic strain and elastic energy equivalence are applied. In the same context, a new super healing matrix in plane strain is proposed based on continuum damage-healing mechanics.
In the present work, we also focus on numerical modeling of the impact behavior of reinforced concrete slabs using the commercial finite element package Abaqus/Explicit. Plain and reinforced concrete slabs of 41 MPa unconfined compressive strength are simulated under impact of an ogive-nosed hard projectile. The constitutive material modeling of the concrete and the steel reinforcement bars is performed using the Johnson-Holmquist-2 damage model and the Johnson-Cook plasticity model, respectively. Damage diameters and residual velocities obtained by the numerical model are compared with the experimental results, and the effects of steel reinforcement and projectile diameter are studied.

The purpose of this study is to develop self-contained methods for obtaining smooth meshes which are compatible with isogeometric analysis (IGA). The study contains three main parts. We start by developing a better understanding of shapes and splines through the study of an image-related problem. Then we proceed towards obtaining smooth volumetric meshes of the given voxel-based images. Finally, we treat the smoothness issue on the multi-patch domains with C1 coupling. Following are the highlights of each part.
First, we present a B-spline convolution method for boundary representation of voxel-based images. We adopt a filtering technique to compute the B-spline coefficients and the gradients of the images effectively. We then use the B-spline convolution to develop a non-rigid image registration method. The proposed method is in some sense "isoparametric", in that all the computation is done within the B-spline framework. In particular, updating the images by B-spline composition promotes a smooth transformation map between the images. We show possible medical applications of our method by applying it to the registration of brain images.
Secondly, we develop a self-contained volumetric parametrization method based on the B-spline boundary representation. We aim to convert given voxel-based data to a matching C1 representation with hierarchical cubic splines. The concept of the osculating circle is employed to enhance the geometric approximation; this is achieved with a single template and linear transformations (scaling, translation, and rotation), without the need to solve an optimization problem. Moreover, we use Laplacian smoothing and refinement techniques to avoid irregular meshes and to improve mesh quality. We show with several examples that the method is capable of handling complex 2D and 3D configurations. In particular, we parametrize the 3D Stanford bunny, which contains irregular shapes and voids.
Finally, we propose a Bézier-ordinates approach and a splines approach for C1 coupling. In the first approach, the new basis functions are defined in terms of the Bézier Bernstein polynomials. In the second approach, the new basis is defined as a linear combination of C0 basis functions. The methods are not limited to planar or bilinear mappings; they allow the modeling of solutions to fourth-order partial differential equations (PDEs) on complex geometric domains, provided that the given patches are G1 continuous. Both methods have their advantages: the Bézier approach offers more degrees of freedom, while the spline approach is more computationally efficient. In addition, we propose partial degree elevation to overcome the C1-locking issue caused by over-constraining of the solution space. We demonstrate the potential of the resulting C1 basis functions for applications in IGA that involve fourth-order PDEs, such as those appearing in Kirchhoff-Love shell models, the Cahn-Hilliard phase field application, and biharmonic problems.

In this research, an attempt was made to reduce the dimension of wavelet-ANFIS/ANN (artificial neural network/adaptive neuro-fuzzy inference system) models toward reliable forecasts as well as to decrease computational cost. In this regard, principal component analysis (PCA) was performed on the input time series, decomposed by a discrete wavelet transform, to feed the ANN/ANFIS models. The models were applied to forecasting dissolved oxygen (DO) in rivers, an important variable affecting aquatic life and water quality. The current values of DO, water surface temperature, salinity, and turbidity were considered as input variables to forecast DO three time steps ahead. The results of the study revealed that PCA can be employed as a powerful tool for dimension reduction of the input variables and also to detect their inter-correlation. Results of the PCA-wavelet-ANN models are compared with those obtained from wavelet-ANN models, the former having the advantage of less computational time. For the ANFIS models, PCA is even more beneficial, as it prevents the wavelet-ANFIS models from creating too many rules, which deteriorates their efficiency; moreover, manipulating the wavelet-ANFIS models using PCA leads to a significant decrease in computational time. Finally, it was found that the PCA-wavelet-ANN/ANFIS models can provide reliable forecasts of dissolved oxygen as an important water quality indicator in rivers.
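The PCA step at the heart of the approach above can be sketched with a leading-component extraction by power iteration. The 2-D toy data are invented; in the study the inputs would be the wavelet sub-series of DO, temperature, salinity, and turbidity.

```python
# Minimal PCA sketch: leading principal component of centered data via
# power iteration on the sample covariance matrix. Toy data are invented.

def first_principal_component(samples, n_iter=100):
    n, dim = len(samples), len(samples[0])
    means = [sum(s[j] for s in samples) / n for j in range(dim)]
    centered = [[s[j] - means[j] for j in range(dim)] for s in samples]
    # sample covariance matrix
    cov = [[sum(r[i] * r[j] for r in centered) / (n - 1) for j in range(dim)]
           for i in range(dim)]
    v = [1.0] * dim
    for _ in range(n_iter):                      # power iteration
        w = [sum(cov[i][j] * v[j] for j in range(dim)) for i in range(dim)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return v

# strongly correlated toy variables: second ~ 2 * first
samples = [[1.0, 2.1], [2.0, 3.9], [3.0, 6.2], [4.0, 8.1], [5.0, 9.8]]
pc1 = first_principal_component(samples)
```

Projecting the inputs onto the few leading components found this way is exactly how PCA shrinks the input dimension (and hence the ANFIS rule count) before model training.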

The economic losses from earthquakes tend to hit the national economy considerably; therefore, models that are capable of estimating the vulnerability and losses of future earthquakes are highly consequential for emergency planners with the purpose of risk mitigation. This demands a mass prioritization filtering of structures to identify vulnerable buildings for retrofitting purposes. The application of advanced structural analysis on each building to study the earthquake response is impractical due to complex calculations, long computational time, and exorbitant cost. This highlights the need for a fast, reliable, and rapid method, commonly known as Rapid Visual Screening (RVS). The method serves as a preliminary screening platform, using an optimum number of seismic parameters of the structure and predefined output damage states. In this study, the efficacy of the Machine Learning (ML) application in damage prediction through a Support Vector Machine (SVM) model as the damage classification technique has been investigated. The developed model was trained and examined based on damage data from the 1999 Düzce Earthquake in Turkey, where the building data consist of 22 performance modifiers that have been implemented with supervised machine learning.

Material properties play a critical role in manufacturing durable products. Estimation of precise characteristics at different scales requires complex and expensive experimental measurements. Potentially, computational methods can provide a platform to determine the fundamental properties before the final experiment. Multi-scale computational modeling covers various time and length scales, including the nano, micro, meso, and macro scales. These scales can be modeled separately or in correlation with coarser scales. Depending on the scales of interest, the right selection of multi-scale methods leads to reliable results at affordable computational cost. The present dissertation deals with problems at various length and time scales using computational methods, including density functional theory (DFT), molecular mechanics (MM), molecular dynamics (MD), and finite element (FE) methods.
Physical and chemical interactions at the lower scales determine the coarser-scale properties. Modeling particle interactions and exploring fundamental properties are significant challenges of computational science. Fine-scale models require more computational effort due to the large number of interacting atoms/particles. To deal with this problem and to treat a fine-scale (nano) problem as a coarse-scale (macro) one, we extended an atomic-continuum framework. The discrete atomic models are solved as a continuum problem using the computationally efficient FE method. The MM, or force-field, method approximates a solution on the atomic scale based on a set of assumptions: atoms and bonds are modeled as harmonic oscillators, i.e., a system of masses and springs, and the negative gradient of the potential energy equals the force on each atom. In this way, the total potential energy of each bond, including bonded and non-bonded terms, is simulated as an equivalent structural strain energy. Finally, the chemical nature of the atomic bond is modeled as a piezoelectric beam element that is solved by the FE method.
Exploring novel materials with unique properties is in demand for various industrial applications. During the last decade, many two-dimensional (2D) materials have been synthesized and have shown outstanding properties. Investigating the probable defects arising during the formation/fabrication process and studying their strength under severe service life are critical tasks in exploring performance prospects. We studied various defects, including nanocracks, notches, and point vacancies (Stone-Wales defects), employing MD analysis. Classical MD has been used to simulate a considerable number of molecules at the micro- and meso-scales. Pristine and defective nanosheet structures were considered under uniaxial tensile loading at various temperatures using the open-source LAMMPS code. The results were visualized with the open-source software OVITO and VMD.
Quantum-based first-principles calculations are conducted at the electronic scale and are known as the most accurate ab initio methods; however, they are computationally expensive to apply to large systems. We used density functional theory (DFT) to estimate the mechanical and electrochemical response of the 2D materials. The many-body Schrödinger equation describes the motion and interactions of the solid-state particles: the solid is described as a system of positive nuclei and negative electrons, all electromagnetically interacting with each other, where the wave function describes the quantum state of the set of particles. However, dealing with the 3N coordinates of the electrons and nuclei plus the N spin coordinates of the electrons makes the governing equation unsolvable for more than a few interacting atoms. Assumptions and theories such as the Born-Oppenheimer approximation, the Hartree-Fock mean-field, and the Hohenberg-Kohn theorems are needed to treat this equation. First, the Born-Oppenheimer approximation reduces it to the electronic coordinates only. Then Kohn and Sham, building on the Hartree-Fock and Hohenberg-Kohn theories, assumed an equivalent fictitious system of non-interacting electrons expressed as a functional of the electron density, such that its ground-state energy equals that of the set of interacting electrons. The exchange-correlation energy functional is responsible for satisfying the equivalence between the two systems. Its exact form is not known, but there are widely used methods to derive functionals, such as the local density approximation (LDA), the generalized gradient approximation (GGA), and hybrid functionals (e.g., B3LYP). In our study, DFT is performed using the VASP code within the GGA/PBE approximation, and visualization/post-processing of the results is realized via the open-source software VESTA.
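The Kohn-Sham construction described above can be summarized in one equation. This is the textbook form, stated here for orientation (it is not reproduced from the dissertation):

```latex
\left[-\frac{\hbar^{2}}{2m}\nabla^{2}
      + v_{\mathrm{ext}}(\mathbf r)
      + v_{\mathrm H}[n](\mathbf r)
      + v_{\mathrm{xc}}[n](\mathbf r)\right]\phi_i(\mathbf r)
  = \varepsilon_i\,\phi_i(\mathbf r),
\qquad
n(\mathbf r) = \sum_{i}^{\mathrm{occ}}\lvert\phi_i(\mathbf r)\rvert^{2},
```

where the external, Hartree, and exchange-correlation potentials act on the fictitious non-interacting orbitals, and the LDA, GGA, or hybrid choice mentioned above fixes the unknown functional $v_{\mathrm{xc}}[n]$.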
Extensive DFT calculations are conducted to assess the prospects of 2D nanomaterials as anode/cathode electrode materials for batteries. The performance of metal-ion batteries strongly depends on the design of novel electrode materials. Two-dimensional (2D) materials have attracted remarkable interest for use as electrodes in battery cells due to their excellent properties. Desirable battery energy storage systems (BESS) must offer high energy density, safe operation, and efficient production costs. Batteries are used in electronic devices and provide a solution to environmental issues by storing the intermittent energy generated from renewable wind or solar power plants. Therefore, exploring optimal electrode materials can improve storage capacity and charging/discharging rates, leading to the design of advanced batteries.
Our results across multiple scales highlight not only the efficiency of the proposed and employed methods but also the promising prospects of recently synthesized nanomaterials and their applications as anode materials. First, a novel approach was developed for modeling a 1D nanotube as a continuum piezoelectric beam element; the results converged and matched closely with those from experiments and other more complex models. Then the mechanical properties of nanosheets were estimated, and the failure mechanism results provide a useful guide for prospective applications. Our results give a comprehensive and useful picture of the mechanical properties of nanosheets with and without defects. Finally, the mechanical and electrochemical properties of several 2D nanomaterials are explored for the first time; their performance as anode materials indicates high potential for manufacturing super-stretchable and ultrahigh-capacity battery energy storage systems (BESS), exhibiting better performance in comparison to the available commercial anode materials.

This study performs a reliability analysis to address the mechanical behaviour issues existing in the current structural design of fabric structures. Purely predictive material models are highly desirable to facilitate an optimized design scheme and to significantly reduce the time and cost of design-stage activities such as experimental characterization.
The present study examines three major tasks: a) single-objective optimization, b) sensitivity analyses, and c) multi-objective optimization of proposed weave structures for woven fabric composites. For the single-objective optimization task, the first goal is to optimize the elastic properties of the proposed complex weave structure on a unit-cell basis using periodic boundary conditions.
We predict the geometric characteristics related to skewness of woven fabric composites via an Evolutionary Algorithm (EA) and a parametric study. We also demonstrate the effect of complex weave structures on the fray tendency in woven fabric composites via a tightness evaluation. We utilize a procedure that does not require a numerical averaging process for evaluating the elastic properties of woven fabric composites. The fray tendency and skewness of woven fabrics depend upon the behaviour of the floats, which is related to the weave factor. The results of this study may suggest a broader view for further research into the effects of complex weave structures, or may provide an alternative to the fray and skewness problems of current weave structures in woven fabric composites.
A comprehensive study is developed on the complex weave structure model, adopting the dry woven fabric of the most promising pattern from the single-objective optimization and incorporating the uncertainty parameters of woven fabric composites. The comprehensive study covers regression-based and variance-based sensitivity analyses. The goal of the second task is to introduce the fabric uncertainty parameters and elaborate how they can be incorporated into finite element models of macroscopic material parameters, such as the elastic modulus and shear modulus of dry woven fabric subjected to uniaxial and biaxial deformations. Significant correlations in the study would indicate the need for a thorough investigation of woven fabric composites under uncertainty parameters. The approach described here could serve as an alternative way to identify effective material properties without prolonged time consumption and expensive experimental tests.
The last part focuses on a hierarchical stochastic multi-scale optimization approach (fine-scale and coarse-scale optimization) under geometrical uncertainty parameters for hybrid composites with complex weave structure. The fine-scale optimization determines the best lamina pattern maximizing its macroscopic elastic properties, conducted by EA under the following uncertain mesoscopic parameters: yarn spacing, yarn height, yarn width, and misalignment of the yarn angle. The coarse-scale optimization optimizes the stacking sequence of a symmetric hybrid laminated composite plate with uncertain mesoscopic parameters by employing the Ant Colony Algorithm (ACO). The objective of the coarse-scale optimization is to minimize the cost (C) and weight (W) of the hybrid laminated composite plate, with the fundamental frequency and the buckling load factor as design constraints.
Based on the uncertainty criteria of the design parameters, the appropriate variation required for the structural design standards can be evaluated using the reliability tool, and then an optimized design decision in consideration of cost can be subsequently determined.

The latest earthquakes have proven that several existing buildings, particularly in developing countries, are not secure against earthquake damage. A variety of statistical and machine-learning approaches have been proposed to identify vulnerable buildings for the prioritization of retrofitting. The present work investigates earthquake susceptibility through the combination of six building performance variables that can be used to obtain an optimal prediction of the damage state of reinforced concrete buildings using an artificial neural network (ANN). In this regard, a multi-layer perceptron network is trained and optimized using a database of 484 damaged buildings from the Düzce earthquake in Turkey. The results demonstrate the feasibility and effectiveness of the selected ANN approach for classifying concrete structural damage, which can be used as a preliminary assessment technique to identify vulnerable buildings in disaster risk-management programs.

In this paper, an artificial neural network is implemented to predict the thermal conductivity ratio of TiO2-Al2O3/water nanofluid. TiO2-Al2O3/water, as an innovative type of nanofluid, was synthesized by the sol-gel method. The results indicated that 1.5 vol.% of nanoparticles enhanced the thermal conductivity by up to 25%. It was shown that the heat transfer coefficient increased linearly with nanoparticle concentration, but its variation with temperature was nonlinear. It should be noted that an increase in concentration may cause the particles to agglomerate, reducing the thermal conductivity. An increase in temperature also increases the thermal conductivity, due to increased Brownian motion and collision of particles. To predict the thermal conductivity of TiO2-Al2O3/water nanofluid as a function of volumetric concentration and temperature, SOM (self-organizing map) and BP-LM (Back-Propagation Levenberg-Marquardt) algorithms were used. Based on the results obtained, these algorithms can be considered an exceptional tool for predicting thermal conductivity. Additionally, the correlation coefficient values were equal to 0.938 and 0.98 when implementing the SOM and BP-LM algorithms, respectively, which is highly acceptable.

In the first part of this research, the utilization of tuned mass dampers in the vibration control of tall buildings during earthquake excitations is studied. The main issues, such as optimizing the parameters of the dampers and studying the effects of the frequency content of the target earthquakes, are addressed. The non-dominated sorting genetic algorithm is improved by upgrading its genetic operators and is utilized to develop a framework for determining the optimum placement and parameters of dampers in tall buildings. A case study is presented in which the optimal placement and properties of dampers are determined for a model of a tall building under different earthquake excitations through computer simulations.
In the second part, a novel framework for the brain learning-based intelligent seismic control of smart structures is developed. In this approach, a deep neural network learns how to improve structural responses during earthquake excitations using feedback control. A reinforcement learning method is improved and utilized to develop a framework for training the deep neural network as an intelligent controller. The efficiency of the developed framework is examined through two case studies: a single-degree-of-freedom system and a high-rise building under different earthquake excitation records. The results show that the controller gradually develops an optimum control policy to reduce the vibrations of a structure under earthquake excitation through a cyclical process of actions and observations. It is shown that the controller efficiently improves the structural responses under new earthquake excitations for which it was not trained. Moreover, the controller has a stable performance under uncertainties.
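The action-observation learning loop described above can be illustrated, in greatly simplified form, with tabular Q-learning on a single-degree-of-freedom oscillator. The dynamics, coarse state discretization, force levels, and reward below are illustrative assumptions, not the thesis's deep-network controller or its building models.

```python
import numpy as np

rng = np.random.default_rng(1)

# Lightly damped 1-DOF structure; the controller applies a bounded force
# and learns, by trial and error, to damp the free-vibration response.
dt, w, zeta, fmax = 0.02, 2*np.pi, 0.02, 8.0
actions = np.array([-fmax, 0.0, fmax])

def step(x, v, u):
    """Semi-implicit Euler step of the damped oscillator with control force u."""
    v = v + dt * (-w*w*x - 2*zeta*w*v + u)
    return x + dt*v, v

def state(x, v):
    """Coarse 4-state discretization: signs of displacement and velocity."""
    return int(x > 0)*2 + int(v > 0)

Q = np.zeros((4, 3))
eps, lr, gamma = 0.2, 0.1, 0.95
for _ in range(300):                       # episodes of trial-and-error control
    x, v = rng.uniform(-1, 1, 2)
    for _ in range(200):
        s = state(x, v)
        a = int(rng.integers(3)) if rng.random() < eps else int(Q[s].argmax())
        x, v = step(x, v, actions[a])
        r = -(x*x + 0.01*v*v)              # penalize structural response
        Q[s, a] += lr * (r + gamma*Q[state(x, v)].max() - Q[s, a])

def rollout(policy):
    """Accumulated squared displacement from a unit initial displacement."""
    x, v, cost = 1.0, 0.0, 0.0
    for _ in range(500):
        x, v = step(x, v, policy(x, v))
        cost += x*x
    return cost

uncontrolled = rollout(lambda x, v: 0.0)
controlled = rollout(lambda x, v: actions[int(Q[state(x, v)].argmax())])
```

With enough episodes the greedy policy learns to apply force opposite to the velocity, which extracts energy from the oscillation, so the controlled response cost falls below the uncontrolled one.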

Pressure fluctuations beneath hydraulic jumps can endanger the stability of stilling basins. This paper deals with the mathematical modeling of the results of laboratory-scale experiments to estimate the extreme pressures. Experiments were carried out on a smooth stilling basin beneath free hydraulic jumps downstream of an Ogee spillway. From the probability distribution of measured instantaneous pressures, pressures with different probabilities could be determined. It was verified that the maximum pressure fluctuations and the negative pressures occur near the spillway toe, while the minimum pressure fluctuations are located downstream of the hydraulic jumps. It was possible to assess the cumulative curves of pressure data related to the characteristic points along the basin and to different Froude numbers. To benchmark the results, the dimensionless forms of the statistical parameters, including the mean pressures (P*m), the standard deviations of the pressure fluctuations (σ*X), pressures with different non-exceedance probabilities (P*k%), and the statistical coefficient of the probability distribution (Nk%), were assessed. It was found that an existing method can be used to interpret the present data, and the pressure distribution in similar conditions, by using new second-order fractional relationships for σ*X and Nk%. The values of the Nk% coefficient indicated a single mean value for each probability.

Along with environmental pollution, urban planning has been connected to public health. Research indicates that the quality of built environments plays an important role in reducing mental disorders and improving overall health. The structure and shape of the city are considered among the factors influencing happiness and health in urban communities and the type of daily activities of citizens. The aim of this study was to promote physical activity within the main structure of the city via urban design, such that the form and morphology of the city encourage citizens to move around and be physically active. Based on a literature review, functional, physical, cultural-social, and perceptual-visual features are regarded as the most important and effective criteria for increasing physical activity in urban spaces. The environmental quality of urban spaces and their role in the physical activities of citizens were assessed using a questionnaire, the analytical network process (ANP), and structural equation modeling. Further, the space syntax method was utilized to evaluate the role of the spatial integration of urban spaces in improving physical activity. Based on the results, consideration of functional diversity, spatial flexibility and integration, security, and the aesthetic and visual quality of urban spaces plays an important role in improving the physical health of citizens. Furthermore, more physical activity, including motivation for walking and a sense of public health and happiness, was observed in streets with higher linkage and space syntax indexes with their surrounding texture.

Cooling Performance of a Novel Circulatory Flow Concentric Multi-Channel Heat Sink with Nanofluids
(2020)

Heat rejection from electronic devices such as processors necessitates a high heat removal rate. The present study focuses on a liquid-cooled novel heat sink geometry made from four channels (width 4 mm and depth 3.5 mm) configured in a concentric shape with alternate flow passages (slots with a 3 mm gap). In this study, the cooling performance of the heat sink was tested under simulated controlled conditions. The lower bottom surface of the heat sink was heated under a constant heat flux condition based on dissipated powers of 50 W and 70 W. The computations were carried out for different volume fractions of nanoparticles, from 0.5% to 5%, with water as the base fluid, at flow rates of 30 to 180 mL/min. The results showed a higher rate of heat rejection from the nanofluid-cooled heat sink compared with water. The enhancement in performance was analyzed with the help of the difference between the nanofluid outlet temperature and the water outlet temperature under similar operating conditions. The enhancement was ~2% for a 0.5% volume fraction of nanofluid and ~17% for a 5% volume fraction.

Rapid Visual Screening (RVS) is a procedure that estimates structural scores for buildings and prioritizes their retrofit and upgrade requirements. Despite the speed and simplicity of RVS, many of the collected parameters are non-commensurable and include subjectivity due to visual observations. This might cause uncertainties in the evaluation, which motivates the use of a fuzzy-based method. This study proposes a novel RVS methodology based on the interval type-2 fuzzy logic system (IT2FLS) to set the priority of vulnerable buildings to undergo detailed assessment while covering uncertainties and minimizing their effects during evaluation. The proposed method estimates the vulnerability of a building, in terms of a damage index, considering the number of stories, age of the building, plan irregularity, vertical irregularity, building quality, and peak ground velocity as inputs, with a single output variable. The applicability of the proposed method has been investigated using a post-earthquake damage database of reinforced concrete buildings from the Bingöl and Düzce earthquakes in Turkey.
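For intuition only, the rule-based mapping from screening inputs to a damage index can be sketched with an ordinary type-1 Sugeno-style system using three of the paper's six inputs. The IT2FLS of the paper additionally carries interval uncertainty on the membership grades; all ranges, rules, and consequent weights below are hypothetical.

```python
# Type-1 simplification of the fuzzy damage-index idea (IT2FLS would use
# interval-valued memberships instead of the crisp grades computed here).

def tri(x, a, b, c):
    """Triangular membership function on [a, c] peaking at b (a < b < c)."""
    return max(0.0, min((x - a) / (b - a) if x < b else (c - x) / (c - b), 1.0))

def damage_index(stories, age, pgv):
    # Normalize inputs to [0, 1] using assumed (hypothetical) ranges.
    s = min(stories / 10.0, 1.0)   # number of stories
    g = min(age / 50.0, 1.0)       # building age in years
    p = min(pgv / 60.0, 1.0)       # peak ground velocity in cm/s
    low = lambda x: tri(x, -0.5, 0.0, 1.0)
    high = lambda x: tri(x, 0.0, 1.0, 1.5)
    # Rule base: firing strength (min t-norm) -> crisp consequent (order-0 Sugeno).
    rules = [
        (min(low(s), low(g), low(p)), 0.1),    # small/new/weak shaking -> light
        (min(high(s), low(g), low(p)), 0.3),
        (min(low(s), high(g), high(p)), 0.7),
        (min(high(s), high(g), high(p)), 0.9), # tall/old/strong shaking -> severe
    ]
    w = sum(f for f, _ in rules)
    # Weighted average defuzzification.
    return sum(f * c for f, c in rules) / w if w > 0 else 0.5
```

A tall, old building under strong shaking, e.g. `damage_index(8, 40, 50)`, then receives a higher index than a short, new one under weak shaking, which is the ranking behavior the screening procedure needs.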

Earthquakes are among the most devastating natural disasters, causing severe economic, environmental, and social destruction. Earthquake safety assessment and building hazard monitoring can contribute greatly to urban sustainability through the identification of, and insight into, optimum materials and structures. While the vulnerability of structures mainly depends on structural resistance, the safety assessment of buildings can be highly challenging. In this paper, we consider the Rapid Visual Screening (RVS) method, a qualitative procedure for estimating structural scores for buildings, suitable for medium- to high-seismicity cases. This paper presents an overview of the common RVS methods, i.e., FEMA P-154, IITK-GGSDMA, and EMPI. To examine accuracy and validity, a practical comparison is performed between their assessments and the observed damage of reinforced concrete buildings from a street survey in the Bingöl region, Turkey, after the 1 May 2003 earthquake. The results demonstrate that RVS methods are a vital tool for preliminary damage estimation. Furthermore, the comparative analysis showed that FEMA P-154 produces an assessment that overestimates damage states and is not economically viable, while EMPI and IITK-GGSDMA provide more accurate and practical estimations, respectively.

In this study, the machine learning methods of artificial neural networks (ANNs), least squares support vector machines (LSSVM), and neuro-fuzzy systems are used to develop prediction models for the thermal performance of a photovoltaic-thermal solar collector (PV/T). In the proposed models, the inlet temperature, flow rate, heat, solar radiation, and sun heat have been considered as the input variables. The data set was extracted through experimental measurements from a novel solar collector system. Different analyses were performed to examine the credibility of the introduced models and evaluate their performance. The proposed LSSVM model outperformed the ANFIS and ANN models. The LSSVM model is reported to be suitable when laboratory measurements are costly and time-consuming, or when obtaining such values requires sophisticated interpretation.

A novel combination of the ant colony optimization algorithm (ACO) and computational fluid dynamics (CFD) data is proposed for modeling multiphase chemical reactors. The proposed intelligent model presents a probabilistic computational strategy for predicting various levels of three-dimensional bubble column reactor (BCR) flow. The results demonstrate an enhanced communication between the ant colony prediction and the CFD data in different sections of the BCR.

The assessment of wind-induced vibrations is considered vital for the design of long-span bridges. The aim of this research is to develop a methodological framework for robust and efficient prediction strategies for complex aerodynamic phenomena using hybrid models that employ numerical analyses as well as meta-models. Here, an approach to predict motion-induced aerodynamic forces is developed using an artificial neural network (ANN). The ANN is implemented in the classical formulation and trained with a comprehensive dataset obtained from computational fluid dynamics forced-vibration simulations. The input to the ANN is the response time histories of a bridge section, whereas the output is the motion-induced forces. The developed ANN has been tested on training and test data of different cross-section geometries and provides promising predictions. The prediction is also performed for an ambient response input with multiple frequencies. Moreover, the trained ANN for the aerodynamic forcing is coupled with the structural model to perform fully coupled fluid-structure interaction analysis to determine the aeroelastic instability limit. The sensitivity of the ANN parameters to the model prediction quality and efficiency has also been highlighted. The proposed methodology has wide application in the analysis and design of long-span bridges.

The K-nearest neighbors (KNN) machine learning algorithm is a well-known non-parametric classification method. However, like other traditional data mining methods, applying it to big data comes with computational challenges. Indeed, KNN determines the class of a new sample based on the classes of its nearest neighbors; however, identifying the neighbors in a large amount of data imposes a large computational cost, so that it is no longer applicable on a single computing machine. One of the proposed techniques to make classification methods applicable to large datasets is pruning. LC-KNN is an improved KNN method which first clusters the data into smaller partitions using K-means clustering and then, for each new sample, applies KNN on the partition whose center is nearest. However, because the clusters have different shapes and densities, selecting the appropriate cluster is a challenge. In this paper, an approach is proposed to improve the pruning phase of the LC-KNN method by taking these factors into account. The proposed approach helps to choose a more appropriate cluster in which to look for the neighbors, thus increasing the classification accuracy. The performance of the proposed approach is evaluated on different real datasets. The experimental results show the effectiveness of the proposed approach and its higher classification accuracy and lower time cost in comparison with other recent relevant methods.
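The baseline LC-KNN pipeline (k-means partitioning, then KNN restricted to the cluster with the nearest center) can be sketched as follows. The toy data and the farthest-point seeding are illustrative choices; the paper's actual contribution, shape- and density-aware cluster selection, is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy labeled data: three well-separated blobs standing in for a big dataset.
centers = np.array([[0., 0.], [6., 0.], [0., 6.]])
X = np.vstack([c + rng.normal(scale=0.6, size=(100, 2)) for c in centers])
y = np.repeat([0, 1, 2], 100)

def kmeans(X, k, iters=20):
    """Plain Lloyd k-means with farthest-point seeding (the pruning step)."""
    C = [X[0]]
    for _ in range(k - 1):
        d = np.min(((X[:, None] - np.array(C)[None]) ** 2).sum(-1), axis=1)
        C.append(X[np.argmax(d)])          # seed far from existing centers
    C = np.array(C)
    for _ in range(iters):
        lab = np.argmin(((X[:, None] - C[None]) ** 2).sum(-1), axis=1)
        C = np.array([X[lab == j].mean(0) for j in range(k)])
    return C, lab

C, lab = kmeans(X, 3)

def lc_knn_predict(q, k_nn=5):
    """KNN searched only inside the cluster whose center is nearest to q."""
    j = int(np.argmin(((C - q) ** 2).sum(-1)))
    P, t = X[lab == j], y[lab == j]
    nn = np.argsort(((P - q) ** 2).sum(-1))[:k_nn]
    vals, counts = np.unique(t[nn], return_counts=True)
    return int(vals[counts.argmax()])      # majority vote among neighbors
```

The speed-up comes from the neighbor search touching only one partition of roughly `n/k` points; the quality of the answer then hinges on picking the right partition, which is exactly where cluster shape and density matter.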

Rechargeable lithium ion batteries (LIBs) play a very significant role in power supply and storage. In recent decades, LIBs have caught tremendous attention in mobile communication, portable electronics, and electric vehicles. Furthermore, global warming has become a worldwide issue due to the ongoing production of greenhouse gases, motivating solutions such as renewable sources of energy. Solar and wind energies are the most important renewable energy sources. As the technology progresses, they will require batteries to store the produced power and balance power generation against consumption. Nowadays, rechargeable batteries such as LIBs are considered one of the best solutions. They provide high specific energy and high rate performance, while their rate of self-discharge is low.
The performance of LIBs can be improved through the modification of battery characteristics. The size of the solid particles in the electrodes can impact the specific energy and the cyclability of batteries. It can improve the amount of lithium content in the electrode, which is a vital parameter for the capacity and capability of a battery. There exist different sources of heat generation in LIBs, such as the heat produced during electrochemical reactions and the internal resistance of the battery. The size of the electrode's electroactive particles can directly affect the heat produced in the battery. It will be shown that a smaller solid-particle size enhances the thermal characteristics of LIBs.
Thermal issues such as overheating, temperature maldistribution in the battery, and thermal runaway have confined the applications of LIBs. Such thermal challenges reduce the life cycle of LIBs. As well, they may lead to dangerous conditions such as fire or even explosion. However, recent advances in the fabrication of advanced materials such as graphene and carbon nanotubes, with extraordinary thermal conductivity and electrical properties, offer new opportunities to enhance their performance. Since experimental works are expensive, our objective is to use computational methods to investigate the thermal issues in LIBs. Dissipation of the heat produced in the battery can improve the cyclability and specific capacity of LIBs. In real applications, packs consisting of several battery cells are used as the power source. Therefore, it is worth investigating the thermal characteristics of battery packs under cycles of charging/discharging operation at different applied current rates. To remove the produced heat, batteries can be surrounded by materials with high thermal conductivity. Paraffin wax absorbs large amounts of energy, since it has a high latent heat; this absorption occurs at a nearly constant temperature during the phase change. As well, the thermal conductivity of paraffin can be magnified with nano-materials such as graphene, CNTs, and fullerene to form a nano-composite medium. Improving the thermal conductivity of LIBs increases the heat dissipation from batteries, which is a vital issue in battery thermal management systems. The application of two-dimensional (2D) materials has been on the rise since the exfoliation of graphene from bulk graphite. 2D materials are single layers on the order of nanometers that show superior thermal, mechanical, and optoelectronic properties. They are potential candidates for energy storage and supply, particularly as electrode materials in lithium ion batteries.
The high thermal conductivity of graphene and graphene-like materials can play a significant role in the thermal management of batteries. However, defects always exist in nano-materials, since there is no ideal fabrication process. Among the most important defects are nano-cracks, which can dramatically weaken the mechanical properties of materials. Newly synthesized crystalline carbon nitride with the stoichiometry C3N has attracted much attention due to its extraordinary mechanical and thermal properties. Another nano-material is phagraphene, which shows anisotropic mechanical characteristics that are ideal for the production of nanocomposites. It shows ductile fracture behavior when subjected to uniaxial loading. It is worthwhile to investigate their thermo-mechanical properties in both pristine and defective states. We hope that the findings of this work will not only be useful for experimental and theoretical research but also help in the design of advanced electrodes for LIBs.

Hydrological drought forecasting plays a substantial role in water resources management, as hydrological drought strongly affects water allocation and hydropower generation. In this research, short-term hydrological drought was forecasted based on the hybridization of novel nature-inspired optimization algorithms with artificial neural networks (ANN). For this purpose, the Standardized Hydrological Drought Index (SHDI) and the Standardized Precipitation Index (SPI) were calculated for one, three, and six aggregated months. Then, three states were proposed for SHDI forecasting, and 36 input-output combinations were extracted based on cross-correlation analysis. In the next step, newly proposed optimization algorithms, including the Grasshopper Optimization Algorithm (GOA), the Salp Swarm Algorithm (SSA), Biogeography-Based Optimization (BBO), and Particle Swarm Optimization (PSO), were hybridized with the ANN and utilized for SHDI forecasting, and the results were compared to those of the conventional ANN. The results indicated that the hybridized models outperformed the conventional ANN, with PSO performing better than the other optimization algorithms. The best models forecasted SHDI1 with R2 = 0.68 and RMSE = 0.58, SHDI3 with R2 = 0.81 and RMSE = 0.45, and SHDI6 with R2 = 0.82 and RMSE = 0.40.
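The ANN-plus-metaheuristic hybridization can be sketched by letting PSO search the weight vector of a small network instead of training it by backpropagation. The toy target series, swarm settings, and network size below are illustrative assumptions, not the study's SHDI/SPI models.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D series standing in for the drought-index regression problem.
X = np.linspace(-1, 1, 64)[:, None]
y = np.sin(3 * X).ravel()

H = 8                          # hidden units; all weights flattened into one vector
dim = 1*H + H + H + 1          # W1 (1xH), b1 (H), W2 (Hx1), b2 (scalar)

def forward(theta, X):
    W1 = theta[:H].reshape(1, H); b1 = theta[H:2*H]
    W2 = theta[2*H:3*H].reshape(H, 1); b2 = theta[3*H]
    return (np.tanh(X @ W1 + b1) @ W2).ravel() + b2

def mse(theta):
    return ((forward(theta, X) - y) ** 2).mean()

# Standard PSO over the network's weight vector (the "hybridization" step).
n_part, iters = 30, 300
pos = rng.uniform(-1, 1, (n_part, dim))
vel = np.zeros((n_part, dim))
pbest = pos.copy(); pbest_f = np.array([mse(p) for p in pos])
g = pbest[pbest_f.argmin()].copy()
w_in, c1, c2 = 0.7, 1.5, 1.5
for _ in range(iters):
    r1, r2 = rng.random((2, n_part, dim))
    vel = w_in*vel + c1*r1*(pbest - pos) + c2*r2*(g - pos)
    pos = pos + vel
    f = np.array([mse(p) for p in pos])
    better = f < pbest_f
    pbest[better], pbest_f[better] = pos[better], f[better]
    g = pbest[pbest_f.argmin()].copy()   # global best weight vector
```

Swapping PSO for GOA, SSA, or BBO only changes the position/velocity update rule; the fitness function (network error) stays the same, which is what makes this family of hybrids easy to compare.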

Classical Internet of Things routing and wireless sensor networks can provide more precise monitoring of the covered area due to the higher number of utilized nodes. Because of the limitations of the shared transfer medium, many nodes in the network are prone to collisions in simultaneous transmissions. Medium access control protocols are usually more practical in networks with low traffic that are not subjected to external noise from adjacent frequencies. There are preventive, detection, and control solutions for congestion management in the network, all of which are the focus of this study. In the congestion prevention phase, the proposed method chooses the next step of the path using a fuzzy decision-making system to distribute network traffic via optimal paths. In the congestion detection phase, a dynamic approach to queue management was designed to detect congestion in the least amount of time and prevent collisions. In the congestion control phase, the back-pressure method was used, based on the quality of the queue, to decrease the probability of linking into the pathway from the pre-congested node. The main goals of this study are to balance energy consumption among network nodes, reduce the rate of lost packets, and increase the quality of service in routing. Simulation results proved that the proposed Congestion Control Fuzzy Decision Making (CCFDM) method was more capable of improving routing parameters than recent algorithms.

Energy‐Efficient Method for Wireless Sensor Networks Low‐Power Radio Operation in Internet of Things
(2020)

Radio operation in wireless sensor networks (WSN) in Internet of Things (IoT) applications is the most common source of power consumption. Consequently, recognizing and controlling the factors affecting radio operation can be valuable for managing node power consumption. Among the essential factors affecting radio operation, the time spent checking the radio is of utmost importance for monitoring power consumption, as it can lead to false wake-ups or idle listening in radio duty cycles. ContikiMAC is a low-power radio duty-cycling protocol in Contiki OS that uses a periodic clear channel assessment (CCA) in wake-up mode to check the radio status. This paper presents a detailed analysis of the radio wake-up time factors of ContikiMAC. Furthermore, we propose a lightweight CCA (LW-CCA) as an extension to ContikiMAC that reduces the radio duty cycles spent in false wake-ups and idle listening through a dynamic received signal strength indicator (RSSI) status check time. Simulation results in the Cooja simulator show that LW-CCA reduces node energy consumption by about 8% while maintaining up to 99% of the packet delivery rate (PDR).

Coronary Artery Disease Diagnosis: Ranking the Significant Features Using a Random Trees Model
(2020)

Heart disease is one of the most common diseases among middle-aged people. Among the vast number of heart diseases, coronary artery disease (CAD) is a common cardiovascular disease with a high death rate. The most popular tool for diagnosing CAD is medical imaging, e.g., angiography. However, angiography is known for being costly and associated with a number of side effects. Hence, the purpose of this study is to increase the accuracy of coronary heart disease diagnosis by selecting significant predictive features in order of their ranking. In this study, we propose an integrated method using machine learning: random trees (RTs), the C5.0 decision tree, support vector machines (SVM), and the Chi-squared automatic interaction detection (CHAID) decision tree. The proposed method shows promising results, and the study confirms that the RTs model outperforms the other models.

The longitudinal dispersion coefficient (LDC) plays an important role in modeling the transport of pollutants and sediment in natural rivers. As a result of transport processes, the concentration of pollutants changes along the river. Various studies have been conducted to provide simple equations for estimating the LDC. In this study, machine learning methods, namely support vector regression, Gaussian process regression, the M5 model tree (M5P) and random forest, together with multiple linear regression, were examined for predicting the LDC in natural streams. Data sets from 60 rivers around the world with different hydraulic and geometric features were gathered to develop models for LDC estimation. Statistical criteria, including the correlation coefficient (CC), root mean squared error (RMSE) and mean absolute error (MAE), were used to scrutinize the models. The LDC values estimated by these models were compared with the corresponding results of common empirical models. A Taylor chart was used to evaluate the models, and the results showed that among the machine learning models, M5P had the best performance, with a CC of 0.823, an RMSE of 454.9 and an MAE of 380.9. The model of Sahay and Dutta, with a CC of 0.795, an RMSE of 460.7 and an MAE of 306.1, gave more precise results than the other empirical models. The main advantage of M5P models is their ability to provide practical formulae. In conclusion, the results proved that the developed M5P model with its simple formulation was superior to the other machine learning and empirical models; therefore, it can be used as a proper tool for estimating the LDC in rivers.

Temporary changes in precipitation may lead to sustained and severe drought or massive floods in different parts of the world. Knowing the variation in precipitation can effectively help water resources decision-makers in water resources management. Large-scale circulation drivers have a considerable impact on precipitation in different parts of the world. In this research, the impact of the El Niño-Southern Oscillation (ENSO), the Pacific Decadal Oscillation (PDO), and the North Atlantic Oscillation (NAO) on seasonal precipitation over Iran was investigated. For this purpose, 103 synoptic stations with at least 30 years of data were utilized. The Spearman correlation coefficient between the indices in the previous 12 months and seasonal precipitation was calculated, and the meaningful correlations were extracted. Then, the month in which each of these indices has the highest correlation with seasonal precipitation was determined. Finally, the overall increase or decrease in seasonal precipitation due to each of these indices was calculated. The results indicate that the Southern Oscillation Index (SOI), the NAO, and the PDO have the greatest impact on seasonal precipitation, in that order. Additionally, these indices have the highest impact on precipitation in winter, autumn, spring, and summer, respectively. The SOI has a different impact on winter precipitation compared to the PDO and NAO, while in the other seasons each index has its own particular impact on seasonal precipitation. Generally, all indices in different phases may decrease seasonal precipitation by up to 100%. However, seasonal precipitation may also increase by more than 100% in different seasons due to the impact of these indices. The results of this study can be used effectively in water resources management and especially in dam operation.

Due to the importance of identifying crop cultivars, the advancement of accurate assessment methods is considered essential. The existing methods for identifying rice cultivars are mainly time-consuming, costly, and destructive; therefore, the development of novel methods is highly beneficial. The aim of the present research is to classify common rice cultivars in Iran based on color, morphologic, and texture properties using artificial intelligence (AI) methods. In doing so, digital images of 13 rice cultivars in Iran, in the three forms of paddy, brown, and white rice, were analyzed through pre-processing and segmentation using MATLAB. Ninety-two features, including 60 color, 14 morphologic, and 18 texture properties, were extracted for each rice cultivar. In the next step, the normality of the data was evaluated, and the significance of differences between cultivars across all features was studied using analysis of variance. In addition, the least significant difference (LSD) test was performed to obtain a more accurate comparison between cultivars. To reduce the data dimensions and focus on the most effective components, principal component analysis (PCA) was employed. Accordingly, the accuracy of rice cultivar separation calculated using discriminant analysis (DA) was 89.2%, 87.7%, and 83.1% for paddy, brown rice, and white rice, respectively. To identify and classify the desired cultivars, a multilayer perceptron neural network was implemented based on the most effective components. The results showed 100% accuracy of the network in identifying and classifying all the mentioned rice cultivars. Hence, it is concluded that an integrated method of image processing and pattern recognition, such as statistical classification and artificial neural networks, can be used for the identification and classification of rice cultivars.
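The dimensionality-reduction-plus-discriminant step can be sketched as PCA via SVD followed by a nearest-centroid rule in component space. The synthetic 92-feature data below are a hypothetical stand-in for the measured color/morphology/texture properties, and nearest-centroid is a deliberate simplification of the paper's discriminant analysis.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the 92 color/morphology/texture features of 3 cultivars.
n_per, d = 40, 92
means = rng.normal(scale=3.0, size=(3, d))
X = np.vstack([m + rng.normal(size=(n_per, d)) for m in means])
y = np.repeat([0, 1, 2], n_per)

# PCA: center the data, take the SVD, keep the leading components.
mu = X.mean(0)
U, S, Vt = np.linalg.svd(X - mu, full_matrices=False)
n_pc = 5
Z = (X - mu) @ Vt[:n_pc].T               # scores on the leading components

# Minimal stand-in for discriminant analysis: nearest class centroid in PC space.
centroids = np.array([Z[y == c].mean(0) for c in range(3)])
pred = np.argmin(((Z[:, None] - centroids[None]) ** 2).sum(-1), axis=1)
acc = (pred == y).mean()
```

The same `Z` scores would then feed the multilayer perceptron stage described above, so the network sees a handful of informative components instead of all 92 raw features.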

Calculating the solubility of the hydrocarbon components of natural gases is known as one of the important issues for operational work in petroleum and chemical engineering. In this work, a novel solubility estimation tool has been proposed for hydrocarbon gases, including methane, ethane, propane, and butane, in aqueous electrolyte solutions based on the extreme learning machine (ELM) algorithm. Comparing the ELM outputs with a comprehensive real databank of 1175 solubility points yielded R-squared values of 0.985 and 0.987 for the training and testing phases, respectively. Furthermore, the visual comparison of estimated and actual hydrocarbon solubility confirmed the ability of the proposed solubility model. Additionally, a sensitivity analysis was performed on the input variables of the model to identify their impacts on hydrocarbon solubility. Such a comprehensive and reliable study can help engineers and scientists to successfully determine the important thermodynamic properties, which are key factors in optimizing and designing different industrial units such as refineries and petrochemical plants.
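The core of an extreme learning machine is a random, frozen hidden layer whose output weights are found in a single least-squares solve, with no iterative backpropagation. The toy regression target below is a hypothetical stand-in for the solubility data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression standing in for solubility vs. two input variables.
X = rng.uniform(-1, 1, (400, 2))
y = np.sin(2 * X[:, 0]) * np.cos(X[:, 1])     # hypothetical smooth target

# ELM: random input weights and biases are drawn once and never trained.
H = 50
W = rng.normal(size=(2, H))
b = rng.normal(size=H)
A = np.tanh(X @ W + b)                        # random nonlinear feature map

# Only the output weights are fitted, in closed form, by least squares.
beta, *_ = np.linalg.lstsq(A, y, rcond=None)

pred = A @ beta
r2 = 1 - ((y - pred) ** 2).sum() / ((y - y.mean()) ** 2).sum()
```

Because training reduces to one linear solve, fitting is orders of magnitude faster than gradient-based ANN training, which is the practical appeal of ELM for large property databanks.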

The effect of urban form on energy consumption has been the subject of various studies around the world. Having examined the effect of buildings on energy consumption, these studies indicate that the physical form of a city has a notable impact on the amount of energy consumed in its spaces. The present study identified the variables affecting energy consumption in residential buildings and analyzed their effects on energy consumption in four neighborhoods in Tehran: Apadana, Bimeh, Ekbatan phase I, and Ekbatan phase II. After extracting the variables, their effects were estimated with statistical methods, and the results were compared with the land surface temperature (LST) remote sensing data derived from Landsat 8 satellite images taken in the winter of 2019. The results showed that physical variables, such as the size of buildings, population density, vegetation cover, texture concentration, and surface color, have the greatest impacts on energy usage. For the Apadana neighborhood, the factors with the strongest effect on energy consumption were found to be the size of buildings and the population density. For the other neighborhoods, in addition to these two factors, a third factor was also recognized to have a significant effect on energy consumption: the type of buildings for Bimeh, texture concentration for Ekbatan I, and the orientation of buildings for Ekbatan II.

Evaporation is a very important process; it is one of the most critical factors in agricultural, hydrological, and meteorological studies. Due to the interactions of multiple climatic factors, evaporation is a complex and nonlinear phenomenon to model; thus, machine learning methods have gained popularity in this realm. In the present study, four machine learning methods, Gaussian Process Regression (GPR), K-Nearest Neighbors (KNN), Random Forest (RF) and Support Vector Regression (SVR), were used to predict the pan evaporation (PE). Meteorological data, including PE, temperature (T), relative humidity (RH), wind speed (W), and sunshine hours (S), were collected from 2011 through 2017. The accuracy of the studied methods was determined using the statistical indices of Root Mean Squared Error (RMSE), correlation coefficient (R) and Mean Absolute Error (MAE). Furthermore, Taylor charts were utilized for evaluating the accuracy of the mentioned models. The results showed that at the Gonbad-e Kavus, Gorgan and Bandar Torkman stations, GPR achieved RMSEs of 1.521, 1.244, and 1.254 mm/day; KNN, 1.991, 1.775, and 1.577 mm/day; RF, 1.614, 1.337, and 1.316 mm/day; and SVR, 1.55, 1.262, and 1.275 mm/day, respectively. It was found that GPR for Gonbad-e Kavus Station, with input parameters T, W and S, and GPR for the Gorgan and Bandar Torkman stations, with input parameters T, RH, W and S, gave the most accurate predictions and were proposed for precise estimation of PE. The findings of the current study indicate that PE values may be accurately estimated with a few easily measured meteorological parameters.
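A minimal numpy version of GP regression (RBF kernel, noisy observations, closed-form posterior mean and variance) illustrates the GPR model used above. The 1-D toy data, predicting an evaporation-like quantity from temperature alone, and all kernel hyperparameters are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# 1-D toy stand-in: an evaporation-like target driven by temperature.
Xtr = np.sort(rng.uniform(0, 35, 40))
ytr = 0.2 * Xtr + np.sin(Xtr / 4) + rng.normal(scale=0.1, size=40)
Xte = np.linspace(0, 35, 100)

def rbf(a, b, ell=5.0, sf=2.0):
    """Squared-exponential kernel with length scale ell and signal scale sf."""
    return sf**2 * np.exp(-0.5 * (a[:, None] - b[None]) ** 2 / ell**2)

# Standard GP posterior with observation noise sigma_n = 0.1.
K = rbf(Xtr, Xtr) + 0.1**2 * np.eye(40)
Ks = rbf(Xte, Xtr)
alpha = np.linalg.solve(K, ytr)
mean = Ks @ alpha                              # posterior predictive mean
var = rbf(Xte, Xte).diagonal() \
    - np.einsum('ij,ji->i', Ks, np.linalg.solve(K, Ks.T))  # predictive variance
```

Unlike the point predictions of KNN, RF, or SVR, the GP also returns `var`, a per-point uncertainty band, which is useful when deciding whether a station's few measured parameters are sufficient.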

Estimating the solubility of carbon dioxide in ionic liquids using reliable models is of paramount importance from both environmental and economic points of view. In this regard, the current research evaluates the performance of two data-driven techniques, namely multilayer perceptron (MLP) and gene expression programming (GEP), for predicting the solubility of carbon dioxide (CO2) in ionic liquids (ILs) as a function of pressure, temperature, and four thermodynamic parameters of the ionic liquid. To develop these techniques, 744 experimental data points derived from the literature, covering 13 ILs, were used (80% of the points for training and 20% for validation). Two backpropagation-based methods, namely Levenberg-Marquardt (LM) and Bayesian Regularization (BR), were applied to optimize the MLP algorithm. Various statistical and graphical assessments were applied to check the credibility of the developed techniques. The results were then compared with those calculated using the Peng-Robinson (PR) and Soave-Redlich-Kwong (SRK) equations of state (EoS). The highest coefficient of determination (R2 = 0.9965) and the lowest root mean square error (RMSE = 0.0116) were recorded for the MLP-LMA model on the full dataset (with a negligible difference from the MLP-BR model). The comparison of results from this model with the widely applied thermodynamic equation-of-state models revealed slightly better performance, though the EoS approaches also performed well, with R2 from 0.984 up to 0.996. Lastly, the newly established correlation based on the GEP model exhibited very satisfactory results, with overall values of R2 = 0.9896 and RMSE = 0.0201.

FCS-MBFLEACH: Designing an Energy-Aware Fault Detection System for Mobile Wireless Sensor Networks
(2019)

Wireless sensor networks (WSNs) consist of large numbers of sensor nodes that are densely and randomly distributed over a geographical region to monitor, identify, and analyze physical events. The crucial challenge in wireless sensor networks is the very high dependence of the sensor nodes on limited, non-rechargeable battery power to exchange information wirelessly, which makes managing and monitoring these nodes for abnormal changes very difficult. Such anomalies arise from hardware and software faults as well as attacks by intruders, all of which affect the integrity of the data collected by wireless sensor networks. Hence, effective measures should be taken to detect faults in the network early, despite the limitations of the sensor nodes. Machine learning methods offer solutions that can be used to detect sensor node faults in the network. The purpose of this study is to use several classification methods to compute the fault detection accuracy with different node densities under two scenarios in regions of interest, namely MB-FLEACH, one-class support vector machine (SVM), fuzzy one-class SVM, and the combined FCS-MBFLEACH method. It should be noted that, to date, no super cluster head (SCH) selection has been performed to detect node faults in the network. The simulation outcomes demonstrate that the FCS-MBFLEACH method has the best performance in terms of fault detection accuracy, false-positive rate (FPR), average remaining energy, and network lifetime compared to the other classification methods.
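One-class SVM fault detection of the kind compared here can be sketched with scikit-learn on synthetic node features; the features (battery voltage, reading residual) and all numbers below are illustrative assumptions, not the paper's setup:

```python
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
# synthetic per-node features: (battery voltage, reading residual) -- illustrative
healthy = rng.normal(loc=[3.6, 0.0], scale=[0.05, 0.1], size=(200, 2))
faulty  = rng.normal(loc=[2.5, 3.0], scale=[0.10, 0.3], size=(20, 2))

# train on healthy behaviour only; nu bounds the fraction flagged as outliers
clf = OneClassSVM(kernel="rbf", gamma="scale", nu=0.05).fit(healthy)

pred_h = clf.predict(healthy)   # +1 = normal, -1 = suspected fault
pred_f = clf.predict(faulty)
print((pred_h == 1).mean(), (pred_f == -1).mean())
```

Nodes whose features drift far from the learned "healthy" region are flagged without ever seeing labeled fault data, which matches the one-class setting of the study.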

This thesis addresses an adaptive higher-order method based on a Geometry Independent Field approximation (GIFT) of polynomial/rational splines over hierarchical T-meshes (PHT/RHT-splines).
In isogeometric analysis, the basis functions used for constructing geometric models in computer-aided design (CAD) are also employed to discretize the partial differential equations (PDEs) for numerical analysis. Non-Uniform Rational B-Splines (NURBS) are the most commonly used basis functions in CAD. However, they may not be ideal for numerical analysis where local refinement is required.
The alternative method GIFT deploys different splines for geometry and numerical analysis. NURBS are utilized for the geometry representation, while for the field solution, PHT/RHT-splines are used. PHT-splines not only inherit the useful properties of B-splines and NURBS, but also possess the capabilities of local refinement and hierarchical structure. The smooth basis function properties of PHT-splines make them suitable for analysis purposes. While most problems considered in isogeometric analysis can be solved efficiently when the solution is smooth, many non-trivial problems have rough solutions. For example, this can be caused by the presence of re-entrant corners in the domain. For such problems, a tensor-product basis (as in the case of NURBS) is less suitable for resolving the singularities that appear since refinement propagates throughout the computational domain. Hierarchical bases and local refinement (as in the case of PHT-splines) allow for a more efficient way to resolve these singularities by adding more degrees of freedom where they are necessary. In order to drive the adaptive refinement, an efficient recovery-based error estimator is proposed in this thesis. The estimator produces a recovery solution which is a more accurate approximation than the computed numerical solution. Several two- and three-dimensional numerical investigations with PHT-splines of higher order and continuity prove that the proposed method is capable of obtaining results with higher accuracy, better convergence, fewer degrees of freedom and less computational cost than NURBS for smooth solution problems. The adaptive GIFT method utilizing PHT-splines with the recovery-based error estimator is used for solutions with discontinuities or singularities where adaptive local refinement in particular domains of interest achieves higher accuracy with fewer degrees of freedom. 
The method also handles complicated multi-patch domains for two- and three-dimensional problems, outperforming uniform refinement in terms of degrees of freedom and computational cost.
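The recovery idea behind the error estimator can be illustrated on a toy 1D Poisson problem with linear finite elements instead of PHT-splines; the ZZ-style gradient averaging below is a minimal sketch under simplified assumptions, not the thesis's implementation:

```python
import numpy as np

def fe_solve(n):
    """Linear FE for -u'' = pi^2 sin(pi x), u(0)=u(1)=0, on n uniform elements."""
    h = 1.0 / n
    x = np.linspace(0.0, 1.0, n + 1)
    K = (np.diag(np.full(n - 1, 2.0)) - np.diag(np.ones(n - 2), 1)
         - np.diag(np.ones(n - 2), -1)) / h          # tridiagonal stiffness
    f = np.pi**2 * np.sin(np.pi * x[1:-1]) * h       # lumped load vector
    u = np.zeros(n + 1)
    u[1:-1] = np.linalg.solve(K, f)
    return x, u

def recovery_estimator(x, u):
    """Recovered (node-averaged) gradient vs. raw piecewise-constant FE gradient."""
    h = np.diff(x)
    g = np.diff(u) / h                      # element gradients
    gn = np.empty(len(x))                   # recovered nodal gradients
    gn[1:-1] = 0.5 * (g[:-1] + g[1:])
    gn[0], gn[-1] = g[0], g[-1]
    # element indicator: L2 norm of (recovered - FE) gradient, trapezoid rule
    eta2 = h / 2.0 * ((gn[:-1] - g)**2 + (gn[1:] - g)**2)
    return np.sqrt(eta2.sum())

eta_coarse = recovery_estimator(*fe_solve(10))
eta_fine = recovery_estimator(*fe_solve(20))
print(eta_coarse, eta_fine)
```

Elements with large indicators would be marked for refinement; here the global estimate simply shrinks as the mesh is refined.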

In recent years the demand on dynamic analyses of existing structures in civil engineering has remarkably increased. These analyses are mainly based on numerical models. Accordingly, the generated results depend on the quality of the used models. Therefore it is very important that the models describe the considered systems such that the behaviour of the physical structure is realistically represented. As any model is based on assumptions, there is always a certain degree of uncertainty present in the results of a simulation based on the respective numerical model. To minimise these uncertainties in the prediction of the response of a structure to a certain loading, it has become common practice to update or calibrate the parameters of a numerical model based on observations of the structural behaviour of the respective existing system.
The determination of the behaviour of an existing structure requires experimental investigations. If the numerical analyses concern the dynamic response of a structure, it is sensible to direct the experimental investigations towards the identification of the dynamic structural behaviour, which is determined by the modal parameters of the system. In consequence, several methods for the experimental identification of modal parameters have been developed since the 1980s.
Due to various technical constraints in civil engineering which limit the possibilities to excite a structure with economically reasonable effort, several methods have been developed that allow a modal identification from tests with an ambient excitation. The approach of identifying modal parameters only from measurements of the structural response, without precise knowledge of the excitation, is known as output-only or operational modal analysis.
Since operational modal analysis (OMA) can be considered as a link between numerical modelling and simulation on the one hand and the dynamic behaviour of an existing structure on the other hand, the respective algorithms connect both the concepts of structural dynamics and mathematical tools applied within the processing of experimental data. Accordingly, the related theoretical topics are revised after an introduction into the topic.
Several OMA methods have been developed over the last decades. The most established algorithms are presented here and their application is illustrated by means of both a small numerical and an experimental example. Since experimentally obtained results are always subject to manifold influences, an appropriate postprocessing of the results is necessary for a respective quality assessment. This quality assessment does not only require respective indicators but should also include the quantification of uncertainties.
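A toy illustration of identifying modal parameters from response data alone: for a simulated single-degree-of-freedom free-decay record, the natural frequency follows from the FFT peak and the damping ratio from the logarithmic decrement (established OMA algorithms such as SSI or FDD are far more general; all numbers here are synthetic):

```python
import numpy as np

# simulate free decay of an SDOF system (fn = 2.0 Hz, zeta = 2 %) -- synthetic data
fn, zeta, fs, T = 2.0, 0.02, 100.0, 60.0
t = np.arange(0.0, T, 1.0 / fs)
wd = 2 * np.pi * fn * np.sqrt(1 - zeta**2)
y = np.exp(-zeta * 2 * np.pi * fn * t) * np.cos(wd * t)

# frequency from the FFT peak of the response (no knowledge of the excitation)
Y = np.abs(np.fft.rfft(y))
freqs = np.fft.rfftfreq(len(y), 1.0 / fs)
f_id = freqs[np.argmax(Y)]

# damping from the logarithmic decrement over m successive positive peaks
peaks = [i for i in range(1, len(y) - 1)
         if y[i] > y[i - 1] and y[i] > y[i + 1] and y[i] > 0]
m = 10
delta = np.log(y[peaks[0]] / y[peaks[m]]) / m
zeta_id = delta / np.sqrt(4 * np.pi**2 + delta**2)
print(f_id, zeta_id)
```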
One special feature in modal testing is that it is common to instrument the structure in different sensor setups to improve the spatial resolution of identified mode shapes. The modal information identified from tests in several setups needs to be merged a posteriori. Algorithms to cope with this problem are also presented.
Due to the fact that the amount of data generated in modal tests can become very large, manual processing can become extremely expensive or even impossible, for example in the case of a long-term continuous structural monitoring. In these situations an automated analysis and postprocessing are essential. Descriptions of respective methodologies are therefore also included in this work.
Every structural system in civil engineering is unique, and so every identification of modal parameters has its specific challenges. Some aspects that can be faced in practical applications of operational modal analysis are presented and discussed in a chapter that is dedicated to specific problems an analyst may have to overcome. Case studies of systems with very close modes and with limited accessibility, as well as the application of different OMA methods, are described and discussed. In this context the focus is put on several types of uncertainty that may occur in the multiple stages of an operational modal analysis. In the literature, only very specific uncertainties at certain stages of the analysis are addressed. Here, the topic of uncertainties has been considered in a broader sense and approaches for treating the respective problems are suggested.
Eventually, it is concluded that the methodologies of operational modal analysis and the related technical solutions are already well-engineered. However, as in any discipline that includes experiments, a certain degree of uncertainty always remains in the results. From these conclusions, a demand for further research and development is derived, directed towards the minimisation of these uncertainties and a respective optimisation of the steps and corresponding parameters included in an operational modal analysis.

Renewable energy use is on the rise, and these alternative energy resources can help combat climate change. Around 80% of the world's electricity comes from coal and petroleum; however, renewables are the fastest-growing source of energy in the world. Solar, wind, hydro, geothermal and biogas are the most common forms of renewable energy. Among them, wind energy is emerging as a reliable and large-scale source of power production. Recent research and growing confidence in their performance have led to the construction of more and bigger wind turbines around the world. As wind turbines are getting bigger, concerns regarding their safety are also under discussion. Wind turbines are expensive machinery to construct, and the enormous capital investment is one of the main reasons why many countries are unable to adopt wind energy. Generally, a reliable wind turbine will result in better performance and assist in minimizing the cost of operation. If a wind turbine fails, it is a loss of investment and can be harmful to the surrounding habitat. This thesis aims at estimating the reliability of an offshore wind turbine. A model of a jacket-type offshore wind turbine is prepared using the finite element software package ABAQUS and evaluated against the structural failure criteria of the wind turbine tower. UQLab, a general uncertainty quantification framework developed at ETH Zürich, is used for the reliability analysis. Several probabilistic methods are included in the framework of UQLab, among them Monte Carlo simulation, the First Order Reliability Method and Adaptive Kriging Monte Carlo Simulation. This reliability study is performed only for the structural failure of the wind turbine, but it can be extended to many other forms of failure, e.g. reliability of power production or reliability of different component failures.
It is a useful tool for estimating the reliability of future wind turbines, which could result in safer and better-performing wind turbines.
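A crude Monte Carlo reliability sketch in the spirit of such analyses, with a made-up normal capacity/load limit state (the values are illustrative, not the thesis's wind turbine model; for two normals the exact failure probability is available as a sanity check):

```python
import random
from statistics import NormalDist

random.seed(42)

# limit state g = R - S: capacity R vs. load effect S (illustrative values only)
mu_R, sd_R = 500.0, 50.0   # e.g. a tower section resistance
mu_S, sd_S = 350.0, 55.0   # e.g. an extreme environmental load effect

n = 200_000
fails = sum(1 for _ in range(n)
            if random.gauss(mu_R, sd_R) - random.gauss(mu_S, sd_S) < 0.0)
pf_mc = fails / n

# for two normals the exact answer is Phi(-beta), with beta the reliability index
beta = (mu_R - mu_S) / (sd_R**2 + sd_S**2) ** 0.5
pf_exact = NormalDist().cdf(-beta)
print(pf_mc, pf_exact)
```

FORM would approximate the same failure probability from the reliability index directly, while adaptive Kriging spends far fewer limit-state evaluations than the plain sampling shown here.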

The underlying goal of this work is to reduce the uncertainty related to thermally induced stress prediction. This is accomplished by considering non-linear material behavior, notably path-dependent thermal hysteresis in the elastic properties.
Primary novel factors of this work center on two aspects.
1. Broad material characterization and mechanistic material understanding, giving insight into why this class of material behaves in characteristic manners.
2. Development and implementation of a thermal hysteresis material model and its use to determine impact on overall macroscopic stress predictions.
Results highlight microcracking evolution and behavior as the dominant mechanism for material property complexity in this class of materials. Additionally, it was found that for the cases studied, thermal hysteresis behavior impacts relevant peak stress predictions of a heavy-duty diesel particulate filter undergoing a drop-to-idle regeneration by less than ~15% for all conditions tested. It is also found that path independent heating curves may be utilized for a linear solution assumption to simplify analysis.
This work brings forth a newly conceived concept of a 3-state, 4-path, thermally induced microcrack evolution process; demonstrates experimental behavior that is consistent with the proposed mechanisms; develops a mathematical framework that describes the process; and quantifies the impact in a real-world application space.

Turbomachinery plays an important role in many cases of energy generation or conversion. It is therefore a promising target for optimization efforts to increase the efficiency of energy use. In recent years, the use of automated optimization strategies in combination with numerical simulation has become increasingly popular in many fields of engineering. The complex interactions between fluid and solid mechanics encountered in turbomachines on the one hand, and the high computational expense needed to calculate the performance on the other hand, have, however, prevented a widespread use of these techniques in this field of engineering. The objective of this work was the development of a strategy for efficient metamodel-based optimization of centrifugal compressor impellers. In this context, the main focus is the reduction of the required numerical expense. The central idea followed in this research was the incorporation of preliminary information, acquired from low-fidelity computation methods and empirical correlations, into the sampling process to identify promising regions of the parameter space. This information was then used to concentrate the numerically expensive high-fidelity computations of the fluid-dynamic and structural-mechanical performance of the impeller in these regions, while still maintaining a good coverage of the whole parameter space. The development of the optimization strategy can be divided into three main tasks. Firstly, the available preliminary information had to be researched and rated. This research identified loss models based on one-dimensional flow physics and empirical correlations as the best-suited method to predict the aerodynamic performance. The loss models were calibrated using available performance data to obtain a high prediction quality.
As no sufficiently exact models for the prediction of the mechanical loading of the impeller could be identified, a metamodel based on finite element computations was chosen for this estimation. The second task was the development of a sampling method which concentrates samples in regions of the parameter space where high-quality designs are predicted by the preliminary information, while maintaining a good overall coverage. As available methods like rejection sampling or Markov-chain Monte-Carlo methods did not meet the requirements in terms of sample distribution and input correlation, a new multi-fidelity sampling method called “Filtered Sampling” has been developed. The last task was the development of an automated computational workflow. This workflow encompasses geometry parametrization, geometry generation, grid generation and the computation of the aerodynamic performance and the structural-mechanical loading. Special emphasis was placed on the development of a geometry parametrization strategy based on fluid-mechanic considerations to prevent the generation of physically inexpedient designs. Finally, the optimization strategy, which utilizes the previously developed tools, was successfully employed to carry out three optimization tasks. The efficiency of the method was proven by the first and second test cases, in which an existing compressor design was optimized by the presented method. The results were comparable to optimizations which did not take preliminary information into account, while the required computational expense could be halved. In the third test case, the method was applied to generate a new impeller design. In contrast to the previous examples, this optimization featured larger variations of the impeller designs. Therefore, the applicability of the method to parameter spaces with significantly varying designs could be proven, too.
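The filtering idea can be sketched in the abstract: score candidates with a cheap stand-in for the low-fidelity models, keep the predicted-good ones, and retain a random share of the rest so the whole parameter space stays covered. All functions and fractions below are hypothetical, not the thesis's "Filtered Sampling" algorithm:

```python
import random
random.seed(1)

def lowfi_score(x, y):
    """Hypothetical low-fidelity quality estimate (stand-in for loss models)."""
    return -((x - 0.7)**2 + (y - 0.3)**2)

# draw candidates uniformly, then filter with the cheap score
candidates = [(random.random(), random.random()) for _ in range(2000)]
scores = sorted(lowfi_score(x, y) for x, y in candidates)
threshold = scores[int(0.7 * len(scores))]      # top 30 % count as "promising"

# keep every promising point, plus ~15 % of the rest for global coverage
selected = [p for p in candidates
            if lowfi_score(*p) >= threshold or random.random() < 0.15]

mean_all = sum(lowfi_score(*p) for p in candidates) / len(candidates)
mean_sel = sum(lowfi_score(*p) for p in selected) / len(selected)
print(len(selected), mean_sel, mean_all)
```

Only the `selected` points would then be passed to the expensive high-fidelity CFD/FE evaluations.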

Due to an increased need for hydro-electricity, water storage, and flood protection, it is assumed that a series of new dams will be built throughout the world. Compared with existing design methodologies for arch-type dams, model-based shape optimization can effectively reduce construction costs and leverage the properties of construction materials. To apply the means of shape optimization, suitable variables need to be chosen to formulate the objective function, which here is the volume of the arch dam. In order to increase the consistency with practical conditions, a great number of geometrical and behavioral constraints are included in the mathematical model. An optimization method, namely the Genetic Algorithm, is adopted, which allows a global search.
Traditional optimization techniques are realized based on a deterministic approach, which means that the material properties and loading conditions are assumed to be fixed values. As a result, the real-world structures that are optimized by these approaches suffer from uncertainties that one needs to be aware of. Hence, in any optimization process for arch dams, it is necessary to find a methodology that is capable of considering the influences of uncertainties and generating a solution which is robust enough against the uncertainties.
The focus of this thesis is the formulation and the numerical method for the optimization of the arch dam under uncertainties. The two main models, the probabilistic model and non-probabilistic models, are introduced and discussed. Classic procedures of probabilistic approaches under uncertainties, such as RDO (robust design optimization) and RBDO (reliability-based design optimization), are in general computationally expensive and rely on estimates of the system's response variance and failure probabilities. Instead, the robust optimization (RO) method, which is based on the non-probabilistic model, does not follow a full probabilistic approach but works with pre-defined confidence levels. This leads to a bi-level optimization program where the volume of the dam is optimized under the worst combination of the uncertain parameters. By this, robust and reliable designs are obtained and the result is independent of any assumptions on stochastic properties of the random variables in the model.
The optimization of an arch-type dam is realized here by a robust optimization method under load uncertainty, where hydraulic and thermal loads are considered. The load uncertainty is modeled as an ellipsoidal expression. Compared with a traditional deterministic optimization (DO) method, which only concerns the minimum objective value and offers a solution candidate close to limit-states, the RO method provides a robust solution against uncertainties.
All the above-mentioned methods are applied to the optimization of the arch dam and compared with the optimal design from the DO methods. The results are compared and analyzed to discuss the advantages and drawbacks of each method.
In order to reduce the computational cost, a ranking strategy and an approximation model are further involved to do a preliminary screening. By means of these, the robust design can generate an improved arch dam structure which ensures both safety and serviceability during its lifetime.
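The bi-level worst-case idea can be shown on a one-dimensional toy problem, where the inner maximization over an interval (the 1D case of an ellipsoid) is analytic; the objective below is made up and has nothing to do with the actual dam model:

```python
# bi-level sketch: minimize over the design x the worst case over an
# uncertainty u with |u| <= r -- a toy objective, not the dam volume model
def objective(x, u):
    return (x - 2.0) ** 2 + x * u      # nominal cost plus uncertain load term

r = 0.5                                # pre-defined confidence level / radius

def worst_case(x):
    # inner maximization is analytic here: the worst u is r * sign(x)
    return objective(x, r if x >= 0 else -r)

xs = [i / 1000.0 for i in range(-1000, 4001)]        # grid search on [-1, 4]
x_det = min(xs, key=lambda x: objective(x, 0.0))     # deterministic optimum
x_rob = min(xs, key=worst_case)                      # robust optimum
print(x_det, x_rob, worst_case(x_rob))
```

The robust design backs away from the deterministic optimum (here from x = 2 to x = 1.75) and pays a small nominal penalty in exchange for a better guaranteed worst case.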

Since the Industrial Revolution in the 1700s, the high emission of gaseous wastes into the atmosphere from the usage of fossil fuels has caused a general increase in temperatures globally. To combat the environmental imbalance, there is an increase in the demand for renewable energy sources. Dams play a major role in the generation of “green" energy. However, these structures require frequent and strict monitoring to ensure safe and efficient operation. To tackle the challenges faced in the application of conventional dam monitoring techniques, this work proposes the inverse analysis of numerical models to identify damaged regions in the dam. Using a dynamic coupled hydro-mechanical Extended Finite Element Method (XFEM) model and a global optimization strategy, damage (a crack) in the dam is identified. By employing seismic waves to probe the dam structure, more detailed information on the distribution of heterogeneous materials and damaged regions is obtained by the application of the Full Waveform Inversion (FWI) method. The FWI is based on a local optimization strategy and is thus highly dependent on the starting model. A variety of data acquisition setups are investigated, and an optimal setup is proposed. The effect of different starting models and of noise in the measured data on the damage identification is considered. Combining the starting-model independence of the global optimization strategy based dynamic coupled hydro-mechanical XFEM method and the detailed output of the local optimization strategy based FWI method, an enhanced Full Waveform Inversion is proposed for the structural analysis of dams.

The vibration control of tall buildings during earthquake excitations is a challenging task due to their complex seismic behavior. This paper investigates the optimum placement and properties of Tuned Mass Dampers (TMDs) in tall buildings, which are employed to control the vibrations during earthquakes. An algorithm was developed to spend a limited mass either in a single TMD or in multiple TMDs and distribute them optimally over the height of the building. The Non-dominated Sorting Genetic Algorithm (NSGA-II) method was improved by adding multi-variant genetic operators and utilized to simultaneously study the optimum design parameters of the TMDs and their optimum placement. The results showed that under earthquake excitations with noticeable amplitude in higher modes, distributing TMDs over the height of the building is more effective in mitigating the vibrations than using a single TMD system. From the optimization, it was observed that the locations of the TMDs were related to the stories corresponding to the maximum modal displacements in the lower modes and in the modes which were highly activated by the earthquake excitations. It was also noted that the frequency content of the earthquake has a significant influence on the optimum location of the TMDs.
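For a single TMD acting on one mode, the classic Den Hartog tuning rules give a closed-form starting point; they assume an undamped primary structure under harmonic load, so they are not the paper's NSGA-II optimum, and the building numbers below are invented:

```python
import math

def den_hartog_tmd(m_s, f_s, mu):
    """Classic Den Hartog tuning for a single TMD on an undamped primary mode.

    m_s : modal mass of the controlled mode (kg)
    f_s : natural frequency of that mode (Hz)
    mu  : chosen TMD-to-modal-mass ratio
    """
    m_d = mu * m_s
    f_opt = f_s / (1.0 + mu)                                # optimal TMD frequency
    zeta_opt = math.sqrt(3.0 * mu / (8.0 * (1.0 + mu)**3))  # optimal damping ratio
    k_d = m_d * (2.0 * math.pi * f_opt) ** 2                # spring stiffness
    c_d = 2.0 * zeta_opt * m_d * 2.0 * math.pi * f_opt      # dashpot coefficient
    return m_d, k_d, c_d, f_opt, zeta_opt

# e.g. first mode of a tall building: 4000 t modal mass, 0.2 Hz, 2 % mass ratio
m_d, k_d, c_d, f_opt, zeta_opt = den_hartog_tmd(4.0e6, 0.20, 0.02)
print(m_d, f_opt, zeta_opt)
```

Multi-objective searches such as the paper's improved NSGA-II are needed precisely because these closed-form rules do not cover multiple TMDs, higher-mode participation, or the frequency content of a specific earthquake.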

Identification of flaws in structures is a critical element in the management of maintenance and quality assurance processes in engineering. Nondestructive testing (NDT) techniques based on a wide range of physical principles have been developed and are used in common practice for structural health monitoring. However, basic NDT techniques are usually limited in their ability to provide accurate information on the locations, dimensions and shapes of flaws. One alternative to extract additional information from the results of NDT is to combine them with a computational model that provides a detailed analysis of the physical process involved and enables the accurate identification of the flaw parameters. The aim here is to develop strategies to uniquely identify cracks in two-dimensional (2D) structures under dynamic loadings.
A local NDT technique that combines the eXtended Finite Element Method (XFEM) with dynamic loading in order to identify cracks in structures quickly and accurately is developed in this dissertation. The Newmark-β time integration method with Rayleigh damping is used for the time integration. We apply the Nelder-Mead (NM) and Quasi-Newton (QN) methods for identifying the crack tip in a plate. The inverse problem is solved iteratively, with XFEM used for solving the forward problem in each iteration. For a time-harmonic excitation with a single frequency and a short-duration signal measured along part of the external boundary, the crack is detected through the solution of an inverse time-dependent problem. Compared to static loads, we show that dynamic loads are more effective for crack detection problems. Moreover, we test different dynamic loads and find that the NM method works more efficiently under harmonic load than under pounding load, while the QN method achieves almost the same results for both load types.
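The NM-based inverse identification loop can be sketched with SciPy, replacing the XFEM forward solves with a cheap hypothetical forward model so that the example runs standalone; the sensor layout, forward response and true tip position are all invented:

```python
import numpy as np
from scipy.optimize import minimize

# hypothetical forward model: sensor responses as a smooth function of the
# crack-tip position p -- a stand-in for the XFEM forward solves in the thesis
sensors = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0], [0.5, 0.0]])

def forward(p):
    return np.exp(-np.linalg.norm(sensors - p, axis=1))

p_true = np.array([0.6, 0.4])
measured = forward(p_true)                 # synthetic boundary "measurements"

def misfit(p):                             # least-squares objective
    return np.sum((forward(p) - measured) ** 2)

# derivative-free Nelder-Mead search for the crack-tip parameters
res = minimize(misfit, x0=[0.2, 0.2], method="Nelder-Mead",
               options={"xatol": 1e-8, "fatol": 1e-12})
print(res.x)
```

In the thesis, each `misfit` evaluation would be one transient XFEM solve, which is why the number of iterations of the optimizer matters so much.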
A global strategy, Multilevel Coordinate Search (MCS) with XFEM (XFEM-MCS), under dynamic electric load is proposed in this dissertation to detect multiple cracks in 2D piezoelectric plates. The Newmark-β method is employed for the time integration, and in each iteration the forward problem is solved by XFEM for the various cracks. The objective functional is minimized by using the global search algorithm MCS. The test problems show that the XFEM-MCS algorithm under dynamic electric load can be effectively employed for the detection of multiple cracks in piezoelectric materials, and it proves to be robust in identifying defects in piezoelectric structures.
Fiber-reinforced composites (FRCs) are extensively applied in practical engineering since they have high stiffness and strength. Experiments reveal a so-called interphase zone, i.e. the space between the outside interface of the fiber and the inside interface of the matrix. The interphase strength between the fiber and the matrix strongly affects the mechanical properties as a result of the large interface-to-volume ratio. For the purpose of understanding the mechanical properties of FRCs with a functionally graded interphase (FGI), a closed-form expression for the interface strength between a fiber and a matrix is obtained in this dissertation using a continuum modeling approach based on van der Waals (vdW) forces. Based on the interatomic potential, we develop a new modified nonlinear cohesive law, which is applied to study the interface delamination of FRCs with FGI under different loadings. The analytical solutions show that the delamination behavior strongly depends on the interphase thickness, the fiber radius, and the Young's moduli and Poisson's ratios of the fiber and the matrix.
Thermal conductivity is the property of a material to conduct heat.
With the development and deeper research of 2D materials, especially graphene and molybdenum disulfide (MoS2), the thermal conductivity of 2D materials has attracted wide attention. The thermal conductivity of graphene nanoribbons (GNRs) is found to decrease under tensile strain in classical molecular dynamics (MD) simulations. Hence, strain effects can play a key role in the continuous tunability and applicability of the thermal conductivity of graphene at the nanoscale, and the reduction of thermal conductivity is an obstacle for thermal management applications. Up to now, the thermal conductivity of graphene under shear deformation has not been investigated. From a practical point of view, good thermal management of GNRs has significant potential for future GNR-based thermal nanodevices, which can greatly improve the performance of nanosized devices limited by heat dissipation. Since graphene is a thin membrane structure, it is also important to understand its wrinkling behavior under shear deformation. MoS2 exists in the stable semiconducting 1H phase (1H-MoS2), while the metallic 1T phase (1T-MoS2) is unstable at ambient conditions. Much attention has been focused on studying the nonlinear optical properties of 1H-MoS2. In very recent research, 1T-type monolayer crystals of TMDCs, MX2 (MoS2, WS2, ...), were reported to have an intrinsic in-plane negative Poisson's ratio. Fortunately, at nearly the same time, unprecedented long-term (>3 months) air stability of 1T-MoS2 was achieved by using the donor lithium hydride (LiH). Therefore, it is very important to study the thermal conductivity of 1T-MoS2.
The thermal conductivity of graphene under shear strain is systematically studied in this dissertation by MD simulations. The results show that, in contrast to the dramatic decrease of the thermal conductivity of graphene under uniaxial tension, the thermal conductivity of graphene is not sensitive to shear strain, decreasing by only 12-16%. Wrinkles evolve when the shear strain is around 5%-10%, but the thermal conductivity barely changes.
The thermal conductivities of single-layer 1H-MoS2 (1H-SLMoS2) and single-layer 1T-MoS2 (1T-SLMoS2) with different sample sizes, temperatures and strain rates have been studied systematically in this dissertation. We find that the thermal conductivities of 1H-SLMoS2 and 1T-SLMoS2 in both the armchair and the zigzag directions increase with increasing sample length, while increasing the width of the sample has only a minor effect on the thermal conduction of these two structures. With respect to this size effect, the thermal conductivity of 1H-SLMoS2 is smaller than that of 1T-SLMoS2. Furthermore, the temperature-effect results show that the thermal conductivities of both 1H-SLMoS2 and 1T-SLMoS2 decrease with increasing temperature. The thermal conductivities of 1H-SLMoS2 and 1T-SLMoS2 are nearly the same (difference <6%) in both chiral orientations at corresponding temperatures, especially in the armchair direction (difference <2.8%). Moreover, we find that the strain effects on the thermal conductivities of 1H-SLMoS2 and 1T-SLMoS2 are different: the thermal conductivity decreases with increasing tensile strain rate for 1T-SLMoS2, while it fluctuates with growing strain for 1H-SLMoS2. Finally, we find that the thermal conductivity of same-sized 1H-SLMoS2 is similar to that of the strained 1H-SLMoS2 structure.

Matrix-free voxel-based finite element method for materials with heterogeneous microstructures
(2019)

Modern imaging techniques such as micro-computed tomography (μCT), magnetic resonance imaging (MRI) and scanning electron microscopy (SEM) provide us with high-resolution images of the microstructure of materials in a non-invasive and convenient way. They form the basis for the geometrical models of high-resolution analysis, so-called image-based analysis.
However, especially in 3D, discretizations of these models easily reach the size of 100 million degrees of freedom and require extensive hardware resources in terms of main memory and computing power to solve the numerical model. Consequently, the focus of this work is to combine and adapt numerical solution methods, first to reduce the memory demand and then the computation time, and therewith enable an execution of the image-based analysis on modern desktop computers. Hence, the numerical model is a straightforward grid discretization of the voxel-based (pixels with a third dimension) geometry, which omits boundary detection algorithms and allows a reduced storage of the finite element data structure and a matrix-free solution algorithm.
This in turn reduces the effort of almost all applied grid-based solution techniques and results in memory-efficient and numerically stable algorithms for the microstructural models. Two variants of the matrix-free algorithm are presented. The efficient iterative solution method of conjugate gradients is used with preconditioners that can be applied matrix-free, such as the Jacobi and the especially suited multigrid method. The jagged material boundaries of the voxel-based mesh are smoothed through embedded boundary elements, which contain different material information at the integration points and are integrated sub-cell-wise, though without additional boundary detection. The efficiency of the matrix-free methods can be retained.
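The core pattern, conjugate gradients that touch the operator only through a callback and never assemble a matrix, can be shown on a 1D finite-difference Laplacian (a toy stand-in for the voxel FEM operator, not the thesis's implementation):

```python
import numpy as np

n = 63                      # interior grid points, h = 1/(n+1)
h = 1.0 / (n + 1)

def apply_A(u):
    """Matrix-free application of the 1D FD Laplacian -u''/h^2; no matrix stored."""
    Au = 2.0 * u.copy()
    Au[:-1] -= u[1:]
    Au[1:] -= u[:-1]
    return Au / h**2

def cg(apply_op, b, tol=1e-10, maxit=500):
    """Plain conjugate gradients working only through the operator callback."""
    x = np.zeros_like(b)
    r = b - apply_op(x)
    p = r.copy()
    rs = r @ r
    for _ in range(maxit):
        Ap = apply_op(p)
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

xg = np.linspace(h, 1.0 - h, n)
b = np.ones(n)                          # -u'' = 1, u(0)=u(1)=0  ->  u = x(1-x)/2
u = cg(apply_A, b)
print(np.max(np.abs(u - xg * (1.0 - xg) / 2.0)))
```

For a voxel mesh the callback would instead loop over identical element stencils, which is exactly what makes the memory footprint independent of the number of nonzeros a sparse matrix would need.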

In this work, molecular separation of an aqueous–organic system was simulated using combined soft-computing and mechanistic approaches. The considered separation system was a microporous membrane contactor for separation of benzoic acid from water by contacting with an organic phase containing extractor molecules. In this extractive separation, membrane technology is used and the solute–organic complex is formed at the interface. The main focus was to develop a simulation methodology for prediction of the concentration distribution of the solute (benzoic acid) on the feed side of the membrane system, as the removal efficiency of the system is determined by the concentration distribution of the solute in the feed channel. The structure of an Adaptive Neuro-Fuzzy Inference System (ANFIS) was optimized by finding the optimum membership function, learning percentage, and number of rules. The ANFIS was trained using data extracted from a CFD simulation of the membrane system. Comparison between the concentration distribution predicted by ANFIS and the CFD data revealed that the optimized ANFIS can be used as a predictive tool for simulation of the process. An R2 higher than 0.99 was obtained for the optimized ANFIS model. The main advantage of the developed methodology is its very low computational time, and it can be used as a rigorous simulation tool for understanding and design of membrane-based systems.
Highlights: molecular separation using microporous membranes; development of a hybrid ANFIS–CFD model for the separation process; optimization of the ANFIS structure for prediction of the separation process.
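For readers unfamiliar with the model family that ANFIS trains, a first-order Takagi–Sugeno fuzzy system can be evaluated in a few lines. The sketch below is a generic illustration, not the paper's trained model: the two rules, the Gaussian membership parameters, and the linear consequents are invented for demonstration.

```python
import numpy as np

def tsk_predict(x, centers, widths, coeffs):
    """First-order Takagi-Sugeno fuzzy inference (the model family ANFIS tunes):
    Gaussian memberships give rule firing strengths; the output is the
    firing-strength-weighted average of the linear rule consequents."""
    w = np.exp(-0.5 * ((x - centers) / widths) ** 2)  # firing strength of each rule
    y_rule = coeffs[:, 0] * x + coeffs[:, 1]          # linear consequent of each rule
    return float(np.sum(w * y_rule) / np.sum(w))

# two invented rules over one input (ANFIS would tune all of these parameters)
centers = np.array([0.0, 10.0])
widths = np.array([1.0, 1.0])
coeffs = np.array([[1.0, 0.0],    # rule 1: y = x   (active near x = 0)
                   [0.0, 5.0]])   # rule 2: y = 5   (active near x = 10)
y_low = tsk_predict(0.5, centers, widths, coeffs)     # dominated by rule 1
y_high = tsk_predict(10.0, centers, widths, coeffs)   # dominated by rule 2
```

ANFIS optimizes exactly these membership and consequent parameters against training data, here the CFD-derived concentration profiles.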

The production of a desired product requires an effective use of the experimental model. The present study proposes an extreme learning machine (ELM) and a support vector machine (SVM) integrated with the response surface methodology (RSM) to address the complexity of optimizing and predicting the ethyl ester and methyl ester production process. The novel hybrid models ELM-RSM and ELM-SVM are further used in a case study to estimate the yield of methyl and ethyl esters from a trans-esterification process of waste cooking oil (WCO) based on American Society for Testing and Materials (ASTM) standards. The results of the prediction phase were also compared with artificial neural networks (ANNs) and an adaptive neuro-fuzzy inference system (ANFIS), which were recently developed by the second author of this study. Based on the results, the ELM, with correlation coefficients of 0.9815 and 0.9863 for methyl and ethyl esters, respectively, had a higher estimation capability than SVM, ANNs, and ANFIS. Accordingly, the maximum production yield obtained using ELM-RSM was 96.86% for ethyl ester at a temperature of 68.48 °C, a catalyst value of 1.15 wt. %, a mixing intensity of 650.07 rpm, and an alcohol-to-oil molar ratio (A/O) of 5.77; for methyl ester, the production yield was 98.46% at a temperature of 67.62 °C, a catalyst value of 1.1 wt. %, a mixing intensity of 709.42 rpm, and an A/O of 6.09. Therefore, ELM-RSM increased the production yield by 3.6% for ethyl ester and 3.1% for methyl ester compared with the experimental data.
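The extreme learning machine itself is compact: the hidden-layer weights are drawn at random and never trained, and only the output weights are solved in closed form by least squares. The sketch below illustrates this on synthetic data; the real WCO dataset is not reproduced here, so the inputs and target are invented stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)

def elm_fit(X, y, n_hidden=100):
    """Extreme learning machine: random hidden layer, output weights by least squares."""
    W = rng.normal(size=(X.shape[1], n_hidden))   # random input weights (never trained)
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W + b)                        # hidden-layer activations
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)  # closed-form output weights
    return W, b, beta

def elm_predict(model, X):
    W, b, beta = model
    return np.tanh(X @ W + b) @ beta

# invented stand-in for the ester-yield data: four scaled inputs
# (temperature, catalyst, mixing intensity, A/O ratio) and a smooth target
X = rng.uniform(-1, 1, size=(200, 4))
y = np.sin(X[:, 0]) + X[:, 1] * X[:, 2]
model = elm_fit(X, y)
r = np.corrcoef(elm_predict(model, X), y)[0, 1]   # correlation on training data
```

Because training reduces to one least-squares solve, the ELM is orders of magnitude faster to fit than back-propagated ANNs, which is the property the study exploits.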

Management strategies for sustainable sugarcane production need to deal with the increasing complexity and variability of the whole sugar system. Moreover, they need to accommodate the multiple goals of different industry sectors and the wider community. Traditional disciplinary approaches are unable to provide integrated management solutions, and an approach based on whole-systems analysis is essential to bring about beneficial change to industry and the community. The application of this approach to water management, environmental management and cane supply management is outlined; the literature indicates that the application of the extreme learning machine (ELM) has never been explored in this realm. Consequently, the leading objective of the current research was to fill this gap by applying ELM to build a swift and accurate data-driven model of crop production. The key learning has been the need for innovation in the technical aspects of system function, underpinned by modelling of sugarcane growth. The current study is therefore an attempt to establish an integrated ELM model to predict the final growth of sugarcane. Prediction results were evaluated and further compared with artificial neural network (ANN) and genetic programming models. The accuracy of the ELM model is quantified using the statistical indicators Root Mean Square Error (RMSE), Pearson coefficient (r), and coefficient of determination (R2), with promising results of 0.8, 0.47, and 0.89, respectively. The results also show better generalization ability in addition to a faster learning curve. Thus, the proficiency of the ELM for further development of sugarcane growth prediction models was confirmed with promising results.

This dissertation is devoted to the theoretical development and experimental laboratory verification of a new damage localization method: the state projection estimation error (SP2E). This method is based on the subspace identification of mechanical structures, Krein space based H-infinity estimation and oblique projections. To explain the SP2E method, several theories are discussed and laboratory experiments have been conducted and analysed.
A fundamental approach of structural dynamics is outlined first by explaining mechanical systems based on first principles. Following that, a fundamentally different approach, subspace identification, is comprehensively explained. While both theories, first-principle and subspace-identification-based mechanical systems, may be seen as widespread methods, barely known and new techniques follow. Therefore, the indefinite quadratic estimation theory is explained. Based on a Popov function approach, this leads to the Krein space based H-infinity theory. Subsequently, a new method for damage identification, namely SP2E, is proposed. Here, the introduction of a difference process, its analysis by means of the average process power and the application of oblique projections are discussed in depth.
Finally, the new method is verified in laboratory experiments. To this end, the identification of a laboratory structure at Leipzig University of Applied Sciences is elaborated. Structural alterations were then experimentally applied and localized by SP2E afterwards. In the end, four experimental sensitivity studies are shown and discussed. For each measurement series the structural alteration was increased, which was successfully tracked by SP2E. The experimental results are plausible and in accordance with the developed theories. By repeating these experiments, the applicability of SP2E for damage localization is experimentally demonstrated.

Polymeric nanocomposites (PNCs) are considered for numerous nanotechnology applications such as nano-biotechnology, nano-systems, nano-electronics, and nano-structured materials. Commonly, they are formed by a polymer (epoxy) matrix reinforced with a nanosized filler. The addition of rigid nanofillers to the epoxy matrix has offered great improvements in fracture toughness without sacrificing other important thermo-mechanical properties. The physics of fracture in PNCs is rather complicated and is influenced by different parameters. The presence of uncertainty in the predicted output is expected as a result of stochastic variance in the factors affecting the fracture mechanism. Consequently, evaluating the improved fracture toughness in PNCs is a challenging problem.
Artificial neural network (ANN) and adaptive neuro-fuzzy inference system (ANFIS) have been employed to predict the fracture energy of polymer/particle nanocomposites. The ANN and ANFIS models were constructed, trained, and tested based on a collection of 115 experimental datasets gathered from the literature. The performance evaluation indices of the developed ANN and ANFIS showed relatively small error, with high coefficients of determination (R2), and low root mean square error and mean absolute percentage error.
In the framework for uncertainty quantification of PNCs, a sensitivity analysis (SA) has been conducted to examine the influence of uncertain input parameters on the fracture toughness of polymer/clay nanocomposites (PNCs). The phase-field approach is employed to predict the macroscopic properties of the composite considering six uncertain input parameters. The efficiency, robustness, and repeatability are compared and evaluated comprehensively for five different SA methods.
The Bayesian method is applied to develop a methodology for evaluating the performance of different analytical models used in predicting the fracture toughness of polymer/particle nanocomposites. The developed method considers model and parameter uncertainties based on different reference data (experimental measurements) gathered from the literature. Three analytical models differing in theory and assumptions were examined. The coefficients of variation of the model predictions relative to the measurements are calculated using the approximated optimal parameter sets. Then, the model selection probability is obtained with respect to the different reference data.
Stochastic finite element modeling is implemented to predict the fracture toughness of polymer/particle nanocomposites. For this purpose, a 2D finite element model containing an epoxy matrix and rigid nanoparticles surrounded by an interphase zone is generated. The crack propagation is simulated by the cohesive segments method and phantom nodes. Considering the uncertainties in the input parameters, a polynomial chaos expansion (PCE) surrogate model is constructed, followed by a sensitivity analysis.
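Variance-based sensitivity analysis of the kind used in this framework can be illustrated with a plain Monte Carlo estimator of first-order Sobol indices. The sketch below uses a Saltelli-style pick-and-freeze scheme on an invented linear stand-in for the toughness model; the thesis' actual phase-field and PCE models are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(1)

def sobol_first_order(f, k, n=20000):
    """First-order Sobol indices via the Saltelli pick-and-freeze estimator."""
    A = rng.uniform(size=(n, k))
    B = rng.uniform(size=(n, k))
    fA, fB = f(A), f(B)
    var = np.var(np.concatenate([fA, fB]))
    S = np.empty(k)
    for i in range(k):
        ABi = A.copy()
        ABi[:, i] = B[:, i]            # resample only input i
        S[i] = np.mean(fB * (f(ABi) - fA)) / var
    return S

# invented linear stand-in for a toughness surrogate: input 0 dominates
toughness = lambda X: 4.0 * X[:, 0] + 1.0 * X[:, 1] + 0.1 * X[:, 2]
S = sobol_first_order(toughness, 3)    # analytic values ~ [0.94, 0.059, 0.0006]
```

For this additive model the indices are known analytically (a_i^2/12 divided by the total variance), which makes the Monte Carlo estimate easy to check; a PCE surrogate would simply replace the lambda.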

Advances in nanotechnology lead to the development of nano-electro-mechanical systems (NEMS) such as nanomechanical resonators with ultra-high resonant frequencies. Ultra-high-frequency resonators have recently received significant attention for wide-ranging applications such as molecular separation, molecular transportation, ultra-high-sensitivity sensing, high-frequency signal processing, and biological imaging. It is well known that at the micrometer length scale the first-principles technique, the most accurate approach, poses serious limitations for comparison with experimental studies. For such larger sizes, classical molecular dynamics (MD) simulations, which require interatomic potentials, are desirable. Additionally, a mesoscale method such as the coarse-grained (CG) method is useful to support simulations of even larger system sizes.
Furthermore, quasi-two-dimensional (Q2D) materials have attracted intensive research interest due to their many novel properties over the past decades. However, the energy dissipation mechanisms of nanomechanical resonators based on several Q2D materials are still unknown. In this work, the main issues addressed include the development of CG models for molybdenum disulphide (MoS2), the investigation of energy dissipation mechanisms in black phosphorus (BP) nanoresonators and the application of graphene nanoresonators. The primary coverage and results of the dissertation are as follows:
Method development. Firstly, a two-dimensional (2D) CG model for single-layer MoS2 (SLMoS2) is analytically developed. The Stillinger-Weber (SW) potential for this 2D CG model is further parametrized, in which all SW geometrical parameters are determined analytically according to the equilibrium condition for each individual potential term, while the SW energy parameters are derived analytically based on the valence force field model. Next, the 2D CG model is further simplified to a one-dimensional (1D) CG model, which describes the 2D SLMoS2 structure using a 1D chain model. This 1D CG model is applied to investigate the relaxed configuration and the resonant oscillation of folded SLMoS2. Owing to the simplicity of the 1D CG model, the relaxed configuration of the folded SLMoS2 is determined analytically, and the resonant oscillation frequency is likewise derived analytically. Considering the increasing interest in studying the properties of other 2D layered materials, in particular those in the semiconducting transition metal dichalcogenide class like MoS2, the CG models proposed in the current work provide valuable simulation approaches.
Mechanism understanding. The focus is exclusively on two energy dissipation mechanisms of BP nanoresonators, i.e. mechanical strain effects and defect effects (including vacancies and oxidation). Vacancy defects are an intrinsic damping factor for the quality (Q)-factor, while mechanical strain and oxidation are extrinsic damping factors. Intrinsic dissipation (induced by thermal vibrations) in BP resonators (BPRs) is investigated first. Specifically, classical MD simulations are performed to examine the temperature dependence of the Q-factor of the single-layer BPR (SLBPR) along the armchair and zigzag directions, where a two-step fitting procedure is used to extract the frequency and Q-factor from the kinetic energy time history. The Q-factors of BPRs are evaluated through comparison with those of graphene and MoS2 nanoresonators. Next, the effects of mechanical strain, vacancies and oxidation on BP nanoresonators are investigated in turn. Considering the increasing interest in studying the properties of BP, and in particular the lack of theoretical studies of BPRs, the results in the current work provide a useful reference.
Application. A novel application for graphene nanoresonators, using them to self-assemble small nanostructures such as water chains, is proposed. All of the underlying physics enabling this phenomenon is elucidated. In particular, by drawing inspiration from macroscale self-assembly using the higher order resonant modes of Chladni plates, classical MD simulations are used to investigate the self-assembly of water molecules using
graphene nanoresonators. An analytic formula for the critical resonant frequency based on the interaction between water molecules and graphene is provided. Furthermore, the properties of the water chains assembled by the graphene nanoresonators are studied.
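The two-step fitting idea used above for the Q-factor, first extract the frequency, then the decay rate, can be sketched on a synthetic ring-down signal. All numbers below (time step, f0 = 50 Hz, Q = 80) are invented for illustration; in practice an MD kinetic-energy trace would replace the synthetic signal.

```python
import numpy as np

# synthetic ring-down standing in for an MD kinetic-energy trace (invented numbers)
dt, n = 1e-3, 20000
t = np.arange(n) * dt
f0, Q_true = 50.0, 80.0
gamma = np.pi * f0 / Q_true                  # decay rate: gamma = omega0 / (2 Q)
x = np.exp(-gamma * t) * np.cos(2 * np.pi * f0 * t)

# step 1: frequency from the FFT peak
spec = np.abs(np.fft.rfft(x))
freq = np.fft.rfftfreq(n, dt)
f_fit = freq[np.argmax(spec)]

# step 2: Q from a linear fit to the log of the decaying envelope (signal maxima)
is_peak = (x[1:-1] > x[:-2]) & (x[1:-1] > x[2:]) & (x[1:-1] > 1e-3)
tp, xp = t[1:-1][is_peak], x[1:-1][is_peak]
slope, _ = np.polyfit(tp, np.log(xp), 1)     # slope ~ -gamma
Q_fit = -np.pi * f_fit / slope
```

Separating the two steps keeps the envelope fit linear in log space, which is what makes the procedure robust against the thermal noise present in real MD data.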

The phenomenon of aerodynamic instability caused by the wind is usually a major design criterion for long-span cable-supported bridges. If the wind speed exceeds the critical flutter speed of the bridge, this constitutes an Ultimate Limit State. The prediction of the flutter boundary, therefore, requires accurate and robust models. The complexity and uncertainty of models for such engineering problems demand strategies for model assessment. This study is an attempt to use the concepts of sensitivity and uncertainty analyses to assess the aeroelastic instability prediction models for long-span bridges. The state-of-the-art theory concerning the determination of the flutter stability limit is presented. Since flutter is a coupling of aerodynamic forcing with a structural dynamics problem, different types and classes of structural and aerodynamic models can be combined to study the interaction. Here, both numerical approaches and analytical models are utilised and coupled in different ways to assess the prediction quality of the coupled model.

Biodiesel, the main alternative to diesel fuel, is produced from renewable and readily available resources and improves engine emissions during combustion in diesel engines. In this study, biodiesel is first produced from waste cooking oil (WCO). The fuel samples are applied in a diesel engine, and the engine performance is assessed from the viewpoints of exergy and energy. Engine tests are performed at a constant speed of 1500 rpm with various loads and fuel samples. The obtained experimental data are also used to develop an artificial neural network (ANN) model. Response surface methodology (RSM) is employed to optimize the exergy and energy efficiencies. Based on the results of the energy analysis, optimal engine performance is obtained at 80% of full load in the presence of B10 and B20 fuels. However, based on the exergy analysis results, optimal engine performance is obtained at 80% of full load in the presence of B90 and B100 fuels. The optimum values of the exergy and energy efficiencies lie in the range of 25–30% of full load, which matches the calculated range obtained from the mathematical modeling.

The main categories of wind effects on long-span bridge decks are buffeting, flutter, and vortex-induced vibrations (VIV), which are often critical for the safety and serviceability of the structure. With the rapid increase of bridge spans, controlling wind-induced vibrations of long-span bridges has become a problem of great concern. Developments in vibration control theory have led to the wide use of tuned mass dampers (TMDs), which have been proven, both analytically and experimentally, to be effective for suppressing these vibrations. Fire incidents are also of special interest for the stability and safety of long-span bridges due to the significant role of the complex triple interaction between the deck, the incoming wind flow and the thermal boundary layer of the surrounding air.
This work begins with analyzing the buffeting response and flutter instability of three-dimensional computational structural dynamics (CSD) models of a cable-stayed bridge under strong wind excitation using the ABAQUS commercial finite element software. Optimization and global sensitivity analysis are utilized to target the vertical and torsional vibrations of the segmental deck by considering three aerodynamic parameters (wind attack angle, deck streamlined length and viscous damping of the stay cables). The numerical simulation results, in conjunction with the frequency analysis results, confirmed the existence of these vibrations and showed that further theoretical studies are possible with a high level of accuracy. Model validation is performed by comparing the lift and moment coefficients of the created CSD models with two benchmarks from the literature, flat plate theory and the flat plate results of Xavier and co-authors, with very good agreement between them. Optimum values of the parameters have been identified. Global sensitivity analysis based on Monte Carlo sampling was utilized to formulate the surrogate models and calculate the sensitivity indices. The effect and role of each parameter on the aerodynamic stability of the structure were quantified, providing useful insight into the stability of the long-span bridge.
2D computational fluid dynamics (CFD) models of the decks are created with the support of MATLAB codes to simulate and analyze the vortex shedding and VIV of the deck. Three aerodynamic parameters (wind speed, deck streamlined length and dynamic viscosity of the air) are used to study their effects on the kinetic energy of the system and on the shapes and patterns of the vortices. Two benchmarks from the literature, Von Karman and Dyrbye and Hansen, are used to validate the numerical simulations of the vortex shedding for the CFD models, with good agreement between the results. The Latin hypercube experimental method is used to generate the surrogate models for the kinetic energy of the system and the generated lift forces. Variance-based sensitivity analysis is utilized to calculate the main sensitivity indices and the interaction orders for each parameter. The kinetic energy approach performed very well in revealing the effect and role of each parameter in the generation of vortex shedding and in predicting the early VIV and the critical wind speed.
Both one-way fluid-structure interaction (one-way FSI) simulations and two-way fluid-structure interaction (two-way FSI) co-simulations of the 2D deck models are executed to calculate the shedding frequencies for the associated wind speeds in the lock-in region, in addition to the lift and drag coefficients. Validation is performed against the results of Simiu and Scanlan and the flat plate theory results compiled by Munson and co-authors, respectively, with a high level of agreement in all cases. A decrease in the critical wind speed and the shedding frequencies was identified for the two-way FSI compared to the one-way FSI. The two-way FSI approach also predicted an appreciable decrease in the lift and drag forces, as well as earlier VIV at lower critical wind speeds and lock-in regions located at lower natural frequencies of the system. These conclusions help designers to plan efficiently for the design and safety of the long-span bridge before and after construction.
A multiple tuned mass damper (MTMD) system has been applied to the three-dimensional CSD models of the cable-stayed bridge to analyze its efficiency in suppressing both wind-induced vertical and torsional vibrations of the deck. Three design parameters of the TMDs (mass ratio, frequency ratio and damping ratio) are optimized based on actual field data and a minimax optimization technique, supported by MATLAB codes and the Fast Fourier Transform. The optimum values of each parameter were identified and validated against two benchmarks from the literature, first Wang and co-authors and then Lin and co-authors, with good agreement between the results. The Box-Behnken experimental method is used to formulate the surrogate models representing the control efficiency for the vertical and torsional vibrations. Sobol's sensitivity indices are calculated for the design parameters, in addition to their interaction orders. The optimization results revealed better performance of the MTMDs in controlling both the vertical and the torsional vibrations for higher mode shapes. Furthermore, the quantified effect of each design parameter helps increase the control efficiency of the MTMDs, while the surrogate models greatly simplify the analysis for vibration control.
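As a point of reference for TMD parameter optimization of this kind, the classical Den Hartog formulas give closed-form optimal frequency and damping ratios for a single TMD attached to an undamped main mode. This is only the textbook starting point, not the minimax optimization with field data described above; the 1% mass ratio below is an invented example value.

```python
import numpy as np

def den_hartog_tmd(mu):
    """Classic Den Hartog tuning for a single TMD on an undamped main mass.
    mu = absorber mass / modal mass of the targeted bridge mode."""
    f_opt = 1.0 / (1.0 + mu)                                 # optimal frequency ratio
    zeta_opt = np.sqrt(3.0 * mu / (8.0 * (1.0 + mu) ** 3))   # optimal damping ratio
    return f_opt, zeta_opt

# e.g. a 1% mass ratio, a common first guess before numerical optimization
f_opt, zeta_opt = den_hartog_tmd(0.01)
```

Numerical MTMD optimization typically starts from these values and then departs from them to account for structural damping, multiple absorbers and higher modes.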
A novel structural modification approach has been adopted to eliminate the early coupling between the bending and torsional mode shapes of the cable-stayed bridge. Two lateral steel beams are added to the middle span of the structure. Frequency analysis is used to obtain the natural frequencies of the first eight vibration mode shapes before and after the structural modification. Numerical simulations of wind excitation are conducted for the 3D model of the cable-stayed bridge. Both vertical and torsional displacements are calculated at the mid-span of the deck to analyze the bending and torsional stiffness of the system before and after the structural modification. The frequency analysis results after adding the lateral steel beams showed that the coupling between the vertical and torsional vibration mode shapes was shifted to larger natural frequencies and higher, rarely occurring critical wind speeds, giving a high factor of safety.
Finally, thermal fluid-structure interaction (TFSI) and coupled thermal-stress analyses are utilized to identify the effects of transient and steady-state heat transfer on the VIV and fatigue of the deck due to fire incidents. Numerical simulations of TFSI models of the deck are used to calculate the lift and drag forces and to determine the lock-in regions, once using FSI models and once using TFSI models. Vorticity and thermal fields for three fire scenarios are simulated and analyzed. The benchmark of Simiu and Scanlan is used to validate the TFSI models, with good agreement between the two sets of results. The extended finite element method (XFEM) is adopted to create 3D models of the cable-stayed bridge and simulate the fatigue of the deck for the three fire scenarios. The benchmark of Choi and Shin is used to validate the damaged models of the deck, again with good agreement. The results revealed that the TFSI models and the coupled thermal-stress models are significant in detecting earlier vortex-induced vibrations and lock-in regions, in addition to predicting damage and fatigue of the deck and identifying the role of wind-induced vibrations in speeding up damage generation and the collapse of the structure in critical situations.

Phase Field Modeling for Fracture with Applications to Homogeneous and Heterogeneous Materials
(2017)

The thesis presents an implementation, including different applications, of a variational-based approach for gradient-type standard dissipative solids. The phase field model for brittle fracture is an application of this variational-based framework for gradient-type solids. The model allows the prediction of different crack topologies and states. Of particular concern is the implementation of the theoretical and numerical formulation of phase field modeling in the commercial finite element software Abaqus in 2D and 3D. The fully coupled incremental variational formulation of the phase field method is implemented using the UEL and UMAT subroutines of Abaqus. The phase field method considerably reduces the implementation complexity of fracture problems, as it removes the need for numerical tracking of the discontinuities in the displacement field that are characteristic of discrete crack methods. This is accomplished by replacing the sharp discontinuities with a scalar damage phase field representing the diffuse crack topology, wherein the amount of diffusion is controlled by a regularization parameter. The nonlinear coupled system, consisting of the linear momentum equation and a diffusion-type equation governing the phase field evolution, is solved simultaneously via a Newton-Raphson approach. Post-processing of the simulation results for visualization is performed via an additional UMAT subroutine, so that the results can be displayed in the standard Abaqus viewer.
In the same context, we propose a simple yet effective algorithm to initiate and propagate cracks in 2D geometries which is independent of both particular constitutive laws and specific element technology and dimension. It consists of a localization limiter in the form of the screened Poisson equation with, optionally, local mesh refinement. A staggered scheme for the standard equilibrium and screened Cauchy equations is used. The remeshing part of the algorithm consists of a sequence of mesh subdivision and element erosion steps. Element subdivision is based on edge split operations using a given constitutive quantity (either damage or void fraction). Mesh smoothing makes use of edge contraction as a function of a given constitutive quantity such as the principal stress or void fraction. To assess the robustness and accuracy of this algorithm, we use both quasi-brittle benchmarks and ductile tests.
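The localization limiter can be illustrated in one dimension: solving the screened Poisson equation (1 - l^2 d^2/dx^2) d = d_local spreads a spiky local field over a width set by the length scale l. The sketch below is a minimal finite difference version with invented grid and length-scale values, not the paper's 2D implementation.

```python
import numpy as np

def screened_poisson_1d(d_local, h, ell):
    """Solve (1 - ell^2 d2/dx2) d = d_local on a 1D grid with zero-flux ends.
    Acts as a localization limiter: spreads a spiky field over a width ~ ell."""
    n = d_local.size
    c = (ell / h) ** 2
    A = (np.diag(np.full(n, 1.0 + 2.0 * c))
         - np.diag(np.full(n - 1, c), 1)
         - np.diag(np.full(n - 1, c), -1))
    A[0, 0] = A[-1, -1] = 1.0 + c      # Neumann (zero-flux) boundary rows
    return np.linalg.solve(A, d_local)

# invented grid and length scale; a single "damaged" node gets regularized
n, h = 101, 0.01
d_local = np.zeros(n)
d_local[50] = 1.0
d = screened_poisson_1d(d_local, h, ell=0.05)   # smooth bump centered at node 50
```

Note that the zero-flux boundary makes the operator conservative, so the smoothed field preserves the total of the local field while bounding its peak, which is exactly the mesh-independence property a localization limiter is meant to deliver.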
Furthermore, we introduce a computational approach regarding mechanical loading at the microscale of an inelastically deforming composite material. The nanocomposite material of fully exfoliated clay/epoxy is modeled to predict macroscopic elastic and fracture-related material parameters based on its fine-scale features. Two different configurations of polymer nanocomposite material (PNCs) have been studied: fully bonded PNCs and PNCs with an interphase zone formed between the matrix and the clay reinforcement. Representative volume elements of PNC specimens with different clay weight contents, different aspect ratios, and different interphase zone thicknesses are generated by means of Python scripting. Different constitutive models are employed for the matrix, the clay platelets, and the interphase zones. The brittle fracture behavior of the epoxy matrix and the interphase zone material is modeled using the phase field approach, whereas the stiff silicate clay platelets of the composite are treated as a linear elastic material. The comprehensive study investigates the elastic and fracture behavior of PNC composites and predicts Young's modulus, tensile strength, fracture toughness, surface energy dissipation, and crack surface area in the composite for different material parameters, geometries, and interphase zone properties and thicknesses.

A coupled thermo-hydro-mechanical model of jointed hard rock for compressed air energy storage
(2014)

Renewable energy resources such as wind and solar are intermittent, which causes instability when they are connected to the electric utility grid. Compressed air energy storage (CAES) provides an economically and technically viable solution to this problem by utilizing subsurface rock caverns to store the electricity generated from renewable energy in the form of compressed air. Though CAES has been used for over three decades, it has been restricted to salt rock or aquifers for reasons of air tightness. In this paper, the technical feasibility of utilizing hard rock for CAES is investigated using coupled thermo-hydro-mechanical (THM) modelling of non-isothermal gas flow. Governing equations are derived from the rules of energy balance, mass balance, and static equilibrium. Cyclic volumetric mass source and heat source models are applied to simulate the gas injection and production. The evaluation is carried out for intact rock and for rock with a discrete crack, respectively. In both cases, the heat and pressure losses using air mass control and supplementary air injection are compared.

Tensile and compressive strain can greatly affect the thermal conductivity of graphene nanoribbons (GNRs). However, the effect of shear strain on GNRs, which is also one of the main strain effects, has not yet been studied systematically. In this work, we employ reverse non-equilibrium molecular dynamics (RNEMD) to systematically study the thermal conductivity of GNRs (with a model size of 4 nm × 15 nm) under shear strain. Our studies show that the thermal conductivity of GNRs is not sensitive to shear strain, decreasing by only 12–16% before the pristine structure breaks. Furthermore, the phonon frequencies and the changes in the microstructure of the GNRs, such as bond angle and bond length, are analyzed to explain this trend in the thermal conductivity. The results show that the main influence of shear strain is on the in-plane phonon density of states (PDOS), whose G band (higher-frequency peaks) moves to lower frequencies, thereby decreasing the thermal conductivity. The unique thermal properties of GNRs under shear strain suggest their great potential for graphene nanodevices and for thermal management and thermoelectric applications.
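The RNEMD post-processing step reduces to Fourier's law: the heat flux is imposed, the temperature gradient is fitted from the binned profile, and kappa = -J / (dT/dx). The numbers in the sketch below are invented placeholders, not values from the simulations.

```python
import numpy as np

# RNEMD post-processing: the flux J is imposed by the velocity-swap scheme and
# the gradient is fitted from the binned temperature profile (numbers invented)
x = np.linspace(0.0, 15e-9, 30)    # bin centers along a 15 nm ribbon [m]
T = 310.0 - 2.0e9 * x              # synthetic linear profile, dT/dx = -2e9 K/m
J = 2.0e11                         # imposed heat flux density [W/m^2]
dTdx = np.polyfit(x, T, 1)[0]      # linear fit of the profile
kappa = -J / dTdx                  # Fourier's law -> 100 W/(m K)
```

In a real run the profile is noisy and the nonlinear regions next to the heat source and sink are excluded from the fit.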

This paper presents a novel numerical procedure based on the combination of an edge-based smoothed finite element method (ES-FEM) with a phantom-node method for 2D linear elastic fracture mechanics. In the standard phantom-node method, cracks are formulated by adding phantom nodes, and the cracked element is replaced by two new superimposed elements. This approach is quite simple to implement in existing explicit finite element programs. The shape functions associated with discontinuous elements are similar to those of standard finite elements, which simplifies implementation in existing codes. The phantom-node method allows modeling discontinuities at arbitrary locations in the mesh. The ES-FEM model has a close-to-exact stiffness that is much softer than that of lower-order finite element methods (FEM). Taking advantage of both the ES-FEM and the phantom-node method, we introduce an edge-based strain smoothing technique for the phantom-node method. Numerical results show that the proposed method achieves high accuracy compared with the extended finite element method (XFEM) and other reference solutions.

A simple multiscale analysis framework for heterogeneous solids based on a computational homogenization technique is presented. The macroscopic strain is linked kinematically to the boundary displacement of a circular or spherical representative volume which contains the microscopic information of the material. The macroscopic stress is obtained from the energy principle between the macroscopic and microscopic scales. The new method is applied to several standard examples to demonstrate the accuracy and consistency of the proposed approach.

We conducted extensive molecular dynamics simulations to investigate the thermal conductivity of polycrystalline hexagonal boron-nitride (h-BN) films. To this aim, we constructed large atomistic models of polycrystalline h-BN sheets with random and uniform grain configuration. By performing equilibrium molecular dynamics (EMD) simulations, we investigated the influence of the average grain size on the thermal conductivity of polycrystalline h-BN films at various temperatures. Using the EMD results, we constructed finite element models of polycrystalline h-BN sheets to probe the thermal conductivity of samples with larger grain sizes. Our multiscale investigations not only provide a general viewpoint regarding the heat conduction in h-BN films but also propose that polycrystalline h-BN sheets present high thermal conductivity comparable to monocrystalline sheets.

We investigate the thermal conductivity of armchair and zigzag MoS2 nanoribbons by combining the non-equilibrium Green's function approach with the first-principles method. A strong orientation dependence is observed in the thermal conductivity. In particular, at room temperature the thermal conductivity is about 673.6 Wm−1 K−1 in the armchair nanoribbon and 841.1 Wm−1 K−1 in the zigzag nanoribbon. By calculating the Caroli transmission, we disclose the underlying mechanism of this strong orientation dependence to be the fewer phonon transport channels of the armchair MoS2 nanoribbon in the frequency range of [150, 200] cm−1. By scaling the phonon dispersion, we further illustrate that the thermal conductivity calculated for the MoS2 nanoribbon is essentially consistent with the superior thermal conductivity found for graphene.
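
For context, the ballistic phonon thermal conductance in the Green's-function picture follows from the transmission via a standard Landauer-type expression with the Caroli formula (generic notation; the paper's exact conventions may differ):

```latex
\kappa_{\mathrm{ph}}
  = \frac{1}{2\pi}\int_{0}^{\infty}
    \hbar\omega\,\mathcal{T}(\omega)\,
    \frac{\partial f_{\mathrm{BE}}(\omega,T)}{\partial T}\,\mathrm{d}\omega,
\qquad
\mathcal{T}(\omega)
  = \mathrm{Tr}\!\left[\Gamma_{L}(\omega)\,G^{r}(\omega)\,\Gamma_{R}(\omega)\,G^{a}(\omega)\right],
```

where $f_{\mathrm{BE}}$ is the Bose-Einstein distribution, $G^{r/a}$ are the retarded/advanced Green's functions of the central region and $\Gamma_{L/R}$ the lead broadening matrices. Fewer transport channels in a given frequency window directly reduce $\mathcal{T}(\omega)$ and hence $\kappa_{\mathrm{ph}}$.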

In this study, an application of evolutionary multi-objective optimization algorithms to the optimization of sandwich structures is presented. The solution strategy is known as the Elitist Non-Dominated Sorting Evolution Strategy (ENSES), wherein Evolution Strategies (ES) serve as the Evolutionary Algorithm (EA) within the elitist Non-dominated Sorting Genetic Algorithm (NSGA-II) procedure. Evolutionary algorithms are a suitable approach for multi-objective optimization problems because they are inspired by natural evolution, which is closely linked to Artificial Intelligence (AI) techniques, and elitism has proven to be an important factor in improving evolutionary multi-objective search. In order to evaluate the performance of ENSES, the well-known case studies of sandwich structures are reconsidered. For Case 1, the goals of the multi-objective optimization are the minimization of the deflection and the weight of the sandwich structure. The length and the core and skin thicknesses are the design variables of Case 1. For Case 2, the objective functions are the fabrication cost, the beam weight and the end deflection of the sandwich structure. There are four design variables in Case 2: the weld height, the weld length, the beam depth and the beam width. Numerical results are presented in terms of Pareto-optimal solutions for both cases.
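
The core of any elitist non-dominated sorting scheme such as NSGA-II/ENSES is the Pareto dominance relation. A minimal sketch (for minimization; the design values are hypothetical, not results from the study):

```python
# Pareto dominance and non-dominated filtering, the building block of
# non-dominated sorting in NSGA-II-type algorithms (minimization).

def dominates(a, b):
    """a dominates b if it is no worse in every objective and better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Return the non-dominated subset of a list of objective vectors."""
    return [p for p in points if not any(dominates(q, p) for q in points if q != p)]

# Hypothetical (deflection, weight) pairs of candidate sandwich designs:
designs = [(1.0, 5.0), (2.0, 3.0), (3.0, 4.0), (1.5, 4.5)]
front = pareto_front(designs)   # (3.0, 4.0) is dominated by (2.0, 3.0)
```

The full algorithm repeatedly peels off such fronts to rank the population, then uses elitism to carry the best fronts into the next generation.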

The point collocation method of finite spheres (PCMFS) is used to model the hyperelastic response of soft biological tissue in real time within the framework of virtual surgery simulation. The proper orthogonal decomposition (POD) model order reduction (MOR) technique was used to obtain a reduced-order model of the problem, minimizing computational cost. The PCMFS is a physics-based meshfree numerical technique for the real-time simulation of surgical procedures in which the approximation functions are applied directly to the strong form of the boundary value problem without the need for integration, increasing computational efficiency. Since computational speed plays a significant role in the simulation of surgical procedures, the proposed technique makes it possible to model the realistic nonlinear behavior of organs in real time. Numerical results demonstrate the effectiveness of the new methodology through a comparison between full and reduced analyses for several nonlinear problems. The proposed technique achieves good agreement with the full model while significantly reducing computational and data storage costs.
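
The POD step can be sketched in a few lines: snapshots of full-order solutions are compressed via an SVD, and the dominant left singular vectors form the reduced basis. This is a generic illustration with random placeholder data, not the paper's tissue model.

```python
# POD-based model order reduction sketch: build a reduced basis from
# solution snapshots and project a full-order state onto it.
import numpy as np

rng = np.random.default_rng(0)
n_dof, n_snap, r = 200, 30, 5
snapshots = rng.standard_normal((n_dof, n_snap))   # columns = full-order solutions

U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
basis = U[:, :r]                  # orthonormal reduced basis (n_dof x r)

u_full = snapshots[:, 0]
u_reduced = basis.T @ u_full      # r generalized coordinates
u_approx = basis @ u_reduced      # lift back to the full space
```

In an online phase only the r generalized coordinates are evolved, which is what makes real-time rates attainable.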

The node-moving and multistage node-enrichment adaptive refinement procedures are extended to the mixed discrete least squares meshless (MDLSM) method for the efficient analysis of elasticity problems. In the MDLSM formulation, a mixed formulation is adopted to avoid second-order differentiation of the shape functions and to obtain displacements and stresses simultaneously. The refinement procedures use a robust error estimator based on the value of the least-squares residual functional of the governing differential equations and their boundary conditions at the nodal points; this estimator is inherently available from the MDLSM formulation and can efficiently identify the zones with higher numerical errors. The results are compared with those of the refinement procedures in the irreducible formulation of the discrete least squares meshless (DLSM) method and show the accuracy and efficiency of the proposed procedures. The comparison of the error norms and convergence rates also confirms the fidelity of the proposed adaptive refinement procedures in the MDLSM method.

Paraffin Nanocomposites for Heat Management of Lithium-Ion Batteries: A Computational Investigation
(2016)

Lithium-ion (Li-ion) batteries are currently considered vital components for advances in mobile technologies such as communications and transport. Nonetheless, Li-ion batteries suffer from temperature rises which sometimes lead to operational damage or may even cause fire. An appropriate solution to control the temperature changes during the operation of Li-ion batteries is to embed the batteries inside a paraffin matrix to absorb and dissipate heat. In the present work, we aimed to investigate the possibility of making paraffin nanocomposites for better heat management of a Li-ion battery pack. To this end, heat generation during battery charging/discharging cycles was simulated using Newman's well-established electrochemical pseudo-2D model. We coupled this model to a 3D heat transfer model to predict the temperature evolution during battery operation. In the latter model, we considered different paraffin nanocomposite structures made by the addition of graphene, carbon nanotubes, and fullerene, assuming the same thermal conductivity for all fillers. In this way, our results correlate mainly with the geometry of the fillers. Our results assess the degree of enhancement in the heat dissipation of Li-ion batteries achieved through the use of paraffin nanocomposites and may serve as a guide for experimental set-ups to improve the heat management of Li-ion batteries.

The current study attempts to establish an adequate classification for semi-rigid beam-to-column connections by investigating strength, stiffness and ductility. For this purpose, an experimental test was carried out to investigate the moment-rotation (M-theta) characteristics of flush end-plate (FEP) connections, including variable parameters such as the size and number of bolts, the thickness of the end-plate and the size of the beams and columns. The initial elastic stiffness and ultimate moment capacity of the connections were determined by an extensive analytical procedure following the methods prescribed by the ANSI/AISC 360-10 and Eurocode 3 Part 1-8 specifications. The behaviour of beams with partially restrained or semi-rigid connections was also studied using classical analysis methods. The results confirmed that the thickness of the column flange and the end-plate substantially governs the initial rotational stiffness of flush end-plate connections. The results also clearly showed that EC3 provides a more reliable classification index for flush end-plate (FEP) connections. The findings of this study make significant contributions to the current literature, as the actual response characteristics of such connections are non-linear; therefore, such semi-rigid behaviour should be taken into account in analysis and design.

The extended finite element method (XFEM) offers an elegant tool to model material discontinuities and cracks within a regular mesh, so that the element edges do not necessarily coincide with the discontinuities. This allows the modeling of propagating cracks without the need to adapt the mesh incrementally. Using a regular mesh offers the advantage that simple refinement strategies based on the quadtree data structure can be used to refine the mesh in regions that require a high mesh density. An additional benefit of the XFEM is that the transmission of cohesive forces through a crack can be modeled in a straightforward way without introducing additional interface elements. Finally, different criteria for the determination of the crack propagation angle are investigated and applied to numerical tests of cracked concrete specimens, which are compared with experimental results.

Major problems in applying selective sensitivity to system identification are the requirement of precise knowledge about the system parameters and the realization of the required system of forces. This work presents a procedure capable of deriving selectively sensitive excitation through iterative experiments. The first step is to determine the selectively sensitive displacement and force patterns. These values are obtained by introducing prior information on the system parameters into an optimization that minimizes the sensitivities of the structural response with respect to the unselected parameters while keeping the sensitivities with respect to the selected parameters constant. In a second step, the force pattern is used to derive dynamic loads on the tested structure and measurements are carried out. An automatic control ensures the required excitation forces. In a third step, the measured outputs are employed to update the prior information. The strategy is to minimize the difference between a predicted displacement response, formulated as a function of the unknown parameters and the measured displacements, and the selectively sensitive displacement calculated in the first step. With the updated parameter values, a re-analysis of the selective sensitivity is performed and the experiment is repeated until the displacement responses of the model and the actual structure conform. As an illustration, a simply supported steel beam under harmonic excitation is investigated, demonstrating that the adaptive excitation can be obtained efficiently.

The Element-free Galerkin Method has become a very popular tool for the simulation of mechanical problems with moving boundaries. The internally applied Moving Least Squares approximation generally uses Gaussian or cubic weighting functions and has compact support. Due to the approximative character of this method, the obtained shape functions do not fulfill the interpolation condition, which causes additional numerical effort for the imposition of the essential boundary conditions. The application of a singular weighting function, which leads to singular coefficient matrices at the nodes, can solve this problem, but requires a very careful placement of the integration points. Special procedures for the handling of such singular matrices have been proposed in the literature, which require additional numerical effort. In this paper a non-singular weighting function is presented which leads to an exact fulfillment of the interpolation condition. This weighting function yields regular values of the weights and the coefficient matrices in the whole interpolation domain, even at the nodes. Furthermore, this function gives much more stable results for varying sizes of the influence radius and for strongly distorted nodal arrangements than classical weighting function types. Nevertheless, for practical applications the results are similar to those obtained with the regularized weighting type presented by the authors in previous publications. Finally, a new concept is presented which enables an efficient analysis of systems with strongly varying node density. In this concept, the nodal influence domains are adapted to the nodal configuration by interpolating the influence radius for each direction from the distances to the natural neighbor nodes. This approach requires a Voronoi diagram of the domain, which is available in this study since Delaunay triangles are used as integration background cells.
The numerical examples show that this method leads to a more uniform and reduced number of influencing nodes for systems with varying node density than the classical circular influence domains. The small additional numerical effort for interpolating the influence radius thus leads to a remarkable reduction of the total numerical cost in a linear analysis while yielding similar results. For nonlinear calculations this advantage would be even more significant.
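
A minimal 1D Moving Least Squares sketch illustrates the interpolation issue discussed above: with a regular (here truncated Gaussian) weight, the shape functions form a partition of unity and reproduce linear fields, but are not interpolatory at the nodes. This is a generic textbook construction, not the authors' non-singular weighting function.

```python
# Generic 1D MLS with a compact-support Gaussian weight and linear basis.
import numpy as np

def gauss_weight(r):
    """Truncated Gaussian weight on normalized distance r in [0, 1]."""
    return np.where(r < 1.0, np.exp(-(r / 0.4) ** 2), 0.0)

def mls_shape_functions(x, nodes, radius):
    """Linear-basis MLS shape functions evaluated at point x."""
    r = np.abs(x - nodes) / radius
    w = gauss_weight(r)
    P = np.vstack([np.ones_like(nodes), nodes]).T    # linear basis at nodes
    A = P.T @ (w[:, None] * P)                        # moment matrix
    p = np.array([1.0, x])
    return p @ np.linalg.solve(A, (w[:, None] * P).T)

nodes = np.linspace(0.0, 1.0, 6)
phi = mls_shape_functions(0.37, nodes, radius=0.5)
# consistency: partition of unity and linear reproduction hold,
# even though phi is not interpolatory at the nodes
assert np.isclose(phi.sum(), 1.0)
```

A singular weight (diverging as r goes to 0) would force phi to become interpolatory at the nodes, at the price of the conditioning issues described above.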

The modeling of crack propagation in plain and reinforced concrete structures remains an active field of research. If a macroscopic description of the cohesive cracking process of concrete is applied, the Fictitious Crack Model is generally utilized, where force transmission over micro cracks is assumed. In most applications of this concept, the cohesive model represents the relation between the normal crack opening and the normal stress, mostly defined as an exponential softening function, independently of the shear stresses in the tangential direction. The cohesive forces are then calculated only from the normal stresses. Carol et al. (1997) developed an improved model using a coupled relation between normal and shear damage based on an elasto-plastic constitutive formulation. This model is based on a hyperbolic yield surface depending on the normal and shear stresses and on the tensile and shear strengths; it also represents the effect of shear-traction-induced crack opening. Due to the elasto-plastic formulation, where the inelastic crack opening is represented by plastic strains, the model is limited to monotonic loading. In order to enable applications with unloading and reloading, the existing model is extended in this study using a combined plastic-damage formulation, which enables the modeling of crack opening and crack closure. Furthermore, the corresponding algorithmic implementation using a return-mapping approach is presented and the model is verified by means of several numerical examples. Finally, an investigation concerning the identification of the model parameters by means of neural networks is presented. In this analysis, an inverse approximation of the model parameters is performed using a given set of points of the load-displacement curves as input values and the model parameters as output terms.
It is shown that the elasto-plastic model parameters can be identified well with this approach, although it requires a large number of simulations.

In engineering science, the modeling and numerical analysis of complex systems and relations plays an important role. In order to carry out such an investigation, for example a stochastic analysis, in reasonable computational time, approximation procedures have been developed. A well-known approach is the response surface method, where the relation between input and output quantities is represented, for example, by global polynomials or local interpolation schemes such as Moving Least Squares (MLS). In recent years, artificial neural networks (ANN) have been applied for such purposes as well. Recently, an adaptive response surface approach for reliability analyses was proposed which is very efficient with respect to the number of expensive limit state function evaluations. Due to the applied simplex interpolation, however, that procedure is limited to small dimensions. In this paper the approach is extended to larger dimensions using combined ANN and MLS response surfaces for evaluating the adaptation criterion with only one set of joined limit state points. As adaptation criterion, a combination of the maximum difference in the conditional probabilities of failure and the maximum difference in the approximated radii is applied. Compared to response surfaces on directional samples or to plain directional sampling, the failure probability can be estimated with a much smaller number of limit state points.

The design and application of high performance materials demands extensive knowledge of the materials damage behavior, which significantly depends on the meso- and microstructural complexity. Numerical simulations of crack growth on multiple length scales are promising tools to understand the damage phenomena in complex materials. In polycrystalline materials it has been observed that the grain boundary decohesion is one important mechanism that leads to micro crack initiation. Following this observation the paper presents a polycrystal mesoscale model consisting of grains with orthotropic material behavior and cohesive interfaces along grain boundaries, which is able to reproduce the crack initiation and propagation along grain boundaries in polycrystalline materials. With respect to the importance of modeling the geometry of the grain structure an advanced Voronoi algorithm is proposed to generate realistic polycrystalline material structures based on measured grain size distribution. The polycrystal model is applied to investigate the crack initiation and propagation in statically loaded representative volume elements of aluminum on the mesoscale without the necessity of initial damage definition. Future research work is planned to include the mesoscale model into a multiscale model for the damage analysis in polycrystalline materials.

Advanced finite elements are proposed for the mechanical analysis of heterogeneous materials. The approximation quality of these finite elements can be controlled by a variable order of B-spline shape functions. An element-based formulation is developed such that the finite element problem can be solved iteratively without storing a global stiffness matrix. This memory saving allows for a substantial increase in problem size. The heterogeneous material is modelled by projection onto a uniform, orthogonal grid of elements. Conventional, strictly grid-based finite element models show severe oscillating defects in the stress solutions at material interfaces. This problem is cured by the extension to multiphase finite elements. This concept makes it possible to define a heterogeneous material distribution within the finite element, using a variable number of integration points to each of which individual material properties can be assigned. Based on an interpolation of the material properties at the nodes and a further smooth interpolation within the finite elements, a continuous material function is established. With both a continuous B-spline shape function and a continuous material function, the stress solution is also continuous in the domain. The inaccuracy implied by the continuous material field is far less severe than the previous oscillating behaviour of the stresses. One- and two-dimensional numerical examples are presented.

The present paper is part of a comprehensive approach to grid-based modelling. This approach includes geometrical modelling by pixel or voxel models, advanced multiphase B-spline finite elements of variable order and fast iterative solver methods based on the multigrid method. So far, we have presented these grid-based methods only in connection with the linear elastic analysis of heterogeneous materials. Damage simulation demands further considerations. The direct stress solution of standard bilinear finite elements is severely defective, especially along material interfaces. Besides achieving objective constitutive modelling, various nonlocal formulations are applied to improve the stress solution. Such corrective data processing can refer either to input data in terms of Young's modulus or to the attained finite element stress solution, as well as to a combination of both. A damage-controlled sequentially linear analysis is applied in connection with an isotropic damage law. Essentially, through a high resolution of the heterogeneous solid, local isotropic damage on the material subscale allows the simulation of complex damage topologies such as cracks. Therefore, anisotropic degradation of a material sample can be simulated. Based on an effectively secant global stiffness, the analysis is numerically stable. The iteration step size is controlled for an adequate simulation of the damage path. This requires many steps, but in the iterative solution process each new step starts from the solution of the prior step, which makes the method quite effective. The present paper provides an introduction to the proposed concept for a stable simulation of damage in heterogeneous solids.

A fast solver called the multigrid-preconditioned conjugate gradient method is proposed for the mechanical analysis of heterogeneous materials on the mesoscale. Even small samples of a heterogeneous material such as concrete show a complex geometry of different phases. These materials can be modelled by projection onto a uniform, orthogonal grid of elements. A major problem is that the possible resolution of the concrete specimen is generally restricted due to (a) computation times and, even more critically, (b) memory demand. Iterative solvers can be based on a local element-based formulation, while orthogonal grids consist of geometrically identical elements. The element-based formulation is short and transparent, and therefore efficient to implement. A variation of the material properties in elements or integration points is possible. The multigrid method is a fast iterative solver in which, ideally, the computational effort increases only linearly with problem size. This is an optimal property which is almost reached in the implementation presented here; in fact, no other method is known that scales better than linearly. The multigrid method therefore gains in importance the larger the problem becomes. However, for heterogeneous models with very large ratios of Young's moduli, the multigrid method slows down considerably by a constant factor. Such large ratios occur in certain heterogeneous solids, as well as in the damage analysis of solids. As a solution to this problem, the multigrid-preconditioned conjugate gradient method is proposed. A benchmark highlights it as the method of choice for very large ratios of Young's modulus. A proposed modified multigrid cycle shows good results, both as a stand-alone solver and as a preconditioner.
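
The preconditioned conjugate gradient skeleton can be sketched with a pluggable preconditioner. In the method described above, `M_inv` would be one multigrid V-cycle; here a Jacobi (diagonal) stand-in keeps the sketch short, and the 1D Laplacian test matrix is a placeholder, not a mesoscale concrete model.

```python
# Preconditioned conjugate gradient with a pluggable preconditioner M_inv.
import numpy as np

def pcg(A, b, M_inv, tol=1e-10, max_iter=500):
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv(r)            # apply preconditioner (e.g. one multigrid cycle)
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

# SPD test matrix (1D Laplacian); Jacobi stand-in for a V-cycle:
n = 50
A = (np.diag(2.0 * np.ones(n))
     + np.diag(-1.0 * np.ones(n - 1), 1)
     + np.diag(-1.0 * np.ones(n - 1), -1))
b = np.ones(n)
x = pcg(A, b, M_inv=lambda r: r / np.diag(A))
```

The point made in the abstract is that when the preconditioner is a multigrid cycle, CG restores robust convergence rates even for extreme stiffness ratios where plain multigrid stalls.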

In this paper an adaptive heterogeneous multiscale model, which couples two substructures with different length scales into one numerical model, is introduced for the simulation of damage in concrete. In the presented approach, the initiation, propagation and coalescence of microcracks is simulated using a mesoscale model, which explicitly represents the heterogeneous material structure of concrete. The mesoscale model is restricted to the damaged parts of the structure, whereas the undamaged regions are simulated on the macroscale. As a result, an adaptive enlargement of the mesoscale model during the simulation is necessary. In the first part of the paper, the generation of the heterogeneous mesoscopic structure of concrete, the finite element discretization of the mesoscale model, the applied isotropic damage model and the cohesive zone model are briefly introduced. Furthermore, the mesoscale simulation of a uniaxial tension test of a concrete prism is presented and the obtained numerical results are compared to experimental results. The second part is focused on the adaptive heterogeneous multiscale approach. Indicators for the model adaptation and for the coupling between the different numerical models are introduced. The transfer from the macroscale to the mesoscale and the adaptive enlargement of the mesoscale substructure are presented in detail. A nonlinear simulation of a realistic structure using an adaptive heterogeneous multiscale model is presented at the end of the paper to show the applicability of the proposed approach to large-scale structures.

PARAMETER IDENTIFICATION OF MESOSCALE MODELS FROM MACROSCOPIC TESTS USING BAYESIAN NEURAL NETWORKS
(2010)

In this paper, a parameter identification procedure using Bayesian neural networks is proposed. Based on a training set of numerical simulations, where the material parameters are sampled in a predefined range using Latin Hypercube sampling, a Bayesian neural network, which has been extended to describe the noise of multiple outputs using a full covariance matrix, is trained to approximate the inverse relation from the experiment (displacements, forces etc.) to the material parameters. The method offers not only the possibility to determine the parameters themselves, but also the accuracy of the estimates and the correlation between the parameters. As a result, a set of experiments can be designed to calibrate a numerical model.
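
The Latin Hypercube sampling step used to build the training set can be sketched directly: each parameter range is split into n strata, each stratum is sampled exactly once, and the strata are randomly paired across dimensions. The parameter bounds below are hypothetical placeholders.

```python
# Latin Hypercube sampling over a box of parameter ranges.
import numpy as np

def latin_hypercube(n_samples, bounds, rng):
    """bounds: list of (low, high) tuples, one per parameter."""
    d = len(bounds)
    # one permutation of the strata 0..n-1 per dimension, plus a random
    # offset inside each stratum, normalized to [0, 1)
    strata = rng.permuted(np.tile(np.arange(n_samples), (d, 1)), axis=1).T
    u = (strata + rng.random((n_samples, d))) / n_samples
    lo = np.array([b[0] for b in bounds])
    hi = np.array([b[1] for b in bounds])
    return lo + u * (hi - lo)

rng = np.random.default_rng(1)
# two hypothetical material parameters, e.g. a strength and a modulus range:
samples = latin_hypercube(20, [(0.0, 1.0), (10.0, 50.0)], rng)
```

Each simulation run then evaluates the forward model at one row of `samples`, giving the (parameters, response) pairs on which the surrogate is trained.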

The numerical simulation of microstructure models in 3D requires, due to the enormous number of degrees of freedom, significant memory resources as well as parallel computational power. Compared to homogeneous materials, the material heterogeneity on the microscale induced by the different material phases demands adequate computational methods for the discretization and solution process of the resulting highly nonlinear problem. To enable an efficient and scalable solution process for the linearized equation systems, the heterogeneous FE problem is described by a FETI-DP (Finite Element Tearing and Interconnecting - Dual Primal) discretization. The fundamental FETI-DP equation can be solved by a number of different approaches. In our approach, the FETI-DP problem is reformulated as a saddle point system by eliminating the primal and Lagrangian variables. For the reduced saddle point system, defined only by the interior and dual variables, special Uzawa algorithms can be adapted for iteratively solving the FETI-DP saddle-point equation system (FETI-DP SPE). A conjugate gradient version of the Uzawa algorithm is shown, together with numerical tests on the FETI-DP discretization of small examples using the presented solution technique. Furthermore, the inverse of the interior-dual Schur complement operator can be approximated using different techniques to build an adequate preconditioning matrix, thereby leading to substantial gains in computing time.

A four-node quadrilateral shell element with smoothed membrane-bending based on Mindlin-Reissner theory is proposed. The element is a combination of a plate bending and membrane element. It is based on mixed interpolation where the bending and membrane stiffness matrices are calculated on the boundaries of the smoothing cells while the shear terms are approximated by independent interpolation functions in natural coordinates. The proposed element is robust, computationally inexpensive and free of locking. Since the integration is done on the element boundaries for the bending and membrane terms, the element is more accurate than the MITC4 element for distorted meshes. This will be demonstrated for several numerical examples.

The thesis investigates the computer-aided simulation process for the operational vibration analysis of complex coupled systems. As part of the internal methods project "Absolute Values" of the BMW Group, the thesis deals with the analysis of structural dynamic interactions and excitation interactions. The overarching aim of the methods project is to predict the operational vibrations of engines.
In industrial development, simulations are usually used to analyze technical aspects (e.g. operational vibrations, strength, ...) of single components. The boundary conditions of submodels are mostly based on experience, so the interactions with neighboring components and systems are neglected. To obtain physically more realistic results while keeping the simulations efficient, this work aims to support the engineer during the preprocessing phase with useful criteria.
First, suitable abstraction levels based on the existing literature are defined to identify structural dynamic interactions and excitation interactions of coupled systems. This makes it possible to separate the different effects of the coupled subsystems. On this basis, criteria are derived to assess the influence of interactions between the considered systems. These criteria can be used during the preprocessing phase and help the engineer to build efficient models with respect to the interactions with neighboring systems. The method was developed using several models with different complexity levels. Furthermore, the method is validated for application in an industrial environment using the example of a current combustion engine.

NUMERICAL SIMULATION OF THERMO-HYGRAL ALKALI-SILICA REACTION MODEL IN CONCRETE AT THE MESOSCALE
(2010)

This research aims to model Alkali-Silica Reaction (ASR) gel expansion in concrete under the influence of hygral and thermal loading, based on experimental results. ASR provokes a heterogeneous expansion in concrete leading to dimensional changes and eventually the premature failure of the concrete structure. This can result in map cracking on the concrete surface, which decreases the concrete stiffness. Factors that influence ASR include the cement alkalinity, the amount of deleterious silica in the aggregate used, the concrete porosity, and external factors such as temperature, humidity and external sources of alkali from the ingression of deicing salts. The uncertainties of these influential factors make ASR a difficult phenomenon to model; hence our approach is to treat the problem by stochastic modelling, in which a numerical simulation of a concrete cross-section integrates experimental results from the Finger-Institute for Building Materials Science at the Bauhaus-Universität Weimar. The problem is formulated as a multi-field problem, combining heat transfer, fluid transfer and the reaction rate model with the mechanical stress field. The simulation is performed as a mesoscale model considering aggregates and mortar matrix. The reaction rate model is calibrated using experimental results on concrete expansion due to ASR obtained from concrete prism tests. Expansive strain values for transient environmental conditions are determined from calculations based on the reaction rate model. These models will be able to predict the rate of ASR expansion and the crack propagation that may arise.

The present article proposes an alternative way to compute the torsional stiffness based on three-dimensional continuum mechanics instead of applying a specific theory of torsion. A thin, representative beam slice is discretized by solid finite elements. Adequate boundary conditions and coupling conditions are integrated into the numerical model to obtain a proper answer on the torsion behaviour, thus on shear center, shear stress and torsional stiffness. This finite element approach only includes general assumptions of beam torsion which are independent of cross-section geometry. These assumptions essentially are: no in-plane deformation, constant torsion and free warping. Thus it is possible to achieve numerical solutions of high accuracy for arbitrary cross-sections. Due to the direct link to three-dimensional continuum mechanics, it is possible to extend the range of torsion analysis to sections which are composed of different materials or even to heterogeneous beams on a high scale of resolution. A brief study follows to validate the implementation and results are compared to analytical solutions.

In nonlinear simulations the loading is, in general, applied in an incremental way. Path-following algorithms are used to trace the equilibrium path during the failure process. Standard displacement controlled solution strategies fail if snap-back phenomena occur. In this contribution, a path-following algorithm based on the dissipation of the inelastic energy is presented which allows for the simulation of snap-backs. Since the constraint is defined in terms of the internal energy, the algorithm is not restricted to continuum damage models. Furthermore, no a priori knowledge about the final damage distribution is required. The performance of the proposed algorithm is illustrated using nonlinear mesoscale simulations.
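
The idea can be illustrated on a one-dimensional toy problem: an elastic spring in series with a linearly softening cohesive interface exhibits snap-back when the spring is compliant enough, yet the dissipated energy grows monotonically and can therefore serve as the path-following parameter. This is a deliberately minimal sketch with illustrative parameters; the actual algorithm constrains the dissipation of the full nonlinear finite element system:

```python
# Toy dissipation-controlled path following: elastic spring (stiffness k)
# in series with a linear-softening cohesive law
#   sigma(w) = f_t * (1 - w / w_c),  0 <= w <= w_c,
# unloading to the origin. The dissipated energy is D(w) = f_t * w / 2,
# so prescribing equal increments of dissipation yields equal increments
# of the opening w -- even through the snap-back.

f_t, w_c, k = 1.0, 1.0, 0.5   # snap-back occurs because k * w_c < f_t

def state_at_dissipation(D):
    """Equilibrium state (u, sigma) for a prescribed dissipated energy D."""
    w = 2.0 * D / f_t                 # invert D(w) = f_t * w / 2
    sigma = f_t * (1.0 - w / w_c)     # cohesive traction
    u = sigma / k + w                 # total elongation
    return u, sigma

# Trace the softening branch with constant dissipation increments.
n_steps = 20
D_total = f_t * w_c / 2.0             # total fracture energy
path = [state_at_dissipation(i * D_total / n_steps) for i in range(n_steps + 1)]
us = [p[0] for p in path]
# After the peak (u = f_t / k at w = 0), u decreases together with sigma:
# a snap-back that a displacement-controlled strategy cannot trace.
```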

In the context of finite element model updating using vibration test data, natural frequencies and mode shapes are used as validation criteria. Consequently, the order of the natural frequencies and mode shapes is important. As only limited spatial information is available and the measurements are contaminated by noise, automatically selecting the numerical mode shape most likely to correspond to a measured mode shape is a difficult task. The most common criterion for indicating corresponding mode shapes is the modal assurance criterion. Unfortunately, this criterion fails in certain cases. In this paper, the purely mathematical modal assurance criterion is enhanced by additional physical information from the numerical model in terms of modal strain energies. A numerical example and a benchmark study with real measured data are presented to show the advantages of the enhanced, energy-based criterion over the traditional modal assurance criterion.
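
The modal assurance criterion is a normalized inner product of two mode shape vectors. One plausible way to bring in modal strain energy, in the spirit of the enhanced criterion (the exact formulation in the paper may differ), is to take that inner product in the metric of the stiffness matrix K:

```python
import numpy as np

def mac(phi_a, phi_b):
    """Classical modal assurance criterion, in [0, 1]."""
    return (phi_a @ phi_b) ** 2 / ((phi_a @ phi_a) * (phi_b @ phi_b))

def energy_mac(phi_a, phi_b, K):
    """Strain-energy-weighted MAC (illustrative variant): the inner
    product of the mode shapes is weighted by the stiffness matrix K."""
    return (phi_a @ K @ phi_b) ** 2 / ((phi_a @ K @ phi_a) * (phi_b @ K @ phi_b))
```

Identical shapes give 1; K-orthogonal shapes give 0 in the weighted criterion even when the plain MAC between them is misleadingly large, which is exactly the situation in which the classical criterion fails.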

Experience from past arch dam designs shows that shape optimization of arch dams has significant practical value: it makes full use of the material characteristics and reduces construction costs. Suitable variables need to be chosen to formulate the objective function, e.g. minimizing the total volume of the arch dam. Additionally, a series of constraints is derived and a convenient penalty function is formulated, which readily enforces the characteristics of the constraints in the optimal design. For the optimization, a genetic algorithm is adopted to perform a global search, while ANSYS is used for the mechanical analysis under coupled thermal and hydraulic loads. One of the constraints on the newly designed dam is to fulfill the requirements on structural safety. Therefore, a reliability analysis is applied to support decisions concerning the prediction of both the safety and the service life of the arch dam. In this way, the key factors that significantly influence the stability and safety of the arch dam can be identified, providing a basis for preventive measures that prolong the service life of the dam and enhance the safety of the structure.
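
The combination of a penalty function with a genetic algorithm can be sketched on a small constrained problem. The toy objective and all algorithmic settings below are illustrative; the actual dam optimization couples the genetic algorithm to ANSYS analyses:

```python
import numpy as np

# Real-coded genetic algorithm with a quadratic exterior penalty.
# Toy problem: minimize f(x) = x1 + x2  subject to  x1 * x2 >= 1,
# 0 <= xi <= 3 (optimum: x1 = x2 = 1, f = 2).

rng = np.random.default_rng(0)

def penalized(x, c=100.0):
    violation = max(0.0, 1.0 - x[0] * x[1])   # amount by which x1*x2 >= 1 is violated
    return x[0] + x[1] + c * violation ** 2   # objective plus quadratic penalty

def ga(pop_size=60, generations=200, bounds=(0.0, 3.0)):
    lo, hi = bounds
    pop = rng.uniform(lo, hi, size=(pop_size, 2))
    for _ in range(generations):
        fit = np.array([penalized(x) for x in pop])
        elite = pop[np.argsort(fit)[: pop_size // 2]]            # truncation selection
        parents = elite[rng.integers(0, len(elite), size=(pop_size, 2))]
        alpha = rng.uniform(0.0, 1.0, size=(pop_size, 1))
        children = alpha * parents[:, 0] + (1 - alpha) * parents[:, 1]  # blend crossover
        children += rng.normal(0.0, 0.05, size=children.shape)          # mutation
        pop = np.clip(children, lo, hi)
    fit = np.array([penalized(x) for x in pop])
    return pop[np.argmin(fit)]

best = ga()
```

The penalty term turns the constrained problem into an unconstrained one that the global search can handle; infeasible individuals survive early generations but are driven toward the feasible region as the penalty dominates their fitness.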

This study contributes to the identification of coupled THM constitutive model parameters via back analysis against information-rich experiments. A sampling-based back analysis approach is proposed, comprising both the identification of the model parameters and the assessment of their reliability. The results obtained in the context of buffer elements indicate that sensitive parameter estimates generally follow a normal distribution. Based on the sensitivity of the parameters and the probability distribution of the samples, confidence intervals can be provided for the estimated parameters, allowing a qualitative assessment of the identified parameters, which will be used in future work as inputs for prognosis computations of buffer elements. Such elements play an important role, for example, in the design of nuclear waste repositories.
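
The reliability statement can be reproduced with elementary statistics: if the parameter estimates follow a normal distribution, a 95 % confidence interval follows from the sample mean and standard deviation. A minimal sketch with synthetic samples standing in for the identified THM parameters:

```python
import numpy as np

# 95% confidence interval for a parameter estimate from repeated samples,
# assuming (as the back-analysis results indicate) normally distributed
# estimates. Synthetic data stand in for the identified THM parameters.

rng = np.random.default_rng(42)
samples = rng.normal(loc=2.5, scale=0.2, size=200)   # e.g. a conductivity-like parameter

mean = samples.mean()
sem = samples.std(ddof=1) / np.sqrt(len(samples))    # standard error of the mean
ci = (mean - 1.96 * sem, mean + 1.96 * sem)          # normal 95% interval
```

A narrow interval signals a sensitive, well-identified parameter; insensitive parameters produce wide intervals and hence less trustworthy inputs for the prognosis computations.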

A topology optimization method has been developed for structures subjected to multiple load cases (for example, a bridge pier subjected to wind loads, traffic, superstructure loads, ...). We formulate the problem as a multi-criteria optimization problem in which the compliance is computed for each load case, and adapt the epsilon-constraint method (proposed by Chankong and Haimes, 1971). The strategy of this method is to minimize the maximum compliance, resulting from the critical load case, while the remaining compliances are considered in the constraints. In each iteration, the compliances of all load cases are computed and only the maximum one is minimized. The topology optimization process thus switches from one load case to another according to the variation of the resulting compliances. In this work we motivate and explain the proposed methodology and provide numerical examples.
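
The switching behaviour can be seen on a toy sizing problem: two bars with areas a₁, a₂ under a volume constraint, and two load cases whose compliances trade off against each other. At each iteration only the currently worst compliance is reduced. This is a deliberately minimal sketch of the min-max strategy, not a full topology optimization:

```python
import numpy as np

# Min-max compliance over two load cases for a two-bar sizing toy problem.
# Compliance of load case j: c_j(a) = f_j1**2 / a_1 + f_j2**2 / a_2
# (unit modulus and lengths). Volume constraint a_1 + a_2 = V is enforced
# by rescaling the areas after each gradient step.

F = np.array([[1.0, 0.2],     # load case 1 mostly loads bar 1
              [0.2, 1.0]])    # load case 2 mostly loads bar 2
V = 2.0

def compliances(a):
    return (F ** 2 / a).sum(axis=1)

a = np.array([1.8, 0.2])      # start far from the balanced design
for _ in range(500):
    c = compliances(a)
    j = np.argmax(c)                      # critical load case this iteration
    grad = -(F[j] ** 2) / a ** 2          # gradient of c_j with respect to a
    a = np.clip(a - 0.01 * grad, 0.05, None)
    a *= V / a.sum()                      # project back onto the volume constraint
```

The iterations alternate between the two load cases as each one in turn becomes critical; by symmetry the design approaches a₁ = a₂ = 1.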

In this paper, a wavelet energy damage indicator is used within response surface methodology to identify damage in a simulated filler-beam railway bridge. The approximate model is designed to include the operational and environmental conditions in the assessment. The procedure is split into two stages, a training phase and a detection phase. During the training phase, a so-called response surface is built from training data using polynomial regression and radial basis function approximation. The response surface is then used during the detection phase to detect damage in the structure. The results show that the response surface model is able to detect moderate damage in one of the bridge supports while the temperatures and train velocities are varied.
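
The two-phase procedure can be sketched with a quadratic polynomial response surface over temperature and train velocity. The data below are synthetic, and a generic damage-sensitive feature stands in for the wavelet energy indicator of the paper:

```python
import numpy as np

# Two-phase damage detection with a polynomial response surface.
# Training: fit indicator = f(temperature, velocity) on healthy data.
# Detection: flag measurements whose residual exceeds a 3-sigma threshold.

rng = np.random.default_rng(1)

def features(T, v):
    """Quadratic polynomial basis in temperature T and velocity v."""
    return np.column_stack([np.ones_like(T), T, v, T * v, T**2, v**2])

def healthy_indicator(T, v):
    # Synthetic stand-in for the wavelet energy damage indicator
    return 0.5 + 0.01 * T + 0.002 * v + 1e-4 * T * v

# --- training phase (healthy structure, varying conditions) ---
T = rng.uniform(-10.0, 30.0, 300)       # temperatures [deg C]
v = rng.uniform(40.0, 120.0, 300)       # train velocities [km/h]
y = healthy_indicator(T, v) + rng.normal(0.0, 0.005, 300)
coef, *_ = np.linalg.lstsq(features(T, v), y, rcond=None)
resid = y - features(T, v) @ coef
threshold = 3.0 * resid.std()

# --- detection phase ---
def is_damaged(T_new, v_new, y_new):
    pred = features(np.atleast_1d(T_new), np.atleast_1d(v_new)) @ coef
    return abs(y_new - pred[0]) > threshold
```

A healthy measurement stays below the threshold under any combination of temperature and velocity covered by the training data, while a moderate offset on the indicator is flagged, which is what separates damage from operational variability.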