The effect of urban form on energy consumption has been the subject of various studies around the world. Having examined the effect of buildings on energy consumption, these studies indicate that the physical form of a city has a notable impact on the amount of energy consumed in its spaces. The present study identified the variables that affect energy consumption in residential buildings and analyzed their effects on energy consumption in four neighborhoods in Tehran: Apadana, Bimeh, Ekbatan-phase I, and Ekbatan-phase II. After the variables were extracted, their effects were estimated with statistical methods, and the results were compared with land surface temperature (LST) remote sensing data derived from Landsat 8 satellite images taken in the winter of 2019. The results showed that physical variables, such as building size, population density, vegetation cover, texture concentration, and surface color, have the greatest impact on energy usage. For the Apadana neighborhood, the factors with the strongest effect on energy consumption were found to be building size and population density. For the other neighborhoods, a third factor was also recognized to have a significant effect on energy consumption: the type of buildings for Bimeh, texture concentration for Ekbatan-I, and the orientation of buildings for Ekbatan-II.
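The statistical estimation described above can be sketched as an ordinary least-squares regression of energy use on urban-form variables. Everything below is hypothetical (variable names, coefficients, and data are invented for illustration; this is not the study's dataset or code):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200  # hypothetical sample of residential buildings

# Hypothetical standardized urban-form predictors
building_size = rng.normal(size=n)
pop_density = rng.normal(size=n)
vegetation = rng.normal(size=n)

# Synthetic energy use: size and density dominate, echoing the study's findings
energy = (0.8 * building_size + 0.6 * pop_density - 0.3 * vegetation
          + 0.1 * rng.normal(size=n))

# OLS fit: the estimated coefficients quantify each variable's effect
X = np.column_stack([np.ones(n), building_size, pop_density, vegetation])
coef, *_ = np.linalg.lstsq(X, energy, rcond=None)
print(dict(zip(["intercept", "size", "density", "vegetation"],
               np.round(coef, 2))))
```

The fitted coefficients recover the assumed effect sizes, which is the sense in which such a regression "estimates the effects" of the physical variables.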
The fracture of microcapsules is an important issue for releasing the healing agent that heals cracks in encapsulation-based self-healing concrete. The capsular clustering generated during the concrete mixing process is considered one of the critical factors in the fracture mechanism. Since the literature lacks studies on this issue, self-healing concrete cannot be designed without an appropriate modelling strategy. In this paper, the effects of microcapsule size and clustering on fractured microcapsules are studied computationally. A simple 2D computational modelling approach is developed based on the eXtended Finite Element Method (XFEM) and a cohesive surface technique. The proposed model shows that microcapsule size and clustering play significant roles in governing the load-carrying capacity and the crack propagation pattern, and determine whether the microcapsule will fracture or debond from the concrete matrix. The higher the microcapsule circumferential contact length, the higher the load-carrying capacity; when it is lower than 25% of the microcapsule circumference, debonding of the microcapsule from the concrete becomes more likely. The greater the core/shell ratio (i.e., the smaller the shell thickness), the greater the likelihood that the microcapsule fractures.
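The abstract's quantitative finding can be condensed into a small illustrative predicate. Treating the 25% contact-length figure as a sharp decision rule is our paraphrase of the reported trend, not the paper's model:

```python
def likely_failure_mode(contact_fraction):
    """Illustrative rule paraphrasing the study's finding: if the
    circumferential contact length is below 25% of the microcapsule
    circumference, debonding from the concrete matrix is the more
    likely failure mode; otherwise fracture of the shell becomes more
    likely (and load-carrying capacity rises with contact length)."""
    return "debonding" if contact_fraction < 0.25 else "fracture"

print(likely_failure_mode(0.15))  # → debonding
print(likely_failure_mode(0.40))  # → fracture
```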
The key objective of this research is to study fracture with a meshfree method, local maximum entropy approximants, and to model fracture in thin shell structures with complex geometry and topology. This topic is highly relevant for real-world applications, for example in the automotive industry and in aerospace engineering. The shell structure can be described efficiently by meshless methods, which are capable of describing complex shapes as a collection of points instead of a structured mesh. In order to find the appropriate numerical method to achieve this goal, the first part of the work was the development of a method based on local maximum entropy (LME) shape functions together with enrichment functions used in partition-of-unity methods to discretize problems in linear elastic fracture mechanics. We obtain improved accuracy relative to the standard extended finite element method (XFEM) at a comparable computational cost. In addition, we keep the advantages of the LME shape functions, such as smoothness and non-negativity. We show numerically that optimal convergence (the same as in FEM) for the energy norm and stress intensity factors can be obtained through the use of geometric (fixed-area) enrichment with no special treatment of the nodes near the crack, such as blending or shifting.
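In 1D, LME shape functions can be computed with a short Newton iteration on the Lagrange multiplier that enforces first-order consistency. The sketch below is a minimal illustration (the node layout and the locality parameter β are chosen arbitrarily), not the thesis code:

```python
import numpy as np

def lme_shape_functions(x, nodes, beta, tol=1e-12, maxit=50):
    """Local maximum-entropy basis at point x for a 1D node set.

    Maximizes entropy subject to partition of unity and first-order
    consistency; the single Lagrange multiplier lam is found by Newton
    iteration on the first-moment residual.
    """
    d = nodes - x
    lam = 0.0
    for _ in range(maxit):
        w = np.exp(-beta * d**2 + lam * d)
        p = w / w.sum()
        r = p @ d                    # first-moment residual, must -> 0
        if abs(r) < tol:
            break
        J = p @ d**2 - r**2          # dr/dlam (a variance, hence > 0)
        lam -= r / J
    return p

nodes = np.linspace(0.0, 1.0, 11)
h = nodes[1] - nodes[0]
p = lme_shape_functions(0.37, nodes, beta=1.8 / h**2)

print(round(p.sum(), 6))       # partition of unity -> 1.0
print(round(p @ nodes, 6))     # first-order consistency -> 0.37
```

The resulting basis is smooth and non-negative by construction, which is exactly the advantage over standard XFEM enrichment mentioned above.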
As the extension of this method to three-dimensional problems and complex thin shell structures with arbitrary crack growth is cumbersome, we developed a phase field model for fracture using LME. Phase field models provide a powerful tool to tackle moving-interface problems and have been used extensively in physics and materials science. Phase field methods are gaining popularity in a wide set of applications in applied science and engineering; recently, a second-order phase field approximation for brittle fracture has gathered significant interest in computational fracture, in which sharp crack discontinuities are modeled by a diffusive crack. By minimizing the system energy with respect to the mechanical displacements and the phase field, subject to an irreversibility condition that prevents crack healing, this model can describe crack nucleation, propagation, branching, and merging. One of the main advantages of phase field modeling of fracture is the unified treatment of interface tracking and mechanics, which potentially leads to simple, robust, scalable computer codes applicable to complex systems. In other words, this approximation considerably reduces the implementation complexity, because numerical tracking of the fracture is not needed, at the expense of a higher computational cost. We present a fourth-order phase field model for fracture based on local maximum entropy (LME) approximants. The higher-order continuity of the meshfree LME approximation allows the fourth-order phase field equations to be solved directly, without splitting the fourth-order differential equation into two second-order differential equations. Notably, in contrast to previous discretizations that use at least a quadratic basis, only linear completeness is needed in the LME approximation. We show that the crack surface can be captured more accurately in the fourth-order model than in the second-order model. Furthermore, fewer nodes are needed for the fourth-order model to resolve the crack path. Finally, we demonstrate the performance of the proposed meshfree fourth-order phase field formulation on five representative numerical examples. Computational results are compared to analytical solutions within linear elastic fracture mechanics and to experimental data for three-dimensional crack propagation.
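The difference between the second- and fourth-order regularizations is visible already in the standard 1D optimal crack profiles. The closed forms below are the usual textbook expressions for these models, evaluated with an arbitrary length scale:

```python
import numpy as np

l = 0.1                                   # regularization length (arbitrary)
x = np.linspace(-1.0, 1.0, 2001)
t = np.abs(x) / l

d2 = np.exp(-t)                           # second-order model: kink at x = 0
d4 = np.exp(-t) * (1.0 + t)               # fourth-order model: smooth at x = 0

# Regularized crack surface energy of the second-order (AT2-type) profile,
# gamma = int [ d^2/(2l) + (l/2)(d')^2 ] dx, which equals one unit of
# crack surface for the optimal profile.
dd2 = np.gradient(d2, x)
g = d2**2 / (2 * l) + (l / 2) * dd2**2
gamma2 = np.sum((g[1:] + g[:-1]) / 2 * np.diff(x))
print(round(gamma2, 2))                   # close to 1.0
```

The fourth-order profile is C1 at the crack (zero slope at x = 0) rather than kinked, which is why it represents the crack surface more accurately for a given node spacing.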
In the last part of this research, we present a phase field model for fracture in Kirchhoff–Love thin shells using the local maximum entropy (LME) meshfree method. Since the crack is a natural outcome of the analysis, it does not require an explicit representation and tracking, which is advantageous over techniques such as the extended finite element method, which require tracking of the crack paths. The geometric description of the shell is based on statistical learning techniques that allow dealing with general point-set surfaces while avoiding a global parametrization, and which can therefore be applied to surfaces of complex geometry and topology. We show the flexibility and robustness of the present methodology for two examples: a plate in tension and a set of open connected pipes.
Cooling Performance of a Novel Circulatory Flow Concentric Multi-Channel Heat Sink with Nanofluids
(2020)
Heat rejection from electronic devices such as processors necessitates a high heat removal rate. The present study focuses on a novel liquid-cooled heat sink geometry made from four channels (width 4 mm and depth 3.5 mm) configured in a concentric shape with alternate flow passages (slots with a 3 mm gap). In this study, the cooling performance of the heat sink was tested under simulated controlled conditions. The lower bottom surface of the heat sink was heated under a constant heat flux condition based on dissipated powers of 50 W and 70 W. The computations were carried out for different volume fractions of nanoparticles, namely 0.5% to 5%, with water as the base fluid, at flow rates of 30 to 180 mL/min. The results showed a higher rate of heat rejection from the nanofluid-cooled heat sink compared with water. The enhancement in performance was analyzed with the help of the difference between the nanofluid outlet temperature and the water outlet temperature under similar operating conditions. The enhancement was ~2% for 0.5% volume fraction nanofluids and ~17% for a 5% volume fraction.
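The abstract does not state which effective-property model underlies the nanofluid computations. As a rough plausibility check on why a few percent of nanoparticles helps, the classical Maxwell model estimates the effective thermal conductivity of a dilute suspension of spherical particles; the particle conductivity below is a generic assumption (an oxide-like nanoparticle in water), not a value from the paper:

```python
def maxwell_k_eff(k_f, k_p, phi):
    """Maxwell model for the effective thermal conductivity of a dilute
    suspension of spherical particles with volume fraction phi."""
    num = k_p + 2 * k_f + 2 * phi * (k_p - k_f)
    den = k_p + 2 * k_f - phi * (k_p - k_f)
    return k_f * num / den

k_water = 0.613        # W/(m K), water near room temperature
k_particle = 40.0      # W/(m K), generic oxide nanoparticle (assumption)

for phi in (0.005, 0.05):
    k = maxwell_k_eff(k_water, k_particle, phi)
    print(f"phi = {phi:.1%}: k_eff/k_f = {k / k_water:.3f}")
```

The model predicts roughly a 1–2% conductivity gain at 0.5% loading and ~15% at 5% loading, the same order as the cooling enhancements reported above.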
Coronary Artery Disease Diagnosis: Ranking the Significant Features Using a Random Trees Model
(2020)
Heart disease is one of the most common diseases among middle-aged citizens. Among the vast number of heart diseases, coronary artery disease (CAD) is considered a common cardiovascular disease with a high death rate. The most popular tool for diagnosing CAD is medical imaging, e.g., angiography. However, angiography is known to be costly and is also associated with a number of side effects. Hence, the purpose of this study is to increase the accuracy of coronary heart disease diagnosis by selecting significant predictive features in order of their ranking. In this study, we propose an integrated method using machine learning: the methods of random trees (RTs), the C5.0 decision tree, support vector machines (SVM), and the Chi-squared automatic interaction detection (CHAID) decision tree. The proposed method shows promising results, and the study confirms that the RTs model outperforms the other models.
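The paper's tree models rank features by their contribution to predictive accuracy. The idea can be sketched with a plain mutual-information ranking on synthetic binary data; the feature names and data below are invented for illustration, not the study's clinical dataset:

```python
import numpy as np

def mutual_information(x, y):
    """Mutual information (in bits) between two binary arrays."""
    mi = 0.0
    for a in (0, 1):
        for b in (0, 1):
            pxy = np.mean((x == a) & (y == b))
            px, py = np.mean(x == a), np.mean(y == b)
            if pxy > 0:
                mi += pxy * np.log2(pxy / (px * py))
    return mi

rng = np.random.default_rng(1)
n = 5000
label = rng.integers(0, 2, n)                    # synthetic disease label

# Features agree with the label except for a per-feature noise rate
features = {
    "chest_pain":  (label ^ (rng.random(n) < 0.1)).astype(int),   # informative
    "age_over_50": (label ^ (rng.random(n) < 0.4)).astype(int),   # weak
    "random_flag": rng.integers(0, 2, n),                         # pure noise
}

ranking = sorted(features,
                 key=lambda f: mutual_information(features[f], label),
                 reverse=True)
print(ranking)
```

The informative feature ranks first and the noise feature last, mirroring how a tree-based model would order features by predictive value.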
In recent years, increasing consideration has been given to the lifetime extension of existing structures. This is based on the fact that a growing percentage of civil infrastructure as well as buildings is threatened by obsolescence, and that for simple monetary reasons this can no longer be countered by re-building everything anew. Hence, maintenance interventions are required which allow partial or complete structural rehabilitation. However, maintenance interventions have to be economically reasonable; that is, maintenance expenditures have to be outweighed by expected future benefits. If this is not the case, then the structure is indeed obsolete - at least in its current functional, economic, technical, or social configuration - and innovative alternatives have to be evaluated. An optimization formulation for planning maintenance interventions based on cost-benefit criteria is proposed herein. The underlying formulation is as follows: (a) between maintenance interventions, structural deterioration is described as a random process; (b) maintenance interventions can take place at any time throughout the lifetime and comprise the rehabilitation of all deterioration states above a certain minimum level; and (c) maintenance interventions are optimized by taking into account all expected life-cycle costs (construction, failure, inspection, and state-dependent repair costs) as well as state- or time-dependent benefit rates. The optimization is performed by an evolutionary algorithm. The proposed approach also allows optimal lifetimes and acceptable failure rates to be determined. Numerical examples demonstrate the importance of defining benefit rates explicitly. It is shown that the optimal solution to maintenance interventions requires taking action before reaching the acceptable failure rate or the zero expected net benefit rate level. Deferring maintenance decisions, in general, results not only in higher losses but also in overly hazardous structures.
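The trade-off behind (a)-(c) can be illustrated with a toy renewal model: Weibull deterioration, preventive renewal at age T, and a cost rate that is minimized at some finite age. All parameters and costs below are invented for illustration, and a grid search stands in for the paper's evolutionary algorithm:

```python
import numpy as np

def cost_rate(T, shape=2.5, scale=50.0, c_prevent=1.0, c_fail=10.0):
    """Expected cost per unit time when the structure is renewed at age T
    or at failure, whichever comes first (renewal-reward argument)."""
    t = np.linspace(0.0, T, 2000)
    R = np.exp(-(t / scale) ** shape)              # Weibull survival function
    F_T = 1.0 - R[-1]                              # prob. of failure before T
    mean_cycle = np.sum((R[1:] + R[:-1]) / 2 * np.diff(t))
    return (c_prevent * (1.0 - F_T) + c_fail * F_T) / mean_cycle

ages = np.linspace(5.0, 120.0, 200)
rates = np.array([cost_rate(T) for T in ages])
best = ages[rates.argmin()]
print(f"optimal renewal age ~ {best:.1f}")
```

The minimum lies well before the mean time to failure, matching the paper's point that optimal maintenance acts before the acceptable failure rate is reached.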
Identification of flaws in structures is a critical element in the management of maintenance and quality assurance processes in engineering. Nondestructive testing (NDT) techniques based on a wide range of physical principles have been developed and are used in common practice for structural health monitoring. However, basic NDT techniques are usually limited in their ability to provide accurate information on the locations, dimensions, and shapes of flaws. One alternative for extracting additional information from the results of NDT is to combine it with a computational model that provides a detailed analysis of the physical process involved and enables the accurate identification of the flaw parameters. The aim here is to develop strategies to uniquely identify cracks in two-dimensional (2D) structures under dynamic loading.
A local NDT technique combining the eXtended Finite Element Method (XFEM) with dynamic loading in order to identify cracks in structures quickly and accurately is developed in this dissertation. The Newmark-β time integration method with Rayleigh damping is used for the time integration. We apply the Nelder-Mead (NM) and quasi-Newton (QN) methods to identify the crack tip in a plate. The inverse problem is solved iteratively, with XFEM used to solve the forward problem in each iteration. For a time-harmonic excitation with a single frequency and a short-duration signal measured along part of the external boundary, the crack is detected through the solution of an inverse time-dependent problem. Compared to static loads, we show that dynamic loads are more effective for crack detection problems. Moreover, we tested different dynamic loads and found that the NM method works more efficiently under the harmonic load than under the pounding load, while the QN method achieves almost the same results for both load types.
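The iterative identification loop (forward solve, boundary misfit, simplex update) can be sketched without XFEM by substituting a cheap synthetic forward model. Everything below (sensor layout, response function, and the compact Nelder-Mead) is illustrative, not the dissertation's solver:

```python
import numpy as np

def nelder_mead(f, x0, step=0.1, tol=1e-10, maxit=300):
    """Compact Nelder-Mead simplex minimizer (reflection, expansion,
    contraction, shrink), sufficient for a smooth 2D misfit."""
    x0 = np.asarray(x0, float)
    simplex = [x0] + [x0 + step * e for e in np.eye(len(x0))]
    for _ in range(maxit):
        simplex.sort(key=f)
        best, worst = simplex[0], simplex[-1]
        if f(worst) - f(best) < tol:
            break
        centroid = np.mean(simplex[:-1], axis=0)
        xr = centroid + (centroid - worst)             # reflect worst point
        if f(xr) < f(best):
            xe = centroid + 2.0 * (centroid - worst)   # try expanding further
            simplex[-1] = xe if f(xe) < f(xr) else xr
        elif f(xr) < f(simplex[-2]):
            simplex[-1] = xr
        else:
            xc = centroid + 0.5 * (worst - centroid)   # contract inward
            if f(xc) < f(worst):
                simplex[-1] = xc
            else:                                      # shrink toward best
                simplex = [best + 0.5 * (p - best) for p in simplex]
    return min(simplex, key=f)

# Synthetic "measurements" at four boundary sensors for a true crack tip;
# the exponential response stands in for the XFEM forward solution.
sensors = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
tip_true = np.array([0.6, 0.3])
response = lambda tip: np.exp(-np.linalg.norm(sensors - tip, axis=1))
measured = response(tip_true)

misfit = lambda tip: np.sum((response(tip) - measured) ** 2)
tip_found = nelder_mead(misfit, [0.2, 0.8])
print(np.round(tip_found, 3))
```

In the dissertation the role of `response` is played by an XFEM forward solve per iteration; the simplex logic around it is unchanged.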
A global strategy, Multilevel Coordinate Search (MCS) with XFEM (XFEM-MCS), under dynamic electric loading is proposed in this dissertation to detect multiple cracks in 2D piezoelectric plates. The Newmark-β method is employed for the time integration, and in each iteration the forward problem is solved by XFEM for the various cracks. The objective functional is minimized using the global search algorithm MCS. The test problems show that the XFEM-MCS algorithm under dynamic electric loading can be effectively employed for multiple-crack detection in piezoelectric materials, and it proves to be robust in identifying defects in piezoelectric structures.

Fiber-reinforced composites (FRCs) are extensively applied in practical engineering since they have high stiffness and strength. Experiments reveal a so-called interphase zone, i.e. the space between the outside interface of the fiber and the inside interface of the matrix. The interphase strength between the fiber and the matrix strongly affects the mechanical properties as a result of the large interface/volume ratio. For the purpose of understanding the mechanical properties of FRCs with a functionally graded interphase (FGI), a closed-form expression for the interface strength between a fiber and a matrix is obtained in this dissertation using a continuum modeling approach based on van der Waals (vdW) forces. Based on the interatomic potential, we develop a new modified nonlinear cohesive law, which is applied to study the interface delamination of FRCs with FGI under different loadings. The analytical solutions show that the delamination behavior strongly depends on the interphase thickness, the fiber radius, and the Young's moduli and Poisson's ratios of the fiber and the matrix.

Thermal conductivity is the property of a material to conduct heat.
With the development and deepening study of 2D materials, especially graphene and molybdenum disulfide (MoS2), the thermal conductivity of 2D materials has attracted wide attention. Classical molecular dynamics (MD) simulations show that the thermal conductivity of graphene nanoribbons (GNRs) decreases under tensile strain. Hence, strain effects can play a key role in the continuous tunability and applicability of the thermal conductivity of graphene at the nanoscale, while the degradation of thermal conductivity is an obstacle for thermal management applications. Up to now, the thermal conductivity of graphene under shear deformation has not been investigated. From a practical point of view, good thermal management of GNRs holds significant potential for future GNR-based thermal nanodevices, whose performance can be greatly affected by heat dissipation. Since graphene is a thin membrane structure, it is also important to understand its wrinkling behavior under shear deformation. MoS2 exists in the stable semiconducting 1H phase (1H-MoS2), while the metallic 1T phase (1T-MoS2) is unstable at ambient conditions. Much attention has been focused on studying the nonlinear optical properties of 1H-MoS2. In very recent research, 1T-type monolayer crystals of TMDCs, MX2 (MoS2, WS2, ...), were reported to have an intrinsic in-plane negative Poisson's ratio. Nearly at the same time, unprecedented long-term (>3 months) air stability of 1T-MoS2 was achieved by using the donor lithium hydride (LiH). Therefore, it is very important to study the thermal conductivity of 1T-MoS2.
The thermal conductivity of graphene under shear strain is systematically studied in this dissertation by MD simulations. The results show that, in contrast to the dramatic decrease of the thermal conductivity of graphene under uniaxial tension, the thermal conductivity of graphene is not sensitive to shear strain, decreasing by only 12-16%. Wrinkles evolve at shear strains of around 5%-10%, but the thermal conductivity barely changes.
The thermal conductivities of single-layer 1H-MoS2 (1H-SLMoS2) and single-layer 1T-MoS2 (1T-SLMoS2) with different sample sizes, temperatures and strain rates have been studied systematically in this dissertation. We find that the thermal conductivities of 1H-SLMoS2 and 1T-SLMoS2 in both the armchair and the zigzag directions increase with increasing sample length, while increasing the sample width has only a minor effect on the thermal conductivity of these two structures. With respect to size effects, the thermal conductivity of 1H-SLMoS2 is smaller than that of 1T-SLMoS2. Furthermore, the thermal conductivities of both 1H-SLMoS2 and 1T-SLMoS2 decrease with increasing temperature, and the values for the two phases are nearly the same (difference <6%) in both chiral orientations at corresponding temperatures, especially in the armchair direction (difference <2.8%). Moreover, we find that strain affects the thermal conductivities of 1H-SLMoS2 and 1T-SLMoS2 differently: the thermal conductivity decreases with increasing tensile strain for 1T-SLMoS2, while it fluctuates with growing strain for 1H-SLMoS2. Finally, we find that the thermal conductivity of same-sized 1H-SLMoS2 is similar to that of the strained 1H-SLMoS2 structure.
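The quantity studied throughout, thermal conductivity, is extracted in direct-method MD simulations from Fourier's law, κ = -J/(dT/dx). The following sketch illustrates that post-processing step on a synthetic (not simulated) temperature profile; all numbers are purely illustrative.

```python
import numpy as np

# Synthetic steady-state temperature profile along a ribbon, of the kind a
# direct-method (NEMD) run produces (all values here are illustrative).
x = np.linspace(0.0, 100.0e-9, 21)   # thermal-bath bin centres [m]
T = 330.0 - 0.4e9 * x                # linear profile, dT/dx = -0.4 K/nm
J = 2.0e10                           # imposed heat flux [W/m^2]

dTdx = np.polyfit(x, T, 1)[0]        # slope of a linear fit [K/m]
kappa = -J / dTdx                    # Fourier's law: kappa = -J/(dT/dx)
print(round(kappa, 2))               # 50.0 W/(m K)
```

In a real MD run the profile is noisy, so the linear fit is restricted to the region away from the heat baths before applying Fourier's law.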
This dissertation is devoted to the theoretical development and experimental laboratory verification of a new damage localization method: the state projection estimation error (SP2E). This method is based on the subspace identification of mechanical structures, Krein-space-based H-infinity estimation, and oblique projections. To explain the SP2E method, several theories are discussed, and laboratory experiments have been conducted and analysed.
A fundamental approach to structural dynamics is outlined first by explaining mechanical systems based on first principles. Following that, a fundamentally different approach, subspace identification, is comprehensively explained. While both theories, first-principle and subspace-identification-based descriptions of mechanical systems, may be seen as widespread methods, less familiar and new techniques follow. To this end, the indefinite quadratic estimation theory is explained. Based on a Popov function approach, this leads to the Krein-space-based H-infinity theory. Subsequently, a new method for damage identification, namely SP2E, is proposed. Here, the introduction of a difference process, its analysis via the average process power, and the application of oblique projections are discussed in depth.
Finally, the new method is verified in laboratory experiments. To this end, the identification of a laboratory structure at Leipzig University of Applied Sciences is elaborated. Structural alterations were then applied experimentally and afterwards localized by SP2E. In the end, four experimental sensitivity studies are shown and discussed. For each measurement series the structural alteration was increased, which was successfully tracked by SP2E. The experimental results are plausible and in accordance with the developed theories. By repeating these experiments, the applicability of SP2E for damage localization is experimentally proven.
Wind turbines are among the most important renewable energy technologies in use today. In this paper, we are interested in identifying the operating status of wind turbines, especially of rotor blades, by means of multiphysical models. Testing mechanical structures with ultrasonic methods is state-of-the-art technology. However, because of the material density and the required high resolution, such testing is performed with high-frequency waves, which cannot penetrate the structure in depth. Therefore, there is a need to adopt techniques from the fields of multiphysical model-based inversion schemes or data-driven structural health monitoring. Before investing effort in the development of such approaches, further insights are necessary to make these techniques applicable to structures such as wind power plants (blades). Among the expected developments, further acceleration of the so-called "forward codes" for a more efficient implementation of the wave equation could be envisaged. Here, we employ electromagnetic waves for the early detection of cracks. Because in many practical situations it is not possible to apply techniques from tomography (characterized by multiple source-sensor pairs), we focus on the question of whether the existence of cracks can be determined using only one source for the emitted waves.
The present paper is part of a comprehensive approach to grid-based modelling. This approach includes geometrical modelling by pixel or voxel models, advanced multiphase B-spline finite elements of variable order, and fast iterative solvers based on the multigrid method. So far, we have only presented these grid-based methods in connection with the linear elastic analysis of heterogeneous materials. Damage simulation demands further considerations. The direct stress solution of standard bilinear finite elements is severely defective, especially along material interfaces. Besides achieving objective constitutive modelling, various nonlocal formulations are applied to improve the stress solution. Such corrective data processing can refer either to input data in terms of Young's modulus or to the attained finite element stress solution, or to a combination of both. A damage-controlled sequentially linear analysis is applied in connection with an isotropic damage law. Essentially through a high resolution of the heterogeneous solid, local isotropic damage on the material subscale allows the simulation of complex damage topologies such as cracks; anisotropic degradation of a material sample can thus be simulated. Based on an effectively secant global stiffness, the analysis is numerically stable. The iteration step size is controlled for an adequate simulation of the damage path. This requires many steps, but each new step of the iterative solution process starts from the solution of the prior step, which makes the method quite effective. The present paper provides an introduction to the proposed concept for a stable simulation of damage in heterogeneous solids.
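The sequentially linear idea can be illustrated on a deliberately simplified 1D analogue: parallel springs stand in for finite elements, each linear solve is scaled so that the critical element just reaches its strength, and that element's secant stiffness is then degraded. All stiffness and strength values below are hypothetical; this is a sketch of the event-by-event logic, not the paper's FE implementation.

```python
import numpy as np

# Parallel springs under a shared load (equal displacement). Each "event"
# scales a unit reference load so the critical spring just reaches its
# strength, then degrades that spring's secant stiffness (saw-tooth path).
k = np.array([2.0, 1.0, 1.0])          # spring stiffnesses (hypothetical)
strength = np.array([1.0, 0.8, 1.2])   # spring force limits (hypothetical)
events = []
for _ in range(3):
    K = k.sum()                        # secant system stiffness
    u_ref = 1.0 / K                    # displacement under unit load
    f_ref = k * u_ref                  # spring forces under unit load
    lam = np.min(strength / f_ref)     # critical load multiplier
    i = int(np.argmin(strength / f_ref))
    events.append((i, round(lam, 4)))
    k[i] *= 0.5                        # degrade the critical spring
print(events)                          # [(0, 2.0), (1, 2.4), (0, 2.5)]
```

Because every step is a linear solve with the current secant stiffness, the procedure stays numerically stable even as the damage path zig-zags, which is the point made in the abstract.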
Damping in Bolted Joints
(2013)
With the help of modern CAE-based simulation processes, it is possible to predict the dynamic behavior of fatigue-strength problems in order to improve products in many industries, e.g. the building, machine construction, or automotive industries. Among other uses, such simulations can improve the acoustic design of automobiles at an early development stage.
Nowadays, the acoustics of automobiles plays a crucial role in the vehicle development process. Because of increasing comfort demands and statutory rules, manufacturers face the challenge of optimizing their cars' sound emissions. The optimization involves not only the reduction of noise: with the recent trend towards hybrid and electric cars, it has become apparent that vehicles can also be too quiet. Thus, the prediction of structural and acoustic properties based on FE simulations is becoming increasingly important before any experimental prototype is examined. With the current state of the art, qualitative comparisons between different implementations are possible; however, an accurate and reliable quantitative prediction is still a challenge.
One aspect of increasing the prediction quality for acoustic (or generally oscillating) problems, especially in automotive power-trains, is a more accurate implementation of damping in jointed structures. While material damping occurs globally and homogeneously in a structural system, joint damping is a very local phenomenon, since energy is dissipated mainly in the vicinity of joints.
This paper focusses on experimental and numerical studies performed on a single (extracted) screw connection. Starting with experimental studies used to establish the underlying physical model of the energy loss, the locally influencing parameters (e.g. the damping factor) are identified. In contrast to similar research projects, the approach aims at a more local consideration within the joint interface: tangential stiffness and energy loss within the interface are spatially distributed, and interactions between the influencing parameters are taken into account. As a result, the damping matrix is no longer proportional to the mass or stiffness matrix, since it is composed of the global material damping and the local joint damping. With this new approach, the prediction quality can be increased, since the local distribution of the physical parameters within the joint interface corresponds much more closely to reality.
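A standard way to quantify the energy loss identified in such experiments is the equivalent viscous damping ratio ζ = E_d/(4πE_s), with E_d the area of the measured force-displacement hysteresis loop and E_s the peak elastic strain energy. The sketch below applies this to a synthetic loop whose analytic damping ratio is known; the function name and all parameter values are illustrative, not taken from the paper.

```python
import numpy as np

def equivalent_damping(u, F, k):
    """zeta = E_d / (4*pi*E_s) from one measured hysteresis loop (u, F):
    E_d = loop area (dissipated energy), E_s = peak elastic strain energy."""
    # trapezoid rule for the cyclic integral of F du (the loop area)
    Ed = abs(np.sum(0.5 * (F[1:] + F[:-1]) * np.diff(u)))
    Es = 0.5 * k * np.max(np.abs(u)) ** 2
    return Ed / (4.0 * np.pi * Es)

# Synthetic loop: spring k plus viscous damper c under harmonic motion,
# for which the analytic value is zeta_eq = c*w/(2*k) = 0.05.
k, c, w, U = 100.0, 2.0, 5.0, 0.01
t = np.linspace(0.0, 2.0 * np.pi / w, 2001)   # one full cycle
u = U * np.sin(w * t)
F = k * u + c * w * U * np.cos(w * t)
print(round(equivalent_damping(u, F, k), 4))  # 0.05
```

For a real joint the loop is not elliptical, and evaluating ζ loop by loop over amplitude reveals the amplitude dependence that distinguishes joint damping from proportional material damping.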
Isogeometric analysis (IGA) is a numerical method for solving partial differential equations (PDEs) that was introduced with the aim of integrating finite element analysis with computer-aided design (CAD) systems. The main idea of the method is to use the same spline basis functions that describe the geometry in CAD systems for the approximation of solution fields in the finite element method (FEM). Originally, NURBS, the standard technology employed in CAD systems, were adopted as basis functions in IGA, but several variants of IGA use other technologies such as T-splines, PHT-splines, and subdivision surfaces as basis functions. In general, IGA offers two key advantages over classical FEM: (i) by describing the CAD geometry exactly using smooth, high-order spline functions, the mesh generation process is simplified and the interoperability between CAD and FEM is improved; (ii) IGA can be viewed as a high-order finite element method that offers basis functions with high inter-element continuity and can therefore provide a primal variational formulation of high-order PDEs in a straightforward fashion. The main goal of this thesis is to further advance isogeometric analysis by exploiting these major advantages, namely precise geometric modeling and the use of smooth high-order splines as basis functions, and to develop robust computational methods for problems with complex geometry and/or complex multi-physics.
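The spline basis functions shared by CAD and IGA can be evaluated with the classical Cox-de Boor recursion. The following is a minimal textbook sketch on an illustrative knot vector, not an example from the thesis; it checks the partition-of-unity property that makes these functions usable as FE basis functions.

```python
import numpy as np

def bspline_basis(i, p, knots, x):
    """Cox-de Boor recursion: value at x of the i-th B-spline basis
    function of degree p on the given knot vector."""
    if p == 0:
        return 1.0 if knots[i] <= x < knots[i + 1] else 0.0
    left = right = 0.0
    d1 = knots[i + p] - knots[i]
    if d1 > 0.0:
        left = (x - knots[i]) / d1 * bspline_basis(i, p - 1, knots, x)
    d2 = knots[i + p + 1] - knots[i + 1]
    if d2 > 0.0:
        right = (knots[i + p + 1] - x) / d2 * bspline_basis(i + 1, p - 1, knots, x)
    return left + right

# Quadratic basis (p = 2) on an open (clamped) knot vector: the five basis
# functions form a partition of unity everywhere inside the patch.
knots = [0, 0, 0, 1, 2, 3, 3, 3]
vals = [bspline_basis(i, 2, knots, 1.5) for i in range(5)]
print(round(sum(vals), 12))   # 1.0
```

Unlike Lagrange FE shape functions, these quadratic B-splines are C1-continuous across the interior knots, which is exactly the inter-element smoothness the thesis exploits for high-order PDEs.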
As the first contribution of this thesis, we leverage the precise geometric modeling of isogeometric analysis and propose a new method for its coupling with meshfree discretizations. We exploit the strengths of both methods by using IGA to provide a smooth, geometrically-exact surface discretization of the problem domain boundary, while the Reproducing Kernel Particle Method (RKPM) discretization is used to provide the volumetric discretization of the domain interior. The coupling strategy is based upon the higher-order consistency or reproducing conditions that are directly imposed in the physical domain. The resulting coupled method enjoys several favorable features: (i) it preserves the geometric exactness of IGA, (ii) it circumvents the need for global volumetric parameterization of the problem domain, (iii) it achieves arbitrary-order approximation accuracy while preserving higher-order smoothness of the discretization. Several numerical examples are solved to show the optimal convergence properties of the coupled IGA–RKPM formulation, and to demonstrate its effectiveness in constructing volumetric discretizations for complex-geometry objects.
As for the next contribution, we exploit the smooth, high-order spline basis functions of IGA to solve high-order surface PDEs governing the morphological evolution of vesicles. These governing equations often consist of geometric PDEs, high-order PDEs on stationary or evolving surfaces, or a combination of the two. We propose an isogeometric formulation for solving these PDEs. In the context of geometric PDEs, we consider phase-field approximations of mean curvature flow and Willmore flow problems and numerically study the convergence behavior of isogeometric analysis for these problems. As a model problem for high-order PDEs on stationary surfaces, we consider the Cahn-Hilliard equation on a sphere, where the surface is modeled using a phase-field approach. As for high-order PDEs on evolving surfaces, a phase-field model of a deforming multi-component vesicle, which consists of two fourth-order nonlinear PDEs, is solved using isogeometric analysis in a primal variational framework. Through several numerical examples in 2D, 3D and axisymmetric 3D settings, we show the robustness of IGA for solving the considered phase-field models.
Finally, we present a monolithic, implicit formulation based on isogeometric analysis and generalized-alpha time integration for simulating the hydrodynamics of vesicles according to a phase-field model. Compared to earlier works, the number of equations of the phase-field model that need to be solved is reduced by leveraging the high continuity of NURBS functions, and the algorithm is extended to 3D settings. We use the residual-based variational multiscale method (RBVMS) for solving the Navier-Stokes equations, while the remaining PDEs in the phase-field model are treated using standard Galerkin-based IGA. We introduce the resistive immersed surface (RIS) method into the formulation, which can be employed for an implicit description of complex geometries using a diffuse-interface approach. The implementation highlights the robustness of the RBVMS method for the Navier-Stokes equations of incompressible flows with non-trivial localized forcing terms, including the bending and tension forces of the vesicle. The potential of the phase-field model and isogeometric analysis for accurate simulation of a variety of fluid-vesicle interaction problems in 2D and 3D is demonstrated.
In this research, an attempt was made to reduce the dimension of wavelet-ANFIS/ANN (artificial neural network / adaptive neuro-fuzzy inference system) models to obtain reliable forecasts and to decrease the computational cost. To this end, principal component analysis (PCA) was performed on the input time series, decomposed by a discrete wavelet transform, to feed the ANN/ANFIS models. The models were applied to forecasting dissolved oxygen (DO) in rivers, an important variable affecting aquatic life and water quality. The current values of DO, water surface temperature, salinity, and turbidity were considered as input variables to forecast DO three time steps ahead. The results revealed that PCA can be employed as a powerful tool for dimension reduction of input variables and for detecting their inter-correlation. The PCA-wavelet-ANN models are compared with wavelet-ANN models, the former having the advantage of less computational time. For the ANFIS models, PCA is even more beneficial, because it prevents the wavelet-ANFIS models from creating too many rules, which deteriorates their efficiency. Moreover, manipulating the wavelet-ANFIS models using PCA leads to a significant decrease in computational time. Finally, it was found that the PCA-wavelet-ANN/ANFIS models can provide reliable forecasts of dissolved oxygen as an important water-quality indicator in rivers.
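The dimension-reduction step can be sketched with a generic SVD-based PCA. Here, synthetic correlated inputs stand in for the wavelet sub-series of DO, temperature, salinity and turbidity; `pca_reduce` and the 0.99 variance threshold are illustrative, not the study's settings.

```python
import numpy as np

def pca_reduce(X, var_keep=0.95):
    """Project rows of X onto the leading principal components that
    together retain at least var_keep of the total variance."""
    Xc = X - X.mean(axis=0)
    # SVD of the centred data: principal directions are the rows of Vt
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    ratio = np.cumsum(s**2) / np.sum(s**2)
    k = int(np.searchsorted(ratio, var_keep) + 1)
    return Xc @ Vt[:k].T, k

rng = np.random.default_rng(0)
# Four inputs driven by only two underlying factors (strongly correlated),
# mimicking redundant wavelet sub-series of the water-quality variables.
z = rng.normal(size=(500, 2))
X = np.column_stack([z[:, 0], 2 * z[:, 0] + 0.01 * rng.normal(size=500),
                     z[:, 1], -z[:, 1] + 0.01 * rng.normal(size=500)])
scores, k = pca_reduce(X, 0.99)
print(k)   # 2: four correlated inputs collapse to two components
```

This is exactly the mechanism the abstract describes: redundant, inter-correlated inputs collapse onto a few components, which shrinks the ANN input layer and, for ANFIS, keeps the rule base small.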
The aim of this study is to control the spurious oscillations developing around discontinuous solutions of both linear and non-linear wave equations, i.e. hyperbolic partial differential equations (PDEs). The equations include both first-order and second-order (wave) hyperbolic systems. In these systems, even smooth initial conditions or smoothly varying source (load) terms can lead to discontinuous propagating solutions (fronts). For first-order hyperbolic PDEs, the concept of central high-resolution schemes is integrated with multiresolution-based adaptation to properly capture both discontinuous propagating fronts and the effects of fine-scale responses on larger scales in a multiscale manner. This integration leads to central high-resolution schemes on non-uniform grids; such a simulation, however, is unstable, as central schemes were originally developed to work on uniform cells/grids. Hence, the main concern is the stable collaboration of central schemes and multiresolution-based cell adapters. Regarding central schemes, the considered approaches are: 1) second-order central and central-upwind schemes; 2) third-order central schemes; 3) third- and fourth-order central weighted essentially non-oscillatory schemes (central-WENO or CWENO); 4) piecewise parabolic methods (PPMs) obtained with two different local stencils. For these methods, the corresponding (nonlinear) stability conditions are studied and modified as well. Based on these stability conditions, several limiters are modified or developed: 1) several second-order limiters with the total variation diminishing (TVD) property; 2) second-order uniformly high-order accurate non-oscillatory (UNO) limiters; 3) two third-order nonlinear scaling limiters; 4) two new limiters for PPMs.
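As an illustration of the TVD limiting mentioned above, here is the classic minmod limiter for a piecewise-linear (MUSCL-type) reconstruction; this is a textbook sketch, not one of the modified limiters developed in the study.

```python
import numpy as np

def minmod(a, b):
    """Classic TVD slope limiter: the smaller-magnitude argument when the
    signs agree, zero otherwise."""
    return np.where(a * b > 0.0, np.where(np.abs(a) < np.abs(b), a, b), 0.0)

def limited_slopes(u):
    """Cell-wise limited slopes for a piecewise-linear reconstruction."""
    du_left = u[1:-1] - u[:-2]     # backward differences
    du_right = u[2:] - u[1:-1]     # forward differences
    return minmod(du_left, du_right)

# In the smooth ramp the slope is kept (second order); at the kink the
# limiter clips it to zero, dropping locally to first order. This local
# order reduction is what suppresses spurious oscillations at fronts.
u = np.array([0.0, 1.0, 2.0, 3.0, 3.0, 3.0])
print(limited_slopes(u))   # [1. 1. 0. 0.]
```

The higher-order UNO, scaling, and PPM limiters of the study refine exactly this trade-off between accuracy in smooth regions and non-oscillatory behaviour at discontinuities.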
Numerical results show that the adaptive solvers lead to cost-effective computations (e.g., in some 1-D problems fewer than 200 adapted grid points are used during the simulation, whereas 2049 points are needed for the same accuracy on a uniform grid). In some cases it is also confirmed that fine-scale responses have considerable effects on larger scales.
In the numerical simulation of nonlinear first-order hyperbolic systems, the two main concerns are convergence and uniqueness. The former is important because of spurious oscillations, numerical dispersion and numerical dissipation. Convergence of a numerical solution does not guarantee that it is the physical one (the uniqueness issue); indeed, a nonlinear system can converge to several numerical results which are all mathematically valid. In this work, convergence and uniqueness are studied directly on non-uniform grids/cells via the concepts of the local numerical truncation error and the numerical entropy production, respectively. Both of these concepts are also used for cell/grid adaptation, and their performance is compared with that of the multiresolution-based method. Several 1-D and 2-D numerical examples are examined to confirm the efficiency of the adaptive solver. The examples involve problems with convex and non-convex fluxes. In the latter case, complex waves develop, and properly capturing the physically correct solutions requires more attention. For this purpose, method adaptation (in parallel to cell/grid adaptation) appears to be essential; this new type of adaptation is also performed in the framework of multiresolution analysis.
Regarding second-order hyperbolic PDEs (mechanical waves), the regularization concept is used to cure artificial (numerical) oscillation effects, especially for high-gradient or discontinuous solutions; the oscillations are removed by regularization acting as a post-processor. Simulations are performed directly on the second-order form of the wave equations. It is possible to rewrite second-order wave equations as a system of first-order equations and then simulate the new system with high-resolution schemes; however, this approach increases the number of variables (especially for 3-D problems).
The numerical discretization is performed with compact finite difference (FD) formulations with desired features, e.g., methods with spectral-like resolution or optimized-error properties. These FD methods are developed to handle high-frequency waves (such as waves near earthquake sources). The performance of several regularization approaches is studied both theoretically and numerically, and finally a regularization approach that controls the Gibbs phenomenon is recommended.
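A representative compact FD scheme is the fourth-order Padé-type first derivative, which couples neighbouring derivative values through a tridiagonal system and thereby achieves the near-spectral resolution mentioned above. The sketch below uses a periodic grid and a dense solve for brevity (a production code would use a banded or cyclic tridiagonal solver); it is a generic textbook scheme, not necessarily one of the optimized variants used in the study.

```python
import numpy as np

def compact_derivative_periodic(f, h):
    """Fourth-order Pade-type compact first derivative on a periodic grid:
    (1/4) f'_{i-1} + f'_i + (1/4) f'_{i+1} = 3/(4h) * (f_{i+1} - f_{i-1})."""
    n = len(f)
    A = np.eye(n) + 0.25 * (np.eye(n, k=1) + np.eye(n, k=-1))
    A[0, -1] = A[-1, 0] = 0.25                      # periodic wrap-around
    rhs = 3.0 / (4.0 * h) * (np.roll(f, -1) - np.roll(f, 1))
    return np.linalg.solve(A, rhs)

# Accuracy check on smooth, oscillatory data: d/dx sin(x) = cos(x).
n = 128
x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
h = x[1] - x[0]
err = np.max(np.abs(compact_derivative_periodic(np.sin(x), h) - np.cos(x)))
print(err < 1e-6)   # True
```

The implicit coupling is what distinguishes compact schemes from explicit FD stencils of the same width: for the same three-point support, the modified wavenumber stays accurate much closer to the grid cutoff, which matters for the high-frequency waves targeted here.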
Finally, numerical results are provided to confirm the efficiency of the numerical solvers enhanced by the regularization concept. In this part, shock-like responses due to local and abrupt changes of physical properties, as well as stress wave propagation in stochastic-like domains, are studied.
The extended finite element method (XFEM) offers an elegant tool to model material discontinuities and cracks within a regular mesh, so that the element edges need not coincide with the discontinuities. This allows the modeling of propagating cracks without incremental mesh adaptation. Using a regular mesh offers the advantage that simple refinement strategies based on a quadtree data structure can be used to refine the mesh in regions that require a high mesh density. An additional benefit of the XFEM is that the transmission of cohesive forces through a crack can be modeled in a straightforward way without introducing additional interface elements. Finally, different criteria for the determination of the crack propagation angle are investigated and applied in numerical tests of cracked concrete specimens, which are compared with experimental results.
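One widely used criterion of the kind compared in such studies is the maximum tangential (circumferential) stress criterion, which predicts the kink angle from the mode I/II stress intensity factors; whether it is among the criteria investigated here is not stated, so the following is a generic sketch.

```python
import numpy as np

def mts_kink_angle(KI, KII):
    """Crack propagation angle from the maximum-tangential-stress (MTS)
    criterion; KI, KII are the mode I and mode II stress intensity factors."""
    if KII == 0.0:
        return 0.0                                  # pure mode I: straight ahead
    return 2.0 * np.arctan((KI - np.sqrt(KI**2 + 8.0 * KII**2)) / (4.0 * KII))

# Pure mode II loading gives the classical kink angle of about 70.5 degrees.
theta = np.degrees(mts_kink_angle(0.0, 1.0))
print(round(theta, 1))   # -70.5
```

In an XFEM setting, KI and KII are typically extracted from interaction integrals around the crack tip, and the crack is then extended by one increment in the direction returned by the chosen criterion.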
The application of the concept of information entropy, together with the principle of maximum entropy, to open-channel flow is essentially based on physical considerations of the problem at hand. This paper is a discussion of the paper by Yeganeh and Heidari (2020), who proposed a new approach for measuring the vertical distribution of streamwise velocity in open channels. The discussers argue that their approach is conceptually incorrect and thus leads to a physically unrealistic situation. In addition, the discussers found some incorrect mathematical expressions (assumed to be typos) in the paper, and also point out that the authors did not cite some of the original papers on the topic.
In machine learning, if the training data are independently and identically distributed like the test data, a trained model can make accurate predictions for new data samples. Conventional machine learning depends strongly on massive amounts of domain-specific training data to uncover latent patterns. In contrast, domain adaptation and transfer learning are sub-fields of machine learning that address the inescapable problem of insufficient training data by relaxing the domain-dependence hypothesis. In this contribution, we address this issue and, through a novel combination of both methods, develop a computationally efficient and practical algorithm to solve boundary value problems based on nonlinear partial differential equations. We adopt a meshfree analysis framework to integrate prevailing geometric modelling techniques based on NURBS and present an enhanced deep collocation approach that also plays an important role in the accuracy of the solutions. We start with a brief introduction on how these methods expand upon this framework. We observe excellent agreement between the methods and show how fine-tuning a pre-trained network to a specialized domain can lead to outstanding performance compared with existing approaches. As proof of concept, we illustrate the performance of our proposed model on several benchmark problems.
In the last two decades, peridynamics (PD) has attracted much attention in the field of fracture mechanics. One key feature of PD is its nonlocality, which is quite different from the ideas underlying conventional methods such as FEM and meshless methods. However, conventional PD suffers from problems such as the constant horizon, explicit algorithms, and hourglass modes. In this thesis, by examining nonlocality with scrutiny, we propose several new concepts, namely the dual-horizon (DH) in PD, the dual-support (DS) in smoothed particle hydrodynamics (SPH), nonlocal operators, and the operator energy functional. Conventional PD (SPH) is contained in DH-PD (DS-SPH) as a special case, while DH-PD (DS-SPH) can adopt inhomogeneous discretizations and inhomogeneous support domains; it can thus be viewed as a fundamental improvement of conventional PD (SPH). The dual formulation of PD and SPH allows h-adaptivity while satisfying the conservation of linear momentum, angular momentum and energy. Developing the concept of nonlocality further, we introduce the nonlocal operator method (NOM) as a generalization of DH-PD. Combined with the energy functionals of various physical models, the nonlocal forms based on the dual-support concept are derived. In addition, the variation of the energy functional allows an implicit formulation of the nonlocal theory. Finally, we develop the higher-order nonlocal operator method, which is capable of solving higher-order partial differential equations on arbitrary domains in higher-dimensional space. Since the concepts were developed gradually, our findings are described chronologically.
In chapter 2, we develop a DH-PD formulation that includes varying horizon sizes and solves the "ghost force" issue. The concept of the dual-horizon accounts for the unbalanced interactions between particles with different horizon sizes. The present formulation fulfills both the balance of linear momentum and the balance of angular momentum exactly for arbitrary particle discretizations. All three peridynamic formulations, namely bond-based, ordinary state-based and non-ordinary state-based peridynamics, can be implemented within the DH-PD framework. A simple adaptive refinement procedure (h-adaptivity) is proposed to reduce the computational cost. Both two- and three-dimensional examples, including the Kalthoff-Winkler experiment and a plate with branching cracks, are tested to demonstrate the capability of the method.
In chapter 3, a nonlocal operator method (NOM) based on the variational principle is proposed for the solution of the waveguide problem in computational electromagnetics. Common differential operators as well as variational forms are defined within the context of nonlocal operators. The present nonlocal formulation allows the assembly of the tangent stiffness matrix with ease, which is necessary for the eigenvalue analysis of the waveguide problem. The formulation is applied to solve the 1D Schrödinger equation, a 2D electrostatic problem, and the differential electromagnetic vector wave equations based on electric fields.
In chapter 4, a general nonlocal operator method is proposed for solving partial differential equations (PDEs) of mechanical problems. The nonlocal operator can be regarded as the integral form "equivalent" to the differential form in the sense of a nonlocal interaction model. The variation of a nonlocal operator plays the same role as the derivatives of the shape functions in meshless methods or in the finite element method. Based on the variational principle, the residual and the tangent stiffness matrix can be obtained with ease. The nonlocal operator method is enhanced here with an operator energy functional to satisfy the linear consistency of the field. A highlight of the present method is that the functional derived from the nonlocal operators converts the construction of the residual and stiffness matrix into a series of matrix multiplications using the predefined nonlocal operators. The nonlocal strong forms of different functionals can be obtained easily via the concepts of support and dual-support. Several numerical examples of different types of PDEs are presented.
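The flavour of such nonlocal operators can be conveyed by a first-order nonlocal gradient, which is exact for linear fields (the linear consistency mentioned above). The following minimal sketch uses a trivial weight function w ≡ 1 and is an illustration of the general idea, not the chapter's implementation.

```python
import numpy as np

def nonlocal_gradient(xi, ui, xs, us, w):
    """First-order nonlocal gradient at point xi with value ui, from
    support points xs with values us and weights w:
        grad u ~ K^{-1} * sum_j w_j (u_j - u_i) r_j,
    where r_j = x_j - x_i and K = sum_j w_j r_j r_j^T (shape tensor).
    Exact for linear fields by construction."""
    r = xs - xi                              # relative position vectors
    K = (w[:, None, None] * r[:, :, None] * r[:, None, :]).sum(axis=0)
    b = (w[:, None] * (us - ui)[:, None] * r).sum(axis=0)
    return np.linalg.solve(K, b)

# Scattered support points and a linear field u = 2x + 3y: the operator
# must reproduce the gradient (2, 3) exactly (linear consistency).
rng = np.random.default_rng(1)
xs = rng.uniform(-1.0, 1.0, size=(12, 2))
us = 2.0 * xs[:, 0] + 3.0 * xs[:, 1]
g = nonlocal_gradient(np.zeros(2), 0.0, xs, us, np.ones(12))
print(np.round(g, 10))   # [2. 3.]
```

Because the operator is assembled from precomputed geometric quantities (the shape tensor K and the bond vectors), residuals and tangent stiffness contributions reduce to matrix multiplications, which is the computational point made in the abstract.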
In chapter 5, we extend the NOM to a higher-order scheme by using a higher-order Taylor series expansion of the unknown field. Such a higher-order scheme improves the original NOM of chapters 3 and 4, which can only achieve first-order convergence. The higher-order NOM obtains all partial derivatives up to a specified maximal order simultaneously, without resorting to shape functions. The functional based on the nonlocal operators converts the construction of the residual and stiffness matrix into a series of matrix multiplications on the nonlocal operator matrix. Several numerical examples solved in strong or weak form are presented to show the capabilities of this method.
In chapter 6, we address the difficulty of the particle-based NOM of chapters 3-5 in accurately imposing boundary conditions of various orders, and convert it into a scheme with the interpolation property. The new scheme describes partial derivatives of various orders at a point by the nodes in its support and takes advantage of a background mesh for numerical integration. The boundary conditions are enforced via a modified variational principle. The particle-based NOM can be viewed as a special case of the NOM with interpolation property when nodal integration is used. The scheme based on numerical integration greatly improves the stability of the method; as a consequence, the operator energy functional of the particle-based NOM is no longer required. We demonstrate the capabilities of the method by solving gradient solid problems and comparing the numerical results with available exact solutions.
In chapter 7, we derive the DS-SPH for solids within the framework of the variational principle. The tangent stiffness matrix of SPH can be obtained with ease and serves as the basis for the present implicit SPH. We propose an hourglass energy functional, which allows the direct derivation of the hourglass force and the hourglass tangent stiffness matrix. The dual-support is involved in all derivations based on variational principles and is automatically satisfied in the assembly of the stiffness matrix. The implementation of the stiffness matrix comprises two steps: the nodal assembly based on the deformation gradient and the global assembly over all nodes. Several numerical examples are presented to validate the method.
Recent earthquakes have shown that many existing buildings, particularly in developing countries, are not safe from earthquake damage. A variety of statistical and machine-learning approaches have been proposed to identify vulnerable buildings for the prioritization of retrofitting. The present work investigates earthquake susceptibility through the combination of six building performance variables that can be used to obtain an optimal prediction of the damage state of reinforced concrete buildings using an artificial neural network (ANN). To this end, a multi-layer perceptron network is trained and optimized on a database of 484 damaged buildings from the Düzce earthquake in Turkey. The results demonstrate the feasibility and effectiveness of the selected ANN approach for classifying concrete structural damage, which can be used as a preliminary assessment technique to identify vulnerable buildings in disaster risk-management programs.
Earthquakes are among the most devastating natural disasters, causing severe economic, environmental, and social destruction. Earthquake safety assessment and building hazard monitoring can contribute substantially to urban sustainability through the identification of, and insight into, optimum materials and structures. While the vulnerability of structures mainly depends on the structural resistance, the safety assessment of buildings can be highly challenging. In this paper, we consider the Rapid Visual Screening (RVS) method, a qualitative procedure for estimating structural scores for buildings, suitable for medium- to high-seismicity cases. This paper presents an overview of the common RVS methods, i.e., FEMA P-154, IITK-GSDMA, and EMPI. To examine their accuracy and validity, a practical comparison is performed between their assessments and the observed damage of reinforced concrete buildings from a street survey in the Bingöl region, Turkey, after the 1 May 2003 earthquake. The results demonstrate that RVS methods are a vital tool for preliminary damage estimation. Furthermore, the comparative analysis showed that FEMA P-154 produces assessments that overestimate damage states and is not economically viable, while EMPI and IITK-GSDMA provide more accurate and practical estimates, respectively.
Realistic uncertainty description incorporating aleatoric and epistemic uncertainties can be achieved within the framework of polymorphic uncertainty, which is computationally demanding. Utilizing a domain decomposition approach for random-field-based uncertainty models, the proposed level-based sampling method can reduce these computational costs significantly and shows good agreement with a standard sampling technique. While 2-level configurations tend to become unstable with decreasing sampling density, 3-level setups show encouraging results for the investigated reliability analysis of a structural unit square.
This study proposes an efficient Bayesian, frequency-based damage identification approach to identify damage in cantilever structures with an acceptable error rate, even at high noise levels. The catenary poles of electric high-speed train systems were selected as a realistic case study. Compared to other frequency-based damage detection approaches described in the literature, the proposed approach efficiently detects damage in cantilever structures at higher levels of damage identification, namely identifying both the damage location and severity, using a low-cost structural health monitoring (SHM) system with a limited number of sensors, for example accelerometers. The integration of Bayesian inference as a stochastic framework makes it possible to utilize the benefit of data fusion in merging the informative data from multiple damage features, which increases the quality and accuracy of the results. The findings provide the decision-maker with the information required to manage maintenance, repair, or replacement procedures.
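The data-fusion idea, merging informative data from multiple damage features within Bayesian inference, can be illustrated with a toy discrete example. All numbers below (candidate locations, predicted frequency shifts, noise level) are hypothetical; independent Gaussian likelihoods for the two features are multiplied before normalization.

```python
import math

# Candidate damage locations along the pole (normalized height, hypothetical).
locations = [0.0, 0.25, 0.5, 0.75, 1.0]
prior = [1.0 / len(locations)] * len(locations)

# Hypothetical "measured" relative shifts of two natural frequencies, and the
# shift each candidate location would predict (a stand-in forward model).
measured = (0.12, 0.31)
predicted = {0.0: (0.02, 0.05), 0.25: (0.06, 0.15),
             0.5: (0.13, 0.30), 0.75: (0.20, 0.45), 1.0: (0.28, 0.60)}
sigma = 0.03  # assumed measurement noise level

def likelihood(loc):
    # Independent Gaussian likelihoods -> data fusion by multiplication.
    return math.prod(
        math.exp(-0.5 * ((m - p) / sigma) ** 2)
        for m, p in zip(measured, predicted[loc]))

post = [pr * likelihood(loc) for pr, loc in zip(prior, locations)]
z = sum(post)
posterior = [p / z for p in post]
best = locations[posterior.index(max(posterior))]
```

Fusing both features sharpens the posterior around the location whose predicted shifts match both measurements, which is the mechanism the abstract credits for the increased accuracy.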
In nonlinear simulations the loading is, in general, applied in an incremental way. Path-following algorithms are used to trace the equilibrium path during the failure process. Standard displacement controlled solution strategies fail if snap-back phenomena occur. In this contribution, a path-following algorithm based on the dissipation of the inelastic energy is presented which allows for the simulation of snap-backs. Since the constraint is defined in terms of the internal energy, the algorithm is not restricted to continuum damage models. Furthermore, no a priori knowledge about the final damage distribution is required. The performance of the proposed algorithm is illustrated using nonlinear mesoscale simulations.
Energy‐Efficient Method for Wireless Sensor Networks Low‐Power Radio Operation in Internet of Things
(2020)
Radio operation in wireless sensor networks (WSN) in Internet of Things (IoT) applications is the most common source of power consumption. Consequently, recognizing and controlling the factors affecting radio operation can be valuable for managing node power consumption. Among the essential factors affecting radio operation, the time spent checking the radio is of utmost importance for power consumption, as it can lead to false wake-ups or idle listening in radio duty cycles. ContikiMAC is a low-power radio duty-cycling protocol in Contiki OS that, in WakeUp mode, uses a clear channel assessment (CCA) to check the radio status periodically. This paper presents a detailed analysis of the radio wake-up time factors of ContikiMAC. Furthermore, we propose a lightweight CCA (LW-CCA) as an extension to ContikiMAC that reduces the radio duty cycles spent in false wake-ups and idle listening through a dynamic received signal strength indicator (RSSI) status check time. Simulation results in the Cooja simulator show that LW-CCA reduces node energy consumption by about 8% while maintaining up to 99% of the packet delivery rate (PDR).
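The energy argument behind a dynamic CCA duration can be illustrated with a toy duty-cycle model. This is not the ContikiMAC implementation, and all parameters (check counts, traffic probability, listen windows) are invented for illustration: the adaptive check aborts early when a quick RSSI read indicates an idle channel, while the fixed scheme always listens for the full window.

```python
import random

random.seed(1)

WAKEUPS = 10000      # periodic channel checks
P_TRAFFIC = 0.02     # fraction of wake-ups that see a real packet (assumed)
FULL_CCA_MS = 2.0    # full listen window per check (assumed)
QUICK_CCA_MS = 0.5   # early abort after a quick RSSI "idle" reading (assumed)

def listen_time(adaptive):
    """Total radio-on time in ms over all wake-ups for one CCA policy."""
    total = 0.0
    for _ in range(WAKEUPS):
        busy = random.random() < P_TRAFFIC
        if busy:
            total += FULL_CCA_MS       # must stay on to receive the packet
        elif adaptive:
            total += QUICK_CCA_MS      # dynamic RSSI check: channel is idle
        else:
            total += FULL_CCA_MS       # idle listening for the full window
    return total

baseline = listen_time(adaptive=False)
lw = listen_time(adaptive=True)
saving = 1.0 - lw / baseline
```

Because idle wake-ups dominate at low traffic, shortening only the idle checks cuts radio-on time without touching packet reception, which is why PDR can stay high; the toy parameters here exaggerate the 8% saving reported in the paper.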
The node-moving and multi-stage node enrichment adaptive refinement procedures are extended to the mixed discrete least squares meshless (MDLSM) method for efficient analysis of elasticity problems. In the MDLSM formulation, a mixed formulation is adopted to avoid second-order differentiation of the shape functions and to obtain displacements and stresses simultaneously. In the refinement procedures, a robust error estimator based on the value of the least-squares residual functional of the governing differential equations and their boundary conditions at the nodal points is used; it is inherently available from the MDLSM formulation and can efficiently identify the zones with higher numerical errors. The results are compared with the refinement procedures in the irreducible formulation of the discrete least squares meshless (DLSM) method and show the accuracy and efficiency of the proposed procedures. The comparison of the error norms and convergence rates also shows the fidelity of the proposed adaptive refinement procedures in the MDLSM method.
The longitudinal dispersion coefficient (LDC) plays an important role in modeling the transport of pollutants and sediment in natural rivers. As a result of transportation processes, the concentration of pollutants changes along the river. Various studies have been conducted to provide simple equations for estimating LDC. In this study, machine learning methods, namely support vector regression, Gaussian process regression, M5 model tree (M5P) and random forest, and multiple linear regression were examined in predicting the LDC in natural streams. Data sets from 60 rivers around the world with different hydraulic and geometric features were gathered to develop models for LDC estimation. Statistical criteria, including correlation coefficient (CC), root mean squared error (RMSE) and mean absolute error (MAE), were used to scrutinize the models. The LDC values estimated by these models were compared with the corresponding results of common empirical models. The Taylor chart was used to evaluate the models and the results showed that among the machine learning models, M5P had superior performance, with CC of 0.823, RMSE of 454.9 and MAE of 380.9. The model of Sahay and Dutta, with CC of 0.795, RMSE of 460.7 and MAE of 306.1, gave more precise results than the other empirical models. The main advantage of M5P models is their ability to provide practical formulae. In conclusion, the results proved that the developed M5P model with simple formulations was superior to other machine learning models and empirical models; therefore, it can be used as a proper tool for estimating the LDC in rivers.
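The three statistical criteria used above to scrutinize the models (CC, RMSE, and MAE) are standard and easy to state in code. The LDC values below are hypothetical, not from the study's data set:

```python
import math

def cc(obs, pred):
    """Pearson correlation coefficient between observed and predicted values."""
    n = len(obs)
    mo, mp = sum(obs) / n, sum(pred) / n
    cov = sum((o - mo) * (p - mp) for o, p in zip(obs, pred))
    so = math.sqrt(sum((o - mo) ** 2 for o in obs))
    sp = math.sqrt(sum((p - mp) ** 2 for p in pred))
    return cov / (so * sp)

def rmse(obs, pred):
    """Root mean squared error."""
    return math.sqrt(sum((o - p) ** 2 for o, p in zip(obs, pred)) / len(obs))

def mae(obs, pred):
    """Mean absolute error."""
    return sum(abs(o - p) for o, p in zip(obs, pred)) / len(obs)

# Hypothetical LDC values (m^2/s): observed vs. model predictions.
obs = [30.0, 120.0, 250.0, 480.0, 900.0]
pred = [42.0, 100.0, 270.0, 450.0, 950.0]
scores = (cc(obs, pred), rmse(obs, pred), mae(obs, pred))
```

A higher CC and lower RMSE/MAE indicate a better model, which is exactly how M5P (CC 0.823, RMSE 454.9, MAE 380.9) is ranked against the other candidates in the study.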
Piping erosion is one form of water erosion that leads to significant changes in the landscape and environmental degradation. In the present study, we evaluated piping erosion modeling in the Zarandieh watershed of Markazi province in Iran based on random forest (RF), support vector machine (SVM), and Bayesian generalized linear model (Bayesian GLM) machine learning algorithms. For this purpose, given the importance of various geo-environmental and soil properties in the evolution and creation of piping erosion, 18 variables were considered for modeling the piping erosion susceptibility in the Zarandieh watershed. A total of 152 piping erosion points were identified in the study area and divided into training (70%) and validation (30%) sets for modeling. The area under the curve (AUC) was used to assess the efficiency of the RF, SVM, and Bayesian GLM models. The piping erosion susceptibility results indicated that all three models had high efficiency in the testing step, as shown by AUC values of 0.9 for RF, 0.88 for SVM, and 0.87 for Bayesian GLM. Altitude, pH, and bulk density were the variables with the greatest influence on piping erosion susceptibility in the Zarandieh watershed. This result indicates that geo-environmental and soil chemical variables are accountable for the expansion of piping erosion in the Zarandieh watershed.
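The AUC used for validation can be computed directly as the Mann-Whitney statistic: the probability that a randomly chosen erosion point receives a higher susceptibility score than a randomly chosen non-erosion point. The scores and labels below are hypothetical:

```python
def auc(scores, labels):
    """AUC as the Mann-Whitney statistic: the probability that a random
    positive outscores a random negative, counting ties as one half."""
    pos = [s for s, l in zip(scores, labels) if l == 1]
    neg = [s for s, l in zip(scores, labels) if l == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical susceptibility scores for validation points
# (label 1 = observed piping erosion, 0 = none).
labels = [1, 1, 1, 0, 0, 0, 1, 0]
scores = [0.9, 0.8, 0.75, 0.7, 0.3, 0.2, 0.6, 0.65]
value = auc(scores, labels)
```

An AUC of 0.5 corresponds to random ranking and 1.0 to perfect separation, which puts the study's values of 0.87 to 0.9 in the "high efficiency" range.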
In this study, the machine learning methods of artificial neural networks (ANNs), least squares support vector machines (LSSVM), and neuro-fuzzy systems are used to develop prediction models for the thermal performance of a photovoltaic-thermal solar collector (PV/T). In the proposed models, the inlet temperature, flow rate, heat, solar radiation, and sun heat are considered as input variables. The data set was extracted through experimental measurements from a novel solar collector system. Different analyses are performed to examine the credibility of the introduced models and to evaluate their performance. The proposed LSSVM model outperformed the ANFIS and ANN models. The LSSVM model is particularly suitable when laboratory measurements are costly and time-consuming, or when obtaining such values requires sophisticated interpretation.
The seismic vulnerability assessment of existing reinforced concrete (RC) buildings is a significant source of disaster mitigation plans and rescue services. Different countries have evolved various Rapid Visual Screening (RVS) techniques and methodologies to deal with the devastating consequences of earthquakes on the structural characteristics of buildings and on human casualties. Artificial intelligence (AI) methods, such as machine learning (ML) algorithm-based methods, are increasingly used in various scientific and technical applications. The investigation of these techniques in civil engineering applications has shown encouraging results and reduced human intervention, including uncertainties and biased judgment. In this study, several well-known non-parametric algorithms are investigated for RVS using a dataset covering different earthquakes. Moreover, the methodology encourages the possibility of examining the buildings’ vulnerability based on factors related to the buildings’ importance and exposure. In addition, a web-based application built on Django is introduced. The interface is designed to ease the seismic vulnerability investigation in real time. The concept was validated using two case studies, and the achieved results showed the potential efficiency of the proposed approach.
We present an extended finite element formulation for the dynamic fracture of piezoelectric materials. The method is developed in the context of linear elastic fracture mechanics and applied to mode-I and mixed-mode fracture for quasi-steady cracks. An implicit time integration scheme is exploited. The results are compared to results obtained with the boundary element method and show excellent agreement.
This paper presents a strain smoothing procedure for the extended finite element method (XFEM). The resulting “edge-based” smoothed extended finite element method (ESm-XFEM) is tailored to linear elastic fracture mechanics and, in this context, designed to outperform the standard XFEM. In the XFEM, the displacement-based approximation is enriched by the Heaviside and asymptotic crack-tip functions using the framework of partition of unity. This eliminates the need for mesh alignment with the crack and for re-meshing as the crack evolves. Edge-based smoothing (ES) relies on a generalized smoothing operation over smoothing domains associated with the edges of simplex meshes, and produces a softening effect leading to a close-to-exact stiffness, “super-convergence” and “ultra-accurate” solutions. The present method takes advantage of both the ES-FEM and the XFEM. Thanks to the use of strain smoothing, the need to subdivide elements intersected by discontinuities and to integrate the (singular) derivatives of the approximation functions is suppressed by transforming interior integration into boundary integration. Numerical examples show that the proposed method significantly improves the accuracy of stress intensity factors and achieves a near-optimal convergence rate in the energy norm, even without geometrical enrichment or blending correction.
Calculating the solubility of the hydrocarbon components of natural gases is one of the important issues for operational work in petroleum and chemical engineering. In this work, a novel solubility estimation tool is proposed for hydrocarbon gases (including methane, ethane, propane, and butane) in aqueous electrolyte solutions, based on the extreme learning machine (ELM) algorithm. Comparing the ELM outputs with a comprehensive real databank of 1175 solubility points yielded R-squared values of 0.985 and 0.987 for the training and testing phases, respectively. Furthermore, a visual comparison of estimated and actual hydrocarbon solubility confirms the ability of the proposed solubility model. Additionally, a sensitivity analysis was performed on the input variables of the model to identify their impact on hydrocarbon solubility. Such a comprehensive and reliable study can help engineers and scientists to determine important thermodynamic properties, which are key factors in optimizing and designing industrial units such as refineries and petrochemical plants.
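The core of the ELM algorithm is simple: hidden-layer weights are drawn at random and kept fixed, and only the output weights are obtained in closed form by least squares. A minimal sketch on synthetic data follows; the inputs and target are invented stand-ins, not the solubility databank:

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic stand-in for the solubility data: three scaled inputs
# (think temperature, pressure, salinity) and a smooth target.
X = rng.uniform(0.0, 1.0, size=(300, 3))
y = np.sin(2.0 * X[:, 0]) + 0.5 * X[:, 1] ** 2 - 0.3 * X[:, 2]

# ELM: the hidden layer is random and fixed; "training" reduces to a
# linear least-squares solve for the output weights.
n_hidden = 50
W = rng.normal(size=(3, n_hidden))
b = rng.normal(size=n_hidden)
H = np.tanh(X @ W + b)            # hidden-layer feature matrix
beta = np.linalg.pinv(H) @ y      # Moore-Penrose least-squares solution

pred = H @ beta
r2 = 1.0 - np.sum((y - pred) ** 2) / np.sum((y - np.mean(y)) ** 2)
```

Because no iterative weight tuning is needed, the fit is obtained in a single linear-algebra step, which is the main appeal of ELM over backpropagation-trained networks.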
FCS-MBFLEACH: Designing an Energy-Aware Fault Detection System for Mobile Wireless Sensor Networks
(2019)
Wireless sensor networks (WSNs) consist of large numbers of sensor nodes densely distributed over a geographical region in a completely randomized manner for monitoring, identifying, and analyzing physical events. The crucial challenge in wireless sensor networks is the very high dependence of the sensor nodes on limited, non-rechargeable battery power to exchange information wirelessly, which makes managing and monitoring these nodes for abnormal changes very difficult. These anomalies arise from faults, including hardware and software faults, and from attacks by intruders, all of which affect the comprehensiveness of the data collected by wireless sensor networks. Hence, crucial measures should be taken to detect faults early in the network, despite the limitations of the sensor nodes. Machine learning methods offer solutions that can be used to detect sensor node faults in the network. The purpose of this study is to use several classification methods, namely MB-FLEACH, one-class support vector machine (SVM), fuzzy one-class, and a combination of SVM and FCS-MBFLEACH, to compute the fault detection accuracy with different node densities under two scenarios in regions of interest. It should be noted that, to date, no super cluster head (SCH) selection has been performed to detect node faults in the network. The simulation outcomes demonstrate that the FCS-MBFLEACH method has the best performance in terms of fault detection accuracy, false-positive rate (FPR), average remaining energy, and network lifetime compared to the other classification methods.
The present article proposes an alternative way to compute the torsional stiffness based on three-dimensional continuum mechanics instead of applying a specific theory of torsion. A thin, representative beam slice is discretized by solid finite elements. Adequate boundary conditions and coupling conditions are integrated into the numerical model to obtain a proper answer on the torsion behaviour, thus on shear center, shear stress and torsional stiffness. This finite element approach only includes general assumptions of beam torsion which are independent of cross-section geometry. These assumptions essentially are: no in-plane deformation, constant torsion and free warping. Thus it is possible to achieve numerical solutions of high accuracy for arbitrary cross-sections. Due to the direct link to three-dimensional continuum mechanics, it is possible to extend the range of torsion analysis to sections which are composed of different materials or even to heterogeneous beams on a high scale of resolution. A brief study follows to validate the implementation and results are compared to analytical solutions.
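An analytical reference of the kind used for such validation is the classical Saint-Venant series solution for the torsion constant of a solid rectangular cross-section; for a square it reproduces the tabulated coefficient β ≈ 0.1406 in J = βa⁴. A short check:

```python
import math

def torsion_constant(a, b, terms=20):
    """Saint-Venant torsion constant J of a solid a x b rectangle (a >= b),
    evaluated from the classical series solution."""
    s = sum(math.tanh(math.pi * (2 * n + 1) * a / (2 * b)) / (2 * n + 1) ** 5
            for n in range(terms))
    return a * b ** 3 * (1.0 / 3.0 - 64.0 * b / (math.pi ** 5 * a) * s)

# Square cross-section: J = beta * a^4 with tabulated beta ~ 0.1406.
beta = torsion_constant(1.0, 1.0)
```

The series converges rapidly (the tanh terms saturate and the 1/(2n+1)^5 factor decays fast), so a handful of terms already matches handbook values; a solid-element model of a thin beam slice can be validated against such closed-form results.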
For this paper, the problem of energy/voltage management in photovoltaic (PV)/battery systems was studied, and a new fractional-order control system on the basis of type-3 (T3) fuzzy logic systems (FLSs) was developed. New fractional-order learning rules are derived for the tuning of T3-FLSs such that stability is ensured. In addition, using fractional-order calculus, robustness was studied versus dynamic uncertainties, perturbations of irradiation and temperature, and abrupt faults in output loads, and new compensators were proposed accordingly. In several examinations under difficult operating conditions, such as random temperature, variable irradiation, and abrupt changes in output load, the capability of the proposed controller was verified. In addition, in comparison with other methods, such as proportional-integral-derivative (PID), sliding mode control (SMC), passivity-based control (PBC), and linear quadratic regulator (LQR) schemes, the superiority of the suggested method was demonstrated.
In recent years, substantial attention has been devoted to thermoelastic multifield problems and their numerical analysis. Thermoelasticity is one of the important categories of multifield problems which deals with the effect of mechanical and thermal disturbances on an elastic body. In other words, thermoelasticity encompasses the phenomena that describe the elastic and thermal behavior of solids and their interactions under thermo-mechanical loadings. Since providing an analytical solution for general coupled thermoelasticity problems is mathematically complicated, the development of alternative numerical solution techniques seems essential.
Due to the nature of numerical analysis methods, the presence of error in the results is inevitable; therefore, in any numerical simulation, the main concern is the accuracy of the approximation. There are different error estimation (EE) methods to assess the overall quality of a numerical approximation. In many real-life numerical simulations, not only the overall error, but also the local error or the error in a particular quantity of interest is of main interest. The error estimation techniques developed to evaluate the error in the quantity of interest are known as “goal-oriented” error estimation (GOEE) methods.
This project, for the first time, investigates the classical a posteriori error estimation and goal-oriented a posteriori error estimation in 2D/3D thermoelasticity problems. Generally, the a posteriori error estimation techniques can be categorized into two major branches of recovery-based and residual-based error estimators. In this research, application of both recovery- and residual-based error estimators in thermoelasticity are studied. Moreover, in order to reduce the error in the quantity of interest efficiently and optimally in 2D and 3D thermoelastic problems, goal-oriented adaptive mesh refinement is performed.
As the first application category, error estimation in classical thermoelasticity (CTE) is investigated. In the first step, an rh-adaptive thermo-mechanical formulation based on goal-oriented error estimation is proposed. The developed goal-oriented error estimation relies on different stress recovery techniques, i.e., the superconvergent patch recovery (SPR), L2-projection patch recovery (L2-PR), and weighted superconvergent patch recovery (WSPR). Moreover, a new adaptive refinement strategy (ARS) is presented that minimizes the error in a quantity of interest and refines the discretization such that the error is equally distributed in the refined mesh. The method is validated by numerous numerical examples where an analytical or reference solution is available.
After investigating error estimation in classical thermoelasticity and evaluating the quality of the presented error estimators, we extended the application of the developed goal-oriented error estimation and the associated adaptive refinement technique to classical fully coupled dynamic thermoelasticity. In this part, we present an adaptive method for coupled dynamic thermoelasticity problems based on goal-oriented error estimation. We use dimensionless variables in the finite element formulation, and for the time integration we employ the acceleration-based Newmark-β method. In this part, the SPR, L2-PR, and WSPR recovery methods are exploited to estimate the error in the quantity of interest (QoI). By using adaptive refinement in space, the error in the quantity of interest is minimized, and the discretization is refined such that the error is equally distributed in the refined mesh. We demonstrate the efficiency of this method by numerous numerical examples.
After studying the recovery-based error estimators, we investigated residual-based error estimation in thermoelasticity. In the last part of this research, we present a 3D adaptive method for thermoelastic problems based on goal-oriented error estimation, where the error is measured with respect to a pointwise quantity of interest. We developed a method for a posteriori error estimation and mesh adaptation based on the dual weighted residual (DWR) method, which relies on duality principles and involves the solution of an adjoint problem. Here, we consider the application of the derived estimator and mesh refinement to two- and three-dimensional (2D/3D) thermo-mechanical multifield problems. In this study, the goal is given by singular pointwise functions, such as the point value or a point-value derivative at a specific point of interest (PoI). An adaptive algorithm is adopted to refine the mesh so as to minimize the error in the quantity of interest.
The mesh adaptivity procedure based on the DWR method is performed by adaptive local h-refinement/coarsening with allowed hanging nodes. According to the proposed DWR method, the error contribution of each element is evaluated. In the refinement process, the contribution of each element to the goal error is considered as the mesh refinement criterion.
In this study, we substantiate the accuracy and performance of this method by several numerical examples with available analytical solutions. Here, 2D and 3D problems under thermo-mechanical loadings are considered as benchmark problems. To show how accurately the derived estimator captures the exact error in the evaluation of the pointwise quantity of interest, in all examples, considering the analytical solutions, the goal error effectivity index as a standard measure of the quality of an estimator is calculated. Moreover, in order to demonstrate the efficiency of the proposed method and show the optimal behavior of the employed refinement method, the results of different conventional error estimators and refinement techniques (e.g., global uniform refinement, Kelly, and weighted Kelly techniques) are used for comparison.
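The effectivity index, the ratio of estimated to exact error, can be illustrated on a 1D model problem with a known analytical solution. The sketch below uses linear finite elements for -u'' = π² sin(πx) with u(0) = u(1) = 0 (exact solution u = sin(πx)) and a simple nodal-averaging gradient recovery as a 1D stand-in for SPR-type recovery; for this smooth problem the index should be close to one.

```python
import numpy as np

ne = 20
nodes = np.linspace(0.0, 1.0, ne + 1)
h = nodes[1] - nodes[0]
gauss = (np.array([-1.0, 1.0]) / np.sqrt(3.0), np.array([1.0, 1.0]))

# Assemble K u = F for linear elements, consistent load via 2-point Gauss.
K = np.zeros((ne + 1, ne + 1))
F = np.zeros(ne + 1)
for e in range(ne):
    K[e:e + 2, e:e + 2] += np.array([[1.0, -1.0], [-1.0, 1.0]]) / h
    xm = 0.5 * (nodes[e] + nodes[e + 1])
    for xi, w in zip(*gauss):
        xg = xm + 0.5 * h * xi
        N = np.array([0.5 * (1 - xi), 0.5 * (1 + xi)])
        F[e:e + 2] += w * 0.5 * h * np.pi ** 2 * np.sin(np.pi * xg) * N

# Dirichlet boundary conditions u(0) = u(1) = 0.
K[0, :] = 0.0; K[0, 0] = 1.0; F[0] = 0.0
K[-1, :] = 0.0; K[-1, -1] = 1.0; F[-1] = 0.0
u = np.linalg.solve(K, F)

g = np.diff(u) / h                        # element-wise constant gradient
gstar = np.empty(ne + 1)                  # recovered nodal gradient
gstar[1:-1] = 0.5 * (g[:-1] + g[1:])      # patch average at interior nodes
gstar[0], gstar[-1] = g[0], g[-1]

est2 = exact2 = 0.0                       # squared energy-norm errors
for e in range(ne):
    xm = 0.5 * (nodes[e] + nodes[e + 1])
    for xi, w in zip(*gauss):
        xg = xm + 0.5 * h * xi
        gs = 0.5 * (1 - xi) * gstar[e] + 0.5 * (1 + xi) * gstar[e + 1]
        est2 += w * 0.5 * h * (gs - g[e]) ** 2
        exact2 += w * 0.5 * h * (np.pi * np.cos(np.pi * xg) - g[e]) ** 2

effectivity = np.sqrt(est2 / exact2)
```

An effectivity index near 1 means the estimator captures the exact error well; this is the same quality measure reported for the 2D/3D thermo-mechanical benchmarks in the thesis.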
The importance of modern simulation methods in the mechanical analysis of heterogeneous solids is presented in detail. It is noted that even for small bodies the required high-resolution analysis reaches the limits of today's computational power, in terms of memory demand as well as acceptable computational effort. A further problem is that the accuracy of geometrical modelling of heterogeneous bodies is frequently inadequate. The present work introduces a systematic combination and adaptation of grid-based methods for achieving an essentially higher resolution in the numerical analysis of heterogeneous solids. Grid-based methods are also ideally suited for developing efficient and numerically stable algorithms for flexible geometrical modeling. A key aspect is the uniform data management for a grid, which can be utilized to reduce the effort and complexity of almost all concerned methods. A new finite element program, called Mulgrido, was developed to realize this concept consistently and to test the proposed methods. Several disadvantages that generally result from grid discretizations are selectively corrected by modified methods. The present work is structured into a geometrical model, a mechanical model and a numerical model. The geometrical model includes digital image-based modeling and in particular several methods for the theory-based generation of inclusion-matrix models. Essential contributions concern variable shape, size distribution, separation checks and placement procedures of inclusions. The mechanical model prepares the fundamentals of continuum mechanics, homogenization and damage modeling for the subsequent numerical methods. The first topic of the numerical model introduces a special version of B-spline finite elements. These finite elements are entirely variable in the order k of the B-splines. For homogeneous bodies this means that the approximation quality can be scaled arbitrarily.
In addition, the multiphase finite element concept in combination with transition zones along material interfaces yields a valuable solution for heterogeneous bodies. As the formulation is element-based, the storage of a global stiffness matrix is avoided, such that the memory demand can be reduced substantially. This is possible in combination with iterative solver methods, which represent the second topic of the numerical model. Here, the focus lies on multigrid methods, where the number of operations required to solve a linear equation system increases only linearly with problem size. Moreover, for badly conditioned problems a substantial improvement is achieved by preconditioning. The third part of the numerical model discusses certain aspects of damage simulation which are closely related to the proposed grid discretization. The high efficiency of the linear analysis can be maintained for damage simulation. This is achieved by a damage-controlled sequentially linear iteration scheme. Finally, a study on the effective material behavior of heterogeneous bodies is presented. The influence of inclusion shapes in particular is examined. By means of more than one hundred thousand random geometrical arrangements in total, the effective material behavior is statistically analyzed and assessed.
Modern digital material approaches for the visualization and simulation of heterogeneous materials make it possible to investigate the behavior of complex multiphase materials, with their physically nonlinear material response, at various scales. However, these computational techniques require extensive hardware resources with respect to computing power and main memory to numerically solve large-scale discretized models in 3D. Due to the very high number of degrees of freedom, which may rapidly increase into the two-digit million range, the limited hardware resources must be utilized in the most efficient way to enable execution of the numerical algorithms in minimal computation time. Hence, in the field of computational mechanics, various methods and algorithms can lead to an optimized runtime behavior of nonlinear simulation models; several such approaches are proposed and investigated in this thesis.
Today, the numerical simulation of damage effects in heterogeneous materials is performed by the adaptation of multiscale methods. A consistent modeling in three-dimensional space with an appropriate discretization resolution on each scale (based on a hierarchical or concurrent multiscale model), however, still poses computational challenges with respect to the convergence behavior, the scale transition, and the solver performance of the weakly coupled problems. The computational efficiency and the distribution among available hardware resources (often based on a parallel hardware architecture) can be improved significantly. In the past years, high-performance computing (HPC) and graphics processing unit (GPU) based computation techniques were established for the investigation of scientific objectives. Their application results in the modification of existing and the development of new computational methods for the numerical implementation, which makes it possible to take advantage of massively clustered computer hardware resources. In the field of numerical simulation in materials science, e.g. in the investigation of damage effects in multiphase composites, the suitability of such models is often restricted by the number of degrees of freedom (d.o.f.s) in the three-dimensional spatial discretization. This proves to be difficult for the type of implementation method used for the nonlinear simulation procedure and, at the same time, has a great influence on memory demand and computational time.
In this thesis, a hybrid discretization technique has been developed for the three-dimensional discretization of a three-phase material, which respects the numerical efficiency of nonlinear (damage) simulations of these materials. The increase in computational efficiency is enabled by the improved scalability of the numerical algorithms. Consequently, substructuring methods for partitioning the hybrid mesh were implemented, tested, and adapted to the HPC computing framework, using several hundred CPU (central processing unit) nodes for building the finite element assembly. A memory-efficient, iterative, parallelized equation solver, combined with a special preconditioning technique for solving the underlying equation system, was modified and adapted to enable combined CPU- and GPU-based computations.
Hence, the author recommends applying the substructuring method to hybrid meshes, which respects the different material phases and their mechanical behavior and makes it possible to split the structure into elastic and inelastic parts. The consideration of the nonlinear material behavior, specified for the corresponding phase, is then limited to the inelastic domains only, which reduces the computing time of the nonlinear procedure. Due to the high numerical effort of such simulations, an alternative approach for the nonlinear finite element analysis, based on sequential linear analysis, was implemented with respect to scalable HPC. The incremental-iterative procedure in finite element analysis (FEA) during the nonlinear step was replaced by a sequence of linear FE analyses whenever damage occurred in critical regions, known in the literature as the saw-tooth approach. As a result, qualitative (smeared) crack initiation in 3D multiphase specimens has been simulated efficiently.
This thesis investigates the computer-aided simulation process for the operational vibration analysis of complex coupled systems. As part of the internal methods project “Absolute Values” of the BMW Group, the thesis deals with the analysis of structural dynamic interactions and excitation interactions. The overarching aim of the methods project is to predict the operational vibrations of engines.
In industrial development, simulations are usually used to analyze technical aspects (e. g. operational vibrations, strength, ...) of single components. The boundary conditions of submodels are mostly based on experience, so the interactions with neighboring components and systems are neglected. To obtain physically more realistic results while keeping simulations efficient, this work aims to support the engineer during the preprocessing phase with useful criteria.
First, suitable abstraction levels based on the existing literature are defined to identify structural dynamic interactions and excitation interactions of coupled systems. This makes it possible to separate the different effects of the coupled subsystems. On this basis, criteria are derived to assess the influence of interactions between the considered systems. These criteria can be used during the preprocessing phase and help the engineer to build efficient models with respect to the interactions with neighboring systems. The method was developed using several models of different complexity levels. Furthermore, its suitability for the industrial environment is demonstrated using the example of a current combustion engine.
Image Analysis Using Human Body Geometry and Size Proportion Science for Action Classification
(2020)
Gestures are one of the basic modes of human communication and are usually used to represent different actions. Automatic recognition of these actions forms the basis for solving more complex problems like human behavior analysis, video surveillance, event detection, and sign language recognition. Action recognition from images is a challenging task, as key information like temporal data, object trajectories, and optical flow is not available in still images. However, measuring the size of different regions of the human body, i.e., step size, arm span, and the lengths of the arm, forearm, and hand, provides valuable clues for the identification of human actions. In this article, a framework for the classification of human actions is presented in which humans are detected and localized through faster region-based convolutional neural networks followed by morphological image processing techniques. Furthermore, geometric features are extracted from the human blob and incorporated into the classification rules for six human actions: standing, walking, single-hand side wave, single-hand top wave, both-hands side wave, and both-hands top wave. The performance of the proposed technique has been evaluated using precision, recall, omission error, and commission error. The proposed technique has been comparatively analyzed in terms of overall accuracy with existing approaches, showing that it performs well in contrast to its counterparts.
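A rule-based classifier over blob-geometry features might look like the following purely hypothetical sketch. The feature names, thresholds, and rules are invented for illustration and are not the rules of the article.

```python
# Hypothetical rule-based action classification from blob-geometry features.
# All feature names and the stride threshold are invented; the article's
# actual rules use measured body-region sizes and proportions.
def classify_action(left_arm_up, right_arm_up,
                    left_arm_side, right_arm_side, stride_ratio):
    """Return one of the six action labels from boolean/ratio features."""
    if left_arm_up and right_arm_up:
        return "both hands top wave"
    if left_arm_side and right_arm_side:
        return "both hands side wave"
    if left_arm_up or right_arm_up:
        return "single-hand top wave"
    if left_arm_side or right_arm_side:
        return "single-hand side wave"
    if stride_ratio > 0.3:       # step size relative to body height
        return "walking"
    return "standing"
```

The ordering of the rules matters: two-handed gestures must be tested before single-handed ones, since the latter conditions are also true for the former.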
In this paper, we present an open-source code for the first-order and higher-order nonlocal operator method (NOM), including a detailed description of the implementation. The NOM is based on so-called support, dual-support, nonlocal operators, and an operator energy functional ensuring stability. The nonlocal operator is a generalization of the conventional differential operators. Combined with the method of weighted residuals and variational principles, the NOM establishes the residual and tangent stiffness matrix of the operator energy functional through simple matrix operations, without the need for shape functions as in other classical computational methods such as FEM. The NOM only requires the definition of the energy, which drastically simplifies its implementation. For the sake of conciseness, the implementation in this paper is focused on linear elastic solids, though the NOM can handle more complex nonlinear problems. The NOM is flexible and efficient for solving partial differential equations (PDEs), and it is also quite easy for readers to use the NOM and extend it to solve other complicated physical phenomena described by one or a set of PDEs. Finally, we present some classical benchmark problems, including the classical cantilever beam and the plate-with-a-hole problem, and we also extend the method to solve complicated problems, including phase-field fracture modeling and gradient elasticity materials.
This work describes an algorithm and corresponding software for incorporating general nonlinear multiple-point equality constraints in an implicit sparse direct solver. It is shown that direct addressing of sparse matrices is possible in general circumstances, circumventing the traditional linear or binary search for introducing (generalized) constituents into a sparse matrix. Nested and arbitrarily interconnected multiple-point constraints are introduced by processing multiplicative constituents with a built-in topological ordering of the resulting directed graph. A classification of discretization methods is performed, and some re-classified problems are described and solved under the proposed perspective. The dependence relations between solution methods, algorithms, and constituents become apparent. Fracture algorithms can be naturally cast in this framework. Solutions based on control equations are also directly incorporated as equality constraints. We show that arbitrary constituents can be used as long as the resulting directed graph is acyclic. It is also shown that graph partitions and orderings should be performed in the innermost part of the algorithm, a fact with some peculiar consequences. The core of our implicit code is described, specifically the new algorithms for direct access of sparse matrices (by means of the clique structure) and general constituent processing. It is demonstrated that the graph structures of the second derivatives of the equality constraints are cliques (or pseudo-elements) and are naturally included as such. A complete algorithm is presented which allows full automation of equality constraints, avoiding the need for pre-sorting. Verification applications in four distinct areas are shown: single and multiple rigid body dynamics, solution control, and computational fracture.
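The acyclicity requirement on the constraint graph amounts to processing constituents in topological order. A minimal sketch using Kahn's algorithm follows; the DOF names and example dependencies are made up for illustration.

```python
from collections import defaultdict, deque

# Sketch of the ordering requirement for nested multiple-point constraints:
# each constrained DOF may depend on other (possibly constrained) DOFs, and
# processing must follow a topological order of the resulting directed graph.
def topological_order(edges, nodes):
    """Kahn's algorithm; raises ValueError if the graph has a cycle."""
    indeg = {n: 0 for n in nodes}
    succ = defaultdict(list)
    for a, b in edges:            # a must be resolved before b
        succ[a].append(b)
        indeg[b] += 1
    queue = deque(n for n in nodes if indeg[n] == 0)
    order = []
    while queue:
        n = queue.popleft()
        order.append(n)
        for m in succ[n]:
            indeg[m] -= 1
            if indeg[m] == 0:
                queue.append(m)
    if len(order) != len(nodes):
        raise ValueError("cyclic constraint graph: cannot order constituents")
    return order

# invented example: u3 depends on u2, which depends on u0 and u1
edges = [("u1", "u2"), ("u0", "u2"), ("u2", "u3")]
order = topological_order(edges, ["u0", "u1", "u2", "u3"])
```

A cyclic dependency among constraints is rejected outright, mirroring the paper's condition that arbitrary constituents are admissible only while the directed graph remains acyclic.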
Although it is impractical to avert natural disasters, advances in simulation science and seismological studies make it possible to lessen the catastrophic damage. Many urban areas currently contain a large number of structures that are prone to damage by earthquakes, constructed without the guidance of a national seismic code, either before it existed or before it was enforced. For instance, in Istanbul, Turkey, a high-seismicity area, around 90% of buildings are substandard, which can be generalized to other earthquake-prone regions in Turkey. The reliability of this building stock with respect to earthquake-induced collapse is currently uncertain. Nonetheless, it is not feasible to perform a detailed seismic vulnerability analysis on each building, as this would be too complicated and expensive. This indicates the necessity of a reliable, rapid, and computationally easy method for seismic vulnerability assessment, commonly known as Rapid Visual Screening (RVS). In the RVS methodology, an observational survey of buildings is performed, and from the data collected during the visual inspection, a structural score is calculated without performing any structural calculations, to determine the expected damage of a building and whether it needs detailed assessment. Although this method saves time and resources, the evaluation process is dominated by vagueness and uncertainty due to the subjective/qualitative judgments of the experts performing the inspection. The vagueness can be handled adequately through fuzzy set theory, but crisp membership functions do not cover all sorts of uncertainty. In this study, a novel method for the rapid visual hazard safety assessment of buildings against earthquakes is introduced, in which an interval type-2 fuzzy logic system (IT2FLS) is used to cover the uncertainties.
In addition, the proposed method makes it possible to evaluate the earthquake risk of the building by considering factors related to the building's importance and exposure. A smartphone app prototype of the method has been introduced. For validation of the proposed method, two case studies have been selected; the results of the analysis demonstrate the robustness and efficiency of the proposed method.
Rapid Visual Screening (RVS) is a procedure that estimates structural scores for buildings and prioritizes their retrofit and upgrade requirements. Despite the speed and simplicity of RVS, many of the collected parameters are non-commensurable and include subjectivity due to visual observation. This can cause uncertainties in the evaluation, which motivates the use of a fuzzy-based method. This study proposes a novel RVS methodology based on an interval type-2 fuzzy logic system (IT2FLS) to set the priority of vulnerable buildings for detailed assessment while covering uncertainties and minimizing their effects during evaluation. The proposed method estimates the vulnerability of a building, in terms of a damage index, considering the number of stories, age of the building, plan irregularity, vertical irregularity, building quality, and peak ground velocity as inputs with a single output variable. The applicability of the proposed method has been investigated using a post-earthquake damage database of reinforced concrete buildings from the Bingöl and Düzce earthquakes in Turkey.
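The core idea of interval type-2 membership, where each input maps to a membership interval rather than a single crisp value, can be sketched as follows. The membership function shapes, parameters, and the linguistic label in the comment are invented for illustration and are not those of the study.

```python
import numpy as np

# Minimal sketch of interval type-2 fuzzy membership: the region between a
# lower and an upper membership function (the "footprint of uncertainty")
# replaces the crisp membership value of a type-1 fuzzy set.
def tri(x, a, b, c):
    """Triangular membership function with feet a, c and peak b."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def it2_membership(x, upper=(0.0, 5.0, 10.0), lower=(1.0, 5.0, 9.0), scale=0.8):
    """Return the [lower, upper] membership interval for input x."""
    mu_u = tri(x, *upper)
    mu_l = scale * tri(x, *lower)       # lower MF lies inside the upper MF
    return min(mu_l, mu_u), mu_u

lo, hi = it2_membership(3.0)   # e.g. an invented "medium number of stories" set
```

A type-1 system would commit to a single value between `lo` and `hi`; carrying the interval through the inference chain is what lets an IT2FLS represent the expert-judgment uncertainty described above.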
Bolted connections are widely employed in structures like transmission poles, wind turbines, and television (TV) towers. The behaviour of bolted connections is often complex and plays a significant role in the overall dynamic characteristics of the structure. The goal of this work is to conduct a fatigue lifecycle assessment of such a bolted connection block of a 193 m tall TV tower, for which 205 days of real measurement data have been obtained from the installed monitoring devices. Based on the recorded data, the best-fit stochastic wind distribution for 50 years, the decisive wind action, and the locations at which to carry out the fatigue analysis have been determined. A 3D beam model of the entire tower is developed to extract the nodal forces at the connection block location under various mean wind speeds; it is later coupled with a detailed finite element model of the connection block, with over three million degrees of freedom, to acquire stress histories on pre-selected bolts. The random stress histories are analysed using the rainflow counting algorithm (RCA), and the damage is estimated using the Palmgren-Miner damage accumulation law. A modification is proposed to integrate the loading-sequence effect into the RCA, which is otherwise ignored, and the differences between the two RCAs are investigated in terms of the accumulated damage.
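The damage-accumulation step can be sketched as follows, assuming cycle counts per stress-range bin have already been obtained (e.g., from a rainflow count). The Basquin S-N constants `C` and `m` are illustrative values, not parameters from the tower study.

```python
import numpy as np

# Palmgren-Miner linear damage accumulation: D = sum_i n_i / N(S_i), where
# n_i are the counted cycles in stress-range bin S_i and N(S) is the
# allowable number of cycles from an S-N curve.  Failure is predicted at
# D >= 1.  The S-N constants below are made up for the sketch.
def basquin_cycles_to_failure(stress_range, C=1e12, m=3.0):
    """Allowable cycles N(S) = C * S**(-m) (Basquin-type S-N curve)."""
    return C * stress_range ** (-m)

def miner_damage(stress_ranges, cycle_counts, **sn):
    N = basquin_cycles_to_failure(np.asarray(stress_ranges, float), **sn)
    return float(np.sum(np.asarray(cycle_counts, float) / N))

# example bins: (stress range in MPa, counted cycles over the observation period)
D = miner_damage([40.0, 80.0, 120.0], [1e5, 1e4, 1e3])
```

Being a linear sum, Miner's rule is insensitive to the order in which cycles occur, which is exactly the loading-sequence effect the proposed RCA modification aims to reintroduce.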
The gradual digitization of the architecture, engineering, and construction industry over the past fifty years has led to an extremely heterogeneous software environment, embodied today by the multitude of different digital tools and proprietary data formats used by the many specialists contributing to the design process of a construction project. While these projects become increasingly complex, the demands on financial efficiency and completion within tight schedules grow at the same time. The digital collaboration of project partners has been identified as one key to successfully dealing with these challenges. Yet currently, the numerous software applications and their individual views on the design process severely impede that collaboration.
One approach to establishing a unified basis for digital collaboration, regardless of the existing software heterogeneity, is a comprehensive digital building model contributed to by all project partners. This type of data management, known as building information modeling (BIM), has many benefits, yet its adoption is associated with many difficulties and thus proceeds only slowly. One aspect in the field of conflicting requirements on such a digital model is the cooperation of architects and structural engineers. Traditionally, these two disciplines use different abstractions of reality for their models, which in consequence lead to incompatible digital representations.
The onset of isogeometric analysis (IGA) promised to ease the discrepancy between design and analysis model representations. Yet that initial focus quickly shifted towards using these methods as a more powerful basis for numerical simulations. Furthermore, the isogeometric representation alone is not capable of solving the model abstraction problem. It is thus the intention of this work to contribute to an improved digital collaboration of architects and engineers by exploring an integrated analysis approach on the basis of a unified digital model and solid geometry expressed by splines. In the course of this work, an analysis framework is developed that utilizes such models to automatically conduct the numerical simulations commonly required in construction projects. In essence, this allows structural analysis results to be retrieved from BIM models in a fast and simple manner, thereby facilitating rapid design iterations and profound design feedback.
The BIM implementation Industry Foundation Classes (IFC) is reviewed with regard to its capabilities for representing the unified model. The current IFC schema strongly encourages the use of redundant model data, a major pitfall in digital collaboration. Additionally, it does not allow the geometry to be described by volumetric splines. As the pursued approach builds upon a single model for both architectural and structural design and furthermore requires solid geometry, necessary schema modifications are suggested.
Structural entities are modeled by volumetric NURBS patches, each of which constitutes an individual subdomain that, with regard to the analysis, is incompatible with the remaining full model. The resulting consequences for numerical simulation are elaborated in this work. The individual subdomains have to be weakly coupled, for which the mortar method is used. Different approaches to discretize the interface traction fields are implemented and their respective impact on the analysis results is evaluated. All necessary coupling conditions are automatically derived from the related geometry model.
The weak coupling procedure leads to a linear system of equations in saddle-point form, which, owing to the volumetric modeling, is large in size, and whose coefficient matrix has a high bandwidth due to the use of higher-degree basis functions. The peculiarities of the system require adapted solution methods that generally incur higher numerical costs than the standard procedures for symmetric positive-definite systems. Different methods to solve the specific system are investigated, and an efficient parallel algorithm is finally proposed.
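One common route for such saddle-point systems is elimination via the Schur complement of the symmetric positive-definite block. The following dense toy sketch illustrates the structure only; it is not the parallel algorithm developed in the thesis, and all matrices are random stand-ins.

```python
import numpy as np

# Saddle-point system  [K  B^T] [u  ]   [f]
#                      [B  0  ] [lam] = [g]
# with K the SPD stiffness block and B the (mortar-type) coupling rows.
rng = np.random.default_rng(1)
n, m = 6, 2
A = rng.standard_normal((n, n))
K = A @ A.T + n * np.eye(n)        # SPD stiffness block
B = rng.standard_normal((m, n))    # coupling constraints
f = rng.standard_normal(n)
g = rng.standard_normal(m)

# eliminate u: solve the small Schur system S lam = B K^-1 f - g first
Kinv_f = np.linalg.solve(K, f)
Kinv_Bt = np.linalg.solve(K, B.T)
S = B @ Kinv_Bt                    # Schur complement S = B K^-1 B^T
lam = np.linalg.solve(S, B @ Kinv_f - g)
u = np.linalg.solve(K, f - B.T @ lam)

# residuals against the full saddle-point system
res_u = K @ u + B.T @ lam - f
res_c = B @ u - g
```

The Schur complement is only `m x m` (one row per coupling constraint), which is why this elimination is attractive when the number of interface multipliers is small compared to the displacement unknowns.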
When the structural analysis model is derived from the unified model in the BIM data, it generally does not initially meet the discretization requirements necessary to obtain sufficiently accurate analysis results. The consequently necessary patch refinements must be controlled automatically to allow for an entirely automatic analysis procedure. For that purpose, an empirical refinement scheme based on the geometrical and, possibly, mechanical properties of the specific entities is proposed. The level of refinement may be selectively manipulated by the structural engineer in charge. Furthermore, a Zienkiewicz-Zhu type error estimator is adapted for use with isogeometric analysis results. It is shown that this estimator can also be used to steer an adaptive refinement procedure.
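A rough 1D sketch of the Zienkiewicz-Zhu idea follows: the elementwise gradient of a piecewise-linear solution is compared against a smoother "recovered" gradient obtained by nodal averaging, and the mismatch per element serves as the error indicator. The sampled solution and mesh are made up; the thesis applies the estimator to isogeometric results, not to this linear toy.

```python
import numpy as np

# Zienkiewicz-Zhu type indicator on a 1D mesh: eta_K = ||G(u_h) - u_h'||_K,
# with G(u_h) a nodal-averaged (recovered) gradient.
x = np.linspace(0.0, 1.0, 11)
u = x ** 3                                  # sampled "FE" solution
h = np.diff(x)
grad = np.diff(u) / h                       # constant gradient per element

# recovery: average the two adjacent element gradients at each interior node
g_node = np.empty_like(x)
g_node[1:-1] = 0.5 * (grad[:-1] + grad[1:])
g_node[0], g_node[-1] = grad[0], grad[-1]   # one-sided values at the ends

# indicator: mismatch of recovered vs computed gradient on each element
g_mid = 0.5 * (g_node[:-1] + g_node[1:])    # recovered gradient at midpoints
eta = np.abs(g_mid - grad) * np.sqrt(h)
```

Elements with large `eta` are the candidates for refinement; the recovered gradient stands in for the unknown exact one, which is what makes the estimator computable from the numerical solution alone.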
The 20th International Conference on the Applications of Computer Science and Mathematics in Architecture and Civil Engineering will be held at the Bauhaus University Weimar from 20 to 22 July 2015. Architects, computer scientists, mathematicians, and engineers from all over the world will meet in Weimar for an interdisciplinary exchange of experiences and to report on and discuss their results in research, development, and practice. The conference covers a broad range of research areas: numerical analysis, function theoretic methods, partial differential equations, continuum mechanics, engineering applications, coupled problems, computer science, and related topics. Several plenary lectures in the aforementioned areas will take place during the conference.
We invite architects, engineers, designers, computer scientists, mathematicians, planners, project managers, and software developers from business, science and research to participate in the conference!
Piezoelectric materials are used in several applications as sensors and actuators, where they experience high stress and electric field concentrations, as a result of which they may fail due to fracture. Though there are many analytical and experimental works on piezoelectric fracture mechanics, there are very few studies on damage detection, which is an interesting way to prevent the failure of these ceramics.
An iterative method to treat the inverse problem of detecting cracks and voids in piezoelectric structures is proposed. The extended finite element method (XFEM) is employed for solving the inverse problem, as it allows the use of a single regular mesh for a large number of iterations with different flaw geometries.
First, minimization of the cost function is performed by the Multilevel Coordinate Search (MCS) method. The XFEM-MCS methodology is applied to two-dimensional electromechanical problems, where the flaws considered are straight cracks and elliptical voids. Then, a numerical method based on a combination of the classical shape derivative and the level set method for front propagation, as used in structural optimization, is utilized to minimize the cost function. The results obtained show that the XFEM-level set methodology is able to effectively determine the number of voids in a piezoelectric structure and their corresponding locations.
The XFEM-level set methodology is improved to solve the inverse problem of detecting inclusion interfaces in a piezoelectric structure. The material interfaces are implicitly represented by level sets, which are identified by applying regularisation using total variation penalty terms. The formulation is presented for three-dimensional structures, and inclusions made of different materials are detected by using multiple level sets. The results obtained show that the proposed iterative procedure can determine the location and approximate shape of material subdomains in the presence of higher noise levels.
Piezoelectric nanostructures exhibit size-dependent properties because of surface elasticity and surface piezoelectricity. Initially, a study is performed to understand the influence of surface elasticity on the optimization of nanoscale elastic beams. The boundary of the nanostructure is implicitly represented by a level set function, which is considered the design variable in the optimization process. Two objective functions are chosen for the numerical examples: minimizing the total potential energy of a nanostructure subject to a material volume constraint, and minimizing the least-square error compared to a target displacement. The numerical examples demonstrate the importance of size and aspect ratio in determining how surface effects impact the optimized topology of nanobeams.
Finally, a conventional cantilever energy harvester with a piezoelectric nano layer is analysed. The presence of surface piezoelectricity in nano beams and nano plates leads to an increase in the electromechanical coupling coefficient. Topology optimization of these piezoelectric structures in an energy harvesting device, to further increase the energy conversion, is performed using an appropriately modified XFEM-level set algorithm.
Thin-walled cylindrical composite shell structures are often applied in aerospace for lighter and cheaper launcher transport systems. These structures are sensitive to geometrical imperfections and are prone to buckling under axial compression. Today, the design is based on NASA guidelines from the 1960s [1], using a conservative lower-bound curve embodying many experimental results of that time. It is well known that the advantages and different characteristics of composites, as well as the evolution of manufacturing standards, are not considered appropriately in this outdated approach. The DESICOS project was initiated to provide new design guidelines exploiting the advantages of composites and to allow further weight reduction of space structures by guaranteeing a more precise and robust design.
It is therefore necessary, among other things, to understand how a cutout of different dimensions affects the buckling load of a thin-walled cylindrical shell structure in combination with initial geometric imperfections. This work is intended to identify a ratio between the characteristic dimension of the cutout (in this case the cutout diameter) and the characteristic dimension of the structure (in this case the cylinder radius) that can be used to tell whether the buckling behavior is dominated by initial imperfections or by the cutout.
The design and application of high-performance materials demand extensive knowledge of the material's damage behavior, which significantly depends on the meso- and microstructural complexity. Numerical simulations of crack growth on multiple length scales are promising tools for understanding the damage phenomena in complex materials. In polycrystalline materials, it has been observed that grain boundary decohesion is one important mechanism leading to micro-crack initiation. Following this observation, the paper presents a polycrystal mesoscale model consisting of grains with orthotropic material behavior and cohesive interfaces along grain boundaries, which is able to reproduce crack initiation and propagation along grain boundaries in polycrystalline materials. Given the importance of modeling the geometry of the grain structure, an advanced Voronoi algorithm is proposed to generate realistic polycrystalline material structures based on measured grain size distributions. The polycrystal model is applied to investigate crack initiation and propagation in statically loaded representative volume elements of aluminum on the mesoscale, without the need to define initial damage. Future research is planned to include the mesoscale model in a multiscale model for the damage analysis of polycrystalline materials.
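The basic principle of Voronoi-based grain generation can be sketched by assigning each pixel of a grid to its nearest seed point, yielding one grain per seed. Matching a measured grain size distribution, as the advanced algorithm in the paper does, is beyond this sketch; seed count and resolution are arbitrary.

```python
import numpy as np

# Simplified 2D Voronoi tessellation on a pixel grid: scatter seed points
# in the unit square and label every pixel with its nearest seed, so each
# seed's region is one "grain".
rng = np.random.default_rng(42)
n_grains, res = 20, 64
seeds = rng.random((n_grains, 2))

x = (np.arange(res) + 0.5) / res            # pixel-center coordinates
X, Y = np.meshgrid(x, x)
pix = np.stack([X.ravel(), Y.ravel()], axis=1)          # (res*res, 2)
d2 = ((pix[:, None, :] - seeds[None, :, :]) ** 2).sum(-1)
labels = d2.argmin(axis=1).reshape(res, res)            # grain id per pixel
```

In a mesoscale model, each labeled region would then receive its own (e.g., randomly oriented orthotropic) material data, with cohesive interfaces inserted along the boundaries between differently labeled regions.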
This thesis addresses an adaptive higher-order method based on a Geometry Independent Field approximaTion (GIFT) of polynomial/rational splines over hierarchical T-meshes (PHT/RHT-splines).
In isogeometric analysis, the basis functions used for constructing geometric models in computer-aided design (CAD) are also employed to discretize the partial differential equations (PDEs) for numerical analysis. Non-Uniform Rational B-Splines (NURBS) are the most commonly used basis functions in CAD. However, they may not be ideal for numerical analysis where local refinement is required.
The alternative method GIFT deploys different splines for geometry and numerical analysis. NURBS are utilized for the geometry representation, while for the field solution, PHT/RHT-splines are used. PHT-splines not only inherit the useful properties of B-splines and NURBS, but also possess the capabilities of local refinement and hierarchical structure. The smooth basis function properties of PHT-splines make them suitable for analysis purposes. While most problems considered in isogeometric analysis can be solved efficiently when the solution is smooth, many non-trivial problems have rough solutions. For example, this can be caused by the presence of re-entrant corners in the domain. For such problems, a tensor-product basis (as in the case of NURBS) is less suitable for resolving the singularities that appear since refinement propagates throughout the computational domain. Hierarchical bases and local refinement (as in the case of PHT-splines) allow for a more efficient way to resolve these singularities by adding more degrees of freedom where they are necessary. In order to drive the adaptive refinement, an efficient recovery-based error estimator is proposed in this thesis. The estimator produces a recovery solution which is a more accurate approximation than the computed numerical solution. Several two- and three-dimensional numerical investigations with PHT-splines of higher order and continuity prove that the proposed method is capable of obtaining results with higher accuracy, better convergence, fewer degrees of freedom and less computational cost than NURBS for smooth solution problems. The adaptive GIFT method utilizing PHT-splines with the recovery-based error estimator is used for solutions with discontinuities or singularities where adaptive local refinement in particular domains of interest achieves higher accuracy with fewer degrees of freedom. 
The method is also shown to handle complicated multi-patch domains in two- and three-dimensional problems, outperforming uniform refinement in terms of degrees of freedom and computational cost.
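One common way a recovery-based estimator drives adaptive local refinement is via Dörfler (bulk) marking: the smallest set of elements carrying a fixed fraction of the total estimated error is refined. The sketch below uses invented indicator values; the thesis's estimator and refinement rules for PHT-splines are not reproduced here.

```python
import numpy as np

# Doerfler marking: given per-element indicators eta_K (e.g. from a
# recovery-based estimator comparing recovered and computed solutions),
# select the fewest elements whose squared indicators sum to at least
# theta times the total squared error.
def doerfler_mark(eta, theta=0.5):
    order = np.argsort(eta)[::-1]          # largest indicators first
    cumulative = np.cumsum(eta[order] ** 2)
    total = cumulative[-1]
    n_mark = int(np.searchsorted(cumulative, theta * total)) + 1
    return order[:n_mark]

eta = np.array([0.02, 0.30, 0.05, 0.01, 0.25, 0.04])   # invented values
marked = doerfler_mark(eta, theta=0.5)
```

Raising `theta` marks more elements per step (fewer, more expensive iterations); lowering it concentrates refinement where the estimated error is largest, which is exactly what allows singularities to be resolved with few additional degrees of freedom.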