500 Natural sciences and mathematics
In this paper, an artificial neural network is implemented to predict the thermal conductivity ratio of TiO2-Al2O3/water nanofluid. TiO2-Al2O3/water, an innovative type of nanofluid, was synthesized by the sol–gel method. The results indicated that 1.5 vol.% of nanofluid enhanced the thermal conductivity by up to 25%. The heat transfer coefficient increased linearly with nanoparticle concentration, but its variation with temperature was nonlinear. It should be noted that an increase in concentration may cause the particles to agglomerate, which reduces the thermal conductivity. An increase in temperature also increases the thermal conductivity, due to stronger Brownian motion and more frequent particle collisions. To predict the thermal conductivity of TiO2-Al2O3/water nanofluid as a function of volumetric concentration and temperature, self-organizing map (SOM) and back-propagation Levenberg-Marquardt (BP-LM) algorithms were used. Based on the results obtained, these algorithms can be considered an exceptional tool for predicting thermal conductivity. The correlation coefficients were 0.938 and 0.98 for the SOM and BP-LM algorithms, respectively, which is highly acceptable.
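As a rough illustration of the BP-LM idea described above, the sketch below trains a tiny one-hidden-layer network with a basic Levenberg-Marquardt loop on synthetic (temperature, concentration) data; the data, network size, and damping schedule are all assumptions for demonstration, not the study's experimental setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic training data (assumed): ratio grows with concentration and temperature.
T = rng.uniform(20, 60, 40)          # temperature [deg C]
phi = rng.uniform(0.0, 1.5, 40)      # concentration [vol.%]
y = 1.0 + 0.15 * phi + 0.002 * T     # assumed smooth target ratio
X = np.column_stack([T / 60.0, phi / 1.5])  # normalized inputs

H = 5                                # hidden neurons
n_w = 2 * H + H + H + 1              # weights + biases, flattened

def forward(w, X):
    W1 = w[:2 * H].reshape(2, H)
    b1 = w[2 * H:3 * H]
    W2 = w[3 * H:4 * H]
    b2 = w[4 * H]
    h = np.tanh(X @ W1 + b1)
    return h @ W2 + b2

def residuals(w):
    return forward(w, X) - y

w = rng.normal(scale=0.5, size=n_w)
rmse0 = np.sqrt(np.mean(residuals(w) ** 2))   # error before training
mu = 1e-2                                     # LM damping factor
for _ in range(50):
    r = residuals(w)
    # Finite-difference Jacobian of the residual vector
    J = np.empty((len(r), n_w))
    eps = 1e-6
    for j in range(n_w):
        wp = w.copy()
        wp[j] += eps
        J[:, j] = (residuals(wp) - r) / eps
    # LM step: (J^T J + mu I) dw = -J^T r
    A = J.T @ J + mu * np.eye(n_w)
    dw = np.linalg.solve(A, -J.T @ r)
    if np.sum(residuals(w + dw) ** 2) < np.sum(r ** 2):
        w = w + dw
        mu *= 0.7                    # success: trust the Gauss-Newton step more
    else:
        mu *= 2.0                    # failure: increase damping

rmse = np.sqrt(np.mean(residuals(w) ** 2))
print(round(float(rmse), 4))
```

The accept/reject step makes the training error monotonically non-increasing, which is the practical appeal of LM over plain gradient descent for small networks.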
Pressure fluctuations beneath hydraulic jumps potentially endanger the stability of stilling basins. This paper deals with the mathematical modeling of the results of laboratory-scale experiments to estimate the extreme pressures. Experiments were carried out on a smooth stilling basin underneath free hydraulic jumps downstream of an Ogee spillway. From the probability distribution of measured instantaneous pressures, pressures with different probabilities could be determined. It was verified that the maximum pressure fluctuations, and the negative pressures, are located near the spillway toe, while the minimum pressure fluctuations are located downstream of the hydraulic jumps. It was possible to assess the cumulative curves of pressure data related to the characteristic points along the basin and to different Froude numbers. To benchmark the results, the dimensionless forms of the statistical parameters, including the mean pressures (P*m), the standard deviations of the pressure fluctuations (σ*X), the pressures with different non-exceedance probabilities (P*k%), and the statistical coefficient of the probability distribution (Nk%), were assessed. It was found that an existing method can be used to interpret the present data, and the pressure distribution in similar conditions, by using new second-order fractional relationships for σ*X and Nk%. The values of the Nk% coefficient indicated a single mean value for each probability.
Along with environmental pollution, urban planning has been connected to public health. Research indicates that the quality of built environments plays an important role in reducing mental disorders and improving overall health. The structure and shape of the city are considered one of the factors influencing happiness and health in urban communities and the type of daily activities of citizens. The aim of this study was to promote physical activity in the main structure of the city via urban design, in a way that the main form and morphology of the city can encourage citizens to move around and be physically active within the city. Based on a literature review, functional, physical, cultural-social, and perceptual-visual features are regarded as the most important and effective criteria for increasing physical activity in urban spaces. The environmental quality of urban spaces and its role in the physical activities of citizens was assessed using a questionnaire and the analytical network process (ANP) of structural equation modeling. Further, the space syntax method was utilized to evaluate the role of the spatial integration of urban spaces in improving physical activity. Based on the results, consideration of functional diversity, spatial flexibility and integration, security, and the aesthetic and visual quality of urban spaces plays an important role in improving the physical health of citizens in urban spaces. Further, more physical activity, including motivation for walking and a sense of public health and happiness, was observed in streets having higher linkage and space syntax indexes with their surrounding texture.
Synergistic Framework for Analysis and Model Assessment in Bridge Aerodynamics and Aeroelasticity
(2020)
Wind-induced vibrations often represent a major design criterion for long-span bridges. This work deals with the assessment and development of models for aerodynamic and aeroelastic analyses of long-span bridges.
Computational Fluid Dynamics (CFD) and semi-analytical aerodynamic models are employed to compute the bridge response due to both turbulent and laminar free-stream. For the assessment of these models, a comparative methodology is developed that consists of two steps, a qualitative and a quantitative one. The first, qualitative, step involves an extension of an existing approach based on Category Theory and its application to the field of bridge aerodynamics. Initially, the approach is extended to consider model comparability and completeness. Then, the complexity of the CFD model and of twelve semi-analytical models is evaluated based on their mathematical constructions, yielding a diagrammatic representation of model quality.
In the second, quantitative, step of the comparative methodology, the discrepancy of a system response quantity for time-dependent aerodynamic models is quantified using comparison metrics for time-histories. Nine metrics are established on a uniform basis to quantify the discrepancies in local and global signal features that are of interest in bridge aerodynamics. These signal features involve quantities such as phase, time-varying frequency and magnitude content, probability density, non-stationarity, and nonlinearity.
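To make the idea of time-history comparison metrics concrete, here is a hedged sketch of two such metrics (a magnitude discrepancy and a phase lag) evaluated on synthetic signals; the nine metrics established in the thesis are defined differently and more carefully, so the definitions below are illustrative assumptions only.

```python
import numpy as np

dt = 0.01
t = np.arange(0, 10, dt)
ref = np.sin(2 * np.pi * 1.0 * t)                    # reference response
sim = 0.9 * np.sin(2 * np.pi * 1.0 * (t - 0.05))     # lagged, scaled model output

def magnitude_metric(a, b):
    """Relative discrepancy of root-mean-square magnitude."""
    rms_a = np.sqrt(np.mean(a ** 2))
    rms_b = np.sqrt(np.mean(b ** 2))
    return abs(rms_b - rms_a) / rms_a

def phase_metric(a, b, dt):
    """Time lag (in seconds) maximizing the cross-correlation of b against a."""
    c = np.correlate(b, a, mode="full")
    lag = np.argmax(c) - (len(a) - 1)
    return lag * dt

print(round(magnitude_metric(ref, sim), 3))  # amplitude discrepancy
print(round(phase_metric(ref, sim, dt), 3))  # recovered time lag
```

Each metric isolates one signal feature, so a model can score well on magnitude while failing on phase, which is exactly why a set of complementary metrics is needed.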
The two-dimensional (2D) Vortex Particle Method (VPM) is used for the discretization of the Navier-Stokes equations, including a pseudo-three-dimensional (Pseudo-3D) extension within an existing CFD solver. The Pseudo-3D Vortex Method considers the 3D structural behavior for aeroelastic analyses by positioning 2D fluid strips along a line-like structure. A novel turbulent Pseudo-3D Vortex Method is developed by combining the laminar Pseudo-3D VPM and a previously developed 2D method for the generation of free-stream turbulence. Using analytical derivations, it is shown that the fluid velocity correlation is maintained between the CFD strips.
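The core operation of a 2D vortex particle solver can be sketched as a regularized Biot-Savart summation: each point vortex induces a swirling velocity at every evaluation point. The Gaussian-core regularization and all parameters below are illustrative assumptions, not the solver used in the thesis.

```python
import numpy as np

def induced_velocity(targets, positions, strengths, core=0.05):
    """Velocity at `targets` induced by 2D point vortices at `positions`."""
    u = np.zeros_like(targets)
    for xp, gamma in zip(positions, strengths):
        r = targets - xp                      # (N, 2) separation vectors
        r2 = np.sum(r ** 2, axis=1) + 1e-12
        # Regularized kernel: (1 - exp(-r^2/core^2)) removes the point-vortex singularity
        k = gamma / (2 * np.pi * r2) * (1 - np.exp(-r2 / core ** 2))
        u[:, 0] += -k * r[:, 1]               # u = -Gamma * y / (2 pi r^2)
        u[:, 1] += k * r[:, 0]                # v =  Gamma * x / (2 pi r^2)
    return u

# One unit vortex at the origin: pure swirl with speed 1/(2*pi*r) away from the core.
vel = induced_velocity(np.array([[1.0, 0.0]]),
                       np.array([[0.0, 0.0]]),
                       np.array([1.0]))
print(vel)
```

Advancing the particles with these induced velocities (plus the free-stream) is what yields the grid-free discretization of the Navier-Stokes equations that the strip-wise Pseudo-3D extension builds on.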
Furthermore, a new method is presented for the determination of the complex aerodynamic admittance under deterministic sinusoidal gusts using the Vortex Particle Method. The sinusoidal gusts are simulated by modeling the wakes of flapping airfoils in the CFD domain with inflow vortex particles. Positioning a section downstream yields sinusoidal forces that are used for determining all six components of the complex aerodynamic admittance. A closed-form analytical relation is derived, based on an existing analytical model. With this relation, the inflow particles’ strength can be related with the target gust amplitudes a priori.
The developed methodologies are combined in a synergistic framework, which is applied to both fundamental examples and practical case studies. Where possible, the results are verified and validated. The outcome of this work is intended to shed some light on the complex wind–bridge interaction and suggest appropriate modeling strategies for an enhanced design.
A novel combination of the ant colony optimization algorithm (ACO) and computational fluid dynamics (CFD) data is proposed for modeling multiphase chemical reactors. The proposed intelligent model presents a probabilistic computational strategy for predicting various levels of three-dimensional bubble column reactor (BCR) flow. The results prove an enhanced communication between the ant colony prediction and the CFD data in different sections of the BCR.
The K-nearest neighbors (KNN) machine learning algorithm is a well-known non-parametric classification method. However, like other traditional data mining methods, applying it to big data comes with computational challenges. KNN determines the class of a new sample based on the classes of its nearest neighbors; identifying those neighbors in a large amount of data imposes a computational cost so large that a single computing machine can no longer handle it. One of the proposed techniques for making classification methods applicable to large datasets is pruning. LC-KNN is an improved KNN method which first clusters the data into smaller partitions using the K-means clustering method and then, for each new sample, applies KNN on the partition whose center is nearest to the sample. However, because the clusters have different shapes and densities, selecting the appropriate cluster is a challenge. In this paper, an approach is proposed to improve the pruning phase of the LC-KNN method by taking these factors into account. The proposed approach helps to choose a more appropriate cluster of data in which to look for the neighbors, thus increasing the classification accuracy. The performance of the proposed approach is evaluated on different real datasets. The experimental results show the effectiveness of the proposed approach and its higher classification accuracy and lower time cost in comparison to other recent relevant methods.
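A minimal sketch of the LC-KNN-style pruning described above: K-means partitions the data once, and each query is classified by KNN restricted to the partition with the nearest center. The data, the deterministic initialization, and all parameters are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Two well-separated 2D classes (toy data)
X = np.vstack([rng.normal(0, 0.5, (100, 2)), rng.normal(4, 0.5, (100, 2))])
y = np.array([0] * 100 + [1] * 100)

def kmeans(X, k, iters=20):
    # Naive deterministic init (evenly spaced rows) to keep the sketch reproducible
    centers = X[::len(X) // k][:k].copy()
    labels = np.zeros(len(X), dtype=int)
    for _ in range(iters):
        d = np.linalg.norm(X[:, None] - centers[None], axis=2)  # (n, k) distances
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return centers, labels

centers, part = kmeans(X, k=2)

def pruned_knn(q, n_neighbors=5):
    # Pruning step: search only the partition whose center is nearest to q
    c = np.linalg.norm(centers - q, axis=1).argmin()
    Xc, yc = X[part == c], y[part == c]
    nn = np.linalg.norm(Xc - q, axis=1).argsort()[:n_neighbors]
    return int(np.bincount(yc[nn]).argmax())  # majority vote among neighbors

print(pruned_knn(np.array([0.2, -0.1])))  # query near the class-0 cloud
print(pruned_knn(np.array([3.9, 4.2])))   # query near the class-1 cloud
```

The pruning pays off because the neighbor search touches only one partition; the paper's contribution is in choosing that partition more carefully when cluster shapes and densities differ.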
Hydrological drought forecasting plays a substantial role in water resources management. Hydrological drought strongly affects water allocation and hydropower generation. In this research, short-term hydrological drought was forecast by hybridizing novel nature-inspired optimization algorithms with Artificial Neural Networks (ANN). For this purpose, the Standardized Hydrological Drought Index (SHDI) and the Standardized Precipitation Index (SPI) were calculated for one, three, and six aggregated months. Then, three states were proposed for SHDI forecasting, and 36 input-output combinations were extracted based on cross-correlation analysis. In the next step, newly proposed optimization algorithms, including the Grasshopper Optimization Algorithm (GOA), Salp Swarm Algorithm (SSA), Biogeography-Based Optimization (BBO), and Particle Swarm Optimization (PSO), were hybridized with the ANN and utilized for SHDI forecasting, and the results were compared to the conventional ANN. Results indicated that the hybridized models outperformed the conventional ANN, with PSO performing best among the optimization algorithms. The best models forecast SHDI1 with R² = 0.68 and RMSE = 0.58, SHDI3 with R² = 0.81 and RMSE = 0.45, and SHDI6 with R² = 0.82 and RMSE = 0.40.
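As a sketch of the PSO component used in such hybridizations, the snippet below minimizes a simple quadratic stand-in for the ANN training error; the swarm size, inertia, and acceleration coefficients are generic textbook defaults, not the study's settings.

```python
import numpy as np

rng = np.random.default_rng(2)

def error(w):                        # stand-in for an ANN forecast-error objective
    return np.sum((w - 1.5) ** 2)    # minimum at w = [1.5, 1.5, 1.5, 1.5]

n_particles, dim = 20, 4
pos = rng.uniform(-5, 5, (n_particles, dim))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_val = np.array([error(p) for p in pos])
gbest = pbest[pbest_val.argmin()].copy()

w_in, c1, c2 = 0.7, 1.5, 1.5         # inertia and acceleration coefficients (assumed)
for _ in range(100):
    r1, r2 = rng.random((2, n_particles, dim))
    # Velocity update: inertia + cognitive pull (pbest) + social pull (gbest)
    vel = w_in * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = pos + vel
    vals = np.array([error(p) for p in pos])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[pbest_val.argmin()].copy()

print(np.round(gbest, 2))  # should approach the optimum near [1.5, 1.5, 1.5, 1.5]
```

In the hybrid ANN setting, `w` would be the flattened network weights and `error` the forecast error on the training set; the update rule is unchanged.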
Energy‐Efficient Method for Wireless Sensor Networks Low‐Power Radio Operation in Internet of Things
(2020)
The radio operation in wireless sensor networks (WSN) in Internet of Things (IoT) applications is the most common source of power consumption. Consequently, recognizing and controlling the factors affecting radio operation can be valuable for managing node power consumption. Among the essential factors affecting radio operation, the time spent checking the radio is of utmost importance for monitoring power consumption, as it can lead to false wake-ups or idle listening in radio duty cycles. ContikiMAC is a low-power radio duty-cycle protocol in Contiki OS that, in WakeUp mode, uses a clear channel assessment (CCA) to check the radio status periodically. This paper presents a detailed analysis of the radio wake-up time factors of ContikiMAC. Furthermore, we propose a lightweight CCA (LW-CCA) as an extension to ContikiMAC that reduces the radio duty cycles spent in false wake-ups and idle listening through a dynamic received signal strength indicator (RSSI) status check time. Simulation results in the Cooja simulator show that LW-CCA reduces node energy consumption by about 8% while maintaining up to 99% of the packet delivery rate (PDR).
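A back-of-envelope sketch of why shortening the channel-check time saves energy: the periodic CCA cost is roughly voltage times listen current times check duration times check rate. All currents and timings below are assumed illustrative values, not measured ContikiMAC figures.

```python
V = 3.0                 # supply voltage [V] (assumed)
I_listen = 0.020        # radio listen current [A] (assumed)
wakeups_per_s = 8       # ContikiMAC default channel-check rate [Hz]

def cca_energy_per_hour(check_time_s):
    """Energy spent on periodic channel checks over one hour [J]."""
    return V * I_listen * check_time_s * wakeups_per_s * 3600

baseline = cca_energy_per_hour(0.0010)   # fixed 1.0 ms check time (assumed)
lw_cca = cca_energy_per_hour(0.0006)     # shorter average check via dynamic RSSI timing (assumed)

saving = (baseline - lw_cca) / baseline
print(f"{saving:.0%} of channel-check energy saved")
```

The saving scales linearly with the reduction in average check time, which is why even sub-millisecond changes matter at eight wake-ups per second.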
The gases oxygen and nitrogen are required for a wide variety of technical, industrial, biological, and medical applications. Besides the classical metalworking and chemical industries, oxygen is used above all in medicine, in the optimization of combustion and wastewater treatment plants, and in fish farming, while nitrogen serves as a protective or inert gas in the plastics industry, aerospace, and fire protection.
Oxygen and nitrogen are supplied almost exclusively by separation from ambient air, which consists of about 78 vol.% nitrogen, 21 vol.% oxygen, and 1 vol.% trace gases (Ar, CO2, Ne, He, ...). Air-separation processes established on the market are the Linde process, pressure swing adsorption (PSA), and various membrane processes. The required gases are thus either generated directly on site at the consumer (PSA and polymer membrane processes: low purities) or produced centrally in large plants (Linde process: high purities) and subsequently delivered to the consumer as bottled or tank gases (transport costs).
Smaller consumers with high purity requirements for oxygen or nitrogen have no choice but to purchase the gases as cost-intensive transport gases from central gas suppliers, thereby becoming dependent on them (supply contracts, bottle/tank rentals, ...) and having to maintain their own storage for the required gases (additional effort, storage costs, space requirements).
The aim of this work is to develop ceramic material systems based on high-temperature chemical reactions as reactive oxide ceramics and to investigate them with regard to their possible use for oxygen separation in novel air-separation plants.
Such plants are intended to follow the principle of regenerative oxygen separation: in their reactors, the reactive oxide ceramics act as a fixed-bed material that is alternately loaded with air and unloaded with vacuum or O2-poor atmospheres.
The use of reactive oxide ceramics, which compared to previous materials would offer higher oxygen exchange capacities and rates combined with a long service life, corrosion resistance, and relatively simple handling, is intended as a step toward an efficient alternative air-separation technology.
With reactive oxide ceramics in an air-separation plant, it should ideally be possible to produce very pure oxygen and, at the same time, oxygen-free inert gas in small plants, as well as to enrich or deplete oxygen in air, process gases, or exhaust gases.
A technology based on reactive oxide ceramics would thus have a very broad range of applications and, consequently, enormous economic potential.
Recent years have seen a gradual shift in focus of international policies from a national and regional perspective to that of cities, a shift which is closely related to the rapid urbanization of developing countries. As revealed in the 2011 Revision of the World Urbanization Prospects published by the United Nations, 51% of the global population (approximately 3.6 billion people) lives in cities. The report predicts that by 2050, the world’s urban population will increase by 2.3 billion, making up 68% of the population. The growth of urbanization in the next few decades is expected to primarily come from developing countries, one third of which will be in China and India.
With rapid urbanization and the ongoing growth of mega cities, cities must become increasingly resilient and intelligent to cope with numerous challenges and crises like droughts and floods arising from extreme climate, destruction brought by severe natural disasters, and aggregated social contradictions resulting from economic crises. All cities face the urban development dynamics and uncertainties arising from these problems. Under such circumstances, cities are considered the critical path from crisis to prosperity, so scholars and organizations have proposed the construction of “resilient cities.” On the one hand, this theory emphasizes cities’ defenses and buffering capacity against disasters, crises and uncertainties, as well as recovery after destruction; on the other hand, it highlights the learning capacity of urban systems, identification of opportunities amid challenges, and maintenance of development vitality. Some scholars even believe that urban resilience is a powerful supplement to sustainable development. Hence, resilience assessment has become the latest and most important perspective for evaluating the development and crisis defense capacity of cities.
Rather than a general abstract concept, urban resilience is a comprehensive measurement of a city's level of development. The dynamic development of problems is reflected through quantitative indicators and appraisal systems not only from the perspective of academic research but also of governmental policy, so as to scientifically guide development and to measure and compare cities' development levels. Although international scholars have proposed quantitative methods for urban resilience assessment, these methods are insufficiently systematic and regionally adaptive for China's current urban development needs. On the basis of a comparative study of European and North American resilient-city theories, therefore, this paper puts forward a theoretical framework for resilient city systems consistent with China's national conditions in light of economic development pressure, natural resource depletion, pollution, and other salient development crises in China. The key factors influencing urban resilience are taken into full consideration; expert appraisal is conducted based on the Delphi method and the analytic hierarchy process (AHP) to design an extensible and updatable resilient-city evaluation system which is sufficiently systematic, geographically adaptable, and sustainable for China's current urban development needs. Finally, Changsha is taken as the main case for an empirical study on comprehensive evaluation of similar cities in Central China to improve the indicator system.
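The AHP step mentioned above can be sketched as follows: criterion weights are the principal eigenvector of an expert pairwise-comparison matrix, and Saaty's consistency ratio checks whether the judgments are coherent. The 3x3 matrix below is invented for illustration, not taken from the paper's indicator system.

```python
import numpy as np

# Pairwise-comparison matrix (reciprocal by construction): entry A[i, j] says
# how much more important criterion i is than criterion j (invented values).
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 3.0],
              [1/5, 1/3, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)              # Perron (largest real) eigenvalue
w = np.abs(eigvecs[:, k].real)
w = w / w.sum()                          # normalized criterion weights

n = A.shape[0]
ci = (eigvals[k].real - n) / (n - 1)     # consistency index
cr = ci / 0.58                           # Saaty's random index RI = 0.58 for n = 3
print(np.round(w, 3), round(cr, 3))      # weights and consistency ratio
```

A consistency ratio below 0.1 is the conventional threshold for accepting the expert judgments; otherwise the comparison matrix is revised.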
Scalarization methods are a category of multiobjective optimization (MOO) methods. They allow the use of conventional single-objective optimization algorithms, as scalarization methods reformulate the MOO problem into a single-objective optimization problem. The scalarization methods analysed within this thesis are the Weighted Sum (WS), the Epsilon-Constraint (EC), and the MinMax (MM) method. After explaining the approach of each method, the WS, EC, and MM are applied, a posteriori, to three different examples: the Kursawe function; the ten-bar truss, a common benchmark problem in structural optimization; and the metamodel of an aero engine exit module.
The aim is to evaluate and compare the performance of each scalarization method that is examined within this thesis. The evaluation is conducted using performance metrics, such as the hypervolume and the generational distance, as well as using visual comparison.
The application to the three examples gives insight into the advantages and disadvantages of each method, and provides further understanding of an adequate application of the methods concerning high dimensional optimization problems.
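As a minimal illustration of the Weighted Sum method discussed above, the sketch below sweeps weights over a classic bi-objective toy problem, f1(x) = x² and f2(x) = (x-2)², and recovers one Pareto-optimal point per weight vector; the problem and the grid-search "solver" are assumptions, not one of the thesis' examples.

```python
import numpy as np

x = np.linspace(-1, 3, 4001)        # simple grid search stands in for the optimizer
f1, f2 = x ** 2, (x - 2) ** 2       # two conflicting objectives

pareto_points = []
for w in [0.1, 0.3, 0.5, 0.7, 0.9]:
    # WS scalarization: one single-objective problem per weight vector
    scalarized = w * f1 + (1 - w) * f2
    x_star = x[scalarized.argmin()]
    pareto_points.append(round(float(x_star), 3))

print(pareto_points)  # minimizers sweep the Pareto set between x = 0 and x = 2
```

For this convex problem the analytic minimizer is x = 2(1-w), so varying the weight traces the whole Pareto front; for non-convex fronts (such as the Kursawe function's), WS famously misses the non-convex portions, which is one motivation for comparing it against EC and MM.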
In practice, damaging volume expansions were observed in a commercially available calcium sulfate flowing screed. These result from the interaction of the binder compound used and a critical aggregate.
The aim of this work is to formulate a calcium sulfate binder system capable of preventing the volume expansions observed in the mortar. Various binder and additive compositions are to be investigated which, in combination with the critical aggregate, enable the production of a volume-stable flowing screed. To this end, the following question is to be answered: What causes the volume increase, and how can it be minimized or prevented?
Different binder formulations of α-hemihydrate, thermal anhydrite, and natural anhydrite, as well as various additive compositions, are produced and investigated.
Length-change measurements in the shrinkage channel are used to study the influences of the binders, the additive compositions, and the water/binder ratios on the length-change behavior. By varying the individual compound constituents, it can be established that the stabilizer negatively affects the length change. It binds free water, which is then no longer available for a reaction between binder and aggregate in the plastic state. This reaction can consequently only take place in the hardened state, where it causes the damaging volume expansion.
Finally, a binder compound was formulated which, without the addition of stabilizers, is volume-stable in combination with the critical aggregate and causes no damage.
Wireless sensor networks have attracted great attention for applications in structural health monitoring due to their ease of use, flexibility of deployment, and cost-effectiveness. This paper presents a software framework for WiFi-based wireless sensor networks composed of low-cost mass market single-board computers. A number of specific system-level software components were developed to enable robust data acquisition, data processing, sensor network communication, and timing with a focus on structural health monitoring (SHM) applications. The framework was validated on Raspberry Pi computers, and its performance was studied in detail. The paper presents several characteristics of the measurement quality such as sampling accuracy and time synchronization and discusses the specific limitations of the system. The implementation includes a complementary smartphone application that is utilized for data acquisition, visualization, and analysis. A prototypical implementation further demonstrates the feasibility of integrating smartphones as data acquisition nodes into the network, utilizing their internal sensors. The measurement system was employed in several monitoring campaigns, three of which are documented in detail. The suitability of the system is evaluated based on comparisons of target quantities with reference measurements. The results indicate that the presented system can robustly achieve a measurement performance commensurate with that required in many typical SHM tasks such as modal identification. As such, it represents a cost-effective alternative to more traditional monitoring solutions.
A categorical perspective towards aerodynamic models for aeroelastic analyses of bridge decks
(2019)
Reliable modelling in structural engineering is crucial for the serviceability and safety of structures. A huge variety of aerodynamic models for aeroelastic analyses of bridges poses natural questions on their complexity and thus, quality. Moreover, a direct comparison of aerodynamic models is typically either not possible or senseless, as the models can be based on very different physical assumptions. Therefore, to address the question of principal comparability and complexity of models, a more abstract approach, accounting for the effect of basic physical assumptions, is necessary.
This paper presents an application of a recently introduced category theory-based modelling approach to a diverse set of models from bridge aerodynamics. Initially, the categorical approach is extended to allow an adequate description of aerodynamic models. Complexity of the selected aerodynamic models is evaluated, based on which model comparability is established. Finally, the utility of the approach for model comparison and characterisation is demonstrated on an illustrative example from bridge aeroelasticity. The outcome of this study is intended to serve as an alternative framework for model comparison and impact future model assessment studies of mathematical models for engineering applications.
The vibration control of tall buildings during earthquake excitations is a challenging task due to their complex seismic behavior. This paper investigates the optimum placement and properties of Tuned Mass Dampers (TMDs) in tall buildings, which are employed to control the vibrations during earthquakes. An algorithm was developed to spend a limited mass either in a single TMD or in multiple TMDs and to distribute them optimally over the height of the building. The Non-dominated Sorting Genetic Algorithm (NSGA-II) method was improved by adding multi-variant genetic operators and utilized to simultaneously study the optimum design parameters of the TMDs and their optimum placement. The results showed that under earthquake excitations with noticeable amplitude in higher modes, distributing TMDs over the height of the building is more effective in mitigating the vibrations than using a single TMD system. From the optimization, it was observed that the locations of the TMDs were related to the stories corresponding to the maximum modal displacements in the lower modes and in the modes which were highly activated by the earthquake excitations. It was also noted that the frequency content of the earthquake has a significant influence on the optimum location of the TMDs.
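A common analytical baseline against which optimized TMD designs like those above are compared is Den Hartog's classic tuning for a single TMD under harmonic forcing; the sketch below evaluates those rules for an assumed mass ratio (the paper's NSGA-II results are not reproduced here).

```python
import math

def den_hartog(mu):
    """Den Hartog tuning for a single TMD with TMD-to-modal-mass ratio mu.

    Returns (optimal frequency ratio, optimal damping ratio)."""
    f_opt = 1.0 / (1.0 + mu)                                 # TMD freq / structure freq
    zeta_opt = math.sqrt(3.0 * mu / (8.0 * (1.0 + mu) ** 3)) # TMD damping ratio
    return f_opt, zeta_opt

f_opt, zeta_opt = den_hartog(0.02)   # 2% mass ratio, a typical assumed value
print(round(f_opt, 4), round(zeta_opt, 4))
```

Such closed-form tuning assumes a single dominant mode; the paper's point is precisely that when higher modes carry noticeable amplitude, distributing several TMDs beats a single optimally tuned one.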
The production of a desired product needs an effective use of the experimental model. The present study proposes an extreme learning machine (ELM) and a support vector machine (SVM) integrated with the response surface methodology (RSM) to solve the complexity in optimization and prediction of the ethyl ester and methyl ester production process. The novel hybrid models of ELM-RSM and ELM-SVM are further used as a case study to estimate the yield of methyl and ethyl esters through a trans-esterification process from waste cooking oil (WCO) based on American Society for Testing and Materials (ASTM) standards. The results of the prediction phase were also compared with artificial neural networks (ANNs) and adaptive neuro-fuzzy inference system (ANFIS), which were recently developed by the second author of this study. Based on the results, an ELM with a correlation coefficient of 0.9815 and 0.9863 for methyl and ethyl esters, respectively, had a high estimation capability compared with that for SVM, ANNs, and ANFIS. Accordingly, the maximum production yield was obtained in the case of using ELM-RSM of 96.86% for ethyl ester at a temperature of 68.48 °C, a catalyst value of 1.15 wt. %, mixing intensity of 650.07 rpm, and an alcohol to oil molar ratio (A/O) of 5.77; for methyl ester, the production yield was 98.46% at a temperature of 67.62 °C, a catalyst value of 1.1 wt. %, mixing intensity of 709.42 rpm, and an A/O of 6.09. Therefore, ELM-RSM increased the production yield by 3.6% for ethyl ester and 3.1% for methyl ester, compared with those for the experimental data.
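The defining trick of the ELM used above is that the hidden-layer weights are random and fixed, so training reduces to a single linear least-squares solve for the output weights. The sketch below uses a synthetic smooth response standing in for the ester-yield data; network size and data are assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic regression data (assumed): two scaled inputs, one smooth response
X = rng.uniform(-1, 1, (200, 2))              # e.g. temperature, catalyst (scaled)
y = np.sin(3 * X[:, 0]) + 0.5 * X[:, 1]       # assumed smooth target

H = 40
W = rng.normal(size=(2, H))                   # random input weights (never trained)
b = rng.normal(size=H)                        # random hidden biases (never trained)
Hmat = np.tanh(X @ W + b)                     # hidden-layer activations

# Output weights in one least-squares solve: this IS the "training"
beta, *_ = np.linalg.lstsq(Hmat, y, rcond=None)

pred = Hmat @ beta
r = np.corrcoef(y, pred)[0, 1]                # correlation coefficient, as reported above
print(round(float(r), 3))
```

Because no iterative weight updates are needed, the ELM trains orders of magnitude faster than back-propagated ANNs, which is the "swift" property the studies above exploit.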
Management strategies for sustainable sugarcane production need to deal with the increasing complexity and variability of the whole sugar system. Moreover, they need to accommodate the multiple goals of different industry sectors and the wider community. Traditional disciplinary approaches are unable to provide integrated management solutions, and an approach based on whole-systems analysis is essential to bring about beneficial change to industry and the community. The application of this approach to water management, environmental management, and cane supply management is outlined; the literature indicates that the application of the extreme learning machine (ELM) has never been explored in this realm. Consequently, the leading objective of the current research was to fill this gap by applying ELM to establish a swift and accurate data-driven model for crop production. The key learning has been the need for innovation in the technical aspects of system function, underpinned by modelling of sugarcane growth. Therefore, the current study is an attempt to establish an integrated model using ELM to predict the final growth amount of sugarcane. Prediction results were evaluated and further compared with artificial neural network (ANN) and genetic programming models. The accuracy of the ELM model is calculated using the statistical indicators Root Mean Square Error (RMSE), Pearson Coefficient (r), and Coefficient of Determination (R²), with promising results of 0.8, 0.47, and 0.89, respectively. The results also show better generalization ability in addition to a faster learning curve. Thus, the proficiency of the ELM for further work on advancing prediction models for sugarcane growth was confirmed with promising results.
The Variability of the Void Ratio of Sand and its Effect on Settlement and Infinite Slope Stability
(2018)
The uncertainty of a soil property can significantly affect the physical behavior of soil and thereby influence geotechnical practice. The uncertainty can be expressed by its stochastic parameters, including the mean, the standard deviation, and the spatial correlation length. These stochastic parameters are regarded as constant values in most former studies. The main aim of this thesis is to examine whether they are depth-dependent, and to evaluate the effect of this depth-dependent character on both the settlement and the infinite slope stability during rainwater infiltration.
A stochastic one-dimensional settlement simulation is carried out using random finite element method with the von Wolffersdorff hypoplastic model, so as to evaluate the effect of stress level on the stochastic parameters of void ratio related parameters of sand. It is found that these stochastic parameters are both stress-dependent and depth-dependent.
The non-stationary random field, considering the depth-dependent character of these stochastic parameters, can be generated through the distortion of the stationary random field.
The one-dimensional settlement analysis is carried out to evaluate the effect of the depth-dependent character of the stochastic parameters of the void ratio on the strain. It is found that the depth-dependent character has little effect on the strain.
A deterministic analysis of infinite slope stability during rainwater infiltration is carried out.
The transient seepage is simulated using the finite difference method, while the steady-state seepage is computed using the analytical solution. The saturated hydraulic conductivity (ks) is taken as the only variable. The results show that the depth-dependent ks has a significant influence on the stability of the slope when the negative flux is high. Neglecting the depth-dependent character can lead to overestimating the factor of safety of the slope: a slope can fail if the depth-dependent character is considered, while it appears stable if the depth-dependent character is neglected. The failure time of the slope with a greater depth-dependent ks is earlier during transient infiltration.
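The factor of safety at stake in these analyses is the standard infinite-slope expression, FS = (c' + (γ·z·cos²β − u)·tanφ') / (γ·z·sinβ·cosβ), where infiltration enters through the pore pressure u. The soil values below are generic textbook numbers, not the thesis data.

```python
import math

def infinite_slope_fs(c=5.0, phi=30.0, gamma=18.0, z=2.0, beta=35.0, u=0.0):
    """Infinite-slope factor of safety.

    c [kPa] cohesion, phi/beta [deg] friction angle / slope angle,
    gamma [kN/m^3] unit weight, z [m] failure depth, u [kPa] pore pressure."""
    b, p = math.radians(beta), math.radians(phi)
    sigma_n = gamma * z * math.cos(b) ** 2 - u      # effective normal stress on the plane
    tau = gamma * z * math.sin(b) * math.cos(b)     # driving shear stress
    return (c + sigma_n * math.tan(p)) / tau

dry = infinite_slope_fs(u=0.0)     # before infiltration
wet = infinite_slope_fs(u=10.0)    # infiltration raises pore pressure
print(round(dry, 2), round(wet, 2))
```

Rising pore pressure eats into the effective normal stress, so FS drops during infiltration; the thesis' point is that how fast u builds up at depth depends strongly on whether ks varies with depth.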
Meanwhile, the stochastic infinite slope stability analysis during infiltration is also carried out to highlight the effect of the depth-dependent character of the stochastic parameters of ks. The results show that the probability of failure is significantly increased if the depth-dependent character of the mean is considered, while it is moderately reduced if the depth-dependent character of the standard deviation is accounted for. If the depth-dependent character of both the mean and the standard deviation of ks is considered, the depth-dependent mean has a dominant influence on the results. Furthermore, the depth-dependent character of the spatial correlation length can slightly reduce the probability of failure.