TY - JOUR
A1 - Abbaspour-Gilandeh, Yousef
A1 - Molaee, Amir
A1 - Sabzi, Sajad
A1 - Nabipour, Narjes
A1 - Shamshirband, Shahaboddin
A1 - Mosavi, Amir
T1 - A Combined Method of Image Processing and Artificial Neural Network for the Identification of 13 Iranian Rice Cultivars
JF - Agronomy
N2 - Due to the importance of identifying crop cultivars, the advancement of accurate assessment of cultivars is considered essential. The existing methods for identifying rice cultivars are mainly time-consuming, costly, and destructive. Therefore, the development of novel methods is highly beneficial. The aim of the present research is to classify common rice cultivars in Iran based on color, morphologic, and texture properties using artificial intelligence (AI) methods. In doing so, digital images of 13 Iranian rice cultivars in three forms of paddy, brown, and white are analyzed through pre-processing and segmentation using MATLAB. Ninety-two features, comprising 60 color, 14 morphologic, and 18 texture properties, were extracted for each rice cultivar. In the next step, the normal distribution of the data was evaluated, and analysis of variance was used to test whether the cultivars differ significantly across all features. In addition, the least significant difference (LSD) test was performed to obtain a more accurate comparison between cultivars. To reduce the data dimensionality and focus on the most effective components, principal component analysis (PCA) was employed. Accordingly, the accuracy of rice cultivar separation using discriminant analysis (DA) was 89.2%, 87.7%, and 83.1% for paddy, brown rice, and white rice, respectively. To identify and classify the desired cultivars, a multilayer perceptron neural network was implemented based on the most effective components. The results showed 100% accuracy of the network in identifying and classifying all mentioned rice cultivars. Hence, it is concluded that the integration of image processing and pattern recognition methods, such as statistical classification and artificial neural networks, can be used for the identification and classification of rice cultivars.
KW - Maschinelles Lernen
KW - Machine learning
KW - food informatics
KW - big data
KW - artificial neural networks
KW - artificial intelligence
KW - image processing
KW - rice
Y1 - 2020
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20200123-40695
UR - https://www.mdpi.com/2073-4395/10/1/117
VL - 10
IS - 1
SP - 117
PB - MDPI
ER -
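[Editor's note] The record above describes a feature-reduction and classification pipeline: 92 extracted features, PCA for dimensionality reduction, then a multilayer perceptron. The following minimal sketch illustrates that generic pipeline on synthetic data; the feature matrix, the 95% variance threshold, and the network size are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch of a PCA + MLP cultivar-classification pipeline (assumed setup).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(650, 92))     # 92 color/morphologic/texture features per grain (assumed layout)
y = rng.integers(0, 13, size=650)  # 13 cultivar labels (synthetic placeholder data)

clf = make_pipeline(
    StandardScaler(),              # normalize features before PCA
    PCA(n_components=0.95),        # keep components explaining 95% of variance (assumed threshold)
    MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0),
)
print(cross_val_score(clf, X, y, cv=5).mean())
```

On real feature data the cross-validated accuracy, rather than the training accuracy, is the meaningful analogue of the reported classification rates.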
TY - THES
A1 - Lux, Christopher
T1 - A Data-Virtualization System for Large Model Visualization
N2 - Interactive scientific visualizations are widely used for the visual exploration and examination of physical data resulting from measurements or simulations. Driven by technical advancements of data acquisition and simulation technologies, especially in the geo-scientific domain, large amounts of highly detailed subsurface data are generated. The oil and gas industry is particularly pushing such developments, as hydrocarbon reservoirs are increasingly difficult to discover and exploit. Suitable visualization techniques are vital for the discovery of the reservoirs as well as for their development and production. However, the ever-growing scale and complexity of geo-scientific data sets result in an expanding disparity between the size of the data and the capabilities of current computer systems with regard to limited memory and computing resources. In this thesis we present a unified out-of-core data-virtualization system supporting geo-scientific data sets consisting of multiple large seismic volumes and height-field surfaces, wherein each data set may exceed the size of the graphics memory or possibly even the main memory. Current data sets fall within the range of hundreds of gigabytes up to terabytes in size. Through the mutual utilization of memory and bandwidth resources by multiple data sets, our data-management system is able to share and balance limited system resources among different data sets. We employ multi-resolution methods based on hierarchical octree and quadtree data structures to generate level-of-detail working sets of the data stored in main memory and graphics memory for rendering. The working set generation in our system is based on a common feedback mechanism with inherent support for translucent geometric and volumetric data sets. This feedback mechanism collects information about required levels of detail during the rendering process and is capable of directly resolving data visibility without the application of any costly occlusion culling approaches. A central goal of the proposed out-of-core data management system is an effective virtualization of large data sets. Through an abstraction of the level-of-detail working sets, our system allows developers to work with extremely large data sets independent of their complex internal data representations and physical memory layouts. Based on this out-of-core data virtualization infrastructure, we present distinct rendering approaches for specific visualization problems of large geo-scientific data sets. We demonstrate the application of our data virtualization system and show how multi-resolution data can be treated exactly the same way as regular data sets during the rendering process. An efficient volume ray casting system is presented for the rendering of multiple arbitrarily overlapping multi-resolution volume data sets. Binary space-partitioning volume decomposition of the bounding boxes of the cube-shaped volumes is used to identify the overlapping and non-overlapping volume regions in order to optimize the rendering process. We further propose a ray casting-based rendering system for the visualization of geological subsurface models consisting of multiple very detailed height fields. The rendering of an entire stack of height-field surfaces is accomplished in a single rendering pass using a two-level acceleration structure, which combines a minimum-maximum quadtree for empty-space skipping and sorted lists of depth intervals to restrict ray intersection searches to relevant height fields and depth ranges. Ultimately, we present a unified rendering system for the visualization of entire geological models consisting of highly detailed stacked horizon surfaces and massive volume data. We demonstrate a single-pass ray casting approach facilitating correct visual interaction between distinct translucent model components, while increasing the rendering efficiency by reducing processing overhead of potentially invisible parts of the model. The combination of image-order rendering approaches and the level-of-detail feedback mechanism used by our out-of-core data-management system inherently accounts for occlusions of different data types without the application of costly culling techniques. The unified out-of-core data-management and virtualization infrastructure considerably facilitates the implementation of complex visualization systems.
We demonstrate its applicability for the visualization of large geo-scientific data sets using output-sensitive rendering techniques. As a result, the magnitude and multitude of data sets that can be interactively visualized are significantly increased compared to existing approaches.
KW - Computer Graphics
KW - Visualisation
KW - Volume Rendering
KW - Large Data
Y1 - 2013
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20130725-19855
ER -
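[Editor's note] The two-level acceleration structure described above combines a minimum-maximum quadtree for empty-space skipping with sorted depth-interval lists. The sketch below illustrates only the quadtree part under simplifying assumptions (square power-of-two height field, conservative leaf test, no front-to-back child ordering or interval lists); the helper names are illustrative, not taken from the thesis.

```python
# Hedged sketch: min-max quadtree empty-space skipping for height-field ray casting.
import numpy as np

def build_minmax(h):
    """Min-max pyramid over a square height field whose side is a power of two."""
    mins, maxs = [h], [h]
    while mins[-1].shape[0] > 1:
        m, M = mins[-1], maxs[-1]
        mins.append(np.minimum(np.minimum(m[0::2, 0::2], m[1::2, 0::2]),
                               np.minimum(m[0::2, 1::2], m[1::2, 1::2])))
        maxs.append(np.maximum(np.maximum(M[0::2, 0::2], M[1::2, 0::2]),
                               np.maximum(M[0::2, 1::2], M[1::2, 1::2])))
    return mins, maxs

def footprint_span(o, d, x0, x1, y0, y1):
    """Parametric ray interval inside an axis-aligned node footprint (2D slab test)."""
    t0, t1 = 0.0, np.inf
    for oc, dc, lo, hi in ((o[0], d[0], x0, x1), (o[1], d[1], y0, y1)):
        if abs(dc) < 1e-12:
            if not lo <= oc <= hi:
                return None
        else:
            a, b = (lo - oc) / dc, (hi - oc) / dc
            t0, t1 = max(t0, min(a, b)), min(t1, max(a, b))
    return (t0, t1) if t0 <= t1 else None

def raycast(mins, maxs, o, d, level=None, ix=0, iy=0):
    """Conservative first-hit search: descend only where the ray enters a node's height slab."""
    if level is None:
        level = len(mins) - 1
    s = 2 ** level                                   # node edge length in cells
    span = footprint_span(o, d, ix * s, (ix + 1) * s, iy * s, (iy + 1) * s)
    if span is None:
        return None
    z = (o[2] + span[0] * d[2], o[2] + span[1] * d[2])
    if min(z) > maxs[level][iy, ix] or max(z) < mins[level][iy, ix]:
        return None                                  # empty space: skip the whole subtree
    if level == 0:
        return span[0]                               # leaf cell: an exact surface test would refine this
    hits = [raycast(mins, maxs, o, d, level - 1, 2 * ix + dx, 2 * iy + dy)
            for dy in (0, 1) for dx in (0, 1)]       # a real renderer visits children front to back
    hits = [t for t in hits if t is not None]
    return min(hits) if hits else None

heights = np.abs(np.sin(np.linspace(0, 4, 16)))[:, None] * np.ones((16, 16))
pyramid = build_minmax(heights)
print(raycast(*pyramid, o=np.array([0.1, 0.1, 2.0]), d=np.array([1.0, 1.0, -0.2])))
```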
TY - THES
A1 - Vogler, Verena
T1 - A framework for artificial coral reef design: Integrating computational modelling and high precision monitoring strategies for artificial coral reefs – an Ecosystem-aware design approach in times of climate change
N2 - Tropical coral reefs, one of the world's oldest ecosystems which support some of the highest levels of biodiversity on the planet, are currently facing an unprecedented ecological crisis during this massive human-activity-induced period of extinction. Hence, tropical reefs symbolically stand for the destructive effects of human activities on nature [4], [5]. Artificial reefs are excellent examples of how architectural design can be combined with ecosystem regeneration [6], [7], [8]. However, working at the interface between the artificial and the complex and temporal nature of natural systems presents a challenge, inter alia with respect to the B-rep modelling legacy of computational modelling. The presented doctorate investigates strategies for applying digital practice to realise what is an essential bulwark for retaining reefs in impossibly challenging times. Beyond the main question of integrating computational modelling and high precision monitoring strategies in artificial coral reef design, this doctorate explores techniques, methods, and linking frameworks to support future research and practice in ecology-led design contexts. Considering the many existing approaches to artificial coral reef design, one finds they often fall short in precisely understanding the relationships between architectural and ecological aspects (e.g. how a surface design and material composition can foster coral larvae settlement, or how structural three-dimensionality can enhance biodiversity) and lack an integrated underwater (UW) monitoring process. Such a process is necessary in order to gather knowledge about the ecosystem and make it available for design, and to learn whether artificial structures contribute to reef regeneration or rather harm the coral reef ecosystem. For the research, empirical experimental methods were applied: algorithmic coral reef design, high precision UW monitoring, and computational modelling and simulation, validated through parallel real-world physical experimentation with two Artificial Reef Prototypes (ARPs) in Gili Trawangan, Indonesia (2012–today). Multiple discrete methods and sub-techniques were developed in seventeen computational experiments and applied in a way in which many are cross-validated and integrated into an overall framework that is offered as a significant contribution to the field. Other main contributions include the Ecosystem-aware design approach, Key Performance Indicators (KPIs) for coral reef design, algorithmic design and fabrication of Biorock cathodes, new high precision UW monitoring strategies, long-term real-world constructed experiments, new digital analysis methods, and two new front-end web-based tools for reef design and reef monitoring. The methodological framework is a key finding of the research; its many technical components were tested and combined in this way for the very first time. In summary, the thesis responds to the urgency and relevance of preserving marine species in tropical reefs during this massive extinction period by offering a differentiated approach towards artificial coral reefs, demonstrating the feasibility of digitally designing such 'living architecture' according to multiple context and performance parameters. It also provides an in-depth critical discussion of computational design and architecture in the context of ecosystem regeneration and Planetary Thinking. In that respect, the thesis functions as both theoretical and practical background for computational design, ecology and marine conservation – not only to foster the design of artificial coral reefs technically but also to provide essential criteria and techniques for conceiving them. Keywords: Artificial coral reefs, computational modelling, high precision underwater monitoring, ecology in design.
N2 - Charakteristisch für das Zeitalter des Klimawandels sind die durch den Menschen verursachte Meeresverschmutzung sowie ein massiver Rückgang der Artenvielfalt in den Weltmeeren. Tropische Korallenriffe sind als eines der ältesten und artenreichsten Ökosysteme der Erde besonders stark gefährdet und stehen somit symbolisch für die zerstörerischen Auswirkungen menschlicher Aktivitäten auf die Natur [4], [5]. Um dem massiven Rückgang der Korallenriffe entgegenzuwirken, wurden von Menschen künstliche Riffsysteme entwickelt [6], [7]. Sie sind Beispiele dafür, wie Architektur und die Regenerierung von Ökosystemen miteinander verbunden werden können [8]. Eine Verknüpfung von einerseits künstlichen und andererseits komplexen, sich verändernden natürlichen Systemen stellt jedoch eine Herausforderung dar, u.a. in Bezug auf die Computermodellierung (B-Rep Modellierung). Zum Erhalt der Korallenriffe werden in der vorliegenden Doktorarbeit Strategien aus der digitalen Praxis neuartig auf das Entwerfen von künstlichen Korallenriffen angewendet. Die Hauptfrage befasst sich damit, wie der Entwurfsprozess von künstlichen Korallenriffen unter Einbeziehung von Computermodellierung und hochpräzisen Überwachungsstrategien optimiert werden kann. In diesem Zusammenhang werden Techniken, Methoden sowie ein übergeordnetes Framework erforscht, welche zukünftige Forschung und Praxis in Bezug auf Ökologie-geleitete Entwurfsprozesse fördern sollen. In Anbetracht der vielen vorhandenen künstlichen Riffsysteme kann man feststellen, dass die Zusammenhänge zwischen Architektur- und Ökosystem-Anforderungen nicht genau untersucht und dadurch bei der Umsetzung nicht entsprechend berücksichtigt werden, zum Beispiel wie Oberflächenbeschaffenheit und Materialität eine Ansiedlung von Korallenlarven begünstigen oder wie eine räumlich vielseitige Struktur die Artenvielfalt verbessern kann. Zudem fehlt ein integrierter Unterwasser-Überwachungsprozess, welcher Informationen über das Ökosystem liefert und diese dem Entwurf bereitstellt. Zusätzlich ist eine Unterwasser-Überwachung notwendig, um herauszufinden, ob die künstlichen Riffstrukturen zur Regenerierung beitragen oder dem Ökosystem gänzlich schaden. In dieser Forschungsarbeit werden empirische und experimentelle Methoden angewendet: Algorithmisches Entwerfen für Korallenriffe, hochpräzise Unterwasser-Überwachung, Computermodellierung und -simulation.
Die Forschung wird seit 2012 bis heute durch zwei Riffprototypen (Artificial Reef Prototypes – ARPs) in Gili Trawangan, Indonesien, validiert. Zusätzlich wurden weitere separate Methoden und Techniken in insgesamt siebzehn computergestützten Experimenten entwickelt und so angewendet, dass viele kreuzvalidiert und in ein Framework integriert sind, welches dann als bedeutender Beitrag dem Forschungsgebiet zur Verfügung steht. Weitere Hauptbeiträge sind der Ökosystem-bewusste Entwurfsansatz (Ecosystem-aware design approach), Key Performance Indicators (KPIs) für das Gestalten von Korallenriffen, algorithmisches Entwerfen und die Herstellung von Biorock-Kathoden, neue hochpräzise Unterwasser-Überwachungsstrategien, reale Langzeitexperimente, neue digitale Analysemethoden sowie zwei webbasierte Softwareanwendungen für die Gestaltung und die Überwachung von künstlichen Korallenriffen. Das methodische Framework ist das Hauptergebnis der Forschung, da die vielen technischen Komponenten in dieser Weise zum ersten Mal getestet und kombiniert wurden. Zusammenfassend reagiert die vorliegende Doktorarbeit sowohl auf die Dringlichkeit als auch auf die Relevanz der Erhaltung von Artenvielfalt in tropischen Korallenriffen in Zeiten eines massiven Aussterbens, indem sie einen differenzierten Entwurfsansatz für künstliche Korallenriffe offeriert. Die Arbeit zeigt auf, dass ein digitales Entwerfen einer solchen „lebendigen Architektur“ unter Berücksichtigung vielfältiger Anforderungen und Leistungsparameter machbar ist. Zusätzlich bietet sie eine ausführliche kritische Diskussion über die Rolle von computergestütztem Entwerfen und Architektur im Zusammenhang mit der Regenerierung von Ökosystemen und “Planetary Thinking”. In dieser Hinsicht fungiert die Doktorarbeit als theoretischer und praktischer Hintergrund für computergestütztes Entwerfen, Ökologie und Meeresschutz. Eine Verbesserung des Entwerfens von künstlichen Korallenriffen wird nicht nur auf technischer Ebene aufgezeigt, sondern es werden auch die wesentlichen Kriterien und Techniken für deren Umsetzung benannt. Schlüsselwörter: Künstliche Korallenriffe, Computermodellierung, hochpräzise Unterwasser-Überwachung, Ökologie im Architekturentwurf.
KW - Korallenriff
KW - Algorithmus
KW - Architektur
KW - Meeresökologie
KW - Software
KW - Artificial coral reefs
KW - Computational modelling
KW - High precision underwater monitoring
KW - Ecology in design
KW - Künstliche Korallenriffe
KW - Unterwasserarchitektur
Y1 - 2022
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20220322-46115
UR - https://artificialreefdesign.com/
SN - 978-3-00-074495-2
N1 - The URL leads to 3D models of real coral reefs.
ER -
TY - INPR
A1 - Mosavi, Amir
A1 - Torabi, Mehrnoosh
A1 - Hashemi, Sattar
A1 - Saybani, Mahmoud Reza
A1 - Shamshirband, Shahaboddin
T1 - A Hybrid Clustering and Classification Technique for Forecasting Short-Term Energy Consumption
N2 - Electrical energy distributor companies in Iran have to announce their energy demand at least three days ahead of the market opening. Therefore, an accurate load estimation is highly crucial. This research applied a methodology based on the CRISP data mining process and used SVM, ANN, and CBA-ANN-SVM (a novel hybrid model of clustering with both widely used ANN and SVM) to predict the short-term electrical energy demand of Bandarabbas. Previous studies introduced few effective parameters and achieved no reasonable error for Bandarabbas power consumption. In this research we tried to identify all effective parameters, and with the use of the CBA-ANN-SVM model the error rate has been minimized. After consulting with experts in the field of power consumption and plotting daily power consumption for each week, this research showed that official holidays and weekends have an impact on power consumption. When the weather gets warmer, the consumption of electrical energy increases due to the use of electrical air conditioners. Also, consumption patterns in warm and cold months are different. Analyzing the power consumption of the same month for different years showed highly similar consumption patterns. Factors with a high impact on power consumption were identified, and statistical methods were utilized to prove their impact. Models were then built using SVM, ANN and CBA-ANN-SVM. Since the proposed method (CBA-ANN-SVM) yields a low MAPE of 1.474 (4 clusters) and 1.297 (3 clusters) in comparison with SVM (MAPE = 2.015) and ANN (MAPE = 1.790), it was selected as the final model. The final model combines the benefits of both models with those of clustering. By discovering the structure of the data, the clustering algorithm divides the data into several clusters based on similarities and differences between them. Because the data inside each cluster are more similar to each other than to the entire data set, modeling each cluster separately presents better results. For future research, we suggest using fuzzy methods, genetic algorithms, or a hybrid of both to forecast each cluster. It is also possible to use fuzzy methods or genetic algorithms or a hybrid of both without using clustering. It is expected that such models will produce better and more accurate results. This paper presents a hybrid approach to predict the electric energy usage of weather-sensitive loads. The presented method utilizes the clustering paradigm along with ANN and SVM approaches for accurate short-term prediction of electric energy usage, using weather data. Since the methodology invoked in this research is based on CRISP data mining, data preparation has received a great deal of attention in this research. Once data pre-processing was done, the underlying pattern of electric energy consumption was extracted by means of machine learning methods to precisely forecast short-term energy consumption. The proposed approach (CBA-ANN-SVM) was applied to real load data, resulting in higher accuracy compared to the existing models. © 2018 American Institute of Chemical Engineers Environ Prog, 2018. https://doi.org/10.1002/ep.12934
KW - Data Mining
KW - support vector machine (SVM)
KW - Machine Learning
KW - forecasting
KW - Prediction
KW - Electric Energy Consumption
KW - clustering
KW - artificial neural networks (ANN)
Y1 - 2018
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20180907-37550
N1 - This is the pre-peer reviewed version of the following article: https://onlinelibrary.wiley.com/doi/10.1002/ep.12934, which has been published in final form at https://doi.org/10.1002/ep.12934. This article may be used for non-commercial purposes in accordance with Wiley Terms and Conditions for Use of Self-Archived Versions.
ER -
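[Editor's note] The core idea of the record above is to cluster the load data first and then fit one model per cluster, since data within a cluster are more homogeneous. A minimal sketch of that cluster-then-regress pattern follows; the features, cluster count, and use of plain SVR are illustrative assumptions, not the authors' CBA-ANN-SVM configuration.

```python
# Hedged sketch: clustering followed by a per-cluster regressor, scored by MAPE.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVR
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 4))               # e.g. temperature, humidity, day-of-week, holiday flag (assumed)
y = 50 + 10 * X[:, 0] + rng.normal(size=1000)  # synthetic daily load

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X_tr)

models = {c: SVR().fit(X_tr[km.labels_ == c], y_tr[km.labels_ == c])
          for c in range(km.n_clusters)}     # one regressor per cluster

labels_te = km.predict(X_te)
y_hat = np.array([models[c].predict(x[None, :])[0] for c, x in zip(labels_te, X_te)])
mape = 100 * np.mean(np.abs((y_te - y_hat) / y_te))
print(f"MAPE = {mape:.3f}")
```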
TY - INPR
A1 - Kavrakov, Igor
A1 - Morgenthal, Guido
T1 - A synergistic study of a CFD and semi-analytical models for aeroelastic analysis of bridges in turbulent wind conditions
N2 - Long-span bridges are prone to wind-induced vibrations. Therefore, a reliable representation of the aerodynamic forces acting on a bridge deck is of major significance for the design of such structures. This paper presents a systematic study of the two-dimensional (2D) fluid-structure interaction of a bridge deck under smooth and turbulent wind conditions. Aerodynamic forces are modeled by two approaches: a computational fluid dynamics (CFD) model and six semi-analytical models. The vortex particle method is utilized for the CFD model, and the free-stream turbulence is introduced by seeding vortex particles upstream of the deck with prescribed spectral characteristics. The employed semi-analytical models are based on the quasi-steady and linear unsteady assumptions, with aerodynamic coefficients obtained from CFD analyses. The underlying assumptions of the semi-analytical aerodynamic models are used to interpret the results of buffeting forces and aeroelastic response due to free-stream turbulence in comparison with the CFD model. Extensive discussions are provided to analyze the effect of linear fluid memory and quasi-steady nonlinearity from a CFD perspective. The outcome of the analyses indicates that fluid memory is a governing effect in the buffeting forces and aeroelastic response, while the effect of the nonlinearity is overestimated by the quasi-steady models. Finally, flutter analyses are performed, and the obtained critical velocities are compared with wind tunnel results, followed by a brief examination of the post-flutter behavior. The results of this study provide a deeper understanding of the extent to which the applied models are able to replicate the physical processes for fluid-structure interaction phenomena in bridge aerodynamics and aeroelasticity.
KW - Ingenieurwissenschaften
KW - Aerodynamik
KW - Bridge
KW - Aerodynamic nonlinearity
KW - Fluid memory
KW - Vortex particle method
KW - Buffeting
KW - Flutter
Y1 - 2018
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20200206-40873
N1 - This is the pre-peer reviewed version of the following article: https://www.sciencedirect.com/science/article/abs/pii/S0889974617308423?via%3Dihub, which has been published in final form at https://doi.org/10.1016/j.jfluidstructs.2018.06.013
ER -
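[Editor's note] For orientation, the linear quasi-steady buffeting force mentioned above has a standard textbook form; the expression below is that generic form (lift per unit span only) and is not quoted from the paper, whose six semi-analytical models differ precisely in such terms. With air density \(\rho\), mean wind speed \(U\), deck width \(B\), static lift and drag coefficients \(C_L\) and \(C_D\), lift slope \(C_L'\), and gust components \(u(t)\), \(w(t)\):

```latex
% Standard linear quasi-steady buffeting lift (generic form, assumed notation)
L_b(t) = \tfrac{1}{2}\,\rho\, U^2 B
         \left[\, 2\, C_L\, \frac{u(t)}{U}
               + \left( C_L' + C_D \right) \frac{w(t)}{U} \,\right]
```

Linear unsteady models weight such terms with frequency-dependent admittance and flutter-derivative functions; that frequency dependence is the fluid-memory effect the study finds to be governing.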
TY - JOUR
A1 - Köhler, Hermann
A1 - König, Reinhard
T1 - Aktionsräume in Dresden
N2 - This study examines the activity spaces of respondents in Dresden by means of a standardized survey (n=360). The activities underlying these activity spaces are differentiated into shopping for daily needs, going out (e.g. to a café, pub, or restaurant), outdoor recreation (e.g. going for a walk, using green spaces), and private sociability (e.g. celebrations, visiting relatives/friends). The activity radius is differentiated into the respondent's own quarter, neighboring quarters, and the rest of the urban area. To derive a comprehensive indicator of a respondent's average activity radius from the four activities considered, a model for an activity-radius index is developed. The study concludes that the age of the respondents has a significant, albeit small, influence on the activity radius. Net household income has a conditionally significant, likewise small, influence on the everyday activities of the respondents.
T3 - Arbeitspapiere Informatik in der Architektur - Nr. 14
KW - Aktionsraumforschung
KW - Quantitative Sozialforschung
KW - Dresden
Y1 - 2012
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20160822-26726
UR - http://infar.architektur.uni-weimar.de/service/drupal-infar/Arbeitspapiere
ER -

TY - BOOK
ED - Maier, Matthias
ED - Simon-Ritz, Frank
T1 - Alles digital? E-Books in Studium und Forschung : Weimarer EDOC-Tage 2011
N2 - It is a picture from days gone by: a knowledge-hungry student, in search of sound scholarly information, heads for the holiest place of all books, the university library. For some time now, however, students have not only been frequenting libraries but, increasingly, the Internet, where they search for and find digital books, so-called e-books. How can the transformation brought about by the arrival of the e-book in the established research system be described, what consequences follow from it, and will everything eventually become digital, even the library? An eleven-member team of experts from Germany and Switzerland explores these questions during the two-day conference. The Weimar E-DOC days address the changes in the institutional fabric surrounding the digital book. Traditionally, publishers and libraries are important components of the supply of knowledge for study and teaching. With the advent of the e-book, however, research is shifting more and more to the Internet. The search engine Google has emerged as a new competitor to classical library research, and publishers, too, must respond to the new challenges of a digital book market. In cooperation with the university library and the master's programme in media management, students, scholars, librarians, and publishers discuss how the e-book is changing our engagement with literature. The conference proceedings compile all perspectives and results.
KW - Elektronisches Buch
KW - Studium
KW - Lehre
KW - Forschung
KW - EDOC-Tage Weimar 2011
KW - E-Book
KW - Digitaler Buchmarkt
Y1 - 2012
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20120223-15699
SN - 978-3-86068-454-2
N1 - The conference took place on 27-28 May 2011 in the Audimax of Bauhaus-Universität Weimar.
PB - Verlag der Bauhaus-Universität
CY - Weimar
ER -
TY - INPR
A1 - Steiner, Maria
A1 - Bourinet, Jean-Marc
A1 - Lahmer, Tom
T1 - An adaptive sampling method for global sensitivity analysis based on least-squares support vector regression
N2 - In the field of engineering, surrogate models are commonly used to approximate the behavior of a physical phenomenon in order to reduce computational costs. Generally, a surrogate model is created based on a set of training data, where a typical method for the statistical design is Latin hypercube sampling (LHS). Even though a space-filling distribution of the training data is reached, the sampling process takes no information on the underlying behavior of the physical phenomenon into account, and new data cannot be sampled in the same distribution if the approximation quality is not sufficient. Therefore, in this study we present a novel adaptive sampling method based on a specific surrogate model, least-squares support vector regression. The adaptive sampling method generates training data based on the uncertainty in the local prognosis capabilities of the surrogate model: areas of higher uncertainty require more sample data. The approach offers a cost-efficient calculation due to the properties of least-squares support vector regression. The advantages of the adaptive sampling method over LHS are demonstrated on different analytical examples. Furthermore, the adaptive sampling method is applied to the calculation of global sensitivity values according to Sobol, where it shows faster convergence than the LHS method. With the applications in this paper it is shown that the presented adaptive sampling method improves the estimation of global sensitivity values, hence visibly reducing the overall computational costs.
KW - Approximation
KW - Sensitivitätsanalyse
KW - Abtastung
KW - Surrogate models
KW - Least-squares support vector regression
KW - Adaptive sampling method
KW - Global sensitivity analysis
KW - Sampling
Y1 - 2018
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20181218-38320
N1 - This is the pre-peer reviewed version of the following article: https://www.sciencedirect.com/science/article/pii/S0951832017311808, which has been published in final form at https://doi.org/10.1016/j.ress.2018.11.015.
SP - 1
EP - 33
ER -
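[Editor's note] The record above samples new training points where the surrogate's local prognosis is most uncertain. The sketch below illustrates that adaptive loop in spirit only: instead of the paper's least-squares support vector regression with its own uncertainty measure, it uses kernel ridge regression (a closely related formulation) with a bootstrap ensemble as a stand-in uncertainty proxy. Function, candidate pool, and hyperparameters are assumptions.

```python
# Hedged sketch: uncertainty-driven adaptive sampling with a surrogate ensemble.
import numpy as np
from sklearn.kernel_ridge import KernelRidge

def f(x):                                     # analytical test function (assumed example)
    return np.sin(3 * x) + 0.5 * x

rng = np.random.default_rng(2)
X = rng.uniform(-2, 2, size=(8, 1))           # small initial design
cand = np.linspace(-2, 2, 400)[:, None]       # candidate pool for new samples

for it in range(20):                          # adaptive sampling loop
    preds = []
    for _ in range(20):                       # bootstrap ensemble as an uncertainty proxy
        idx = rng.integers(0, len(X), len(X))
        m = KernelRidge(kernel="rbf", alpha=1e-3, gamma=2.0).fit(X[idx], f(X[idx]).ravel())
        preds.append(m.predict(cand))
    std = np.std(preds, axis=0)
    X = np.vstack([X, cand[np.argmax(std)]])  # add a sample where local prognosis is most uncertain

print(len(X), "samples after adaptation")
```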
TY - JOUR
A1 - Homaei, Mohammad Hossein
A1 - Soleimani, Faezeh
A1 - Shamshirband, Shahaboddin
A1 - Mosavi, Amir
A1 - Nabipour, Narjes
A1 - Varkonyi-Koczy, Annamaria R.
T1 - An Enhanced Distributed Congestion Control Method for Classical 6LowPAN Protocols Using Fuzzy Decision System
JF - IEEE Access
N2 - Classical Internet of Things routing and wireless sensor networks can provide more precise monitoring of the covered area due to the higher number of utilized nodes. Because of the limitations of shared transfer media, many nodes in the network are prone to collisions during simultaneous transmissions. Medium access control protocols are usually more practical in networks with low traffic that are not subjected to external noise from adjacent frequencies. Congestion management in the network comprises prevention, detection, and control solutions, all of which are the focus of this study. In the congestion prevention phase, the proposed method chooses the next hop of the path using a fuzzy decision-making system to distribute network traffic via optimal paths. In the congestion detection phase, a dynamic approach to queue management was designed to detect congestion in the least amount of time and prevent collisions. In the congestion control phase, the back-pressure method was used, based on the quality of the queue, to decrease the probability of the pre-congested node being linked into the pathway. The main goals of this study are to balance the energy consumption of network nodes, reduce the rate of lost packets, and increase the quality of service in routing. Simulation results showed that the proposed Congestion Control Fuzzy Decision Making (CCFDM) method was more capable of improving routing parameters than recent algorithms.
KW - Internet der Dinge
KW - IOT
KW - Internet of things
KW - wireless sensor network
KW - congestion control
KW - fuzzy decision making
KW - back-pressure
Y1 - 2020
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20200213-40805
UR - https://ieeexplore.ieee.org/document/8967114
VL - 8
SP - 20628
EP - 20645
PB - IEEE
ER -
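[Editor's note] The congestion-prevention phase above scores candidate next hops with a fuzzy decision system. The sketch below shows the general shape of such a scorer; the input variables (queue occupancy, remaining energy), membership shapes, and rule base are illustrative assumptions, not the CCFDM specification.

```python
# Hedged sketch: fuzzy next-hop scoring with triangular memberships and a small rule base.
def tri(x, a, b, c):
    """Triangular membership function on [a, c] peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def next_hop_score(queue, energy):
    """Mamdani-style rule evaluation, defuzzified by a weighted average of rule outputs."""
    low_q, high_q = tri(queue, -0.5, 0.0, 0.6), tri(queue, 0.4, 1.0, 1.5)
    low_e, high_e = tri(energy, -0.5, 0.0, 0.6), tri(energy, 0.4, 1.0, 1.5)
    rules = [
        (min(low_q, high_e), 1.0),   # empty queue, full battery: excellent candidate
        (min(high_q, low_e), 0.0),   # congested and drained: avoid
        (min(low_q, low_e), 0.5),    # idle but low energy: acceptable
        (min(high_q, high_e), 0.4),  # busy but well powered: mediocre
    ]
    w = sum(r for r, _ in rules)
    return sum(r * v for r, v in rules) / w if w else 0.5

# choose the neighbor with the highest fuzzy score (hypothetical neighbor states)
neighbors = {"n1": (0.2, 0.9), "n2": (0.8, 0.5)}
print(max(neighbors, key=lambda n: next_hop_score(*neighbors[n])))  # -> n1
```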
TY - THES
A1 - Fleischmann, Ewan
T1 - Analysis and Design of Blockcipher Based Cryptographic Algorithms
N2 - This thesis focuses on the analysis and design of hash functions and authenticated encryption schemes that are blockcipher based. We give an introduction to these fields of research, taking a blockcipher-based point of view, with special emphasis on double-length, double-call blockcipher-based compression functions. The first main topic (thesis parts I-III) is the analysis and design of hash functions. We start with a collision security analysis of some well-known double-length blockcipher-based compression functions and hash functions: Abreast-DM, Tandem-DM and MDC-4. We also propose new double-length compression functions that have elevated collision security guarantees. We complement the collision analysis with a preimage analysis by stating (near) optimal security results for Abreast-DM, Tandem-DM, and Hirose-DM. Also, some generalizations are discussed. These are the first preimage security results for blockcipher-based double-length hash functions that go beyond the birthday barrier. We then raise the abstraction level and analyze the notion of 'hash function indifferentiability from a random oracle'. We thus no longer focus on how to obtain a good compression function but, instead, on how to obtain a good hash function using (other) cryptographic primitives. In particular, we give some examples where this strong notion of hash function security might give questionable advice for building a practical hash function. In the second main topic (thesis part IV), which is on authenticated encryption schemes, we present an on-line authenticated encryption scheme, McOEx, that simultaneously achieves confidentiality and integrity and is secure against nonce misuse. It is the first dedicated scheme that achieves high standards of security and, at the same time, is on-line computable.
N2 - Die Schwerpunkte dieser Dissertation sind die Analyse und das Design von blockchiffrenbasierten Hashfunktionen (Abschnitte I-III) sowie die Entwicklung von robusten Verfahren zur authentifizierten Verschlüsselung (Abschnitt IV). Die Arbeit beginnt mit einer Einführung in diese Themengebiete, wobei – insbesondere bei den Hashfunktionen – eine blockchiffrenzentrierte Perspektive eingenommen wird. Die Abschnitte I-III dieser Dissertation beschäftigen sich mit der Analyse und dem Design von Hashfunktionen. Zu Beginn wird die Kollisionssicherheit einiger wohlbekannter Kompressions- und Hashfunktionen mit zweifacher Blockchiffrenausgabelänge näher analysiert: Abreast-DM, Tandem-DM und MDC-4. Ebenso werden neue Designs vorgestellt, welche erhöhte Kollisionssicherheitsgarantien haben. Ergänzend zur Kollisionssicherheitsanalyse wird die Resistenz gegen Urbildangriffe von Kompressionsfunktionen doppelter Ausgabelänge untersucht. Dabei werden nahezu optimale Sicherheitsschranken für Abreast-DM, Tandem-DM und Hirose-DM abgeleitet. Einige Verallgemeinerungen sind ebenfalls Teil der Diskussion. Das sind die ersten Sicherheitsresultate gegen Urbildangriffe auf blockchiffrenbasierte Kompressionsfunktionen doppelter Länge, die weit über die bis dahin bekannten Sicherheitsresultate hinausgehen. Daran anschließend folgt eine Betrachtung, die auf einem erhöhten Abstraktionslevel durchgeführt wird und den Begriff der Undifferenzierbarkeit einer Hashfunktion von einem Zufallsorakel diskutiert. Hierbei liegt der Fokus nicht darauf, wie man eine gute Kompressionsfunktion auf Basis anderer kryptographischer Funktionen erstellt, sondern auf dem Design einer Hashfunktion auf Basis einer Kompressionsfunktion. Unter Einnahme eines eher praktischen Standpunktes wird anhand einiger Beispiele aufgezeigt, dass die relativ starke Eigenschaft der Undifferenzierbarkeit einer Hashfunktion zu widersprüchlichen Designempfehlungen für praktikable Hashfunktionen führen kann. Im zweiten Schwerpunkt, in Abschnitt IV, werden Verfahren zur authentifizierten Verschlüsselung behandelt. Es wird ein neues Schema zur authentifizierten Verschlüsselung vorgestellt, McOEx. Es schützt gleichzeitig die Integrität und die Vertraulichkeit einer Nachricht. McOEx ist das erste konkrete Schema, das robust gegen die Wiederverwendung von Nonces ist und gleichzeitig on-line berechnet werden kann.
KW - Kryptologie
KW - Kryptographie
KW - Authentifizierte Verschlüsselung
KW - Beweisbare Sicherheit
KW - Blockchiffrenbasierte Hashfunktion
KW - Blockcipher Based Hashing
Y1 - 2013
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20130722-19835
ER -
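[Editor's note] As background for the constructions named in this record: the double-length schemes analyzed (Abreast-DM, Tandem-DM, Hirose-DM) build on compression steps of the classic Davies-Meyer shape, making two blockcipher calls per message block on a 2n-bit chaining state. The single-length Davies-Meyer step below is standard textbook material, not a contribution of the thesis; \(E_k\) denotes a blockcipher keyed with \(k\).

```latex
% Davies-Meyer compression step (standard background construction)
H_i = E_{m_i}\!\left(H_{i-1}\right) \oplus H_{i-1}
```

Security of such constructions is typically proved in the ideal-cipher model; chaining two n-bit words per step is what lets the double-length designs target digest security beyond the birthday bound of a single n-bit state, as the record's preimage results indicate.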