000 Computer science, information science, general works
A fundamental characteristic of human beings is the desire to learn, beginning at the moment of birth. The rather formal learning process that learners encounter in school, in vocational training, or at university is currently undergoing fundamental change. Increasing technologization, the widespread availability of mobile devices, ubiquitous access to digital information, and students' role as early adopters of these technological innovations all demand a response from the educational system.
This study examines one such response: the use of mobile learning in higher education.
Examining m-learning first requires an investigation of the educational model e-learning. Many universities have already established e-learning as one of their educational segments and provide a wide range of methods to support this kind of teaching.
This study includes an empirical acceptance analysis regarding the general learning behavior of students and their approval of e-learning methods. A survey on the approval of m-learning supplements the results.
Mobile learning is characterized by the mobility of both the communication devices and their users. These two factors give rise to new interrelations, demonstrate the potential of today's mobile devices, and indicate the likelihood of increasing learning performance.
The dissertation addresses these interrelations and the use of mobile devices in the context of m-learning. M-learning and the use of mobile devices require reflection not only from a technological point of view. In addition to the technical features of such mobile devices, the usability of their applications plays an important role, especially with regard to the limited display size.
For the purpose of evaluating mobile apps and browser-based applications, various analytical methods are suitable.
The concluding heuristic evaluation points out the weaknesses of an established m-learning application, reveals the need for improvement, and shows an approach to rectifying these shortcomings.
Tropical coral reefs, among the world's oldest ecosystems and home to some of the highest levels of biodiversity on the planet, are currently facing an unprecedented ecological crisis during this massive human-induced period of extinction. Hence, tropical reefs symbolically stand for the destructive effects of human activities on nature [4], [5]. Artificial reefs are excellent examples of how architectural design can be combined with ecosystem regeneration [6], [7], [8]. However, working at the interface between the artificial and the complex, temporal nature of natural systems presents a challenge, inter alia with respect to the B-rep modelling legacy of computational modelling.
The presented doctorate investigates strategies for applying digital practice to realise what is an essential bulwark for retaining reefs in impossibly challenging times. Beyond the main question of integrating computational modelling and high-precision monitoring strategies in artificial coral reef design, this doctorate explores techniques, methods, and linking frameworks to support future research and practice in ecology-led design contexts.
Considering the many existing approaches to artificial coral reef design, one finds that they often fall short in precisely understanding the relationships between architectural and ecological aspects (e.g. how surface design and material composition can foster coral larvae settlement, or how structural three-dimensionality enhances biodiversity) and lack an integrated underwater (UW) monitoring process. Such a process is necessary in order to gather knowledge about the ecosystem and make it available for design, and to learn whether artificial structures contribute to reef regeneration or rather harm the coral reef ecosystem.
For the research, empirical experimental methods were applied: algorithmic coral reef design, high-precision UW monitoring, computational modelling and simulation, validated through parallel real-world physical experimentation – two Artificial Reef Prototypes (ARPs) in Gili Trawangan, Indonesia (2012–today). Multiple discrete methods and sub-techniques were developed in seventeen computational experiments and applied in such a way that many are cross-validated and integrated in an overall framework, which is offered as a significant contribution to the field. Other main contributions include the Ecosystem-aware design approach, Key Performance Indicators (KPIs) for coral reef design, algorithmic design and fabrication of Biorock cathodes, new high-precision UW monitoring strategies, long-term real-world constructed experiments, new digital analysis methods, and two new front-end web-based tools for reef design and monitoring. The methodological framework is a finding of the research; it has many technical components that were tested and combined in this way for the very first time.
In summary, the thesis responds to the urgency and relevance in preserving marine species in tropical reefs during this massive extinction period by offering a differentiated approach towards artificial coral reefs – demonstrating the feasibility of digitally designing such ‘living architecture’ according to multiple context and performance parameters. It also provides an in-depth critical discussion of computational design and architecture in the context of ecosystem regeneration and Planetary Thinking. In that respect, the thesis functions as both theoretical and practical background for computational design, ecology and marine conservation – not only to foster the design of artificial coral reefs technically but also to provide essential criteria and techniques for conceiving them.
Keywords: Artificial coral reefs, computational modelling, high precision underwater monitoring, ecology in design.
In this paper we introduce LUCI, a Lightweight Urban Calculation Interchange system, designed to bring the advantages of a calculation and content co-ordination system to small planning and design groups by means of open-source middleware. The middleware focuses on problems typical of urban planning and therefore features a geo-data repository as well as job runtime administration to coordinate simulation models and their multiple views. The described system architecture is accompanied by two exemplary use cases that have been used to test and further develop our concepts and implementations.
Volume rendering is a visualization technique for presenting various spatial measurement and simulation data in a descriptive, interactive, graphical manner. The following contribution presents a method for superimposing multiple volume data sets onto an architectural surface model. This complex rendering computation is performed with hardware-accelerated shaders on the graphics card. The contribution introduces the implemented software prototype "VolumeRendering". In addition to the interactive rendering method, emphasis was placed on user-friendly operation; the goal was to enable specialist planners to evaluate the volume data easily. Superimposing, for example, data from different measurement methods onto a surface model yields synergies and new analysis possibilities. Finally, the application of the software prototype is illustrated using examples from an interdisciplinary research project.
Quality assurance of the content of building information models (BIM) is becoming increasingly important as these models are used for a steadily growing number of use cases. It has to be carried out, in line with the project goal, for every software application involved in the data exchange. With the Industry Foundation Classes (IFC), an established format for describing and exchanging such models is available. For the quality-assurance process, a server-based test environment will become part of the new IFC certification procedure. To this end, the "iabi - Institut für angewandte Bauinformatik", in cooperation with "buildingSMART e.V." (http://www.buildingsmart.de), has implemented a Global Testing Documentation Server (GTDS). The GTDS is a database-backed web application that pursues the following goals:
• providing a tool for the qualitative testing of IFC-based models
• supporting communication between IFC developers and users
• documenting the quality of IFC-based software applications
• providing a platform for the certification of IFC applications
The subject of this work is the design and exemplary implementation of a tool for the interactive visualization of quality deficiencies that the GTDS has detected in a model. The exemplary implementation is to be built on top of the OPEN IFC TOOLS (http://www.openifctools.org).
Multi-criteria decision analysis (MCDA) is an established methodology to support the decision-making of multi-objective problems. For conducting an MCDA, in most cases, a set of objectives (SOO) is required, which consists of a hierarchical structure comprised of objectives, criteria, and indicators. The development of an SOO is usually based on moderated development processes requiring high organizational and cognitive effort from all stakeholders involved. This article proposes elementary interactions as a key paradigm of an algorithm-driven development process for an SOO that requires little moderation effort. Elementary interactions are self-contained information requests that may be answered with little cognitive effort. The pairwise comparison of elements in the well-known analytic hierarchy process (AHP) is an example of an elementary interaction. Each elementary interaction in the development process presented contributes to the stepwise development of an SOO. Based on the hypothesis that an SOO may be developed exclusively using elementary interactions (EIs), a concept for a multi-user platform is proposed. Essential components of the platform are a Model Aggregator, an Elementary Interaction Stream Generator, a Participant Manager, and a Discussion Forum. While the latter component serves the professional exchange of the participants, the first three components are intended to be automatable by algorithms. The platform concept proposed has been partly evaluated in an explorative validation study demonstrating the general functionality of the algorithms outlined. In summary, the platform concept suggested demonstrates the potential to ease SOO development processes, as it does not restrict the application domain, it is intended to work with little administrative and moderation effort, and it supports the further development of an existing SOO in the event of changes in external conditions. The algorithm-driven development of SOOs proposed in this article may ease the development of MCDA applications and, thus, may have a positive effect on the spread of MCDA applications.
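To make the pairwise-comparison type of elementary interaction mentioned above concrete, the following sketch (illustrative only, not part of the article; the comparison matrix is a made-up example) derives AHP priority weights from a pairwise comparison matrix via its principal eigenvector and reports Saaty's consistency ratio.

```python
import numpy as np

# Hypothetical AHP comparison matrix for three criteria (a_ij = importance of i over j).
A = np.array([
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 2.0],
    [1/5, 1/2, 1.0],
])

# Priority weights: principal eigenvector of A, normalized to sum to 1.
eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()

# Saaty's consistency index and ratio (random index RI = 0.58 for n = 3).
n = A.shape[0]
ci = (eigvals[k].real - n) / (n - 1)
cr = ci / 0.58

print("weights:", np.round(weights, 3), "consistency ratio:", round(cr, 3))
```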
In the field of engineering, surrogate models are commonly used for approximating the behavior of a physical phenomenon in order to reduce the computational costs. Generally, a surrogate model is created based on a set of training data, where a typical method for the statistical design is Latin hypercube sampling (LHS). Even though a space-filling distribution of the training data is reached, the sampling process takes no information on the underlying behavior of the physical phenomenon into account, and new data cannot be sampled in the same distribution if the approximation quality is not sufficient. Therefore, in this study we present a novel adaptive sampling method based on a specific surrogate model, the least-squares support vector regression. The adaptive sampling method generates training data based on the uncertainty in the local prognosis capabilities of the surrogate model - areas of higher uncertainty require more sample data. The approach offers a cost-efficient calculation due to the properties of the least-squares support vector regression. The benefits of the adaptive sampling method are demonstrated in comparison with LHS on different analytical examples. Furthermore, the adaptive sampling method is applied to the calculation of global sensitivity values according to Sobol, where it shows faster convergence than the LHS method. With the applications in this paper it is shown that the presented adaptive sampling method improves the estimation of global sensitivity values, hence visibly reducing the overall computational costs.
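The following sketch contrasts a one-shot Latin hypercube design with an uncertainty-driven adaptive sampling loop. It is an assumption-laden stand-in, not the study's implementation: scikit-learn provides no least-squares support vector regression, so a Gaussian process surrogate and its predictive standard deviation serve as the local uncertainty measure, and the test function f is invented.

```python
import numpy as np
from scipy.stats import qmc
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# Analytical test function standing in for an expensive simulation.
def f(x):
    return np.sin(3 * x[:, 0]) + 0.5 * np.cos(5 * x[:, 1])

rng = np.random.default_rng(0)

# Initial space-filling design: Latin hypercube sampling in [0, 1]^2.
sampler = qmc.LatinHypercube(d=2, seed=0)
X = sampler.random(n=10)
y = f(X)

# Adaptive loop: add samples where the surrogate's local uncertainty is largest.
candidates = rng.random((500, 2))
for _ in range(20):
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.3), normalize_y=True)
    gp.fit(X, y)
    _, std = gp.predict(candidates, return_std=True)
    x_new = candidates[np.argmax(std)].reshape(1, -1)   # highest predictive uncertainty
    X = np.vstack([X, x_new])
    y = np.append(y, f(x_new))

print("final design size:", X.shape[0])
```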
The Berlin Wissenschaftsladen e. V. (science shop), founded in 1982 and known as WILAB – a kind of "alternative" spin-off of the Technische Universität Berlin – was one of the various ventures in socially engaged "counter-science" that appeared on the scene in the Federal Republic of Germany around 1980. This contribution situates the founding of the "shop" in the context of contemporary advances in (regional) research and technology policy. It shows how the deindustrializing island city, through countervailing "innovation policy", even took on a certain pioneering role: innovations visible beyond the city limits, such as the start-up fair BIG TECH or the Berliner Innovations- und Gründerzentrum (BIG), opened in 1983 as the first "incubator" [sic] in the Federal Republic, can be credited to the technology transfer office of the TU Berlin, TU-transfer, launched in 1977/78.
In other words, the conditions one increasingly faced here were less and less compatible with the dreams of a "critical", non-heteronomous (counter-)science. In latent contrast to the historiographical prominence of the science-critical zeitgeist, ventures committed to "alternative" aims such as WILAB led a relatively marginalized niche existence. Nevertheless, the endeavour pursued at WILAB – from this perspective hardly promising – of initiating a different, namely "more humane", information technology sheds instructive light on the emergence of "entrepreneurial" science in the Federal Republic around 1980.
Radio discussion on bauhaus.fm, 5 November 2012.
Harald S. Liehr is an editor and head of the Weimar office of the Böhlau publishing house (Vienna / Cologne / Weimar); Dr. Frank Simon-Ritz is director of the university library of the Bauhaus-Universität Weimar.
The questions were asked by René Tauschke and Jean-Marie Schaldach.
Evaporation is a very important process; it is one of the most critical factors in agricultural, hydrological, and meteorological studies. Due to the interactions of multiple climatic factors, evaporation is considered a complex and nonlinear phenomenon to model. Thus, machine learning methods have gained popularity in this realm. In the present study, four machine learning methods, Gaussian Process Regression (GPR), K-Nearest Neighbors (KNN), Random Forest (RF) and Support Vector Regression (SVR), were used to predict the pan evaporation (PE). Meteorological data including PE, temperature (T), relative humidity (RH), wind speed (W), and sunny hours (S) were collected from 2011 through 2017. The accuracy of the studied methods was determined using the statistical indices of Root Mean Squared Error (RMSE), correlation coefficient (R) and Mean Absolute Error (MAE). Furthermore, Taylor diagrams were utilized for evaluating the accuracy of the mentioned models. The results of this study showed that at the Gonbad-e Kavus, Gorgan and Bandar Torkman stations, GPR with RMSE of 1.521 mm/day, 1.244 mm/day, and 1.254 mm/day, KNN with RMSE of 1.991 mm/day, 1.775 mm/day, and 1.577 mm/day, RF with RMSE of 1.614 mm/day, 1.337 mm/day, and 1.316 mm/day, and SVR with RMSE of 1.55 mm/day, 1.262 mm/day, and 1.275 mm/day had appropriate performances in estimating PE values. It was found that GPR for Gonbad-e Kavus Station with input parameters of T, W and S and GPR for the Gorgan and Bandar Torkmen stations with input parameters of T, RH, W and S had the most accurate predictions and were proposed for precise estimation of PE. The findings of the current study indicate that PE values may be accurately estimated with a few easily measured meteorological parameters.
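As a rough illustration of the model comparison described above (not the study's data or settings), the following sketch fits GPR, KNN, RF, and SVR regressors from scikit-learn to synthetic stand-in data with the inputs T, RH, W, S and reports RMSE, MAE, and R.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.metrics import mean_absolute_error, mean_squared_error
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

# Synthetic stand-in for the station data: T, RH, W, S -> pan evaporation (mm/day).
rng = np.random.default_rng(1)
X = rng.uniform([0, 10, 0, 0], [40, 100, 10, 14], size=(500, 4))
y = 0.15 * X[:, 0] - 0.03 * X[:, 1] + 0.3 * X[:, 2] + 0.2 * X[:, 3] + rng.normal(0, 0.5, 500)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

models = {
    "GPR": make_pipeline(StandardScaler(), GaussianProcessRegressor(normalize_y=True)),
    "KNN": make_pipeline(StandardScaler(), KNeighborsRegressor(n_neighbors=5)),
    "RF": RandomForestRegressor(n_estimators=200, random_state=0),
    "SVR": make_pipeline(StandardScaler(), SVR(C=10.0)),
}

for name, model in models.items():
    model.fit(X_tr, y_tr)
    pred = model.predict(X_te)
    rmse = mean_squared_error(y_te, pred) ** 0.5
    mae = mean_absolute_error(y_te, pred)
    r = np.corrcoef(y_te, pred)[0, 1]
    print(f"{name}: RMSE={rmse:.3f}  MAE={mae:.3f}  R={r:.3f}")
```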
In crowdsourcing-based systems, quality assurance of user-generated content is of great importance for maintaining usability. The building-physics learning game "BuildVille" uses a crowdsourcing approach for its quiz application: the quiz questions are generated by the users themselves. This work is intended to ensure that faulty questions, or questions entered by mistake or as a joke, are detected, corrected, or excluded from further distribution as early as possible. To this end, a concept for the quality-assurance measures in BuildVille is to be developed on the basis of an analysis of existing crowdsourcing-based systems with respect to the quality-assurance measures they implement.
The release of the large language model-based chatbot ChatGPT 3.5 in November 2022 has brought considerable attention to the subject of artificial intelligence, not only to the public. From the perspective of higher education, ChatGPT challenges various learning and assessment formats as it significantly reduces the effectiveness of their learning and assessment functionalities. In particular, ChatGPT might be applied to formats that require learners to generate text, such as bachelor theses or student research papers. Accordingly, the research question arises to what extent writing of bachelor theses is still a valid learning and assessment format. Correspondingly, in this exploratory study, the first author was asked to write his bachelor’s thesis exploiting ChatGPT. For tracing the impact of ChatGPT methodically, an autoethnographic approach was used. First, all considerations on the potential use of ChatGPT were documented in logs, and second, all ChatGPT chats were logged. Both logs and chat histories were analyzed and are presented along with the recommendations for students regarding the use of ChatGPT suggested by a common framework. In conclusion, ChatGPT is beneficial for thesis writing during various activities, such as brainstorming, structuring, and text revision. However, there are limitations that arise, e.g., in referencing. Thus, ChatGPT requires continuous validation of the outcomes generated and thus fosters learning. Currently, ChatGPT is valued as a beneficial tool in thesis writing. However, writing a conclusive thesis still requires the learner’s meaningful engagement. Accordingly, writing a thesis is still a valid learning and assessment format. With further releases of ChatGPT, an increase in capabilities is to be expected, and the research question needs to be reevaluated from time to time.
Modern digital material approaches for the visualization and simulation of heterogeneous materials make it possible to investigate the behavior of complex multiphase materials, with their physically nonlinear material response, at various scales. However, these computational techniques require extensive hardware resources with respect to computing power and main memory to numerically solve large-scale discretized models in 3D. Due to a very high number of degrees of freedom, which may rapidly increase into the two-digit million range, the limited hardware resources have to be utilized in the most efficient way to enable execution of the numerical algorithms in minimal computation time. Hence, in the field of computational mechanics, various methods and algorithms can lead to an optimized runtime behavior of nonlinear simulation models; several such approaches are proposed and investigated in this thesis.
Today, the numerical simulation of damage effects in heterogeneous materials is performed by the adaptation of multiscale methods. A consistent modeling in three-dimensional space with an appropriate discretization resolution on each scale (based on a hierarchical or concurrent multiscale model), however, still poses computational challenges with respect to the convergence behavior, the scale transition, or the solver performance of the weakly coupled problems. The computational efficiency and the distribution among available hardware resources (often based on a parallel hardware architecture) can be improved significantly. In the past years, high-performance computing (HPC) and graphics processing unit (GPU) based computation techniques were established for the investigation of scientific objectives. Their application results in the modification of existing and the development of new computational methods for the numerical implementation, which makes it possible to take advantage of massively clustered computer hardware resources. In the field of numerical simulation in material science, e.g. within the investigation of damage effects in multiphase composites, the suitability of such models is often restricted by the number of degrees of freedom (d.o.f.s) in the three-dimensional spatial discretization. This proves to be difficult for the type of implementation method used for the nonlinear simulation procedure and, simultaneously, has a great influence on memory demand and computational time.
In this thesis, a hybrid discretization technique has been developed for the three-dimensional discretization of a three-phase material that respects the numerical efficiency of nonlinear (damage) simulations of these materials. The increase of the computational efficiency is enabled by the improved scalability of the numerical algorithms. Consequently, substructuring methods for partitioning the hybrid mesh were implemented, tested, and adapted to the HPC computing framework using several hundred CPU (central processing unit) nodes for building the finite element assembly. A memory-efficient, iterative, and parallelized equation solver combined with a special preconditioning technique for solving the underlying equation system was modified and adapted to enable combined CPU- and GPU-based computations.
Hence, it is recommended by the author to apply the substructuring method for hybrid meshes, which respects different material phases and their mechanical behavior and which makes it possible to split the structure into elastic and inelastic parts. The consideration of the nonlinear material behavior, specified for the corresponding phase, is thereby limited to the inelastic domains only and thus reduces the computing time of the nonlinear procedure. Due to the high numerical effort of such simulations, an alternative approach for the nonlinear finite element analysis, based on sequentially linear analysis, was implemented with respect to scalable HPC. The incremental-iterative procedure in finite element analysis (FEA) during the nonlinear step was then replaced by a sequence of linear FE analyses when damage in critical regions occurred, known in the literature as the saw-tooth approach. As a result, qualitative (smeared) crack initiation in 3D multiphase specimens has been simulated efficiently.
In computer-aided design (CAD), industrial products are designed using a virtual 3D model. A CAD model typically consists of curves and surfaces in a parametric representation, in most cases non-uniform rational B-splines (NURBS). The same representation is also used for the analysis, optimization and presentation of the model. In each phase of this process, different visualizations are required to provide appropriate user feedback. Designers work with illustrative and realistic renderings, engineers need a comprehensible visualization of the simulation results, and usability studies or product presentations benefit from using a 3D display. However, the interactive visualization of NURBS models and corresponding physical simulations is a challenging task because of the computational complexity and the limited graphics hardware support.
This thesis proposes four novel rendering approaches that improve the interactive visualization of CAD models and their analysis. The presented algorithms exploit the latest graphics hardware capabilities to advance the state of the art in terms of quality, efficiency and performance. In particular, two approaches describe the direct rendering of the parametric representation without precomputed approximations and time-consuming pre-processing steps. New data structures and algorithms are presented for the efficient partition, classification, tessellation, and rendering of trimmed NURBS surfaces, as well as the first direct isosurface ray-casting approach for NURBS-based isogeometric analysis. The other two approaches introduce the versatile concept of programmable order-independent semi-transparency for the illustrative and comprehensible visualization of depth-complex CAD models, and a novel method for the hybrid reprojection of opaque and semi-transparent image information to accelerate stereoscopic rendering. Both approaches are also applicable to standard polygonal geometry, which contributes to the computer graphics and virtual reality research communities.
The evaluation is based on real-world NURBS-based models and simulation data. The results show that rendering can be performed directly on the underlying parametric representation with interactive frame rates and subpixel-precise image results. The computational costs of additional visualization effects, such as semi-transparency and stereoscopic rendering, are reduced to maintain interactive frame rates. The benefit of this performance gain was confirmed by quantitative measurements and a pilot user study.
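As background to the parametric representation discussed above, the following sketch evaluates a point on a NURBS curve with the Cox-de Boor recursion in plain NumPy. The control points, weights, and knot vector are an invented example; the thesis' GPU-based surface rendering goes far beyond this.

```python
import numpy as np

def bspline_basis(i, p, u, knots):
    """Cox-de Boor recursion: value of the i-th B-spline basis of degree p at u."""
    if p == 0:
        return 1.0 if knots[i] <= u < knots[i + 1] else 0.0
    left = 0.0
    if knots[i + p] != knots[i]:
        left = (u - knots[i]) / (knots[i + p] - knots[i]) * bspline_basis(i, p - 1, u, knots)
    right = 0.0
    if knots[i + p + 1] != knots[i + 1]:
        right = (knots[i + p + 1] - u) / (knots[i + p + 1] - knots[i + 1]) * bspline_basis(i + 1, p - 1, u, knots)
    return left + right

def nurbs_point(u, ctrl, weights, knots, p):
    """Evaluate a NURBS curve point as a weighted rational combination of control points."""
    num = np.zeros(ctrl.shape[1])
    den = 0.0
    for i in range(len(ctrl)):
        b = bspline_basis(i, p, u, knots) * weights[i]
        num += b * ctrl[i]
        den += b
    return num / den

# Hypothetical quadratic NURBS curve with four control points (clamped knot vector).
ctrl = np.array([[0.0, 0.0], [1.0, 2.0], [3.0, 2.0], [4.0, 0.0]])
weights = np.array([1.0, 0.8, 0.8, 1.0])
knots = np.array([0, 0, 0, 0.5, 1, 1, 1], dtype=float)

for u in [0.0, 0.25, 0.5, 0.75, 0.999]:
    print(u, nurbs_point(u, ctrl, weights, knots, p=2))
```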
Generating spatial configurations is a central task in the architectural and urban design process and aims at creating an environment that is pleasant for people. The geometry of the resulting spaces plays a central role here, since it strongly influences how people perceive and behave and, once built, can only be changed with great effort. Most decisions that fix the geometry of spaces are made within a very short period of time (the design phase). Wrong decisions made in this phase have long-term effects on people's lives and thus also consequences for economic and ecological aspects.
Computer-based layout systems can usefully support the design of spatial configurations, since they make it possible to generate and evaluate a large number of variants in a very short time. This yields two advantages. First, the large number of variants can help to find better solutions. Second, formalizing the evaluation criteria can lead to greater objectivity and transparency in finding solutions. To support the design of spatial configurations optimally, a layout system must be able to generate the widest possible spectrum of floor-plan variants (diversity), offer numerous options and levels of detail for describing the problem (flexibility), and provide means of adequately describing the requirements placed on the spatial configuration (relevance). Regarding the latter, perception- and use-related criteria (such as degree of privacy, feeling of safety, spatial impression, ease of orientation, potential for social interaction) play an important role.
The layout systems developed so far exhibit substantial limitations with respect to diversity, flexibility, and relevance, which can be traced back to unsuitable methods for representing space. The space-representation methods used in a layout system largely determine its possibilities for form generation and problem description. Visibility-based representations of space (fields of view/isovists, lines of sight, convex spaces) are particularly suitable for representing spaces in layout systems: on the one hand, they provide an extensive repertoire for describing spatial configurations with respect to perception- and use-related criteria; on the other hand, they can be derived entirely from the geometry of the bounding surfaces and are not tied to particular geometric objects used for form generation.
In the present work, a layout system based on these representations of space is developed. An evaluation mechanism (EM) is developed that makes it possible to assess arbitrary two-dimensional spatial configurations with respect to perception- and use-related criteria. For this purpose, a methodology was developed that automatically identifies spatial regions (O-Spaces and P-Spaces) which have certain properties (e.g. visible area, compactness of the field of view, daylight) and certain relations to one another (such as mutual visibility, visual and physical distance). The EM was coupled with generation mechanisms (GM) to test whether it is suitable for searching large variant spaces for suitable spatial configurations. The results of these experiments show that the developed methodology is a promising approach to the automated generation of spatial configurations: First, the EM is completely separated from the GM, which makes it possible to use different GMs in one design system and thus to enlarge the variant space (diversity). Second, the EM allows the requirements for a spatial configuration to be described flexibly (different scales, different levels of detail). Finally, the representation methods used allow the problem to be described in a way that is strongly oriented towards the effect of space on people (relevance).
The methodology developed in this work makes an important contribution to improving evidence-based design processes, since it builds a bridge between the user-oriented evaluation of spatial configurations and their generation.
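As a rough illustration of the visibility-based representations (fields of view/isovists) mentioned above, and not part of the thesis itself, the following sketch computes a simple 2D isovist by casting rays from a viewpoint against wall segments and estimates the visible area with the shoelace formula; the room layout is invented.

```python
import numpy as np

def ray_segment_hit(origin, direction, a, b):
    """Distance along the ray to segment a-b, or None if there is no intersection."""
    v1 = origin - a
    v2 = b - a
    v3 = np.array([-direction[1], direction[0]])   # perpendicular to the ray direction
    denom = np.dot(v2, v3)
    if abs(denom) < 1e-12:
        return None
    t = (v2[0] * v1[1] - v2[1] * v1[0]) / denom    # distance along the ray
    s = np.dot(v1, v3) / denom                     # parameter along the segment
    return t if t >= 0 and 0 <= s <= 1 else None

def isovist(origin, walls, max_dist=100.0, n_rays=360):
    """Polygon of points visible from 'origin', clipped by the wall segments."""
    origin = np.asarray(origin, dtype=float)
    pts = []
    for ang in np.linspace(0.0, 2.0 * np.pi, n_rays, endpoint=False):
        d = np.array([np.cos(ang), np.sin(ang)])
        hits = [ray_segment_hit(origin, d, np.asarray(a, float), np.asarray(b, float))
                for a, b in walls]
        t = min([h for h in hits if h is not None], default=max_dist)
        pts.append(origin + min(t, max_dist) * d)
    return np.array(pts)

def polygon_area(pts):
    """Shoelace formula for the area of a simple polygon."""
    x, y = pts[:, 0], pts[:, 1]
    return 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))

# Hypothetical room: a 10 x 6 rectangle with one internal partition wall.
walls = [((0, 0), (10, 0)), ((10, 0), (10, 6)), ((10, 6), (0, 6)), ((0, 6), (0, 0)),
         ((4, 0), (4, 4))]
iso = isovist((2, 3), walls)
print("visible area from (2, 3):", round(polygon_area(iso), 2))
```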
Image Analysis Using Human Body Geometry and Size Proportion Science for Action Classification
(2020)
Gestures are one of the basic modes of human communication and are usually used to represent different actions. Automatic recognition of these actions forms the basis for solving more complex problems like human behavior analysis, video surveillance, event detection, and sign language recognition. Action recognition from images is a challenging task, as key information like temporal data, object trajectory, and optical flow is not available in still images. Measuring the size of different regions of the human body, i.e., step size, arm span, and the lengths of the arm, forearm, and hand, provides valuable clues for the identification of human actions. In this article, a framework for the classification of human actions is presented in which humans are detected and localized through faster region-based convolutional neural networks followed by morphological image processing techniques. Furthermore, geometric features are extracted from the human blob and incorporated into the classification rules for the six human actions, i.e., standing, walking, single-hand side wave, single-hand top wave, both hands side wave, and both hands top wave. The performance of the proposed technique has been evaluated using precision, recall, omission error, and commission error. The proposed technique has been comparatively analyzed in terms of overall accuracy with existing approaches, showing that it performs well in contrast to its counterparts.
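The following sketch illustrates the idea of classification rules over body-geometry features for the six actions listed above. All feature names and thresholds are assumptions for illustration; they are not the rules derived in the article.

```python
# Illustrative rule-based classifier over hypothetical body-geometry features
# (bounding-box aspect ratio, wrist heights, horizontal wrist extension), all
# normalized by body height. Feature names and thresholds are assumptions.

def classify_action(aspect_ratio, left_wrist_y, right_wrist_y, left_wrist_x, right_wrist_x,
                    head_y=1.0, hip_y=0.5):
    """Return one of six action labels from normalized geometric measurements.

    y-coordinates grow upwards (feet = 0.0, head = 1.0); x measures horizontal
    distance of the wrist from the body axis in units of body height.
    """
    left_up, right_up = left_wrist_y > head_y, right_wrist_y > head_y
    left_side = left_wrist_x > 0.35 and hip_y < left_wrist_y <= head_y
    right_side = right_wrist_x > 0.35 and hip_y < right_wrist_y <= head_y

    if left_up and right_up:
        return "both hands top wave"
    if left_side and right_side:
        return "both hands side wave"
    if left_up or right_up:
        return "single-hand top wave"
    if left_side or right_side:
        return "single-hand side wave"
    # Wide silhouettes (large step size) are read as walking, narrow ones as standing.
    return "walking" if aspect_ratio > 0.45 else "standing"

print(classify_action(0.3, 0.6, 1.1, 0.1, 0.2))   # -> single-hand top wave
```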
In this work, the molecular separation of an aqueous-organic system was simulated using combined soft computing-mechanistic approaches. The considered separation system was a microporous membrane contactor for the separation of benzoic acid from water by contact with an organic phase containing extractor molecules. Indeed, extractive separation is carried out using membrane technology where a solute-organic complex is formed at the interface. The main focus was to develop a simulation methodology for the prediction of the concentration distribution of the solute (benzoic acid) on the feed side of the membrane system, as the removal efficiency of the system is determined by the concentration distribution of the solute in the feed channel. The pattern of the Adaptive Neuro-Fuzzy Inference System (ANFIS) was optimized by finding the optimum membership function, learning percentage, and number of rules. The ANFIS was trained using the data extracted from the CFD simulation of the membrane system. The comparisons between the concentration distribution predicted by ANFIS and the CFD data revealed that the optimized ANFIS pattern can be used as a predictive tool for the simulation of the process. An R2 higher than 0.99 was obtained for the optimized ANFIS model. The main advantage of the developed methodology is its very low computational time for the simulation of the system; it can be used as a rigorous simulation tool for the understanding and design of membrane-based systems.
Highlights: molecular separation using microporous membranes; development of a hybrid ANFIS-CFD model for the separation process; optimization of the ANFIS structure for the prediction of the separation process.
This thesis investigates the potential of 3D web applications for conveying information in general and for presenting urban-planning contexts in particular.
As a fundamental factor of visual and functional quality, which directly influences the user's perception, the feasibility of 3D web content is assessed by means of an exploratory, qualitative evaluation of web agencies.
Building on this, the potential of 3D web applications is examined from the user's perspective in order to establish connections between the feasibility of development on the one hand and the acceptance criteria of the recipient on the other.
The empirical study, designed for this thesis with the research partner Bosch, investigates how 3D compared with 2D and 2.5D, and how WebGL compared with previous 3D web technologies, influence the user's visual perception and cognitive performance.
The findings of the investigation show parallels to existing studies from fields outside the web.
In order to derive the significance of 3D web applications for improving decision-making processes in urban planning projects, aspects of interaction and visual perception are placed in the specific context of urban planning tools. It is examined whether web-based 3D visualizations can reasonably be integrated to convey urban-design contexts and to what extent existing projects, such as the research project urbanAPI developed by Fraunhofer IGD and used as an example in this thesis, can make use of the WebGL technology.
Against this background, the thesis aims to identify acceptance criteria and usage barriers of 3D web applications based on WebGL in order to contribute to the feasibility of web applications and to the development of corresponding urban planning tools.
Cultural Heritage on Mobile Devices: Building Guidelines for UNESCO World Heritage Sites' Apps
(2021)
Technological improvements and access provide a fertile scenario for creating and developing mobile applications (apps). This scenario results in a myriad of apps providing information regarding touristic destinations, including those with a cultural profile, such as apps dedicated to UNESCO World Heritage Sites (WHS). However, not all of these apps have the same efficiency. In order to have a successful app, its development must consider usability aspects and features aligned with reliable content. Although guidelines for mobile usability are broadly available, they are generic, and none of them concentrates specifically on cultural heritage places, especially those in open-air settings. This research aims to fill this gap in the literature and discusses how to adapt and develop specific guidelines for a better outdoor WHS experience. It uses an empirical approach applied to an open-air WHS city: Weimar and its Bauhaus and Classical Weimar sites. In order to build a new set of guidelines for open-air WHS, this research used a systematic approach to compare literature-based guidelines with industry-based ones (based on affordances), extracted from the available apps dedicated to WHS in Germany. The instructions compiled from both sources were comparatively tested using two prototypes built from the distinct sets of guidelines, resulting in a set of recommendations that collects the best approaches from both sources and suggests new ones based on the evaluation.
Estimating the solubility of carbon dioxide in ionic liquids, using reliable models, is of paramount importance from both environmental and economic points of view. In this regard, the current research aims at evaluating the performance of two data-driven techniques, namely multilayer perceptron (MLP) and gene expression programming (GEP), for predicting the solubility of carbon dioxide (CO2) in ionic liquids (ILs) as a function of pressure, temperature, and four thermodynamic parameters of the ionic liquid. To develop the above techniques, 744 experimental data points derived from the literature, covering 13 ILs, were used (80% of the points for training and 20% for validation). Two backpropagation-based methods, namely Levenberg–Marquardt (LM) and Bayesian Regularization (BR), were applied to optimize the MLP algorithm. Various statistical and graphical assessments were applied to check the credibility of the developed techniques. The results were then compared with those calculated using the Peng–Robinson (PR) or Soave–Redlich–Kwong (SRK) equations of state (EoS). The highest coefficient of determination (R2 = 0.9965) and the lowest root mean square error (RMSE = 0.0116) were recorded for the MLP-LMA model on the full dataset (with a negligible difference to the MLP-BR model). The comparison of results from this model with the widely applied thermodynamic equation of state models revealed slightly better performance, but the EoS approaches also performed well, with R2 from 0.984 up to 0.996. Lastly, the newly established correlation based on the GEP model exhibited very satisfactory results with overall values of R2 = 0.9896 and RMSE = 0.0201.
Calculating the solubility of the hydrocarbon components of natural gases is known as one of the important issues in operational work in petroleum and chemical engineering. In this work, a novel solubility estimation tool has been proposed for hydrocarbon gases (including methane, ethane, propane, and butane) in aqueous electrolyte solutions based on the extreme learning machine (ELM) algorithm. Comparing the ELM outputs with a comprehensive real databank, which has 1175 solubility points, yielded R-squared values of 0.985 and 0.987 for the training and testing phases, respectively. Furthermore, the visual comparison of estimated and actual hydrocarbon solubility confirmed the ability of the proposed solubility model. Additionally, a sensitivity analysis has been performed on the input variables of the model to identify their impact on hydrocarbon solubility. Such a comprehensive and reliable study can help engineers and scientists to successfully determine the important thermodynamic properties, which are key factors in optimizing and designing different industrial units such as refineries and petrochemical plants.
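Since the extreme learning machine is simple enough to implement directly, the following sketch shows the basic idea: a single hidden layer with fixed random weights whose output weights are obtained by a least-squares solve. The synthetic data merely stand in for the solubility databank mentioned above.

```python
import numpy as np

# Minimal extreme learning machine (ELM) regressor: a single hidden layer with
# random, fixed weights; only the output weights are fitted by least squares.
class ELMRegressor:
    def __init__(self, n_hidden=100, seed=0):
        self.n_hidden = n_hidden
        self.rng = np.random.default_rng(seed)

    def _hidden(self, X):
        return np.tanh(X @ self.W + self.b)

    def fit(self, X, y):
        self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))
        self.b = self.rng.normal(size=self.n_hidden)
        H = self._hidden(X)
        # Output weights via the Moore-Penrose pseudo-inverse of the hidden activations.
        self.beta = np.linalg.pinv(H) @ y
        return self

    def predict(self, X):
        return self._hidden(X) @ self.beta

# Synthetic stand-in for pressure/temperature/salinity -> gas solubility data.
rng = np.random.default_rng(1)
X = rng.uniform(0, 1, size=(1175, 3))
y = np.sin(2 * X[:, 0]) + 0.5 * X[:, 1] ** 2 - 0.3 * X[:, 2] + rng.normal(0, 0.02, 1175)

elm = ELMRegressor(n_hidden=50).fit(X[:900], y[:900])
pred = elm.predict(X[900:])
ss_res = np.sum((y[900:] - pred) ** 2)
ss_tot = np.sum((y[900:] - y[900:].mean()) ** 2)
print("test R^2:", round(1 - ss_res / ss_tot, 4))
```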
This dissertation presents three studies on the design and implementation of interactive surface environments. It puts forward approaches to engineering interactive surface prototypes using prevailing methodologies and technologies. The scholarly findings from each study have been condensed into academic manuscripts, which are conferred herewith.
The first study identifies a communication gap between engineers of interactive surface systems (i.e., originators of concepts) and future developers. To bridge the gap, it explores a UML-based framework to establish a formal syntax for modeling hardware, middleware, and software of interactive surface prototypes. The proposed framework targets models-as-end-products, towards enabling a shared view of research prototypes thereby facilitating dialogue between concept originators and future developers.
The second study positions itself to support developers with an open-source solution for exploiting 3D point clouds for interactive tabletop applications using CPU architectures. Given dense 3D point-cloud representations of tabletop environments, the study aims toward mitigating high computational effort by segmenting candidate interaction regions as a preprocessing step. The study contributes a robust open-source solution for reducing computational costs when leveraging 3D point clouds for interactive tabletop applications. The solution itself is flexible and adaptable to variable interactive surface applications.
The third study contributes an archetypal concept for integrating mobile devices as active components in augmented tabletop surfaces. With emphasis on transparent development trails, the study demonstrates the utility of the open-source tool developed in the second study. In addition to leveraging 3D point clouds for real-time interaction, the research considers recent advances in computer vision and wireless communication to realize a modern, interactive tabletop application. A robust strategy that combines spatial augmented reality, point-cloud-based depth perception, CNN-based object detection, and Bluetooth communication is put forward. In addition to seamless communication between ad hoc mobile devices and interactive tabletop systems, the archetypal concept demonstrates the benefits of preprocessing point clouds by segmenting candidate interaction regions, as suggested in the second study.
Collectively, the studies presented in this dissertation contribute (1) bridging the gap between originators of interactive surface concepts and future developers, (2) promoting the exploration of 3D point clouds for interactive surface applications using CPU-based architectures, and (3) leveraging 3D point clouds together with emerging CNN-based object detection and Bluetooth communication technologies to advance existing surface interaction concepts.
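As an illustration of the preprocessing step of segmenting candidate interaction regions from a tabletop point cloud (a sketch under assumed coordinates and thresholds, not the dissertation's open-source tool), the following NumPy snippet crops points above the table plane inside a region of interest and thins them with a voxel-grid filter.

```python
import numpy as np

# Hypothetical preprocessing step: from a dense tabletop point cloud, keep only
# points inside a candidate interaction region (above the table plane, inside a
# rectangular ROI) and thin them out with a simple voxel-grid filter.
def segment_interaction_region(points, table_z=0.0, roi_min=(-0.5, -0.3), roi_max=(0.5, 0.3),
                               max_height=0.4, voxel=0.01):
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    mask = (
        (z > table_z + 0.01) & (z < table_z + max_height) &
        (x >= roi_min[0]) & (x <= roi_max[0]) &
        (y >= roi_min[1]) & (y <= roi_max[1])
    )
    region = points[mask]
    if len(region) == 0:
        return region
    # Voxel-grid downsampling: keep one representative point per occupied voxel.
    keys = np.floor(region / voxel).astype(np.int64)
    _, idx = np.unique(keys, axis=0, return_index=True)
    return region[np.sort(idx)]

cloud = np.random.default_rng(0).uniform([-1, -1, 0], [1, 1, 0.6], size=(200_000, 3))
roi_points = segment_interaction_region(cloud)
print(len(cloud), "->", len(roi_points), "points in the candidate region")
```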
A Hybrid Clustering and Classification Technique for Forecasting Short-Term Energy Consumption
(2018)
Electrical energy distribution companies in Iran have to announce their energy demand at least three days ahead of the market opening. Therefore, an accurate load estimation is highly crucial. This research invoked a methodology based on CRISP data mining and used SVM, ANN, and CBA-ANN-SVM (a novel hybrid model of clustering with both widely used ANN and SVM) to predict the short-term electrical energy demand of Bandarabbas. In previous studies, researchers introduced few effective parameters and achieved no reasonable error for Bandarabbas power consumption. In this research we tried to recognize all efficient parameters, and with the use of the CBA-ANN-SVM model the error rate has been minimized. After consulting with experts in the field of power consumption and plotting daily power consumption for each week, this research showed that official holidays and weekends have an impact on power consumption. When the weather gets warmer, the consumption of electrical energy increases due to the use of electrical air conditioners. Also, consumption patterns in warm and cold months are different. Analyzing power consumption of the same month for different years showed high similarity in power consumption patterns. Factors with high impact on power consumption were identified and statistical methods were utilized to prove their impact. Using SVM, ANN and CBA-ANN-SVM, the model was built. Since the proposed method (CBA-ANN-SVM) has a low MAPE of 1.474 (4 clusters) and 1.297 (3 clusters) in comparison with SVM (MAPE = 2.015) and ANN (MAPE = 1.790), this model was selected as the final model. The final model has the benefits of both models and the benefits of clustering. The clustering algorithm, by discovering the data structure, divides the data into several clusters based on similarities and differences between them. Because the data inside each cluster are more similar than the entire data set, modeling within each cluster will yield better results. For future research, we suggest using fuzzy methods and genetic algorithms, or a hybrid of both, to forecast each cluster. It is also possible to use fuzzy methods or genetic algorithms or a hybrid of both without using clustering. It is expected that such models will produce better and more accurate results.
This paper presents a hybrid approach to predict the electric energy usage of weather-sensitive loads. The presented method utilizes the clustering paradigm along with ANN and SVM approaches for accurate short-term prediction of electric energy usage, using weather data. Since the methodology invoked in this research is based on CRISP data mining, data preparation has received a great deal of attention in this research. Once data pre-processing was done, the underlying pattern of electric energy consumption was extracted by means of machine learning methods to precisely forecast short-term energy consumption. The proposed approach (CBA-ANN-SVM) was applied to real load data, resulting in higher accuracy compared to the existing models.
2018 American Institute of Chemical Engineers, Environ Prog, 2018. https://doi.org/10.1002/ep.12934
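The following sketch mimics the hybrid clustering-plus-regression idea (CBA-ANN-SVM) on synthetic load data: observations are clustered with k-means, one ANN or SVR model is fitted per cluster, and accuracy is reported as MAPE. Features, data, and hyperparameters are invented for illustration and do not reproduce the study.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import mean_absolute_percentage_error
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

# Synthetic stand-in for hourly load data: temperature, hour of day, holiday flag -> load.
rng = np.random.default_rng(0)
n = 3000
X = np.column_stack([
    rng.uniform(5, 40, n),            # temperature
    rng.integers(0, 24, n),           # hour of day
    rng.integers(0, 2, n),            # holiday / weekend flag
])
y = 200 + 4 * np.maximum(X[:, 0] - 22, 0) + 30 * np.sin(np.pi * X[:, 1] / 24) \
    - 25 * X[:, 2] + rng.normal(0, 5, n)

X_tr, X_te, y_tr, y_te = X[:2400], X[2400:], y[:2400], y[2400:]

scaler = StandardScaler().fit(X_tr)
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(scaler.transform(X_tr))

# One regressor per cluster: keep whichever of ANN / SVR fits the cluster's training
# data better (a real study would use a proper validation split instead).
models = {}
for c in range(km.n_clusters):
    idx = km.labels_ == c
    candidates = [MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0),
                  SVR(C=100.0)]
    fitted = [m.fit(scaler.transform(X_tr[idx]), y_tr[idx]) for m in candidates]
    errs = [mean_absolute_percentage_error(y_tr[idx], m.predict(scaler.transform(X_tr[idx])))
            for m in fitted]
    models[c] = fitted[int(np.argmin(errs))]

labels_te = km.predict(scaler.transform(X_te))
pred = np.array([models[c].predict(scaler.transform(x.reshape(1, -1)))[0]
                 for x, c in zip(X_te, labels_te)])
print("test MAPE: %.2f%%" % (100 * mean_absolute_percentage_error(y_te, pred)))
```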
This research aims to model soil temperature (ST) using the machine learning models of the multilayer perceptron (MLP) algorithm and the support vector machine (SVM) in hybrid form with the firefly optimization algorithm, i.e. MLP-FFA and SVM-FFA. In the current study, measured ST and meteorological parameters of the Tabriz and Ahar weather stations for the period 2013–2015 are used for training and testing of the studied models with delays of one and two days. To ascertain conclusive results for the validation of the proposed hybrid models, the error metrics are benchmarked in an independent testing period. Moreover, Taylor diagrams were utilized for that purpose. The obtained results showed that, in the case of a one-day delay, except in predicting ST at 5 cm below the soil surface (ST5cm) at Tabriz station, MLP-FFA produced superior results compared with the MLP, SVM, and SVM-FFA models. However, for a two-day delay, MLP-FFA showed increased accuracy in predicting ST5cm and ST20cm of Tabriz station and ST10cm of Ahar station in comparison with SVM-FFA. Additionally, for all of the prescribed models, the performance of the MLP-FFA and SVM-FFA hybrid models in the testing phase was found to be meaningfully superior to the classical MLP and SVM models.
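To show how a firefly-algorithm hybrid of this kind can be wired together, the sketch below tunes SVR hyperparameters with a plain firefly optimizer on synthetic data. The dataset, search ranges, and algorithm constants are assumptions, not the study's configuration.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVR

# Objective: cross-validated error of an SVR as a function of (log10 C, log10 gamma).
X, y = make_regression(n_samples=200, n_features=4, noise=10.0, random_state=0)

def objective(pos):
    c, g = 10.0 ** pos[0], 10.0 ** pos[1]
    score = cross_val_score(SVR(C=c, gamma=g), X, y, cv=3,
                            scoring="neg_root_mean_squared_error").mean()
    return -score   # RMSE to minimize

# Plain firefly algorithm: fireflies move toward brighter (lower-error) ones.
rng = np.random.default_rng(0)
n_fireflies, n_iter, dim = 6, 10, 2
lo, hi = np.array([-2.0, -4.0]), np.array([3.0, 1.0])     # search box in log10 space
pos = rng.uniform(lo, hi, size=(n_fireflies, dim))
fit = np.array([objective(p) for p in pos])
alpha, beta0, absorption = 0.2, 1.0, 1.0

for _ in range(n_iter):
    for i in range(n_fireflies):
        for j in range(n_fireflies):
            if fit[j] < fit[i]:                     # j is brighter, so i moves toward j
                r2 = np.sum((pos[i] - pos[j]) ** 2)
                beta = beta0 * np.exp(-absorption * r2)
                pos[i] = pos[i] + beta * (pos[j] - pos[i]) + alpha * (rng.random(dim) - 0.5)
                pos[i] = np.clip(pos[i], lo, hi)
                fit[i] = objective(pos[i])

best = pos[np.argmin(fit)]
print("best C = %.3g, gamma = %.3g, CV RMSE = %.2f" % (10 ** best[0], 10 ** best[1], fit.min()))
```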
Following the restructuring of the power industry, electricity supply to end-use customers has undergone fundamental changes. In the restructured power system, some of the responsibilities of the vertically integrated distribution companies have been assigned to network managers and retailers. Under the new situation, retailers are in charge of providing electrical energy to electricity consumers who have already signed contracts with them. Retailers usually provide the required energy at a variable price, from wholesale electricity markets, forward contracts with energy producers, or distributed energy generators, and sell it at a fixed retail price to their clients. Different strategies are implemented by retailers to reduce the potential financial losses and risks associated with the uncertain nature of wholesale spot electricity market prices and the electrical load of the consumers. In this paper, the strategic behavior of retailers in implementing forward contracts, distributed energy sources, and demand-response programs with the aim of increasing their profit and reducing their risk, while keeping their retail prices as low as possible, is investigated. For this purpose, the risk management problem of retailer companies collaborating with wholesale electricity markets is modeled through a bi-level programming approach, and a comprehensive framework for retail electricity pricing, considering customers' constraints, is provided in this paper. In the first level of the proposed bi-level optimization problem, the retailer maximizes its expected profit for a given risk level of profit variability, while in the second level, the customers minimize their consumption costs. The proposed programming problem is modeled as a mixed-integer programming (MIP) problem and can be efficiently solved using available commercial solvers. The simulation results on a test case confirm the effectiveness of the proposed demand-response program based on a dynamic pricing approach in reducing the retailer's risk and increasing its profit.
In this paper, the decision-making problem of retailers under a dynamic pricing approach for demand response integration has been investigated. The retailer was supposed to rely on forward contracts, DGs, and the spot electricity market to supply the required active and reactive power of its customers. To verify the effectiveness of the proposed model, four schemes for the retailer's scheduling problem are considered, and the resulting profit under each scheme is analyzed and compared. The simulation results on a test case indicate that providing more options for the retailer to buy the required power of its customers, and increasing its flexibility in buying energy from the spot electricity market, reduces the retailer's risk and increases its profit. From the customers' perspective, the retailer's access to different power supply sources may also lead to a reduction in retail electricity prices, since the retailer would be able to decrease its electricity selling price to the customers without losing its profitability, with the aim of attracting more customers. In this work, the conditional value at risk (CVaR) measure is used for considering and quantifying risk in the decision-making problems. Among all the possible options in front of the retailer to optimize its profit and risk, demand response programs are the most beneficial option for both the retailer and its customers. The simulation results on the case study prove that implementing a dynamic pricing approach on retail electricity prices to integrate demand response programs can successfully provoke customers to shift their flexible demand from peak-load hours to mid-load and low-load hours. Comparing the simulation results of the third and fourth schemes evidences the impact of DRPs and customers' load shifting on the reduction of the retailer's risk, as well as the reduction of the retailer's payments to contract holders, DG owners, and the spot electricity market. Furthermore, the numerical results indicate the potential of reducing average retail prices by up to 8% under demand response activation. Consequently, it provides a win-win solution for both the retailer and its customers.
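As a much-simplified, single-level illustration of the risk-aware procurement idea (not the paper's bi-level MIP), the following sketch chooses a forward-contract quantity that minimizes expected cost plus a CVaR term over invented spot-price scenarios, using the Rockafellar-Uryasev linearization and scipy's linear programming solver.

```python
import numpy as np
from scipy.optimize import linprog

# Toy single-level retailer procurement: serve demand D either via a forward
# contract at a fixed price or on the spot market under price scenarios.
D = 100.0                                     # demand to serve (MWh)
p_forward = 52.0                              # forward contract price (EUR/MWh)
spot = np.array([30.0, 45.0, 60.0, 90.0])     # spot-price scenarios (EUR/MWh)
prob = np.array([0.2, 0.4, 0.3, 0.1])
alpha, lam = 0.95, 0.5                        # CVaR confidence level, risk-aversion weight

S = len(spot)
# Decision vector: [q_forward, t, u_1..u_S]; u_s are the CVaR excess-loss variables.
c = np.concatenate([[p_forward - prob @ spot, lam], lam * prob / (1 - alpha)])

# Constraint u_s >= cost_s - t, with cost_s = p_forward*q_f + spot_s*(D - q_f).
A_ub = np.zeros((S, 2 + S))
A_ub[:, 0] = p_forward - spot
A_ub[:, 1] = -1.0
A_ub[np.arange(S), 2 + np.arange(S)] = -1.0
b_ub = -spot * D

bounds = [(0.0, D), (None, None)] + [(0.0, None)] * S
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")

q_f = res.x[0]
expected_cost = q_f * (p_forward - prob @ spot) + D * (prob @ spot)
print("forward quantity: %.1f MWh, expected cost: %.0f EUR" % (q_f, expected_cost))
```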
The automotive industry requires realistic virtual reality applications more than other domains to increase the efficiency of product development. Currently, the visual quality of virtual environments resembles reality, but interaction within these environments is usually far from what is known in everyday life. Several realistic research approaches exist; however, they are still not all-encompassing enough to be usable in industrial processes. This thesis realizes lifelike direct multi-hand and multi-finger interaction with arbitrary objects, and proposes algorithmic and technical improvements that also approach lifelike usability. In addition, the thesis proposes methods to measure the effectiveness and usability of such interaction techniques, and discusses different types of grasping feedback that support the user during interaction. Realistic and reliable interaction is reached through the combination of robust grasping heuristics and plausible pseudophysical object reactions. The easy-to-compute grasping rules use the objects' surface normals and mimic human grasping behavior. The novel concept of Normal Proxies increases grasping stability and diminishes challenges induced by adverse normals. The intricate act of picking up thin and tiny objects remains challenging for some users. These cases are further supported by the consideration of finger pinches, which are measured with a specialized finger tracking device. With regard to typical object constraints, realistic object motion is geometrically calculated as a plausible reaction to user input. The resulting direct finger-based interaction technique enables realistic and intuitive manipulation of arbitrary objects.
The thesis proposes two methods that prove and compare effectiveness and usability. An expert review indicates that experienced users quickly familiarize themselves with the technique. A quantitative and qualitative user study shows that direct finger-based interaction is preferred over indirect interaction in the context of functional car assessments. While controller-based interaction is more robust, direct finger-based interaction provides greater realism and becomes nearly as reliable when the pinch-sensitive mechanism is used. At present, the haptic channel is not used in industrial virtual reality applications. That is why it can be used for grasping feedback, which improves the users' understanding of the grasping situation. This thesis realizes a novel pressure-based tactile feedback at the fingertips. As an alternative, vibro-tactile feedback at the same location is realized, as well as visual feedback by coloring the grasp-involved finger segments. The feedback approaches are also compared within the user study, which reveals that grasping feedback is a requirement for judging grasp status and that tactile feedback improves interaction independent of the display system used. The considerably stronger vibrational tactile feedback can quickly become annoying during interaction. The interaction improvements and hardware enhancements make it possible to interact with virtual objects in a realistic and reliable manner. By addressing realism and reliability, this thesis paves the way for the virtual evaluation of human-object interaction, which is necessary for a broader application of virtual environments in the automotive industry and other domains.
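The following sketch illustrates a normal-based grasping rule in the spirit described above: a pinch is accepted when two finger contacts lie on roughly opposing surface normals and are close enough together. Contact data and thresholds are invented; this is not the thesis' heuristic.

```python
import numpy as np

# Illustrative grasping heuristic: a pinch is detected when two finger contacts
# lie on (roughly) opposing surface normals and are close enough together.
# Thresholds and inputs are assumptions, not the Normal Proxies rules.
def is_stable_pinch(p1, n1, p2, n2, max_gap=0.09, min_opposition=0.7):
    """p1, p2: contact points on the object surface; n1, n2: outward unit normals."""
    p1, n1, p2, n2 = (np.asarray(v, dtype=float) for v in (p1, n1, p2, n2))
    gap = np.linalg.norm(p2 - p1)
    if gap > max_gap:                       # fingers too far apart for a pinch
        return False
    opposition = -np.dot(n1, n2)            # 1.0 when the normals point at each other
    if opposition < min_opposition:
        return False
    # Contact points should lie roughly along the opposition axis of the normals.
    axis = (p2 - p1) / max(gap, 1e-9)
    return abs(np.dot(axis, n2)) > 0.5

# Thumb and index finger contacts on opposite sides of a 4 cm wide virtual object.
print(is_stable_pinch(p1=[0.00, 0, 0], n1=[-1, 0, 0],
                      p2=[0.04, 0, 0], n2=[1, 0, 0]))    # -> True
```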
Polylactic acid (PLA) is a highly applicable material that is used in 3D printers due to some significant features such as its deformation property and affordable cost. For improvement of the end-use quality, it is of significant importance to enhance the quality of fused filament fabrication (FFF)-printed objects in PLA. The purpose of this investigation was to boost toughness and to reduce the production cost of the FFF-printed tensile test samples with the desired part thickness. To remove the need for numerous and idle printing samples, the response surface method (RSM) was used. Statistical analysis was performed to deal with this concern by considering extruder temperature (ET), infill percentage (IP), and layer thickness (LT) as controlled factors. The artificial intelligence method of artificial neural network (ANN) and ANN-genetic algorithm (ANN-GA) were further developed to estimate the toughness, part thickness, and production-cost-dependent variables. Results were evaluated by correlation coefficient and RMSE values. According to the modeling results, ANN-GA as a hybrid machine learning (ML) technique could enhance the accuracy of modeling by about 7.5, 11.5, and 4.5% for toughness, part thickness, and production cost, respectively, in comparison with those for the single ANN method. On the other hand, the optimization results confirm that the optimized specimen is cost-effective and able to comparatively undergo deformation, which enables the usability of printed PLA objects.
Interactive scientific visualizations are widely used for the visual exploration and examination of physical data resulting from measurements or simulations. Driven by technical advancements of data acquisition and simulation technologies, especially in the geo-scientific domain, large amounts of highly detailed subsurface data are generated. The oil and gas industry is particularly pushing such developments as hydrocarbon reservoirs are increasingly difficult to discover and exploit. Suitable visualization techniques are vital for the discovery of the reservoirs as well as their development and production. However, the ever-growing scale and complexity of geo-scientific data sets result in an expanding disparity between the size of the data and the capabilities of current computer systems with regard to limited memory and computing resources.
In this thesis we present a unified out-of-core data-virtualization system supporting geo-scientific data sets consisting of multiple large seismic volumes and height-field surfaces, wherein each data set may exceed the size of the graphics memory or possibly even the main memory. Current data sets fall within the range of hundreds of gigabytes up to terabytes in size. Through the mutual utilization of memory and bandwidth resources by multiple data sets, our data-management system is able to share and balance limited system resources among different data sets. We employ multi-resolution methods based on hierarchical octree and quadtree data structures to generate level-of-detail working sets of the data stored in main memory and graphics memory for rendering. The working set generation in our system is based on a common feedback mechanism with inherent support for translucent geometric and volumetric data sets. This feedback mechanism collects information about required levels of detail during the rendering process and is capable of directly resolving data visibility without the application of any costly occlusion culling approaches. A central goal of the proposed out-of-core data management system is an effective virtualization of large data sets. Through an abstraction of the level-of-detail working sets, our system allows developers to work with extremely large data sets independent of their complex internal data representations and physical memory layouts.
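The following minimal Python sketch illustrates the general idea of a budgeted level-of-detail working set over a quadtree; it omits the rendering feedback and visibility handling of the actual system, and the node structure, refinement criterion, and tile budget are illustrative assumptions.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class QuadtreeNode:
    level: int
    x: int
    y: int
    children: List["QuadtreeNode"] = field(default_factory=list)

def build(level, x, y, max_level):
    """Build a full quadtree down to max_level."""
    node = QuadtreeNode(level, x, y)
    if level < max_level:
        node.children = [build(level + 1, 2 * x + dx, 2 * y + dy, max_level)
                         for dx in (0, 1) for dy in (0, 1)]
    return node

def working_set(root, required_level, budget):
    """Collect a level-of-detail working set by refining nodes breadth-first
    until the required level is reached or the tile budget is exhausted."""
    queue, selected = [root], []
    while queue and len(selected) + len(queue) <= budget:
        node = queue.pop(0)
        if node.children and node.level < required_level(node):
            queue.extend(node.children)
        else:
            selected.append(node)
    selected.extend(queue)  # nodes that could not be refined within the budget
    return selected

tiles = working_set(build(0, 0, 0, 4), required_level=lambda n: 3, budget=64)
print(len(tiles), "tiles in the working set")
```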
Based on this out-of-core data virtualization infrastructure, we present distinct rendering approaches for specific visualization problems of large geo-scientific data sets. We demonstrate the application of our data virtualization system and show how multi-resolution data can be treated exactly the same way as regular data sets during the rendering process. An efficient volume ray casting system is presented for the rendering of multiple arbitrarily overlapping multi-resolution volume data sets. Binary space-partitioning volume decomposition of the bounding boxes of the cube-shaped volumes is used to identify the overlapping and non-overlapping volume regions in order to optimize the rendering process. We further propose a ray casting-based rendering system for the visualization of geological subsurface models consisting of multiple very detailed height fields. The rendering of an entire stack of height-field surfaces is accomplished in a single rendering pass using a two-level acceleration structure, which combines a minimum-maximum quadtree for empty-space skipping and sorted lists of depth intervals to restrict ray intersection searches to relevant height fields and depth ranges. Ultimately, we present a unified rendering system for the visualization of entire geological models consisting of highly detailed stacked horizon surfaces and massive volume data. We demonstrate a single-pass ray casting approach facilitating correct visual interaction between distinct translucent model components, while increasing the rendering efficiency by reducing processing overhead of potentially invisible parts of the model. The combination of image-order rendering approaches and the level-of-detail feedback mechanism used by our out-of-core data-management system inherently accounts for occlusions of different data types without the application of costly culling techniques.
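As a simplified illustration of the empty-space-skipping idea used for height-field ray casting, the sketch below builds a min-max pyramid (a flat-array analogue of a min-max quadtree) over a height field and tests whether a ray segment's height interval can overlap a tile; the depth-interval lists and single-pass ray casting of the actual system are not reproduced here.

```python
import numpy as np

def minmax_pyramid(heightfield):
    """Build a min-max pyramid over a square, power-of-two height field."""
    levels = [(heightfield, heightfield)]
    lo, hi = heightfield, heightfield
    while lo.shape[0] > 1:
        lo = np.minimum.reduce([lo[0::2, 0::2], lo[1::2, 0::2], lo[0::2, 1::2], lo[1::2, 1::2]])
        hi = np.maximum.reduce([hi[0::2, 0::2], hi[1::2, 0::2], hi[0::2, 1::2], hi[1::2, 1::2]])
        levels.append((lo, hi))
    return levels

def tile_may_intersect(levels, level, i, j, ray_zmin, ray_zmax):
    """Empty-space test: a ray segment can only hit the height field inside a
    tile if its height interval overlaps the tile's [min, max] height range."""
    lo, hi = levels[level]
    return not (ray_zmax < lo[i, j] or ray_zmin > hi[i, j])

heights = np.random.default_rng(1).uniform(0.0, 10.0, size=(8, 8))
pyr = minmax_pyramid(heights)
# A ray passing well above the terrain over the whole tile can be skipped.
print(tile_may_intersect(pyr, level=3, i=0, j=0, ray_zmin=12.0, ray_zmax=15.0))  # False
```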
The unified out-of-core data-management and virtualization infrastructure considerably facilitates the implementation of complex visualization systems. We demonstrate its applicability for the visualization of large geo-scientific data sets using output-sensitive rendering techniques. As a result, the magnitude and multitude of data sets that can be interactively visualized is significantly increased compared to existing approaches.
Modern cryptography has become a ubiquitous and essential part of our daily lives. Protocols for secure authentication and encryption protect our communication with various digital services, from private messaging and online shopping to bank transactions and the exchange of sensitive information. Those high-level protocols can naturally be only as secure as the authentication or encryption schemes underneath. Moreover, on a more detailed level, those schemes can at best inherit the security of their underlying primitives. While widespread standards in modern symmetric-key cryptography, such as the Advanced Encryption Standard (AES), have so far resisted analysis, closer analysis and design of related primitives can deepen our understanding.
The present thesis consists of two parts comprising six contributions: the first part considers block-cipher cryptanalysis of the round-reduced AES, the AES-based tweakable block cipher Kiasu-BC, and TNT. The second part studies the design, analysis, and implementation of provably secure authenticated encryption schemes.
In general, cryptanalysis aims at finding distinguishable properties in the output distribution. Block ciphers are a core primitive of symmetric-key cryptography, useful for the construction of various higher-level schemes ranging from authentication and encryption to authenticated encryption and integrity protection. Therefore, their analysis is crucial to secure cryptographic schemes at their lowest level. With rare exceptions, block-cipher cryptanalysis employs a systematic strategy of investigating known attack techniques, and modern proposals are expected to be evaluated against these techniques. This considerable evaluation effort, however, demands contributions not only from the designers but also from external cryptanalysts.
The Advanced Encryption Standard (AES) is one of the most widespread block ciphers nowadays and therefore a natural target for further analysis. Tweakable block ciphers augment the usual inputs of a secret key and a public plaintext by an additional public input called a tweak. Among various proposals over the previous decade, this thesis identifies Kiasu-BC as a noteworthy attempt to construct a tweakable block cipher that is very close to the AES. Hence, its analysis intertwines closely with that of the AES and best illustrates the impact of the tweak on its security. Moreover, the thesis revisits the generic tweakable block cipher Tweak-and-Tweak (TNT) and its instantiation based on the round-reduced AES.
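For orientation, the following Python sketch shows the generic Tweak-and-Tweak composition, in which the tweak is XORed into the state between three independent permutation calls; AES-ECB from the cryptography package stands in for the three permutations, and the keys and parameters are illustrative rather than the TNT-AES instantiation analysed in the thesis.

```python
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def aes_block(key, block):
    """Single-block AES-ECB call, used here as a stand-in for one of the
    three independent permutations of the TNT construction."""
    enc = Cipher(algorithms.AES(key), modes.ECB()).encryptor()
    return enc.update(block) + enc.finalize()

def tnt_encrypt(k1, k2, k3, tweak, message):
    """Tweak-aNd-Tweak: pi3(T xor pi2(T xor pi1(M))). The tweak is XORed into
    the state between the permutation calls. Keys and block sizes here are
    illustrative, not the TNT-AES parameters."""
    state = aes_block(k1, message)
    state = aes_block(k2, xor(state, tweak))
    return aes_block(k3, xor(state, tweak))

k1, k2, k3 = bytes(16), bytes.fromhex("00" * 15 + "01"), bytes.fromhex("00" * 15 + "02")
ct = tnt_encrypt(k1, k2, k3, tweak=bytes(16), message=b"sixteen byte msg")
print(ct.hex())
```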
The first part investigates the security of the AES against several forms of differential cryptanalysis, developing distinguishers on four to six (out of ten) rounds of AES. For Kiasu-BC, it exploits the additional freedom in the tweak to develop two forms of differential-based attacks: rectangles and impossible differentials. The results on Kiasu-BC cover an additional round compared to attacks on the (untweaked) AES. The authors of TNT had provided an initial security analysis that still left a gap between provable guarantees and attacks; our analysis takes a considerable step towards closing this gap. For TNT-AES - an instantiation of TNT built upon the AES round function - this thesis further shows how to transform our distinguisher into a key-recovery attack.
Many applications require the simultaneous authentication and encryption of transmitted data. Authenticated encryption (AE) schemes provide both properties. Modern AE schemes usually demand a unique public input called a nonce that must not repeat. However, this requirement cannot always be guaranteed in practice. As part of a remedy, misuse-resistant and robust AE tries to reduce the impact of occasional misuse. Robust AE, however, considers more than the potential reuse of nonces: common authenticated encryption also demands that the entire ciphertext be buffered until the authentication tag has been successfully verified, which is difficult to ensure in practice since the setting may lack the resources for buffering the messages. Moreover, robustness guarantees in the case of misuse are valuable features.
The second part of this thesis proposes three authenticated encryption schemes: RIV, SIV-x, and DCT. RIV is robust against nonce misuse and the release of unverified plaintexts. Both SIV-x and DCT provide high security independent of nonce repetitions. As the core of SIV-x, this thesis revisits the proof of a highly secure parallel MAC, PMAC-x, revises its details, and proposes SIV-x as a highly secure authenticated encryption scheme. Finally, DCT is a generic approach to n-bit-secure deterministic AE that expands the ciphertext-tag string by no more than n bits beyond the plaintext.
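To illustrate the general flavour of nonce-misuse-resistant AE, the sketch below shows a generic SIV-style composition: a synthetic IV is derived as a PRF over nonce, associated data, and message, and then used as the counter-mode IV. HMAC-SHA256 and AES-CTR are stand-ins chosen for brevity; this is not the RIV, SIV-x, or DCT construction itself.

```python
import hmac, hashlib
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def siv_encrypt(mac_key, enc_key, nonce, associated_data, plaintext):
    """Generic SIV-style composition (illustrative only): repeating a nonce
    leaks at most whether the entire input was repeated."""
    prf_input = nonce + associated_data + plaintext
    iv = hmac.new(mac_key, prf_input, hashlib.sha256).digest()[:16]  # synthetic IV / tag
    ctr = Cipher(algorithms.AES(enc_key), modes.CTR(iv)).encryptor()
    return iv, ctr.update(plaintext) + ctr.finalize()

def siv_decrypt(mac_key, enc_key, nonce, associated_data, iv, ciphertext):
    ctr = Cipher(algorithms.AES(enc_key), modes.CTR(iv)).decryptor()
    plaintext = ctr.update(ciphertext) + ctr.finalize()
    expected = hmac.new(mac_key, nonce + associated_data + plaintext, hashlib.sha256).digest()[:16]
    if not hmac.compare_digest(expected, iv):
        raise ValueError("authentication failed")
    return plaintext

tag, ct = siv_encrypt(bytes(32), bytes(16), b"nonce", b"header", b"secret message")
print(siv_decrypt(bytes(32), bytes(16), b"nonce", b"header", tag, ct))
```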
With its first part, this thesis aims to extend the understanding of (1) the cryptanalysis of round-reduced AES and (2) AES-like tweakable block ciphers. With its second part, it demonstrates how known approaches can be extended in simple ways to (3) robust nonce-based as well as (4) highly secure deterministic authenticated encryption.
This thesis deals with the use of word embeddings in the automatic analysis of argumentative texts. It discusses important settings of the embedding procedure as well as various ways of applying the embedded word vectors to three tasks of automatic argumentation analysis: text segmentation, argumentativeness classification, and relation identification. My experiments on two standard argumentation data sets show the following main findings: for text segmentation no improvements could be achieved, while for argumentativeness classification and relation identification small gains were observed and further specific research hypotheses could be confirmed. The discussion addresses why simple word embeddings yielded hardly any usable results for argumentation analysis, and how extended word embedding methods may improve these results in the future.
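As a minimal illustration of applying word embeddings to argumentativeness classification, the following Python sketch averages word vectors per sentence and trains a linear classifier; the toy corpus, labels, and hyperparameters are assumptions, not the data sets or settings used in the thesis.

```python
import numpy as np
from gensim.models import Word2Vec
from sklearn.linear_model import LogisticRegression

# Toy corpus and labels standing in for a real argumentation data set.
sentences = [["we", "should", "ban", "cars", "because", "they", "pollute"],
             ["the", "weather", "was", "nice", "yesterday"],
             ["taxes", "must", "rise", "since", "debt", "is", "growing"],
             ["i", "bought", "bread", "this", "morning"]]
labels = [1, 0, 1, 0]  # 1 = argumentative, 0 = non-argumentative

# Train small word embeddings, then represent each sentence as its mean vector.
model = Word2Vec(sentences, vector_size=50, min_count=1, epochs=50, seed=0)
features = np.array([np.mean([model.wv[w] for w in s], axis=0) for s in sentences])

clf = LogisticRegression().fit(features, labels)
print(clf.predict(features))
```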
The need to find persuasive arguments can arise in a variety of domains such as politics, finance, marketing or personal entertainment. In these domains, there is a demand to make decisions by oneself or to convince somebody about a specific topic. To reach a conclusion, one has to search thoroughly through different sources in the literature and on the web in order to compare various arguments. Voice interfaces, in the form of smartphone applications or smart speakers, offer the user natural conversations as a comfortable way to make search requests, in contrast to a traditional search interface with keyboard and display. Benefits and obstacles of such a new interface are analyzed by conducting two studies. The first is a survey analyzing the target group, with questions about usage situations, motivations, and desired features. The second is a Wizard-of-Oz experiment investigating how users formulate requests to such a novel system. The results indicate that a search interface with conversational abilities can make a helpful assistant, but that some additional information retrieval and visualization features need to be implemented to satisfy the demands of a broader audience.
In the Space Syntax community, the standard tool for computing all kinds of spatial graph network measures is depthmapX (Turner, 2004; Varoudis, 2012). The process of evaluating many design variants of networks is relatively complicated, since they need to be drawn in a separate CAD system, exported, and imported into depthmapX via the dxf file format. This procedure prevents continuous integration into a design process. Furthermore, the standalone character of depthmapX makes it impossible to use its network centrality calculation for optimization processes. To overcome these limitations, we present in this paper the first steps of experimenting with a Grasshopper component (reference omitted until final version) that can access the functions of depthmapX and integrate them into Grasshopper/Rhino3D. The component is implemented in such a way that it can be used directly by an evolutionary algorithm (EA) implemented in a Python scripting component in Grasshopper.
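A minimal Python sketch of such an optimization loop is given below; networkx stands in for the depthmapX centrality measures and the EA is reduced to selection plus resampling, so none of this reflects the actual component's API or the Grasshopper environment.

```python
import random
import networkx as nx

rng = random.Random(0)

def random_network(n_nodes=12, n_edges=16):
    """Generate a random candidate network (placeholder for a design variant)."""
    return nx.gnm_random_graph(n_nodes, n_edges, seed=rng.randint(0, 10**6))

def fitness(graph, target=0.5):
    """Score a candidate by how close its mean closeness centrality is to a
    target value (networkx stands in for the depthmapX measures)."""
    if not nx.is_connected(graph):
        return float("-inf")
    centrality = nx.closeness_centrality(graph)
    return -abs(sum(centrality.values()) / len(centrality) - target)

# Minimal selection-plus-resampling loop: keep the best candidates each round.
population = [random_network() for _ in range(12)]
for _ in range(10):
    population.sort(key=fitness, reverse=True)
    population = population[:6] + [random_network() for _ in range(6)]

print("best mean closeness deviation:", -fitness(population[0]))
```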
Based on the description of a conceptual framework for the representation of planning problems on various scales, we introduce an evolutionary design optimization system. This system is exemplified by means of the generation of street networks with locally defined properties for centrality. We show three different scenarios for planning requirements and evaluate the resulting structures with respect to the requirements of our framework. Finally, the potentials and challenges of the presented approach are discussed in detail.
When working on urban planning projects there are usually multiple aspects to consider. Often these aspects are contradictory and it is not possible to choose one over the other; instead, each needs to be fulfilled as well as possible. Planners typically draw on past experience when subjectively prioritising which aspects to consider, and with which degree of importance, for their planning concepts. This practice, although understandable, places power and authority in the hands of people with varying degrees of expertise, which means that the best possible solution is not always found, because it is either not sought or the problem is regarded as too complex for human capabilities. To improve this situation, the project presented here shows the potential of multi-criteria optimisation algorithms using the example of a new housing layout for an urban block. In addition, it is shown how Self-Organizing Maps can be used to visualise multi-dimensional solution spaces in an easily analysable and comprehensible form.
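The following sketch illustrates how a self-organizing map can project a multi-dimensional solution set onto a 2D grid; it assumes the MiniSom library and random stand-in data rather than the project's actual solutions and tooling.

```python
import numpy as np
from minisom import MiniSom

# Stand-in for a multi-criteria solution set: each row is one design variant
# described by several (normalised) performance criteria.
rng = np.random.default_rng(0)
solutions = rng.random((200, 5))

# Train a 10x10 self-organizing map and place each solution on its best-matching cell.
som = MiniSom(10, 10, input_len=5, sigma=1.0, learning_rate=0.5, random_seed=0)
som.train_random(solutions, num_iteration=1000)

grid = np.zeros((10, 10), dtype=int)
for s in solutions:
    i, j = som.winner(s)
    grid[i, j] += 1  # how many design variants map to each cell

print(grid)  # a coarse 2D "map" of the 5-dimensional solution space
```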
It is not uncommon that analysis and simulation methods are used mainly to evaluate finished designs and to prove their quality, whereas their real potential lies in leading or controlling a design process from the very beginning. We therefore introduce a design method that moves away from a “what-if” forecasting philosophy and increases the focus on backcasting approaches. We use the power of computation by combining sophisticated methods for generating designs with analysis methods in order to close the gap between analysis and synthesis of designs. For the development of a future-oriented computational design support we need to be aware of the human designer’s role: a productive combination of the excellence of human cognition with the power of modern computing technology is needed. We call this approach “cognitive design computing”. The computational part aims to mimic the way a designer’s brain works by combining state-of-the-art optimization and machine learning approaches with available simulation methods. The cognition part respects the complex nature of design problems by providing models for human-computation interaction, which means that a design problem is distributed between computer and designer. In the context of the conference slogan “back to command”, we ask how we may imagine the command over a cognitive design computing system. We expect that designers will need to cede control of some parts of the design process to machines, but in exchange they will gain powerful new command over complex computing processes. This means that designers have to explore their potential role as commanders of partially automated design processes. In this contribution we describe an approach for the development of a future cognitive design computing system with a focus on urban design issues. The aim of this system is to enable an urban planner to treat a planning problem as a backcasting problem by defining what performance a design solution should achieve, and to automatically query or generate a set of best possible solutions. This kind of computational planning process offers proof that the designer meets the original, explicitly defined design requirements. A key way in which digital tools can support designers is by generating design proposals. Evolutionary multi-criteria optimization methods allow us to explore a multi-dimensional design space and provide a basis for the designer to evaluate contradicting requirements, a task urban planners face frequently. We also reflect on why designers will give more and more control to machines. To this end, we investigate first approaches to learning, by means of machine learning methods, how designers use computational design support systems in combination with manual design strategies to deal with urban design problems. By observing how designers work, it is possible to derive more complex artificial solution strategies that can help computers make better suggestions in the future.
The structure and development of cities can be seen and evaluated from different points of view. By replicating the growth or shrinkage of a city using historical maps depicting different time states, we can obtain momentary snapshots of the dynamic mechanisms of the city. An examination of how these snapshots change over the course of time and a comparison of the different static time states reveal the various interdependencies of population density, technical infrastructure and the availability of public transport facilities. Urban infrastructure and facilities are not distributed evenly across the city – rather they are subject to different patterns and speeds of spread over the course of time and follow different spatial and temporal regularities. The reasons and underlying processes that cause the transition from one state to another result from the same recurring but varyingly pronounced hidden forces and their complex interactions. Such forces encompass a variety of economic, social, cultural and ecological conditions whose respective weighting defines the development of a city in general. Urban development is, however, not solely a product of the different spatial distribution of economic, legal or social indicators but also of the distribution of infrastructure. But to what extent is the development of a city affected by the changing provision of infrastructure? As
How do particular structural formations arise in cities, and which forces play a role in this process? To which elements can these phenomena be reduced in order to find the respective rules of combination? How must general principles be formulated so that urban processes can be described and different structural qualities produced? With the aid of mathematical methods, computer models based on four basic levels are generated, through which the connections between the elements and the rules of their interaction can be examined. From these, conclusions about the functioning of development processes and the further genesis of the city can be derived.
Previous models for the explanation of settlement processes pay little attention to the interactions between settlement spreading and road networks. On the basis of a dielectric breakdown model in combination with cellular automata, we present a method to precisely steer the generation of settlement structures with regard to their global and local density as well as the size and number of forming clusters. The resulting structures depend on how the interdependence of the settlements and the road network is implemented in the simulation model. After analysing the state of the art, we begin with a discussion of the mutual dependence of roads and land development. Next, we elaborate a model that permits precise control of the permeability of the developing structure as well as the settlement density, using the fewest necessary control parameters. On the basis of different characteristic values, possible settlement structures are analysed and compared with each other. Finally, we reflect on the theoretical contribution of the model in the context of urban dynamics.
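A toy Python sketch of the growth logic is given below: empty cells adjacent to developed cells are occupied with probability proportional to a power of their developed-neighbour count, which is one simple way to steer cluster compactness. The grid size, exponent, and neighbourhood rule are illustrative assumptions and do not reproduce the paper's model or its road-network coupling.

```python
import numpy as np

def grow_settlement(size=64, steps=600, eta=2.0, seed=0):
    """Toy cellular-automaton growth in the spirit of a dielectric breakdown
    model: empty cells next to developed cells are occupied with probability
    proportional to (number of developed neighbours)**eta. Higher eta yields
    more compact clusters, lower eta more dispersed structures."""
    rng = np.random.default_rng(seed)
    grid = np.zeros((size, size), dtype=bool)
    grid[size // 2, size // 2] = True  # seed cell, e.g. a historic centre
    for _ in range(steps):
        neighbours = sum(np.roll(np.roll(grid, dx, 0), dy, 1)
                         for dx, dy in [(1, 0), (-1, 0), (0, 1), (0, -1)])
        candidates = np.argwhere(~grid & (neighbours > 0))
        weights = neighbours[tuple(candidates.T)] ** eta
        chosen = candidates[rng.choice(len(candidates), p=weights / weights.sum())]
        grid[tuple(chosen)] = True
    return grid

print(grow_settlement().sum(), "cells developed")
```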
Some CAAD packages offer additional support for the optimization of spatial configurations, but the possibilities for applying optimization are usually limited either by the complexity of the data model or by the constraints of the underlying CAAD system. Since we lacked a system that allows experimenting with optimization techniques for the synthesis of spatial configurations, we have developed a collection of methods over the past years. This collection is now combined in the presented open source library for computational planning synthesis, called CPlan. The aim of the library is to provide an easy-to-use programming framework with a shallow learning curve for people with basic programming knowledge. It offers an extensible structure that allows new customized parts to be added for various purposes. In this paper the existing functionality of the CPlan library is described.
At the end of the 1960s, architects at various universities worldwide began to explore the potential of computer technology for their profession. With the decline in prices for PCs in the 1990s and the development of various computer-aided architectural design (CAAD) systems, the use of such systems in architectural and planning offices grew continuously. Because today no architectural office manages without a costly CAAD system, and because intensive software training has become an integral part of a university education, the question arises as to what influence the various computer systems have had on the design process that forms the core of architectural practice. The text at hand develops ten theses on why, to this day, computers have not been introduced in a way that yields qualitatively new possibilities for design.