000 Computer science, information & general works
Modern immersive telepresence systems enable people at different locations to meet in virtual environments using realistic three-dimensional representations of their bodies. To realize such a three-dimensional counterpart of a video conferencing system, each user is continuously recorded in 3D. These 3D recordings are exchanged over the network between remote sites. At each site, the remote recordings of the users, referred to as 3D video avatars, are seamlessly integrated into a shared virtual scene and displayed in stereoscopic 3D for each user from his or her perspective.
This thesis reports on algorithmic and technical contributions to modern immersive telepresence systems and presents the design, implementation, and evaluation of the first immersive group-to-group telepresence system in which each user is represented as a realistic, life-size 3D video avatar. The system enabled two remote user groups to meet and collaborate in a consistent shared virtual environment. It relied on novel methods for the precise calibration and registration of color and depth (RGBD) sensors into the coordinate system of the application, as well as on an advanced distributed processing pipeline that reconstructs realistic 3D video avatars in real time. During the course of this thesis, the calibration of 3D capturing systems was greatly improved. While the first development stage focused on precisely calibrating individual RGBD sensors, the second stage introduces a new method for calibrating and registering multiple color and depth sensors with very high precision throughout a large 3D capturing volume. This method was further refined by a novel automatic optimization process that significantly speeds up the manual procedure and yields similarly high accuracy. A core benefit of the new calibration method is its high runtime efficiency: raw depth sensor measurements are mapped directly into the application coordinate system and into the coordinates of the associated color sensor. As a result, the calibration method is an efficient solution in terms of both precision and applicability in virtual reality and immersive telepresence applications. In addition to these core contributions, the results of two case studies addressing 3D reconstruction and data streaming lead to the final conclusions of this thesis and to directions for future work in the rapidly advancing field of immersive telepresence research.
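The direct mapping described above can be pictured as a per-sensor lookup volume that converts a raw depth sample straight into application coordinates. The following minimal sketch illustrates that idea with a trilinearly interpolated 3D lookup table; the class name, grid layout, and NumPy-based structure are illustrative assumptions, not the calibration implementation developed in the thesis.

```python
import numpy as np

class CalibrationVolume:
    """Hypothetical per-sensor lookup volume: maps raw depth-sensor samples
    (pixel x, pixel y, raw depth) to 3D points in the application coordinate system."""

    def __init__(self, grid, x_range, y_range, d_range):
        # grid: (NX, NY, ND, 3) array of application-space positions at the grid nodes
        self.grid = grid
        self.ranges = (x_range, y_range, d_range)

    def map_sample(self, px, py, depth):
        # Normalize the raw sample into continuous grid coordinates.
        coords = []
        for value, (lo, hi), n in zip((px, py, depth), self.ranges, self.grid.shape[:3]):
            t = (value - lo) / (hi - lo) * (n - 1)
            coords.append(np.clip(t, 0, n - 1))
        i0 = np.floor(coords).astype(int)
        i1 = np.minimum(i0 + 1, np.array(self.grid.shape[:3]) - 1)
        f = np.array(coords) - i0
        # Trilinear interpolation over the 8 surrounding grid nodes.
        result = np.zeros(3)
        for dx in (0, 1):
            for dy in (0, 1):
                for dz in (0, 1):
                    w = ((1 - f[0]) if dx == 0 else f[0]) * \
                        ((1 - f[1]) if dy == 0 else f[1]) * \
                        ((1 - f[2]) if dz == 0 else f[2])
                    idx = (i1[0] if dx else i0[0],
                           i1[1] if dy else i0[1],
                           i1[2] if dz else i0[2])
                    result += w * self.grid[idx]
        return result

# Illustrative usage with a trivially small 2x2x2 volume and assumed sensor ranges.
grid = np.random.rand(2, 2, 2, 3)
vol = CalibrationVolume(grid, x_range=(0, 640), y_range=(0, 480), d_range=(500, 4000))
print(vol.map_sample(320, 240, 1200))  # interpolated point in application coordinates
```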
As one of its primary objectives, computer graphics aims at simulating the complex reflection behaviour of fabrics. Characteristic surface reflectance effects of fabrics, such as highlights, anisotropy, or retro-reflection, make them difficult to synthesize. This problem can be addressed with Bidirectional Texture Functions (BTFs), which represent a 2D texture under varying light and view directions. However, the acquisition of Bidirectional Texture Functions requires an expensive setup, and the measurement process is very time-consuming. Moreover, BTF data sets can range from hundreds of megabytes to several gigabytes in size, since ideally a large number of high-resolution images has to be used. Furthermore, three-dimensional textured models rendered with BTFs are subject to various types of distortion during acquisition, synthesis, compression, and processing. An appropriate image quality assessment scheme is a useful tool for evaluating image processing algorithms, especially algorithms designed to leave the image visually unchanged. In this contribution, we present an investigation aimed at locating a robust threshold for downsampling BTF images without losing perceptual quality. To this end, an experimental study of how decreasing the texture resolution influences the perceived quality of the rendered images is presented and discussed.
Next, two basic improvements to the use of BTFs for rendering are presented: first, the cost of BTF acquisition is addressed by introducing a flexible, low-cost step-motor setup that allows a high-quality BTF database to be captured at arbitrary, user-defined angles. Second, the number of acquired textures is adapted to the perceptual quality of the renderings, so that the database does not grow excessively large and fits better into memory during rendering.
Although visual attention is one of the essential attributes of the human visual system (HVS), it is neglected in most existing quality metrics. This thesis proposes an objective quality metric, the Visual Attention Based Image Quality Metric (VABIQM), which extracts visual attention regions from images and investigates the influence of visual attention on perceived image quality. The results indicate that considering visual saliency offers significant benefits when constructing objective quality metrics for predicting visible quality differences between images rendered from compressed and uncompressed BTFs, and that the novel metric outperforms straightforward existing image quality metrics at detecting perceivable differences.
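The general idea behind such a saliency-weighted metric can be sketched as follows: per-pixel distortions are weighted by a visual-attention (saliency) map before being pooled into a single score. The simple squared-error distortion and the function below are illustrative assumptions and not the actual VABIQM formulation.

```python
import numpy as np

def saliency_weighted_quality(reference, distorted, saliency, eps=1e-8):
    """Pool per-pixel distortion with a saliency map so that errors in
    visually attended regions count more than errors elsewhere.

    reference, distorted : float arrays in [0, 1], same shape (H, W)
    saliency             : non-negative attention map, same shape (H, W)
    Returns a score in (0, 1]; higher means the distorted image is closer
    to the reference in the salient regions.
    """
    error = (reference - distorted) ** 2          # per-pixel squared error
    weights = saliency / (saliency.sum() + eps)   # normalize the attention map
    weighted_mse = np.sum(weights * error)        # saliency-weighted pooling
    return 1.0 / (1.0 + weighted_mse)             # map to a bounded quality score

# Illustrative usage with random data standing in for rendered BTF images.
rng = np.random.default_rng(0)
ref = rng.random((64, 64))
dist = np.clip(ref + rng.normal(0, 0.05, ref.shape), 0, 1)
sal = rng.random((64, 64))
print(saliency_weighted_quality(ref, dist, sal))
```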
Web applications that are based on user-generated content are often criticized for containing low-quality information; a popular example is the online encyclopedia Wikipedia. The major points of criticism pertain to the accuracy, neutrality, and reliability of information. The identification of low-quality information is an important task, since for a huge number of people around the world it has become a habit to consult Wikipedia first whenever they need information. Existing research on quality assessment in Wikipedia either investigates only small samples of articles or deals with the classification of content into high quality or low quality. This thesis goes further: it targets the investigation of quality flaws, thus providing specific indications of the respects in which low-quality content needs improvement. The original contributions of this thesis, which relate to the fields of user-generated content analysis, data mining, and machine learning, can be summarized as follows:
(1) We propose the investigation of quality flaws in Wikipedia based on user-defined cleanup tags. Cleanup tags are commonly used in the Wikipedia community to tag content that has some shortcomings. Our approach is based on the hypothesis that each cleanup tag defines a particular quality flaw.
(2) We provide the first comprehensive breakdown of Wikipedia's quality flaw structure. We present a flaw organization schema, and we conduct an extensive exploratory data analysis which reveals (a) the flaws that actually exist, (b) the distribution of flaws in Wikipedia, and (c) the extent of flawed content.
(3) We present the first breakdown of Wikipedia's quality flaw evolution. We consider the entire history of the English Wikipedia from 2001 to 2012, which comprises more than 508 million page revisions, summing up to 7.9 TB. Our analysis reveals (a) how the incidence and the extent of flaws have evolved, and (b) how the handling and the perception of flaws have changed over time.
(4) We are the first to operationalize the algorithmic prediction of quality flaws in Wikipedia. We cast quality flaw prediction as a one-class classification problem, develop a tailored quality flaw model, and employ a dedicated one-class machine learning approach. A comprehensive evaluation based on human-labeled Wikipedia articles underlines the practical applicability of our approach.
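To make the one-class setup concrete, the sketch below trains a one-class classifier on simple feature vectors extracted from articles tagged with a particular cleanup tag and then flags unseen articles that resemble the flawed ones. The toy features and the use of scikit-learn's OneClassSVM are assumptions made for illustration; they stand in for the tailored flaw model and the dedicated one-class learner developed in the thesis.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.svm import OneClassSVM

def article_features(text):
    """Toy feature vector for an article: length, number of references,
    number of external links, and average sentence length (assumed features)."""
    sentences = [s for s in text.split('.') if s.strip()]
    return np.array([
        len(text),
        text.count('<ref'),
        text.count('http'),
        len(text) / max(len(sentences), 1),
    ], dtype=float)

# Articles tagged with one particular cleanup tag (the "flawed" target class).
flawed_articles = ["Some unsourced claim ...", "Another tagged article ..."]
X_flawed = np.array([article_features(t) for t in flawed_articles])

scaler = StandardScaler().fit(X_flawed)
clf = OneClassSVM(kernel="rbf", nu=0.1, gamma="scale").fit(scaler.transform(X_flawed))

# Predict: +1 means "resembles the flawed class", -1 means "does not".
unseen = np.array([article_features("A well written article with <ref> citations. " * 5)])
print(clf.predict(scaler.transform(unseen)))
```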
The computational analysis of argumentation strategies is essential for many downstream applications. It is required for nearly all kinds of text synthesis, writing assistance, and dialogue-management tools. While various tasks have been tackled in the area of computational argumentation, such as argumentation mining and quality assessment, the task of computationally analyzing argumentation strategies in texts has so far been overlooked.
This thesis principally approaches the analysis of the strategies manifested in argumentative discourse that aims for persuasion as well as in deliberative argumentative discourse that aims for consensus. To this end, the thesis presents a novel view of argumentation strategies for these two goals. Based on this view, new models for pragmatic and stylistic argument attributes are proposed, new methods for the identification of the modelled attributes are developed, and a new set of strategy principles in texts, based on the identified attributes, is presented and explored.
Overall, the thesis contributes to the theory, data, method, and evaluation aspects of the analysis of argumentation strategies. The models, methods, and principles developed and explored in this thesis can be regarded as essential for promoting the applications mentioned above, among others.
In this study, the machine learning methods of artificial neural networks (ANNs), least-squares support vector machines (LSSVM), and adaptive neuro-fuzzy inference systems (ANFIS) are used to advance prediction models for the thermal performance of a photovoltaic-thermal solar collector (PV/T). In the proposed models, the inlet temperature, flow rate, heat, solar radiation, and sun heat are considered as input variables. The data set was obtained through experimental measurements on a novel solar collector system. Different analyses are performed to examine the credibility of the introduced models and to evaluate their performance. The proposed LSSVM model outperformed the ANFIS and ANN models. The LSSVM model is reported to be suitable when laboratory measurements are costly and time-consuming, or when obtaining such values requires sophisticated interpretation.
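As a rough illustration of this kind of regression model, the sketch below fits a kernel-based regressor to the five named inputs and predicts a thermal-performance value. Kernel ridge regression is used here as a stand-in for LSSVM (the two are closely related for regression tasks), and the synthetic data and variable layout are assumptions, not the experimental measurements used in the study.

```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in data: columns are inlet temperature, flow rate, heat,
# solar radiation, and sun heat; the target is a thermal-performance value.
rng = np.random.default_rng(42)
X = rng.random((200, 5))
y = 0.6 * X[:, 0] + 0.3 * X[:, 3] + 0.1 * rng.normal(size=200)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
scaler = StandardScaler().fit(X_train)

# RBF-kernel ridge regression as an LSSVM-like model.
model = KernelRidge(kernel="rbf", alpha=1e-2, gamma=0.5)
model.fit(scaler.transform(X_train), y_train)
print("R^2 on held-out data:", model.score(scaler.transform(X_test), y_test))
```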
Due to the importance of identifying crop cultivars, the development of accurate methods for assessing cultivars is considered essential. The existing methods for identifying rice cultivars are mainly time-consuming, costly, and destructive; the development of novel methods is therefore highly beneficial. The aim of the present research is to classify common rice cultivars in Iran based on color, morphologic, and texture properties using artificial intelligence (AI) methods. To this end, digital images of 13 Iranian rice cultivars in three forms (paddy, brown, and white) are analyzed through pre-processing and segmentation using MATLAB. Ninety-two features, comprising 60 color, 14 morphologic, and 18 texture properties, were extracted for each rice cultivar. In the next step, the normality of the data was evaluated, and the presence of significant differences between the cultivars across all features was studied using analysis of variance. In addition, the least significant difference (LSD) test was performed to obtain a more accurate comparison between cultivars. To reduce the data dimensionality and focus on the most effective components, principal component analysis (PCA) was employed. On this basis, the accuracy of cultivar separation was calculated using discriminant analysis (DA), yielding 89.2%, 87.7%, and 83.1% for paddy, brown rice, and white rice, respectively. To identify and classify the cultivars, a multilayer perceptron neural network was implemented on the most effective components. The results showed 100% accuracy of the network in identifying and classifying all of the mentioned rice cultivars. Hence, it is concluded that combining image processing with pattern recognition methods, such as statistical classification and artificial neural networks, can be used for the identification and classification of rice cultivars.
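A compact sketch of such a pipeline is given below: standardized feature vectors are reduced with PCA and classified with a multilayer perceptron. The synthetic data and the scikit-learn components are illustrative assumptions standing in for the 92 measured color, morphologic, and texture properties and the network described in the study.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for the measured data: 92 features per sample, 13 cultivars.
rng = np.random.default_rng(1)
X = rng.random((13 * 40, 92))
y = np.repeat(np.arange(13), 40)

X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)

# Standardize, keep the principal components covering 95% of the variance,
# then classify with a multilayer perceptron.
pipeline = make_pipeline(
    StandardScaler(),
    PCA(n_components=0.95),
    MLPClassifier(hidden_layer_sizes=(64,), max_iter=1000, random_state=0),
)
pipeline.fit(X_train, y_train)
print("Test accuracy:", pipeline.score(X_test, y_test))
```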
The assessment of wind-induced vibrations is considered vital for the design of long-span bridges. The aim of this research is to develop a methodological framework for robust and efficient prediction strategies for complex aerodynamic phenomena using hybrid models that employ numerical analyses as well as meta-models. Here, an approach for predicting motion-induced aerodynamic forces is developed using an artificial neural network (ANN). The ANN is implemented in the classical formulation and trained with a comprehensive dataset obtained from computational fluid dynamics forced-vibration simulations. The inputs to the ANN are the response time histories of a bridge section, and the outputs are the motion-induced forces. The developed ANN has been tested on training and test data for different cross-section geometries and provides promising predictions. Predictions are also performed for an ambient response input containing multiple frequencies. Moreover, the trained ANN for the aerodynamic forcing is coupled with the structural model to perform fully coupled fluid-structure interaction analyses that determine the aeroelastic instability limit. The sensitivity of the model prediction quality and efficiency to the ANN parameters is also highlighted. The proposed methodology has wide application in the analysis and design of long-span bridges.
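The basic input-output setup can be sketched as follows: short windows of a bridge section's response time histories (here, heave and pitch) are fed to a feed-forward network that predicts the corresponding motion-induced force components (lift and moment). The window length, the synthetic signals, and the use of scikit-learn's MLPRegressor are assumptions made for illustration only.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(7)

# Synthetic response time histories: heave h(t) and pitch a(t) of a bridge section.
t = np.linspace(0, 20, 4000)
h = np.sin(2 * np.pi * 0.5 * t) + 0.1 * rng.normal(size=t.size)
a = 0.05 * np.sin(2 * np.pi * 0.8 * t + 0.3)

# Synthetic "motion-induced" lift and moment, used only to create training targets.
lift = 1.5 * h + 0.8 * np.gradient(h, t)
moment = 0.4 * a + 0.2 * np.gradient(a, t)

window = 32  # samples of response history fed to the network per prediction
X, Y = [], []
for i in range(window, t.size):
    X.append(np.concatenate([h[i - window:i], a[i - window:i]]))
    Y.append([lift[i], moment[i]])
X, Y = np.array(X), np.array(Y)

# Feed-forward network mapping response-history windows to force components.
net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500, random_state=0)
net.fit(X[:3000], Y[:3000])
print("Held-out R^2:", net.score(X[3000:], Y[3000:]))
```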
It is a picture from days gone by: a student eager for knowledge, in search of sound scholarly information, heads for the most sacred place of all books, the university library. For some time now, however, students have no longer gathered only in libraries but increasingly also on the Internet. There they search for and find digital books, so-called e-books.
How can the change brought about by the e-book's arrival in the established research system be described, what consequences follow from it, and will everything ultimately become digital, even the library? An eleven-member team of experts from Germany and Switzerland gets to the bottom of these questions during the two-day conference.
The Weimarer E-DOC-Tage address the changes in the institutional fabric surrounding the digital book. Traditionally, publishers and libraries have been key pillars of the supply of knowledge for study and teaching. With the rise of the e-book, however, research is shifting more and more to the Internet. The search engine Google has emerged as a new competitor to classical library research. Publishers, too, must increasingly respond to the new challenges of a digital book market.
In cooperation with the University Library and the Master's programme Medienmanagement, students, researchers, librarians, and publishers discuss how the e-book is changing the way we engage with literature. The conference proceedings compile all perspectives and results for future reference.