TY - BOOK ED - Maier, Matthias ED - Simon-Ritz, Frank T1 - Alles digital? E-Books in Studium und Forschung : Weimarer EDOC-Tage 2011 N2 - It is an image from days gone by: a student eager for knowledge, in search of sound scholarly information, makes his way to the holiest place of all books, the university library. For some time now, however, students have not only been frequenting libraries but, ever more often, the Internet. There they search for and find digital books, so-called e-books. How can the change brought about by the arrival of the e-book in the established research system be described, what consequences can be drawn from it, and will everything eventually become digital, even the library? An eleven-member team of experts from Germany and Switzerland gets to the bottom of these questions during the two-day conference. The Weimar E-DOC days address the changes in the institutional fabric surrounding the digital book. Traditionally, publishers and libraries have been key components of the supply of knowledge in study and teaching. With the rise of the e-book, however, literature research is shifting more and more to the Internet. The search engine Google has emerged as a new competitor to classical library research. Publishers, too, must increasingly respond to the new challenges of a digital book market. In cooperation with the university library and the master's programme in media management, students, researchers, librarians and publishers discuss how the e-book is changing the way we engage with literature. The conference proceedings compile all perspectives and results for further reading. KW - Elektronisches Buch KW - Studium KW - Lehre KW - Forschung KW - EDOC-Tage Weimar 2011 KW - E-Book KW - Digitaler Buchmarkt Y1 - 2012 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20120223-15699 SN - 978-3-86068-454-2 N1 - The conference took place on 27-28 May 2011 in the Audimax of the Bauhaus-Universität Weimar. PB - Verlag der Bauhaus-Universität CY - Weimar ER - TY - INPR A1 - Abbas, Tajammal A1 - Kavrakov, Igor A1 - Morgenthal, Guido A1 - Lahmer, Tom T1 - Prediction of aeroelastic response of bridge decks using artificial neural networks N2 - The assessment of wind-induced vibrations is considered vital for the design of long-span bridges. The aim of this research is to develop a methodological framework for robust and efficient prediction strategies for complex aerodynamic phenomena using hybrid models that employ numerical analyses as well as meta-models. Here, an approach to predict motion-induced aerodynamic forces is developed using an artificial neural network (ANN). The ANN is implemented in the classical formulation and trained with a comprehensive dataset obtained from computational fluid dynamics forced-vibration simulations. The input to the ANN is the response time histories of a bridge section, whereas the output is the motion-induced forces. The developed ANN has been tested on training and test data of different cross-section geometries, which provide promising predictions. The prediction is also performed for an ambient response input with multiple frequencies. Moreover, the trained ANN for aerodynamic forcing is coupled with the structural model to perform fully coupled fluid-structure interaction analysis to determine the aeroelastic instability limit. The sensitivity of the ANN parameters with respect to model prediction quality and efficiency has also been highlighted. The proposed methodology has wide application in the analysis and design of long-span bridges. 
KW - Aerodynamik KW - Artificial neural network KW - Ingenieurwissenschaften KW - Bridge KW - Bridge aerodynamics KW - Aerodynamic derivatives KW - Motion-induced forces KW - Bridges Y1 - 2020 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20200225-40974 N1 - This is the pre-peer reviewed version of the following article: https://www.sciencedirect.com/science/article/abs/pii/S0045794920300018?via%3Dihub, https://doi.org/10.1016/j.compstruc.2020.106198 ER - TY - JOUR A1 - Abbaspour-Gilandeh, Yousef A1 - Molaee, Amir A1 - Sabzi, Sajad A1 - Nabipour, Narjes A1 - Shamshirband, Shahaboddin A1 - Mosavi, Amir T1 - A Combined Method of Image Processing and Artificial Neural Network for the Identification of 13 Iranian Rice Cultivars JF - agronomy N2 - Due to the importance of identifying crop cultivars, the advancement of accurate assessment of cultivars is considered essential. The existing methods for identifying rice cultivars are mainly time-consuming, costly, and destructive. Therefore, the development of novel methods is highly beneficial. The aim of the present research is to classify common rice cultivars in Iran based on color, morphologic, and texture properties using artificial intelligence (AI) methods. In doing so, digital images of 13 rice cultivars in Iran in three forms of paddy, brown, and white are analyzed through pre-processing and segmentation using MATLAB. Ninety-two features, comprising 60 color, 14 morphologic, and 18 texture properties, were identified for each rice cultivar. In the next step, the normal distribution of the data was evaluated, and the possibility of observing a significant difference between all features of the cultivars was studied using analysis of variance. In addition, the least significant difference (LSD) test was performed to obtain a more accurate comparison between cultivars. To reduce data dimensions and focus on the most effective components, principal component analysis (PCA) was employed. 
Accordingly, the accuracy of rice cultivar separation was calculated for paddy, brown rice, and white rice using discriminant analysis (DA) as 89.2%, 87.7%, and 83.1%, respectively. To identify and classify the desired cultivars, a multilayered perceptron neural network was implemented based on the most effective components. The results showed 100% accuracy of the network in identifying and classifying all mentioned rice cultivars. Hence, it is concluded that the integrated method of image processing and pattern recognition methods, such as statistical classification and artificial neural networks, can be used for the identification and classification of rice cultivars. KW - Maschinelles Lernen KW - Machine learning KW - food informatics KW - big data KW - artificial neural networks KW - artificial intelligence KW - image processing KW - rice Y1 - 2020 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20200123-40695 UR - https://www.mdpi.com/2073-4395/10/1/117 VL - 2020 IS - Volume 10, Issue 1, 117 PB - MDPI ER - TY - JOUR A1 - Ahmadi, Mohammad Hossein A1 - Baghban, Alireza A1 - Sadeghzadeh, Milad A1 - Zamen, Mohammad A1 - Mosavi, Amir A1 - Shamshirband, Shahaboddin A1 - Kumar, Ravinder A1 - Mohammadi-Khanaposhtani, Mohammad T1 - Evaluation of electrical efficiency of photovoltaic thermal solar collector JF - Engineering Applications of Computational Fluid Mechanics N2 - In this study, the machine learning methods of artificial neural networks (ANNs), least-squares support vector machines (LSSVM), and neuro-fuzzy systems are used to advance prediction models for the thermal performance of a photovoltaic-thermal solar collector (PV/T). In the proposed models, the inlet temperature, flow rate, heat, solar radiation, and sun heat have been considered as the input variables. The data set has been extracted through experimental measurements from a novel solar collector system. 
Different analyses are performed to examine the credibility of the introduced models and evaluate their performances. The proposed LSSVM model outperformed the ANFIS and ANN models. The LSSVM model is reported to be suitable when laboratory measurements are costly and time-consuming, or when achieving such values requires sophisticated interpretation. KW - Fotovoltaik KW - Erneuerbare Energien KW - Solar KW - Deep learning KW - Machine learning KW - Renewable energy KW - neural networks (NNs) KW - adaptive neuro-fuzzy inference system (ANFIS) KW - least square support vector machine (LSSVM) KW - photovoltaic-thermal (PV/T) KW - hybrid machine learning model KW - OA-Publikationsfonds2020 Y1 - 2020 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20200304-41049 UR - https://www.tandfonline.com/doi/full/10.1080/19942060.2020.1734094 VL - 2020 IS - volume 14, issue 1 SP - 545 EP - 565 PB - Taylor & Francis ER - TY - THES A1 - Al Khatib, Khalid T1 - Computational Analysis of Argumentation Strategies N2 - The computational analysis of argumentation strategies is essential for many downstream applications. It is required for nearly all kinds of text synthesis, writing assistance, and dialogue-management tools. While various tasks have been tackled in the area of computational argumentation, such as argumentation mining and quality assessment, the task of the computational analysis of argumentation strategies in texts has so far been overlooked. This thesis principally approaches the analysis of the strategies manifested both in persuasive argumentative discourse, which aims at persuasion, and in deliberative argumentative discourse, which aims at consensus. To this end, the thesis presents a novel view of argumentation strategies for the above two goals. 
Based on this view, new models for pragmatic and stylistic argument attributes are proposed, new methods for the identification of the modelled attributes are developed, and a new set of strategy principles in texts according to the identified attributes is presented and explored. Overall, the thesis contributes to the theory, data, method, and evaluation aspects of the analysis of argumentation strategies. The models, methods, and principles developed and explored in this thesis can be regarded as essential for promoting the applications mentioned above, among others. KW - Argumentation KW - Natürliche Sprache KW - Argumentation Strategies KW - Sprachverarbeitung KW - Natural Language Processing KW - Computational Argumentation Y1 - 2021 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20210719-44612 ER - TY - THES A1 - Anderka, Maik T1 - Analyzing and Predicting Quality Flaws in User-generated Content: The Case of Wikipedia N2 - Web applications that are based on user-generated content are often criticized for containing low-quality information; a popular example is the online encyclopedia Wikipedia. The major points of criticism pertain to the accuracy, neutrality, and reliability of information. The identification of low-quality information is an important task, since many people around the world habitually visit Wikipedia first when they need information. Existing research on quality assessment in Wikipedia either investigates only small samples of articles, or else deals with the classification of content into high-quality or low-quality. This thesis goes further: it targets the investigation of quality flaws, thus providing specific indications of the respects in which low-quality content needs improvement. 
The original contributions of this thesis, which relate to the fields of user-generated content analysis, data mining, and machine learning, can be summarized as follows: (1) We propose the investigation of quality flaws in Wikipedia based on user-defined cleanup tags. Cleanup tags are commonly used in the Wikipedia community to tag content that has some shortcomings. Our approach is based on the hypothesis that each cleanup tag defines a particular quality flaw. (2) We provide the first comprehensive breakdown of Wikipedia's quality flaw structure. We present a flaw organization schema, and we conduct an extensive exploratory data analysis which reveals (a) the flaws that actually exist, (b) the distribution of flaws in Wikipedia, and (c) the extent of flawed content. (3) We present the first breakdown of Wikipedia's quality flaw evolution. We consider the entire history of the English Wikipedia from 2001 to 2012, which comprises more than 508 million page revisions, summing up to 7.9 TB. Our analysis reveals (a) how the incidence and the extent of flaws have evolved, and (b) how the handling and the perception of flaws have changed over time. (4) We are the first to operationalize an algorithmic prediction of quality flaws in Wikipedia. We cast quality flaw prediction as a one-class classification problem, develop a tailored quality flaw model, and employ a dedicated one-class machine learning approach. A comprehensive evaluation based on human-labeled Wikipedia articles underlines the practical applicability of our approach. 
KW - Data Mining KW - Machine Learning KW - Wikipedia KW - User-generated Content Analysis KW - Information Quality Assessment KW - Quality Flaw Prediction Y1 - 2013 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20130709-19778 ER - TY - THES A1 - Azari, Banafsheh T1 - Bidirectional Texture Functions: Acquisition, Rendering and Quality Evaluation N2 - As one of its primary objectives, Computer Graphics aims at the simulation of fabrics’ complex reflection behaviour. Characteristic surface reflectance properties of fabrics, such as highlights, anisotropy or retro-reflection, make synthesis difficult. This problem can be addressed using Bidirectional Texture Functions (BTFs), 2D textures captured under various light and view directions. However, the acquisition of Bidirectional Texture Functions requires an expensive setup, and the measurement process is very time-consuming. Moreover, the size of BTF data can range from hundreds of megabytes to several gigabytes, as a large number of high-resolution pictures have to be used in the ideal case. Furthermore, the three-dimensional textured models rendered through the BTF rendering method are subject to various types of distortion during acquisition, synthesis, compression, and processing. An appropriate image quality assessment scheme is a useful tool for evaluating image processing algorithms, especially algorithms designed to leave the image visually unchanged. In this contribution, we present and conduct an investigation aimed at locating a robust threshold for downsampling BTF images without losing perceptual quality. To this end, an experimental study on how decreasing the texture resolution influences the perceived quality of the rendered images is presented and discussed. 
Next, two basic improvements to the use of BTFs for rendering are presented: firstly, the study addresses the cost of BTF acquisition by introducing a flexible low-cost step-motor setup for BTF acquisition that allows a high-quality BTF database to be generated at user-defined arbitrary angles. Secondly, the number of acquired textures is adapted to the perceptual quality of the renderings so that the database does not become oversized and fits better in memory during rendering. Although visual attention is one of the essential attributes of the human visual system (HVS), it is neglected in most existing quality metrics. In this thesis, an objective quality metric called the Visual Attention Based Image Quality Metric (VABIQM) is proposed; it is based on extracting visual attention regions from images and on an investigation of the influence of visual attention on perceived image quality. The novel metric indicates that considering visual saliency can offer significant benefits in constructing objective quality metrics that predict the visible quality differences in images rendered from compressed and non-compressed BTFs, and it also outperforms straightforward existing image quality metrics at detecting perceivable differences. KW - Wahrnehmung KW - Bildqualität KW - BTF Y1 - 2018 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20180820-37790 ER - TY - THES A1 - Beck, Stephan T1 - Immersive Telepresence Systems and Technologies N2 - Modern immersive telepresence systems enable people at different locations to meet in virtual environments using realistic three-dimensional representations of their bodies. For the realization of such a three-dimensional version of a video conferencing system, each user is continuously recorded in 3D. These 3D recordings are exchanged over the network between remote sites. 
At each site, the remote recordings of the users, referred to as 3D video avatars, are seamlessly integrated into a shared virtual scenery and displayed in stereoscopic 3D for each user from his or her perspective. This thesis reports on algorithmic and technical contributions to modern immersive telepresence systems and presents the design, implementation and evaluation of the first immersive group-to-group telepresence system in which each user is represented as a realistic, life-size 3D video avatar. The system enabled two remote user groups to meet and collaborate in a consistent shared virtual environment. The system relied on novel methods for the precise calibration and registration of color and depth sensors (RGBD) into the coordinate system of the application, as well as on an advanced distributed processing pipeline that reconstructs realistic 3D video avatars in real time. During the course of this thesis, the calibration of 3D capturing systems was greatly improved. While the first development focused on precisely calibrating individual RGBD sensors, the second stage presents a new method for calibrating and registering multiple color and depth sensors at very high precision throughout a large 3D capturing volume. This method was further refined by a novel automatic optimization process that significantly speeds up the manual operation and yields similarly high accuracy. A core benefit of the new calibration method is its high runtime efficiency: it maps raw depth sensor measurements directly into the application coordinate system and into the coordinates of the associated color sensor. As a result, the calibration method is an efficient solution in terms of precision and applicability in virtual reality and immersive telepresence applications. 
In addition to the core contributions, the results of two case studies, which address 3D reconstruction and data streaming, lead to the final conclusion of this thesis and to directions of future work in the rapidly advancing field of immersive telepresence research. N2 - In modern 3D telepresence systems, users are represented realistically in three dimensions and can meet in a shared virtual environment. Since the users can see each other realistically, the limitations of conventional two-dimensional video conferencing systems are overcome and new possibilities for collaboration are created. To realize an immersive telepresence system, each user is continuously captured in 3D and reconstructed as a so-called 3D video avatar. The 3D video avatars are exchanged between the remote sites over a network connection, integrated into a shared virtual scene on each side, and displayed to each user in perspectively correct 3D. This work contributes algorithmically and technically to current research in the field of 3D telepresence and presents the design, implementation and evaluation of a new immersive telepresence system. For the first time, groups of users at different locations can meet in a consistent shared virtual environment and see each other as realistic, life-size 3D video avatars. The system is based on newly developed methods that enable the precise calibration and registration of multiple color and depth cameras into a common coordinate system, as well as on a newly developed distributed processing pipeline that enables the realistic reconstruction of 3D video avatars in real time. In the course of this work, the calibration of 3D capturing systems based on multiple color and depth cameras was significantly improved. 
A first development concentrated on the precise calibration and registration of individual depth cameras. A substantial new development makes it possible to volumetrically calibrate multiple color and depth cameras with very high accuracy within a large 3D capturing volume and to register them into a superordinate coordinate system. In the course of the work, the required user interaction was considerably reduced by an automatic optimization procedure, which allows 3D capturing systems to be calibrated within a few minutes at high accuracy. A key advantage of this new volumetric calibration method is that measured depth values are mapped directly into the coordinate system of the application and into the coordinate system of the corresponding color camera. In particular, no lens-undistortion computations are required at application runtime, since these are already compensated implicitly by the volumetric calibration. The immersive telepresence system developed in this work stands out from related work. The virtual meeting space created by the system enables natural forms of interaction, such as gestures and facial expressions, and at the same time offers established interaction techniques of virtual reality that support the joint exploration and analysis of 3D content. The calibration method newly developed in this work represents an efficient solution in terms of accuracy and flexibility for virtual reality and modern 3D telepresence applications. In addition to the developments presented, the results of two case studies in the areas of 3D reconstruction and network transmission contribute to this work and support suggestions and outlooks for future developments in the advancing field of 3D telepresence research. 
KW - Virtuelle Realität KW - Telepräsenz KW - Mensch-Maschine-Kommunikation KW - Tiefensensor KW - Camera Calibration KW - Depth Camera KW - 3D Telepresence KW - Virtual Reality Y1 - 2019 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20190218-38569 ER - TY - THES A1 - Berhe, Asgedom Haile T1 - Mitigating Risks of Corruption in Construction: A theoretical rationale for BIM adoption in Ethiopia N2 - This PhD thesis sets out to investigate the potentials of Building Information Modeling (BIM) to mitigate risks of corruption in the Ethiopian public construction sector. The wide-ranging capabilities and promises of BIM have led to the strong perception among researchers and practitioners that it is an indispensable technology. Consequently, it has become the frequent subject of science and research. Meanwhile, many countries, especially the developed ones, have committed themselves to applying the technology extensively. Increasing productivity is the most common and frequently cited reason for that. However, both technology developers and adopters are oblivious to the potentials of BIM in addressing critical challenges in the construction sector, such as corruption. This would be particularly significant in developing countries like Ethiopia, where the problems and effects of corruption are acute. Studies reveal that bribery and corruption have long pervaded the construction industry worldwide. The complex and fragmented nature of the sector provides an environment for corruption. The Ethiopian construction sector is not immune from this epidemic reality. In fact, it is regarded as one of the most vulnerable sectors owing to varying socio-economic and political factors. Since 2015, Ethiopia has been adopting BIM, yet without clear goals and strategies. As a result, the potential of BIM for combating concrete problems of the sector remains untapped. 
To this end, this dissertation does pioneering work by showing how the collaboration and coordination features of the technology contribute to minimizing the opportunities for corruption; otherwise, tracing loopholes would remain complex and ineffective in traditional documentation processes. Proceeding from this anticipation, this thesis raises two primary questions: what are the areas and risks of corruption in Ethiopian public construction projects, and how could BIM be leveraged to mitigate these risks? To tackle these and other secondary questions, the research employs a mixed-method approach. The selected main research strategies are Survey, Grounded Theory (GT) and Archival Study. First, the author disseminates an online questionnaire among Ethiopian construction engineering professionals to pinpoint areas of vulnerability to corruption. 155 responses are compiled and scrutinized quantitatively. Then, semi-structured in-depth interviews are conducted with 20 senior professionals, primarily to comprehend opportunities for and risks of corruption in the identified highly vulnerable project stages and decision points. At the same time, open interviews (consultations) are held with 14 informants to assess the state of construction documentation, BIM, and loopholes for corruption in the country. Consequently, these qualitative data are analyzed utilizing the principles of GT, heat/risk mapping and Social Network Analysis (SNA). The risk mapping assists the researcher in prioritizing corruption risks, whilst SNA makes it feasible to methodically identify key actors/stakeholders in the corruption venture. Based on the generated research data, the author constructs a [substantive] grounded theory around the elements of corruption in the Ethiopian public construction sector. This theory later guides the subsequent strategic proposition of BIM. 
Finally, 85 public-construction-related cases are also analyzed systematically to substantiate and confirm previous findings. By way of these multiple research endeavors, based first and foremost on the triangulation of qualitative and quantitative data analysis, the author arrives at a number of key findings. First, estimations, tender document preparation and evaluation, and construction material and quality control together with additional work orders are found to be the most vulnerable stages in the design, tendering and construction phases, respectively. Second, middle management personnel of contractors and clients, aided by brokers, play the most critical roles in corrupt transactions within the prevalent corruption network. Third, grand corruption persists in the sector, attributed to the fact that top management and higher officials entertain their overriding power, supported by the lack of project audits and accountability. Contrarily, individuals at the operation level use intentional and unintentional 'errors’ as an opportunity for corruption. In light of these findings, two conceptual BIM-based risk mitigation strategies are prescribed: active and passive automation of project audits; and the monitoring of project information throughout the projects’ value chain. These propositions are made in reliance on BIM’s present dimensional capabilities and the promises of Integrated Project Delivery (IPD). Moreover, BIM’s synergistic potential with other technologies, such as Information and Communication Technology (ICT) and Radio Frequency technologies, is also treated. All these arguments form the basis for the main thesis of this dissertation: that BIM is able to mitigate corruption risks in the Ethiopian public construction sector. The discourse on skepticism about BIM that stems from the complex nature of corruption and from the strategic and technological limitations of BIM is also illuminated and complemented by this work. 
Thus, the thesis uncovers possible research gaps and lays the foundation for further studies. KW - Building Information Modeling KW - Korruption KW - Risikomanagement KW - Äthiopien KW - Corruption KW - BIM KW - Risk management KW - Construction KW - Ethiopia Y1 - 2021 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20211007-45175 ER - TY - JOUR A1 - Bielik, Martin A1 - Schneider, Sven A1 - Kuliga, Saskia A1 - Griego, Danielle A1 - Ojha, Varun A1 - König, Reinhard A1 - Schmitt, Gerhard A1 - Donath, Dirk ED - Resch, Bernd ED - Szell, Michael T1 - Examining Trade-Offs between Social, Psychological, and Energy Potential of Urban Form JF - ISPRS International Journal of Geo-Information N2 - Urban planners are often challenged with the task of developing design solutions which must meet multiple, and often contradictory, criteria. In this paper, we investigated the trade-offs between the social, psychological, and energy potential of the fundamental elements of urban form: the street network and the building massing. Since formal methods to evaluate urban form from the psychological and social point of view are not readily available, we developed a methodological framework to quantify these criteria as the first contribution in this paper. To evaluate the psychological potential, we conducted a three-tiered empirical study starting from real-world environments and then abstracting them to virtual environments. In each context, the implicit (physiological) response and explicit (subjective) response of pedestrians were measured. To quantify the social potential, we developed a street network centrality-based measure of social accessibility. For the energy potential, we created an energy model to analyze the impact of pure geometric form on the energy demand of the building stock. The second contribution of this work is a method to identify distinct clusters of urban form and, for each, explore the trade-offs between the selected design criteria. 
We applied this method to two case studies, identifying nine types of urban form and their respective potential trade-offs, which are directly applicable to the assessment of strategic decisions regarding urban form during the early planning stages. KW - Planung KW - social accessibility KW - urban perception KW - energy demand KW - urban form KW - trade-offs KW - OA-Publikationsfonds2019 Y1 - 2019 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20190408-38695 UR - https://www.mdpi.com/2220-9964/8/2/52 VL - 2019 EP - Volume 8, Issue 2, 52 ER -