000 Computer Science, Knowledge, Systems
Refine
Document Type
- Conference Proceeding (147)
- Article (15)
- Doctoral Thesis (4)
- Part of a Book (2)
- Study Thesis (1)
Institute
- In Zusammenarbeit mit der Bauhaus-Universität Weimar (86)
- Graduiertenkolleg 1462 (34)
- Institut für Strukturmechanik (ISM) (16)
- Professur Angewandte Mathematik (13)
- Professur Informatik im Bauwesen (5)
- Institut für Konstruktiven Ingenieurbau (IKI) (4)
- Professur Stochastik und Optimierung (3)
- Geschichte und Theorie der Visuellen Kommunikation (2)
- Professur Computer Vision in Engineering (2)
- Professur Stahlbau (2)
Keywords
- Angewandte Informatik (146)
- Angewandte Mathematik (146)
- Computerunterstütztes Verfahren (145)
- Architektur <Informatik> (74)
- Computer Science Models in Engineering; Multiscale and Multiphysical Models; Scientific Computing (74)
- Building Information Modeling (35)
- Data, information and knowledge modeling in civil engineering; Function theoretic methods and PDE in engineering sciences; Mathematical methods for (robotics and) computer vision; Numerical modeling in engineering; Optimization in engineering applications (34)
- Maschinelles Lernen (3)
- Finite-Elemente-Methode (2)
- Massendaten (2)
Search engines are very good at answering queries that look for facts. Still, information needs that concern forming opinions on a controversial topic or making a decision remain a challenge for search engines. Since they are optimized to retrieve satisfying answers, search engines might emphasize a specific stance on a controversial topic in their ranking, amplifying bias in society in an undesired way. Argument retrieval systems support users in forming opinions about controversial topics by retrieving arguments for a given query. In this thesis, we address challenges in argument retrieval systems that concern integrating them in search engines, developing generalizable argument mining approaches, and enabling frame-guided delivery of arguments.
Adapting argument retrieval systems to search engines should start by identifying and analyzing information needs that look for arguments. To identify questions that look for arguments, we develop a two-step annotation scheme that first determines whether the context of a question is controversial and, if so, assigns the question one of three types: factual, method, or argumentative. Using this annotation scheme, we create a question dataset from the logs of a major search engine and use it to analyze the characteristics of argumentative questions. The analysis shows that the proportion of argumentative questions on controversial topics is substantial and that they mainly ask for reasons and predictions. The dataset is further used to develop a classifier that uniquely maps questions to the question types, reaching a convincing F1-score of 0.78.
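As a brief illustrative aside (not the pipeline or data used in the thesis), a minimal question-type classifier along these lines could look as follows; the tiny in-line dataset and the TF-IDF plus logistic-regression setup are assumptions made purely for demonstration.

```python
# Minimal sketch of a question-type classifier (factual / method / argumentative).
# Illustrative only: the example questions, labels, and model choice are assumptions,
# not the classifier or training data described in the thesis.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

questions = [
    "when was the death penalty abolished in france",   # factual
    "how do i register to vote by mail",                 # method
    "why should the death penalty be banned",            # argumentative
    "will electric cars replace gas cars",                # argumentative
]
labels = ["factual", "method", "argumentative", "argumentative"]

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                    LogisticRegression(max_iter=1000))
clf.fit(questions, labels)

print(clf.predict(["is nuclear energy good for the environment"]))
```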
While the web offers an invaluable source of argumentative content to respond to argumentative questions, it is characterized by multiple genres (e.g., news articles and social fora). Exploiting the web as a source of arguments relies on developing argument mining approaches that generalize over genre. To this end, we approach the problem of how to extract argument units in a genre-robust way. Our experiments on argument unit segmentation show that transfer across genres is rather hard to achieve using existing sequence-to-sequence models.
Another property of text which argument mining approaches should generalize over is topic. Since new topics appear daily on which argument mining approaches are not trained, argument mining approaches should be developed in a topic-generalizable way. Towards this goal, we analyze the coverage of 31 argument corpora across topics using three topic ontologies. The analysis shows that the topics covered by existing argument corpora are biased toward a small subset of easily accessible controversial topics, hinting at the inability of existing approaches to generalize across topics. In addition to corpus construction standards, fostering topic generalizability requires a careful formulation of argument mining tasks. Same side stance classification is a reformulation of stance classification that makes it less dependent on the topic. First experiments on this task show promising results in generalizing across topics.
To be effective at persuading their audience, users of an argument retrieval system should select arguments from the retrieved results based on which frame of the controversial topic they emphasize. An open challenge is to develop an approach to identify the frames of an argument. To this end, we define a frame as a subset of arguments that share an aspect. We operationalize this model via an approach that identifies and removes the topic of arguments before clustering them into frames. We evaluate the approach on a dataset that covers 12,326 frames and show that identifying the topic of an argument and removing it helps to identify its frames.
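A toy sketch of the general idea of "remove the topic, then cluster into frames" is given below. The deletion of topic words from the raw texts and the use of KMeans are simplifying assumptions for illustration; they are not the operationalization evaluated in the thesis.

```python
# Toy sketch: strip the topic words from each argument, then cluster the rest into frames.
import re
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

topic = "nuclear energy"
arguments = [
    "nuclear energy keeps electricity prices low",
    "nuclear energy plants are expensive to build",
    "nuclear energy produces dangerous radioactive waste",
    "nuclear energy accidents endanger public health",
]

def remove_topic(text: str, topic: str) -> str:
    # Delete the topic words so that clustering is driven by the remaining aspects.
    pattern = r"\b(" + "|".join(map(re.escape, topic.split())) + r")\b"
    return re.sub(pattern, " ", text)

vectors = TfidfVectorizer().fit_transform([remove_topic(a, topic) for a in arguments])
frames = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)
print(list(zip(arguments, frames)))  # e.g. an "economics" frame vs. a "health/safety" frame
```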
In this paper we propose a novel and efficient rasterization-based approach for direct rendering of isosurfaces. Our method exploits the capabilities of task and mesh shader pipelines to identify subvolumes containing potentially visible isosurface geometry, and to efficiently extract primitives which are consumed on the fly by the rasterizer. As a result, our approach requires little preprocessing and negligible additional memory. Direct isosurface rasterization is competitive in terms of rendering performance when compared with ray-marching-based approaches, and significantly outperforms them for increasing resolution in most situations. Since our approach is entirely rasterization based, it affords straightforward integration into existing rendering pipelines, while allowing the use of modern graphics hardware features, such as multi-view stereo for efficient rendering of stereoscopic image pairs for geometry-bound applications. Direct isosurface rasterization is suitable for applications where isosurface geometry is highly variable, such as interactive analysis scenarios for static and dynamic data sets that require frequent isovalue adjustment.
Die Form der Datenbank
(2023)
Databases are today the most important technique for organizing and processing data. How did they become one of the most ubiquitous and at the same time most invisible practices enabling human cooperation? This study begins with a historiographical exploration of the central media concepts of databases and culminates in the praxeological concept of "data as formation", in short: in-formation.
The first main part deals with the formatting of data through the processing of structured data by means of relational algebra. It works out how structure creates new knowledge. The second part discusses how databases are operationalized through the diagrammatic-epistemic space of the table. Third, the study examines transactions as explanations of how data and real-world actions can be coordinated and synchronized.
The second main part examines how relational databases increasingly became the centre of software applications and infrastructures, with a focus on economic practices. Using case studies from the GDR of the 1970s to the 1990s, a comparative approach asks whether a "socialist" database management software ever existed. The "Western" production databases BOMP, COPICS, and MAPICS (IBM) as well as R2 (SAP) are discussed in their interplay with the East German subject-area-oriented programming systems (Sachgebietsorientierte Programmiersysteme, SOPS) by Robotron. Finally, this part examines how the GDR developed its own relational database management system, DABA 1600, reinterpreting "Western" technology in the process.
The concluding chapter summarizes the concepts of relational databases as today's most important technique of data organization. It discusses to what extent it is possible to decentre the historiographical narrative about the emergence of database management systems and its consequences for the history of computing. It closes with the insight that Eastern and Western media of cooperation are astonishingly similar in form and function, both rooted in the deep genealogies of organizational and knowledge-building data practices.
In addition to this media-studies work, the dissertation comprises a documented artistic part: in a series of vlogs, the fictional character "Data Proxy" explores current data ecologies.
For this paper, the problem of energy/voltage management in photovoltaic (PV)/battery systems was studied, and a new fractional-order control system on the basis of type-3 (T3) fuzzy logic systems (FLSs) was developed. New fractional-order learning rules were derived for tuning the T3-FLSs such that stability is ensured. In addition, using fractional-order calculus, robustness was studied with respect to dynamic uncertainties, perturbations of irradiation and temperature, and abrupt faults in the output loads, and, subsequently, new compensators were proposed. The capability of the proposed controller was verified in several examinations under difficult operating conditions, such as random temperature, variable irradiation, and abrupt changes in the output load. In addition, the superiority of the suggested method was demonstrated in comparison with other methods, such as the proportional-integral-derivative (PID) controller, sliding mode controller (SMC), passivity-based control systems (PBC), and linear quadratic regulator (LQR).
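As background to the fractional-order calculus used here, a minimal numerical sketch of a Grünwald–Letnikov fractional derivative is shown below. This is a standard textbook discretization, not the paper's specific learning rules or compensators; the helper name, step size, and order are assumptions for illustration.

```python
# Grünwald-Letnikov approximation of the fractional derivative D^alpha f(t).
# Background illustration only; not the controller derived in the paper.
import numpy as np

def gl_fractional_derivative(f, t, alpha, h=1e-3):
    """Approximate D^alpha f at time t using step size h."""
    n = int(t / h)
    c = np.empty(n + 1)
    c[0] = 1.0
    for j in range(1, n + 1):              # recursive GL binomial coefficients
        c[j] = c[j - 1] * (1.0 - (alpha + 1.0) / j)
    samples = f(t - h * np.arange(n + 1))   # f(t), f(t-h), ..., f(t-n*h)
    return h ** (-alpha) * np.dot(c, samples)

# Sanity check: the 0.5-order derivative of f(t) = t is 2*sqrt(t/pi).
t = 1.0
print(gl_fractional_derivative(lambda x: x, t, alpha=0.5), 2.0 * np.sqrt(t / np.pi))
```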
Piping erosion is one form of water erosion that leads to significant changes in the landscape and to environmental degradation. In the present study, we evaluated piping erosion modeling in the Zarandieh watershed of Markazi province in Iran based on random forest (RF), support vector machine (SVM), and Bayesian generalized linear model (Bayesian GLM) machine learning algorithms. To this end, owing to the importance of various geo-environmental and soil properties in the evolution and creation of piping erosion, 18 variables were considered for modeling the piping erosion susceptibility in the Zarandieh watershed. A total of 152 points of piping erosion were recognized in the study area, which were divided into training (70%) and validation (30%) sets for modeling. The area under the curve (AUC) was used to assess the efficiency of the RF, SVM, and Bayesian GLM models. The piping erosion susceptibility results indicated that all three models had high efficiency in the testing step, with AUC values of 0.90 for RF, 0.88 for SVM, and 0.87 for Bayesian GLM. Altitude, pH, and bulk density were the variables with the greatest influence on piping erosion susceptibility in the Zarandieh watershed. This result indicates that geo-environmental and soil chemical variables are responsible for the expansion of piping erosion in the Zarandieh watershed.
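For readers unfamiliar with this type of susceptibility modelling, a hedged sketch of the workflow (train RF and SVM classifiers, evaluate with AUC) is given below. The synthetic random data merely stand in for the real geo-environmental and soil predictors (altitude, pH, bulk density, etc.) used in the study, so the printed AUC values are meaningless.

```python
# Illustrative sketch of susceptibility modelling with RF and SVM plus AUC evaluation.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X = rng.normal(size=(152, 18))        # 152 locations, 18 conditioning factors (synthetic)
y = rng.integers(0, 2, size=152)      # 1 = piping erosion observed, 0 = absent (synthetic)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

for name, model in [("RF", RandomForestClassifier(random_state=0)),
                    ("SVM", SVC(probability=True, random_state=0))]:
    model.fit(X_tr, y_tr)
    auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
    print(name, round(auc, 2))        # with real data the study reports ~0.90 / 0.88
```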
Rapid advancements of modern technologies put high demands on the mathematical modelling of engineering systems. Typically, systems are no longer "simple" objects, but rather coupled systems involving multiphysics phenomena, the modelling of which requires coupling models that describe the different phenomena. After constructing a mathematical model, it is essential to analyse the correctness of the coupled models and to detect modelling errors compromising the final modelling result. Broadly, there are two classes of modelling errors: (a) errors related to abstract modelling, e.g., conceptual errors concerning the coherence of a model as a whole, and (b) errors related to concrete modelling or instance modelling, e.g., questions of approximation quality and implementation. Instance modelling errors, on the one hand, are relatively well understood. Abstract modelling errors, on the other, are not appropriately addressed by modern modelling methodologies. The aim of this paper is to initiate a discussion on abstract approaches and their usability for the mathematical modelling of engineering systems, with the goal of making it possible to catch conceptual modelling errors early and automatically with computer-assisted tools. To that end, we argue that it is necessary to identify and employ suitable mathematical abstractions to capture an accurate conceptual description of the process of modelling engineering systems.
FCS-MBFLEACH: Designing an Energy-Aware Fault Detection System for Mobile Wireless Sensor Networks
(2019)
Wireless sensor networks (WSNs) consist of large numbers of sensor nodes that are densely distributed over a geographical region, in a completely randomized fashion, for monitoring, identifying, and analyzing physical events. The crucial challenge in wireless sensor networks is the nodes' strong dependence on limited, non-rechargeable battery power for exchanging information wirelessly, which makes managing and monitoring these nodes with respect to abnormal changes very difficult. Such anomalies arise from faults, including hardware and software faults, anomalies, and attacks by intruders, all of which affect the comprehensiveness of the data collected by wireless sensor networks. Hence, crucial measures should be taken to detect faults in the network early, despite the limitations of the sensor nodes. Machine learning methods offer solutions that can be used to detect sensor node faults in the network. The purpose of this study is to use several classification methods to compute the fault detection accuracy at different node densities under two scenarios in regions of interest, using methods such as MB-FLEACH, one-class support vector machine (SVM), fuzzy one-class SVM, and a combination of SVM and FCS-MBFLEACH. It should be noted that, in studies so far, no super cluster head (SCH) selection has been performed to detect node faults in the network. The simulation outcomes demonstrate that the FCS-MBFLEACH method achieves the best performance in terms of fault detection accuracy, false-positive rate (FPR), average remaining energy, and network lifetime compared to the other classification methods.
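To illustrate the one-class classification idea mentioned above, a toy sketch of one-class SVM fault detection on sensor readings follows. The synthetic "normal" and "faulty" measurements and the parameter choices are assumptions for illustration and do not reproduce the MB-FLEACH / FCS-MBFLEACH simulation setup.

```python
# Toy sketch: flag anomalous sensor readings with a one-class SVM.
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(1)
normal = rng.normal(loc=25.0, scale=0.5, size=(200, 1))   # healthy temperature readings
faulty = np.array([[40.0], [0.0], [55.0]])                 # stuck-at / spike faults

detector = OneClassSVM(nu=0.05, gamma="scale").fit(normal)
print(detector.predict(faulty))      # -1 marks readings flagged as faulty
print(detector.predict(normal[:5]))  # mostly +1 for healthy readings
```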
Hochschulwege 2015
(2017)
The contributions collected in these proceedings deal with the field of tension that arises between external funding programmes, change projects, and the goals, structures, and conditions of the respective university. In this field of tension, friction is inevitable, as existing structures and goals come into conflict with new undertakings and ideas. Some of the projects, simply through their financial volume and the resulting leverage, call the traditional relationships between teaching, research, and the science-supporting areas into question and, in part, turn them upside down. The guiding questions of the conference and of the contributions collected here were therefore: How do universities reconcile their individual goals with those of nationwide programmes or state-specific funding measures? How do universities deal with their projects? How does change take place at universities? And finally: What remains of the impulses that projects set? The contributions collected in these proceedings offer first answers based on the experience gained so far. They engage intensively with the factors that can promote or hinder the success of change processes and projects, and derive from them recommendations for design processes at universities.
30. Forum Bauinformatik
(2018)
The Bauhaus-Universität Weimar has long been closely associated with the Forum Bauinformatik. The event was launched here in 1989 by the Arbeitskreis Bauinformatik, and the 10th and 18th Forum Bauinformatik (1998 and 2006, respectively) also took place in Weimar. This year we are therefore particularly pleased to host the 30th anniversary at the Bauhaus-Universität Weimar and to welcome many interested researchers from the field of computing in civil engineering to Weimar.
The Forum Bauinformatik has long since become an integral part of computing in civil engineering in the German-speaking world. True to its traditional motto "by young researchers for young researchers", it offers early-career researchers in particular the opportunity to present their research work, to discuss problems within the field, and to learn about the latest state of research. It also provides an excellent opportunity to enter the scientific community in the field of computing in civil engineering and to make contact with other researchers.
This year we received 49 interesting, high-quality contributions, above all in the areas of simulation, modelling, information management, geoinformatics, structural health monitoring, visualization, traffic simulation, and optimization. We would like to thank all authors, co-authors, and reviewers, whose commitment made this year's Forum Bauinformatik possible in the first place. We also thank Professor Große and Professor Díaz for their support in selecting the contributions for the Best Paper Awards.
Our warm thanks go to the colleagues at the Professur Informatik im Bauwesen of the Bauhaus-Universität Weimar for their organizational, technical, and advisory support during the planning of the event.
During the previous decades, the growing demand for security in the digital world, e.g., on the Internet, led to numerous groundbreaking research topics in the field of cryptography. This thesis focuses on the design and analysis of cryptographic primitives and schemes to be used for the authentication of data and of communication endpoints, i.e., users. It is structured into three parts.
In the first part, we present the first freely scalable multi-block-length block-cipher-based compression function (Counter-bDM). The presented design is accompanied by a thorough security analysis regarding its preimage and collision security.
The second and major part is devoted to password hashing. It is motivated by the large number of passwords leaked in recent years and by our discovery of side-channel attacks on scrypt – the first modern password scrambler that allowed the amount of memory required to compute a password hash to be parameterized. After summarizing the properties we expect from a modern password scrambler, we (1) describe a cache-timing attack on scrypt based on its password-dependent memory-access pattern and (2) outline an additional attack vector – garbage-collector attacks – that exploits optimizations which may omit overwriting the internally used memory. Based on our observations, we introduce Catena – the first memory-demanding password-scrambling framework that allows a password-independent memory-access pattern for resistance to the aforementioned attacks. Catena was submitted to the Password Hashing Competition (PHC) and, after two years of rigorous analysis, ended up as a finalist, gaining special recognition for its agile framework approach and side-channel resistance. We provide six instances of Catena suitable for a variety of applications. We close the second part of this thesis with an overview of modern password scramblers regarding their functional, security, and general properties, supported by a brief analysis of their resistance to garbage-collector attacks.
The third part of this thesis is dedicated to the integrity (authenticity of data) of nonce-based authenticated encryption (NAE) schemes. We introduce the so-called j-IV-Collision Attack, which yields an upper bound for an adversary that is provided with a first successful forgery and tries to efficiently compute j additional forgeries for a particular NAE scheme (in short: reforgeability). Additionally, we introduce the corresponding security notion j-INT-CTXT and provide a comparative analysis (regarding j-INT-CTXT security) of the third-round submissions to the CAESAR competition and of the four classical and widely used NAE schemes CWC, CCM, EAX, and GCM.
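The following toy sketch illustrates, in heavily simplified form, the difference between a password-dependent and a password-independent memory-access pattern that underlies the cache-timing discussion above. It is neither scrypt nor Catena; the hash choice, memory size, and round count are arbitrary assumptions for illustration only.

```python
# Toy illustration (not Catena or scrypt) of password-dependent vs.
# password-independent memory access. In the first variant the index of the
# next read depends on secret data, so the sequence of touched cache lines can
# leak information; in the second the access order is fixed and public.
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def fill_memory(password: bytes, n: int = 16) -> list:
    mem = [h(password)]
    for _ in range(1, n):
        mem.append(h(mem[-1]))
    return mem

def password_dependent(password: bytes, rounds: int = 8) -> bytes:
    mem, state = fill_memory(password), h(password)
    for _ in range(rounds):
        idx = state[0] % len(mem)      # secret-dependent index -> cache-timing leak
        state = h(state + mem[idx])
    return state

def password_independent(password: bytes, rounds: int = 8) -> bytes:
    mem, state = fill_memory(password), h(password)
    for r in range(rounds):
        idx = r % len(mem)             # fixed, public access pattern
        state = h(state + mem[idx])
    return state

print(password_dependent(b"correct horse").hex()[:16])
print(password_independent(b"correct horse").hex()[:16])
```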
We apply keyquery-based taxonomy composition to compute a classification system for the CORE dataset, a shared crawl of about 850,000 scientific papers. Keyquery-based taxonomy composition can be understood as a two-phase hierarchical document clustering technique that utilizes search queries as cluster labels: in the first phase, the document collection is indexed by a reference search engine, and the documents are tagged with the search queries for which they are relevant—their so-called keyqueries. In the second phase, a hierarchical clustering is formed from the keyqueries within an iterative process. We use the explicit topic model ESA as the document retrieval model to index the CORE dataset in the reference search engine. Under the ESA retrieval model, documents are represented as vectors of similarities to Wikipedia articles, a methodology proven to be advantageous for text categorization tasks. Our paper presents the generated taxonomy and reports on quantitative properties such as document coverage and processing requirements.
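A simplified sketch of an ESA-style representation is given below: a document is mapped to a vector of similarities to "concept" texts. Here three in-line snippets stand in for Wikipedia articles, and TF-IDF plus cosine similarity are assumptions for illustration; the real setup indexes Wikipedia itself.

```python
# Simplified ESA-style document representation: similarities to concept texts.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

concepts = {
    "Machine learning": "algorithms that learn patterns from labelled data",
    "Astronomy":        "the study of stars galaxies and celestial objects",
    "Botany":           "the scientific study of plants and flowers",
}
document = "we train a neural network on labelled data to classify images"

vectorizer = TfidfVectorizer().fit(list(concepts.values()) + [document])
concept_vecs = vectorizer.transform(list(concepts.values()))
doc_vec = vectorizer.transform([document])

esa_vector = cosine_similarity(doc_vec, concept_vecs)[0]
print(dict(zip(concepts, esa_vector.round(2))))   # similarity to each concept
```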
We present StarWatch, our application for the real-time analysis of radio astronomical data in a virtual environment. Serving as an interface to radio astronomical databases or being applied to live data from radio telescopes, the application provides various data filters measuring the signal-to-noise ratio (SNR), Doppler drift, and the degree of signal localization on the celestial sphere, as well as other useful tools for signal extraction and classification. Originally designed for the database of narrow-band signals from the SETI Institute (setilive.org), the application has recently been extended to the detection of wide-band periodic signals, necessary for the search for pulsars. We also address the detection of weak signals possessing arbitrary waveforms and present several data filters suitable for this purpose.
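As a minimal sketch of the simplest filter mentioned above, the snippet below estimates the SNR of a spectral peak relative to the noise floor. The synthetic spectrum and the peak-versus-noise-floor definition of SNR are assumptions for illustration, not StarWatch's actual filter implementation.

```python
# Toy SNR estimate of a narrow-band peak in a synthetic spectrum.
import numpy as np

rng = np.random.default_rng(2)
spectrum = rng.normal(loc=1.0, scale=0.1, size=1024)   # noise floor
spectrum[300] += 5.0                                    # injected narrow-band signal

peak_bin = int(spectrum.argmax())
noise = np.delete(spectrum, peak_bin)                   # exclude the peak from the noise statistics
snr = (spectrum[peak_bin] - noise.mean()) / noise.std()
print(f"SNR ~ {snr:.1f} sigma at bin {peak_bin}")
```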
Cyber security has become a major concern for users and businesses alike. Cyberstalking and harassment have been identified as a growing anti-social problem. Besides detecting cyberstalking and harassment, there is a need to gather digital evidence, often by the victim. To this end, we provide an overview of and discuss relevant technological means, in particular from text analytics and machine learning, that are capable of addressing the above challenges. We present a framework for the detection of text-based cyberstalking and outline the role and challenges of some core techniques such as author identification, text classification, and personalisation. We then discuss PAN, a network and evaluation initiative that focusses on digital text forensics, in particular author identification.
Urban and building energy simulation models are usually driven by typical meteorological year (TMY) weather data, often in a TMY2 or EPW format. However, the locations where these historical datasets were collected (usually airports) generally do not represent the local, site-specific micro-climates that cities develop. In this paper, a humid sub-tropical climate context is considered. An idealised "urban unit model" of 250 m radius is presented as a method of adapting commonly available weather data files to the local micro-climate. This idealised "urban unit model" is based on the main thermal and morphological characteristics of nine sites with residential/institutional (university) use in Hangzhou, China. The area of the urban unit was determined by the region of influence on the air temperature signal at the centre of the unit. Air temperature and relative humidity were monitored, and the characteristics of the surroundings were assessed (e.g., green space, blue space, built form). The "urban unit model" was then implemented in micro-climatic simulations using a Computational Fluid Dynamics – Surface Energy Balance analysis tool (ENVI-met, Version 4). The "urban unit model" approach used here in the simulations delivered results with performance evaluation indices comparable to previously published work (for air temperature: RMSE < 1, index of agreement d > 0.9). The micro-climatic simulation results were then used to adapt the air temperature and relative humidity of the TMY file for Hangzhou to represent the local, site-specific morphology under three different weather forcing cases, i.e., cloudy/rainy weather (Group 1), clear-sky average weather conditions (Group 2), and clear-sky hot weather (Group 3). Following model validation, two scenarios (domestic and non-domestic building use) were developed to assess building heating and cooling loads against the business-as-usual case of using typical meteorological year data files. The final "urban weather projections" obtained from the simulations with the "urban unit model" were used to compare the degree days amongst the reference TMY file, the TMY file with a bulk UHI offset, and the TMY file adapted for the site-specific micro-climate (TMY-UWP). The comparison shows that the Heating Degree Days (HDD) of the TMY file (1598 degree days) decreased by 6 % in the "TMY + UHI" case and by 13 % in the "TMY-UWP" case, showing that the local, site-specific micro-climate accounts for an additional 7 % reduction (i.e., from 6 to 13 %) relative to the bulk UHI effect in the city. The Cooling Degree Days (CDD) of the "TMY + UHI" file are 17 % higher than those of the reference TMY (207 degree days), and the use of the "TMY-UWP" file results in an additional 14 % increase in comparison with the "TMY + UHI" file (i.e., from 17 to 31 %). This difference between the TMY-UWP and the TMY + UHI files reflects the thermal characteristics of the specific urban morphology of the studied sites compared to the wider city. A dynamic thermal simulation tool (TRNSYS) was used to calculate the change in heating and cooling load demand for a domestic and a non-domestic building scenario. The heating and cooling loads calculated with the adapted TMY-UWP file show that in both scenarios the cooling load increases by approximately 20 % while the heating load decreases by 20 %.
Assuming typical COP values for a reversible air-conditioning system of 2.0 for heating and 3.5 for cooling, the total electricity consumption estimated with the "urbanised" TMY-UWP file decreases by 11 % in comparison with the "business as usual" (i.e., reference TMY) case. Overall, the proposed method was found to be appropriate for urban and building energy performance simulations in humid sub-tropical climate cities such as Hangzhou, addressing some of the shortfalls of current simulation weather datasets such as the TMY.
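For reference, the degree-day bookkeeping used in the comparison above can be sketched as follows. The base temperature of 18 °C and the synthetic daily temperatures are assumptions for illustration and are not the values behind the reported HDD/CDD figures.

```python
# Heating and cooling degree days from daily mean temperatures (illustrative only).
import numpy as np

rng = np.random.default_rng(3)
daily_mean_temp = rng.normal(loc=17.0, scale=9.0, size=365)   # degC, one synthetic year
base = 18.0                                                    # assumed base temperature

hdd = np.clip(base - daily_mean_temp, 0.0, None).sum()   # heating degree days
cdd = np.clip(daily_mean_temp - base, 0.0, None).sum()   # cooling degree days
print(round(hdd), round(cdd))
```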
The point collocation method of finite spheres (PCMFS) is used to model the hyperelastic response of soft biological tissue in real time within the framework of virtual surgery simulation. The proper orthogonal decomposition (POD) model order reduction (MOR) technique is used to obtain a reduced-order model of the problem, minimizing computational cost. The PCMFS is a physics-based meshfree numerical technique for the real-time simulation of surgical procedures in which the approximation functions are applied directly to the strong form of the boundary value problem without the need for integration, increasing computational efficiency. Since computational speed plays a significant role in the simulation of surgical procedures, the proposed technique is designed to model realistic nonlinear behavior of organs in real time. Numerical results demonstrate the effectiveness of the new methodology through a comparison between full and reduced analyses for several nonlinear problems. It is shown that the proposed technique achieves good agreement with the full model while significantly reducing computational and data-storage costs.
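A generic sketch of POD model order reduction via an SVD of a snapshot matrix is shown below. The synthetic snapshot data and the chosen number of modes are assumptions; the code illustrates the general POD-MOR idea, not the PCMFS implementation described in the paper.

```python
# Generic POD sketch: build a reduced basis from snapshots and project a state onto it.
import numpy as np

rng = np.random.default_rng(4)
n_dof, n_snapshots, r = 500, 40, 5
snapshots = rng.normal(size=(n_dof, 3)) @ rng.normal(size=(3, n_snapshots))  # low-rank field data

U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
basis = U[:, :r]                       # first r POD modes

# Project a full-order state onto the reduced basis and reconstruct it.
u_full = snapshots[:, 0]
u_reduced = basis.T @ u_full           # r coefficients instead of n_dof values
u_approx = basis @ u_reduced
print("relative error:", np.linalg.norm(u_full - u_approx) / np.linalg.norm(u_full))
```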
This study focuses on the finite element analysis of a model comprising a femur into which a femoral component of a total hip replacement (THR) was implanted. The considered prosthesis is fabricated from a functionally graded material (FGM) comprising a layer of a titanium alloy bonded to a layer of hydroxyapatite. The elastic modulus of the FGM was varied in the radial, longitudinal, and longitudinal-radial directions by altering the volume fraction gradient exponent. Four cases were studied, involving two different methods of anchoring the prosthesis to the spongy bone and two cases of applied loading. The results revealed that the FG prostheses induced more strain energy density (SED) in the bone. The FG prostheses carried less stress, while more stress was induced in the bone and cement. Meanwhile, less interface shear stress arose at the prosthesis-bone interface for the noncemented FG prostheses. The cement-bone interface carried more stress than the prosthesis-cement interface. Stair climbing had more harmful effects on the implanted femur components than normal walking, as it caused higher stresses. Therefore, stress shielding, the developed stresses, and the interface stresses in the THR components can be adjusted by controlling the stiffness of the FG prosthesis through the volume fraction gradient exponent.
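A sketch of how a volume fraction gradient exponent controls the graded elastic modulus is given below. The power-law (rule-of-mixtures) grading form and the specific modulus values are illustrative assumptions, not the exact grading law or material data of the study.

```python
# Power-law grading of the elastic modulus across an FGM layer (illustrative assumption).
import numpy as np

E_ti, E_ha = 110e9, 10e9          # Pa: titanium alloy vs. hydroxyapatite (approximate)

def graded_modulus(xi: np.ndarray, n: float) -> np.ndarray:
    """xi in [0, 1]: normalized position across the layer; n: volume fraction gradient exponent."""
    volume_fraction_ti = xi ** n
    return E_ha + (E_ti - E_ha) * volume_fraction_ti

xi = np.linspace(0.0, 1.0, 5)
for n in (0.5, 1.0, 2.0):
    print(n, np.round(graded_modulus(xi, n) / 1e9, 1))   # modulus in GPa along the layer
```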
Different types of data provide different types of information. The present research analyzes the prediction error obtained when different data types are available for calibration. The contributions of different measurement types to model calibration and prognosis are evaluated. A coupled 2D hydro-mechanical model of a water-retaining dam is taken as an example. Here, the mean effective stress in the porous skeleton is reduced due to an increase in pore water pressure under drawdown conditions. Relevant model parameters are identified by scaled sensitivities. Then, Particle Swarm Optimization is applied to determine the optimal parameter values, and finally, the error in prognosis is determined. We compare the predictions of the optimized models with results from a forward run of the reference model to obtain the actual prediction errors. The analyses presented here were performed by calibrating the hydro-mechanical model to 31 data sets of 100 observations of varying data types. The prognosis results improve when diversified information is used for calibration. However, when several types of information are used, the number of observations has to be increased to cover a representative part of the model domain. For an analysis with a constant number of observations, a compromise between data type availability and domain coverage proves to be the best solution. Which type of calibration information contributes to the best prognoses could not be determined in advance. The error in model prognosis does not depend on the error in calibration, but on the parameter error, which unfortunately cannot be determined in inverse problems since we do not know the parameters' real values. The best prognoses were obtained independently of the calibration fit. However, excellent calibration fits led to an increase in the variation of the prognosis error. In the case of excellent fits, parameter values more often approached the limits of physically reasonable values. To improve the reliability of the prognoses, the expected values of the parameters should be considered as prior information in the optimization algorithm.
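The following bare-bones sketch shows the kind of particle swarm optimization used for parameter calibration: minimize the misfit between model output and observations. The quadratic toy "model", the swarm size, and the PSO constants are assumptions; the paper calibrates a coupled hydro-mechanical dam model instead.

```python
# Minimal particle swarm optimization for parameter calibration (toy misfit function).
import numpy as np

rng = np.random.default_rng(5)
true_params = np.array([2.0, -1.0])
observations = true_params                       # toy: model(p) = p, data = true parameters

def misfit(p):
    return np.sum((p - observations) ** 2)

n_particles, n_iter, dim = 20, 100, 2
pos = rng.uniform(-5, 5, size=(n_particles, dim))
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), np.array([misfit(p) for p in pos])
gbest = pbest[pbest_val.argmin()].copy()

for _ in range(n_iter):
    r1, r2 = rng.random((2, n_particles, dim))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = pos + vel
    vals = np.array([misfit(p) for p in pos])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[pbest_val.argmin()].copy()

print("calibrated parameters:", np.round(gbest, 3))
```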
Assessing Essential Qualities of Urban Space with Emotional and Visual Data Based on GIS Technique
(2016)
Finding a method to evaluate people’s emotional responses to urban spaces in a valid and objective way is fundamentally important for urban design practices and related policy making. Analysis of the essential qualities of urban space could be made both more effective and more accurate using innovative information techniques that have become available in the era of big data. This study introduces an integrated method based on geographical information systems (GIS) and an emotion-tracking technique to quantify the relationship between people’s emotional responses and urban space. This method can evaluate the degree to which people’s emotional responses are influenced by multiple urban characteristics such as building shapes and textures, isovist parameters, visual entropy, and visual fractals. The results indicate that urban spaces may influence people’s emotional responses through both spatial sequence arrangements and shifting scenario sequences. Emotional data were collected with body sensors and GPS devices. Spatial clustering was detected to target effective sampling locations; then, isovists were generated to extract building textures. Logistic regression and a receiver operating characteristic analysis were used to determine the key isovist parameters and the probabilities that they influenced people’s emotion. Finally, based on the results, we make some suggestions for design professionals in the field of urban space optimization.
The paper presents the results of research which, based on probabilistic and statistical modeling, identifies the relationship between certain socio-economic factors and the number of people killed in road accidents in the regions of the Russian Federation. It notes the similar nature of processes, across various fields, in which loss of life occurs. Scientific methods and techniques were used for data processing and for analyzing the study findings: a systematic approach, methods of system analysis (algorithmization, mathematical programming), and mathematical statistics. The scientific novelty lies in formulating, formalizing, and solving problems related to the analysis of regional road traffic accidents and in modeling these accidents while taking socio-economic factors into account.
We present recent developments of adaptive wavelet solvers for elliptic eigenvalue problems. We describe the underlying abstract iteration scheme of the preconditioned perturbed iteration. We apply the iteration to a simple model problem in order to identify the main ideas which a numerical realization of the abstract scheme is based upon. This indicates how these concepts carry over to wavelet discretizations. Finally we present numerical results for the Poisson eigenvalue problem on an L-shaped domain.
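For reference, the model problem mentioned above is presumably the Poisson eigenvalue problem with homogeneous Dirichlet boundary conditions; the boundary conditions and the specific convention for the L-shaped domain are assumptions, as they are not stated here.

```latex
% Poisson eigenvalue problem on an L-shaped domain (homogeneous Dirichlet
% boundary conditions assumed; one common convention for \Omega shown).
\[
\begin{aligned}
  -\Delta u &= \lambda u && \text{in } \Omega = (-1,1)^2 \setminus \bigl([0,1) \times (-1,0]\bigr), \\
          u &= 0         && \text{on } \partial\Omega .
\end{aligned}
\]
```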