TY - INPR A1 - Wetzstein, Gordon A1 - Bimber, Oliver T1 - A Generalized Approach to Radiometric Compensation N2 - We propose a novel method that applies the light transport matrix for performing an image-based radiometric compensation which accounts for all possible types of light modulation. For practical application, the matrix is decomposed into clusters of mutually influencing projector and camera pixels. The compensation is modeled as a linear system that can be solved with respect to the projector patterns. Precomputing the inverse light transport in combination with an efficient implementation on the GPU makes interactive compensation rates possible. Our generalized method unifies existing approaches that address individual problems. Based on examples, we show that it is possible to project corrected images onto complex surfaces such as an inter-reflecting statuette, glossy wallpaper, or through highly refractive glass. Furthermore, we illustrate that a side effect of our approach is an increase in the overall sharpness of defocused projections. KW - Association for Computing Machinery / Special Interest Group on Graphics KW - CGI KW - Maschinelles Sehen Y1 - 2006 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20111215-7625 ER - TY - THES A1 - Bruns, Erich T1 - Adaptive Image Classification on Mobile Phones N2 - The advent of high-performance mobile phones has opened up the opportunity to develop new context-aware applications for everyday life. In particular, applications for context-aware information retrieval in conjunction with image-based object recognition have become a focal area of recent research. In this thesis we introduce an adaptive mobile museum guidance system that allows visitors in a museum to identify exhibits by taking a picture with their mobile phone. Besides approaches to object recognition, we present different adaptation techniques that improve classification performance. After providing a comprehensive background of context-aware mobile information systems in general, we present an on-device object recognition algorithm and show how its classification performance can be improved by capturing multiple images of a single exhibit. To accomplish this, we combine the classification results of the individual pictures and consider the perspective relations among the retrieved database images. In order to identify multiple exhibits in pictures, we present an approach that uses the spatial relationships among the objects in images. These relationships make it possible to infer and validate the locations of undetected objects relative to the detected ones and additionally improve classification performance. To cope with environmental influences, we introduce an adaptation technique that establishes ad-hoc wireless networks among the visitors’ mobile devices to exchange classification data. This ensures constant classification rates under varying illumination levels and changing object placement. Finally, in addition to localization using RF technology, we present an adaptation technique that uses user-generated spatio-temporal pathway data for person movement prediction. Based on the history of previously visited exhibits, the algorithm determines possible future locations and incorporates these predictions into the object classification process. This increases classification performance and offers benefits comparable to traditional localization approaches but without the need for additional hardware. 
Through multiple field studies and laboratory experiments, we demonstrate the benefits of each approach and show how they influence the overall classification rate. N2 - The introduction of mobile phones with built-in sensors such as cameras, GPS, or accelerometers, as well as communication technologies such as Bluetooth or WLAN, enables the development of new context-aware applications for everyday life. In particular, applications for context-aware information retrieval in combination with image-based object recognition have moved into the focus of current research. The contribution of this thesis is the development of an image-based mobile museum guidance system that uses different adaptation techniques to improve object recognition. It is shown how object recognition algorithms can be realized on mobile phones and how the recognition rate can be improved, for example by employing ad-hoc networks or by taking predictions of person movement into account. T2 - Adaptive Bilderkennung auf Mobiltelefonen KW - Kontextbezogenes System KW - Bilderkennung KW - Ubiquitous Computing KW - Mobile Computing KW - Maschinelles Sehen KW - Museumsführer KW - Handy KW - Wegrouten KW - Positionsbestimmung KW - PhoneGuide KW - Bluetooth tracking KW - pathway awareness KW - museum guidance system Y1 - 2010 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20100707-15092 ER - TY - RPRT A1 - Amano, Toshiyuki A1 - Bimber, Oliver A1 - Grundhöfer, Anselm T1 - Appearance Enhancement for Visually Impaired with Projector Camera Feedback N2 - Visual impairment is a common problem worldwide. Projector-based AR techniques can change the appearance of real objects and can thereby help to improve visibility for the visually impaired. We propose a new framework for appearance enhancement with a projector-camera system that employs a model predictive controller. This framework enables arbitrary image processing in the real world, similar to photo-retouching software, and helps to improve visibility for the visually impaired. In this article, we show appearance enhancement results for Peli's method and Wolffshon's method for low vision, and for Jefferson's method for color vision deficiencies. The experimental results confirm the potential of our method to enhance the appearance for the visually impaired, comparable to appearance enhancement for digital images and television viewing. KW - Maschinelles Sehen KW - Projector Camera System KW - Model Predictive Control KW - Visually Impaired Y1 - 2010 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20100106-14974 ER - TY - JOUR A1 - Grundhöfer, Anselm A1 - Seeger, Manja A1 - Häntsch, Ferry A1 - Bimber, Oliver T1 - Coded Projection and Illumination for Television Studios N2 - We propose the application of temporally and spatially coded projection and illumination in modern television studios. In our vision, this supports ad-hoc re-illumination, automatic keying, unconstrained presentation of moderation information, camera tracking, and scene acquisition. 
In this paper, we show how a new adaptive imperceptible pattern projection that considers parameters of human visual perception, linked with real-time difference keying, enables in-shot optical tracking using a novel dynamic multi-resolution marker technique. KW - Association for Computing Machinery / Special Interest Group on Graphics KW - CGI KW - Maschinelles Sehen KW - Virtuelle Studios KW - Erweiterte Realität KW - Kamera Tracking KW - Projektion KW - Virtual Studios KW - Augmented Reality KW - Camera Tracking KW - Projection Y1 - 2007 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20111215-8005 ER - TY - RPRT A1 - Grundhöfer, Anselm A1 - Seeger, Manja A1 - Häntsch, Ferry A1 - Bimber, Oliver T1 - Dynamic Adaptation of Projected Imperceptible Codes N2 - In this paper, we present a novel adaptive imperceptible pattern projection technique that considers parameters of human visual perception. A coded image that is invisible to human observers is temporally integrated into the projected image, but can be reconstructed by a synchronized camera. The embedded code is dynamically adjusted on the fly to guarantee its imperceptibility and to adapt it to the current camera pose. Linked with real-time flash keying, for instance, this enables in-shot optical tracking using a dynamic multi-resolution marker technique. A sample prototype demonstrates the application of our method in the context of augmentations in television studios. KW - Association for Computing Machinery / Special Interest Group on Graphics KW - CGI KW - Maschinelles Sehen KW - Erweiterte Realität KW - Kamera Tracking KW - Projektion KW - Augmented Reality KW - Camera Tracking KW - Projection Y1 - 2007 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20111215-8168 ER - TY - RPRT A1 - Grundhöfer, Anselm A1 - Bimber, Oliver T1 - Dynamic Bluescreens N2 - Blue screens and chroma keying technology are essential for digital video composition. Professional studios apply tracking technology to record the camera path for perspective augmentations of the original video footage. Although this technology is well established, it does not offer a great deal of flexibility. For shoots at non-studio sets, physical blue screens might have to be installed, or parts have to be recorded separately in a studio. We present a simple and flexible way of projecting corrected keying colors onto arbitrary diffuse surfaces using synchronized projectors and radiometric compensation. Thereby, the reflectance of the underlying real surface is neutralized. A temporal multiplexing between projection and flash illumination allows capturing the fully lit scene, while still being able to key the foreground objects. In addition, we embed spatial codes into the projected key image to enable the tracking of the camera. Furthermore, the reconstruction of the scene geometry is implicitly supported. 
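As an illustration of the per-pixel radiometric compensation that the Dynamic Bluescreens entry above relies on, the following NumPy sketch inverts a simple per-pixel response model C = FM * P + EM that is commonly assumed for diffuse surfaces. It is a minimal sketch under that assumed model, not the authors' implementation; the calibration maps FM (surface/projector modulation), EM (environment light), and the uniform key-color target are hypothetical stand-ins.

import numpy as np

def compensate(target, FM, EM, eps=1e-3):
    # Invert the assumed per-pixel response model C = FM * P + EM to find the
    # projector image P that makes the surface appear as `target`. All arrays
    # are H x W x 3 with values in [0, 1]; values the projector cannot reach
    # are clipped, which is where visible artifacts would appear in practice.
    P = (target - EM) / np.maximum(FM, eps)
    return np.clip(P, 0.0, 1.0)

# Hypothetical usage: FM and EM would come from a one-time calibration pass
# (e.g., capturing the surface under full white and full black projection).
if __name__ == "__main__":
    H, W = 480, 640
    FM = np.full((H, W, 3), 0.6)     # stand-in reflectance/modulation map
    EM = np.full((H, W, 3), 0.05)    # stand-in environment-light map
    key = np.zeros((H, W, 3))
    key[..., 2] = 0.5                # uniform blue keying color as the target appearance
    print(compensate(key, FM, EM)[0, 0])   # projector RGB for one pixel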
KW - Association for Computing Machinery / Special Interest Group on Graphics KW - CGI KW - Maschinelles Sehen KW - Farbstanzen KW - Erweiterte Realität KW - Projektion KW - Chroma Keying KW - Bildmischung KW - Augmented Reality KW - Projection KW - Chromakeying KW - Compositing Y1 - 2008 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20080226-13016 ER - TY - JOUR A1 - Kreskowski, Adrian A1 - Rendle, Gareth A1 - Fröhlich, Bernd T1 - Efficient Direct Isosurface Rasterization of Scalar Volumes JF - Computer Graphics Forum N2 - In this paper we propose a novel and efficient rasterization-based approach for direct rendering of isosurfaces. Our method exploits the capabilities of task and mesh shader pipelines to identify subvolumes containing potentially visible isosurface geometry, and to efficiently extract primitives which are consumed on the fly by the rasterizer. As a result, our approach requires little preprocessing and negligible additional memory. Direct isosurface rasterization is competitive in terms of rendering performance when compared with ray-marching-based approaches, and significantly outperforms them for increasing resolution in most situations. Since our approach is entirely rasterization based, it affords straightforward integration into existing rendering pipelines, while allowing the use of modern graphics hardware features, such as multi-view stereo for efficient rendering of stereoscopic image pairs for geometry-bound applications. Direct isosurface rasterization is suitable for applications where isosurface geometry is highly variable, such as interactive analysis scenarios for static and dynamic data sets that require frequent isovalue adjustment. KW - Rendering KW - Rastergrafik KW - Visualisierung KW - Maschinelles Sehen KW - isosurface KW - rendering KW - rasterization Y1 - 2023 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20230525-63835 UR - https://onlinelibrary.wiley.com/doi/full/10.1111/cgf.14670 VL - 2022 IS - Volume 41, Issue 7 SP - 215 EP - 226 PB - Wiley Blackwell CY - Oxford ER - TY - INPR A1 - Zollmann, Stefanie A1 - Bimber, Oliver T1 - Imperceptible Calibration for Radiometric Compensation N2 - We present a novel multi-step technique for imperceptible geometry and radiometry calibration of projector-camera systems. Our approach can be used to display geometry- and color-corrected images on non-optimized surfaces at interactive rates while simultaneously performing a series of invisible structured light projections during runtime. It supports disjoint projector-camera configurations, fast and progressive improvements, as well as real-time correction rates of arbitrary graphical content. The calibration is automatically triggered when mis-registrations between camera, projector and surface are detected. 
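The imperceptible pattern projection used by the calibration entry above (and by the coded-projection entries earlier) builds on complementary temporal frames: a small code offset is added in one frame and subtracted in the next, so the eye integrates back to the intended image while a synchronized camera can difference the two captures. The sketch below illustrates only that basic principle with an arbitrary offset value; it ignores the perception-driven adaptation and the structured-light decoding described in the abstracts.

import numpy as np

def embed(image, code, delta=0.02):
    # Split one projected frame into two complementary frames. `image` holds the
    # visible content and `code` a binary pattern (both H x W, values in [0, 1]).
    # Temporal integration by the eye averages the two frames back to `image`.
    offset = delta * (2.0 * code - 1.0)      # +delta where code == 1, -delta elsewhere
    return np.clip(image + offset, 0.0, 1.0), np.clip(image - offset, 0.0, 1.0)

def extract(capture_a, capture_b):
    # A camera synchronized to the projector differences the two captures;
    # the sign of the difference recovers the embedded binary code.
    return capture_a > capture_b

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    image = rng.uniform(0.2, 0.8, (8, 8))       # arbitrary visible content
    code = rng.uniform(size=(8, 8)) > 0.5       # e.g., one slice of a structured light sequence
    a, b = embed(image, code.astype(float))
    print(np.array_equal(extract(a, b), code))  # True under this idealized model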
KW - Association for Computing Machinery / Special Interest Group on Graphics KW - CGI KW - Maschinelles Sehen KW - unsichtbare Muster Projektion KW - Projektor-Kamera Systeme KW - Kalibrierung KW - Radiometrische Kompensation KW - imperceptible pattern projection KW - projector-camera systems KW - calibration KW - radiometric compensation Y1 - 2007 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20111215-8094 ER - TY - RPRT A1 - Kurz, Daniel A1 - Häntsch, Ferry A1 - Grosse, Max A1 - Schiewe, Alexander A1 - Bimber, Oliver T1 - Laser Pointer Tracking in Projector-Augmented Architectural Environments N2 - We present a system that applies a custom-built pan-tilt-zoom camera for laser-pointer tracking in arbitrary real environments. Once placed in a building environment, it carries out a fully automatic self-registration, registrations of projectors, and sampling of surface parameters, such as geometry and reflectivity. After these steps, it can be used for tracking a laser spot on the surface as well as an LED marker in 3D space, using interplaying fisheye context and controllable detail cameras. The captured surface information can be used for masking out areas that are critical to laser-pointer tracking, and for guiding geometric and radiometric image correction techniques that enable a projector-based augmentation on arbitrary surfaces. We describe a distributed software framework that couples laser-pointer tracking for interaction, projector-based AR as well as video see-through AR for visualizations with the domain-specific functionality of existing desktop tools for architectural planning, simulation, and building surveying. KW - Association for Computing Machinery / Special Interest Group on Graphics KW - CGI KW - Architektur KW - Maschinelles Sehen KW - Laserpointer Tracking KW - Erweiterte Realität KW - Interaktion KW - Projektion KW - Verteilte Systeme KW - Laser Pointer Tracking KW - Augmented Reality KW - Interaction KW - Projection KW - Distributed Systems Y1 - 2007 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20111215-8183 ER - TY - THES A1 - Wetzstein, Gordon T1 - Radiometric Compensation of Global Illumination Effects with Projector-Camera Systems N2 - Projector-based displays have been evolving tremendously in the last decade. Reduced costs and increasing capabilities have led to widespread use for home entertainment and scientific visualization. The rapid development is continuing: techniques that allow seamless projection onto complex everyday environments such as textured walls, window curtains, or bookshelves have recently been proposed. Although cameras enable a completely automatic calibration of the systems, all previously described techniques rely on a precise mapping between projector and camera pixels. Global illumination effects such as reflections, refractions, scattering, and dispersion are completely ignored, since only direct illumination is taken into account. We propose a novel method that applies the light transport matrix for performing an image-based radiometric compensation which accounts for all possible lighting effects. For practical application, the matrix is decomposed into clusters of mutually influencing projector and camera pixels. The compensation is modeled as a linear equation system that can be solved separately for each cluster. For interactive compensation rates, this model is adapted to enable an efficient implementation on programmable graphics hardware. 
Applying the light transport matrix's pseudo-inverse makes it possible to separate the compensation into a computationally expensive preprocessing step (computing the pseudo-inverse) and an online matrix-vector multiplication. The generalized mathematical foundation for radiometric compensation with projector-camera systems is validated with several experiments. We show that it is possible to project corrected imagery onto complex surfaces such as an inter-reflecting statuette and glass. The overall sharpness of defocused projections is increased as well. Using the proposed optimization for GPUs, real-time frame rates are achieved. KW - Association for Computing Machinery / Special Interest Group on Graphics KW - CGI KW - Maschinelles Sehen KW - Projektionssystem KW - radiometrische Kompensation KW - Licht Transport KW - Projector-Camera Systems KW - Radiometric Compensation KW - Inverse Light Transport Y1 - 2006 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20111215-8106 ER - TY - RPRT A1 - Wetzstein, Gordon A1 - Bimber, Oliver T1 - Radiometric Compensation through Inverse Light Transport N2 - Radiometric compensation techniques allow seamless projections onto complex everyday surfaces. Implemented with projector-camera systems, they support the presentation of visual content in situations where projection-optimized screens are not available or not desired, as in museums, historic sites, airplane cabins, or stage performances. We propose a novel approach that employs the full light transport between a projector and a camera to account for many illumination aspects, such as interreflections, refractions, and defocus. Precomputing the inverse light transport in combination with an efficient implementation on the GPU makes the real-time compensation of captured local and global light modulations possible. KW - Association for Computing Machinery / Special Interest Group on Graphics KW - CGI KW - Maschinelles Sehen KW - Projektionssystem KW - radiometrische Kompensation KW - Licht Transport KW - Projector-Camera Systems KW - Radiometric Compensation KW - Inverse Light Transport Y1 - 2007 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20111215-8126 ER - TY - INPR A1 - Grundhöfer, Anselm A1 - Bimber, Oliver T1 - Real-Time Adaptive Radiometric Compensation N2 - Recent radiometric compensation techniques make it possible to project images onto colored and textured surfaces. This is realized with projector-camera systems by scanning the projection surface on a per-pixel basis. With the captured information, a compensation image is calculated that neutralizes geometric distortions and color blending caused by the underlying surface. As a result, the brightness and the contrast of the input image are reduced compared to a conventional projection onto a white canvas. If the input image is not manipulated in its intensities, the compensation image can contain values that are outside the dynamic range of the projector. These will lead to clipping errors and to visible artifacts on the surface. In this article, we present a novel algorithm that dynamically adjusts the content of the input images before radiometric compensation is carried out. This reduces the perceived visual artifacts while simultaneously preserving a maximum of luminance and contrast. The algorithm is implemented entirely on the GPU and is the first of its kind to run in real time. 
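The inverse-light-transport compensation described in the entries above can be summarized, for a single color channel and a known transport matrix T, by the model c = T p + e: the camera image c is the projected image p mapped through T plus environment light e. The sketch below is only an illustration of that formulation with a toy matrix, not the authors' clustered GPU implementation; it shows why the pseudo-inverse can be precomputed offline so that each frame costs one matrix-vector product per cluster. The clustering used here is a trivial single global cluster.

import numpy as np

def precompute_cluster_inverses(T, clusters):
    # For each cluster of mutually influencing pixels, precompute the
    # pseudo-inverse of the corresponding block of the light transport matrix.
    # `clusters` is a list of (camera_rows, projector_cols) index arrays; a real
    # system would derive them from the structure of T.
    return [(rows, cols, np.linalg.pinv(T[np.ix_(rows, cols)]))
            for rows, cols in clusters]

def compensate(target, env, inverses, n_proj):
    # Per frame: solve T p = target - env cluster by cluster (least squares)
    # and clip to the projector's dynamic range.
    p = np.zeros(n_proj)
    residual = target - env
    for rows, cols, T_pinv in inverses:
        p[cols] = T_pinv @ residual[rows]
    return np.clip(p, 0.0, 1.0)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    T = np.diag(rng.uniform(0.5, 0.9, 6))       # toy transport: direct illumination only
    T[0, 1] = T[1, 0] = 0.1                     # plus one interreflecting pixel pair
    env = np.full(6, 0.05)                      # environment light seen by the camera
    clusters = [(np.arange(6), np.arange(6))]   # trivial clustering: one global cluster
    inverses = precompute_cluster_inverses(T, clusters)
    target = np.full(6, 0.4)                    # desired camera image
    p = compensate(target, env, inverses, n_proj=6)
    print(np.allclose(T @ p + env, target))     # True: compensated projection reproduces the target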
KW - Maschinelles Sehen KW - CGI KW - Bildbasiertes Rendering KW - Display KW - Projektionsverfahren KW - Radiometrische Kompensation KW - Projektion KW - Projektor-Kamera System KW - Bildkorrektur KW - Visuelle Wahrnehmung KW - radiometric compensation KW - projection KW - projector-camera systems KW - image correction KW - visual perception Y1 - 2006 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20111215-7848 ER - TY - INPR A1 - Langlotz, Tobias A1 - Bimber, Oliver T1 - Unsynchronized 4D Barcodes N2 - We present a novel technique for optical data transfer between public displays and mobile devices based on unsynchronized 4D barcodes. We assume that no direct (electromagnetic or other) connection between the devices can exist. Time-multiplexed 2D color barcodes are displayed on screens and recorded with camera-equipped mobile phones. This makes it possible to transmit information optically between the two devices. Our approach maximizes the data throughput and the robustness of the barcode recognition, even though no immediate synchronization exists. Although the transfer rate is much smaller than what can be achieved with electromagnetic techniques (e.g., Bluetooth or WiFi), we envision applying such a technique wherever no direct connection is available. 4D barcodes can, for instance, be integrated into public web pages, movie sequences, or advertisement presentations, and they encode and transmit more information than is possible with single 2D or 3D barcodes. KW - Maschinelles Sehen KW - Computer Vision KW - Barcodes Y1 - 2007 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20111215-8531 ER -
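To make the time-multiplexing idea of the last entry more concrete, here is a small, purely illustrative encoder: a byte stream is chopped into a sequence of 2D color-symbol grids that would be displayed one after another, and a frame index stored in the first row of every grid lets an unsynchronized camera reorder and deduplicate whatever frames it happens to capture. The grid size, bits-per-cell value, and layout are hypothetical and are not the coding scheme of the paper.

import numpy as np

def encode_4d_barcode(data: bytes, grid=8, bits_per_cell=3):
    # Chop a byte stream into a sequence of 2D barcodes (a "4D" barcode).
    # Each cell stores `bits_per_cell` bits as one of 2**bits_per_cell color
    # indices; the first row of every frame carries the frame index so that a
    # camera that is not synchronized with the display can reorder frames.
    bits = np.unpackbits(np.frombuffer(data, dtype=np.uint8))
    payload_cells = grid * (grid - 1)                       # remaining rows hold data
    per_frame = payload_cells * bits_per_cell
    weights = 1 << np.arange(bits_per_cell - 1, -1, -1)     # e.g., [4, 2, 1] for 3 bits
    frames = []
    for f, start in enumerate(range(0, len(bits), per_frame)):
        chunk = bits[start:start + per_frame]
        chunk = np.pad(chunk, (0, per_frame - len(chunk)))  # zero-pad the last frame
        values = chunk.reshape(-1, bits_per_cell) @ weights # one color index per cell
        header = np.full(grid, f % (1 << bits_per_cell))    # frame index in the first row
        frames.append(np.concatenate([header, values]).reshape(grid, grid))
    return frames                                           # list of grid x grid color-index maps

if __name__ == "__main__":
    frames = encode_4d_barcode(b"hello, unsynchronized world")
    print(len(frames), frames[0].shape)                     # e.g., 2 (8, 8)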