Junior-Professur Augmented Reality
In this paper, we present a novel adaptive imperceptible pattern projection technique that takes parameters of human visual perception into account. A coded image that is invisible to human observers is temporally integrated into the projected image, but can be reconstructed by a synchronized camera. The embedded code is dynamically adjusted on the fly to guarantee its imperceptibility and to adapt it to the current camera pose. Combined with real-time flash keying, for instance, this enables in-shot optical tracking using a dynamic multi-resolution marker technique. A sample prototype demonstrates the application of our method in the context of augmentations in television studios.
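To illustrate the temporal embedding principle, here is a minimal sketch in Python/NumPy, assuming 8-bit grayscale frames; the fixed modulation amplitude `delta` and the function names are ours, whereas the actual technique adapts the amplitude regionally based on human-perception parameters:

```python
import numpy as np

def embed_code(frame, code, delta=4):
    """Split one projected image into two successive frames that
    temporally integrate to the original for a human observer: the
    binary code pattern is added to one frame and subtracted from
    the other, so only a camera synchronized to a single frame
    sees the modulation."""
    mod = np.where(code > 0, delta, -delta).astype(np.int16)
    f = frame.astype(np.int16)
    frame_a = np.clip(f + mod, 0, 255).astype(np.uint8)
    frame_b = np.clip(f - mod, 0, 255).astype(np.uint8)
    return frame_a, frame_b  # projected alternately at high frame rate

def recover_code(cam_a, cam_b):
    """Reconstruct the embedded pattern from two camera exposures,
    each synchronized to one of the projected frames."""
    diff = cam_a.astype(np.int16) - cam_b.astype(np.int16)
    return (diff > 0).astype(np.uint8)
```

Note that clipping near black or white destroys the modulation, which is one reason the amplitude must be adapted to the image content rather than kept constant.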
We present a system that applies a custom-built pan-tilt-zoom camera for laser-pointer tracking in arbitrary real environments. Once placed in a building environment, it carries out a fully automatic self-registration, registration of projectors, and sampling of surface parameters such as geometry and reflectivity. After these steps, it can be used for tracking a laser spot on the surface as well as an LED marker in 3D space, using cooperating fisheye context and controllable detail cameras. The captured surface information can be used to mask out areas that are critical for laser-pointer tracking, and to guide geometric and radiometric image-correction techniques that enable projector-based augmentation on arbitrary surfaces. We describe a distributed software framework that couples laser-pointer tracking for interaction, as well as projector-based and video see-through AR for visualization, with the domain-specific functionality of existing desktop tools for architectural planning, simulation, and building surveying.
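The laser-spot detection step can be sketched as follows; the color thresholds and the simple red-dominance test are illustrative assumptions, and the actual system additionally steers the pan-tilt-zoom detail camera toward the detected spot:

```python
import numpy as np

def detect_laser_spot(rgb, mask=None, min_red=200, dominance=60):
    """Locate a red laser spot in a camera frame as the centroid of
    pixels whose red channel is saturated and clearly dominates the
    green and blue channels. Pixels flagged as critical in `mask`
    (e.g., strongly reflective areas) are ignored.
    Returns (x, y) or None."""
    r = rgb[..., 0].astype(np.int16)
    g = rgb[..., 1].astype(np.int16)
    b = rgb[..., 2].astype(np.int16)
    hit = (r > min_red) & (r - g > dominance) & (r - b > dominance)
    if mask is not None:
        hit &= ~mask
    ys, xs = np.nonzero(hit)
    if xs.size == 0:
        return None
    return float(xs.mean()), float(ys.mean())
```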
Unsynchronized 4D Barcodes (2007)
We present a novel technique for optical data transfer between public displays and mobile devices based on unsynchronized 4D barcodes. We assume that no direct (electromagnetic or other) connection between the devices exists. Time-multiplexed 2D color barcodes are displayed on screens and recorded with camera-equipped mobile phones. This allows information to be transmitted optically between the devices. Our approach maximizes the data throughput and the robustness of the barcode recognition, even though no immediate synchronization exists. Although the transfer rate is much lower than what can be achieved with electromagnetic techniques (e.g., Bluetooth or WiFi), we envision applying such a technique wherever no direct connection is available. 4D barcodes can, for instance, be integrated into public web pages, movie sequences, or advertisements, and they encode and transmit more information than is possible with single 2D or 3D barcodes.
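One plausible encoding scheme can be sketched as follows, assuming an 8-color palette (3 bits per cell) and a per-frame index header so that the unsynchronized receiver can drop duplicates and restore the frame order; the cell layout, header size, and absence of error correction are simplifications of our choosing:

```python
import numpy as np

# 8-color palette: one bit per RGB channel, i.e., 3 bits per cell
PALETTE = np.array([[r, g, b] for r in (0, 255)
                              for g in (0, 255)
                              for b in (0, 255)], dtype=np.uint8)

def encode_frames(data, grid=16, header_cells=8):
    """Encode a byte string as a sequence of time-multiplexed color
    barcode frames (grid x grid cells). The first header_cells cells
    carry the frame number, 3 bits per cell."""
    bits = np.unpackbits(np.frombuffer(data, dtype=np.uint8))
    payload_cells = grid * grid - header_cells
    bits_per_frame = payload_cells * 3
    pad = (-len(bits)) % bits_per_frame
    bits = np.concatenate([bits, np.zeros(pad, dtype=np.uint8)])
    frames = []
    for i, start in enumerate(range(0, len(bits), bits_per_frame)):
        chunk = bits[start:start + bits_per_frame].reshape(-1, 3)
        cells = chunk[:, 0] * 4 + chunk[:, 1] * 2 + chunk[:, 2]
        header = [(i >> (3 * k)) & 0b111 for k in range(header_cells)]
        idx = np.concatenate([np.asarray(header), cells])
        frames.append(PALETTE[idx].reshape(grid, grid, 3))
    return frames  # displayed in a loop and sampled by the phone camera
```

Decoding reverses these steps after the phone has located the barcode in the camera image and sampled the cell colors.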
Projector-based displays have evolved tremendously in the last decade. Reduced costs and increasing capabilities have led to their widespread use in home entertainment and scientific visualization. The rapid development continues: techniques that allow seamless projection onto complex everyday environments, such as textured walls, window curtains, or bookshelves, have recently been proposed. Although cameras enable a completely automatic calibration of such systems, all previously described techniques rely on a precise mapping between projector and camera pixels. Global illumination effects such as reflections, refractions, scattering, and dispersion are completely ignored, since only direct illumination is taken into account. We propose a novel method that applies the light transport matrix to perform an image-based radiometric compensation accounting for all possible lighting effects. For practical application, the matrix is decomposed into clusters of mutually influencing projector and camera pixels. The compensation is modeled as a linear equation system that can be solved separately for each cluster. To reach interactive compensation rates, this model is adapted for an efficient implementation on programmable graphics hardware. Applying the light transport matrix's pseudo-inverse allows the compensation to be separated into a computationally expensive preprocessing step (computing the pseudo-inverse) and an online matrix-vector multiplication. The generalized mathematical foundation for radiometric compensation with projector-camera systems is validated with several experiments. We show that it is possible to project corrected imagery onto complex surfaces such as an inter-reflecting statuette and glass. The overall sharpness of defocused projections is increased as well. Using the proposed GPU optimization, real-time frame rates are achieved.
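In matrix form, the model underlying this approach is c = T p + e, where p is the projected pattern, T the light transport matrix, e the environment light, and c the captured camera image. A minimal NumPy sketch of the two-phase compensation follows; the variable names are ours, and the actual method operates on clustered submatrices and performs the per-frame multiplication on the GPU:

```python
import numpy as np

def precompute_inverse(T):
    """Offline step: pseudo-invert the (m x n) light transport matrix
    that maps projector pixels to camera pixels via c = T p + e.
    This is the computationally expensive part and is done once."""
    return np.linalg.pinv(T)

def compensate(T_pinv, desired, env, lo=0.0, hi=1.0):
    """Online step, per frame: a single matrix-vector multiplication
    yields the projector pattern whose projection, after all direct
    and global light transport, appears as the desired camera image."""
    p = T_pinv @ (desired - env)
    return np.clip(p, lo, hi)  # clamp to the projector's dynamic range
```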
We propose a novel method that applies the light transport matrix to perform an image-based radiometric compensation that accounts for all possible types of light modulation. For practical application, the matrix is decomposed into clusters of mutually influencing projector and camera pixels. The compensation is modeled as a linear system that is solved with respect to the projector patterns. Precomputing the inverse light transport, in combination with an efficient implementation on the GPU, makes interactive compensation rates possible. Our generalized method unifies existing approaches that address individual problems. Based on examples, we show that it is possible to project corrected images onto complex surfaces such as an inter-reflecting statuette or glossy wallpaper, or through highly refractive glass. Furthermore, we illustrate that a side effect of our approach is an increase in the overall sharpness of defocused projections.
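The per-cluster solve mentioned above can be sketched as follows, assuming the clustering of mutually influencing pixels has already been computed; the data layout is an assumption made for illustration:

```python
import numpy as np

def compensate_clustered(clusters, desired, env, lo=0.0, hi=1.0):
    """Solve the compensation independently for each cluster.
    `clusters` is a list of (cam_idx, proj_idx, T_block) tuples,
    where T_block is the light transport submatrix restricted to
    the cluster's camera and projector pixel index sets."""
    n = max(int(pj.max()) for _, pj, _ in clusters) + 1
    p = np.zeros(n)
    for cam_idx, proj_idx, T_block in clusters:
        rhs = desired[cam_idx] - env[cam_idx]
        # small least-squares system per cluster instead of one
        # huge global system over the full transport matrix
        p_block, *_ = np.linalg.lstsq(T_block, rhs, rcond=None)
        p[proj_idx] = p_block
    return np.clip(p, lo, hi)
```

Because the clusters are independent, they map naturally onto parallel GPU execution, which is what makes interactive rates feasible.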
Radiometric compensation techniques allow seamless projections onto complex everyday surfaces. Implemented with projector-camera systems, they support the presentation of visual content in situations where projection-optimized screens are not available or not desired, as in museums, historic sites, airplane cabins, or stage performances. We propose a novel approach that employs the full light transport between a projector and a camera to account for many illumination aspects, such as interreflections, refractions, and defocus. Precomputing the inverse light transport, in combination with an efficient implementation on the GPU, makes real-time compensation of captured local and global light modulations possible.
We present a novel multi-step technique for imperceptible geometry and radiometry calibration of projector-camera systems. Our approach can be used to display geometry- and color-corrected images on non-optimized surfaces at interactive rates, while simultaneously performing a series of invisible structured-light projections during runtime. It supports disjoint projector-camera configurations, fast and progressive improvements, as well as real-time correction rates for arbitrary graphical content. The calibration is automatically triggered whenever misregistrations between camera, projector, and surface are detected.
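A trigger of this kind could look as follows; the prediction of the expected camera image from the current calibration, as well as both thresholds, are assumptions for illustration rather than the actual detection criterion:

```python
import numpy as np

def needs_recalibration(captured, predicted, pix_thresh=0.05, frac=0.01):
    """Compare the captured camera frame (uint8) with the frame
    predicted from the current geometric and radiometric calibration;
    if the fraction of strongly deviating pixels exceeds `frac`, the
    imperceptible structured-light calibration is restarted."""
    err = np.abs(captured.astype(np.float32) -
                 predicted.astype(np.float32)) / 255.0
    return float(np.mean(err > pix_thresh)) > frac
```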