Coded Aperture Projection
(2008)
In computer vision, optical defocus is often described as a convolution with a filter kernel that corresponds to an image of the aperture used by the imaging device. The degree of defocus correlates with the scale of the kernel. Convolving an image with the inverse of the aperture kernel digitally sharpens the image and consequently compensates for optical defocus. This is referred to as deconvolution or inverse filtering. In the frequency domain, the inverse of the filter kernel is its reciprocal, and deconvolution reduces to a division. Low magnitudes in the Fourier transform of the aperture image, however, lead to intensity values in the spatial domain that exceed the displayable range. The corresponding frequencies are therefore not considered, which results in visible ringing artifacts in the final projection. This is the main limitation of previous approaches, since in the frequency domain the Gaussian PSF of spherical apertures contains a large fraction of low Fourier magnitudes. Applying only small kernel scales reduces the number of low Fourier magnitudes (and consequently the ringing artifacts), but also yields only minor focus improvements. To overcome this problem, we apply a coded aperture whose Fourier transform initially has fewer low magnitudes. Consequently, more frequencies are retained and more image details are reconstructed.
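The frequency-domain argument can be illustrated with a minimal one-dimensional sketch (pure-Python DFT; the signal and both kernels are hypothetical values chosen for illustration, not the apertures from the paper). Frequencies whose kernel magnitude falls below a displayability threshold are dropped, which is exactly what produces ringing for a "spherical" kernel with near-zero Fourier magnitudes, while a broadband "coded" kernel survives the division intact:

```python
import cmath

def dft(x):
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N)
                for n in range(N)) for k in range(N)]

def idft(X):
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * n / N)
                for k in range(N)).real / N for n in range(N)]

def blur(img, kernel):
    # defocus model: circular convolution, i.e. a product in the frequency domain
    return idft([a * b for a, b in zip(dft(img), dft(kernel))])

def inverse_filter(blurred, kernel, eps=1e-3):
    # deconvolution = division by the kernel spectrum; frequencies whose
    # magnitude is below eps would exceed the displayable range, so they
    # are dropped -- this omission is what causes ringing artifacts
    K = dft(kernel)
    B = dft(blurred)
    R = [b / k if abs(k) > eps else 0.0 for b, k in zip(B, K)]
    return idft(R)

img = [0, 0, 1, 1, 1, 0, 0, 0]               # simple test pattern
box   = [0.5, 0.5, 0, 0, 0, 0, 0, 0]         # box kernel: its spectrum has a zero
coded = [0.6, 0.3, 0.1, 0, 0, 0, 0, 0]       # broadband kernel: no low magnitudes

def err(kernel):
    # maximum reconstruction error after blur + inverse filtering
    rec = inverse_filter(blur(img, kernel), kernel)
    return max(abs(r - i) for r, i in zip(rec, img))

ringing_box = err(box)      # the dropped frequency causes visible error
ringing_coded = err(coded)  # all frequencies retained: near-exact recovery
```

The box kernel's spectrum vanishes at the Nyquist frequency, so that component of the image is unrecoverable; the broadband kernel keeps every magnitude well above the threshold, so the division reconstructs the full signal.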
Visual impairment is a widespread problem worldwide. Projector-based AR techniques can alter the appearance of real objects, and can thereby improve visibility for the visually impaired. We propose a new framework for appearance enhancement with a projector-camera system that employs a model predictive controller. This framework enables arbitrary image processing, comparable to photo-retouching software, directly in the real world, and helps to improve visibility for the visually impaired. In this article, we show appearance-enhancement results for Peli's and Wolffsohn's methods for low vision and for Jefferson's method for color vision deficiencies. The experimental results confirm the potential of our method to enhance real-world appearance for the visually impaired, in the same way that such methods enhance digital images and television viewing.
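The closed-loop principle behind such a projector-camera system can be sketched in a few lines. The article employs a model predictive controller; as a minimal stand-in, the sketch below uses a simple proportional feedback law and a linear, scalar surface model (reflectance, ambient light, target intensity, and gain are all hypothetical values, not taken from the paper):

```python
def simulate(reflectance, ambient, target, steps=50, gain=0.5):
    """Drive projector output so the camera-observed appearance
    converges to an enhanced target intensity."""
    p = 0.5                                      # initial projector intensity
    for _ in range(steps):
        c = reflectance * p + ambient            # camera measurement (assumed linear model)
        p += gain * (target - c) / reflectance   # proportional correction toward target
        p = min(max(p, 0.0), 1.0)                # respect projector limits
    return reflectance * p + ambient             # final observed appearance

# the controller converges to the target appearance within projector limits
appearance = simulate(reflectance=0.8, ambient=0.1, target=0.7)
```

In the real system each pixel would be controlled this way from full camera images, with the reflectance estimated radiometrically rather than known in advance; the predictive controller additionally anticipates actuator limits instead of merely clamping after the fact.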
We present an enhancement of adaptive video training for PhoneGuide, a digital museum guidance system for ordinary camera-equipped mobile phones. It enables museum visitors to identify exhibits by capturing photos of them. In this article, a combined solution of object recognition and pervasive tracking is extended to a client-server system to improve data acquisition and to support scale-invariant object recognition.