Visual impairment is a common problem worldwide. Projector-based AR techniques can change the appearance of real objects and thereby help improve visibility for the visually impaired. We propose a new framework for appearance enhancement with a projector-camera system that employs a model predictive controller. This framework enables arbitrary image processing in the real world, comparable to photo-retouching software, and helps improve visibility for the visually impaired. In this article, we show appearance-enhancement results for Peli's and Wolffsohn's methods for low vision, and for Jefferson's method for color vision deficiencies. The experimental results confirm the potential of our method to enhance real-world appearance for visually impaired viewers, matching the appearance enhancement achieved for digital images and television viewing.
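The feedback principle behind such a projector-camera appearance-enhancement loop can be sketched as follows. This is a simplified proportional feedback update on simulated data, not the paper's model predictive controller; all names and numbers are illustrative:

```python
import numpy as np

def enhance_appearance(surface_reflectance, target, iterations=20, gain=0.5):
    """Iteratively adjust the projector output so that the observed surface
    (projected light * reflectance) converges toward a target appearance.
    A stand-in for the controller in the paper: a simple proportional
    feedback update on the projected intensity, with simulated camera
    feedback instead of a real capture."""
    projected = np.ones_like(target, dtype=float)   # start with uniform illumination
    for _ in range(iterations):
        # simulated camera observation of the lit surface
        observed = np.clip(projected * surface_reflectance, 0.0, 1.0)
        error = target - observed
        # push the projected intensity toward target / reflectance
        projected = np.clip(
            projected + gain * error / np.maximum(surface_reflectance, 1e-3),
            0.0, 1.0)
    observed = np.clip(projected * surface_reflectance, 0.0, 1.0)
    return projected, observed

# Example: drive a surface with uneven reflectance toward uniform brightness
reflectance = np.array([[0.9, 0.3], [0.5, 0.8]])
target = np.full((2, 2), 0.6)
proj, obs = enhance_appearance(reflectance, target)
```

Where the reflectance is too low (here 0.3), the projector saturates and the target cannot be reached, which is exactly the physical limit a real compensation system also faces.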
Among all imaging techniques that have been invented throughout the last decades, computer graphics is one of the most successful tools today. Many areas in science, entertainment, education, and engineering would be unimaginable without the aid of 2D or 3D computer graphics. The reason for this success story might be its interactivity, an important property that is still not provided efficiently by competing technologies such as holography. While optical holography and digital holography are limited to presenting non-interactive content, electroholography, i.e. computer-generated holograms (CGHs), facilitates the computer-based generation and display of holograms at interactive rates [2,3,29,30]. Holographic fringes can be computed either by rendering multiple perspective images and combining them into a stereogram [4], or by simulating the optical interference and calculating the interference pattern [5]. Once computed, such a system dynamically visualizes the fringes with a holographic display. Since creating an electrohologram requires processing, transmitting, and storing a massive amount of data, today's computer technology still sets the limits of electroholography. To overcome some of these performance issues, advanced reduction and compression methods have been developed that create truly interactive electroholograms. Unfortunately, most of these holograms are relatively small, low in resolution, and cover only a small color spectrum. However, recent advances in consumer graphics hardware may reveal acceleration possibilities that can overcome these limitations [6]. In parallel to the development of computer graphics, and despite their non-interactivity, optical and digital holography have created new fields, including interferometry, copy protection, data storage, holographic optical elements, and display holograms. Display holography in particular has conquered several application domains.
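The second fringe-computation route mentioned above, simulating the optical interference pattern, can be sketched by superposing spherical waves from point sources with a plane reference wave on the hologram plane. Wavelength, pixel pitch, and the point layout below are illustrative, not taken from the cited systems:

```python
import numpy as np

def interference_pattern(points, grid_size=256, pitch=1e-6, wavelength=633e-9):
    """Compute a simple CGH-style fringe pattern: each 3D point source
    (x, y, z, amplitude) emits a spherical wave onto the hologram plane
    at z = 0; the recorded intensity is its interference with an
    on-axis, unit-amplitude plane reference wave."""
    k = 2 * np.pi / wavelength
    ys, xs = np.mgrid[0:grid_size, 0:grid_size] * pitch   # hologram pixel positions
    field = np.zeros((grid_size, grid_size), dtype=complex)
    for (px, py, pz, amp) in points:
        r = np.sqrt((xs - px) ** 2 + (ys - py) ** 2 + pz ** 2)
        field += amp * np.exp(1j * k * r) / r             # spherical wave contribution
    reference = 1.0                                        # plane reference wave
    intensity = np.abs(field + reference) ** 2             # recorded fringe intensity
    return intensity

# One point source 5 mm behind the hologram plane
fringes = interference_pattern([(1e-4, 1e-4, 5e-3, 1e-4)])
```

The per-pixel, per-point cost of this brute-force superposition is precisely why the text notes that data volume and computation still set the limits of electroholography.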
Museum exhibits often use optical holograms because they can present 3D objects with almost no loss in visual quality. In contrast to most stereoscopic or autostereoscopic graphics displays, holographic images can provide all depth cues—perspective, binocular disparity, motion parallax, convergence, and accommodation—and theoretically can be viewed simultaneously from an unlimited number of positions. Displaying artifacts virtually removes the need to build physical replicas of the original objects. In addition, optical holograms can be used to make engineering, medical, dental, archaeological, and other recordings—for teaching, training, experimentation, and documentation. Archaeologists, for example, use optical holograms to archive and investigate ancient artifacts [7,8]. Scientists can use hologram copies to perform their research without having access to the original artifacts or settling for inaccurate replicas. Optical holograms can store a massive amount of information on a thin holographic emulsion. This technology can record and reconstruct a 3D scene with almost no loss in quality. Natural-color holographic silver halide emulsion with grain sizes of 8 nm is today's state of the art [14]. Today, computer graphics and raster displays offer megapixel resolution and the interactive rendering of megabytes of data. Optical holograms, however, provide terapixel resolution and are able to present information content in the range of terabytes in real time. Both are dimensions that will not be reached by computer graphics and conventional displays within the next few years, even if Moore's law continues to hold. Obviously, one has to choose between interactivity and quality when selecting a display technology for a particular application.
While some applications require high visual realism and real-time presentation (which cannot be provided by computer graphics), others depend on user interaction (which is not possible with optical and digital holograms). Consequently, holography and computer graphics are used as tools to solve individual research, engineering, and presentation problems within several domains. Until now, however, these tools have been applied separately. The intention of the project summarized in this chapter is to combine both technologies into a powerful tool for science, industry, and education. This has been referred to as HoloGraphics. Several possibilities have been investigated that allow merging computer-generated graphics and holograms [1]. The goal is to combine the advantages of conventional holograms (i.e. extremely high visual quality and realism, support for all depth cues and for multiple observers at no computational cost, space efficiency, etc.) with the advantages of today's computer graphics capabilities (i.e. interactivity, real-time rendering, simulation and animation, stereoscopic and autostereoscopic presentation, etc.). The results of these investigations are presented in this chapter.
Projector-Based Augmentation
(2006)
Projector-based augmentation approaches hold the potential of combining the advantages of well-established spatial virtual reality and spatial augmented reality. Immersive, semi-immersive, and augmented visualizations can be realized in everyday environments, without the need for special projection screens and dedicated display configurations. Limitations of mobile devices, such as low resolution, small field of view, focus constraints, and ergonomic issues, can in many cases be overcome by using projection technology. Thus, applications that do not require mobility can benefit from efficient spatial augmentations. Examples range from edutainment in museums (such as storytelling projections onto natural stone walls in historical buildings) to architectural visualizations (such as augmentations of complex illumination simulations or modified surface materials in real building structures). This chapter describes projector-camera methods and multi-projector techniques that aim at correcting geometric aberrations, compensating local and global radiometric effects, and improving the focus properties of images projected onto everyday surfaces.
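For the planar case, the geometric correction step (pre-warping the projected image so it appears undistorted from the camera's viewpoint) can be sketched with a homography estimated from projector-camera point correspondences via the standard DLT method. The point data below is made up for illustration; the chapter's actual techniques also handle non-planar surfaces:

```python
import numpy as np

def homography_from_points(src, dst):
    """Estimate the 3x3 homography H mapping src -> dst from four or more
    point correspondences, using the standard DLT linear system solved
    via SVD (smallest singular vector = null space)."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

def apply_homography(H, pt):
    """Map a 2D point through H (homogeneous divide)."""
    p = H @ np.array([pt[0], pt[1], 1.0])
    return p[:2] / p[2]

# Map the projector's unit square onto a skewed quadrilateral seen by the camera
src = [(0, 0), (1, 0), (1, 1), (0, 1)]
dst = [(10, 12), (90, 15), (95, 88), (8, 80)]
H = homography_from_points(src, dst)
```

In a real projector-camera system the correspondences come from projected calibration patterns, and the inverse mapping is used to pre-warp the framebuffer.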
Superimposing Dynamic Range
(2008)
Replacing uniform illumination with high-frequency illumination enhances the contrast of observed and captured images. We modulate spatially and temporally multiplexed (projected) light with reflective or transmissive matter to achieve high-dynamic-range visualizations of radiological images on printed paper or ePaper, and to boost the optical contrast of images viewed or imaged with light microscopes.
Virtual studio technology plays an important role in modern television productions. Blue-screen matting is a common technique for integrating real actors or moderators into computer-generated sceneries. Augmented reality offers the possibility of mixing the real and the virtual in a more general context. This article proposes a new technological approach for combining real studio content with computer-generated information. Digital light projection allows a controlled spatial, temporal, chrominance, and luminance modulation of the illumination, opening new possibilities for TV studios.
Superimposing Dynamic Range
(2008)
We present a simple and cost-efficient way of extending the contrast, perceived tonal resolution, and color space of static hardcopy images beyond the capabilities of hardcopy devices or low-dynamic-range displays alone. A calibrated projector-camera system is applied for automatic registration, scanning, and superimposition of hardcopies. We explain how high-dynamic-range content can be split for linear devices with different capabilities; how luminance quantization can be optimized with respect to the non-linear response of the human visual system as well as the discrete nature of the applied modulation devices; and how inverse tone mapping can be adapted when only untreated hardcopies and softcopies (such as regular photographs) are available. We believe that our approach has the potential to complement hardcopy-based technologies, such as X-ray prints for filmless imaging, in domains that operate with high-quality static image content, like radiology, other medical fields, and astronomy.
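The splitting of high-dynamic-range content between two multiplying modulators (hardcopy reflectance times projected luminance equals observed luminance) can be illustrated with the common square-root split, which divides the log dynamic range evenly between the two layers. This is a generic sketch under that assumption, not necessarily the split used in the paper:

```python
import numpy as np

def split_hdr(target, eps=1e-6):
    """Split a target HDR luminance image between two multiplying layers.
    With the square-root split, each layer carries the square root of the
    normalized target, so each needs only the square root of the total
    dynamic range, while their product reproduces the target (up to scale)."""
    t = target / target.max()                    # normalize to [0, 1]
    hardcopy = np.sqrt(t)                        # print reflectance layer
    projector = t / np.maximum(hardcopy, eps)    # projected luminance layer
    observed = hardcopy * projector              # what the viewer sees (relative)
    return hardcopy, projector, observed

# A 10,000:1 target needs only 100:1 from each device
target = np.array([1.0, 100.0, 10000.0])
hardcopy, projector, observed = split_hdr(target)
```

The key property: a 10,000:1 target contrast is reproduced by two devices that each span only 100:1, which is why superimposing a projector onto a print extends contrast beyond either device alone.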
Superimposing Dynamic Range
(2009)
Replacing uniform illumination with high-frequency illumination enhances the contrast of observed and captured images. We modulate spatially and temporally multiplexed (projected) light with reflective or transmissive matter to achieve high-dynamic-range visualizations of radiological images on printed paper or ePaper, and to boost the optical contrast of images viewed or imaged with light microscopes.
We present a novel image classification technique for detecting multiple objects (called subobjects) in a single image. In addition to image classifiers, we apply spatial relationships among the subobjects to verify the locations of detected subobjects and to predict the locations of undetected ones. By continuously refining the spatial relationships throughout the detection process, even the locations of completely occluded exhibits can be determined. Finally, all detected subobjects are labeled, and the user can select the object of interest to retrieve corresponding multimedia information. This approach is applied in the context of PhoneGuide, an adaptive museum guidance system for camera-equipped mobile phones. We show that the recognition of subobjects using spatial relationships is up to 68% faster than related approaches without spatial relationships. Results of a field experiment in a local museum show that inexperienced users achieve an average subobject recognition rate of 85.6% under realistic conditions.
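The idea of predicting undetected subobjects from spatial relationships can be sketched as follows: each detected subobject casts a vote for a missing one via a known relative offset, and the votes are averaged. The object names and offsets below are made up; in PhoneGuide the relationships are refined continuously during detection:

```python
import numpy as np

def predict_undetected(detections, offsets):
    """Predict image positions of undetected subobjects.
    detections: {name: (x, y)} for subobjects found by the classifier.
    offsets:    {missing_name: {detected_name: (dx, dy)}} spatial
                relationships learned beforehand (illustrative here).
    Each detected subobject votes for a missing one by adding its
    offset; votes are averaged into a predicted position."""
    predictions = {}
    for missing, votes in offsets.items():
        if missing in detections:
            continue
        cast = [np.asarray(detections[src], dtype=float) + np.asarray(off, dtype=float)
                for src, off in votes.items() if src in detections]
        if cast:
            predictions[missing] = np.mean(cast, axis=0)
    return predictions

detections = {"vase": (120, 80), "statue": (200, 90)}
offsets = {"coin": {"vase": (30, 5), "statue": (-50, -5)}}
pred = predict_undetected(detections, offsets)
# both votes land on (150, 85), so the occluded "coin" is placed there
```

Averaging consistent votes is what makes it possible to place even completely occluded exhibits, as the abstract describes.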
The advent of high-performance mobile phones has opened up the opportunity to develop new context-aware applications for everyday life. In particular, applications for context-aware information retrieval in conjunction with image-based object recognition have become a focal area of recent research. In this thesis we introduce an adaptive mobile museum guidance system that allows visitors in a museum to identify exhibits by taking a picture with their mobile phone. Besides approaches to object recognition, we present different adaptation techniques that improve classification performance. After providing a comprehensive background on context-aware mobile information systems in general, we present an on-device object recognition algorithm and show how its classification performance can be improved by capturing multiple images of a single exhibit. To accomplish this, we combine the classification results of the individual pictures and consider the perspective relations among the retrieved database images. In order to identify multiple exhibits in a single picture, we present an approach that uses the spatial relationships among the objects in images; these make it possible to infer and validate the locations of undetected objects relative to the detected ones and additionally improve classification performance. To cope with environmental influences, we introduce an adaptation technique that establishes ad-hoc wireless networks among the visitors' mobile devices to exchange classification data. This ensures constant classification rates under varying illumination levels and changing object placement. Finally, in addition to localization using RF technology, we present an adaptation technique that uses user-generated spatio-temporal pathway data for person movement prediction. Based on the history of previously visited exhibits, the algorithm determines possible future locations and incorporates these predictions into the object classification process.
This increases classification performance and offers benefits comparable to traditional localization approaches but without the need for additional hardware. Through multiple field studies and laboratory experiments we demonstrate the benefits of each approach and show how they influence the overall classification rate.
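The pathway-based movement prediction described above can be sketched as a first-order Markov model over visited exhibits: transition counts are learned from previous visitors' pathways, and the most frequent successors of the current exhibit become the predicted next locations. The actual algorithm in the thesis may differ, and the exhibit names are illustrative:

```python
from collections import Counter, defaultdict

def build_transitions(visit_histories):
    """Learn first-order transition counts from visitors' exhibit
    sequences (each history is an ordered list of visited exhibits)."""
    transitions = defaultdict(Counter)
    for history in visit_histories:
        for current, following in zip(history, history[1:]):
            transitions[current][following] += 1
    return transitions

def predict_next(transitions, current, k=2):
    """Return the k most likely next exhibits after `current`; these
    predictions can then be used as priors in the object classifier."""
    return [exhibit for exhibit, _ in transitions[current].most_common(k)]

histories = [
    ["entrance", "vase", "coin", "statue"],
    ["entrance", "vase", "statue"],
    ["entrance", "coin", "statue"],
]
transitions = build_transitions(histories)
likely = predict_next(transitions, "entrance", k=1)  # "vase" was next twice
```

Restricting or reweighting the classifier's candidate set with such predictions is how pathway data can boost classification rates without extra localization hardware.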
In this paper, we present a novel technique for adapting local image classifiers that are applied for object recognition on mobile phones, using ad-hoc network communication between the devices. By continuously accumulating and exchanging collected user feedback among devices within signal range, our approach improves the overall classification rate and adapts quickly to dynamic changes. This technique is applied in the context of PhoneGuide, a mobile-phone-based museum guidance framework that combines pervasive tracking and local object recognition to identify a large number of objects in uncontrolled museum environments.
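One way to picture this adaptation is to blend each phone's local classifier scores with a prior accumulated from user confirmations shared over the ad-hoc network. This is a hypothetical sketch of the principle, not PhoneGuide's actual weighting scheme:

```python
def adapt_scores(raw_scores, feedback_counts, alpha=0.5):
    """Blend raw classifier scores with a prior estimated from accumulated
    user-feedback counts (confirmations exchanged between nearby phones).
    alpha controls how strongly the shared feedback reweights the local
    classifier; all numbers here are illustrative."""
    total = sum(feedback_counts.values()) or 1
    adapted = {}
    for obj, score in raw_scores.items():
        prior = feedback_counts.get(obj, 0) / total   # feedback-derived prior
        adapted[obj] = (1 - alpha) * score + alpha * prior
    return adapted

# Feedback shared by nearby visitors strongly favors "vase"
scores = {"vase": 0.4, "coin": 0.45}
feedback = {"vase": 9, "coin": 1}
adapted = adapt_scores(scores, feedback)
# vase: 0.5*0.4 + 0.5*0.9 = 0.65 ; coin: 0.5*0.45 + 0.5*0.1 = 0.275
```

Because the counts keep accumulating as devices meet, the prior tracks changes such as relit rooms or moved exhibits, which is the adaptation effect the abstract describes.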