TY - RPRT A1 - Amano, Toshiyuki A1 - Bimber, Oliver A1 - Grundhöfer, Anselm T1 - Appearance Enhancement for Visually Impaired with Projector Camera Feedback N2 - Visual impairment is a common problem worldwide. Projector-based AR techniques can change the appearance of real objects and can thereby help to improve visibility for the visually impaired. We propose a new framework for appearance enhancement with a projector-camera system that employs a model predictive controller. This framework enables arbitrary image processing in the real world, comparable to photo-retouching software, and helps to improve visibility for the visually impaired. In this article, we show appearance enhancement results of Peli's and Wolffsohn's methods for low vision, and of Jefferson's method for color vision deficiencies. The experimental results confirm the potential of our method to enhance real-world appearance for the visually impaired, in the same way that appearance enhancement is applied to digital images and television viewing. KW - Maschinelles Sehen KW - Projector Camera System KW - Model Predictive Control KW - Visually Impaired Y1 - 2010 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20100106-14974 ER - TY - RPRT A1 - Grundhöfer, Anselm A1 - Bimber, Oliver T1 - Dynamic Bluescreens N2 - Blue screens and chroma keying technology are essential for digital video composition. Professional studios apply tracking technology to record the camera path for perspective augmentations of the original video footage. Although this technology is well established, it does not offer a great deal of flexibility. For shoots at non-studio sets, physical blue screens might have to be installed, or parts have to be recorded separately in a studio. We present a simple and flexible way of projecting corrected keying colors onto arbitrary diffuse surfaces using synchronized projectors and radiometric compensation.
Thereby, the reflectance of the underlying real surface is neutralized. A temporal multiplexing between projection and flash illumination allows capturing the fully lit scene, while still being able to key the foreground objects. In addition, we embed spatial codes into the projected key image to enable the tracking of the camera. Furthermore, the reconstruction of the scene geometry is implicitly supported. KW - Association for Computing Machinery / Special Interest Group on Graphics KW - CGI KW - Maschinelles Sehen KW - Farbstanzen KW - Erweiterte Realität KW - Projektion KW - Chroma Keying KW - Bildmischung KW - Augmented Reality KW - Projection KW - Chromakeying KW - Compositing Y1 - 2008 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20080226-13016 ER - TY - CHAP A1 - Bimber, Oliver T1 - HOLOGRAPHICS: Combining Holograms with Interactive Computer Graphics T2 - New Directions in Holography and Speckles N2 - Among all imaging techniques that have been invented throughout the last decades, computer graphics is one of the most successful tools today. Many areas in science, entertainment, education, and engineering would be unimaginable without the aid of 2D or 3D computer graphics. The reason for this success story might be its interactivity, which is an important property that is still not provided efficiently by competing technologies – such as holography. While optical holography and digital holography are limited to presenting non-interactive content, electroholography or computer-generated holograms (CGH) facilitate the computer-based generation and display of holograms at interactive rates [2,3,29,30]. Holographic fringes can be computed by either rendering multiple perspective images and combining them into a stereogram [4], or by simulating the optical interference and calculating the interference pattern [5]. Once computed, such a system dynamically visualizes the fringes with a holographic display.
Since creating an electrohologram requires processing, transmitting, and storing a massive amount of data, today’s computer technology still sets the limits for electroholography. To overcome some of these performance issues, advanced reduction and compression methods have been developed that create truly interactive electroholograms. Unfortunately, most of these holograms are relatively small, low resolution, and cover only a small color spectrum. However, recent advances in consumer graphics hardware may reveal potential acceleration possibilities that can overcome these limitations [6]. In parallel to the development of computer graphics and despite their non-interactivity, optical and digital holography have created new fields, including interferometry, copy protection, data storage, holographic optical elements, and display holograms. Especially display holography has conquered several application domains. Museum exhibits often use optical holograms because they can present 3D objects with almost no loss in visual quality. In contrast to most stereoscopic or autostereoscopic graphics displays, holographic images can provide all depth cues—perspective, binocular disparity, motion parallax, convergence, and accommodation—and theoretically can be viewed simultaneously from an unlimited number of positions. Displaying artifacts virtually removes the need to build physical replicas of the original objects. In addition, optical holograms can be used to make engineering, medical, dental, archaeological, and other recordings—for teaching, training, experimentation and documentation. Archaeologists, for example, use optical holograms to archive and investigate ancient artifacts [7,8]. Scientists can use hologram copies to perform their research without having access to the original artifacts or settling for inaccurate replicas. Optical holograms can store a massive amount of information on a thin holographic emulsion. 
This technology can record and reconstruct a 3D scene with almost no loss in quality. Natural color holographic silver halide emulsion with grain sizes of 8 nm is today's state-of-the-art [14]. Today, computer graphics and raster displays offer a megapixel resolution and the interactive rendering of megabytes of data. Optical holograms, however, provide a terapixel resolution and are able to present information content in the range of terabytes in real-time. Both are dimensions that will not be reached by computer graphics and conventional displays within the next few years – even if Moore's law proves to hold in the future. Obviously, one has to make a decision between interactivity and quality when choosing a display technology for a particular application. While some applications require high visual realism and real-time presentation (which cannot be provided by computer graphics), others depend on user interaction (which is not possible with optical and digital holograms). Consequently, holography and computer graphics are being used as tools to solve individual research, engineering, and presentation problems within several domains. Up until today, however, these tools have been applied separately. The intention of the project which is summarized in this chapter is to combine both technologies to create a powerful tool for science, industry and education. This has been referred to as HoloGraphics. Several possibilities have been investigated that allow merging computer-generated graphics and holograms [1]. The goal is to combine the advantages of conventional holograms (i.e. extremely high visual quality and realism, support for all depth cues and for multiple observers at no computational cost, space efficiency, etc.) with the advantages of today's computer graphics capabilities (i.e. interactivity, real-time rendering, simulation and animation, stereoscopic and autostereoscopic presentation, etc.).
The results of these investigations are presented in this chapter. KW - Erweiterte Realität KW - CGI KW - Hologramm KW - Projektionsapparat KW - Rendering KW - Scanning KW - Reconstruction KW - computer grafik KW - computer graphics Y1 - 2005 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20111215-7365 ER - TY - RPRT A1 - Bimber, Oliver A1 - Iwai, Daisuke T1 - Superimposing Dynamic Range N2 - We present a simple and cost-efficient way of extending contrast, perceived tonal resolution, and the color space of static hardcopy images, beyond the capabilities of hardcopy devices or low-dynamic range displays alone. A calibrated projector-camera system is applied for automatic registration, scanning and superimposition of hardcopies. We explain how high-dynamic range content can be split for linear devices with different capabilities, how luminance quantization can be optimized with respect to the non-linear response of the human visual system as well as for the discrete nature of the applied modulation devices; and how inverse tone-mapping can be adapted in case only untreated hardcopies and softcopies (such as regular photographs) are available. We believe that our approach has the potential to complement hardcopy-based technologies, such as X-ray prints for filmless imaging, in domains that operate with high quality static image content, like radiology and other medical fields, or astronomy. 
KW - Bildverarbeitung KW - CGI KW - Computergraphik KW - Kontrast KW - Projektor-Kamera Systeme KW - Hoher Dynamikumfang KW - Contrast KW - Projector-Camera Systems KW - High Dynamic Range Y1 - 2008 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20080422-13585 ER - TY - CHAP A1 - Bimber, Oliver T1 - Projector-Based Augmentation T2 - Emerging Technologies of Augmented Reality: Interfaces & Design N2 - Projector-based augmentation approaches hold the potential of combining the advantages of well-established spatial virtual reality and spatial augmented reality. Immersive, semi-immersive and augmented visualizations can be realized in everyday environments – without the need for special projection screens and dedicated display configurations. Limitations of mobile devices, such as low resolution and small field of view, focus constraints, and ergonomic issues can be overcome in many cases by the utilization of projection technology. Thus, applications that do not require mobility can benefit from efficient spatial augmentations. Examples range from edutainment in museums (such as storytelling projections onto natural stone walls in historical buildings) to architectural visualizations (such as augmentations of complex illumination simulations or modified surface materials in real building structures). This chapter describes projector-camera methods and multi-projector techniques that aim at correcting geometric aberrations, compensating local and global radiometric effects, and improving focus properties of images projected onto everyday surfaces.
KW - Erweiterte Realität KW - Virtuelle Realität KW - Projektionsverfahren KW - CGI KW - Bildbasiertes Rendering KW - Rendering KW - Projektor-Kamera Systeme KW - Multi-Projektor Systeme KW - projector-camera systems KW - multi-projector systems KW - spatial augmented reality Y1 - 2006 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20111215-7353 ER - TY - JOUR A1 - Grundhöfer, Anselm A1 - Seeger, Manja A1 - Häntsch, Ferry A1 - Bimber, Oliver T1 - Coded Projection and Illumination for Television Studios N2 - We propose the application of temporally and spatially coded projection and illumination in modern television studios. In our vision, this supports ad-hoc re-illumination, automatic keying, unconstrained presentation of moderation information, camera tracking, and scene acquisition. In this paper we show how a new adaptive imperceptible pattern projection that considers parameters of human visual perception, linked with real-time difference keying, enables in-shot optical tracking using a novel dynamic multi-resolution marker technique. KW - Association for Computing Machinery / Special Interest Group on Graphics KW - CGI KW - Maschinelles Sehen KW - Virtuelle Studios KW - Erweiterte Realität KW - Kamera Tracking KW - Projektion KW - Virtual Studios KW - Augmented Reality KW - Camera Tracking KW - Projection Y1 - 2007 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20111215-8005 ER - TY - JOUR A1 - Bimber, Oliver A1 - Grundhöfer, Anselm A1 - Zollmann, Stefanie A1 - Kolster, Daniel T1 - Digital Illumination for Augmented Studios N2 - Virtual studio technology plays an important role for modern television productions. Blue-screen matting is a common technique for integrating real actors or moderators into computer-generated sceneries. Augmented reality offers the possibility to mix real and virtual in a more general context.
This article proposes a new technological approach for combining real studio content with computer-generated information. Digital light projection allows a controlled spatial, temporal, chrominance and luminance modulation of illumination – opening new possibilities for TV studios. KW - Studiotechnik KW - Erweiterte Realität KW - Fernsehproduktion KW - Projektion KW - Augmented studio KW - Augmented reality KW - digital light projection Y1 - 2006 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20111215-8576 ER - TY - INPR A1 - Zollmann, Stefanie A1 - Bimber, Oliver T1 - Imperceptible Calibration for Radiometric Compensation N2 - We present a novel multi-step technique for imperceptible geometry and radiometry calibration of projector-camera systems. Our approach can be used to display geometry- and color-corrected images on non-optimized surfaces at interactive rates while simultaneously performing a series of invisible structured light projections during runtime. It supports disjoint projector-camera configurations, fast and progressive improvements, as well as real-time correction rates of arbitrary graphical content. The calibration is automatically triggered when mis-registrations between camera, projector and surface are detected. KW - Association for Computing Machinery / Special Interest Group on Graphics KW - CGI KW - Maschinelles Sehen KW - unsichtbare Muster Projektion KW - Projektor-Kamera Systeme KW - Kalibrierung KW - Radiometrische Kompensation KW - imperceptible pattern projection KW - projector-camera systems KW - calibration KW - radiometric compensation Y1 - 2007 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20111215-8094 ER - TY - INPR A1 - Grundhöfer, Anselm A1 - Bimber, Oliver T1 - Real-Time Adaptive Radiometric Compensation N2 - Recent radiometric compensation techniques make it possible to project images onto colored and textured surfaces.
This is realized with projector-camera systems by scanning the projection surface on a per-pixel basis. With the captured information, a compensation image is calculated that neutralizes geometric distortions and color blending caused by the underlying surface. As a result, the brightness and the contrast of the input image are reduced compared to a conventional projection onto a white canvas. If the input image is not manipulated in its intensities, the compensation image can contain values that are outside the dynamic range of the projector. These lead to clipping errors and to visible artifacts on the surface. In this article, we present a novel algorithm that dynamically adjusts the content of the input images before radiometric compensation is carried out. This reduces the perceived visual artifacts while simultaneously preserving a maximum of luminance and contrast. The algorithm is implemented entirely on the GPU and is the first of its kind to run in real-time. KW - Maschinelles Sehen KW - CGI KW - Bildbasiertes Rendering KW - Display KW - Projektionsverfahren KW - Radiometrische Kompensation KW - Projektion KW - Projektor-Kamera System KW - Bildkorrektur KW - Visuelle Wahrnehmung KW - radiometric compensation KW - projection KW - projector-camera systems KW - image correction KW - visual perception Y1 - 2006 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20111215-7848 ER - TY - RPRT A1 - Bruns, Erich A1 - Brombach, Benjamin A1 - Zeidler, Thomas A1 - Bimber, Oliver T1 - Enabling Mobile Phones To Support Large-Scale Museum Guidance N2 - We present a museum guidance system called PhoneGuide that uses widespread camera-equipped mobile phones for on-device object recognition in combination with pervasive tracking. It provides additional location- and object-aware multimedia content to museum visitors, and is scalable to cover a large number of museum objects.
KW - Objektverfolgung KW - Neuronales Netz KW - Handy KW - Objekterkennung KW - Museum KW - mobile phones KW - object recognition KW - neural networks KW - museum guidance KW - pervasive tracking Y1 - 2005 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20111215-6777 ER - TY - INPR A1 - Langlotz, Tobias A1 - Bimber, Oliver T1 - Unsynchronized 4D Barcodes N2 - We present a novel technique for optical data transfer between public displays and mobile devices based on unsynchronized 4D barcodes. We assume that no direct (electromagnetic or other) connection between the devices can exist. Time-multiplexed, 2D color barcodes are displayed on screens and recorded with camera-equipped mobile phones. This allows information to be transmitted optically between the two devices. Our approach maximizes the data throughput and the robustness of the barcode recognition, while no immediate synchronization exists. Although the transfer rate is much smaller than can be achieved with electromagnetic techniques (e.g., Bluetooth or WiFi), we envision applying such a technique wherever no direct connection is available. 4D barcodes can, for instance, be integrated into public web-pages, movie sequences or advertisement presentations, and they encode and transmit more information than is possible with single 2D or 3D barcodes. KW - Maschinelles Sehen KW - Computer Vision KW - Barcodes Y1 - 2007 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20111215-8531 ER - TY - JOUR A1 - Bruns, Erich A1 - Bimber, Oliver T1 - Phone-to-Phone Communication for Adaptive Image Classification N2 - In this paper, we present a novel technique for adapting local image classifiers that are applied for object recognition on mobile phones through ad-hoc network communication between the devices.
By continuously accumulating and exchanging collected user feedback among devices that are located within signal range, we show that our approach improves the overall classification rate and adapts to dynamic changes quickly. This technique is applied in the context of PhoneGuide – a mobile phone based museum guidance framework that combines pervasive tracking and local object recognition for identifying a large number of objects in uncontrolled museum environments. KW - Peer-to-Peer-Netz KW - Bilderkennung KW - Museumsführer KW - Ad-hoc-Netz KW - Phone-to-phone communication KW - adaptive image classification KW - mobile ad-hoc networks KW - museum guidance system Y1 - 2008 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20080722-13685 ER - TY - JOUR A1 - Bimber, Oliver A1 - Iwai, Daisuke T1 - Superimposing Dynamic Range JF - Eurographics 2009 N2 - Replacing a uniform illumination by a high-frequent illumination enhances the contrast of observed and captured images. We modulate spatially and temporally multiplexed (projected) light with reflective or transmissive matter to achieve high dynamic range visualizations of radiological images on printed paper or ePaper, and to boost the optical contrast of images viewed or imaged with light microscopes. KW - CGI KW - Computer graphics KW - Image processing KW - Computer vision KW - 54.73 Y1 - 2009 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20120130-15325 ER - TY - RPRT A1 - Kurz, Daniel A1 - Häntsch, Ferry A1 - Grosse, Max A1 - Schiewe, Alexander A1 - Bimber, Oliver T1 - Laser Pointer Tracking in Projector-Augmented Architectural Environments N2 - We present a system that applies a custom-built pan-tilt-zoom camera for laser-pointer tracking in arbitrary real environments. Once placed in a building environment, it carries out a fully automatic self-registration, registrations of projectors, and sampling of surface parameters, such as geometry and reflectivity. 
After these steps, it can be used for tracking a laser spot on the surface as well as an LED marker in 3D space, using interplaying fisheye context and controllable detail cameras. The captured surface information can be used for masking out areas that are critical to laser-pointer tracking, and for guiding geometric and radiometric image correction techniques that enable a projector-based augmentation on arbitrary surfaces. We describe a distributed software framework that couples laser-pointer tracking for interaction, projector-based AR as well as video see-through AR for visualizations with the domain-specific functionality of existing desktop tools for architectural planning, simulation and building surveying. KW - Association for Computing Machinery / Special Interest Group on Graphics KW - CGI KW - Architektur KW - Maschinelles Sehen KW - Laserpointer Tracking KW - Erweiterte Realität KW - Interaktion KW - Projektion KW - Verteilte Systeme KW - Laser Pointer Tracking KW - Augmented Reality KW - Interaction KW - Projection KW - Distributed Systems Y1 - 2007 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20111215-8183 ER - TY - THES A1 - Grundhöfer, Anselm T1 - Synchronized Illumination Modulation for Digital Video Compositing N2 - The exchange of information is one of the basic human needs. While wall paintings, handwriting, letterpress printing, and painting once served this purpose, people later began to create image sequences that, as so-called "flip books", convey the impression of an animation. These were soon automated by means of rotating image discs on which an animation became visible with the help of slit apertures, mirrors, or optics – the so-called phenakistiscopes, zoetropes, and praxinoscopes. With the invention of photography in the second half of the 19th century, the first scientists such as Eadweard Muybridge, Etienne-Jules Marey, and Ottomar Anschütz began to take serial photographs and play them back in rapid succession as film. With the beginning of film production, the first attempts were also made to use this new technology to generate special visual effects and thereby further increase the immersion of moving-image productions. While these effects remained rather limited during the analog phase of film production, up until the 1980s, and had to be created very laboriously with an enormous amount of manual work, they gained ever more importance with the rapidly accelerating development of semiconductor technology and the simplified digital post-processing it made possible. The enormous possibilities arising from lossless post-processing in combination with photorealistic, three-dimensional renderings have led to almost all films produced today containing a variety of digital video compositing effects. ... N2 - Besides home entertainment and business presentations, video projectors are powerful tools for modulating images spatially as well as temporally. The re-evolving need for stereoscopic displays increases the demand for low-latency projectors, and recent advances in LED technology also offer high modulation frequencies. Combining such high-frequency illumination modules with synchronized, fast cameras makes it possible to develop specialized high-speed illumination systems for visual effects production. In this thesis we present different systems for using spatially as well as temporally modulated illumination in combination with a synchronized camera to simplify the requirements of standard digital video composition techniques for film and television productions and to offer new possibilities for visual effects generation.
After an overview of the basic terminology and a summary of related methods, we discuss and give examples of how modulated light can be applied to a scene recording context to enable a variety of effects which cannot be realized using standard methods, such as virtual studio technology or chroma keying. We propose using high-frequency, synchronized illumination which, in addition to providing illumination, is modulated in terms of intensity and wavelength to encode technical information for visual effects generation. This is carried out in such a way that the technical components do not influence the final composite and are also not visible to observers on the film set. Using this approach we present a real-time flash keying system for the generation of perspectively correct augmented composites by projecting imperceptible markers for optical camera tracking. Furthermore, we present a system which enables the generation of various digital video compositing effects outside of completely controlled studio environments, such as virtual studios. A third temporal keying system is presented that aims to overcome the constraints of traditional chroma keying in terms of color spill and color dependency. ... KW - Lumineszenzdiode KW - Filmproduktion KW - Visuelle Effekte ; virtuelles Studio ; digitale Videokomposition KW - Digital video compositing ; video projection ; digital video composition techniques Y1 - 2010 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20101210-15278 ER - TY - THES A1 - Bruns, Erich T1 - Adaptive Image Classification on Mobile Phones N2 - The advent of high-performance mobile phones has opened up the opportunity to develop new context-aware applications for everyday life. In particular, applications for context-aware information retrieval in conjunction with image-based object recognition have become a focal area of recent research. 
In this thesis we introduce an adaptive mobile museum guidance system that allows visitors in a museum to identify exhibits by taking a picture with their mobile phone. Besides approaches to object recognition, we present different adaptation techniques that improve classification performance. After providing a comprehensive background of context-aware mobile information systems in general, we present an on-device object recognition algorithm and show how its classification performance can be improved by capturing multiple images of a single exhibit. To accomplish this, we combine the classification results of the individual pictures and consider the perspective relations among the retrieved database images. In order to identify multiple exhibits in pictures we present an approach that uses the spatial relationships among the objects in images. They make it possible to infer and validate the locations of undetected objects relative to the detected ones and additionally improve classification performance. To cope with environmental influences, we introduce an adaptation technique that establishes ad-hoc wireless networks among the visitors’ mobile devices to exchange classification data. This ensures constant classification rates under varying illumination levels and changing object placement. Finally, in addition to localization using RF-technology, we present an adaptation technique that uses user-generated spatio-temporal pathway data for person movement prediction. Based on the history of previously visited exhibits, the algorithm determines possible future locations and incorporates these predictions into the object classification process. This increases classification performance and offers benefits comparable to traditional localization approaches but without the need for additional hardware. Through multiple field studies and laboratory experiments we demonstrate the benefits of each approach and show how they influence the overall classification rate. 
N2 - The introduction of mobile phones with built-in sensors such as cameras, GPS, or accelerometers, as well as communication technologies such as Bluetooth or WLAN, enables the development of new context-aware applications for everyday life. In particular, applications in the field of context-aware information retrieval in conjunction with image-based object recognition have come into the focus of current research. The contribution of this thesis is the development of an image-based mobile museum guidance system that uses different adaptation techniques to improve object recognition. It is shown how object recognition algorithms can be realized on mobile phones and how the recognition rate can be improved, for example, by employing ad-hoc networks or by taking predictions of person movement into account. T2 - Adaptive Bilderkennung auf Mobiltelefonen KW - Kontextbezogenes System KW - Bilderkennung KW - Ubiquitous Computing KW - Mobile Computing KW - Maschinelles Sehen KW - Museumsführer KW - Handy KW - Wegrouten KW - Positionsbestimmung KW - PhoneGuide KW - Bluetooth tracking KW - pathway awareness KW - museum guidance system Y1 - 2010 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20100707-15092 ER - TY - RPRT A1 - Bruns, Erich A1 - Brombach, Benjamin A1 - Bimber, Oliver T1 - Mobile Phone Enabled Museum Guidance with Adaptive Classification N2 - Although audio guides are widely established in many museums, they suffer from several drawbacks compared to state-of-the-art multimedia technologies: First, they provide only audible information to museum visitors, while other forms of media presentation, such as reading text or video, could be beneficial for museum guidance tasks. Second, they are not very intuitive. Reference numbers have to be manually keyed in by the visitor before information about the exhibit is provided.
These numbers are either displayed on visible tags that are located near the exhibited objects, or are printed in brochures that have to be carried. Third, offering mobile guidance equipment to visitors leads to acquisition and maintenance costs that have to be covered by the museum. With our project PhoneGuide we aim at solving these problems by enabling the application of conventional camera-equipped mobile phones for museum guidance purposes. The advantages are obvious: First, today's off-the-shelf mobile phones offer a rich palette of multimedia functionalities – ranging from audio (over speaker or head-set) and video (graphics, images, movies) to simple tactile feedback (vibration). Second, integrated cameras, improvements in processor performance and more memory space enable supporting advanced computer vision algorithms. Instead of keying in reference numbers, objects can be recognized automatically by taking non-persistent photographs of them. This is more intuitive and saves museum curators from distributing and maintaining a large number of physical (visible or invisible) tags. Together with a few sensor-equipped reference tags only, computer vision based object recognition allows for the classification of single objects, whereas overlapping signal ranges of object-distinct active tags (such as RFID) would prevent the identification of individuals that are grouped closely together. Third, since we assume that museum visitors will be able to use their own devices, the acquisition and maintenance costs for museum-owned devices decrease.
KW - Objektverfolgung KW - Neuronales Netz KW - Handy KW - Objekterkennung KW - Museum KW - Anpassung KW - Mobiltelefone KW - Museumsführer KW - Adaptive Klassifizierung KW - Ad-hoc Sensor-Netzwerke KW - mobile phones KW - object recognition KW - museum guidance KW - adaptive classification KW - ad-hoc sensor networks Y1 - 2007 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20111215-9406 ER - TY - RPRT A1 - Exner, David A1 - Bruns, Erich A1 - Kurz, Daniel A1 - Grundhöfer, Anselm A1 - Bimber, Oliver T1 - Fast and Reliable CAMShift Tracking N2 - CAMShift is a well-established and fundamental algorithm for kernel-based visual object tracking. While it performs well with objects that have a simple and constant appearance, it is not robust in more complex cases. As it solely relies on back-projected probabilities it can fail in cases when the object's appearance changes (e.g. due to object or camera movement, or due to lighting changes), when similarly colored objects have to be re-detected or when they cross their trajectories. We propose extensions to CAMShift that address and resolve all of these problems. They allow the accumulation of multiple histograms to model more complex object appearance and the continuous monitoring of object identities to handle ambiguous cases of partial or full occlusion. Most steps of our method are carried out on the GPU to achieve real-time tracking of multiple targets simultaneously. We explain efficient GPU implementations of histogram generation, probability back projection, image moment computation, and histogram intersection. All of these techniques make full use of a GPU's high parallelization.
KW - Bildverarbeitung KW - CAMShift KW - Kernel-Based Tracking KW - GPU Programming Y1 - 2009 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20091217-14962 ER - TY - RPRT A1 - Grosse, Max A1 - Bimber, Oliver T1 - Coded Aperture Projection N2 - In computer vision, optical defocus is often described as a convolution with a filter kernel that corresponds to an image of the aperture used by the imaging device. The degree of defocus correlates with the scale of the kernel. Convolving an image with the inverse aperture kernel will digitally sharpen the image and consequently compensate for optical defocus. This is referred to as deconvolution or inverse filtering. In the frequency domain, the reciprocal of the filter kernel is its inverse, and deconvolution reduces to a division. Low magnitudes in the Fourier transform of the aperture image, however, lead to intensity values in the spatial domain that exceed the displayable range. Therefore, the corresponding frequencies are not considered, which then results in visible ringing artifacts in the final projection. This is the main limitation of previous approaches, since in the frequency domain the Gaussian PSF of spherical apertures contains a large fraction of low Fourier magnitudes. Applying only small kernel scales will reduce the number of low Fourier magnitudes (and consequently the ringing artifacts) -- but will also yield only minor focus improvements. To overcome this problem, we apply a coded aperture whose Fourier transform initially has fewer low magnitudes. Consequently, more frequencies are retained and more image details are reconstructed. 
KW - Association for Computing Machinery / Special Interest Group on Graphics KW - CGI KW - Projektion KW - Blende Y1 - 2008 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20080227-13020 ER - TY - JOUR A1 - Brombach, Benjamin A1 - Bruns, Erich A1 - Bimber, Oliver T1 - Subobject Detection through Spatial Relationships on Mobile Phones N2 - We present a novel image classification technique for detecting multiple objects (called subobjects) in a single image. In addition to image classifiers, we apply spatial relationships among the subobjects to verify and to predict the locations of detected and undetected subobjects, respectively. By continuously refining the spatial relationships throughout the detection process, even the locations of completely occluded exhibits can be determined. Finally, all detected subobjects are labeled and the user can select the object of interest for retrieving corresponding multimedia information. This approach is applied in the context of PhoneGuide, an adaptive museum guidance system for camera-equipped mobile phones. We show that the recognition of subobjects using spatial relationships is up to 68% faster than related approaches without spatial relationships. Results of a field experiment in a local museum illustrate that inexperienced users reach an average subobject recognition rate of 85.6% under realistic conditions. KW - Objekterkennung KW - Smartphone KW - Subobjekterkennung KW - Räumliche Beziehungen KW - Neuronales Netz KW - Museumsführer KW - Subobject Detection KW - Spatial Relationships KW - Neural Networks KW - Museum Guidance Y1 - 2008 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20081007-14296 ER - TY - RPRT A1 - Bruns, Erich A1 - Bimber, Oliver T1 - Adaptive Training of Video Sets for Image Recognition on Mobile Phones N2 - We present an enhancement towards adaptive video training for PhoneGuide, a digital museum guidance system for ordinary camera-equipped mobile phones. 
It enables museum visitors to identify exhibits by capturing photos of them. In this article, a combined solution of object recognition and pervasive tracking is extended to a client-server system for improving data acquisition and for supporting scale-invariant object recognition. KW - Objektverfolgung KW - Neuronales Netz KW - Handy KW - Objekterkennung KW - Museum KW - Anpassung KW - mobile phones KW - object recognition KW - neural networks KW - museum guidance KW - pervasive tracking KW - temporal adaptation Y1 - 2007 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20111215-8223 ER - TY - RPRT A1 - Bimber, Oliver T1 - Superimposing Dynamic Range N2 - Replacing uniform illumination with high-frequency illumination enhances the contrast of observed and captured images. We modulate spatially and temporally multiplexed (projected) light with reflective or transmissive matter to achieve high-dynamic-range visualizations of radiological images on printed paper or ePaper, and to boost the optical contrast of images viewed or imaged with light microscopes. KW - Bildverarbeitung KW - CGI KW - Computergraphik KW - Kontrast KW - Projektor-Kamera Systeme KW - Hoher Dynamikumfang KW - Mikroskopie KW - Contrast KW - Projector-Camera Systems KW - High Dynamic Range KW - Microscopy Y1 - 2008 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20090303-14662 ER - TY - RPRT A1 - Grundhöfer, Anselm A1 - Seeger, Manja A1 - Häntsch, Ferry A1 - Bimber, Oliver T1 - Dynamic Adaptation of Projected Imperceptible Codes N2 - In this paper we present a novel adaptive imperceptible pattern projection technique that considers parameters of human visual perception. A coded image that is invisible to human observers is temporally integrated into the projected image, but can be reconstructed by a synchronized camera. The embedded code is dynamically adjusted on the fly to guarantee its imperceptibility and to adapt it to the current camera pose. 
Linked with real-time flash keying, for instance, this enables in-shot optical tracking using a dynamic multi-resolution marker technique. A sample prototype demonstrates the application of our method in the context of augmentations in television studios. KW - Association for Computing Machinery / Special Interest Group on Graphics KW - CGI KW - Maschinelles Sehen KW - Erweiterte Realität KW - Kamera Tracking KW - Projektion KW - Augmented Reality KW - Camera Tracking KW - Projection Y1 - 2007 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20111215-8168 ER - TY - RPRT A1 - Föckler, Paul A1 - Zeidler, Thomas A1 - Bimber, Oliver T1 - PhoneGuide: Museum Guidance Supported by On-Device Object Recognition on Mobile Phones N2 - We present PhoneGuide – an enhanced museum guidance approach that uses camera-equipped mobile phones and on-device object recognition. Our main technical achievement is a simple and lightweight object recognition approach realized with single-layer perceptron neural networks. In contrast to related systems, which perform computationally intensive image processing tasks on remote servers, our intention is to carry out all computations directly on the phone. This ensures little or even no network traffic and consequently decreases the cost of online time. Our laboratory experiments and field surveys have shown that photographed museum exhibits can be recognized with a probability of over 90%. We have evaluated different feature sets to optimize the recognition rate and performance. Our experiments revealed that normalized color features are most effective for our method. Choosing such a feature set allows an object to be recognized in under one second on up-to-date phones. The amount of data required for differentiating 50 objects from multiple perspectives is less than 6 KB. 
KW - Neuronales Netz KW - Objekterkennung KW - Handy KW - Museum KW - Mobile phones KW - object recognition KW - neural networks KW - museum guidance Y1 - 2005 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20111215-6500 ER - TY - INPR A1 - Wetzstein, Gordon A1 - Bimber, Oliver T1 - A Generalized Approach to Radiometric N2 - We propose a novel method that applies the light transport matrix for performing an image-based radiometric compensation which accounts for all possible types of light modulation. For practical application, the matrix is decomposed into clusters of mutually influencing projector and camera pixels. The compensation is modeled as a linear system that can be solved with respect to the projector patterns. Precomputing the inverse light transport, in combination with an efficient implementation on the GPU, makes interactive compensation rates possible. Our generalized method unifies existing approaches that address individual problems. Based on examples, we show that it is possible to project corrected images onto complex surfaces such as an inter-reflecting statuette, glossy wallpaper, or through highly refractive glass. Furthermore, we illustrate that a side effect of our approach is an increase in the overall sharpness of defocused projections. KW - Association for Computing Machinery / Special Interest Group on Graphics KW - CGI KW - Maschinelles Sehen Y1 - 2006 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20111215-7625 ER - TY - RPRT A1 - Wetzstein, Gordon A1 - Bimber, Oliver T1 - Radiometric Compensation through Inverse Light Transport N2 - Radiometric compensation techniques allow seamless projections onto complex everyday surfaces. Implemented with projector-camera systems, they support the presentation of visual content in situations where projection-optimized screens are not available or not desired - as in museums, historic sites, airplane cabins, or stage performances. 
We propose a novel approach that employs the full light transport between a projector and a camera to account for many illumination aspects, such as interreflections, refractions, and defocus. Precomputing the inverse light transport, in combination with an efficient implementation on the GPU, makes the real-time compensation of captured local and global light modulations possible. KW - Association for Computing Machinery / Special Interest Group on Graphics KW - CGI KW - Maschinelles Sehen KW - Projektionssystem KW - radiometrische Kompensation KW - Licht Transport KW - Projector-Camera Systems KW - Radiometric Compensation KW - Inverse Light Transport Y1 - 2007 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20111215-8126 ER - TY - THES A1 - Wetzstein, Gordon T1 - Radiometric Compensation of Global Illumination Effects with Projector-Camera Systems N2 - Projector-based displays have evolved tremendously in the last decade. Reduced costs and increasing capabilities have led to widespread use in home entertainment and scientific visualization. The rapid development is continuing - techniques that allow seamless projection onto complex everyday environments such as textured walls, window curtains, or bookshelves have recently been proposed. Although cameras enable a completely automatic calibration of the systems, all previously described techniques rely on a precise mapping between projector and camera pixels. Global illumination effects such as reflections, refractions, scattering, dispersion, etc. are completely ignored, since only direct illumination is taken into account. We propose a novel method that applies the light transport matrix for performing an image-based radiometric compensation which accounts for all possible lighting effects. For practical application, the matrix is decomposed into clusters of mutually influencing projector and camera pixels. 
The compensation is modeled as a linear equation system that can be solved separately for each cluster. For interactive compensation rates, this model is adapted to enable an efficient implementation on programmable graphics hardware. Applying the light transport matrix's pseudo-inverse allows the compensation to be separated into a computationally expensive preprocessing step (computing the pseudo-inverse) and an online matrix-vector multiplication. The generalized mathematical foundation for radiometric compensation with projector-camera systems is validated with several experiments. We show that it is possible to project corrected imagery onto complex surfaces such as an inter-reflecting statuette and glass. The overall sharpness of defocused projections is increased as well. Using the proposed optimization for GPUs, real-time frame rates are achieved. KW - Association for Computing Machinery / Special Interest Group on Graphics KW - CGI KW - Maschinelles Sehen KW - Projektionssystem KW - radiometrische Kompensation KW - Licht Transport KW - Projector-Camera Systems KW - Radiometric Compensation KW - Inverse Light Transport Y1 - 2006 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:gbv:wim2-20111215-8106 ER -