0000000000320983
AUTHOR
Seokmin Hong
Ownership protection of plenoptic images by robust and reversible watermarking
Plenoptic images are in high demand for the 3D representation of broad scenes. Unlike images captured by conventional cameras, plenoptic images carry a considerable amount of angular information, which is very appealing for 3D reconstruction and display of the scene. Plenoptic images are gaining importance in areas such as medical imaging, manufacturing control, metrology, and even the entertainment business. Thus, the adaptation and refinement of watermarking techniques for plenoptic images is a matter of growing interest. In this paper, a new method for plenoptic image watermarking is proposed. A secret key is used to specify the location of the logo insertion. Employing discr…
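A minimal sketch of what such a key-driven, transform-domain logo embedding can look like is given below, assuming a block DCT domain (the abstract is truncated at "discr…") and a simple additive rule; the function name embed_logo, the block size and the coefficient index are illustrative assumptions, and the paper's actual robust-and-reversible scheme is not reproduced here.

```python
# Illustrative sketch only: key-driven selection of 8x8 blocks and additive
# embedding of logo bits in a mid-frequency DCT coefficient. All parameter
# choices here are assumptions, not the values used in the paper.
import numpy as np
from scipy.fft import dctn, idctn

def embed_logo(image, logo_bits, key, block=8, strength=4.0):
    """Embed a binary logo into pseudo-randomly selected DCT blocks of a grayscale image."""
    marked = image.astype(np.float64).copy()
    h, w = marked.shape
    rng = np.random.default_rng(key)              # the secret key seeds the block selection
    blocks = [(r, c) for r in range(0, h - block + 1, block)
                     for c in range(0, w - block + 1, block)]
    chosen = rng.permutation(len(blocks))[:len(logo_bits)]
    for bit, idx in zip(logo_bits, chosen):
        r, c = blocks[idx]
        coeffs = dctn(marked[r:r + block, c:c + block], norm='ortho')
        coeffs[3, 3] += strength if bit else -strength   # nudge a mid-frequency coefficient
        marked[r:r + block, c:c + block] = idctn(coeffs, norm='ortho')
    return np.clip(marked, 0, 255).astype(np.uint8)
```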
Display of travelling 3D scenes from single integral-imaging capture
Integral imaging (InI) is a 3D auto-stereoscopic technique for capturing and displaying 3D images. We present a method for easily projecting the information recorded with this technique by transforming the integral image into a plenoptic image, while choosing, at will, the field of view (FOV) and the focused plane of the displayed plenoptic image. Furthermore, with this method we can generate, from a single integral image, a sequence of images that simulates a camera travelling through the scene. The application of this method improves the quality of displayed 3D images and videos.
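As an illustration of the kind of re-mapping involved, the sketch below rearranges an integral image with square p x p microimages into a grid of plenoptic (sub-aperture) views and refocuses them by shift-and-sum; the grid geometry, the function names and the shift parameter are assumptions, not the paper's actual implementation.

```python
# Minimal sketch: microimage-to-view transposition plus shift-and-sum refocus.
import numpy as np

def integral_to_views(integral_img, p):
    """Rearrange p x p microimages into a (p, p) grid of sub-aperture views."""
    H, W, C = integral_img.shape
    ny, nx = H // p, W // p                              # number of microlenses
    lf = integral_img[:ny * p, :nx * p].reshape(ny, p, nx, p, C)
    return lf.transpose(1, 3, 0, 2, 4)                   # (u, v, ny, nx, C)

def refocus(views, shift):
    """Shift each view proportionally to its (u, v) index and average; 'shift'
    selects the focused plane of the resulting image."""
    p, _, ny, nx, C = views.shape
    acc = np.zeros((ny, nx, C), dtype=np.float64)
    for u in range(p):
        for v in range(p):
            du = int(round((u - p // 2) * shift))
            dv = int(round((v - p // 2) * shift))
            acc += np.roll(views[u, v], (du, dv), axis=(0, 1))
    return (acc / (p * p)).astype(np.uint8)
```

Cropping the (u, v) range of views before refocusing corresponds, roughly, to changing the FOV, and sweeping the shift value over a range produces a sequence of images that simulates a camera travelling through the scene.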
Full parallax three-dimensional display from Kinect v1 and v2
We exploit the two different versions of the Kinect, v1 and v2, for the calculation of microimages projected onto integral-imaging displays. Our approach is based on composing a three-dimensional (3-D) point cloud from a captured depth map and RGB information. These fused 3-D maps make it possible to generate an integral image by projecting the information through a virtual pinhole array. In our analysis, we take into account that each of the Kinect devices has its own inherent capacities and individualities. We illustrate our analysis with some imaging experiments, describe the distinctive differences between the two Kinect devices, and finally conclude that Kinect v2 allows the display of 3-D images …
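A minimal sketch of this pipeline is shown below, assuming pinhole intrinsics (fx, fy, cx, cy) for the depth camera and an arbitrary virtual pinhole-array geometry (lens pitch, gap and microimage size are made-up values); it is a slow reference loop, not the authors' implementation.

```python
# Minimal sketch: depth + RGB -> colored point cloud -> back-projection through a
# virtual pinhole array into microimages (simple z-buffer). Geometry values are
# illustrative assumptions.
import numpy as np

def depth_to_points(depth, rgb, fx, fy, cx, cy):
    """Convert a depth map (in meters) plus an RGB image into a colored point cloud."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    valid = depth > 0
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    pts = np.stack([x[valid], y[valid], depth[valid]], axis=1)
    return pts, rgb[valid]

def project_to_microimages(pts, col, n=100, pitch=1e-3, gap=3e-3, px=15):
    """Project each colored point through an n x n array of virtual pinholes
    (spaced by 'pitch') onto a sensor plane at distance 'gap', keeping the
    closest point per pixel."""
    img = np.zeros((n * px, n * px, 3), dtype=np.uint8)
    zbuf = np.full((n * px, n * px), np.inf)
    centers = (np.arange(n) - n / 2 + 0.5) * pitch       # pinhole centres (meters)
    pix = pitch / px                                      # sensor pixel size
    for (X, Y, Z), c in zip(pts, col):
        for i, cy_ in enumerate(centers):
            for j, cx_ in enumerate(centers):
                dx = gap * (cx_ - X) / Z                  # offset on the sensor plane
                dy = gap * (cy_ - Y) / Z
                if abs(dx) >= pitch / 2 or abs(dy) >= pitch / 2:
                    continue                              # outside this microimage
                u = j * px + int(px / 2 + dx / pix)
                v = i * px + int(px / 2 + dy / pix)
                if Z < zbuf[v, u]:
                    zbuf[v, u], img[v, u] = Z, c
    return img
```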
Full-parallax 3D display from the hole-filtered depth information
In this paper we introduce an efficient hole-filling algorithm for the synthetic generation of microimages that are displayed on an integral-imaging monitor. We apply the joint bilateral filter and the median filter to the captured depth map, and at each step of the iterative algorithm we introduce the data from a new Kinect capture. As a result, this algorithm improves the quality of the depth maps and removes unmeasured depth holes effectively. The refined depth information enables the creation of a clean integral image, which can be projected onto an integral-imaging monitor. In this way the monitor can display 3D images with continuous views, full parallax and abundant 3D reconstructed scene fo…
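The sketch below illustrates the two ingredients on a single depth frame, assuming that unmeasured pixels are stored as zeros and that the RGB frame (converted to gray) is used as the guide; window sizes and sigmas are assumed values, and the iterative injection of new Kinect captures is not reproduced.

```python
# Minimal sketch: median-based filling of zero-valued depth holes, followed by a
# joint (guide-weighted) bilateral smoothing of the depth map. Parameters are
# illustrative assumptions.
import numpy as np
from scipy.ndimage import generic_filter

def fill_holes(depth, iterations=3, size=5):
    """Iteratively replace zero (unmeasured) depth pixels with the local median of
    the valid neighbours inside a size x size window."""
    d = depth.astype(np.float32).copy()
    def valid_median(window):
        vals = window[window > 0]
        return float(np.median(vals)) if vals.size else 0.0
    for _ in range(iterations):
        med = generic_filter(d, valid_median, size=size, mode='nearest')
        holes = d == 0
        d[holes] = med[holes]
    return d

def joint_bilateral(depth, guide, radius=4, sigma_s=3.0, sigma_r=10.0):
    """Smooth the depth map with weights that combine spatial closeness and
    similarity in the guide (gray RGB) image, so depth edges follow color edges."""
    h, w = depth.shape
    dpad = np.pad(depth, radius, mode='edge')
    gpad = np.pad(guide.astype(np.float32), radius, mode='edge')
    g = guide.astype(np.float32)
    out = np.zeros((h, w), dtype=np.float32)
    norm = np.zeros((h, w), dtype=np.float32)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            sd = dpad[radius + dy:radius + dy + h, radius + dx:radius + dx + w]
            sg = gpad[radius + dy:radius + dy + h, radius + dx:radius + dx + w]
            wgt = np.exp(-(dx * dx + dy * dy) / (2 * sigma_s ** 2)) \
                * np.exp(-((sg - g) ** 2) / (2 * sigma_r ** 2))
            out += wgt * sd
            norm += wgt
    return out / norm
```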
Full-parallax 3D display from stereo-hybrid 3D camera system
In this paper, we propose an innovative approach for the production of microimages ready to be displayed on an integral-imaging monitor. Our main contribution is the use of a stereo-hybrid 3D camera system to pick up a pair of 3D data sets and compose a denser point cloud. There is, however, an intrinsic difficulty: the hybrid sensors have dissimilar characteristics and therefore must be equalized. The processed data facilitate the generation of an integral image after computationally projecting the information through a virtual pinhole array. We illustrate this procedure with some imaging experiments that provide microimages with enhanced quality. After projection of such microim…
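As a rough illustration of the fusion step, the sketch below brings the second sensor's point cloud into the first sensor's coordinate frame with a rigid transform (R, t) taken from an assumed prior extrinsic calibration and simply concatenates the two clouds; the equalization of resolution and color response used by the authors is not reproduced.

```python
# Minimal sketch: rigid registration of cloud B into cloud A's frame and merging.
# (R, t) are assumed to come from an extrinsic calibration of the hybrid pair.
import numpy as np

def merge_hybrid_clouds(pts_a, col_a, pts_b, col_b, R, t):
    """Transform cloud B (N x 3) into A's frame with x' = R x + t, then stack both
    point clouds and their colors into a single, denser cloud."""
    pts_b_in_a = pts_b @ np.asarray(R).T + np.asarray(t)
    return np.vstack([pts_a, pts_b_in_a]), np.vstack([col_a, col_b])
```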
GPU-accelerated integral imaging and full-parallax 3D display using stereo-plenoptic camera system
In this paper, we propose a novel approach to produce integral images ready to be displayed on an integral-imaging monitor. Our main contribution is the use of a commercial plenoptic camera arranged in a stereo configuration. Our proposed set-up is able to record the spatial and angular radiance information simultaneously at each stereo position. We illustrate our contribution by composing the point cloud from a pair of captured plenoptic images and generating an integral image from the properly registered 3D information. We have exploited graphics processing unit (GPU) acceleration in order to enhance the speed and efficiency of the integral-image computation. We…
Three-Dimensional Integral-Imaging Display From Calibrated and Depth-Hole Filtered Kinect Information
We exploit the Kinect's capacity to pick up a dense depth map in order to display static three-dimensional (3D) images with full parallax. This is done by using the IR and RGB cameras of the Kinect. From the depth map and RGB information, we are able to obtain an integral image by projecting the information through a virtual pinhole array. The integral image is displayed on our integral-imaging monitor, which provides the observer with horizontal and vertical perspectives of large 3D scenes. However, due to the Kinect depth-acquisition procedure, many regions without depth data appear in the captured depth map. These holes spread to the generated integral image, reducing its quality. To solve this drawback we …
Integral-Imaging display from stereo-Kinect capture
In this paper, we propose a new approach to improve the quality of microimages and display them on an integral-imaging monitor. Our main proposal is based on a stereo-hybrid 3D camera system. By its nature, a hybrid camera system presents dissimilarities between its sensors. We describe our method for equalizing the characteristics of the hybrid sensors and our strategy for modifying the 3D data. We generate the integral image by using a synthetic back-projection mapping method. Finally, we project the integral image onto our proposed display system. We illustrate this procedure with some imaging experiments in order to demonstrate the advantages of our approach.
Fusion of computed point clouds and integral-imaging concepts for full-parallax 3D display
During the last century, various technologies for 3D image capture and visualization have been in the spotlight, due to both their pioneering nature and the aspiration to extend the applications of conventional 2D imaging technology to 3D scenes. In addition, thanks to advances in opto-electronic imaging technologies, the possibilities for capturing and transmitting 2D images in real time have progressed significantly and have boosted the growth of 3D image capture, processing, transmission and display techniques. Among the latter, integral-imaging technology has been considered one of the most promising for restoring real 3D scenes through the use of a multi-view visualization system that provi…
Computation of microimages for plenoptic display
We report a new algorithm for the generation of microimages ready for their projection onto an integral-imaging monitor. The algorithm is based on the transformation properties of the plenoptic field captured with an array of digital cameras. We show that a small number of cameras can produce the microimages for displaying 3D scenes with resolution and parallax fully adapted to the monitor features.
Full-parallax immersive 3D display from depth-map cameras
We exploit two different versions of the Kinect to compare three-dimensional (3D) scenes displayed by our proposed integral-imaging (InI) display system. We show the differences between the specifications and capabilities of each version. Furthermore, we illustrate our study with some empirical imaging experiments in which the final results are displayed with full parallax. Each of the demonstrated integral images provides the observer with a clear basis for comparison.
Towards 3D Television Through Fusion of Kinect and Integral-Imaging Concepts
We report a new procedure for the capture and processing of light proceeding from 3D scenes a few cubic meters in size. Specifically, we demonstrate that with the information provided by a Kinect device it is possible to generate an array of microimages ready for their projection onto an integral-imaging monitor. We illustrate our proposal with some imaging experiments in which the final results are 3D images displayed with full parallax.
New Method of Microimages Generation for 3D Display
In this paper, we propose a new method for the generation of microimages, which processes real 3D scenes captured with any method that permits the extraction of their depth information. The depth map of the scene, together with its color information, is used to create a point cloud. A set of elemental images of this point cloud is captured synthetically, and from it the microimages are computed. The main feature of this method is that the reference plane of the displayed images can be set at will, while empty pixels are avoided. Another advantage is that the center point of the displayed images, as well as their scale and field of view, can also be set. To show the final results, a 3D InI dis…
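A minimal sketch of how the reference plane, center and scale can be set on the point cloud before computing the microimages is shown below; the parameter names (ref_depth, display_distance) are illustrative assumptions, not the paper's notation.

```python
# Minimal sketch: re-express the point cloud so that a chosen scene depth becomes
# the display reference plane, with optional lateral re-centering and scaling.
import numpy as np

def set_reference_plane(pts, ref_depth, display_distance, center=(0.0, 0.0), scale=1.0):
    """Shift the cloud so that z == ref_depth lands at 'display_distance' from the
    virtual pinhole array; points originally closer than ref_depth will then be
    displayed in front of the monitor, the rest behind it."""
    out = pts.astype(np.float64).copy()
    out[:, 0] = (out[:, 0] - center[0]) * scale
    out[:, 1] = (out[:, 1] - center[1]) * scale
    out[:, 2] = (out[:, 2] - ref_depth) * scale + display_distance
    return out
```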
Computation and Display of 3D Movie From a Single Integral Photography
Integral photography is an auto-stereoscopic technique that allows, among other interesting applications, the display of 3D images with full parallax and avoids the unpleasant effects of the accommodation-convergence conflict. Currently, one of the main drawbacks of this technology is the need for a huge amount of data, which has to be stored and transmitted. This is due to the fact that behind every visual resolution unit, i.e. behind any microlens of an integral-photography monitor, between 100 and 300 pixels should appear. In this paper, we make use of an updated version of our algorithm, SPOC 2.0, to alleviate this situation. We propose the application of SPOC 2.0 for the calculation of co…
Integral display for non-static observers
We propose to combine the Kinect and integral-imaging technologies for the implementation of an Integral Display. The Kinect device permits the determination, in real time, of the (x,y,z) position of the observer relative to the monitor. Because its IR technology is active, the Kinect provides the observer's position even in dark environments. On the other hand, the SPOC 2.0 algorithm permits the calculation of microimages adapted to the observer's 3D position. The smart combination of these two concepts permits the implementation, for the first time we believe, of an Integral Display that provides the observer with color 3D images of real scenes that are viewed with full parallax and which are…
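A minimal sketch of the selection logic is given below, assuming a hypothetical get_observer_xyz() wrapper around the Kinect tracking stream (not shown) and a bank of pre-computed microimage sets indexed by viewing position; the actual SPOC 2.0 computation is not reproduced.

```python
# Minimal sketch: pick, at display time, the pre-computed microimage set whose
# associated viewing position is closest to the tracked observer position.
import numpy as np

def select_microimages(bank, positions, observer_xyz):
    """bank: list of microimage arrays; positions: (N, 3) capture/viewing positions;
    observer_xyz: (x, y, z) of the observer as reported by the tracker."""
    d = np.linalg.norm(np.asarray(positions) - np.asarray(observer_xyz), axis=1)
    return bank[int(np.argmin(d))]

# Display loop (schematic):
#   while True:
#       show(select_microimages(bank, positions, get_observer_xyz()))
```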
Toward 3D integral-imaging broadcast with increased viewing angle and parallax
We propose a new method for improving the observer's experience when using an integral monitor. Our method permits an increase in the viewing angle of the integral monitor, and also in the maximum parallax that can be displayed. Additionally, it is possible to decide which parts of the 3D scene are displayed in front of or behind the monitor. Our method is based, first, on the direct capture, with a significant excess of parallax, of elemental images of real 3D scenes. From them, a collection of microimages adapted to the observer's lateral and depth position is calculated. Finally, an eye-tracking system permits the determination of the observer's 3D position, and therefore the display of the adequate microim…