Search results for "Depth map"
Showing 9 of 19 documents
Depth Enhancement by Fusion for Passive and Active Sensing
2012
This paper presents a general refinement procedure that enhances any given depth map obtained by passive or active sensing. Given a depth map, either estimated by triangulation methods or directly provided by the sensing system, and its corresponding 2-D image, we correct the depth values by separately treating regions with undesired effects such as empty holes, texture copying or edge blurring due to homogeneous regions, occlusions, and shadowing. In this work, we use recent depth enhancement filters intended for Time-of-Flight cameras, and adapt them to alternative depth sensing modalities, both active using an RGB-D camera and passive using a dense stereo camera. To that end, we propose …
Full-parallax immersive 3D display from depth-map cameras
2016
We exploit two different versions of the Kinect to compare three-dimensional (3D) scenes displayed by the proposed integral imaging (InI) display system. We aim to show the differences between the specifications and capabilities of each version. Furthermore, we illustrate our study with empirical imaging experiments in which the final results are displayed with full parallax. Each demonstrated integral image provides the observer with clear comparison results.
Full parallax three-dimensional display from Kinect v1 and v2
2016
We exploit the two different versions of Kinect, v1 and v2, for the calculation of microimages projected onto integral-imaging displays. Our approach is based on composing a three-dimensional (3-D) point cloud from a captured depth map and RGB information. These fused 3-D maps make it possible to generate an integral image by projecting the information through a virtual pinhole array. In our analysis, we take into account that each of the Kinect devices has its own inherent capabilities and peculiarities. We illustrate our analysis with some imaging experiments, describe the distinctive differences between the two Kinect devices, and finally conclude that Kinect v2 allows the display of 3-D images …
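The depth-plus-RGB fusion this abstract describes can be sketched as a standard pinhole back-projection. This is an illustrative sketch, not the paper's code; the function name and the intrinsics `fx, fy, cx, cy` are assumptions.

```python
import numpy as np

def depth_to_point_cloud(depth, rgb, fx, fy, cx, cy):
    """Back-project a depth map (metric depth per pixel) and an aligned RGB
    image into a colored 3-D point cloud using a pinhole camera model.
    Unmeasured pixels (depth 0, as Kinect reports them) are discarded."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    z = depth
    x = (u - cx) * z / fx  # back-project along the camera rays
    y = (v - cy) * z / fy
    valid = z > 0
    points = np.stack([x[valid], y[valid], z[valid]], axis=1)  # (N, 3)
    colors = rgb[valid]                                        # (N, 3)
    return points, colors
```

Projecting such a point cloud through a virtual pinhole array, as the paper does, then amounts to rendering it once per elemental lens position.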
Full-parallax 3D display from the hole-filtered depth information
2015
In this paper we introduce an efficient hole-filling algorithm for the synthetic generation of microimages that are displayed on an integral-imaging monitor. We apply the joint bilateral filter and the median filter to the captured depth map. At each step of the iterative algorithm, we introduce the data from a new Kinect capture. As a result, this algorithm can improve the quality of the depth maps and effectively remove unmeasured depth holes. This refined depth information enables the creation of a tidy integral image, which can be projected onto an integral-imaging monitor. In this way the monitor can display 3D images with continuous views, full parallax and abundant 3D reconstructed scene fo…
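The iterative hole filling described above can be illustrated with a minimal sketch that uses only a median over valid neighbours; the paper additionally uses a joint bilateral filter guided by the RGB image and fresh Kinect captures per iteration, which this simplified version omits.

```python
import numpy as np

def fill_depth_holes(depth, iterations=3):
    """Iteratively fill zero-valued (unmeasured) holes in a depth map.
    Each pass replaces a hole pixel with the median of the measured
    depths in its 3x3 neighbourhood; measured pixels are left untouched,
    and holes shrink from their borders inward across iterations."""
    filled = depth.astype(float).copy()
    for _ in range(iterations):
        holes = np.argwhere(filled == 0)
        if holes.size == 0:
            break
        updated = filled.copy()
        for r, c in holes:
            # 3x3 window clipped at image borders, keeping only valid depths
            win = filled[max(r - 1, 0):r + 2, max(c - 1, 0):c + 2]
            vals = win[win > 0]
            if vals.size:
                updated[r, c] = np.median(vals)
        filled = updated
    return filled
```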
Real Time Stereo Matching Using Two Step Zero-Mean SAD and Dynamic Programming
2018
Dense depth map extraction is an active research field in computer vision that aims to recover three-dimensional information from a stereo image pair. A large variety of algorithms has been developed. Local methods based on block matching are prevalent due to their linear computational complexity and easy implementation. This local cost is also used in global methods such as graph cuts and dynamic programming in order to reduce sensitivity to occlusion and uniform texture. This paper proposes a new method for matching images based on two-stage block matching as the local cost function and dynamic programming as the energy-optimization approach. In our work, we introduce the two stages of th…
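The zero-mean SAD local cost named in the title can be sketched as follows. This is a minimal winner-take-all disparity search for a single pixel, without the paper's two-stage scheme or the dynamic-programming optimization; all names and window sizes are illustrative assumptions.

```python
import numpy as np

def zmsad(left_patch, right_patch):
    """Zero-mean SAD: subtract each patch's mean before summing absolute
    differences, which makes the cost robust to a constant brightness
    offset between the two stereo images."""
    l = left_patch.astype(float)
    r = right_patch.astype(float)
    return np.abs((l - l.mean()) - (r - r.mean())).sum()

def best_disparity(left, right, row, col, half=2, max_disp=8):
    """Pick the disparity minimizing ZMSAD for one left-image pixel
    (winner-take-all, no global optimization step)."""
    lp = left[row - half:row + half + 1, col - half:col + half + 1]
    best, best_cost = 0, np.inf
    for d in range(min(max_disp, col - half) + 1):
        rp = right[row - half:row + half + 1,
                   col - d - half:col - d + half + 1]
        cost = zmsad(lp, rp)
        if cost < best_cost:
            best, best_cost = d, cost
    return best
```

In a full pipeline, the ZMSAD costs of a scanline would feed the dynamic-programming stage rather than being decided per pixel as here.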
Robust Depth Estimation for Light Field Microscopy
2019
Light field technologies have seen a rise in recent years, and microscopy is a field where such technology has had a deep impact. The possibility of providing spatial and angular information at the same time and in a single shot brings several advantages and allows for new applications. A common goal in these applications is the calculation of a depth map to reconstruct the three-dimensional geometry of the scene. Many approaches are applicable, but most of them cannot achieve high accuracy because of the nature of such images: biological samples are usually poor in features and do not exhibit sharp colors like natural scenes. Under such conditions, standard approaches result in noisy depth ma…
Face modeling: a real-time embedded implementation of a stereovision algorithm
2001
The problem of acquiring 3D data of a human face arises in face recognition, virtual reality, and many other applications. It can be solved using stereovision. This technique consists of acquiring three-dimensional data from two cameras. The aim is to implement an algorithmic chain that makes it possible to obtain a three-dimensional space from two two-dimensional spaces: the two images coming from the two cameras. Several implementations have already been considered. We propose a new, simple real-time implementation based on a multiprocessor approach (FPGA-DSP) that allows for embedded processing. We then present our method, which provides a dense and reliable depth map of the face, and …
Combining Defocus and Photoconsistency for Depth Map Estimation in 3D Integral Imaging
2017
This paper presents the application of a depth estimation method for scenes acquired using a Synthetic Aperture Integral Imaging (SAII) technique. SAII is an autostereoscopic technique that uses an array of cameras to acquire images from different perspectives. The depth estimation method combines a defocus and a correspondence measure. This approach obtains consistent results and shows a noticeable improvement in the depth estimation as compared to a minimum-variance minimisation strategy, also tested on our scenes. Further improvements are obtained for both methods when they are fed into a regularisation approach that takes into account the depth in the spatial neighbourhood of a pix…
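Combining two per-pixel cost measures, as this abstract describes, is commonly done by normalizing and blending the two cost volumes before a winner-take-all depth pick. The sketch below assumes a simple weighted sum with weight `alpha`; this is an illustrative formulation, not the paper's exact one.

```python
import numpy as np

def combined_cost(defocus_cost, corr_cost, alpha=0.5):
    """Fuse two cost volumes of shape (D, H, W) — one depth hypothesis per
    slice — after min-max normalization, so neither measure dominates
    purely by scale."""
    def norm(c):
        return (c - c.min()) / (c.max() - c.min() + 1e-12)
    return alpha * norm(defocus_cost) + (1 - alpha) * norm(corr_cost)

def depth_from_cost(cost_volume, depth_candidates):
    """Winner-take-all: for each pixel, pick the candidate depth whose
    fused cost is minimal."""
    return depth_candidates[np.argmin(cost_volume, axis=0)]
```

A regularisation step over the spatial neighbourhood, as the paper adds, would smooth this WTA map afterwards.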
Robust technique for 3D shape reconstruction
2017
In this paper, a robust and simple scheme is presented for three-dimensional (3D) shape reconstruction of a real object. A novel composite-pattern technique is proposed for projecting the light pattern onto the object of interest. The proposed scheme reduces the number of patterns by combining the primary color-coded channels into one composite format. Our approach uses both spatial and temporal intensity variation for the calibration and construction phases. Gamma calibration is considered within the proposed scheme. A high-quality depth map is obtained from the linear light reflected by the shape of the object without complex calculations. Experimental results demonstrate that the proposed technique i…