
AUTHOR

Alain Trémeau

Color and Flow Based Superpixels for 3D Geometry Respecting Meshing

We present an adaptive-weight-based superpixel segmentation method aimed at creating a mesh representation that respects the 3D scene structure. We propose a new fusion framework that employs both dense optical flow and color images to compute the probability of boundaries. The main contribution of this work is a new color and optical-flow pixel-wise weighting model that takes into account the non-linear error distribution of depth estimated from optical flow. Experiments show that our method outperforms other state-of-the-art methods, yielding smaller errors in the final produced mesh.
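The abstract describes fusing a color cue and an optical-flow cue with per-pixel weights that account for the unreliability of depth-from-flow. A minimal sketch of such a fusion, assuming a hypothetical exponential confidence function (the paper's exact weighting model is not reproduced here):

```python
import math

def boundary_probability(color_grad, flow_grad, flow_mag, alpha=4.0):
    """Fuse color and flow gradient maps into a per-pixel boundary probability.

    The flow term is down-weighted where the flow magnitude is small, since
    depth estimated from small flows is unreliable (the error grows
    non-linearly as the flow approaches zero). The confidence function
    below is an illustrative assumption, not the paper's exact model.
    """
    h, w = len(color_grad), len(color_grad[0])
    prob = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            # confidence in the flow cue rises with flow magnitude
            w_flow = 1.0 - math.exp(-alpha * flow_mag[y][x])
            w_color = 1.0 - w_flow
            prob[y][x] = w_color * color_grad[y][x] + w_flow * flow_grad[y][x]
    return prob
```

With zero flow the result falls back entirely to the color gradient; with large flow it approaches the flow gradient.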

research product

Sampling CIELAB color space with perceptual metrics

International audience

research product

Cross-Media Color Reproduction and Display Characterization

In this chapter, we present the problem of cross-media color reproduction, that is, how to achieve consistent reproduction of images in different media with different technologies. Of particular relevance for the color image processing community are displays, whose color properties have not been extensively covered in previous literature. Therefore, we go into more depth on how to model displays in order to achieve colorimetric consistency. The structure of this chapter is as follows: After a short introduction, we introduce the field of cross-media color reproduction, including a brief description of current standards for color management, the concept of colori…
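A standard parametric model for display characterization of the kind discussed here is the gain-offset-gamma (GOG) model, which maps a channel's digital count to normalized luminance. A minimal sketch, with illustrative (not measured) parameter values:

```python
def gog_channel_luminance(d, gain=0.95, offset=0.05, gamma=2.2, d_max=255):
    """Gain-offset-gamma (GOG) model for one display channel.

    Maps a digital count d in [0, d_max] to normalized channel luminance.
    A full colorimetric characterization would fit gain, offset and gamma
    per channel from measurements, then combine the three channels through
    the display's primary matrix; this sketch covers one channel only.
    """
    x = gain * (d / d_max) + offset
    x = max(x, 0.0)  # clip negative pre-gamma values
    return x ** gamma
```

With these defaults, full drive (d = 255) yields luminance 1.0 and zero drive yields a small black-level term from the offset.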

research product

A Performance Evaluation of Fusion Techniques for Spatio-Temporal Saliency Detection in Dynamic Scenes

Visual saliency is an important research topic in computer vision applications, since it helps focus processing on regions of interest instead of the whole image. Detecting visual saliency in still images has been widely addressed in the literature. However, visual saliency detection in videos is more complicated due to the additional temporal information. A spatio-temporal saliency map is usually obtained by the fusion of a static saliency map and a dynamic saliency map. The way both maps are fused plays a critical role in the accuracy of the spatio-temporal saliency map. In this paper, we evaluate the performance of different fusion techniques on a large and diverse datas…
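The fusion of a static and a dynamic saliency map described above can be sketched with a few common pixel-wise fusion rules (the rule names and the caller-supplied normalization are illustrative; the paper evaluates a broader set):

```python
def fuse_saliency(static_map, dynamic_map, method="mean"):
    """Fuse a static and a dynamic saliency map pixel-wise.

    Three simple fusion rules of the kind compared in such evaluations.
    Both input maps are assumed to be normalized to [0, 1] and to have
    the same shape (lists of rows).
    """
    fuse = {
        "mean":    lambda s, d: 0.5 * (s + d),
        "max":     lambda s, d: max(s, d),
        "product": lambda s, d: s * d,
    }[method]
    return [[fuse(s, d) for s, d in zip(srow, drow)]
            for srow, drow in zip(static_map, dynamic_map)]
```

The choice matters in practice: multiplicative fusion suppresses regions salient in only one map, while max fusion keeps them.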

research product

On the uniform sampling of CIELAB color space and the number of discernible colors

This paper presents a useful algorithmic strategy to sample the CIELAB color space uniformly based on a close-packed hexagonal grid. This sampling scheme has been used successfully in different research works, from computational color science to color image processing. The main objective of this paper is to demonstrate the relevance and the accuracy of the hexagonal grid sampling method applied to the CIELAB color space. The second objective of this paper is to show that the number of color samples computed depends on the application and on the color gamut boundary considered. As a demonstration, we use this sampling to support a discussion on the number of discernible colors related to a JND.
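The close-packed hexagonal grid construction can be sketched in the (a*, b*) plane: rows spaced delta·√3/2 apart with every other row shifted by delta/2, so each sample sits at distance delta from its six in-plane neighbours. A full CIELAB sampler would stack such layers along L* and keep only points inside the gamut boundary; this 2D sketch shows the grid construction only.

```python
import math

def hex_grid(a_min, a_max, b_min, b_max, delta):
    """Close-packed hexagonal sampling of a rectangle in the (a*, b*) plane.

    delta is the spacing between neighbouring samples (e.g. a JND-derived
    step). Odd rows are offset by delta/2; row spacing is delta*sqrt(3)/2.
    """
    row_step = delta * math.sqrt(3) / 2
    points, row = [], 0
    b = b_min
    while b <= b_max:
        a = a_min + (delta / 2 if row % 2 else 0.0)
        while a <= a_max:
            points.append((a, b))
            a += delta
        b += row_step
        row += 1
    return points
```

Counting the points returned for a given gamut region and delta is exactly the kind of estimate the discussion on discernible colors relies on.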

research product

A Gamut Preserving Color Image Quantization

We propose a new approach for color image quantization which preserves the shape of the color gamut of the studied image. Quantization consists in finding a set of colors representative of the color distribution of the image. We are looking here for an optimal LUT (look-up table) which contains information on the image's gamut and on its color distribution. The main motivation of this work is to control the reproduction of color images on different output devices so as to preserve the same color feel, coupling intrinsic information about the image gamut with output-device calibration. We have developed a color quantization algorithm based on an image depend…
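The LUT-based quantization step can be sketched as a nearest-palette mapping. In a gamut-preserving scheme, the palette (the LUT) is built so that it still spans the image's gamut, e.g. by forcing the most extreme image colors into the palette before filling the remaining entries with distribution-representative colors; the palette construction itself is the paper's contribution and is not reproduced here, so this sketch takes the palette as given:

```python
def quantize_with_lut(pixels, palette):
    """Map each pixel to its nearest palette entry (squared RGB distance).

    pixels and palette entries are (r, g, b) tuples. The palette is
    assumed to already include the image's gamut-extreme colors, which
    is what keeps the quantized image's gamut shape intact.
    """
    def nearest(p):
        return min(palette,
                   key=lambda c: sum((pc - cc) ** 2 for pc, cc in zip(p, c)))
    return [nearest(p) for p in pixels]
```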

research product

Spatio-Temporal Saliency Detection in Dynamic Scenes using Local Binary Patterns

Visual saliency detection is an important step in many computer vision applications, since it reduces further processing steps to regions of interest. Saliency detection in still images is a well-studied topic. However, video scenes contain more information than static images, and this additional temporal information is an important aspect of human perception. Therefore, it is necessary to include motion information in order to obtain a spatio-temporal saliency map for a dynamic scene. In this paper, we introduce a new spatio-temporal saliency detection method for dynamic scenes based on dynamic textures computed with local binary patterns. In particular, we extract l…
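The local binary pattern operator underlying the dynamic-texture description can be sketched in its basic 2D form. Spatio-temporal variants (e.g. LBP on three orthogonal planes) apply the same idea to the XY, XT and YT slices of a video volume; this sketch shows the 2D operator only:

```python
def lbp8(img, y, x):
    """8-neighbour local binary pattern code of pixel (y, x).

    Each neighbour contributes one bit: 1 if its value is >= the center
    value. The 8 bits (neighbours taken clockwise from the top-left)
    form a texture code in [0, 255]; histograms of these codes describe
    the local texture.
    """
    center = img[y][x]
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = 0
    for bit, (dy, dx) in enumerate(offsets):
        if img[y + dy][x + dx] >= center:
            code |= 1 << bit
    return code
```

A flat region yields code 255 (all neighbours tie with the center), while a bright isolated pixel yields code 0.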

research product