Search results for "methodologies"
Showing 10 of 2106 documents
Towards an Efficient Implementation of an Accurate SPH Method
2020
A modified version of the Smoothed Particle Hydrodynamics (SPH) method is considered in order to overcome the loss of accuracy of the standard formulation. The summation of Gaussian kernel functions is employed, using the Improved Fast Gauss Transform (IFGT) to reduce the computational cost, while tuning the desired accuracy in the SPH method. This technique, coupled with an algorithmic design for exploiting the performance of Graphics Processing Units (GPUs), makes the method promising, as shown by numerical experiments.
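The Gaussian kernel summation that the IFGT accelerates can be illustrated by its direct counterpart. Below is a minimal numpy sketch of the direct O(N·M) sum the paper replaces with the fast transform; the function name, array shapes, and bandwidth convention are illustrative, not taken from the paper.

```python
import numpy as np

def gaussian_sph_sum(targets, sources, weights, h):
    """Direct summation of Gaussian kernels over all source particles.

    Evaluates f(y_j) = sum_i w_i * exp(-||y_j - x_i||^2 / h^2) for each
    target point y_j. This is the O(N*M) sum whose cost the Improved
    Fast Gauss Transform reduces to near-linear time at a tunable accuracy.
    """
    # pairwise squared distances: (M, N) from targets (M, d) and sources (N, d)
    d2 = ((targets[:, None, :] - sources[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / h**2) @ weights
```

A single source of unit weight yields kernel value 1 at the source location and exp(-1) at distance h, which gives a quick sanity check of the bandwidth convention.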
A video-based real-time vehicle counting system using adaptive background method
2008
This paper presents a video-based solution for a real-time vehicle detection and counting system, using a surveillance camera mounted at a relatively high vantage point to acquire the traffic video stream. The two main methods applied in this system are adaptive background estimation and Gaussian shadow elimination. The former allows robust moving-object detection, especially in complex scenes. The latter is based on the HSV color space and is able to deal with shadows of different sizes and intensities. After these two operations, an image with the moving vehicles extracted is obtained, and counting is then performed by a method called the virtual detector.
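An HSV shadow test of the kind described above is commonly expressed as a ratio-and-threshold rule: a foreground pixel is a cast-shadow candidate if it darkens the background's V channel by a bounded factor while leaving H and S nearly unchanged. A minimal numpy sketch, with illustrative thresholds that are not the paper's values:

```python
import numpy as np

def shadow_mask(fg_hsv, bg_hsv, alpha=0.4, beta=0.9, tau_s=0.15, tau_h=0.1):
    """Flag foreground pixels as cast shadow using an HSV ratio test.

    A pixel counts as shadow if the foreground/background brightness
    ratio in V lies in [alpha, beta] (darkened, but not too much) while
    the H and S differences stay below small tolerances.
    All thresholds here are illustrative defaults.
    """
    h_f, s_f, v_f = np.moveaxis(fg_hsv, -1, 0)
    h_b, s_b, v_b = np.moveaxis(bg_hsv, -1, 0)
    ratio = v_f / np.maximum(v_b, 1e-6)      # avoid division by zero
    return ((alpha <= ratio) & (ratio <= beta)
            & (np.abs(s_f - s_b) <= tau_s)
            & (np.abs(h_f - h_b) <= tau_h))
```

In a counting pipeline, pixels flagged by this mask would be removed from the foreground before the virtual detector is applied, so that shadows do not merge or inflate vehicle blobs.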
Optimal Filter Estimation for Lucas-Kanade Optical Flow
2012
Optical flow algorithms offer a way to estimate motion from a sequence of images. The computation of optical flow plays a key role in several computer vision applications, including motion detection and segmentation, frame interpolation, three-dimensional scene reconstruction, robot navigation, and video compression. In gradient-based optical flow implementations, the pre-filtering step plays a vital role, not only for accurate computation of optical flow but also for improved performance. Generally, in optical flow computation, filtering is applied first to the original input images, after which the images are resized. In this paper, we propose an image filt…
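The place of the pre-filtering step can be seen in a minimal single-window Lucas-Kanade estimator. In the sketch below, a plain Gaussian blur stands in for whatever optimised filter one might estimate; the function name, window size, and sigma are all illustrative assumptions, not the paper's method.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def lucas_kanade(img1, img2, x, y, win=7, sigma=1.0):
    """Estimate the flow (u, v) at one pixel with Lucas-Kanade.

    Both frames are pre-filtered with a Gaussian before gradients are
    computed -- this is the pre-filtering step whose design matters for
    accuracy. The flow solves the least-squares system built from the
    brightness-constancy constraint Ix*u + Iy*v + It = 0 over a window.
    """
    a = gaussian_filter(img1.astype(float), sigma)   # pre-filter frame 1
    b = gaussian_filter(img2.astype(float), sigma)   # pre-filter frame 2
    Iy, Ix = np.gradient(a)                          # spatial gradients
    It = b - a                                       # temporal gradient
    r = win // 2
    sl = np.s_[y - r:y + r + 1, x - r:x + r + 1]     # local window
    A = np.stack([Ix[sl].ravel(), Iy[sl].ravel()], axis=1)
    rhs = -It[sl].ravel()
    flow, *_ = np.linalg.lstsq(A, rhs, rcond=None)   # solves for (u, v)
    return flow
```

Shifting a smooth test pattern by one pixel and checking that the recovered u is close to 1 is a quick way to validate both the gradient convention and the sign of It.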
GPU-accelerated integral imaging and full-parallax 3D display using stereo-plenoptic camera system
2019
In this paper, we propose a novel approach to produce integral images ready to be displayed on an integral-imaging monitor. Our main contribution is the use of a commercial plenoptic camera arranged in a stereo configuration. Our proposed set-up is able to record the spatial and angular radiance information simultaneously at each stereo position. We illustrate our contribution by composing a point cloud from a pair of captured plenoptic images and generating an integral image from the properly registered 3D information. We have exploited graphics processing unit (GPU) acceleration to enhance the speed and efficiency of the integral-image computation. We…
The cone of gaze
2011
Gaze direction is an important cue that regulates social interactions. Although humans are very accurate in determining gaze directions in general, they have a surprisingly liberal criterion for the presence of mutual gaze. We first established a psychophysical task to measure the cone of gaze, which required observers to adjust the eyes of a virtual head to the margins of the area of mutual gaze. Then we examined differences between 2D, 3D, and genuine real-life gaze. Finally, we investigated the tolerance for image distortions when the virtual head is not viewed from the proper vantage point. Gaze direction was remarkably robust to loss of detail and to distortion. Important lessons for…
A new Adaptive and Progressive Image Transmission Approach using Function Superpositions
2010
We present a novel approach to adaptive and progressive image transmission, based on the decomposition of an image into compositions and superpositions of monovariate functions. The monovariate functions are iteratively constructed and transmitted, one after the other, to progressively reconstruct the original image: the progressive transmission is performed directly in the 1D space of the monovariate functions and independently of any statistical properties of the image. Each monovariate function contains only a fraction of the pixels of the image. Each new transmitted monovariate function adds data to the previously transmitted monovariate functions. After each tra…
A survey on geometrical reconstruction as a core technology to sketch-based modeling
2005
In this work, the background and evolution of three-dimensional reconstruction of line drawings over the last 30 years is discussed. A new general taxonomy is proposed to reveal and discuss the historical evolution of geometrical reconstruction and its challenges. The evolution of geometrical reconstruction, from recovering know-how stored in engineering drawings to sketch-based modeling that supports the first steps of conceptual design, is discussed, together with the current challenges of the field.
Optimizing PolyACO Training with GPU-Based Parallelization
2016
A central part of Ant Colony Optimisation (ACO) is the function that calculates the quality and cost of solutions, such as the distance of a potential ant route. This cost function is used to deposit an appropriate amount of pheromone to achieve good convergence, and in an active ACO implementation a significant part of the runtime is spent in this part of the code. In some cases the cost function accounts for up to 94% of the runtime, making it a performance bottleneck.
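The tour-length cost function described above can be sketched in a few lines. Expressing it as a vectorised gather over a precomputed distance matrix, as below, is also the natural first step toward batch evaluation on a GPU array library; the names and shapes are illustrative, not the paper's API.

```python
import numpy as np

def route_cost(route, dist):
    """Total length of a closed ant tour.

    `dist` is a precomputed (n, n) symmetric distance matrix and
    `route` is a permutation of city indices. This gather-and-sum is
    the hot spot that dominates ACO runtime; written without an
    explicit loop, many ant tours can be scored the same way in
    parallel on a GPU.
    """
    nxt = np.roll(route, -1)        # successor of each city, wrapping around
    return dist[route, nxt].sum()   # sum of edge lengths along the tour
```

For a three-city triangle with pairwise distances 1, 3, and 2, the closed tour 0 → 1 → 2 → 0 costs 1 + 3 + 2 = 6, which makes a handy unit test.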
Some subgroup embeddings in finite groups: A mini review
2015
In this survey paper, several subgroup embedding properties related to some types of permutability are introduced and studied.
High-resolution far-field integral-imaging camera by double snapshot
2012
In multi-view three-dimensional imaging, to capture the elemental images of distant objects, the use of a field-like lens that projects the reference plane onto the microlens array is necessary. In this case, the spatial resolution of reconstructed images is equal to the spatial density of microlenses in the array. In this paper we report a simple method, based on the realization of double snapshots, to double the 2D pixel density of reconstructed scenes. Experiments are reported to support the proposed approach.