Search results for "Computer Vision"
Showing 10 of 2,353 documents
Classification of Melanoma Lesions Using Sparse Coded Features and Random Forests
2016
Malignant melanoma is the most dangerous type of skin cancer, yet also the most treatable kind of cancer, provided it is diagnosed early, which remains a challenging task for clinicians and dermatologists. In this regard, CAD systems based on machine learning and image processing techniques have been developed to differentiate melanoma lesions from benign and dysplastic nevi using dermoscopic images. Generally, these frameworks are composed of sequential processes: pre-processing, segmentation, and classification. This architecture faces two main challenges: (i) each process is complex, requires tuning a set of parameters, and is specific to a given dataset; (ii) the…
Uncalibrated Reconstruction: An Adaptation to Structured Light Vision
2003
Euclidean reconstruction from two uncalibrated stereoscopic views is achievable from knowledge of geometrical constraints about the environment. Unfortunately, these constraints may be quite difficult to obtain. In this paper, we propose an approach based on structured lighting, which has the advantage of providing geometrical constraints independent of the scene geometry. Moreover, the use of structured light provides a unique solution to the tricky correspondence problem present in stereovision. The projection matrices are first computed using a canonical representation, and a projective reconstruction is performed. Then, several constraints are generated from the image an…
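The canonical-representation step mentioned in this abstract can be sketched in a few lines: given a fundamental matrix F, one valid camera pair is P1 = [I | 0], P2 = [[e']× F | e'], from which corresponding points can be triangulated into a projective reconstruction by DLT. This is a generic textbook sketch, not the paper's method; the structured-light correspondences and the constraints the paper generates are not modelled, and all helper names are illustrative.

```python
import numpy as np

def skew(v):
    """Cross-product matrix [v]x such that skew(v) @ w == np.cross(v, w)."""
    return np.array([[0, -v[2], v[1]],
                     [v[2], 0, -v[0]],
                     [-v[1], v[0], 0]])

def canonical_cameras(F):
    """Canonical projective camera pair compatible with F (assumed rank 2)."""
    U, _, _ = np.linalg.svd(F)
    e2 = U[:, -1]                       # epipole in the second image (left null vector)
    P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = np.hstack([skew(e2) @ F, e2[:, None]])
    return P1, P2

def triangulate(P1, P2, x1, x2):
    """Linear DLT triangulation; x1, x2 are normalized points (x, y, 1)."""
    A = np.vstack([x1[0] * P1[2] - P1[0],
                   x1[1] * P1[2] - P1[1],
                   x2[0] * P2[2] - P2[0],
                   x2[1] * P2[2] - P2[1]])
    _, _, Vt = np.linalg.svd(A)
    return Vt[-1]                       # homogeneous projective point
```

The reconstruction is only projective: reprojecting the triangulated point through the canonical cameras reproduces the image points, but recovering Euclidean structure requires additional constraints, which is where the structured-light constraints of the paper come in.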
Toward morphological thoracic EIT: major signal sources correspond to respective organ locations in CT.
2012
Lung and cardiovascular monitoring applications of electrical impedance tomography (EIT) require localization of relevant functional structures or organs of interest within the reconstructed images. We describe an algorithm for automatic detection of heart and lung regions in a time series of EIT images. Using EIT reconstruction based on anatomical models, candidate regions are identified in the frequency domain and image-based classification techniques are applied. The algorithm was validated on a set of simultaneously recorded EIT and CT data in pigs. In all cases, identified regions in EIT images corresponded to those manually segmented in the matched CT image. Results demonstrate the abilit…
Effect of Footstep Vibrations and Proprioceptive Vibrations Used with an Innovative Navigation Method
2017
This study investigates the effect of adding vibration feedback to a navigation task in a virtual environment. A previous study used footstep vibrations and proprioceptive vibrations to decrease cyber-sickness and increase the sense of presence. In this study, we test the same vibration modalities with a new navigation method. The results show that proprioceptive vibrations affect neither the sense of presence nor cyber-sickness, while footstep vibrations increase the sense of presence and, to some extent, decrease cyber-sickness. Supported by the Burgundy region through the JCE funding project.
The Athena X-ray Integral Field Unit (X-IFU)
2016
Event: SPIE Astronomical Telescopes + Instrumentation, 2016, Edinburgh, United Kingdom.
Tracking Hands in Interaction with Objects: A Review
2017
Markerless vision-based 3D hand motion tracking is a key and popular component of interaction studies in many domains, such as virtual reality and natural human-computer interfaces. While this research field has been well studied over recent decades, most approaches have considered the human hand in isolation rather than in action or in interaction with the environment or other articulated human body parts. Employing contextual information about the surrounding environment (e.g. the shape, texture, and posture of the object in the hand) can considerably constrain the tracking problem. The goal of this survey is to develop an up-to-date taxonomy of existing vision-based hand tracking m…
Dynamic Augmented Kalman Filtering for Human Motion Tracking under Occlusion Using Multiple 3D Sensors
2020
In this paper, real-time human motion tracking using multiple 3D sensors is demonstrated in a relatively large industrial robot work cell. The proposed solution extends the state of the art by augmenting the constant-velocity model and Kalman filter with low-pass filtered velocity states. The presented method handles occlusions by dynamically including in the Kalman filter only those 3D sensors that provide valid human position data. Human motion tracking was achieved at a frame rate of 20 Hz, with a typical delay of 50 ms to 100 ms and an estimation accuracy of typically 0.10 m to 0.15 m.
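The occlusion-handling idea in this abstract can be sketched as a standard constant-velocity Kalman filter that simply skips the update step for sensors reporting no valid data. This is a minimal illustrative sketch, not the paper's implementation: the state layout, the noise values, and the `kalman_step` helper are assumptions, and the low-pass filtered velocity states are omitted for brevity.

```python
import numpy as np

DT = 0.05  # 20 Hz frame rate, as reported in the abstract

# State: [x, y, z, vx, vy, vz]; each 3D sensor measures position only.
F = np.eye(6)
F[:3, 3:] = DT * np.eye(3)                      # constant-velocity transition
H = np.hstack([np.eye(3), np.zeros((3, 3))])    # position measurement model

Q = 0.01 * np.eye(6)    # process noise (assumed value)
R = 0.05 * np.eye(3)    # per-sensor measurement noise (assumed value)

def kalman_step(x, P, measurements):
    """One predict/update cycle; measurements is a list with one entry per
    sensor, either a 3-vector position or None when the sensor is occluded."""
    # Predict
    x = F @ x
    P = F @ P @ F.T + Q
    # Update only with sensors that provided valid data
    for z in measurements:
        if z is None:
            continue                             # dynamically exclude occluded sensor
        y = z - H @ x                            # innovation
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)           # Kalman gain
        x = x + K @ y
        P = (np.eye(6) - K @ H) @ P
    return x, P
```

A usage example: `kalman_step(x, P, [np.array([1.0, 2.0, 0.5]), None])` fuses the first sensor and skips the second, which is the dynamic-inclusion behaviour the abstract describes.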
GPU-Based Occlusion Minimisation for Optimal Placement of Multiple 3D Cameras
2020
This paper presents a fast GPU-based solution to the 3D occlusion detection problem and the 3D camera placement optimisation problem. Occlusion detection is incorporated into the optimisation so that it returns near-optimal positions for 3D cameras in environments containing occluding objects, maximising the volume visible to the cameras. In addition, the authors' previous work on 3D sensor placement optimisation is extended to include a model of a pyramid-shaped viewing frustum and to take the camera's pose into account when computing the optimal position.
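The visible-volume objective such an optimisation maximises can be approximated by Monte-Carlo sampling: draw points in the workspace, keep those inside the camera's frustum, and discard those whose line of sight is blocked. The following sketch is illustrative only; the cone-shaped frustum test, the spherical occluder, and all parameter values are assumptions standing in for the paper's GPU pipeline and pyramid frustum model.

```python
import numpy as np

def in_frustum(points, cam, look, fov_cos, max_range):
    """Frustum approximated by a view cone: inside if within range and
    within the half-angle of the (unit) look direction."""
    v = points - cam
    d = np.linalg.norm(v, axis=1)
    ok = (d > 1e-9) & (d < max_range)
    cosang = np.zeros(len(points))
    cosang[ok] = (v[ok] @ look) / d[ok]
    return ok & (cosang > fov_cos)

def occluded_by_sphere(points, cam, centre, radius):
    """True where the segment cam->point passes within `radius` of `centre`."""
    v = points - cam
    L = np.linalg.norm(v, axis=1)
    u = v / np.maximum(L, 1e-9)[:, None]
    tproj = np.clip(u @ (centre - cam), 0.0, L)  # closest point on segment
    closest = cam + u * tproj[:, None]
    return np.linalg.norm(closest - centre, axis=1) < radius

def visible_fraction(cam, look, occ_centre, occ_radius, n=20000, seed=1):
    """Monte-Carlo estimate of the workspace fraction visible to one camera."""
    rng = np.random.default_rng(seed)
    pts = rng.random((n, 3)) * [10, 14, 5]       # workspace size from the listing
    vis = in_frustum(pts, cam, look / np.linalg.norm(look),
                     fov_cos=np.cos(np.radians(45)), max_range=20.0)
    vis &= ~occluded_by_sphere(pts, cam, occ_centre, occ_radius)
    return vis.mean()
```

An optimiser would evaluate `visible_fraction` (summed over cameras) at candidate positions and poses; the paper's contribution is doing the occlusion test fast on the GPU rather than this CPU brute force.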
Visual Marker Guided Point Cloud Registration in a Large Multi-Sensor Industrial Robot Cell
2018
This paper presents a benchmark and accuracy analysis of 3D sensor calibration in a large industrial robot cell. The sensors used were Kinect v2 devices, each containing both an RGB and an IR camera and measuring depth based on the time-of-flight principle. The approach is a novel procedure combining ArUco visual markers, region-of-interest methods, and iterative closest point (ICP). The sensors are calibrated pairwise, exploiting the fact that time-of-flight sensors can have some overlap in the generated point cloud data. For a volume measuring 10 m × 14 m × 5 m, a typical point cloud accuracy of 5–10 cm was achieved using six sensor nodes.
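The pairwise registration step can be illustrated with a minimal point-to-point ICP built on the SVD-based (Kabsch) rigid alignment. This is a sketch under simplifying assumptions, not the paper's procedure: correspondences come from brute-force nearest neighbours, and the ArUco-based coarse alignment is assumed to have been done already, so the clouds start roughly overlapped.

```python
import numpy as np

def best_fit_transform(A, B):
    """Least-squares rigid transform (R, t) mapping Nx3 points A onto B."""
    ca, cb = A.mean(axis=0), B.mean(axis=0)
    H = (A - ca).T @ (B - cb)
    U, _, Vt = np.linalg.svd(H)
    Rm = Vt.T @ U.T
    if np.linalg.det(Rm) < 0:        # guard against reflections
        Vt[-1] *= -1
        Rm = Vt.T @ U.T
    t = cb - Rm @ ca
    return Rm, t

def icp(src, dst, iters=20):
    """Iteratively align src to dst: match nearest neighbours, solve the
    rigid transform, apply, repeat. Returns the cumulative (R, t)."""
    cur = src.copy()
    for _ in range(iters):
        # Brute-force nearest neighbours (fine for a sketch, not for real clouds)
        d = np.linalg.norm(cur[:, None, :] - dst[None, :, :], axis=2)
        matched = dst[d.argmin(axis=1)]
        Rm, t = best_fit_transform(cur, matched)
        cur = cur @ Rm.T + t
    return best_fit_transform(src, cur)
```

Real pipelines would replace the brute-force matching with a k-d tree and add the region-of-interest filtering the paper mentions; the geometry of each iteration is the same.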
Augmented Reality Visualisation Concepts to Support Intraoperative Distance Estimation
2019
The estimation of distances and spatial relations between surgical instruments and surrounding anatomical structures is a challenging task for clinicians in image-guided surgery. Using augmented reality (AR), navigation aids can be displayed directly at the intervention site to support the assessment of distances and reduce the risk of damage to healthy tissue. To this end, four distance-encoding visualisation concepts were developed using a head-mounted optical see-through AR setup and evaluated by conducting a comparison study. Results suggest the general advantage of the proposed methods compared to a blank visualisation providing no additional information. Using a Distance Sensor concep…