AUTHOR
Joacim Dybedal
Optimal placement of 3D sensors considering range and field of view
This paper describes a novel approach to the problem of optimal placement of 3D sensors in a specified volume of interest. The coverage area of each sensor is modelled as a cone with a limited field of view and range. The volume of interest is divided into many smaller cubes, each having a set of associated Boolean and continuous variables. The proposed method can easily be extended to handle the case where certain sub-volumes must be covered by several sensors (redundancy), for example ex-zones, regions where humans are not allowed to enter, or regions where machine movement may obstruct the view of a single sensor. The optimisation problem is formulated as a Mixed-Integer Linear Program …
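The combinatorial core of the formulation — a binary choice per candidate sensor pose and a Boolean coverage state per cube — can be illustrated with a toy sketch. The pose names and coverage sets below are invented for illustration, and exhaustive enumeration stands in for the MILP solver:

```python
from itertools import combinations

# Hypothetical toy instance: the voxel indices each candidate sensor
# pose would cover. The paper's MILP uses binary pose-selection and
# per-cube coverage variables; brute force replaces the solver here.
covers = {
    "A": {0, 1, 2, 3},
    "B": {2, 3, 4, 5},
    "C": {5, 6, 7},
}

def best_placement(covers, n_sensors):
    """Return the pose subset of size n_sensors maximising covered voxels."""
    best = max(combinations(covers, n_sensors),
               key=lambda s: len(set().union(*(covers[p] for p in s))))
    return best, set().union(*(covers[p] for p in best))

poses, covered = best_placement(covers, 2)
```

With the toy data, poses A and C together cover seven voxels, more than any other pair; the MILP expresses the same objective with coverage constraints instead of enumeration.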
Industrial Environment Mapping Using Distributed Static 3D Sensor Nodes
This paper presents a system architecture for mapping and real-time monitoring of a relatively large industrial robotic environment of size 10 m × 15 m × 5 m. Six sensor nodes with embedded computing power and local processing of the 3D point clouds are placed close to the ceiling. The system architecture and data processing are based on the Robot Operating System (ROS) and the Point Cloud Library (PCL). The 3D sensors used are the Microsoft Kinect for Xbox One, and point cloud data is collected at 20 Hz. A new manual calibration procedure is developed using reflective planes. The specified range of the sensor used is 0.8 m to 4.2 m, while depth data up to 9 m is used in this paper. Despite t…
Scalability of GPU-Processed 3D Distance Maps for Industrial Environments
This paper contains a benchmark analysis of the open-source library GPU-Voxels together with the Robot Operating System (ROS) in a large-scale industrial robotics environment. Six sensor nodes with embedded computing generate real-time point cloud data as ROS topics. The data from all sensor nodes are processed by a combination of CPU and GPU on a central ROS node. Experimental results demonstrate that the system is able to handle frame rates of 10 and 20 Hz with voxel sizes of 4, 6, 8 and 12 cm without saturating the CPU or the GPU used by the GPU-Voxels library. The results in this paper show that ROS, in combination with GPU-Voxels, can be used as a viable solution for real-time …
GPU-Based Occlusion Minimisation for Optimal Placement of Multiple 3D Cameras
This paper presents a fast GPU-based solution to the 3D occlusion detection problem and the 3D camera placement optimisation problem. Occlusion detection is incorporated into the optimisation problem to return near-optimal positions for 3D cameras in environments containing occluding objects, maximising the volume that is visible to the cameras. In addition, the authors’ previous work on 3D sensor placement optimisation is extended to include a model for a pyramid-shaped viewing frustum and to take the camera’s pose into account when computing the optimal position.
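The pyramid-shaped frustum model lends itself to a simple per-point visibility test. The sketch below assumes a hypothetical parametrisation (camera looking along its local +z axis, symmetric horizontal and vertical FOV half-angles, near/far range limits); it illustrates the frustum test, not the paper's implementation:

```python
import numpy as np

def in_frustum(points, pose_R, pose_t, hfov, vfov, near, far):
    """Boolean mask of points inside a pyramid-shaped viewing frustum.

    Assumed convention: pose_R, pose_t map world coordinates into the
    camera frame, where the camera looks along +z.
    """
    p = (pose_R @ (points - pose_t).T).T          # world -> camera frame
    z = p[:, 2]
    inside_range = (z >= near) & (z <= far)
    inside_h = np.abs(p[:, 0]) <= z * np.tan(hfov / 2)
    inside_v = np.abs(p[:, 1]) <= z * np.tan(vfov / 2)
    return inside_range & inside_h & inside_v

pts = np.array([[0.0, 0.0, 2.0],    # straight ahead -> visible
                [5.0, 0.0, 2.0],    # far off-axis   -> outside
                [0.0, 0.0, 0.1]])   # closer than the near limit -> outside
mask = in_frustum(pts, np.eye(3), np.zeros(3),
                  np.radians(70), np.radians(60), 0.5, 9.0)
```

Because every point is tested independently, the same check maps directly onto one GPU thread per point.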
Automatic Calibration of an Industrial RGB-D Camera Network Using Retroreflective Fiducial Markers
This paper describes a non-invasive, automatic, and robust method for calibrating a scalable RGB-D sensor network based on retroreflective ArUco markers and the iterative closest point (ICP) scheme. We demonstrate the system by calibrating a sensor network comprising six sensor nodes positioned in a relatively large industrial robot cell with an approximate size of 10 m × 10 m × 4 m. Here, the automatic calibration achieved an average Euclidean error of 3 cm at distances up to 9.45 m. To achieve robustness, we apply several innovative techniques: Firstly, we mitigate the ambiguity problem that occurs when detecting a marker at long range or low resolution by comparing the…
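At the heart of ICP-based calibration is the closed-form rigid alignment of two matched point sets. A minimal SVD (Kabsch) sketch of that single step, assuming correspondences are already known — ICP alternates this with a nearest-neighbour correspondence search:

```python
import numpy as np

def rigid_transform(src, dst):
    """Least-squares R, t with R @ src_i + t ≈ dst_i (Kabsch/SVD step)."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    H = (src - mu_s).T @ (dst - mu_d)              # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # no reflections
    R = Vt.T @ D @ U.T
    return R, mu_d - R @ mu_s

# Recover a known 90-degree rotation about z plus a translation.
src = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], dtype=float)
Rz = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1]], dtype=float)
dst = src @ Rz.T + np.array([1.0, 2.0, 3.0])
R, t = rigid_transform(src, dst)
```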
Reshaping Field of View and Resolution with Segmented Reflectors: Bridging the Gap between Rotating and Solid-State LiDARs
This paper describes the first simulations and experimental results of a novel segmented Light Detection And Ranging (LiDAR) reflector. Large portions of rotating LiDAR data are typically discarded due to occlusion or a misplaced field of view (FOV). The proposed reflector solves this problem by reflecting the entire FOV of the rotating LiDAR towards a target. Optical simulation results, using Zemax OpticStudio, suggest that adding a reflector reduces the range of the embedded LiDAR by only 3.9. Furthermore, pattern simulation results show that a radially reshaped FOV can be configured to maximise point cloud density, maximise coverage, or a combination of the two. Here, the maximum density i…
GPU-Based Optimisation of 3D Sensor Placement Considering Redundancy, Range and Field of View
This paper presents a novel and efficient solution to the 3D sensor placement problem based on GPU programming and massive parallelisation. Compared to prior art using gradient-search and mixed-integer-based approaches, the method presented in this paper returns optimal or near-optimal results in a fraction of the time. The presented method allows for redundancy, i.e. requiring selected sub-volumes to be covered by at least n sensors. The presented results are for 3D sensors whose visible volume is represented by cones, but the method can easily be extended to sensors with other range and field-of-view shapes, such as 2D cameras and LiDARs.
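The cone coverage test is evaluated independently per voxel, which is what makes the problem massively parallel (conceptually one GPU thread per voxel). A vectorised numpy sketch of that kernel, with invented geometry, and a redundancy count obtained by summing the per-sensor masks:

```python
import numpy as np

def cone_coverage(voxels, apex, axis, half_angle, max_range):
    """Boolean mask: which voxel centres lie inside one sensor cone."""
    d = voxels - apex
    dist = np.linalg.norm(d, axis=1)
    axis = axis / np.linalg.norm(axis)
    along = d @ axis                       # projection onto the cone axis
    with np.errstate(invalid="ignore"):
        cos_angle = np.where(dist > 0, along / dist, 1.0)
    return (dist <= max_range) & (cos_angle >= np.cos(half_angle))

# Redundancy: count how many sensors see each voxel, then require >= n.
voxels = np.array([[0.0, 0.0, 1.0], [0.0, 0.0, 5.0], [3.0, 0.0, 1.0]])
sensors = [(np.zeros(3), np.array([0.0, 0.0, 1.0])),
           (np.array([3.0, 0.0, 0.0]), np.array([0.0, 0.0, 1.0]))]
counts = sum(cone_coverage(voxels, apex, ax, np.radians(30), 4.0)
             for apex, ax in sensors)
```

Here the middle voxel is beyond the 4 m range of both cones, while the other two are each seen by exactly one sensor.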
CNN-based People Detection in Voxel Space using Intensity Measurements and Point Cluster Flattening
In this paper, real-time people detection is demonstrated in a relatively large indoor industrial robot cell as well as in an outdoor environment. Six depth sensors mounted near the ceiling are used to generate a merged point cloud of the cell. The merged point cloud is segmented into clusters, which are flattened into gray-scale 2D images in the xy and xz planes. These images are then used as input to a classifier based on convolutional neural networks (CNNs). The final output is the 3D position (x, y, z) and a bounding box representing the human. The system is able to detect and track multiple humans in real time, both indoors and outdoors. The positional accuracy of the proposed method has been verifi…
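The cluster-flattening step can be sketched as a 2D histogram projection. The resolution and image size below are arbitrary illustration values, and per-cell point counts stand in for the measured intensity values the paper uses:

```python
import numpy as np

def flatten_cluster(points, resolution=0.05, size=64):
    """Project a 3D point cluster to a gray-scale top-down (xy) image.

    Bins points into a fixed-size grid centred on the cluster and uses
    the per-cell point count as intensity (the paper also produces an
    xz projection the same way).
    """
    centre = points[:, :2].mean(axis=0)
    half = size * resolution / 2
    img, _, _ = np.histogram2d(points[:, 0], points[:, 1], bins=size,
                               range=[[centre[0] - half, centre[0] + half],
                                      [centre[1] - half, centre[1] + half]])
    if img.max() > 0:
        img = 255 * img / img.max()
    return img.astype(np.uint8)

# Ten coincident points collapse into a single bright pixel.
points = np.tile([1.0, 2.0, 0.5], (10, 1))
img = flatten_cluster(points)
```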
Embedded Processing and Compression of 3D Sensor Data for Large Scale Industrial Environments
This paper presents a scalable embedded solution for processing and transferring 3D point cloud data. Sensors based on the time-of-flight principle generate data that is processed on a local embedded computer and compressed using an octree-based scheme. The compressed data is transferred to a central node where the individual point clouds from several nodes are decompressed and filtered based on a novel method for generating intensity values for sensors which do not natively produce such a value. The paper presents experimental results from a relatively large industrial robot cell with an approximate size of 10 m ×
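The idea behind octree-based compression — transmitting one representative per occupied cell instead of every raw point — can be sketched with a single-level voxel quantisation. A real octree refines cells hierarchically and encodes occupancy compactly; this simplified stand-in only shows the lossy deduplication:

```python
import numpy as np

def voxel_compress(points, voxel_size):
    """Snap points to a voxel grid and keep one centre per occupied voxel."""
    keys = np.unique(np.floor(points / voxel_size).astype(np.int64), axis=0)
    return (keys + 0.5) * voxel_size   # quantised, deduplicated cloud

cloud = np.array([[0.01, 0.01, 0.01],
                  [0.02, 0.02, 0.02],   # falls in the same voxel as above
                  [0.90, 0.90, 0.90]])
small = voxel_compress(cloud, 0.25)
```

The quantisation error is bounded by half the voxel diagonal, so the voxel size directly trades bandwidth against reconstruction accuracy.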
Visual Marker Guided Point Cloud Registration in a Large Multi-Sensor Industrial Robot Cell
This paper presents a benchmark and accuracy analysis of 3D sensor calibration in a large industrial robot cell. The sensors used were Kinect v2 devices, each containing both an RGB camera and an IR camera measuring depth based on the time-of-flight principle. The approach taken was based on a novel procedure combining ArUco visual markers, region-of-interest methods and iterative closest point. The calibration of the sensors is performed pairwise, exploiting the fact that time-of-flight sensors can have some overlap in the generated point cloud data. For a volume measuring 10 m × 14 m × 5 m, a typical accuracy of 5–10 cm in the generated point cloud data was achieved using six sensor nodes.
Replication Data for: CNN-based People Detection in Voxel Space using Intensity Measurements and Point Cluster Flattening
Dataset used to train and test a human classifier in the article "CNN-based People Detection in Voxel Space using Intensity Measurements and Point Cluster Flattening". The set contains both the raw point cloud data from an outdoor test site and the generated images used for training.