
RESEARCH PRODUCT

Camera-LiDAR Data Fusion for Autonomous Mooring Operation

Ajit Jha, Dipendra Subedi, Geir Hovland, Ilya Tyapin

subject

Computer science, Computer vision, Artificial intelligence, Image processing, Robotics, Sensor fusion, LiDAR, Point cloud, Segmentation, Pose, Camera resectioning, Mooring, VDP::Teknologi: 500::Maskinfag: 570

description

Author's accepted manuscript. © 2020 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.

The use of camera and LiDAR sensors to sense the environment has gained increasing popularity in robotics. Individual sensors, such as cameras and LiDARs, on their own cannot meet the growing challenges of complex autonomous systems. One such scenario is autonomous mooring, where the ship has to be tied to a fixed rigid structure (a bollard) to keep it safely stationary. The detection and pose estimation of the bollard based on data fusion from the camera and LiDAR are presented here. Firstly, a single-shot extrinsic calibration of the LiDAR with the camera is presented. Secondly, a camera-LiDAR data fusion method using the camera intrinsic parameters and the camera-to-LiDAR extrinsic parameters is proposed. Finally, an image-based segmentation method for segmenting the corresponding point cloud from the fused camera-LiDAR data is developed and tailored for application in the autonomous mooring operation.
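The fusion step described in the abstract amounts to projecting LiDAR points into the image plane with the camera intrinsic matrix K and the camera-to-LiDAR extrinsic transform (R, t), and the segmentation step then keeps the 3-D points whose projections fall inside an image mask of the detected bollard. The sketch below is a minimal illustration of that pipeline under those assumptions; it is not the authors' implementation, and the function names and data layout are hypothetical.

```python
import numpy as np

def project_lidar_to_image(points_lidar, K, R, t, image_shape):
    """Project 3-D LiDAR points into the camera image plane.

    points_lidar : (N, 3) points in the LiDAR frame.
    K            : (3, 3) camera intrinsic matrix.
    R, t         : rotation (3, 3) and translation (3,) mapping LiDAR
                   coordinates into the camera frame (extrinsic calibration).
    image_shape  : (height, width) of the camera image.
    """
    # Transform points from the LiDAR frame into the camera frame.
    points_cam = points_lidar @ R.T + t

    # Keep only points in front of the camera (positive depth).
    in_front = points_cam[:, 2] > 0
    points_cam = points_cam[in_front]

    # Perspective projection: homogeneous pixel coords, then divide by depth.
    pixels_h = points_cam @ K.T
    pixels = pixels_h[:, :2] / pixels_h[:, 2:3]

    # Discard projections that fall outside the image bounds.
    h, w = image_shape
    inside = ((pixels[:, 0] >= 0) & (pixels[:, 0] < w) &
              (pixels[:, 1] >= 0) & (pixels[:, 1] < h))
    return pixels[inside], points_cam[inside]

def segment_point_cloud(pixels, points_cam, mask):
    """Return the 3-D points whose projections land inside a binary
    segmentation mask (e.g. the detected bollard region)."""
    cols = pixels[:, 0].astype(int)
    rows = pixels[:, 1].astype(int)
    selected = mask[rows, cols] > 0
    return points_cam[selected]
```

In this sketch, the points returned by segment_point_cloud would correspond to the bollard and could serve as input to a subsequent pose-estimation step, in line with the workflow outlined in the abstract.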

https://doi.org/10.1109/iciea48937.2020.9248089