
RESEARCH PRODUCT

OMNI-DRL: Learning to Fly in Forests with Omnidirectional Images

Charles-Olivier Artizzu, Guillaume Allibert, Cédric Demonceaux

Subject

Perception and sensing; Deep Reinforcement Learning; Control and Systems Engineering; Mobile robots and vehicles; [INFO.INFO-RB] Computer Science [cs]/Robotics [cs.RO]; Omnidirectional sensors; Learning robot control

Description

Perception is crucial for drone obstacle avoidance in complex, static, and unstructured outdoor environments. However, most navigation solutions based on Deep Reinforcement Learning (DRL) use limited Field-Of-View (FOV) images as input. In this paper, we demonstrate that omnidirectional images improve these methods. To this end, we provide a comparative benchmark of several visual modalities for navigation: ground-truth depth, ground-truth semantic segmentation, and RGB images. These comparisons show that omnidirectional cameras outperform limited-FOV cameras when navigating with classical DRL methods. Finally, we show in two different virtual forest environments that adapting the convolutions to account for spherical distortions further improves the results.
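The paper itself does not specify its convolution implementation here, but the idea of adapting convolutions to spherical distortions can be illustrated with a minimal sketch: on an equirectangular image, pixels near the poles are horizontally stretched, so a distortion-aware kernel widens its horizontal sampling step by roughly 1/cos(latitude) to keep an approximately constant footprint on the sphere. The function name and the nearest-neighbor sampling scheme below are illustrative assumptions, not the authors' method.

```python
import numpy as np

def distortion_aware_conv3x3(img, kernel):
    """Naive distortion-aware 3x3 convolution on an equirectangular image.

    Horizontal sampling offsets are widened by 1/cos(latitude) so the
    kernel footprint stays roughly constant on the sphere (illustrative
    sketch only; real methods interpolate sampling locations).
    """
    H, W = img.shape
    out = np.zeros_like(img, dtype=float)
    # Latitude of each row center: +pi/2 at the top, -pi/2 at the bottom.
    lats = (0.5 - (np.arange(H) + 0.5) / H) * np.pi
    for y in range(H):
        # Widen the horizontal step near the poles; cap it to avoid blow-up.
        step = min(int(round(1.0 / max(np.cos(lats[y]), 1e-3))), W // 2)
        for x in range(W):
            acc = 0.0
            for ky in (-1, 0, 1):
                yy = min(max(y + ky, 0), H - 1)   # clamp at the poles
                for kx in (-1, 0, 1):
                    xx = (x + kx * step) % W      # wrap around in longitude
                    acc += kernel[ky + 1, kx + 1] * img[yy, xx]
            out[y, x] = acc
    return out
```

Note the two boundary conventions: longitude wraps around (the panorama is periodic horizontally), while latitude is clamped at the poles. With a normalized averaging kernel, a constant image passes through unchanged, which is a quick sanity check for the sampling logic.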

https://hal.science/hal-03777700