Modality-Guided Subnetwork for Salient Object Detection
Zongwei Wu, Guillaume Allibert, Christophe Stolz, Chao Ma, Cédric Demonceaux
Subjects: FOS: Computer and information sciences; Computer Vision and Pattern Recognition (cs.CV); Robotics (cs.RO)
Recent RGBD-based models for saliency detection have attracted research attention. Depth cues such as boundary information, surface normals, and shape attributes contribute to identifying salient objects in complicated scenes. However, most RGBD networks require both modalities at the input side and feed them separately through a two-stream design, which inevitably incurs extra costs for depth sensors and computation. To tackle these inconveniences, we present in this paper a novel fusion design named modality-guided subnetwork (MGSnet). It has the following superior designs: 1) Our model works for both RGB and RGBD data, dynamically estimating depth when it is not available. Taking the inner workings of depth-prediction networks into account, we propose to estimate pseudo-geometry maps from the RGB input, essentially mimicking a multi-modality input. 2) Our MGSnet for RGB SOD runs in real time while achieving state-of-the-art performance compared to other RGB models. 3) The flexible and lightweight design of MGS facilitates integration into two-stream RGBD models. The introduced fusion design enables cross-modality interaction, allowing further progress at minimal cost.
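The core idea above — estimating a pseudo-geometry map from the RGB input and using it to guide RGB features — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the luminance-based depth stand-in and the multiplicative gating scheme are assumptions chosen only to show the data flow.

```python
import numpy as np

def estimate_pseudo_depth(rgb):
    # Hypothetical stand-in for a monocular depth predictor: here we
    # simply use normalized luminance as a fake pseudo-geometry map.
    lum = 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]
    return (lum - lum.min()) / (lum.max() - lum.min() + 1e-8)

def modality_guided_fusion(rgb_feat, depth):
    # Use the (pseudo-)depth map as a spatial gate on the RGB feature
    # map, mimicking a cheap cross-modality interaction.
    gate = depth[..., None]           # H x W x 1
    return rgb_feat * (1.0 + gate)    # depth-guided re-weighting

rgb = np.random.rand(32, 32, 3)       # RGB input (no depth sensor needed)
feat = np.random.rand(32, 32, 8)      # toy RGB feature map
depth = estimate_pseudo_depth(rgb)    # estimated when sensor depth is absent
fused = modality_guided_fusion(feat, depth)
print(fused.shape)                    # (32, 32, 8)
```

In the actual MGSnet, the pseudo-depth comes from a learned predictor and the fusion is a trained subnetwork; the sketch only conveys that the guidance signal is derived from RGB itself, so no depth sensor is required at inference time.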
year | journal | country | edition | language
---|---|---|---|---
2021-01-01 | | | | 