Occluded Video Instance Segmentation: A Benchmark
Research output: Working paper › Preprint › Research
Standard
Occluded Video Instance Segmentation: A Benchmark. / Qi, Jiyang; Gao, Yan; Hu, Yao; Wang, Xinggang; Liu, Xiaoyu; Bai, Xiang; Belongie, Serge; Yuille, Alan; Torr, Philip H. S.; Bai, Song.
2021.
RIS
TY - UNPB
T1 - Occluded Video Instance Segmentation
T2 - A Benchmark
AU - Qi, Jiyang
AU - Gao, Yan
AU - Hu, Yao
AU - Wang, Xinggang
AU - Liu, Xiaoyu
AU - Bai, Xiang
AU - Belongie, Serge
AU - Yuille, Alan
AU - Torr, Philip H. S.
AU - Bai, Song
N1 - project page at https://songbai.site/ovis
PY - 2021/2/2
Y1 - 2021/2/2
N2 - Can our video understanding systems perceive objects when a heavy occlusion exists in a scene? To answer this question, we collect a large-scale dataset called OVIS for occluded video instance segmentation, that is, to simultaneously detect, segment, and track instances in occluded scenes. OVIS consists of 296k high-quality instance masks from 25 semantic categories, where object occlusions usually occur. While our human vision systems can understand those occluded instances by contextual reasoning and association, our experiments suggest that current video understanding systems cannot. On the OVIS dataset, the highest AP achieved by state-of-the-art algorithms is only 16.3, which reveals that we are still at a nascent stage for understanding objects, instances, and videos in a real-world scenario. We also present a simple plug-and-play module that performs temporal feature calibration to complement missing object cues caused by occlusion. Built upon MaskTrack R-CNN and SipMask, we obtain a remarkable AP improvement on the OVIS dataset. The OVIS dataset and project code are available at http://songbai.site/ovis.
AB - Can our video understanding systems perceive objects when a heavy occlusion exists in a scene? To answer this question, we collect a large-scale dataset called OVIS for occluded video instance segmentation, that is, to simultaneously detect, segment, and track instances in occluded scenes. OVIS consists of 296k high-quality instance masks from 25 semantic categories, where object occlusions usually occur. While our human vision systems can understand those occluded instances by contextual reasoning and association, our experiments suggest that current video understanding systems cannot. On the OVIS dataset, the highest AP achieved by state-of-the-art algorithms is only 16.3, which reveals that we are still at a nascent stage for understanding objects, instances, and videos in a real-world scenario. We also present a simple plug-and-play module that performs temporal feature calibration to complement missing object cues caused by occlusion. Built upon MaskTrack R-CNN and SipMask, we obtain a remarkable AP improvement on the OVIS dataset. The OVIS dataset and project code are available at http://songbai.site/ovis.
KW - cs.CV
KW - 68T07, 68T45
M3 - Preprint
BT - Occluded Video Instance Segmentation
ER -