End-to-End Pseudo-LiDAR for Image-Based 3D Object Detection

Research output: Contribution to journal › Conference article › Research › peer-review

  • Rui Qian
  • Divyansh Garg
  • Yan Wang
  • Yurong You
  • Serge Belongie
  • Bharath Hariharan
  • Mark Campbell
  • Kilian Q. Weinberger
  • Wei-Lun Chao

Reliable and accurate 3D object detection is a necessity for safe autonomous driving. Although LiDAR sensors can provide accurate 3D point cloud estimates of the environment, they are also prohibitively expensive for many settings. Recently, the introduction of pseudo-LiDAR (PL) has led to a drastic reduction in the accuracy gap between methods based on LiDAR sensors and those based on cheap stereo cameras. PL combines state-of-the-art deep neural networks for 3D depth estimation with those for 3D object detection by converting 2D depth map outputs to 3D point cloud inputs. So far, however, these two networks have had to be trained separately. In this paper, we introduce a new framework based on differentiable Change of Representation (CoR) modules that allows the entire PL pipeline to be trained end-to-end. The resulting framework is compatible with most state-of-the-art networks for both tasks and, in combination with PointRCNN, improves over PL consistently across all benchmarks, yielding the highest entry on the KITTI image-based 3D object detection leaderboard at the time of submission. Our code will be made available at https://github.com/mileyan/pseudo-LiDAR_e2e.
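The conversion the abstract describes, turning a 2D depth map into a 3D point cloud input, is standard pinhole back-projection. As a minimal sketch (not the authors' implementation, which uses the KITTI calibration matrices; the function name and intrinsics here are hypothetical):

```python
import numpy as np

def depth_to_pseudo_lidar(depth, fx, fy, cx, cy):
    """Back-project a dense depth map (H x W, in metres) into a 3D
    point cloud in the camera frame using the pinhole model:
        x = (u - cx) * z / fx,  y = (v - cy) * z / fy,  z = depth[v, u]
    Returns an (H*W, 3) array of camera-frame points."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)

# Toy example: a flat 2x2 depth map at 10 m, unit focal lengths,
# principal point at the image centre (0.5, 0.5).
pts = depth_to_pseudo_lidar(np.full((2, 2), 10.0),
                            fx=1.0, fy=1.0, cx=0.5, cy=0.5)
```

Because every output coordinate is a differentiable function of the predicted depth, gradients from the 3D detector can flow back through this step into the depth network, which is what the paper's CoR modules make possible for the full pipeline.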

Original language: English
Article number: 9157553
Journal: Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition
Pages (from-to): 5880-5889
Number of pages: 10
ISSN: 1063-6919
DOIs
Publication status: Published - 2020
Externally published: Yes
Event: 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2020 - Virtual, Online, United States
Duration: 14 Jun 2020 - 19 Jun 2020

Conference

Conference: 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2020
Country: United States
City: Virtual, Online
Period: 14/06/2020 - 19/06/2020

Bibliographical note

Funding Information:
This research is supported by grants from the National Science Foundation NSF (III-1618134, III-1526012, IIS-1149882, IIS-1724282, and TRIPODS-1740822), the Office of Naval Research DOD (N00014-17-1-2175), the Bill and Melinda Gates Foundation, and the Cornell Center for Materials Research with funding from the NSF MRSEC program (DMR-1719875). We are thankful for generous support by Zillow and SAP America Inc.

Publisher Copyright:
© 2020 IEEE.
