gradSim: Differentiable simulation for system identification and visuomotor control

Research output: Contribution to conference › Paper › Research

Authors

  • Krishna Murthy Jatavallabhula
  • Miles Macklin
  • Florian Golemo
  • Vikram Voleti
  • Linda Petrini
  • Martin Weiss
  • Breandan Considine
  • Jérôme Parent-Lévesque
  • Kevin Xie
  • Kenny Erleben
  • Liam Paull
  • Florian Shkurti
  • Derek Nowrouzezahrai
  • Sanja Fidler
In this paper, we tackle the problem of estimating physical properties of objects, such as mass, friction, and elasticity, directly from video sequences. Such a system identification problem is fundamentally ill-posed due to the loss of information during image formation. Current best solutions to the problem require precise 3D labels, which are labor-intensive to gather and infeasible to create for many systems such as deformable solids or cloth. In this work we present gradSim, a framework that overcomes the dependence on 3D supervision by combining differentiable multiphysics simulation and differentiable rendering to jointly model the evolution of scene dynamics and image formation. This unique combination enables backpropagation from pixels in a video sequence through to the underlying physical attributes that generated them. Furthermore, our unified computation graph across the dynamics and rendering engines enables the learning of challenging visuomotor control tasks without relying on state-based (3D) supervision, while achieving performance competitive with, or better than, techniques that require precise 3D labels.
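
A minimal sketch of the core idea, backpropagating a pixel-space loss through a differentiable renderer and a differentiable simulator to recover a physical parameter, is given below. This is not the gradSim implementation: the 1-D falling-ball simulator, the Gaussian-splat "renderer", and all names are illustrative assumptions, using PyTorch autograd only to show how gradients can flow from pixels back to a physical attribute (here, a drag coefficient).

import torch

def simulate(drag, n_steps=60, dt=0.05):
    # Differentiable point-mass simulation: a ball dropped from 10 m with
    # velocity-proportional drag; returns the height at every time step.
    pos = torch.tensor(10.0)
    vel = torch.tensor(0.0)
    heights = []
    for _ in range(n_steps):
        acc = -9.81 - drag * vel
        vel = vel + acc * dt
        pos = pos + vel * dt
        heights.append(pos)
    return torch.stack(heights)

def render(heights, n_pixels=32):
    # Toy differentiable "renderer": a soft 1-D image per frame, produced by
    # splatting the ball height onto a row of pixels with a Gaussian kernel.
    pixel_centers = torch.linspace(0.0, 10.0, n_pixels)
    return torch.exp(-(heights[:, None] - pixel_centers[None, :]) ** 2)

# Ground-truth "video" generated with the true (unknown to the optimizer) drag.
true_drag = torch.tensor(0.30)
target_video = render(simulate(true_drag)).detach()

# Recover the drag coefficient from pixels alone, with no 3D supervision.
drag_hat = torch.tensor(1.0, requires_grad=True)
optimizer = torch.optim.Adam([drag_hat], lr=0.05)
for step in range(200):
    optimizer.zero_grad()
    loss = torch.mean((render(simulate(drag_hat)) - target_video) ** 2)
    loss.backward()      # gradients flow through the renderer and the simulator
    optimizer.step()

print(f"estimated drag: {drag_hat.item():.3f}  (true: {true_drag.item():.2f})")

gradSim applies the same principle at scale, replacing the toy components above with a differentiable multiphysics engine (rigid bodies, deformables, cloth) and a differentiable rasterizer, so that real video frames supervise the estimation of parameters such as mass, friction, and elasticity.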
Original language: English
Publication date: 2021
Number of pages: 25
Publication status: Published - 2021
Event: 9th International Conference on Learning Representations - ICLR 2021 - Virtual
Duration: 3 May 2021 - 7 May 2021

Conference

Conference: 9th International Conference on Learning Representations - ICLR 2021
City: Virtual
Period: 03/05/2021 - 07/05/2021


ID: 276648735