RADNet: Robustness against sensor degradation in 3D object detection

Adam Tonderski*†, Joakim Berntsson*†, Bernhard Birkner, Roman Glebov, Armin Stangl, Bernhard Mehlig, Sebastian Ramos
* Denotes equal contribution
Not published (master's thesis)

Abstract

We propose a deep-learning architecture for three-dimensional object detection that uses multiple sensors to cope with sensor degradation. To achieve this, image pixels are transformed into a three-dimensional point cloud, which is fused with a LIDAR point cloud using a learnable discretization method. In the non-degraded case, the model performs close to recent state-of-the-art fusion architectures. Further, when only one sensor is available, the model performs close to the respective single-sensor state of the art. Additional experiments show that the architecture can be made robust against various types of partial sensor degradation, although this imposes a trade-off: performance in the optimal, non-degraded case decreases. Example runs can be viewed here.
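The abstract does not spell out how image pixels are lifted to a three-dimensional point cloud. A common approach in camera-to-point-cloud pipelines (e.g. pseudo-LiDAR methods) is to unproject each pixel through the camera intrinsics using an estimated per-pixel depth; the sketch below illustrates that step only, as an assumption about the general technique rather than the paper's exact method. The function name `unproject_pixels` and all parameters are illustrative.

```python
import numpy as np

def unproject_pixels(depth: np.ndarray, intrinsics: np.ndarray) -> np.ndarray:
    """Lift each image pixel to a 3D point using a per-pixel depth estimate.

    depth:      (H, W) array of estimated depths in meters.
    intrinsics: (3, 3) camera matrix K.
    Returns an (H*W, 3) point cloud in the camera coordinate frame.
    """
    H, W = depth.shape
    # Build homogeneous pixel coordinates (u, v, 1) for every pixel.
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    pixels = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3)
    # Back-project through K^-1 to get viewing rays, then scale by depth:
    # point = z * K^-1 [u, v, 1]^T
    rays = pixels @ np.linalg.inv(intrinsics).T
    return rays * depth.reshape(-1, 1)

# Hypothetical usage with KITTI-like intrinsics and a placeholder depth map.
K = np.array([[721.5, 0.0, 609.6],
              [0.0, 721.5, 172.9],
              [0.0, 0.0, 1.0]])
depth = np.full((375, 1242), 10.0)
cloud = unproject_pixels(depth, K)  # (375*1242, 3) points
```

The resulting point cloud can then be concatenated with the LIDAR points before discretization, which is what makes a shared, learnable grid representation possible for both sensors.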