In applications from robotics and computer vision to autonomous driving and remote sensing, there is an increasing need for optical sensors and visual computing algorithms that efficiently sense and understand the surrounding environment. Yet conventional imaging systems fail to exploit, or worse, simply discard, physical properties of light that are rich with information. For example, time of flight, polarization, wavelength, coherence, angular information, and other physical properties are encoded in photons as they interact with an environment. By understanding and carefully modeling the physics of light transport, we can reveal scene information that would otherwise remain invisible, enabling powerful and efficient methods for vision and sensing.
In this talk, I describe physics-based techniques for applications in 3D imaging and computer vision. Surprisingly, I find that new, efficient methods for imaging around corners and through scattering media are connected to methods for neural rendering and novel view synthesis through different approximations of the radiative transfer equation.
David Lindell is a postdoctoral scholar at Stanford University in the Computational Imaging Lab. His research combines novel optical designs, emerging sensors, and physics-based algorithms to enable new capabilities in visual computing. He received his Ph.D. in Electrical Engineering from Stanford University, where he was a Stanford Graduate Fellow. He co-organized the workshop on Computational Cameras and Displays at CVPR 2020 and a course on Computational Time-Resolved Imaging at SIGGRAPH 2020.