Faculty Candidate Seminar

Deep Learning for Medical Imaging: Mapping Sensor Data to Decisions

Morteza Mardani
Research Scientist
Stanford University

The advent of AI is arguably a renaissance for medicine, where life-and-death decisions can be significantly improved using data and algorithms. Deep learning is the workhorse for automation and perception from abundant clinical data. In a sensitive domain such as medical imaging, however, AI faces severe challenges: (c1) robustness; (c2) explainability; and (c3) data scarcity. This talk sheds light on solutions to these challenges via a principled design of deep learning algorithms and an analysis of their behavior using a mixture of theory and empirical experiments. While the scope and motivation are wide, the talk primarily focuses on image recovery from compressive measurements.

In essence, we cast image recovery as a denoising task that maps a low-quality (linear) image estimate to a high-quality estimate using a deep residual network (ResNet). To obtain images that are valuable for diagnostic decisions, we leverage deep generative adversarial networks (GANs) to learn a projection onto a manifold of high-perceptual-quality medical images with fine delineation of detail. This so-termed GANCS scheme (CS refers to compressed sensing) is approximately MAP-optimal, in contrast with pixel-wise training schemes that average out the admissible solutions and lead to blurry images. To study robustness (c1), we use a variational autoencoder (VAE) as the GAN generator, which provides an uncertainty map for assessing per-pixel confidence in subsequent decision-making tasks. To deal with data scarcity (c3), we develop the neural proximal gradient descent (NPGD) algorithm, which designs recurrent neural networks (RNNs) in a principled way inspired by iterative optimization algorithms. Modeling the proximal map with only a few residual blocks (a small number of trainable variables), the trained RNN proves very effective at recovering MR images. This is also a useful step toward the explainability (c2) of neural networks.
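To make the unrolling concrete, here is a minimal sketch of NPGD under illustrative assumptions: a generic linear forward operator A (e.g., a subsampled Fourier encoding for MRI) supplied as a callable along with its adjoint, and a small residual proximal network whose weights are shared across iterations, which is what makes the unrolled scheme recurrent. The names (ProxNet, npgd_recover, n_iters, step) are hypothetical, not the authors' code.

import torch
import torch.nn as nn

class ProxNet(nn.Module):
    """Small residual proximal map: a few conv layers with a skip connection."""
    def __init__(self, channels=2, width=64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, width, 3, padding=1), nn.ReLU(),
            nn.Conv2d(width, width, 3, padding=1), nn.ReLU(),
            nn.Conv2d(width, channels, 3, padding=1),
        )

    def forward(self, x):
        return x + self.body(x)           # residual refinement

def npgd_recover(y, A, A_adj, prox, n_iters=10, step=1.0):
    """Unrolled proximal gradient descent:
       x_{t+1} = prox( x_t + step * A^H (y - A x_t) ),
       with the same prox network reused at every iteration (recurrent weights)."""
    x = A_adj(y)                          # low-quality linear estimate as initialization
    for _ in range(n_iters):
        x = x + step * A_adj(y - A(x))    # gradient step on the data-fidelity term
        x = prox(x)                       # learned projection toward the image manifold
    return x

The weight sharing across iterations is what keeps the trainable variable count small, which is the point made above about coping with scarce training data.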

The generalization performance of NPGD is also analyzed using Stein's unbiased risk estimator (SURE). It is particularly insightful to see how NPGD attains the network degrees of freedom (DOF) in low training-sample-complexity regimes, revealing the eigenvalues of the end-to-end network Jacobian as the key factor for generalization. The analysis is confirmed with extensive empirical experiments on real-world MRI and natural image datasets.
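As a rough illustration of the estimator, the sketch below computes a Monte Carlo SURE for a denoising map f applied to measurements y = x + noise with known noise level sigma; the divergence term, the trace of the end-to-end Jacobian, plays the role of the network's effective degrees of freedom and is estimated with a single random probe. The names (mc_divergence, sure_risk, eps) and the single-probe choice are assumptions for illustration.

import torch

def mc_divergence(f, y, eps=1e-3):
    """Monte Carlo estimate of div f(y) = trace of the Jacobian of f at y."""
    b = torch.randn_like(y)
    return (b * (f(y + eps * b) - f(y))).sum() / eps

def sure_risk(f, y, sigma):
    """SURE estimate of the per-pixel mean-squared error E||f(y) - x||^2 / n,
       computed without access to the ground-truth image x."""
    n = y.numel()
    residual = (f(y) - y).pow(2).sum() / n
    dof = mc_divergence(f, y)             # effective degrees of freedom
    return residual - sigma**2 + 2 * sigma**2 * dof / n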

Morteza Mardani is a research scientist in the Information Systems Laboratory (ISL), Department of Electrical Engineering, Stanford University. He received his PhD in Electrical Engineering with a minor in Mathematics from the University of Minnesota, Twin Cities, in 2015. He was a visiting scholar at the Department of Electrical Engineering & Computer Science and the International Computer Science Institute, UC Berkeley, from January to June 2015, and then a postdoctoral fellow at Stanford's ISL until December 2017. His research interests lie in machine learning and statistical signal processing for data science and artificial intelligence, where he is currently working on deep learning and generative adversarial networks for biomedical imaging. He is the recipient of a number of awards, including a Young Author Best Paper Award from the IEEE Signal Processing Society (2017) and a Best Student Paper Award from the IEEE Workshop on Signal Processing Advances in Wireless Communications (June 2012).

Sponsored by

ECE

Faculty Host

Jeff Fessler