Robustness, Safety and Uncertainty
In medical image analysis, confidently predicting something false can have devastating consequences. Apart from achieving high predictive accuracy, one therefore needs to establish the circumstances under which algorithmic predictions generalize, or to give appropriate error bounds. This is highly relevant to patient safety and to the regulation of machine learning-based medical software. The goal in this branch is to develop deep learning methods founded in probability theory that can be used to estimate uncertainty stemming from sources such as the image reconstruction or the inherent limitations of an imaging modality.
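To make the idea of per-pixel uncertainty concrete, here is a minimal sketch in PyTorch. It uses Monte Carlo dropout as a generic, illustrative uncertainty-estimation technique; it is not the method of any of the publications listed below, and the network, image shape, and number of samples are placeholders chosen only for illustration.

```python
# Minimal sketch (illustrative only, not the methods from the publications below):
# Monte Carlo dropout as one generic way to obtain per-pixel uncertainty from a
# segmentation network. Model, shapes and sample count are arbitrary placeholders.
import torch
import torch.nn as nn


class TinySegNet(nn.Module):
    """Toy segmentation network with a dropout layer."""

    def __init__(self, in_ch=1, n_classes=2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 16, 3, padding=1), nn.ReLU(),
            nn.Dropout2d(p=0.5),
            nn.Conv2d(16, n_classes, 1),
        )

    def forward(self, x):
        return self.net(x)


def mc_dropout_predict(model, image, n_samples=20):
    """Return mean class probabilities and per-pixel predictive entropy."""
    model.train()  # keep dropout stochastic at inference time
    with torch.no_grad():
        probs = torch.stack(
            [torch.softmax(model(image), dim=1) for _ in range(n_samples)]
        )  # (n_samples, B, C, H, W)
    mean_probs = probs.mean(dim=0)  # (B, C, H, W)
    # Predictive entropy as a simple scalar uncertainty measure per pixel.
    entropy = -(mean_probs * mean_probs.clamp_min(1e-8).log()).sum(dim=1)
    return mean_probs, entropy  # entropy: (B, H, W)


if __name__ == "__main__":
    model = TinySegNet()
    image = torch.randn(1, 1, 64, 64)  # dummy single-channel image
    mean_probs, uncertainty = mc_dropout_predict(model, image)
    print(mean_probs.shape, uncertainty.shape)
```

High-entropy regions in the resulting map flag pixels where the model's predictions disagree across stochastic forward passes, which is the kind of signal that can feed into quality control or be reported alongside a segmentation.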
Selected publications
- Christian F Baumgartner, Kerem C Tezcan, Krishna Chaitanya, Andreas M Hötker, Urs J Muehlematter, Khoschy Schawkat, Anton S Becker, Olivio Donati, Ender Konukoglu, PHiSeg: Capturing uncertainty in medical image segmentation, Proc. MICCAI 2019
- Kerem C Tezcan, Christian F Baumgartner, Ender Konukoglu, Sampling possible reconstructions of undersampled acquisitions in MR imaging, arXiv preprint arXiv:2010.00042 (2020)
- Esther Puyol-Antón, Bram Ruijsink, Christian F Baumgartner, Pier-Giorgio Masci, Matthew Sinclair, Ender Konukoglu, Reza Razavi, Andrew P King, Automated quantification of myocardial tissue characteristics from native T1 mapping using neural networks with uncertainty-based quality-control, Journal of Cardiovascular Magnetic Resonance (2020)