Interpretable Machine Learning
The rapid development and early successes of deep learning technology in medical image analysis (and other fields) have led the field to prioritize predictive accuracy over integration with human users. However, it is becoming increasingly clear that black-box models are unlikely to find clinical acceptance, can raise ethical problems when neither the patient nor the doctor understands the reasoning behind a prediction, and are difficult to certify. Our research goals in this area are to develop adequate explanations for the predictions of deep learning models and, perhaps more importantly, to build inherently interpretable models rooted in prior clinical knowledge.
Selected publications
- Christian F Baumgartner, Konstantinos Kamnitsas, Jacqueline Matthew, Tara P Fletcher, Sandra Smith, Lisa M Koch, Bernhard Kainz, Daniel Rueckert, SonoNet: real-time detection and localisation of fetal standard scan planes in freehand ultrasound, IEEE Transactions on Medical Imaging (2017)