Thesis Defence: Automating Computed Tomography Diagnostic Image Quality Control Using Deep Learning
December 12, 9:00 am to 1:00 pm
Anubhav Gupta, supervised by Dr. John Braun and Dr. Mohamed Shehata, will defend their thesis titled “Automating Computed Tomography Diagnostic Image Quality Control Using Deep Learning” in partial fulfillment of the requirements for the degree of Master of Science in Computer Science.
An abstract for Anubhav Gupta’s thesis is included below.
Defences are open to all members of the campus community as well as the general public. Registration is not required for in-person defences.
Medical imaging quality control is traditionally performed through routine manual testing, which is time-consuming, error-prone, and costly. There has been limited exploration of automating this quality control process with deep learning techniques. Several factors contribute to this gap in research, including 1) the need for a specialized labelled dataset on which to evaluate the problem, 2) the scarcity of labelled medical imaging data for training state-of-the-art deep learning models capable of robust generalization on medical imaging classification tasks, and 3) the absence of a dedicated deep learning backbone pre-trained on medical imaging data and capable of robust generalization across a variety of medical image analysis tasks. This thesis first presents a study that evaluates state-of-the-art deep convolutional neural network models for automating the quality control process for medical images. By harnessing transfer learning, the goal is to enhance the availability and effectiveness of medical image diagnostic scanners for applications in patient diagnosis, treatment, research, and development. We evaluate the performance of five models, among which InceptionResNetV2 achieves the best accuracy at 99%. However, when these models are tested on out-of-domain datasets, the results reveal a pressing need for more resilient models capable of automating quality control across different healthcare facilities.
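The transfer-learning approach described above reuses a backbone pre-trained on a large generic dataset and trains only a new classification head on the target task. The following is a purely illustrative, miniature sketch of that idea: the "frozen backbone" here is a toy stand-in function, not the thesis's actual pipeline or a real CNN such as InceptionResNetV2.

```python
import math

# Transfer learning in miniature: a "frozen" feature extractor stands in
# for a pre-trained CNN backbone, and only a small logistic-regression
# head is trained on the target task.
def frozen_backbone(x):
    # Hypothetical fixed feature map; in practice this would be a
    # pre-trained network with its weights frozen.
    return [x[0] + x[1], x[0] - x[1]]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Toy labelled data: class 1 when the first input exceeds the second.
data = [([1.0, 0.0], 1), ([0.0, 1.0], 0), ([2.0, 1.0], 1), ([1.0, 3.0], 0)]

w = [0.0, 0.0]  # only the head's weights are ever updated
lr = 0.5
for _ in range(200):
    for x, y in data:
        f = frozen_backbone(x)
        p = sigmoid(w[0] * f[0] + w[1] * f[1])
        g = p - y  # gradient of the log-loss with respect to the logit
        w[0] -= lr * g * f[0]
        w[1] -= lr * g * f[1]

preds = [int(sigmoid(w[0] * frozen_backbone(x)[0]
                     + w[1] * frozen_backbone(x)[1]) > 0.5) for x, _ in data]
print(preds)
```

Because the backbone is never updated, only the tiny head needs labelled data, which is why transfer learning is attractive when labelled medical images are scarce.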
In the second part of the thesis, we employ a self-supervised approach to pre-train a specialized backbone, referred to as MedMAE, on a diverse dataset of over two million publicly available medical images. Our results demonstrate that MedMAE effectively captures the unique characteristics of medical images, leading to improved accuracy on downstream medical imaging tasks, including on out-of-domain datasets. This approach holds promise for enhancing the utility and efficiency of medical image analysis, bridging the gap caused by the scarcity of labelled data in the healthcare domain. Furthermore, to evaluate the effectiveness of MedMAE, we employ an Augmented Reality tool for precise manual annotation of discrepancies between the original medical images and the reconstructions produced by MedMAE.
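The name MedMAE suggests the masked-autoencoder (MAE) style of self-supervised pre-training, in which random image patches are hidden and the model learns by reconstructing them. The abstract does not spell out the procedure, so the following is a hypothetical sketch of just the patch-masking step, with illustrative parameter values and a plain list-of-lists image in place of real scan data.

```python
import random

def mask_patches(image, patch_size, mask_ratio, seed=0):
    """Split a square image into non-overlapping patches and zero out a
    random subset, as in masked-autoencoder (MAE) pre-training.
    Returns the masked image and the set of masked patch indices."""
    n = len(image)                      # image is an n x n grid of pixels
    per_side = n // patch_size          # patches per side
    num_patches = per_side * per_side
    num_masked = int(num_patches * mask_ratio)

    rng = random.Random(seed)
    masked_ids = set(rng.sample(range(num_patches), num_masked))

    masked = [row[:] for row in image]  # copy so the original is untouched
    for pid in masked_ids:
        r0 = (pid // per_side) * patch_size
        c0 = (pid % per_side) * patch_size
        for r in range(r0, r0 + patch_size):
            for c in range(c0, c0 + patch_size):
                masked[r][c] = 0        # hide the pixels in this patch
    return masked, masked_ids

# Example: a 4x4 image split into 2x2 patches, masking 75% of patches
# (a commonly cited MAE masking ratio, used here only for illustration).
img = [[1, 2, 3, 4],
       [5, 6, 7, 8],
       [9, 10, 11, 12],
       [13, 14, 15, 16]]
masked_img, ids = mask_patches(img, patch_size=2, mask_ratio=0.75)
print(len(ids))  # 3 of the 4 patches are masked
```

A model pre-trained this way must infer the hidden patches from their surroundings, which is what lets it learn the structure of medical images without any labels.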