Thesis Defence: Out-of-Domain Few-Shot Learning for Image Classification
April 5 at 2:30 pm - 5:30 pm
Reece Walsh, supervised by Dr. Mohamed Shehata, will defend their thesis titled “Out-of-Domain Few-Shot Learning for Image Classification” in partial fulfillment of the requirements for the degree of Master of Science in Computer Science.
An abstract for Reece’s thesis is included below.
Examinations are open to all members of the campus community as well as the general public. Registration is not required for in-person defences.
Deep learning models have consistently produced state-of-the-art results on large, labelled datasets for image classification. Although these models are highly accurate when trained on significant amounts of labelled data, generalization suffers notably when using relatively small datasets or testing on out-of-domain classes. Acquiring new labelled samples, however, can be expensive or impractical in some situations. To address the issue of small dataset sizes, few-shot learning has emerged as a prominent area of study that enables models to learn a new task from a very small amount of labelled data. Although techniques in this area have generally succeeded in improving in-domain few-shot capabilities, generalization to out-of-domain classes remains an open research area. In this work, we address out-of-domain performance in a few-shot setting. To this end, we first evaluate and experiment with nine state-of-the-art few-shot learning techniques on human cell classification, an out-of-domain task. We then use our findings to formulate three novel techniques for improved out-of-domain few-shot learning. The first technique, Embedding Mixup for Meta-Training, leverages a modified offline or online meta-training phase for improved image classification performance. Our second proposed technique, Fully Self-Supervised Few-Shot Learning, improves label efficiency by performing self-supervised, online fine-tuning for label-free in-domain or out-of-domain few-shot learning. Finally, the third proposed technique, Masked Autoencoders for Few-Shot Learning, generates new embeddings that strengthen few-shot accuracy in both in-domain and out-of-domain settings. These techniques are extensively evaluated on standard in-domain and out-of-domain datasets to illustrate the achieved improvements in generalization and robustness to unseen data.
Overall, we show that our proposed work is capable of improving performance by up to 16% on both out-of-domain and in-domain datasets.
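For readers unfamiliar with the mixup idea underlying the first technique, the standard formulation interpolates pairs of training examples and their labels. A minimal, illustrative sketch of mixup applied at the embedding level is shown below; this is a generic example for context, not the thesis's actual implementation, and the function name and parameters are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def embedding_mixup(emb_a, emb_b, y_a, y_b, alpha=0.2):
    """Interpolate two embedding vectors and their one-hot labels.

    The mixing coefficient lam is drawn from a Beta(alpha, alpha)
    distribution, as in the standard mixup formulation; the same lam
    is applied to both the embeddings and the labels.
    """
    lam = rng.beta(alpha, alpha)
    mixed_emb = lam * emb_a + (1.0 - lam) * emb_b
    mixed_y = lam * y_a + (1.0 - lam) * y_b
    return mixed_emb, mixed_y

# Toy example: two 4-dimensional embeddings from different classes.
e1, e2 = np.ones(4), np.zeros(4)
y1, y2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
emb, y = embedding_mixup(e1, e2, y1, y2)
```

The mixed label `y` remains a valid soft label (its entries sum to 1), which is what lets a classifier be trained directly on the interpolated pairs.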