


Thesis Defence: Understanding and Mitigating the Threat of Physical Attacks in CNN Based Vision Systems

August 14, 2023 at 9:30 am - 12:30 pm

Abhijith Sharma, supervised by Dr. Apurva Narayan, will defend their thesis titled “Understanding and Mitigating the Threat of Physical Attacks in CNN Based Vision Systems” in partial fulfillment of the requirements for the degree of Master of Science in Computer Science.

An abstract for Abhijith Sharma’s thesis is included below.

Defences are open to all members of the campus community as well as the general public. Please email apurva.narayan@ubc.ca to receive the Zoom link for this defence.


ABSTRACT

Convolutional Neural Networks (CNNs) are integral to vision-based AI systems due to their remarkable performance on visual tasks. Although highly accurate, CNNs are vulnerable to natural and adversarial physical corruption in the real world, which poses a serious security concern for safety-critical systems. Such corruption often arises unexpectedly and degrades the model's performance. One of the most practical forms of adversarial corruption is the patch attack. Current patch attacks typically involve only a single adversarial patch; using multiple patches enables the attacker to craft a stronger adversary by exploiting various combinations of patches and their respective locations. Moreover, mitigating multiple patches is challenging in practice because the field is still nascent. In recent years, the primary focus has been on adversarial attacks, yet natural corruptions (e.g., snow, fog, dust) are an omnipresent threat to CNN-based systems, with equally devastating consequences. Hence, it is essential to make CNNs resilient against both adversarial attacks and natural disturbances.
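
For readers unfamiliar with patch attacks, the following minimal sketch shows the standard single-patch setup: a small square region of the image is treated as the only trainable tensor and optimized by gradient ascent to raise the classifier's loss. All names and hyperparameters here are illustrative assumptions, not the author's implementation; the Split, Mono-Multi, and Poly-Multi attacks described in the thesis generalize this setup to several patches and placements.

    import torch
    import torch.nn.functional as F

    def patch_attack(model, images, labels, patch_size=32, steps=200, lr=0.05):
        """Optimize one square patch, at a random fixed location,
        to maximize the classifier's loss on `images`."""
        _, c, h, w = images.shape
        y0 = torch.randint(0, h - patch_size + 1, (1,)).item()
        x0 = torch.randint(0, w - patch_size + 1, (1,)).item()

        # The patch is the only trainable tensor; the model stays frozen.
        patch = torch.rand(1, c, patch_size, patch_size, requires_grad=True)
        opt = torch.optim.Adam([patch], lr=lr)

        # Binary mask marking where the patch is pasted into the image.
        pad = (x0, w - x0 - patch_size, y0, h - y0 - patch_size)
        mask = F.pad(torch.ones(1, 1, patch_size, patch_size), pad)

        for _ in range(steps):
            placed = F.pad(patch.clamp(0, 1), pad)
            adv = images * (1 - mask) + placed * mask
            loss = -F.cross_entropy(model(adv), labels)  # gradient *ascent*
            opt.zero_grad()
            loss.backward()
            opt.step()
        return patch.detach().clamp(0, 1), (y0, x0)

Because only the patch pixels are optimized, the same trained patch can be printed and placed in a physical scene, which is what makes this class of attacks practical.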

The contributions of this thesis are three-fold. First, we propose naturalistic support artifacts (NSAs) for robust prediction against natural corruption. NSAs are natural-looking objects generated through artifact training that retain high visual fidelity in the scene; they are shown to be beneficial in scenarios where model parameters are inaccessible but adding artifacts to the scene is feasible. Second, we present three independent ways to attack with multiple patches, namely the Split, Mono-Multi, and Poly-Multi attacks, showcasing the true potential of patch attacks. These multi-patch attacks are shown to overcome existing state-of-the-art defenses, posing a serious risk to CNN-based systems. Third, we present a novel, model-agnostic patch-mitigation technique based on total-variation-based image resurfacing (TVR). TVR acts as a first line of defense against patch attacks by cleansing the image of suspicious perturbations, and it can nullify single- and multi-patch attacks in one scan of the image, making it the first technique to defend against multi-patch attacks.
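
The abstract does not spell out the TVR algorithm, but the underlying total-variation idea can be sketched briefly: adversarial patches introduce dense, high-frequency texture that sharply raises an image's total variation, so smoothing the image under a TV penalty erodes the patch while preserving large-scale structure. The sketch below uses `denoise_tv_chambolle` from scikit-image (0.19 or later); the `weight` value and the helper names are assumptions for illustration, not the thesis's actual pipeline.

    import numpy as np
    from skimage.restoration import denoise_tv_chambolle

    def total_variation(image: np.ndarray) -> float:
        """Anisotropic TV: sum of absolute differences between neighboring
        pixels. Adversarial patches tend to drive this value up sharply."""
        return float(np.abs(np.diff(image, axis=0)).sum()
                     + np.abs(np.diff(image, axis=1)).sum())

    def resurface(image: np.ndarray, weight: float = 0.2) -> np.ndarray:
        """Return a TV-smoothed copy of `image` (H x W x C floats in [0, 1]).
        Note: `weight` here is an assumed, illustrative setting."""
        return denoise_tv_chambolle(image, weight=weight, channel_axis=-1)

A downstream classifier would then be run on `resurface(img)` rather than `img`, which is what makes such a defense model-agnostic: it needs no retraining and no access to the model's gradients.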

This thesis moves one step closer to the goal of safe and robust CNN-based AI systems.


Additional Info

Registration/RSVP Required: Yes (see event description)
Event Type: Thesis Defence
Topic: Research and Innovation, Science, Technology and Engineering
Audiences: Alumni, Community, Faculty, Staff, Families, Partners and Industry, Students, Postdoctoral Fellows and Research Associates