Robust Computer Vision Against Adversarial Examples and Domain Shifts

Embargo until
2023-03-31
Publisher
Johns Hopkins University
Abstract
Recent advances in deep learning have achieved remarkable success in various computer vision problems. Driven by growing computing resources and vast amounts of data, deep learning technology is reshaping human life. However, Deep Neural Networks (DNNs) have been shown to be vulnerable to adversarial examples: carefully crafted perturbations can easily fool DNNs into making wrong predictions. DNNs also generalize poorly under domain shifts, suffering performance degradation when encountering data from new visual distributions. We view both issues through the lens of robustness: existing deep learning technology is not reliable enough for many scenarios, among which adversarial examples and domain shifts are the most critical. This lack of reliability prevents DNNs from being deployed in safety-critical computer vision applications, such as self-driving vehicles and medical instruments. To overcome these challenges, we investigate and address the robustness of deep learning-based computer vision approaches. The first part of this thesis robustifies computer vision models against adversarial examples. We study adversarial robustness from four aspects: novel attacks that strengthen benchmarks, empirical defenses validated by a third-party evaluator, generalizable defenses that withstand multiple and unforeseen attacks, and defenses designed for less explored tasks. The second part improves robustness against domain shifts via domain adaptation. We study two important settings: unsupervised domain adaptation, which is the most common, and source-free domain adaptation, which is more practical in real-world scenarios. The last part explores the intersection of adversarial robustness and domain adaptation to provide new insights for robust DNNs. We study two directions: adversarial defense for domain adaptation and adversarial defense via domain adaptation. This dissertation aims at more robust, reliable, and trustworthy computer vision.
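The "carefully crafted perturbations" mentioned in the abstract can be illustrated with a minimal sketch of the fast gradient sign method (FGSM, Goodfellow et al.), one standard way to construct adversarial examples. The toy linear classifier, random weights, and epsilon value below are illustrative assumptions for this sketch only; they are not the models, attacks, or defenses studied in the thesis.

```python
import numpy as np

# Toy linear classifier (illustrative assumption, not from the thesis):
# logits = W @ x, with 2 classes and 4 input features.
rng = np.random.default_rng(0)
W = rng.normal(size=(2, 4))
x = rng.normal(size=4)   # a clean input
y = 0                    # its (assumed) true label

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def ce_loss(W, x, y):
    # Cross-entropy loss of the linear model on input x with label y.
    return -np.log(softmax(W @ x)[y])

def ce_grad_x(W, x, y):
    # Gradient of the cross-entropy loss w.r.t. the INPUT x:
    # W^T (softmax(Wx) - onehot(y)).
    p = softmax(W @ x)
    p[y] -= 1.0
    return W.T @ p

# FGSM: nudge each input coordinate by +/- eps in the direction that
# increases the loss, i.e. an L-infinity-bounded perturbation.
eps = 0.5
x_adv = x + eps * np.sign(ce_grad_x(W, x, y))

print("loss on clean input:      ", ce_loss(W, x, y))
print("loss on adversarial input:", ce_loss(W, x_adv, y))
print("max perturbation (== eps):", np.abs(x_adv - x).max())
```

Because the loss of a linear model is convex in the input, the FGSM step is guaranteed to increase the loss here whenever the gradient is nonzero; for deep networks the same one-step perturbation is only a first-order approximation, yet it is often enough to flip the prediction, which is what makes adversarial examples a robustness concern.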
Keywords
Adversarial machine learning, Adversarial robustness, Domain adaptation, Computer vision