Group project highlights

Authors marked in blue are members of our group, and “*” indicates equal contribution.

Revisiting and Advancing Fast Adversarial Training through the Lens of Bi-level Optimization

This work reformulates adversarial training (AT) as a bi-level optimization (BLO) problem, advancing the optimization foundations of AT. We first show that the commonly used Fast-AT is equivalent to applying a stochastic gradient algorithm to a linearized BLO problem involving a sign operation. However, the discrete nature of the sign operation makes it difficult to analyze the algorithm's performance. Inspired by BLO, we design and analyze a new set of robust training algorithms, termed Fast Bi-level AT (Fast-BAT), which effectively defends against sign-based projected gradient descent (PGD) attacks without using any gradient sign method or explicit robust regularization. In practice, we show that our method yields substantial robustness improvements over multiple baselines across multiple models and datasets.
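To make the reformulation concrete, the schematic below contrasts the usual min-max AT objective with a generic bi-level form. This is a sketch only: the loss names ℓ_tr and ℓ_atk are illustrative placeholders and do not follow the paper's exact notation.

```latex
% Schematic comparison (symbols illustrative, not the paper's exact setup).
% Standard min-max adversarial training:
\[
  \min_{\theta} \; \mathbb{E}_{(x,y)\sim\mathcal{D}}
  \Bigl[ \max_{\|\delta\|_\infty \le \epsilon} \ell(\theta;\, x + \delta,\, y) \Bigr]
\]
% Bi-level view: an upper-level training problem constrained by a lower-level
% attack-generation problem, where \ell_{\mathrm{atk}} could be, e.g., the
% negative classification loss:
\[
  \min_{\theta} \; \mathbb{E}_{(x,y)\sim\mathcal{D}}
  \Bigl[ \ell_{\mathrm{tr}}\bigl(\theta;\, x + \delta^{*}(\theta),\, y\bigr) \Bigr]
  \quad\text{s.t.}\quad
  \delta^{*}(\theta) \in \operatorname*{arg\,min}_{\|\delta\|_\infty \le \epsilon}
  \ell_{\mathrm{atk}}(\theta;\, x + \delta,\, y)
\]
```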

Y. Zhang*, G. Zhang*, P. Khanduri, M. Hong, S. Chang, and S. Liu

ICML’22

Click here to read the full article.

Quarantine: Sparsity Can Uncover the Trojan Attack Trigger for Free

This paper reveals that in a backdoored model, the learned Trojan features are more stable against pruning than benign features. We further observe the existence of a ‘winning Trojan ticket’, a subnetwork that preserves the Trojan attack performance while retaining only chance-level performance on clean inputs. Building on these findings, we propose an algorithm that detects and reverse-engineers the Trojan attack without requiring any clean data.
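A minimal sketch (not the paper's code) of the pruning probe behind the ‘winning Trojan ticket’: magnitude-prune a hypothetically backdoored model to increasing sparsity and compare clean accuracy with the attack success rate (ASR) on trigger-stamped inputs. The model, data, trigger, and target class below are all stand-in assumptions.

```python
# Sketch only: model, data, and trigger are placeholders, not the paper's setup.
import copy
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

torch.manual_seed(0)
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 256), nn.ReLU(),
                      nn.Linear(256, 10))      # stand-in "backdoored" network
base_state = copy.deepcopy(model.state_dict())

x_clean = torch.randn(64, 1, 28, 28)           # placeholder clean batch
y_clean = torch.randint(0, 10, (64,))
x_trig = x_clean.clone()
x_trig[:, :, -4:, -4:] = 2.0                   # hypothetical patch trigger
y_target = torch.zeros(64, dtype=torch.long)   # hypothetical attack target class

def accuracy(m, x, y):
    with torch.no_grad():
        return (m(x).argmax(1) == y).float().mean().item()

for sparsity in (0.5, 0.9, 0.95, 0.99):
    model.load_state_dict(base_state)          # restart from the dense weights
    params = [(mod, "weight") for mod in model.modules()
              if isinstance(mod, nn.Linear)]
    # global magnitude pruning across all Linear layers
    prune.global_unstructured(params, pruning_method=prune.L1Unstructured,
                              amount=sparsity)
    print(f"sparsity={sparsity:.2f}  "
          f"clean_acc={accuracy(model, x_clean, y_clean):.2f}  "
          f"ASR={accuracy(model, x_trig, y_target):.2f}")
    for mod, name in params:                   # make pruning permanent so the
        prune.remove(mod, name)                # state_dict keys match again
```

On an actually backdoored network, the paper's observation would show up here as ASR staying high at sparsity levels where clean accuracy has already collapsed.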

T. Chen*, Z. Zhang*, Y. Zhang*, S. Chang, S. Liu, Z. Wang

CVPR’22

Click here to read the full article.

Reverse Engineering of Imperceptible Adversarial Image Perturbations

In this paper, we study the problem of Reverse Engineering of Deceptions (RED), whose goal is to recover the attack toolchain's signatures (e.g., adversarial perturbations and adversary-salient image regions) from an adversarial instance. Our work takes a solid step toward formalizing the RED problem and developing a systematic RED pipeline, covering not only a solution method but also a complete set of evaluation metrics.
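The sketch below illustrates the core RED idea in its simplest form: learn a denoiser D that maps an adversarial image back to its benign version, then read off the recovered perturbation as x_adv − D(x_adv). The network, synthetic "attack," and training loop are illustrative assumptions, not the paper's pipeline.

```python
# Sketch only: a toy denoiser trained on paired (benign, adversarial) data.
import torch
import torch.nn as nn

torch.manual_seed(0)
denoiser = nn.Sequential(                      # toy stand-in for a RED denoiser
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 3, 3, padding=1),
)
opt = torch.optim.Adam(denoiser.parameters(), lr=1e-3)

x_benign = torch.rand(32, 3, 32, 32)           # placeholder benign images
delta = 0.03 * torch.randn_like(x_benign)      # placeholder "attack" perturbations
x_adv = (x_benign + delta).clamp(0, 1)

for step in range(200):
    x_rec = denoiser(x_adv)
    loss = nn.functional.mse_loss(x_rec, x_benign)  # reconstruct the benign image
    opt.zero_grad()
    loss.backward()
    opt.step()

# Recovered perturbation estimate: what the denoiser removed.
pert_est = x_adv - denoiser(x_adv).detach()
print("estimation error:", (pert_est - (x_adv - x_benign)).abs().mean().item())
```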

Y. Gong*, Y. Yao*, Y. Li, Y. Zhang, X. Liu, X. Lin, S. Liu

ICLR’22

Click here to read the full article.

How to Robustify Black-Box ML Models? A Zeroth-Order Optimization Perspective

In this paper, we study the problem of black-box defense: securing a black-box model against adversarial attacks using only its input-output queries. We integrate denoised smoothing (DS) with zeroth-order (ZO) optimization to build a feasible black-box defense framework. We further propose ZO-AE-DS, which leverages an autoencoder (AE) to bridge the gap between first-order (FO) and ZO optimization.

Y. Zhang, Y. Yao, J. Jia, J. Yi, M. Hong, S. Chang, S. Liu

ICLR’22 (spotlight, acceptance rate 5%)

Click here to read the full article.