News

28. November 2024
Thrilled to announce that our position paper, Rethinking Machine Unlearning for LLMs, has been accepted for publication in Nature Machine Intelligence. Congratulations to the team and our amazing collaborators for achieving this milestone!

18. November 2024
Grateful for receiving the NAIRR Pilot Award in the field of Artificial Intelligence and Intelligent Systems!

4. November 2024
One paper in WACV’25: Can Adversarial Examples Be Parsed to Reveal Victim Model Information?

26. September 2024
Six papers in NeurIPS’24, including one in the Datasets & Benchmarks track. Congrats to Yihua Zhang, Yuguang Yao, Jinghan Jia, and Yimeng Zhang for their outstanding leadership!

20. September 2024
One paper in EMNLP’24: SOUL: Unlocking the Power of Second-Order Optimization for LLM Unlearning

20. August 2024
Grateful to receive the Amazon Research Award for AI in Information Security–Spring 2024!

20. July 2024
The 3rd AdvML-Frontiers Workshop is now live and will be co-located at NeurIPS’24! Submit your papers by Aug 30.

11. July 2024
Dr. Liu has received the prestigious NSF Faculty Early Career Development (CAREER) Award!

10. July 2024
Congratulations to Yihua for receiving the 2024 MLCommons Rising Stars Award!

1. July 2024
Two papers in ECCV’24: (1) Exploring adversarial robustness of safety-driven concept-unlearned diffusion models through a diffusion classifier perspective [Paper]; (2) Challenging forgets to unveil when and why machine unlearning could be more challenging than common beliefs [Paper]

10. June 2024
Two papers accepted in ICML’24: (1) Benchmarking zeroth-order optimization for memory-efficient LLM fine-tuning; (2) Why does graph transformer generalize? A Theoretical Dive into Self-attention and Positional Encoding

19. April 2024
[Feature Article@IEEE SPM] We are thrilled to share that our tutorial article titled An Introduction to Bilevel Optimization: Foundations and Applications in Signal Processing and Machine Learning has been published in the IEEE Signal Processing Magazine as a Feature Article.

29. March 2024
We are thrilled to share that our research paper Reverse Engineering Deceptions in Machine- and Human-Centric Attacks has been officially published in Foundations and Trends® in Privacy and Security!

29. March 2024
[New Preprint] We are pleased to announce the release of a new paper on arXiv: Challenging Forgets: Unveiling the Worst-Case Forget Sets in Machine Unlearning!

5. March 2024
[New Preprints] We are pleased to announce the release of the following papers on arXiv: [1] Rethinking Machine Unlearning for Large Language Models; [2] Revisiting Zeroth-Order Optimization for Memory-Efficient LLM Fine-Tuning: A Benchmark; [3] UnlearnCanvas: A Stylized Image Dataset to Benchmark Machine Unlearning for Diffusion Models; [4] Challenging Forgets: Unveiling the Worst-Case Forget Sets in Machine Unlearning

5. March 2024
[Launch of the MSU-UM-ARO Project Website] The Lifelong Multimodal Fusion by Cross Layer Distributed Optimization project receives funding from the Army Research Office (ARO).

12. February 2024
Tutorial ‘Machine Unlearning in Computer Vision: Foundations and Applications’ is accepted for presentation by CVPR’24. See you in Seattle!

16. January 2024
Four papers accepted in ICLR’24: [1] Machine unlearning for safe image generation (Spotlight); [2] DeepZero: Training neural networks from scratch using only forward passes; [3] Backdoor data sifting; [4] Visual prompting automation.

9. November 2023
[New Preprint] We are pleased to announce the release of a new paper on arXiv: From Trojan Horses to Castle Walls: Unveiling Bilateral Backdoor Effects in Diffusion Models.

24. October 2023
Tutorial on ‘Zeroth-Order Machine Learning - Fundamental Principles and Emerging Applications in Foundation Models’ is accepted by ICASSP’24 and AAAI’24.

21. October 2023
[New Preprints] We are pleased to announce the release of the following papers on arXiv: [1] To Generate or Not? Safety-Driven Unlearned Diffusion Models Are Still Easy To Generate Unsafe Images … For Now; [2] SalUn: Empowering Machine Unlearning via Gradient-based Weight Saliency in Both Image Classification and Generation; [3] DeepZero: Scaling up Zeroth-Order Optimization for Deep Model Training.

22. September 2023
NeurIPS 2023: 3 Papers Accepted – 1 Spotlight and 2 Posters. Congratulations to Jinghan, Jiancheng, and Yuguang for their spotlight acceptance with ‘Model Sparsity Simplifies Machine Unlearning’. And kudos to Yihua, Yimeng, Aochuan, Jinghan, and Jiancheng for their poster acceptance with ‘Selectivity Boosts Transfer Learning Efficiency’.

31. August 2023
Grateful to receive a grant from the Army Research Office (ARO) as the PI.

12. August 2023
Our paper on Adversarial Training for MoE has been chosen for an Oral Presentation at ICCV’23!

2. August 2023
Grateful to receive gift funding from Cisco Research as the PI.

21. July 2023
Call for participation in 2nd AdvML-Frontiers Workshop@ICML’23

19. July 2023
One paper in ICCV’23 on Adversarial Robustness of Mixture-of-Experts

29. June 2023
Grateful to receive a CPS Medium Grant Award from NSF as a co-PI.

19. June 2023
Slides of our CVPR’23 tutorial on Reverse Engineering of Deceptions (RED) are now available at the tutorial page. The tutorial recording is available here.

4. June 2023
Our paper Visual Prompting for Adversarial Robustness received the Top 3% Paper Recognition at ICASSP 2023. Congrats to Aochuan, Peter (internship at OPTML in 2022), Yuguang, and Pin-Yu (IBM Research)!

24. April 2023
Two papers in ICML’23 and CFP for 2nd AdvML-Frontiers Workshop @ICML’23.

17. April 2023
[New Preprints] A new arXiv paper is released: Model Sparsification Can Simplify Machine Unlearning (see paper and code)!

13. April 2023
Grateful to receive a grant from Lawrence Livermore National Laboratory.

1. April 2023
Call for Papers and AdvML Rising Star Award Applications in the workshop AdvML-Frontiers, ICML’23

17. March 2023
[New Preprints] A new arXiv paper is released: Adversarial attacks can be parsed to reveal victim model information! (see [Paper])

17. March 2023
The 2nd Workshop on New Frontiers in Adversarial Machine Learning has been accepted by ICML’23

1. March 2023
Grateful to receive a grant from DSO National Laboratories.

27. February 2023
Two papers accepted in CVPR’23.

16. February 2023
Three papers accepted in ICASSP’23.

11. February 2023
CVPR’23 tutorial on Reverse Engineering of Deception: Foundations and Applications is accepted and will be given with Xiaoming Liu (MSU) and Xue Lin (Northeastern).

9. February 2023
AAAI’23 tutorial on Bi-level Optimization in ML: Foundations and Applications is now available!

20. January 2023
Four papers accepted in ICLR 2023: Issues and Fixes in IRM, TextGrad: Differentiable Solution to NLP Attack Generation, Provable Benefits of Sparse GNN, Sample Complexity Analysis of ViT

17. December 2022
One paper accepted in ASPDAC 2023: Data-Model-Circuit Tri-Design for Ultra-Light Video Intelligence on Edge Devices.

17. December 2022
One paper accepted in SANER 2023: Towards Both Robust and Accurate Code Models; Equally contributed by Jinghan Jia (MSU) and Shashank Srikant (MIT).

23. November 2022
Code Repositories of Bi-Level Pruning (NeurIPS’22), Fairness Reprogramming (NeurIPS’22), and Visual Prompting by Iterative Label Mapping (arXiv) have been released.

22. November 2022
Dr. Sijia Liu is selected as a presenter of the AAAI 2023 New Faculty Highlight Program.

12. October 2022
Tutorial on Foundational Robustness of Foundation Models will be given in NeurIPS’22.

11. October 2022
Tutorial on Bi-level Machine Learning will be given in AAAI’23.

14. September 2022
Two papers accepted in NeurIPS’22.

2. September 2022
Francesco Croce will give an invited talk on test-time defense on Sept. 7th.

31. August 2022
Dr. Sijia Liu is grateful to receive a Robust Intelligence (RI) Core Small Grant Award from NSF as the PI.

4. August 2022
Grateful to receive the Best Paper Runner-Up Award at UAI’22 in recognition of our work Distributed Adversarial Training to Robustify Deep Neural Networks at Scale.

16. May 2022
One paper accepted in UAI’22.

15. May 2022
Five papers accepted in ICML’22: [1] Bi-level adversarial training; [2] Winning lottery tickets from robust pretraining; [3] Pruning helps certified robustness; [4] Contrastive learning theory; and [5] Generalization theory of GCN.

20. April 2022
One paper accepted in IJCAI’22.

1. April 2022
CFP: The 1st Workshop on New Frontiers in Adversarial Machine Learning at ICML’22 (AdvML-Frontiers@ICML’22).

11. March 2022
Dr. Sijia Liu is grateful to receive gift funding from Cisco Research as the PI.

4. March 2022
Two papers accepted in CVPR’22. Congratulations to Yihua Zhang for his first CVPR paper!

21. January 2022
Five papers accepted in ICLR’22: [1] Reverse Engineering of Adversaries; [2] Black-Box Defense (Spotlight); [3] Learning to Optimize; [4] Self-Training Theory; [5] Distributed Learning. Congratulations to Yimeng Zhang, Yuguang Yao, and Jinghan Jia for their first ICLR papers!

15. January 2022
[New Preprints] Our work on interpreting and advancing adversarial training via bi-level optimization is now available on arXiv; equally contributed by Yihua Zhang (MSU) and Guanhua Zhang (UCSB).

15. October 2021
Dr. Sijia Liu receives a DARPA IP2 AIE Grant as a Co-PI.

28. September 2021
Five papers accepted in NeurIPS’21.

19. May 2021
Our MSU-NEU team (with PI Xiaoming Liu and co-PI Xue Lin) entered Phase 2 of the DARPA AIE RED program.

13. May 2021
One paper accepted in ICML’21