The Optimization and Trustworthy Machine Learning (OPTML) group is an active research group at Michigan State University. Our research interests span machine learning (ML), deep learning (DL), optimization, computer vision, security, signal processing, and data science, with a focus on developing learning algorithms and theory as well as robust and explainable artificial intelligence (AI). These research themes provide a solid foundation for our long-term research objective: making AI systems scalable and trustworthy.
As AI moves from the lab into the real world (e.g., autonomous vehicles), ensuring its safety becomes a paramount requirement prior to deployment. Moreover, as datasets, ML/DL models, and learning tasks grow increasingly complex, getting ML/DL to scale calls for new advances in learning algorithm design. More broadly, the study of robust and scalable AI can make a significant impact on machine learning theory and enable promising applications in, e.g., automated ML, meta-learning, privacy and security, hardware design, and big-data analysis. We seek new learning frontiers where current algorithms become infeasible, and we aim to formalize the foundations of secure learning.
We are always looking for passionate students to join the team as RAs, TAs, externs, interns, or visiting students (more info)!
Authors marked in bold are our group members, and “*” indicates equal contribution.
Trustworthy AI: Robustness, fairness, and model explanation
Model sparsification can simplify machine unlearning
J. Jia*, J. Liu*, P. Ram, Y. Yao, G. Liu, Y. Liu, P. Sharma, S. Liu
NeurIPS’23 (Spotlight)
Understanding and Improving Visual Prompting: A Label-Mapping Perspective
A. Chen, Y. Yao, P.-Y. Chen, Y. Zhang, S. Liu
CVPR’23
Revisiting and advancing fast adversarial training through the lens of bi-level optimization
Y. Zhang*, G. Zhang*, P. Khanduri, M. Hong, S. Chang, S. Liu
ICML’22
How to Robustify Black-Box ML Models? A Zeroth-Order Optimization Perspective
Y. Zhang, Y. Yao, J. Jia, J. Yi, M. Hong, S. Chang, S. Liu
ICLR’22 (Spotlight)
Reverse Engineering of Imperceptible Adversarial Image Perturbations
Y. Gong*, Y. Yao*, Y. Li, Y. Zhang, X. Liu, X. Lin, S. Liu
ICLR’22
Scalable AI: Model & data compression, distributed learning, black-box optimization, and automated ML
Selectivity Drives Productivity: Efficient Dataset Pruning for Enhanced Transfer Learning
Y. Zhang*, Y. Zhang*, A. Chen, J. Jia, J. Liu, G. Liu, M. Hong, S. Chang, S. Liu
NeurIPS’23
Advancing Model Pruning via Bi-level Optimization
Y. Zhang*, Y. Yao*, P. Ram, P. Zhao, T. Chen, M. Hong, Y. Wang, S. Liu
NeurIPS’22
Distributed Adversarial Training to Robustify Deep Neural Networks at Scale
G. Zhang*, S. Lu*, Y. Zhang, X. Chen, P.-Y. Chen, Q. Fan, L. Martie, L. Horesh, M. Hong, S. Liu
UAI’22 (Best Paper Runner-Up Award)
Min-Max Optimization without Gradients: Convergence and Applications to Adversarial ML
S. Liu, S. Lu, X. Chen, Y. Feng, K. Xu, A. Al-Dujaili, M. Hong, U.-M. O’Reilly
ICML’20
A Primer on Zeroth-Order Optimization in Signal Processing and Machine Learning
S. Liu, P.-Y. Chen, B. Kailkhura, G. Zhang, A. O. Hero, P. K. Varshney
IEEE Signal Processing Magazine, 2020
We are grateful for funding from Michigan State University, the MIT-IBM Watson AI Lab, DARPA, Cisco Research, NSF, DSO National Laboratories, LLNL, and ARO.
[Feature Article @ IEEE SPM] We are thrilled to share that our tutorial article, An Introduction to Bilevel Optimization: Foundations and Applications in Signal Processing and Machine Learning, has been published as a Feature Article in the IEEE Signal Processing Magazine.
29 March 2024: We are thrilled to share that our research paper, Reverse Engineering Deceptions in Machine- and Human-Centric Attacks, has been officially published in Foundations and Trends® in Privacy and Security!
29 March 2024: [New Preprint] We are pleased to announce the release of a new paper on arXiv: Challenging Forgets: Unveiling the Worst-Case Forget Sets in Machine Unlearning!
5 March 2024: [New Preprints] We are pleased to announce the release of the following papers on arXiv: [1] Rethinking Machine Unlearning for Large Language Models; [2] Revisiting Zeroth-Order Optimization for Memory-Efficient LLM Fine-Tuning: A Benchmark; [3] UnlearnCanvas: A Stylized Image Dataset to Benchmark Machine Unlearning for Diffusion Models; [4] Challenging Forgets: Unveiling the Worst-Case Forget Sets in Machine Unlearning.
5 March 2024: [Launch of the MSU-UM-ARO Project Website] The Lifelong Multimodal Fusion by Cross Layer Distributed Optimization project has received funding from the Army Research Office (ARO).
12 February 2024: Our tutorial ‘Machine Unlearning in Computer Vision: Foundations and Applications’ has been accepted for presentation at CVPR’24. See you in Seattle!
16 January 2024: Four papers accepted at ICLR’24: [1] Machine unlearning for safe image generation (Spotlight); [2] DeepZero: Training neural networks from scratch using only forward passes; [3] Backdoor data sifting; [4] Visual prompting automation.
9 November 2023: [New Preprint] We are pleased to announce the release of a new paper on arXiv: From Trojan Horses to Castle Walls: Unveiling Bilateral Backdoor Effects in Diffusion Models.