The Optimization and Trustworthy Machine Learning (OPTML) group is an active research group at Michigan State University. Our research interests span machine learning (ML) / deep learning (DL), optimization, computer vision, security, signal processing, and data science, with a focus on developing learning algorithms and theory, as well as robust and explainable artificial intelligence (AI). These research themes provide a solid foundation for our long-term research objective: making AI systems scalable and trustworthy.
As AI moves from the lab into the real world (e.g., autonomous vehicles), ensuring its safety becomes a paramount requirement prior to deployment. Moreover, as datasets, ML/DL models, and learning tasks become increasingly complex, getting ML/DL to scale calls for new advances in learning algorithm design. More broadly, the study of robust and scalable AI can make a significant impact on machine learning theory and enable promising applications in, e.g., automated ML, meta-learning, privacy and security, hardware design, and big data analysis. We seek new learning frontiers where current algorithms become infeasible, and we work to formalize the foundations of secure learning.
We are always looking for passionate students to join the team as RA/TA/extern/intern/visiting students (more info)!
Author names in bold indicate our group members, and “*” denotes equal contribution.
Trustworthy AI: Robustness, fairness, and model explanation
Understanding and Improving Visual Prompting: A Label-Mapping Perspective
A. Chen, Y. Yao, P.-Y. Chen, Y. Zhang, S. Liu
CVPR’23
Revisiting and advancing fast adversarial training through the lens of bi-level optimization
Y. Zhang*, G. Zhang*, P. Khanduri, M. Hong, S. Chang, S. Liu
ICML’22
Reverse Engineering of Imperceptible Adversarial Image Perturbations
Y. Gong*, Y. Yao*, Y. Li, Y. Zhang, X. Liu, X. Lin, S. Liu
ICLR’22
How to Robustify Black-Box ML Models? A Zeroth-Order Optimization Perspective
Y. Zhang, Y. Yao, J. Jia, J. Yi, M. Hong, S. Chang, S. Liu
ICLR’22
Proper Network Interpretability Helps Adversarial Robustness in Classification
A. Boopathy, S. Liu, G. Zhang, C. Liu, P.-Y. Chen, S. Chang, L. Daniel
ICML’20
Scalable AI: Model compression, distributed learning, black-box optimization, and automated ML
Advancing Model Pruning via Bi-level Optimization
Y. Zhang*, Y. Yao*, P. Ram, P. Zhao, T. Chen, M. Hong, Y. Wang, S. Liu
NeurIPS’22
Distributed Adversarial Training to Robustify Deep Neural Networks at Scale
G. Zhang*, S. Lu*, Y. Zhang, X. Chen, P.-Y. Chen, Q. Fan, L. Martie, L. Horesh, M. Hong, S. Liu
UAI’22 (Best Paper Runner-Up Award)
Min-Max Optimization without Gradients: Convergence and Applications to Adversarial ML
S. Liu, S. Lu, X. Chen, Y. Feng, K. Xu, A. Al-Dujaili, M. Hong, U.-M. O’Reilly
ICML’20
A Primer on Zeroth-Order Optimization in Signal Processing and Machine Learning
S. Liu, P.-Y. Chen, B. Kailkhura, G. Zhang, A. O. Hero, P. K. Varshney
IEEE Signal Processing Magazine, 2020
An ADMM Based Framework for AutoML Pipeline Configuration
S. Liu*, P. Ram*, D. Vijaykeerthy, D. Bouneffouf, G. Bramble, H. Samulowitz, D. Wang, A. Conn, A. Gray
AAAI’20
We are grateful for funding from Michigan State University, MIT-IBM Watson AI Lab, DARPA, Cisco Research, NSF, DSO National Laboratories, LLNL, ARO.
NeurIPS 2023: 3 papers accepted – 1 spotlight and 2 posters. Congratulations to Jinghan, Jiancheng, and Yuguang for their spotlight acceptance with “Model Sparsity Simplifies Machine Unlearning”, and kudos to Yihua, Yimeng, Aochuan, Jinghan, and Jiancheng for their poster acceptance with “Selectivity Boosts Transfer Learning Efficiency”.
31 August 2023: Grateful to receive a grant from the Army Research Office (ARO) as the PI.
12 August 2023: Our paper on Adversarial Training for MoE has been chosen for an oral presentation at ICCV’23!
2 August 2023: Grateful to receive gift funding from Cisco Research as the PI.
21 July 2023: Call for participation in the 2nd AdvML-Frontiers Workshop @ ICML’23.
19 July 2023: One paper accepted at ICCV’23 on the adversarial robustness of Mixture-of-Experts.
29 June 2023: Grateful to receive a CPS Medium Grant Award from NSF as a co-PI.
19 June 2023: Slides of our CVPR’23 tutorial on Reverse Engineering of Deceptions (RED) are now available on the tutorial page. The tutorial recording is available here.