The Optimization and Trustworthy Machine Learning (OPTML) group is an active research group at Michigan State University. Our research interests span machine learning (ML)/deep learning (DL), optimization, computer vision, security, signal processing, and data science, with a focus on developing learning algorithms and theory as well as robust and explainable artificial intelligence (AI). These research themes provide a solid foundation for our long-term research objective: making AI systems scalable and trustworthy.
As AI moves from the lab into the real world (e.g., autonomous vehicles), ensuring its safety becomes a paramount requirement prior to deployment. Moreover, as datasets, ML/DL models, and learning tasks grow increasingly complex, getting ML/DL to scale calls for new advances in learning algorithm design. More broadly, the study of robust and scalable AI can make a significant impact on machine learning theory and enable promising applications in, e.g., automated ML, meta-learning, privacy and security, hardware design, and big data analysis. We seek a new learning frontier where current learning algorithms become infeasible, and we aim to formalize the foundations of secure learning.
We are always looking for passionate students to join the team as RAs, TAs, externs, interns, or visiting students (more info)!
Bold author names indicate our group members, and “*” indicates equal contribution.
Trustworthy AI: Robustness, fairness, and model explanation
Revisiting and Advancing Fast Adversarial Training Through the Lens of Bi-Level Optimization
Y. Zhang*, G. Zhang*, P. Khanduri, M. Hong, S. Chang, S. Liu
ICML’22
Reverse Engineering of Imperceptible Adversarial Image Perturbations
Y. Gong*, Y. Yao*, Y. Li, Y. Zhang, X. Liu, X. Lin, S. Liu
ICLR’22
How to Robustify Black-Box ML Models? A Zeroth-Order Optimization Perspective
Y. Zhang, Y. Yao, J. Jia, J. Yi, M. Hong, S. Chang, S. Liu
ICLR’22
Fairness Reprogramming
G. Zhang*, Y. Zhang*, Y. Zhang, W. Fan, Q. Li, S. Liu, S. Chang
NeurIPS’22
Proper Network Interpretability Helps Adversarial Robustness in Classification
A. Boopathy, S. Liu, G. Zhang, C. Liu, P.-Y. Chen, S. Chang, L. Daniel
ICML’20
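Several of the papers above study adversarial robustness through min-max (or bi-level) training: an inner step crafts worst-case input perturbations, and an outer step updates the model weights against them. As a point of reference, here is a minimal PyTorch sketch of standard one-step (FGSM-based) fast adversarial training; it is not the bi-level algorithm from the ICML’22 paper above, and the names (`fgsm_perturb`, `adversarial_training_step`) and settings (e.g., `eps = 8/255`, inputs in [0, 1]) are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, x, y, eps=8 / 255):
    """Inner maximization (one gradient step): push x in the direction
    that increases the loss, i.e., x + eps * sign(grad_x loss)."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    grad, = torch.autograd.grad(loss, x_adv)
    # Assumes image-like inputs normalized to [0, 1].
    return (x_adv + eps * grad.sign()).clamp(0, 1).detach()

def adversarial_training_step(model, optimizer, x, y, eps=8 / 255):
    """Outer minimization: update the weights on the perturbed batch."""
    x_adv = fgsm_perturb(model, x, y, eps)
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```

One-step attacks make adversarial training fast but are known to suffer from issues such as catastrophic overfitting, which is one motivation for revisiting fast adversarial training more carefully.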
Scalable AI: Model compression, distributed learning, black-box optimization, and automated ML
Advancing Model Pruning via Bi-level Optimization
Y. Zhang*, Y. Yao*, P. Ram, P. Zhao, T. Chen, M. Hong, Y. Wang, S. Liu
NeurIPS’22
Distributed Adversarial Training to Robustify Deep Neural Networks at Scale
G. Zhang*, S. Lu*, Y. Zhang, X. Chen, P.-Y. Chen, Q. Fan, L. Martie, L. Horesh, M. Hong, S. Liu
UAI’22 (Best Paper Runner-Up Award)
Min-Max Optimization without Gradients: Convergence and Applications to Adversarial ML
S. Liu, S. Lu, X. Chen, Y. Feng, K. Xu, A. Al-Dujaili, M. Hong, U.-M. O’Reilly
ICML’20
A Primer on Zeroth-Order Optimization in Signal Processing and Machine Learning
S. Liu, P.-Y. Chen, B. Kailkhura, G. Zhang, A. O. Hero, P. K. Varshney
IEEE Signal Processing Magazine, 2020
An ADMM Based Framework for AutoML Pipeline Configuration
S. Liu*, P. Ram*, D. Vijaykeerthy, D. Bouneffouf, G. Bramble, H. Samulowitz, D. Wang, A. Conn, A. Gray
AAAI’20
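A common thread in the scalable-AI work above is zeroth-order (ZO) optimization: when a model is a black box and gradients are unavailable, the gradient is replaced by an estimate built purely from function queries. Below is a minimal NumPy sketch of a standard randomized two-point gradient estimator of the kind surveyed in the Signal Processing Magazine primer above; the function name `zo_gradient` and the choices of smoothing radius `mu` and query budget `n_queries` are illustrative assumptions, not code from the papers.

```python
import numpy as np

def zo_gradient(f, x, mu=1e-3, n_queries=100, rng=None):
    """Randomized two-point zeroth-order gradient estimate of f at x.

    Averages d * (f(x + mu*u) - f(x)) / mu * u over random unit
    directions u (d = x.size); this approximates the true gradient
    up to an O(mu) smoothing bias, using only function evaluations.
    """
    rng = rng or np.random.default_rng(0)
    d = x.size
    g = np.zeros(d)
    for _ in range(n_queries):
        u = rng.standard_normal(d)
        u /= np.linalg.norm(u)  # random direction on the unit sphere
        g += d * (f(x + mu * u) - f(x)) / mu * u
    return g / n_queries

# Sanity check on f(x) = ||x||^2, whose true gradient is 2x.
f = lambda x: float(np.sum(x ** 2))
x = np.array([1.0, -2.0, 3.0])
print(zo_gradient(f, x, n_queries=5000))  # roughly [2., -4., 6.]
```

Plugging such an estimator into gradient-based updates is what enables black-box robustification and gradient-free min-max optimization in the papers above, at the cost of extra queries that scale with the problem dimension.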
We are grateful for funding from Michigan State University, the MIT-IBM Watson AI Lab, DARPA, Cisco Research, and NSF.
News
Four papers accepted at ICLR 2023: Issues and Fixes in IRM; TextGrad: Differentiable Solution to NLP Attack Generation; Provable Benefits of Sparse GNN; and Sample Complexity Analysis of ViT.
17 December 2022: One paper accepted at ASPDAC 2023: Data-Model-Circuit Tri-Design for Ultra-Light Video Intelligence on Edge Devices.
17 December 2022: One paper accepted at SANER 2023: Towards Both Robust and Accurate Code Models, with equal contribution from Jinghan Jia (MSU) and Shashank Srikant (MIT).
23 November 2022: Code repositories for Bi-Level Pruning (NeurIPS’22), Fairness Reprogramming (NeurIPS’22), and Visual Prompting by Iterative Label Mapping (arXiv) have been released.
22 November 2022: Dr. Sijia Liu has been selected as a presenter in the AAAI 2023 New Faculty Highlight Program.
12 October 2022: A tutorial on Foundational Robustness of Foundation Models will be given at NeurIPS’22.
11 October 2022: A tutorial on Bi-level Machine Learning will be given at AAAI’23.
14 September 2022: Two papers accepted at NeurIPS’22.