Welcome to the OPTML Group

About Us

The OPtimization and Trustworthy Machine Learning (OPTML) group is an active research group at Michigan State University. Our research interests span machine learning (ML)/deep learning (DL), optimization, computer vision, security, signal processing, and data science, with a focus on developing learning algorithms and theory as well as robust and explainable artificial intelligence (AI). These research themes provide a solid foundation for our long-term research objective: making AI systems scalable and trustworthy.

As AI moves from the lab into the real world (e.g., autonomous vehicles), ensuring its safety becomes a paramount requirement prior to deployment. Moreover, as datasets, ML/DL models, and learning tasks grow increasingly complex, scaling ML/DL calls for new advances in learning algorithm design. More broadly, the study of robust and scalable AI can make a significant impact on machine learning theory and enable promising applications in, e.g., automated ML, meta-learning, privacy and security, hardware design, and big data analysis. We seek new learning frontiers where current algorithms become infeasible, and we formalize the foundations of secure learning.

We are always looking for passionate students to join the team as RAs, TAs, externs, interns, or visiting students (more info)!

Representative Publications

Authors in bold are group members; “*” indicates equal contribution.

Trustworthy AI: Robustness, fairness, and model explanation

Scalable AI: Model compression, distributed learning, black-box optimization, and automated ML


We are grateful for funding from Michigan State University, MIT-IBM Watson AI Lab, DARPA, Cisco Research and NSF.


20. January 2023

Four papers accepted at ICLR 2023: Issues and Fixes in IRM, TextGrad: Differentiable Solution to NLP Attack Generation, Provable Benefits of Sparse GNN, Sample Complexity Analysis of ViT.

17. December 2022

One paper accepted at ASPDAC 2023: Data-Model-Circuit Tri-Design for Ultra-Light Video Intelligence on Edge Devices.

17. December 2022

One paper accepted at SANER 2023: Towards Both Robust and Accurate Code Models; equally contributed by Jinghan Jia (MSU) and Shashank Srikant (MIT).

23. November 2022

Code repositories for Bi-Level Pruning (NeurIPS’22), Fairness Reprogramming (NeurIPS’22), and Visual Prompting by Iterative Label Mapping (arXiv) have been released.

22. November 2022

Dr. Sijia Liu has been selected as a presenter in the AAAI 2023 New Faculty Highlight Program.

12. October 2022

A tutorial on Foundational Robustness of Foundation Models will be given at NeurIPS’22.

11. October 2022

A tutorial on Bi-level Machine Learning will be given at AAAI’23.

14. September 2022

Two papers accepted at NeurIPS’22.

... see all News