Welcome to the OPTML Group

About Us

The OPtimization and Trustworthy Machine Learning (OPTML) group is an active research group at Michigan State University. Our research interests span machine learning (ML)/deep learning (DL), optimization, computer vision, security, signal processing, and data science, with a focus on developing learning algorithms and theory as well as robust and explainable artificial intelligence (AI). These research themes provide a solid foundation for our long-term research objective: making AI systems scalable and trustworthy.

As AI moves from the lab into the real world (e.g., autonomous vehicles), ensuring its safety becomes a paramount requirement prior to deployment. Moreover, as datasets, ML/DL models, and learning tasks become increasingly complex, getting ML/DL to scale calls for new advances in learning algorithm design. More broadly, the study of robust and scalable AI can make a significant impact on machine learning theory and enable promising applications in, e.g., automated ML, meta-learning, privacy and security, hardware design, and big data analysis. We seek new learning frontiers where current learning algorithms become infeasible, and we aim to formalize the foundations of secure learning.

We are always looking for passionate students to join the team as RAs, TAs, externs, interns, or visiting students (more info)!

Representative Publications

Authors marked in bold are our group members, and “*” indicates equal contribution.

Trustworthy AI: Robustness, fairness, and model explanation

Scalable AI: Model & data compression, distributed learning, black-box optimization, and automated ML

Sponsors

We are grateful for funding from Michigan State University, the MIT-IBM Watson AI Lab, DARPA, Cisco Research, NSF, DSO National Laboratories, LLNL, and ARO.



News

19. April 2024

[Feature Article@IEEE SPM] We are thrilled to share that our tutorial article titled An Introduction to Bilevel Optimization: Foundations and Applications in Signal Processing and Machine Learning has been published in the IEEE Signal Processing Magazine as a Feature Article.

29. March 2024

We are thrilled to share that our research paper Reverse Engineering Deceptions in Machine- and Human-Centric Attacks has been officially published in Foundations and Trends® in Privacy and Security!

29. March 2024

[New Preprint] We are pleased to announce the release of a new paper on arXiv: Challenging Forgets: Unveiling the Worst-Case Forget Sets in Machine Unlearning!

5. March 2024

[New Preprints] We are pleased to announce the release of the following papers on arXiv: [1] Rethinking Machine Unlearning for Large Language Models; [2] Revisiting Zeroth-Order Optimization for Memory-Efficient LLM Fine-Tuning: A Benchmark; [3] UnlearnCanvas: A Stylized Image Dataset to Benchmark Machine Unlearning for Diffusion Models; [4] Challenging Forgets: Unveiling the Worst-Case Forget Sets in Machine Unlearning.

5. March 2024

[Launch of the MSU-UM-ARO Project Website] The Lifelong Multimodal Fusion by Cross Layer Distributed Optimization project has received funding from the Army Research Office (ARO).

12. February 2024

The tutorial ‘Machine Unlearning in Computer Vision: Foundations and Applications’ has been accepted for presentation at CVPR’24. See you in Seattle!

16. January 2024

Four papers accepted at ICLR’24: [1] Machine unlearning for safe image generation (Spotlight); [2] DeepZero: Training neural networks from scratch using only forward passes; [3] Backdoor data sifting; [4] Visual prompting automation.

9. November 2023

[New Preprint] We are pleased to announce the release of a new paper on arXiv: From Trojan Horses to Castle Walls: Unveiling Bilateral Backdoor Effects in Diffusion Models.

... see all News