Research

Jump to: Research Highlights, Talks. For the full publication list, please visit this page.

As AI moves from the lab into the real world (e.g., autonomous vehicles), ensuring its safety becomes a paramount requirement prior to deployment. Moreover, as datasets, ML/DL models, and learning tasks grow increasingly complex, scaling ML/DL calls for new advances in learning algorithm design. More broadly, the study of robust and scalable AI can make a significant impact on machine learning theory and enable promising applications in, e.g., automated ML, meta-learning, privacy and security, hardware design, and big data analysis. We seek new learning frontiers where current learning algorithms become infeasible, and we work to formalize the foundations of secure learning.

In short, trustworthy machine learning (ML) and scalable ML are the two main research directions investigated in my group.

Trustworthy ML

  • Adversarial robustness of deep neural networks (DNNs)
  • Deep model explanation
  • Fairness in ML
  • ML for security

Scalable ML

  • Zeroth-order learning for black-box optimization
  • Optimization theory and methods for deep learning (DL)
  • Deep model compression
  • DL in low-resource settings
  • Automated ML

Research Highlights

Here are some directions that we currently work on:

Adversarial Robustness of Deep Neural Networks

It is widely known that deep neural networks (DNNs) are vulnerable to adversarial attacks, which arise not only in the digital world but also in the physical world. Along this direction, we highlight two of our achievements.
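As a concrete illustration of how a digital adversarial example can be crafted, below is a minimal sketch of the classic fast gradient sign method (FGSM); it is not our group's specific attack algorithm, and the model, inputs, and perturbation budget epsilon are placeholders.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=8 / 255):
    """Craft an L_inf-bounded adversarial example with the fast gradient sign method (illustrative sketch)."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)  # loss the attacker wants to increase
    loss.backward()
    # Take one signed-gradient step, then clip back to the valid pixel range [0, 1].
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return torch.clamp(x_adv, 0.0, 1.0).detach()
```

A small, human-imperceptible perturbation of this kind is often enough to flip the prediction of an undefended classifier, which is what motivates the study of adversarial robustness.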

Zeroth-Order (ZO) Optimization: Theory, Methods, and Applications

ZO optimization (learning without gradients) is increasingly embraced for solving many machine learning (ML) problems in which explicit expressions of the gradients are difficult or infeasible to obtain. Appealing applications include, e.g., robustness evaluation of black-box deep neural networks (DNNs), hyper-parameter optimization for automated ML, meta-learning, DNN diagnosis and explanation, and scientific discovery using black-box simulators. For an illustrative comparison of ZO and first-order optimization, see our tutorial article in IEEE Signal Processing Magazine.
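To make the idea concrete, here is a minimal NumPy sketch of a standard multi-point random-direction gradient estimator plugged into gradient descent; the function names and parameters are illustrative assumptions, not our released code.

```python
import numpy as np

def zo_gradient_estimate(f, x, mu=1e-3, num_queries=20, rng=None):
    """Estimate the gradient of a black-box function f at x using only function evaluations."""
    rng = np.random.default_rng() if rng is None else rng
    grad = np.zeros_like(x)
    for _ in range(num_queries):
        u = rng.standard_normal(size=x.shape)               # random probing direction
        grad += (f(x + mu * u) - f(x - mu * u)) / (2 * mu) * u  # two-point finite difference
    return grad / num_queries

# Usage sketch: ZO gradient descent on a simple quadratic treated as a black box.
f = lambda x: np.sum((x - 1.0) ** 2)
x = np.zeros(5)
for _ in range(200):
    x -= 0.1 * zo_gradient_estimate(f, x)   # x approaches the minimizer (all ones)
```

Note that the estimator only queries f(x + mu*u) and f(x - mu*u), which is exactly the black-box access model assumed in ZO optimization: no explicit gradient of f is ever computed.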

Two of our achievements are highlighted below.

Talks

Tutorial Talks

Workshops