Research Topics
My research lies at the intersection of optimization, machine learning, and scientific computing. I am interested in designing theoretically grounded and computationally efficient algorithms for large-scale problems arising in data science, networked systems, and scientific modeling. Below are several major directions of my current and past work.
Distributed Optimization Algorithms
Distributed optimization aims to solve large-scale problems collaboratively across multiple computing agents or data centers, with or without central coordination. My research explores both centralized and decentralized settings, emphasizing communication-efficient and computation-efficient algorithm design. I have developed several accelerated and provably convergent methods for decentralized gradient tracking, compressed communication, and directed network topologies—bridging theory and implementation in large-scale learning and signal processing systems.
- Decentralized Consensus Algorithms: this presentation introduces key topics including:
  - Non-accelerated algorithms: DGD, EXTRA/PG-EXTRA, NIDS
  - Accelerated algorithms: OGT (Optimal Gradient Tracking), OPAPC
  - Compressed communication: LEAD
  - Directed graphs: APD (Accelerated Push-DIGing)
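To make the gradient-tracking idea concrete, below is a minimal NumPy sketch of a DIGing-style gradient-tracking iteration for decentralized least squares on a ring network. The network, data, and step size are illustrative choices, not taken from any of the papers listed here.

```python
# A minimal DIGing-style gradient-tracking sketch: n agents on a ring each
# hold a private least-squares term f_i(x) = 0.5*||A_i x - b_i||^2 and
# cooperate to minimize the average of all f_i. Data/step size illustrative.
import numpy as np

rng = np.random.default_rng(0)
n, d, m = 5, 3, 10                         # agents, dimension, samples/agent
A = [rng.standard_normal((m, d)) for _ in range(n)]
b = [rng.standard_normal(m) for _ in range(n)]

def grad(i, x):                            # gradient of agent i's local loss
    return A[i].T @ (A[i] @ x - b[i])

# Doubly stochastic mixing matrix for a ring: average with both neighbors.
W = np.zeros((n, n))
for i in range(n):
    W[i, i] = 0.5
    W[i, (i - 1) % n] = 0.25
    W[i, (i + 1) % n] = 0.25

alpha = 0.01                               # step size (problem dependent)
X = np.zeros((n, d))                       # row i is agent i's iterate
G = np.array([grad(i, X[i]) for i in range(n)])
Y = G.copy()                               # trackers of the average gradient

for _ in range(2000):
    X = W @ X - alpha * Y                  # consensus step + tracked gradient
    G_new = np.array([grad(i, X[i]) for i in range(n)])
    Y = W @ Y + G_new - G                  # gradient-tracking update
    G = G_new

# All rows of X approach the minimizer of the stacked least-squares problem.
x_star = np.linalg.lstsq(np.vstack(A), np.concatenate(b), rcond=None)[0]
print(np.abs(X - x_star).max())
```

Unlike plain DGD with a constant step size, the tracker Y maintains each agent's running estimate of the network-average gradient, which is what yields exact convergence to the global minimizer.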
- Selected Publications:
- Z. Peng, Y. Xu, M. Yan, and W. Yin, ARock: An algorithmic framework for asynchronous parallel coordinate updates, SIAM Journal on Scientific Computing, 38 (2016), A2851-A2879.
- H. Tang, X. Lian, M. Yan, C. Zhang, and J. Liu, D²: Decentralized training over decentralized data, ICML 2018, 4848-4856.
- Z. Li, W. Shi, and M. Yan, A decentralized proximal-gradient method with network independent step-sizes and separated convergence rates, IEEE Transactions on Signal Processing, 67 (2019), 4494-4506. (Code)
- X. Liu, Y. Li, R. Wang, J. Tang, and M. Yan, Linear convergent decentralized optimization with compression, ICLR 2021.
- Z. Song, L. Shi, S. Pu, and M. Yan, Provably accelerated decentralized gradient method over unbalanced directed graphs, SIAM Journal on Optimization, 34 (2024), 1131-1156.
- Z. Song, L. Shi, S. Pu, and M. Yan, Optimal gradient tracking for decentralized optimization, Mathematical Programming, 207 (2024), 1-53.
Primal–Dual Algorithms
Primal–dual splitting methods are fundamental tools for structured convex and nonconvex optimization. I proposed the PD3O algorithm (Primal–Dual Three-Operator Splitting), which unifies and generalizes a family of primal–dual methods. My work further explores the equivalence relationships among various primal–dual frameworks, providing both mathematical insights and practical algorithms for imaging, inverse problems, and distributed learning.
- Primal–Dual Algorithms: this presentation covers:
  - PD3O, Condat–Vu, AFBA, PDFP
  - Chambolle–Pock, PAPC, PDFP2O
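As a concrete instance, here is a minimal NumPy sketch of PD3O applied to a toy problem min_x 0.5*||x - b||^2 + lam*||x||_1 + mu*||Dx||_1, following the three-function template f + g + h(A·); the data, parameters, and the noisy step signal are all illustrative. The step sizes respect the standard conditions gamma < 2/L and gamma*delta*||D||^2 <= 1.

```python
# A minimal PD3O sketch for min_x f(x) + g(x) + h(Dx) with
# f(x) = 0.5*||x - b||^2, g = lam*||.||_1, h = mu*||.||_1, and D the 1-D
# finite-difference operator. Problem data and parameters are illustrative.
import numpy as np

rng = np.random.default_rng(1)
n = 100
b = (np.arange(n) > n // 2) + 0.1 * rng.standard_normal(n)  # noisy step signal
lam, mu = 0.05, 0.5

D = np.eye(n, k=1)[:-1] - np.eye(n)[:-1]   # (n-1) x n difference operator

def soft(v, t):                            # prox of t*||.||_1
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

gamma = 1.0                                # gamma < 2/L, with L = 1 for this f
delta = 1.0 / (4.0 * gamma)                # gamma*delta*||D||^2 <= 1 (||D||^2 <= 4)

x, s, z = np.zeros(n), np.zeros(n - 1), np.zeros(n)
for _ in range(500):
    x = soft(z, gamma * lam)               # x = prox_{gamma g}(z)
    gx = x - b                             # gradient of f at x
    u = s + delta * (D @ (2 * x - z - gamma * gx - gamma * (D.T @ s)))
    s = np.clip(u, -mu, mu)                # prox_{delta h*}: project onto [-mu, mu]
    z = x - gamma * gx - gamma * D.T @ s   # prepare next primal argument
```

Setting f = 0 recovers a Chambolle–Pock-type iteration and setting g = 0 a PAPC/PDFP2O-type iteration, which is the unification described above.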
- Selected Publications:
- M. Yan, A new primal-dual algorithm for minimizing the sum of three functions with a linear operator, Journal of Scientific Computing, 76 (2018), 1698-1717. (Code)
- M. Yan and Y. Li, On the improved conditions for some primal-dual algorithms, Journal of Scientific Computing, 99 (2024), 74.
Sparse Optimization and Signal Processing
Sparse modeling provides a powerful framework for recovering signals and images from limited or corrupted data. My earlier research contributed to advances in nonconvex regularization, robust PCA, and low-rank matrix recovery. These works combine optimization theory with practical algorithms for image reconstruction, denoising, and compressive sensing, leading to efficient solvers with provable recovery guarantees.
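As a small worked example of this framework, here is a proximal-gradient (ISTA-type) sketch for the standard l1-regularized least-squares problem; the synthetic data and parameters are illustrative and not drawn from the papers below.

```python
# A minimal ISTA-type sketch for min_x 0.5*||Ax - b||^2 + lam*||x||_1,
# recovering a sparse vector from a few noisy linear measurements.
import numpy as np

rng = np.random.default_rng(2)
m, n, k = 50, 200, 5                     # measurements, dimension, sparsity
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
b = A @ x_true + 0.01 * rng.standard_normal(m)

lam = 0.01
step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1/L, with L the squared spectral norm

x = np.zeros(n)
for _ in range(1000):
    v = x - step * A.T @ (A @ x - b)     # gradient step on the data term
    x = np.sign(v) * np.maximum(np.abs(v) - step * lam, 0.0)  # soft threshold

print(np.linalg.norm(x - x_true))        # recovery error
```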
- Selected Publications:
- M. Yan, Y. Yang, and S. Osher, Robust 1-bit compressive sensing using adaptive outlier pursuit, IEEE Transactions on Signal Processing, 60 (2012), 3868-3875. (Code)
- M. Yan, Restoration of images corrupted by impulse noise and mixed Gaussian impulse noise using blind inpainting, SIAM Journal on Imaging Sciences, 6 (2013), 1227-1245.
- Y. Lou and M. Yan, Fast L1-L2 minimization via a proximal operator, Journal of Scientific Computing, 74 (2018), 767-785. (Code)
Physics-Informed Neural Networks (PINNs)
Physics-Informed Neural Networks (PINNs) integrate deep learning with physical laws to solve differential equations efficiently. My recent work improves accuracy and training efficiency through novel network architectures and smoothness-regularized formulations. This research bridges traditional numerical analysis and modern AI, enabling data-driven discovery and simulation of complex physical systems.
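For concreteness, here is a minimal generic PINN sketch in PyTorch for the 1-D Poisson problem u''(t) = -pi^2 sin(pi t) with zero boundary values, whose exact solution is u(t) = sin(pi t). It illustrates only the basic residual-loss idea, not the IDPINN method cited below; the architecture and training settings are illustrative.

```python
# A minimal generic PINN sketch: a small MLP is trained so that its second
# derivative matches the PDE residual at interior collocation points while
# the boundary conditions u(0) = u(1) = 0 are enforced by a penalty term.
import math
import torch

torch.manual_seed(0)
net = torch.nn.Sequential(
    torch.nn.Linear(1, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 1),
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

t = torch.rand(200, 1, requires_grad=True)   # interior collocation points
t_bc = torch.tensor([[0.0], [1.0]])          # boundary points

for step in range(3000):
    u = net(t)
    du = torch.autograd.grad(u.sum(), t, create_graph=True)[0]    # u'(t)
    d2u = torch.autograd.grad(du.sum(), t, create_graph=True)[0]  # u''(t)
    residual = d2u + math.pi ** 2 * torch.sin(math.pi * t)        # PDE residual
    loss = (residual ** 2).mean() + (net(t_bc) ** 2).mean()       # PDE + BC loss
    opt.zero_grad()
    loss.backward()
    opt.step()
```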
- Selected Publications:
- C. Si and M. Yan, Initialization-enhanced physics-informed neural network with domain decomposition (IDPINN), Journal of Computational Physics, 530 (2025), 113914.
Artificial Intelligence for Mathematical and Optimization Problems
Artificial intelligence provides new perspectives for tackling mathematical and operations research challenges. I am broadly interested in leveraging machine learning—including deep networks and reinforcement learning—to design adaptive, data-driven optimization algorithms and efficient solvers for inverse problems.
Last updated: October 2025