Tianhao Wang (王天浩)

6045 South Kenwood Ave
Chicago, IL 60637
tianhao.wang@ttic.edu

I am a Research Assistant Professor at the Toyota Technological Institute at Chicago. I am broadly interested in statistics and machine learning theory.

Prior to TTIC, I received my Ph.D. from the Department of Statistics and Data Science at Yale University, where I was fortunate to be advised by Zhou Fan. I obtained my Bachelor's degree in mathematics, with a dual degree in computer science, from the University of Science and Technology of China.

In July 2025, I will join the Halıcıoğlu Data Science Institute at UC San Diego as a tenure-track Assistant Professor.

CV


Recent papers (*: equal contribution)

Foundations of Transformers
  1. Implicit regularization of gradient flow on one-layer softmax attention
    Heejune Sheen, Siyu Chen, Tianhao Wang, and Harrison H. Zhou
    arXiv:2403.08699, 2024
    Presented at ICLR 2024 Workshop on Bridging the Gap Between Practice and Theory in Deep Learning
  2. How well can Transformers emulate in-context Newton’s method?
    Angeliki Giannou, Liu Yang, Tianhao Wang, Dimitris Papailiopoulos, and Jason D. Lee
    arXiv:2403.03183, 2024
    Presented at ICLR 2024 Workshop on Bridging the Gap Between Practice and Theory in Deep Learning
  3. Training dynamics of multi-head softmax attention for in-context learning: emergence, convergence, and optimality
    Siyu Chen, Heejune Sheen, Tianhao Wang, and Zhuoran Yang
    Conference on Learning Theory (COLT), 2024
    Presented at ICLR 2024 Workshop on Bridging the Gap Between Practice and Theory in Deep Learning
Approximate Message Passing algorithms
  1. Approximate Message Passing for orthogonally invariant ensembles: Multivariate non-linearities and spectral initialization
    Xinyi Zhong*, Tianhao Wang*, and Zhou Fan
    Information and Inference, to appear
  2. Universality of Approximate Message Passing algorithms and tensor networks
    Tianhao Wang, Xinyi Zhong, and Zhou Fan
    The Annals of Applied Probability, to appear
Implicit bias of optimization algorithms
  1. The Marginal Value of Momentum for Small Learning Rate SGD
    Runzhe Wang, Sadhika Malladi, Tianhao Wang, Kaifeng Lyu, and Zhiyuan Li
    In International Conference on Learning Representations (ICLR), 2024
  2. Fast mixing of stochastic gradient descent with normalization and weight decay
    Zhiyuan Li, Tianhao Wang, and Dingli Yu
    In Advances in Neural Information Processing Systems (NeurIPS), 2022
  3. Implicit bias of gradient descent on reparametrized models: On equivalence to mirror descent
    Zhiyuan Li*, Tianhao Wang*, Jason D. Lee, and Sanjeev Arora
    In Advances in Neural Information Processing Systems (NeurIPS), 2022
    Abridged version accepted as a contributed talk at the ICML 2022 Workshop on Continuous Time Methods for Machine Learning
  4. What happens after SGD reaches zero loss?–A mathematical framework
    Zhiyuan Li, Tianhao Wang, and Sanjeev Arora
    In International Conference on Learning Representations (ICLR), 2022  (Spotlight)
Data-driven decision-making problems
  1. Noise-adaptive Thompson sampling for linear contextual bandits
    Ruitu Xu, Yifei Min, and Tianhao Wang
    In Advances in Neural Information Processing Systems (NeurIPS), 2023
  2. Learn to match with no regret: Reinforcement learning in Markov matching markets
    Yifei Min, Tianhao Wang, Ruitu Xu, Zhaoran Wang, Michael I. Jordan, and Zhuoran Yang
    In Advances in Neural Information Processing Systems (NeurIPS), 2022  (Oral)
  3. A simple and provably efficient algorithm for asynchronous federated contextual linear bandits
    Jiafan He*, Tianhao Wang*, Yifei Min*, and Quanquan Gu
    In Advances in Neural Information Processing Systems (NeurIPS), 2022
  4. Variance-aware off-policy evaluation with linear function approximation
    Yifei Min*, Tianhao Wang*, Dongruo Zhou, and Quanquan Gu
    In Advances in Neural Information Processing Systems (NeurIPS), 2021
  5. Provably efficient reinforcement learning with linear function approximation under adaptivity constraints
    Tianhao Wang*, Dongruo Zhou*, and Quanquan Gu
    In Advances in Neural Information Processing Systems (NeurIPS), 2021
Orbit recovery model
  1. Maximum likelihood for high-noise group orbit estimation and single-particle cryo-EM
    Zhou Fan, Roy R. Lederman, Yi Sun, Tianhao Wang, and Sheng Xu
    The Annals of Statistics, 2024
  2. Likelihood landscape and maximum likelihood estimation for the discrete orbit recovery model
    Zhou Fan, Yi Sun, Tianhao Wang, and Yihong Wu
    Communications on Pure and Applied Mathematics, 2022