Tianhao Wang (王天浩)

6045 South Kenwood Ave
Chicago, IL 60637
tianhao.wang@ttic.edu
tianhaowang@ucsd.edu

I am a Research Assistant Professor at the Toyota Technological Institute at Chicago (TTIC). I am broadly interested in statistics and machine learning theory.

Prior to TTIC, I received my Ph.D. from the Department of Statistics and Data Science at Yale University, where I was fortunate to be advised by Zhou Fan. I obtained my Bachelor's degree in mathematics, with a dual degree in computer science, from the University of Science and Technology of China.

In 2025, I will join the Halıcıoğlu Data Science Institute at UC San Diego as a tenure-track Assistant Professor.

I will have Ph.D. openings for Fall 2025. If you are interested, please apply to the Ph.D. program in the Halıcıoğlu Data Science Institute at UC San Diego and mention my name in your application. Feel free to reach out to me via email.

CV


Recent papers (*: equal contribution)

  1. Implicit regularization of gradient flow on one-layer softmax attention
    Heejune Sheen, Siyu Chen, Tianhao Wang, and Harrison H. Zhou
    arXiv:2403.08699, 2024
    Presented at ICLR 2024 Workshop on Bridging the Gap Between Practice and Theory in Deep Learning
  2. How well can Transformers emulate in-context Newton’s method?
    Angeliki Giannou, Liu Yang, Tianhao Wang, Dimitris Papailiopoulos, and Jason D. Lee
    arXiv:2403.03183, 2024
    Presented at ICLR 2024 Workshop on Bridging the Gap Between Practice and Theory in Deep Learning
  3. Unveiling Induction Heads: Provable Training Dynamics and Feature Learning in Transformers
    Siyu Chen, Heejune Sheen, Tianhao Wang, and Zhuoran Yang
    In Advances in Neural Information Processing Systems (NeurIPS), 2024
  4. Approximate Message Passing for orthogonally invariant ensembles: Multivariate non-linearities and spectral initialization
    Xinyi Zhong*, Tianhao Wang*, and Zhou Fan
    Information and Inference: A Journal of the IMA, 2024
  5. Universality of Approximate Message Passing algorithms and tensor networks
    Tianhao Wang, Xinyi Zhong, and Zhou Fan
    The Annals of Applied Probability, 2024
  6. Training dynamics of multi-head softmax attention for in-context learning: emergence, convergence, and optimality
    Siyu Chen, Heejune Sheen, Tianhao Wang, and Zhuoran Yang
    In Conference on Learning Theory (COLT), 2024
    Presented at ICLR 2024 Workshop on Bridging the Gap Between Practice and Theory in Deep Learning
  7. Maximum likelihood for high-noise group orbit estimation and single-particle cryo-EM
    Zhou Fan, Roy R. Lederman, Yi Sun, Tianhao Wang, and Sheng Xu
    The Annals of Statistics, 2024
  8. The Marginal Value of Momentum for Small Learning Rate SGD
    Runzhe Wang, Sadhika Malladi, Tianhao Wang, Kaifeng Lyu, and Zhiyuan Li
    In International Conference on Learning Representations (ICLR), 2024
  9. Fast mixing of stochastic gradient descent with normalization and weight decay
    Zhiyuan Li, Tianhao Wang, and Dingli Yu
    In Advances in Neural Information Processing Systems (NeurIPS), 2022
  10. A simple and provably efficient algorithm for asynchronous federated contextual linear bandits
    Jiafan He*, Tianhao Wang*, Yifei Min*, and Quanquan Gu
    In Advances in Neural Information Processing Systems (NeurIPS), 2022
  11. Implicit bias of gradient descent on reparametrized models: On equivalence to mirror descent
    Zhiyuan Li*, Tianhao Wang*, Jason D. Lee, and Sanjeev Arora
    In Advances in Neural Information Processing Systems (NeurIPS), 2022
    Abridged version accepted as a contributed talk at the ICML 2022 Workshop on Continuous Time Methods for Machine Learning
  12. What happens after SGD reaches zero loss?–A mathematical framework
    Zhiyuan Li, Tianhao Wang, and Sanjeev Arora
    In International Conference on Learning Representations (ICLR), 2022  (Spotlight)