Tianhao Wang (王天浩)


3234 Matthews Lane
La Jolla, CA 92093
tianhaowang@ucsd.edu

I am an Assistant Professor in the Halıcıoğlu Data Science Institute at the University of California, San Diego. I am broadly interested in machine learning, optimization, and statistics.

Prior to UCSD, I was a Research Assistant Professor at the Toyota Technological Institute at Chicago from 2024 to 2025, working with Zhiyuan Li and Nathan Srebro. Before that, I received my Ph.D. from the Department of Statistics and Data Science at Yale University, where I was fortunate to be advised by Zhou Fan. I obtained my Bachelor’s degree in mathematics, with a dual degree in computer science, from the University of Science and Technology of China.

CV


Recent papers (*: equal contribution)

  1. On Universality of Non-Separable Approximate Message Passing Algorithms
    Max Lovig, Tianhao Wang, and Zhou Fan
    arXiv:2506.23010, 2025
  2. Taming Polysemanticity in LLMs: Provable Feature Recovery via Sparse Autoencoders
    Siyu Chen, Heejune Sheen, Xuyuan Xiong, Tianhao Wang, and Zhuoran Yang
    arXiv:2506.14002, 2025
  3. Structured Preconditioners in Adaptive Optimization: A Unified Analysis
    Shuo Xie, Tianhao Wang, Sashank Reddi, Sanjiv Kumar, and Zhiyuan Li
    In International Conference on Machine Learning (ICML), 2025
  4. Can Neural Networks Achieve Optimal Computational-statistical Tradeoff? An Analysis on Single-Index Model
    Siyu Chen*, Beining Wu*, Miao Lu, Zhuoran Yang, and Tianhao Wang
    In International Conference on Learning Representations (ICLR), 2025  (Oral)
    Presented at NeurIPS 2024 Workshop on Mathematics of Modern Machine Learning
  5. How well can Transformers emulate in-context Newton’s method?
    Angeliki Giannou, Liu Yang, Tianhao Wang, Dimitris Papailiopoulos, and Jason D. Lee
    In International Conference on Artificial Intelligence and Statistics (AISTATS), 2025
    Presented at ICLR 2024 Workshop on Bridging the Gap Between Practice and Theory in Deep Learning
  6. Unveiling Induction Heads: Provable Training Dynamics and Feature Learning in Transformers
    Siyu Chen, Heejune Sheen, Tianhao Wang, and Zhuoran Yang
    In Advances in Neural Information Processing Systems (NeurIPS), 2024
    Presented at ICML 2024 Workshop on Theoretical Foundations of Foundation Models
  7. Implicit regularization of gradient flow on one-layer softmax attention
    Heejune Sheen, Siyu Chen, Tianhao Wang, and Harrison H. Zhou
    arXiv:2403.08699, 2024
    Presented at ICLR 2024 Workshop on Bridging the Gap Between Practice and Theory in Deep Learning
  8. Approximate Message Passing for orthogonally invariant ensembles: Multivariate non-linearities and spectral initialization
    Xinyi Zhong*, Tianhao Wang*, and Zhou Fan
    Information and Inference: A Journal of the IMA, 2024
  9. Universality of Approximate Message Passing algorithms and tensor networks
    Tianhao Wang, Xinyi Zhong, and Zhou Fan
    Annals of Applied Probability, 2024
  10. Training dynamics of multi-head softmax attention for in-context learning: emergence, convergence, and optimality
    Siyu Chen, Heejune Sheen, Tianhao Wang, and Zhuoran Yang
    In Conference on Learning Theory (COLT), 2024
    Presented at ICLR 2024 Workshop on Bridging the Gap Between Practice and Theory in Deep Learning