
Hello! I am a third-year PhD student in the ECE department at Princeton University, advised by Sanjeev Arora. I am broadly interested in the theoretical foundations of machine learning and large language models. Previously, I graduated from UT Austin's Turing Scholars program with double majors in computer science and mathematics. My research is supported by an NSF Graduate Research Fellowship.
Email: stanley.wei [at] princeton.edu
Research
- What Makes a Reward Model a Good Teacher? An Optimization Perspective
  Noam Razin, Zixuan Wang, Hubert Strauss, Stanley Wei, Jason D. Lee, Sanjeev Arora
  [pdf] In NeurIPS 2025 (Spotlight).
- LiveCodeBench Pro: How Do Olympiad Medalists Judge LLMs in Competitive Programming?
  Zihan Zheng, Zerui Cheng, Zeyu Shen, Shang Zhou, Kaiyuan Liu, Hansen He, Dongruixuan Li, Stanley Wei, Hangyi Hao, Jianzhu Yao, Peiyao Sheng, Zixuan Wang, Wenhao Chai, Aleksandra Korolova, Peter Henderson, Sanjeev Arora, Pramod Viswanath, Jingbo Shang, Saining Xie
  [pdf] In NeurIPS 2025.
- Provable unlearning in topic modeling and downstream tasks
  Stanley Wei, Sadhika Malladi, Sanjeev Arora, Amartya Sanyal
  [pdf] In ICLR 2025.
- Transformers Provably Learn Sparse Token Selection While Fully-Connected Nets Cannot
  Zixuan Wang, Stanley Wei, Daniel Hsu, Jason D. Lee
  [pdf] In ICML 2024.
- On the Synergy Between Label Noise and Learning Rate Annealing in Neural Network Training
  Stanley Wei, Tongzheng Ren, Simon S. Du
  [pdf] In NeurIPS 2023 Workshop on Optimization for Machine Learning.
Teaching
- COS 226: Algorithms and Data Structures, Princeton University, Spring 2025
- EGR 156: Multivariable Calculus, Princeton University, Fall 2024
- CS 364M: Principles of Machine Learning, UT Austin, Spring 2023
Awards
- NSF Graduate Research Fellowship (2025)
- Dean's Honored Graduate, UT Austin (2023)
- 26th Place, ICPC World Finals (2022)
- Top 100, Putnam Competition (2020, held unofficially due to COVID-19)