Fred Zhang

I am a final-year CS PhD student at UC Berkeley, advised by Jelani Nelson. My research spans algorithms, machine learning, and high-dimensional statistics. More recently, I have been working on language model interpretability.

In Summer 2023, I interned at Google NYC, hosted by Matthew Fahrbach and Peilin Zhong. I also received mentorship from Neel Nanda. In Summer 2022, I was a research intern at Google Brain, working with Richard Zhang and David Woodruff.

Prior to graduate school, I received a B.S. in Computer Science and a B.S. in Mathematics from Duke University, where I had the good fortune of working with Rong Ge and Debmalya Panigrahi.

I am on the job market this year. Please feel free to reach out!

Links: Resume / Google Scholar / Twitter

Publications


Authorships are in alphabetical order, as is common in theoretical computer science, unless indicated otherwise.

SGD on Neural Networks Learns Functions of Increasing Complexity
Preetum Nakkiran, Gal Kaplun, Dimitris Kalimeris, Tristan Yang, Benjamin L. Edelman, Fred Zhang, and Boaz Barak (contribution order).

NeurIPS 2019 (Spotlight) (arXiv)



Graduate Student Instructor, UC Berkeley
  • CS 294-165: Sketching Algorithms (Fall 20)
  • CS 170: Efficient Algorithms and Intractable Problems (Spring 20)
Undergraduate Teaching Assistant, Duke University


615 Soda Hall
Berkeley, CA 94709