Fred Zhang

I am a final-year CS PhD student at UC Berkeley, advised by Jelani Nelson. My research spans algorithms and machine learning.

In Summer 2023, I interned at Google NYC, hosted by Matthew Fahrbach and Peilin Zhong. I also received mentorship from Neel Nanda. In Summer 2022, I was a research intern at Google Brain, working with Richard Zhang and David Woodruff.

Prior to graduate school, I received a B.S. in Computer Science and a B.S. in Mathematics from Duke University, where I had the good fortune of working with Rong Ge and Debmalya Panigrahi.

Links: Resume / Google Scholar / Twitter

Publications

Topics: Language models / Algorithmic statistics / Sublinear algorithms / Learning-based algorithms / Other

Authors are listed in alphabetical order, as is standard in theoretical computer science, unless indicated otherwise.

Approaching Human-Level Forecasting with Language Models
Danny Halawi*, Fred Zhang*, Chen Yueh-Han*, and Jacob Steinhardt (* Equal contribution).

Preprint. (arXiv, Twitter thread, blogpost, code)

SGD on Neural Networks Learns Functions of Increasing Complexity
Preetum Nakkiran, Gal Kaplun, Dimitris Kalimeris, Tristan Yang, Benjamin L. Edelman, Fred Zhang, and Boaz Barak (contribution order).

NeurIPS 2019, Spotlight. (arXiv)



Teaching

Graduate Student Instructor, UC Berkeley
  • CS 294-165: Sketching Algorithms (Fall 20)
  • CS 170: Efficient Algorithms and Intractable Problems (Spring 20)
Undergraduate Teaching Assistant, Duke University


615 Soda Hall
Berkeley, CA 94709