Fred Zhang

z0@berkeley.edu

I am a research scientist at Google DeepMind.

I completed my Ph.D. in the Theory Group at Berkeley EECS in 2024, advised by Jelani Nelson. During graduate school, I also spent time at the Simons Institute, Google Research, and Berkeley AI Research. I received a B.S. in Computer Science and a B.S. in Mathematics from Duke University, where I had the good fortune of working with Rong Ge and Debmalya Panigrahi.

Links: Google Scholar / Twitter


Publications


Approaching Human-Level Forecasting with Language Models
Danny Halawi*, Fred Zhang*, Chen Yueh-Han*, Jacob Steinhardt (* Equal contribution).

NeurIPS 2024 (arXiv, Twitter thread, blogpost, code)

Online Prediction in Sub-linear Space
Binghui Peng, Fred Zhang (alphabetical order).

SODA 2023  Best Student Paper  (arXiv, talk at Google, talk at TTIC)

SGD on Neural Networks Learns Functions of Increasing Complexity
Preetum Nakkiran, Gal Kaplun, Dimitris Kalimeris, Tristan Yang, Benjamin L. Edelman, Fred Zhang, Boaz Barak (contribution order).

NeurIPS 2019  Spotlight   (arXiv)


Notes


Teaching

Graduate Student Instructor, UC Berkeley
  • CS 294-165: Sketching Algorithms (Fall 20)
  • CS 170: Efficient Algorithms and Intractable Problems (Spring 20)
Undergraduate Teaching Assistant, Duke University

Contact

z0@berkeley.edu