Fred Zhang

z0@berkeley.edu

I am a final-year CS PhD student at UC Berkeley, advised by Jelani Nelson. My research spans algorithms and machine learning. Most recently, I have been interested in language model interpretability and evaluations.

In Summer 2023, I interned at Google NYC, hosted by Matthew Fahrbach and Peilin Zhong. I also received mentorship from Neel Nanda. In Summer 2022, I was a research intern at Google Brain, working with Richard Zhang and David Woodruff.

Prior to graduate school, I received a B.S. in Computer Science and a B.S. in Mathematics from Duke University, where I had the good fortune of working with Rong Ge and Debmalya Panigrahi.

I am on the job market this year. Please feel free to reach out!

Links: Resume / Google Scholar / Twitter


Publications


Authors are listed in alphabetical order, as is common in theoretical computer science, unless indicated otherwise.

Approaching Human-Level Forecasting with Language Models
Danny Halawi*, Fred Zhang*, Chen Yueh-Han*, and Jacob Steinhardt (* Equal contribution).

Preprint. (arXiv, Twitter thread, blogpost)

SGD on Neural Networks Learns Functions of Increasing Complexity
Preetum Nakkiran, Gal Kaplun, Dimitris Kalimeris, Tristan Yang, Benjamin L. Edelman, Fred Zhang and Boaz Barak (contribution order).

NeurIPS 2019, Spotlight. (arXiv)


Notes


Teaching

Graduate Student Instructor, UC Berkeley
  • CS 294-165: Sketching Algorithms (Fall 20)
  • CS 170: Efficient Algorithms and Intractable Problems (Spring 20)
Undergraduate Teaching Assistant, Duke University

Contact

615 Soda Hall
Berkeley CA 94709