Fred Zhang

I am a PhD student in the Theory Group of the EECS Department at UC Berkeley, advised by Jelani Nelson. My research lies broadly in algorithm design. I am particularly interested in questions arising from high-dimensional statistics, machine learning, and processing massive data.

I was a graduate student in the Theory of Computation group at Harvard before moving to Berkeley with my advisor. Prior to that, I received my B.S. in Computer Science and in Mathematics from Duke University, where I had the good fortune of working with Rong Ge and Debmalya Panigrahi.

I help with Berkeley Algorithms Office Hours, where the goal is to bridge theory and applications of algorithms. If you encounter algorithmic problems in your (applied) research, feel free to reach out.


Robust and Heavy-Tailed Mean Estimation Made Simple, via Regret Minimization
with Samuel B. Hopkins and Jerry Li.
NeurIPS 2020. (arXiv)

Optimal Robustness-Consistency Trade-offs for Learning-Augmented Online Algorithms
with Alexander Wei.
NeurIPS 2020. (arXiv)

A Fast Spectral Algorithm for Mean Estimation with Sub-Gaussian Rates
with Zhixian Lei, Kyle Luh, and Prayaag Venkat.
COLT 2020. (arXiv, 15 min talk)

SGD on Neural Networks Learns Functions of Increasing Complexity
with Preetum Nakkiran, Gal Kaplun, Dimitris Kalimeris, Tristan Yang, Benjamin L. Edelman and Boaz Barak.
NeurIPS 2019 (Spotlight). (arXiv)
Also appeared in the ICML '19 Workshop on Generalization in Deep Learning.

Minimum Cut and Minimum k-Cut in Hypergraphs via Branching Contractions
with Kyle Fox and Debmalya Panigrahi.
SODA 2019. (slides)


Graduate Student Instructor, UC Berkeley
Undergraduate Teaching Assistant, Duke University



634 Soda Hall
Berkeley, CA 94709