I am an Assistant Professor in the School of Electrical and Computer Engineering and the H. Milton Stewart School of Industrial and Systems Engineering at the Georgia Institute of Technology. I received the B.Tech degree (with honors) from the Indian Institute of Technology, Madras, and the Ph.D. degree in Electrical Engineering from the University of California, Berkeley. Before joining Georgia Tech, I spent a semester at the Simons Institute for the Theory of Computing as a research fellow for the program “Theory of Reinforcement Learning.”
My broad interests are in game theory and online and statistical learning. I am particularly interested in designing learning algorithms that provably adapt in strategic environments, fundamental properties of overparameterized models, and the foundations of multi-agent decision-making. In my spare time, I enjoy singing Carnatic vocal music, playing the piano, and long-distance cycling.
See here for a more formal bio in the third person.
- Sept 2023: I will be speaking about our group’s efforts on understanding interpolation in classification and regression tasks at the Mini-Workshop: Interpolation and Over-parameterization in Statistics and Machine Learning at Mathematisches Forschungsinstitut Oberwolfach.
- Summer 2023: Enjoyed giving talks at SIAM OP23 and JSM 2023 on our work on equivalences of loss functions during training in the overparameterized regime!
- April 2023: Congratulations to my student Kuo-Wei Lai for winning the 2023 Outstanding ECE Graduate Teaching Assistant Award!
- Jan 2023: Received the Amazon Research Award with Ashwin Pananjady for “A framework for learning from online bidding”.
- Jan 2023: I am incredibly honored and grateful to have received the NSF CAREER Award! Read more about the project here.
- Jan 2023: Our work on the complexity of infinite-horizon general-sum stochastic games will be presented at Innovations in Theoretical Computer Science 2023. Thanks to my wonderful co-authors Yujia Jin and Aaron Sidford!
- Nov 2022: Congratulations to my student Guanghui Wang for winning the ARC-ACO Fellowship in the Spring 2023 cycle for his proposal on adaptive oracle-efficient online learning!
- Oct 2022: Our work on adaptive oracle-efficient online learning will appear at NeurIPS 2022. Congratulations to lead author Guanghui Wang!
- Oct 2022: Grateful to the NSF for funding our upcoming three-year effort on design principles and theory for data augmentation methods in machine learning (joint with Eva Dyer, Mark Davenport, and Tom Goldstein)! Marking the beginning of this effort is our new preprint on understanding the effects of data augmentation on the generalization of high-dimensional linear models.