Most of our research projects fall under one of the themes below. We use mathematical tools to investigate problems arising at the cutting edge of machine learning practice, and engage with experiments of a foundational nature. This research is funded through the generous support of NSF (CAREER award CCF-2239151 and collaborative project IIS-2212182), Adobe, Amazon and Google.
Mathematics of modern machine learning
“State-of-the-art” machine learning models used in practice typically have far more parameters than training data points. Traditionally, this overparameterized regime has been associated with overfitting the training data; why, then, are these models so successful in practice? So far, we have shown that:
- Higher-dimensional models can fit even noisy training data in a relatively harmless manner [5].
- The success may be restricted to classification; classification tasks can generalize even when the corresponding regression task fails [4].
- Interpolating solutions are ubiquitous in ML practice; the support vector machine itself interpolates training data in high dimensions [2,3,4]. This means that very different training loss functions can yield identical solutions [2] (illustrated in the numerical sketch below).
These insights were obtained for the high-dimensional linear model (and we’ve recently extended some of them to more general kernel settings [1]). Our analyses are underpinned by a signal-processing perspective. Current projects include investigating the brittleness of models with high classification accuracy, the generalization behavior of overparameterized neural networks, and understanding the implications for fairness and transparency of these models. See my recent presentation slides and our survey paper to get a flavor of this research!
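A minimal numerical sketch of the support-vector proliferation phenomenon from [2,3] follows. It is an illustration under assumed settings (isotropic Gaussian features, random ±1 labels, and dimensions n = 20, d = 2000 chosen only for demonstration), not an experiment from the papers: when d greatly exceeds n, every training point becomes a support vector, so the hard-margin SVM essentially coincides with the minimum-norm least-squares interpolator of the labels. The hard-margin SVM is approximated here by scikit-learn's linear SVC with a very large C.

```python
# Sketch: support-vector proliferation in the overparameterized regime.
# Settings below (n, d, seed) are illustrative, not taken from the papers.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n, d = 20, 2000                       # n training points, d >> n features
X = rng.standard_normal((n, d))       # isotropic Gaussian features
y = rng.choice([-1.0, 1.0], size=n)   # random binary labels

# Hard-margin SVM, approximated by a linear SVM with a very large C.
svm = SVC(kernel="linear", C=1e10).fit(X, y)
w_svm = svm.coef_.ravel()

# Minimum-norm least-squares interpolator of the +/-1 labels.
w_min_norm = np.linalg.pinv(X) @ y

print("fraction of training points that are support vectors:",
      len(svm.support_) / n)          # expect 1.0 in this regime
cosine = w_svm @ w_min_norm / (np.linalg.norm(w_svm) * np.linalg.norm(w_min_norm))
print("cosine similarity of SVM and min-norm solutions:", round(cosine, 4))
```

In this regime the hinge-loss (SVM) and squared-loss (least-squares) solutions essentially coincide, up to scale and the small intercept fitted by SVC, which is the sense in which very different training loss functions can yield the same interpolating solution.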
Selected publications:
[1] Andrew McRae, Santhosh Karnik, Mark Davenport and Vidya Muthukumar: “Harmless interpolation in regression and classification with structured features”, AISTATS 2022, longer version in preparation.
[2] Ke Wang, Vidya Muthukumar and Christos Thrampoulidis: “Benign overfitting in multiclass classification: All roads lead to interpolation”, NeurIPS 2021, longer version in submission.
[3] Daniel Hsu, Vidya Muthukumar and Ji Xu: “On the proliferation of support vectors in high dimensions”, AISTATS 2021.
[4] Vidya Muthukumar, Adhyyan Narang, Vignesh Subramanian, Mikhail Belkin, Daniel Hsu and Anant Sahai: “Classification-vs-regression in overparameterized regimes: Does the loss function matter?”, Journal of Machine Learning Research.
[5] Vidya Muthukumar, Kailas Vodrahalli, Vignesh Subramanian and Anant Sahai: “Harmless interpolation of noisy data in linear regression”, IEEE Journal of Selected Areas in Information Theory, special issue on “mathematics of deep learning”.
Sequential decision-making from limited data
Several applications of data-driven decision making, such as healthcare, education and online platforms, are both high-stakes and data-hungry — thus, design choices in the algorithms critically and irrevocably impact performance. We design sequential decision-making algorithms that themselves adapt to the properties of the data for optimal performance; for example:
- Data-adaptive model selection in contextual bandits and reinforcement learning [1,3,4]
- Adaptivity between stochastic and adversarial “full-information” online learning [5] (see the sketch below)
Current projects include scalability of adaptive algorithms, fair decision-making, and inverse learning from sequential algorithms (our early work does this for bandits [2]). I taught a graduate-level course in Fall 2021 on these research topics, and expect to teach this course every fall semester.
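For readers unfamiliar with the “full-information” setting referenced above, here is a minimal, self-contained sketch of the classic exponential-weights (Hedge) learner with the standard worst-case learning rate. It is background for the setting, not the adaptive algorithm of [5]; the horizon, number of experts, and Bernoulli loss means below are illustrative assumptions.

```python
# Sketch: the full-information (experts) setting with the Hedge algorithm.
# All parameters (T, K, loss means, seed) are illustrative choices.
import numpy as np

def hedge_regret(losses, eta):
    """Run Hedge on a (T, K) array of per-round expert losses in [0, 1]."""
    T, K = losses.shape
    log_w = np.zeros(K)                      # log-weights, for numerical stability
    total_loss = 0.0
    for t in range(T):
        p = np.exp(log_w - log_w.max())
        p /= p.sum()                         # current distribution over experts
        total_loss += p @ losses[t]          # expected loss incurred this round
        log_w -= eta * losses[t]             # multiplicative-weights update
    best_expert_loss = losses.sum(axis=0).min()   # best fixed expert in hindsight
    return total_loss - best_expert_loss

rng = np.random.default_rng(0)
T, K = 5000, 10
means = rng.uniform(0.2, 0.8, size=K)        # a stochastic environment
losses = rng.binomial(1, means, size=(T, K)).astype(float)
eta = np.sqrt(8 * np.log(K) / T)             # standard worst-case tuning
print("regret:", hedge_regret(losses, eta))  # worst-case bound is O(sqrt(T log K))
```

Roughly, the adaptivity described in the second bullet above is about performing better than this worst-case tuning when the losses are in fact stochastic, without sacrificing robustness to adversarial losses.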
Selected publications:
[1] Vidya Muthukumar and Akshay Krishnamurthy: “Universal and data-adaptive algorithms for model selection in linear contextual bandits”, ICML 2022.
[2] Wenshuo Guo, Kumar Krishna Agarwal, Aditya Grover, Vidya Muthukumar and Ashwin Pananjady: “Learning from an exploring demonstrator: Optimal reward estimation for bandits”, to appear in AISTATS 2022.
[3] Jonathan N. Lee, Aldo Pacchiano, Vidya Muthukumar, Weihao Kong and Emma Brunskill: “Online model selection for reinforcement learning with function approximation”, AISTATS 2021.
[4] Niladri S. Chatterji, Vidya Muthukumar and Peter L. Bartlett, “OSOM: A simultaneously optimal algorithm for multi-armed and linear contextual bandits”, AISTATS 2020.
[5] Vidya Muthukumar, Mitas Ray, Anant Sahai and Peter L. Bartlett, “Best-of-many-worlds: Robust model selection for online supervised learning”, AISTATS 2019.
Learning in the presence of strategic behavior
In several modern applications, such as cognitive radio and e-commerce, machine learning agents do not act in isolation: they must interact directly with other agents who may themselves behave strategically. Our research addresses the following questions at the intersection of ML and game theory:
- How should we learn from unknown, possibly strategically generated data? [2]
- How will strategic behavior manifest in the presence of learning? [4]
- What is the outcome of popular learning algorithms deployed in multi-agent systems? [3] (illustrated in the sketch below)
Current projects also include computation and optimization of solution concepts in multi-agent reinforcement learning (our early work does this for tabular RL [1]). Chapter 1 of my doctoral dissertation describes our motivation to study these topics.
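As a toy illustration of the third question above, the sketch below runs multiplicative-weights updates for both players in matching pennies. This is a standard demonstration rather than the analysis of [3]: the time-averaged strategies approach the mixed Nash equilibrium, while the day-to-day mixed strategies cycle and do not converge. The learning rate, horizon, and initial strategies are illustrative choices.

```python
# Sketch: no-regret (multiplicative-weights) dynamics in matching pennies.
# Averaged play approaches the Nash equilibrium; last-iterate play does not.
import numpy as np

A = np.array([[1.0, -1.0],
              [-1.0, 1.0]])       # row player's payoff matrix
eta = 0.1                         # fixed learning rate (illustrative)
T = 5000

x = np.array([0.6, 0.4])          # row player's mixed strategy
y = np.array([0.5, 0.5])          # column player's mixed strategy
x_sum, y_sum = np.zeros(2), np.zeros(2)

for t in range(T):
    x_sum += x
    y_sum += y
    # Row player maximizes x^T A y; column player minimizes it.
    x_new = x * np.exp(eta * (A @ y))
    y_new = y * np.exp(-eta * (A.T @ x))
    x, y = x_new / x_new.sum(), y_new / y_new.sum()

print("last-iterate row strategy:  ", x)           # cycles away from (0.5, 0.5)
print("time-averaged row strategy: ", x_sum / T)   # approximately (0.5, 0.5), the Nash equilibrium
```

The time-averaged strategies approach an approximate equilibrium, as no-regret guarantees in zero-sum games ensure, but the mixed strategies played each round keep cycling; [3] studies this non-convergence in generality.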
Selected publications:
[1] Yujia Jin, Vidya Muthukumar and Aaron Sidford: “The complexity of infinite-horizon general-sum stochastic games”, Innovations in Theoretical Computer Science 2023; longer version in preparation.
[2] Guanghui Wang, Zihao Hu, Vidya Muthukumar and Jacob Abernethy: “Adaptive oracle-efficient online learning”, NeurIPS 2022.
[3] Vidya Muthukumar, Soham Phade and Anant Sahai: “On the impossibility of convergence of mixed strategies arising from no-regret learning”, major revision at Mathematics of Operations Research.
[4] Vidya Muthukumar and Anant Sahai: “Robust commitments and partial reputation”, ACM Economics and Computation 2019.