Aditya Gangrade
Hi!
I’m a postdoc jointly in the EECS department at the University of Michigan, advised by Clay Scott, and in the ECE department at Boston University, advised by Venkatesh Saligrama. You can send me an email at adityagangrade92@gmail.com.
Research
I am broadly interested in theoretical and methodological aspects of machine learning and statistics, with an emphasis on reliability and resource-efficiency. Recently I have been thinking about problems of safety and reliability in sequential decision making.
Publications
Safe Bandits
Safety requirements impose a priori unknown round-wise constraints on bandit problems – for example, when running adaptive clinical trials, we need to ensure that drugs with high chances of causing side effects are not played too often, even if they are effective. The following papers propose “doubly optimistic” schemes for stochastic safe bandit problems, and characterise their safety and efficacy properties.
- G., Aditya Gopalan, Venkatesh Saligrama, Clayton Scott
  Testing the Feasibility of a Linear Program with Bandit Feedback
  (In preparation)
- Tianrui Chen, G., Venkatesh Saligrama
  Strategies for Safe Multi-Armed Bandits with Logarithmic Regret and Risk
  ICML ’22
- Tianrui Chen, G., Venkatesh Saligrama
  A Doubly Optimistic Strategy for Safe Linear Bandits
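As a toy sketch of the round-wise constraint (and not the doubly optimistic algorithms from these papers), the following simulates a Bernoulli bandit whose arms carry unknown side-effect probabilities. The learner applies optimism twice over – an upper estimate for reward, and an optimistic lower estimate for risk when judging which arms may be played. All arm parameters, the threshold, and the bonus form are made up for illustration:

```python
import math
import random

random.seed(0)

# Hypothetical arms: (reward probability, side-effect probability).
arms = [(0.9, 0.4), (0.7, 0.1), (0.5, 0.05)]
ALPHA = 0.2   # per-round cap on acceptable side-effect risk
T = 5000

n = [0] * len(arms)       # pulls per arm
rew = [0.0] * len(arms)   # total observed reward per arm
risk = [0.0] * len(arms)  # total observed side effects per arm

for t in range(1, T + 1):
    def bonus(i):
        return math.sqrt(math.log(t + 1) / (2 * n[i])) if n[i] else float("inf")
    # Feasibility via an optimistic (lower) risk estimate...
    feasible = [i for i in range(len(arms))
                if (risk[i] / n[i] if n[i] else 0.0) - bonus(i) <= ALPHA]
    if not feasible:  # fall back rather than crash in this toy demo
        feasible = list(range(len(arms)))
    # ...then play the feasible arm with the highest optimistic reward.
    i = max(feasible, key=lambda j: (rew[j] / n[j] if n[j] else 0.0) + bonus(j))
    n[i] += 1
    rew[i] += random.random() < arms[i][0]
    risk[i] += random.random() < arms[i][1]

# The effective-but-unsafe arm 0 (risk 0.4 > 0.2) should be excluded once
# its risk estimate concentrates, leaving the safe arm 1 to dominate.
print("pull fractions:", [round(c / T, 3) for c in n])
```

Note that this sketch ignores the question the papers actually answer – how often such a scheme violates the constraint along the way, and what regret it pays for staying safe.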
Testing Log-Concavity
Log-concave distributions have wide applications in both applied contexts (as a tractable restriction satisfied by underlying laws in economics or survival analysis), and in theoretical contexts (as assumptions on covariate distributions that enable computationally efficient learning). They thus form an important shape restriction in modern nonparametric statistics. The following works construct the first valid and powerful tests of log-concavity in the batch and sequential settings, using Universal Inference based e-values and e-processes respectively. The second paper further shows that all sequential tests based on test martingales must be powerless.
- Robin Dunn, G., Larry Wasserman, Aaditya Ramdas
  Universal inference meets random projections: a scalable test for log-concavity
- G., Alessandro Rinaldo, Aaditya Ramdas
  A Sequential Test for Log-Concavity
This received an outstanding poster presentation award at the Michael Woodroofe Memorial Conference at the University of Michigan Statistics department in 2023.
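For intuition about the shape restriction itself (not the tests in these papers): a density f is log-concave exactly when log f is concave, which can be probed on a grid via discrete second differences of log f. A minimal sketch, with the grid range and tolerance chosen arbitrarily:

```python
import math

def is_log_concave_on_grid(pdf, lo=-5.0, hi=5.0, n=201, tol=1e-9):
    """Grid-based (necessary, not sufficient) check of log-concavity:
    on a uniform grid, concavity of log f means every discrete second
    difference is <= 0, up to tolerance."""
    h = (hi - lo) / (n - 1)
    logf = [math.log(pdf(lo + i * h)) for i in range(n)]
    return all(logf[i - 1] - 2 * logf[i] + logf[i + 1] <= tol
               for i in range(1, n - 1))

gauss = lambda x: math.exp(-x * x / 2) / math.sqrt(2 * math.pi)
cauchy = lambda x: 1 / (math.pi * (1 + x * x))

print(is_log_concave_on_grid(gauss))   # Gaussian is log-concave
print(is_log_concave_on_grid(cauchy))  # Cauchy is not
```

The Gaussian passes since its log-density is a concave parabola, while the Cauchy fails: its log-density has positive curvature for |x| > 1. The statistical problem the papers address is much harder – deciding this from samples, without access to the density.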
Abstention In Learning
Abstention allows a predictor to say ‘I don’t know’ in response to a query. This models real-world decisions like gathering more data, or invoking a human fallback, and serves as a primitive for trading off inference-time resource consumption against accuracy. The game is to minimise the abstention rate while ensuring that misclassification rates are very small. The following work gives new approaches to this problem in different settings, along with uses of the abstention paradigm in non-abstention tasks.
- Yo Joong Choe, G., Aaditya Ramdas
  Counterfactually Comparing Abstaining Classifiers
  NeurIPS ’23
- G., Anil Kag, Ashok Cutkosky, Venkatesh Saligrama
  Online Selective Classification with Limited Feedback
  NeurIPS ’21 (spotlight presentation)
- G., Anil Kag, Venkatesh Saligrama
  Selective Classification via One-Sided Prediction
  AISTATS ’21
- Durmus Alp Emre Acar, G., Venkatesh Saligrama
  Budget Learning via Bracketing
  AISTATS ’20
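As a minimal illustration of the abstention game (a generic confidence-thresholding rule in the spirit of Chow, not the methods of the papers above): predict only when the top class probability clears a threshold, and measure coverage alongside the error rate on accepted queries. The probability vectors and labels below are fabricated:

```python
def selective_predict(probs, threshold):
    """Return the predicted class index, or None to abstain."""
    best = max(range(len(probs)), key=lambda k: probs[k])
    return best if probs[best] >= threshold else None

# (class-probability vector, true label) pairs -- made-up data.
batch = [
    ([0.95, 0.05], 0),
    ([0.55, 0.45], 1),   # low confidence: abstains at threshold 0.9
    ([0.10, 0.90], 1),
    ([0.60, 0.40], 1),   # low confidence, and would have been wrong
]

preds = [selective_predict(p, 0.9) for p, _ in batch]
accepted = [(y_hat, y) for y_hat, (_, y) in zip(preds, batch)
            if y_hat is not None]
coverage = len(accepted) / len(batch)
errors = sum(y_hat != y for y_hat, y in accepted)
print(f"coverage={coverage:.2f}, errors on accepted={errors}")
```

Raising the threshold drives the error on accepted queries down at the cost of coverage – the trade-off the work above studies, in settings where calibrated probabilities or full feedback are not available.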
Efficient Inference Through Selective Classification Ideas
Additionally, I have studied how ideas from selective classification can be applied to perform efficient inference. The second paper below takes a classic distributed inference approach, where selective classifiers decide what resources to employ on an instance. The first paper uses selective classification as a core module during the training of low-complexity classifiers from high-complexity ones, leading to a generic modification of distillation methods that yields better adaptation to model class complexity.
- Anil Kag, Durmus Alp Emre Acar, G., Venkatesh Saligrama
  Scaffolding a Student to Instill Knowledge
  ICLR ’23
- Anil Kag, Igor Fedorov, G., Paul Whatmough, Venkatesh Saligrama
  Efficient Edge Inference by Selective Query
  ICLR ’23
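The routing idea can be caricatured in a few lines: a cheap local model answers when it is confident, and defers the remaining instances to an expensive remote model. Both “models”, their confidences, and the costs below are stand-ins for illustration, not the systems from these papers:

```python
def cheap_model(x):
    # Stand-in edge model: sign of x is the prediction, and we pretend
    # confidence grows with |x|.
    conf = min(1.0, abs(x))
    return (1 if x >= 0 else 0), conf

def expensive_model(x):
    # Stand-in cloud model: assumed always correct, but costly to query.
    return 1 if x >= 0 else 0

CHEAP_COST, EXPENSIVE_COST = 1, 25

def route(x, threshold=0.8):
    """Answer locally when confident; otherwise pay to query the big model."""
    pred, conf = cheap_model(x)
    if conf >= threshold:
        return pred, CHEAP_COST
    return expensive_model(x), CHEAP_COST + EXPENSIVE_COST

queries = [2.0, 0.1, -1.5, -0.3, 0.9]
total = sum(cost for _, cost in (route(x) for x in queries))
print("total cost:", total)
```

Here three of the five queries are answered locally, so the total cost is far below querying the expensive model on everything; the real work lies in training the selective (routing) classifier so that the deferred set is small without sacrificing accuracy.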
Structural Testing
This is work on testing latent structure in networks and in graphical models. The focus in the following is to establish separations, or the lack thereof, between the statistical costs of testing and of recovery, and in particular to construct cheap testing schemes in regimes where such separations exist.
- G., Bobak Nazer, Venkatesh Saligrama
  Limits on Testing Structural Changes in Ising Models
  NeurIPS ’20
  This subsumes earlier work at Allerton ’17 and at ICASSP ’18.
- G., Praveen Venkatesh, Bobak Nazer, Venkatesh Saligrama
  Efficient Near-Optimal Testing of Community Changes in Balanced Stochastic Block Models
  NeurIPS ’19
Nonparametric Regression
The following paper describes a neat way to do piecewise linear regression over delta-convex functions – that is, functions that can be represented as a difference of convex functions.
- Ali Siahkamari, G., Brian Kulis, Venkatesh Saligrama
  Piecewise Linear Regression via a Difference of Convex Functions
  ICML ’20
This was selected as joint best paper at the BU CISE Graduate Student Workshop in 2021.
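For concreteness, here is the delta-convex idea in one dimension: a continuous piecewise linear function can be written as g - h with g and h convex “max-affine” functions (pointwise maxima of lines). The target f(x) = |x| - |x - 1| below is an arbitrary example, not from the paper; it is neither convex nor concave, yet decomposes exactly:

```python
def max_affine(slopes_intercepts):
    """Convex piecewise linear function: pointwise max of lines a*x + b."""
    return lambda x: max(a * x + b for a, b in slopes_intercepts)

g = max_affine([(1, 0), (-1, 0)])    # g(x) = |x|
h = max_affine([(1, -1), (-1, 1)])   # h(x) = |x - 1|
f = lambda x: g(x) - h(x)            # delta-convex representation of f

# Sanity check against the direct formula on a few points.
for x in [-2.0, 0.0, 0.5, 1.0, 3.0]:
    assert abs(f(x) - (abs(x) - abs(x - 1))) < 1e-12
print("decomposition matches on test points")
```

The regression problem the paper solves is the reverse direction: given data, fit the slopes and intercepts of both max-affine pieces, which it does via convex optimisation.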
Thesis
I defended in November ’21 with the following dissertation:
Two Studies in Resource-Efficient Inference: Structural Testing of Networks, and Selective Classification
This won the BU Systems Engineering dissertation award.
Miscellaneous
I used to be a postdoc in the Statistics department at Carnegie Mellon University, where I was advised by Aaditya Ramdas and Alessandro Rinaldo. Before that, I spent a few years studying Systems Engineering at Boston University, advised by Bobak Nazer and Venkatesh Saligrama.
Long ago I studied at IIT Bombay, where I learned to love whisky, cancer-sticks, and Iggy Pop.
I have an outdated résumé that used to do the same job as this site, but with less rambling.