Aditya Gangrade

Hi!

I’m a postdoc jointly in the EECS department at the University of Michigan, advised by Clay Scott, and in the ECE department at Boston University, advised by Venkatesh Saligrama. You can send me an email at adityagangrade92@gmail.com.

Research

I am broadly interested in theoretical and methodological aspects of machine learning and statistics, with an emphasis on reliability and resource-efficiency. Recently I have been thinking about safety and reliability in sequential decision making.

Publications

Safe Bandits

Safety requirements impose a priori unknown round-wise constraints on bandit problems – for example, when running adaptive clinical trials, we need to ensure that drugs with a high chance of causing side effects are not played too often, even if they are effective. The following papers propose “doubly optimistic” schemes for stochastic safe bandit problems, and characterise their safety and efficacy properties.

Testing Log-Concavity

Log-concave distributions have wide applications in both applied contexts (as a tractable restriction satisfied by underlying laws in economics or survival analysis) and theoretical contexts (as assumptions on covariate distributions that enable computationally efficient learning). They thus form an important shape restriction in modern nonparametric statistics. The following works construct the first valid and powerful tests of log-concavity in the batch and sequential settings, using Universal Inference-based e-values and e-processes respectively. The second paper further shows that all sequential tests based on test martingales must be powerless.

Abstention In Learning

Abstention allows a predictor to say ‘I don’t know’ in response to a query. This models real-world decisions like gathering more data or invoking a human fallback, and serves as a primitive for trading off inference-time resource consumption against accuracy. The game is to minimise the abstention rate while ensuring that misclassification rates are very small. The following works give new approaches to this problem in different settings, and find uses for the abstention paradigm in non-abstention tasks.
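As a minimal illustration of the abstention primitive (a generic Chow-style confidence-threshold rule, not the method from any of the papers below, and with a hypothetical `selective_predict` helper), a classifier can abstain whenever its top class score fails to clear a threshold; raising the threshold trades a higher abstention rate for a lower misclassification rate on the queries it does answer:

```python
import numpy as np

def selective_predict(probs, threshold=0.9):
    """Chow-style rule: predict the argmax class when the top class
    probability clears the threshold, otherwise abstain (label -1)."""
    probs = np.asarray(probs)
    top = probs.max(axis=1)          # confidence of the best class
    preds = probs.argmax(axis=1)     # tentative predictions
    preds[top < threshold] = -1      # abstain on low-confidence inputs
    return preds

# Toy class-probability scores for three inputs over two classes.
scores = [[0.95, 0.05], [0.60, 0.40], [0.10, 0.90]]
print(selective_predict(scores, threshold=0.8))  # [ 0 -1  1]
```

Here the middle input is too uncertain at threshold 0.8, so the rule abstains on it and would defer that query to a fallback (more data, a human, or a larger model).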

Efficient Inference Through Selective Classification Ideas

Additionally, I have studied how one can apply ideas from selective classification to perform efficient inference. The second paper below takes a classic distributed-inference approach, where selective classifiers are used to decide which resources to employ on an instance. The first paper uses selective classification as a core module during the training of low-complexity classifiers from high-complexity ones, leading to a generic modification of distillation methods that yields better adaptation to model-class complexity.

Structural Testing

Work on testing latent structure in networks and in graphical models. The focus in the following is to establish separations, or the lack thereof, between the statistical costs of testing and of recovery, and in particular to establish cheap testing schemes in regimes where this is possible.

Nonparametric Regression

The following paper describes a neat way to do piecewise linear regression over delta-convex functions – that is, functions that can be represented as a difference of two convex functions.

Thesis

I defended in November ’21 with the following dissertation:
Two Studies in Resource-Efficient Inference: Structural Testing of Networks, and Selective Classification
This won the BU Systems Engineering dissertation award.

Miscellaneous

I used to be a postdoc in the Statistics department at Carnegie Mellon University, where I was advised by Aaditya Ramdas and Alessandro Rinaldo. Before that, I spent a few years studying Systems Engineering at Boston University, advised by Bobak Nazer and Venkatesh Saligrama.

Long ago I studied at IIT Bombay, where I learned to love whisky, cancer-sticks, and Iggy Pop.

I have an outdated résumé that used to do the same job as this site, but with less rambling.

I even have a face, and can sometimes access a razor.