Sahil Verma

I'm a PhD student in the Department of Computer Science at the University of Washington in Seattle.

I'm interested in ML interpretability and robustness research. I'm advised by Jeff Bilmes and Chirag Shah. I graduated from the Indian Institute of Technology Kanpur (IIT Kanpur) in 2019 with a B.Tech in Electrical Engineering and a minor in Computer Science. I have been fortunate to work with several amazing researchers over the course of my career. I have interned as an Applied Scientist at Amazon and at Arthur AI. During my undergrad years, I also interned at ETH Zurich, MIT, and the National University of Singapore (NUS).

Email  /  CV  /  Scholar  /  Twitter  /  LinkedIn

profile photo

Experiences

If you have any questions or want to collaborate, please reach out via email or Twitter! Always happy to chat.

Research

I'm interested in understanding how an ML model arrives at the decisions it produces, particularly as a function of the data it is trained on. I'm also interested in improving the robustness of ML models based on that understanding of how they make decisions.

Effective Backdoor Mitigation Depends on the Pre-training Objective
Sahil Verma, Gantavya Bhatt, Avi Schwarzschild, Soumye Singhal, Arnav Das, Chirag Shah, John P Dickerson, Jeff Bilmes
Best Oral Paper Award at BUGS Workshop at NeurIPS 2023

We demonstrate that the effectiveness of backdoor removal techniques is highly dependent on the pre-training objective of the model. Based on these empirical findings, we suggest that practitioners train multimodal models using the simple contrastive loss, owing to its amenability to backdoor removal.

RecRec: Algorithmic Recourse for Recommender Systems
Sahil Verma, Ashudeep Singh, Varich Boonsanong, John P Dickerson, Chirag Shah

We propose RecRec, which provides recourse for content creators on a recommender system, an often-overlooked community in recommender system interpretability research.

Post-Hoc Attribute-Based Explanations for Recommender Systems
Sahil Verma, Chirag Shah, John P. Dickerson, Anurag Beniwal, Narayanan Sadagopan, Arjun Seshadri
Best Student Paper Award at TEA Workshop at NeurIPS 2022

We propose RecXplainer, which provides attribute-based explanations for recommendations.

Amortized Generation of Sequential Counterfactual Explanations for Black-box Models
Sahil Verma, Keegan Hines, John P Dickerson

We propose FastAR, a novel stochastic-control-based approach that generates sequential recourses. FastAR is model-agnostic and operates in a black-box setting.

Counterfactual Explanations and Algorithmic Recourses for Machine Learning: A Review
Sahil Verma, Varich Boonsanong, Minh Hoang, Keegan Hines, John P. Dickerson, Chirag Shah
Best Paper Award at the ML-RSA Workshop at NeurIPS 2020

We survey nearly 200 recently published papers on counterfactual explanations in ML.

Removing biased data to improve fairness and accuracy
Sahil Verma, Michael Ernst, René Just

We propose a black-box approach to identify and remove biased training data to improve model fairness.

Fairness Definitions Explained
Sahil Verma and Julia Rubin

We consolidate, define, and delineate over a dozen ML fairness definitions.

Teaching Experience and Community Services

Teaching Assistant,
Introduction to Software Engineering (CSE403), Fall 2023

Teaching Assistant,
Introduction to Machine Learning (CSE416), Fall 2021

Student Volunteer at ACM ESEC/FSE 2017, Paderborn, Germany

Reviewer for EAAMO, XAIF, NeurIPS, AAAI, ICML, AIES, FAccT, IEEE Transactions on Artificial Intelligence, Data Mining and Knowledge Discovery, International Journal of Data Science and Analytics, and several workshops.

Template adapted from Jon Barron's website. Special thanks to our friendly AGI (in the making) for assistance in customization.