Experience
- [2019 - Present] PhD in Computer Science and Engineering, University of Washington, Seattle
- [Summer, 2024] Research Intern, Deep Learning Group at Microsoft Research
- [Summer, 2023] Applied Scientist Intern, Amazon
- [Summer, 2022] Applied Scientist Intern, Amazon
- [Summer, 2020-21] Research Scientist Intern, Arthur AI
- [Summer, 2019] Research Intern, ETH Zurich
- [Summer, 2018] Research Intern, MIT
- [Summer, 2017] Research Intern, National University of Singapore
- [2015 - 2019] B.Tech in Electrical Engineering, IIT Kanpur
|
If you have any questions or want to collaborate, please reach out via email or Twitter! Always happy to chat.
|
Research
I'm interested in understanding how an ML model arrives at its decisions, and in particular how those decisions depend on the data it was trained on. I'm also interested in using that understanding to improve the robustness of ML models. Currently, I focus on AI safety and security research, an area that stands to benefit greatly from advances in ML robustness.
|
|
How Many Van Goghs Does It Take to Van Gogh? Finding the Imitation Threshold
Sahil Verma, Royi Rassin, Arnav Das, Gantavya Bhatt, Preethi Seshadri, Chirag Shah, Jeff Bilmes, Hannaneh Hajishirzi, Yanai Elazar
We ask "how many images of a concept does a text-to-image model need before it can imitate that concept?". We pose this question as a new problem, Finding the Imitation Threshold (FIT), and propose an efficient approach (MIMETIC2) that estimates the imitation threshold without incurring the colossal cost of training multiple models from scratch.
|
|
Effective Backdoor Mitigation Depends on the Pre-training Objective
Sahil Verma, Gantavya Bhatt, Avi Schwarzschild, Soumye Singhal, Arnav Das, Chirag Shah, John P Dickerson, Jeff Bilmes
Best Oral Paper Award at BUGS Workshop at NeurIPS 2023
We demonstrate that the effectiveness of backdoor removal techniques is highly dependent on the pre-training objective of the model. Based on these empirical findings, we suggest that practitioners train multimodal models using the simple contrastive loss, owing to its amenability to backdoor cleaning.
|
|
RecRec: Algorithmic Recourse for Recommender Systems
Sahil Verma, Ashudeep Singh, Varich Boonsanong, John P Dickerson, Chirag Shah
We propose RecRec, which provides recourse for content creators on a recommender system, an often-overlooked community in recommender system interpretability research.
|
|
Post-Hoc Attribute-Based Explanations for Recommender Systems
Sahil Verma, Chirag Shah, John P. Dickerson, Anurag Beniwal, Narayanan Sadagopan, Arjun Seshadri
Best Student Paper Award at TEA Workshop at NeurIPS 2022
We propose RecXplainer, which provides attribute-based explanations for recommendations.
|
|
Amortized Generation of Sequential Counterfactual Explanations for Black-box Models
Sahil Verma, Keegan Hines, John P Dickerson
We propose FastAR, a novel stochastic-control-based approach that generates sequential recourses. FastAR is model-agnostic and operates in a black-box setting.
|
|
Counterfactual Explanations and Algorithmic Recourses for Machine Learning: A Review
Sahil Verma, Varich Boonsanong, Minh Hoang, Keegan Hines, John P. Dickerson, Chirag Shah
Best Paper Award at the ML-RSA Workshop at NeurIPS 2020
We survey close to 200 recently published papers about counterfactual explanations in ML.
|
|
Removing biased data to improve fairness and accuracy
Sahil Verma, Michael Ernst, Rene Just
We propose a black-box approach to identify and remove biased training data to improve model fairness.
|
|
Fairness Definitions Explained
Sahil Verma and Julia Rubin
We consolidate, define, and delineate over a dozen ML fairness definitions.
|
Teaching Experience and Community Services
|
|
Teaching Assistant,
Introduction to Software Engineering (CSE403), Fall 2023
Teaching Assistant,
Introduction to Machine Learning (CSE416), Fall 2021
|
|
Student Volunteer at ACM ESEC/FSE 2017, Paderborn, Germany
Reviewer for EAAMO, XAIF, NeurIPS, AAAI, ICML, AIES, FAccT, IEEE Transactions on Artificial Intelligence, Data Mining and Knowledge Discovery, International Journal of Data Science and Analytics, and several workshops.
|
Template adapted from Jon Barron's website. Special thanks to our friendly AGI (in the making) for assistance in customization.
|