With the ever-increasing size of state-of-the-art machine learning models, we see greater reliance on models hosted as a service through an API.
Toward building service APIs that cater to a wider population of clients, we conduct research on:
(1) training algorithms that enable broader generalization, i.e., domain generalization [NeurIPS21a, ICML20, ICLR18];
(2) evaluation data structures that allow clients to query the service's performance on their own data [NeurIPS21b];
(3) adaptation techniques that side-step per-client parameter fine-tuning, which does not scale to potentially millions of clients [Interspeech20, EMNLP20].
- Training for the Future: A Simple Gradient Interpolation Loss to Generalize Along Time.
In NeurIPS 2021. A Nasery, S Thakur, V Piratla, A De, S Sarawagi
[Paper]
- Active Assessment of Prediction Services as Accuracy Surface Over Attribute Combinations.
In NeurIPS 2021. V Piratla, S Chakrabarti, S Sarawagi
[Paper] [Code]
- Black-box Adaptation of ASR for Accented Speech.
In Interspeech 2020. K Khandelwal, P Jyothi, A Awasthi, S Sarawagi
[Paper] [Code]
- NLP Service APIs and Models for Efficient Registration of New Clients.
In Findings of EMNLP 2020. S Shah, V Piratla, S Chakrabarti, S Sarawagi
[Paper] [Code]
- Efficient Domain Generalization via Common-Specific Low-Rank Decomposition.
In ICML 2020. V Piratla, P Netrapalli, S Sarawagi
[Paper] [Code] [Talk 📢]
- Generalizing Across Domains via Cross-Gradient Training.
In ICLR 2018. S Shankar, V Piratla, S Chaudhuri, P Jyothi, S Chakrabarti, S Sarawagi
[Paper] [Code]
Collaborators