SS Lab

Sunita Sarawagi's Lab at IIT Bombay


Domain Generalization and Adaptation

Machines learn from repositories of examples generated by humans. Each example is a creative expression of its label by a human author. Predictive models that learn from such examples should therefore be cautious not to entangle what is expressed (the label) with how it is expressed (the style). However, many machine-learned models fall into this trap and generalize poorly under benign style shifts of the distribution. Poor generalization to new domains has serious practical consequences, especially for models deployed in the wild.

Toward the objective of improving performance on any domain, we conduct research along two broad themes.
Domain Generalization: When we have access to examples drawn from multiple domains (styles) during training, can we exploit the train-time domain variation to generalize better to any new domain? [NeurIPS21, ICML20, ICLR18].
Domain Adaptation: When we care about performance on a focused target domain, how can we learn a model given access to resources, albeit limited, from that domain? [Interspeech20, EMNLP20, ACL19].

Publications

Collaborators