
Pretrained Transformers Improve Out-of-Distribution Robustness

Dan Hendrycks, Xiaoyuan Liu, Eric Wallace, Adam Dziedzic, Rishabh Krishnan, Dawn Song


Abstract
Although pretrained Transformers such as BERT achieve high accuracy on in-distribution examples, do they generalize to new distributions? We systematically measure out-of-distribution (OOD) generalization for seven NLP datasets by constructing a new robustness benchmark with realistic distribution shifts. We measure the generalization of previous models including bag-of-words models, ConvNets, and LSTMs, and we show that pretrained Transformers’ performance declines are substantially smaller. Pretrained Transformers are also more effective at detecting anomalous or OOD examples, while many previous models are frequently worse than chance. We examine which factors affect robustness, finding that larger models are not necessarily more robust, distillation can be harmful, and more diverse pretraining data can enhance robustness. Finally, we show where future work can improve OOD robustness.
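The anomaly-detection result in the abstract (some pre-Transformer models detect OOD inputs "worse than chance") can be made concrete with a small sketch. Below is a minimal out-of-distribution detector built on the standard maximum softmax probability (MSP) baseline, evaluated with AUROC; the toy logits and the helper names are illustrative assumptions, not taken from the paper's released code.

```python
# Minimal sketch of OOD detection via the maximum softmax probability (MSP)
# baseline. An AUROC of 0.5 corresponds to chance-level detection.
import numpy as np
from sklearn.metrics import roc_auc_score

def msp_scores(logits: np.ndarray) -> np.ndarray:
    """Anomaly score = negative max softmax probability (higher = more OOD)."""
    z = logits - logits.max(axis=1, keepdims=True)          # stabilize softmax
    probs = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
    return -probs.max(axis=1)

# Toy logits standing in for a classifier's outputs on in-distribution (ID)
# and out-of-distribution (OOD) examples; a real experiment would use a
# model's logits on held-out ID data and on data from a shifted distribution.
rng = np.random.default_rng(0)
id_logits = rng.normal(loc=[4.0, 0.0], scale=1.0, size=(500, 2))   # confident
ood_logits = rng.normal(loc=[1.0, 0.8], scale=1.0, size=(500, 2))  # uncertain

scores = np.concatenate([msp_scores(id_logits), msp_scores(ood_logits)])
labels = np.concatenate([np.zeros(500), np.ones(500)])  # 1 = OOD

print(f"OOD detection AUROC: {roc_auc_score(labels, scores):.3f}")
```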
Anthology ID: 2020.acl-main.244
Volume: Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics
Month: July
Year: 2020
Address: Online
Editors: Dan Jurafsky, Joyce Chai, Natalie Schluter, Joel Tetreault
Venue: ACL
Publisher: Association for Computational Linguistics
Pages: 2744–2751
URL: https://aclanthology.org/2020.acl-main.244
DOI: 10.18653/v1/2020.acl-main.244
Cite (ACL): Dan Hendrycks, Xiaoyuan Liu, Eric Wallace, Adam Dziedzic, Rishabh Krishnan, and Dawn Song. 2020. Pretrained Transformers Improve Out-of-Distribution Robustness. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 2744–2751, Online. Association for Computational Linguistics.
Cite (Informal): Pretrained Transformers Improve Out-of-Distribution Robustness (Hendrycks et al., ACL 2020)
PDF: https://aclanthology.org/2020.acl-main.244.pdf
Video: http://slideslive.com/38929340
Code: camelop/NLP-Robustness
Data: IMDb Movie Reviews, MultiNLI, ReCoRD, SNLI, SST, SST-2