Machel Reid
2024
- [c18] Akari Asai, Sneha Kudugunta, Xinyan Yu, Terra Blevins, Hila Gonen, Machel Reid, Yulia Tsvetkov, Sebastian Ruder, Hannaneh Hajishirzi: BUFFET: Benchmarking Large Language Models for Few-shot Cross-lingual Transfer. NAACL-HLT 2024: 1771-1800
- [i16] Machel Reid, Nikolay Savinov, Denis Teplyashin, Dmitry Lepikhin, Timothy P. Lillicrap, Jean-Baptiste Alayrac, Radu Soricut, Angeliki Lazaridou, Orhan Firat, Julian Schrittwieser, Ioannis Antonoglou, Rohan Anil, Sebastian Borgeaud, Andrew M. Dai, Katie Millican, Ethan Dyer, Mia Glaese, Thibault Sottiaux, Benjamin Lee, Fabio Viola, Malcolm Reynolds, Yuanzhong Xu, James Molloy, Jilin Chen, Michael Isard, Paul Barham, Tom Hennigan, Ross McIlroy, Melvin Johnson, Johan Schalkwyk, Eli Collins, Eliza Rutherford, Erica Moreira, Kareem Ayoub, Megha Goel, Clemens Meyer, Gregory Thornton, Zhen Yang, Henryk Michalewski, Zaheer Abbas, Nathan Schucher, Ankesh Anand, Richard Ives, James Keeling, Karel Lenc, Salem Haykal, Siamak Shakeri, Pranav Shyam, Aakanksha Chowdhery, Roman Ring, Stephen Spencer, Eren Sezener, et al.: Gemini 1.5: Unlocking multimodal understanding across millions of tokens of context. CoRR abs/2403.05530 (2024)

2023
- [c17] Machel Reid, Mikel Artetxe: On the Role of Parallel Data in Cross-lingual Transfer Learning. ACL (Findings) 2023: 5999-6006
- [c16] Jonas Pfeiffer, Francesco Piccinno, Massimo Nicosia, Xinyi Wang, Machel Reid, Sebastian Ruder: mmT5: Modular Multilingual Pre-Training Solves Source Language Hallucinations. EMNLP (Findings) 2023: 1978-2008
- [c15] Stephen Ngumbi Kiilu, Machel Reid: Pivot Pre-finetuning for Low Resource MT: A Case Study in Kikamba. Tiny Papers @ ICLR 2023
- [c14] Machel Reid, Vincent Josua Hellendoorn, Graham Neubig: DiffusER: Diffusion via Edit-based Reconstruction. ICLR 2023
- [i15] Jonas Pfeiffer, Francesco Piccinno, Massimo Nicosia, Xinyi Wang, Machel Reid, Sebastian Ruder: mmT5: Modular Multilingual Pre-Training Solves Source Language Hallucinations. CoRR abs/2305.14224 (2023)
- [i14] Akari Asai, Sneha Kudugunta, Xinyan Velocity Yu, Terra Blevins, Hila Gonen, Machel Reid, Yulia Tsvetkov, Sebastian Ruder, Hannaneh Hajishirzi: BUFFET: Benchmarking Large Language Models for Few-shot Cross-lingual Transfer. CoRR abs/2305.14857 (2023)

2022
- [c13] Itsuki Okimura, Machel Reid, Makoto Kawano, Yutaka Matsuo: On the Impact of Data Augmentation on Downstream Performance in Natural Language Processing. Insights@ACL 2022: 88-93
- [c12] Machel Reid, Victor Zhong, Suchin Gururangan, Luke Zettlemoyer: M2D2: A Massively Multi-Domain Language Modeling Dataset. EMNLP 2022: 964-975
- [c11] Machel Reid, Graham Neubig: Learning to Model Editing Processes. EMNLP (Findings) 2022: 3822-3832
- [c10] Machel Reid, Mikel Artetxe: PARADISE: Exploiting Parallel Data for Multilingual Sequence-to-Sequence Pretraining. NAACL-HLT 2022: 800-810
- [c9] David Ifeoluwa Adelani, Jesujoba O. Alabi, Angela Fan, Julia Kreutzer, Xiaoyu Shen, Machel Reid, Dana Ruiter, Dietrich Klakow, Peter Nabende, Ernie Chang, Tajuddeen Gwadabe, Freshia Sackey, Bonaventure F. P. Dossou, Chris Emezue, Colin Leong, Michael Beukman, Shamsuddeen Hassan Muhammad, Guyo Dub Jarso, Oreen Yousuf, Andre Niyongabo Rubungo, Gilles Hacheme, Eric Peter Wairagala, Muhammad Umair Nasir, Benjamin Ajibade, Tunde Ajayi, Yvonne Wambui Gitau, Jade Z. Abbott, Mohamed Ahmed, Millicent Ochieng, Aremu Anuoluwapo, Perez Ogayo, Jonathan Mukiibi, Fatoumata Ouoba Kabore, Godson Kalipe, Derguene Mbaye, Allahsera Auguste Tapo, Victoire Memdjokam Koagne, Edwin Munkoh-Buabeng, Valencia Wagner, Idris Abdulmumin, Ayodele Awokoya, Happy Buzaaba, Blessing K. Sibanda, Andiswa Bukula, Sam Manthalu: A Few Thousand Translations Go a Long Way! Leveraging Pre-trained Models for African News Translation. NAACL-HLT 2022: 3053-3070
- [c8] Takeshi Kojima, Shixiang Shane Gu, Machel Reid, Yutaka Matsuo, Yusuke Iwasawa: Large Language Models are Zero-Shot Reasoners. NeurIPS 2022
- [c7] Machel Reid, Mikel Artetxe: PARADISE: Exploiting Parallel Data for Multilingual Sequence-to-Sequence Pretraining. RepL4NLP@ACL 2022: 20-28
- [i13] Machel Reid, Yutaro Yamada, Shixiang Shane Gu: Can Wikipedia Help Offline Reinforcement Learning? CoRR abs/2201.12122 (2022)
- [i12] David Ifeoluwa Adelani, Jesujoba Oluwadara Alabi, Angela Fan, Julia Kreutzer, Xiaoyu Shen, Machel Reid, Dana Ruiter, Dietrich Klakow, Peter Nabende, Ernie Chang, Tajuddeen Gwadabe, Freshia Sackey, Bonaventure F. P. Dossou, Chris Chinenye Emezue, Colin Leong, Michael Beukman, Shamsuddeen Hassan Muhammad, Guyo Dub Jarso, Oreen Yousuf, Andre Niyongabo Rubungo, Gilles Hacheme, Eric Peter Wairagala, Muhammad Umair Nasir, Benjamin Ayoade Ajibade, Tunde Oluwaseyi Ajayi, Yvonne Wambui Gitau, Jade Z. Abbott, Mohamed Ahmed, Millicent Ochieng, Aremu Anuoluwapo, Perez Ogayo, Jonathan Mukiibi, Fatoumata Ouoba Kabore, Godson Koffi Kalipe, Derguene Mbaye, Allahsera Auguste Tapo, Victoire Memdjokam Koagne, Edwin Munkoh-Buabeng, Valencia Wagner, Idris Abdulmumin, Ayodele Awokoya, Happy Buzaaba, Blessing K. Sibanda, Andiswa Bukula, Sam Manthalu: A Few Thousand Translations Go a Long Way! Leveraging Pre-trained Models for African News Translation. CoRR abs/2205.02022 (2022)
- [i11] Takeshi Kojima, Shixiang Shane Gu, Machel Reid, Yutaka Matsuo, Yusuke Iwasawa: Large Language Models are Zero-Shot Reasoners. CoRR abs/2205.11916 (2022)
- [i10] Machel Reid, Graham Neubig: Learning to Model Editing Processes. CoRR abs/2205.12374 (2022)
- [i9] Machel Reid, Victor Zhong, Suchin Gururangan, Luke Zettlemoyer: M2D2: A Massively Multi-domain Language Modeling Dataset. CoRR abs/2210.07370 (2022)
- [i8] Machel Reid, Vincent J. Hellendoorn, Graham Neubig: DiffusER: Discrete Diffusion via Edit-based Reconstruction. CoRR abs/2210.16886 (2022)
- [i7] Machel Reid, Mikel Artetxe: On the Role of Parallel Data in Cross-lingual Transfer Learning. CoRR abs/2212.10173 (2022)

2021
- [c6] Edison Marrese-Taylor, Machel Reid, Yutaka Matsuo: Variational Inference for Learning Representations of Natural Language Edits. AAAI 2021: 13552-13560
- [c5] Machel Reid, Victor Zhong: LEWIS: Levenshtein Editing for Unsupervised Text Style Transfer. ACL/IJCNLP (Findings) 2021: 3932-3944
- [c4] Machel Reid, Junjie Hu, Graham Neubig, Yutaka Matsuo: AfroMT: Pretraining Strategies and Reproducible Benchmarks for Translation of 8 African Languages. EMNLP (1) 2021: 1306-1320
- [c3] Machel Reid, Edison Marrese-Taylor, Yutaka Matsuo: Subformer: Exploring Weight Sharing for Parameter Efficiency in Generative Transformers. EMNLP (Findings) 2021: 4081-4090
- [i6] Machel Reid, Edison Marrese-Taylor, Yutaka Matsuo: Subformer: Exploring Weight Sharing for Parameter Efficiency in Generative Transformers. CoRR abs/2101.00234 (2021)
- [i5] Machel Reid, Victor Zhong: LEWIS: Levenshtein Editing for Unsupervised Text Style Transfer. CoRR abs/2105.08206 (2021)
- [i4] Machel Reid, Mikel Artetxe: PARADISE: Exploiting Parallel Data for Multilingual Sequence-to-Sequence Pretraining. CoRR abs/2108.01887 (2021)
- [i3] Machel Reid, Junjie Hu, Graham Neubig, Yutaka Matsuo: AfroMT: Pretraining Strategies and Reproducible Benchmarks for Translation of 8 African Languages. CoRR abs/2109.04715 (2021)

2020
- [c2] Machel Reid, Edison Marrese-Taylor, Yutaka Matsuo: VCDM: Leveraging Variational Bi-encoding and Deep Contextualized Word Representations for Improved Definition Modeling. EMNLP (1) 2020: 6331-6344
- [c1] Machel Reid, Edison Marrese-Taylor, Yutaka Matsuo: Combining Pretrained High-Resource Embeddings and Subword Representations for Low-Resource Languages. AfricaNLP 2020
- [i2] Edison Marrese-Taylor, Machel Reid, Yutaka Matsuo: Variational Inference for Learning Representations of Natural Language Edits. CoRR abs/2004.09143 (2020)
- [i1] Machel Reid, Edison Marrese-Taylor, Yutaka Matsuo: VCDM: Leveraging Variational Bi-encoding and Deep Contextualized Word Representations for Improved Definition Modeling. CoRR abs/2010.03124 (2020)