Link to original content: https://huggingface.co/papers/2305.11206
arxiv:2305.11206

LIMA: Less Is More for Alignment

Published on May 18, 2023
· Submitted by akhaliq on May 21, 2023
#1 Paper of the day

Abstract

Large language models are trained in two stages: (1) unsupervised pretraining from raw text, to learn general-purpose representations, and (2) large scale instruction tuning and reinforcement learning, to better align to end tasks and user preferences. We measure the relative importance of these two stages by training LIMA, a 65B parameter LLaMa language model fine-tuned with the standard supervised loss on only 1,000 carefully curated prompts and responses, without any reinforcement learning or human preference modeling. LIMA demonstrates remarkably strong performance, learning to follow specific response formats from only a handful of examples in the training data, including complex queries that range from planning trip itineraries to speculating about alternate history. Moreover, the model tends to generalize well to unseen tasks that did not appear in the training data. In a controlled human study, responses from LIMA are either equivalent or strictly preferred to GPT-4 in 43% of cases; this statistic is as high as 58% when compared to Bard and 65% versus DaVinci003, which was trained with human feedback. Taken together, these results strongly suggest that almost all knowledge in large language models is learned during pretraining, and only limited instruction tuning data is necessary to teach models to produce high quality output.
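
As a rough illustration of what "fine-tuned with the standard supervised loss" means in practice, here is a minimal sketch of supervised fine-tuning on prompt-response pairs with the transformers library. The model name, separator, and hyperparameters are illustrative assumptions, not the paper's setup (the paper fine-tunes a 65B-parameter LLaMa on 1,000 curated examples).

```python
# Minimal sketch (not the paper's code): standard next-token cross-entropy
# on curated prompt-response pairs. Model name, separator, and learning rate
# are illustrative assumptions; the paper fine-tunes LLaMa-65B.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "huggyllama/llama-7b"  # stand-in for the 65B model used in the paper
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)

def sft_step(prompt: str, response: str) -> float:
    """One supervised step: learn to produce `response` given `prompt`."""
    text = prompt + "\n" + response + tokenizer.eos_token
    batch = tokenizer(text, return_tensors="pt", truncation=True, max_length=2048)
    # Standard causal-LM loss over the whole sequence; whether prompt tokens
    # are masked out of the loss is a detail the abstract does not specify.
    out = model(**batch, labels=batch["input_ids"])
    out.loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    return out.loss.item()
```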

Community

What is a distraction, and why is it happening again and again?

Where can we download the 1,000 training examples?
Or do you plan to release the trained model?

How is this different from few-shot prompting, except that it's 1,000-shot?

They are actually fine-tuning (supervised learning) on this small curated dataset of 1,000 examples, which is different from k-shot prompting, where the examples are placed in the prompt.
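
To make the contrast concrete, a rough sketch (my illustration, not from the paper): in k-shot prompting the examples only condition a single forward pass and no weights change, whereas LIMA takes gradient steps on its 1,000 examples, as in the fine-tuning sketch above. The tiny gpt2 model here is just a stand-in for the demo.

```python
# k-shot prompting: the examples live in the prompt at inference time and the
# model weights are never updated. gpt2 is a small stand-in model.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

few_shot_prompt = (
    "Q: Translate 'bonjour' to English.\nA: hello\n\n"
    "Q: Translate 'gracias' to English.\nA: thank you\n\n"
    "Q: Translate 'danke' to English.\nA:"
)
# The k examples only condition this single generation call; nothing is learned.
print(generator(few_shot_prompt, max_new_tokens=5)[0]["generated_text"])
```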

The link to the dataset is: https://huggingface.co/datasets/GAIR/lima/
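
For anyone who wants to inspect it, a minimal loading snippet with the datasets library. Note that the repo may require accepting its license on the Hub and being logged in before the download works; the "train" split name is my assumption about the layout.

```python
# Minimal sketch: load the LIMA data with the `datasets` library.
# The repo may require accepting its license on the Hub and authenticating
# (e.g., via `huggingface-cli login`) before this call succeeds.
from datasets import load_dataset

lima = load_dataset("GAIR/lima")
print(lima)              # available splits and their sizes
print(lima["train"][0])  # inspect one curated prompt/response record (assumes a "train" split)
```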

You can also find a Korean-translated version of the LIMA dataset here!

Thanks for sharing the paper! If I'm not mistaken, the model described in the paper is not public yet. Will it be made public in the near future?

So LIMA is basically "fine-tuning with curated/diverse data"? Is it right to understand it that way?

Can we achieve LIMA-style results using LoRA as well, then?
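
Whether LoRA matches full fine-tuning here is an open question the paper does not test, but mechanically the same small-data SFT recipe can be run with adapters. A minimal sketch with the peft library; the base model and hyperparameters are illustrative assumptions, not anything from the paper.

```python
# Minimal sketch: wrap a base model in LoRA adapters with `peft`, then run the
# same supervised fine-tuning loop on the 1,000 curated examples.
# Base model and hyperparameters are illustrative assumptions.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("huggyllama/llama-7b")
lora_cfg = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # LLaMa attention projections
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora_cfg)
model.print_trainable_parameters()  # only the adapter weights are trainable
# ...then apply the fine-tuning step from the sketch above to the 1,000 examples.
```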

LIMA: How Less Data Creates More Powerful AI Alignment!

Links πŸ”—:

πŸ‘‰ Subscribe: https://www.youtube.com/@Arxflix
πŸ‘‰ Twitter: https://x.com/arxflix
πŸ‘‰ LMNT (Partner): https://lmnt.com/

By Arxflix

Models citing this paper: 15
Datasets citing this paper: 13
Spaces citing this paper: 22
Collections including this paper: 11