Big Language Models Can Train Small and Cheap Language Models

Large language models can train smaller language models and uplevel them quickly. Stanford researchers built Alpaca 7B, a model fine-tuned from the LLaMA 7B model on 52K instruction-following demonstrations generated with OpenAI's text-davinci-003 (a GPT-3.5 model). In their preliminary evaluation of single-turn instruction following, Alpaca behaves qualitatively similarly to text-davinci-003, while being surprisingly small and cheap to reproduce (under $600).
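
The recipe itself is plain supervised fine-tuning: take the teacher-generated instruction/response pairs and train the small base model on them with a standard causal language-modeling objective. The Python sketch below illustrates that setup with Hugging Face transformers; the checkpoint name, data path, prompt template, and hyperparameters are illustrative assumptions, not the Stanford training code.

```python
# Minimal sketch of Alpaca-style supervised fine-tuning: a small base model is
# fine-tuned on instruction/response demonstrations produced by a larger model.
# Checkpoint name, file path, prompt template, and hyperparameters below are
# assumptions for illustration, not the Stanford recipe.
import json

from datasets import Dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

# Alpaca-style prompt template wrapping each demonstration.
PROMPT = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n### Response:\n{output}"
)


def load_demonstrations(path: str) -> Dataset:
    """Load teacher-generated demonstrations from a JSON file shaped like
    [{"instruction": ..., "output": ...}, ...] (hypothetical path/format)."""
    with open(path) as f:
        records = json.load(f)
    return Dataset.from_dict({"text": [PROMPT.format(**r) for r in records]})


def main() -> None:
    base_model = "huggyllama/llama-7b"  # assumed LLaMA 7B checkpoint name
    tokenizer = AutoTokenizer.from_pretrained(base_model)
    tokenizer.pad_token = tokenizer.eos_token
    model = AutoModelForCausalLM.from_pretrained(base_model)

    dataset = load_demonstrations("alpaca_data.json")  # hypothetical file

    def tokenize(batch):
        return tokenizer(batch["text"], truncation=True, max_length=512)

    tokenized = dataset.map(tokenize, batched=True, remove_columns=["text"])

    trainer = Trainer(
        model=model,
        args=TrainingArguments(
            output_dir="alpaca-7b-sft",
            per_device_train_batch_size=4,
            num_train_epochs=3,
            learning_rate=2e-5,
            bf16=True,
        ),
        train_dataset=tokenized,
        # Causal LM collator: labels are the input ids (no masked LM).
        data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
    )
    trainer.train()


if __name__ == "__main__":
    main()
```

What makes this cheap is that the expensive part, writing the 52K demonstrations, is delegated to the larger model; Alpaca's demonstrations were produced with a variant of the Self-Instruct pipeline described next.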



Self-Instruct: Aligning Language Models with Self-Generated Instructions


Yizhong Wang, Yeganeh Kordi, Swaroop Mishra, Alisa Liu, Noah A. Smith, Daniel Khashabi, Hannaneh Hajishirzi


Large “instruction-tuned” language models (finetuned to respond to instructions) have demonstrated a remarkable ability to generalize zero-shot to new tasks. Nevertheless, they depend heavily on human-written instruction data that is limited in quantity, diversity, and creativity, therefore hindering the generality of the tuned model. We introduce Self-Instruct, a framework for improving the instruction-following capabilities of pretrained language models by bootstrapping off their own generations. Our pipeline generates instruction, input, and output samples from a language model, then prunes them before using them to finetune the original model. Applying our method to vanilla GPT3, we demonstrate a 33% absolute improvement over the original model on Super-NaturalInstructions, on par with the performance of InstructGPT_001, which is trained with private user data and human annotations. For further evaluation, we curate a set of expert-written instructions for novel tasks, and show through human evaluation that tuning GPT3 with Self-Instruct outperforms using existing public instruction datasets by a large margin, leaving only a 5% absolute gap behind InstructGPT_001. Self-Instruct provides an almost annotation-free method for aligning pre-trained language models with instructions, and we release our large synthetic dataset to facilitate future studies on instruction tuning.
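
To make the bootstrapping step in the abstract concrete, here is a minimal Python sketch of the instruction-generation and pruning loop: sample a few instructions from the task pool as in-context examples, ask a model to draft new ones, and keep only candidates that are not near-duplicates of anything already in the pool. The generate_instructions callable is a hypothetical stand-in for whatever LLM call you use; the ROUGE-L novelty filter mirrors the pruning described in the paper, while the pool size and number of in-context examples are arbitrary.

```python
# Sketch of the Self-Instruct bootstrapping loop: grow an instruction pool
# from a small set of seed tasks, pruning near-duplicate generations.
# generate_instructions() is a hypothetical stand-in for an LLM call.
import random
from typing import Callable, List

from rouge_score import rouge_scorer

_scorer = rouge_scorer.RougeScorer(["rougeL"], use_stemmer=False)


def is_novel(candidate: str, pool: List[str], threshold: float = 0.7) -> bool:
    """Keep a candidate only if its ROUGE-L overlap with every existing
    instruction stays below the threshold."""
    return all(
        _scorer.score(existing, candidate)["rougeL"].fmeasure < threshold
        for existing in pool
    )


def self_instruct(
    seed_instructions: List[str],
    generate_instructions: Callable[[List[str]], List[str]],
    target_size: int = 1000,
    prompt_examples: int = 8,
) -> List[str]:
    """Bootstrap an instruction pool from a small set of seed tasks."""
    pool = list(seed_instructions)
    while len(pool) < target_size:
        # Prompt the model with a few in-context examples drawn from the pool.
        examples = random.sample(pool, min(prompt_examples, len(pool)))
        for candidate in generate_instructions(examples):
            candidate = candidate.strip()
            if candidate and is_novel(candidate, pool):
                pool.append(candidate)
    return pool
```

In the full pipeline, the surviving instructions are then paired with model-generated inputs and outputs and used to finetune the original model; this sketch covers only the generation-and-pruning stage.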
