You can call me AI

Is OpenAI’s game up? Let the arms race begin…


Alpaca: A Strong, Replicable Instruction-Following Model

Authors: Rohan Taori* and Ishaan Gulrajani* and Tianyi Zhang* and Yann Dubois* and Xuechen Li* and Carlos Guestrin and Percy Liang and Tatsunori B. Hashimoto

We introduce Alpaca 7B, a model fine-tuned from the LLaMA 7B model on 52K instruction-following demonstrations. On our preliminary evaluation of single-turn instruction following, Alpaca behaves qualitatively similarly to OpenAI’s text-davinci-003, while being surprisingly small and easy/cheap to reproduce (<$600).


They have provided a web demo (which was down when I checked it) and released their materials on GitHub.

Edit: 452 source lines of code, Python… :thinking:
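For anyone wondering what those 52K demonstrations actually feed into, here’s a minimal sketch of how one instruction-following record might be assembled into a supervised fine-tuning example. The field names and the prompt template are my own assumptions for illustration, not lifted from the Alpaca repo:

```python
# Hypothetical instruction-following demonstration record.
# Field names ("instruction", "input", "output") are assumed for
# illustration and may not match the released dataset exactly.
record = {
    "instruction": "Summarise the following paragraph in one sentence.",
    "input": "Alpaca 7B is a model fine-tuned from LLaMA 7B on 52K demonstrations.",
    "output": "Alpaca 7B is an instruction-tuned variant of LLaMA 7B.",
}

def build_example(rec: dict) -> str:
    """Turn one record into a single prompt-plus-target training string."""
    prompt = (
        "Below is an instruction that describes a task.\n\n"
        f"### Instruction:\n{rec['instruction']}\n\n"
        f"### Input:\n{rec['input']}\n\n"
        "### Response:\n"
    )
    return prompt + rec["output"]

print(build_example(record))
```

Multiply something like that by 52,000 records and a few GPU-hours of fine-tuning on LLaMA 7B, and you can see why the headline cost is so low.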

There’s a bit more discussion in this article.



So… :thinking: which BBS contributor’s text do we feed to it first? :grin: Volunteers?
