And by research, they mean they asked GPT-4.
This FUD is like asking a crypto company, a couple of years ago, how things would go.
Big deal! So can a pair of nail clippers.
In a prepared statement, the CEO said "I’m doing it right now!"
Now this is a proper use of ML: limited scope, guided by experts, and improving things in a way computers are actually good at.
I got 7/10…
Is OpenAI’s game up? Let the arms race begin…
Alpaca: A Strong, Replicable Instruction-Following Model
Authors: Rohan Taori*, Ishaan Gulrajani*, Tianyi Zhang*, Yann Dubois*, Xuechen Li*, Carlos Guestrin, Percy Liang, and Tatsunori B. Hashimoto
We introduce Alpaca 7B, a model fine-tuned from the LLaMA 7B model on 52K instruction-following demonstrations. On our preliminary evaluation of single-turn instruction following, Alpaca behaves qualitatively similarly to OpenAI’s text-davinci-003, while being surprisingly small and easy/cheap to reproduce (<$600).
They have provided a web demo (which was down when I checked it) and released their materials on GitHub.
Edit: 452 source lines of code, Python…
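For anyone curious what "fine-tuned from the LLaMA 7B model on 52K instruction-following demonstrations" amounts to, here's a very rough sketch of that kind of training run. To be clear, this is not the Stanford repo's actual script; the paths, hyperparameters, and simplified prompt template are my own assumptions, and it leans on the Hugging Face transformers/datasets stack rather than whatever is in those 452 lines.

```python
# Rough sketch of Alpaca-style instruction fine-tuning (NOT the Stanford train.py).
# Assumes: Hugging Face transformers/datasets, a converted LLaMA 7B checkpoint,
# and the 52K demonstrations as a JSON list of {"instruction", "input", "output"}.
import json

from datasets import Dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

MODEL_PATH = "path/to/llama-7b"   # hypothetical local checkpoint path
DATA_PATH = "alpaca_data.json"    # the 52K instruction-following demos

# Simplified template: the real repo uses a second variant for records that
# also carry an "input" field; this sketch just ignores that field.
PROMPT = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n### Response:\n{output}"
)

tokenizer = AutoTokenizer.from_pretrained(MODEL_PATH)
tokenizer.pad_token = tokenizer.eos_token  # LLaMA has no pad token by default
model = AutoModelForCausalLM.from_pretrained(MODEL_PATH)

# Turn each demonstration into a single prompt+response training string.
with open(DATA_PATH) as f:
    records = json.load(f)
texts = [PROMPT.format(instruction=r["instruction"], output=r["output"]) for r in records]


def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)


dataset = Dataset.from_dict({"text": texts}).map(
    tokenize, batched=True, remove_columns=["text"]
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="alpaca-7b-sketch",
        per_device_train_batch_size=4,
        gradient_accumulation_steps=8,
        num_train_epochs=3,
        learning_rate=2e-5,
        bf16=True,
    ),
    train_dataset=dataset,
    # Causal LM collator pads the batch and copies input_ids into labels.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

The cheapness they claim comes from the data side: the 52K demonstrations were generated from an existing model, so the only real compute cost is a few hours of fine-tuning a 7B model.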
There’s a bit more discussion in this article.
So… which BBS contributor’s text do we feed to it first?
Volunteers?
AI roundup
ETA
Considering Sam Altman’s other effort, be wary of what information they could collect from users.
I gave Bard another fun prompt, and hoo boy, the latter part of the second part is a teensy bit concerning.
Loving the ‘medical textbook’ part of the prompt
Marcus also has concerns about the potential for misuse of these systems. In particular, he’s worried about how easy it is to use them to create convincing misinformation and disinformation.
“It makes the cost of generating misinformation almost zero,” he said.