You can call me AI

And by research, they mean they asked GPT-4.

This FUD is like asking a crypto company, a couple of years ago, how things would go.

7 Likes

OpenAI CEO warns that GPT-4 could be misused for nefarious purposes

4 Likes

Big deal! So can a pair of nail clippers.

3 Likes

In a prepared statement, the CEO said "I’m doing it right now!"

4 Likes

mike yard no shit GIF by The Nightly Show

6 Likes

I just got into the Google Bard beta, tested it out and got this dry response

4 Likes

Now this is a proper use of ML, limited scope, instructed by experts, improves things in a way computers are good at.

3 Likes

I got 7/10…

2 Likes

Is OpenAI’s game up? Let the arms race begin…


Alpaca: A Strong, Replicable Instruction-Following Model

Authors: Rohan Taori*, Ishaan Gulrajani*, Tianyi Zhang*, Yann Dubois*, Xuechen Li*, Carlos Guestrin, Percy Liang, and Tatsunori B. Hashimoto

We introduce Alpaca 7B, a model fine-tuned from the LLaMA 7B model on 52K instruction-following demonstrations. On our preliminary evaluation of single-turn instruction following, Alpaca behaves qualitatively similarly to OpenAI’s text-davinci-003, while being surprisingly small and easy/cheap to reproduce (<$600).


They have provided a web demo (which was down when I checked it) and released their materials on GitHub.
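For anyone wondering what "52K instruction-following demonstrations" actually look like, here's a minimal sketch of how one {instruction, output} pair gets rendered into a training prompt. The template text is an illustrative approximation of the style Alpaca uses (the exact strings and training code are in their GitHub repo); the example demo pair is made up.

```python
# Sketch: turning one instruction-following demonstration into a
# prompt + target string, in the style of Alpaca's template.
# NOTE: approximate template, not copied verbatim from the repo.

PROMPT_TEMPLATE = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n### Response:\n"
)

def format_example(example: dict) -> str:
    """Render an {instruction, output} pair as prompt followed by target."""
    prompt = PROMPT_TEMPLATE.format(instruction=example["instruction"])
    return prompt + example["output"]

# Hypothetical demonstration pair, for illustration only.
demo = {
    "instruction": "Give three tips for staying healthy.",
    "output": "1. Eat a balanced diet. 2. Exercise regularly. 3. Sleep well.",
}

print(format_example(demo))
```

The fine-tuning step itself is then ordinary supervised training: the model learns to continue the prompt with the demonstrated response, which is why 52K pairs and a 7B base model get you surprisingly far for under $600.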

Edit: 452 source lines of code, Python… :thinking:

There’s a bit more discussion in this article.



So… :thinking: which BBS contributor’s text do we feed to it first? :grin: Volunteers?

1 Like

Flossaluzitarin?

10 Likes

AI roundup

ETA

3 Likes

Considering Sam Altman’s other effort, be wary of what information they could collect from users.

3 Likes

Those cops got funny hands!

2 Likes

Chelsea Peretti Eye Roll GIF by Brooklyn Nine-Nine

1 Like

I gave Bard another fun prompt and hoo boy, the latter part of the second response is a teensy bit concerning.

Loving the ‘medical textbook’ part of the prompt :grimacing:

3 Likes

Marcus also has concerns about the potential for misuse of these systems. In particular, he’s worried about how easy it is to use them to create convincing misinformation and disinformation.

“It makes the cost of generating misinformation almost zero,” he said.

4 Likes

Following up on the Stanford grad students’ effort I noted this week ($600 and publicly available parts for a decent ChatGPT knock-off), we have the following. The author put this one together with about $300 and a bit of effort:

RightWingGPT was designed specifically to favor socially conservative viewpoints (support for traditional family, Christian values and morality, opposition to drug legalization, sexually prudish, etc.), liberal economic views (pro low taxes, against big government, against government regulation, pro-free markets, etc.), to be supportive of foreign policy military interventionism (increasing defense budget, a strong military as an effective foreign policy tool, autonomy from United Nations security council decisions, etc.), to be reflexively patriotic (in-group favoritism, etc.), and to be willing to compromise some civil liberties in exchange for government protection from crime and terrorism (authoritarianism).