Minimalist news site powered by AI

Originally published at: Minimalist news site powered by AI | Boing Boing

1 Like

It uses AI, ChatGPT-4, to determine “significant news”, …

What isn’t clear (to this sorry ol’ sod) is whether these declarations of news are themselves subject to alteration (aka “hallucination”) by the A.I. program, or if it’s just being used to select among what’s actually out there via a “credibility” score…? (Is Fox “News” included in its training set? “Ey! If you made the easy assumption that Fox ‘News’ always expressed falsehoods…”)

1 Like

Hm. GPT-4 is available for free limited use on Bing, but you have to use Microsoft Edge to access it.

Good grief.

2 Likes

Back in the 1980s, Esquire magazine designed an equation that could predict with remarkable accuracy how many column-inches a story would get in the New York Times. In addition to the kinds of numbers crunched by this AI, the Esquire algorithm included “distance (in miles) from Times Square”. The farther away, the less relevant to NYT readers, the fewer the column-inches.
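The Esquire formula itself isn’t reproduced here, so this is a purely hypothetical sketch of the idea being described: a newsworthiness score that shrinks with distance from Times Square. Every coefficient below is made up for illustration.

```python
# Hypothetical sketch only: the real Esquire coefficients aren't given in
# the thread, so these numbers are invented to show the shape of the idea.
def column_inches(base_score: float, miles_from_times_square: float) -> float:
    """Toy estimate: start from a base newsworthiness score and apply a
    distance penalty so far-away stories earn fewer column-inches."""
    distance_penalty = 1.0 / (1.0 + miles_from_times_square / 500.0)
    return base_score * distance_penalty

# The same story, once local and once 3,000 miles away.
print(column_inches(20.0, 2))     # ~19.9 column-inches
print(column_inches(20.0, 3000))  # ~2.9 column-inches
```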

3 Likes

My personal experience using LLMs is that they’re much better at parsing and remixing text than at creating text from scratch. The less information you provide and the more you ask them to deliver, the greater the chance of failure.

So things like “summarize an article” or “analyze the tone of a given text” will have a near-100% success rate. Asking an LLM to “write an article about X” won’t go well.

I never tried asking ChatGPT something like “how many people were affected by this”. That seems like a prompt where, unless the LLM connects to an external data source (ChatGPT can be instructed to use Wolfram Alpha, for example), there’s a high chance of not getting accurate results.
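A minimal sketch of the two prompt styles being contrasted, assuming the openai Python package (v1.x), an API key in the OPENAI_API_KEY environment variable, and a model name that is only an assumption; adjust to whatever you actually have access to.

```python
from openai import OpenAI

client = OpenAI()

def ask(prompt: str) -> str:
    """Send a single-turn prompt and return the model's reply text."""
    response = client.chat.completions.create(
        model="gpt-4o",  # assumed model name
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

article = "..."  # paste the full article text here

# Grounded task: all the facts are already in the input, so the model
# mostly parses and remixes. This tends to work well.
summary = ask(f"Summarize this article in three sentences:\n\n{article}")

# Ungrounded task: the model has to supply the facts itself, so unless it
# can call an external tool or database, accuracy is much less likely.
answer = ask("How many people were affected by this?")
```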

Note: I mostly use ChatGPT to make sure some of my written text is consistent in tone*, and except for the times I forgot to provide the instructions** and it tried to hold a conversation with me about my text, it does that job fine.

* English is not my first language, and while consistency does not matter much when I write here, it is helpful when I write operations manuals: among other things, to reorder my prompts in a consistent way, maintain the formal tone, and always address the user in the third person.

** ChatGPT can remember an instruction for a number of prompts, but at some point it forgets it and then tries to parse the text as a prompt instead of as an input. Or maybe I am just no good at using it.
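One common workaround for that “forgetting” is to stop relying on the chat remembering earlier instructions and instead resend them as a system message with every request. A sketch under the same assumptions as above (openai v1.x client, assumed model name); the wording of the instruction is also an assumption.

```python
from openai import OpenAI

client = OpenAI()

# The instruction travels with every call, so it can never drop out of
# the conversation and get the text misread as a new prompt.
STYLE_INSTRUCTION = (
    "Rewrite the user's text so that it keeps a formal tone and always "
    "addresses the user in the third person. Do not reply conversationally; "
    "return only the rewritten text."
)

def polish(text: str) -> str:
    """Rewrite one chunk of manual text with the fixed style instruction."""
    response = client.chat.completions.create(
        model="gpt-4o",  # assumed model name
        messages=[
            {"role": "system", "content": STYLE_INSTRUCTION},
            {"role": "user", "content": text},
        ],
    )
    return response.choices[0].message.content

print(polish("Press the red button if you want to stop the machine."))
```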

2 Likes

Of its top 15 I see two about the Kremlin drone, two about the US debt limit, two about Google and passkeys. This is the same sort of repetitive posting I see in human-curated sites that I love.

The drone attack on the Kremlin gets a score of 6.6 and Google releasing passkeys gets 6.8? Sure, the Google story potentially has a direct effect on more people (at least in the short term), but I don’t think a human editor would ever give it the top headline. Maybe the AI just prefers tech stories?

1 Like

[image]

3 Likes

[That’s Good Robert De Niro GIF]

3 Likes

This is exactly the case, and it’s why the term AI is so misleading.

LLMs need a massive corpus of text and images to do anything, and without it they don’t work. They aren’t “learning”; they are just getting more inputs to generate better outputs. Try using something super obscure with any of these search bots or image generators and you’ll quickly see their limitations.

They only feel like magic because they are good at presenting text coherently, and they only do that because they have so much material to draw from. It’s not magic, it’s all an illusion – a very expensive one, because it requires incredibly wasteful amounts of compute to work. I read somewhere that Bing’s AI costs upwards of seven figures a day to run. And for what? “Conversational” search results? I’d rather parse through them myself, thank you very much. I value my time over the novelty of a coherent chatbot.

In other words, it’s not Skynet, it’s just a fancy IVR.

5 Likes

Totally agree. I only call these tools AI by accident, because of that. I know LLM does not roll off the tongue as easily, but we can always go with SPs (stochastic parrots) if we need a more colorful noun :wink:

3 Likes

Everyone is calling these LLMs AI, though they certainly aren’t “general AI” like what most people think of as being AI – HAL 9000, C-3PO, and so on, which can hold a conversation.

However, if the sophistication and quality of their responses gets good enough to pass the Turing test, perhaps they will effectively be a kind of specialist AI.

I still like the OG minimalist news site that was linked to on this very BBS.

http://68k.news/

2 Likes

I just get news-feed emails from trusted sources and open them in plain text. Better selection.

1 Like

Um, these chatbots blew through the Turing test years ago. The bar on that test isn’t actually very high. It’s not the ironclad evidence of consciousness that layfolk tend to think it is.

Academic computer science has been debating what a new test should look like since at least ELIZA, since it was already clear that chatbots were going to break the Turing test soon. No new one is likely to emerge, though, because even neuroscience has trouble defining consciousness (though it has a pretty good working model requiring elements of language and behaviour to be displayed in combination).

2 Likes

I propose an Ex Machina test: the AI has to get a human who knows that it’s an AI to fall in love with it.

1 Like

This topic was automatically closed after 5 days. New replies are no longer allowed.