Originally published at: Nnedi Okorafor and a positive side to ChatGPT | Boing Boing
It is a matter of who owns the technology and the products of its labor. If ownership is concentrated in the hands of a few, then so too will wealth be. As is always the case. First it was physical labor. Now it’s knowledge work. When human work is devalued, more value is created for the benefit of fewer people: those who contributed nothing but the capital to buy the technology in the first place.
For clarity, I was asking it about scientific information that’s been well researched. It gleaned all the info it had access to and spat it back to me in a clear, concise way, answering questions I wanted answers to right at the moment.
And was she able to verify the information that was spat back? Because there’s a lot of anecdotal evidence that it just tells you things that sound superficially convincing but which are too often a mix of fact, conjecture and flat-out made-up stuff, with no way of distinguishing between them.
Chatbots are rather akin to those experiences when an older sibling or a parent told you something authoritatively and you instinctively believed them because, after all, they ‘know’.
Exactly! An employer requested I take a look at ChatGPT so I asked it a series of questions, all very different from each other.
Some of the questions it knocked right out of the park: the answers it gave to “What was the significance of the Battle of Lepanto?” and “Who is the ugliest man in the world?”, for example. Other questions, like “What is a good recipe for cold-weather marine concrete?” and “What is the function of a euphroe?”, it just couldn’t answer coherently at all. But with certain questions, like “What is the difference between a Lochaber axe and a halberd?”, it gave very convincing, extremely well-composed answers… that were completely wrong.
I forgot to ask it why a raven is like a writing-desk, I need to go back and do that.
I read Ms. Okorafor’s novel “Who Fears Death” and liked it.
Update: it did good on ravens and writing desks. And somebody taught it what a Lochaber axe is since the last time I asked, also.
Yeah, but would you know that the ‘new’ answer was better than the ‘old’ answer unless you already knew? If I asked it about that difference, and got a different answer this week, would I have any way to know if this week’s answer was better or worse than last week’s? (Since I don’t know what the difference is - at least, not beyond recognising both words!)
Yeah, when you call it out on being wrong and ask it for literature to back up a claim which is popularly held, but wrong, it backpedals furiously and agrees with you.
Because it’s just stats and doesn’t understand anything.
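To illustrate the “just stats” point, here’s a toy sketch (my own illustration, not how an actual LLM is built): a bigram Markov chain that picks each next word purely from counts of what followed it in a tiny training text. The word list and everything else here is made up for the demo. It has no notion of truth or meaning, yet the output can read as superficially fluent, which is the failure mode people keep describing above, just at a vastly smaller scale.

```python
import random

# Toy "language model": pick the next word purely from counts of
# what followed it in the training text. No understanding involved.
corpus = (
    "the axe has a hook on the end and the halberd has a spike "
    "on the end and the axe is long and the halberd is long"
).split()

# Build a table: word -> list of words observed right after it.
table = {}
for a, b in zip(corpus, corpus[1:]):
    table.setdefault(a, []).append(b)

random.seed(0)  # fixed seed so the demo is repeatable
word = "the"
out = [word]
for _ in range(12):
    # Sample the next word from the observed successors
    # (fall back to the whole corpus if a word has none).
    word = random.choice(table.get(word, corpus))
    out.append(word)
print(" ".join(out))
```

Every word it emits is statistically plausible given the previous one, and the result often sounds vaguely sentence-like, but the program has no idea what an axe or a halberd is. Real LLMs condition on far more context and far more data, but the criticism in this thread is that the underlying operation is still prediction, not knowledge.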
I feel like this is another case for media literacy. A friend of mine has been using ChatGPT to help with curriculum development, and it can be all over the place, but you can also learn how to ask the right questions to get what you need, just as you would when writing code.
And, this is the relevant part, she always asks for the sources, and it includes links, so it’s easy to check if it’s sifting the info from reputable sources or not.
I’m not using it much yet, but my impression so far is that, if used well and wisely, it’s kind of like having an entry-level assistant, at least for the stuff I do.
It feels like a “smarter” version of Wikipedia so far. It can tell you things but it doesn’t know things, so for now it’s possibly better to use it to find where to look, like you say, as an assistant, rather than treat it as knowledgeable in itself.
Never trust an unaccredited librarian
Of course the real issue isn’t whether or not this tool is any good, it’s the continuing struggle to get the students to see the content and the work as the process and point of their education, and not the certificate handed out at the end. Cheating is seen as a shortcut to a better grade. Technology is just helping people who want to cheat do so. The situation may develop the way drug use in sports did: if the people using got better results, everybody ended up using to some extent.
A concern I’ve had about this for a while now is that in the very near future a huge proportion of the “sources,” even the formerly reputable ones, will be largely populated with information that was itself generated by ChatGPT-type AI. So it’s all going to get very muddled and recursive.
I’m not too concerned about that. To my mind, those would no longer be reputable sources.
But I get what you are getting at, I think: that it’s going to take more and more media literacy for the regular citizen to discern. Like, yeah, five years ago articles in journal X were something I might know to be scientifically valid, but anything after such-and-such a date is questionable.
This is, of course, a known problem with Wikipedia which (sensibly) asks for citations for claims but which (admittedly not often) ended up using citations of articles from ‘accepted’ sources that themselves originally cited the Wikipedia article that was being checked.
As I think Ted Chiang observed, we aren’t being told whether the corpus that the next generation of LLMs is being trained on includes material from the previous generation, and it’s not hard to figure out why not.
Wikipedia has many, many faults. But it also has some excellent elements. Specifically, it is an encyclopaedia which explicitly lets you see the history of its creation, whether something is contested, the discussions around the articles, and lots of markers indicating people’s assessments of the quality (or lack thereof) of the articles and of the sources cited, or not cited.
That makes it crucially better than the traditional encyclopaedia in some ways. Adam Smith half inched his description of a pin factory from Diderot’s article in his encyclopaedia. Diderot nicked it, also uncredited, from someone I’ve forgotten. Neither of them knew what they were talking about, and neither had ever been to a pin factory. It was of course all bollocks.
Pre-trained generative large language models remind me of nothing so much as Google once it tooled up for its IPO in 2004. They dropped PageRank and made search results a black box for commercial gain.
We have been seeing bullshit Markov-chain-generated text and networks of SEO-optimised chum as the canon people have to deal with ever since.
I am merely echoing, from a different perspective maybe, the concerns of people before me that there are sources citing sources and reinforcing each other in SEO/AI feedback loops. Commercial AI is always a black box: always opaque, always inscrutable, always unverifiable, always a breach of privacy and data protection.
I use it, but I also use lots of other tech which is dangerous, illegal, and damaging.
Can confirm. I asked it to provide me with some citations in regard to several topics I am familiar with, and it came up with authors who are in the field, titles which were convincing, and (without being asked to) quite concise summaries.
What I found interesting was that it could not name the journals or books. And when I did some digging, not a single one of the citations seemed to exist.
Yeah, well. It never did provide any references for me when I asked about sources.
Nice description, I’m going to use that.
Does anyone know if it works with ontologies, and if so, whether it could work on emergent ontologies?
Just for the record: I tend to try to get to the original source whenever citing something in a scientific context. I have often ended up staring in disbelief at my results. Sometimes the original source is unavailable: maybe destroyed, lost in time, or simply only available as a hardcopy in one or two of the libraries connected to any catalogue I could find. That is OK, especially for stuff which might be hundreds of years old. But sadly, quite often the original source does not provide enough evidence for the interpretation given. Sometimes the secondary or tertiary literature has also distorted the original source’s content in ways which make me wanna go full panda bear.
You’ve all heard someone mutter about the reproducibility crisis in one discipline or another. I suppose such feedback loops are already at work, and our brains’ heuristics do quite a lot to let us ignore that.
I’m always amazed how we get any stuff done, anyway. Most of our science must be correct. Maybe.
So, LLMs and the like might seem like the riders of the apocalypse right now. They might turn out to be the writers of some apocrypha instead. But they might also just evolve into something which integrates quite well into the stories Pan narrans tells itself…
That was the upshot of my report. I can’t use it for anything except learning what to look up next, because the wrong answers are so well written.
A Lochaber axe has a hook on the end.
This topic was automatically closed after 5 days. New replies are no longer allowed.