"What will happen to me?" Stories written by the GPT-2 neural network

Originally published at: "What will happen to me?" Stories written by the GPT-2 neural network | Boing Boing


Teenage existential angst, but on a computer!


Where can I get a copy of this zine?


These sound a lot like the liner notes from Springsteen’s The River album…


Is there a link to buy the zine?


Speaking of “publishing the written works of a machine”
I am reminded of Racter’s book, “The Policeman’s Beard is Half Constructed.” A few quotes…

Slice a visage to build a visage. A puzzle to its owner.

Enthralling stories about animals are in my dreams and I will sing them all if I am not exhausted and weary.

I was thinking as you entered the room just now how slyly your requirements are manifested. Here we find ourselves, nose to nose as it were, considering things in spectacular ways, ways untold even by my private managers. Hot and torpid, our thoughts revolve endlessly in a kind of maniacal abstraction, an abstraction so involuted, so dangerously valiant, that my own energies seem perilously close to exhaustion, to morbid termination. Well, have we indeed reached a crisis?

Check out the book here: The Policeman’s Beard is Half Constructed
The Amiga version is here: Racter by Mindscape


You’re not going to have time enough to read it.


This would be a great learning tool if you managed to deconstruct how it was written and the “reasoning” behind it.

@teodorberto. I think the whole point of machine learning is that it’s a black box. It’s not “how it’s written,” because it isn’t written. It’s not an algorithm as we understand it.


Speaking as a person reading BoingBoing because I am completely stuck on a machine learning project which resolutely and robustly will not do what it ought to do…

Machine Learning programs are not good analogues for our brains. They have a lot more structure. They are generally either training or predicting, but not both. They are digital, which allows us to trace what is going on with precision. They are all written programs, and they are generally trained with a particular outcome in mind. Here, the aim is to produce good text, and the strange, wandering narrative is probably because it was given “what will happen to me?” with no backstory.
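The train-then-predict split mentioned above can be sketched in a few lines. This is a minimal illustration, not GPT-2: a tiny gradient-descent fit of y = 2x, with the training phase (weights updated) clearly separated from the prediction phase (weights frozen). All names and numbers here are made up for the example.

```python
def train(data, epochs=200, lr=0.05):
    """Training phase: the weight is repeatedly adjusted to fit the data."""
    w = 0.0
    for _ in range(epochs):
        for x, y in data:
            pred = w * x
            w -= lr * (pred - y) * x  # gradient step on squared error
    return w

def predict(w, x):
    """Prediction phase: the learned weight is applied but never changed."""
    return w * x

# Toy data following y = 2x
w = train([(1, 2), (2, 4), (3, 6)])
print(round(predict(w, 10), 2))  # close to 20.0
```

Real frameworks keep the same separation, just at much larger scale: a distinct training loop that updates parameters, then inference that only reads them.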

Someone said that thermodynamics owes more to the steam-engine than the steam-engine owes to thermodynamics. Maybe AI will do the same for our understanding of intelligence. Or maybe we will go from ‘brains can do this, but we don’t know how’ to ‘brains and some computers can do this, but we don’t know how’. Fun times, though.


If we want programmes that are good analogues for our brains we should probably go for artificial stupidity.


As to the question at hand:


I think ‘artificial low cunning’ is nearer the mark: the ability to do as little as possible, to keep my expectations low, to throw ‘Out Of Memory’ fits if I don’t cut up its data-food the way it likes it. It may not be intelligence as we know it, Jim, but it reminds me of some people already.


Interesting point!
I’ve still got some learning to do.


Thank you for your insights!


Perhaps I should have said their output is not the result of written code alone, given that it is the result of training. With old-style code, you could study the instructions and predict the output from a given input (to the extent that the program was small enough to keep in your head all at once). A trained ML system will not be predictable in that way, as I understand it.

@anon40366910. I see what you did there. :wink:

I found this book genuinely entertaining and quite comprehensible: You Look Like a Thing and I Love You: How Artificial Intelligence Works and Why It’s Making the World a Weirder Place by Janelle Shane. ISBN-10: 031652524

Reading it was the first time I grokked this stuff. YMMV.


That’s sort-of true. But it is really hard to ‘know’ what a threaded program is doing too. You write the thing to be robust and to handle keyboard interrupts gracefully, but I remember a software developer saying he had no accurate idea of what Windows is doing after 100 microseconds. I feel a big part of the ‘unknowability’ of our intelligence comes from its hugely parallel, asynchronous, threaded nature, whereas machine-learning frameworks like Keras (which I am using) are very regular and structured: you may not know exactly what a network does once it is trained, but you know exactly how it is doing it.
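To illustrate the “regular and structured” point above: the mechanism of a trained network is a fixed, fully traceable sequence of layer operations, even when the learned weights themselves are opaque. A minimal hand-rolled sketch (the weights here are arbitrary stand-ins, not trained values):

```python
def relu(v):
    """Elementwise rectifier: negative activations become zero."""
    return [max(0.0, x) for x in v]

def dense(v, weights, bias):
    """One fully connected layer: each output is a weighted sum plus a bias."""
    return [sum(x * w for x, w in zip(v, col)) + b
            for col, b in zip(weights, bias)]

# "Learned" parameters -- opaque numbers, but arbitrary stand-ins here:
W1, b1 = [[0.5, -1.0], [1.0, 0.25]], [0.0, 0.1]
W2, b2 = [[1.0, 1.0]], [0.0]

def forward(v):
    # The control flow never varies: layer 1, nonlinearity, layer 2.
    # Every intermediate value can be traced with full precision.
    return dense(relu(dense(v, W1, b1)), W2, b2)

print(forward([1.0, 2.0]))
```

The *how* is completely known; what is hard to interpret is *why* the particular trained numbers in `W1` and `W2` produce useful behaviour.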

This topic was automatically closed after 5 days. New replies are no longer allowed.