Originally published at: AD&D adventure, complete with fantasy artwork, made with AI tools | Boing Boing
AI tools can fuck right off. I hate this new trend of soulless “creativity”.
No offense to you whatsoever, @garethb2.
Edit: Maybe I should clarify instead of simply leaving a hotheaded comment. IMHO, using AI as a starting point for inspiration isn’t a bad thing. Using AI to completely create/fabricate something without additional human refinement is just lazy and soulless. (Again, this is just my opinion.) And by “additional human refinement” I don’t mean the “Oh, but I input this totally original textual description, therefore anything the AI spits out is my creation” mindset.
Again, I’m stating this as my own opinion and I’m not expecting everyone to agree. Sorry for my original statement coming off so strongly.
this is kinda cool. Dear Brother has a crew that has been playing D&D for 40 years together and they still meet up online once a week (from their different corners of the country) to quest. he was explaining how he used the GPT AI to create a monster to encounter “on the fly” by entering stats into the prompt and was delighted at what was produced!
of course he has his doubts as to the current usefulness and the ethics of copyright in using AI to create stuff, but in the spirit of the game, it saved his crew valuable time that would have been spent rolling for the creature’s stats.
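For anyone curious, the stat-rolling that shortcut replaces is easy enough to script yourself. Here’s a minimal Python sketch; the stat list and dice are illustrative, not official AD&D tables:

```python
import random

def roll(n_dice, sides, rng=random):
    """Sum of n_dice rolls of a die with the given number of sides (e.g. 3d6)."""
    return sum(rng.randint(1, sides) for _ in range(n_dice))

def quick_monster(name, hit_dice, rng=random):
    """Roll up a throwaway encounter monster: hit points as Nd8,
    plus a 3d6 spread for a few ability scores.
    (Stat list and dice are illustrative, not official rules.)"""
    return {
        "name": name,
        "hp": roll(hit_dice, 8, rng),
        "str": roll(3, 6, rng),
        "dex": roll(3, 6, rng),
        "int": roll(3, 6, rng),
    }

# Seed the generator so a reroll is reproducible while testing.
rng = random.Random(42)
print(quick_monster("cave troll", hit_dice=6, rng=rng))
```

Not as flexible as asking a chatbot, but it makes the same point: the tedious part of on-the-fly encounters has always been automatable.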
Yeah, I see that there are many facets to the usefulness and integrity of using AI tools for different applications. Expounding on my comment, I can find no fault in someone using AI tools for their own private use to help augment their own personal recreation. I’m sure this is the thought that @garethb2 had when sharing this post. I guess my biggest beef with AI is when it’s used in a professional/product-driven situation. I have a lot of artist friends who are wringing their hands because AI-driven artbots are supposedly able to do their job not only better, but faster. Ironically, these AI creators can only create their art because they have sampled previously human-created artworks. It saddens me.
Again though, using AI tools for personal use and personal recreation isn’t inherently bad.
oh, i am thoroughly onboard that presenting AI “creation” as original work is not ethical in any context. DB is also of similar mind as he is (among other things) a writer and video creator. he was a professor, teaching writing to undergrads at Penn, and has been fascinated of late with the use of ChatGPT (in particular) in writing passable undergraduate papers. this is one of his major dives into AI content generation - to find its strengths and weaknesses and how it can be detected when a person presents the work as their own.
sticky and slippery all at once.
It’s definitely better to see it as a tool, especially for assisting creators, rather than a replacement for creators.
Chat AI can serve as an editor and a “person” to bounce ideas off of. Not every burgeoning writer can pay hundreds of dollars to get their work edited professionally. It can help people with ADHD or other focus issues develop complicated story outlines and think through plot holes and identify concepts that need better development. The writer is still writing, they’re just utilizing the AI as a writer’s circle.
And the particular usage of it as a DM for a solo RPG is great if you don’t have the opportunity to meet up with friends to play a game. Another idea is for a writer to explore their story character by playing them in an AI-run RPG so they’re getting a better idea of how the character would act in different scenarios. And where the AI sometimes provides generic and boring concepts is actually good, because that friction you feel with it being cliché is your opportunity to come up with something more creative and interesting.
Stable Diffusion is good for exploring concepts visually. I’ll take writing inspiration from browsing Google Images and Pinterest on a particular topic, using them like a randomly generated vision board, but sometimes artists haven’t rendered a concept I want to explore, so it’s nice to be able to have the AI render a concept and even make ten or twenty variations so I have a lot to work off of. It’s great for descriptive writing exercises to describe what you see and have it suddenly take a life of its own where you imagine a whole narrative and end up with a complete story.
I think the question isn’t necessarily “what good can come out of this technology”… it’s true that those things might be helpful for the reasons you describe and that can be part of the conversation…
But we also need to ask “how will corporations use this technology to cut costs” because we can guarantee that they will. If this can be used to replace actual people or at least pay them far less, then they will do that. We know because that’s been a primary means of driving down labor costs for centuries now. The original luddites were not anti-technology weirdos, they were skilled workers rebelling against their forced obsolescence from their industry.
We live in a world of capitalism, where the first goal is almost never helping people, it’s always about driving profitability.
Absolutely. I consider it a foregone conclusion that corporations and the wealthy will use whatever means they have available to continue to drive down labor costs and exploit workers. They’ve used computers, the internet, smartphones, robots, etc. to do that before AI, and there will be newer technology in the future that likewise benefits their machinations. Heck, they’ve been saving money with FOSS for a while despite the general intent of FOSS to democratize computing and software.
I just haven’t been seeing much conversation at all about the benefits of this technology for the poor, for students, for the neurodivergent, etc. And there are useful benefits that I’m afraid people aren’t going to utilize, because most of the conversations I’m seeing are negative and will give them a bad impression before they discover what it can do for them. Like any other technology, we can and likely will have extensive discussions about how to use it ethically (and the wealthy will completely ignore those discussions), but I just want people to know that it can help them in novel ways they haven’t yet imagined.
Putting aside for a moment the terrible ethics of how current AIs are trained (and it is waaay more than just problematic), I think the proper way to look at AI is much the same as photography.
It’s a tool, just like photography: there are lazy and terrible photographers, and the skill of selecting what’s good from what’s mediocre in AI-generated art is much the same skill. Knowing what questions to ask an AI is perhaps perfectly analogous to knowing how to frame the camera, what settings to use, and what makes for good art.
It’ll eventually be seen as such and won’t compete with other art any more than paintings compete with sculpture or photographs compete with murals.
Also, I think the proper word for what AIs do is Dream. It seems the best way to describe the results to me.
I don’t know about elsewhere, but we have had such conversations around these parts… Even there, access is a key issue, too.
So, it’s very much a conversation worth having, as long as we understand that employing them in such ways is a choice to make; but the people making those choices are often not part of any of those categories and are not thinking along those lines. Those possible good outcomes are by-products, not the goal of such technologies.
But there should be much more of a focus on what, say, neurodivergent folks actually NEED rather than how we can shoehorn in shiny new things for their needs. The needs should be driving development, I’d argue, not the other way around, if that makes sense… I don’t think that’s the case with these AI tools, honestly.
Those possible good outcomes are by-products, not the goal of such technologies.
In my experience, the goal of a technology in the mind of the developer is often irrelevant to how it ends up getting used. Hedy Lamarr was just trying to help the war effort with her work on frequency hopping. I don’t know that she imagined it would lead to the development of wifi and bluetooth. Tim Berners-Lee helped create a great information exchange medium…that has been used for a lot of good and a lot of ill.
The needs should be driving development, I’d argue, not the other way around, if that makes sense… I don’t think that’s the case with these AI tools, honestly.
I’m an educator who has no access to the development side of AI at the moment, so figuring out positive uses for tools and technology is the only angle I can really approach it from right now. In that sense, because a part of my job is to inspire students to find academic programs that match their skills and interests, I can discuss ethical uses of technology as some students may be interested in pursuing one of the AI programs being offered.
I’m fully aware that digital stock imagery sites are a thing (and stock photography has been an option since the early 1920s), but these modern stock imagery sites pay graphic artists and photographers a fraction of what they would make selling their creations directly. (It’s a very complicated conversation and I realize there are benefits to listing your work on stock sites, but bear with me.)
So fast forward to today, where AI-created content could potentially take over as the new source for digital imagery. Every time you see a site use AI-generated imagery, that’s money that’s being taken away from a real creator (but still being put into the pocket of the developers who created the AI). I love BB, but I see them use AI-generated imagery quite a bit. Just saying.
So as we’re in the infancy of such things, we can only expect them to get worse from here on out. (Did you know that even Adobe could potentially be using your data to train AI, and you have to opt out to make sure it doesn’t?) It’s not that AI is simply replacing traditional artists and photographers. It has the nerve to use their existing works as a way to learn how to create its own. It’s straight-up theft, IMO. Don’t just take my word for it.
That’s why I have such an utter hatred for AI-generated creatives. It’s another way for someone to make a buck off of other people’s hard work. Again, it’s a nuanced conversation and there are “proper” ways to use it, but once money is involved, you and I both know that will go out the window.
While true, the reality is that much technological development, especially with regards to stuff like this, is very much profit driven. This is especially true I think in more recent years where the federal government has far less of a hand in directing how research develops and for what purpose. And rarely are individual goals of development the only definer of how a technology is used…
That’s more than fair enough… But how much better a world would we be in right now if we had development that was driven not just by market considerations, but by the needs of educators and their students (all of them!)?
I think I’m arguing that we can find better ways to serve the needs of folks like yourself (or myself) or whoever, by focusing on them, rather than making us all scavenge from the needs of large corporate interests.
A friend was using this to come up with game ideas recently, and what strikes me with the ChatGPT stuff is that it’s very impressive but at the same time, the output is too obvious to be remotely useful. It throws out a lot of familiar - cliché, even - elements, obviously based on what’s most commonly found in the training sets, and then doesn’t really do anything interesting with them. No weird (and therefore potentially interesting) juxtapositions, nothing even really unexpected. Which makes sense, as this is exactly the sort of process you want in order to get a coherent chat, but not if you’re looking for it to generate ideas, where the whole point is to get something unexpected. So it ends up being exactly the sort of thing you would come up with if you were putting no effort into it and just threw out the most generic elements that came to mind.
Some of the images are nice despite being generic, and others end up being sort of accidentally interesting, in that there is a certain degree of different elements being juxtaposed - when it makes buildings, for example, it can’t distinguish tree from house, door from window (or tree), roof from hillside. So you get some interestingly organic buildings that spark my imagination far more than the text itself.
I can’t wait to see how Hasbro is going to freak out over this violation of their new OGL.
By coincidence (?), in today’s Hack-A-Day: