I should do more testing, but in my experience, it's much worse even than that. I found that, when prompted for code to do a specific thing, the result would often consist of boilerplate code with nothing in the place of the asked-for capability.
I mean actually nothing, not even code that was wrong. Sometimes there'd be a comment like // Do the prompt thing here.
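To illustrate the pattern (this is a reconstruction from memory, not actual model output, and the names are invented): ask for, say, a function that deduplicates records by fuzzy matching, and you get back something like

    // Plausible-looking scaffolding, with the asked-for capability
    // reduced to a placeholder comment.
    function dedupeRecords(records: Record<string, string>[]): Record<string, string>[] {
      const deduped: Record<string, string>[] = [];
      for (const record of records) {
        // Do the fuzzy matching thing here
        deduped.push(record);
      }
      return deduped;
    }

All the ceremony of a working function, none of the substance.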
From what I've heard, GitHub's Copilot might be more useful, though what I've actually seen is essentially a high-quality autocomplete. Helpful, but not earthshaking.
As I say, though, I should do some more actual testing. YMMV.
A fancy autocomplete could be useful, but not when it sends your drafts to Microsoft.
That's going to be a problem for a lot of places with security responsibilities.
"Quick! Slap some shitty shiny AI on our product!"
Powered by buzzwords!
Today, Red Hat announced a developer preview of Red Hat Enterprise Linux AI (RHEL AI), a foundation model platform to seamlessly develop, test and run best-of-breed, open source Granite generative AI models to power enterprise applications.
It doesn't matter if it's shit. We'll polish it!
Who stole whose property? The content creators might want a word here.
Companies once again trying to own stuff that they can't own if they want to keep Section 230 safe-harbor protection. (Not that I'm cheering for the AI scrapers.)
OpenAI Co-Founder Wojciech Zaremba: "We have a Slack channel dedicated to the welfare of Artificial Intelligence."
MS Word sure is a strange beast. The company pretty much perfected the word processor by the early 1990s, and the vast majority of changes they've crammed in since have ranged from the unnecessary to the outright infuriating (did anyone like Clippy?).
But the company can't seem to leave well enough alone, so now we've got bots sharing our private business documents, correspondence and horny fanfic with Satya Nadella…
No, Big Tech has distracted the world from the existential risk of Valley Bro con-artist billionaires.
(Turing was very gay and had a sex-segregated education and then work life: he basically didn't know women, and his alienation is palpable. But today's techbros don't have any such excuse, and the emphasis on femme-coded AI is … telling.)
"Only half his students had empathy" is another way to look at that.
Hmm, maybe a few of them who didn't gasp did have empathy?
I'd guess that some of them who would be empathetic in other situations just declined the invitation to feel empathy for a pencil. I'd hope I would have (and I do have empathy).
Would you also hope and expect to have empathy for an "AI" algorithm like a Google chatbot? That question, and the societal implications of it, was probably the point of the demonstration. Having empathy for something that seems (very superficially) lifelike is absolutely part of human nature for well-balanced individuals, but having such empathy for commercial products that absolutely are not worthy of your or anyone else's empathy may get us into trouble.
My apologies if that came off as mansplaining. I was just fixated on your statement about how you "hoped" that you would have felt empathy for the pencil even though you know that it didn't need your empathy, and I wanted to explore that reaction. I would have no doubt that you are an empathetic person even if you didn't care about the pencil.