Oh, woe is google… if ONLY they could DO something about AI-generated disinformation on their very OWN platform!!! Sadly, they can only sigh as progress marches on!!! /s
Hey there- it’s not as if this was predictable!
This started before AI generated content was common but it’s really accelerated with Google’s embrace of AI bullshit:
Does this mean we’re going to see that cage match after all?
I’ve never been an Apple user, and I think it’s time to consider just getting a new laptop and trying Linux.
But, I know nothing about it. Is it still free? And bug free? Is there a steep learning curve? I suppose I should try to find a course somewhere.
Free - yes. Bug-free - as much as any other software, assuming you use a major distro and aren't insisting on being a guinea pig for the bleeding edge of development.
The learning curve really depends on the distro you choose. There are very crunchy ones and then there are ones where you probably wouldn’t notice much difference from Windows.
I’d say Ubuntu is a good one to start with. Try a few and see what you like. You can run most as a Live CD/USB.
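For anyone who hasn't made a live USB before, the usual recipe is: download the ISO, verify its checksum against the one on the download page, then write it to the stick with `dd`. Here's a rough sketch; for safety it writes to a plain file instead of a real device, and the "ISO" is just a stand-in file of zeroes. On real hardware you'd substitute your downloaded ISO and the actual USB device (something like `/dev/sdX`, as reported by `lsblk`), and run the write step with `sudo`. Double-check the device name first, since `dd` will happily overwrite the wrong disk.

```shell
# Stand-in for a downloaded ISO (just zeroes, for demo purposes only).
dd if=/dev/zero of=ubuntu-stand-in.iso bs=1M count=4 2>/dev/null

# 1. Verify the image. Compare this hash against the checksum
#    published on the distro's download page.
sha256sum ubuntu-stand-in.iso

# 2. Write the image to the target. On real hardware the target would be
#    the USB device itself (e.g. of=/dev/sdX, with sudo). conv=fsync
#    flushes all writes before dd exits, so the stick is safe to unplug
#    as soon as the command returns.
dd if=ubuntu-stand-in.iso of=usb-stand-in.img bs=4M conv=fsync 2>/dev/null

ls -l usb-stand-in.img
```

Once written, you boot from the stick via your machine's firmware boot menu; most major distros' live sessions let you try the full desktop without touching your hard drive.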
I’m no Linux expert, but I’ve used Ubuntu on a laptop that’s too old to run modern Windows. It was easy to use, though I haven’t done anything more demanding than typing, watching videos, and playing Freeciv.
There are plenty of free options, yes.
No software is bug-free, but some distros are better maintained than others…
Depends on the variant. I’m just using Red Hat, but as L0ki said…
is a popular option… I think someone around here once suggested Mint?
Some links that might help:
https://www.linux.org/pages/download/
Maybe check out Linuxchix, as they might have some good primers:
In AI interviews, bots manage questions (posed in writing or through prerecorded video) or ask them, and humans respond verbally. The software records what candidates say, converts their speech into text, and does something remarkable that would have seemed like science fiction not long ago: it assesses the responses, possibly determining whether to eliminate an applicant. The Resume Builder survey found that 15 percent of companies expect to use AI to “make decisions on candidates without any human input.”
Hilke Schellmann, an Emmy-winning journalism professor at New York University, emphatically argues that companies are giving AI too much control. In her new book The Algorithm: How AI Decides Who Gets Hired, Monitored, Promoted, and Fired and Why We Need to Fight Back Now, she walks the reader through the devastating consequences of current trends and explains why there’s no easy path forward.
Some insiders are aware of the various issues exposed by Schellmann. A Harvard Business School study reported “that 88 [percent] of executives know their AI tools screen out qualified candidates but continue to use them anyway because they’re cost-effective.” It would seem that far too many companies are willing to sacrifice fairness, and possibly lose out on top talent, to minimize expenses. So, what should be done? Several options can be eliminated—for instance, AI analysis of speech inputs (e.g., vocal tone and pitch) and faces. There are no viable ways to use technology or policy to fix such tools, so they ought to be prohibited outright. But since US governance makes it exceptionally difficult to create national technology bans, we’ve thus far had to settle for mediocre state initiatives, such as those in Illinois and Maryland, that require candidates to consent to have algorithms evaluate them during video interviews. These don’t go nearly far enough. As privacy theorists have long pointed out, people are wary of withholding consent when they lack bargaining power and worry about missing out on meaningful opportunities.
In other words, crap.
Hopefully this means that the bubble-pop is closer. I wonder what Nvidia is planning to do when AI/crypto demand drops off a cliff?
The point of GPUs was supposed to be graphics. Gamers and digital artists aren’t going away.