Site will tell you if your job is safe from automation

Originally published at: Site will tell you if your job is safe from automation | Boing Boing


This is really telling from that site:

Most searched
Computer Programmers

It’s amazing how AI has suddenly made white collar professionals panic about the things that labour has been panicking about for hundreds of years.

I’m guessing the responses by society will be very different now that the upper middle class is soon to be impacted. Either that or software engineers will suddenly realize that, hey, those “unions” they’ve disdained for their whole lives suddenly look like a pretty good idea.


“Does the subject have time to log in to the website and check whether they’re subject to replacement by AI?”
Shirker: fire employee - deploy AI


“Corporate Prostitute” doesn’t show up at all. I’m safe!

pant suit wow GIF


I didn’t get past the Cookies screen; I’ve obviously got something to hide. Honestly, I don’t think that the AIs will waste resources to replace me. I’ll probably just serve as an example to society.


Some people keep trying to lower the bar for software engineering: from COBOL back in the day, to flowchart programming, to the recent NoCode trend.

They all make the same mistake: they see arcane programming languages and assume that programming is difficult because mastering the language is hard. They think that removing the language barrier will help, but invariably it doesn’t, because the language is just a vector for analytical thinking. Programming, like math, is fundamentally not a language task; it’s an analytical thinking task, something current LLMs are terrible at.


No Way Wtf GIF by Harlem


It should be easy for LLMs to replace CEOs. From their chess-playing style, when in trouble, they change the game history, invent new pieces, and make an illegal move. Perfect!


Sad that chiropractors are so high on the list, since they were huge on the Covid misinformation train (at least in my neck of the woods). I saw that “coaches and scouts” was #52; I bet AI could do that. (Why yes, I do live in one of those states where our highest-paid public employee is a freaking football coach.)


Everywhere, I’m sure. The world would be a better place if chiropractic was not in it. Woo factories, they are.


I don’t disagree in principle; but for labor market purposes I suspect that will turn out to not be relevant in a lot of cases.

There are some people who get paid to program because they are both doing the necessary analysis and conversant in the necessary syntax; but there are a lot of people there because that spec isn’t (quite yet) going to turn itself into ugly LoB Java, and asking a well-formed but informal question about what’s even in the database isn’t the same thing as SQL-querying it out.

I don’t doubt that there will be some dangerous amateurs making foolish mistakes in a cloud of marketing puffery; but I also strongly suspect that we’ll see a great deal of attrition among those who were in ‘Applications’ because they debug better than the people tagged as ‘business analysts’ do.


informal question about what’s even in the database isn’t the same thing as SQL-querying it out.

Great example, because SQL syntax is very close to natural English. I think the reason some people are good at SQL-querying things is not that they have mastery of the syntax; it is that they have an understanding (sometimes entirely intuitive) of relational algebra.
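To make that concrete, here’s a hypothetical sketch (table and column names invented for illustration, using Python’s built-in sqlite3): the query below is trivial to *type*, but only easy to *write* if you recognize that “when did each customer last order?” is a grouping-and-aggregation problem, i.e. relational algebra dressed in near-English.

```python
import sqlite3

# Hypothetical toy schema, just to illustrate the point.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (customer TEXT, placed_at TEXT);
    INSERT INTO orders VALUES
        ('alice', '2023-01-05'),
        ('alice', '2023-03-01'),
        ('bob',   '2023-02-10');
""")

# The English question "when did each customer last order?" maps to
# GROUP BY + MAX -- the hard part is seeing that mapping, not the syntax.
rows = conn.execute(
    "SELECT customer, MAX(placed_at) FROM orders"
    " GROUP BY customer ORDER BY customer"
).fetchall()
print(rows)  # [('alice', '2023-03-01'), ('bob', '2023-02-10')]
```

The syntax reads almost like English; the relational thinking (which rows form a group, what an aggregate means over that group) is the actual skill.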

I’ve seen people try to use ChatGPT as a learning tool in this area. It’s a great replacement for a documentation search engine, but (so far) it is hopeless at solving real-world problems: it will hallucinate functions or APIs that do not exist, it will emit code that looks plausible at a glance but is completely nonsensical, and it can’t really do proper architecture.

There’s a reason why Stack Overflow banned it. If the human prompting it doesn’t know exactly what they are doing, the result often takes more time and energy to debug than if it had been written without assistance (because it looks so deceptively realistic that you will miss obvious problems).

It’s particularly infuriating when someone comes to you with “Can you help me fix this thing? It doesn’t work and I cannot find the bug,” and it turns out the thing was written by ChatGPT, they have no understanding of it whatsoever, and it has absolutely no chance of working without a complete redesign.


True. But that is also true for the rest of the output of LLMs. I am under the impression that the whole system is optimised to reproduce a statistically likely response to some input in “natural” language (i.e., not logically formalised, but based on our quite fuzzy and ongoing linguistic evolution), and trained on corpora of the very same stuff.
I am not sure if this is a correct assessment, but my fuzzy brain gives this as a likely response to the fuzzy perceived input I got.

Now, if I were trained on a completely formalised logical corpus, and the input were in the same formal style, I think I still couldn’t produce a formally correct description of what those models do. But I am under the impression that this could probably be a way to make those models actually produce working code?


You just inspired me to go ask ChatGPT to write a validation function for name fields. If you feel a shudder and hear sirens, you’ll know I’ve crashed the AIwebs.
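Part of why that prompt is a trap: name-field validation is a classic “falsehoods programmers believe” minefield. A hypothetical naive validator of the sort an LLM might cheerfully produce (regex and function name invented for illustration) rejects plenty of real people’s names:

```python
import re

# Naive rule: capitalized ASCII words separated by single spaces.
# Every assumption baked in here excludes real names -- that's the trap.
NAIVE_NAME = re.compile(r"^[A-Z][a-z]+(?: [A-Z][a-z]+)*$")

def naive_is_valid_name(name: str) -> bool:
    return bool(NAIVE_NAME.fullmatch(name))

print(naive_is_valid_name("Jane Doe"))   # True
print(naive_is_valid_name("O'Brien"))    # False: apostrophes rejected
print(naive_is_valid_name("björk"))      # False: lowercase, non-ASCII
print(naive_is_valid_name("李小龙"))      # False: non-Latin script
```

The safer real-world answer is usually “accept almost anything non-empty,” which is exactly the kind of judgment a code generator won’t volunteer.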


What GenAI is likely to do at some point (it’s not there yet) is to make computer programmers a lot more productive. The value a programmer is adding, as you point out, is in the analytical thinking, not the typing of code in the form of a programming language. However, a lot of the programmer’s time is spent typing out code, and a lot of it is often fairly boilerplate. If an AI tool can relieve the programmer of that burden, then more time can be spent solving analytical problems, and the time between identifying a solution and implementing it can be reduced.

So, you’re right that AI is unlikely to completely replace any individual programming jobs (i.e., you’re not going to have stand-alone AI programmers that interface directly with business stakeholders). However, you very well may see the productivity increase reduce the total need for programmers.


This! The immediate peril of LLMs is that they are extremely good at avoiding the tiny mistakes that provide a convenient mental shortcut for discounting an answer. They produce a very clean, completely wrong answer that takes precious time and thought to recognize is batshit.

On the other hand, I asked ChatGPT to write a letter of resignation to Batman for the Joker and that was entertaining! Now if I can just get Mark Hamill to read it.


They said that my job was in great danger of being automated: I’m a baker and a cooking teacher. The parts of baking that can be automated already have been, namely large-scale production, where the goal is thousands of units per day or hour. Small-scale stuff will continue to be made by hand because the investment in the machinery is too high, if you can even find machinery that will make the numbers you need. Some tasks have machine help, like icing individual cakes one at a time, or the water-jet cutting machine for sheet cakes and brownies and so on. There are batter depositors for cookies, glazing machines, dough cutters and rounders to make rolls, but they all need an experienced operator to perform properly. So the site doesn’t seem to see the difference between making Wonder Bread and working with Amaury Guichon.


Kidding aside, this is where LLMs are going to show up first in software engineering, and will be a useful tool. Boilerplate code for input validation, writing unit tests, scripts for database jobs, that sort of thing. The tedium and repetitive work that doesn’t require a six-figure software engineer to handle (and in fact that engineer is likely to make a mistake somewhere). Leave the engineers to do the architectural stuff, API design, etc.

After that, who knows what might happen.


I do seriously wonder whether we might be able to get some really good results with an LLM writing in a language with formal proofs that a proof checker could then validate (rinse, repeat). The human in the loop wouldn’t have to deal with a gnarly, verbose formal language and the proof checker could keep the LLM off the hookah long enough to get the login page written.
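For the curious, here’s roughly what that machine-checkable target looks like, as a toy sketch in Lean 4 (the theorem name is invented; `Nat.add_comm` is a standard library lemma). The point is that if the checker accepts the proof term, the claim holds, no matter who or what wrote it:

```lean
-- A toy machine-checkable claim: acceptance by the checker is the guarantee.
theorem add_flip (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b
```

In the proposed loop, the LLM emits candidate proof terms and the checker rejects hallucinations until one actually type-checks.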

The comments for lawyers are way off base. They mention critical thinking skills and analysis.

In reality, for litigation, criminal, and family law, it’s the opposite: you’re dealing with people who may be dumb as a bag of hammers, but they are the client, opposing counsel, judge, or jury, and you have to navigate around that to get things done.

Artificial intelligence can’t get around natural born stupidity.