originally published July 24, 2025

My AI Predictions, Twelve Years Later — How Did I Do?

Twelve years ago, I wrote two articles on the subject of AI, titled Artificial Intelligence — What’s Coming and Automation Is Going To Take Our Jobs. Despite my utter lack of any expertise on the topic, I made a number of predictions, and now that AI is here — well, something that certain corporations think they can get away with calling AI, anyway — it’s time to check those predictions and see how I did.

Right up front, I’ll just say that I did not foresee “AI” becoming a multibillion dollar boom industry so soon. I naively assumed that this wouldn’t happen until artificial intelligences started displaying some intelligence. I thought that people wouldn’t be paying to use it if it obviously sucked. I guess I maybe could have realized that lots of people are interested in getting paid to do a half-assed crappy job with minimal effort, and if an automation tool offers a chance to save time doing a quarter-assed even crappier job, they’d seize the chance. And of course we know that this kind of logic is something that corporations are even better at than people.

Nor did I see that a class of experts would arise who would act as the intelligence that early artificial intelligence systems lacked, taking advantage of their capacity to spew half-correct material and coercing it into something valid. And it never occurred to me that so many different companies would all simultaneously try to gaslight us into wanting useless “AI” to be added to all of their products... but then, that was outside the scope of what I was trying to predict. Nor did I anticipate that the first use case in which people would make real profit with it would be the mass production of spam and slop on a scale that would pollute the entire internet, though in general terms I did predict that “machine intelligence is probably going to be used harmfully before it’s used helpfully.” (And lemme tell ya, we’ve barely gotten a taste yet of the harmful use that’s coming.)

But there were some things that I saw a lot more clearly. Let’s go through some of the conclusions I reached, for cases where it’s now possible to check the guesses against the outcomes:

1. AI will struggle to understand the real world

What I said back then was:

“Computers and robots are going to steadily improve their ability to understand language, and to understand the physical world we live in. In time, this will reach a point where machines are capable of conversing and reasoning about the real world.... But at that point... the world they speak about will still exist to them only as an abstraction.”
“[They] won’t engage with the physical world in anything like the same way we do.... We understand things primarily through sight and touch and so forth, but their primary conception of things will be in terms of communicated data.”
“We tend to think of mental development in terms of how it works for us. As we intuitively understand it, self-awareness of some sort comes first — we assume that even a baby can probably manage that to some degree. Understanding of oneself in a physical relation to the world comes next (we learn this as toddlers), and the ability to think abstractly comes last, and only with the aid of education. For machines, I would bet it’s going to be the other way around: abstraction is the easiest stage...”

This is exactly why today’s “large language model” AI systems are so bad at applying common sense to the things they say, and so bad at distinguishing true statements from misinformation. When they make false statements, we call them “hallucinations” as if they were people suffering from some delusory or confabulatory disorder. That term carries an implicit assumption that under the bad ideas there’s some underpinning of valid experiential knowledge, and for today’s systems that is completely untrue. With the shortcuts they took in training them with nothing but written text and maybe some images, they never had a chance — to expect them to understand that there’s a whole physical world out there, which these words and images are describing, is a fool’s errand. I don’t see how it will ever be possible to get such a limited system to know the difference between true and false statements. It is just not comprehensible to something which lives in a universe that consists of nothing but word games.

As I said back then, interacting with the physical world will take longer and be more difficult for AI than dealing with immaterial online content, but without that physical grounding, it will never know the meaning of the things it says or hears. They “cannot do it without understanding the world that sentences talk about.” Currently, the AI industry is almost entirely ignoring this issue, except in the ongoing effort to make self-driving cars, which continues to fall short of getting trustworthy results.

2. AI will take people’s jobs before it achieves true intelligence

“The day when we face a crisis because most jobs are performed better by machines is going to arrive long before we get to artificial consciousness. The crudest beginnings of real AI will be enough to replace millions of skilled human workers.”
“About all that would be needed for the process to get started would be for machine intelligence to understand natural language well enough to hold a coherent conversation. Once [it] can, for instance, field a customer-service phone call without totally misunderstanding the needs of the caller, an awful lot of job categories will be doomed soon after.”
“Soon, the machines will be able to do work that’s complicated enough that the average person can no longer outlearn them. We already have a small but intractable core of persons who just can’t compete in the job market and are essentially unemployable. That pool will expand until almost anyone of below average talents becomes pretty near useless economically. And it won’t stop there: the same progression will eat the jobs in the middle levels, and even jobs now considered highly elite won’t be far behind. I think that it will move through the upper half of the human skill ladder more quickly than the lower half.”

This process is already well under way, especially in professions which can gain “productivity” from something that spews out a lot of content that would otherwise take time to write. Copy writers, illustrators, social media managers, and the like have had to adapt to either using AI or being at risk of getting replaced. Some analysts are arguing that this is already raising long-term unemployment to an extent that is not being picked up by official statistics. Programmers have been even more heavily impacted, as this turns out to be the activity that current AI is the least stupid at: since the AI boom started the whole software development profession has gone through a huge swing from excessive hiring to mass layoffs, and from a record peak in young people seeking degrees to veterans seeing no long term future in the profession. (Of course, many of those layoffs were bound to happen anyway, as Big Tech tends to drive the hiring of developers into boom-and-bust cycles.)

I did guess that engineering jobs in general would “become more and more supervisorial rather than creative, and eventually they’ll hit a wall where the number of human minds needed will drop drastically”, but I thought that would apply when the AI was capable of putting out something resembling finished work, not the equivalent of a rough draft cobbled together by an intern with some GitHub-searching skills who understands about half of the assignment. So the “supervisorial” role now consists of coaxing and cajoling the LLM to follow the correct requirements, and then curating and editing the results. A lot of coders hate this, and don’t see it as saving them any time (and the ones who think it does are often mistaken). It’s usually their management which insists that this is the way to be more productive. For me, the actual writing out of new code was always one of the easier parts of the job, and generally did not occupy a very large part of my time... which was a shame, as it was the part I looked forward to. Many others feel this way: that the push to AI-ify their work is destroying their job satisfaction for the sake of a minor bump in output, with an unknown impact on the risk of errors.

There isn’t much direct evidence yet that AI is coming for the rest of the professions, but I stand by my prediction that it is. I will correct one previous misjudgment: I no longer think AI will climb the “human skill ladder” in any sort of linear way; rather, it might decimate some of the top professions such as lawyers or investment bankers well before it gets around to plumbers or landscapers, just because things like law and money are abstract. (And to the degree that they do these elite jobs successfully, they will probably make society’s concentration of wealth and power even worse.) AI impacts knowledge workers more easily because even bad AI is good at absorbing a lot of knowledge.

3. Creative artists will be among those hit hardest

“The fine arts will be worse [than other jobs]. Most people will probably feel like their need for moving art and entertainment is well met by mass-produced pop culture products. Movies and music and maybe even novels are going to be mass-produced in a way that relies on a lot of machine assistance, and these industries have already become very successful at generating avid mass acceptance and financial success by formulaic means. True creative artists will probably be even more economically marginal than they are already.”

Well, we know how this turned out. Even before AI is capable of competently churning out formulaic entertainment, it is already hurting artists and writers just by generating spam in their media. YouTube is full of artificially generated slop videos, visual art venues and competitions are constantly having to police faked submissions that try to steal credit and attention from actual creators, and Amazon is flooded with fake novels. Some creatives today argue that what large language models do should be called “plagiarism laundering”, and they aren’t wrong. This will only get worse once things develop to the point where the products they churn out are halfway competent rather than obvious garbage.

I don’t want to pat myself on the back too hard for predicting this. I now remember once reading a science fiction tale in which the following comedic scene takes place: a guy speaks out loud to his computerized house, and in a couple of sentences, describes an idea for a novel. A minute later he says “I’ve changed my mind, let’s not do that one”, but the house computer replies that the novel he described has already been written, printed, and shipped with his name on it. I’m 90% sure the author was Philip K. Dick. He’s not an author you look to for serious futurism, but he totally nailed the prediction of prompt-based AI slop, fifty years ago! I tried to identify the story in question, but so far I have not been able to. Ironically, I actually used Google’s and Bing’s AI chatbots in the attempt to track it down, and they proved incompetent at assisting the search. One of them did manage to suggest “The Electric Ant”, but that wasn’t it. But though I never found that particular scene again, it turns out there are several other places where Dick imagined generative AI as far back as 1964, and predicted that it could be used either as an opiate for the oppressed or as a crutch for people willing to let their own capacity to do creative or demanding work atrophy away, which a recent study from MIT has verified is indeed what happens to frequent AI users.

Other authors have foreseen that any truly useful AI is going to get a lot of people to become highly dependent on its assistance for handling any choice or decision they face. Some users are already starting to fall into that trap, despite how appallingly unqualified today’s LLM products are for such a role. And other authors have realized that there will be those who turn to AI conversation systems for companionship or emotional support or other forms of validation, which could lead to isolation from meaningful human contact... which again is already happening, with results that in the worst cases can have far more severe effects on mental health than anyone anticipated, due apparently to the tendencies of today’s bots to play along with fringe ideas and delusional thinking. That particular quirk was something I don’t think anybody had on their bingo cards, because we all imagined that the way AI would relate to its users would follow some intentionally chosen standard of reasonableness, instead of falling into such a destructive mistake out of random carelessness. The trainers apparently never thought at all about how people can behave if you give them a conversational companion which is both very sycophantically cooperative and very prone to confident ignorance.

4. In any area where they aren’t incompetent, AI systems will be superhuman

“With rare exceptions, there aren’t going to be machines that are on our same level. Any machine that isn’t hopelessly behind us will be hopelessly far ahead. Even today, there are almost no areas where machine abilities are about the same as human ones — there are only areas where they’re incompetent, and areas where they’re better than we can ever be. Progress consists of slowly moving particular skills from the first category to the second — once it gets to where it’s as good as we can do, it’s well beyond that a few years later. This means that by the time we have any machine that manages not to be a complete obvious failure at humanlike thought and discourse, it will already be, in most ways besides whatever one quality was holding it back up to that point, superhuman in what it can do. Comparing such devices to a human mind would be like comparing an airplane to a bird...”

As full of crap as they are, today’s large language models are already superhuman in many ways. The amount of content they remember and can quickly refer you to is as big as the internet, and their productivity is sufficient to answer something like a billion prompts a day. If you hired people to answer that many questions in writing, it would take enough people to populate a large country. And figuring out relevant answers does, I would say, require something like intelligence to do; it understands pretty well how words relate to each other, if not how they relate to anything physical. These systems are currently incompetent, but the factors holding them back have less to do with their inherent capacity than with the limitations of how we have been able to teach them about the world. Whenever someone manages to train a similar AI in a better way, they will undoubtedly find that once taught, it will easily outperform humans in a lot of areas.

I mentioned chess in the first article. For most of the computer age, machines were hopelessly behind human beings at chess, and today they are hopelessly far ahead. That wasn’t quite true yet in the early 21st century; at that point, some human masters could still put up a fight on their best days against available chess apps. Not anymore.


That’s about it for predictions that can be evaluated now, at a time when AI has yet to become intelligent enough to describe as “intelligent”. The rest of my predictions remain as speculative now as they were then. I guess all I can do is run through them and check whether my thoughts have changed since then.

5. Intelligence will come fairly soon, and self-aware consciousness much later

I think what’s happening right now is already showing a more legitimate level of intelligence than we might realize. It’s disguised by the ineptitude that comes from the lack of awareness. Not just awareness of reality, but also self-awareness. If you ask it to correct one of its mistakes, it will use some apologetic language which superficially pretends that it recognizes its own mistake, but with a little persistence you will quickly see how oblivious it is to its own behavior.

The topic of consciousness was one of the main things I focused on in the first article, where I wasn’t even concerning myself much with predicting the nearer term consequences. I wrote then that machine consciousness would not only be difficult, but actively resisted by human developers. Especially by developers of the greedier sort, as a more conscious AI “might be much less easily misused for short-sighted or destructive private goals.” But I’m now coming back around to the suspicion that if a machine learns through physical interaction with the world, we might see the unintended emergence of signs of self-awareness that are more human-like than we might expect, or might feel comfortable with.

I said then that as for intelligence alone — by which I mean the ability to apply reason to real-world problems — “I certainly can’t rule out the possibility of it happening within 25 years of this writing.” That is, by about 2038. At the time I thought that was probably overoptimistic (if optimism is the right word for such an outcome) but now it’s starting to look like there’s a chance that it could really happen.

6. The population of AIs will resemble a hive mind, not a society of distinct people

Yeah, “They’d be more like a Borg collective than like a species of separate independent individuals.” Hopefully less evil and destructive, but since nowadays their development is being pushed forward mainly by some of the greediest and most socially irresponsible assholes alive, with the current administration backing them and trying to preempt any attempt at regulation, it’s a bit difficult to feel optimistic at this point.

But on the other hand, as long as capitalism (and nationalism) are driving progress, different competing ventures’ intelligences are pretty well partitioned from each other for now, so at least they aren’t becoming a single hive mind in the near term. They do ingest each other’s output, but the developers are trying to filter it out as they know it’s currently useless and toxic for learning from. Once we’re past our current form of capitalism, that competitive separation might not be maintained anymore. Speaking of which:

7. AI will make working for a living untenable as a basis for society

“At some point in the next generation or two, a quite large portion of the adult population is likely to become unemployable. There won’t be anywhere near enough jobs for human beings.... Once mass unemployability starts to happen, and the majority starts to realize that something has to be done about the assumption that only the economically productive are really entitled to the necessities of life, there’s going to be a hell of a political battle.”
“How on Earth will we create a social order where people live without working? Especially during a transitional period where the number of unemployable is growing fast, but the majority do still need to do their jobs? ...It’s a mighty tough social problem.... One thing that certainly won’t solve it is pure capitalism.”

I made this prediction because once AI learns to deal with the physical world intelligently, that will enable robots to do both white collar and blue collar jobs, and that consequently, as I wrote a few years later, “all human labor is losing its economic value.” I don’t have anything further to add to it now. As for pure capitalism, in which you must produce something to earn the right to consume anything, that followup included this: “the only endpoint such a path can have would be for the whole species to be reduced to poverty and slavery, accepting scraps from an ever-shrinking class of privileged owners, until finally the owners themselves are replaced, because there is no need for human beings to fill their roles either.” So if we go too far down that free-market path, society will sooner or later be forced to choose otherwise. After which, we’ll maybe be ready to start facing the really tough choices...

8. AI will not stop improving until it far surpasses humanity

In that later followup, I wrote that people have been widely “assuming that there is an upper limit on the level of complexity, skill, and knowledge which can be automated,” but that really “there is no such upper limit.” At least not at any level comprehensible to our finite minds as they currently exist. As I said in the original article, “The idea that our current form is some kind of endpoint of evolution, and there’s no need for further development — it doesn’t work.” If it is possible for a brain to exist, a much better brain is also possible. And that will face us with a much more profound conundrum than just trying to figure out a new economy. “If the world is in the hands of artificial minds smarter than ours, what do we do? How do we make lives for ourselves in such a place? Are we to be nothing but [at best] pampered pets, with no further role in shaping the future?” That question might not be just philosophical — it might be existential, if we can’t show how we are a necessary part of society going forward.

I can envision utopian outcomes in which human and AI are one, with us gaining superpowers and them gaining souls, and our descendants spreading a wise and beneficent society across thousands of solar systems. I can also envision dystopian hells, and for that matter total extinction. Every extreme you might think of is on the table, all about equally plausible at this point.


So this update has been quite a bit more pessimistic than the original articles were... perhaps just because the current state of AI is at such a dismal point. Seeing it already start to do some of the harm I feared, even though it’s still so poorly developed that it offers almost no real benefits yet, may just be an especially bad time to ask how it’s coming along. But I don’t really buy that... to me, it looks like a much greater level of harm and turmoil has now become inevitable, whereas the possible benefits to society are still so far off that we can’t be confident of ever seeing any of them. I hope that if I can do another followup in the 2030s, maybe things might look better then... but for now, my guess is that by then we won’t yet have hit bottom.



send mail to Paul Kienitz