Debunking AI Myths: A Review of ‘The AI Con’

February 23, 2026—Last Monday, I wrote about the idea of intelligence and mentioned that I was in the midst of reading The AI Con: How to Fight Big Tech’s Hype and Create the Future We Want by Emily M. Bender and Alex Hanna.

I noted that my Kindle said that I was 28% through the book at the time. I expressed skepticism about that number because of the large quantity of text following the end of the book, which Kindle apparently counts when computing your progress through a book. Now that I have finished the book, it says I’ve completed 60% of it. Sigh.

Now that I am (“60%”) finished with the book, I’m going to give my reaction to it. I don’t know if you’d call this a book review. But, heck. We all accept meandering text coming from ChatGPT and its ilk as something “real,” so I guess it wouldn’t hurt to call this a book review.

* * *

“The AI Con” is a mostly enjoyable review of the state of so-called “artificial intelligence” as of sometime last year when it was published. Based on my experience, nothing much has changed in that time (except more of the same), so the book’s observations about the shortcomings of “artificial intelligence” and the longcomings of the hype remain valid—and will likely remain valid for a long time—because Bender and Hanna’s analysis is grounded in human nature, not tech.

Human nature is often disappointing, but it doesn’t disappoint in its stability and predictability. We’ve seen it before. But we often don’t remember that we’ve seen it before. (Gotcha again!)

So, let’s start with the tech (as Bender and Hanna do). “Artificial intelligence,” they say, is not one thing. There are bots that play chess at the highest level. There are bots that predict the weather. You may have noticed that the weather bots are not as good at what they do as the chess bots.

There are bots that comb through data looking for patterns. They may be looking for red flags of insurance fraud. (I sometimes write about that in my day job.) They may be looking for chemical or biological patterns that could contribute (hopefully) to cures for disease. They could be surveilling social media looking for enemies of the state.

Lumping these separate processes together as “AI” magnifies the supposed superiority of machines. But there is no Swiss Army Knife of automated cognition. These tools are all separate.

Bender and Hanna spend a fair amount of time talking about one of these tools that has inspired a lot of excitement and dread, the large language models (LLMs) like OpenAI’s ChatGPT, Microsoft’s Copilot, Google’s Gemini, Meta AI, and so on. My company wants me to use Copilot, so that’s what I do—though I’ve encountered all of these because they have been incorporated in a variety of online products and are, therefore, hard to avoid.

My experience with Copilot is that it is glib. Its prose is smooth, but it often doesn’t “know” what it’s talking about.

We humans have a tendency to attribute intelligence to traits that seem human to us (see last week’s post, noted above).

So, I just said that Copilot “often” doesn’t know what it’s talking about. According to Bender and Hanna, Copilot “never” knows what it’s talking about because it’s not capable of knowing. I believe them because the behavior of Copilot that I’ve seen is completely consistent with that viewpoint—and inconsistent with the opposite viewpoint.

Never knowing doesn’t mean that Copilot can’t sometimes be useful. For me, it often dredges up some unanticipated and interesting ideas, like a super search engine. But, so far, it doesn’t save time for me because it hands me valid and invalid ideas as if they are all valid. I still have to sift through them and throw the bad ones out.

Furthermore, as a writer, while I recognize that Copilot writes competently, at maybe a high-school or even college level, its prose is also pretty boring. I rewrite everything Copilot gives me. I value a writer’s voice, including my own. The composite voice drawn from LLMs is, well, better described as compost.

(I do understand why people who have difficulty writing are wowed by this. Even though, in my heart of hearts, I believe they probably could write better than they think they can, these people find the process painful and are grateful that the AI gods have given them a way to avoid writing—even if it sometimes screws up.)

“The AI Con” goes way beyond what I could personally do in critiquing the output of these LLMs. I’m limited to my areas of professional expertise, but the authors have researched a variety of fields—academia at the student level, academia at the professor level, scientific research, attempts to automate social services, and so on.

They all fall short in the ways that I’ve experienced in my own field. And I should say that my primary care doctor, who is also an academic, has told me the same for so-called AI in the medical field. You sometimes get good or even surprising results, but it’s mixed with garbage.

* * *

According to Bender and Hanna, you don’t actually see the worst of the garbage that could be spewed from these LLMs (except possibly from Elon Musk’s Grok, which seems specially designed to spew garbage).

LLMs are “trained” on humongous quantities of text and images taken from the internet. This data is statistically analyzed so that when you ask a question (or, as they say, input a prompt), the model delivers an answer based on that statistical analysis, “predicting” the answer one word at a time.

The LLM takes one step at a time, without knowing where it is going. The result is the glib prose I’ve already talked about, prose that makes us feel like we’ve gotten an answer.
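That one-step-at-a-time process can be illustrated with a toy sketch. To be clear, this is my illustration, not anything from the book: real LLMs use neural networks over subword tokens, not raw word counts, but the underlying idea—predict the next word from statistics of the training text, with no notion of where the sentence is going—can be shown with a simple bigram counter:

```python
from collections import Counter, defaultdict

# Toy illustration only: real LLMs use neural networks over subword
# tokens, not bigram counts, but the core idea is the same -- predict
# the next word purely from statistics of the training text.
corpus = (
    "the model predicts the next word the model never knows "
    "what the next word means"
).split()

# Count how often each word follows each other word.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    """Return the statistically most likely next word, or None."""
    counts = follows.get(word)
    if not counts:
        return None
    return counts.most_common(1)[0][0]

def generate(start, length=6):
    """Take one step at a time, with no idea where it is going."""
    words = [start]
    while len(words) < length and (nxt := predict_next(words[-1])):
        words.append(nxt)
    return " ".join(words)

print(generate("the"))
```

The generator produces locally plausible word sequences while “knowing” nothing about meaning, which is the point Bender and Hanna make about LLMs at vastly greater scale.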

In my field, the data relevant to the laws I am writing about includes the laws themselves, but presumably also includes proposals for laws that didn’t make it or amendments that were dropped. It also includes a wide variety of debate leading up to the law that made it—or the law that didn’t make it, as well as analysis of the good or bad effects of the law. The LLM apparently lacks a way to separate the good from the bad, so I get a composite. It’s up to me to sort it out.

In other areas, the bad content is really bad. The folks who want you to believe that LLMs are fabulous (i.e., the venture capitalists who have invested in them) don’t want you to see the really bad stuff because it is pornography or violent or extremist propaganda. That’s what happens when the data is the whole internet. The internet is flowing with slime.

So instead of relying on you to sort out the bad stuff, they hire very low-wage people around the world to presort their “intelligent” content. Needless to say, not only are these people paid poorly, they are constantly exposed to traumatizing content.

So, it turns out that “artificial intelligence” isn’t so artificial after all.

These “sorters” aren’t the only ones paying the price for the adoption of so-called artificial intelligence. Scads of employers listen to the hype and see AI as a replacement for their well-paid workers. Glib demos impress managers who don’t dig in to find the pretty-but-wrong answers to their problems. They immediately sign up for fear of missing out. And the folks who know how to find the right answers are on the street—or hired back to clean up the mess. Mess cleaning, of course, is less well paid.

* * *

All this artificial intelligencing uses a lot of electricity—so much that the global electric grids are taxed and new capacity is required. And so Silicon Valley giants are getting into the power generation business—meaning that trends toward sustainability are out the Windows. Furthermore, new electrical generation capacity uses a lot of water.

These are real threats: job losses or job downgrading and degradation of our environment, not to mention theft of copyrighted content. But the promoters of this technology don’t talk about real threats. Instead they debate whether AI will soon become superintelligence that will brush humanity off into the galactic trash pile. They call this the Singularity.

This is a science-fiction worry. Bender and Hanna call people in this category “Doomers.” But it should be classed as back-handed hype. Politicians fall for it; the authors notably name Sen. Chuck Schumer.

And the purpose is to have the political class fretting over nonexistent far-future threats and ignoring the real present day threats of job loss, theft of content that occurs when the LLM “scrapes” copyrighted material into its maw, and accelerated global warming resulting from the hyper power requirements of these systems.

Systems that are not performing as advertised—and not likely to perform as advertised—except for the rich who are able to invest in and reap profits from the technology.

For the rest of us, poor performance has become the norm. Bender and Hanna call this “enshittification,” a term they borrowed from the Canadian writer Cory Doctorow, who coined the term to describe the progressive degradation of online products and services to maximize profits for investors.

* * *

Before I finish, I want to get back to where I started in last Monday’s post, considering what intelligence is.

In that post, I discussed a broad variety of things that we humans have considered intelligent: humans (of course), octopi, bees, gods, and now artificial intelligence.

What makes these diverse things intelligent?

I’m not inclined to credit intelligence to some sort of Platonic music-of-the-spheres. There is no “general intelligence,” so AI promoters are full of shit when they talk about the next stage in AI as “artificial general intelligence” (AGI).

In my view, intelligence is an ability to solve a problem. I might add the word “flexibly” to this definition. An intelligent system can evaluate different inputs and flexibly solve a problem that the solver has.

If the solver is an octopus, its problem is to avoid danger, and octopi have evolved brains that let one octopus evaluate a seascape environment and, if danger is detected, communicate the warning to other octopi. (This is just one example of the type of problems octopi solve. I suspect, however, they are not very good at solving problems on dry land.)

If the solver is a worker bee, its problem is to find nectar sources, return nectar to the hive, and communicate the location to other worker bees so they can find the same nectar sources and return enough nectar so the hive can survive through the winter.

Snooty biologists have declined to call this intelligence, preferring to call it instinct. But this is where my word “flexibly” comes in. There is no way that the location of nectar sources can be coded into bee DNA. What is encoded is a process for finding nectar in a complicated environment and communicating the location to other bees. This kind of flexibility deserves to be called intelligence. Bees are able to solve the problem of survival through the winter.

Humans have the flight-from-danger problem that is similar, but not identical to the octopus problem (we do it on dry land mostly, for starters). Humans have the survival-through-winter problem (or through drought, or other types of famine). And it’s solved by storing food—not unlike the bee solution (but complicated by the diverse types of food we eat—and our range of habitation). We solve these problems with intelligence.

Humans also have problems that arise from living in social groups. Bees have this to some extent and have solved it through biological differences between queens, drones, and workers. Octopi are much more solitary, as far as we know.

But human social relations have become so complex that we have developed language. I could stop there. Language is a great advantage but is extremely complex. We also have agriculture and manufacturing, religion and politics. All pose problems that require—but don’t always get—intelligent solutions.

The point is that intelligence emerges from problems that an organism (or other system) has to flexibly solve. No problems, no intelligence.

Computers don’t have problems to solve. They are tools. Whatever “intelligence” they have is solving human problems. They are an extension of human intelligence, not artificial, not super.

And there is no superintelligence—only intelligence that solves problems.

That is my opinion.

And based on that opinion, I would urge you to read the Bender and Hanna book. It’s important to understand how we can be sucked into a sci-fi fantasy of supercomputers replacing humanity. But it’s not going to happen.

What is going to happen is that new computing capacity will enable elites to steal intellectual property, to shed jobs, and to degrade the cyber environment, all to add to their wealth.

This greed doesn’t mean that computing tools lack value or capability. It also doesn’t mean that their capacity won’t grow. But it means that decisions about who owns what and who benefits and who loses need to be made without false illusions—and with deep concern about how the choices affect all humanity, not just Silicon Valley CEOs.

Decisions without false illusions? Decisions that consider anybody but the rich?

Is that possible?

* * *

What do you think? Scroll down to comment.

Like what you read? Share with your friends.

If you are new to EightOh9, check out the site and let me know what you think.

Leave a comment