Humans, Octopi, Bees, Gods, and Artificial Intelligence

February 16, 2026—We like to think of ourselves as the apex of biological intelligence—at least here on Earth. That’s certainly true if we measure it on a scale that is defined by what humans are capable of. Octopi can’t write sonnets, so they lose on the human-scale IQ competition.

But octopi can communicate with others of their kind by changing color. Humans can blush when embarrassed, but that’s pretty much the extent of it. So humans lose on the octopus-scale IQ competition.

Bees have been communicating maps of pollen sources to their sister bees for eons. Humans only began creating the global positioning system (GPS) in 1978 and it didn’t become fully operational until 1995—and you probably didn’t have access to it until sometime in the aughts. So humans lose on the bee-scale IQ competition—though we are catching up.

What’s the point?

I’ve been reading a lot about artificial intelligence (AI) both for my job and out of curiosity. The latest is The AI Con: How to Fight Big Tech’s Hype and Create the Future We Want by Emily M. Bender and Alex Hanna.

I’m 28% of the way through it, if you believe Kindle. That figure is probably off, though I’m not going to calculate the true one. My guess is that it’s off because the book has endnotes and lots of other material at the end that most people would not count in the computation. Kindle does.

Whatever the true number is, so far, the book is pretty skeptical about the capabilities of so-called “artificial intelligence,” and even more discouraging about the Silicon Valley types who are either promoting AI as a utopia or warning about the end of humankind—all in the service of making a buck—or trillions of bucks.

The question that has intrigued me, since I became aware of so-called artificial intelligence, isn’t really about the artificial kind. My question is: before we get to identifying artificial intelligence, can we please have some kind of definition of intelligence first?

Bender and Hanna mention this question, but so far haven’t gotten to a definition—other than to say that AI isn’t it. I’ll let you know if they come up with something good.

* * *

But I wanted to write my ideas about intelligence before I read what, if anything, Bender and Hanna had to say.

You might guess, from how I started this post, that I see intelligence as a tool. Octopi have octopus intelligence that is honed to solve octopus problems. Bees have bee intelligence to solve bee problems.

My dog has dog intelligence to solve dog problems. I have no idea what problem my dog is solving when she sniffs a tree, but it’s clearly important to her. And I have no way of solving her problem for her.

Until recently, academia didn’t even like calling octopus or bee or dog behaviors intelligence. They called that anthropomorphism. We project the idea of human intelligence on other animals—supposedly without justification.

But we do this anyway. And, by the way, we project the idea of human intelligence on nature as a whole and call it “god.” This is not to say that “god” doesn’t exist any more than the anthropomorphism of dog intelligence implies that dogs don’t exist. (Of course, I can see my dog. But that’s a different type of proof.)

So, if we anthropomorphize animals and gods, it stands to reason that we might anthropomorphize computers and call them artificially intelligent.

The thing is, there is a clear relationship between octopus intelligence and octopus needs and survival. There is a clear relationship between bee intelligence and bee needs and survival. And there is a clear relationship between human intelligence and human needs and survival (even if we sometimes act like there isn’t).

No such relationship exists between artificial intelligence and computer needs and survival. Computers have no such needs. They are human tools. Computer intelligence serves human needs and survival.

Or not.

The Bender and Hanna book, in its first 28%, seems to suggest that the capabilities of so-called AI systems are greatly exaggerated.

I have no reason to doubt what they say. They have pretty good evidence. And I’ve worked with AI myself in my day job and find it fairly incompetent. Polished, but incompetent.

But make no mistake, the incompetence of today’s AI apps is still a step ahead of what we had before. That’s why it’s so easy to run what Bender and Hanna call the “AI Con.” It looks good until you go below the surface.

* * *

What do you think? Scroll down to comment

Like what you read? Share with your friends.

If you are new to EightOh9, check out the site and let me know what you think.
