The Future Of Autosapiens, AI That Learns And Evolves

Steve Rosenbaum
Jan 9, 2024

I’ve been talking to robots a lot lately. I suspect you have, too. It starts, of course, in my living room, where I have an ongoing conversation with Alexa: lights, radio stations, weather, news. Over the last four to six months, the conversation has become somewhat contentious. You may be having the same experience. As it turns out, Alexa is getting stupider. At least, that’s been my experience and what others on the Internet are reporting.

But she’s hardly alone. In the industry’s haste to replace humans with robots, a whole series of audio-only interactions have become contentious. Your healthcare provider, UPS, your airline, or any government agency or bank — each of them has put a voice-recognition robot between you and an answer, or an automated AI-driven chatbot, often with a shockingly limited number of decision-tree options.

They are relentless, dense, and often simply stupid. They ask you to read them 15-digit account codes, repeat your birthday numerous times, and remind you that they value you as a customer even as they put you in 20-to-40-minute-long queues to speak to the few remaining humans. In the name of efficiency, the race to turn customers into plug-and-play users is stressful at best, and dangerous at worst.

Even though I know the humans that I finally do speak to are hardly responsible for these decisions, it’s hard not to be short-tempered when you’re asked the same question over and over, with little chance of getting a coherent answer. Some of it is just systems not talking to each other. The doctor’s office doesn’t speak to the insurance company. The insurance company doesn’t speak to the mail-order pharmacy. And along the way, there are crosswires that often result in basic prescriptions being caught in a tangle of poorly integrated systems.

It turns out there may be another chapter coming, with potentially better results, at least in the short term.

Jeremy Heimans and Henry Timms, writing in the Harvard Business Review, discuss a future in which new AI systems can learn autonomously and make complex judgments.

“A new generation of AI systems is no longer merely our tools — they are becoming actors in and of themselves: participants in our lives, behaving autonomously, making consequential decisions, and shaping social and economic outcomes.”

To which I say both “yay” and “yikes!”

Heimans and Timms say this isn’t just better tech, it’s a new thing that they call auto sapiens: “‘Auto’ in that they are able to act autonomously, make decisions, learn from experience, adapt to new situations, and operate without continuous human intervention or supervision. ‘Sapiens’ in that they possess a type of wisdom — a broad capacity to make complex judgments in context — that can rival that of humans and in many ways outstrip it.”

They describe the unique change as having four distinct characteristics. “They are agentic (they act), adaptive (they learn), amiable (they befriend), and arcane (they mystify).”

What’s so frightening here is that our experience with bad computer-human interfaces is so painful and aggravating that the arrival of auto sapiens may seem almost immediately delightful.

Imagine being able to change an airplane reservation easily with a charming, pleasant auto sapien who expresses sympathy for your need to change, and does so quickly, without multiple requests for arcane strings of numbers or baffling passwords.

For a moment, your life is better. But as Heimans and Timms worry, the nature of human autonomy may be at risk. “The great challenge of this new age will be finding the paths that enhance our own human agency rather than allowing it to contract or atrophy.”

To put it in stark terms, the bad robots we have now, when Alexa forgets the names of your dining room lights, or your airline has “no record of your upcoming trip,” may be replaced by an efficient auto sapien that gets the basic stuff right, knows what kind of manner and tone you prefer, and is programmed to offer you smaller concessions.

So: “Operator, Operator, Operator,” as I chanted into the phone for hours this weekend, may not respond with weak excuses of “unusually high call volume,” but instead simply fix your problem.

What happens, then, when a robot that can “learn” and be “amiable” is able to decide, on its own, based on your credit score or bank balance, that you value time more than money, and simply charge you more? The dangers of charming robots are chilling — and, it seems, inevitable.