The Danger Of AI Search

Steve Rosenbaum
May 14, 2024

The thing about Google search is that it is, by definition, a “human-in-the-loop” process. If you search for information, Google provides links. Some of them are sponsored, some of them are organic. But at the end of the day, it’s up to the human searcher to decide what to believe and what to reject.

ChatGPT offers no such interface. In fact, it is very much the opposite. It responds to questions with ANSWERS. It’s unequivocal and downright certain about its digital wisdom.

The other day, I queried ChatGPT about journalist Bob Garfield. The conversation was fun and frothy. It knew who he was, knew that he wasn’t related to the cat by the same name, and then declared with certainty that Bob’s current podcast gig “Future Forward” is one he co-hosts with Rebecca Jarvis of ABC News.

There’s only one problem. I am Bob’s co-host, and the Jarvis answer is flat-out wrong.

If the rumors are correct, OpenAI is broadening its reach and taking on Google search in a full-frontal assault, aiming to replace Google's source-based, human-filtered search business with OpenAI's charming, but most certainly wrong, AI. This isn't charming; it's dangerous.

In a blog post framed as a debate, noted AI critic Gary Marcus wrote: “The reason I don’t think the systems are intelligent isn’t just because these systems are next word predictors (which they are) but also because, for example, they are utterly incapable of fact-checking what they say, even against their own databases, and because in careful tests over and over they make silly errors over and over again.”

So what is going to happen after OpenAI replaces the boring (but accurate) search results of Google with the charming but relentlessly flawed answers of the OpenAI conversational answer engine? Well, it seems OpenAI has a plan for that as well — to replace the sources of the open web with a pay-to-play model, the Preferred Publishers Program, which offers licensing deals to media companies, as reported in one publication.

Unlike Google’s mission to organize the web, OpenAI is looking to pick sources, and then boost them based on some unknown set of priorities.

The economics of this charm offensive amount to a standoff. The New York Times is suing OpenAI for stealing its content, and CNN has blocked OpenAI's crawlers. Meanwhile, Axel Springer, Le Monde, Prisa Media, and others are doing deals with the fast-growing platform.

This puts the editorial framework of news organizations on a collision course with the investors in ChatGPT.

I asked ChatGPT how much OpenAI has raised, and it responded with a somewhat lengthy paragraph that lands on $4 billion. That answer is easily, and provably, wrong: the number is at least $11.7 billion, according to tracxn.com.

It seems that our soon-to-be-most-popular answer engine is often wrong about facts and numbers. And with more than eleven billion dollars on the line, investors are hungry for their 10x returns. That means OpenAI's results are going to be targeting ads at you in the very near future.

So much for AI and truth.
