AI At The Precipice Of Truth

Steve Rosenbaum
3 min read · Aug 8, 2023


The New York Times had a headline that can only be described as scary. “What Can You Do When A.I. Lies About You?” asks the Times. The short answer is, essentially nothing. “Sometimes the technology creates and spreads fiction about specific people that threaten their reputations and leaves them with few options for protection or recourse.”

The article is rife with examples, including one that struck home for me. “One legal scholar described on his website how OpenAI’s ChatGPT chatbot linked him to a sexual harassment claim that he said had never been made, which supposedly took place on a trip that he had never taken for a school where he was not employed, citing a nonexistent newspaper article as evidence.”

Some months earlier I’d asked ChatGPT to write my biography. It was sharp and well written, except on three key facts where it was entirely wrong: where I went to college, what degree I’d earned, and what I did for a living. It was not a claim of a criminal conviction, but it might as well have been, since I would have had no recourse either way.

So AI can harm you, but until now only with inaccurate data.

Michael Parekh, a noted tech analyst, investor and blogger, reports on the pairing of AI and robots, calling them “a natural fit like peanut butter and chocolate.” Um, OK. I get the sci-fi charm.

Writes Parekh: “Google adds LLM AI to Robots: Google is leveraging their deep experience at DeepMind with robots, adding LLM AI capabilities to robots, Elon’s ‘Optimus’ robots, Boston Dynamics (Hyundai) robots, Softbank’s robot aspirations, and of Amazon, with their budding ‘Astro’ robot line, especially after their purchase of Roomba.”

He describes it as “a delicious step forward,” building on the peanut butter metaphor, which works unless you’re allergic to peanuts. Then, it’s not delicious — but deadly.

So if robots use LLM logic to secure a property, arrest felons, or aim military drones, a mistake like labeling you a terrorist becomes deadly serious.

Leaders in AI tech seem to know that they’re facing the consequences of AI’s habit of telling mistruths. Or is “lies” the more accurate word here?

To help address mounting concerns, seven leading AI companies agreed in July to adopt voluntary safeguards, such as publicly reporting their systems’ limitations. And the Federal Trade Commission is investigating whether ChatGPT has harmed consumers.

There’s only one problem with the voluntary safeguards agreement: According to explodingtopics.com, there are 57,933 artificial intelligence companies worldwide. China has the highest rate of AI deployment (58%), closely followed by India (57%); as of 2022, the U.S. lagged behind with a comparatively low 25%. Exploding Topics cites Tracxn Technologies, which tracks startup businesses, as the source of this data. I’m not sure those numbers are a precise count, but suffice it to say the number of AI companies far exceeds the seven that shook hands with President Biden in making the safeguard pledge.

If AI driven by a large language model is built to impersonate human conversation, then it makes sense that AI will lie — as people are known to do. Where does that leave truth?

Michael Graziano, a professor of psychology and neuroscience at Princeton University, thinks a “post-truth world” may be driven by a rapid shift to AI. “Reality has become pixels, and pixels are now infinitely inventable,” Graziano told Wired magazine. “We can create them any way we want to.” His fear is that AI could well make it easier to convince people of what we now call fake news. Michal Kosinski, a computational psychologist and associate professor of organizational behavior at Stanford University, says, “We are sliding, very quickly, towards an AI-controlled and AI-dominated world.”

Truth is complicated but essential. The scientific community tends to shy away from the notion of absolute truths, as scientific theories are continually tested, substantiated, and sometimes disproven. Relative truth is, as it sounds, relative. But AI’s computer-generated decisions seem essentially scientific and data-driven, and therefore hard to dispute or ignore.

When AI produces a resume or calls for a criminal prosecution, it’s hard to imagine how human implementers will question algorithmic statements of absolute truth. And that brings us to a precipice of truth, as we look over the edge.
