Ask Questions Later

AI's Next Terrifying Leap: Verification
Your job is safe because AI makes mistakes. For how long?

Dan Perry
Apr 07, 2024
Many who experience generative artificial intelligence reach the same two quick conclusions. First: It's amazing, since it writes and reasons better than many of my colleagues! Second: It's terrifying, since it writes and reasons better than many of my colleagues! For the fretters there has been a single saving grace: AI messes up.

How badly? Comically. People who couldn't come up with a short story to save their lives ask ChatGPT to find a review of their latest novel and then delight when the bot finds one. My job is somewhat safe, they think. But if they're smart, a little voice might add: For now.

This inability to trust the results of queries (delivered though they are with what reads like supreme assurance) has been at the center of several panel discussions I've been on about AI and journalism. For now, the industry is being very careful with AI. But an obvious question hangs in the air: What happens if they fix the problem? After all, if there's one thing that applies to all technology except Microsoft updates, it's that it keeps improving.


I took this concern to ChatGPT, as anyone would, and received this: "I believe that as AI continues to advance, there will be significant improvements in its ability to verify the results of inquiries. Currently, AI systems [can't] independently verify the accuracy of the information they produce. Future generations are expected to ... cross-reference information, evaluate sources, and discern between reliable and unreliable sources.

"These advancements could lead to AI systems that ... provide more accurate and trustworthy responses," it added. "Techniques such as fact-checking algorithms, knowledge graph integration, and probabilistic reasoning could enable AI systems to assess the credibility of information more effectively."

Of course, "as an AI natural language model"—which it reminds you of tirelessly—the bot has no agenda beyond pure analysis (though that can be a fine line, as anyone who's written a "news analysis" can confirm). But the overeager detail of that last sentence sure seemed to contain a glimmer of a gloat. Perhaps a poke in the eye of the salaried stiff.


The claim is convincing—but as we know, it could also be untrue. ChatGPT might have found a satirical article about probabilistic reasoning in The Onion. So, I consulted Elik Eizenberg, a successful London-based serial entrepreneur whose startup, Scroll.AI, develops tools relevant to this discussion.
