Readers Should Insist AI Not Lead to Smaller Newsrooms
Media companies should use AI to redeploy journalists to higher-value work – not as an excuse to fire people and further dumb down the product
This is shaping up as the year the other shoe drops on artificial intelligence: what began as a science-fiction plotline has matured into global terror that AI might replace masses of white-collar workers. It’s a big deal for many industries, but few cases should spark as much concern as journalism – an industry that’s critical to society and has already been disrupted to the breaking point.
It may seem self-serving for a veteran journalist to claim journalism is so important, but I stand by it: with all due respect to social media influencers and independent bloggers – the other two primary “sources” of information – neither has the quality media’s commitment to verification, standards and ethics, nor the resources to pursue them.
Those things are essential to maintaining the public’s trust, which is already being hammered by the toxicity of modern politics – a toxicity that has tainted everyone in the ecosystem, including the media. If that trust deteriorates any further, and if the younger generations are not reclaimed for real journalism, then society’s ability to navigate free markets and democratic politics will plummet.
Recent days brought two developments that underscored the pace of change.
First, the Associated Press agreed to license its news archive to OpenAI (which will improve the bot’s understanding of world history and events) in a deal that also gave the agency, where I spent decades, access to OpenAI (which will increase the temptation to use it). AP had already been experimenting with automated article-writing, especially for sports results.
And then it was reported that Google is in discussions with news publishers including the New York Times and Washington Post about building and selling artificial intelligence tools that could produce written journalism. According to the reports, the product – which the Times said bears the Star Trekky name of Genesis – was pitched as able to collect information, write stories and more.
Everyone issued the requisite reassurances. But realistically, we face the prospect of bots writing stories and headlines. ChatGPT itself has boasted to me of AI’s ability to do this, and even to perform journalistic tasks like information-gathering (as well as summary generation, news monitoring and more). So I asked: Because you gather information online, could you not end up offering untruth as fact?
ChatGPT is programmed to be fair (if evasive), so it offered no disputation: “I rely on the information that is available on the internet (so) while I strive to provide accurate and reliable information, there is a possibility of encountering inaccuracies … Journalists and individuals should critically evaluate and verify the information they receive from multiple reliable sources before considering it as factual.”
For now, this need for human verification is understood to be serious (if you doubt it, ask AI which novels you’re most famous for having written, and you’ll probably see). But we may forget this as the bots get better. When I studied computer vision and AI in grad school in the late 1980s, the challenge was getting the thing to tell a circle from a square; now it can produce fake poetry and art. That’s a steep learning curve.
Consider the two basic advantages bot-journalists will always enjoy, even before machine learning expands their realm of competence: