Artificial intelligence technology threatens to further distance us from the news organisations that produce our news, but it is also a tool we can use to do more with our news. This week Stuff, which has warned that AI could disrupt the business of journalism, published an article about reader opinion written using ChatGPT. Mediawatch asks unimpressed editors: do we all need to get used to this now?
Last week, readers browsing the country's most popular news site were lured in by an enticing headline.
“Polls say Christchurch, New Zealand is the best place to be,” the article begins, before adding, “New Plymouth isn't happy with it.”
The headline included two important selling points for news: controversy and a metaphysical puzzle. First of all, is Christchurch really the best place in New Zealand? Could the Stuff reader poll somehow be wrong?
But more importantly, how can New Plymouth, a metropolitan center not blessed with the gift of consciousness, express its unhappiness? Are the clouds rolling down from Mt Taranaki and hanging over the city to show their displeasure? Do the waves crash harder at Fitzroy Beach?
This story doesn't answer those questions. Perhaps because it was written by another being, unawakened to the joys, confusion, and horrors of mortal existence.
At the top of the story, a standfirst explains that it was assembled by a robot.
“This article was condensed from the original Stuff report and published member comments using the generative AI tool ChatGPT, with supervision and editing by Stuff journalists.”
For some readers, the article may have seemed a little strange, given that Stuff's leaders have been outspoken about the threat AI poses to journalism.
Its chief executive, Sinead Boucher, warned of the potential for AI-driven media havoc at a recent select committee hearing on the proposed Fair Digital News Bargaining Bill.
“Over the past year, we have seen the rise of AI technology, hailed as a game-changer for humanity by the tech companies that own it, but at its core is a terrible, large-scale theft of intellectual property,” she said. “For news organisations around the world, this development is increasingly looking like an extinction-level event.”
But Stuff's AI policy allows leeway for exactly this sort of thing, such as summarising polling data about New Zealand's best cities, provided a human editor oversees the final product.
Boucher listed several other potential use cases for AI in an interview with Mediawatch last year.
“Text reports based on things like company results or sports scores save effort. There's no intellectual property in them,” she said.
“It's an assistant in a way, because it allows journalists to focus on stories where human insight, human creativity, empathy, human relationships, all of those things really matter.”
One media organization is already using AI for one of these purposes.
BusinessDesk uses ChatGPT to compile stock exchange coverage. The company's editor-in-chief, Matt Martell, says the initiative saves time and frees up reporters for other tasks.
“We use it to write up NZX market data. Articles that used to take at least 30 minutes now take less than 30 seconds,” he told Mediawatch.
The real problem with Stuff's latest AI article is not that it violates the company's editorial policies, but that its entire existence is a cardinal crime against writing, and perhaps creativity itself.
Reading the story feels something like having a spider crawl across your brain.
The writing is more unsettling than terrifying, existing in a kind of uncanny valley between sense and nonsense.
David Farrier, a journalist and author of the blog Webworm, highlighted a “terrifying” passage in the story.
“Stuff threw this question into the ring and New Zealanders, unashamed to stand by their patch, swung with opinions as diverse as the flavours of a Whittaker's sampler,” it read.
Other journalists also took aim at the AI-generated work.
The Spinoff's Madeleine Chapman wrote an article with the headline “Outrageous! It turns out that AI is bad at writing”. She rapped the story for starting with the word “ah” and for using the phrase “New Plymouth lovers”.
But the reality is that AI-written articles are unlikely to stay this bad forever. The Stuff farce is perhaps just the first incursion in AI's inevitable annexation of territory traditionally occupied by journalists.
Chapman told Mediawatch that AI reporting could work well as a time-saving device if it frees up journalists to do research or other quality work.
But she worried that the use of AI could undermine media companies seeking financial support from readers for their journalism.
She said readers often take the worst things produced by media companies and ask why they should pay when they hit a paywall.
“I wish people wouldn't approach journalism that way, because it's an unattainable hurdle. Not everything you publish can be the favourite of someone you're asking to pay for it. But at the same time, if that's the approach people are taking, we don't want what they pick to be literally written by a robot.”
Nevertheless, Chapman saw little point in avoiding AI inroads into journalism altogether.
She said newsrooms like hers need to focus on what the new technology can't offer. For The Spinoff, that means distinctive feature writing and long-form investigations.
“People are willing to pay money if they know a lot of work has gone into it and this is the only place they can get it,” she says.
“I think you can either be pragmatic about it or powerless, because no matter what happens, people will find a way to innovate with it and find a way to make it work.”