How will AI change news in the future?

Artificial intelligence is already impacting newsrooms, both positively and negatively, journalists said yesterday during a panel at Microsoft’s offices in Mountain View. From left to right: Jon Swartz of MarketWatch, Julie Jammot of AFP, Chris Matyszczyk of ZDNet, Michael Nunez of VentureBeat and Boone Ashworth of Wired. Post photo.

BY BRADEN CARTWRIGHT
Daily Post Staff Writer

The rise of AI will lead to more false news reports spreading, but it could also result in journalistic masterpieces, according to a panel of reporters who discussed AI's impact on journalism in Mountain View last night (April 25).

“I’m convinced this is bigger than Google,” said Michael Nunez, editorial director for the tech news outlet VentureBeat.

Nunez was one of eight journalists on a panel at Microsoft's campus in Mountain View hosted by the San Francisco Press Club. The event came as AI programs such as ChatGPT and Google's Bard, which can generate human-sounding language, have been released in recent months.

Nunez said he uses AI every day. The programs write headlines, come up with ideas for stories and edit his stream-of-consciousness writing, Nunez said.

“It’s like having another person on the team,” he said.

Fast research

AI still needs to be edited and fact-checked, just like any human writer. But AI can do two hours' worth of research in two seconds, Nunez said.

AI will also emulate different writing styles, such as those of Ernest Hemingway or Rolling Stone magazine.

Within a year or two, AI will create its first masterpiece, such as a Pulitzer Prize-winning piece of music, Nunez said.

Promise and peril

Reporters from larger outlets, such as Reuters and AFP, were less enthusiastic about AI than Nunez.

“There’s the promise, and there’s the peril,” moderator and Bloomberg reporter Rachel Metz said.

AI programs have a habit of producing wrong information that sounds right, Metz said.

AI has also created realistic-looking images that went viral on social media, including photos of Donald Trump getting tackled by New York City police officers before he was arrested.

Disclaimer suggested

Images and writing generated by AI should come with a disclaimer, said Benjamin Pimentel, a tech reporter for the San Francisco Examiner.

“The problem is when you’re dealing with AI, and you don’t know it’s AI,” he said.

Nunez disagreed with Pimentel. He said using AI as a tool is like using Photoshop to enhance an image or reposting content on social media.

“I don’t think our readers care, to be honest,” he said.

Julie Jammot, a tech correspondent for the wire service AFP, said her company won’t touch AI “with a six-foot pole” so that people trust the writing is coming from a real person.

Legacy news companies such as the New York Times will be slower to use AI, while blogs will get on board more quickly, Nunez said.

Help with dinner

Boone Ashworth, a staff writer for Wired, said he asked AI what he should make for dinner based on the ingredients in his fridge. 

The recipe didn’t have the right proportions, but it inspired him to find a real recipe online, he said.

Ashworth said that if AI could tell people what new products are coming out, then he could spend more time telling them why these new products matter.