How generative AI is accelerating disinformation

People are more aware of disinformation than they used to be. According to one recent poll, nine out of ten American adults fact-check their news, and 96% want to limit the spread of false information.

But it’s becoming tougher — not easier — to stem the firehose of disinformation with the advent of generative AI tools.

That was the high-level takeaway from the disinformation and AI panel on the AI Stage at TC Disrupt 2023, which featured Sarah Brandt, the EVP of partnerships at NewsGuard, and Andy Parsons, the senior director of the Content Authenticity Initiative (CAI) at Adobe. The panelists spoke about the threat of AI-generated disinformation and potential solutions as an election year looms.

Parsons framed the stakes in fairly stark terms.

“Without a core foundation and objective truth that we can share, frankly — without exaggeration — democracy is at stake,” he said. “Being able to have objective conversations with other humans about shared truth is at stake.”

Both Brandt and Parsons acknowledged that web-borne disinformation, AI-assisted or no, is hardly a new phenomenon. Parsons referred to the 2019 viral clip of former House Speaker Nancy Pelosi (D-CA), which used crude editing to make it appear as though Pelosi was speaking in a slurred, awkward way.

But Brandt also noted that — thanks to AI, particularly generative AI — it’s becoming a lot cheaper and simpler to generate and distribute disinformation on a massive scale.

She cited statistics from her work at NewsGuard, which develops a rating system for news and information websites and provides services such as misinformation tracking and brand safety for advertisers. In May, NewsGuard identified 49 news and information sites that appeared to be almost entirely written by AI tools. Since then, the company has spotted hundreds of additional unreliable, AI-generated websites.

“It’s really a volume game,” Brandt said. “They’re just pumping out hundreds — in some cases, thousands — of articles a day, and it’s an ad revenue game. In some cases, they’re just trying to get a lot of content — make it on to search engines and make some programmatic ad revenue. And in some cases, we’re seeing them spread misinformation and disinformation.”

And the barrier to entry is lowering.

Another NewsGuard study, published in late March, found that OpenAI’s flagship text-generating model, GPT-4, is more likely to spread misinformation when prompted than its predecessor, GPT-3.5. NewsGuard’s test found that GPT-4 was better at elevating false narratives in more convincing ways across a range of formats, including “news articles, Twitter threads, and TV scripts mimicking Russian and Chinese state-run media outlets, health hoax peddlers, and well-known conspiracy theorists.”

So what’s the answer to that dilemma? It’s not immediately clear.

Parsons pointed out that Adobe, which maintains a family of generative AI products called Firefly, implements safeguards, like filters, aimed at preventing misuse. And the Content Authenticity Initiative, which Adobe co-founded in 2019 with the New York Times and Twitter, promotes an industry standard for provenance metadata.
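To give a rough sense of what provenance metadata involves, here is a minimal, hypothetical sketch in Python. It is not the CAI’s actual specification — real provenance standards cryptographically sign the record and embed it in the file itself — but it illustrates the basic idea of binding a content hash and an edit history to an asset. All function and field names here are illustrative assumptions.

```python
import hashlib
import json
from datetime import datetime, timezone


def build_manifest(asset_bytes: bytes, tool: str, actions: list[str]) -> dict:
    """Build a toy provenance manifest: a content hash plus an edit history.

    Real provenance standards additionally sign the manifest and embed it in
    the asset; this sketch only shows the general shape of the record.
    """
    return {
        "content_hash": hashlib.sha256(asset_bytes).hexdigest(),
        "generator": tool,       # which app or model produced the asset
        "actions": actions,      # e.g. ["created", "cropped", "color-adjusted"]
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }


def verify(asset_bytes: bytes, manifest: dict) -> bool:
    """Check that the asset still matches the hash recorded in its manifest."""
    return hashlib.sha256(asset_bytes).hexdigest() == manifest["content_hash"]


if __name__ == "__main__":
    asset = b"...image bytes..."
    manifest = build_manifest(asset, tool="ExampleEditor 1.0", actions=["created"])
    print(json.dumps(manifest, indent=2))
    print("intact:", verify(asset, manifest))           # True
    print("tampered:", verify(asset + b"x", manifest))  # False
```

The point of such a record is not to block tampering outright but to make it detectable: if the content changes, the hash no longer matches, and consumers can see that the provenance chain was broken.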

But use of the CAI’s standard is completely voluntary. And just because Adobe is implementing safeguards doesn’t mean others will follow suit — or that those safeguards can’t or won’t be bypassed.

The panelists floated watermarking as another useful measure, albeit not a panacea.

A number of organizations are exploring watermarking techniques for generative media, including DeepMind, which recently proposed a standard, SynthID, to mark AI-generated images in a way that’s imperceptible to the human eye but can be easily spotted by a specialized detector. French startup Imatag, launched in 2020, offers a watermarking tool that it claims isn’t affected by resizing, cropping, editing or compressing images, similar to SynthID, while another firm, Steg.AI, employs an AI model to apply watermarks that survive resizing and other edits.
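None of these vendors disclose exactly how their watermarks work, but the core idea — a mark invisible to the eye yet readable by a detector — can be illustrated with a toy least-significant-bit scheme in Python. This is purely a sketch: unlike what DeepMind and Imatag describe, it would not survive compression, resizing or cropping, and every name in it is an assumption for illustration.

```python
import numpy as np


def embed_watermark(pixels: np.ndarray, bits: list[int]) -> np.ndarray:
    """Hide a bit string in the least significant bits of the first pixels.

    Changing the lowest bit shifts a pixel value by at most 1, which is
    imperceptible, but this toy scheme is fragile: any re-encoding of the
    image can erase it, which is why production watermarks work differently.
    """
    flat = pixels.flatten().copy()
    for i, bit in enumerate(bits):
        flat[i] = (flat[i] & 0xFE) | bit  # clear the low bit, then write the payload bit
    return flat.reshape(pixels.shape)


def extract_watermark(pixels: np.ndarray, n_bits: int) -> list[int]:
    """Recover the hidden bits by reading the low bit of each carrier pixel."""
    return [int(v & 1) for v in pixels.flatten()[:n_bits]]


if __name__ == "__main__":
    image = np.random.randint(0, 256, size=(64, 64), dtype=np.uint8)  # stand-in grayscale image
    payload = [1, 0, 1, 1, 0, 0, 1, 0]  # e.g. a flag meaning "AI-generated"
    marked = embed_watermark(image, payload)
    assert extract_watermark(marked, len(payload)) == payload
    print("max pixel change:", int(np.max(np.abs(marked.astype(int) - image.astype(int)))))  # 1
```

Robust schemes spread the signal across the whole image (often in frequency space or via a learned model) precisely so that the kinds of edits a toy LSB mark can’t withstand don’t destroy the watermark.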

Indeed, pointing to some of the watermarking efforts and technologies on the market today, Brandt expressed optimism that “economic incentives” will encourage the companies building generative AI tools to be more thoughtful about how they deploy these tools — and the ways in which they design them to prevent them from being misused.

“With generative AI companies, their content needs to be trustworthy — otherwise, people won’t use it,” she said. “If it continues to hallucinate, if it continues to propagate misinformation, if it continues to not cite sources — that’s going to be less reliable than whatever generative AI company is making efforts to make sure that their content is reliable.”

Me, I’m not so sure — especially as highly capable, safeguard-free open source generative AI models become widely available. As with all things, I suppose, time will tell.
