News

FCC proposal aims to increase transparency over AI-generated political ads

From deepfakes to misinformation campaigns, officials warn artificial intelligence is expected to play a major role in political ads this year.

Earlier this year, the Federal Communications Commission (FCC) proposed a $6 million fine over robocalls that spoofed President Joe Biden’s voice. Now, the agency is trying to get ahead of future uses of artificial intelligence in political ads.

If adopted, the proposal would examine whether the FCC should require candidate and issue ads on TV and radio to disclose whenever AI-generated content appears. It would not apply to political ads shared on social media.

“What we’re more likely to see are deepfakes of politicians being put on the airwaves by their political opponents,” said Tim Harper, a senior policy analyst for democracy and elections at the Center for Democracy and Technology. He said potential deepfakes could be very simple.

“Small modifications to someone’s appearance, it could put them in circumstances that weren’t true, right? Those sorts of deceptive modifications could be really harmful to the airwaves,” said Harper.

Some FCC officials aren’t in favor of this proposal.

“The FCC can only muddy the waters. AI-generated political ads that run on broadcast TV will come with a government-mandated disclaimer, but the exact same or similar ad that runs on a streaming service or social media site will not? Consumers don’t think about the content they consume through the lens of regulatory silos,” said Commissioner Brendan Carr, the senior Republican on the FCC. “I don’t see how this type of conflicting patchwork could end well. Unlike Congress, the FCC cannot adopt uniform rules.”

As federal requirements are developed, the tech company Reality Defender is already taking action. It uses specialized software to scan files for manipulated audio, video, and images, helping governments, institutions, and platforms detect deepfakes.

“As technology gets better, the challenges get a little bit more challenging,” said Ben Colman, CEO and co-founder of Reality Defender.

With so many AI tools available to anyone, Colman believes these federal requirements shouldn’t be limited to politicians and campaigns.

“We have a huge opportunity to kind of accelerate regulations in the space to protect average consumers that just can’t protect themselves,” he said.

Colman believes federal regulation also has a role to play.

“We’re hoping that, you know, regulation and kind of requirements, both support the AI innovation, but at the same time, put some guardrails, on at minimum, requiring platforms to indicate not blocked but to indicate that there may be AI within a piece of media,” said Colman.
