Artificial Intelligence

Regulators consider first federal rule on AI-created political ads

Sidestepping the debate about whether to ban artificial content, the new rule would require political ads to disclose whether they were made with AI.

FILE - A woman wearing gloves drops off a mail-in ballot at a drop box in Hackensack, N.J., on July 7, 2020. (AP Photo/Seth Wenig, File)

Amid a campaign tinged by concerns about so-called deepfakes, the Federal Communications Commission is proposing a first-of-its-kind rule to mandate disclosure of artificial intelligence-generated content in political ads, though it may not go into force before the election.

Regulators have been slow to grapple with the new technology, which allows people to use cheap and readily available AI tools to impersonate others. FCC Chair Jessica Rosenworcel says disclosure is a critical — and, perhaps just as important, doable — first step in regulating artificially created content.

“We spent the better part of the last year in Washington hand-wringing about artificial intelligence,” Rosenworcel said in an interview. “Let’s do something more than hand-wringing and pearl-clutching.”

The new rule would require TV and radio ads to disclose whether they include AI-generated content, sidestepping, for now, the debate over whether such content should be banned entirely. Existing laws already prohibit outright deception in TV ads.

“We don’t want to be in a position to render judgment; we simply want to disclose it so people can make their own decisions,” Rosenworcel said. 

The move was inspired in part by the first-known deepfake in American national politics, a robocall impersonating President Joe Biden that told voters not to turn out in January’s New Hampshire primary. 

“We kicked into high gear because we want to set an example,” Rosenworcel said of the swift official response to the New Hampshire deepfake. 

The political consultant behind the deepfake robocall, who was outed by NBC News, faces a $6 million fine from the FCC and 26 criminal counts in New Hampshire courts. The U.S. Justice Department on Monday threw its weight behind a private lawsuit brought by the League of Women Voters. 

The consultant, Steve Kramer, claimed he made the ad only to highlight the danger of AI and spur action.

Some political ads have already started using artificially generated content in both potentially deceptive and nondeceptive ways, and generic AI content is becoming more common in nonpolitical consumer ads simply because it can be cheaper to produce.

Some social media companies have banned AI-created political ads. Congress has considered several bills. And about 20 states have adopted their own laws regulating artificial political content, according to the nonprofit group Public Citizen, which tracks the efforts.

But advocates say national policy is necessary to create a uniform framework. 

Not only has the social media platform X declined to ban videos created with AI, but its billionaire owner, Elon Musk, has been one of their promoters. Over the weekend, he shared with his 192 million followers a doctored video made to look like a campaign ad for Vice President Kamala Harris.

The government does not regulate social media content, but the FCC has a long history of regulating political programming on TV and radio, including maintaining a database of political ad spending with information that TV and radio stations are required to collect from ad buyers. The new rule would simply have broadcasters also ask ad buyers whether their spots were made with AI.

The Federal Election Commission, meanwhile, has been considering its own AI disclosure rules. The Republican chairman of the FEC wrote to Rosenworcel asking the FCC to stand down, arguing that his agency is the rightful regulator of campaign ads.

Rosenworcel brushed past the interagency squabbling, noting both agencies — along with the IRS and others — have played complementary roles in regulating political groups and spending for decades. The FCC also regulates a wider variety of ads than the FEC, including so-called issue ads run by nonprofit groups that do not expressly call for the defeat of a candidate. 

And advocates note the FEC has a difficult time doing much of anything because it is, by design, split evenly between Republicans and Democrats, making consensus rare.

“We’re barreling towards elections which may be distorted, or even decided, by political deepfakes. Yet this is an entirely avoidable dystopia if regulators simply demand disclosures when AI is used,” said Robert Weissman, a co-president of Public Citizen, who said he hopes the FCC rule will be finalized and implemented “as soon as possible.”

Still, while Rosenworcel said the FCC is moving as quickly as possible, federal rulemaking is a deliberate process that requires clearing numerous hurdles, as well as time for public input.

“There will be complicated questions down the road,” she said. “Now is the right time to start this conversation.”

This story first appeared on NBCNews.com.

Copyright NBC News