Government Stalls on Ending Deepfakes Like Elon Musk’s Harris Ad

Elon Musk recently made headlines when he posted a fake video of Vice President Kamala Harris, with her voice altered to make it sound as if she described herself as a “diversity hire” who doesn’t know “the first thing about running the country.” A month ago, Anthony Hudson, a Republican congressional candidate in Michigan, posted a TikTok video that used an AI-generated voice of Dr. Martin Luther King Jr. to make it sound as if King had returned from the dead to endorse him. In January, President Joe Biden’s voice was duplicated using AI in a fake robocall sent to thousands of people in New Hampshire, urging them not to vote in the state’s primary the following day.

AI experts and lawmakers have been sounding the alarm, calling for more regulation as AI is used to promote misinformation and disinformation. And now, three months out from the presidential election, the United States is ill-prepared to deal with a potential onslaught of fake content heading our way.

Digitally altered images and videos—also known as deepfakes—have been around for decades, but thanks to generative AI, they’re now significantly easier to make and harder to detect. As the barrier to creating deepfakes falls, they’re being mass-produced and becoming increasingly difficult to regulate. To make matters more challenging, government agencies are fighting over when and how to regulate the technology—if at all—and AI experts fear that failure to act could have a devastating impact on our democracy. Some officials are proposing basic regulations that would disclose when AI is being used in political ads, but Republican political appointees are standing in the way.

“Anytime you’re dealing with misinformation or disinformation that interferes with elections, we need to think of it as a form of voter suppression,” says Dr. Alondra Nelson, who served as deputy director and acting director of the White House Office of Science and Technology Policy under Joe Biden and led the creation of the AI Bill of Rights. She says that AI-generated misinformation “prevents people from having a trusted information environment in which they can make decisions about issues that are very important to their lives.” Rather than physically preventing people from going to the polls, she says, this new form of voter suppression is “a slow, insidious erosion of people’s trust in the truth” that affects their confidence in the legitimacy of institutions and government.

The fact that Musk’s fake video is still online shows that we can’t rely on companies to play by their own rules on misinformation, Nelson says. “There need to be clear boundaries and clear lines around what is acceptable and unacceptable for individual actors and companies, and the consequences for that behavior.”

Several states have passed regulations on AI-generated deepfakes in elections, but federal rules have been harder to come by. This month, the Federal Communications Commission is accepting public comments on a proposed rule that would require advertisers to disclose when AI technology is used in political ads on radio and television. (The FCC has no jurisdiction over online content.)

Since the 1930s, the Federal Communications Commission has required television and radio stations to keep records of who buys campaign ads and how much they paid. Now, the commission is proposing to add a question asking whether artificial intelligence was used in producing the ad. The proposal would not ban the use of AI in ads; it would simply ask whether AI was used.

“We have this national tool that has been around for decades,” says Jessica Rosenworcel, chair of the Federal Communications Commission, in a phone interview with Rolling Stone. “We decided now was the right time to try to update it in a very simple way, when I think a lot of voters just want to know: Do you use this technology? Yes or no?”

Rosenworcel says there is a lot of work to be done when it comes to AI and disinformation. She points to the fake Biden robocall, to which the FCC responded by invoking the Telephone Consumer Protection Act of 1991, which restricts the use of artificial voices in telephone calls. The FCC then worked with the New Hampshire attorney general, who brought criminal charges against the man who created the robocall.

“You have to start somewhere, and I don’t think we should let the perfect be the enemy of the good,” Rosenworcel says. “I think building on a foundation that’s been in place for decades is a good place to start.”

Republican Federal Election Commission Chairman Sean Cooksey opposes the FCC's latest proposal, claiming it would “create chaos” because it is so close to the election.

“Every American should be alarmed that the Democratic-controlled FCC is moving forward with its radical plan to change the rules on political advertising just weeks before the general election,” Cooksey said in a written statement to Rolling Stone. “These vague rules would not only encroach on the FEC’s jurisdiction, but they would sow chaos among political campaigns and confuse voters before they even go to the polls. The FEC should abandon this misguided proposal.”

The Federal Election Commission has routinely deadlocked on such issues, with the commission’s Republican members blocking new regulations on almost anything for years.

The watchdog group Public Citizen has petitioned the Federal Election Commission to open a rulemaking on AI, and Cooksey has said in the past that the agency would provide an update in early summer.

The FEC will not move to regulate AI in political ads this year, Cooksey told Axios, and the commission is scheduled to vote on whether to close the Public Citizen petition on Aug. 15. “The better approach is for the FEC to wait for guidance from Congress and study how AI is actually being used in the real world before considering any new rules,” Cooksey told the outlet, adding that the agency “will continue to enforce its existing regulations against fraudulent misrepresentation of campaign authority regardless of the medium.”

AI experts believe urgent action is needed. “We’re not going to be able to solve all of these problems,” Nelson says, adding that there’s no magic bullet to fix all AI-powered deepfakes. “I think we too often come at the AI problem space looking for that kind of fix, rather than saying, ‘Unfortunately, there’s always going to be crime and we can’t stop all of it, but what we can do is add friction. We can make sure that people face consequences for their bad behavior that will hopefully mitigate it.’”

Rep. Yvette Clarke (D-NY) has been advocating for congressional legislation on AI for years. The Senate recently passed a bipartisan bill targeting pornographic deepfakes created with AI without the subject’s consent.

“It was inevitable that these new technologies, especially AI, that allow you to distort images and sounds, would be weaponized at some point to sow confusion, misinformation, and disinformation among the American people,” Clarke says.

“[There’s] no real way to tell a fake image from something that’s real and true, [which] puts the American people at a disadvantage, especially in this election cycle, with no rules or regulations in place.”

Clarke introduced the REAL Political Advertisements Act in May 2023, which would require campaign ads to disclose, and digitally watermark, any video or images generated by AI. “We’ve had quite a few co-sponsors for the legislation, but it hasn’t been moved forward by the [Republican] majority on the Energy and Commerce Committee,” Clarke says.

“It’s an open space right now for people who want to spread misinformation and disinformation, because there’s nothing regulating it,” Clarke says. She notes that she’s also working on the issue with the Congressional Black Caucus, given that marginalized and minority communities are often disproportionately targeted by misinformation. “We’re falling behind here in the United States, and I’m doing everything I can to push us into the future as quickly as possible.”

Dr. Rumman Chowdhury ran ethical AI at X (formerly Twitter) before Musk took over, and is now the U.S. Science Envoy for Artificial Intelligence. She says the broader issue at hand is that America’s trust in government, elections, and media institutions has reached a dangerously low level. And the FEC, she says, could further erode its own credibility by failing to act.

“We’re in a crisis right now over which institutions and which parts of government we should trust, and they’re going to sit back and say, ‘We don’t know if we should do anything’?” Chowdhury says. “If they’re not seen to be doing something about deepfakes, it could actually hurt their standing in the eyes of the American people.”

As for Musk’s sharing of the fake Harris video specifically, Chowdhury says she doesn’t know why people are surprised he did it: Musk has turned X into a disinformation machine since taking over the platform.

“Is this terrible? Absolutely,” Chowdhury says. “But it’s like we’re the people who voted for the Leopards Eating People’s Faces Party. Are you going to be angry that this guy is doing exactly what he said he was going to do? If you’re angry, don’t use Twitter. Or know that if you’re on the platform, you’re complicit in allowing this guy to manipulate the course of democracy.”


