‘HOLDING MY BREATH’ — An investigation published today by the British television station Channel 4 found evidence of more than 400 instances of digitally altered deepfake pornography depicting more than 30 high-profile UK politicians. The revelations come in the days before the country’s July 4 elections, and they’re far from unique. Deepfake technology — in which users manipulate a video, photo or audio clip, often to replace one person’s likeness with another — is proliferating rapidly with the help of AI, and with little regulation from governments around the world. Its most malicious uses range from spreading election disinformation to creating fake sexually explicit content that looks like the real thing.

In the United States, lawmakers are finally attempting to tackle the issue, which has grown dramatically worse as AI tools make creating this content much easier. States like Florida and Colorado have already passed bills requiring campaigns to disclose the use of deepfakes in political ads. Meanwhile, Michigan is looking to ban the creation and distribution of deepfake porn with full bipartisan support. At the federal level, Congress has made attempts — though mostly unsuccessful ones — to curb the unregulated spread of synthetic videos, most recently a bipartisan effort led by Sen. Ted Cruz (R-Texas) to police and remove deepfake porn from the internet.

These efforts come during a crucial period: The election is just four months away, at a time when trust in government is already at an all-time low. Misuse of deepfakes doesn’t just spread disinformation, according to University of Virginia law professor Danielle Citron, who has studied and written about deepfake videos since 2018. It further erodes the already crippled trust in our institutions, she said, creating an environment ripe for authoritarianism.

To better understand the laws surrounding deepfakes and the technology’s future, Nightly spoke with Citron. This conversation has been edited for length and clarity.

Sen. Ted Cruz and a few other senators are trying to pass a bill that protects victims of deepfake porn, which activists say is a step in the right direction but still a very narrow bill. What do you think needs to be included in a more comprehensive deepfake bill?

I think we have needed a federal, comprehensive intimate privacy bill for too long. It’s overdue. And we need a comprehensive approach to intimate privacy violations that includes the manufacture of intimate images.

But I think the larger looming problem we must address is the recognition that it’s really hard to find the creators. And it’s almost impossible to get them to pay anything, even if you have civil penalties, because they don’t have any money. Where we really should focus is on the gatekeepers, the intermediaries, the content platforms that are making money hand over fist because of the likes, clicks and shares of negative content. They’re immune from responsibility at the content layer: Section 230 [the law that provides tech companies broad immunity from legal challenges to content on their platforms] gives them a shield from liability. If we don’t have a solution that deals with them, it’s always half measures.

How concerned are you about deepfakes in the 2024 U.S. presidential election?

I’m pretty worried, because we’ve already seen fakery.
And now we have the tools of deepfakery, which make it really cheap and easy to create synthetic video and audio that could change an election, deployed at that choke point, the day or two before the election, to suppress the vote. I’m worried. And I’m holding my breath over here.

My fear is that we’re going to see more and more really well-timed videos. You don’t need a large number of them to do a lot of damage. So especially around elections, around significant events, we might see really well-timed fakery at certain choke points. We’ve seen in other countries, like India, deepfakery of candidates doing and saying things they never did or said, in ways designed to hurt their candidacy.

What are some particular concerns?

When deepfakes spread and everyone starts believing them, then there’s no trust. That’s definitely a cycle. We coined a term for this: the liar’s dividend. It captures the idea that when there are truthful images of liars doing and saying something wrong, they can point to the real video and say, “Oh, that’s not me. Everything is fake.” That’s the liar’s dividend. When people are caught doing bad things that are real, they can disclaim them as lies.

Outside of election misinformation, how might this affect our lives?

The technology of synthesizing people doing and saying things they never did can be used for good. We see it in the Star Wars movies. After Carrie Fisher passed away, she appeared in the last Star Wars movie. Her family permitted it, with full consent. The real question is deepfakes without someone’s permission. Using someone’s audio and video without their permission is, for the most part, bad, because it has the potential to hurt them or inherently violate their privacy, which is dignity-denying. The use of synthetic audio and video without someone’s permission? That’s bad.

Is there a future where deepfakes and AI images are just ingrained in our society and we learn to live with them?

Someone asked me this early on in my deepfake work: “Why don’t we just say no video is real? Why don’t we just give up on the project?” You can’t have fakery if no one believes it. You can’t have damage if, epistemically, no one thinks it is true. And my response is always that without our belief in images, we can’t bear witness to the Holocaust. We can’t bear witness to what’s going on in Gaza. We can’t bear witness to any atrocity. That amounts to giving up on proof of truth, and I’m not doing that, ever. That’s the end of democracy. That’s what an authoritarian would love: “No, just believe my truth.” Democracies die when authoritarians are the ones who say, “I am the arbiter of truth.”

Welcome to POLITICO Nightly. Reach out with news, tips and ideas at nightly@politico.com. Or contact tonight’s author at ckim@politico.com or on X (formerly known as Twitter) at @ck_525.