Informed Citizens are Better Citizens

by Michael Barbella

There is a saying that “seeing is believing” and another that “the camera doesn’t lie.” New technology is challenging both notions.

In April 2018, former President Barack Obama appeared to issue a public warning about phony digital content and reality distortion. In the minute-long broadcast alert, the former president is seated before the Stars and Stripes, wearing his trademark navy blue suit and American flag lapel pin. He speaks directly to the viewer, addressing the threat posed by online misinformation. But something seems…off.

About halfway into the segment, the video reveals who is really behind the audio being heard: actor/director Jordan Peele. The entertainer teamed with BuzzFeed.com to alert the public to the dangers of falsified web content. Forged videos such as Peele’s are better known as deepfakes, named for their use of deep learning, a form of machine learning that uses algorithms to create realistic, digitally altered material. The technology allows for the manipulation of images and video footage to create new videos of people doing or saying things they never did or said.
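
The article does not go into the mechanics, but the face-swap recipe popularized by early deepfake tools trains one shared encoder alongside a separate decoder for each person: the encoder learns a compact, person-neutral description of a face (expression, pose, lighting), and each decoder learns to redraw one specific person from that description. Below is a minimal, purely illustrative PyTorch sketch of that idea; every name, layer size, and training detail is an assumption made for illustration, not code from FakeApp or any actual tool.

    # Illustrative sketch only: the shared-encoder / twin-decoder autoencoder
    # design behind early face-swap deepfakes. Names, sizes, and training data
    # are hypothetical placeholders, not taken from FakeApp or any real tool.
    import torch
    import torch.nn as nn

    class Encoder(nn.Module):
        # Compresses a 64x64 RGB face crop into a small "face description" vector.
        def __init__(self, latent_dim=256):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),    # 64 -> 32
                nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),   # 32 -> 16
                nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),  # 16 -> 8
                nn.Flatten(),
                nn.Linear(128 * 8 * 8, latent_dim),
            )

        def forward(self, x):
            return self.net(x)

    class Decoder(nn.Module):
        # Redraws a face from the description; one decoder is trained per person.
        def __init__(self, latent_dim=256):
            super().__init__()
            self.fc = nn.Linear(latent_dim, 128 * 8 * 8)
            self.net = nn.Sequential(
                nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),   # 8 -> 16
                nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),    # 16 -> 32
                nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),  # 32 -> 64
            )

        def forward(self, z):
            return self.net(self.fc(z).view(-1, 128, 8, 8))

    encoder, decoder_a, decoder_b = Encoder(), Decoder(), Decoder()
    loss_fn = nn.MSELoss()

    # Stand-in batches of face crops; real systems train on thousands of frames.
    faces_a = torch.rand(8, 3, 64, 64)  # person A
    faces_b = torch.rand(8, 3, 64, 64)  # person B

    # Training objective: each decoder learns to reconstruct its own person
    # from the SHARED encoder, so the latent space becomes person-neutral.
    loss = (loss_fn(decoder_a(encoder(faces_a)), faces_a)
            + loss_fn(decoder_b(encoder(faces_b)), faces_b))

    # The swap: person A's expression and pose, redrawn as person B's face.
    fake_b = decoder_b(encoder(faces_a))

Because both decoders read from the same encoder, feeding person A’s frame through person B’s decoder redraws A’s expression and pose with B’s face. Real tools add face detection, alignment and blending on top of this core idea.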

Quickly developing technology

Deepfakes first surfaced in late 2017 after an anonymous Reddit user named “deepfakes” began posting phony videos of celebrities in “compromising positions.” By early 2018, the technology was being shared through a free app (FakeApp) that uses artificial intelligence (AI) to swap out faces and change voice recordings. Peele used FakeApp to create the deceptive Obama video, which reportedly took a total of 56 hours to produce.

Since its debut, FakeApp, along with a host of copycat programs, has generated a flood of phony online videos of famous people. Celebrities and public figures are the most susceptible to the technology because of the sheer amount of footage available for deepfake creators to use.

Deepfake technology has also been used to create comedic and prank videos: actor Nicolas Cage, for example, has suddenly appeared in films with which he was never associated, and Leonardo da Vinci’s famous Mona Lisa painting has been spotted talking (and smiling). A museum in Florida used deepfake technology to create an interactive experience with the famous painter Salvador Dalí.

Deepfake technology has come a long way in a short time, moving past the celebrity realm and into the business and political arenas, where it has been used to rip off companies and discredit lawmakers. Cybersecurity software provider Symantec reports that scammers used phony audio of three different CEOs in recent months to trick senior financial executives into transferring cash.

Politicians, including President George W. Bush, President Donald Trump and Democratic House Speaker Nancy Pelosi, have also been deepfake targets. A three-minute video of Speaker Pelosi showed her appearing to slur her words. Technically, the Pelosi video isn’t considered a deepfake, since it was created not by image manipulation but simply by slowing down the playback speed. It demonstrated, however, how quickly a doctored video can go viral before it is discredited: the clip was exposed as a fake, but not before it had racked up 2.5 million Facebook views.
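
That kind of low-tech fake requires no AI at all. As a purely illustrative sketch, assuming the open-source moviepy library (1.x API) and a hypothetical input file, slowing a clip to 75 percent speed, roughly the degree of slowdown reported for the Pelosi video, takes only a few lines of Python:

    # Illustrative only: slowing a video clip with the moviepy library (1.x API).
    # "speech.mp4" is a hypothetical input file; 0.75 means 75% of normal speed.
    from moviepy.editor import VideoFileClip
    import moviepy.video.fx.all as vfx

    clip = VideoFileClip("speech.mp4")
    slowed = clip.fx(vfx.speedx, 0.75)    # slows both the picture and the audio
    slowed.write_videofile("slowed.mp4")  # speech now sounds sluggish and slurred

The point of the example is how low the barrier is: a misleading clip of this sort takes seconds to make, no machine learning required, which is part of why such videos can spread faster than they can be debunked.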

Real world consequences

So far, deepfakes have not had major consequences, but U.S. intelligence agencies warn about their potential influence during the 2020 election.

In a hearing before the House of Representatives Intelligence Committee, Danielle Citron, a law professor at the University of Maryland and an expert on deepfakes, urged lawmakers to take the dangers seriously, testifying, “A deepfake could cause a riot; it could tip an election; it could crash an IPO [initial public offering].”

The Pentagon is taking the dangers of the technology seriously and is currently researching how best to detect deepfakes. The technology raises serious concerns about the potentially damaging impact of phony online content on personal reputations, truth and overall trust in what we see online. Many are concerned that deepfakes could also call into question legitimate videos, spreading more doubt about what can be believed. In addition, even if a deepfake is identified, the damage may already have been done.

Those concerns have intensified in the past year as improvements in deepfake technology have outpaced detection methods. Technology experts say deep learning applications are now so readily accessible that it has become virtually impossible to prevent the creation of deepfakes or their spread on social media.

“Technologically speaking, there is nothing we can do,” Ali Farhadi, senior research manager for the computer vision group at the Allen Institute for Artificial Intelligence in Seattle, told Business Insider. “The technology is out there, [and] people can start using it in whatever way they can.”

A narrow focus

Any effective solution to address the potentially damaging consequences of deepfakes must address the way in which the technology is used rather than the AI technology itself, authorities note. Lawmakers throughout the country have proposed deepfake-targeted legislation that attempts to penalize those who misuse the technology.

In Virginia, for example, it is now illegal to share real or fake nude photos or videos of someone without his or her permission. The law, which took effect July 1, 2019, also covers photoshopped images and any other kind of fake footage.

Two bills pending in the California State Assembly would prohibit the creation of bogus digital imagery and sexually explicit audio-visual works without proper consent, as well as the distribution of phony political audio or visual media within 60 days of an election. Federal lawmakers are targeting manipulated media content through proposals like the Malicious Deep Fake Prohibition Act, introduced in December 2018 and currently before the U.S. Senate Judiciary Committee, and the DEEPFAKES Accountability Act, introduced in June 2019 and referred to the House Subcommittee on Crime, Terrorism and Homeland Security.

These proposed laws, however, could run afoul of the U.S. Constitution’s guarantee of free speech, which protects content such as parody videos. Proposals to address deepfakes must be narrowly tailored and clearly define the potential harms involved, according to Ellen P. Goodman, a professor at Rutgers Law School in Camden who specializes in information policy law.

“If it’s not narrowly tailored and the harms are not defined, it will be unconstitutional,” Professor Goodman notes. “To avoid First Amendment issues, the law has to be very narrowly tailored to the kind of content you’re targeting. It also has to be content neutral and it cannot address any kind of editing or alteration.”

Some legal experts believe new legislation is unnecessary because existing laws already give deepfake victims appropriate options for legal challenges. Deepfakes used for criminal or harassment purposes, for example, are subject to criminal laws, according to the Electronic Frontier Foundation, an international nonprofit digital rights group. Likewise, bogus content used in blackmail attempts is subject to extortion laws.

Copyright infringement claims offer some protections against deepfakes, but that approach will not stop their spread because the videos (and images) may fall under the “fair use” exemption. The fair use rule in copyright law allows for some unlicensed use of material that otherwise would be copyright protected.

Defamation laws are another potential weapon against the harms of deepfakes, but victims must prove the content is false and portrays them in a way that damages their reputation.

“Defamation is one approach—if the deepfake is about an actual person, and it’s false, and the content is damaging to that person’s reputation, then defamation laws would apply,” Professor Goodman says.

As with any new technology, deepfakes may ultimately force a change in law or legal precedent. Even so, Brett R. Harris, a Woodbridge attorney who specializes in technology law, says that existing law already offers protection for victims of deepfakes.

“AI is often surrounded by a ‘wow’ factor, where the novelty often distracts from the underlying legal issues,” Harris says. “In these situations, new laws may be developed to make clear that [basic legal] principles should be applied.”

Professor Goodman points out, “We are entering a world where public figures—Congress members, U.S. senators, presidents—can be depicted saying things that they never said, and there doesn’t seem to be anything under current law to prevent that. I do think there is an opportunity for new laws to prevent that from happening, but they will have to be very narrowly focused.”

Discussion Questions

  1. What potential harms can you think of related to the use of deepfakes?
  2. What potential benefits can you think of related to the use of deepfakes?
  3. Do you think the government should put more regulations on technology or less? Explain your answer.
  4. Right now, social media platforms are policing themselves with regard to deepfakes. What are potential problems with the current oversight? Who should bear the responsibility of rooting out deepfakes and policing their abuse? Explain your reasoning.

Glossary Words
algorithm: a set of rules to be followed in calculations.
extortion: the act of obtaining property or money by the use of violence, threats or intimidation.

This article originally appeared in the fall 2019 edition of The Legal Eagle.