Tech’s Ever-growing Deepfake Problem


The run-up to the U.S. presidential election is speeding the arrival of a tipping point for digital fakery in politics, Axios’ Ashley Gold reports.

What’s happening: As the election, a pandemic and a national protest movement collide with new media technology, deliberately deceptive media is proliferating and evolving, and companies are struggling to enforce their often-vague policies against it.

Driving the news:

  • This week, thousands of Twitter and Facebook users, including White House deputy chief of staff Dan Scavino, circulated a manipulated video that appeared to show Democratic presidential candidate Joe Biden falling asleep during a live interview. The Trump campaign separately posted a selectively edited video of Biden.
  • Ady Barkan, a health care activist and lawyer who speaks in a computerized voice due to the neurological disease ALS, wrote in the Washington Post this week about his experience after House Minority Whip Steve Scalise shared a video (which Scalise later took down) featuring Barkan speaking with manipulated audio.
  • Last month, a fake video of House Speaker Nancy Pelosi appearing to be drunk or drugged (the second of its kind) circulated on Facebook.

The big picture: Platforms are taking only mild measures against even such crudely edited content, often slapping a label on it after it has already circulated for hours or days. Experts worry about Silicon Valley’s ability to meet the challenge once AI-generated deepfakes become widespread and it becomes trivially easy to make any famous person appear to say anything.

“We’re unprepared because social media companies have failed to detect this type of content at internet scale, and detect it fast enough to stop the spread of it before it does damage,” said Jeffrey McGregor, CEO of photo authentication firm Truepic.

Some especially tricky challenges we face or will soon face, according to experts:

  • “Cheapfakes,” or “shallow fakes,” like the slowed-down Pelosi video, can spread quickly before getting caught, and even then may not be taken down. Facebook, whose manipulated media policy focuses specifically on deepfakes, labeled that video as misleading but left it up.
  • “Readfakes” are on the rise. The term, coined by Graphika researcher Camille François, refers to AI-generated text, which can take the form of fake articles and op-eds.
  • Generative adversarial networks (GANs), which can create images of non-existent people, let disinformation campaigns make fake social media accounts or even infiltrate traditional media (see the sketch after this list).
  • “Digital humans” expand on that idea, combining voice synthesis with synthetic video to create entire fake personas.
  • Sheer volume is a concern: as AI gets better at generating large quantities of images or text at once, it can flood the internet with junk and leave people less sure of what’s real.
  • Plus: Those sharing faked media are getting smarter about staying just inside platform rules, for example by claiming a video is parody. Of course, some manipulated videos really are parodies, which only makes the problem tougher.
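To make the GAN item above concrete: a generator network learns to turn random noise into images while a discriminator network learns to tell those images from real photos, and each improves by exploiting the other’s mistakes. Below is a minimal sketch of that adversarial loop in PyTorch. It is a toy illustration of the principle only; real face generators are far larger models trained on millions of photos, and every name and dimension here is invented for the example.

    # Toy GAN: a generator and a discriminator trained against each other.
    import torch
    import torch.nn as nn

    latent_dim, img_dim = 64, 28 * 28  # noise size, flattened image size

    # Generator: maps random noise to a flattened synthetic image.
    G = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                      nn.Linear(256, img_dim), nn.Tanh())

    # Discriminator: scores how "real" an image looks (1 = real, 0 = fake).
    D = nn.Sequential(nn.Linear(img_dim, 256), nn.LeakyReLU(0.2),
                      nn.Linear(256, 1), nn.Sigmoid())

    opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
    opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
    bce = nn.BCELoss()

    def train_step(real_images):
        n = real_images.size(0)
        # 1. Train D to separate real images from the generator's fakes.
        fakes = G(torch.randn(n, latent_dim)).detach()
        d_loss = (bce(D(real_images), torch.ones(n, 1)) +
                  bce(D(fakes), torch.zeros(n, 1)))
        opt_d.zero_grad(); d_loss.backward(); opt_d.step()
        # 2. Train G so that D scores its output as real.
        g_loss = bce(D(G(torch.randn(n, latent_dim))), torch.ones(n, 1))
        opt_g.zero_grad(); g_loss.backward(); opt_g.step()
        return d_loss.item(), g_loss.item()

    # One toy step; random tensors stand in for a batch of real photos.
    print(train_step(torch.randn(32, img_dim)))

The arms race built into this setup is exactly why GAN output keeps getting harder to distinguish from real photos: any tell the discriminator finds is a tell the generator is then trained to remove.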

Solutions are tough to come by. Experts agree that attempting to catch and kill deepfakes and cheapfakes on a case-by-case basis may never work at scale. But some tactics can help.

  • Putting deepfake detection tools in users’ hands would help platforms address the challenge of scale, Graphika’s Camille François told Axios. And it may give users more confidence than a platform telling them what’s real or fake.
  • Best practices shared across the industry are a must, said Rob Meadows, chief technology officer at the AI Foundation, which recently partnered with Microsoft on Reality Defender, a deepfake detection initiative. Ideally these would include some sort of objective criteria for assessing the likelihood that a given piece of media is faked, he said.
  • Authenticating images and videos by recording when they’re taken and logging every subsequent edit or manipulation could prove more effective at restoring trust than trying to detect and quash deepfakes once they’re already circulating. Truepic is among the companies working on an open standard to do just that (a toy sketch of the idea follows this list).
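To illustrate the authentication idea in the last bullet, here is a minimal sketch of a capture-time record plus a hash-chained edit log in Python. This is a toy rendering of the general concept, not Truepic’s actual standard or any real API; all names are hypothetical, and a real system would also cryptographically sign each record so a forger could not simply regenerate the log.

    # Toy provenance chain: hash the image at capture time, then append a
    # hash-linked record for every edit. Tampering breaks the chain.
    import hashlib, json, time

    def sha256(data: bytes) -> str:
        return hashlib.sha256(data).hexdigest()

    def record_hash(record: dict) -> str:
        return sha256(json.dumps(record, sort_keys=True).encode())

    def capture(image_bytes: bytes) -> list:
        """Start a provenance log with a capture-time record."""
        return [{"action": "capture", "timestamp": time.time(),
                 "image_hash": sha256(image_bytes),
                 "prev_record_hash": None}]  # first link in the chain

    def append_edit(log: list, edited_bytes: bytes, action: str) -> list:
        """Append an edit record chained to the previous record's hash."""
        log.append({"action": action, "timestamp": time.time(),
                    "image_hash": sha256(edited_bytes),
                    "prev_record_hash": record_hash(log[-1])})
        return log

    def verify(log: list, image_bytes: bytes) -> bool:
        """Check every chain link and match the file to the last record."""
        links_ok = all(curr["prev_record_hash"] == record_hash(prev)
                       for prev, curr in zip(log, log[1:]))
        return links_ok and log[-1]["image_hash"] == sha256(image_bytes)

    # Usage: capture, crop, then verify the delivered file against its log.
    original = b"...raw image bytes..."
    log = capture(original)
    cropped = b"...cropped image bytes..."
    log = append_edit(log, cropped, "crop")
    print(verify(log, cropped))   # True: file matches the logged history
    print(verify(log, original))  # False: file doesn't match the last edit

The appeal of this approach is that trust attaches to a file’s documented history rather than to a detector’s after-the-fact judgment, which sidesteps the scale problem entirely.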

Background: Tech platforms have been working on combating deepfakes and have rolled out new policies in the past year or so. Results have been mixed.

  • The best model in Facebook’s Deepfake Detection Challenge detected deepfakes only 65% of the time, per results the company announced in June (a toy sketch of this kind of classifier follows this list).
  • A Facebook spokesperson told Axios the company expects deepfakes to spread and wants to “catalyze more research and development in this area and ensure that there are better open source tools to detect deepfakes before they become a bigger problem.”
  • Google said it is exploring and investing in ways to address synthetic media, and is researching deepfake detection.
  • YouTube prohibits deceptive uses of manipulated media. In early 2020, Facebook, TikTok, Twitter and Reddit all sought to tighten their deepfake policies.
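For a sense of what “deepfake detection” means in code, below is a toy sketch of the kind of frame-level classifier such challenges score: a small convolutional network that maps a face crop to a probability that the frame is fake. The architecture is invented purely for illustration and is not the challenge winner or any platform’s actual detector.

    # Toy frame-level deepfake classifier: face crop in, fake-probability out.
    import torch
    import torch.nn as nn

    class FrameClassifier(nn.Module):
        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1))  # pool to one vector per image
            self.head = nn.Linear(32, 1)

        def forward(self, x):
            # x: (batch, 3, H, W) face crops; returns one logit per crop.
            return self.head(self.features(x).flatten(1))

    model = FrameClassifier().eval()
    with torch.no_grad():
        # Random tensors stand in for a batch of 224x224 face crops.
        probs = torch.sigmoid(model(torch.randn(4, 3, 224, 224)))
    print(probs.squeeze(1))  # per-frame probability that each crop is fake

An untrained toy like this is of course useless in practice; the hard parts the 65% result reflects are assembling training data that covers ever-changing generation methods and generalizing to fakes made by techniques the model never saw.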

Go deeper: Tech platforms struggle to police deepfakes

