Pulling Back From The Deep-Fake Crisis

 

“Hell is when other people are fake” — Jean-Paul Sartre, had he been writing in 2020.

Have you ever read Jean-Paul Sartre’s famous play No Exit? The main character, Garcin, cries out “Hell is—other people!” after realizing hell is not torture racks and fire, but interpersonal strife manufactured by Lucifer to push sufferers to the brink of psychological collapse. Physical torture would be infinitely easier to bear, Garcin declares, than the vicissitudes of continuous socialization. If you had roommates in quarantine, you might relate. 


Garcin is trapped in a room with two others, but he knows they are real even if he doesn’t like them. In Sartre’s hell, reality itself is not distorted, even if its setup is contrived. Even Sartre failed to imagine a hell where no part of reality can be trusted: a world where synthetic media, better known as deep-fakes, fractures social trust to the point where a Common Truth is nothing but a quaint ideal of yore. 

Roughly 250 million Americans (about 80 percent of the population) are on social media, and 3.5 billion people worldwide. Why? Because it is so much fun. There is no imaginable level of societal derangement that will lead individuals to give up the mother of all toys en masse. 

Much ink has been spilled on the problem of fake news, including a major Netflix documentary, The Social Dilemma (which we wrote about in an earlier story). But if fake news is a strong wind blowing against democracy, synthetic media is a category-five hurricane. Social media, the “digital equivalent of a town square” as Zuckerberg put it, is awash in sensationalism. Content is arm-twisted toward a higher viral coefficient, not toward spreading the truth.

Platforms so far have not been held accountable for the content users publish, thanks to Section 230, the immunity-granting provision added to the Communications Decency Act in 1996, back when the internet was a bunny-hopping digital arcadia.

Section 230 states: “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” Jack Dorsey (Twitter), Zuckerberg (Facebook), and Sundar Pichai (Google) are testifying before Congress on October 28th to state their case for this controversial exemption. 

Fake news shows up as exaggerated or falsified articles or videos, with the occasional doctored image. Usually, with a little digging, you can find something close to the truth, because the raw events, videos, and audio are still largely trustworthy. Synthetic media changes that. Synthetic media obliterates any pretense of Truth, even for the savvy user. Thanks to a technological breakthrough in a kind of machine learning called generative adversarial networks (GANs), computers can manufacture images that are indistinguishable from reality. Let’s look at some examples. 
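The adversarial idea behind a GAN is simple to sketch: a generator tries to fool a discriminator, and the discriminator tries to tell real data from fakes. Below is a minimal toy illustration (assumptions: one-dimensional data, a linear generator, a logistic-regression discriminator; real GANs use deep networks and image data — only the two-player training loop is the point here).

```python
import numpy as np

# Toy sketch of the GAN training loop (assumptions: 1-D data, a linear
# generator, a logistic-regression discriminator).
rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-np.clip(x, -60, 60)))

a, b = 1.0, 0.0   # generator: fake = a*z + b, with noise z ~ N(0, 1)
w, c = 0.1, 0.0   # discriminator: D(x) = sigmoid(w*x + c)
lr = 0.05

for _ in range(2000):
    z = rng.normal(size=64)
    real = rng.normal(4.0, 1.0, size=64)   # "authentic" data: N(4, 1)
    fake = a * z + b

    # Discriminator step: ascend log D(real) + log(1 - D(fake))
    d_real, d_fake = sigmoid(w * real + c), sigmoid(w * fake + c)
    w += lr * (np.mean((1 - d_real) * real) - np.mean(d_fake * fake))
    c += lr * (np.mean(1 - d_real) - np.mean(d_fake))

    # Generator step: ascend log D(fake) (non-saturating generator loss)
    d_fake = sigmoid(w * (a * z + b) + c)
    a += lr * np.mean((1 - d_fake) * w * z)
    b += lr * np.mean((1 - d_fake) * w)

# The generator's output distribution drifts toward the real data.
fake_mean = float(np.mean(a * rng.normal(size=10000) + b))
```

Face-generating models like the ones behind “this person does not exist” sites follow the same adversarial recipe, with large convolutional networks in place of these one-parameter models.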


This guy doesn’t exist. He was created by a GAN. A GAN so powerful that he looks like a real person: a guy who’s had his heart broken, bought a house, loves Shakespeare. All the emotional intimacy of human beings is there, yet it’s fake. The proverbial “uncanny valley,” the idea that emotional connection with a synthetic human will never match “the real thing,” is filling in. 

Even cat videos, the symbol of the benign early days of social media, are no longer trustworthy. Does this matter? Maybe not. But it shows the power of synthetic media, and at the very least, it degrades the quality of the entertainment to have to constantly question: “Is this a real cat drinking straight out of the kitchen faucet?”

To show how fast synthetic media is advancing: Martin Scorsese’s film “The Irishman,” released in 2019, spent millions de-aging its septuagenarian stars Robert De Niro and Joe Pesci. In the two to three years since The Irishman was filmed, free de-aging software has come to do a better job than Hollywood studios did with those multimillion-dollar budgets. Check out this remarkable side-by-side video comparison of free software and Scorsese-level resources. 


These examples are harmless enough. Free content-generation tools allow long-tail content creators, like small YouTubers, to compete with the big dogs. But there are private and public actors committed to maximizing chaos and social unrest in the world, and in their hands these tools are as much a force for evil as they are a force for good, clean fun. 

Where we get to Sartre-level hell is when we start seeing fake videos of political candidates doing unseemly things that make Watergate look like Nixon kissing a baby. And can you imagine what a fake video of the US president announcing a nuclear strike against Russia could do to humanity? Within a very few years, these fake videos will be indistinguishable from reality.

I was at CES 2020 (with 200,000 in-person attendees! Simpler times…) where an arms race played out on the showroom floor between deep-fake AI detectors and deep-fake generators. The financial incentives are so heavily on the side of the content generators that it is all but impossible to imagine the detectors keeping up. It’s as if the US entered a cold war with the Vatican. 

So what’s the path forward? The next step for users is to educate themselves on both the threats and the possible solutions. We’ve already discussed the threats; here are a couple of solutions. Truepic, a San Diego-based photo and video verification company, recently partnered with Qualcomm, a major chip manufacturer for Android phones, to attach metadata to photos taken by smartphones, recording when, where, and with what phone each picture was taken. Cryptographic digital signatures attached to images act as a form of digital watermark that can’t be altered, proving an image or video’s authenticity. 
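To make the mechanism concrete, here is a hypothetical sketch of tamper-evident capture metadata. Everything here is illustrative: the key, field names, and the HMAC-based scheme are assumptions for the sake of a runnable example. Production systems like Truepic’s sign with an asymmetric key provisioned in secure hardware, not a shared secret.

```python
import hashlib
import hmac
import json

# Illustrative only: real devices hold an asymmetric private key in hardware.
DEVICE_KEY = b"secret-key-provisioned-in-hardware"

def sign_capture(image_bytes: bytes, when: str, where: str, device: str) -> dict:
    """Bind the image's hash and capture metadata together under a signature."""
    record = {
        "image_sha256": hashlib.sha256(image_bytes).hexdigest(),
        "when": when, "where": where, "device": device,
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(DEVICE_KEY, payload, hashlib.sha256).hexdigest()
    return record

def verify_capture(image_bytes: bytes, record: dict) -> bool:
    claimed = dict(record)
    sig = claimed.pop("signature")
    if hashlib.sha256(image_bytes).hexdigest() != claimed["image_sha256"]:
        return False  # the pixels were altered after capture
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(DEVICE_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(sig, expected)

photo = b"\x89PNG...raw image bytes..."
rec = sign_capture(photo, "2020-10-28T12:00:00Z", "32.7157,-117.1611", "Phone-X")
assert verify_capture(photo, rec)             # authentic, untouched image
assert not verify_capture(photo + b"x", rec)  # any pixel change breaks it
```

The design point is that the signature covers both the image hash and the metadata, so neither can be swapped out independently without detection.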


At IoTeX, we are committed to maximizing user control over data at the level of hardware, by combining secure hardware, confidential computing, and blockchain technology. Blockchain, in particular, is a spectacularly powerful way to store digital watermarks and track the provenance of a digital asset, like a video: every change and alteration is publicly auditable on a decentralized ledger not controlled by any single person or entity. Trust is synonymous with having access to the truth, and no word is more central to a well-functioning society than trust. There are powerful tools that can help us re-establish trust in a world overrun with deep-fakes, but we must embed these tools at the level of hardware, and commit to using tools that are decentralized and user-centric. 
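The provenance-tracking idea can be sketched as a simple hash chain. This is a toy under stated assumptions: a bare chain of records, no consensus, no signatures, no replication, which a real decentralized ledger would add. It only shows why editing history is detectable.

```python
import hashlib
import json

def block_hash(body: dict) -> str:
    # Canonical serialization so the same record always hashes the same way.
    return hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()

def append_edit(chain: list, asset_sha256: str, note: str) -> None:
    """Record a new version of the asset, chained to the previous record."""
    prev = chain[-1]["hash"] if chain else "0" * 64
    body = {"prev": prev, "asset_sha256": asset_sha256, "note": note}
    chain.append({**body, "hash": block_hash(body)})

def verify_chain(chain: list) -> bool:
    """Any edit to any historical record breaks every later hash link."""
    prev = "0" * 64
    for block in chain:
        body = {k: v for k, v in block.items() if k != "hash"}
        if block["prev"] != prev or block["hash"] != block_hash(body):
            return False
        prev = block["hash"]
    return True

video_v1 = hashlib.sha256(b"raw interview footage").hexdigest()
video_v2 = hashlib.sha256(b"trimmed interview footage").hexdigest()

chain = []
append_edit(chain, video_v1, "original capture")
append_edit(chain, video_v2, "trimmed for length")
assert verify_chain(chain)

chain[0]["note"] = "nothing to see here"  # tamper with history
assert not verify_chain(chain)
```

Because each record commits to the hash of the one before it, rewriting any step of a video’s history invalidates everything after it, which is exactly the auditability property the paragraph above describes.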

If you enjoyed this article, share it with a friend or colleague to help spread the word about deep-fakes and the need to use decentralized technology to maintain trust in our society. Thanks for reading. And please subscribe to our newsletter, The Good Disruption, for our weekly take on user well-being and privacy. 


Source: https://hackernoon.com/pulling-back-from-the-deep-fake-crisis-lx123w7f

