A ‘deepfake’ synthesizes hyper-realistic images, speech, music and video by superimposing source material onto existing footage. It is a technology that allows any video to be altered to suit its creator.
Light-hearted applications like 'Humen' can make anyone “dance”, and 'Modulate' can change video-gamers’ voices in real time. However, the same class of algorithms can edit voices and manipulate mouths, seamlessly twisting truths into lies.
2017 saw hacking hobbyists generating humorous impersonation videos and fake celebrity pornography. The technology was glitchy and easily discerned from genuine footage: more a gimmick than a threat.
However, deepfake technologies have rapidly improved. Hany Farid, a professor at the University of California, Berkeley, notes that in January 2019 deepfakes were buggy and flickery; nine months later, he had “never seen anything like how fast they’re going”, concluding that this is “only the tip of the iceberg”.
Deepfakes have already caused problems in election campaigns and financial markets, among other arenas, and they undermine the credibility of all video content. A manipulated video of a trusted figure can swing an election or move a share price.
Startups, academics and multinationals are racing to combat the threat of deepfakes by using artificial intelligence to detect distortions. Mark Zuckerberg promised Congress last October that Facebook would use algorithms to identify fake news stories. More recently, Google’s Jigsaw, which builds technology to tackle global security threats, has been collaborating with partners in the space, generating hundreds of deepfake simulations as a dataset to help train detection software.
Some have noted that these videos are not of the same quality as the latest deepfakes, highlighting how relentlessly the technology develops. Google responded that the range in quality is deliberate, improving the training of detection neural networks: a technique that the startup Deeptrace also employs.
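To see what training on such a dataset might look like in practice, here is a minimal sketch of a real-versus-fake frame classifier in Python with PyTorch. It is illustrative only: the directory layout (data/train/real and data/train/fake), the ResNet-18 backbone and the hyperparameters are all assumptions for the sketch, not a description of Google’s or Deeptrace’s actual systems.

```python
# Minimal sketch: training a binary real-vs-fake frame classifier.
# Assumes a hypothetical folder layout: data/train/real and data/train/fake,
# where the 'fake' folder deliberately mixes high- and low-quality deepfakes,
# mirroring the varied-quality dataset strategy described above.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

# ImageFolder labels each image by its subfolder name (real / fake).
train_set = datasets.ImageFolder("data/train", transform=transform)
loader = DataLoader(train_set, batch_size=32, shuffle=True)

model = models.resnet18(weights=None)          # small backbone, for illustration
model.fc = nn.Linear(model.fc.in_features, 2)  # two classes: real, fake

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(3):
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```

The design point the sketch illustrates is simple: if the fake class spans a wide range of quality, the classifier cannot rely on the obvious artefacts of crude fakes alone, which is precisely the rationale Google gave for varying the quality of its dataset.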
Regardless, the rate of development is alarming and the technology race is on.
Relying on large numbers of deepfakes to train artificial intelligence through machine learning may not be sufficient. The ‘frame problem’ remains a significant hurdle for artificial intelligence: algorithms lack an understanding of a video’s context. Humans may still be needed to spot contextual inconsistencies, such as a stray shadow or inaccurately rendered weather, which can escape even advanced algorithms programmed to analyse mannerisms and speech patterns.
Social media sites are wrestling with how to regulate the space. Facebook’s ‘refusal to act’ rests on its status as a platform rather than a publisher, in the name of free speech: it is therefore not responsible for content on its site, and regulation must come from the publishers themselves. Facebook is, however, cooperating on detection research, attacking deepfakes at the source.
Twitter has recently highlighted the double standard into which Facebook seems to fall: trying to stop people “gaming systems” to spread misleading information, while letting anyone who pays say what they want (an Orwellian doctrine of free speech: all tweets are equal, but some are more equal than others). Jack Dorsey, Twitter’s CEO, tweeted this week against deepfakes, committing Twitter to stopping all political advertising. This restricts the widespread impact of deepfakes, since the popularity of video content will be based on merit rather than artificially boosted by paying advertisers. The new policy will be enforced from 22nd November.
Governments are also struggling to get to grips with the technology. In America, the DEEPFAKES Accountability Act covers only the wholesale reconstruction of images and voices, not minor adjustments. The video of Speaker Pelosi appearing drunk did not rewrite her words; the footage was merely slowed until her speech slurred, yet the consequences were much the same. Most concerning, deepfakes could be used as a tool by states to destabilise one another. Experts in the field note that governmental organisations or multinationals may be able to intercept deepfakes made with the firepower of a lone hacker, but targeted deepfakes backed by an entire country’s resources could greatly destabilise their target.
This is a race of technologies, and we desperately need to improve our capacity to discern fact from fiction. Ultimately, AI remains limited by the frame problem, and regulation poses tricky questions for platforms and governments alike. Henry Ajder, of Deeptrace, notes that “the concept of truth has never been as solid as we like to think”. The proliferation of deepfakes compels us to question the provenance and validity of all online content, and to work out what is a lie and what is the truth.