Roughly a year and a half ago, attendees at a film festival in Amsterdam watched a speech by then-President Richard Nixon memorializing astronauts Neil Armstrong and Buzz Aldrin, somberly confirming that they had been fatally stranded on the moon during the 1969 Apollo 11 mission.
The speech had actually been written for Nixon as a contingency but was never delivered, because the astronauts were not, in fact, stranded. The video, created by MIT’s Center for Advanced Virtuality, was meant to underscore the potentially powerful influence of so-called deepfakes.
Deepfakes, a portmanteau of “deep learning” and “fake” that refers mostly to falsified videos and images, were not limited in 2019 to the Nixon presentation, and cruder examples were not uncommon before that. But today they are more numerous, more realistic-looking and, most important, increasingly dangerous. There is no better example of that than the warning this month (March 2021) by the FBI that nation-states are virtually certain to use deepfakes to help propagate increasingly misleading campaigns in the U.S. in coming weeks.
The warning came only a week after news of a nation-state breach of Microsoft Exchange servers, believed to have affected at least 30,000 U.S. organizations. Yet the deepfake threat may be more ominous because it comes amid an explosion of conspiracy theories populating the Internet, further fueled by the January insurrection at the U.S. Capitol in Washington. Authorities have said that attack was spurred in part by misinformation propagated online by far-right influencers and media outlets.
There are now thousands of deepfake videos online at any one time, according to the startup Deeptrace. They create a reality-distortion field that threatens politics, business affairs and the perception of history, and they can even be used in military applications. While deepfakes are still mostly used to graft women’s faces into pornography without their consent, they are rapidly becoming more sophisticated.
Relatively recent deepfake videos, for example, show Facebook’s Mark Zuckerberg admitting that his company’s true goal is to exploit users, and former President Barack Obama using an expletive to describe former President Donald Trump.
Deepfakes obviously make it difficult to distinguish between what is real and what is fake. Conceivably, there could be deepfake footage of President Biden declaring war on China for invading Taiwan, of U.S. soldiers overseas committing atrocities against civilians, or of a politician taking a bribe while running for office. The technology to create deepfakes is widely available, so realistic-looking footage could be created by various groups, even individuals, not just state-sponsored actors.
All this boils down to the weaponization of data, a broader problem not limited to deepfakes. Data can be manipulated in any form, including text, numbers and even voice. Cybercriminals, for example, can gain access to corporate networks and insert false data, undermining the business. This is a threat to so-called data provenance, the historical record associated with a piece of data, including its origin and any changes made over time. Provenance rests on accuracy; without accurate data, most organizations cannot function satisfactorily for long.
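To make the provenance idea concrete, here is a minimal sketch of a tamper-evident change history in Python. The record fields and function names are hypothetical, and a real system would add authentication and durable storage:

```python
import hashlib
import json
from datetime import datetime, timezone

def hash_record(record: dict) -> str:
    """Deterministic SHA-256 digest of a provenance record."""
    return hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()

def record_change(history: list, actor: str, action: str, data: bytes) -> None:
    """Append a provenance record that links back to the previous one.
    Altering any earlier entry changes its hash and breaks the chain."""
    record = {
        "actor": actor,
        "action": action,  # e.g. "created", "updated"
        "content_hash": hashlib.sha256(data).hexdigest(),
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "prev_hash": hash_record(history[-1]) if history else "genesis",
    }
    history.append(record)

def verify(history: list) -> bool:
    """Recompute the chain; False means the history has been altered."""
    return all(
        history[i]["prev_hash"] == hash_record(history[i - 1])
        for i in range(1, len(history))
    )

# Example: a dataset entry is created, then updated; the trail stays verifiable.
history: list = []
record_change(history, "alice", "created", b"Q1 revenue: $1.2M")
record_change(history, "bob", "updated", b"Q1 revenue: $1.4M")
assert verify(history)
```

The point of the hash chain is that an attacker who quietly rewrites one value must also rewrite every subsequent record, which is exactly the kind of inconsistency a provenance check is designed to surface.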
Among the stalwarts of pristine data provenance is Matthias Niessner, a professor of visual computing and AI at the Technical University of Munich in Germany, who, curiously, is one of the few researchers relatively unconcerned about deepfakes themselves. “It takes a lot of effort to create a deepfake, and what do you really get out of it?” he asks. His take on data provenance, however, is altogether different. “Everything is data-driven in the information age,” Niessner says. “So if you don’t get the right data, your algorithms will make the wrong decisions.”
In the case of deepfakes, the primary tool is machine learning. A deepfake creator first trains a neural network on many hours of real video footage of the target so that it develops a detailed sense of what he or she looks like from numerous angles and under varied lighting conditions, then uses that trained network to superimpose the synthesized face onto other footage and cleans up the remaining details frame by frame.
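As a rough illustration of how such a system is structured (a minimal sketch, not any particular tool’s implementation), the classic face-swap approach pairs one shared encoder, which learns pose and expression, with a separate decoder per identity, which learns each person’s appearance. The PyTorch code below uses illustrative layer sizes and stand-in data:

```python
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Shared encoder: compresses a 64x64 face crop into a latent code."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.ReLU(),    # 64x64 -> 32x32
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),  # 32x32 -> 16x16
            nn.Flatten(),
            nn.Linear(128 * 16 * 16, 256),                          # latent code
        )
    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    """Per-identity decoder: reconstructs a face from the latent code."""
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(256, 128 * 16 * 16)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )
    def forward(self, z):
        return self.net(self.fc(z).view(-1, 128, 16, 16))

encoder = Encoder()
decoder_a, decoder_b = Decoder(), Decoder()  # one decoder per identity

# Training: reconstruct each person's own faces through the shared encoder.
faces_a = torch.rand(8, 3, 64, 64)  # stand-in for aligned face crops of person A
recon_a = decoder_a(encoder(faces_a))
loss = nn.functional.mse_loss(recon_a, faces_a)

# The swap: encode person A's face, decode with person B's decoder,
# producing B's appearance with A's pose and expression.
fake_b = decoder_b(encoder(faces_a))
```

Because the encoder is shared between the two identities, the latent code captures pose and expression rather than appearance, which is what makes the final swap possible.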
The most advanced deepfakes use so-called generative adversarial networks (GANs), sometimes heralded as the rise of “AI imagination.” Two machine learning models compete against each other: one trains on a data set and creates video forgeries, while the other attempts to detect them. This one-two process repeats until the detection model can no longer spot the forgery. The larger the set of training data, the easier it is for the forger to create a believable deepfake.
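The following is a minimal sketch of one GAN training step, assuming PyTorch and illustrative network sizes (real video deepfakes use far larger convolutional models):

```python
import torch
import torch.nn as nn

latent_dim, data_dim = 64, 784  # illustrative sizes

generator = nn.Sequential(                  # the "forger"
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, data_dim), nn.Tanh(),
)
discriminator = nn.Sequential(              # the "detector"
    nn.Linear(data_dim, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)
opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
bce = nn.BCELoss()

def training_step(real_batch: torch.Tensor) -> None:
    n = real_batch.size(0)
    real_labels = torch.ones(n, 1)
    fake_labels = torch.zeros(n, 1)

    # 1) Train the detector to separate real samples from forgeries.
    fakes = generator(torch.randn(n, latent_dim))
    d_loss = (bce(discriminator(real_batch), real_labels)
              + bce(discriminator(fakes.detach()), fake_labels))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # 2) Train the forger to fool the detector: its loss falls as the
    #    detector misclassifies its output as real.
    fakes = generator(torch.randn(n, latent_dim))
    g_loss = bce(discriminator(fakes), real_labels)
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()

# Repeated over many batches, the two models push each other until the
# detector can no longer reliably tell forgeries from real data.
training_step(torch.rand(32, data_dim) * 2 - 1)  # stand-in "real" data
```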
The “good guys” in cybersecurity have not just been standing idly by in the face of these developments. They have designed systems that analyze videos for strong indicators of a fake, including questionable blinking patterns and how someone’s facial movements relate to one another. For instance, does the person tend to tilt their head a certain way when smiling?
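One such indicator can be computed quite simply. The sketch below, a heuristic rather than a production detector, estimates blink rate from the eye-aspect ratio over a clip; the six-point eye landmarks are assumed to come from an external face-landmark library (such as dlib or mediapipe), and the threshold is illustrative:

```python
import numpy as np

def eye_aspect_ratio(eye: np.ndarray) -> float:
    """eye: (6, 2) array of landmark coordinates around one eye.
    The ratio of vertical to horizontal eye opening drops sharply
    during a blink."""
    vertical = np.linalg.norm(eye[1] - eye[5]) + np.linalg.norm(eye[2] - eye[4])
    horizontal = np.linalg.norm(eye[0] - eye[3])
    return vertical / (2.0 * horizontal)

def blinks_per_minute(ears: list, fps: float, threshold: float = 0.21) -> float:
    """Count downward threshold crossings in the per-frame EAR series."""
    blinks = sum(
        1 for prev, cur in zip(ears, ears[1:])
        if prev >= threshold > cur
    )
    minutes = len(ears) / fps / 60.0
    return blinks / minutes if minutes else 0.0

# People blink roughly 15-20 times per minute at rest; a clip whose
# subject blinks far less often (or with machine-like regularity)
# merits closer inspection.
```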
The number of hackers far outstrips the number of these software developers, however, and hackers typically develop effective workarounds after new cybersecurity measures initially stymie them.
Some cybersecurity pros are looking for more lasting ways to undermine the growth of deepfakes. One positive step forward would be a bigger embrace of blockchain technology. A blockchain works by creating an unalterable public ledger, stored on multiple computers around the world and made tamper-proof by cryptography. Legitimate videos and images can be registered to such a ledger at the time of creation, so that any later alteration can be detected.
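Here is a minimal sketch of how such registration and verification might work, with a plain in-memory hash chain standing in for a real distributed ledger (all function names here are hypothetical):

```python
import hashlib
import time

def fingerprint(path: str) -> str:
    """SHA-256 over the raw file bytes; any edit changes the digest."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

# Toy in-memory ledger: each block commits to the previous one, so
# rewriting history is detectable. A real blockchain replicates this
# across many independent nodes.
ledger: list = []

def register(path: str, creator: str) -> dict:
    """Record a newly created clip's fingerprint at publication time."""
    block = {
        "creator": creator,
        "content_hash": fingerprint(path),
        "timestamp": time.time(),
        "prev_hash": ledger[-1]["block_hash"] if ledger else "genesis",
    }
    block["block_hash"] = hashlib.sha256(
        repr(sorted(block.items())).encode()
    ).hexdigest()
    ledger.append(block)
    return block

def is_registered(path: str) -> bool:
    """A clip checks out only if its current hash matches a registered one."""
    return any(b["content_hash"] == fingerprint(path) for b in ledger)
```

A doctored copy of a registered clip would produce a different fingerprint and fail the lookup, which is the detection property the registration scheme is after.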
New laws may also help. Fifteen months ago, the nation’s first federal law addressing deepfakes went on the books as part of the National Defense Authorization Act for Fiscal Year 2020. Among other things, it requires the government to notify Congress of foreign deepfake-disinformation activities targeting U.S. elections. In addition, Texas, Virginia and California have recently criminalized deepfake pornography, and California has banned the creation and circulation of deepfakes of politicians within 60 days of an election.
More federal laws, in particular, would be helpful, but in the end no single solution will suffice. A big step, though, would be to increase public awareness of the capabilities and dangers of deepfakes. Informed citizens are a strong defense against misinformation, which has become a genuine national security threat.