I never said that! High-tech deception of ‘deepfake’ videos
Associated Press
WASHINGTON
Did my congressman really say that? Is that really President Donald Trump on that video, or am I being duped?
New technology on the internet lets anyone make videos of real people appearing to say things they’ve never said. Republicans and Democrats predict this high-tech way of putting words in someone’s mouth will become the latest weapon in disinformation wars against the U.S. and other Western democracies.
This technology uses facial mapping and artificial intelligence to produce videos that appear so genuine it’s hard to spot the phonies. Lawmakers and intelligence officials worry that the bogus videos – called deepfakes – could be used to threaten national security or interfere in elections. That hasn’t happened yet, but experts say it’s not a question of if, but when.
“I expect that here in the United States, we will start to see this content in the upcoming midterms and national election two years from now,” said Hany Farid, a digital-forensics expert at Dartmouth College in Hanover, N.H.
When an average person can create a realistic fake video of the president saying anything they want, Farid said, “we have entered a new world where it is going to be difficult to know how to believe what we see.” The reverse is a concern, too: people may dismiss genuine footage, say of a real atrocity, as fake in order to score political points.
Realizing the implications of the technology, the U.S. Defense Advanced Research Projects Agency (DARPA) is already two years into a four-year program to develop technologies that can detect fake images and videos. Right now, it takes extensive analysis to identify phony videos.
Deepfakes are so named because they use deep learning, a form of artificial intelligence. They are made by feeding large amounts of images and audio of a certain person into a computer algorithm, or set of instructions. The program learns to mimic the person’s facial expressions, mannerisms, voice and inflections. With enough video and audio of someone, a fake video of the person can be combined with fake audio to make them appear to say anything.
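The training process the article describes can be sketched in miniature. A common architecture behind face-swap deepfakes is an autoencoder with one shared encoder and a separate decoder per person; the swap comes from encoding person A and decoding with person B’s decoder. The toy below is purely illustrative: real systems train deep convolutional networks on thousands of video frames, while here each “face” is just an 8-number vector and the networks are single linear layers, so every name and number is an assumption made for the sketch.

```python
import numpy as np

# Toy sketch of the shared-encoder / per-person-decoder autoencoder
# idea behind many face-swap deepfakes. All sizes are illustrative.
rng = np.random.default_rng(0)
DIM, LATENT, STEPS, LR = 8, 4, 3000, 0.01

# Stand-in "photo collections": noisy samples around each person's face.
face_a = rng.normal(size=DIM)
face_b = rng.normal(size=DIM)
photos_a = face_a + 0.05 * rng.normal(size=(200, DIM))
photos_b = face_b + 0.05 * rng.normal(size=(200, DIM))

# One shared encoder; one decoder per person (linear for simplicity).
enc = rng.normal(scale=0.1, size=(DIM, LATENT))
dec_a = rng.normal(scale=0.1, size=(LATENT, DIM))
dec_b = rng.normal(scale=0.1, size=(LATENT, DIM))

def total_loss():
    # Mean squared reconstruction error over both photo collections.
    la = np.mean((photos_a @ enc @ dec_a - photos_a) ** 2)
    lb = np.mean((photos_b @ enc @ dec_b - photos_b) ** 2)
    return la + lb

loss_before = total_loss()
for _ in range(STEPS):
    for photos, dec in ((photos_a, dec_a), (photos_b, dec_b)):
        z = photos @ enc                  # encode
        err = z @ dec - photos            # reconstruction error
        # Gradient descent on mean squared error (in-place updates).
        dec -= LR * (z.T @ err) / len(photos)
        enc -= LR * (photos.T @ (err @ dec.T)) / len(photos)
loss_after = total_loss()

# The "swap": encode person A's face, decode with B's decoder.
fake = face_a @ enc @ dec_b
```

After training, both reconstruction errors have dropped, and `fake` is person A’s input pushed through person B’s decoder, which is the mechanism real face-swap tools scale up with deep networks and actual imagery.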