Thierry Spanjaard

Should we trust images and videos?

Progress in AI is probably the most important trend of our times. The technology is advancing faster than anything we have ever seen, and its impact on society and on our decisions is only just beginning!



Now that AI can generate text and images, the next step is video. This is where deepfakes come into the picture. A deepfake can be defined as a computer-generated video in which an AI replaces one person's face with another's while matching typical facial gestures. Deepfake generators can now not only encode and render a video but also produce real-time avatars that react to human interaction.


Deepfakes can be convincing enough to fool even the most prudent person. The old money transfer scam, in which someone impersonating a company's CEO asks for an urgent transfer to a newly created account, has been given a new lease of life. Scammers no longer need to rely on a phone call: they can appear in person in a real-time video conference!


This is what happened to an employee at a Hong Kong-based multinational company. Fraudsters lured him into a video call with what he thought were several other members of staff, all of whom were in fact deepfake recreations. “Scammers found publicly available video and audio of the impersonation targets via YouTube, then used deepfake technology to emulate their voices ... to lure the victim to follow their instructions,” said Baron Chan, senior superintendent of the Cyber Security and Technology Crime Bureau in Hong Kong, as reported by the South China Morning Post (SCMP). According to the targeted employee, the colleagues in the call looked and sounded like real people. Believing everyone else on the call was real, he followed the instructions given during the meeting and made 15 transfers totaling HKD 200 million (EUR 23.5 million) to five Hong Kong bank accounts. The entire episode lasted about a week, from the time the employee was first contacted until he realized it was a scam after making an inquiry with the company’s headquarters, says the SCMP. Police are still investigating and no arrests have been made.


Thanks to AI progress, deepfake generators are becoming more and more readily available. Many providers already offer users the possibility to make their own deepfake videos, with the whole range of needs covered, from free tools to highly professional ones.

 

As deepfakes can be used for a variety of purposes, including scams and political influence, there is a will within the AI community, and more broadly within society, to identify them. The US government is pushing for a watermark that would make it possible to identify AI-generated content. Deepfake generators and tech companies such as Adobe and Microsoft already allow watermarks to be included, but these are no guarantee against manipulation, as they can easily be removed with editing software. Google says it is currently working on what it calls SynthID, a watermark that embeds itself into the pixels of an image. SynthID is invisible to the human eye but still detectable via a dedicated tool, says The Verge.
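To make the idea of a pixel-level, invisible watermark more concrete, here is a minimal sketch using a naive least-significant-bit scheme. This is not how SynthID actually works (Google has not published its method); the functions embed_mark and read_mark are purely illustrative and only show the general principle of hiding a machine-readable mark in pixel data that the human eye cannot see.

```python
# Minimal sketch of an "invisible" pixel-level watermark (least-significant-bit scheme).
# Illustration only: NOT the SynthID technique, whose details are not public.
import numpy as np

def embed_mark(image: np.ndarray, mark_bits: list[int]) -> np.ndarray:
    """Write mark_bits into the least significant bit of the first pixels."""
    flat = image.astype(np.uint8).ravel().copy()
    for i, bit in enumerate(mark_bits):
        flat[i] = (flat[i] & 0xFE) | bit  # change the LSB only: invisible to the eye
    return flat.reshape(image.shape)

def read_mark(image: np.ndarray, length: int) -> list[int]:
    """Recover the mark by reading back the same least significant bits."""
    flat = image.astype(np.uint8).ravel()
    return [int(flat[i] & 1) for i in range(length)]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    img = rng.integers(0, 256, size=(64, 64, 3), dtype=np.uint8)  # stand-in image
    mark = [1, 0, 1, 1, 0, 0, 1, 0]                               # "AI-generated" tag
    tagged = embed_mark(img, mark)
    print("watermark recovered:", read_mark(tagged, len(mark)) == mark)
```

A naive mark like this one is destroyed by re-encoding or editing, which is precisely the weakness mentioned above; the point of an approach such as SynthID is to make the mark survive common transformations while remaining invisible.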


At the same time, another approach is to use a deepfake detector. Such detectors make it possible not only to identify deepfakes but also to trace image manipulations. Solutions of this kind are offered by Intel and Microsoft as well as by specialized startups, typically by scoring video frames for signs of synthesis, as sketched below.
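The sketch below shows what such a frame-by-frame check could look like in practice. The scoring function score_frame and the helper video_is_suspect are hypothetical stand-ins, not the actual interface of any Intel or Microsoft product; a real detector would plug a trained classifier into score_frame.

```python
# Hypothetical sketch of a frame-by-frame deepfake check (not any vendor's API).
import cv2  # pip install opencv-python

def score_frame(frame) -> float:
    """Placeholder: probability that the frame is synthetic.
    A real detector would run a trained classifier here."""
    return 0.0  # stub value for illustration only

def video_is_suspect(path: str, threshold: float = 0.5, step: int = 30) -> bool:
    """Sample every `step`-th frame and flag the video if any sample scores high."""
    cap = cv2.VideoCapture(path)
    index, suspect = 0, False
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if index % step == 0 and score_frame(frame) > threshold:
            suspect = True
            break
        index += 1
    cap.release()
    return suspect
```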

 

In the field of deepfakes as well as in many others, technology is progressing faster than the
