If you listen to Supasorn Suwajanakorn, aside from thinking what a mouthful his name is, you’ll notice just how insightful he is. By studying the world’s history and expressing it in an interesting and interactive manner, he hopes to help us forge a better future devoid of past mistakes.
He’s also the super brain behind the research underpinning an interesting piece of tech publicly known as ‘deep fakes’.
Supasorn set out to create realistic holograms of Holocaust survivors that, combined with artificial intelligence (AI), would give students of history an experience that mimics talking to an actual Holocaust survivor.
He hoped this would preserve the authenticity of the Holocaust tragedy and convey the seriousness of the narration while passing on critical lessons.
To this end, Supasorn created a set of algorithms that can generate an animated, three-dimensional face model of a person from just a series of photos and videos.
This meant that, armed with just a handful of photos, he could in theory make a video of anyone appearing to say anything.
And with time, this scenario has played out. Because the tech is available, there have already been viral videos featuring the likenesses of Facebook founder Mark Zuckerberg and US House Speaker Nancy Pelosi.
Potentially, a deep fake video could go as far as making a sitting president appear to declare war on another nation. On a more personal level, your likeness could appear in a video call ending things with your significant other. Scary, right?
Dark side
Luckily for the world, though, the potential risks of deep fakes have caught the attention of high-level security agencies and tech businesses across the globe.
Supasorn and his team also recognised the dark side of this technology and its consequences. For instance, if the public grows increasingly sceptical of video footage, videos will eventually stop being trusted as evidence.
Supasorn is currently working on a countermeasure in conjunction with the AI Foundation – a start-up founded in 2017 whose main aim is to build tools to protect against the risks of AI.
The project, Reality Defender, aims to help the ordinary person identify deep fake videos and avoid falling for AI manipulation.
In essence, Reality Defender is a web browser plug-in that scans all the images and videos that you come across online, and flags potentially fake or AI-generated content in real time as you browse.
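Reality Defender has not published its internals, but the general shape of such a plug-in is straightforward: a script running in the page watches for images and videos, asks a detection model how likely each one is to be synthetic, and overlays a warning on anything that scores as suspicious. The TypeScript sketch below illustrates only that flow; the detection endpoint (https://detector.example/api/score), its response format and the scoring threshold are hypothetical placeholders, not Reality Defender’s actual API.

```typescript
// Hypothetical content script for a deep-fake-flagging browser extension.
// The detection service and its JSON response shape are assumptions for
// illustration only; they are not Reality Defender's real API.

const DETECTOR_URL = "https://detector.example/api/score"; // hypothetical endpoint
const FAKE_THRESHOLD = 0.8;                                // hypothetical cut-off

interface ScoreResponse {
  fakeProbability: number; // 0 = almost certainly real, 1 = almost certainly synthetic
}

// Ask the (hypothetical) detection service how likely a piece of media is fake.
async function scoreMedia(src: string): Promise<number> {
  const res = await fetch(DETECTOR_URL, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ url: src }),
  });
  const data = (await res.json()) as ScoreResponse;
  return data.fakeProbability;
}

// Overlay a small warning badge next to media the detector considers suspicious.
function flagElement(el: HTMLElement, probability: number): void {
  const badge = document.createElement("div");
  badge.textContent = `possibly AI-generated (${Math.round(probability * 100)}%)`;
  badge.style.cssText =
    "padding:2px 6px;background:#c0392b;color:#fff;font:12px sans-serif;";
  el.insertAdjacentElement("afterend", badge);
}

// Track media we have already checked so nothing is scored twice.
const seen = new WeakSet<Element>();

// Scan every <img> and <video> currently on the page.
async function scanPage(): Promise<void> {
  const media = document.querySelectorAll<HTMLImageElement | HTMLVideoElement>("img, video");
  for (const el of Array.from(media)) {
    if (seen.has(el) || !el.currentSrc) continue;
    seen.add(el);
    try {
      const probability = await scoreMedia(el.currentSrc);
      if (probability >= FAKE_THRESHOLD) flagElement(el, probability);
    } catch {
      // Detection service unreachable: fail silently rather than break the page.
    }
  }
}

// Run once on load, then re-run whenever new media is injected into the page.
scanPage();
new MutationObserver(() => void scanPage()).observe(document.body, {
  childList: true,
  subtree: true,
});
```

In practice the hard part is the detection model itself, which is presumably why such tools keep it server-side; the browser extension is mostly plumbing that decides which media to check and how to present the verdict.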
The second way you can spot fake videos is through a web browser plug-in created by a pair of UC Berkeley students, Ash Bhat and Rohan Phadte.
It’s called SurfSafe and is available for download on most major browser extension platforms.
The final countermeasure is information. By exposing as many people as possible to the reality of deep fakes, we can make viewers more critical of what they watch on screen and less likely to fall for dangerous manipulation.