Pioneering Research

Dr. Khalid Malik is working on an antidote to a potent political weapon threatening the 2020 presidential campaign: deep fakes.


Dr. Malik and his students use various AI techniques to design a reliable digital multimedia forensics system to detect deep fakes. (Photo Credit: Robert Hall)

Department of Computer Science and Engineering

May 7, 2020

By Arina Bokas


Deep fakes — synthetic audio and video produced with artificial intelligence (AI) — rely on generative adversarial networks (GANs), in which two neural networks compete to produce increasingly convincing fabricated recordings.
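To make the adversarial idea concrete, here is a minimal, self-contained sketch (an illustrative toy, not any real deep-fake system) of the GAN training game: a generator maps noise to samples, a discriminator scores samples as real or fake, and each is updated against the other. Both models here are tiny linear functions fitting a one-dimensional Gaussian.

```python
import numpy as np

rng = np.random.default_rng(0)
real_mean, real_std = 4.0, 1.0          # the "real" data distribution

# Generator: x_fake = g_w * z + g_b, with noise z ~ N(0, 1)
g_w, g_b = 1.0, 0.0
# Discriminator: p(real) = sigmoid(d_w * x + d_b)
d_w, d_b = 0.1, 0.0

sigmoid = lambda t: 1.0 / (1.0 + np.exp(-t))
lr, batch = 0.05, 64

for step in range(500):
    z = rng.standard_normal(batch)
    x_real = rng.normal(real_mean, real_std, batch)
    x_fake = g_w * z + g_b

    # --- discriminator step: push real scores up, fake scores down ---
    p_real = sigmoid(d_w * x_real + d_b)
    p_fake = sigmoid(d_w * x_fake + d_b)
    # gradients of binary cross-entropy w.r.t. d_w, d_b
    grad_w = np.mean((p_real - 1) * x_real) + np.mean(p_fake * x_fake)
    grad_b = np.mean(p_real - 1) + np.mean(p_fake)
    d_w -= lr * grad_w
    d_b -= lr * grad_b

    # --- generator step: make fakes that the discriminator scores as real ---
    x_fake = g_w * z + g_b
    p_fake = sigmoid(d_w * x_fake + d_b)
    # gradient of -log(p_fake), chained through the discriminator
    g_w -= lr * -np.mean((1 - p_fake) * d_w * z)
    g_b -= lr * -np.mean((1 - p_fake) * d_w)

# With z ~ N(0, 1), the generator's output mean is g_b: it drifts toward
# the real mean as the adversarial game plays out.
print(f"generator output mean ≈ {g_b:.2f} (real data mean is {real_mean})")
```

The same dynamic, scaled up to deep networks over images and waveforms, is what makes deep fakes both convincing and hard to detect: the generator is explicitly trained to defeat a detector.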

“Deep fakes are computerized manipulation. They can undermine opponents, mislead voters and erode public trust. There is a lot of anxiety over the new technologies that can create confusion on the eve of a White House vote,” Dr. Malik says.

Deep fakes have already made their appearance in other sectors. According to The Washington Post, researchers at the cybersecurity firm Symantec report at least three cases of executives’ voices being falsified to rob companies of millions of dollars.

“This technology is being massively used to spread misinformation around the globe. Existing methods of detection have many limitations and are unable to provide clear answers, as deep fake generators are ahead in the game compared to those working on detecting them,” Dr. Malik explains.

AI-generated fake multimedia detection is also a major challenge for Internet of Things (IoT) applications, mainly because of the lack of datasets. Existing datasets, such as ASVspoof and ReMASC, contain only first-order replay recordings, which prevents evaluation of anti-spoofing algorithms against multi-order replay attacks in IoT. They do not capture the characteristics of microphone arrays, either.

To help combat the threat, Dr. Malik uses various AI techniques to design a reliable digital multimedia forensics system that can verify the authenticity and integrity of digital audio, identify the acoustic environment, and model and extract the distortion introduced by digital recording devices. His recent inventions and discoveries include novel acoustic feature descriptors that capture non-linearities and microphone signatures from audio samples; the groundwork for a unified anti-spoofing framework for detecting multi-order-replay, cloning, and cloned-replay attacks; and a voice spoofing detection corpus (VSDC) with multi-order replays and cloned replays.
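A toy illustration of the underlying idea: replaying audio through a loudspeaker and re-recording it introduces non-linear distortion, and simple frame-level spectral descriptors can expose it. The features and the soft-clipping "replay" below are assumptions for demonstration, not the descriptors or corpus from Dr. Malik's lab.

```python
import numpy as np

def frame_features(signal, sr=16000, frame_len=512, hop=256):
    """Per-frame spectral centroid, spectral flatness, and kurtosis.
    Illustrative handcrafted descriptors only."""
    feats = []
    for start in range(0, len(signal) - frame_len + 1, hop):
        frame = signal[start:start + frame_len] * np.hanning(frame_len)
        mag = np.abs(np.fft.rfft(frame)) + 1e-12
        freqs = np.fft.rfftfreq(frame_len, d=1.0 / sr)
        centroid = np.sum(freqs * mag) / np.sum(mag)            # brightness
        flatness = np.exp(np.mean(np.log(mag))) / np.mean(mag)  # tone vs noise
        kurt = np.mean((frame - frame.mean())**4) / (frame.var()**2 + 1e-12)
        feats.append((centroid, flatness, kurt))
    return np.array(feats)

# A clean tone vs. the same tone through a soft clipper — a crude stand-in
# for loudspeaker non-linearity in a replay chain.
sr = 16000
t = np.arange(sr) / sr
clean = 0.8 * np.sin(2 * np.pi * 440 * t)
replayed = np.tanh(3 * clean)   # non-linear distortion adds harmonics

f_clean = frame_features(clean, sr)
f_replay = frame_features(replayed, sr)
# The distortion's extra harmonics raise the spectral centroid.
print("mean centroid clean vs replayed:",
      round(f_clean[:, 0].mean(), 1), round(f_replay[:, 0].mean(), 1))
```

Real anti-spoofing systems use far richer features and learned classifiers, but the principle is the same: the physical replay chain leaves measurable fingerprints in the signal.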

In the near future, Dr. Malik’s efforts will focus on developing three detectors. A Deep Fakes Detector will analyze videos for visual forgery, such as face swaps or lip syncing. A Voice Replay Attacks Detector will determine whether the audio stream of a deep fake is bona fide or a replay. A Voice Cloning Attacks Detector will identify the audio signal of a deep fake video as bona fide or cloned.
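One natural way to use three such detectors together — sketched below with hypothetical names and a simple any-fails rule, not the actual system design — is to flag a clip if any single modality is judged manipulated:

```python
from dataclasses import dataclass

@dataclass
class Verdict:
    detector: str
    bona_fide: bool   # did this detector judge its modality authentic?
    score: float      # detector confidence in [0, 1]

def combine(verdicts):
    """Flag the clip if any detector judges its modality manipulated."""
    flagged = [v for v in verdicts if not v.bona_fide]
    return {"authentic": not flagged,
            "failed": [v.detector for v in flagged]}

# Hypothetical verdicts for one clip: the visuals and cloning checks pass,
# but the replay detector trips, so the clip is flagged overall.
result = combine([
    Verdict("visual-forgery", bona_fide=True, score=0.91),
    Verdict("voice-replay", bona_fide=False, score=0.84),
    Verdict("voice-cloning", bona_fide=True, score=0.77),
])
print(result)
```

An any-fails rule is conservative by design: a clip with convincing video but a replayed audio track is still a fake.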

Dr. Malik’s students are also actively involved in the research, conducted in the Security Modeling and Intelligent Learning in Engineering Systems (SMILES) laboratory, which he founded.

“My research interests include design and analysis of algorithms for fake multimedia detection, prediction systems for subarachnoid and ischemic stroke, detection of emerging infectious diseases and pandemics, brain angiograms analytics, hybrid knowledge and machine learning-based information extraction from clinical corpus, automated knowledge generation framework, and secure multicast protocols in intelligent transportation systems,” Dr. Malik says. His research is supported by the National Science Foundation (NSF), the Brain Aneurysm Foundation (BAF), and other national and international agencies.

The researcher’s larger goal is to integrate his work on multimedia forensics and intelligent decision support to develop secure, voice-controlled decision support systems for healthcare, smart homes, and autonomous vehicles.
