Detecting Deepfakes

A Texas A&M assistant professor is working with his students to develop better ways to differentiate between real and computer-generated images.
By Hannah Conrad, Texas A&M University College of Engineering | March 6, 2020

[Illustration: side profile of an AI robot formed from a network. Image: Getty Images]

Deepfake images of people look real. They pose in realistic settings and, in the case of videos, can emote almost naturally. However, everything about a deepfake is synthetic: the image is generated entirely by computer code and depicts a person who doesn't exist.

Freddie Witherden, an assistant professor in the Department of Ocean Engineering at Texas A&M University, wrote in a recent paper that the falsified faces, and the bots that spread them, are not without flaws.

Used largely as bots on social media platforms to spread fake news and sway opinion, deepfakes are a global and interdisciplinary issue.

“It seems, initially, like something vastly different to what an ocean engineer would normally do day to day,” Witherden said. “But quite a bit of my day to day research involves machine learning, applications of machine learning and, in the same way that deepfakes try and synthesize realistic looking pictures of people, some of my research involves using the same technology to simulate fluid flows or generating fluid flows without having to do a full simulation.”

This article by Hannah Conrad originally appeared on the College of Engineering website.
