Deepfakes gave AI a bad rap but the tech has real uses—it just needs guardrails, experts say

AI leaders have a “duty” to make sure the technology is used in a responsible way, said Daniel Hulme, CEO of Satalia.

May 7, 2025 - 16:20
Using AI to make video content has been both celebrated and scorned as deepfakes polluted the internet. But the technology has developed into something that has ample commercial and consumer use cases. The challenge is keeping the guardrails on the technology, says Daniel Hulme, CEO of Satalia.

We all remember the infamous deepfakes of Taylor Swift, President Barack Obama, and Volodymyr Zelensky. These videos showed the power, mystique, and dark side of using artificial intelligence. 

But AI also has real use cases in creating engaging video content for enterprise and consumers alike, experts said at Fortune’s Brainstorm AI conference in London, United Kingdom, on Wednesday. 

AI “allows us to create content incredibly rapidly, but you have to have the right guardrails and then structures in place to mitigate the risks,” said Daniel Hulme, CEO of Satalia, the enterprise AI arm of British communications, advertising, public relations, and tech firm WPP. “We have a duty of care to make sure that we're using these technologies in the right and responsible way.”

One way to use AI responsibly is to maintain human oversight of the technology. Although AI could reach a point where agents train new agents, doing so could introduce bias and ultimately cause the technology to fail, experts said. Plus, humans are still far more adaptable than AI at this point, said Peter Hill, chief technology officer of Synthesia, an AI video communications platform.

“One of the things that humans are incredible at is adaptability. We are resilient. We are robust,” Hill said. 

Rather than the traditional definition of AI as getting computers to do things humans can do, which Hill called a weak definition, he argued the technology should be goal-directed and exhibit adaptive behavior so it can adjust to a rapidly changing world.

“We tend not to see AI systems that are very adaptive,” Hill said. “I think that’s the new and next opportunity. Looking at humans’ ability to use creativity to adapt to a rapidly changing world, I think that is something that’s quite unique to us.”

Still, Hill showed the audience an AI-generated video of himself to illustrate how advanced the technology had become. The avatar looked nearly identical to him, with similar mannerisms and voice. Most Synthesia customers use the platform to create training and education videos for workplaces, so they don't always face the level of public scrutiny that can accompany AI-generated video content.

“A lot of people are going to put their corporate brands on this and they do not want to be in the midst of anything unintended or otherwise illegal,” Hill said. “It's our responsibility to make sure that their brand is put in the absolute best light.”

WPP and Satalia, on the other hand, have to be more conscious when using AI to generate video content. During the panel, Hulme shared the Jen-AI commercial developed by VML, a WPP company. Many people were fooled by the character in the commercial, who appeared to be J-Lo but was actually an AI avatar.

https://www.youtube.com/watch?v=-LAHgWC93cw

Hulme's company ensures it has the right governance structures in place so users can avoid copyright infringement, he said. Its legal counsel is pioneering how the company thinks about making sure people use the technology safely and responsibly, he added.

And although there are many things that can go wrong with using AI, Hulme said we should actually fear when “AI can go very right.” He used the example of "homophily," a human bias in which we tend to like and trust things that look and sound like us.

“If we let AI loose to optimize ads, you might end up in a world where you have you selling to you. Now that might be very good for business, but it might be enforcing bias and bigotry and social bubbles,” Hulme said. “We have a duty of care to make sure that we're using these technologies in the right and responsible way.”

This story was originally featured on Fortune.com