DUBAI: Deepfake technology — AI-generated videos and images that mimic real people or alter events — has surged in recent years, transforming the digital landscape.
Once considered a novelty, deepfakes now pose serious risks, capable of spreading misinformation, manipulating public opinion, and undermining trust in media. As the technology becomes more sophisticated, distinguishing fact from fiction is increasingly difficult, making societies vulnerable to deception and chaos.
The challenge is unprecedented and escalating quickly.
In March 2022, as Russian troops closed in on Kyiv, a chilling video began circulating online. It appeared to show Ukrainian President Volodymyr Zelenskyy, pale and weary, urging his soldiers to surrender.

This illustration photo taken on January 30, 2023 shows a phone screen displaying a statement from the head of security policy at Meta with a fake video (R) of Ukrainian President Volodymyr Zelenskyy calling on his soldiers to lay down their weapons shown in the background. (AFP/file photo)
Within hours, fact-checkers revealed it was a deepfake — an AI-generated hoax planted on hacked news sites and social media to sap morale and spread confusion at a pivotal moment.
Though quickly exposed, the damage lingered. Millions had already seen the clip, and for a brief, uneasy period, even seasoned observers struggled to separate truth from digital deceit. It marked one of the first major wartime deployments of synthetic media — a glimpse into the new battles for credibility defining the information age.
According to identity-verification firm Sumsub, deepfake incidents in Saudi Arabia surged by 600 percent in the first quarter of 2024 compared with the previous year.
With AI platforms appearing slow to intervene, governments are increasingly seen as the key line of defense. In Saudi Arabia, lawmakers are moving quickly, leveraging a growing body of legal measures to contain the threat.
Legislating for safety
Anna Zeitlin, partner for fintech and financial services at law firm Addleshaw Goddard, said Saudi legislators are already taking decisive action.
“Saudi Arabia is leading the way in this respect, which is actually great to see,” Zeitlin told Arab News.
“Saudi have got the Anti-Cybercrime Law, which basically means things like spreading fake news or misinformation that are considered to threaten public peace or security or national interest — that’s prohibited, it’s a criminal offense. So I guess that is the foundation level, the starting point.”

Anna Zeitlin, partner for fintech and financial services at law firm Addleshaw Goddard. (Supplied)
She added that this framework is supported by the Saudi Data and AI Authority (SDAIA), which she described as “really one of the first of its kind.”
“We see lots of data protection regulators all over the world these days, but not really AI regulators, and SDAIA is covering both data and AI. Obviously they go hand in hand.”
“They’ve got a few things we should talk about,” she continued. “The AI Principles and Ethical Controls came out in September 2023, and then the Generative AI Guidelines, which are more for government use, help people deal with or treat the use of AI properly, fairly and sensibly.”
“In addition, they’ve got a public consultation paper specifically about deepfakes, which is really interesting. These are the guiding principles for addressing deepfakes — it’s all about how to deal with them, how to spot them, and how they should be handled. I have to stress that this is just a public consultation, but it will have some legal weight behind it.”
Zeitlin also highlighted the role of Saudi Arabia’s General Authority for Media Regulation in enforcing these standards, particularly regarding synthetic content shared online. Using a deepfake to “advertise or promote something” can constitute a criminal offense, punishable by fines or even jail time.
“It’s pretty serious,” she said, noting that while the UAE has similar provisions through its cybercrime and data-protection laws, “Saudi is really leading the charge and moving in the right direction.”
Finding the right balance
Even as regulation advances, experts caution against overreach. Preslav Nakov, Department Chair and Professor of Natural Language Processing at Mohamed bin Zayed University of Artificial Intelligence (MBZUAI), describes the challenge as pervasive and the solution as a delicate balancing act.
“The spread of AI-driven misinformation and deepfakes poses a major challenge everywhere. The instinctive reaction is often to call for stricter regulation. Yet, technology evolves too quickly, and blunt restrictions risk stifling the very innovation that the Gulf’s economies are trying to foster,” he told Arab News.
Nakov believes the answer lies in “a multi-pronged strategy” that combines AI-powered detection systems, digital literacy, and cross-sector collaboration.

Preslav Nakov, Department Chair and Professor of Natural Language Processing at MBZUAI.
He cited a recent Nature Machine Intelligence study showing that large language models, while prone to factual errors, can assist fact-checkers by identifying claims and sourcing evidence—making them “part of the problem and part of the solution.”
Another study, he noted, revealed that fake-news detectors can be biased, sometimes labeling accurate AI-generated text as false — a growing risk as machine-produced content proliferates.
“Deepfake technology has advanced tremendously in recent years. Today, AI-generated text, images, and videos are convincing enough to catch people off guard. At some point, yes, certain AI-generated content will likely be impossible to distinguish from reality with the human eye alone. That’s why detection cannot be our only line of defense,” he said.
“This is why the answer is smart governance, a balanced approach that combines advanced detection technology, public awareness, and evidence-based policymaking. Only by integrating these elements can we mitigate the harmful effects of AI-driven misinformation while ensuring we benefit from the enormous opportunities AI brings.”
DID YOU KNOW?
• The first deepfake video appeared online in 2017 — just eight years later, the technology can now mimic anyone’s face or voice in minutes.
• Global deepfake-related scams caused over $25 billion in losses in 2024, cybersecurity analysts estimate.
• More than 90 percent of AI-generated deepfakes target individuals rather than organizations.
• Saudi Arabia’s AI Principles and Ethical Controls, issued in 2023, are among the first national AI ethics frameworks in the region.
Zeitlin echoed Nakov’s concerns, noting that Europe has lost AI businesses due to what is perceived as overregulation.
She said the fight against deepfakes and online fraud plays out “between politics, regulation” and the platforms themselves, which have largely avoided strict accountability for policing misinformation.
In contrast, she said, Middle Eastern governments tend to enforce stricter online content controls “to protect people in the region,” while European regulators push for extensive oversight — often clashing with tech companies, which argue that monitoring such massive volumes of content is impossible.
“This is not an argument that’s going away anytime soon,” Zeitlin said.

Classroom lecture at the Mohamed bin Zayed University of Artificial Intelligence in Abu Dhabi. (MBZUAI photo)
For Nakov, whose work at MBZUAI focuses on developing fact-checking tools like LLM-DetectAIve, Factcheck-Bench, and OpenFactCheck, the complexity of the debate calls for a rethink of how society approaches truth online.
“When we talk about misinformation and disinformation, I think it is time to move beyond simple true/false verdicts. Reality is rarely that binary. What matters more are the explanations—the reasoning, the context, the nuances that help people truly understand why a claim might be misleading, partially correct, or simply taken out of context,” he said.
“In fact, many fact-checking organizations have already moved in this direction. They no longer rely on assigning simplistic labels, but instead produce detailed fact-checking articles. These articles are essentially a dialogue between the fact-checker and the public: they unpack the claim, provide evidence, and show why the reality is often more complicated than it first appears.”