Why business leaders in KSA should be paying attention to deepfakes

Artificial intelligence continues to influence the way societies function, offering extraordinary opportunities across industries.
However, while its potential remains expansive, it is impossible to ignore the darker applications that are rapidly emerging. Among the most urgent concerns is the growing use of deepfakes, synthetic videos or audio recordings convincingly manipulated to mislead viewers.
In global discourse, the focus has often been placed on political or entertainment-related consequences, but the implications for businesses, especially in Saudi Arabia and the wider region, demand equal attention.
Saudi Arabia has committed to an ambitious and transformative digital strategy under the Vision 2030 framework. As companies across the Kingdom adopt new technologies and build intelligent systems, they are also inadvertently widening the scope for digital vulnerabilities. One study conducted by Kaspersky revealed that only one in five employees in Saudi Arabia could confidently distinguish a deepfake from a real video, while many others mistakenly believed they had the ability to do so. This disconnect between perceived competence and actual detection skill presents a critical risk, especially as the creation of deepfakes becomes easier and more convincing.
It is tempting for corporate decision-makers to view deepfakes as fringe phenomena, relevant only to political manipulation or niche online content. That perception is outdated. Today, deepfakes are increasingly used in fraudulent schemes, legal deception, and targeted attacks on individuals and brands. In 2023, a retired Indian government official was deceived into transferring money after receiving a video call that appeared to come from a friend in distress at Dubai Airport. The video was completely fabricated but realistic enough to trick someone with significant life and professional experience. This case is not an anomaly. It reflects a broader trend that business leaders must not ignore.
The advancement of generative AI tools has significantly reduced the effort needed to produce highly believable synthetic content. Previously, these types of digital forgeries displayed flaws that trained eyes could catch, such as mismatched facial movements or poorly synchronized audio. Today, many of those imperfections have been eliminated. This has allowed malicious actors to carry out impersonations during video calls, simulate confidential conversations, or create falsified surveillance footage with an alarming degree of realism. In an environment where video is often trusted as a primary source of truth, the consequences of believing manipulated content can be severe.
The issue of deepfakes extends directly into the boardrooms and risk portfolios of modern businesses. Organizations depend on video evidence to investigate workplace incidents, verify compliance breaches, assess insurance claims, and resolve legal disputes. Security teams analyze surveillance recordings, human resource departments evaluate footage during misconduct inquiries, and legal counsel relies on verified video when building arguments in court. If this kind of evidence becomes unreliable, every internal process that depends on it is weakened, and the organization’s ability to defend itself from litigation or reputational harm is diminished.
Industries that support essential services, including financial institutions, energy providers, healthcare systems, and logistics operators, are particularly vulnerable. In these environments, an altered video that appears to show a breach, an injury, or an operational failure could provoke regulatory consequences or erode public trust, even before the truth is uncovered. Insurers are beginning to question whether unverified digital media can continue to serve as evidence, which has already prompted a shift in how claims are evaluated and challenged.
For years, the approach to securing video has focused on post-capture protection methods such as storage encryption or watermarking. These steps, while still useful, are no longer sufficient in an era where forgery can occur at multiple stages of the video lifecycle, from capture to transmission to storage.
The future of trustworthy video requires verification at the moment it is captured. At ONVIF, we are addressing this through a method known as media signing. This approach involves embedding a digital signature directly into groups of frames at the camera level, creating a tamper-evident chain of integrity that remains intact throughout the video’s lifecycle. If even a single frame is altered after the moment of capture, the video will no longer verify correctly.
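To illustrate the principle in simplified form (a conceptual sketch, not the ONVIF specification itself), media signing can be thought of as hashing each frame in a group, combining those hashes, and signing the result with a key held by the camera. In the sketch below, a symmetric HMAC from the Python standard library stands in for the camera's signing key, and the frame data, key, and function names are purely illustrative; a real device would use an asymmetric key and the standardized signing format.

```python
# Conceptual illustration of per-group media signing.
# HMAC stands in for a camera's asymmetric device signature.
import hashlib
import hmac

DEVICE_KEY = b"camera-private-key"  # hypothetical device secret

def sign_group(frames: list[bytes]) -> bytes:
    """Hash each frame, chain the hashes, and sign the group digest."""
    digest = hashlib.sha256()
    for frame in frames:
        digest.update(hashlib.sha256(frame).digest())
    return hmac.new(DEVICE_KEY, digest.digest(), hashlib.sha256).digest()

def verify_group(frames: list[bytes], signature: bytes) -> bool:
    """Recompute the group digest and compare it with the signature."""
    return hmac.compare_digest(sign_group(frames), signature)

# Any post-capture edit to a single frame breaks verification.
group = [b"frame-1", b"frame-2", b"frame-3"]
sig = sign_group(group)
print(verify_group(group, sig))                                   # True
print(verify_group([b"frame-1", b"TAMPERED", b"frame-3"], sig))   # False
```

Because the signature is created at the moment of capture, any later edit, however small, leaves the recording unable to verify, which is what makes the chain of integrity tamper-evident.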
This solution is designed not as a proprietary tool for a single manufacturer, but as an open standard intended for use across the global security and surveillance ecosystem.
Business leaders in Saudi Arabia must now recognize that deepfakes are no longer a theoretical threat or a challenge that only affects foreign markets. These synthetic videos are already in circulation and will continue to grow in sophistication and accessibility.
In this evolving digital era, preserving the integrity of what we see has never been more essential, and building trust must begin with the systems that record the world around us.
• Leo Levit is chairman of ONVIF, a global and open industry forum that is committed to standardizing communication between IP-based physical security products.