An AI-powered tool aims to combat fake news in the Arab world and beyond

Misinformation and disinformation are considered a top short-term global risk; they not only mislead the public but also erode trust, deepen societal divisions and threaten fundamental human rights. (Shutterstock illustration)
Updated 22 November 2024

  • Developed jointly with EU academic institutions, FRAPPE is the brainchild of Preslav Nakov of Abu Dhabi’s MBZUAI
  • System, trained on 23 different linguistic techniques, can identify specific persuasion and propaganda tactics

RIYADH: Rising concern over disinformation’s role in manipulating public opinion has motivated Preslav Nakov, a professor at the UAE’s Mohamed bin Zayed University of Artificial Intelligence, to develop an AI-powered tool for detecting propaganda.

FRAPPE, short for Framing, Persuasion and Propaganda Explorer, is designed to assess news framing techniques and identify potential instances of information manipulation.

Nakov, chair of the natural language processing department and professor of natural language processing at the Abu Dhabi-based MBZUAI, said that AI plays a central role in FRAPPE by analyzing, categorizing and detecting complex patterns that influence readers’ opinions and emotions.

The tool offers real-time, on-the-fly analysis of individual articles while enabling a comprehensive comparison of framing and persuasion strategies across a wide range of media outlets, he told Arab News.

The UN defines disinformation as inaccurate information deliberately created and disseminated with the intent to deceive the public and cause serious harm. It can be spread by both state and non-state actors and can affect human rights, fuel armed conflict and undermine public policy responses.

The Global Risks Report 2024 by the World Economic Forum identifies misinformation and disinformation as a top short-term global risk. These forms of deceptive communication not only mislead the public but also erode trust, deepen societal divisions and threaten fundamental human rights.

Nevertheless, the WEF highlighted in an article in June that while AI technologies are being used in the production of both misinformation and disinformation, they can be harnessed to combat this risk by analyzing patterns, language and context.

Prof. Preslav Nakov, developer of an AI-powered tool for detecting propaganda. (Supplied)

Nakov said that FRAPPE, trained on 23 different linguistic techniques, “uses AI to identify specific persuasion and propaganda techniques, such as name-calling, loaded language, appeals to fear, exaggeration and repetition.”
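
The article does not describe FRAPPE's architecture beyond saying it uses AI, but the underlying task, tagging a passage with any of several persuasion techniques at once, is a standard multi-label text-classification problem. The sketch below is purely illustrative: the sentences, labels and scikit-learn models are assumptions for demonstration, not FRAPPE's actual training data or neural models.

```python
# Toy multi-label classifier for propaganda techniques (illustration only,
# not FRAPPE's real pipeline; all data here is invented).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import MultiLabelBinarizer

# Each sentence may carry several techniques at the same time.
train_texts = [
    "Only a traitor would question this glorious policy.",
    "Experts warn that doing nothing will end in total catastrophe.",
    "The minister is a clown and a puppet of foreign powers.",
    "Our heroic citizens will never surrender, never, never, never.",
]
train_labels = [
    {"name_calling", "loaded_language"},
    {"appeal_to_fear", "exaggeration"},
    {"name_calling"},
    {"loaded_language", "repetition"},
]

binarizer = MultiLabelBinarizer()
y = binarizer.fit_transform(train_labels)

# One binary classifier per technique over simple TF-IDF features.
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    OneVsRestClassifier(LogisticRegression(max_iter=1000)),
)
model.fit(train_texts, y)

# A new sentence is tagged with every technique whose classifier fires.
prediction = model.predict(["Cowards and traitors are dragging us toward disaster."])
print(binarizer.inverse_transform(prediction))
```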

“FRAPPE further uses AI to perform framing analysis,” he said, adding that the tool distinguishes “the main perspectives from which an issue is being discussed: Morality, fairness, equality, political, and cultural identity.”
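
How the framing classifier itself is built is not spelled out in the article. One simple way to approximate that step, shown here only as an illustration, is zero-shot classification with a publicly available natural-language-inference model; the model name and example text below are assumptions, not components the article attributes to FRAPPE.

```python
# Frame detection cast as zero-shot classification (illustration only).
from transformers import pipeline

FRAMES = ["morality", "fairness and equality", "political", "cultural identity"]

classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

article_text = (
    "The new law is being criticized as a betrayal of the country's founding "
    "values, while supporters call it a matter of basic fairness for workers."
)

# multi_label=True scores each frame independently, since one article can mix frames.
result = classifier(article_text, candidate_labels=FRAMES, multi_label=True)
for frame, score in zip(result["labels"], result["scores"]):
    print(f"{frame}: {score:.2f}")
```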

With a database of more than 2.5 million articles from over 8,000 sources, the multilingual system enables users to explore and compare how different countries and outlets frame and present information.

DID YOU KNOW?

• Disinformation is the intentional spread of false information to sway public opinion.

• Propaganda often employs loaded language to elicit emotional reactions.

• A WEF report identifies disinformation and misinformation as a top short-term risk.

Moreover, to build the training data for the system, more than 40 journalists from several European countries contributed to the manual analysis of news content in 13 languages.

This manual analysis, according to Nakov, allows FRAPPE to discern the underlying frames that shape how stories are told and perceived. By identifying the dominant frames within an article, FRAPPE compares these across media sources, countries and languages, providing valuable insights into how framing varies globally.
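
Once each article carries a dominant-frame label, the cross-outlet comparison described here reduces to counting. The snippet below is a minimal sketch with invented data, not FRAPPE's database schema, showing how such a comparison can be computed.

```python
# Share of dominant frames per outlet and country (illustration with made-up data).
import pandas as pd

articles = pd.DataFrame(
    {
        "source": ["Outlet A", "Outlet A", "Outlet B", "Outlet B", "Outlet B"],
        "country": ["FR", "FR", "DE", "DE", "DE"],
        "dominant_frame": ["political", "morality", "political", "political", "fairness"],
    }
)

frame_share = (
    articles.groupby(["country", "source"])["dominant_frame"]
    .value_counts(normalize=True)   # fraction of articles per frame within each outlet
    .rename("share")
    .reset_index()
)
print(frame_share)
```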

FRAPPE is designed for a broad audience, including the general public, journalists, researchers, and even policymakers.

With an extensive database, FRAPPE's multilingual system enables users to explore and compare how different countries and outlets frame and present information. (Supplied)

“For the general user, FRAPPE serves as an educational tool to explore how news content is framed, enabling them to identify propaganda techniques like name-calling, flag-waving, loaded language and appeals to fear,” Nakov said.

“For journalists and policymakers, FRAPPE offers a powerful tool to examine and compare framing and persuasion strategies across different countries, languages and outlets,” he added.

The system relies on annotations from journalists who manually identified persuasion and propaganda techniques across a wide range of articles. This minimizes the risk of overly subjective or one-sided interpretations.
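
The article does not say how consistency between annotators was measured, but a common check in annotation projects of this kind is an agreement statistic such as Cohen's kappa. The sketch below uses invented labels purely to illustrate the calculation.

```python
# Agreement between two hypothetical annotators on a binary "loaded language" label.
from sklearn.metrics import cohen_kappa_score

annotator_a = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]
annotator_b = [1, 0, 1, 0, 0, 0, 1, 0, 1, 1]

kappa = cohen_kappa_score(annotator_a, annotator_b)
print(f"Cohen's kappa: {kappa:.2f}")  # 1.0 means perfect agreement, 0 is chance level
```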

Transparency and unbiased analysis were fundamental in the development of FRAPPE. Nakov said: “Users should be aware that our models use neural networks and, as such, they lack explainability.”

He also warned that “despite our intent, due to potential unintended article selection biases, FRAPPE might be favoring some political or social standpoints.”

On the positive side, however, “FRAPPE has the potential to influence the way news articles are perceived and consumed, and journalists may become more aware of the language they use and its potential impact on readers.”

FRAPPE has the ability to spot persuasive or manipulative techniques in news content. (Supplied)

To spot persuasive or manipulative techniques in news content, Nakov advises readers and viewers to “watch out for emotional language designed to provoke strong reactions like fear or anger, and be mindful of loaded words, such as ‘radical’ and ‘heroic,’ which carry emotional weight.”

He urged readers to critically assess articles that rely too heavily on a single expert or selective quotes, stressing the importance of considering how different outlets might report the same event in contrasting ways.

To gain a clearer perspective, Nakov advises cross-checking sources and comparing how different media outlets cover the same story. This approach helps reveal varying angles, biases or framing techniques.

He also stressed that oversimplified “us versus them” narratives “often indicate manipulation, as do articles that frame an issue with a particular angle, leaving out important details.

FRAPPE has been featured in numerous EU workshops focused on combating fake news. (Supplied)

“False dilemmas, where only extreme choices are presented, and repetitive phrases meant to reinforce a point are also red flags,” he said.

“FRAPPE envisions empowering individuals and institutions to make more informed decisions by revealing the framing and persuasion techniques embedded in media content. Its aim is to enhance transparency in journalism, promote trust in media and contribute to a more informed, media-literate public.”

Developed in collaboration with the European Commission’s Joint Research Center and several academic institutions across Europe, FRAPPE was launched ahead of the 2024 European Parliament election, held across the EU on June 6-9, 2024.

The tool, integrated into the Europe Media Monitor, has been featured in numerous EU workshops focused on combating fake news.

‘I don’t create suffering, I document it:’ Gaza photographer hits back at Bild over accusation of staging scenes

Updated 08 August 2025

  • Photojournalist Anas Zayed Fteiha came under fire after Bild published an article alleging his photos were manipulated to amplify narratives of Israeli-inflicted suffering
  • Episode fueled a broader debate on the challenges of reporting from conflict zones such as Gaza, with an expert saying that “guiding” photos does not invalidate the reality being portrayed

LONDON: Gaza-based photojournalist Anas Zayed Fteiha has rejected accusations by the German tabloid Bild that some of his widely circulated images — depicting hunger and humanitarian suffering — were staged rather than taken at aid distribution sites.

Fteiha, who works with Turkiye’s Anadolu Agency, described the claims as “false” and “a desperate attempt to distort the truth.”

“The siege, starvation, bombing, and destruction that the people of Gaza live through do not need to be fabricated or acted out,” Fteiha said in a statement published on social media. “My photos reflect the bitter reality that more than two million people live through, most of whom are women and children.”

The controversy erupted after Bild published an article on Tuesday alleging that Fteiha’s photos were manipulated to amplify narratives of Israeli-inflicted suffering — particularly hunger — and citing content from his personal social media accounts to suggest political bias.

The German daily Süddeutsche Zeitung also questioned the authenticity of certain images from Gaza, though without naming Fteiha directly.

Bild claimed the emotionally charged imagery served as “Hamas propaganda,” a charge Fteiha rejected as “ridiculous” and a “criminalization of journalism itself.”

“It is easy to write your reports based on your ideologies, but it is difficult to obscure the truth conveyed by the lens of a photographer who lived the suffering among the people, heard the children’s cries, photographed the rubble, and carried the pain of mothers,” Fteiha said.

Fteiha also accused Bild of repeated breaches of journalistic ethics, citing previous criticism and formal complaints against the paper for publishing misinformation.

The episode has fueled a broader debate on the challenges of reporting from conflict zones such as Gaza, where foreign press access is restricted and local journalists are often the only source of visual documentation.

Following Bild’s allegations, several news agencies, including AFP and the German Press Agency, severed ties with Fteiha. However, Reuters declined to do so, stating that his images met the agency’s standards for “accuracy, independence, and impartiality.”

“These aren’t outright fakes, but they do tap into visual memory and change how people see things,” said photography scholar Gerhard Paul in an interview with Israeli media.

Christopher Resch, of Reporters Without Borders, said that while photographers sometimes “guide” subjects to tell a visual story, that does not invalidate the reality being portrayed.

“The picture should have had more context, but that doesn’t mean the suffering isn’t real,” he said, cautioning media outlets against labeling photojournalists as “propaganda agents,” which he warned could endanger their safety.

Israeli Foreign Minister Gideon Sa’ar also weighed in, writing on his official X account that one of the disputed images, used on the cover of Time magazine, was an example of “Pallywood,” a portmanteau of “Palestine” and “Hollywood,” deployed to sway global opinion.

However, the credibility of Bild’s report has itself come under scrutiny. Israeli fact-checking group Fake Reporter posted a series of rebuttals on X, disputing several claims.

The group pointed out that the Time magazine cover image often linked to Fteiha was taken by a different photographer, and argued that claims the children in the photograph were not at an aid site were “inaccurate.”

“From our examination, one can see, in the same place, an abundance of documentation of food being distributed and prepared,” the group wrote.


Deaf Palestinian uses social media to highlight Gaza’s struggles through sign language

Updated 08 August 2025

  • Basem Alhabel describes himself as a ‘deaf journalist in Gaza’ on his Instagram account
  • He wants to raise more awareness of the conflict by informing Palestinians and people abroad with special needs

GAZA: Basem Alhabel stood among the ruins of Gaza, with people flat on the floor all around him as bullets flew, and filmed himself using sign language to explain the dangers of the war to fellow deaf Palestinians and his followers on social media.

Alhabel, 30, who describes himself as a “deaf journalist in Gaza” on his Instagram account, says he wants to raise more awareness of the conflict – from devastating Israeli air strikes to the starvation now affecting most of the population – by informing Palestinians and people abroad with special needs.

Bombarded by Israel for nearly two years, many Gazans complain the world does not hear their voices despite mass suffering and a death toll that exceeds 60,000 people, according to Gaza health authorities in the demolished enclave.

“I wished to get my voice out to the world and the voices of the deaf people who cannot speak or hear, to get their voice out there, so that someone can help us,” he said through his friend and interpreter Mohammed Moshtaha, who he met during the war.

“I tried to help, to film and do a video from here and there, and publish them so that we can make our voices heard in the world.”

Alhabel has an Instagram following of 141,000. His page, which shows him in a flak jacket and helmet, features images of starving, emaciated children and other suffering.

He films a video then returns to a tent to edit – one of the many where Palestinians have sought shelter and safety during the war, which erupted when Hamas-led militants attacked Israel in October 2023, drawing massive retaliation.

Alhabel produced images of people collecting flour from the ground while he used sign language to explain the plight of Gazans, reinforcing the view of a global hunger monitor that has warned a famine scenario is unfolding.

“As you can see, people are collecting flour mixed with sand,” he communicated.

Alhabel and his family were displaced when the war started. They stayed at a school where tents had been set up.

“There was no space for a person to even rest a little. I stayed in that school for a year and a half,” he explained.

Alhabel is likely to be busy for some time. There are no signs of a ceasefire on the horizon despite mediation efforts.

Israel’s political security cabinet approved a plan early on Friday to take control of Gaza City, as the country expands its military operations despite intensifying criticism at home and abroad over the war.

“We want this situation to be resolved so that we can all be happy, so I can feed my children, and life can be beautiful,” said Alhabel.


MBC CEO granted Saudi premium residency

Updated 07 August 2025

  • Sneesby said in a post on X that he feels “immense pride in obtaining the premium residency in this country I have come to love”
  • Executive took the helm at the Saudi media group earlier this year after serving as CEO of Nine Entertainment

RIYADH: The CEO of Riyadh-headquartered broadcaster MBC Group, Mike Sneesby, has been granted premium residency in Saudi Arabia.

Sneesby said in a post on X that he feels “immense pride in obtaining the premium residency in this country I have come to love, and have chosen to make my home since moving from Australia.”

The executive took the helm at the Saudi media group earlier this year after serving as CEO of Nine Entertainment.

The premium residency was launched in 2019 and allows eligible foreigners to live in the Kingdom and receive benefits such as exemption from paying expat and dependents fees, visa-free international travel, and the right to own real estate and run a business without requiring a sponsor.


Grok, is that Gaza? AI image checks mislocate news photographs

Updated 07 August 2025

  • Furor arose after Grok wrongly identified a recent image of an underfed girl in Gaza as one taken in Yemen years earlier
  • Internet users are turning to AI to verify images more and more, but recent mistakes highlight the risks of blindly trusting the technology

PARIS: This image by AFP photojournalist Omar Al-Qattaa shows a skeletal, underfed girl in Gaza, where Israel’s blockade has fueled fears of mass famine in the Palestinian territory.
But when social media users asked Grok where it came from, X boss Elon Musk’s artificial intelligence chatbot was certain that the photograph was taken in Yemen nearly seven years ago.
The AI bot’s false response was widely shared online, and a left-wing pro-Palestinian French lawmaker, Aymeric Caron, was accused of peddling disinformation on the Israel-Hamas war for posting the photo.
With Internet users increasingly turning to AI to verify images, the furor shows the risks of trusting tools like Grok when the technology is far from error-free.
Grok said the photo showed Amal Hussain, a seven-year-old Yemeni child, in October 2018.
In fact the photo shows nine-year-old Mariam Dawwas in the arms of her mother Modallala in Gaza City on August 2, 2025.
Before the war, sparked by Hamas’s October 7, 2023 attack on Israel, Mariam weighed 25 kilograms, her mother told AFP.
Today, she weighs only nine. The only nutrition she gets to help her condition is milk, Modallala told AFP — and even that’s “not always available.”
Challenged on its incorrect response, Grok said: “I do not spread fake news; I base my answers on verified sources.”
The chatbot eventually issued a response that recognized the error — but in reply to further queries the next day, Grok repeated its claim that the photo was from Yemen.
The chatbot has previously issued content that praised Nazi leader Adolf Hitler and that suggested people with Jewish surnames were more likely to spread online hate.


Grok’s mistakes illustrate the limits of AI tools, whose functions are as impenetrable as “black boxes,” said Louis de Diesbach, a researcher in technological ethics.
“We don’t know exactly why they give this or that reply, nor how they prioritize their sources,” said Diesbach, author of a book on AI tools, “Hello ChatGPT.”
Each AI has biases linked to the information it was trained on and the instructions of its creators, he said.
In the researcher’s view, Grok, made by Musk’s xAI start-up, shows “highly pronounced biases which are highly aligned with the ideology” of the South African billionaire, a former confidant of US President Donald Trump and a standard-bearer for the radical right.
Asking a chatbot to pinpoint a photo’s origin takes it out of its proper role, said Diesbach.
“Typically, when you look for the origin of an image, it might say: ‘This photo could have been taken in Yemen, could have been taken in Gaza, could have been taken in pretty much any country where there is famine’.”
AI does not necessarily seek accuracy — “that’s not the goal,” the expert said.
Another AFP photograph of a starving Gazan child by Al-Qattaa, taken in July 2025, had already been wrongly located and dated by Grok to Yemen, 2016.
That error led to Internet users accusing the French newspaper Liberation, which had published the photo, of manipulation.


An AI’s bias is linked to the data it is fed and what happens during fine-tuning — the so-called alignment phase — which then determines what the model would rate as a good or bad answer.
“Just because you explain to it that the answer’s wrong doesn’t mean it will then give a different one,” Diesbach said.
“Its training data has not changed and neither has its alignment.”
Grok is not alone in wrongly identifying images.
When AFP asked Mistral AI’s Le Chat — which is in part trained on AFP’s articles under an agreement between the French start-up and the news agency — the bot also misidentified the photo of Mariam Dawwas as being from Yemen.
For Diesbach, chatbots must never be used as tools to verify facts.
“They are not made to tell the truth,” but to “generate content, whether true or false,” he said.
“You have to look at it like a friendly pathological liar — it may not always lie, but it always could.”


Dangerous dreams: Inside Internet’s ‘sleepmaxxing’ craze

Updated 07 August 2025

  • One so-called insomnia cure involves people hanging by their necks with ropes or belts and swinging their bodies in the air
  • The explosive rise of the trend underscores social media’s power to legitimize unproven health practices, particularly as tech platforms scale back content moderation

WASHINGTON: From mouth taping to rope-assisted neck swinging, a viral social media trend is promoting extreme bedtime routines that claim to deliver perfect sleep — despite scant medical evidence and potential safety risks.
Influencers on platforms including TikTok and X are fueling a growing wellness obsession popularly known as “sleepmaxxing,” a catch-all term for activities and products aimed at optimizing sleep quality.
The explosive rise of the trend — generating tens of millions of posts — underscores social media’s power to legitimize unproven health practices, particularly as tech platforms scale back content moderation.
One so-called insomnia cure involves people hanging by their necks with ropes or belts and swinging their bodies in the air.
“Those who try it claim their sleep problems have significantly improved,” said one clip on X that racked up more than 11 million views.
Experts have raised alarm about the trick, following a Chinese state broadcaster’s report that attributed at least one fatality in China last year to a similar “neck hanging” routine.
Such sleepmaxxing techniques are “ridiculous, potentially harmful, and evidence-free,” Timothy Caulfield, a misinformation expert from the University of Alberta in Canada, told AFP.
“It is a good example of how social media can normalize the absurd.”
Another popular practice is taping the mouth shut during sleep, promoted as a way to encourage nasal breathing. Influencers claim it offers broad benefits, from better sleep and improved oral health to reduced snoring.
But a report from George Washington University found that most of these claims were not supported by medical research.
Experts have also warned the practice could be dangerous, particularly for those suffering from sleep apnea, a condition that disrupts breathing during sleep.
Other unfounded tricks touted by sleepmaxxing influencers include wearing blue- or red-tinted glasses, using weighted blankets, and eating two kiwis just before bed.

‘Actively unhelpful, even damaging’

“My concern with the ‘sleepmaxxing’ trend — particularly as it’s presented on platforms like TikTok — is that much of the advice being shared can be actively unhelpful, even damaging, for people struggling with real sleep issues,” Kathryn Pinkham, a Britain-based insomnia specialist, told AFP.
“While some of these tips might be harmless for people who generally sleep well, they can increase pressure and anxiety for those dealing with chronic insomnia or other persistent sleep problems.”
While sound and sufficient sleep is considered a cornerstone of good health, experts warn that the trend may be contributing to orthosomnia, an obsessive preoccupation with achieving perfect sleep.
“The pressure to get perfect sleep is embedded in the sleepmaxxing culture,” said Eric Zhou of Harvard Medical School.
“While prioritizing restful sleep is commendable, setting perfection as your goal is problematic. Even good sleepers vary from night to night.”
Pinkham added that poor sleep was often fueled by the “anxiety to fix it,” a fact largely unacknowledged by sleepmaxxing influencers.
“The more we try to control sleep with hacks or rigid routines, the more vigilant and stressed we become — paradoxically making sleep harder,” Pinkham said.

Melatonin as insomnia treatment
Many sleepmaxxing posts focus on enhancing physical appearance rather than improving health, reflecting an overlap with “looksmaxxing” — another online trend that encourages unproven and sometimes dangerous techniques to boost sexual appeal.
Some sleepmaxxing influencers have sought to profit from the trend’s growing popularity, promoting products such as mouth tapes, sleep-enhancing drink powders, and “sleepmax gummies” containing melatonin.
That may be in violation of legal norms in some countries like Britain, where melatonin is available only as a prescription drug.
The American Academy of Sleep Medicine has recommended against using melatonin to treat insomnia in adults, citing inconsistent medical evidence regarding its effectiveness.
Some medical experts also caution about the impact of the placebo effect on insomnia patients using sleep medication — when people report real improvement after taking a fake or nonexistent treatment because of their beliefs.
“Many of these tips come from non-experts and aren’t grounded in clinical evidence,” said Pinkham.
“For people with genuine sleep issues, this kind of advice often adds pressure rather than relief.”