The US government vetting paradox

Security clearance predicts the risk that you will betray your country, such as by leaking and mishandling secrets. (Reuters)

There is no doubt that if you give an artificial intelligence tool a job, it will execute it with outstanding speed and efficiency, often far faster and better than a human can, and exactly as requested. Now that governments are increasingly putting AI to work, such tools can make a good process much better. But can they also make a bad process far worse?
Let me be more specific. There is a paradox I have encountered many times with people aiming for a sensitive US government leadership position. They work hard to build credentials, get an education, travel widely, learn languages and cultivate global networks and perspectives, all with the aim of preparing themselves for a position in diplomacy, defense or policy. But when the time comes to apply for that government job, they hit the security clearance process and every asset they built for their application suddenly becomes a liability and a risk. Qualifications become hindrances. It is as if the system is designed so that the least-qualified can sometimes reach the highest positions.
There is a real parallel between the credit scores that banks use when giving out loans and security clearance evaluations: both aim to evaluate risk and both use certain measures as indicators of risk, often indirectly. The credit score tries to predict the risk or probability that you will default on a loan. Similarly, security clearance predicts the risk that you will betray your country, such as by leaking, selling or mishandling secrets. The algorithm uses indicators like payment history and debt-to-income ratio for the credit score and things like criminal convictions, foreign ties and drug use for security clearance.
The US security clearance system was designed in the 1950s to prevent Soviet spies from infiltrating sensitive positions. It made sense then. But today, this same system systematically eliminates exactly the people who are most needed: those with deep cultural knowledge, language skills, international experience and the intellectual curiosity to understand how our adversaries actually think.
The algorithm is simple and its logic is perverse. Foreign contacts are a security risk, time spent abroad raises suspicions and fluency in “sensitive” languages is a red flag. An academic interest in foreign political systems is dangerous and questioning conventional wisdom is subversive. The people who get rewarded instead are those who play it safe, follow rules, color inside the lines and do not ask awkward questions. The result is a bureaucracy and a leadership that excels at following procedures but struggles with the messy, ambiguous world of actual strategy.
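To see how mechanical that logic is, consider a minimal sketch, in Python, of the kind of rule-based scoring described above. Every indicator name and weight here is hypothetical, invented purely for illustration; the real adjudication process is neither public nor this simple.

```python
# A hypothetical, deliberately crude version of the scoring logic
# described above. Indicators and weights are invented for illustration.
RISK_WEIGHTS = {
    "foreign_contacts": 3,     # every contact abroad adds risk
    "years_abroad": 2,         # time overseas raises suspicion
    "sensitive_languages": 4,  # each "sensitive" language is a red flag
}

def clearance_risk(candidate: dict) -> int:
    """Sum the weighted indicators; a higher score means a harder clearance."""
    return sum(weight * candidate.get(indicator, 0)
               for indicator, weight in RISK_WEIGHTS.items())

# The well-traveled, multilingual specialist scores far worse than
# the candidate who never left home.
specialist = {"foreign_contacts": 40, "years_abroad": 6, "sensitive_languages": 2}
homebody = {"foreign_contacts": 0, "years_abroad": 0, "sensitive_languages": 0}

print(clearance_risk(specialist))  # 140
print(clearance_risk(homebody))    # 0
```

Nothing in such a formula asks what the candidate learned from those contacts or that time abroad; the exposure itself is the penalty.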
Strategic leadership requires worldly people who can navigate complexity and ambiguity, weigh information from several sources, sit comfortably with uncertainty, see patterns across different domains, challenge assumptions and understand how different cultures and systems operate. The system, however, tends to reward people who execute orders precisely, follow established procedures, avoid controversial ideas and demonstrate loyalty to existing frameworks. Such people do not question the broader context, and we have seen the results in Iraq, Syria and Afghanistan.

I knew a graduate student who worked hard to get a travel grant and various permissions to attend a workshop in the region of their specialization, one that promised immersive learning, access to foreign officials and a chance to gain first-hand experience in a conflict zone and interact with real people. They made the difficult decision to withdraw and not travel because they would have had to list every contact and every subject discussed, which would have complicated the security clearance process. The only remaining option was to stay at home, play it safe and protect their future while watching others expand their horizons. Another student had to refuse an opportunity to help organize the visit of a foreign dignitary directly related to their thesis for fear that it would hinder their security clearance.
These stories are not isolated. They reflect a system shaped by Cold War fears, where foreign contact equals vulnerability. The clearance process, designed to protect national security, ends up favoring the least-traveled, the least-connected and, sometimes, the least-qualified. Those with knowledge and experience are left behind, while those with simpler, less risky resumes move onward and upward.
In the 1950s, the process probably involved simple interviews and the exercise of common sense, instinct and experience with people. As the bureaucracy evolved and grew more complicated, with technology taking over, the system became more rigid, with questionnaires and formulas calculating the risks while still using the same criteria and the same questions. How likely are you to betray your country? Far less likely if you have never been out of Idaho, do not know any foreigners and have not been exposed to a range of ideas and languages.
Imagine handing this system to a large language model instructed to optimize for risk reduction. It will do exactly what it is told, efficiently and relentlessly. If the instruction is “find risk,” it will scan every detail of a person’s life for foreign ties, overseas travel, multilingualism and global networks. It will not ask whether those experiences make people wiser or more capable. It will simply flag them.
This is the danger of automation without reflection. A bad process becomes an even worse one. The human nuance — the ability to see that a contact is educational rather than compromising — is lost. The irony is that real wars and crises do not reward conformity. They reward creativity, guts and clear thinking. But those qualities do not always survive the background check gauntlet.
But AI is not inherently the villain. It can also make a good process better. If trained to recognize the value of global competence, it could help identify the most promising candidates. The key is in the design — and in the values we embed.
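To illustrate, the same hypothetical indicators from the earlier sketch could simply be re-weighted so that they count in a candidate’s favor. The data does not change; only the values embedded in the design do:

```python
# The same hypothetical indicators, with the values inverted:
# the design now rewards global competence instead of penalizing it.
COMPETENCE_WEIGHTS = {
    "foreign_contacts": 1,     # networks abroad count as an asset
    "years_abroad": 2,         # overseas experience is valued
    "sensitive_languages": 4,  # language skills are sought after
}

def candidate_value(candidate: dict) -> int:
    """Identical mechanics to clearance_risk, opposite values: higher is better."""
    return sum(weight * candidate.get(indicator, 0)
               for indicator, weight in COMPETENCE_WEIGHTS.items())
```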
The greatest risk is in using AI with the same assumptions — and bureaucracies tend to do that. Those in charge of implementation are often not the ones in charge of design. We will then continue to see the least-qualified reach the highest positions — not because they are the best, but because they are the easiest to vet. This is not just unfair. It is dangerous.

Nadim Shehadi is an economist and political adviser.
X: @Confusezeus

Disclaimer: Views expressed by writers in this section are their own and do not necessarily reflect Arab News' point of view