BY KAHA GEDI
We all knew technology would someday get out of hand; what we should be asking is whether it already has. People lean on AI generators like life support, to the point where we can’t easily distinguish AI from RI (real intelligence), and AI is even being used in finance and banking for fraud detection.
The use of AI sits on both sides of the scale: it can be genuinely beneficial, and it can do real harm. So let me ask you a question: what if AI ended up in the wrong hands? The hands of people who are supposed to protect you from harm, who deploy it through faulty algorithms while claiming to improve basic living standards for citizens who are unemployed, disabled, elderly, or otherwise unable to support themselves through work or other means. In this article, I will discuss what is currently going on at the Danish welfare authority, Udbetaling Danmark (UDK).
According to Amnesty International, “Fraud detection algorithms, paired with mass surveillance practices, have led people to unwillingly–or even unknowingly–forfeit their right to privacy and created an atmosphere of fear.” Hellen Mukiri-Smith, a researcher on Artificial Intelligence and Human Rights, states that “this mass surveillance has created a social benefits system that risks targeting, rather than supporting the very people it was meant to protect.”
The Danish welfare authority (UDK) has partnered with ATP (Denmark’s mandatory labor market pension system, designed to provide financial security for workers after they retire) and with private corporations such as NNIT, a Danish IT services and consulting company that provides technology solutions to businesses. Together they developed fraud detection algorithms aimed at identifying social benefits fraud. UDK uses up to 60 algorithms to flag potential fraud, and these systems are highly invasive and far from transparent.
To feed these algorithms, the Danish government collects massive amounts of personal data about its citizens, such as residency, citizenship, family ties, and travel history. This includes sensitive information that can be used to track and monitor individuals’ lives, raising major privacy concerns.
Furthermore, Amnesty International found that the algorithms unjustly affected marginalized groups such as migrants, low-income people, and those in non-traditional living arrangements. For example, one algorithm flags people with “unusual” living patterns, such as families whose members live far apart from one another because a relative is in a care facility due to disability.
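To make concrete how a rule like this can bake discrimination directly into its logic, here is a minimal, purely hypothetical sketch in Python. Nothing in it comes from UDK’s actual systems, which are not public; the field names, threshold, and structure are invented for illustration only.

```python
# Hypothetical sketch: a naive "unusual living pattern" rule.
# None of this reflects UDK's real algorithms; fields and thresholds are invented.

from dataclasses import dataclass

@dataclass
class Household:
    members_registered_at_address: int
    family_members_total: int
    distance_km_to_nearest_relative: float
    relative_in_care_facility: bool  # e.g. due to disability

def flag_unusual_living_pattern(h: Household) -> bool:
    """Flag households whose members live 'far apart' for manual fraud review."""
    scattered = h.members_registered_at_address < h.family_members_total
    far_apart = h.distance_km_to_nearest_relative > 50  # arbitrary cutoff
    # The rule has no notion of *why* people live apart, so a family with a
    # disabled relative in a distant care facility is flagged like anyone else.
    return scattered and far_apart

family = Household(
    members_registered_at_address=3,
    family_members_total=4,
    distance_km_to_nearest_relative=120.0,
    relative_in_care_facility=True,
)
print(flag_unusual_living_pattern(family))  # True: flagged despite a legitimate reason
```

The point of the sketch is simply that a rule written around what is “normal” penalizes everyone whose circumstances differ, regardless of the reason.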
Hellen Mukiri-Smith argues that the way UDK and ATP are using AI for fraud detection closely resembles a social scoring system, a practice prohibited under the EU’s new AI Act, and that it should therefore be banned.
For those who don’t know, a social scoring system is a way of evaluating people based on their behavior, financial habits, and other personal information. It assigns each person a score that can affect their access to things like loans, services, or opportunities. These systems draw on data such as social media activity, payment history, and sometimes criminal records to decide how people are scored. The main problem with social scoring is that it can be unfair, especially when it is based on biased or incomplete information, and it can influence access to critical services such as healthcare or housing.
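As a rough illustration of why incomplete data makes such scores unfair, consider this toy Python sketch. The features, weights, and threshold are all made up and do not describe any real system; the failure mode they show is the point: missing or skewed inputs silently drag a person’s score down.

```python
# Toy illustration of a social-style score; weights and features are invented.

def social_score(record: dict) -> float:
    """Combine a few personal signals into one score (higher = 'more trustworthy')."""
    weights = {
        "on_time_payments_ratio": 40,   # financial habits
        "years_at_current_address": 2,  # stability proxy
        "has_criminal_record": -30,     # sometimes included
    }
    score = 50.0  # baseline
    for feature, weight in weights.items():
        # Missing data is treated as 0 -- an incomplete file quietly lowers the score.
        score += weight * float(record.get(feature, 0))
    return score

complete = {"on_time_payments_ratio": 0.95, "years_at_current_address": 6, "has_criminal_record": 0}
incomplete = {"on_time_payments_ratio": 0.95}  # same person, patchy records

print(social_score(complete))    # 100.0
print(social_score(incomplete))  # 88.0 -- penalized for data that was never collected

# A cutoff for access to a loan or service then turns a data gap into a denial.
THRESHOLD = 90
print(social_score(incomplete) >= THRESHOLD)  # False
```

The same person scores differently depending on how much of their life happens to be recorded, which is exactly why decisions about essential services should not hang on a number like this.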
Moreover, the surveillance and constant questioning take a mental toll on these individuals. Being repeatedly investigated by case workers and fraud investigators forces them to live in fear and prevents them from leading stable lives.
Amnesty International is pushing the Danish authorities to stop using discriminatory data in fraud detection. It also urges Denmark to ensure that its fraud detection systems actually comply with human rights law, including the EU’s AI Act, which bans practices like social scoring.
Nobody deserves to live in fear, least of all over things so crucial to their well-being. AI makes mistakes. I remember asking ChatGPT a question once; it gave me the wrong answer, and when I pointed that out, it said, “Oh, sorry.” I know AI is supposedly evolving for the “better,” but there are no “Oh, sorry”s when these algorithms make mistakes that wreck people’s livelihoods. This raises the question: should we be using AI in fields as important as pensions, unemployment aid, or childcare support?