The challenge of fair AI: seven biases to avoid
Artificial intelligence marks a turning point in the history of technology. Its ability to analyse large volumes of data, automate complex tasks and propose solutions is revolutionising sectors as diverse as healthcare, industry, software development and communication.
Properly implemented, AI can help design better medical treatments, speed up decision-making in companies and perhaps, in the future, redefine the very concept of work, driving social progress.
However, the development of AI also involves taking into account the ethical challenges it brings, such as the biases that algorithms can incorporate. Because these systems learn from data, if training sets contain biases or stereotypes, AI can replicate and even amplify them.
This can result in unfair decisions. One of the most cited examples is the COMPAS system, used in the US to predict the likelihood of recidivism for people previously convicted of a crime. The system came under intense scrutiny after researchers showed that it tended to classify black people as higher risk than white people with similar backgrounds, pointing to a racial bias in its training data.
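The kind of audit that surfaces this problem is conceptually simple: compare error rates across groups and look for large gaps. Below is a minimal sketch of that idea in Python; the records are invented for illustration, but real audits of recidivism tools apply the same logic to actual case data.

```python
# Minimal sketch: comparing false positive rates across groups.
# The records below are invented; real audits work the same way on case data.
from collections import defaultdict

# Each record: (group, predicted_high_risk, actually_reoffended)
records = [
    ("A", True, False), ("A", False, False), ("A", True, True), ("A", True, False),
    ("B", False, False), ("B", True, True), ("B", False, False), ("B", False, True),
]

false_positives = defaultdict(int)   # predicted high risk but did not reoffend
negatives = defaultdict(int)         # everyone who did not reoffend

for group, predicted_high_risk, reoffended in records:
    if not reoffended:
        negatives[group] += 1
        if predicted_high_risk:
            false_positives[group] += 1

for group in sorted(negatives):
    fpr = false_positives[group] / negatives[group]
    print(f"Group {group}: false positive rate = {fpr:.0%}")
# A large gap between groups is a red flag for this kind of bias.
```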
Another well-known case was the recruitment tool developed by Amazon, which was abandoned after it was found to systematically penalise women's CVs for technical positions. Having been trained on historical data from male-dominated hiring, the algorithm perpetuated pre-existing patterns of gender inequality. But these are not the only biases that can "sneak into the algorithm". Others to watch out for include the following:
Lack of diversity in the results
When algorithms prioritise profiles, products or ideas that always fit the same pattern (for example, young, white or from certain cultural backgrounds), it is a sign that the training data lacks representativeness. This lack of diversity can exclude or render invisible entire groups, limiting access to opportunities or resources for those who do not fit the predominant profile.
Repetitive or extreme outcomes
The logic of many digital systems seeks to maximise user retention by showing them content that has already captured their interest. This can lead to continuous exposure to information aligned with their ideas or emotions, without access to alternative viewpoints. This phenomenon is known as a filter bubble, and contributes to polarisation and radicalisation of opinions.
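A minimal sketch of how this feedback loop can arise, using an invented click history and a naive "show more of whatever was clicked most" rule standing in for a real recommender:

```python
# Minimal sketch of a filter-bubble feedback loop.
# Topics and click counts are invented; the point is that a recommender
# which only maximises engagement keeps narrowing what the user sees.
from collections import Counter

catalogue = ["politics", "sport", "science", "culture"]
clicks = Counter({"politics": 3, "sport": 1})  # the user's (invented) history

def recommend(clicks: Counter) -> str:
    # Naive engagement-maximising rule: show more of whatever was clicked most.
    return clicks.most_common(1)[0][0]

for step in range(5):
    item = recommend(clicks)
    clicks[item] += 1            # assume the user clicks what they are shown
    print(f"Step {step}: recommended '{item}'")

# After a few steps the user only ever sees 'politics', even though the
# catalogue contains other topics: a filter bubble.
print("Topics never shown:", [t for t in catalogue if t not in clicks])
```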
Unequal treatment or access
In some digital services or e-commerce platforms, algorithms may generate different results depending on the user's profile, such as different prices, recommendations or waiting times. These segmentations, although designed for commercial purposes, may discriminate against certain groups on the basis of geographic location, socio-economic status or previous activity on the platform.
Lack of transparency
Many AI-based systems operate as black boxes, without providing a clear explanation of how decisions affecting users are made. This can be particularly problematic in sensitive processes such as granting a loan or assessing a job application, where the lack of information makes review and accountability difficult.
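One common way to probe such a system from the outside is to change one input at a time and watch how the decision moves. The sketch below uses an invented loan-scoring function standing in for the opaque model; the fields and thresholds are assumptions made purely for illustration.

```python
# Minimal sketch of probing a black box by varying one input at a time.
# The scoring function is invented and stands in for a model we cannot inspect.
def black_box_loan_score(income: float, age: int, postcode: str) -> float:
    # Hidden logic the auditor cannot see.
    score = 0.5 + income / 200_000
    if postcode.startswith("28"):   # an arbitrary, possibly unfair rule
        score += 0.2
    return min(score, 1.0)

applicant = {"income": 30_000, "age": 35, "postcode": "41001"}
baseline = black_box_loan_score(**applicant)

# Change one attribute at a time and observe the effect on the decision.
for field, new_value in [("income", 60_000), ("postcode", "28001")]:
    probe = {**applicant, field: new_value}
    delta = black_box_loan_score(**probe) - baseline
    print(f"Changing {field} to {new_value!r} moves the score by {delta:+.2f}")

# If an attribute like the postcode shifts the score strongly, that is a sign
# the opaque system may be treating applicants unequally.
```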
Algorithmic confirmation bias
Algorithms learn from our past behaviour: what we read, what we watch or what we buy. In doing so, they tend to show us more of the same, reinforcing our prior beliefs and limiting our exposure to new information. This effect reduces the diversity of perspectives and makes it difficult to form a critical opinion.
Automation bias
The perception that AI-based systems are infallible can lead users and practitioners themselves to accept their results without question. This overconfidence, known as automation bias, can lead to erroneous decisions in all sorts of areas.
Bias from outdated data
Algorithms trained on old or outdated data can generate irrelevant, erroneous or even harmful results. This happens, for example, in systems that recommend products, support medical diagnoses or inform financial decisions on the basis of patterns that no longer hold. Outdated data limits the ability of models to adapt to changing contexts and perpetuates representations of the past that no longer reflect current reality.
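A minimal sketch of a basic freshness check, comparing a feature's distribution in the (invented) training data with recent observations; production systems use more robust drift metrics, but the principle is the same.

```python
# Minimal sketch of a data-freshness (drift) check.
# The numbers are invented; the idea is to compare the data a model was
# trained on with what the system sees today before trusting its output.
from statistics import mean, stdev

training_prices = [12.0, 14.5, 13.2, 12.8, 15.0, 13.9]   # historical data
recent_prices   = [19.5, 21.0, 20.2, 22.3, 19.8, 20.7]   # what users see now

def drift_score(old, new):
    """How many training standard deviations the recent mean has moved."""
    return abs(mean(new) - mean(old)) / stdev(old)

score = drift_score(training_prices, recent_prices)
print(f"Drift score: {score:.1f}")
if score > 3:
    print("Warning: the training data no longer reflects reality;"
          " refresh it before relying on the model's recommendations.")
```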
Detecting biased algorithms is not always easy, but knowing that they exist and how they work makes it easier for users to spot the telltale signs of their presence. Educational institutions should encourage critical thinking, and citizens and institutions should demand greater transparency about how these systems work, which is key to developing a healthier relationship with artificial intelligence.