Privacy and transparency: the EU's first-ever AI law
Last August, the European Union's first Artificial Intelligence Act came into force. The legislation is binding on all 27 EU member states and, for its advocates, marks a milestone in setting clear limits on the future uses of AI and its all but certain impact on citizens' lives.
This legislation is in line with public sentiment, as reflected in the second edition of "FEEL IT: Horizonte IA", a study developed jointly by evercom (a creative communications and marketing agency) and UDIT (University of Design, Innovation and Technology): 66.6% of respondents demand that those responsible for developing and regulating AI guarantee privacy and data protection in relation to its use. These are the highlights of the new law.
First Artificial Intelligence Law: main points
Prohibitions
To protect citizens' privacy rights, the regulation limits the use of biometric identification systems in public spaces by the authorities: they will be allowed only for specific uses, under specific conditions, and in connection with serious crimes.
In other words, video surveillance cameras equipped with facial recognition systems, which are so popular in countries such as China, have no place in EU countries.
Intellectual Property
One of the legislator's main concerns is that AI models be transparent and "explainable". First of all, this means they must comply with copyright law, which will in turn require future regulations specifying how authors who transfer their works for model training are to be compensated.
There will also be a requirement that these models be auditable, so the administration will need straightforward mechanisms to check what data the different algorithms are trained on.
Governance and compliance
Organisations that develop their own models will be required to implement AI-specific risk management systems and to designate internal compliance officers to oversee adherence to the rules.
In addition, regular audits and detailed documentation of these systems will be required, facilitating monitoring and oversight by competent authorities.
Impact assessment
Before deploying high-risk AI systems (e.g. those trained on citizens' data), companies will be required to conduct impact assessments to ensure that these models are safe, ethical and legally compliant.
This documentation will be available for external audits and reviews, promoting transparency in their application.
Transparency, oversight and sanctions
Finally, the regulation establishes that users will have the right to be informed in a clear and comprehensible manner about automated decisions that affect them, so as to ensure transparency.
Based on these principles, the new EU regulation establishes a framework of significant penalties for misuse of artificial intelligence by organisations and companies. Fines range from €7.5 million or 1.5% of the company's annual turnover up to a maximum of €35 million or 7% of turnover, depending on the seriousness of the infringement. In addition, supervisory authorities will be set up at national and EU level to ensure compliance.
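As an illustration of how the ceilings described above combine a fixed amount with a share of turnover, the maximum fine for a given tier is the greater of the two. This is a simplified sketch based only on the figures quoted here; the function name and the tier parameters are hypothetical, not part of the regulation's text:

```python
def penalty_ceiling(annual_turnover_eur: float,
                    fixed_cap_eur: float,
                    turnover_share: float) -> float:
    """Maximum fine for a tier: the greater of the fixed cap
    or the given share of the company's annual turnover."""
    return max(fixed_cap_eur, annual_turnover_eur * turnover_share)

# Most serious tier: up to €35 million or 7% of turnover, whichever is higher.
large_firm = penalty_ceiling(1_000_000_000, 35_000_000, 0.07)   # 7% of €1bn
small_firm = penalty_ceiling(100_000_000, 35_000_000, 0.07)     # fixed cap applies
print(large_firm, small_firm)
```

For a company with €1 billion in turnover the percentage dominates (€70 million), while for one with €100 million in turnover the €35 million fixed cap is the higher figure.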
On 1 February, the EU is expected to begin sanctioning companies and applications that fail to comply with key aspects of the new regulation; nine months later, developers working in the field of artificial intelligence will have to adhere to codes of good practice.
FEEL IT: Horizonte IA
The second edition of FEEL IT, the report on technological perception that evercom prepares every year, this time developed in collaboration with UDIT, was created to understand how AI has been integrated into citizens' lives, both personally and professionally. The aim is not only to analyse usage patterns of these resources, but also to understand the doubts and expectations they generate in people and companies.
Data for this report were collected through two online surveys of different population groups: citizens and representatives of the business sector. Fieldwork was carried out between 2 and 25 September 2024, gathering responses from 700 citizens and 100 profiles of interest who hold senior management positions in, or own, companies operating in Spain.
