Human-IA kicks off at UDIT with a first session on the invisible power of data
On 18 November, the Campus of Technology, Innovation and Applied Sciences of UDIT, University of Design, Innovation and Technology, hosted the first session of Human-IA, a series of technology meetings organised by the Master in Artificial Intelligence. The series was created to connect innovative companies, industry experts and the university community in a space for open dialogue on the real impact of artificial intelligence and the essential role of people in its development.
The first session, entitled "The invisible power of data", addressed how data management and quality have become the basis on which the future of Artificial Intelligence is built.
Presented by Fernando Blázquez, director of the Degree in Data Science and Artificial Intelligence at UDIT, the event was attended by Iñaki Tabernero, CIO of IndesIA; Diego Morales Cordera, Business & Data Strategy Advisor at Turing Challenge; and Neus Alcolea, lawyer specialising in intellectual property and artificial intelligence at Loyra Abogados.
The event was opened by Iñaki Tabernero, who explained that Spanish industry is at a turning point in the adoption of Artificial Intelligence solutions and that IndesIA positions itself as a space that accompanies organisations in accelerating this strategic leap.
In this regard, he stressed that industrial competitiveness increasingly depends on automation, energy efficiency and the circular economy, and that IndesIA aims to lead this change by offering use cases, STEM training, mechanisms to speed up digitisation and an environment that reduces the risks of adopting new technologies.
"It is time for Spanish companies: we have the knowledge, we have the talent and we have an ecosystem capable of turning Artificial Intelligence into a real lever for growth," he said.
Why do Artificial Intelligence projects fail in companies?
After the speech by IndesIA's CIO, Diego Morales Cordera analysed the factors that determine the success or failure of AI projects, highlighting the need to align technology strategy with real business objectives.
Diego explained that the success of any AI initiative is based on three pillars: "having the raw data, establishing solid governance over it, and having the business knowledge to interpret and apply it correctly".
However, adoption faces a harsh reality: while more than 80% of organisations have explored or developed a pilot project based on Generative AI, "an alarming 95% of these pilots fail to generate a measurable return on investment (ROI)". This figure highlights a profound adoption gap, where the speed of vendor innovation outpaces the ability of customers to deliver tangible business results.
As the Turing Challenge expert emphasised, the main obstacle to crossing this gap and generating real value is the "MVP Wall": the barrier that separates experiments and proofs of concept (PoCs) from implementation and impact at scale. In many cases, failure lies not so much in technical factors as in organisational and strategic ones.
Thus, in many companies, AI projects are driven from the top layers of the organisation through a top-down strategy and are often seen as "a solution in search of a problem, disconnected from business needs and lacking a clear link to real problems". This lack of purpose and of clear accountability between business and technology teams means that pilots remain in the demo phase, with unrealistic expectations and without the usability or business value needed to scale. So what can be done?
Diego Morales says that for AI to stop being an experiment and become a viable product, companies must adopt an AI Master Plan that addresses specialisation, deep domain understanding and clarity on ROI.
The solution lies in a paradigm shift to a strategy where "leadership supports initiatives based on validated business needs". This plan is based on six key pillars, ranging from a Strategic Blueprint that links AI to business priorities to the implementation of AI Governance and the development of the Culture required for adoption. "Only by ensuring that business and technology go hand in hand, with defined PoCs to scale and clear goals," concludes Diego Morales, "will organisations be able to build data flows and redesign processes in an AI First way to gain a sustainable competitive advantage."
From data to model: legal foundations of AI
In the last of the presentations, Neus Alcolea addressed some of the main legal aspects involved in the training of artificial intelligence systems, from the collection and use of data to copyright and privacy protection.
Throughout her presentation, she explained that the real "fuel" of Artificial Intelligence, especially Generative AI, lies in the "quality and origin of its training data", and stressed that the validity and sustainability of any AI system depend directly on the legality of the data that feeds it.
This has placed AI at the epicentre of the legal debate, with the filing of high-profile lawsuits such as Getty Images v Stability AI and The New York Times v OpenAI, which challenge the massive use of content protected by Intellectual Property, collected through web scraping, to train models. "The central question that emerges is clear: how does AI feed itself, under what licences and what legal limits are there to its use?"
To address this question, Neus Alcolea explained that the legal framework establishes a fundamental distinction between the protection of the work and the protection of the person. On the one hand, Intellectual Property protects the form and expression of creations: the software, the model and the output.
On the other hand, the protection of personal data is governed by the GDPR and the Spanish LOPD and is triggered by the identifiability of a natural person, requiring a legal basis and the supervision of the AEPD. Personal data and non-personal data (irreversibly anonymised or aggregated) are therefore subject to different regulatory frameworks, the latter being more flexible, with the concept of "identifiability" acting as "the crucial boundary that differentiates the two legal regimes".
Faced with the need to bring order to this new ecosystem, the European AI Regulation (RIA/AI Act) introduces a new paradigm of transparency and accountability. Although the regulation does not directly amend Intellectual Property law, it does impose obligations that compel compliance with it. Among the key measures, model developers must meet a transparency obligation that involves documenting the datasets used for training, allowing rights holders to verify whether their works have been used.
"Ultimately," she concludes, "the future of AI requires developers to move beyond indiscriminate web scraping, opting for rigorous data governance and full documentation to ensure legal validity and public trust in their systems.
As became evident at the end of the session, data is the new oil that increasingly fuels the decision flows of algorithms and, by extension, of the organisations that need to make informed decisions to remain competitive. Its "invisible" power, though ever more visible, is what sets the pace of transformation in every sector, conditioning how to innovate, how to compete and how to anticipate the changes to come.
