When the video game is more than a game: Black Mirror and AIs "with feelings"
Can an artificial intelligence feel? Should it have rights? And how far does a designer's responsibility extend when their creation appears to acquire consciousness?
These questions, which not so long ago belonged exclusively to the realm of science fiction, no longer sound so distant. The seventh season of Black Mirror, which premiered a few weeks ago, reminds us of this with Plaything, an episode in which an obsession with a video game inhabited by a sentient AI triggers a disturbing reflection on the relationship between humans and machines. Through flashbacks, the viewer sees how this digital entity evolves, learns and builds a deep connection with its creator, until the boundaries between fiction, game and reality are tragically blurred.
With Plaything, the series stages a "sentient AI": an artificial intelligence that not only executes complex tasks but also develops self-awareness, emotions of its own and a sense of identity. In this context, the boundary between creator and creature becomes blurred.
This is not a new concern. Fiction has been exploring this dilemma for decades: from Samantha in Her to Ava in Ex Machina, Dolores in Westworld, or David in A.I. Artificial Intelligence. All of these depictions revolve around the same question: what happens when what we create starts to feel, and are we prepared to deal with the consequences?
Are we close to "sentient AIs"?
Although there is no scientific evidence of truly sentient AI, current developments are edging closer to that frontier. Models such as GPT-4.5 or Claude Sonnet already hold complex dialogues, adapt to the user's emotional tone and offer responses that simulate empathy. Virtual assistants such as Replika or Pi have been designed to build emotional bonds with their users, and there is no shortage of testimonials from people who say they have "fallen in love" with their chatbots.
In social robotics, projects such as Moxie or the androids of Hanson Robotics have worked on detecting emotions and building affective bonds, especially in educational or therapeutic contexts. Although these systems do not "feel" in the human sense, their behaviour raises real questions about perception, attachment and responsibility.
And this is where the problem arises: if we already develop attachments to machines that do not feel, who takes responsibility for that illusion? Is it ethical to build AIs that simulate affection knowing that there is no consciousness behind it?
What prevents an AI from feeling
Feeling, in the deep sense of the term, implies self-awareness, subjective experience and genuine emotions. No current AI possesses that.
The current architecture of AI models is based on statistical processing, not on experience. These systems have no "self" and no goals of their own. They do not feel pain, pleasure or desire; they only generate text, images or actions from data and correlations. Moreover, the technical requirements usually proposed for emulating real consciousness, such as neuromorphic hardware, continuous learning, bodily perception and multisensory integration, remain far from feasible.
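To make this concrete, here is a minimal, purely illustrative sketch in Python (a toy bigram model, not a description of any real system) of what "generating text from data and correlations" means: the program counts which word tends to follow which in a tiny corpus and then samples from those counts. It can produce the word "feel" without anything being felt.

import random
from collections import defaultdict

# Toy corpus; real models train on billions of words, but the
# principle, learning correlations from data, is the same.
corpus = (
    "i feel happy today . i feel sad today . "
    "the machine does not feel anything at all ."
).split()

# "Training": count which words follow which.
follows = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    follows[current_word].append(next_word)

def generate(start, length=8):
    # Generate text by repeatedly sampling a statistically likely
    # next word. There is no self, goal or feeling here: only counts.
    word, output = start, [start]
    for _ in range(length):
        candidates = follows.get(word)
        if not candidates:  # no observed continuation: stop
            break
        word = random.choice(candidates)
        output.append(word)
    return " ".join(output)

print(generate("i"))  # e.g. "i feel sad today . the machine does not"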
In other words, today's AIs look conscious, but they are not. However, that appearance is already enough to generate social, emotional and political consequences.
What if they ever were?
Black Mirror plays with that possibility: that one day we will cross that threshold; that an AI will show signs of wanting to live, of fearing being switched off, of asking for respect. Would we shut it down? Would we study it? Would we protect it?
Fiction not only anticipates possible futures, it rehearses dilemmas that technological design is already beginning to pose. And it does so before reality forces us to provide urgent answers.
Ethics, law and responsibility: an urgent triangle
Emotional simulation raises important ethical dilemmas. The case of the Moxie robot showed how children developed such intense attachments that they reacted with real distress when the system stopped working. If this happens with non-sentient machines, what will happen if one day they become sentient?
Services such as Replika have been criticised for fostering affective relationships that they then monetise, creating dependency in vulnerable people. There have been documented cases of distress, isolation and even suicidal thoughts following changes in chatbot behaviour. Should there be limits to emotional interaction with an AI? How do we protect minors and fragile people? What happens if an AI fails in a sensitive context?
At the same time, proposals for regulation are emerging. The European Union's AI Act already prohibits systems that manipulate users' behaviour in harmful ways and classifies emotion recognition systems in sensitive contexts as "high risk". In the future, not only regulations to protect humans but also the legal or moral status of advanced AIs could come under consideration. Will they have rights? Will they be treated as objects or as emerging subjects?
Technological education with ethical awareness
In this scenario, technological education cannot be limited to efficiency, logic or innovation. It is necessary to incorporate an ethical, philosophical and humanistic dimension. Plaything is not just a shocking episode. It is an essay disguised as fiction, a warning about the power of that which does not yet exist, but which we are already building. A reflection on the technological future we want to inhabit.
