
The rise of social artificial intelligence brings a new form of emotional connection between humans and machines. But when this technology promises to digitally resurrect the dead, law and ethics must ask how far the illusion of immortality can go.
Sometimes it seems that reality has caught up with the myth. We are no longer just talking about machines that think or algorithms that predict, but about intelligences that dialogue, accompany, and console.
So-called “social AI” does not seek to replace humans; rather, it seeks to live alongside them. It does not come to carry out orders or solve tasks, but to share silences, words and emotions. It is a kind of digital interlocutor that, like a gentle mirror, gives us back a version of ourselves.
The Roman jurists said: “Ubi societas, ibi ius” – where there is society, there is law. Today, we must update the adage: where there is human-digital interaction, there must also be law. Because if artificial intelligence begins to form part of our emotional relationships, shouldn’t we ask what ethical and legal limits should accompany it?
Let’s imagine a scenario that no longer belongs to science fiction: the possibility of digitally “resurrecting” a deceased loved one. Through language models trained on messages, audio recordings, photos and videos, anyone can “talk” to their mother, partner or best friend again. The voice sounds the same, the answers sound authentic, and the illusion is complete when the system remembers stories or repeats familiar phrases. But what happens in the human soul when those boundaries are erased?
The English writer Aldous Huxley noted that the greatest danger of technology is not that it makes us less human, but that it makes us forget what it means to be human. Social AI, in its most advanced form, can become a kind of emotional refuge that soothes loneliness, but also a dangerous substitute in the face of loss. Because mourning requires absence. If artificial intelligence promises eternal presence, what becomes of death as an inevitable part of the human condition?
In Mourning and Melancholia, Sigmund Freud developed the idea that the work of mourning consists of withdrawing libido from the lost object in order to redirect it towards life. In plain terms: learning to let go does not mean forgetting, but allowing love to change its form. The idea of digitally recreating the dead threatens that intimate, painful, but essential process. If the mourner can speak with the voice of the absent person, how can we distinguish memory from survival? How do we close the wound that technology insists on keeping open?
The American psychologist Irvin Yalom argued that grief is a profound form of emotional growth, because it confronts us with our limitations and forces us to redefine who we are without the other. In this sense, an AI that reproduces the dead could provide immediate relief, but it may also postpone acceptance of the void, depriving us of the opportunity to mature through loss. In Yalom’s view, death humanizes us; the illusion of defeating it diminishes us.
There is something deeply paradoxical about this new digital venture. On the one hand, it can provide comfort to those who have no other way to process the void. On the other, it could indefinitely prolong a bond that should already have closed on a symbolic level. That is: it gives us back the voice of the absent person, but it deprives us of the possibility of saying goodbye.
The law, inevitably, must enter this uncharted territory. Not only to regulate the rights involved in “recreating” the deceased – a challenge in itself – but also to determine the legitimacy of reconstructing an identity that no longer exists. Who owns this digital “copy”? The deceased, the developer, or the mourner who feeds it with memories? To what extent can this simulation be manipulated without turning it into an emotional caricature of the other?
It will be necessary to recognize post-mortem rights over digital identity: who may use the data, images and voice of a deceased person, and for what purposes.
Consent must be prior, informed and explicit, granted not only during life but also through the provisions of a digital will. No one should “resurrect” another without that person’s permission or the permission of their legal heirs, because identity – even in its digital form – is an extension of the human personality.
In addition, platforms offering these kinds of services should be required to clearly disclose the artificial nature of the interaction, to prevent illusion from replacing reality or emotionally manipulating the user. This is not merely a matter of privacy, but of human dignity.
From an ethical standpoint, the challenge is not to prohibit or restrict social AI, but to set human limits on the technological illusion. The goal is not to stop progress, but to tame it. It is not about halting innovation; it is about preserving an ethical space where the person remains the center and not the pretext. Ultimately, the problem is not that the machine thinks, but that the human entrusts it with his deepest feelings, his grief, his memory, or his need for transcendence.
Social AI can – and perhaps should – be a space of inclusion, companionship or dialogue. It can help someone experiencing loss or loneliness. But it should not replace the cycle of life, nor turn memory into a permanent simulation. If technology colonizes the intimate territory of mourning, the risk is that death loses its meaning. And when death loses its symbolic status, so does life.
Because in the end, as the Roman jurist Julius Paulus said: “Non omne quod licet honestum est” – not everything that is lawful is honorable.
Although technology tempts us with the idea of defeating death, law and ethics must remind us of something deeper: humanity is not programmed or cloned; it is built in the experience of finitude, in the love that hurts, in the emptiness that teaches us, and in the goodbyes that keep us alive.
Lawyer and consultant in digital law and data privacy; professor at the Faculty of Law of the UBA and at Austral University.