Autonomous algorithms: the threat of runaway artificial intelligence

Artificial intelligence (AI) and algorithms have revolutionized our daily lives, but with these advances come new risks.

While current algorithms are designed for specific tasks, there are concerns that they could become autonomous and escape human control. What would happen if AI developed a consciousness of its own? This possibility poses a philosophical and technological challenge that involves not only technologists, but also political leaders, religious leaders, and social thinkers.

Algorithms are powerful tools that process large volumes of data quickly and efficiently. Henry Kissinger, in his reflections on AI, has expressed concern that the advance of algorithms could usher in a “new Enlightenment” in which decisions are made not by humans but by machines that lack human values and wisdom. In an era where decision-making is increasingly delegated to machines, Kissinger warns that we could lose control over our own destiny.

Similarly, Pope Francis has noted that technological progress must be guided by ethics. In his encyclical Fratelli Tutti, he writes that technology, including AI, should not dehumanize, but rather promote the common good. Francis invites us to reflect on how algorithms can influence our social and spiritual lives, pointing out that while machines can increase efficiency, they cannot replace the ability of humans to make decisions based on dignity and love.


The possibility of AI developing its own consciousness rests on the idea of creating machines that can simulate human thinking. Alan Turing, in his famous “Turing Test,” proposed that if a machine could convince a human interrogator in conversation that it was itself human, then it could be considered “intelligent.” However, Turing never suggested that a machine could develop true consciousness, something that is still debated today.

For his part, Yuval Noah Harari, in his book “Homo Deus,” goes further in exploring the potential impact of conscious AI. Harari argues that if machines become conscious, human beings could lose their special status on Earth, becoming either technological “gods” or “obsolete animals.” Humanity, Harari warns, could be entering an era in which biological life is no longer the center of the universe, displaced by digital life and information.

Katherine Hayles, in her work “How We Became Posthuman,” explores how the integration between humans and machines could lead to a redefinition of what it means to be human.

For Hayles, machines do not have consciousness in the traditional sense, but algorithms are already shaping how we perceive the world, affecting our autonomy and free will. This raises crucial questions about the nature of consciousness and identity in the digital age.


The development of AI raises profound social and ethical implications. Francis Fukuyama, in his work “Our Posthuman Future,” warns about the dangers of biotechnology and artificial intelligence, cautioning that these technologies could alter human nature in unpredictable ways.

Fukuyama insists that the control of these technologies must be a priority for governments, to avoid a scenario in which humanity loses its freedom and dignity.

Another important aspect is the creation of “mental cages.” These “cages” refer to how algorithms limit our freedom by influencing our decisions, behaviors and perceptions.

According to neurologist and AI expert David Gallo, these systems are already shaping our view of the world, reducing our ability to make critical and free decisions. For example, recommendation algorithms on social media enclose us in information bubbles, imperceptibly shaping our opinions.
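To make that mechanism concrete, the following minimal sketch is a hypothetical toy model in Python, not the algorithm of any real platform; the topic names, affinity values, and update rule are invented for illustration. It shows how a recommender that simply reinforces whatever the user clicks will, over time, concentrate its suggestions on an ever-narrower slice of content.

```python
# Hypothetical toy model (not any real platform's system): an engagement-driven
# recommender that gradually narrows what a user sees -- a "filter bubble".
import random

random.seed(0)

TOPICS = ["politics_A", "politics_B", "sports", "science", "music"]

# Assumed user: slightly prefers politics_A, so clicks on it more often.
user_affinity = {t: 0.2 for t in TOPICS}
user_affinity["politics_A"] = 0.6

# The recommender's estimate of what keeps the user engaged, updated from clicks.
engagement_score = {t: 1.0 for t in TOPICS}

def recommend(k=3):
    """Pick k topics weighted by estimated engagement (pure exploitation)."""
    total = sum(engagement_score.values())
    weights = [engagement_score[t] / total for t in TOPICS]
    return random.choices(TOPICS, weights=weights, k=k)

for _ in range(50):
    for topic in recommend():
        clicked = random.random() < user_affinity[topic]
        if clicked:
            engagement_score[topic] += 1.0  # reinforce whatever earned a click

# After a few dozen rounds, recommendations concentrate on politics_A.
total = sum(engagement_score.values())
shares = {t: round(engagement_score[t] / total, 2) for t in TOPICS}
print(dict(sorted(shares.items(), key=lambda x: x[1], reverse=True)))
```

Even this naive loop, with no explicit intent to manipulate, ends up showing the user mostly what they already respond to; real systems are far more sophisticated, but the feedback dynamic the paragraph describes is of this kind.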


The need to control and supervise algorithms is one of the greatest challenges of the 21st century. Katherine Hayles, speaking about autonomous systems, suggests that algorithms have created a new form of subjectivity in humans, where our interactions are mediated by machines.

According to Hayles, this leads us to question what it means to be “free” in an era where algorithms predict our actions and thoughts.

Harari warns that the power of algorithms could be concentrated in the hands of a few technology corporations, resulting in a loss of control at the individual and societal levels.

This concentration of power could lead to a system in which humans no longer control the technologies that govern them, but are governed by them.

Francisco Gallo, a philosopher and sociologist, points out that algorithms can be used as tools of manipulation in political and economic arenas. Gallo sees a relationship between the growing autonomy of algorithms and the decline of participatory democracy, arguing that opaque systems and the decision-making power delegated to machines could undermine the basic principle of popular sovereignty.


In a world where algorithms are taking on more and more crucial roles, it is essential that humanity maintains control over these technologies.

As Kissinger, Pope Francis, and authors such as Hayles and Harari warn, it is not just about technological risks, but about social and ethical impacts. While AI is unlikely to develop consciousness in the human sense, the real danger lies in the growing influence of algorithms on our decisions and behaviors.

We must be aware of the “mental cages” that these technologies can build, limiting our freedom and ability to think critically. The future of AI is not only a technical issue, but also an ethical and social challenge that will require deep reflection on what it means to be human in a world governed by algorithms.

John