Prejudices in the human and artificial mind

Like AI, the human mind can be a powerful tool if trained well. AI is not inherently good or bad; its value will depend on whether we use it to improve society rather than to replicate our defects.

Conflicts that exclude people stem from deeply rooted prejudices. These preconceived ideas marginalize the “other” on the basis of characteristics such as race or social group, without critical analysis. Prejudice leads to stigmatization without evidence, avoiding debate and preferring aggression to reasoning.

One of the main problems is that many people are unaware of their own prejudices. The “halo effect,” for example, leads us to attribute positive or negative qualities based on superficial characteristics, while the “illusion of validity” leads us to place confidence in unfounded predictions.

The human brain is wired to jump to quick conclusions, which facilitates fanaticism and dogmatic certainty. Socrates taught that “the unexamined life is not worth living,” reminding us of the importance of questioning our own prejudices.

There are several strategies to mitigate the impact of prejudice:

  1. The devil’s advocate method: actively question prejudices to recognize their dangers.
  2. Dialectical inquiry: evaluate a plan alongside its counterplan to get a complete picture.
  3. The external perspective: consider opposing approaches to understand the limitations of our beliefs.

These techniques help reduce impulsive decisions and allow for deeper analysis.

Dr. Isaac Asimov, the famous science fiction writer, related an anecdote in his memoirs that highlights how intelligence is a function of situation and context. Despite his high score on an intellectual aptitude test, Asimov found himself assigned to kitchen duty in the army. His auto mechanic, whom he considered intellectually inferior by academic standards, was capable of solving problems that Asimov could not.

The mechanic once told the story of a deaf man who asked for nails in a hardware store using gestures, and then asked how Asimov thought a blind man would have asked for scissors. Asimov, confident he was right, mimed cutting with his fingers. The mechanic laughed and explained that the blind man would simply have asked for the scissors with his voice.

“I was sure you wouldn’t get it right,” the mechanic said. “Why?” Asimov asked. “Because you are so educated that I knew you couldn’t be very smart.”

This example shows that intelligence is neither absolute nor universal, but situational and adaptive, which underscores the importance of not falling into simplistic prejudices about the intellectual capacity of others.

Psychologist Daniel Kahneman describes two systems of thought. One is fast and automatic, producing answers from whatever is in memory; the other is rational and handles complex tasks. System 1 is activated automatically and retrieves the information that fits the issue at hand, following the law of least effort.

System 2 is slower and more cautious. It observes and monitors thoughts and suggested actions, letting them proceed, suppressing them, or modifying them.

Many of our actions do not result from analysis. The consequences are hasty decisions, frequent mistakes, biased opinions, subjective judgments, and intuitive responses.

Only when System 2 comes into play, overriding the suggestions of the emotional system and investing cognitive effort, can we solve complex problems. It is more instructive to analyze rationality through its errors than through its triumphs.

Like the human mind, artificial intelligence (AI) can also be subject to biases. Since AI learns from human-provided data, it can replicate the errors and biases present in that data. A clear example was Microsoft’s chatbot “Tay,” which, after less than 24 hours of interaction on Twitter, began producing racist output due to exposure to biased comments.

What can be done to prevent this? Algorithm auditing should be interdisciplinary, combining technical skepticism with the social sciences. Biases in AI become automated if they are not detected and corrected, which can amplify problems rather than solve them. It is therefore vital to train these systems with balanced, critically curated data.
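To make the idea of a data audit concrete, here is a minimal sketch of one simple check: comparing how often each group in a training set receives a positive label. The dataset, group names, and labels are invented for illustration; the 0.8 threshold mentioned in the comments echoes the “four-fifths rule” sometimes used as a rough fairness heuristic, not a definitive standard.

```python
from collections import Counter

def positive_rates(examples):
    """Compute the positive-label rate per group.

    `examples` is a list of (group, label) pairs with binary labels.
    Group names here are purely illustrative.
    """
    totals = Counter()
    positives = Counter()
    for group, label in examples:
        totals[group] += 1
        positives[group] += int(label == 1)
    return {g: positives[g] / totals[g] for g in totals}

def disparate_impact(rates, privileged, unprivileged):
    """Ratio of positive rates between two groups.

    A common rule of thumb flags ratios below 0.8 as a sign the
    data (or a model trained on it) may encode a bias.
    """
    return rates[unprivileged] / rates[privileged]

# Toy training data: group "A" gets positive labels far more often.
data = [("A", 1), ("A", 1), ("A", 0), ("A", 1),
        ("B", 0), ("B", 1), ("B", 0), ("B", 0)]

rates = positive_rates(data)
print(rates)                               # {'A': 0.75, 'B': 0.25}
print(disparate_impact(rates, "A", "B"))   # 0.333... — well below 0.8
```

A check like this only surfaces one narrow kind of imbalance; a real audit would combine several such metrics with qualitative review of how the data was collected.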

Almost 2,500 years ago, Plato was already criticizing writing for reducing knowledge to mere data and eroding memory. Today, with artificial intelligence, we face a similar challenge: preventing technology from distancing us from critical and creative thinking. In short, both the human mind and AI can be powerful tools if trained well. AI is not inherently good or bad; its value will depend on how we use it to improve society rather than to replicate our defects.

John