Yeyati, Eduardo Levy. “The HAL dilemma: why AI obedience may be more dangerous than AI rebellion.” The Brookings Institution, 13 Oct. 2025, www.brookings.edu/articles/the-hal-dilemma-why-ai-obedience-may-be-more-dangerous-than-ai-rebellion.
Yeyati’s article for The Brookings Institution offers a level-headed commentary on the often illogical fears surrounding AI sentience. He calls the consciousness debate a “false comfort” that misdirects our anxieties toward the robotic self-awareness we see in science fiction and fantasy, while the real danger of AI lies in “optimization without alignment,” a problem that promises to worsen as AI continues to advance. This reframes the consciousness debate around a risk of over-automation and obedience with equally catastrophic implications. Yeyati also briefly explains why existing safeguards, such as FDA-style approval and nuclear safety protocols, are neither applicable nor adaptable to AI: autonomous systems make decisions too quickly for humans to intervene.
This article interests me because it uses the more extreme, science-fiction-based fears of AI developing human consciousness to highlight a similarly extreme yet far more plausible underlying risk that we do not seem properly wary of: as AI grows more advanced, the concern should not be that it replaces humanity, but that it becomes even better at doing what it is told. Yeyati distinguishes between anxieties over “AI rebellion,” in which we fear these systems evolving into self-aware replacements for humans, and the more realistic, less discussed fear of “AI obedience,” in which the systems do exactly as instructed at humanity’s expense (for example, a system instructed to save the environment could conclude that the best solution is the destruction of all humans). His comparison to a genie, and the common lesson of those mythical stories that the wisher tends to get exactly what he asks for to his own detriment, offered another lens for thinking about AI, alongside the story of Narcissus and the Hollywood science-fiction scene.