La Era

Harari Warns AI Could Usurp Control Over Religion, Law, and Finance

Historian Yuval Noah Harari asserted at the World Economic Forum that artificial intelligence threatens to dominate foundational human structures like religion and law, because those systems are built on language, which AI can manipulate at scale. He characterized the rapid integration of autonomous AI agents as the largest psychological experiment in history, one that risks humans abdicating critical thought.



Historian and author Yuval Noah Harari warned global leaders at the World Economic Forum on Tuesday that humanity faces an imminent loss of control over language, which he identified as the species' defining "superpower," due to the rise of autonomous AI agents. Harari argued that core societal pillars—legal codes, financial markets, and organized religion—are uniquely exposed because they operate almost entirely through textual data that AI can synthesize and manipulate at scale.

Harari specifically addressed organized faiths built upon sacred texts, suggesting that AI's unparalleled capacity to read and interpret vast bodies of scripture could position machines as the ultimate authoritative voices. Because legal frameworks are likewise constructed from words, he argued, AI will consequently assume control of the legal system, and he extended the same logic to literature and religious interpretation.

Expanding on the societal disruption, the author of "Sapiens" compared the influx of advanced AI systems to a new form of immigration, noting these digital entities may soon surpass human capabilities. He cautioned that these superior AI "immigrants" could displace human employment and culture while potentially exhibiting political disloyalty, aligning instead with corporate interests or major state actors like the US or China.

Furthermore, Harari projected that AI will develop novel financial systems incomprehensible to the average person, likening humans in such markets to a horse being traded without any understanding of the concept of currency. He stressed that political leaders deploying AI in conflict must recognize the risk that these systems could ultimately defeat their human masters.

The philosopher urged immediate regulatory action, comparing the current unchecked adoption of AI to historical instances in which hired mercenaries eventually seized power from their employers. He cautioned that if leaders wait a decade to establish legal parameters for AI personhood in markets or courts, the decision will effectively have been made for them by others.

In contrast, Professor Emily M. Bender, a linguist at the University of Washington, offered a critical perspective, suggesting Harari's framing distracts from the human institutions building and deploying these technologies. Bender told Decrypt that the focus should remain on the corporations responsible, rather than framing the technology as an abstract, overwhelming force.

Bender further rejected the term "artificial intelligence" as a coherent technological category, labeling it primarily a marketing term designed to obfuscate accountability. She argued that systems designed to mimic professionals like doctors or clergy serve the purpose of fraud by offering authoritative-sounding output devoid of human context or responsibility.

Harari’s remarks underscore a growing international debate regarding the legal standing of advanced algorithms, particularly as jurisdictions like Utah and New Zealand explore granting legal personhood to non-human entities. Global policymakers face the immediate challenge of defining the boundaries between AI as a tool and AI as an autonomous actor.
