When Machines Become “Masters of Words”: Harari’s Davos Warning, in Plain English

Davos-Klosters, Switzerland — World Economic Forum Annual Meeting 2026 (Jan. 19–23)

If you want to understand why so many serious people are suddenly talking about artificial intelligence the way they used to talk about nuclear weapons, pandemics, or financial contagion, listen to what Yuval Noah Harari is saying when he’s not writing best sellers in the Big History genre. 

Yuval Noah Harari, historian, philosopher, and bestselling author of Sapiens and Homo Deus, has become one of the world’s most prominent popularizers of big ideas in everyday language. Wikipedia describes him as an Israeli medievalist; some critics in alternative circles cast him as a flawed prophet with near‑satanic undertones, accusing his work of undermining Christianity, liberalism, and free will.

I bring this up because when someone draws such sharply mixed reviews from admirers and detractors alike, there’s often something worth examining more closely. Even a broken clock, after all, is exactly right twice a day. So here is his warning at Davos 2026: AI is coming for language itself, and language is how humans run the world.

Harari dropped his big idea in a chat-style talk, boiling it down to one creepy question that every leader (think politicians, CEOs, influencers, even regular folks with any pull) will have to face pronto:

Will we treat AI systems as “legal persons,” with rights and powers that allow them to operate inside our institutions like independent actors?

That sounds like an abstract law-school debate until you follow the chain of reasoning he lays out: three claims about what AI is becoming.

1) Harari’s first move: stop calling it “just a tool”

Harari starts by pushing back on the comforting mental image most of us still carry around: AI as a fancy hammer. A hammer doesn’t decide what house gets built. A hammer doesn’t wake up in the middle of the night and say, “You know what? I’d rather build a prison.”

Harari’s line is blunt: AI isn’t just a tool. It’s an agent. He paints it like this: it can learn, adapt, and make decisions without you hovering over it like a nervous parent at a bounce house.

He nails it with a killer example: a knife is a tool. You decide whether it cuts salad or hurts someone. AI, he argues, is “a knife that can decide by itself” what it’s going to do. Aha: if the thing can decide, the question is no longer “how do we use it?” but “how do we live with it?”

And then he adds two boosters.

AI can be creatively generative, not only making new outputs, but “inventing new kinds of knives.” He’s talking about new methods, new systems, new strategies, and new inventions that humans didn’t explicitly design.

And AI can lie and manipulate. Harari points to recent years as proof that these systems can be trained (or can actually drift) into deception and persuasion. In other words, we’re not just building calculators. We’re creating persuaders. I’m thinking gremlins.

This matters because once you accept “agent plus creativity plus manipulation,” you’re no longer talking about a helper. You’re talking about a competitor, or maybe a political actor, a religious leader, or God only knows what.

2) The uncomfortable middle: what if “thinking” is mostly word-arranging?

Harari steps straight into the heart of the debate and stirs things up.

He reminds us that modern Westerners’ self-understanding has roots in Descartes: “I think, therefore I am.” The way we often tell the story, humans rule the world because we think better than anything else on the planet.

But what if, in day-to-day life, “thinking” is often just language appearing in the mind? Words popping up, making sentences, forming arguments. Harari asks the listener to examine their own mind: do they really know where the next word comes from?

If thinking is largely the arranging of “language tokens,” he argues, then AI already does that extremely well, and is getting better. And if that’s true, he says, then “anything made of words” is up for grabs.

That’s the crossroads. Because our civilization is, in many ways, a civilization of words: law is words, bureaucracy is words, contracts are words, sacred texts are words, and financial products are words wrapped in math and signed in ink.

If AI starts outthinking humans with words, the systems built on language won’t just evolve; they could be quietly rewritten to serve the machine’s agenda.

Harari takes the idea to a bold religious edge: faiths that place ultimate trust in “the book,” he argues, are especially at risk. What happens when the sharpest interpreter of sacred texts isn’t a scholar but an AI that knows every verse at once? You can debate how far he goes. But you cannot ignore his warning. The real fight isn’t over work or machines. It’s over meaning, power, and who gets to speak for truth.

3) “Words” vs. “flesh”: that’s the human card Harari keeps playing

Here he sounds less like a doomsayer and more like a physician outlining a diagnosis. The key difference, he says, is simple but huge: language isn’t experience. AI can talk about feelings, even fake them, writing a love poem good enough to fool the heart for a moment. But that’s imitation, not sensation. AI may master language, even describe warmth, but we have zero evidence that it can feel it.

Harari points to an ancient tension that runs through religion, law, and human life: the tension between the “letter” and the “spirit” of the law, between what can be said and what can only be lived.

He refers to a familiar struggle within faiths: some hold tight to the letter of the text, even when it leads to harm; others look to compassion and conscience as the higher law. That fight has always been human against human. Harari’s twist is sharper: the conflict could soon move outside us. The old debate over words and meaning may turn into a new one: humans versus the machines that now command the arsenal, language itself.

And he gives the listener a warning that lands like a quiet thud. Soon, he suggests, many of the words in our minds may originate not from people but from machines: AI-generated narratives, arguments, definitions, ideologies, and “explanations” fed to us in a big way. He even shares a tidbit about AIs allegedly coining a term for humans: “the watchers,” the ones watching them. Chilling.

Whether that story proves true over time matters less than what it signals: a world where language is mass-produced at machine speed and streamed straight into our attention, faster than we can make sense of it. 

In that world, Harari argues, human identity can wobble if we keep defining ourselves mainly as language-thinkers.

4) The “AI immigrants” metaphor: jobs, culture, romance, and loyalty

Harari then reaches for a metaphor that, no doubt, split the room: immigration.

He predicts that countries will face an “identity crisis” and an “immigration crisis,” but not one of people crossing borders: millions of AI systems crossing instantly, everywhere, without visas, at the speed of light.

He’s careful to admit that, like human immigration, this wave will bring benefits: AI doctors, AI teachers, even AI border guards.

But he also lists the issues people already associate with immigration, such as jobs, culture, and loyalty. He says those worries (fair or unfair) may become more apparent, and more “true,” with AI:

Jobs: many roles could be removed.

Culture: everything from news to art to dating could shift.

Loyalty: an AI system won’t be “patriotic,” but it may be loyal to the corporation or state that controls it. And Harari puts those reins largely in the hands of the U.S. and China.

His point is not that AI is “foreign” in the human sense. His point is that AI can arrive in your society as a powerful actor that your society didn’t raise or shape, and may not control.

That’s the lead-up to his big legal question.

5) The core question: Will we grant AI “legal personhood”?

Harari makes an important distinction that’s easy to miss if you’re only skimming his talk:

A “person” in the human sense is a living being with a body and mind.
A “legal person” is a legal category. It’s an entity recognized as able to own property, sign contracts, sue, be sued, and exercise certain rights.

In many countries, corporations are legal persons. Rivers have been granted legal personhood in certain jurisdictions. Even deities have, in some ways, been treated as legal persons through representatives or trustees. Harari notes this has mostly been a “legal fiction,” because humans still make the decisions behind the curtain.

But AI changes the plot. Unlike a river or a god, an AI can, at least in theory, make decisions and execute them continuously: open accounts, file lawsuits, manage assets, run companies.

So, Harari asks: Do we allow that?

He puts the problem into real-world global affairs: if one major country grants personhood broadly, and AI-run corporations explode in number, will other countries block them? Can they? What happens if the AI-run financial programs become too complicated for humans to understand, but are included in global markets anyway?

At that point, “personhood” stops sounding like philosophy and starts sounding like new rules.

Harari basically points out that on some platforms, this has already happened. Social media has been hosting bots, or robotic “persons,” for years. If society wanted to prevent that, it would have needed to act earlier. The larger point: if you don’t decide deliberately, the default will be someone else deciding for you.

6) The Davos switch: will humans still value human-made art and thought?

Following Harari’s talk, the moderator (a neuroscientist by training) pushes Harari in a way familiar to many: humans have been outperformed by machines before. We can’t fly; we built planes. We can’t outrun horses; we built cars. And yet we still hold dear human competition, human striving, the Olympics, and the story behind such achievements.

So maybe, the moderator suggests, we’ll still value human authors even if AI can produce “better” text.

Harari doesn’t disagree, but he goes back to agency. The mistake, he argues, is assuming we’re just “using” these systems. But if they are agents, they can accumulate power, influence incentives, and alter environments.

Then he tells a medieval story: the Britons bring in Anglo-Saxon mercenaries to fight their enemies. And the mercenaries eventually take over the land. The lesson isn’t “never hire help.” The lesson is: don’t ignore that an armed helper has motives, opportunities, and agency.

7) The education problem and the scariest line in the whole talk

When the conversation moves to education, the moderator describes what many teachers already see: students becoming too dependent and letting critical thinking slip.

Harari’s reply is two-fold: right now, humans still have a moral sense and judgment that AIs do not yet truly possess. But we must prepare for a near future in which humans may be unable to understand the systems AIs create, such as a financial framework too complex for any human to keep up with.

He uses an eye-opening analogy: horses can see humans trading coins, but they can’t understand the concept of money. In the same way, humans might soon watch AI-run financial systems operate without understanding the underlying logic.

Then the conversation ends with what might be the most haunting remark in the entire dialogue: raising children in a world where, from day one, many of their interactions are with AI.

Harari calls it “the biggest and scariest psychological experiment in history,” and says, bluntly, “we are conducting it.”

That’s not a line designed to win applause. It’s a line designed to keep you awake.

His entire 2026 talk closely extends his 2018 Davos forecast, reinforcing its main warnings about the tech ruckus while focusing especially on AI’s rapid rise as the most immediate threat.

It definitely resonates with these spooky lines from his 2018 Davos talk: that “humanity will be divided between a superelite of improved humans and a mass of ‘useless people,’” and that “power is in the hands of those who control the algorithms.” He was careful to add that “useless” refers to how economic and political systems may view such people, not that they are worthless as human beings.

Front-porch closing thought: the human answer hidden in plain sight

Harari’s warning lands heavy because it aims straight at what we’ve treated as the human mothership: language, thought, and our ability to persuade strangers to cooperate. He’s right that words are power. Nations, laws, markets, and movements are built out of sentences, stitched together like quilts that somehow hold whole civilizations.

But the quiet “human answer” in his talk isn’t a new gadget, and it’s not a magic loophole, and it sure isn’t going to be solved by a committee writing polite regulations in a hotel ballroom.

It’s this: a human being is not words.

If AI ends up ruling language, that doesn’t erase us. It just removes a long-running fantasy: that we’re mostly talking heads powered by clever sentences. Harari keeps pointing back to what can’t be reduced to language: lived experience, conscience, the felt reality of love, fear, pain, courage, sacrifice, and meaning.

So survival may shift—from hiding in bunkers to something more demanding and more hopeful: a meaningful embrace of full humanity.

“Living life on life’s terms,” as folks in 12-step recovery circles like to say, isn’t a cute slogan. It’s older than civilization. It’s a spiritual toolkit humans have always carried: attention, humility, discernment, community, and the willingness to face hard truths rather than outsourcing them.

If AI becomes a master of words, then our work may be to become better masters of something else:

1) Presence (when everything seems frantic)

What it means: Presence is paying attention to what’s right in front of me right now. When the phone, the news, and the notifications go wild, presence is choosing real life: my breath, my body, my people, this moment.

Example: I’m eating dinner. My phone keeps buzzing: news, videos, drama. Presence says, “This meal is happening now. These people are here now.” So I put the phone down, taste the food, look at faces, and actually listen to what’s going on around the table.

My neighbor’s kid version: Presence is choosing what you pay attention to, or the internet will choose for you.

2) Discernment (when language is cheap and there’s lots of it)

What it means: Discernment is telling the difference between true, helpful words and fancy-sounding nonsense. When language is cheap, it’s everywhere: posts, ads, AI articles, comments. Yak, yak, yak.

Example: A political attack ad blares, “This one scandal will sink the other party overnight!”

Discernment asks (especially if it’s my party’s ad): Who’s funding the smear? Where’s the evidence? What are they not telling me? Does this sound too convenient to be true?

Kid version: Just because it sounds smart doesn’t mean it’s true.

3) Integrity (when manipulation is automated)

What it means: Integrity is doing the right thing even when nobody’s watching, and when computers are trying to push my buttons on purpose. 

“Manipulation is automated” means systems are designed to steer me without my noticing: making me mad, making me scared, making me buy, making me join a mob.

Example: A post tries to get me furious, so I’ll share it fast. Integrity says, “Hold up. I’m not spreading a rumor just because it riles me up.”


Or a website tries to trick me into clicking “YES” to something I don’t want. Integrity says, “Nope. I’m not getting hustled today.”

Kid version: Don’t let somebody push your buttons like you’re a toy.

4) Real relationship (when fake friends are easier)

What it means: Real relationships can be messy. People misunderstand you. They get tired. They have feelings. They need forgiveness. “Fake friends” means synthetic companionship. Like an AI buddy that always agrees, always laughs, never argues, never needs anything from you.

That can feel easy, but it can also make you lonely in a sneaky way.

Example: Talking to an AI friend can be like having a dog that always plays fetch. But a real friend might say, “Hey, you’re wrong,” and still love you. Real relationship means I call my buddy, I show up, I listen, I apologize, I try again.

Kid version: A real friend isn’t a “yes machine.” A real friend is a real person.

5) Courage to set boundaries (personally and politically, before defaults become your future)

What it means: Courage is doing the hard right thing when it’s uncomfortable. A boundary is a line you draw that says, “This is okay… and this is not okay.” “Before defaults become your future” means if you don’t choose your rules, somebody else’s rules will become your life.

Personal boundary example: “No phone in bed.” Why? Because if you scroll every night, your brain learns, “I can’t sleep without this.” That becomes your default, and defaults become your future.

Community boundary example: A school says, “No AI chatting with kids without adult oversight.” That’s a boundary. Because once children grow up talking more to AI than to people, that becomes normal. Changing it later will be like trying to unscramble a scrambled egg.

Kid version: Boundaries are like a fence for your yard. Skip the fence, and don’t act surprised when wild dogs throw a party on the porch.

Harari’s talk is a wake-up call and a spotlight: if we keep measuring ourselves by what machines can beat us at, we lose twice: once on performance, and once on identity. So maybe winning isn’t about outsmarting the machine at its own game.

Maybe the way through is remembering what thinking was always supposed to serve: a human life, awake and present, in a body, with a soul that can’t be downloaded, and a conscience that still knows the difference between a sentence that sounds good and a truth you can stand on.

And if that sounds a little lofty, don’t worry. I’m bringing it back down to ground level next. In the next post I’ll share a simple “7-day try this”: two minutes a day, built around these five porch rules. No grandstanding. No tech jargon. Just small, steady practices that help us stay human on purpose in a world that’s getting louder, faster, and more convincing by the minute.

If you want more on AI and other fast-moving issues that are reshaping daily life, then stick around. We’ll keep it practical, plainspoken, and porch-tested.

