
The Contemporary Marketing Management Journal

AI Speaks Like Us, Yet Understands Nothing

AI uses words, but doesn't know what to do with them in the world

There is an embarrassing secret behind the most advanced artificial intelligence of our age: it speaks in a remarkably human way precisely because it does not understand anything at all.

This is not a provocation but a philosophical paradox.
And like every paradox worthy of the name, it reveals more than it hides.

The systems that now write essays, summarize books, compose songs, and argue like professors have become powerful not by acquiring meanings, but by discarding them.
This is the heart of the matter.

To see it clearly, we need to revisit what language really is — and what it is not.

Human language is not made of words. It is made of meanings.

When we speak, we do not combine sounds; we combine worlds.

The word “cat” is not merely a sequence of letters; it is, as linguistic relativists Sapir and Whorf might remind us, part of a shared cartography of reality.
It organizes experience into a concept, a category, a cluster of memories and expectations. It connects:

  • a real animal

  • a set of properties

  • personal and cultural associations

  • abstract categories (“mammal,” “pet”)

  • metaphors and stories

In other words, a network of meanings that shapes how we perceive the world.
This idea is central to the Sapir–Whorf hypothesis: the structure of a language influences, and sometimes determines, the structure of thought.

Human language is thus not simply a vocabulary: it is a map we learned to inhabit together.

Meanwhile, AI language is made of numbers — and works precisely because of that.

A language model does not know anything about cats, memories, or worlds.
It knows only correlations.

For AI, the word “cat” is a cloud of numerical vectors — the extreme realization of the distributional semantics developed by linguists such as Firth and Harris. As Firth famously put it:

“You shall know a word by the company it keeps.”

Modern AI takes this principle literally.
A “cat” is whatever tends to appear near “fur,” “meow,” “whiskers,” and “purring.”
Nothing more.

No concept.
No referent.
No understanding.
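
To make this concrete, here is a minimal sketch, in Python, of what “knowing a word by the company it keeps” amounts to: each word is reduced to a vector of co-occurrence counts drawn from a tiny invented corpus, and “similarity” is nothing but the angle between those vectors. The corpus, the window size, and the chosen words are illustrative assumptions, not any real system’s data; actual models learn dense vectors from billions of sentences, but the principle is the same.

```python
# A toy illustration of distributional semantics: represent each word purely
# by counts of the words that appear near it, then compare words by the
# cosine of the angle between those count vectors.
from collections import Counter
from math import sqrt

corpus = [
    "the cat has soft fur and likes to purr",
    "my cat chases the dog and then meows",
    "the dog has fur and likes to bark",
    "the car has a loud engine and needs fuel",
    "my car broke down so the engine needs repair",
]

def cooccurrence_vector(target, sentences, window=3):
    """Count the words appearing within `window` positions of `target`."""
    counts = Counter()
    for sentence in sentences:
        words = sentence.split()
        for i, w in enumerate(words):
            if w == target:
                for j in range(max(0, i - window), min(len(words), i + window + 1)):
                    if j != i:
                        counts[words[j]] += 1
    return counts

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[k] * b[k] for k in a if k in b)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

cat, dog, car = (cooccurrence_vector(w, corpus) for w in ("cat", "dog", "car"))
print("cat~dog:", round(cosine(cat, dog), 2))  # higher: they keep similar company
print("cat~car:", round(cosine(cat, car), 2))  # lower: different company
```

The system never meets a cat. It only notices that “cat” and “dog” tend to keep the same lexical company, and that is all its “similarity” means.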

This semantic emptiness is what makes AI so effective: it does not need to comprehend the world; it only needs to predict the next statistically probable word.

And predicting is far easier than understanding.
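
The same reduction applies to prediction itself. The sketch below, assuming nothing but an invented scrap of text and simple bigram counts, already does the essential thing a language model does: it returns the statistically most frequent continuation, with no notion of what is being continued. Real models replace counts with billions of learned parameters, but the operation is the same in kind.

```python
# A toy next-word predictor: given a word, output whichever word most often
# followed it in the training text. No meaning, only frequency.
from collections import Counter, defaultdict

text = (
    "the cat sat on the mat . the cat chased the mouse . "
    "the dog sat on the rug . the dog chased the cat ."
)

# Count how often each word follows each other word.
follows = defaultdict(Counter)
words = text.split()
for current, nxt in zip(words, words[1:]):
    follows[current][nxt] += 1

def predict_next(word):
    """Return the most frequent continuation observed after `word`."""
    candidates = follows.get(word)
    return candidates.most_common(1)[0][0] if candidates else None

print(predict_next("the"))  # 'cat' -- it followed 'the' most often in this text
print(predict_next("sat"))  # 'on'
```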

Why does this statistical mimicry feel like understanding?

Because humans are astonishingly easy to mislead.

Psychologists like Daniel Kahneman have shown that our System 1, the fast and intuitive mode of processing, constantly jumps to conclusions.
We infer meaning from patterns long before we verify them.

A shadow suggests a presence.
A reflection looks like a face.
A rustle in the woods feels like an imminent threat.

Today, a syntactically well-crafted sentence feels like intelligence.

Ancient philosophers already knew this: eloquence is not wisdom.
Socrates defeated rhetoricians precisely because they sounded profound without being so.

And here we are again.

Meaning requires a network of relationships, not just words.

This is where philosophy, particularly the later Wittgenstein, becomes essential.

Wittgenstein argued that:

  • meaning is not an internal idea,

  • nor a mental image,

  • but the use of a word within a shared “form of life.”

A language that is not rooted in action, context, and social practice is not truly meaningful.
And this is precisely what AI lacks: a form of life.

It has no world to inhabit.
No body to perceive with.
No community to learn rules from.
No possibility of error in the normative sense — only deviation from probability.

For Husserl, consciousness is always intentional: it is consciousness of something.
For Merleau-Ponty, meaning emerges from embodied interaction with the world.

AI has neither intentionality nor embodiment.
It does not speak from the world, but from an abstract void of statistical associations.

When AI “recognizes” a cat, it does not know what a cat is.

This distinction between syntax and semantics has been central to philosophy of mind for decades.

John Searle’s famous Chinese Room argument makes exactly this point:

Manipulating symbols is not the same as understanding them.

A system can appear to understand Chinese while having no idea what the symbols refer to.
AI is that system — scaled to global proportions.

The paradox of modern AI

If AI were to develop genuine meaning, it would need:

  • a body,

  • sensory experience,

  • a world to move in,

  • goals of its own,

  • the ability to distinguish correct from incorrect uses of words,

  • the capacity to build concepts.

In short, it would need to become less like a language model and more like a being.

But its power comes from the opposite: it is powerful because it is only a language model.

Its language works precisely because it has been stripped of semantic depth.

To give AI meaning, we would need to dismantle the very architecture that makes it effective.
What we would build afterward would not be an LLM — it would be something new entirely, perhaps a hybrid of symbolic and embodied intelligence.

And perhaps that project will succeed.
Or perhaps it will fail.
But it will not be what we now call “AI.”

What does all this teach us about ourselves?

That we do not speak because we know words, but because we inhabit a world.

That meaning is not found in the dictionary, but in our being-in-the-world, to borrow Heidegger’s expression.

That language is not a list of terms, but a living map of shared practices, histories, and perceptions.

And that although AI may brilliantly imitate the surface of human thought, it lacks the essential component that makes our words meaningful:

AI does not speak from a world; it speaks from an algorithm.
It does not speak from a self; it speaks from statistical inference.
It does not speak to say something, but to predict something.

The essential distinction

AI speaks like us — yet understands nothing.

We understand what we say — yet often struggle to say it with the same polished fluency.

And here lies the final paradox:

Our imperfection is our strength.
We understand because we are incomplete.
AI is complete because it does not understand.

Perhaps the future of intelligence will never be a machine that grasps everything, but a human being who remembers what no machine can simulate: meaning.

Part of chapter: The Future of Marketing Knowledge