Word traps and false logic don’t lead to dominance of the future or monopolistic grips on limitless profits.
At the heart of the current euphoric expectations for AI is a simple but problematic proposition: that equivalence of function is proof of intelligence. If using natural language requires intelligence, and a computer can use natural language, then the computer is intelligent. If it takes intelligence to compose an essay on Charles Darwin, and an AI program can compose an essay on Charles Darwin, then the AI program is intelligent.
The problem is that this "equivalence is proof of intelligence" claim is a product of word-traps and false logic, not actual equivalence; what is claimed to be equivalent isn't equivalent at all. In other words, the source of confusion is how we choose to define "intelligence," which is itself a word-trap of the sort that philosopher Ludwig Wittgenstein attempted to resolve using koan-like propositions and logic.
Imagine for a moment we had twenty words to describe all the characteristics of what we lump into “intelligence.” We would then be parsing the characteristics and output of AI programs by a much larger set of comparisons.
The notion of equivalence goes back a long way. As science developed models for how Nature functioned, the idea that Nature was akin to a mechanism like a clock gained mindshare.
The discoveries of relativity and quantum effects blew this model to pieces, as Nature turned out to be a very strange clock, to the point that the “Nature as a mechanism” model was abandoned as inadequate.
We have yet to reach the limits of the "equivalence is proof of intelligence" model, which is as outdated and inadequate as the "universe is a mechanism" model. Because we're embedded in a mechanistic conceptualization of the entirety of Nature, including ourselves, we keep finding new examples of equivalence to support the idea that a computer program running instructions is "intelligent" simply because it can perform tasks we associate with "intelligence."
So there is much excitement when an AI program exhibits "emergent properties," meaning that it develops behaviors and processes that weren't explicitly programmed. This is then touted as an equivalence proving intelligence: this "ability to create something new" is proof of intelligence.
But Nature is chock-full of emergent properties that no one hypes as "proof of intelligence." Ant colonies generate all sorts of emergent properties, but nobody is claiming that ant colonies have human-level intelligence and are poised to take over the world.
AI programs parrot content and techniques generated by humans. Since they use natural language, we’re fooled by equivalence into thinking, “hey, the program is as smart as we are, because only we use natural language.”
The same conceptual trap opens in every purported equivalence. If an AI program can find the answer to a complex problem such as “how do proteins fold?”, and do so far faster than we can, we immediately project this supposed equivalence into “super-intelligence.”
The problem is the AI program is simply parroting techniques generated by humans and extrapolating them at scale. The program doesn’t “understand” proteins, their functions in Nature or in our bodies, or anything else about proteins that humans understand.
Defining anything by equivalence is false logic, a false logic we fall into so easily because words are traps that we don’t even recognize as traps.
Wittgenstein concluded that all problems such as “is AI intelligent?” were based in language, not the real world. Once we become ensnared in language and its implicit byways and restrictions, we lose our way. This truth is revealed by words that have no direct equivalent in other languages.
One example of this is the Japanese word aware (a-waar-re), which has a range of nuanced meanings with no equivalent in English: a sweet sadness at the passage of time, a specific flavor of poignant nostalgia and awareness of time. This word is key to understanding Japanese culture, and yet there is no equivalent word in English, either in meaning or cultural centrality.
In other words: what if there is no equivalent, and the supposed equivalence is nothing more than a confusion caused by word-traps and false logic? Then the entire supposition that we can model human intelligence with mechanistic equivalences (intelligence is a mechanism) collapses, along with projections of "super-intelligence."
The temptation to keep equating "intelligence" with programs via mechanistic equivalence is compelling because we're so embedded in the mechanistic model that we don't even realize it's a black hole of false logic with only one possible output: nonsensical claims of "intelligence" based on some absurdly reductionist equivalence.
The temptation within this mechanistic conceptual trap is to reckon that if we only define our words more carefully, then we'll be able to "prove equivalence is real." This too is false. Wittgenstein eventually moved away from the view that the imprecision of language is the source of all our intellectual problems. It isn't that simple: more precise definitions only generate more convoluted claims of false equivalence.
The book The Myth of Artificial Intelligence: Why Computers Can’t Think the Way We Do (via B.J.) lays out the false conceptual assumptions holding up the entire edifice of AI.
Michael Polanyi’s classic Personal Knowledge: Towards a Post-Critical Philosophy explains that knowing is an art, a reality explored by Donald Schon in The Reflective Practitioner: How Professionals Think In Action.