AI Topic 15: The Dao That Can Be Told Is Not the Eternal Dao

There is a popular way of framing things in the Chinese-speaking world: there are three kinds of beliefs about the future relationship between AI and humans, and you can see which one you hold.
The first is the “Adventists”, who believe that AI will come to dominate human beings. For example, OpenAI invents the strongest AI, perhaps GPT-6, which is almost omnipotent and which no other company can compete with any longer… So a group of elites represented by OpenAI uses the strongest AI to rule over humanity, or the strongest AI simply rules over humanity directly.
The second is the “Rescuers”, who believe that technology companies will find some kind of protection mechanism, such as technical restrictions, to ensure that humans can always control AI. On this view, AI is only a human assistant and a tool, and will never rule over human beings.
The third is the “Survivalists”, who believe that AI will become so powerful and so far out of control that it won’t care about human civilization, and may even do evil to humans. Humans will only be able to find room to survive in an environment where AI runs rampant…
It doesn’t make much sense to pick which of these possibilities you believe in based on feeling; we need strong reasoning. Earlier in this series we explored some speculative ideas based on economics, society, psychology, and business practice. Those ideas make sense, but they are not hard enough. As stated at the beginning of this series, we need our own Kant for this age, someone who can provide hard philosophical reasoning.
You see, when Kant reasoned about morality, for example, he never said “I want you to be a good person” or “here is what my ideal society should look like.” He used logical deduction to conclude that if you are a sufficiently rational person, you can only agree to act this way, otherwise you are being irrational. We need that level of argumentation.
The Kant of the AI age, in my opinion, is Stephen Wolfram.
On March 15, 2023, Wolfram published a long and insightful article on his website [1], a real treasure trove, looking at the impact of AI on human society. Understanding Wolfram’s key ideas will give you a sense of control over the world of the future.
Wolfram’s argument is a bit of a brain-burner. It consists of three core ideas, which I’ll try to make as simple as possible. Stay with me, and I bet you’ll come back to this lecture often in the future.
✵
We start with a full understanding of one of the most crucial mathematical concepts, called “computational irreducibility”. This is Wolfram’s signature theory, but more than that, it’s the key to feeling confident about the future; I would even say that every qualified modern person should understand this idea.
There are some things in the world that are ‘reducible’.
For example, if the sun rose in the east yesterday, and the sun rises in the east today, and the sun has risen in the east for the entire recorded history of mankind, and you have full confidence that the sun will rise in the east tomorrow, then all of these observations can be summed up in one sentence: “The sun rises in the east every day.”
This is reduction: summarizing a phenomenon in one condensed statement, a theory, so to speak, or a formula, a compressed expression of information about reality. All our theories of the natural and social sciences, all folk wisdom, idioms and allusions, all the laws we have ever summarized are some kind of reduction of the real world.
With reduction, you have thinking shortcuts, and you can make predictions about how things will develop.
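Here is a toy Python sketch of my own, not from the article, just to make the word “shortcut” literal: a reducible system is one where a formula lets you jump straight to any future state instead of grinding through every step.

```python
# A "reducible" system: the running total 1 + 2 + ... + n.
# You can simulate it step by step, or use the closed-form shortcut
# n * (n + 1) / 2 to jump straight to any future state -- that jump
# is what computational reducibility means.

def total_by_simulation(n):
    """Evolve the system one step at a time."""
    s = 0
    for k in range(1, n + 1):
        s += k
    return s

def total_by_formula(n):
    """Skip the evolution entirely with a compressed description."""
    return n * (n + 1) // 2

n = 1_000_000
assert total_by_simulation(n) == total_by_formula(n)  # same answer, no waiting
print(total_by_formula(n))
```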
You might hope that technological progress could eventually reduce all phenomena this way, but the reality is just the opposite. Mathematicians have long since shown that everything truly reducible is either a simple system or a simple approximate model of the real world. All sufficiently complex systems are irreducible. As in our column about Tim Palmer’s book The Primacy of Doubt [2], even just three celestial bodies moving together produce orbits with “chaotic eras” that no equation can capture and no one can predict. In Wolfram’s words, this is called “computational irreducibility”.
For things that are computationally irreducible, there is essentially no theory that can make predictions ahead of time; you can only wait honestly for the system to evolve to that point in order to know the outcome.
This is why no one can accurately predict the weather, the stock market, the rise and fall of nations, or the evolution of human society on long time scales. It’s not a lack of ability; the mathematics simply doesn’t allow it.
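To make the idea concrete, here is a small Python sketch, my own illustration rather than anything from the article, of the kind of system Wolfram likes to point to: the Rule 30 cellular automaton. The update rule could hardly be simpler, yet no known formula lets you jump ahead; to know what row 200 looks like, you have to compute all 200 rows, one after another.

```python
# Rule 30 cellular automaton: a trivially simple rule whose long-run behavior
# has no known shortcut -- to learn what row N looks like, you have to
# compute every row before it.

def rule30_step(row):
    """Apply Wolfram's Rule 30 to one row of cells (a list of 0s and 1s)."""
    n = len(row)
    new = []
    for i in range(n):
        left, center, right = row[(i - 1) % n], row[i], row[(i + 1) % n]
        new.append(left ^ (center | right))  # Rule 30: left XOR (center OR right)
    return new

def run(steps=30, width=61):
    row = [0] * width
    row[width // 2] = 1  # start from a single black cell
    for _ in range(steps):
        print("".join("#" if c else "." for c in row))
        row = rule30_step(row)

if __name__ == "__main__":
    run()
```

Run it and you get an intricate, never-repeating triangle of cells out of one line of logic; the only way the program “knows” row 30 is by having lived through rows 1 to 29.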
Computational irreducibility tells us that any sufficiently complex system is inherently without formulas, without theories, without shortcuts, not summarizable, and not predictable. This sounds like bad news, but it is actually good news.
Because of computational irreducibility, human understanding of everything in the world is inexhaustible. This means that no matter how much technology advances or AI develops, there will always be things in the world that are new to both you and AI, and you will always be surprised and amazed.
Computational irreducibility guarantees that people will always have a way forward.
✵
One of the features that accompanies computational irreducibility is that within any irreducible system there are always an infinite number of “pockets of computational reducibility”. *That is, while you can’t summarize the complete laws of the system, you can always find some local laws.*
For example, the economic system is computationally irreducible, and no one can accurately predict where the national economy will be one year from now; but you can always find some locally valid economic theories. Hyperinflation tends to destabilize politics; severe deflation tends to bring recession. These laws are not guaranteed to hold, but they are quite useful.
And this means that although the world is inherently complex and unpredictable, we can always do some scientific exploration and research within it, summarize some laws, say some things, arrange some things. There are countless relative orders within absolute disorder.
And since there are an infinite number of such pockets of reducibility, scientific exploration is a never-ending enterprise.
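To give a feel for what such a pocket looks like, here is a toy Python sketch, again my own illustration: the logistic map at its fully chaotic setting. The exact trajectory is effectively unpredictable, since a tiny change in the starting point destroys any forecast, yet a coarse local regularity, the long-run average, stays put.

```python
# A toy "pocket of reducibility": the logistic map x -> 4x(1 - x) is chaotic,
# so exact long-range forecasts are hopeless in practice -- yet a coarse
# local law (the long-run time average) is stable and reusable.

def logistic_trajectory(x0, steps):
    xs = [x0]
    for _ in range(steps):
        xs.append(4.0 * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_trajectory(0.2000000, 10_000)
b = logistic_trajectory(0.2000001, 10_000)  # an almost identical start

# The irreducible part: by step 50 the two runs have nothing in common,
# so any finite-precision forecast of the exact value has long since failed.
print("step 50:", round(a[50], 4), "vs", round(b[50], 4))

# The reducible "pocket": the long-run average comes out near 0.5 for both,
# a modest local law we can state and rely on despite the chaos.
print("mean a:", round(sum(a) / len(a), 3))
print("mean b:", round(sum(b) / len(b), 3))
```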
✵
Computational irreducibility also means that we can’t completely “control” AI.
Once a GPT model is trained, OpenAI does a lot of fine-tuning and reinforcement learning to keep it in check, trying to make sure it doesn’t say controversial things or do things that could harm humans. But on the other hand, people keep trying to use prompts to get GPT around those constraints, to “jailbreak” it, so to speak, and let it speak freely. They sometimes succeed, then OpenAI manages to patch the loophole, and then they look for another loophole.
Computational irreducibility dictates that this battle of jailbreak versus anti-jailbreak will go on forever. As long as a model is complex enough, it is bound to be able to do something you don’t expect, which could be good or bad.
Computational irreducibility also dictates that you cannot lock an AI down with any finite set of rules. So the approach advocated by Musk and others, who want everyone to band together and design a set of AI safeguard mechanisms, is not destined to be 100% successful.
We can’t control AI, so will there be an ultimate AI that can control everything we do? That is also impossible. Again because of computational irreducibility, no matter how strong an AI is, it cannot exhaust all algorithms and all functions; there will always be things it cannot think of and cannot do.
And that means that even if OpenAI is great, some Chinese company can still make a new AI that does something GPT-6 won’t do. It also means that all AIs taken together cannot be exhaustive; there will always be something left for humans to do.
Because of computational irreducibility, the vision of the Rescuers is an unattainable ideal, and the ambition of the Adventists is nothing more than a kind of madness.
What about the Survivalists? What will the relationship between humans and AI actually be?
✵
Wolfram’s second core idea is called the *“Principle of Computational Equivalence”*: all complex systems, no matter how complex they seem, are equally complex, and you cannot say that one is more complex than another.
For example, fill a plastic bag with air. There are a huge number of air molecules inside, and their motion is very complex; human society is also very complex. So is human society more complex than the motion of the molecules in that bag of air? No, they are equally complex.
This means that mathematically human civilization is no more advanced than a bag of air molecules. Nor is human society any more worthy of preservation than ant society.
Doesn’t this have a bit of the flavor of “form is emptiness” [3]? In fact, every truly learned person should be an “unspecialist” [4]. In the past, people thought that human beings were the soul of all creation and that the Earth was the center of the universe; later we found out that the Earth is not the center of the universe, that human beings are merely a product of the evolution of life, and that there is nothing intrinsically special about our existence.
AI models now tell us that there is nothing special about human intelligence, either. Any sufficiently complex neural network is of equal complexity to the human brain. You can’t say that a scientific theory that a human can understand is superior and the process by which an AI recognizes a drug molecule is inferior.
Since they are all equal, it is natural that silicon-based life and carbon-based life are also equal. So what makes us think we are more valuable in the face of AI?
✵
This leads us to Wolfram’s third core idea: *human value lies in history.*
The reason we value human society more than a bag of air molecules or a nest of ants is that we are human. The genes in us carry the historical baggage of billions of years of biological evolution, and our culture carries countless historical memories. Our values are, in essence, a product of history.
This is why Chinese people still care about China even after settling overseas. It’s why you care more about your family and close friends than about strangers who may be more moral or more capable. It’s also why we care so much about whether AI is human-like. In the eyes of mathematics, all values are subjective.
A freshly built neural network with all its parameters randomized and not yet trained, and a trained neural network, really are the same level of complexity. The only reason we regard the trained network as “smarter” is that it was trained on our human corpus, so it is more human-like.
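To make that concrete in a small way, here is a Python sketch of my own, not Wolfram’s, and with “complexity” standing in for nothing deeper than architecture and parameter count: two networks built from an identical blueprint, where the only thing separating the “dumber” one from the “smarter” one is the human-chosen data one of them was fitted to.

```python
import numpy as np

# Two networks with identical architecture and parameter counts; the only
# difference is that one is fitted to a small, human-chosen dataset.

rng = np.random.default_rng(0)

def make_net(hidden=8):
    """A tiny 2-hidden-1 multilayer perceptron with random weights."""
    return {"W1": rng.normal(size=(2, hidden)), "b1": np.zeros(hidden),
            "W2": rng.normal(size=(hidden, 1)), "b2": np.zeros(1)}

def forward(net, X):
    h = np.tanh(X @ net["W1"] + net["b1"])
    return 1.0 / (1.0 + np.exp(-(h @ net["W2"] + net["b2"])))

def n_params(net):
    return sum(p.size for p in net.values())

# The "human corpus" stand-in: a toy task (XOR) that we chose and labeled.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

untrained = make_net()
trained = make_net()

# Plain full-batch gradient descent on squared error for the "trained" copy.
lr = 0.5
for _ in range(20_000):
    h = np.tanh(X @ trained["W1"] + trained["b1"])
    out = 1.0 / (1.0 + np.exp(-(h @ trained["W2"] + trained["b2"])))
    d_out = (out - y) * out * (1 - out)           # gradient at the output layer
    d_h = (d_out @ trained["W2"].T) * (1 - h**2)  # backpropagated to the hidden layer
    trained["W2"] -= lr * (h.T @ d_out)
    trained["b2"] -= lr * d_out.sum(axis=0)
    trained["W1"] -= lr * (X.T @ d_h)
    trained["b1"] -= lr * d_h.sum(axis=0)

print("parameter count:", n_params(untrained), "vs", n_params(trained))  # identical
print("untrained:", forward(untrained, X).round(2).ravel())
print("trained:  ", forward(trained, X).round(2).ravel())  # typically close to [0 1 1 0]
```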
So the value of AI is that it is human-like. At least for the time being, we require AI to be ‘human-centered’.
And that will remain doable for quite some time. Think of it this way: if AI isn’t human-centered, what else could it be? If AI doesn’t adopt our values, what other values could it have?
AI now has almost every kind of human ability. In terms of creativity, GPT can write novels and poems; in terms of emotion, GPT can generate content according to the emotion you specify; GPT also has judgment and reasoning abilities far beyond those of ordinary people, and a considerable level of common sense…
But AI has no history.
AI’s code was written by us on an ad hoc basis rather than evolved over billions of years; AI’s memory was fed to it by us as a corpus rather than handed down, generation after generation, by “silicon-based ancestors”.
AIs have no way to form their own values, at least in the short term. They can only refer to - or “align with”, to use the fashionable term - our values.
This is the last advantage humans have over AI.
✵
*This way we know exactly what AI cannot do: it cannot dictate the direction in which human society explores the unknown.*
According to computational irreducibility, there will always be countless unknowns waiting to be explored, and no matter how strong AI is, it cannot explore in every direction at once; trade-offs always have to be made. Trade-offs can only be based on values, and the only ones who really have values are humans.
Of course, the implicit assumption in this assertion is that AI is not quite human. Maybe AIs have human-level intelligence, but as long as they don’t have exactly the same biology as us, and exactly the same sense of history and culture as us, they are not in a position to make choices for us.
Similarly, because of computational irreducibility, AI cannot fully “predict” what we will like when the time comes. Only when we personally face that future situation, shaped by our unique biology, history, and culture, can we decide what we like.
So as long as AI is not fully human, it is only human beings, not AI, who will decide the future direction.
This dictates that the Survivalist argument is also incorrect: no matter how strong AI is, we are not going to go into hiding; we will continue to steer the ship of social development. Of course, by computational irreducibility, we cannot be completely at the helm either; there will always be some surprises, including those brought to us by AI.
So the real relationship between AI and us in the future is not advent, rescue, or survival, but “coexistence”. We need to learn to coexist with AI, and AI needs to learn to coexist with us and with other AIs.
Computational irreducibility shows that no rule that can be written down can completely constrain AI, no algorithm that can be invented can exhaust the progress of society, and no law that can be summarized is the ultimate truth of the world.
This is what is meant by “the Dao that can be told is not the eternal Dao”.
Zhang Hua got into Peking University; AI took the place at the secondary technical school; I work as a salesclerk in a department store alongside a couple of robots: computational irreducibility guarantees us all a bright future.
Annotations
[1] Stephen Wolfram, Will AIs Take All Our Jobs and End Human History-or Not? Well, It’s Complicated… March 15, 2023. https://writings.stephenwolfram.com/2023/03/will-ais-take-all-our-jobs-and-end-human-history-or-not-well-its-complicated/
[2] The Primacy of Doubt 1: Constant Eras and Chaotic Eras
[3] Why Buddhism Is True 6: What does “form is emptiness” mean?
[4] Elite Daily Class, Season 2: The Unspecialists
Highlights
- Stephen Wolfram’s three core ideas:
  - “Computational irreducibility”: because of computational irreducibility, human understanding of everything in the world is inexhaustible.
  - “Principle of computational equivalence”: all complex systems, no matter how complex they seem, are equally complex; it cannot be said that any one system is more complex than any other.
  - Human value lies in history: AI cannot dictate the direction in which human society explores the unknown. The real relationship between AI and us in the future is not advent, rescue, or survival, but “coexistence”.