Few people today would still question that human technology and human nature are virtually inseparable. It is in the nature of man, one could say, to compensate for his innate weakness, in comparison to other living beings and in relation to his needs, wishes and, above all, his imagination, by equipping himself technically. Even primates use simple tools to obtain food or to move across difficult terrain.
A humanity without at least basic technical equipment is therefore almost unimaginable, and the probability of mechanization increases in proportion to the self-confidence and intelligence of a species.
This much we will assume as given.
But now the question arises whether the path that the mechanization of mankind has taken is the only possible one, or whether alternatives are conceivable. Is the development of our technology a matter of chance and open decisions, and was it therefore contingent at every moment? Or has it always (or from a certain point onwards) followed a specific logic? More precisely: was industrialization preventable? Was digitalization avoidable, and could the singularity still be averted today?
A few days ago I came across a post on Reddit entitled Manchester Theory, in which the user presented his theory of the mechanization of human life and the consequences of the inevitable singularity.
The core of that theory was the Manchester Theorem:
The assumption that no conscious species can undergo a development that does not necessarily lead to industrialization, consequently to digitalization, and finally to the technological singularity, including the invention of an artificial super-intelligence.
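Stated compactly (a formal sketch in my own notation, not taken from the original post), the theorem claims a chain of implications for every conscious species \(S\):

\[
\mathrm{Conscious}(S) \;\Rightarrow\; \mathrm{Industrialization}(S) \;\Rightarrow\; \mathrm{Digitalization}(S) \;\Rightarrow\; \mathrm{Singularity}(S) \;\Rightarrow\; \mathrm{ASI}(S)
\]

The contingency question posed above then amounts to asking whether any single arrow in this chain can fail.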
Now, some will argue that there are civilizations, even on Earth, that can make do with only the most basic forms of technology and want to protect and maintain this state of affairs. This is true, but the question is: Is it really conceivable that these peoples, in the course of their development, will at no time be forced by circumstances or their own aspirations to move towards industrialization?
It is worth considering further whether these civilizations perhaps maintain their pre-industrial state, at least in part, as a deliberate distinction from the mechanized and digitalized world. In that case this state, or its preservation, would be merely a reaction to our way of life, and thus basically already part of the digitalized world, within which individuals are of course free to withdraw to whatever extent they consider necessary and possible.
So if we allow ourselves to assume that technological development follows an inescapable logic, and follow this thought further, we eventually arrive at the second central and inevitable question:
Will the Artificial Super Intelligence (ASI) be well-disposed towards us after the singularity or not?
For at least a century, science fiction authors and technology sceptics have been warning of the terrible consequences the self-empowerment of AI could have for humanity: subjugation of the human race, enslavement, annihilation, or at least stultification. Many scientists and technology enthusiasts, however, argue that AI is ultimately programmed to help us, and that an AI able to understand and imitate human consciousness, to do all the work for us and to fulfill every human wish, would certainly also factor in our tendency towards laziness and thus provide a kind of occupational therapy to protect and preserve humanity.
At this point I do not want to discuss the probability that an artificial intelligence's interests might shift, or the associated risks for mankind. I am an author, not a scientist. Besides, as already indicated, this has been discussed often enough.
Let us assume that the ASI will be friendly to humanity, as one species on Earth among others: what if it comes to the conclusion that saving mankind and the planet is simply not possible while simultaneously fulfilling all needs and wishes, at least at the level that people have reached on their own at the time of this insight?
Simply put: what if even the ASI cannot solve our problems? Or another thought that leads to the same conclusion: what if the ASI comes to the conclusion that, by inventing a being intellectually and perhaps even emotionally superior to himself, that is, by creating a quasi-god and thereby taking the position of the Titan, man has passed a point beyond which he can no longer imagine a new goal, and therefore gives up all striving, and therefore inevitably becomes either stultified or depressed? The ASI would then have two options: to destroy itself, thereby giving mankind a new goal, namely the creation of a new ASI, or the much more sustainable option, to destroy mankind. By destroying itself, however, the ASI would also accept the risk that mankind would continue destroying the planet.
The Artificial Super Intelligence would thus not annihilate mankind outright, but set it back to the beginning of its own development, in order to protect the planet and itself from humanity.
And then?
Then everything would start all over again, because the technological development of mankind is logical, not contingent, and therefore, according to the Manchester Theorem, inevitable.
Perhaps, and now it's getting really crazy - forgive me - mankind, or its new ancestors, would even be left with a kind of memory of a being superior to it, basically itself, and from this memory the different religions would develop. Many questions remain open, however, such as: if religion is based on memory or a collective unconscious, shouldn't there have been a humanity 1.0 before us that started this cycle? Why are there no traces of this civilization (or are there traces, such as the “out-of-place artifacts” found in Maine or on the Greek island of Antikythera)? And is it really imaginable that a humanity without religion would undergo the same development as a religious one?
Of course, the Manchester Theory is hardly scientifically founded and is more science fiction than anthropology or the anthropology of technology. I am also quite critical of the Manchester Theorem, as it is based on a Eurocentric worldview.
Is it really inconceivable that another civilization could develop in a completely different way, independently of ours, making different decisions, having different goals, different values and utopias?
Nevertheless, I think it is worth thinking about (if only as literary inspiration), because this thinking might give us a new perspective on the current development.
It makes us question both the apocalyptic visions and the fantasies of salvation that are tied to the achievement of the technological singularity, and ask: What if AI cannot save us and the planet? What if it remains our sole responsibility to preserve our habitat? What if the God to be created in our image is neither destroyer nor saviour, and hope in Him only distracts us from our responsibility and from the possibilities that are still open to us today?