The human brain slowly adapts to AI

Intelligent World

In our new series "Inside KI" we talk to university lecturer and AI expert Dr. Leon R. Tsvasman about various aspects of Artificial Intelligence. The focus of our first episode: How do the term and concept of "artificial intelligence", human consciousness and the human self-image fit together?

Lead picture: Gerd Altmann on Pixabay

Dr. Leon R. Tsvasman, born in 1968, works as a university lecturer in communication and media studies as well as in philosophical and ethical topics. He teaches at several universities and distance-learning institutions, such as the Wilhelm Büchner University Darmstadt, the IUBH International University, the Deutsche Welle Academy, the Macromedia University, the Heilbronn University, the TH Ingolstadt, the AI Business School Zurich and others.

One of his focal points is the connection between technology and society. He has also written various scientific and popular-science non-fiction books, such as "The Great Lexicon Media and Communication", produced in collaboration with Ernst von Glasersfeld, the founder of radical constructivism, and, together with his co-author, the AI entrepreneur Florian Schild, "AI-Thinking: Dialogue between a thought leader and a practitioner about the importance of artificial intelligence".

Dr. Tsvasman conducts research in the fields of cybernetic epistemology, anthropological systems theory and information psychology. He also pursues numerous other interests across a wide variety of disciplines.

In our new "Inside KI" series, we talk to him about various aspects and dimensions of the trendy topic of AI.


Fist or hammer - which tool is better for driving a nail into the wall?

Intelligent World: Dr. Tsvasman, perhaps to start from a general perspective - what can AI systems do better than humans, and in which areas do humans remain stronger?

Tsvasman: If I may "defuse" this question a little: the comparison of humans and machines is only acceptable if we do not regard the two as competitors. A human would not use his fist to hammer a nail into the wall. Nor do we call for a fist-versus-hammer competition to solve this particular problem. A runner does not compete against a car either. The relationship between humans and artificial intelligence is no different.

Our history has forced us to make a living from industrial work as highly specialized workers. But that does not mean we are predestined for this role. Such human activities were almost always a stand-in until a technical solution was found: muscle power was replaced by motors, mental arithmetic by electronic computers.

A tool - no matter how universal - always requires a human decision about its use. From an economic point of view, a tool must above all be efficient, i.e. do the task right. Human decisions, on the other hand, should be effective - that is, do the right task. A person can drive in nails more efficiently with a hammer than with his own or someone else's fist. With the hammer, however, the unevenness of the nail head's surface cannot be felt. Such sensing would be pointless for the purpose of driving nails - but humans can do this with their fingers, and many other things as well.

But for every narrowly defined task there is always a more efficient technical solution. The human body is there for life in general, a tool for a specific purpose. The person decides when to use which tool. This principle does not change even if the tool is called "AI". A chatbot, for example, can then automate routine conversations - and perform precisely this task more efficiently than a human.
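To make the tool analogy tangible, here is a minimal sketch - purely illustrative and not from the interview - of what such routine automation can look like in code: a rule-based chatbot with a few invented topics and canned replies, which hands anything non-routine back to a human.

```python
# Illustrative only: a tiny rule-based chatbot that automates routine conversations.
# The topics and replies are invented for the sake of the example.

ROUTINE_REPLIES = {
    "opening hours": "We are open Monday to Friday, 9:00-17:00.",
    "password reset": "You can reset your password under account settings > security.",
    "invoice": "Invoices are sent by e-mail at the end of each month.",
}

def answer(user_message: str) -> str:
    """Return a canned reply for a known routine topic, otherwise escalate to a human."""
    text = user_message.lower()
    for topic, reply in ROUTINE_REPLIES.items():
        if topic in text:
            return reply
    return "Let me connect you to a colleague who can help."

if __name__ == "__main__":
    print(answer("When are your opening hours?"))          # handled automatically
    print(answer("I want to discuss a custom contract."))  # handed over to a human
```

The tool handles the repetitive part efficiently; the decision about anything beyond the routine stays with a person.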

People remain creators and clients

Technology has never automated the whole person - it has automated processes based on the division of labor. It is no different with the human expertise of the post-industrial era. Here too, only the routine resulting from the division of labor is to be automated - routine that in any case distracts us from being consciously present as human beings and from making decisions. In such tasks, AI should and will perform much better than humans. It will enable targeted, rational problem-solving: from simple office activities to the independent management of the entire technical infrastructure of our civilization.

However, regardless of how technology develops, people remain the sole creators of their living environment and the clients. Even though people have liked to use other people as tools for their personal goals, that arrangement has always been a stopgap and has never been satisfactory.

So we do not compete with AI; with its help we free ourselves from routine and mutual instrumentalization. What we "can do better" will depend on how we understand ourselves. As autonomous carriers of consciousness with our own potential, we should emancipate ourselves in what makes us unique - creativity, spontaneity, improvisation, empathy, and of course knowledge, love, art. Even subjectivity is extremely valuable, especially since we as humanity are slowly realizing that there can be no absolute objectivity, only an intersubjectivity that is as little distorted as possible.

All of these things will soon be more important than technical skills, because they give us the answers to the most important questions - the "what" and the "why". The "how", however, will at some point be completely automated by AI.

Why artificial intelligence is different from consciousness

Intelligent World: Isn't the term "artificial intelligence" an unfortunate choice? After all, laypeople quickly associate it with artificial consciousness and "thinking" machines.

Tsvasman: AI is supposed to automate the "intelligent behavior" of people - behavior that above all requires the ability to learn. For this reason, computer scientists like to equate AI sub-fields such as "machine learning" or "deep learning" with AI as a whole. In addition to this ability to optimize itself, AI research also covers other areas of work, such as neural networks that attempt to imitate the human brain, and other exciting projects.
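As a purely illustrative aside (the data, the learning rate and the tiny model below are my own invention, not the interviewee's), the "ability to learn" can be reduced to a few lines of Python: instead of following hand-written rules, a minimal perceptron adjusts its internal weights from examples until its behavior matches them.

```python
# Illustrative only: a single perceptron "learns" the logical AND function from
# examples by correcting its weights - a minimal instance of machine learning.

examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

weights = [0.0, 0.0]
bias = 0.0
learning_rate = 0.1

def predict(x):
    s = weights[0] * x[0] + weights[1] * x[1] + bias
    return 1 if s > 0 else 0

# Repeatedly nudge the weights whenever a prediction is wrong.
for _ in range(20):
    for x, target in examples:
        error = target - predict(x)
        weights[0] += learning_rate * error * x[0]
        weights[1] += learning_rate * error * x[1]
        bias += learning_rate * error

print([predict(x) for x, _ in examples])  # expected: [0, 0, 0, 1]
```

The point is not the toy task but the principle: the program's behavior is shaped by data rather than fixed in advance - which is exactly the "intelligent behavior" that is being automated.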

What is automated is the expert ability of "intelligent behavior", not the whole person. This is why AI systems are also referred to as "expert systems". As assisted intelligence ("weak AI"), it is used to automate highly focused tasks in order to perform them more efficiently.

The higher and more complex form of AI (often called augmented intelligence) is intended to help us make better situational decisions. If you automate assistance and consulting expertise with the help of huge amounts of data (big data), that is certainly a remarkable data-processing achievement. But it is still not the mental achievement of a conscious being. To explain the difference, I have to make a brief philosophical digression.

As a conscious person you are an individual - unique and irreplaceable. In the role of an expert, however, a person solves tasks that are not bound to them personally and can be replaced in that role by someone with a comparable profile. Consciousness enables a human individual to live among other individuals in a society. A conscious individual remains largely autonomous as a whole, has free will, can judge and is responsible for his or her actions. From an evolutionary point of view, this autonomy is the most important prerequisite for consciousness, and the following rule of thumb applies: the higher the consciousness, the greater the autonomy. "Lower" animals such as insects are less autonomous. They are massively controlled by instincts and reflexes and - as individual organisms - can hardly overcome their behavioral patterns when environmental conditions change. Yet they are fascinating in their efficiency and often develop amazing swarm intelligence.

Differences between machine and human

In cybernetics we say that consciousness is "informationally closed". At the same time, it is structurally coupled to other subjects, because all human brains look back on the same evolution and each is located in an autonomous body. A subject is therefore fundamentally unable to make valid statements about its environment without constantly experimenting with it. From an evolutionary point of view, humans have achieved the highest possible degree of autonomy - with all its privileges and disadvantages. One of the privileges is thinking - being able to weight experiences internally in order to act appropriately in a changing environment. The typical disadvantages include dependency on languages and media.

Autonomously learning software, on the other hand, remains just an expert system. Although it can answer questions, it does not have to ask questions about the meaning of that knowledge. Its capacity for data transmission makes such AI efficient and precise, but it is doomed to remain a tool. Such tools can solve specialized tasks more efficiently than a person in the role of an expert. Yet this very ability to transfer data keeps AI "informationally open" - it remains a "trivial machine" without consciousness.
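For readers unfamiliar with the cybernetic vocabulary: "trivial machine" is Heinz von Foerster's term for a fixed input-output mapping, in contrast to a non-trivial machine whose reaction depends on an internal state shaped by its own history. The toy Python sketch below is my own illustration of that distinction, not part of the interview.

```python
# Illustrative only: von Foerster's "trivial" vs. "non-trivial" machine as toy classes.

class TrivialMachine:
    """Fixed input-output mapping: the same input always yields the same output."""
    def __init__(self, table):
        self.table = table

    def react(self, stimulus):
        return self.table[stimulus]


class NonTrivialMachine:
    """The response depends on an internal state that every input changes,
    so the machine's history shapes its future behavior."""
    def __init__(self):
        self.state = 0

    def react(self, stimulus):
        self.state += stimulus          # the past is folded into the present state
        return "calm" if self.state < 3 else "agitated"


trivial = TrivialMachine({"ping": "pong"})
print(trivial.react("ping"), trivial.react("ping"))   # pong pong - always identical

lively = NonTrivialMachine()
print([lively.react(1) for _ in range(4)])            # same input, changing output
```

An expert system in the sense described above behaves like the first class: predictable, efficient, and without an inner history of its own.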

Technically, AI could only achieve consciousness under one condition: if the autonomy that it by its nature does not need were simulated, and if media-mediated communication with structural coupling took the place of data access. That would then correspond to the idea of autonomous intelligence, or "strong AI".

The planetary consciousness

Intelligent World: So does it just take enough computing power and sufficiently complex neural networks - and at some point a machine will develop consciousness? Would we humans even notice?

Tsvasman: This question reminds me of the idea of the "noosphere". It was formulated in the early 20th century, in parallel and probably independently of each other, by the Russian philosopher and geologist Wladimir Wernadski and the French natural scientist and theologian Pierre Teilhard de Chardin. Transferred to today's technology: if all the AIs in the world were networked with one another, they could develop into a technical "world consciousness".

That said, our civilization - the actual human environment - is socio-technical-cultural and is not a direct continuation of biological evolution. In fact, a "planetary consciousness" would probably arise from the symbiosis of a "technical brain of the world", which is somewhat comparable to the limbic system of our brain, and human consciousness.

The computing power of countless quantum computers and highly complex neural networks, networked over undreamt-of bandwidths, is of no use without data to process. And data processed by such a "strong AI" only becomes information or knowledge when people can understand it in their own human way. This is already the problem with big data and business intelligence (BI): it is applied science whose aim is to gain valid knowledge from data, and on that basis to carry out economically targeted activities or, for example, to make strategic decisions.

If networked quantum computers with neural networks were to record everything that is even remotely relevant in the world - every movement in the macrocosm and microcosm, the body data of all people and so on - in real time, then everything imaginable would in principle be feasible in this world. So it comes down to what we can imagine.

Artificial intelligence as a tool: we imagined flying carpets and got airplanes

But first we have to learn to keep control in line with the potential of all people in this world - and that is the most important cybernetic problem of our time: we cannot cope with the complexity of a globalized world, with its exponential developments from big data through the coronavirus to climate change and the population explosion, without a global AI. The reduction of complexity that we have always been forced to pursue - so that we can achieve anything together at all - is reaching its limits. So we should first build the right tool, namely a global AI, and learn to master it without having to slow down its efficiency. In addition to the cybernetic problem, this also poses an ethical one. We therefore have to develop an ethical imperative for AI - the fictional Three Laws of Robotics are not sufficient for this.

Incidentally, it has always been the case that technology realizes our ideas, even if those ideas seemed a bit idiosyncratic beforehand. We imagined flying carpets and got airplanes.

In a virtually augmented digital reality with a global AI, there would be no more limits. The greatest challenge will be to develop a viable vision for it. This challenge is more difficult and more exciting than anything humanity has had to tackle together so far.

 
