
Machine Intelligence and Human Uniqueness


Radiologists benefit from Artificial Intelligence in diagnostic support. Employees in multilingual, transnational companies are delighted with the quality achieved by machine text translation. But there is no wisdom ex machina, and no one should pretend otherwise.

Guest article by Reinhard Karger, Frankfurter Allgemeine Zeitung, January 2, 2024

For thousands of years, humans have been inventing tools that make their lives easier or enable them to survive. The concept of tool autonomy, and with it the idea of human-tool communication, dates back to antiquity. As early as 2,350 years ago, Aristotle discussed the self-acting tool, which "could carry out its task in response to instructions received, or even by guessing the commands in advance." For Aristotle, automation in the sense of self-activity is linked to an egalitarian, albeit elitist, socio-political utopia, because "then the masters would not need journeymen, nor the lords servants" (Aristotle, "Politics," Book 1, Chapter 4).

Tools expand the human scope for action, increase the degrees of freedom in execution, and open up more efficient ways of achieving goals. The work is made easier but not eliminated; the human contribution remains identifiable, the tools recognizable as tools. With AI, people are challenging themselves and their self-image in a new and fundamental way. There is no reason for self-deprecation, but reason enough to take a fresh, critical look at what is human, what is taken for granted, and what machines can do. We should be more modest and more demanding at the same time. There may be worrying news, but the prospects are predominantly good.

AI means the digitalization of human knowledge skills, but the underlying tension becomes more obvious with the term "machine intelligence." This is because it is only secondarily about "natural" versus "artificial" and primarily about humans and machines. The numerous human knowledge skills include reading, writing, and arithmetic, which we characterize as cultural techniques. They also include speaking, of course, in which we as socialized language subjects know what we can achieve pragmatically and personally through our choice of words, speaking speed, and sentence melody, with powerful or restrained emphasis. But ultimately it is about thinking, and AI is about enhancing people's abilities.

In order to delimit axiomatically the portfolio of machine possibilities, the fundamental differences from human capabilities must be identified. What can humans do? And what can't a machine do? Evolutionary anthropology, which deals with the differences between non-human primates and Homo sapiens, provides an empirical anchor for answering the first question. The working hypothesis is that the species-specific difference can be read off from the ontogenesis of the individual. Although chimpanzee and human newborns develop similarly in the first few weeks of life, Michael Tomasello (2002) sees the decisive socio-cognitive switch at the end of the first year of life. He calls it the "nine-month revolution."

From the ninth month onwards, the human infant, together with its closest caregivers, begins to participate and act in situations, or as Tomasello puts it, in "scenes of joint attention." The nine-month-old starts to follow the gaze of the mother or father and learns that an action is directed towards an object. In such a "scene of joint attention," the participants are triadically related to the other person, to themselves, and simultaneously and jointly to the same person, object, or event.

The infant experiences its own intentions directly and physiologically and perceives the behavior of its mother or father. It understands that the mimic, gestural, or vocal expressions of its closest caregivers refer to the same object, and it has the astonishing transfer ability to conclude that the expressions of others correspond to its own reaction because the intentions, wishes, and motives are similar. Starting from this pre-linguistic experience, a process begins that enables humans, but not apes, to adopt each other's perspectives throughout their lives. For Tomasello, this juncture is constitutive: "The importance of scenes of joint attention cannot be overemphasized" (Tomasello, 2002, p. 132). The ability to take on each other's perspectives is the prerequisite for social intelligence and a human monopoly that "is not found in any other species on this planet" (Tomasello, 2002; Habermas, 2012). This is crucial. But it is still the second step before the first.

Humans are confronted with their own arbitrariness, their inner world, and an over-complex natural and social environment. As Edmund Husserl put it in 1936, humans find themselves in their "bodily selfhood." The actual presence of desire or the experience of fear is fundamental. In 1867, Charles S. Peirce categorized actually experienced subjective sensory content as "firstness" and coined the term "qualia" for it. Qualia are the substance of the ability to feel; they are mediated by the inner or outer senses and are physically experienced by humans. Qualia can be accessed subjectively but not objectively, and although the occasional impression to the contrary arises, Brain-Computer Interfaces (BCI) cannot read thoughts; they can only localize or identify regions or patterns of neuronal activity.

Qualia are the second necessary prerequisite for social intelligence. People can claim the first-person perspective and thus the veracity of a personal experience. They can attribute the actions of others to intentions, assume goals, construct hypothetical plans, and predict next steps because they can assume that the probability of a possible next action would correspond to their own actions if they had the same goal. Guided by their experience of the world and oriented by the emotions they have experienced themselves (joy, interest, surprise, fear, anger, sadness, disgust), linguistically competent people can justify predictions about the expected behavior of others, which are constantly incorporated into upcoming decisions. In this way, people move in the space of socially, culturally, and institutionally networked reasons, can provide information, explain preconditions, and refer descriptively to real-world facts that serve as a reliable basis for conclusions appropriate to the situation.

The second question was: what can't machines do? To build up the dimensions of the human-machine difference constructively, starting with the last point: machines cannot feel qualia, nor are they subject to them. As of today, there is no promising approach to a psychophysical reduction. Concepts such as desire or lack, hope, fear, pleasure, or mood are not comprehensible to machines and, therefore, cannot be applied to them. Machines can use eye-tracking when processing a pointing gesture and can identify probable targets, but they are not participants or actors in "scenes of joint attention." They have no intentions or plans, no self-imposed goals, no will to strive for them, and no capacity for reenactment that would let them infer the causally responsible motives from the phenomenological surface. Machines have no first-person perspective and cannot adopt a perspective. They have no access to the human monopoly of social intelligence and can only generate a weighting when selecting among alternative courses of action.

Visual and auditory environmental stimuli are sensed receptively, evaluated, and classified using AI. Technical sensors convert a signal stream into a data stream; patterns are identified, information is extracted, and the probability of a subsequent action is determined, but qualia are not perceived. Such systems can be described as self-learning. Still, this technical concept of learning does not correspond to human learning in content, form, procedure, or results, for which self-experienced intention, social community, and a conceptual understanding of language are necessary: "First and foremost, it is the interplay of intentional relationship to the world, mutual adoption of perspectives, use of a propositionally differentiated language, instrumental action and cooperation that enables the learning processes of a socialized intelligence." (Habermas, 2012, p. 52)

The importance of these differences cannot be emphasized enough, for they have consequences for realistic, practical expectations of the upper limit of machine performance and the functional capabilities that are achievable in principle. The decisive factor is that machines cannot be the target of moral demands and that there can be no machine morality, because "ethics is a restriction of the drives," as Sigmund Freud wrote in 1939 in his last publication, "The Man Moses and the Monotheistic Religion." Machines have no drives and need no drive control. And David Hume wrote as early as 1751: "Extinguish all the warm feelings and prepossessions in favour of virtue, and all disgust or aversion to vice: Render men totally indifferent towards these distinctions; and morality is no longer a practical study, nor has any tendency to regulate our lives and actions" (Hume, 1751, M 1.8, SBN 172). Without emotion, pleasure is merely a word. A welcome added value of these statements is a knowledge-oriented emancipation from interest-driven marketing promises, a liberation from hubris and from verbose, visually stunning dystopias. The insentience of machines also means that they cannot suffer and therefore have no rights of their own, not even a right to electricity. We can continue to regard them as things or objects, use them, recycle or upcycle them, break them down into components, melt them down, and reuse them. When ethical questions are raised in the legitimate discussion about applications of AI technology, they are directed at developers, providers, users, and regulators, not at any kind of moral machine subroutine.

The function of human morality is the prosocial self-regulation of action that is otherwise driven by the egocentric needs, desires, and goals of the individual actor. Acting out greed or pursuing possible satisfaction is limited by the internalized resistance of the group. The point of human morality is that the generalization of interests on the basis of self-other equivalence is an extremely useful test for sensing whether an action is to be regarded as just or even desirable (Tomasello, 2016, Chapter 3.2). But machines feel nothing, cannot adopt perspectives, have no intentions of their own, no goals, never suffer, and are therefore not possible addressees for any form of moral self-control. Hume puts it in a nutshell: without heartfelt feelings for virtue and abhorrence of vice, morality cannot regulate our actions.

However, machines should be able to work hand-in-hand with people. It must, therefore, be ensured that their actions are equally goal-oriented and appropriate. Since machine morality is not a possible control concept, specifications, rules, or laws, i.e., high-resolution positive legality, must constructively fill the gap. If the legal principle of general freedom of action (Basic Law, Art. 2, para. 1) were applied to machines as a shortcut (everything not prohibited is permitted), one would lose sight of the very different scopes of action of humans and machines; think of strength, endurance, or speed. Keeping this difference in view is crucial so that a singular optimization criterion does not lead to a social disaster.

To ensure application legality in decision-making contexts, robust AI systems are needed that fulfill formal explainability requirements, because these enable strong guarantees and certificates. And with that, we have reached the eye of a scientific hurricane. Since the beginning of AI research almost 70 years ago, there has been a paradigm dispute, dividing the field into camps, over "symbolic" versus "sub-symbolic" processing. On the one side, systems are built that process signs symbolically according to rules and derive the meaning of a whole from that of its parts (and the way they are connected). These systems can deliver comprehensible and falsifiable results, can be regarded as instances of cognitive intelligence, and allow conclusions to be drawn. On the other side, a sub-symbolic approach is pursued, which is data-driven, massively parallel, and network-based, without identifiable cognitive intermediate steps. Results are only possibly correct: the quality of a result can be evaluated, but the way in which it was produced cannot be reconstructed; the result can be accepted or validated, but not verified. When we talk about self-learning systems, artificial neural networks, or deep learning today, we are talking about this approach.
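The contrast can be made concrete with a minimal sketch in Python (purely illustrative; the toy domain, rule set, and weights are invented for this example, not taken from the article). The symbolic engine records a derivation for every conclusion it reaches; the sub-symbolic scorer returns only a probability with no reasons attached.

    # Illustrative toy only: rules, facts, and weights are invented for this sketch.

    # --- Symbolic: forward chaining over explicit rules; every conclusion has a trace.
    RULES = [
        ({"has_fever", "has_cough"}, "flu_suspected"),
        ({"flu_suspected"}, "recommend_test"),
    ]

    def derive(facts):
        """Apply rules to a fixpoint, recording how each new fact was derived."""
        trace = []
        changed = True
        while changed:
            changed = False
            for premises, conclusion in RULES:
                if premises <= facts and conclusion not in facts:
                    facts.add(conclusion)
                    trace.append((sorted(premises), conclusion))
                    changed = True
        return facts, trace

    _, trace = derive({"has_fever", "has_cough"})
    for premises, conclusion in trace:
        print(premises, "=>", conclusion)   # each step is inspectable and falsifiable

    # --- Sub-symbolic: a learned scorer; the output is a probability, not a derivation.
    import math

    WEIGHTS = {"has_fever": 1.2, "has_cough": 0.8}  # hypothetical learned parameters
    BIAS = -1.0

    def score(observed):
        z = BIAS + sum(WEIGHTS.get(f, 0.0) for f in observed)
        return 1.0 / (1.0 + math.exp(-z))   # a number between 0 and 1, no reasons attached

    print(round(score({"has_fever", "has_cough"}), 2))  # plausible, but not verifiable

The toy domain is beside the point; the asymmetry is what matters: the first output can be verified step by step against the rules, while the second can only be validated against outcomes.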

The two research communities compete for scientific recognition, academic careers, social esteem, and financial and human resources. They are also motivated by the understandable need to be right and by the fascinating idea of realizing all applications monistically with a single approach. Symbolic systems remain unbeaten in the construction of conceptually consistent knowledge graphs and in logical reasoning, where a result is derived step by step, comprehensibly, from first principles. The sub-symbolic and currently very successful artificial neural networks and large language models (LLMs) can claim to have enabled AI solutions that, among other things, recognize spoken language, translate or generate texts, and identify objects better than was ever possible with rule-based approaches. But there is no explicit understanding of context or symbols on the side of sub-symbolic solutions. As machine text translation shows, such understanding is not always necessary to realize a high-performance language technology application. The success of deep learning is breathtaking, and many applications are practical. However, this holds only where a potentially correct result is sufficient, and it often requires a "human in the loop" to determine a result's suitability before it is used. This means, on the one hand, that the lack of reliability rules out the non-trivial use of autonomous systems and, on the other, that explainability (of results) and responsibility (for consequences) are outsourced to humans.

For the use of machine intelligence to be comprehensively meaningful and necessary for humanity, technical systems must migrate into the "space of reasons," as Habermas would put it. The space of reasons is inherently linguistic and, therefore, symbolic: "Developed linguistic communication can be described as the kind of communication that opens up a common objective world in the horizon of an intersubjectively shared lifeworld through the meaning-identical use of symbols" (Habermas, 1999/2022, vol. 1, p. 240). Symbolic processing is necessary for success if we do not want to, and must not, limit the application classes of AI solutions to problems in which explainability, understood as a contradiction-free argumentative derivation from prior principles, plays no role. A spoken word is correctly recognized when it has been spoken. But a conclusion is not correct just because the probability of a word sequence occurring is high.
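A toy illustration of this last point (the mini-logic and the example argument are my own, not from the article): a word sequence can be highly probable under any language model because every local transition is fluent, while the inference it expresses is invalid; only a symbolic check exposes the difference.

    # Toy illustration: fluency is not validity. A next-word predictor would find
    # this sequence highly probable, since every local transition is ordinary
    # English; yet the inference is fallacious (undistributed middle): being
    # mortal does not make Socrates human.
    argument = "all humans are mortal ; socrates is mortal ; therefore socrates is human"

    # Symbolic validity check for one sound rule:
    # from ("all", A, B) and (x, "is", A) we may derive (x, "is", B), and nothing else.
    def follows(premises, conclusion):
        derived = set(premises)
        for (_, a, b) in (p for p in premises if p[0] == "all"):
            for (x, rel, c) in (p for p in premises if p[0] != "all"):
                if rel == "is" and c == a:
                    derived.add((x, "is", b))
        return conclusion in derived

    # The fluent argument above is NOT derivable ...
    print(follows({("all", "human", "mortal"), ("socrates", "is", "mortal")},
                  ("socrates", "is", "human")))   # False
    # ... while the classically valid syllogism is.
    print(follows({("all", "human", "mortal"), ("socrates", "is", "human")},
                  ("socrates", "is", "mortal")))  # True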

The explainability of machine recommendations and the reliability of machine decisions have opened up a new field of research known as TrustedAI or trustworthy AI, whose future results will be of significant importance for the productive use of AI systems. Although actual social intelligence is unattainable for machines, the development of cognitive machine intelligence could succeed. It is to be hoped that TrustedAI will be endowed with the necessary intellectual seriousness and with sufficient financial and human resources, through a joint effort of public research funding and private-sector investment. Open research questions include: Will it be possible to let assertoric judgments, i.e., those claiming agreement, and problematic statements, i.e., those based only on probability, refer to each other in a chain of reasoning without jeopardizing the validity of a conclusion? Will it be possible to create integrated AI systems that combine the advantages of symbolic deductive and sub-symbolic neural approaches in a hybrid approach, also known as neuro-symbolic, neuro-explicit, or neuro-mechanistic, and overcome the disadvantages of both?
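What such a hybrid could look like can be hinted at with a deliberately simplified sketch (all names, scores, thresholds, and rules here are hypothetical): a sub-symbolic perception stage proposes symbols with confidence values, and a symbolic stage reasons only over the sufficiently certain symbols, so that the final conclusion carries an auditable derivation.

    # Hypothetical neuro-symbolic sketch: a learned perception stage proposes
    # symbols with probabilities; a symbolic stage grounds and reasons over them.
    from typing import Dict, List, Set, Tuple

    def neural_perception(frame_id: str) -> Dict[str, float]:
        """Stand-in for a trained network: returns P(symbol | input)."""
        return {"red_light": 0.97, "pedestrian": 0.08}   # hypothetical outputs

    RULES: List[Tuple[Set[str], str]] = [
        ({"red_light"}, "must_stop"),
        ({"pedestrian"}, "must_stop"),
    ]

    def symbolic_layer(scores: Dict[str, float], threshold: float = 0.9):
        facts = {s for s, p in scores.items() if p >= threshold}  # grounding step
        derivation = []
        for premises, conclusion in RULES:
            if premises <= facts:
                derivation.append((sorted(premises), conclusion))
        return facts, derivation

    scores = neural_perception("frame_0042")
    facts, derivation = symbolic_layer(scores)
    for premises, conclusion in derivation:
        print(premises, "=>", conclusion)  # auditable: which rule fired, and why

In such a design, the grounding threshold and the rule set become the explicit, certifiable part of the system, while the learned perception stage remains statistical.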

Success is mission-critical, the scientific will is there, and whether the goal will be reached remains open. But why do we as a society need AI systems that combine the strengths of symbolic and sub-symbolic processing? Because technical solutions to which we can attribute machine autonomy and reliability are objectively necessary in order to master the upcoming technological, demographic, and cultural transformations. It is not illusory to hope for an AI dividend that makes decisive contributions to solutions in education, energy, logistics, health, mobility, recycling, and resource use, enables a sustainable circular economy, and, ideally, helps to stabilize cultural peace and globalize social justice.

Reinhard Karger studied theoretical linguistics; he has been an employee of the German Research Center for Artificial Intelligence (DFKI) since 1993, its corporate spokesperson since 2011, and a member of its Supervisory Board since 2022.

Published in the Frankfurter Allgemeine Zeitung, January 2, 2024

Online: https://www.faz.net/aktuell/wirtschaft/unternehmen/was-ki-nicht-kann-wo-die-maschine-zum-mensch-nicht-aufholen-wird-19419488.html

Sources
Sigmund Freud, The Man Moses and the Monotheistic Religion, London, 1939
David Hume, An Enquiry Concerning the Principles of Morals, 1751, Meiner Verlag, 2003
Jürgen Habermas, Post-Metaphysical Thinking II, Suhrkamp, 2012
Jürgen Habermas, Auch eine Geschichte der Philosophie, Suhrkamp, 2019, with a new afterword 2022
Edmund Husserl, The Crisis of European Sciences, 1936, Meiner Verlag, Hamburg, 2012
Charles S. Peirce, On a New List of Categories, Proceedings of the American Academy of Arts and Sciences (582nd session), May 14, 1867
Michael Tomasello, The Cultural Development of Human Thought, Suhrkamp, 2002
Michael Tomasello, A Natural History of Human Morality, Suhrkamp, 2016

Contact:

Reinhard Karger, M.A.

Corporate Spokesperson, DFKI