Generative AI, especially ChatGPT, brought artificial intelligence into the public sphere and sparked a lot of highly speculative claims about machine intelligence. I’m open to discussion and unafraid of confronting uncomfortable truths. Indeed, our imagination and fearless thinking should pave the way for new possibilities. Dreams and speculations are valuable, as long as they’re presented as such. However, I find it concerning when public figures speak with undue certainty, particularly when making anthropological comparisons between humans and machines.

The Fall from Human Greatness

What really hit me was watching Doug Hofstadter express his despair about the eclipse of humanity. As a student, I had been influenced by his renowned book Gödel, Escher, Bach: an Eternal Golden Braid, often referred to as GEB (Hofstadter, 1979). The book delves into how cognition emerges from underlying neurological processes. Let’s examine Hofstadter’s comments on the state of AI, which I gathered from an interview:

I never imagined that computer systems would rival or even surpass human intelligence. It seemed like a goal so far away. My entire belief system was shaken; it’s a truly traumatic experience when some of your most fundamental beliefs about the world start to collapse. Particularly, the idea that human beings are soon going to be eclipsed. It felt as if not only my belief system was collapsing, but also as if the entire human race was about to be eclipsed and left in the dust soon. The accelerating progress has been so unexpected, it stirs a certain kind of terror of an impending tsunami that’s going to catch all of humanity off guard. It’s unclear whether this signifies the end of humanity, in the sense that the systems we created could destroy us, but it’s certainly conceivable. If not, it relegates humanity to a relatively minor phenomenon compared to something else that is far more intelligent and will eventually become as incomprehensible to us as we are to cockroaches. I find that terrifying. I hate it! I think about it almost every single day. And it overwhelms and depresses me in ways I haven’t experienced in a very long time. […] It makes me feel diminished; it makes me feel, in some sense, like a very imperfect, flawed structure. Compared with these computational systems which have a million or billion times more knowledge than I have, and are a billion times faster, it makes me feel extremely inferior. It almost feels like we deserve to be eclipsed. Unbeknownst to us, all we humans are soon going to be eclipsed and rightly so, because we are so imperfect and fallible. – Doug Hofstadter

He passionately conveys a sentiment many intuitively feel: The essence of humanism is under siege. We have fallen from greatness. Our unique skills are being surpassed, leading to concerns about our relevance.

I perceive Hofstadter’s view as human-centric, stemming from a longstanding tradition where humans are seen as central figures, akin to being God’s creation. This view encompasses our confidence in determining our fate; the idea of an individual separate from its environment; humans’ domination over objects; a hierarchy with humans at the pinnacle; and the notions of free will and of rational, independent beings arriving at a consensus in public discourse. Paradoxically, I will try to attack this human-centric view to save myself from his despair.

One could argue that machines are not intelligent since they are just doing statistics by computing some high-dimensional probability distribution (Bender et al., 2021) and that there is a difference between language processing and language understanding (Bender & Koller, 2020). However, these arguments do not convince many people. There always looms a counterargument: Maybe humans do the same?

Instead of going down the technical rabbit hole, I will try to approach Hofstadter’s comments from a distinct and somewhat radical angle by drawing on my reading of Luhmann’s social systems theory, which builds on radical constructivism. Along the way, I will not only discuss machine intelligence but also touch on the relation between machines, society and human beings.

I’m not asserting this as the absolute truth but rather as an interesting story that might be useful in some aspects. In fact, when I started reading Luhmann, I hated it! It made so much sense but also felt cruel, cold and depressing. However, like reading Nietzsche’s On the Genealogy of Morality, it has the potential to destroy some deep-seated beliefs only to bring something new and exciting into existence. While this theory is nothing more than a theory, it does challenge the confidence behind many claims, including those regarding machine intelligence.

Niklas Luhmann

Niklas Luhmann (1927-1998) was a largely self-taught sociologist. Like postmodern thinkers, he believed that pursuing metaphysics was no longer productive, as there are no ultimate grand narratives that can explain everything. Rather than delve into metaphysics, he meticulously developed a comprehensive theory of modern society: a supertheory that even encompassed itself and its creator.

Luhmann was an avid reader and writer, and he wasn’t hesitant to incorporate valuable concepts from fields like mathematics (Spencer-Brown), cybernetics (Wiener and others), and biology (Maturana and Varela). To encapsulate everything, he employed a high level of abstraction and a technical terminology, which can make his writings appear dry, cold and dense. Because his work is primarily descriptive, explaining things as they are and exploring potential reasons for their status, some categorize him as conservative. However, I perceive him as an incredibly well-read, sensitive, and discerning observer who wanted a new theory that can help us transition into a new form of stability, which is a rather progressive attitude.

Even though Luhmann tried to keep his distance from philosophy, he was well-read in it and certainly influenced by it. In his introductory book From Souls to Systems, Hans-Georg Moeller (Möller, 2006) highlights that Luhmann’s work is influenced by several philosophical giants:

  • Kant: Luhmann shifts Kant’s focus on cognition to a constructivist perspective (Luhmann, 1988).
  • Hegel: Luhmann transitions from Hegel’s ideas of unity and dialectic to concepts of multiplicity and from identity to difference. He argues against any essential unity of systems and any general type of cognition, such as Hegel’s spirit.
  • Marx: Luhmann borrows Marx’s view that society isn’t just a byproduct of spirituality, but he disagrees with the idea of one foundational system. Marx’s focus on the economy is too much of a simplification.
  • Husserl: Luhmann adapts Husserl’s work towards constructivism and incorporates many of his terms and ideas.
  • Habermas: Luhmann disputes Habermas’s mission of completing the enlightenment.
  • Postmodern thinkers: Luhmann draws from Deleuze’s radical differentiation, Derrida’s deconstruction, and Lyotard’s rejection of overarching narratives.

With respect to his media theory, Luhmann is quite close to the French philosopher Baudrillard but far less dramatic. While Baudrillard tends to express himself in dramatic metaphors and focuses solely on the media, Luhmann presents a supertheory of society where the mass media is only one of many systems, all administered by their respective codes. While Baudrillard’s texts are almost poetic, reading Luhmann can cause boredom.

Luhmann believed that the distinction between modernity and postmodernity is largely semantic. He argued that the last significant structural shift in society occurred in Europe between the sixteenth and eighteenth centuries, transitioning from stratified to functional differentiation. To Luhmann, labeling a functionally differentiated society as either ‘modern’ or ‘postmodern’ is inconsequential.

Although his theory can be unsettling, Luhmann was optimistic about the future. He agreed with the postmodern assertion that traditional philosophy had reached its end. However, he saw this as an opportunity for a rejuvenated, coherent self-description of society and a fresh theoretical framework for a new societal era:

Is this, after all, a postmodern theory? Maybe, but then the adherents of postmodern conceptions will finally know what they are talking about. The deconstruction of our metaphysical tradition pursued by Nietzsche, Heidegger, and Derrida can be seen as a part of a much larger movement that loosens the binding force of tradition and replaces unity with difference. The deconstruction of the ontological presupposition of metaphysics uproots our historical semantics in a most radical way. This seems to correspond to what I have called the catastrophe of modernity, the transition of one form of stability to another. – (Luhmann, 1993; Luhmann, 2000)

Social Systems Theory

So let me try to give you my incomplete and surface-level understanding of his theory:

Luhmann recognised the particular complexity that human beings present for social analysis because they are the bearers of three autopoietic systems: systems of life (cells, brains, organisms), systems of consciousness (mind), and systems of communication (social systems). As a sociologist he acknowledges but leaves aside the biological systems of human beings and instead focuses on the interactive relationship between their consciousness or psychic system and the social systems with which they interact. All psychic systems (minds) are in the environment of social systems and vice versa.

He famously argued that communication happens between psychic systems, not between (whole) persons or individuals. This seems counterintuitive, but if we give his claim a little more thought and clarify some terminology, it makes sense. Hans-Georg Moeller put it the following way:

You cannot communicate with me with your mind or brain, you will have to perform another communicative operation such as writing or speaking. – (Möller, 2006)

Luhmann described the mind (the psychic system) as well as social systems as operationally closed, structurally coupled, autopoietic systems. Those are a number of important terms right away, which require some explanation.

In Luhmann’s view, a system is defined by its differentiation from its environment—differentiation plays one of the most important roles in his work. This differentiation is established and maintained through the operations of the system. The system differentiates itself from its environment and thus defines itself. In other words, the system creates its own functions by its operations (self-creation and self-preservation). Thinking leads to more thinking and perception leads to more perception. The economy creates itself by doing economics; the mass media creates itself by its operation of differentiating between information and non-information.

Another theme in Luhmann’s writing is the use of paradoxes, which is certainly inspired by Spencer-Brown’s logic presented in Laws of Form (Spencer-Brown, 1969). Inspired by imaginary numbers, Spencer-Brown introduced imaginary truth values: a statement that is paradoxical in (logical) space can have its paradox resolved in time. Therefore, a paradoxical system becomes generative in time. One prime example is our mind, which is capable of self-observation (reentry). Paradoxically, the observation is part of what is observed. However, in time this gets resolved. While I am observing myself (second-order observation) I cannot observe my environment (first-order observation). I can switch to first-order observation, but then I lose track of my observation of myself. In a sense, this back and forth between observation and self-observation—this paradox—generates myself.
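
To make the idea of resolving a paradox in time a little more concrete, here is a toy sketch of my own (it is not taken from Luhmann or Spencer-Brown): a liar-style statement whose value is the negation of itself has no consistent value in static logic, but unfolded step by step it becomes a stable oscillation.

```python
# Toy illustration: "this value is the negation of itself" is contradictory
# as a timeless statement, but as a process it simply alternates.

def unfold_paradox(steps, start=True):
    """Re-enter the paradox repeatedly: each state negates the previous one."""
    states = [start]
    for _ in range(steps):
        states.append(not states[-1])  # the 'reentry' of the observation
    return states

# In space the value is contradictory; in time it is a generative oscillation.
print(unfold_paradox(5))  # [True, False, True, False, True, False]
```

The point of the sketch is only that introducing time turns a contradiction into a dynamic, much like the mind’s switching between observation and self-observation.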

A psychic or social system is operationally closed because its operations cannot leave the system. Mental operations such as thoughts and emotions cannot leave the mind. An economic transaction, e.g. paying for goods, cannot ‘leave’ the economy. No mind can interfere with the operations of another mind. One cannot continue someone else’s mental activities by thinking or feeling for him or her. It is also impossible to immediately think what someone else is thinking.

However, systems can observe their environment and act on their own terms. We can hear what others say, see what they express and read what they have written. Our mind can think about it (using its operations) and we can answer, i.e. communication happens. We can also see pain or joy on others’ faces, but we cannot literally think or feel what they do. The economy observes politics, the media, and science and gets irritated. How it will adapt is up to itself and its operations.

Unlike allopoietic systems, which produce something other than the system itself, autopoietic systems reproduce themselves. They are more dynamic than allopoietic systems because they deal with an excess of complicating noise from their environment (too much information that cannot be processed) by changing their structure (increasing internal complexity) to allow in more communications: they have a built-in learning capacity. In contrast, allopoietic systems theory leads the observer to seek constancy and stability in system functioning because such systems are intrinsically conservative.

Social systems, like the media, become so efficient because they ‘feed’ the outside into their ‘body’. A crisis, like a natural disaster or a war, feeds the autopoiesis of the media. The media can report on the event and discuss different opinions on the matter. Strictly speaking, the ‘goal’ of the media is not really to inform or to persuade but to continue its own self-production.

It is impossible to understand the reality of the mass media if you assume it is their job to provide correct information on the world and then assess how they fail, distort reality, and manipulate opinion—as if they could do otherwise. – Niklas Luhmann

Therefore, attention is key. Informing people or persuading them might help to ‘get enough food’ but it is not the media’s ‘goal’ or ‘will’. The same goes for the economy, which tries to commodify everything to further commodify things. Politics politicizes anything, and science produces papers with ‘facts’ to get more funding for more papers with ‘facts’.

Even though Luhmann’s terminology is close to that of computer science, we have to be careful. It is more helpful to think of these systems as interdependent organisms feeding on each other and equipped with the will to live than as hierarchical or well-structured computer or network systems.

Aside from being autopoietic, social and psychic systems are also symbiotic, that is, their co-evolution is interdependent. Just as the trees in the forest need water and animals to survive, politics needs money from the economy, attention from the media, and ‘facts’ from science. The media need politics or science to produce news, and money to operate. The economy uses media, politics, and science to make profits. Science ‘sells’ truth to the economy, politics, and the media. Academia needs money, attention and power.

Luhmann insists on placing human beings in the environment of social systems, not inside them. In other words, social systems do not consist of humans but of communication! This is sometimes seen as an anti-humanistic tendency and framed negatively. But one might argue that human beings are better off if their processes are not determined by society.

Luhmann’s theory provokes an amoral view on the state of affairs, but it also gives power to the object (systems/processes) and thus attacks the domination of objects by subjects. There are no evil people doing or planning insidious things; instead, systems (objects) act according to their systemic rationality by making sense of their environment on their own terms. What we often identify as hypocritical in a person’s actions is a mixture of the operations of different systems or the communication between systems. As a reminder, the person is not part of the system. Individuals, or better psychic systems, are a necessary condition for social systems to exist (like air has to exist for sound to be heard) but they belong to the environment of the social system (they do not produce the sound). If a politician acts immorally and accepts a lot of money for his party to give a certain company an advantage over its competitors, the politician is the mere medium through which the economy communicates with politics. If a politician of the Green Party goes on vacation by plane and, at the same time, speaks out against air traffic, two different systems are operating: the family and politics. And the operations of the first do not interfere with the operations of the second. However, the media make news out of this contradictory behaviour, which will irritate politics and probably the family life of the politician. What the media observes as ‘corruption’ happens when system boundaries are crossed. It is ok to buy talented football players, but it is not ok to buy goals, i.e. it is not ok for the economy to directly operate within the system of a football game.

Psychic and social systems are operationally closed but cognitively open. They have clear boundaries demarcating them from other systems. They reproduce themselves by adapting and learning how to cope with external noise, selecting only communication which the system can actively and creatively interpret, understand or make sense of. Psychic and social systems reduce the complexity of their environment through recourse to meaning. These systems increase inner complexity to deal with the complexity in their environment.

The boundaries of these systems are not defined physically, but by the border of what is meaningful and what is not. Consequently, each system has its own systemic rationality and view of the world—there is always a blind spot. If I give a cashier money, it is assumed that I paid for something. This follows from the systemic rationality of the economic system. It deals with money but it cannot, for example, deal with love or passion, which are part of the systemic rationality of relationships. The cashier does not suspect that I am showing him my love with this gesture, and if I am, this act is not an operation of the economic system.

The functional differentiation of each system means that only parts of a person are acknowledged by the system. There is no individual—no indivisible being—in a system. The health system understands a person as a patient. The legal system understands a person as a potential criminal, victim or witness. In that sense, Marx’s alienation is not limited to the economy. This differentiation makes systems extremely efficient with respect to their function. But what is ‘good’ for one system is not necessarily ‘good’ for another system.

Luhmann thinks that this differentiation (Ausdifferenzierung) is a feature of modern society, i.e. it is historical and an ongoing process. One example might be the creation of new subjects to study. Instead of studying computer science, students can enroll in scientific computing, data science, game engineering, information engineering, and more. One can say computer science is further differentiated. At the same time, we acknowledge problems stemming from this differentiation and try to find ways to look at problems and society more holistically. Marginal note: If we follow Luhmann’s theory and we want efficiency (with respect to a systemic rationality), we find a strong argument to avoid introducing an interdisciplinary subject such as bioinformatics by simply combining biology with informatics.

The functional differentiation of systems, its effects and our gut reaction to it are nicely depicted in the movie Don’t Look Up. What the movie does well is show us that society consists of functionally differentiated systems that follow their own systemic rationality. The main message of the movie is that scientists, who discover a meteor, are unable to communicate this truth to the world. The movie shows mostly four social systems: politics, media, economy, and science. It shows how each of these systems functions differently while still being structurally coupled with the others via a shared medium (language), as explained above. But in spite of being coupled, or because of it, they cannot act unitedly.

The effectiveness of functional differentiation in dealing with complexity comes at a cost: anarchy, that is, there is no controlling or governing system—no single rationality that is in charge. From a reductionist standpoint, the actors of the movie seem completely irrational. Seeing modern society through the lens of controllable cause-and-effect chains, one can only come to the conclusion that our society is dysfunctional or worse: immoral. In the movie, only the scientists, who also represent the perspective of the audience, seem to do the right thing. But from a systemic view the actions of all actors make sense.

In the end, however, the narrative of the movie contradicts the systemic view. The movie suggests that there is some sort of scientific-technological solution that can be used if everyone is thinking and acting properly—if only the government takes proper control and the media informs everyone correctly, then the meteor can simply be nuked. The movie suggests that there can be some sort of rational self-control if only we were enlightened enough. In a sense, it is not much better than the movie Idiocracy. The dream is that enlightened science can control nature through rational technology, enlightened politics can control society through rational self-government of the people, and enlightened media disseminate knowledge and make everyone an informed and rational citizen. Therefore, the movie presents an individualistic solution. Big tech is greedy, politicians are stupid and hypocritical, and scientists are incapable of performing on live TV. If we fix those issues, we are fine. If only we ‘look up’ (individually), we will be enlightened, stop being stupid and ignorant, and solve all our modern problems. The problem becomes a moral problem of personal responsibility and thus becomes polarizing. From a systems theory perspective, this individualistic solution is not (or no longer) possible. Systems function according to their functional differentiation on their own terms, and individuals are part of their environment.

However, anarchy does not imply the absence of strata or the presence of equality. It’s evident that systems like the economy can create significant differences between the rich and the poor. Luhmann recognized that modern society inherently produces many differences, including those we might disapprove of. Every system does this, not just the economy. However, contrary to a Marxist perspective, Luhmann believed that these societal disparities result more from the operations of multiple systems than from the stratum or class into which people are born. That said, an individual’s socioeconomic background, such as whether their parents are rich or poor, does matter. For example, in the education system, the ability of one’s parents to afford tuition at prestigious institutions like Stanford plays a significant role due to the interconnectedness of the economic and educational systems. However, to genuinely understand how systems like education or academia function, one must grasp their unique differentiators. The education system is defined by distinctions like good grades versus bad grades, while the academic system differentiates between peer-reviewed and non-peer-reviewed papers. These differentiations are intrinsic to their respective systems and not solely based on economic factors. According to Luhmann, while wealth can certainly influence educational outcomes, help one avoid legal trouble, or facilitate a scientific career, it is overly simplistic to reduce all systemic distinctions to just ‘money’.

Luhmann argues that all social systems operate on a binary code determined by their sphere of interest, which structures their communication with other systems. Communication with the legal system is organised by the code legal/illegal through the medium of law; with the political system by the code government/opposition through the medium of legitimate power; with the economy by the code pay/not pay through the medium of money; with science by the code true/false through the medium of evidential truth; with the mass media system by the code information/non-information through the medium of public opinion; and with the welfare benefits system by the code eligible/not eligible through the medium of citizenship status.
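
As a computer scientist, I find it helpful to picture these codes as a simple lookup table. The sketch below is purely illustrative and my own (Luhmann offers no such formalism): each system can only register an event as one side of its binary code, which is exactly its blind spot.

```python
# Illustrative tabulation of the codes and media named above (my own toy model).
SYSTEM_CODES = {
    "law":        {"code": ("legal", "illegal"),               "medium": "law"},
    "politics":   {"code": ("government", "opposition"),       "medium": "legitimate power"},
    "economy":    {"code": ("pay", "not pay"),                 "medium": "money"},
    "science":    {"code": ("true", "false"),                  "medium": "evidential truth"},
    "mass media": {"code": ("information", "non-information"), "medium": "public opinion"},
    "welfare":    {"code": ("eligible", "not eligible"),       "medium": "citizenship status"},
}

def observe(system: str, positive: bool) -> str:
    """A system can only 'see' an event as one side of its own binary code."""
    yes, no = SYSTEM_CODES[system]["code"]
    return yes if positive else no

# The same event looks different to every system; nothing outside the two
# code values is visible to the system at all.
print(observe("science", True))   # true
print(observe("economy", False))  # not pay
```

The reduction to a single bit per system is, of course, a caricature, but it captures why an event that is ‘true’ for science may be ‘non-information’ for the media.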

Without their environment, systems would cease to exist. They are structurally coupled with one another. For example, psychic systems are structurally coupled with social systems—without bodies and minds there is no political system, no economy, no relationship and no family. Without the economy, the political system would collapse. However, there is no causal relationship between the two; society does not cause consciousness to occur, nor do people consciously create and manage society. The relationship between the two is rather one of constant irritation (which may also be translated as confusion), with the one reacting to the other, but always on its own terms. The dynamics are non-linear and tend to be chaotic.

If the political system enacts a new law to steer the economy, it can only try to do so via irritation. How the economy will react is not up to politics. If climate activists glue themselves to the ground to generate awareness, they may achieve their goal or they may not. What happens is quite difficult to predict. How does the media report on the issue if its rationality is to further differentiate between information and non-information? How will politics react, based on the assumption that it ‘wants’ to make more politics? From this point of view, it is hard to see how ‘we’ can ‘make’ corporations (or individuals) sustainable by referring to morals and virtues. Moral outbursts and frustrations about the destruction of our livelihood are completely understandable (for my psychic system), but how they irritate the different systems is quite uncertain.

Artificial Communication

Niklas Luhmann’s concept of communication offers a useful framework for sidestepping (at least for a moment) the ongoing debate about machine intelligence. While I personally do not ascribe human-like thinking or understanding to machines, I argue that this does not preclude their participation in communication processes. Therefore, I agree with Esposito (2022).

I think Esposito’s term artificial communication is a very useful contribution that makes the discussion of AI (especially of machine learning) more reasonable. Note that she was a student of Niklas Luhmann. In an interview she explains why she came up with the term:

As these algorithms became more and more opaque—not understandable for the users—the idea spread that machines are not trying to be intelligent; that they are not trying to reproduce, in an artificial way, the process of human thought; they are doing something different. This is rarely said explicitly but one can find it in many different contexts. And if we switch away from the idea of intelligence, what can we refer to? Do we have another metaphor that would fit better into the current situation? – Elena Esposito

Why do people think that ChatGPT is intelligent? Well, if we interact with machines, we get information we would not get otherwise, and this information cannot be attributed to any human being. The machine processes the data and produces some information which not only did not exist before but is also a sort of reaction to our request. The machine does exactly what we do when we communicate with a human being. We ask something and we get some information we did not have before. Importantly, this information is contingent (the response could be different).

Esposito clarifies that, from a sociological standpoint, it is understandable that we think of these machines as artificial intelligence, because we have been communicating with human beings for thousands of years and these beings were ‘intelligent’. And because of this feature of being able to think, humans were able to produce something which allows us to get new information. It is our prejudice that leads us to the conclusion that machines are so similar to human beings, i.e., psychic systems. Therefore, following Esposito’s proposal, a more interesting and probably healthier question to ask (also for us computer scientists) is:

Why are these machines able to communicate with us despite their lack of intelligence?

Esposito’s answer is that they are parasitical. My understanding of her work is that machines make heavy use of second-order observation, i.e., the observation of an observer. The starting point is some mental activity but not the machine’s activity. The user produces contingent behaviour which the machine can process (or observe) to become itself contingent.

A modern example that might no longer be considered AI is Google’s search algorithm. Google’s success stems not from trying to evaluate or calculate the quality of a webpage directly. They didn’t design an intelligent machine for that purpose. Instead, they leveraged second-order observation, essentially tapping into the collective intelligence of their users. Rather than determining the value of a webpage themselves, their algorithm observes how users interact with webpages. A webpage ranks high if users deem it valuable, which can create a feedback loop because highly ranked pages attract more attention and are, in turn, ranked even higher. The primary task becomes observing users’ observations.
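
A toy simulation can illustrate this feedback loop. To be clear, this is my own sketch and has nothing to do with Google’s actual (PageRank-based) algorithm: here pages are ranked purely by observed clicks, and the current ranking in turn biases which pages users click next.

```python
import random

# Toy model of second-order observation: the system never judges page quality
# itself; it only observes its users' observations (clicks), and its own
# ranking feeds back into what gets observed.

def click_probabilities(scores):
    total = sum(scores.values())
    return {page: s / total for page, s in scores.items()}

def simulate(pages, rounds, users_per_round, seed=0):
    rng = random.Random(seed)
    scores = {p: 1.0 for p in pages}  # the system starts with no opinion of its own
    for _ in range(rounds):
        probs = click_probabilities(scores)
        for _ in range(users_per_round):
            # each simulated user clicks a page, biased toward the current ranking
            page = rng.choices(pages, weights=[probs[p] for p in pages])[0]
            scores[page] += 1          # observing the user's observation
    return scores

scores = simulate(["a", "b", "c"], rounds=20, users_per_round=50)
ranking = sorted(scores, key=scores.get, reverse=True)
print(ranking)  # a small early lead tends to snowball into a stable ranking
```

Running this repeatedly shows the point of the example: which page ends up on top depends on early, contingent user behaviour, not on any intrinsic ‘intelligence’ of the ranking machine.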

Even if Esposito speaks of switching the metaphor, her work goes deeper. The word intelligence has two different usages in language, which are often confused. On the one hand, we refer to the operational mode of the mind. But we have almost no idea what this intelligence exactly is. On the other hand, we think of information processing; take, for example, the term ‘Central Intelligence Agency’. This is why the metaphor of artificial intelligence is so problematic: it makes it hard to theorize about AI and its impact, which leads to these hyper-speculative predictions. It’s like referring to airplanes as artificial birds. Just as airplanes succeeded when engineers stopped trying to mimic birds, AI has advanced as researchers moved away from replicating human thought. By using Luhmann’s theory of communication, we might clear the smoke and find more effective ways to talk about artificial intelligence.

Esposito argues that we—the preachers of machine learning—do not reproduce human intelligence but rather social communication. Intelligence that emerges from conscious beings might not be needed or might even be an obstacle to the establishment of communication. Artificial communication (coupling machines and psychic or social systems via language) can be more effective than intelligent communication (coupling psychic and social systems via language), but it cannot be intelligent (in the first sense of the word). In other words: that which makes society more intelligent might not necessarily be intelligent itself. As described above, social systems have their own systemic rationality, and we might call them intelligent.

Systems theory is useful because it focuses on communication itself. Again, Luhmann claims that humans do not communicate; only communication communicates. Of course, similar to air, humans are a necessary condition for communication, but, like air, they do not communicate themselves—we can only hear the ticking of a clock because the air does not tick.

Luhmann diverges from traditional sender-receiver models of communication, such as Shannon’s (Shannon, 1948), where the focus is on the transmission of information. Instead, Luhmann conceptualizes communication as comprising three essential moments: announcement, information, and understanding. Each component has its unique role in facilitating communication. An announcement initiates the process. Whether verbalized, written, or visualized, it serves as the catalyst that triggers communication. The absence of an announcement, be it from a human or an algorithm, results in the absence of communication altogether. The announcement must bear some form of informational value, imbuing the text, image, or utterance with meaning. The final moment, understanding, underscores the necessity of a recipient comprehending the conveyed information. The efficacy of communication is not solely predicated on accurate understanding, but rather on the act of understanding itself—even if what is understood is incorrect. As Luhmann notes, understanding is often replete with misunderstandings, but the very act of engaging in a selection of understanding is vital. Understanding is typically misunderstanding without understanding the ‘mis’ (similarly, misinformation is still information).

In summary, Luhmann’s perspective underscores that effective communication doesn’t necessarily require partners to achieve mutual understanding in the way their respective psychic systems might operate. I think we can make the same observation in day-to-day life. Partners can live together perfectly well even though their understanding is not mutual, which, of course, can cause problems in relationships. However, it can also be a useful feature. If a third party observes a tense conversation between a couple, the content of the conversation might be quite ordinary, but what is communicated can be a conflict within the relationship. The couple understands the communication much better than the third party. The conflict, however, is likely caused by different previous (mis-)understandings. For the third party, it is like listening to encrypted communication. Whether executed by humans or algorithms, the value lies in the process and its constituent parts: announcement, information, and understanding.

In the context of artificial intelligence, we can look at the communication between a person and a machine—between ChatGPT and Doug Hofstadter—and ask: Why is it so effective or attractive? But also: Why is it (probably) not the product of an intelligent thinking system but rather a product of social intelligence?

With my shallow understanding of Luhmann’s theory, I imagine that the prerequisites for communication—whether artificial or otherwise—involve contingency and connectivity. In social systems theory, the generation of information is not an isolated act; it is attributed to an interactive partner. While traditionally this partner is human, in the realm of artificial communication, it can very well be a machine. The focus should be on the nature of the interaction itself: does it exhibit the characteristics of a contingent, autonomous relationship? And does this interaction spur further communication?

Traditional machines that produce unpredictable outcomes are usually considered faulty rather than creative or original. Take a pocket calculator, for instance; its primary virtue lies in its predictability. We do not regard it as a communicative entity because it operates as expected, which is desirable. The calculator is not contingent. Conversely, when interacting with image-generating algorithms like Stable Diffusion or Midjourney, the appeal, I argue, lies precisely in the unpredictability of the results. Chatting with a bot can be exciting precisely because we do not know the output of the bot or, in general, the outcome of the interaction. Of course, this does not mean a completely random output would have the same effect. Contingency should not be confused with randomness or arbitrariness. The information provided by the bot has to be understandable, in the sense that it can also be misunderstood (like any ‘good’ communication can be).

Although we might think of this feature as a flaw, ambiguity is desirable for communication. Of course, not all possibilities of misunderstanding are desirable. A chatbot that provides patients with medical or organizational information should give precise and unambiguous answers. However, in this case, the bot is more a tool than a real communication partner. And here we arrive at an important distinction: While many argue that the irritation caused by generative AI is similar to that caused by the invention of photography, I think there is a difference.

Cameras do not communicate!

This does not mean that generative AI cannot act as a mere tool in the process of, for example, the production of images. The more predictable generative AI systems are, the more tool-like and the less communicative they become. Their outputs by themselves become less interesting, but at the same time they are more useful for realizing a specific vision of the user or artist.

As Esposito noted: Viewed through the lens of Luhmann’s social system theory, the development of compelling communication partners presents a unique dilemma: The challenge lies in engineering machines that exhibit both creativity and control, balancing the production of unexpected outcomes with predictability. This tension is especially relevant in the field of AI art. In essence, the paradox that governs the programming of ‘intelligent’ algorithms is the pursuit of controlled unpredictability.

The ultimate objective is to achieve a controlled lack of control. – (Esposito, 2022)

From a philosophical standpoint, Luhmann transforms the mind-body problem into the mind-communication problem—communication defined by Luhmann as “the operation that society consists of” (Möller, 2006). If the mind does not communicate but is only in the environment of society (communication)—is merely involved—how does this all work? Similarly, I think, the question of how artificial communication emerges even though machines are also only in the environment of communicating systems is one of the most important questions to ask if one wants to understand the current state of AI, society, and where we are heading.

Now, if one looks closely at Luhmann’s definition of communication, one finds that it is not compatible with Esposito’s concept of artificial communication—and she is aware of that. We might think of communication as being really picky; that is, it has a lot of requirements that must be met for it to occur—there is a lot of structural coupling going on.

Communication is improbable. – Luhmann

Let’s look at some requirements: physical requirements like temperature and gravity at a certain level, water, and air, but also a medium and, according to Luhmann, at least two conscious entities. In a sense, the coupling of these two conscious entities is stricter: an actual operation of the psychic system has to be devoted to the actual operation of the social system. That is why they coincide in the event at which communication happens.

Empirically, I propose [the concept] because what is going on in the interaction with algorithms is so close to communication that we have to try to find a way to extend [Luhmann’s] concept of communication to include what is going on—it is not exactly the same. The technique of the communication is similar: production of information which irritates other systems, but the algorithm itself is not thinking, is not producing any new communication. It just sort of conveys something that can produce information somewhere else. My background is Luhmannian, but what I am proposing, without wanting to amend Luhmann, is something different from the standard case of communication. – Elena Esposito

Profilicity Machines

The philosopher Hans-Georg Moeller adds an interesting point to the machine learning discourse. He posits that algorithms nowadays are used for profile building—he coined the term profilicity for a new identity technology, which differs from previous modes of identity building, i.e., sincerity and authenticity (Möller & D’Ambrosio, 2021).

Following his thesis, people take pictures not (primarily) to preserve memories but to curate a profile on Facebook, Instagram, or LinkedIn. In that sense, I too build my own profile by writing this text and by curating a personal website, a GitHub repository, and many more profiles. AI helps us to evaluate our profile(s) within McLuhan’s Global Village (McLuhan, 1992). It makes it possible to get feedback from our peers and to present this evaluation back to the village. It enables second-order observation, which reduces complexity.

Moeller admits that he is—like many of us who come from an age where authenticity was the primary technology for building identity—annoyed by this picture frenzy. But he stays true to Luhmann and refrains from judging. He tries not to moralize this phenomenon or classify it as ‘worse’ or ‘better’ because, in his eyes, authenticity was never real in the first place. Like the other forms of identity building, profilicity comes with its own set of challenges.

He intriguingly describes profile creation as genuinely pretending. Observing younger generations, this resonates. Their digital avatars often exude a postmodern irony; they knowingly embrace their constructed nature. They are fully aware that it is all ‘fake’. Older generations, by contrast, may need reminding that these online images are meticulously curated and often manipulated. Advising younger folks about the ‘deceptions’ of online portrayals might seem naive. Their approach is more playful, even inventive, using multiple layers of meta-references to distance themselves as far from reality as possible.

As Moeller notes, the real tension might arise from the mismatched expectations of older authority figures. We—and I include myself here—expect authenticity while most parts of the world of young people operate in the mode of profilicity. This leads to a contradiction and, because it is about identity building, this contradiction might be psychologically problematic. Misaligned expectations can cloud the path to self-realization. Hence, while it’s tempting to solely blame social media for rising mental health issues, the underlying causes, as Luhmann would argue, are multifaceted.

The Revenge of Objects

With the description of society offered by systems theory, I might have lured you, the reader, into an even greater despair. My assertion is that we don’t necessarily need AI to challenge Hofstadter’s vision of human greatness; the deconstruction might already be underway. Luhmann’s systems theory goes against the honorable belief of Hofstadter, which is also expressed by figures like David Graeber or Noam Chomsky.

The ultimate, hidden truth of the world is that it is something that we make, and could just as easily make differently. – David Graeber

I really like the sentiment expressed in the quote. I want it to be true and to work! And I admire personalities that keep it alive.

However, objects seem to regain agency and power over us. When certain philosophers discuss subjects and objects, they often reference commonplace items like chairs and desks. For instance, a chair might seem like a benign example. Here the case seems trivial: Of course a chair has no agency! We make chairs to sit on them. We dominate chairs. They are completely in our control.

But we do not have to look further than Heidegger’s hammer (Heidegger, 1927) to see that things can get tricky very quickly. Heidegger proposes that before we ponder the essence of a hammer, we use it. Before we question its existence, we recognize its utility. The hammer, in this context, prompts us to act—it, indirectly, has agency.

Or consider more potent examples like opioids, smartphones, the internet, algorithms, images or even movies. These items influence our behavior, decisions, and perceptions. The inception of video technology, for example, started with the simple goal of determining if a galloping horse ever had all its hooves off the ground simultaneously. Now reflect on the vast implications and transformations that this technology has since undergone. Did we solely shape these inventions, or should we attribute some credit to the inventions themselves?

For Baudrillard, there is an uninterrupted production of positivity that has terrifying consequences. In systems theory terminology, one could say he describes runaway positive feedback loops.

Any structure that hunts down, expels or exorcizes its negative elements risks a catastrophe caused by a thoroughgoing backlash, just as any organism that hunts down and eliminates its germs, bacteria, parasites, or other biological antagonists risks metastasis and cancer—in other words, it is threatened by a voracious positivity of its own cells, or, in the viral context, by the prospect of being devoured by its own antibodies. – (Baudrillard, 1990)

In other words, runaway positive feedback loops that have no negative counterpart will eventually cause a catastrophe. A simple technical example of such a feedback loop is a microphone that picks up the amplified output of loudspeakers in the same circuit, producing the howling and screeching of audio feedback.
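The dynamics of such a loop are simple enough to sketch in a few lines of code. The following toy simulation is purely illustrative (the gain values and the saturation limit are my own assumptions, not measurements of any real circuit): the loudspeaker’s output is fed back into the microphone, and whenever the loop gain exceeds one, even a tiny noise level escalates until the amplifier saturates.

```python
# A minimal sketch of a runaway positive feedback loop, analogous to a
# microphone picking up the amplified output of its own loudspeaker.
# Gain values and the saturation limit are illustrative assumptions.

def feedback(level: float, gain: float, rounds: int, clip: float = 1.0) -> float:
    """Feed the speaker output back into the microphone `rounds` times."""
    for _ in range(rounds):
        # Each pass through the loop amplifies the signal; the amplifier
        # saturates (clips) at full scale instead of growing without bound.
        level = min(level * gain, clip)
    return level

# With a loop gain above one, a tiny noise floor explodes into full-scale
# howling; with a gain below one, the same noise simply dies away.
howl = feedback(level=0.001, gain=1.5, rounds=30)     # saturates at 1.0
silence = feedback(level=0.001, gain=0.5, rounds=30)  # decays toward zero
print(howl, silence)
```

The point of the sketch is Baudrillard’s: without a dampening, ‘negative’ element in the loop (here, a gain below one), the system has no stable state short of saturation.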

Baudrillard thought that the production of images is such a feedback loop. They are out of control and take over, for example, free democratic politics. Although we can take a moral stance against certain imagery, such as pornographic content, Baudrillard suggests that positive feedback will ultimately subvert any moral code. Trump, as an example, may be critiqued from a moral perspective, but because he’s a potent subject for image production, the media engages with him regardless of whether they criticize or praise him. This cycle of image production can’t be halted simply by creating more images.

Luhmann may have a less bleak view. For instance, laws that regulate AI-generated images, can prompt changes in a system’s behavior (indirectly via irritation). However, reining in runaway positive feedback is challenging, as issues can escalate exponentially.

It might sound unconventional, but to foster our understanding of the world it could be beneficial to acknowledge external influences like reality TV, staged political photos, or conspiratorial content as agents. Not only in the sense that they affect us but that they have an inner life; a will of their own, so to say. It could be valuable to treat objects like oil with the same reverence and respect as ancient civilizations treated storm and thunder. Isn’t it the case that in our times oil is more powerful than any ancient god ever was?

Predictive machines can sometimes inadvertently create self-fulfilling prophecies. For example, I might be more inclined to buy items from Amazon that appear at the top of a list because of their prominent placement. These items are ranked by an algorithm aiming to maximize Amazon’s profits, predicting which items I’m most likely to purchase. Since I’m inclined to buy items higher up on the list, the algorithm’s prediction is validated, influencing its future predictions and creating a positive feedback loop which consists of me and the algorithm. Because I rely on algorithms to find items to buy, I believe it’s fair to say that they have influence over me and a sort of agency of their own.
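This loop can be made concrete with a deliberately simplified simulation. Everything in it is hypothetical (the item names, the position bonus, and the update rule are invented for the sketch, not taken from any real recommender system): two items start with identical appeal, but whichever one the algorithm ranks first gets a purchase bonus from its prominent placement, and the observed purchases feed back into the next round’s estimates.

```python
# A toy, deterministic sketch of the self-fulfilling ranking loop described
# above. Item names, the position bonus, and the update rule are invented
# for illustration; real recommender systems are far more involved.

def run_ranking_loop(rounds: int = 50, position_bonus: float = 0.2) -> dict:
    # Two items with identical intrinsic appeal to the customer.
    appeal = {"kettle": 0.3, "toaster": 0.3}
    # The algorithm's estimated purchase probabilities (initially tied).
    estimate = dict(appeal)
    for _ in range(rounds):
        # Rank by estimate; a tie is broken by dictionary insertion order.
        top = max(estimate, key=estimate.get)
        for item in estimate:
            # Prominent placement adds to the chance of a purchase ...
            p_buy = appeal[item] + (position_bonus if item == top else 0.0)
            # ... and the observed purchase rate feeds the next estimate.
            estimate[item] = 0.9 * estimate[item] + 0.1 * p_buy
    return estimate

final = run_ranking_loop()
print(final)  # the initially identical items have drifted apart
```

Although both items are equally appealing, the one that happens to be ranked first ends up with a much higher estimated purchase probability: the prediction validates itself.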

If we don’t attribute agency to objects, then the explanation for the failures of climate agreements likely rests on a dysfunction of the system or on individual failures, e.g., ‘corrupt’ or incompetent politicians. The idea that our modern society renders us freer and more independent is misconstrued. A better way to put it is: we are (more) out of control. While we engage in broader dialogues, express ourselves more diversely, and view the world through varied lenses, increasing complexity often amplifies dependency and chaos. There is more differentiation going on, but this does not mean that we are less dependent. Instead, systems are less controllable. The age-old dynamic of subjects dominating objects could very well be shifting, placing objects alongside us in terms of influence.

Artificial Systems?

Systems theory doesn’t assert that society’s evolution is innate or that it will remain unchanged forever. It simply aims to provide a thorough depiction of modern society. Yet, it’s worth exploring the connections between systems theory and artificial intelligence. This exploration might hint at the concept of an artificial system—essentially an AI that functions as a system, potentially approaching the capabilities of general artificial ‘intelligence’.

To venture a speculative idea, machines might eventually evolve into these artificial systems. To thrive in a highly complex modern environment, they might need to exhibit traits similar to psychic and social systems: being autopoietic, structurally coupled, operationally closed, and functionally differentiated. If we assume this to be true, what implications might it have?

While minds create themselves through mental operations, an artificial system would self-create and self-preserve through computation. This does not mean that artificial systems have to be in control of their requirements to operate. The opposite is the case. The required hardware, programmers, and users of such systems would belong to their environment. But, as Luhmann writes:

The observing and describing itself (cognition), however, always has to be an operation that is capable of autopoiesis, i.e., of the performing of life or actual consciousness or communication, because otherwise it could not reproduce the closure and difference of the cognizing system; it could not take place “in” the system. (Luhmann, 1988)

Computation would probably drive more computation; the primary purpose being to continue its computational operations.

Due to their operational closure, a synthesis of thinking and computation—a transhumanist vision—seems unlikely. Also, their structural coupling suggests they’d be interdependent with psychic systems and other social systems. Natural language processing might be an important aspect of AI because it potentially makes this coupling possible via the shared medium of language. This implies that the idea of AI overthrowing humanity is also improbable. It is more likely that our mind, i.e., our thinking/feeling, co-evolves with computation/text generation. Machines will not think for us, but they might irritate our thinking because we might need different skills to survive in an environment consisting of such machines. However, it’s worth noting that psychic systems may not have control over these artificial systems.

The primary function of the psychic system, in Luhmann’s conceptualization, is the processing of meaning. Psychic systems generate thoughts, emotions, perceptions, and other mental phenomena. They observe, process information, and produce decisions. While social systems use communication as their primary medium, psychic systems use consciousness. Every individual has their own psychic system, which means their own consciousness and their own way of processing and understanding the world.

Artificial systems, on the other hand, may primarily focus on the production of predictions. They would observe and process abstract data and project potential futures. But what would ‘motivate’ them? To function, or to survive, they have to compute, i.e., they have to process abstract data. Consequently, their systemic rationality might prioritize producing predictions that lead to more predictions, rather than producing the most accurate or useful predictions. While we can see traces of this effect in recommendation algorithms that suggest polarizing content, attributing this behavior to artificial systems and not, or only partly, to the mass media system might be an overreach.

I must admit, I would need a deeper understanding of systems theory to seriously consider AI as potential systems in Luhmann’s sense. However, I believe it’s a valuable pursuit.

Difference Makes the Difference

I agree with Hofstadter on the limitations of human greatness, albeit for different reasons and from a different perspective. While discussing systems theory, I aimed to challenge his belief that AI diminishes this human greatness. I argued that this so-called greatness, or perhaps the rational control that grew out of the Enlightenment, was an illusion from the outset. Additionally, I’ve raised questions regarding the feasibility of ‘complete’ enlightenment as proposed by the modern project.

Simultaneously, I presented reasons:

  1. for perceiving machines as intelligent, and
  2. for potentially being misled by this perception.

The phenomenon of artificial intelligence is not only interesting because of its impressive accomplishments but because it pushes discussions about ‘human nature’ into many psychic and social systems. This question leads immediately to the questions of cognition, thinking/feeling, and consciousness because these concepts are so central to the question of machine intelligence. So, let me elaborate a little more on these concepts.

The Mystery of Consciousness

First of all, debating whether machines possess consciousness or human-like intelligence is premature without a clear understanding of intelligence and consciousness. Perhaps ‘intelligence’ is just a collective term for several unexplained phenomena. We do not know yet.

We can identify correlations between mental events and brain activities, implying there’s a physical aspect to consciousness—a sort of carrier. However, as neuroscientist Giulio Tononi notes, this doesn’t necessarily mean we can pinpoint consciousness within the brain (Tononi & Koch, 2015).

[…] what I think I know about my body, about other people, dogs, trees, mountains, and stars, is inferential. It is a reasonable inference, corroborated first by the beliefs of my fellow humans and then by the intersubjective methods of science. Yet consciousness—the central fact of existence—still demands a rational explanation. – (Tononi & Koch, 2015)

Still, physicalism, the view that everything is physical and can be explained by physics, seems to be the natural position, especially in science. However, this view can be challenged. In the famous and rather short article What Is It Like to Be a Bat? (Nagel, 1974), the philosopher Thomas Nagel tries to argue against it. He explains that we cannot know what it is like to be a bat. He chooses bats because they are mammals like us and presumably conscious, yet their primary sensory experience—echolocation—is profoundly different from any human sensory experience. In Luhmann’s words: their environment is very different from ours. We can imagine flapping our arms like a bat or eating insects like a bat, but we cannot truly imagine what it is like to ‘experience’ the world primarily through echolocation. Nagel suggests that even if we knew all the physical facts about a bat’s brain while it echolocates, we’d still be missing the subjective experience—the “what it’s like”—of being a bat.

In the philosophy of mind there is a broad spectrum of approaches to the hard problem of consciousness, ranging from new forms of idealism (the essence of reality is consciousness), naive realism (we have direct awareness of objects as they really are), and new realism (which accepts that science is not systematically the ultimate measure of truth and that realities are first given, not constructed) to panpsychism (the mind or a mindlike aspect is a fundamental and ubiquitous feature of reality). There is no agreement on the hard problem of consciousness. Consequently, consciousness remains enigmatic, with little guidance on where to begin our inquiries.

Cognition as Construction

Sometimes, to gain a fresh perspective on the world, one must take a completely opposite stance. For millennia, philosophers have pondered how we can perceive something if we lack direct access to reality. Luhmann flipped this idea on its head. He argued that it’s precisely because we don’t have direct access (due to the system/environment distinction and operational closure) that we can perceive. Cognition can only happen if it is not interrupted, which requires operational closure.

The tradition of epistemological idealism was about the question of the unity within the difference of cognition and the real object. The question was: how can cognition take notice of an object outside of itself? Or: how can it realize that something exists independently of it while anything which it realizes already presupposes cognition and cannot be realized by cognition independently of cognition? No matter if one preferred solutions of transcendental theory (Kant) or dialectics (Hegel), the problem was: how is cognition possible in spite of having no independent access to reality outside of it. Radical constructivism, however, begins with the empirical assertion: Cognition is only possible because it has no access to the reality external to it. A brain, for instance, can only produce information because it is coded indifferently in regard to its environment, i.e. it operates enclosed within the recursive network of its own operations. Similarly one would have to say: Communication systems (social systems) are only able to produce information because the environment does not interrupt them. And following all this, the same should be self-evident with respect to the classical “seat” (subject) of epistemology: to consciousness. – (Luhmann, 1988)

I should emphasize and explain how radical this turn is.

Luhmann builds on Kant’s Copernican Turn. Kant turned the question of ‘what is the general structure of the world’ into a question about the general structure of cognition: Under which conditions is cognition allowed to operate to conceive the world? Kant tried to bridge the gap between Hume’s empiricism and German idealism. He also accepted the skeptical argument that nothing could be realized by cognition independently of cognition. Kant brought constructivism into epistemology, but he was, in Luhmann’s view, not radical enough because he held on to some sort of relation to reality. Luhmann assumes that the realization of reality is not a relating to reality, but that reality basically consists of its own realization, and that the key is differentiation. Therefore, Luhmann drops the idea of a reality that is independent of cognition, i.e., Kant’s Ding an sich.

According to Luhmann, cognition itself becomes a construction based on distinction: reality emerges as cognition. This sounds like Hegel’s idealism. However, the emergence of reality does not mean that cognition is ideal—it does not rely on an essence in the form of consciousness. Cognitive systems construct cognition based on the system/environment distinction. There is no single rule for how this can be done. Cognition can operate materially in the form of biological life, mentally in the form of thoughts, or socially in the form of communication.

Moeller summarizes this shift from Kantian idealism to radical constructivism nicely (Möller, 2006):

  1. Cognition is not per se an act of consciousness. It can take on any operational mode.
  2. There is no a priori, transcendental structure of cognition; cognition constructs itself on the basis of operational closure and is an empirical process, which varies from system to system.
  3. No complete description of cognitive structures is possible because these structures are continuously evolving.
  4. Reality is not singular—there is not one specific reality, but a complex multiplicity of system/environment constellations.
  5. A description of reality is itself a contingent construction within a system/environment relation.

In Luhmann, the subject/object distinction gets replaced with the system/environment distinction and the premise of a common world (the unity of system and environment) gets replaced by a theory of the observation of observing systems (second-order cybernetics). However, getting to ‘the root’ of cognition—which would lead to some clues about a reality—seems impossible because any such investigations require distinction which is the operation of cognition. Furthermore, there is no justification for assuming that any adaptation of cognition to reality is happening. Confronted with this problem how can one start developing a theory of cognition?

In Cognition as Construction, Luhmann recognizes and somewhat addresses this issue with an assumption akin to Descartes and Husserl:

We assume that all cognizing systems are real systems within a real environment, or in other words: that they exist. This is naive—as it is often objected. But how should one begin if not naively? A reflection on the beginning cannot be performed at the beginning, but only by the help of a theory that has already established sufficient complexity. – (Luhmann, 1988)

Starting from this assumption, Luhmann reformulates Kant’s question of the possibility of cognition into the question of how systems can uncouple themselves from their environment. For him, closure, i.e., uncoupling, is only possible through a system’s production of its own operations and through its reproduction within the network of its recursive anticipations and resources. Thus, cognition is manufactured and is a self-referential process. It deals with an external world that remains unknown; cognition has to come to see that it cannot see what it cannot see. One could say that reality remains as an ineradicable blind spot. While reality remains unknown, Luhmann speculates that there is some ground for the belief that if reality were totally entropic, it could not enable any knowledge. In other words, reality cannot be the object of the knowledge that it makes possible; it serves knowledge merely as a presupposition. Knowledge can only know itself, but it cannot know anything about what it constructs by way of the manipulation of distinctions.

All observing systems are cognitive systems. They are operationally closed but cognitively open, and they make sense of their environment as they experience it. The mind makes sense in such a way that its construction is valid or functional. It does not matter if it resembles reality. In this way, making sense is like finding the right key to open a lock. The key is useful and functional, but it does not tell us anything about the lock, except that it fits the lock. However, how we see the world depends on the cognitive capacities of our eyes and brains to produce images. Different operations lead to further different operations. For example, bats constitute a different system/environment distinction than we do. Therefore, their world most certainly ‘looks’ nothing like ours.

To recognize a table and say “This is a table”, I don’t need to have the letters T, A, B, L, E in my brain, nor does a tiny representation of a table (or even the “idea” of the table) need to exist inside me. However, I do need a structure that calculates the various manifestations of a description for me. – (Foerster, 1985)

What our psychic system ‘sees’ does not have anything to do with reality. In fact, the psychic system cannot ‘see’ reality with ‘its’ eyes because our eyes are in the environment of our psychic system. Cognition is always a construction by an observing system and there is only irritation and no direct causal relation.

Now, one might ask: If every system constructs itself and reality emerges out of its cognition, how can we find any common ground? Or, how did we end up believing in a universal and objective reality we can actually access?

Well, just because there is no objective reality does not mean that humans cannot ‘achieve’ social control via social systems. Importantly, psychic and social systems share a common medium, that is, sense. It is used in thought as well as in communication. A thought or a gesture is a specific form (strict coupling) that the medium (loose coupling) of sense can take on. Thus, thinking is like bringing sand (the medium) into a specific form; it can also be defined as a selection within a horizon of what is possible. By thinking a specific thought, I do not think any other thought that is in my horizon of the thinkable. Secondly, social and psychic systems are structurally coupled via language. And these social systems make sense in accordance with their systemic rationality, irritated by observing their environment. In a sense, we create a second dimension of ‘reality’ by assuming, through the use of concepts, that our own constructions resemble those of others, and we experience ourselves as part of a community by assuming and asserting that our own constructions largely correspond to those of others. The experience of stability and continuity of one’s constructed reality depends not only on the system’s first-order observation but also on the confirmation of this observation by other observers (second-order observation).

From a radical constructivist perspective, a child learns language not as a system of information transmission but as a form of behavioral coordination. It must learn, through trial-and-error strategies, to connect the multitude of linguistic expressions from adults with desired reactions of its own. Therefore, words like “forks/democracy” coordinate our actions with respect to what a person does when dealing with forks/democracy. Through the word “forks” and similarly through all other words, information is not transmitted but something specific is triggered in the recipient, which is determined by their structure and, indirectly, by their socialization.

Luhmann’s concept of cognition as construction does not provide an absolute, rigid foundation. My explanation of it is based on my observation of Luhmann’s observations, and, in the reader’s case, on your observation of my observation of Luhmann’s observations. Since my, Luhmann’s, and your observations are all constructed and contingent, it is inherently impossible to ‘prove’ the correctness of Luhmann’s theory from the outside. Furthermore, there will always be a blind spot.

A supertheory reflects on the fact that it and its validity are its own product—and is therefore absolutely contingent. […] It is a theoretical endeavor, and there is nothing more to it. It does little outside of theory. […] With supertheory, the world does not become morally better, more rational, or spiritually complete. It only becomes more distinct. – (Möller, 2006)

Radical constructivism is often perceived as a dangerous path because it might lead to the relativization of ‘evil’ actions. If everything is constructed, anything might be justifiable. But again, Luhmann does not claim that there is no reality or that one can construct whatever one desires. If you jump out the window, you will get hurt regardless of what you imagine. However, according to Luhmann (and many others), there is no absolute, and I think this uncertainty makes us more thoughtful than reckless. If absolute truth is on my side, everything is permitted.

It would be erroneous to claim that the application of social systems theory is justified because it describes society more ‘accurately’, for according to this very theory, such claims are impossible. What we can say is that a theory that makes more sense—culturally and personally—might be useful for effective communication, facilitating a better understanding of one another. This view aligns with Richard Rorty, another so-called postmodern ‘charlatan’:

If we can just drop the distinction between appearance and reality, we should no longer wonder whether the human mind, or human language, is capable of representing reality accurately. We would stop thinking that some parts of our culture are more in touch with reality than other parts. […] we would not say that [our ancestors] were less in touch with reality than we, but that their imaginations were more limited than ours. We would boast of being able to talk about more things than they could. – (Rorty, 2016)

We Are Different

So let me ask: Why should machines (even if they are conscious) ‘live’ in a ‘world’ similar to ours? And why should the bat-world, the my-world, or the machine-world be any more true than any other system/environment distinction? I think it is a mistake to undervalue our mental processes just because machines can communicate, calculate, and display creativity. I also think it is a mistake to believe that thinking and computing are similar operations. However, according to systems theory, viewing humans as superior rational entities is misguided as well.

A recurring theme in Luhmann’s writings is the idea of difference and differentiation. Neither we nor any other system holds an intrinsically superior position in society; existence is contingent. Observation necessitates selection, i.e. non-observation (a blind spot). If I observe my cup, I cannot observe myself at the same time. Because of differentiation, there is no system that can make absolute sense of some independent reality. The legal system ‘looks’ at a house in legal terms, the economy in economic terms. There is no view from the top, from above, or from the outside. A system can only observe in accordance with its systemic rationality. And as I argued above, bats, human beings, and maybe at some point in the future, machines ‘live’ in, or better, create different interdependent ‘worlds’. We are different; neither better nor worse with respect to each other and to other beings/systems. I think that should be enough to be fascinated by ourselves and to value us, different as we are.

This multiplicity of ‘worlds’—this functional differentiation—makes communication so difficult. It seems like a miracle that we sometimes understand each other. Luhmann believed that ecological problems are primarily problems of communication. Different social systems (e.g., politics, science, economics) have different ways of observing and communicating about the environment, which can lead to misunderstandings or contradictions. So one idea I have in mind is that artificial communication might help reduce misunderstanding between different systems, including the ecological system. From a Luhmannian perspective, we need a sort of translation from one systemic rationality to another. Furthermore, I believe that we need a stricter coupling between the ecological system and psychic as well as social systems. Maybe artificial communication can establish such a strict structural coupling.

Just because we are not in charge does not mean that things cannot get better. There seems to be nothing inherently wrong with biological evolution. Is social evolution any different?

References

  1. Hofstadter, D. (1979). Gödel, Escher, Bach: An Eternal Golden Braid. Basic Books.
  2. Bender, E. M., Gebru, T., McMillan-Major, A., & Shmitchell, S. (2021). On the dangers of stochastic parrots: Can language models be too big? Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, 610–623. https://doi.org/10.1145/3442188.3445922
  3. Bender, E. M., & Koller, A. (2020). Climbing towards NLU: On Meaning, Form, and Understanding in the Age of Data. Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, 5185–5198. https://doi.org/10.18653/v1/2020.acl-main.463
  4. Möller, H.-G. (2006). Luhmann Explained: From Souls to Systems. Open Court.
  5. Luhmann, N. (1988). Erkenntnis als Konstruktion. Bern: Benteli.
  6. Luhmann, N. (1993). Deconstruction as Second-Order Observation. New Literary History.
  7. Luhmann, N. (2000). Why does society describe itself as postmodern? In W. Rasch & C. Wolfe (Eds.), Observing Complexity: Systems Theory and Postmodernity (pp. 35–49). University of Minnesota Press.
  8. Spencer-Brown, G. (1969). Laws of Form. London: Allen and Unwin.
  9. Esposito, E. (2022). Artificial Communication. The MIT Press. https://doi.org/10.7551/mitpress/14189.001.0001
  10. Shannon, C. E. (1948). A mathematical theory of communication. Bell System Technical Journal, 27(3), 379–423.
  11. Möller, H.-G., & D’Ambrosio, P. J. (2021). You and Your Profile: Identity After Authenticity. Columbia University Press.
  12. McLuhan, M. (1992). The Global Village: Transformations in World Life and Media in the 21st Century. Oxford University Press.
  13. Heidegger, M. (1927). Sein und Zeit.
  14. Baudrillard, J. (1990). The Transparency of Evil: Essays in Extreme Phenomena. Verso.
  15. Tononi, G., & Koch, C. (2015). Consciousness: Here, there and everywhere? Philosophical Transactions of the Royal Society B: Biological Sciences, 370(1668), 20140167.
  16. Nagel, T. (1974). What is it like to be a bat? Philosophical Review, 83(October), 435–450. https://doi.org/10.2307/2183914
  17. von Foerster, H. (1985). Sicht und Einsicht. Vieweg.
  18. Rorty, R. (2016). Philosophy as Poetry. University of Virginia Press.