Understanding the technical processes and hardware behind AI helps us to use it freely as a tool. Oliver Pauletto, head of the IT department at the Goetheanum, gives us a glimpse behind the scenes.
As a computer scientist working in the field for the last 25 years, I’ve observed countless times that people’s attitudes toward computer systems change once they understand how they work in detail, behind the scenes. Those who understand the mechanics of a computer gain not only technical know-how but also a clearer, more differentiated, and more conscious approach to the subject. Instead of fearing a foreign “thing,” the computer becomes an instrument that they can use freely and creatively. This experience has stayed with me throughout my professional life, and it was one of the reasons I decided to use and study open-source systems.

For me, this is a deeply anthroposophical approach: not accepting technology as an impenetrable mystery but treating it as something we can understand and shape. Those who know how technology works encounter it differently, not as a black box but as something one can relate to. Understanding provides security and serenity, and with them the freedom to actively design things ourselves. As soon as the “how” becomes clear, the nebulous feeling of dependence disappears and transforms into clarity and creative power. This attitude also informs my work at the Goetheanum: our reliance on open-source software is a conscious choice. Only when I understand the source code and internal processes can I develop a healthy distance from the tool I’m using while at the same time maintaining a closer connection to it. For me, open systems are therefore not just a technical choice but the expression of an attitude.
From a Feeling of Dependence to Curiosity and Creative Drive
With the advent of artificial intelligence (AI), we are witnessing the entrance of another powerful digital system into our everyday lives. A new, mysterious entity, seemingly capable of anything, is taking center stage and leaving many people feeling perplexed and powerless. This new technology is enigmatic, surrounded by myths, and often treated as if it had a personality of its own: a tool is perceived as a “person” that understands us and increasingly controls us. In recent months, I have therefore been studying AI intensively: technically, in detail, under the hood. And once again, I’ve observed how liberating it is to understand the mechanisms. A nebulous impression becomes a clear picture; the feeling of dependence turns into curiosity, and curiosity into creative freedom.

I remember hearing about artificial neural networks for the first time about ten years ago. I was deeply impressed that a system can improve itself through mathematical learning alone and that the central program code often consists of only a few lines. The core of every AI is astonishingly simple: information is broken down into millions of tiny units that are modeled mathematically and related to each other as vectors. At the same time, the idea that this cold mathematics is supposedly derived from the way our brain works seems strange to me. Is our brain really a machine? Do we have such a mechanism within us?
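To make this concrete, here is a minimal sketch of my own (an illustration, not code from any particular system): a complete neural network in Python, using the NumPy library, that teaches itself the XOR pattern. The entire learning mechanism, a forward pass and a corrective backward pass, really does fit in a handful of lines of arithmetic on vectors and matrices.

```python
import numpy as np

rng = np.random.default_rng(0)

# The four input patterns and their XOR targets, as plain vectors of numbers
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Weights and biases: the only "knowledge" the network will ever have
W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)

for _ in range(20_000):
    # Forward pass: combine the input vectors mathematically
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: nudge every number against its share of the error
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * (h.T @ d_out)
    b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * (X.T @ d_h)
    b1 -= 0.5 * d_h.sum(axis=0)

print(out.round(2).ravel())  # converges toward [0, 1, 1, 0]
```

Nothing here understands what XOR means; millions of such weight adjustments, scaled up, are all that happens inside far larger systems.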
I want to take a closer look at two terms whose meaning only became clear to me through the AI debate: the German term “Intelligenz” and the English “intelligence,” which do not necessarily correspond. “Intelligence” can describe an activity: it refers to information processing, analysis, and problem solving (as in “military intelligence”). In German, on the other hand, “Intelligenz” usually implies thinking, understanding, and even consciousness. When we talk about “artificial intelligence” [“künstliche Intelligenz” (KI) in German], many people intuitively associate it with a system that feels, judges, or has self-awareness. In reality, since the Dartmouth Proposal of 1955/56, the term has referred to something much more modest: the automation of problem solving, that is, systems that mathematically model, analyze, and derive decisions from large amounts of data. AI recognizes correlations in data, not their meaning. It has no life, no conscience, and no will. All it “knows” are statistical probabilities within defined targets.
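What “statistical probabilities within defined targets” means can also be shown in a few lines. The following sketch (again my own toy illustration, with a made-up three-word vocabulary) mimics the final step of a language model: raw correlation scores are converted into a probability distribution over possible next words, and that distribution is all the system has.

```python
import numpy as np

# Hypothetical raw scores ("logits") that a trained model might assign to
# three candidate next words. They come from learned correlations, nothing more.
candidates = ["dog", "cat", "idea"]
logits = np.array([2.1, 1.3, -0.5])

# The softmax function turns the scores into probabilities that sum to 1
probs = np.exp(logits) / np.exp(logits).sum()

for word, p in zip(candidates, probs):
    print(f"P(next word = '{word}') = {p:.2f}")
# The model simply draws from this distribution; at no point does it
# "understand" what a dog, a cat, or an idea is.
```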

AI Is and Remains a Tool
Our brains process sensory impressions into orderly patterns, but we humans do more than that: we can transform these patterns into living meaning. We experience, interpret, feel, and desire. We connect experiences with intentions, meaning, and empathy. AI does not go beyond cold, albeit efficient, calculations. It may be precise, but it does not understand. Understanding how a technology works can free us from dependence. AI, as impressive as it may seem, is and remains a tool for pattern recognition and probability calculation, without consciousness of its own. Humans are the ones who can not only recognize meaning but also create it through feeling and desire. This is precisely where our responsibility lies in dealing with this technology. Meaning is more than statistics or correlation; it encompasses everything that lies beyond data points. As soon as we reduce meaning to mere rows of numbers, we lose something deeply human. That is why we must ask ourselves every time we use AI: Why are we using it? What purpose are we pursuing? What are the ethical consequences of this use? If we focus on people, their creative power, and their ability to create meaning, AI can become a useful tool. If not, it threatens to alienate us from our inner world.
In the end, I return to my starting point: open, understandable systems such as open-source software create a necessary distance from our tools. The journey from programming to conscious self-observation is not only a technological development but also a philosophical one: only when we open the black box and understand the logic hidden inside can we see the world of technology, and ourselves, more clearly.
Translation Joshua Kelberman
Title image CERN data center in Geneva, Switzerland. Photo: Florian Hirzinger