What makes us special as human beings is our bodies: we experience reality through them. AI has no body and no overarching spiritual organism that can change its habitat, as living beings do. What follows is a comparative analysis from a scientific perspective.
The trend that led humans to live increasingly in a world of information did not originate in recent decades but rather stretches back centuries. With the advent of writing, a cultural technology emerged that allowed human beings to create a world—a “corpus of knowledge.” This corpus, or body, is so large that it is impossible to take in more than a tiny fraction of this vast world of knowledge and information within a single lifetime. So the question arises: on what basis do we consider ourselves capable of making judgments at all?
Let’s assume, for example, that we could read all the texts ever written on the question of how a stone falls: the texts of Aristotle, the treatises of the scholastics, the modern representatives of mechanical philosophy up to Galileo, Newton’s work, the many-worlds quantum theories, and finally today’s string theories. Would all this knowledge make us capable of judgment on this question? I don’t think so. Ultimately, it is our physicality, and the way it allows us to check our assessments against reality (through experiments, for example), that enables us to grasp the meaning of these writings. Disembodied beings, regardless of their intelligence, cannot comprehend the question. Whenever a question concerning our world, be it scientific, cultural, or political, is ultimately answered, it is not by way of pure logic but through a comparison with reality. The natural sciences are not based on pure knowledge but on experience that enables us to make judgments.
Emancipated Information
However, it’s becoming increasingly difficult to verify reality amid the flood of information. The globalization of the media landscape—especially through digital media—is increasingly leading news readers to concede that they cannot make judgments, because the reality of a situation distant from them in space and time is impossible to verify. Information is becoming “emancipated” from the context of our lives. As a physicist, I’m familiar with the phenomenon that almost any statement can be found in huge amounts of data and logically supported by further contexts, without it being clear whether the statement has anything to do with reality. The emergence of fact-checkers and terms such as “fake news” and “alternative facts” rests on the implicit admission that it’s becoming increasingly difficult to verify reality. AI simply continues this trend.
Large language models such as ChatGPT are among the most remarkable technical developments of recent years; the linguistic and content quality of the generated texts can be admirable. And since they are based solely on the “corpus of knowledge” and cannot establish any direct connection to reality, they provide an opportunity to study how the internal logical structure of a text can be completely independent of whether the content described has any connection with reality or is counterfactual—contradicting actual historical events, for example. In recent years, through more precise training methods, developers have successfully reduced counterfactual statements, often referred to as “hallucinations.”1 But I wonder whether the presence of these “hallucinations” should be classified less as a problem of language-model programming and more as a characteristic of a gigantic “corpus of knowledge” whose content is increasingly emancipating itself from reality. Is what currently seems merely an amusing malfunction of a language model threatening to become the new norm of a society that increasingly lives outside of physical perception? Doesn’t artificial intelligence reflect phenomena that we can already study in ourselves? From this perspective, the concern that AI systems could become increasingly similar to humans seems rather unfounded—if only because humans are becoming increasingly similar to AI. Waldorf education and Goetheanism, with their focus on regular and conscious sensory and physical experience, appear more and more to be indispensable, existential requirements of our present times.

Learning Without Experience
People learn through experience, which sometimes occurs as a rather painful collision with reality. Since this is impossible for a disembodied entity that operates only in digital space, the question arises as to what is meant by “learning” and other such terms in the context of AI.2 “Neural networks,” the structures underlying language models, are computational systems with a very large number of parameters. The parameters allow for adaptation; the larger the set of parameters, the greater the possibilities for adaptation. However, there are usually no criteria for determining the significance of a particular parameter or how it should be set—so how is adaptation possible? It is achieved through optimization procedures (training), in which the principle of trial and error is used to test whether a model with a varied set of parameters fits its dataset better. If the adaptation is more successful, the parameters are kept; if less, they are changed, and the training continues. This approach follows the evolutionary principle of selection and elimination, as it has been described in various ways for Darwin’s theory of evolution (here, of course, in simplified form). If adaptive learning in neural networks is based on this evolutionary principle, what areas of development are conceivable for AI?
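The trial-and-error principle described above can be sketched in a few lines. The toy example below is purely illustrative (all names and values are invented, and real language-model training uses gradient-based methods rather than pure random search): a single parameter is varied at random, and a variation survives only if it fits the dataset better.

```python
import random

# Toy dataset: we want the model y = w * x to fit these points (true w = 3).
data = [(1, 3), (2, 6), (3, 9), (4, 12)]

def loss(w):
    """Measure how poorly a candidate parameter w fits the dataset (lower is better)."""
    return sum((w * x - y) ** 2 for x, y in data)

def train(steps=1000, seed=0):
    """Trial and error: propose a random variation of the parameter and
    keep it only if it improves the fit; otherwise eliminate it (selection)."""
    rng = random.Random(seed)
    w = rng.uniform(-10, 10)                # random starting parameter
    for _ in range(steps):
        candidate = w + rng.gauss(0, 0.5)   # blind random variation
        if loss(candidate) < loss(w):       # selection: keep only improvements
            w = candidate
    return w

print(train())  # converges toward the true parameter, 3.0
```

Note that only the selection step decides what survives; the variation itself is blind. This is the sense in which such optimization mirrors the selection-and-elimination principle discussed here.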
As a physicist, I love learning from my colleagues in biology and the life sciences. Theories of natural selection provide a wonderful way of understanding how a species optimizes itself for a certain niche and adapts to a specific habitat or to changing environmental conditions. But it is still very difficult to understand how a new species actually forms: innumerable coordinated changes in the organism are required (changes that create disadvantages in the current habitat) before they can yield any significant advantage in a new one. Natural selection explains the survival of life forms that adapt, for which Darwin (following Spencer) used the formula “survival of the fittest.” But the emergence of these life forms (first described as the “arrival of the fittest” by the philosopher Jacob Gould Schurman)3 is not comprehensible through the theory of adaptation by selection and elimination alone. The concept of natural selection assumes that evolutionary changes do not arise through individual living beings learning in interaction with their environment, because such learning is lost in the transition to the next generation. The mechanism of evolution is taken to be the process of selection; learning plays no role. The inheritance of abilities and characteristics acquired through learning by an individual specimen of the species has been categorically ruled out.4 However, a species viewed as a spiritual entity “learns” through its individual specimens. Neural networks replicate this form of learning in their structures and optimization strategies. Because of this structural similarity to Darwinian selection, the evolutionary possibilities of AI models based on neural networks are limited, as in Darwin’s theory, to optimizations within different niches.
From this perspective, it’s difficult to see how something genuinely new—a new style of art, for example—could ever emerge from AI. In the perpetual, billionfold self-reference of AI within its harvested “corpus of knowledge,” the future can only be conceived as a continuation of the past.
The Importance of the Individual for Evolution
We view evolution on Earth as a process of species development, and we view the adaptation strategies of neural networks in the same way. Seen as a spiritual entity, every “species” is a kind of superorganism that we believe capable of bringing about important changes to habitats on Earth; as individual living beings, we tend to feel insignificant by comparison. However, there are indications that even in biological evolution, sudden developments arise precisely from the learning processes of single individuals. The observation of plasticity (the adaptability and learning ability of individual specimens) and the inheritance of characteristics acquired through it has closed a gap in our understanding of the emergence of land vertebrates from lungfish.5 It may have been only a small population that initially made the enormous evolutionary leap onto land—an “arrival of the fittest.” Very similar examples can, of course, be found in cultural history. Such phenomena are unthinkable in today’s AI systems. This is due not only to these models’ inherent lack of comparison with reality and to the principle of “evolving” the model through selective adaptation, but above all to the fact that the model contains only parameters, not individual living organisms or specimens of a species. These AI models cannot open up the scope for evolution and development that an individual is capable of as part of natural and cultural evolution.
Translation Joshua Kelberman
Title image Fiber optic connections in a server room. Photo: Albert Stoynov/Unsplash
Footnotes
1. OpenAI, “Why Language Models Hallucinate,” September 5, 2025; accessed September 25, 2025.
2. It has been pointed out on several occasions that the anthropomorphisms in terms such as “learning” or “hallucination” can be reinforced by the use of language assistants such as Alexa; they obscure the purely statistical nature of these phenomena and can lead to the personification of language models. See, for example, Matthias Rang, “Das verliehene Du. Elemente unseres Verhältnisses zur Technik” [The loaned “you”: elements of our relationship with technology], Stil 45, no. 3 (2023): 7–15.
3. Glenn Branch, “Whence ‘Arrival of the Fittest’?” National Center for Science Education, May 27, 2015.
4. See, for example, Johannes Wirz and Ruth Richter, “Epigenetik und epigenetische Vererbung” [Epigenetics and epigenetic inheritance], in Reinhard Wallmann and Ylva-Maria Zimmermann, eds., Biologie in der Waldorfschule: Praxishandbuch für die Oberstufe [Biology in the Waldorf school: a practical handbook for the upper grades] (Stuttgart: Verlag Freies Geistesleben, 2019).
5. E. M. Standen, T. Y. Du, and H. C. Larsson, “Developmental Plasticity and the Origin of Tetrapods,” Nature 513, no. 7516 (2014): 54–58; Johannes Wirz and Ruth Richter, “Als die Fische gehen lernten” [When fish learned to walk], Elemente der Naturwissenschaft 103 (2015): 116–19, doi:10.18756/edn.103.116.