{"id":68242,"date":"2025-10-16T20:32:41","date_gmt":"2025-10-16T18:32:41","guid":{"rendered":"https:\/\/dasgoetheanum.com\/?p=68242"},"modified":"2025-10-17T13:53:47","modified_gmt":"2025-10-17T11:53:47","slug":"we-do-not-know","status":"publish","type":"post","link":"https:\/\/dasgoetheanum.com\/en\/we-do-not-know\/","title":{"rendered":"We Do Not Know"},"content":{"rendered":"\n<p><strong>Are we at the dawn of a new golden age\u2014one in which the promises of the Enlightenment are fulfilled, human hardship is overcome, and new freedoms take root\u2014or are we witnessing the beginning of the end, a slow displacement of humanity and its culture? This was the question moderator Simone Miller posed to the thinkers Markus Gabriel and Daniel Kehlmann at the international philosophy festival phil.cologne in Cologne.<\/strong><\/p>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\"\/>\n\n\n\n<p>Markus Gabriel&#8217;s simple answer: \u201cIt&#8217;s entirely in our hands.\u201d It depends on which artificial intelligence (AI) systems we build and, in particular, on who builds them, because AI, unlike other technologies, is not value-neutral. Neural networks collect data\u2014often human behavior expressed as data\u2014form patterns from this data, and reinforce these patterns. The outcome therefore depends on the data and on how the patterns are formed. While it\u2019s often said that technology changes human behavior, Gabriel reverses the argument for artificial intelligence: human behavior, condensed into data, shapes AI. He adds another thought: AI doesn\u2019t just work for us; with every query we give it, we provide data and thereby work for these neural networks. The answers AI supplies\u2014the product we receive\u2014come about only through our questions.<\/p>\n\n\n\n<p>Daniel Kehlmann picked up on this contradiction, emphasizing that neither the users nor the developers of artificial intelligence know exactly why and how it works. 
What is important, he said, is that developers realized that it is less about improving the algorithms and more about increasing the amount of data and the speed at which it is processed. \u201cThen the results don\u2019t improve gradually but exponentially. No one knows exactly why this is the case.\u201d He summed it up in a simple image: we have built a vast collection of data on the internet over the last 40 years, and artificial intelligence enables this entire data field to be used for answering our questions.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">AI Makes Us Humans More Intelligent<\/h3>\n\n\n\n<p>Then moderator Simone Miller asked what AI actually is. Markus Gabriel\u2019s answer: systems built to perform behaviors that are considered intelligent when performed by a human or other living being. They are \u201csimulations of intelligent behavior.\u201d He defined intelligence as the ability to solve a given problem in a finite amount of time. If artificial intelligence can be used to solve a problem more quickly, then our own intelligence increases. He offered an intriguing perspective: what\u2019s interesting is not the intelligence of AI but rather the increase in human intelligence through the use of AI. Gabriel then asked whether the simulation of intelligent behavior is itself intelligent behavior. There are many examples of simulations that do not correspond to the original. Pointing at a Mediterranean beach on a map or on Google Maps isn\u2019t relaxing.<\/p>\n\n\n\n<p>The simulation of the trip is something completely different from the actual trip. Does this apply to AI? Gabriel suspects that the simulation of intellectual behavior is different in this case: \u201cThese things think.\u201d Daniel Kehlmann described an experiment in which an AI model was asked to speak in Cornish. This language, which is related to Breton and Welsh, is spoken by only a few hundred people in Cornwall. 
So, there are hardly any Cornish texts on the internet\u2014only a dictionary. Nevertheless, the AI model managed to express itself in Cornish. According to Kehlmann, this astonishing ability shows that what AI can do cannot be derived from its training data alone.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">AI Has No Problems<\/h3>\n\n\n\n<p>But, asked Simone Miller, what about artificial intelligence compared to a more sophisticated concept of intelligence that includes, for example, adaptation to an environment and the ability to develop survival techniques? According to Gabriel, this is the \u201cold line of AI philosophy,\u201d and he quoted John Haugeland: \u201cComputers don&#8217;t give a damn.\u201d AI systems don&#8217;t care about anything. Intelligence is found everywhere in life and grows gradually, from single-celled organisms to slime molds. What all these forms of intelligence have in common is the urge to survive. If intelligence is the ability to solve a problem, he asked, can one be intelligent if one cannot have a problem? But Gabriel didn\u2019t want to follow this line of reasoning and asked whether the processing of thoughts is not itself thinking. Can thoughts think themselves? Kehlmann and Gabriel asked this, bringing up the positions of the philosophers Gottlob Frege (1848\u20131925) and Georg W. F. Hegel (1770\u20131831)\u2014that only living beings can think and that there is thinking in itself. Neither wanted to answer, but according to Gabriel, it\u2019s looking good for Hegel. If thinking were indeed the processing of thoughts independent of \u201cliving embodied beings,\u201d then artificial intelligence would become dangerous. But, interjected Miller, without a body, without sensory experience, no consciousness could arise, despite all the amazing data processing. Kehlmann pointed out that the company Google uses robots to generate experiences of corporeality and then, as with data, scales these experiences through simulation. 
Kehlmann asked whether it might be possible to teach technical entities the \u201cembodied being-in-the-world\u201d that is inseparable from our conception of consciousness.<\/p>\n\n\n\n<p>The conversation between Kehlmann and Gabriel moved into questions that are more open and uncertain than they were before the advent of this new technology. Perhaps this leads to an answer: AI calls on us to determine our thinking, to determine ourselves, and to question whether thought is free of the physical body. Perhaps body-free thinking will gain a place in a machine subconscious if it also occurs in humans. Kehlmann and Gabriel asked this, too, even as they denied AI consciousness and declined to make a final judgment. What a Socratic moment\u2014something completely foreign to AI\u2014to pause in the moment of not knowing.<\/p>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\"\/>\n\n\n\n<p><strong>Translation <\/strong>Laura Liska<br><strong>Image<\/strong> <a href=\"http:\/\/phil.cologne\/\" target=\"_blank\" rel=\"noreferrer noopener\">International Philosophy Festival phil.cologne<\/a> on YouTube: \u201cMy Algorithm and Me\u201d\u2014Markus Gabriel and Daniel Kehlmann on humanity in the age of AI.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Are we at the dawn of a new golden age\u2014one in which the promises of the Enlightenment are fulfilled, human hardship is overcome, and new freedoms take root\u2014or are we witnessing the beginning of the end, a slow displacement of humanity and its culture? This was the question moderator Simone Miller posed to the thinkers Markus Gabriel and Daniel Kehlmann at the international philosophy festival phil.cologne in Cologne. 
Markus Gabriel&#8217;s simple answer: \u201cIt&#8217;s entirely in our hands.\u201d It depends on [&hellip;]<\/p>\n","protected":false},"author":9159,"featured_media":68030,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"footnotes":""},"categories":[11696],"tags":[11708,11709,8824],"class_list":["post-68242","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-ai-challenges-en","tag-ausgabe-39-40-2025-en","tag-english-issue-42-2025","tag-spotlights"],"acf":[],"_links":{"self":[{"href":"https:\/\/dasgoetheanum.com\/en\/wp-json\/wp\/v2\/posts\/68242","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/dasgoetheanum.com\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/dasgoetheanum.com\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/dasgoetheanum.com\/en\/wp-json\/wp\/v2\/users\/9159"}],"replies":[{"embeddable":true,"href":"https:\/\/dasgoetheanum.com\/en\/wp-json\/wp\/v2\/comments?post=68242"}],"version-history":[{"count":0,"href":"https:\/\/dasgoetheanum.com\/en\/wp-json\/wp\/v2\/posts\/68242\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/dasgoetheanum.com\/en\/wp-json\/wp\/v2\/media\/68030"}],"wp:attachment":[{"href":"https:\/\/dasgoetheanum.com\/en\/wp-json\/wp\/v2\/media?parent=68242"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/dasgoetheanum.com\/en\/wp-json\/wp\/v2\/categories?post=68242"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/dasgoetheanum.com\/en\/wp-json\/wp\/v2\/tags?post=68242"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}