Ethical AI?
“[T]o understand something or someone so completely that the observer becomes a part of the observed—to merge, blend, intermarry, lose identity in group experience. It’s a deep, almost metaphysical understanding.”1 These words may well be reminiscent of descriptions of intuitive cognition as they arose in the context of Idealism, Goethe, or anthroposophy, among others. But they also represent the ideal of the new artificial intelligence (AI) that the company xAI announced on November 5, 2023.
“The term ‘grok’ comes from Robert A. Heinlein’s 1961 science fiction novel Stranger in a Strange Land. In the context of the book, ‘grok’ is a Martian word”2 with a meaning comparable to the above definition. In modern parlance, mainly in tech and geek culture, “Grok” continues to be used to describe a deep, intuitive understanding. The AI Grok will be accessible through a chat interface (like ChatGPT) and will later be integrated into Tesla’s cars and, more importantly, the humanoid Tesla Bot, to enable straightforward and “intuitive” interaction with human beings.
Ever-accelerating technology is often depicted as a threat that will displace humanity and our ideals, and individual human beings along with them. But what happens when an AI is explicitly connected to ideals of deeper human knowledge? And how is this compatible with the mission statement on xAI’s website: “We are guided by our mission to advance our collective understanding of the universe”?
The effects of these kinds of projects are still unknown, but anyone interested in these developments in technology can see how some projects are also seeking to express certain higher ideals.
Source: Announcing Grok; Robert Scoble on X
Translation: Joshua Kelberman
Photo source: xAI
Footnotes
- Robert Scoble, Twitter post, November 4, 2023, 5:58 AM; see also R. O. McGiveron, “From Free Love to the Free-Fire Zone: Heinlein’s Mars, 1939–1987,” Extrapolation 42, no. 2 (2001): 137–149.
- Ibid.
Yes, what exactly happens when our deepest feelings are mimicked by a computer? I might be alone in finding such research goals creepy; so be it. Is anybody else concerned about computers simulating empathy, and about the failure to at least question the notion as Ahriman in Lucifer’s garb?
I am slightly mystified by this short article in the Goetheanum. I would not be surprised to read it in a geeky online magazine. Hopefully I just missed the obvious critique, but to be safe I am checking here. Enjoy your Tesla, where you will shortly be able to commune with the universe.
My understanding is that technology has been mimicking and optimizing human activities all along. Thinking is just the latest human activity that can be mimicked.
The human being evolves through the active transformation of the environment, not least through technology. Isn’t it important that technology reflect human ideals as much as possible, and not only egoistic or short-term views?