AI Singularity: The great fusion

Futurist and AI prophet Ray Kurzweil predicts that computers will soon reach human intelligence. After that, he expects them to merge with humans to form an immortal super-intelligence. Other AI researchers are far more skeptical, even though GPT-4 recently passed a Turing test.

In late June 2024, inventor and visionary Ray Kurzweil sat at a window of the Boston Four Seasons Hotel and held up a sheet of paper. On it, he showed a New York Times reporter a steep growth curve: How much computing power could you buy for a dollar in 1938, and how much the day before yesterday?

The neon green curve in his hand was intended to illustrate why the 76-year-old has just published a new book: ‘The Singularity Is Nearer’ is the follow-up to ‘The Singularity Is Near’, his 2005 bestseller. The concept of technological singularity represents a potential point in the future when artificial intelligence (AI) surpasses human intelligence and then rapidly evolves itself.

Ray Kurzweil, Porsche Engineering, 2024, Porsche AG

According to this line of thinking, technological progress would become unstoppable and no longer controlled by humans. Kurzweil takes this a step further and imagines a moment in which we enhance our brains with virtual neurons in the cloud and thus “merge with AI and improve ourselves with a million times the computing power of our original biology”. Kurzweil’s belief in the singularity is based on the insight revealed by his green graph: it resembles the exponential growth in chip computing power that Moore’s Law predicted in 1965.

Kurzweil calls this the “law of accelerating returns”. It postulates that technological developments create feedback loops that accelerate innovation in other areas as well. The trend has accelerated since he published his first singularity book in 2005: “For a dollar you get roughly 11,200 times more computer power today,” he calculates.
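Kurzweil’s 11,200-fold figure implies a steady doubling rate that can be back-calculated with basic exponential-growth arithmetic. A minimal sketch in Python, assuming the roughly 19-year span between his two singularity books (2005 to 2024) as the interval:

```python
import math

# Kurzweil's figure: price-performance grew ~11,200x between 2005 and 2024.
# The factor comes from the article; the 19-year span is an assumption
# based on the publication dates of his two books.
factor = 11_200
years = 2024 - 2005  # 19 years

doublings = math.log2(factor)       # number of doublings in that span
doubling_time = years / doublings   # implied years per doubling

print(f"{doublings:.1f} doublings, one every {doubling_time:.2f} years")
# → 13.5 doublings, one every 1.41 years
```

A doubling time of roughly 1.4 years is close to the classic Moore’s Law cadence of about two years, which is the comparison Kurzweil’s graph is meant to evoke.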

This will lead, he says, to enormous leaps in biotechnology and nanotechnology. And naturally in computers themselves: In a recent test, cognitive scientists at the University of California had almost 500 participants chat with real people and various large language models. 54 percent of the subjects took the AI model GPT-4 to be a person. According to the researchers, this is the first time a machine has passed the Turing test.

Can a computer think similarly to a human being?

In 1950, computer science pioneer Alan Turing proposed using an ‘imitation game’ to measure whether a computer can think similarly to a human being. As soon as a machine passes this test, machine (artificial) intelligence can be assumed. Turing expected this for the year 2000; in the end, it happened in June 2024. Language models are thus likely to become more and more similar to humans in the near future. The California-based cognitive researchers immediately warned that such bots would have “far-reaching economic and social consequences”. They are not alone in sounding the alarm.

As early as March 2023, more than 1,000 experts in the fields of technology and research had called for a six-month development moratorium for AI models. Instead of developing programs so rapidly that even their developers can no longer understand or control them, they argued, new safety standards are needed first. Although no concrete measures have emerged to date, the appeal opened the debate. Kurzweil had obviously hit a nerve with his statements on the singularity: The development of artificial intelligence is now progressing so quickly that its consequences are scarcely foreseeable anymore.

Weak and strong AI

The devil is in the details, however. “There are two types of AI: Weak AI as an individual capability—for example, to chat, play chess or drive a car—and strong AI in the sense of Artificial General Intelligence (AGI), which uses human-like creativity to solve problems for which it has not been specially programmed,” explains Raúl Rojas. The computer science professor has long researched neural networks, robots, and self-driving cars at FU Berlin, and now teaches mathematics at the University of Nevada. Indeed, the last 20 years have seen major advances in weak AI, which has long since surpassed humans in many individual tasks.

However, Rojas still sees the transition to an AGI or super-intelligence as a distant prospect. “Kurzweil’s fusion of brain and AI is pure science fiction.” And sociologist Thomas Wagner adds: “Singularity can be understood in various ways. For Kurzweil, the idea of immortality is very important, combined with the idea that humans physically merge with AI and a new super being arises.” Even if there are still no human-machine combinations, “thinking computers” could become a reality in the next two to three years, Simon Hegelich predicts.

The political data scientist from the Technical University of Munich is himself working on his own general AI. “Unfortunately, we are not prepared for this turning point in human history,” warns Hegelich. He views Kurzweil’s singularity thesis with a mixture of excitement and ambivalence, particularly because no one in Silicon Valley knows exactly how a super AI with consciousness could even arise. “One assumes that you just have to feed it more and more data, and then the AGI suddenly emerges by itself through a magical spark—I think that’s wrong.” Passing the Turing test sheds no light either. “After all, programming a computer to outsmart humans is not real intelligence. Knowledge is not just data, and learning is not just algorithms. As is well known, people do not think and decide in binary fashion, in zeros and ones, either, but often enough contradictorily.”

The rapid development of AI continues to raise questions

The political scientist Hegelich likes to refer back to the philosopher Georg Wilhelm Friedrich Hegel. Some 200 years ago, Hegel formulated the principle of contradiction in his dialectic: a person can, for example, hate and love another person at the same time. A real, and therefore also ethical, super-intelligence would have to “grow up” in a similar way to a human child. “Every baby has a predisposition to be intelligent.” According to Hegelich, one would therefore have to build a computer that is intelligent in terms of its basic algorithm—its “predisposition.”

Such an AGI would automatically learn by itself and also produce new knowledge, instead of just recombining known knowledge. It would develop a kind of consciousness similar to that of biological life forms. Whether this form of singularity will ever be realized remains open. One thing is certain: The rapid development of AI continues to raise many questions that we humans will have to answer.

Info

Text first published in the Porsche Engineering Magazine, issue 2/2024.

Text: Hilmar Poganatz

Copyright: All images, videos and audio files published in this article are subject to copyright. Reproduction in whole or in part is not permitted without the written consent of Dr. Ing. h.c. F. Porsche AG. Please contact newsroom@porsche.com for further information.
