I posted the piece below on NewApps in connection with the debate on the technological singularity at the Leuven “Feest van de Filosofie” next Saturday. Since this is not a topic I normally do research on (although it certainly interests me), it is partly a call for reading suggestions (digestible in a short amount of time). That call obviously applies here as well. ;-)
Thinking about the technological singularity
Next Saturday, the University of Leuven is hosting an outreach event called Philosophy Festival (“Feest van de Filosofie”). This year’s theme is people & technology (“mens & techniek”). I was asked to join a panel discussion on the technological singularity (link). The introduction will be given by a computer engineer (Philip Dutré, Leuven). There will be a philosopher of technology (Peter-Paul Verbeek, Twente) and a philosopher of probability (me, Groningen); and the moderator is a philosopher, too (Filip Mattens, Leuven). So far, I have not worked on this topic, although it does combine a number of my interests: materials science, philosophy of science, and science fiction.
The idea of a technological singularity (often associated with Ray Kurzweil) originates from the observation that the rate of technological innovation seems to be speeding up. Extrapolating these past and current trends suggests that there may be a point in the future at which systems that have been built by humans (software, robots, …) will become more intelligent than humans. This is called the technological singularity. Moreover, once there are systems that are able to develop systems more intelligent than those of the previous generation, there may be an intelligence explosion. The capabilities of later generations of such systems are inconceivable to humans. (This theme has been explored in many science fiction stories, including the robot stories by Isaac Asimov (1950s and later), the television series “Battlestar Galactica” (2004-2009), and the movie “Her” (2013).)
Even this brief introduction gives us plenty of opportunity for reflection on concepts (What is intelligence?) and consequences (What will happen to humans in a post-singularity world?). I am planning to analyze a very basic assumption, by raising the following question: When are we justified in picking a particular trend that has been observed in the past (e.g., Moore’s law, which describes the exponential increase in the number of transistors on a commercial chip) and extrapolating it into the future? Viewed in this way, the current topic is an example of the general problem of induction.
The hypothesis “The observed trend will continue to hold” is only one among many. Let me offer two alternative hypotheses:
Alternative hypothesis (1): Crash
The increasing rate of change will cause a breakdown of human society. Recently, I was next in line to buy a parking ticket and noticed that the elderly person in front of me was having trouble operating the brand-new and fancy ticket machine. (I tried to assist him, but this only added to his embarrassment.) This small encounter made me worry about my own future: will I be able to keep up with the rest of society if things keep changing at this speed (or even faster)? So, there might just as well be a technological burnout instead of an intelligence explosion.
Alternative hypothesis (2): Stagnation
In science, there are lots of examples of processes that start with an exponential increase (of some variable) followed by saturation. (You might think of an exponential onset of growth followed by delayed growth due to limitations of space, supplies, etc.) The result is an S-shaped curve, rather than an exponential one. This reminds me of the ideas of Ivan Illich, who developed a theory of two turning points. Illich also surveyed past technological advances, but he came to a conclusion very different from Kurzweil’s. According to Illich, the first turning point is marked by a steep increase in efficiency, but at the second turning point, it is the system’s disadvantages that start to build up (counterproductivity): just think of cars vs. traffic jams, or hospitals vs. hospital-acquired infections.
At this point, we are faced with three options: (0) continued exponential increase (at least long enough to reach the singularity), (1) crash, and (2) stagnation. Which one correctly predicts our future?
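To make the induction problem concrete: with purely illustrative parameters (made up for this sketch, not fitted to any real data), an exponential curve and a logistic (S-shaped) curve are nearly indistinguishable early on, yet diverge dramatically later. So observations from the early phase alone cannot tell hypothesis (0) apart from hypothesis (2):

```python
import math

# Illustrative parameters: carrying capacity K and growth rate r.
# These are assumptions chosen for the demonstration, not empirical values.
K, r = 1000.0, 0.5

def exponential(t):
    # Unbounded exponential growth, starting at 1.
    return math.exp(r * t)

def logistic(t):
    # S-shaped curve: also starts at 1, but saturates at K.
    return K / (1 + (K - 1) * math.exp(-r * t))

# Early on the two curves nearly coincide; later they diverge wildly.
for t in [0, 2, 4, 10, 20]:
    print(f"t={t:2d}  exponential={exponential(t):10.1f}  logistic={logistic(t):8.1f}")
```

At t = 2 the two values differ by less than a percent, while at t = 20 the exponential has overshot the saturating curve by more than an order of magnitude. An observer with data only from the early phase has no way, from the curve alone, to decide which trend to extrapolate.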
I am planning to leave that as an open question, although I might add two remarks. First, to answer this question, we would need to look into the underlying processes that may sustain each of these projected trends and then determine how likely we consider each of these processes to be. Second, there may yet be many other hypotheses besides these three.
I would not be surprised to learn that none of this is very original and that all of it has been said by others in more eloquent ways. Unfortunately, I simply do not have the time to check the literature. Instead, I just read an introduction to the topic and let my mind wander. If you are willing to share your own thoughts or references to short texts (such that I can digest them in the limited time I have), that would be greatly appreciated!