I posted the piece below on NewApps in the run-up to the debate on the technological singularity at the Leuven “Feest van de Filosofie” next Saturday. Since this is not a topic I normally do research on (although it certainly interests me), it is partly a call for reading suggestions (ones digestible in a short time). That call naturally applies here as well. ;-)
Thinking about the technological singularity
Next Saturday, the University of Leuven is hosting an outreach event called Philosophy Festival (“Feest van de Filosofie”). This year’s theme is people & technology (“mens & techniek”). I was asked to join a panel discussion on the technological singularity (link). The introduction will be given by a computer engineer (Philip Dutré, Leuven). There will be a philosopher of technology (Peter-Paul Verbeek, Twente) and a philosopher of probability (me, Groningen); and the moderator is a philosopher, too (Filip Mattens, Leuven). So far, I have not worked on this topic, although it does combine a number of my interests: materials science, philosophy of science, and science fiction.
The idea of a technological singularity (often associated with Ray Kurzweil) originates from the observation that the rate of technological innovation seems to be speeding up. Extrapolating these past and current trends suggests that there may be a point in the future at which systems built by humans (software, robots, …) become more intelligent than humans. This point is called the technological singularity. Moreover, once there are systems able to develop systems more intelligent than those of the previous generation, there may be an intelligence explosion. The capabilities of later generations of such systems would be inconceivable to humans. (This theme has been explored in many science fiction stories, including the robot stories by Isaac Asimov (1950s and later), the television series “Battlestar Galactica” (2004–2009), and the movie “Her” (2013).)
Even this brief introduction gives us plenty of opportunity for reflection on concepts (What is intelligence?) and consequences (What will happen to humans in a post-singularity world?). I am planning to analyze a very basic assumption, by raising the following question: When are we justified in picking a particular trend observed in the past (e.g., Moore’s law, which describes the exponential increase in the number of transistors on a commercial chip) and extrapolating it into the future? Viewed in this way, the current topic is an instance of the general problem of induction.
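The mechanics of such an extrapolation are trivial; the philosophical work lies entirely in justifying the choice of trend. To make that contrast vivid, here is a minimal sketch of fitting and extending a Moore's-law-style exponential trend. The data points are purely illustrative (they are constructed to roughly double every two years), not real chip counts, and this is of course not Kurzweil's actual method:

```python
import math

# Hypothetical (year, transistor count) observations, constructed to
# roughly follow a doubling-every-two-years trend.
observations = [(2000, 4.2e7), (2005, 2.3e8), (2010, 1.2e9), (2015, 7.0e9)]

def fit_exponential(data):
    """Least-squares fit of log2(count) = a * year + b.

    Fitting in log-space turns the exponential trend into a straight
    line, so ordinary least squares applies.
    """
    n = len(data)
    xs = [year for year, _ in data]
    ys = [math.log2(count) for _, count in data]
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    a = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
    b = mean_y - a * mean_x
    return a, b

def extrapolate(a, b, year):
    """Predicted count in a given year -- IF the fitted trend continues."""
    return 2 ** (a * year + b)

a, b = fit_exponential(observations)
print(f"implied doubling time: {1 / a:.1f} years")
print(f"predicted count in 2030: {extrapolate(a, b, 2030):.2e}")
```

Note that nothing in the code justifies the step from the fitted line to the 2030 prediction: the `extrapolate` function simply assumes the trend persists, which is exactly the inductive assumption at issue.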
The hypothesis “The observed trend will continue to hold” is only one among many. Let me offer two alternative hypotheses: