Superintelligence and the control problem: Real problem or pseudo-problem?

Authors

  • Jaroslav Malík

DOI:

https://doi.org/10.26806/fd.v13i2.343

Abstract

In this paper, I examine the concept of SI (superintelligence) and the control problem. According to a group of AI theorists, we will soon experience an event that will transform technological progress and human society: the technological singularity, associated with the emergence of the first greater-than-human intelligence. Thinkers like Nick Bostrom stress the dangers of SI and urge us to find methods to control such an intelligence. According to Bostrom and others, the threat of SI stems from its very nature. This article considers how SI could be created and assesses the logic of the control problem. SI is possible only if we can create AI, so a section of the text concentrates on the arguments for AI's creation. I show how Bostrom and others base their thesis on one problematic argument and on the assumptions of their predecessors, and I subject their position to the classical critique of artificial intelligence. My criticism focuses primarily on the claim that SI will have a single final goal, which it will itself interpret. This claim is antithetical to the idea that SI will be a general intelligence. I conclude that the control problem conflates two distinct "control" problems.

Published

2021-12-22