- The Mathematics of Doom
- The Bad Effects of Silly Ideas
- The Revolt of the Machines
- Will intelligence develop along with processing power?
- The problem of generality
- How does the Ghost get into the Machine?
- Why a Humanoid Robot Won't Be Dangerous Anyway
- The Contraption/Creature Chasm
- References
"I also questioned them about the museum of old machines, and the cause of the apparent retrogression in all arts, sciences, and inventions. I learnt that about four hundred years previously, the state of mechanical knowledge was far beyond our own, and was advancing with prodigious rapidity, until one of the most learned professors of hypothetics wrote an extraordinary book (from which I propose to give extracts later on), proving that the machines were ultimately destined to supplant the race of man, and to become instinct with a vitality as different from, and superior to, that of animals, as animal to vegetable life."
The argument of Butler's Professor of Hypothetics [Butler] is basically the same as those advanced by the modern advocates of the view that robots (or other superintelligent machines) will forcibly take over:

- Computers will soon be much more powerful than human brains.
- Therefore machines with computer brains will be much more intelligent than we are.
- By that time they will also control a great deal of our technological and informational infrastructure.
- They will therefore naturally want to take over.
- We won't be able to stop them.
Will intelligence develop along with processing power?
The first step in the argument is that computers will soon be more powerful than human brains. This is debatable on both technological and economic grounds, but I don't propose to deal with it here, because I think the more serious and interesting flaws in the argument come later.