Part of the problem with trying to formalize this argument is that intelligence is woefully underdefined. There are plenty of initially reasonable-sounding definitions that don't necessarily lead to the ability to improve the state of the art w.r.t. 'better' intelligence.
For instance, much of modern machine learning produces systems that, from a black-box perspective, are indistinguishable from an intelligent agent, yet it would be absurd to task AlphaGo with developing a better Go-playing bot.
There are plenty of scenarios that don't result in an intelligence explosion, i.e. where the difficulty of creating the next generation increases faster than the gains in intelligence. Different components of intelligence are mutually incompatible: speed and quality are the prototypical example. There are points where assumptions must be made and backtracking is very costly. The space of different intelligences is non-concave and has many non-linearities; exploring it and communicating the results starts to hit speed-of-light limits.
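To make the first scenario concrete, here's a toy sketch (the model and the numbers in it are purely illustrative assumptions of mine, not anything established): if the difficulty of designing the next generation grows faster than the capability each generation adds, the per-generation gains shrink and total capability converges to a finite ceiling rather than exploding.

    # Toy model, illustrative assumptions only: each generation's gain is its
    # current capability divided by the design difficulty, and difficulty
    # grows geometrically faster than capability does.
    capability = 1.0
    difficulty = 1.0
    for gen in range(1, 51):
        gain = capability / difficulty   # what this generation manages to add
        capability += gain
        difficulty *= 2.5                # the next design problem is much harder
        if gen % 10 == 0:
            print(f"gen {gen:2d}: capability = {capability:.6f}")
    # capability approaches a finite limit (the gains form a convergent series),
    # i.e. continual self-improvement but no intelligence explosion.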
I'm sure there are other potential limitations; they aren't hard to come up with.