Shift Happens
Jan 13, 2009
This is the second time in a few days that someone has linked me to this presentation, so I figured I should share it even more widely. 🙂 It’s a bit Singularitarian, but it’s deeply relevant on so many fronts that I do think everyone should read it.
5 Responses to “Shift Happens”
While I can’t speak to other aspects of the presentation, it does look like the predictions for the success of the OLPC project were pretty far off base. Wikipedia says that so far only about a million of them have been ordered since October 2007, versus the 50-100 million per year prediction cited in this presentation.
I enjoy Kurzweil, and this still scared me a bit. Let’s generalize that first comment and assume that ALL the predictions are off by two orders of magnitude: in 2049, it will cost $100,000 for a computer with the processing power of our entire species. That is still ridiculously massive, and ridiculously cheap, computing power.
I need to connect to a species that is better at conceptualizing exponential growth.
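The instinct behind that comment can be made concrete with a little arithmetic. Here is a minimal sketch, assuming (hypothetically — the figure is mine, not the presentation's) a Moore's-law-style doubling of capability every two years; under that assumption, even a 100x error in a prediction only shifts an exponential timeline by about thirteen years:

```python
import math

# Assumed doubling time for computing capability, in years.
# This is an illustrative figure, not a claim from the presentation.
DOUBLING_TIME_YEARS = 2.0

def years_to_absorb(error_factor, doubling_time=DOUBLING_TIME_YEARS):
    """Years of steady doubling needed to make up a given
    multiplicative shortfall in a prediction."""
    return math.log2(error_factor) * doubling_time

# Being off by two orders of magnitude (100x) shifts the
# timeline by log2(100) * 2 years, roughly 13.3 years.
delay = years_to_absorb(100)
```

This is why exponential growth is hard to conceptualize: a prediction that is wrong by a factor of one hundred is, on an exponential curve, only "late" by a bit over a decade.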
I don’t think it is of much use to assume that all of the predictions are off by two orders of magnitude. Most of the predictions in the presentation are governed by forces very different from those that affect the OLPC project. In the case of having processing power equivalent to that of our entire human species, there are a number of standard counterarguments. Obviously there is the fact that comparing the processing power of the brain to the processing power of a computer is not straightforward, and how you choose to assess the two weighs very heavily on how comparable you would consider some theoretical future computer, or even a computer today, to the mind of a human being.
On top of that is the argument that even with this massive amount of computing power we won’t have software available to adequately use it. We have enough trouble as it is designing systems that can effectively utilize the power we already have. The clear trend with software is towards bloat that maximizes resource usage while only marginally increasing functionality. Will we need a computer with the power of the entire human species to run Office 2050? More seriously, will we need that much power just to run the cutting-edge face recognition and other sensory input/pattern recognition work that is being done to some degree in AI research on the supercomputers of today? If we need that $100,000 or even just $1,000 computer in 2049 to realize the AI goals of today or the AI goals of the next decade, can we really consider it to be as powerful as the entire human species?
Finally, what about the economic reality that all of this needs to take place in? As much as many of us would like to believe it, the basic quality of technology is not the sole driver of progress. The fact that something is achievable isn’t always enough reason for governments and corporations to invest in actually building machines of this power. I am not convinced that a machine with the processing power of the entire human species is necessary, and I’m not convinced that whichever corporation is at the forefront of processor development in 2049 will feel that there is a need for it, or a market for it. Obviously, so far there has been no limit to the ways in which people can use the computing power we already have, and no limit to the number of things they would like to create but cannot due to current limits. But can we be sure that such a limit won’t appear by 2049? What if we find that the combined processing power of the entire human species is really more than we need for whatever task we want to do on a single machine? What if people decide that the singularity isn’t something they’re interested in or want to see achieved? What if those are the people making the decisions?
My problem with the presentation is this: how much of those numbers is new data, and how much is new ideas? Furthermore, how many of those ideas are long-lived? Many new ideas simply replace old ones. Lastly, do I really have to be aware of all new ideas? If there’s a new subspecies of bird found in Nepal, just because I can have access to that information, must I know about it?
I can easily see that we are growing in the amount of data we traffic, and I can see that we are continuously generating more and more discoveries at faster rates, but if you step back and think about it reasonably, it isn’t as overwhelming as this presentation makes it out to be.
It’s the idea that increasing feedback increases innovation. The web doesn’t quite work like that. It creates feedback microscopically, but at the macroscopic level it is a copy/fax machine filling up disks with more of the same at a much higher and denser distribution.
That tends to produce cultural gray goo to a shallow depth. When a new innovation does occur, it may spread very fast, but it may also find itself mired in that goo, resisted by the very forces that say they stand for change. Nothing is more sticky than a radical gone conservative, because now they have skin in the game.
Change is happening faster, but resistance is also increasing. Shift happens, but look at the web browser and ask yourself how much HTML-as-used has actually evolved. How many of the games you play are variations on shooters? The technology is getting better because it remains competitive, but so far I don’t see any sudden quantum jumps; in fact, some areas such as 3D graphics seem to be sliding backwards in terms of adoption (are Flash-based 2D systems evolution or devolution?).
Change is lumpy.