Wednesday, February 28, 2007

This Year’s Model II

I don’t believe that any SF writer or futurist in the 50s or 60s gets any credit for writing about how computers would be the Next Big Thing. For that matter, no one who had seen the first transistor radios replace the tubed giants could have missed that computers were also going to be the Next Little Thing. The increase in computing power and the miniaturization of electronics were part of the “log-log paper and a straight edge” view of the future, and the only real error in that area was the slope of the line that got drawn.
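
For readers who never pushed a straight edge across log paper: the trick is just to plot the historical figures on a logarithmic scale, lay a ruler along them, and read the future off the extended line. Here is a minimal sketch of that exercise, with a handful of rough early-microprocessor transistor counts standing in for whatever data a futurist might have had at hand; the specific numbers are only illustrative.

```python
import numpy as np

# Rough, illustrative (year, transistor count) points for early microprocessors.
years = np.array([1971, 1974, 1978, 1982, 1985, 1989, 1993])
counts = np.array([2.3e3, 6.0e3, 2.9e4, 1.3e5, 2.8e5, 1.2e6, 3.1e6])

# "Straight edge on log paper": fit a line to log(count) versus year...
slope, intercept = np.polyfit(years, np.log10(counts), 1)

# ...and extend it into the future. The only thing to get badly wrong is the slope.
for future_year in (2000, 2010):
    predicted = 10 ** (slope * future_year + intercept)
    print(f"{future_year}: ~{predicted:.1e} transistors")
```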

Nevertheless, foresight as to what those computers would be for was not all that good. The (possibly apocryphal) story of the fellow who estimated that the world would need about six computers apparently comes from John Mauchly, co-developer of ENIAC, which did numerical calculations for the Defense Department. Given that Mauchly went on to design the Univac I, which ultimately sold about 40 machines, his estimate wasn’t that far off, provided it was about economics, markets, and the near term. Certainly no one was going to use a Univac for word processing.

The split between number crunching and number sorting was there in computing from early on, in the division between “business” computers and “scientific” computers. By the time I came onto the scene, that split was exemplified by those working on IBM machines and those working on CDC machines, with the latter sneering at the former at every opportunity. IBM was the domain of Cobol, while science used Fortran, or sometimes PL/I. Algol was Burroughs territory, and that land was too weird to rank.

A vast oversimplification, of course: there were Fortran compilers for IBM machines as well, and I’m sliding over Honeywell and the rest of the “seven dwarfs” of the computing world, but then, I’m not trying to write a history of computing here. Besides, mini-computers and then micro-computers soon made the previous status hierarchy obsolete.

From the science and engineering end of it, computers changed the equation, so to speak. As the price/performance curve accelerated, all sorts of calculations that had been prohibitively expensive became cheap and practical. It was the opening of a New World, not geographically but conceptually, with unfathomable riches just strewn around on the ground. The question was, what part of the new landscape did you want to explore?

Before I get into some of the details of things that I personally know about, though, let me mention the real heartbreaker: Artificial Intelligence.

In a 1961 article in Analog entitled “How to Think a Science Fiction Story,” G. Harry Stine did a bunch of extrapolations and made some predictions about the future. It’s worth noting that Stine was pushing something more extreme than simple exponential extrapolation; in fact, what he was predicting was a lot like the Singularity that is now all the rage. As a result, among other things, he predicted FTL travel sometime in the 1980s and human immortality for anyone born after 2000. Didn’t help Stine much; he died in 1997.

Stine’s extrapolation for computers was also a tad optimistic, predicting about 4 billion “circuits” in a computer by 1972. I’m not sure what he meant by “circuit,” but we’ve only recently reached that number of transistors on a single CPU chip. Stine also thought that the human brain had that many “neural circuits.” Again, I’m not sure what those were supposed to be, but the human brain has on the order of 100 billion neurons.

Of course, a neuron is also a lot more complex in function than a transistor, so we’re still nowhere near a brain-simulating device.

Nevertheless, Stine’s optimism was pretty well matched by the actual AI community, which thought that neurons and brains were really, really inefficient, and that it would be possible to create intelligent devices that used far fewer circuit elements than the human brain required. Oops.

The result was an entire generation of computer science lost to AI. I’m overstating that, of course, partly because I knew more than one person who had his heart broken by the failure of the AI dream. I mention it as a cautionary tale. Not all the fruit in the New World is tasty. Not all of it even exists.

(For the record, based on some guesses about the complexity of neural function and assuming that Moore’s Law holds forever, I made an estimate in the mid-1980s that real machine intelligence would take 80 to 100 years to come to fruition. Given the recent announcements by IBM of the “Blue Brain” project to simulate neural behavior in the neocortex, I’m currently estimating it will take 60-80 years. Unless Stine was right about the human longevity thing, I’ll not live to see that prediction proved right, although I could see it proved wrong if someone gets there first.)
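
A back-of-the-envelope version of that kind of estimate is easy to reproduce. The sketch below is mine, not the original calculation, and every input (operations needed to simulate a brain, compute available today, the doubling time) is an assumed, illustrative number; the point is only the shape of the arithmetic: take the ratio, count the doublings, multiply by the doubling time.

```python
import math

# All inputs are illustrative assumptions, not the figures behind the original estimate.
NEURONS = 1e11                  # ~100 billion neurons, the order of magnitude cited above
SYNAPSES_PER_NEURON = 1e4       # assumed average connectivity
OPS_PER_SYNAPSE_PER_SEC = 1e3   # assumed cost of modeling one synapse's dynamics
brain_ops = NEURONS * SYNAPSES_PER_NEURON * OPS_PER_SYNAPSE_PER_SEC  # ~1e18 ops/sec

current_ops = 1e12              # assumed sustained ops/sec on an affordable machine today
doubling_time_years = 1.5       # Moore's Law doubling time, assumed to hold forever

doublings = math.log2(brain_ops / current_ops)
years = doublings * doubling_time_years
print(f"{doublings:.1f} doublings, roughly {years:.0f} years out")
```

With these particular guesses the line crosses in a few decades; crank up the assumed complexity of a single neuron, as the earlier comparison with transistors suggests you should, and the answer slides out toward the 60-100 year range, which is presumably why the estimate moves so much with new information about neural function.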

Still and all, whatever happened in computer science stayed in computer science, for the most part. The only real hangover of the AI binge that I’ve ever had to deal with is the fact that the macro language of AutoCAD is a version of Lisp. Besides, I was never much interested in HAL, Mike, or Robby the Robot. I wanted to be Hari Seldon, or someone like him.

I’ll write about that later.
