Wednesday, February 28, 2007

Next Year’s Model: Psychohistory

In writing about The Foundation Trilogy, Asimov once explained that his notion of how psychohistory would work was an analogy to chemistry, where individual, hard-to-predict atoms are nevertheless predictable when there are great numbers of them, such that statistical mechanics can explain their mass behavior. Analogies are tricky. Next to metaphor, they are the most poetic of models, and as such are prone to all the problems of poetry, sounding good but just possibly being a crock.
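To see that kernel in miniature, here’s a toy sketch in Python (my own illustration, nothing to do with Asimov’s fictional math): each “particle” behaves unpredictably on its own, but the average over many of them settles down, its spread shrinking roughly like one over the square root of the count.

```python
# Illustrative only: individual "particles" are unpredictable, but the
# ensemble average is tightly predictable. All names and numbers are mine.
import random

def ensemble_mean(n_particles):
    """Average a random per-particle quantity over n_particles."""
    return sum(random.gauss(0.0, 1.0) for _ in range(n_particles)) / n_particles

random.seed(42)
for n in (10, 1000, 100000):
    # The spread of the ensemble mean shrinks roughly like 1/sqrt(n).
    samples = [ensemble_mean(n) for _ in range(100)]
    print("n=%7d: spread of mean over 100 trials ~= %.4f"
          % (n, max(samples) - min(samples)))
```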

So I alternate between thinking that Asimov’s explanation was loopy and bizarre and thinking that it contained a simplified kernel of truth.

In any case, while I might bristle at comparing complex humans to simple atoms, we do have the example of economics, where homo economicus rules the theoretical landscape, though “cognitive economics” has lately become a buzzword in the attempt to replace HE with something a bit more realistic. Either way, economics is the most numerical of the social sciences, the one most amenable to engineering-style analysis, and the one that I found most attractive back when I was girding up for my intellectual assault on the world.

(Hmm, did I ever put it in such Byronic terms? Oh, hell, probably. Now at least there’s some irony in it, though there probably was back then, too.)

It wasn’t just economics, though. We systems guys were building a sweet toolkit for the engineering of “urban systems.” An entire field for the study of transportation networks and queues was opening up, thanks to cheap computing power. Also of note was the ability to bring computer analysis to maps and mapping; it’s no accident that the company MapInfo started as an RPI incubator project.

And there were some cool data-gathering approaches becoming available from satellite imagery. I interviewed at the Earth Satellite Corporation in Berkeley when I first moved there (they weren’t hiring, or at least they weren’t hiring me). I also tried for a job in the San Francisco Urban Development agency as an “Urban Analyst,” or was it “Urban Planner”?

All perfectly respectable stuff. And all those fancy tools, plus further developments, are still being used to this day.

So why the feeling of letdown? Aren’t these the first steps toward the dream of modeling large-scale urban and social systems? Doesn’t psychohistory still look like a probable future?

Well, no, not really. Not to me anyway. In some regards, this is a little like the Artificial Intelligence disappointment, only more so. At least I still think that we might someday build machines that possess sentient intelligence (just not during my lifetime). But predicting human society? Nope, no longer a believer.

First off, non-linear models (and, man, are social models non-linear) tend to slip over into chaotic behavior awfully easily. Chaotic behavior might as well be random: when some future state winds up depending on the 37th decimal place of an initial condition, no measurement you could ever make will pin it down. So you’re going to have some intrinsic randomness in your model, as well as some random extrinsic inputs, like, say, weather. So, well, okay, yes, that’s a problem.
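To show how cheaply that sensitivity arises, here’s a toy sketch using the logistic map, a textbook chaotic system (my illustration, not any sort of social model): two starting points that agree to twelve decimal places are completely decorrelated within a few dozen steps.

```python
# Sensitive dependence on initial conditions in the logistic map at r=4,
# a standard chaotic example. Purely illustrative.
def logistic(x, r=4.0):
    return r * x * (1.0 - x)

a, b = 0.3, 0.3 + 1e-12  # trajectories differing only far down the decimals
for _ in range(60):
    a, b = logistic(a), logistic(b)

# After 60 steps the initial 1e-12 difference has been amplified by roughly
# 2**60, so the two trajectories no longer resemble each other at all.
print("a=%.6f  b=%.6f  |a-b|=%.6f" % (a, b, abs(a - b)))
```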

Another problem, as I noted a little while ago, is that large scale models need a lot of initial data. In order to predict the future, you have to know the present, at the very least. How are you going to get enough information to even set the initial state of your social model? And even if you could conceivably mount a data gathering project large enough to get that much data, how is that project going to affect the system?

(Some naïve commentators say that this is the human equivalent of the quantum uncertainty principle. It isn’t. It is, however, a known phenomenon, closely related to the Hawthorne Effect. Wikipedia has a pretty good, long article on the Hawthorne Effect, but be sure to read the talk page for a discussion of the various disputes on the matter.)

The final problem is one of expectations, self-interest, and the tendency of people to “game the system.” Simply put, if a big and useful social model did exist, its results would affect policy in such a way that some groups would immediately desire to subvert or control said model, in order to use it to further their own interests. I mean, I’ve seen this happen often enough with smog models, and the amount of money involved in environmental policy is a small fraction of what a predictive societal model would involve.

Think that’s cynical? I haven’t even started.

If you recall, I noted a few essays back that one way to make a non-linear model more tractable is to linearize it. In control theory, this is called “linearization along a trajectory.” It’s the method first used to control the Saturn booster rockets.

But notice that I said “control theory.” This only works if what you want to do is to keep a system near a particular path. It’s not for prediction or analysis; it’s for control.
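For the curious, here’s a minimal sketch of the idea, with a toy pendulum standing in for the plant (my example; it has nothing to do with the actual Saturn guidance code): evaluate the Jacobians of the nonlinear dynamics at points along a reference path, and small deviations from that path then obey a time-varying linear system that a controller can work with.

```python
# Linearization along a trajectory, sketched numerically. Given nonlinear
# dynamics x' = f(x, u), deviations from a reference path (x_ref, u_ref)
# approximately satisfy  dx' = A(t) dx + B(t) du,  where A = df/dx and
# B = df/du are evaluated along the reference. Toy plant, my numbers.
import numpy as np

def f(x, u):
    """Damped pendulum with torque input u: x = (theta, omega)."""
    theta, omega = x
    return np.array([omega, -np.sin(theta) - 0.1 * omega + u])

def jacobians(x, u, eps=1e-6):
    """Central-difference Jacobians A = df/dx and B = df/du at one point."""
    n = len(x)
    A = np.zeros((n, n))
    for i in range(n):
        d = np.zeros(n)
        d[i] = eps
        A[:, i] = (f(x + d, u) - f(x - d, u)) / (2 * eps)
    B = (f(x, u + eps) - f(x, u - eps)) / (2 * eps)
    return A, B

# The linear model changes along the path: sample a few reference points.
for theta_ref in (0.0, 0.5, 1.0):
    A, B = jacobians(np.array([theta_ref, 0.0]), 0.0)
    print("theta_ref=%.1f" % theta_ref)
    print("A =\n", A, "\nB =", B)
```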

So Asimov got that part right as well. The Seldon Plan, as such, failed, but the Second Foundation became a hidden group, trying to nudge humanity back onto a path toward a Galactic Empire. And the Empire would be a good thing.

That’s where Isaac and I would disagree.
