In my essay, "The Scientific Method," I described (and bragged a bit about) some work I once did on the photochemistry of toluene, which has the unusual property, under some very special conditions, of limiting the amount of ozone that is generated in a smog system. It's a weird effect, and I was bragging because I'd predicted it, then designed an experiment to show that the weirdness was real.
In a more recent essay, "PAN," I noted that there were some features of the chemistry of that compound that I'd gotten right because of a detailed analysis, an I-knew-what-I-was-doing sort of thing. That's more bragging, of course, but I also noted that, science being what it is, I was only a little bit ahead of the curve. The rate constants I'd had to adjust to make my simulations work were measured, only a little while after I did my work, to be just what I'd needed, and the ordinary workings of science would have produced models that did the right thing even if no one had been paying attention.
In "The Linear Hypothesis," I remarked that sometimes (in fact, pretty often) scientific models are used for purposes of policy and decision making, and a model is often chosen to make that task easier, because, well, that's the purpose at hand. Sometimes this is done for good reasons, like selecting a conservative model in order to observe "The Precautionary Principle," where we are dealing with asymmetric error; if an error in one direction is vastly more costly than an error in the other direction, then simple caution suggests using the more conservative model, even if there is some weight of evidence on the other side.
Anyway, I've just been talking to an old colleague, who tells me that one major smog kinetics model has been "fixed" so that it no longer shows that weird toluene behavior that we actually proved to exist. The experiment that proves it is now considered "old" (as if chemistry somehow goes bad with age), or "sloppy," or the result of experimental error. Not that anyone is bothering to replicate it, you understand.
I expect it comes down to its just being too confusing to have models tell you that adding one pollutant can sometimes produce less of another pollutant. Or something like that. The rationalizations sound pretty sad, however.
We've been hearing a lot lately about the ways various players in the Bush Administration have been tampering with scientific reports, muzzling scientists, and twisting the system to their own ends. This is, of course, despicable. What I am saying here is that I've seen a lot of this sort of thing throughout my entire scientific career, coming from every policy quarter. Yes, the Bush Administration does it, and has been totally shameless about it. But they had plenty of precedent from the Tobacco Industry, the Oil Industry, the Pharmaceutical Industry, and, I will add, environmental organizations and regulators. When people have an ax to grind, they will first grind it on the facts of the matter, or at least on the theories and models that are used to codify the facts.
The "Probability Engine" that the time meddlers found in Destiny Times Three by Fritz Leiber was originally a simulation engine, developed by advanced beings to calculate the probable results of various actions, and to avoid the worst actions and their consequences. The horror of the story is that the device came into the possession of humans, who, with the best intentions (but insufferable arrogance) used it to create those dystopian worlds, rather than simply model them.
I do so hope that this is not, ultimately, a metaphor for science in the hands of human beings.