Let’s suppose you want to look at how some chemical compounds react in the atmosphere. We’ll start with butane, a pretty simple hydrocarbon, C4H10, or to give more insight into its structure, CH3CH2CH2CH3. In chem speak, that’s a methyl group (CH3) attached to a two-carbon alkyl chain (CH2CH2), terminated by another methyl group. The methyl carbons are called “primary carbons” because each is connected to only one other carbon atom, while the CH2 carbons are “secondary carbons.”
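That structural bookkeeping can be written out as a tiny sketch, just tallying hydrogens by the kind of carbon they sit on (the group list is transcribed from the CH3CH2CH2CH3 formula above):

```python
# Butane: CH3-CH2-CH2-CH3. Tally the hydrogens by the kind of carbon
# they sit on, following the primary/secondary naming above.
GROUPS = [("CH3", "primary", 3), ("CH2", "secondary", 2),
          ("CH2", "secondary", 2), ("CH3", "primary", 3)]

hydrogens = {}
for _, kind, n_h in GROUPS:
    hydrogens[kind] = hydrogens.get(kind, 0) + n_h

print(hydrogens)  # six primary hydrogens, four secondary
```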
Now suppose you have a bunch of butane molecules flying around in the air, and the air also has some hydroxyl radicals (HO) in it. Every now and then, in accordance with the laws of statistical mechanics, one of the HOs will hit a butane molecule. Then what?
Well, most of the time, they just bounce right off each other. The hydroxyl is pretty reactive, radicals often are, but unless it hits the electron cloud of the butane in the right spot, with the right energy, etc., it’s just going to bounce. But every so often, it does hit right, and it grabs one of the hydrogens. Which one?
Well again, it will be the one it hit, but some of the hydrogens are more labile than others, so the HO is more likely to bounce if it hits one of the methyl groups, which have “primary” hydrogens because they are on primary carbons, and more likely to react if it hits the alkyl chain, on a “secondary” hydrogen.
Butane is nice and symmetrical, so there are only two possible outcomes. Due to symmetry, any primary hydrogen reaction looks like every other primary hydrogen reaction, and every secondary reaction looks like every other secondary reaction. The hydroxyl always abstracts a single hydrogen from the butane, which gives water and an alkyl radical that immediately reacts with oxygen, and under smog conditions goes through a series of reactions that lead to either butyraldehyde, if a primary carbon was involved, or methyl ethyl ketone (MEK) if a secondary carbon was involved. (Actually, I’m ignoring some other pathways that get more important as molecular weight increases, like the formation of alkyl nitrates, and the times when the molecule fractures in the middle to produce acetaldehyde and an ethyl alkoxy radical. Having read that sentence, I’m sure you can appreciate my ignoring some details).
We can write a bunch of reactions for all this, assign rate constants to the reactions, put in temperature and pressure dependencies, etc., but the thing I want to point out is this: we’re simplifying a lot of events into a small set of descriptive equations. All the bounces are ignored, except insofar as they affect the reaction rate constant. All the different ways the molecules hit each other, along with the different energies of those collisions, are all lumped into a few basic equations. We’re also taking advantage of the symmetries, by saying that reactions at either end carbon are equivalent, which they are, unless we had some way of telling the difference, like if one end or the other was isotope tagged.
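The lumping described above can be sketched numerically. Two abstraction channels, each with one rate constant, split the reacted butane into the two product families; the rate constants and concentrations below are round illustrative numbers, not measured values:

```python
import math

# Illustrative (not measured) rate constants, cm^3 molecule^-1 s^-1:
# abstraction from primary hydrogens vs. secondary hydrogens.
K_PRIMARY = 0.3e-12
K_SECONDARY = 2.0e-12

OH = 1.0e6          # assumed steady hydroxyl concentration, molecules cm^-3
BUTANE0 = 1.0e10    # initial butane, molecules cm^-3

def butane_fate(t_seconds):
    """First-order decay of butane by OH, split into the two product channels."""
    k_total = K_PRIMARY + K_SECONDARY
    reacted = BUTANE0 * (1.0 - math.exp(-k_total * OH * t_seconds))
    # Branching ratio: fraction of reactions at primary vs. secondary carbons.
    to_butyraldehyde = reacted * K_PRIMARY / k_total
    to_mek = reacted * K_SECONDARY / k_total
    return to_butyraldehyde, to_mek

prim, sec = butane_fate(24 * 3600.0)  # one simulated day
print(f"butyraldehyde channel: {prim:.3e}")
print(f"MEK channel:           {sec:.3e}")
```

The point of the sketch is the shape of the simplification: every bounce and every collision geometry has disappeared into two constants and one branching ratio.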
Anyway, we’ve put all these things together and called them “reactions of the molecule.” That’s what chemistry does.
Now suppose we want to study the reactions of a number of different molecules, say add some pentane, hexane, heptane, and octane to the mix, and put in all the possible isomers of those compounds as well (there’s only one other isomer of butane, called isobutane, but toss in some of that as well). Now, how would you write your chemical equations?
You could try to write the equations for every single molecule—provided you wanted to go crazy, blow your computing budget, and not have the rate constants for even a tenth of what you wanted. You’re going to have to estimate that last one anyway, of course, though you might cheat and get some empirical data describing the reactivity of your mix.
You could look at what you have in the way of a mix and try to come up with some idea of an “average molecule.” That can get a little strange, because you’re going to have equations that account for some fraction of a carbon, for instance, and averaging rate constants is pretty iffy anyway. The fast reacting compounds will react away most quickly, so the “average” rate constant is going to keep changing. Nevertheless, you can do it, either as a constant average rate or as a continually changing average rate. It’s been done, though most often as a constant rate.
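The drifting-average problem is easy to demonstrate. Here’s a hypothetical three-compound mix (the rate constants are rough literature-style magnitudes, for illustration only); because the fastest-reacting compound depletes first, the mole-weighted average rate constant of what remains falls over time:

```python
import math

# A hypothetical mix; rate constants in cm^3 molecule^-1 s^-1,
# rough illustrative magnitudes, not authoritative values.
RATE = {"butane": 2.4e-12, "hexane": 5.2e-12, "octane": 8.1e-12}
OH = 1.0e6  # assumed constant hydroxyl concentration, molecules cm^-3

def average_k(concs):
    """Mole-weighted average rate constant of whatever is left in the mix."""
    total = sum(concs.values())
    return sum(concs[name] * RATE[name] for name in concs) / total

concs = {name: 1.0e10 for name in RATE}  # equal starting amounts
k_start = average_k(concs)
for name in concs:  # let the mix react away for 48 hours
    concs[name] *= math.exp(-RATE[name] * OH * 48 * 3600.0)
k_end = average_k(concs)

print(f"average k: {k_start:.2e} -> {k_end:.2e}")  # the average drifts down
```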
You could take your mix and wave your hands a little bit and say that it should look like some other, simpler mix, 45% butane and 55% octane, maybe. Or something like that.
The first one of these has come to be called the “explicit mechanism” approach. The second is the “lumped parameter” method. The third is a “surrogate mechanism,” which is an explicit mechanism applied to a reduced number of “surrogate compounds” that represent a more complex mixture.
All have been used in smog chemistry models, and all have their limitations. The mechanism that I first encountered was a lumped parameter mechanism called the Hecht-Seinfeld-Dodge mechanism. At that time I was coding what is called a Lagrangian Trajectory model version of the more elaborate Eulerian Grid model that had been developed by the research/consulting firm that employed me, then named Systems Applications Inc. One of my tasks was to code up and test the HSD mechanism in the simpler model.
At the same time, Gary Whitten (later to be my boss, because he was the only one who was willing to have me in his group, me being the charmer that I am) was attempting to use the HSD mechanism in an atmospheric application. He quickly ran into the problem that he had no idea what the “average molecular weight” of an average atmospheric hydrocarbon was, and there were parameters in the mechanism that depended upon that average.
What he did have was what are called “flame ionization detector” measurements of total reactive hydrocarbon, “as carbon.” In other words, he knew about how many carbon atoms there were, just not how many molecules they comprised. There were also a few gas chromatograph measurements that could be used to estimate the molar fractions of olefins (there were no real mechanisms for aromatic hydrocarbons at that time), but the breakdown of the alkyl hydrocarbons just wasn’t there.
Then he had an idea. I still think it was brilliant.
It turns out that the reactivity of an alkyl hydrocarbon (like butane, pentane, hexane, et al.) goes up with increasing molecular weight, primarily because there are more carbon groups. In fact, the reactivity of any given primary, secondary, or tertiary carbon group is largely constant from one hydrocarbon to another, and if you normalize the reactivity by carbon atom, it’s reasonably close (within 20-40%) to constant. (This neglects the very lightest hydrocarbons, methane, ethane, and propane, because they are anomalously unreactive, but that also means that you can ignore them, mostly).
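You can check the per-carbon regularity with back-of-the-envelope numbers. The OH rate constants below are approximate room-temperature literature values; treat them as illustrative:

```python
# Approximate OH rate constants near room temperature,
# in units of 1e-12 cm^3 molecule^-1 s^-1 (illustrative values).
ALKANES = {
    "butane":  (4, 2.4),
    "pentane": (5, 3.9),
    "hexane":  (6, 5.2),
    "heptane": (7, 6.8),
    "octane":  (8, 8.1),
}

# Normalize reactivity per carbon atom: the ratio stays within a
# fairly narrow band even as the total rate constant triples.
for name, (n_carbon, k_oh) in ALKANES.items():
    print(f"{name:8s} k/C = {k_oh / n_carbon:.2f}")
```

The per-carbon numbers climb slowly with chain length (the growing share of secondary carbons), but they cluster far more tightly than the raw rate constants do, which is the regularity the mechanism exploits.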
So Whitten devised a mechanism that ignored the idea of molecules for alkyl carbon. Instead it treated each carbon atom as a single “reactive structure” and did all the chemistry from there. He called it the “Carbon Bond Mechanism,” and its descendants are still the primary photochemical air quality chemical mechanisms used in air quality management in the U.S. (and elsewhere).
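In carbon-bond terms, an emissions inventory is recast from molecules into carbon-atom structural groups. Here’s a toy version of that bookkeeping; the species names follow the CBM convention of PAR for a paraffinic carbon and OLE for an olefinic carbon pair, but the mapping table itself is a simplified illustration, not any official speciation profile:

```python
from collections import Counter

# Simplified molecule -> carbon-bond species mapping (illustrative only).
# PAR = one paraffinic carbon; OLE = one olefinic C=C carbon pair.
SPECIATION = {
    "butane":    {"PAR": 4},
    "isobutane": {"PAR": 4},
    "pentane":   {"PAR": 5},
    "propene":   {"OLE": 1, "PAR": 1},  # the C=C pair plus one paraffinic carbon
}

def to_carbon_bond(moles_by_molecule):
    """Collapse a molecular inventory into carbon-bond species totals."""
    totals = Counter()
    for molecule, moles in moles_by_molecule.items():
        for species, count in SPECIATION[molecule].items():
            totals[species] += moles * count
    return dict(totals)

print(to_carbon_bond({"butane": 2.0, "isobutane": 1.0, "propene": 0.5}))
```

Notice that butane and isobutane become indistinguishable piles of PAR, which is exactly the simplification: the mechanism no longer cares which molecule a paraffinic carbon came from.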
It wasn’t my idea, but I took to it like a duck to water. (So much so, in fact, that some people wound up thinking it had been my idea in the first place, something I later recognized as “ageist” since I was the young ‘un of the team. So I always tried to make sure everyone knew it was Gary’s eureka moment). The CBM had exactly the sort of “thinking around the corners” style that I love. And, it was practical. It made everything easier, emissions inventories, comparisons to air quality data, coding the mechanism. It’s actually a bit difficult to conduct “mechanism comparison studies” among other kinds of kinetic mechanisms in the U.S. because practically every emissions inventory is in the form used by CBM, and a fair amount of the differences between mechanisms is how they treat the emissions inventories.
Over the next few years, we devised a lot of twiddles to make the edges work, like an “operator species” that took intramolecular reactions (like chain breaking) into account. We also extended the mechanism to include aromatic hydrocarbons, and biogenics such as isoprene and terpenes; those wound up being closer to explicit/surrogate mechanisms. I also came up with a cute trick that involved treating very reactive olefins as if they’d already reacted to their carbonyl-containing products (aldehydes and ketones), because they reacted so quickly that their products were more important than the original compound. Not to get too egomaniacal, but it was all very cool.
Now let’s take this up a few levels of abstraction.
If you’ve managed to get through all this technical verbiage, one thing you might have noticed is that this sounds more than a little bit like engineering. We were designing a kinetic mechanism, for particular purposes, based on the resources (time, knowledge, computing power) that we had. Our goal was the construction of an atmospheric chemical kinetics simulation model, a tool that could be used for both scientific and air quality management purposes. If science is devoted to the acquisition of knowledge, what do you call something that assists in environmental management? Again, a lot like engineering.
Science operates on the model of “objective reality” and scientists like to think of themselves as dealing with that reality in an impersonal way. You can see that in the way that scientific papers are written, frequently in passive voice, rarely with individual actions described, and even more rarely as anything where the “arbitrary” is even acknowledged. The idea of choices is largely absent, because choices are the product of subjective individuals.
Art, on the other hand, glories in the subjective, the experiential. Choice is part of its very nature. Art is personal, and artists have no problem with the idea that their ego is involved. That’s part of the point of it. But it’s still often the case that some artistic element “has to be that way.” The artist feels like there is no choice in the matter, because making a different choice will lead to inferior, or even bad, art.
I’ve had careers in both science and art, and for a long while I thought that the art was for personal expression and the science was for the satisfaction of my curiosity about an objective world that was entirely independent of myself. I also had the notion that engineering was where the two met, where one applied the objective knowledge of science in service of the subjective needs of human beings, and those needs included the application of artistic principles to engineering, and engineering principles to art.
It’s a good line of patter, and there’s some truth to it, but as time goes on, I see more and more holes in it. For one thing, while art may be personal and expressive, it’s often pretty generic, and it starts looking a lot like other art. No one else would have written Book of Shadows, but if I hadn’t, there might very well have been another novel of “heroic fantasy” in that publishing slot, and many of the same people might have read it and taken the same enjoyment from it. SunSmoke is a lot less interchangeable, in my view, but that is not necessarily obvious to the reader. I myself tend toward the idiosyncratic both as writer and reader, but most fiction, most art, is average; that’s what average means. And some proportion of popular entertainment is largely interchangeable with its near equivalents.
On the other hand, a great deal of science is more idiosyncratic, less objective, more personal than most scientists would admit. What is studied, how it’s studied, what sorts of theories and models are created, what sort of notation is used, all of that betrays the human face staring at the instruments, drawing the conclusions, writing up the results. Someone has to want to know the answer to the question that is being asked. Science is a human construct, no less than any other human construct, and to deny it is to deny both one’s self, and the truth.