One of the many problems with using "animal models" for estimating human health effects of exposure to various toxins, radiation, etc., is that most animals are short-lived, so the effects of chronic exposure to low levels of the toxin of interest do not become apparent during the animal's lifetime. The method generally used to try to get around this problem is to boost the dosage, which is assumed to shorten the "induction period" of disease progression in a more-or-less linear fashion. The idea is that doubling the dosage will halve the time it takes to get a response, more or less.
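The scaling assumption described above can be sketched in a few lines. This is purely illustrative: the function name and the constant `k` are made up for the example, and no real toxicological parameter is implied.

```python
# A toy model of the dose-scaling assumption: induction time is taken
# to vary inversely with dose, so doubling the dose halves the
# expected time to response. The constant k is illustrative only.

def induction_time(dose, k=100.0):
    """Estimated time to response under the inverse-dose assumption."""
    if dose <= 0:
        raise ValueError("dose must be positive")
    return k / dose

print(induction_time(1.0))  # 100.0
print(induction_time(2.0))  # 50.0 -- doubling the dose halves the time
```

This is exactly why high-dose animal studies are attractive: pushing the dose up compresses an otherwise impractically long experiment into the animal's lifespan, at the cost of assuming the inverse relationship actually holds down at realistic exposures.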
That is a pretty kludgy method, of course, but all the methods are pretty kludgy. Using longer-lived animals also has problems, not least being that you have to wait much longer for results. Moreover, the lower the general exposure, the fewer responses you are likely to get, statistically speaking, so the use of realistic exposures and exposure times becomes prohibitively expensive. Also, longer-lived animals tend to be more "charismatic" in the sense that people like them more and animal rights activists pay them more attention, sometimes to the detriment of the researchers.
For the purposes of this little essay, I'm going to use radiation as the example, mostly because there are so many places to get information on the radiation/cancer debate, but also because the chemical/cancer debate gets even more arcane in spots, and I'm doing a once-over-lightly here.
The main alternative to the "linear hypothesis" is the "threshold hypothesis," the idea that a toxin or radiation does not overwhelm the body's cellular defenses until it gets above a certain level, or threshold. There are clearly many, many cases where such thresholds exist; "the dose makes the poison" as Paracelsus claimed, and there are few things that don't become poisonous at a high enough concentration.
There is a variation of the threshold hypothesis, which is called the "hormesis model." This is a somewhat more extreme version of the threshold model, and postulates that low doses of radiation are good for you. This isn't an entirely loopy suggestion; after all, radiation is used to treat some kinds of cancer, because cancer cells are more susceptible to dying from radiation than most body cells. It does, however, contain echoes of the early days of radiation, when things like radium were used as "invigorating" tonics, and that didn't work out well.
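The three competing models can be sketched as toy excess-risk curves. The functional forms, the threshold value, and the "benefit" term are all assumptions invented for the illustration, not fitted to any data; "risk" here means excess risk relative to zero dose.

```python
# Toy dose-response curves for the three models discussed: linear
# no-threshold, threshold, and hormesis. All parameters are
# illustrative assumptions, not real radiological quantities.

def linear(dose, slope=1.0):
    """Linear no-threshold: any dose adds proportional excess risk."""
    return slope * dose

def threshold(dose, limit=1.0, slope=1.0):
    """No excess risk until the dose exceeds a threshold."""
    return slope * max(0.0, dose - limit)

def hormesis(dose, limit=1.0, benefit=0.2, slope=1.0):
    """Low doses modeled as mildly beneficial (negative excess risk)."""
    if dose <= limit:
        return -benefit * dose / limit  # hypothetical low-dose benefit
    return slope * (dose - limit) - benefit

for d in (0.5, 1.0, 2.0):
    print(d, linear(d), threshold(d), hormesis(d))
```

The point of the sketch is only that the three models agree at high doses but disagree completely about what happens near zero, which is exactly the region where the epidemiological data are weakest.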
There are a variety of arguments and observations made to support both alternatives to the linear model. One of my favorites involves studies that create biological systems that lack the naturally occurring radioisotope potassium-40 and include only potassium-39. This apparently leads to birth defects. However, a high concentration of deuterium in the body (i.e. biological systems using heavy water) also produces severe-to-lethal effects, with no radiation involvement whatsoever. It's not out of the question to suggest that our bodies' enzymes are "tuned" to a particular isotopic weight of the elements involved, and that even relatively small changes in these elements can cause problems. To the best of my knowledge, the potassium isotope experiment has never been performed with some external source substituting for the missing radiation. The result would need to be normal development, obviously, for hormesis to be validated.
Other observations that seem to support various versions of threshold or hormesis include epidemiology in areas of high natural background radiation, which seem to show no excess cancers. Again, matters of the adaptation of local populations, questions of whether or not differences in infant mortality create a "harvesting effect" (where susceptible individuals die before they reach the age where cancers would present), or even simple things like actually getting the exposure levels correctly measured, become important. I've seen claims that the linear hypothesis cannot explain the epidemiology of the Japanese atom bomb survivors, for example, but I know for a fact that the actual radiation exposure to these individuals is a matter of estimation and guesswork, so the "failure" may simply be a matter of not knowing what the true exposure was.
Then there is the fact that when we talk of "radiation" we're not talking about a unitary subject. There are many different kinds of radiation, and many different ways of being exposed to it. These "hypotheses" and "models" that we are talking about are just that: models. A lot of different phenomena are being compressed onto a single, seemingly authoritative graph, but the real, underlying situation is complex and complicated. It's entirely possible that some forms of radiation show some hormesis effect, while others are linear, with no safe levels. The science necessary to make these distinctions is lacking.
Ultimately, however, these models are not some abstract scientific question; rather, they are used to sort out issues of regulatory policy. And that is where the rubber meets the road. Advocates of threshold and hormesis models are invariably proponents of nuclear power (the reverse is not necessarily true, since there are nuclear power advocates who have no problem with the linearity regulations). There are claims that the public is "radiophobic," which may be true, but then again, that is the public's right. It is not as if there has been a consistent policy of telling the public the truth about these matters, and people tend to get a bit antsy when they know they've been lied to.
Finally, the linear hypothesis is the easiest to administer and produces the most clear-cut regulatory framework. It is conservative. Threshold standards tend to create situations where pollutant releases go right up to the threshold and bump against the standard, exceeding it from time to time. Linear standards say, "Reduce your impact to the lowest possible level." I find this to be a useful first (and usually second and third) approximation to regulation. But then I am what used to be called a conservative.