Toxicology and unknowable risks
Early in the 20th century, a number of studies showed that certain agents could be made to elicit cancer in animals, and these observations grew into the problematic conjecture that animal tests could be used to forecast human cancer risks. As for safe levels of exposure, scientists were deeply ambivalent and gave conflicting testimony to the US Congress, eventually leading to the assumption that no such levels exist. As a result, in the USA the Delaney Clause was inserted into the 1958 Food Additives Amendment to the Federal Food, Drug, and Cosmetic Act (FDCA), prohibiting the clearance of substances as food additives if they could cause cancer in humans or in animals.
Operationally, the Delaney Clause implied and still implies that substances capable of causing cancer in humans could be identified only by retrospective epidemiologic studies of natural or accidental exposures, while cancer causation in animals would be determined by experimental tests. The Clause did not specify under what conditions animal tests should be conducted. The significance of this key omission derives from the principle that rule-making in a democracy is the prerogative of elected legislators, who may delegate administrative tasks to an appointed bureaucracy within restrictive guidelines. Central to these guidelines is that rules ought to reflect justifiable evidence and due process.
Categorical as it was, the US Delaney Clause forced the integration of animal carcinogen tests into the regulatory process, on the apparent assumption that the modality of such tests was a matter-of-fact procedure grounded in uncontroversial science. Legislators would almost certainly have attached detailed restrictions had they been informed of the open-ended opportunities for arbitrary modalities in animal testing. Instead, the cancer risk assessment uncertainties of animal tests have been swept under the rug by adopting arbitrary assumptions of corresponding human validity that have no foundation in fact or science. These assumptions have become the central determinants of regulatory decisions, a practice that later extended to other regulatory agencies in the USA and was adopted, with local modifications, in other developed countries worldwide.
The practice has left most participants uneasy, often raising misgivings about its legal and constitutional legitimacy. Nevertheless, the process has advanced virtually unchallenged, and its enforceable arbitrariness has expanded to claim an aura of validity. Starting from dogmatic assumptions, the system has fabricated quantitative illusions and deliberately foisted the pretense of being scientific. Predictably, the pretense is endorsed by most participants in the regulatory process, making it difficult to envision self-correction from within the system itself. Remedies from outside the system would also lag, unless and until the public and elected representatives become aware that the authority for using animal tests in cancer risk assessment is likely illegitimate, and that related assurances of safety are illusory and prohibitively costly.
As for the illusion, the Environmental Protection Agency’s (EPA) 1986 cancer risk assessment guidelines warned that “[i]t should be emphasized that the linearized multistage model [the agency’s default risk assessment model in 1986 and the preferred one to this day] leads to an upper limit to the risk that … does not necessarily give a realistic prediction of risk. The true value of the risk is unknown and may be as low as zero.”
The pervasive uncertainty about the meaning of toxicologic studies in animals has led regulators to justify their actions on the basis of precaution. In 1974, Marc Lalonde, then Canada’s Minister of National Health and Welfare, first articulated what became known as the Lalonde doctrine: “…[H]ealth problems are sufficiently pressing that action has to be taken on them even if all the scientific evidence is not in.”
Presumably, at least some scientific evidence should be at hand, but a succession of regulators and politicians have thrown restraint to the wind, leaning ever more toward the unbounded “precautionary principle” mirages presently fashionable in regulatory circles around the world. Still, prudence is costly, and excessive prudence is counterproductive.
With this in mind, the core question in addressing hypothetical risks is what guidelines should be adopted to ensure that prudence is balanced and not harmful. The question implies certain premises in the context of a democratic society, basic among them that the regulatory process should not be arbitrary, secretive, and patronizing, but transparent and reasonably comprehensible to a majority of citizens. In practice, the definition of precaution has been delegated to appointed bureaucracies, which have found compelling motivations to emphasize risks rather than benefits and to forecast ever-expanding risks. Seeking to promote their relevance, to secure mounting appropriations, and to avoid retribution for mistakes, regulators add their own safety margin to whatever public prudence they interpret. Transparency is the first casualty, as institutional interests are hidden in the technicalities that bureaucracies excel at making complex.
Risks that are possible but imponderable are bound to add novel obscurity to a regulatory process opaque by design, notwithstanding statutory requirements for public rule-making hearings and the like. Defenders of the status quo are ready to argue that the complexity of risk regulation by itself impedes transparency, but the argument is only half true. What is demonstrably complex is the economic analysis of regulatory costs, whereas the complications of assessing hypothetical risks are the unscientific artifice of arbitrary assumptions that mask a foundation of ignorance. Efforts to legitimize this construct as “regulatory science” are unconvincing, for little science is discernible.