

The Problem

Definitional vagueness is par for the course in Lincoln Douglas debate; in fact, most resolutions are meant to be semantically malleable enough to admit of several interpretations. But this year’s NCFCA resolution is vague in an unusual way. The problem is not so much that it is open to a wide range of interpretations. The problem is that it is unclear what the distinction is, if there is one at all, between the affirmative and negative ground.

If you have competed in or judged LD this season, you have probably heard an exchange like this:

AFF: “In order to mitigate environmental disaster, we must promote innovation in the energy sector. Valuing the proactionary principle allows us to do this.”

NEG: “Actually, promoting green energy is a precautionary measure, since it is a measure taken in order to mitigate the risk of a climate catastrophe. And the precautionary principle is all about mitigating risks.”

AFF: “That’s wrong. The precautionary principle is about restricting innovation in order to avoid risk. So the precautionary principle would oppose green energy to avoid economic risks.”

NEG: “But the precautionary principle is also sensitive to risks in the status quo. Fossil fuels pose serious environmental risks. So the precautionary principle requires that we restrict that technology, and we can only do that by promoting green energy.”

Or you might have heard one like this:

NEG: “Responsible innovation requires assessing and mitigating risks. So we should value the precautionary principle.”

AFF: “But the proactionary principle is compatible with risk-assessment. We are always free to terminate innovation if evidence arises that it will be harmful.”

NEG: “That’s wrong; the proactionary principle says we should innovate before we assess the risks, or when we don’t know what the risks are. If we terminate innovation to avoid risk, that is precautionary.”

AFF: “But Max More, who coined the proactionary principle, said that part of the principle is to revise our plans in response to new information and stop innovation when it becomes harmful.”

The content of these exchanges differs widely, but the form is the same: AFF and NEG both claim to occupy the same ground, and they sometimes appear to have equally good reasons for their respective claims. It is tempting to conclude that there is no meaningful distinction between the proactionary and precautionary principles, that any course of action could be proactionary or precautionary depending on the speaker’s perspective. Thus Adam Briggle and J. Britt Holbrook, writing for the Social Epistemology Review and Reply Collective:

The cynical story suggests that although they each seem to prescribe something rather specific, the precautionary and proactionary principles are actually masks. That is to say, they are parasitic upon prior values commitments embedded in a more fundamental conceptual scheme. This means that one could easily make and justify polar opposite policy decisions by appeal to the same principle. 

J. Britt Holbrook and Adam Briggle, “Knowing and Acting: The Precautionary and Proactionary Principles in Relation to Policy Making,” Social Epistemology Review and Reply Collective 2, no. 5 (2013): 15–37.

At least part of the blame for this lies with Max More, the philosopher who coined the proactionary principle. He puts it forward as an explicit alternative to the precautionary principle and defines it thus:

People’s freedom to innovate technologically is highly valuable, even critical, to humanity. This implies a range of responsibilities for those considering whether and how to develop, deploy, or restrict new technologies. Assess risks and opportunities using an objective, open, and comprehensive, yet simple decision process based on science rather than collective emotional reactions. Account for the costs of restrictions and lost opportunities as fully as direct effects. Favor measures that are proportionate to the probability and magnitude of impacts, and that have the highest payoff relative to their costs. Give a high priority to people’s freedom to learn, innovate, and advance.

Max More, “The Proactionary Principle,” The Extropy Institute (2005), accessed January 31, 2022, https://www.extropy.org/proactionaryprinciple.htm.

An advocate of the precautionary principle could affirm nearly everything in this paragraph, including that technological innovation is valuable; that policy decisions should be reasonable, scientific, and not based on collective emotion (has anyone ever said that they should be based on collective emotion?); that the costs of policies should be considered; and that freedom to learn, innovate, and advance is valuable. The precautionary principle’s central claim – roughly, that innovation should be restrained when its negative consequences are unknown – is clearly compatible with each of these maxims. The vast majority of what More says about the proactionary principle contrasts it with a caricature of the precautionary principle to which no one actually subscribes. So, unsurprisingly, the distinction between More’s principle and the actual precautionary principle is unclear. (And it does not help that More’s subsequent formulations of the proactionary principle are vague, disconnected concatenations of buzzwords like “progress,” “objective,” “intelligent,” and “rational.”)

Nevertheless, there are distinctions, at least one of which can be gleaned from More’s own work. Here are three ways to clearly define and distinguish the precautionary and proactionary principles:

1. Who bears the burden of proof?

The most historically influential definition of the precautionary principle is given in the Wingspread Statement of 1998:

Where an activity raises threats of harm to the environment or human health, precautionary measures should be taken even if some cause and effect relationships are not fully established scientifically. In this context the proponent of an activity, rather than the public, bears the burden of proof. The process of applying the Precautionary Principle must be open, informed and democratic, and must include potentially affected parties. It must also involve an examination of the full range of alternatives, including no action. (my emphasis)

“Wingspread Statement on the Precautionary Principle,” The Global Development Research Center (1998), accessed October 23, 2021, https://www.gdrc.org/u-gov/precaution-3.html.

In short, it is incumbent on innovators to prove that their ideas do not threaten human wellbeing. Unless and until they do, they are subject to restrictions.

In contrast, More places the burden of proof squarely on the shoulders of regulators:

The freedom to innovate technologically and to engage in new forms of productive activity is valuable to humanity and essential to our future. The burden of proof therefore belongs to those who propose measures to restrict new technologies. All proposed measures should be closely scrutinized. Rather than moving forward hesitantly, this means boldly stepping ahead while being mindful of where we put our feet.

Max More, “The Proactionary Principle: Optimizing Technological Outcomes,” in The Transhumanist Reader, ed. Max More and Natasha Vita-More (West Sussex: Wiley-Blackwell, 2013), 264.

On this analysis, the resolution is really about different procedures for deciding when to restrain and when to permit innovation. AFF must hold that innovation should be permitted and encouraged unless and until it is shown to be unduly risky; NEG must hold that innovation should be restrained by default and only permitted once innovators have proven that it is not unduly risky.

Briggle and Holbrook illustrate this distinction nicely:

In [situations where the consequences of innovation are unpredictable,] we can generally attempt to prevent or restrain the activity until cause-effect relations are better understood (precaution); or we can generally promote the activity while learning more about cause-effect relations along the way (proaction). We can conceive of the technology as guilty until proven innocent (precaution, where the burden of proof lies with proponents of the activity) or as innocent until proven guilty (proaction, where the burden of proof lies with opponents of the activity).

J. Britt Holbrook and Adam Briggle, “Knowing and Acting: The Precautionary and Proactionary Principles in Relation to Policy Making,” Social Epistemology Review and Reply Collective 2, no. 5 (2013): 15–37.

(The analogy with criminal law may make the point more accessible and memorable for the judge, but be aware that Briggle and Holbrook argue against this interpretation of the principles later on in the same paper. So cite them for illustrative purposes only.)

Under this analysis of the resolution, clash occurs when we do not know what the consequences of innovation will be or whether its benefits will outweigh its harms. In this situation, AFF will permit innovation until it is proven harmful, while NEG will restrict it until it is proven harmless.

There are fairly straightforward examples of each principle on this reading. For instance, clinical drug trials exemplify the precautionary principle, since they are a means of proving the harmlessness of medical innovations before implementing them. Most innovation in the tech industry, on the other hand, doesn’t work this way – you don’t have to consult anyone before designing an app or a smartphone. So both principles are well-represented in the status quo.

In my opinion, this is the clearest and most interesting way to delimit the affirmative and negative ground. But it is not the only way.

2. How and when should data about long-term impacts be gathered?

More states that

The Proactionary Principle stands for the proactive pursuit of progress. Being proactive involves not only anticipating before acting, but learning by acting.

Max More, “The Proactionary Principle,” The Extropy Institute (2005), accessed January 31, 2022, https://www.extropy.org/proactionaryprinciple.htm.

If “acting” here refers to innovating (which it pretty clearly does in context), then this is incompatible with the Wingspread rendition of the precautionary principle, which requires that innovation be performed only after we have arrived at an understanding of its consequences. More’s point seems to be that the best way to learn about the impacts of new technology is to implement it, and that the precautionary principle renders this impossible.

This is closely related to the burden-of-proof interpretation, but it is distinct: on this reading, precautionaries and proactionaries disagree, not about who bears the burden of proof, but about how and when the long-term impacts of innovation should be evaluated. In the interest of minimizing risk, precautionaries demand that consequences be investigated prior to innovation. Proactionaries object that this approach will halt progress: by far the best way to find out if something works is to try it, and sometimes we will never discover whether something works unless we try it. 

Under this analysis, clash occurs whenever the impacts of innovation are uncertain. For AFF, the next step is to innovate. (The next step after that may be to halt innovation if it proves to be harmful, but the only way to know whether it will be harmful is to do it.) For NEG, the next step is to gather all the data we can without innovating – however long it takes – and then proceed only if we can tell, based on that data, that the innovation will be harmless.

To borrow an example from Soren Holm and John Harris, in a world where genetically modified plants did not yet exist, precautionaries and proactionaries would disagree about whether to produce them. Since no data would yet exist about the impacts of GMOs, precautionaries would resist their introduction. On the other hand, proactionaries would encourage their introduction as a means of generating the data necessary to determine whether they should be produced on a larger scale.

3. Should we avoid the worst outcomes or pursue the best outcomes?

Steve Fuller, the most vocal contemporary proponent of the proactionary principle, writes:

In social psychological terms, the ‘regulatory focus’ of precautionary policymakers is on preventing the worst possible outcomes, of proactionary policymakers on promoting the best available opportunities.

Steve Fuller, “Precautionary and Proactionary as the New Right and the New Left of the Twenty-First Century Ideological Spectrum,” International Journal of Politics, Culture, and Society 25, no. 4 (2012): 157–74.

If this is the crux of the distinction, then proactionaries are those who will risk facing a worse worst-case scenario if it means a shot at a better best-case scenario; precautionaries are those who will guard against a worse worst-case scenario even if it means sacrificing a chance at a better best-case scenario. To illustrate, a proactionary person would go double-or-nothing on a bet he had just won, while a precautionary person would not. In practice, this means that proactionary individuals will engage in high-risk innovation when the rewards are great, while precautionaries will take only those risks that do not significantly raise the probability of catastrophe.

Take, for example, the practice of gene editing, in which a fetus’s genetic code is manipulated early on in its development in order to prevent genetic disease or otherwise improve the fetus’s health. The potential for improved quality of life through gene editing is utterly enormous – once sufficiently advanced, gene editing could be used, not only to fight disease, but to make all humans smarter, stronger, faster, and more attractive (and, insofar as moral defects are impacted by genetics, maybe even more virtuous). As best-case scenarios go, this is phenomenal, so AFF will advocate for gene-editing.

But gene editing could be abused on an absolutely massive scale. It could irreversibly worsen social stratification if only the wealthiest individuals could afford to “optimize” their offspring, since those offspring would outperform non-optimized people by a wider and wider margin each generation. It could be used to produce super-soldiers, rendering war more costly and destructive. And, if it went too far, it might erode whatever it is that makes us human, as opposed to something else. That outcome is bad enough, NEG will argue, that we ought not to risk it.


The thrust of this post is that genuine, substantive clash is possible under this resolution, despite the apparent vagueness of the precautionary and proactionary principles. Not every one of your opponents will concede to one of these three standards, but they are at least worth putting forward. (And if you can convince your judge to accept one of them, who cares what your opponent thinks?) 

Notice that, whichever of these three standards you run, you will not be able to reduce the round to a debate about whether we should be pro-innovation or anti-innovation. Precautionaries can be very pro-innovation while still insisting that certain steps be taken prior to innovating in order to minimize risk. Likewise, you will not be able to reduce the round to a debate about whether we should account for risks or take them with reckless abandon. Proactionaries might be more willing to take risks, but only certain risks for certain reasons, e.g. in order to accelerate progress or acquire data that would be unobtainable otherwise. The good news is, you can incorporate these nuances without thereby destroying the distinction between the principles.

Soren Holm and John Harris, “Precautionary Principle Stifles Innovation,” Nature, July 29, 1999.
