

Editor’s Note: The following is a piece by guest writer and NSDA debater Rik Roy which explores the potential long-term effects of artificial intelligence in the debate space. Though rules and norms surrounding AI may vary from league to league, Rik’s analysis still provides a well-thought-out perspective on the relationship between the invention and articulation of ideas more broadly. Rik originally published this article on his own site, lastditchdebate.blogspot.com, where he writes on a range of topics relating to debate and AI. Hope you enjoy!



Debate, like many activities, has been influenced by the rapidly rising popularity of generative artificial intelligence (AI) since the start of 2023. Generative AI refers to any kind of simulated intelligence capable of producing content in response to prompts. Examples include chatbots and image generators, the most widely known being a chatbot called ChatGPT.

Generative AI

AI systems have revolutionized productivity across many sectors, enabling the rapid creation of content and accelerating workflows. This breakthrough technology has made many tasks dramatically easier through its ability to interpret input and provide tailored responses. Given the constant innovation in AI technology, more progress likely awaits in the near future. However, technology is just a tool, and tools are only helpful when used correctly. Thus, any use of AI must be informed by a nuanced understanding of its limitations.

In the context of research and writing in general, a commonly voiced concern is the validity of the information produced. Rather than scouring the internet for sources and presenting them like a search engine, AI is revolutionary because it trains on vast quantities of data and then generates genuinely original content on the spot. However, this also means that the content it creates has never been reviewed by a human; it is simply a prediction based on the prompt and the data the AI has seen. This can produce misleading output, as the AI references its data mechanically rather than through any “understanding” of what is being said. Generative AI programs have been known to fabricate data and citations, with several incidents reported involving ChatGPT in particular.

In terms of education, the value of AI is also complex. While it allows much of the laborious work of compiling ideas to be done quickly, it also means that several essential steps in critical thinking are automatically completed for the student, resulting in less holistic learning. Additionally, tools like AI can allow students to feign competence in an area that they don’t truly understand, making accurate assessments more difficult.

Thus, there are generally two schools of thought on this issue. The first views AI as an impediment to learning, a way for students to “cheat” or skip actual work, resulting in students who cannot actually accomplish what they claim to be able to. The second views AI as a useful tool, holding that a student’s ability to use it to their advantage demonstrates more competence, not less, in much the same way that calculus students are expected to use calculators instead of doing arithmetic by hand.

I believe there are a number of valid points on both sides, and whether AI is helpful or harmful depends entirely on the context of its use. 

The NSDA rule

A recent addition to the NSDA High School Unified Manual has followed a major pattern across organizations and industries by addressing the role of generative AI in the debate space. The new rule, as of July 2023, reads:

Generative Artificial Intelligence: At the 2023 National Tournament, generative artificial intelligence should not be cited as a source; while something like ChatGPT may be used to guide students to articles, ideas, and sources, the original source of any quoted or paraphrased evidence must be available if requested. Students are prohibited from quoting or paraphrasing text directly from generative AI sources like ChatGPT in events in which speeches must be the original created work of a competitor.

Though worded for the National Tournament (it was created after the debate season had ended), the rule is expected to extend to debate tournaments in general. This has sparked some controversy among progressive debaters, who generally frown upon official rules; the progressive debate community often relies on the lack of regulation to find innovative paths to the ballot. I’ll expand on this later.

This rule was created in light of a recent trend in the latter half of the 2022-23 debate season, following the widespread popularization of chatbots like ChatGPT. Though such tools were primarily used to write school essays at first, debaters quickly picked up on their possible applications. Debaters used AI to brainstorm arguments, write unprecedented numbers of blocks in seconds, and even write entire cases. I personally saw heavy use of generative AI on my local circuit and noticed its complex effect on debate, which I will explain in more depth later.

AI in Debate

As the use of AI proliferated in the debate space, I witnessed firsthand the effects on my local circuit. On one hand, debaters had significantly more arguments prepared, leading to more clash-heavy debate rounds; on the other, people seemed to understand their arguments less, with cases built from badly cut cards forced to fit the arguments AI had presented. Here we find pros and cons specific to debate, an important consideration that breaks away from the more general viewpoints. Debate is, at its core, an educational activity, and rounds with more clash, which examine topics in more depth, serve that goal. With this in mind, AI’s ability to assist debaters in creating extensive block files has been an overall net positive for debate. Importantly, however, the NSDA rule does not restrict this practice at all.

There is a misconception that this rule bans the use of AI to facilitate debate in any way, which is so far from the truth that it could only be believed by someone who didn’t bother to read the text. All the rule does is prohibit citing AI output directly as evidence. The only LD practice it would restrict is a debater generating an entire case with ChatGPT, copying it directly, and then claiming it contains evidence rather than just analytics. Therefore, any criticism of the rule must either argue that AI-generated material is valid evidence or argue against the principle of the rule itself; the latter is what I’ve seen so far, although I will address both.

Starting with a defense of the rule itself, and setting aside the principle of “having rules” for now, I believe there are two main reasons for prohibiting the use of AI as direct evidence: reliability and authenticity.

Reliability: As explained above, AI is not always a reliable source, as it is a text-generating model and not a human. This means it could cause a debate round to be a flurry of misinformation that isn’t fact-checked, which could destroy the educational value of the round. This rule still allows for research, but it requires a real source to be found for anything presented as evidence.

Authenticity: Generative AI is a tool meant to accelerate tasks, and it produces everything based on the input provided. Thus, it cannot be reliably cited as anything more than an extension of the debater’s personal opinion. It is by nature no more authoritative than an analytic and should be treated as such. I could have ChatGPT explain why AI is bad, or why AI is good, so neither output can be cited as the opinion of an authority. Arguments that claim AI is authoritative because of its large training database completely misunderstand, or purposely misrepresent, the function of generative AI.

These are justifications I believe apply to debate; for speech or other events where originality is a large component of the tested skill, there are of course other justifications, such as preserving the educational value and fairness of the activity.

Interactions with Progressive Debate

The other criticism of the rule, which attacks the very idea of restricting debate, may seem unexpected. To someone without experience in progressive debate, the establishment of a new rule, especially one addressing new technology, may seem reasonable. However, progressive debate has a distinct lack of rules, governed instead by community norms. This gives the activity a very unique characteristic, where everything can be questioned and everything must be justified before being assumed true. This introduces a new perspective to consider: the pros and cons of restricting debate under any circumstance. Since the point is to convince the judge to vote for you, and nothing else, there is no obligation to follow a rigid system of rules or a certain path of argumentation, at least not unless you justify that obligation. This system gives birth to a unique style of debate that challenges debaters toward deeper analysis and critical thinking.

Following this line of thinking, some debaters have spoken out against the creation of this inherently restrictive rule, claiming that irrespective of its validity, it should not exist. This article, for example, presents this claim, along with arguments for why generative AI is good for debate overall. First and foremost, remember that generative AI is not banned; it is only restricted from being used as evidence. As for the article, although some individual points are valid, they do not necessarily lead to the conclusion drawn.

As an overview, we must keep in mind that NSDA rules do not exist solely for progressive debaters; they are much more important for traditional debaters and novices. In progressive debate, everything, even the rules, can be challenged, so this is not as big an issue as it may seem. If the arguments are made from a progressive debater’s standpoint alone, that is a fair perspective, but it does not necessitate a change or truly challenge the rule. In any case, I believe the rule imposed here is justifiable, even in the context of progressive debate.

Another argument, also mentioned in the above article, is that theory arguments can be made in lieu of official rules. Essentially, the argument states that even on the grounds of reliability and authority, an official rule is unnecessary, because theory debates serve the function of rules through setting and debating norms.

In response to this: first, just because theory is a possibility doesn’t mean it is better than a rule. The net educational value of theory debates compared to established rules is already questionable, and even where it exists, it is hard to argue it exceeds the value of substantive education, especially given the lack of real-life application. Additionally, whatever education can be derived from such arguments can already be achieved through normal, preexisting theory debates over other abuses. Nor does this rule restrict debates over the fairness of how AI was used in a round; it simply gives the judge the power to vote down a debater who cited AI as evidence. Theory debates can still be held over how heavily AI was used, or whether using AI even for guidance is good. There is clearly room for that debate, as the points I raised earlier cover only a few arguments from each side.

Secondly, theory isn’t accessible to all debaters. As mentioned before, this rule is for all debaters, and many would prefer to avoid theory entirely and stick to more traditional arguments. Because the rule does not harm progressive debate, and only limits what might have been a single potential argument, any critique of the rule must also weigh its benefits to other forms of debate.

Lastly, if we’re already debating progressively, the rule matters less. Tournaments that follow the NSDA manual strictly and unwaveringly are typically not progressive tournaments. In a progressive tournament, the rules set by the NSDA really only help back up arguments; they are not direct voting issues unless it is explained why they should be, and tournament-specific rules usually take precedence.


With all of this in mind, AI can be vastly beneficial when used properly, and the NSDA rule simply prohibits one clear type of misuse, ensuring that all evidence is reliably sourced, to the benefit of debate as a whole. For the majority of NSDA debaters, this distinction should be enough to continue debating exactly as before. For debaters in other organizations, the concepts and nuances of AI usage discussed here are still pertinent; a firm understanding of AI’s place can dramatically accelerate your research while helping you point out flaws in your opponents’ usage. There is still plenty of potential to be discovered and new strategies to be developed; best of luck to everyone!

– Rik Roy
