
On November 30th, 2022, the tech overlords released an AI application called ChatGPT.  In essence, if given a prompt about virtually anything (and I’ve tested it on relatively obscure topics), ChatGPT will generate a written response to that prompt.  AI writing software is nothing new, but ChatGPT is different in two ways.  Number one, it’s free to use.  But number two, it’s much more powerful than any of its predecessors.  Much, much more powerful.  According to an article from The Atlantic, early analysis suggests that ChatGPT is capable of writing at the MBA level.  Multiple articles I’ve read and videos I’ve watched in the last couple of days (including the one linked above) argue that ChatGPT is so powerful that it could well mean the end of essay-writing in high school and college, given the degree to which it enables effective cheating.

As much as I’d like to discuss ChatGPT’s potential effects on modern academia, that’s outside my area of expertise and, further, has already been discussed at length by various journals and newspapers in the last two weeks.  I’d like to answer a different but related question: is ChatGPT harmful to competitive debate?

The Problem

Broadly speaking, success in competitive debate has always depended on the ability to think critically.  This critical thinking manifests itself in multiple forms, whether argument creation, response generation, or something else.  Historically, technological innovation has heightened the importance of critical thinking in debate by automating parts of the debate process that require more grunt work and less brainpower.  For example, the rise of the internet and search engines like Google enabled debaters to spend more time coming up with arguments and less time digging to find the evidence needed to support those arguments.

ChatGPT is different: it has the ability to function as a replacement for the argument generation process.  For example, I just now put the prompt “Write a debate case in favor of abolishing solitary confinement” into ChatGPT, and it spit out the following in less than fifteen seconds:

Abolishing solitary confinement is a necessary measure to protect the human rights and dignity of prisoners. Solitary confinement, also known as isolation, is the practice of isolating individuals in a cell for 22-24 hours a day, with little to no human contact or stimulation. This practice can have severe psychological and physical effects on individuals, including anxiety, depression, hallucinations, and self-harm.

Additionally, solitary confinement can be used as a form of punishment for prisoners, even for minor infractions. This can be a form of cruel and unusual punishment, violating the Eighth Amendment of the US Constitution. Furthermore, solitary confinement can disproportionately affect vulnerable populations, such as individuals with mental illness or young people.

Furthermore, solitary confinement is not an effective means of rehabilitation or reducing recidivism. In fact, it can have the opposite effect, making individuals more likely to reoffend once they are released. By providing prisoners with social support, education, and rehabilitation programs, we can better address the root causes of their criminal behavior and help them to reintegrate into society.

In conclusion, solitary confinement is a harmful and ineffective practice that violates the human rights and dignity of prisoners. Abolishing solitary confinement is necessary to protect prisoners and promote rehabilitation and reintegration.

Not bad.  But everyone knows what solitary confinement is, right?  Let’s test out ChatGPT on a more obscure topic.  Given the prompt, “Write a debate case in favor of abolishing the PLRA’s physical injury provision,” it returns:

Abolishing the physical injury provision of the Prison Litigation Reform Act (PLRA) is a necessary measure to protect the rights of prisoners and promote accountability in the criminal justice system. The PLRA’s physical injury provision requires that prisoners must show that they have suffered a “physical injury” in order to have their claims heard in federal court. This can prevent prisoners from seeking legal recourse for a wide range of abuses, including sexual assault, discrimination, and inadequate medical care.

Additionally, the physical injury provision can create a disincentive for prison officials to prevent and address abuses. If prisoners cannot seek legal recourse unless they have suffered a physical injury, prison officials may be less likely to take action to prevent abuses from occurring in the first place. This can create a culture of impunity in prisons, allowing abuses to continue unchecked.

Furthermore, the physical injury provision can be used as a tool to silence prisoners and prevent them from speaking out about abuses. By requiring prisoners to show physical injury in order to have their claims heard, the provision can make it difficult for prisoners to speak out about abuses that may not have resulted in physical injury. This can prevent important information from coming to light and prevent accountability for prison officials who engage in abuse.

In conclusion, the physical injury provision of the PLRA is a barrier to justice for prisoners and can create a culture of impunity in prisons. Abolishing this provision is necessary to protect the rights of prisoners and promote accountability in the criminal justice system.

I’ve also run tests on ChatGPT with value resolutions, and it still generates arguments of similar quality.  If anything, ChatGPT is better at Lincoln-Douglas (LD), since LD depends less on having specific evidence and citations (ChatGPT seems to favor non-evidentiary arguments).

I would estimate that, all else held equal, 2-4 and 3-3 debaters could instantly become 3-3 and 4-2 debaters using ChatGPT if they plug in aff cases 10 minutes before a round, walk in, and use those arguments.  However, all else is not held equal: everybody has access to ChatGPT.  Given that, how can we expect competitive debate to change?

Going Forward

In this article, I’ll only discuss ChatGPT’s potential impacts on Team Policy (TP) for the sake of preserving some sense of brevity (if that’s even achievable at this point).

In a nutshell, I think ChatGPT will improve the quality of novice and intermediate-level debates, but it will not significantly change debates between the top 10 to 20 percent of teams.  Prepared debate formats, broadly, depend on two skill sets: one, the ability to collect information via research, and two, the ability to use a combination of information and reason to communicate intelligent arguments.  Newer and intermediate debaters, by definition, have not yet achieved a comfortable level of competency in either skill set, though they are working towards it.

Abstractly, I predict ChatGPT will have an effect on these debates similar to that of sourcebooks: sourcebooks allow debaters to appear more collected and skilled than they actually are, and they minimize the number of debates where teams stand up, mumble for thirty seconds, and then sit down.  In other words, they give substance to a debater’s speech.  Of course, the evidence isn’t necessarily fantastic, and debaters may not use it as effectively as they could, but it partially compensates for deficits in that first skill set I mentioned, research.  ChatGPT does the same thing for the second skill set, argument: it gives debaters arguments they can use, though newer debaters might not capitalize fully on the strength of those arguments.  Nonetheless, it improves their performance at an intrinsic level.

Nor do I think we should be concerned about a tossup in rankings at this level.  That is, sourcebooks don’t give a “competitive advantage” in a substantive sense; rather, they serve as a way to avoid being at a competitive disadvantage, since the vast majority of newer debaters use them.  The same argument likely goes for ChatGPT.

Among better debaters, I doubt that ChatGPT will significantly change the content of debates.  AI, by its very nature, is trained to think like a normal (though very intelligent) person.  The way that a good debater thinks about debate is different from the way that an intelligent non-debater thinks about debate.  Most people, when approaching pro/con debate (which policy debate is, by definition), imagine two teams introducing various arguments of the form “X is good” or “X is bad” and responding to the opposing team’s arguments.  As you can see, all of ChatGPT’s arguments in the preceding examples take this form, and based on other trials I’ve run with it, this seems to generally hold true.  And in fairness, that is what policy debate tends to be.  But there are other sorts of arguments that indirectly inform the pro/con discussion which, due to their secondary relevance, a non-debater, and more importantly, ChatGPT, does not tend to come up with.  And yet, these sorts of arguments often prove far more powerful when used effectively.  I offer three examples:

  1. Solvency arguments  

Framed positively, solvency arguments don’t say “X plan is bad;” rather, they say “X plan creates zero good.”  Per basic debate theory, the “zero good” outcome flows negative due to the word “should” in the resolution.  In my experience, good debaters tend to win rounds on solvency more than anything else, and I think this is for two reasons.  First, from the negative perspective, solvency arguments are easier to come up with since, though often presented substantively, they technically function as a response to the affirmative’s claim that “X plan causes Y outcome.”  All the negative has to do to win solvency is to break some causal link in that chain; they don’t have to build a chain of their own.  In contrast, disadvantages (which necessarily encompass the remainder of traditional “pro/con” arguments) require the negative to build their own claims, which is more difficult to do.  Put simply, a person’s natural instinct is to gravitate towards the DA debate; experienced debaters recognize that this inclination makes the debate more difficult and learn to prefer solvency.

Second, due to this natural inclination, the majority of available negative literature on a given policy proposal tends to center around the DA debate.  The affirmative will always be more familiar with the literature on their case than the negative will, meaning that arguments that depart from the literature (read: solvency) will tend to be more successful, since the affirmative is less prepared to respond well.  More briefly, while DAs are an integral part of policy debate, 1) solvency tends to be a round-winner more often, 2) ChatGPT doesn’t like solvency arguments, meaning 3) top-level debates remain largely unchanged.

  2. Counterplans

Counterplans are similar to solvency in that, while they’re technically policy-oriented in nature, it’s not immediately apparent why that’s the case.  For similar reasons, ChatGPT doesn’t like counterplans, leaving room for creative debates here as well.

  3. Topicality

Traditionally conceived, topicality isn’t even concerned with pro/con debate at all, meaning ChatGPT fails to cover this base as well.

In a future article, I hope to dissect the potential impacts of ChatGPT on LD, limited-prep formats, and the learning curve of debate.  For now, though, I leave you with one takeaway: don’t be afraid of ChatGPT.  However one frames the issue, the fact remains that ChatGPT is an incredibly powerful tool for debaters.  ChatGPT has the potential to significantly increase the average quality of debates, and I believe that on balance, it’s a good thing for TP.  Debate is a difficult activity to learn, and ChatGPT makes it more approachable for newer debaters while also preserving the importance of critical thinking at higher levels.  ChatGPT does not make debate less creative; rather, it encourages debaters to prioritize the most creative and educational arguments, ultimately making debate a more human and more worthwhile endeavor.


Ben Brown is the blog manager for Ethos Debate LLC. He competed in Team Policy debate throughout high school, winning 1st place at the 2022 NCFCA national championship. When not debating, Ben can be found wishing he was debating, playing board games, or hanging out with friends and family.
