On November 30th of this year, the tech overlords released an AI application called ChatGPT. In essence, given a prompt about virtually anything (and I've tested it on relatively obscure topics), ChatGPT will generate a written response to that prompt. AI writing software is nothing new, but ChatGPT is different in two ways. Number one, it's free to use. But number two, it's much more powerful than any of its predecessors. Much, much more powerful. According to an article from The Atlantic, early analysis suggests that ChatGPT is capable of writing at the MBA level. Multiple articles I've read and videos I've watched in the last couple of days (including the one linked above) argue that ChatGPT is so powerful that it could very well mean the end of essay-writing in high school and college, given the degree to which it enables effective cheating.
As much as I'd like to discuss ChatGPT's potential effects on modern academia, that's beyond my area of expertise and, further, has already been discussed at length by various journals and newspapers in the last two weeks. I'd like to answer a different but related question: is ChatGPT harmful to competitive debate?
The Problem
Broadly speaking, success in competitive debate has always depended on the ability to think critically. This critical thinking can and does evince itself in multiple forms, whether argument creation, response generation, or otherwise. Historically, technological innovation has heightened the importance of critical thinking in debate by automating the parts of the debate process that require more grunt work and less brainpower. For example, the rise of the internet and search engines like Google enabled debaters to spend more time coming up with arguments and less time digging to find the evidence needed to support those arguments.
ChatGPT is different: it has the ability to function as a replacement for the argument generation process. For example, I just now put the prompt “Write a debate case in favor of abolishing solitary confinement” into ChatGPT, and it spit out the following in less than fifteen seconds:
Abolishing solitary confinement is a necessary measure to protect the human rights and dignity of prisoners. Solitary confinement, also known as isolation, is the practice of isolating individuals in a cell for 22-24 hours a day, with little to no human contact or stimulation. This practice can have severe psychological and physical effects on individuals, including anxiety, depression, hallucinations, and self-harm.
Additionally, solitary confinement can be used as a form of punishment for prisoners, even for minor infractions. This can be a form of cruel and unusual punishment, violating the Eighth Amendment of the US Constitution. Furthermore, solitary confinement can disproportionately affect vulnerable populations, such as individuals with mental illness or young people.
Furthermore, solitary confinement is not an effective means of rehabilitation or reducing recidivism. In fact, it can have the opposite effect, making individuals more likely to reoffend once they are released. By providing prisoners with social support, education, and rehabilitation programs, we can better address the root causes of their criminal behavior and help them to reintegrate into society.
In conclusion, solitary confinement is a harmful and ineffective practice that violates the human rights and dignity of prisoners. Abolishing solitary confinement is necessary to protect prisoners and promote rehabilitation and reintegration.
Not bad. But everyone knows what solitary confinement is, right? Let’s test out ChatGPT on a more obscure topic. Given the prompt, “Write a debate case in favor of abolishing the PLRA’s physical injury provision,” it returns:
Abolishing the physical injury provision of the Prison Litigation Reform Act (PLRA) is a necessary measure to protect the rights of prisoners and promote accountability in the criminal justice system. The PLRA’s physical injury provision requires that prisoners must show that they have suffered a “physical injury” in order to have their claims heard in federal court. This can prevent prisoners from seeking legal recourse for a wide range of abuses, including sexual assault, discrimination, and inadequate medical care.
Additionally, the physical injury provision can create a disincentive for prison officials to prevent and address abuses. If prisoners cannot seek legal recourse unless they have suffered a physical injury, prison officials may be less likely to take action to prevent abuses from occurring in the first place. This can create a culture of impunity in prisons, allowing abuses to continue unchecked.
Furthermore, the physical injury provision can be used as a tool to silence prisoners and prevent them from speaking out about abuses. By requiring prisoners to show physical injury in order to have their claims heard, the provision can make it difficult for prisoners to speak out about abuses that may not have resulted in physical injury. This can prevent important information from coming to light and prevent accountability for prison officials who engage in abuse.
In conclusion, the physical injury provision of the PLRA is a barrier to justice for prisoners and can create a culture of impunity in prisons. Abolishing this provision is necessary to protect the rights of prisoners and promote accountability in the criminal justice system.
I've also run tests on ChatGPT with value resolutions, and it still generates arguments of similar quality. If anything, ChatGPT is better at Lincoln-Douglas (LD), since LD depends less on having specific evidence and citations (ChatGPT seems to favor non-evidentiary arguments).
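(For the technically inclined: everything above came from the free web interface, which is all I used. If you wanted to reproduce these experiments programmatically, something like the sketch below would do it. This assumes OpenAI's Python client and an API key; the specific model name is my assumption, not necessarily the exact model behind ChatGPT.)

```python
# Hypothetical sketch: sending a debate prompt to an OpenAI chat model.
# Assumes the `openai` Python package is installed and the OPENAI_API_KEY
# environment variable is set. The model name below is an assumption.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

prompt = "Write a debate case in favor of abolishing solitary confinement"

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": prompt}],
)

# Print the generated case, roughly what the web interface would show.
print(response.choices[0].message.content)
```

The point isn't the code; it's the turnaround. A full constructive-style case comes back in seconds for any prompt you can phrase.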
I would estimate that, all else held equal, 2-4 and 3-3 debaters could instantly become 3-3 and 4-2 debaters using ChatGPT if they plugged in aff cases 10 minutes before a round, walked in, and used those arguments. However, all else is not held equal: everybody has access to ChatGPT. Given that, how can we expect competitive debate to change?
Going Forward
In this article, I'll only discuss ChatGPT's potential impacts on Team Policy (TP) for the sake of preserving some sense of brevity (if that's even achievable at this point).
In a nutshell, I think ChatGPT will improve the quality of novice and intermediate-level debates, but it will not significantly change debates between the top 10 to 20 percent of teams. Prepared debate formats, broadly, depend on two skill sets: one, the ability to collect information via research, and two, the ability to use a combination of information and reason to communicate intelligent arguments. Newer and intermediate debaters, by definition, have not yet achieved a comfortable level of competency in either skill set, though they are working towards it.

Abstractly, I predict ChatGPT will have a similar effect on these debates to that of sourcebooks: sourcebooks allow debaters to appear more collected and skilled than they actually are, and they minimize the number of debates where teams stand up, mumble for thirty seconds, and then sit down. In other words, they give substance to a debater's speech. Of course, the evidence isn't necessarily fantastic, and debaters may not use it as effectively as they could, but it partially compensates for deficits in that first skill set I mentioned: research. ChatGPT does the same thing for the second skill set, argumentation: it gives debaters arguments they can use, though newer debaters might not capitalize fully on the strength of those arguments. Nonetheless, it improves their performance at an intrinsic level.
Nor do I think we should be concerned about a toss-up in rankings at this level. That is, sourcebooks don't confer a "competitive advantage" in a substantive sense; rather, they serve as a way to avoid being at a competitive disadvantage, since the vast majority of newer debaters use them. The same argument likely goes for ChatGPT.
For better debaters, I doubt that ChatGPT will significantly change the content of these debates. By the nature of its training, ChatGPT tends to reason the way a normal (though very intelligent) person would. The way that a good debater thinks about debate is different from the way that an intelligent non-debater thinks about debate. Most people, when approaching pro/con debate (which policy debate is, by definition), imagine two teams introducing various arguments of the form "X is good" or "X is bad" and responding to the opposing team's arguments. As you can see, all of ChatGPT's arguments in the preceding examples take this form, and based on other trials I've run with it, this seems to hold true generally. And in fairness, that is what policy debate tends to be. But there are other sorts of arguments that indirectly inform the pro/con discussion which, due to their secondary relevance, a non-debater, and more importantly ChatGPT, does not tend to come up with. And yet, these sorts of arguments often prove far more powerful when used effectively. I offer three examples:
- Solvency arguments
In a positive sense, solvency arguments don't say "X plan is bad"; rather, they say "X plan creates zero good." Per basic debate theory, the "zero good" outcome flows negative due to the word "should" in the resolution. In my experience, good debaters tend to win rounds on solvency more than anything else, and I think this is for two reasons. First, from the negative perspective, solvency arguments are easier to come up with since, though often presented substantively, they technically function as a response to the aff's claim that "X plan causes Y outcome." All the negative has to do to win solvency is to break some causal link in that chain; they don't have to build a chain of their own. In contrast, disadvantages (DAs), which necessarily encompass the remainder of traditional "pro/con" arguments, require the negative to build their own claims, which is more difficult to do. Put simply, a person's natural instinct is to gravitate towards the DA debate; experienced debaters recognize that this inclination makes the debate more difficult and learn to prefer solvency. Second, due to this natural inclination, the majority of available negative literature on a given policy proposal tends to center on the DA debate. The affirmative will always be more familiar with the literature on their case than the negative will be, meaning that arguments that depart from the literature (read: solvency) will tend to be more successful, since the aff is less prepared to respond well. In short: while DAs are an integral part of policy debate, 1) solvency tends to be a round-winner more often, 2) ChatGPT doesn't like solvency arguments, meaning 3) top-level debates remain largely unchanged.
- Counterplans
Counterplans are similar to solvency arguments: while they're technically policy-oriented, they don't take the obvious "X is good/bad" form, so their relevance to the pro/con question isn't immediately apparent. For similar reasons, ChatGPT doesn't like counterplans, leaving room for creative debates here as well.
- Topicality
Traditionally conceived, topicality isn’t even concerned with pro/con debate at all, meaning ChatGPT fails to cover this base as well.
In a future article, I hope to dissect the potential impacts of ChatGPT on LD, limited prep formats, and the learning curve of debate. For now though, I leave you with one takeaway: don’t be afraid of ChatGPT. Irrespective of how I may present the issue, it’s an unalterable fact that ChatGPT is an incredibly powerful tool for debaters. ChatGPT has the potential to significantly increase the average quality of debates, and I believe that on balance, it’s a good thing for TP. Debate is a difficult activity to learn, and ChatGPT makes it more approachable for newer debaters while also preserving the importance of critical thinking at higher levels. ChatGPT does not make debate less creative; rather, it encourages debaters to prioritize the most creative and educational arguments, ultimately making debate a more human and more worthwhile endeavor.
——————————————————————————————————————–
Ben Brown is the blog manager for Ethos Debate LLC. He competed in Team Policy debate throughout high school, winning 1st place at the 2022 NCFCA national championship. When not debating, Ben can be found wishing he was debating, playing board games, or hanging out with friends and family.
This is absolutely crazy. Definitely a very powerful tool…. I just went to ChatGPT and it looks like they are temporarily down, in their own words because they “reached capacity.” As soon as they are back up, I’ll be the first to use it.
Great article, and definitely an issue worth discussing.
My opinion is that debate is less about gathering information and more about synthesizing it after you’ve obtained it. The issue is that ChatGPT is certainly not quotable in a round, meaning its best utility is to give you some ideas to go out and research.
Here's the thing: the way the best debaters research is not argument-focused. They go and read every available intellectual article on the issue, and then assemble their arguments. Using ChatGPT gives you argument ideas, sure, but in doing so you lose the ability to learn first and prep second. This means you'll miss out on the more creative arguments, while teams that forego ChatGPT will still be finding those. ChatGPT may even develop a feedback-loop situation, where it produces the same answers consistently because those are "the right answers." I don't know if it has any open input mechanisms, but there's got to be something. (Unless it's a "locked algorithm," in which case we never need fear it improving past the SQ.)
At least in the short term, ChatGPT won’t be focused around debate, but general discussion. That means that it won’t give you the full version of every argument, it’ll give you the “top 3” so it doesn’t bore you. Long-term, we could see a specialized AI that writes entire briefs complete with citations, but that would still have the problem I mentioned above: AI will get stuck in feedback loops, and the elite debaters can surpass the “metagame” that it creates.
At least, that's my hope. Because otherwise debate may become much less about knowledge and much more a flow game defined by the rebuttals.
Delete this
*proceeds to tell ChatGPT to write my debate cases*
Great article!
After putting in some prompts, I think that this will affect parli rounds a lot. Because Stoa has recently made internet use an option left to the tournament director, we'll be seeing a lot more tournaments banning internet in all rounds.
I think that this will be used a lot for debate, but I also think that the people who use it too much are at a disadvantage, because the good debaters look through many articles and get a full understanding of what is really happening, whether it's good for their side or not. When I gave the AI prompts, it told me what I know to be good arguments, but it never actually explained the situation very well.
This should make rapping the 2AR a bit easier.
This was a really neat article! I've been hearing about ChatGPT a lot lately, but I hadn't given it much thought in terms of competitive debate. It will be interesting to see if it starts to be used a lot.
Very interesting article. I hadn’t even thought of the possibility of AI being used in a debate setting.
Do you see this becoming a significant competitive factor in limited prep debate such as Parliamentary?
Honestly, I think this would work pretty well for some Congress rounds. It talks very smoothly and for some reason reminds me of the speaking in Congress. After that, just find a quick source for a decent argument it comes up with.