
Fallacies, in general, are confusing. And as if the subject weren’t complicated enough already, some may feel that logicians complicated it even further by distinguishing between formal and informal fallacies. By the end of this article, I hope to show that making and understanding this distinction can actually help your analysis, and to provide some examples of such fallacies. Since Joshua discussed Informal Fallacies last week in Lies of Rhetoric, I will be covering formal fallacies.

Understanding the Distinction: “Formal” vs. “Informal”

Khan Academy has a great video on this, but I will also explain it here. Almost any definition you look up will at least consider formal fallacies to be arguments that are flawed in structure rather than in content. By “flawed,” the definition usually means that the conclusion does not necessarily follow from the given premises. Thus, as stated here, “All formal fallacies are specific types of non sequiturs” (Latin for “it does not follow”). Such fallacies are considered different from informal fallacies because the latter err through cognitive biases, flawed assumptions or assertions, and the like, and therefore in content rather than in the formal structure of the argument.

The types of formal fallacies

Whereas the list of informal fallacy types is extensive, there are far fewer formal fallacies you are likely to encounter. Here are some of the main ones:

Affirming the consequent (AKA Assuming the Cause)

This mistake occurs in the argument structure “If A, then B. B. Therefore, A.” Suppose an argument is as follows:

  • Premise 1: If it is midnight, it is dark outside. (If A then B)
  • Premise 2: It is dark outside. (B)
  • Conclusion: It is midnight. (A)

Here, the content of the premises can all be correct. However, it should be apparent that the conclusion is flawed: even if one accepts the premises, the conclusion does not necessarily follow. You can give a counterexample (e.g. “It’s dark and it’s 1:00, not midnight”) in which both premises are true but the conclusion is false. The problem is that the arguer treats the first premise as if it also ran in reverse. The argument, therefore, suffers from a structural flaw.
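If it helps to see the structural flaw concretely, here is a minimal sketch of my own (the `implies` helper and the setup are my illustration, not part of the original argument) that checks every truth-value combination and prints the one case where both premises hold but the conclusion fails:

```python
# Minimal illustration: "If A, then B. B. Therefore, A." is invalid because a
# counterexample row exists -- both premises true, conclusion false.
from itertools import product

def implies(p, q):
    # Material implication: "if p then q" is false only when p is true and q is false.
    return (not p) or q

for a, b in product([True, False], repeat=2):
    premise1 = implies(a, b)   # If A (it is midnight), then B (it is dark outside)
    premise2 = b               # B: it is dark outside
    conclusion = a             # A: it is midnight
    if premise1 and premise2 and not conclusion:
        print(f"Counterexample: A={a}, B={b} (dark at 1:00, but not midnight)")
```

The single counterexample row (A false, B true) is exactly the “dark at 1:00” scenario, and one such row is all it takes to show the form is invalid.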

Argumentum ad Monsantium

This is a type of ad hominem-style fallacy, in which someone assumes that because your arguments sound like they were made to support someone or something (e.g. Monsanto in GMO debates), you or your source is probably biased in that direction. Therefore, they assume, you or (more commonly) your source(s) should not be trusted. This can be either an informal or a formal fallacy, depending on whether or not they make explicit arguments about why supporting something inherently undermines impartiality and credibility, but either way, you should recognize it as a fallacy.

Note: although the fallacy’s name singles out Monsanto, it does not only refer to issues involving that one company.

Denying the antecedent

This formal fallacy occurs when one assumes that because a cause doesn’t happen, one of its results necessarily doesn’t either; the structure is “If A, then B. Not A. Therefore, not B.” See, for example, the argument:

  • Premise 1: If their study’s predictions all came true, it is a reliable study. (If A then B)
  • Premise 2: Not all of their study’s predictions came true. (Not A)
  • Conclusion: It is not a reliable study. (Not B)

While both premises may be true, just because a study is not perfect does not make it “unreliable,” and such a conclusion certainly does not follow from the premises.
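The same kind of check works here; this is again my own sketch, with the argument form swapped to “If A, then B. Not A. Therefore, not B.”:

```python
# Minimal illustration: "If A, then B. Not A. Therefore, not B." is also invalid.
from itertools import product

def implies(p, q):
    # Material implication, as in the previous snippet.
    return (not p) or q

for a, b in product([True, False], repeat=2):
    premises = implies(a, b) and (not a)   # If A then B; and not A
    conclusion = not b                     # therefore, not B
    if premises and not conclusion:
        print(f"Counterexample: A={a}, B={b} (imperfect predictions, yet a reliable study)")
```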

Irrelevant objection/conclusion (AKA Ignoratio elenchi, AKA Missing the point)

This is actually a very common type of fallacy in debate. It occurs when someone makes a point or observation which, although seemingly contrary to the other side’s argument, doesn’t necessarily contradict it. For example, suppose I argue that a certain law enforcement policy has reduced crime in a city overall. If someone cites an anecdote about a recent robbery to contest the policy’s effectiveness, they would probably be committing an irrelevant objection/conclusion fallacy, because they are not actually contesting whether crime has generally gone down.

Appeal to perfection (AKA Nirvana Fallacy)

This fallacy is basically a subtype of the above, but it is so common in policy debate that I felt it important to devote an extra section to it. Essentially, this argument states, “They don’t solve everything, so we shouldn’t pass their plan.” The other team can be totally right that you don’t solve everything; they might even have destroyed some of your advantages or harm-solvency pairs. However, it is a complete non sequitur to jump from “They aren’t perfect” to “Don’t pass their plan.” I have even lost a round where practically the only arguments standing by the 2NR were “They don’t solve for some of their harms,” despite the fact that we spent a total of four minutes (in rebuttals) explaining that just because our plan wasn’t perfect doesn’t mean it shouldn’t pass. Still, this potential confusion is one reason why you must be careful and prudent when presenting harms for which you know you may not be able to guarantee solvency.

Straw Man

Making an opponent’s argument out to be something it actually isn’t, then responding to that made-up argument. The name comes from the analogy of building a straw (i.e. hay) version of the argument, knocking that straw version over, and then claiming that you knocked over the real argument.

General Missing Premise

As its name implies, this fallacy assumes some link or premise that is necessary for the conclusion without actually stating it. For example, against my old “Remove Duties on Chinese Tires” case from 2015–16, people would almost always run safety disadvantages, arguing that “Because Chinese tires are of lesser quality than American tires, and reducing tariffs would increase Chinese imports, their plan therefore reduces safety.” Although this might make sense at first (it certainly appealed to the many anti-China community judges), it is actually missing a very key premise: “… would increase Chinese imports, causing Americans to switch from American tires to Chinese tires; their plan therefore reduces safety.” Unfortunately for negatives, this key premise just so happened to be contradicted by what is arguably the most central point of the case: “We just switched from cheap Chinese tires to more expensive (but similar quality) Indonesian, Mexican, Brazilian, Thai, etc. tires; practically nobody buys a Chinese tire instead of an American tire.”

Stronger Part = Stronger Conclusion (AKA Unwarranted Necessity)

We all know that it doesn’t matter how large a number is if it is multiplied by zero, but sometimes arguments aren’t as simple as math. In particular, this fallacy arises when people fail to see that their conclusions are the product rather than the sum of their premises’ strength.

Most experienced policy debaters have probably at least once had a judge or opponent who didn’t understand that the strength of one stock issue in a case cannot “outweigh” the lack of another stock issue. For example, it doesn’t matter how significant the case’s harms are, or even how well it solves those harms, if the case isn’t topical. Another example comes from the “Missing Premise” section above: a negative could (and some did) spend minutes explaining how bad the Chinese tires supposedly were. Yet this shouldn’t have mattered after I gave my response, because, put mathematically, that response multiplied the rest of their premises’ strength by zero.
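Put in purely hypothetical numbers (these “strengths” are made up for illustration, not drawn from any real round), the difference between adding and multiplying looks like this:

```python
# Hypothetical stock-issue "strengths" -- the numbers are invented for illustration.
significance = 0.9   # very significant harms
solvency     = 0.8   # the plan solves those harms well
topicality   = 0.0   # ...but the case is not topical

print(significance + solvency + topicality)   # 1.7: adding makes the case look strong
print(significance * solvency * topicality)   # 0.0: multiplying shows the conclusion still fails
```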

False Dilemma

A false dilemma occurs when someone insists that you must choose either one thing or the other, usually with their preferred option presented as at least better than the alternative. This can be, but isn’t always, formally fallacious, depending on how the argument is made (coincidentally, it is therefore a false dilemma to say that a false dilemma must be either a formal or an informal fallacy).

Fallacy-Fallacy

This is when one assumes that a conclusion is wrong because some support for it is fallacious. This fallacy can be either formal or informal depending on how it is argued, but it is always invalid to reason that the use of a fallacy disproves a conclusion; it simply means the fallacious support should not be accepted. Perhaps the conclusion is actually correct, and the person’s means of supporting it are just flawed.

Conclusion

Fallacies can be confusing, and the distinction between them even more so, but just remember: formal = structure; informal = content. In reality, your confusion may actually stem more from trying to classify each fallacy type as specifically one or the other, which doesn’t always work. Ultimately, although understanding the distinction can help you conceptually dissect arguments and diagnose their flaws, making the distinction is far less important than being able to identify and articulate, in an understandable way, why the argument at hand is flawed. In fact, because they are so important, these latter tasks are the subject of the next article in this series.


Harrison Durland is a blogging intern at Ethos. Now a college student at Ole Miss, he is studying international affairs, Russian, (hopefully) public policy, and intelligence and security studies, seeking to do analyst work and perhaps later move into public policy or organizational administration. He began debate in his sophomore year of high school, in Stoa. Despite an unenthusiastic first year, he later found that he had a passion for debate, especially policy debate. His third and final year of high school debate was 2016, during which he qualified to NITOC. His primary interests outside of debate and academics include his faith, ethics, and game and decision theory.
