Bryan Caplan’s argument for pacifism
It’s been a while since an argument has made me think as much as Bryan Caplan’s many iterations of his argument for pacifism. The argument goes something like this:
- The short-run costs of war are very high
- The long-run benefits of war are highly uncertain
- For a war to be morally justified, its long-run benefits have to be substantially larger than its short-run costs
- Hence, going to war is almost never morally justified
First of all, the third premise is ambiguous as it stands. I assume that the ‘long-run benefits’ Bryan refers to are not the actual but rather the expected benefits. Otherwise his conclusion would not be action-guiding, since the whole point is that we are highly uncertain about what the long-run benefits would actually be and whether they would be substantial; the conclusion would instead read ‘it is highly uncertain whether or not we should go to war’.
Recast, the argument reads:
A) The short-run costs of war are very high
B) The long-run benefits of war are highly uncertain
C) For a war to be morally justified, its expected long-run benefits have to be substantially larger than its short-run costs
Hmm, I’m still not happy with the third premise, this time due to the ambiguity of ‘expected’. For example, if there is a coin toss where I get £10 if it’s heads and have to pay £10 if it’s tails, the expected value is zero. The most basic concept of expected value is the sum, over all possible outcomes, of the value of each outcome multiplied by the probability of that outcome occurring. In the case of war this is not a useful notion of expected value, not just because of the problem of assigning probabilities but because the logical space of possible outcomes is indefinable*. Expected value is a very handy concept where you have games with defined rules, but it leaves us high and dry when trying to address most really difficult real-world problems.
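To make the contrast explicit, here is the textbook definition that the coin-toss example instantiates (standard notation only; nothing here is specific to Bryan’s argument):

```latex
% Expected value over a discrete, fully enumerated outcome space:
% each outcome x_i occurs with probability p_i, and the p_i sum to 1.
\[
  \mathbb{E}[X] \;=\; \sum_{i=1}^{n} p_i \, x_i
\]
% The coin toss above: two outcomes, each with probability 1/2.
\[
  \mathbb{E}[X] \;=\; \tfrac{1}{2}(+\pounds 10) + \tfrac{1}{2}(-\pounds 10) \;=\; \pounds 0
\]
```

The formula presupposes exactly what war denies us: an enumerable list of outcomes \(x_1, \dots, x_n\) and a well-defined probability \(p_i\) attached to each.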
Maybe I’ll think of some way to further refine C) to make it less problematic, but I’m sceptical, because I believe this is a particular instance of a very general issue with consequentialist/cost-benefit moral theorizing: namely, that in conditions of Knightian uncertainty (uncertainty that, unlike mere risk, cannot be quantified with meaningful probabilities) it appears impossible for there to be a fact of the matter about what we ought to do. My argument for this is very simple (its logical form is sketched below):
- In order for there to be a fact of the matter about what we ought to do, it has to in some way be discoverable (basically a restatement of the ought-implies-can principle)
- In cases where one of the significant consequences is subject to Knightian uncertainty, there is no way to discover any fact of the matter about what we ought to do
Of course, if you add in a Taleb-like premise 3):
- Every moral decision incorporates Knightian uncertainty as to what the (eventual) significant consequences of any decision will be
Then we are led to a most unhappy conclusion:
- There is no fact of the matter as to how we should decide moral cases
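For what it’s worth, the deductive structure is straightforward. Here is a sketch in standard first-order notation, where the predicate letters are my own labels: \(F(x)\) for ‘there is a fact of the matter about what we ought to do in case \(x\)’, \(D(x)\) for ‘that fact is discoverable’, and \(K(x)\) for ‘the significant consequences of \(x\) are subject to Knightian uncertainty’:

```latex
% P1 (ought-implies-can): a fact about what we ought to do must be discoverable
% P2: under Knightian uncertainty, no such fact is discoverable
% P3 (the Taleb-like premise): every moral decision involves Knightian uncertainty
\[
\begin{aligned}
  \text{P1:}\quad & \forall x \,\bigl(F(x) \rightarrow D(x)\bigr) \\
  \text{P2:}\quad & \forall x \,\bigl(K(x) \rightarrow \neg D(x)\bigr) \\
  \text{P3:}\quad & \forall x \; K(x) \\
  \therefore\quad & \forall x \,\neg F(x)
\end{aligned}
\]
```

P2 and P3 give \(\neg D(x)\) for every \(x\); contraposing P1 then yields \(\neg F(x)\), i.e. the unhappy conclusion above.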
Back at university, I used to call this the ‘moral paralysis of consequentialism’: if what you are genuinely trying to do when making a moral decision is in some way to facilitate the best future outcomes, there is no way of deciding what to do.
I’ve been thinking about this problem for about three years now and I haven’t made any significant progress since I wrote my first undergraduate essay on the subject. Sorry.
A final point: I would be very interested to hear what Bryan has to say about taking strong preventative action against climate change. And, for that matter, about Pascal’s wager. If there is some kind of consistent general decision principle underlying his third premise, discussion of those cases should do a great deal to illuminate it.
*Unless you take the possible outcomes simply to be all logically possible outcomes, but I think it’s safe to say this wouldn’t get us anywhere.