
Bryan Caplan’s argument for pacifism

It’s been a while since an argument has made me think as much as Bryan Caplan’s many iterations of his argument for pacifism. The argument goes something like this:

  1. The short-run costs of war are very high
  2. The long-run benefits of war are highly uncertain
  3. For a war to be morally justified, its long-run benefits have to be substantially larger than its short-run costs
  4. Therefore, war is almost never morally justified

First of all, the third premise is ambiguous as it stands. I assume that the ‘long-run benefits’ Bryan refers to in the third premise are not the actual but rather the expected benefits. Otherwise his conclusion would not be action-guiding, since the whole point is that we are highly uncertain about what the long-run benefits would actually be and whether or not they would be substantial. The conclusion would instead read ‘it is highly uncertain whether or not we should go to war’.

Recast, the argument reads:

A) The short-run costs of war are very high

B) The long-run benefits of war are highly uncertain

C) For a war to be morally justified, its expected long-run benefits have to be substantially larger than its short-run costs

Hmm, I’m still not happy with the third premise, due to the ambiguity of ‘expected’. For example, in a coin toss where I get £10 if it’s heads and have to pay £10 if it’s tails, the expected value is zero. The basic concept of expected value is the sum, over all possible outcomes, of each outcome’s value multiplied by the probability of that outcome occurring. In the case of war this is not a useful idea of expected value, not just because of the problem of assigning probabilities but because the logical space of possible outcomes is indefinable*. Expected value is a very handy concept in games with defined rules, but it leaves us high and dry when we try to address most really difficult real-world problems.
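Since ‘expected value’ is doing so much work here, it may help to pin the coin-toss version down. A minimal sketch (the function name and numbers are just the £10 example from the text, nothing from Caplan):

```python
# Expected value of a simple, well-defined game: a fair coin toss
# paying +10 on heads and -10 on tails.

def expected_value(outcomes):
    """Sum of value * probability over an exhaustive list of outcomes."""
    total_prob = sum(p for _, p in outcomes)
    # The calculation is only meaningful if the listed outcomes
    # exhaust the whole space of possibilities.
    assert abs(total_prob - 1.0) < 1e-9, "outcomes must cover the whole space"
    return sum(value * p for value, p in outcomes)

coin_toss = [(+10, 0.5), (-10, 0.5)]
print(expected_value(coin_toss))  # 0.0
```

The assertion is the crux: the formula presupposes an enumerated, exhaustive outcome space, which is precisely the condition war fails to meet.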

Maybe I’ll think of some way to further refine C) to make it less problematic, but I’m sceptical, because I believe this is a particular instance of a very general issue with consequentialist/cost-benefit moral theorizing: namely, that in conditions of Knightian uncertainty it appears impossible for there to be a fact of the matter about what we ought to do. My argument for this is very simple:

  1. In order for there to be a fact of the matter about what we ought to do, it has to in some way be discoverable (basically a restatement of the ought-implies-can principle)
  2. In cases where one of the significant consequences is subject to Knightian uncertainty, there is no way to discover any fact of the matter about what we ought to do

Of course, if you add in a Taleb-like premise 3)

  3. Every moral decision incorporates Knightian uncertainty as to what the (eventual) significant consequences of any decision will be

Then we are led to a most unhappy conclusion

  4. There is no fact of the matter as to how we should decide moral cases

Back at university, I used to call this the ‘moral paralysis of consequentialism’ – that if what you are genuinely trying to do when making a moral decision is in some way to bring about the best future outcomes, there is no way of deciding what to do.

I’ve been thinking about this problem for about three years now and I haven’t made any significant progress since I wrote my first undergraduate essay on the subject. Sorry.

A final point: I would be very interested to hear what Bryan has to say about taking strong preventative action against climate change – and, for that matter, about Pascal’s wager. If there is some kind of consistent general decision principle underlying his third premise, discussion of those cases should greatly illuminate it.

*Unless you take the possible outcomes to simply be all logically possible outcomes, but I think it’s safe to say this wouldn’t get us anywhere.

  1. David Franklin
    July 20, 2011 at 10:24 pm

    Having not done philosophy or economics I had to Google ‘Knightian uncertainty’ so apologies in advance for my lack of education/understanding. It sounds like it’s pretty much the “unknown unknowns” of a situation. When you say “subject to Knightian uncertainty”, I assume this means “dominated by Knightian uncertainty” – when I toss a coin a freak gust of wind might carry it away somewhere irretrievable but I can still get a pretty accurate estimate for the probability of heads. Then the question in your argument becomes all about estimating the extent to which the uncertainty dominates. This isn’t “impossible by definition” – something along the lines of “we think unpredictable events will take us outside of our known sample space 40% of the time”. You still maximise your expected utility by doing what’s best within the remaining 60%, and that’s true whatever the number is. If you think that unpredictable events are usually bad, you can adjust for that in the calculation. If you think that you’re not allowed to think that because then they would stop being ‘unknown unknowns’, that’s fine too. What am I missing?
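The commenter’s “maximise within the known 60%, adjust for the rest” suggestion could be formalised roughly as follows. This is a hypothetical sketch: the function name, the numbers, and the idea of a flat pessimistic ‘penalty’ utility for the unknown mass are all invented here for illustration.

```python
# Sketch of the comment's suggestion: maximise expected utility over the
# modelled part of the outcome space, with a crude adjustment for the
# estimated probability mass of "unknown unknowns".

def adjusted_expected_utility(known_outcomes, p_unknown, unknown_penalty):
    """known_outcomes: (utility, probability) pairs whose probabilities
    sum to 1 - p_unknown; unknown_penalty: the utility we pessimistically
    assign to the unmodelled remainder of the space."""
    known = sum(u * p for u, p in known_outcomes)
    return known + p_unknown * unknown_penalty

# Modelled 60% of the space as two outcomes; 40% left as unknown unknowns.
known = [(5, 0.4), (-2, 0.2)]
print(adjusted_expected_utility(known, p_unknown=0.4, unknown_penalty=-3))
```

Whether a single `unknown_penalty` number can meaningfully stand in for the unknown-unknown region is, of course, exactly what the reply below disputes.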

  2. July 20, 2011 at 11:09 pm

    Hey Dave! Suppose you flip a coin: if it’s heads nothing happens, but if it’s tails then something logically possible but currently false becomes true. How much would I have to pay you to flip it? If you think that tails is the land of unknown unknowns, then by your reasoning you would flip the coin for any amount of money. But the point is that a lot of the stuff that really ‘matters’ is in the unknown-unknown space.

    In the case of my original moral paralysis argument a few years ago, I considered the example of whether or not I should go to a rally for a cause I support. Given that I have no idea whether my participation in the rally would make the solution I favour more likely to be enacted or less likely (perhaps by increasing the pushback from the opposition), the only basis on which I can make my decision on whether or not to go is something like the inconvenience to my day. This then gets compounded by the uncertainty over whether my diagnosis and solution are correct, etc. That’s what I mean by moral paralysis.

    That being said, I’m reconsidering whether Caplan’s argument actually works by placing the short-run costs of war into the category of ‘things certain enough to make a moral decision on the basis of’. Then his argument would go through somewhat like my argument against going to the rally, except that the issue is not how ‘substantial’ the expected long-run benefits of war are but merely that they are highly uncertain and must therefore be removed from the calculation. I think my criticism of the argument still stands, namely that the ‘expected’ in ‘substantial expected benefits’ is difficult to make sense of given the types of potential benefits we are talking about. But you may have opened up a route to another argument for pacifism, which is to say that in order for war to be permissible, its benefits have to meet a level of certainty not met in actual cases. Food for thought…

  3. David Franklin
    July 21, 2011 at 12:07 am

    In the footnote you’re right to say that considering the “set of all logically possible outcomes” doesn’t get you anywhere – going down the mathematical route, the so-called “universal set” doesn’t exist, so you can’t define a probability measure on it. Going down the physical route (reduce the universe to N binary values which can flip between 0 and 1, so the set of possible outcomes is not only measurable but finite with size 2^N) is clearly impractical, and is the extreme case of the estimation described below. In reality these unknown unknowns are in a sample space which we have to try to estimate (only being human, after all). The longer we spend estimating, the smaller the space of unknowns (as t tends to infinity – or even to 2^N – the probability of any given unknown remaining unknown tends to zero). In reality t is pretty small, so we have to do the best we can. The reason we’d tend to be risk-averse in this case is that on the whole our standard of living is good enough that we are change-averse. This is also the reason that countries with a lower standard of living are, in general, more politically unstable. Coming back to the unknown unknowns, clearly if you are in a situation where you can give no estimation of your expected utility then you are indifferent to the event occurring. Being risk-averse is our way of giving a very rough estimation of the space of unknowns.
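The claim that longer estimation shrinks the space of unknowns can be illustrated with a toy model – entirely invented here, not the commenter’s actual model: suppose each time step reveals one of N facts chosen uniformly at random.

```python
# Toy illustration: N unknown binary facts; each time step we learn one
# fact chosen uniformly at random (with replacement). The probability that
# a particular fact is still unknown after t steps is (1 - 1/N)^t,
# which tends to zero as t grows.

def prob_still_unknown(N, t):
    """P(a given fact remains unknown after t uniform random observations)."""
    return (1 - 1 / N) ** t

print(prob_still_unknown(100, 0))     # 1.0 before any estimation
print(prob_still_unknown(100, 1000))  # vanishingly small once t >> N
```

The model says nothing about which unknowns matter, only that their measure shrinks with t – which is the commenter’s point about doing the best we can with small t.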

    As for the original question, the result is that it’s not a purely mathematical question but a psychological one – my answer depends entirely on what logical outcomes I think you have the power to change in setting the question and what mood I think you were in when you set it.

  4. David Franklin
    July 21, 2011 at 12:14 am

    One of my favourite quotes of all time:

    “One day, Alice came to a fork in the road, and saw a Cheshire cat in the tree. ‘Which path should I take?’, she asked the cat. ‘Where do you want to go?’, was its response. ‘I don’t know’, said Alice. ‘Then’, said the cat, ‘it doesn’t matter’.”

  5. July 21, 2011 at 6:49 am

    I think you’re right. I also think what you say doesn’t substantively change the moral paralysis conclusion, although when I have time I will rephrase the argument.
