It’s been a while since an argument has made me think as much as Bryan Caplan’s many iterations of his argument for pacifism. The argument goes something like this:
- The short-run costs of war are very high
- The long-run benefits of war are highly uncertain
- For a war to be morally justified, its long-run benefits have to be substantially larger than its short-run costs
First of all, the third premise is ambiguous as it stands. I assume that the ‘long-run benefits’ Bryan refers to in the third premise are not the actual but rather the expected benefits. Indeed, otherwise his conclusion would not be action-guiding, as the whole point is that we are highly uncertain about what the long-run benefits would actually be and whether or not they would be substantial. The conclusion would instead read ‘it is highly uncertain whether or not we should go to war’.
Recast, the argument reads:
A) The short-run costs of war are very high
B) The long-run benefits of war are highly uncertain
C) For a war to be morally justified, the expected long-run benefits have to be substantially larger than its short-run costs
Hmm, I’m still not happy with the third premise due to the ambiguity of ‘expected’. For example, if there is a coin toss where I get £10 if it’s heads and have to pay £10 if it’s tails, the expected value is zero. The most basic concept of expected value is the sum, over all the possible outcomes, of the value of each outcome multiplied by the probability of that outcome occurring. In the case of war, this is not a useful idea of expected value, not just because of the problem of assigning probabilities but because the logical space of possible outcomes is indefinable*. Expected value is a very handy concept where you have games with defined rules, but it leaves us high and dry when trying to address most really difficult real-world problems.
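To make the contrast concrete, here is the textbook calculation in a few lines of Python (purely illustrative; the function and names are mine):

```python
# The textbook notion of expected value. Note that it presupposes a fully
# enumerated outcome space with known probabilities -- exactly what war
# does not give us.
def expected_value(outcomes):
    """outcomes: list of (value, probability) pairs covering every possibility."""
    assert abs(sum(p for _, p in outcomes) - 1.0) < 1e-9, "probabilities must sum to 1"
    return sum(v * p for v, p in outcomes)

coin_toss = [(10, 0.5), (-10, 0.5)]  # win £10 on heads, pay £10 on tails
print(expected_value(coin_toss))     # 0.0
```

The function cannot even be called without first writing down the complete list of outcomes and their probabilities, and that is precisely the step that is unavailable under Knightian uncertainty.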
Maybe I’ll think of some way to further refine C) to make it less problematic, but I’m sceptical because I believe this is a particular instance of a very general issue with consequentialist/cost-benefit type moral theorizing: namely, that in conditions of Knightian uncertainty it appears impossible for there to be a fact of the matter about what we ought to do. My argument for this is very simple:
- In order for there to be a fact of the matter about what we ought to do, it has to in some way be discoverable (basically a restatement of the ought-implies-can principle)
- In cases where one of the significant consequences is subject to Knightian uncertainty, there is no way to discover any fact of the matter about what we ought to do
Of course, if you add in a Taleb-like premise 3)
- Every moral decision incorporates Knightian uncertainty as to what the (eventual) significant consequences of any decision will be
Then we are led to a most unhappy conclusion
- There is no fact of the matter as to how we should decide moral cases
Back at university, I used to call this the ‘moral paralysis of consequentialism’ – that if what you are genuinely trying to do when making a moral decision is to in some way facilitate the best future outcomes, there is no way of deciding what to do.
I’ve been thinking about this problem for about three years now and I haven’t made any significant progress since I wrote my first undergraduate essay on the subject. Sorry.
As a final point, I would be very interested to hear what Bryan has to say about taking strong preventative action against climate change. And, for that matter, Pascal’s wager. If there is some kind of consistent general decision principle underlying his third premise, discussion of those cases should be greatly illuminating.
*Unless you take the possible outcomes to simply be all logically possible outcomes, but I think it’s safe to say this wouldn’t get us anywhere
Suppose you are accosted by an omniscient highwayman – and instead of demanding your money or your life, he is after something altogether more tricky: empirical truth. Tell him something empirically true and you escape unharmed. What should you say*?
Consider, for a moment, externalist definitions of knowledge – for example, that a belief counts as knowledge if it is the product of a reliable mechanism for producing truth. Would this knowledge, if I had it, be of any use to me here? It would not, as it is not a condition of externalist knowledge that I can internally discern what is true and what is not. My perceptual beliefs may be reliable, and if they are reliable then I know them, according to the externalist. But that knowledge is of no comfort to me in this situation, as it doesn’t help me choose whether to assert one proposition or another. P may be the product of a reliable mechanism and Q may not be, but if I have no idea whether P or Q is produced by a reliable mechanism, I have no reason consciously available to me to assert either one to the highwayman.
If someone demands of me that I speak truth, in order to be able to make choices based on reason in order to meet that demand, it must be the case that any reason for believing something be available to me. Those are the only kinds of reasons that are any good to me if I’m to actively try and stay alive in this situation. I need to consider my internal justifications for believing any particular proposition to be true.
Now, suppose I’m considering whether proposition P would be a good candidate to assert to the highwayman. I look at my reasons for P. Say I believe P because of [R → P] and R. But why then not assert either R or [R → P]? Surely P cannot be on a better internal epistemic footing than both of those two propositions put together? For if my belief that P is truly justified only by [R → P] and R, then P would have the same evidential status as [[R → P] & R]. Since you cannot increase the probability of a proposition being true through conjunction, I should be better off asserting either R or [R → P].
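The probabilistic claim this step relies on – that conjunction can never make a proposition more probable – can be checked mechanically. A quick brute-force sketch in Python (the setup is mine, not part of the original argument):

```python
import random

def conjunction_never_likelier(trials=1000, seed=0):
    # Sample random joint distributions over the four truth-value combinations
    # of A and B, and check that P(A and B) never exceeds P(A) or P(B).
    rng = random.Random(seed)
    for _ in range(trials):
        w = [rng.random() for _ in range(4)]  # weights for AB, A~B, ~AB, ~A~B
        total = sum(w)
        p_ab, p_a_notb, p_nota_b, _ = (x / total for x in w)
        p_a = p_ab + p_a_notb
        p_b = p_ab + p_nota_b
        if p_ab > min(p_a, p_b) + 1e-12:
            return False
    return True

print(conjunction_never_likelier())  # True
```

Of course this is no proof – the proof is simply that the A-and-B worlds are a subset of the A worlds – but it illustrates the fact the regress turns on.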
Suppose then I choose to assert R. Why do I believe R? If I give reasons, then I am better off asserting the reasons for R than R itself. I can never get to a point where I can rationally assert one proposition rather than any other. This is true even if I gave some coherentist-style justification for a proposition, as the issue with coherentism is that there is absolutely zero reason to believe that a coherent set of propositions remotely resembles the truth.
Now, fortunately we are not usually in the position of having to assert truth to an omniscient psychopath. But consider the following not-implausible epistemic norm:
A) You should assert only what is justified
As our highwayman example shows, if we go about actually trying to meet such a test based on what is consciously available to us, we’ll never get to a point of justification. External justification is of no help to us when consciously trying to meet epistemic norms. And that, in my view, is the real challenge of the sceptic. There is nothing intrinsically wrong with a belief having the property of being produced by a reliable mechanism, or of its being caused by the truth of its content. It’s a genuine property that any number of our beliefs could have, but since the justification is external it doesn’t help us in the deliberate practice of choosing what we should believe or assert. If we were to instead invoke a stronger epistemic norm:
B) You should assert only what you can discern to be true
And if we combine this with the ‘ought-implies-can’ principle, the fact that, from our own internal perspective, we cannot give any reasons for believing one proposition rather than another leads to the unhappy conclusion that
C) One ought not to assert anything.
*Some clever-clogs is going to figure out that the optimal strategy in this situation would be to say ‘My life is being threatened’. If it’s true then you pass the test, and if it’s not then your life isn’t threatened anyway. So let’s also assume that our omniscient highwayman has an irrational dislike for clever-clogs.
Update: A number of the arguments I have made in this post are highly problematic (I always publish too hastily). However, I still consider the basic point to be correct and will hopefully publish a follow-up soon.
What do economists mean when they are talking about the phenomenon of declining marginal utility of income? It’s pretty simple, really. If you earn £10k a year, an extra £1,000 is worth a lot more to you than if you earn £100k a year. Another way of putting it is: the world is better off ‘happiness-wise’ (ceteris paribus) if the extra £1,000 goes to the less well-off. I used to believe that this generates a simple argument from utilitarian premises for widespread income redistribution.
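The claim can be illustrated with a toy utility function. Logarithmic utility is one standard stand-in for declining marginal utility (my choice of illustration, not a claim about the true shape of the curve):

```python
import math

def utility(income):
    # Log utility: a common, concave stand-in for the utility of income.
    return math.log(income)

gain_at_10k  = utility(11_000) - utility(10_000)    # extra £1,000 on £10k
gain_at_100k = utility(101_000) - utility(100_000)  # extra £1,000 on £100k
print(gain_at_10k > gain_at_100k)  # True: the same £1,000 is worth more at £10k
```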
But a few months ago, I had a thought that actually this isn’t so obviously the case. Declining marginal utility of income has an interesting corollary, namely that those who work jobs that require long hours should be paid more per hour in order to maintain the same level of happiness. I have made this point to a number of people in discussion and I don’t feel I have made myself clear, so I thought I would have a stab in written form.
Imagine a prime specimen of homo economicus whom I shall gender-neutrally term ‘Alex’. Now, let us first suppose that for Alex, the marginal utility of income is constant. Alex is deciding whether to take a job for £20k that requires 40 hours per week, or one for £30k that requires 60 hours per week. Alex would be indifferent between these two options, for if we understand the decision about how much he/she wishes to work as a trade-off between income and leisure, then as he/she is being equally compensated per hour of foregone leisure in each case, Alex would be neither better nor worse off with either job.
However, we know that marginal utility of income is not constant. The first 20k is more important, and of the first 20k the first few thousand is even more important as this is what will feed and clothe our imaginary prospective employee. The things we buy with our extra income as we get richer are less important than the things we would buy first. That’s the declining marginal utility of income. There is also a second effect, which is the declining marginal utility of leisure. In this example, the marginal hours Alex works (say he/she works 9-9 rather than 9-5) are more valuable than the marginal hours worked by the 9-5 employee. This is because if you get out of work at 5, you still have time in the day to do things like have hobbies and go and see friends. Getting out at 4 rather than 5 doesn’t make much of a difference to your ability to have extensive hobbies outside of work or to your social life, but the difference between getting out at 8 and 9 makes a very large difference – not only in terms of the time you have, but in how much energy you have to spend that time fruitfully. Therefore, in order to be compensated for all that Alex loses by working longer hours, Alex gets paid more for that time. In fact, declining marginal utility is everywhere. Working is (at least in one way) like cheesecake: the next slice is never as good as the previous one, and there would come a point where I would actually pay to not have an extra slice.
So, we have Alex working a 60-hour-a-week job for (say) £40k p.a., and suppose we have Casey working a 40-hour-a-week job for (say) £20k p.a. It is entirely plausible that, in terms of their actual welfare, Alex and Casey are equally well off. Alex may be able to buy box seats at the theatre, but Casey has the time to do amateur theatre. So if we were considering what justice would require in terms of redistribution of the income of Alex and Casey towards someone else, ‘Morgan’, who cannot earn income due to disability or illness, it may well be inegalitarian to require Alex to give a higher proportion of income to Morgan than Casey. People who choose to work longer hours should not automatically be considered better off just because they have more income, and when you combine the two effects of working more and being compensated more highly for those marginal hours due to the declining marginal utility of income, the amount of extra income required to equalize their well-being could be considerable.
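To see how Alex and Casey could plausibly come out level, here is a toy additive model: log utility in income plus concave utility in leisure. Every number in it (the leisure weight, the 112 waking hours) is an assumption of mine, chosen only to show that rough parity is possible:

```python
import math

WAKING_HOURS_PER_WEEK = 112  # rough assumption: 16 waking hours a day

def welfare(income, work_hours):
    # Toy model (my assumption, not a standard result): log utility of income
    # plus weighted log utility of weekly leisure hours, both concave.
    leisure = WAKING_HOURS_PER_WEEK - work_hours
    return math.log(income) + 2.5 * math.log(leisure)

alex  = welfare(40_000, 60)  # £40k p.a., 60-hour week
casey = welfare(20_000, 40)  # £20k p.a., 40-hour week
print(round(alex, 2), round(casey, 2))  # roughly equal under these parameters
```

With these (made-up) parameters, Alex’s extra £20k is almost exactly offset by the lost leisure – which is all the argument needs: more income does not entail more welfare.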
Let me make one thing perfectly clear: for income in excess of that required to compensate workers with long hours, I think the redistribution argument holds – but with the ceteris paribus assumption. And there are lots of things that matter (from a utilitarian perspective) that would need to be taken into account, such as the effect of high marginal taxes on work incentives. If you don’t think that work incentives matter, think about this: an economy is fundamentally about exchange – trades made between two or more parties for their mutual benefit. If I buy a sandwich at M&S for £3, what is basically happening is that I would rather have the sandwich than the £3, and M&S would rather have the £3 than the sandwich. Everyone is better off. Now, the people who run M&S are much richer than I am. But I want them to make the sandwich and trade with me rather than not. It is too easy to think of those who earn more as being parasitic on society’s resources – but a crucial part of society’s resources is labour itself, people taking resources and turning them into something more useful, desirable and, indeed, valuable.
The economist Tyler Cowen recently wrote an excellent essay in The American Interest entitled ‘The Inequality That Matters’, in which he makes the following point:
“The funny thing is this: For years, many cultural critics in and of the United States have been telling us that Americans should behave more like threshold earners. We should be less harried, more interested in nurturing friendships, and more interested in the non-commercial sphere of life. That may well be good advice. Many studies suggest that above a certain level more money brings only marginal increments of happiness. What isn’t so widely advertised is that those same critics have basically been telling us, without realizing it, that we should be acting in such a manner as to increase measured income inequality. Not only is high inequality an inevitable concomitant of human diversity, but growing income inequality may be, too, if lots of us take the kind of advice that will make us happier.”
If it is the case that much of what is really valuable is that which cannot be priced and exchanged in a market, then this makes a considerable difference to optimal public policy from the perspective of an egalitarian – let alone a utilitarian.
Update: if you are of an egalitarian mindset and have not read Elizabeth Anderson’s fantastic ‘What is the Point of Equality?’ paper, please do. You are missing out on both a superb critique and an impassioned defence.
Update 2: That Alex and Casey should be taxed in direct proportion to their income does not at all follow, as has been pointed out to me. Strictly speaking what should happen from an egalitarian perspective is that each should be taxed in such a way that their welfare loss is equal. It is a completely open question as to what this would actually look like, because it depends on the exact way in which the marginal utility of income varies with income. If I was attempting to argue against progressive taxation/redistribution this would be a problem, but I’m not. I’m just attempting to show that arguing for progressive taxation/redistribution merely on the basis of declining marginal utility of income doesn’t do the necessary work. I have therefore adjusted the strength of my claim accordingly.
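For what it’s worth, under log utility the ‘equal welfare loss’ criterion has a clean answer: equal proportional taxation. A small sketch (log utility is my assumption here; as the update says, the real shape of the curve is the open question):

```python
import math

def welfare_loss(income, tax):
    # Utility lost when `tax` pounds are taken from `income`, under log utility.
    return math.log(income) - math.log(income - tax)

# Taking the same proportion from each income equalizes the loss, since
# log(y) - log(y * (1 - t)) = -log(1 - t), independent of y.
rate = 0.2
loss_alex  = welfare_loss(40_000, 40_000 * rate)
loss_casey = welfare_loss(20_000, 20_000 * rate)
print(round(loss_alex, 6) == round(loss_casey, 6))  # True: both equal -log(0.8)
```

A differently shaped utility curve would give a different answer – more or less progressive – which is exactly why the question stays open.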
 File under ‘totally fucking obvious to everyone who isn’t an economist’
 Again, only an economist would call everything that isn’t gainful employment ‘leisure’, but I’ll stick to the convention here
 I’m going to start running out of gender-neutral names soon…
 Still going strong
 My original point was incorrect, as the declining marginal utility of income means that any given £1 taken from Alex is worth less to Alex than £1 from Casey is worth to Casey. However, if you take £1 from Alex and not from Casey, then Casey is better off than Alex since I assumed they are equally well off. The tax should therefore be proportionate, and proportion of income is as good a proxy as any for how to keep their welfare constant, even if it isn’t perfect
 There is also a third substantial effect, which is cost of living and relative rates of inflation. I would point the reader in the direction of two excellent essays from a couple of American conservative writers whom I enjoy reading (it’s not an extensive list) – the first is from Reihan Salam at the NRO, on variations in income and cost of living across the US, and the second is from Will Wilkinson, a.k.a. the sole redeeming feature of the CATO institute. Also very much worth reading is Steve Waldman’s response to Wilkinson, which can be found here. I link, you decide.
If I murder someone, I can be imprisoned by the government. However, it is only the government that is allowed to imprison me – if you do it, that is called false imprisonment and is itself a criminal offence. Why is it that governments are allowed to lock people up for the good of society and individual citizens aren’t?
Given that using overwhelming force to get people to do what you want them to do (e.g. remain in an 8×10 room for the next 20-25 years) is generally speaking not acceptable for individuals or corporations to do, one might think there should be a reason why governments are allowed to. Let’s review a few candidates:
1) You expressly consented to the government having such a right.
In terms of giving an argument for the legitimacy of actual states, this is a non-starter. No such consent has been given, except maybe by those who acquire citizenship by choice.
2) You tacitly consented to the government having such a right.
It isn’t necessary for consent that one expressly say ‘I consent’. Sex is an obvious example; non-consensual sex is illegal, but there are (mercifully) a number of uncodified ways of signifying consent. Even if we assume that such tacit consent would legitimize government coercion, there do not exist the kinds of uncodified ways of tacitly signalling consent to government coercion that there are in the case of sex. Proposed acts of tacit consent to such coercion – voting, not voluntarily leaving the territory – lack the necessary link between the act and the actor’s understanding of what it is they are consenting to by performing it, the link that would be required for the analogy with express consent to go through.
3) The principle of fair play
Another non-starter. The principle of fair play is essentially this: if a community engages in a mutually beneficial enterprise that requires co-operation, then if you benefit from the enterprise you shouldn’t be allowed to free-ride on the cost of producing that benefit. It is a non-starter because it fails to explain why the enterprise of government generates a strong enough claim to allow coercion when other co-operative enterprises do not. It may account for a moral obligation to co-operate, but simply demonstrating the existence of a moral obligation is insufficient to legitimize coercion as an instrument of ensuring the fulfilment of that obligation.
4) Natural Duty of Justice
The gist of this line of thought is that we have an obligation to support just institutions. If the state is just, then we should do what it says. But again, this only speaks to a moral obligation – that I ought to obey the rules of a just state does not speak to the right of that state to coerce me into following those rules.
5) Hypothetical Consent
Medical procedures, in the ideal world, are consented to. However, sometimes we are not able to consent to things that we would – I cannot consent to surgery after being involved in a motorbike accident where I am left unconscious, but the doctors are quite permitted to perform the surgery because I would grant the permission if given the opportunity. The hypothetical consent tradition in political philosophy says it’s fine that people don’t actually consent to government coercion; it’s enough that they would if asked. Why would they do so? Because it is way better that we have a government than not. However, hypothetical consent (when it comes to government) proves far too much. If it legitimizes all government coercion where having such a coercive authority is better than having none, then if anarchy is really, really bad – as those in the hypothetical consent tradition are inclined to think it is, cf. Thomas Hobbes – then just plain ol’ bad tyrannical government passes the hypothetical consent test. Furthermore, it might be better to allow coercion by both the government AND (say) me. If having government is better than anarchy, then probably so is having government and a few select individuals locking people up in their basements.
We can generalize the considerations I have briefly and flippantly adduced by saying that any theory of why it is OK for the government and no one else to engage in certain coercive practices should meet two constraints: it should cover at least the vast majority of people who can be subjected to such coercion, and most crucially it should explain why we reserve this right exclusively to the government. The second constraint has (to my knowledge) been ignored by all the literature on the subject, despite its being such an obvious and important feature of the modern conception of the state.
Now, I am not by any means a political anarchist. Indeed, I think these considerations point somewhat in the other direction, politically speaking. For example, suppose you are a hard-line libertarian who thinks it is the job of the government to enforce contracts entered into by consenting, mentally capable adults and nothing else. The government in such a contract is, in effect, an unnamed third party who the contracting parties agree to have settle a dispute and execute that settlement – by force, if necessary. But why, on this hard-line libertarian position, can’t consenting adults name some other third party? Why can’t the parties say in the event of a dispute or breach of contract that Mr T be brought in to settle it*? Even the most extreme of libertarians are incapable of accounting for why only government is allowed to use coercive force.
The lesson I take from this is that politics, like ethics more broadly speaking, involves us trading off against each other all sorts of things we find valuable. For example, there is a long and tedious debate amongst political philosophers as to what ‘freedom’ means. Does being free mean not having people interfere with what you want to do, or having the ability to do the things you want to do? The answer is: both matter, and both taken to the extreme lead to highly undesirable outcomes. If we want to feed everybody, then we may have to take stuff from one person and give it to another – stuff they might not give up otherwise (this is also known as ‘taxation’). That a homeless person might be ‘free’ to eat but unable is a simple demonstration that many things other than freedom from coercion are valuable. Deciding what a government or anyone should and should not be allowed to do is essentially trading off the value of not being coerced against whatever it is that can be achieved by the coercion.
* I pity the fool who would agree to such an arrangement
Suppose that determinism is true, and that I just put my hand down on my desk. As a compatibilist, I claim that this is a free but determined act. I was able to act otherwise, for instance to raise my hand. But there is a true historical proposition H about the intrinsic state of the world long ago, and a true proposition L specifying the laws of nature, such that H and L jointly determine what I did, and jointly contradict the proposition that I raised my hand. If I had raised my hand, then at least one of three things would have been true: contradictions would have been true, H would not have been true, or L would not have been true. So if I claim that I am able to raise my hand, I am committed to the claim that I have one of three incredible abilities: the ability to make contradictions true, the ability to change the past, or the ability to break (or change) the laws. It’s absurd to suppose that I have any of these abilities. Therefore, by reductio, I could not have raised my hand. - David Lewis
So goes the classic ‘consequence argument’ for incompatibilism, the thesis that free will is inconsistent with a deterministic universe. Against the consequence argument, Lewis goes on to point out that there are two different kinds of counterfactuals concerning the possibility of my raising my hand:
A: If I had raised my hand, either the state of the past or the laws of nature would have been different
B: If I had raised my hand, my act would have caused the past or the laws of nature to change
B is plainly absurd; one could not say that I ‘could’ raise my hand on the basis of B, and compatibilists do not say so. A, however, is perfectly all right on the determinist thesis, and incompatibilists ought readily to admit as much. Now, with A there comes also an ability which I can be said to have, which I shall call A1:
A1: I am able to do something, such that if I did it, the state of the past or the laws of nature would have been different
I agree with the compatibilist that this represents a genuine ability I have. There is nothing contradictory in claiming anyone has such an ability, so I see no reason to think that it doesn’t represent a genuine property. The compatibilist claims that this argument for incompatibilism fails because it equivocates between the ability A1 which I do have, and the corresponding ability B1:
B1: I am able to do something, such that if I did it, the state of the past or the laws of nature would have been caused by my doing it
Against ability B1, the argument goes through*. But that, so says the compatibilist, is not what is at issue, for we have abilities of the A1-type and those are good enough for making us free. Those who know me more than a little will be aware of my general disdain for arguing over definitions – if you want to call compatibilist freedom ‘free will’, then you are very welcome to make that stipulation. But the incompatibilist has available what I believe to be a very strong response, which is to say that abilities such as A1 are morally irrelevant. For example, I have an A1-type ability to flap my arms and fly: I am able to flap my arms and fly, such that if I did it, the state of the past or the laws of nature would have been different. But it would be ridiculous to be accused of failing to act if I neglect to save someone from falling out of a building on the grounds that I ‘could’ have flapped my arms and flown to save them. So if we extend this point to classic cases of, say, a child drowning in a pond whom I fail to jump in and save because I didn’t want to get my clothes wet, the incompatibilist will make the analogous point, which is that my A1-type ability to jump in the pond and save the child is not a morally relevant one. It is of exactly the same kind as my ‘ability’ to flap my arms and fly, and I fail to see the morally relevant difference between them, except that in the save-the-child case we very much believe that I am morally responsible and so we just work backwards to get the right answer. But this is no way to do philosophy, for surely the onus is on the compatibilist to show why we should make a distinction.
I therefore conclude that while the consequence argument doesn’t necessarily succeed against its intended target – free will – it does make a powerful case for the incompatibility of moral responsibility with a deterministic universe. Indeed, I have often wondered whether it would be a big step forward in the debate over free will if we realized that what we are really arguing about is moral responsibility.
In philosophy, when we are trying to understand a concept or figuring out whether it actually applies as much as we initially believe it does, we can often get confused by two different ways of proceeding. One way is to look at the ordinary cases of the things we call ‘freedom’, ‘knowledge’ etc. and then see what is in common with those cases. Another way is to look at the kinds of reasons we give for or against the concept applying to a particular case. The problem is, these two ways of looking at it tend to come into conflict. In the case of freedom/moral responsibility – and, I would argue, in the case of knowledge – we have a conflict between our ordinary applications of the concept and the kinds of reasons we have for granting and (especially) denying that the concept applies. Sceptics about knowledge and moral responsibility say that if we consistently applied the kinds of reasons we give for denying that someone has knowledge (e.g. they haven’t ruled out all the possibilities) or is morally responsible (e.g. I can’t change my genes, so whatever my genes cause is something I am not responsible for) then we will end up denying that anyone knows or is responsible for very much at all. It seems as if we either have to give up a whole lot of common-sense thinking about who knows or is responsible for what, or we have to arbitrarily limit our application of the reasons we give for the concept not applying.
It strikes me that philosophy cannot tell you how to decide between those two, only that we must if we want to be consistent in our thinking.
*This very brief exposition of the consequence argument is indebted to Kadri Vihvelin’s excellent “Arguments for Incompatibilism” entry at the Stanford Encyclopedia of Philosophy
A series of posts for those with a general interest in philosophy, designed to lay out some of the more esoteric issues philosophers have been squabbling about over the years and why they are vexed by them.
One of the many gifts that the German mathematician and logician Gottlob Frege bequeathed to modern philosophy was the distinction between the ‘sense’ and the ‘reference’ of a proper name. Indeed, one of the papers that stands out as paradigmatic of the modern analytical tradition in philosophy is Bertrand Russell’s 1905 paper On Denoting, which took issue with Frege’s treatment of the topic and arguably helped usher in the professionalization of the discipline seen in the 20th century (to mixed results, imho).
In his classic 1892 paper Über Sinn und Bedeutung (On Sense and Reference), Frege distinguished between the referent of a proper name (the object it is the name of) and the ‘sense’, which roughly speaking is the thought one has when one is thinking about the object. For Frege and others following him, there was also a relationship between the sense and the referent – but I am going to leave the question of the determination of reference for another, separate post.
Frege was arguing against a theory of names put forward by John Stuart Mill, which was that the meaning of a name is simply the referent – which has the upshot that two different names for the same object mean the same thing. However, Frege observed, this would mean that all identity statements would be trivial; i.e. that if the names ‘Clark Kent’ and ‘Superman’ name the same object, then ‘Clark Kent is Superman’ means the same as ‘Clark Kent is Clark Kent’ and ‘Superman is Superman’. But this is not true – identity statements can be informative. They shouldn’t, on a correct theory of the meaning of a proper name, come out as tautologies.
So, pulling us in one direction for distinguishing between sense and reference is that identity statements (even if they ‘really are’ tautologies when we think about the reference of the names) don’t seem to be that way on their face. This then leads one to the conclusion that there is some other way of understanding a name on its face, as it were.
One prominent reading of Frege was that he identified the sense of a name with a definite description – for example, ‘Aristotle’ means ‘The teacher of Alexander the Great’. However, as a theory of sense, one of the key drawbacks of this is that to say ‘Aristotle was the teacher of Alexander the Great’ is in fact to say ‘The teacher of Alexander the Great was the teacher of Alexander the Great’, which is also a tautology. Indeed, if you identify the sense of the name as anything which can also be said of an object (like a description, or indeed any property), then attributing that to the object will come out as a tautology.
This is the central reason why I believe the question of the sense of a proper name – the thought one is having when one uses the name – is so intractable and will prove to be so for years to come. If we deny that the meaning of a proper name is anything but the object, then we have the problem of identity statements being tautologies. However, if instead we think of a name as meaning the object (or a non-specific object) but having certain identifying properties, then attributing those properties to the object becomes a tautology.
Now, it has always seemed to me that the answer to this puzzle is going to have to lie either on the ‘descriptivist’ side – because I can’t even think what it would be to have a thought about a specific object without thinking of that object as having certain properties; try thinking about your car or house without them having the property of being a car or being a house – or in completely changing the way philosophers have treated subject-predicate sentences, like ‘John is bald’ (sorry, John).
It is actually quite amazing/distressing how far you can get in philosophy just by saying relatively benign things, like “If ‘John is bald’ is true, then there must be an object (John) that has a property (being bald)”. My inner Wittgenstein* suspects that everything starts going wrong when we over-interpret that simple and seemingly innocuous observation, which then goes on to do a whole lot of metaphysical damage. In what I intend to be the next post in this series – the problem of universals – I hope to show how this can happen somewhat spectacularly**.
*And believe me if you thought having inner angels and demons is tough, try having an inner Wittgenstein as well
**Or at least, as spectacularly as anything happens in modern analytic philosophy