(I was originally going to flesh this out and try and get this published, but have come to the belief that the blogosphere is intellectually and morally preferable to academic journals for discussing these ideas. Also, this is obviously way easier, and I’ve been sitting on the argument for so long I just want to put it out there and see what people think. So I’ve mostly just copied this from the draft, and put some token links in to make it look like a blog post…)
It has been alleged against the epistemic account of vagueness that the determination of the precise meaning of a vague term appears miraculous, severing the connection between meaning and use. Given the infinite possible precisifications of terms like “thin”, “heap” and “tall”, how on earth is one of those concepts picked out by our ordinary expressions? However, an actual refutation of epistemicism has never been forthcoming, and the epistemicist defence (e.g. Williamson 1994) against the above accusation has been to correctly point out that there isn’t any promising account of how the meaning of non-vague terms is related to use, and that it is unfair to demand that one be provided:
“Every known recipe for extracting meaning from use breaks down even in cases where vagueness is irrelevant. The inability of the epistemic view of vagueness to provide a successful recipe is an inability it shares with all its rivals. Nor is there any reason to suppose that such a recipe must exist.” [Williamson 1994 p207 (1996 edition)]
Whilst I agree with Tim Williamson (no relation!) that such a recipe may forever be beyond our grasp, we can still ask whether the epistemic account of vagueness is consistent with the necessary conditions of a true account of meaning determination. It could be the case in the physical sciences that a ‘Theory of Everything’ is necessarily impossible to describe, but this would not prohibit us from making less ambitious but nevertheless true statements about physics and seeing whether they are consistent with other propositions. I submit that the following condition would have to be met in the case of meaning determination:
(1) The determination of the meaning of a word/expression/concept* makes reference to the actions/assertions/intentions/beliefs* of the speakers of the language
(1) does not imply that meaning is exclusively determined by use, whatever that may mean, but it is impossible to see how we could explain how words change their meaning without some version of it. The ordinary phenomenon of semantic change provides the most compelling intuitive support for there being a role for the speakers of a language in determining the correct meaning of the words they use. Examples abound of words whose meaning has changed over time, with the most plausible explanation of the change being that the speakers of the language gradually took the word to mean something different. This is completely consistent with a realist account of universals, whereby the word “awful” could have once denoted the same universal as “awesome” does now, with the changing actions/assertions/intentions/beliefs of English-speakers eventually changing the denotation.
But who exactly are the speakers of English? The set of English-speakers is a classic example of a vague set; there are borderline cases in terms of who alive today might count as a member of the set (infants/bilinguals/speakers of dialects), and there are borderline cases going back in time (Was Chaucer a speaker of English? How about Shakespeare? Or Samuel Johnson?). On the epistemicist account, it is either True or False as to whether Chaucer spoke my language, even if we cannot ever know whether he did.
However, “English-speaker” is a word in English and, if (1) is correct, its meaning is partially determined by the actions/assertions/intentions/beliefs of the set of English-speakers. The epistemicist cannot account for how “English-speaker” takes on a precise meaning in English, for it requires that an already precise set of individuals be constituted (the set of English speakers), whose actions/assertions/intentions/beliefs serve as an input into the function that is supposed to determine the contents of the very same set. It would be as if a polity attempted to establish the extent of the franchise on the basis of a democratic vote.
Therefore, unless the epistemicist can account for semantic change without positing a (1)-like principle that leads to the problem of the self-constituting set of language-speakers, the epistemicist explanation of vagueness cannot be correct. But since my claim is actually negative (I’m agnostic as to the precise form (1) must take, it merely seems to me that you *can’t* come up with an account of semantic change that won’t result in the circular feature I have described, although I don’t know how to prove it), I would be very interested if anyone can come up with a plausible substitute for (1) that does not have the problem for epistemicism I have described. I can’t think of one, which makes this argument the closest thing to a refutation of epistemicism that I am aware of, and as such is worthy of further inquiry.
Comments/counter-arguments/reading suggestions are most welcome.
*Delete as appropriate, according to your philosophical sensibilities
Something I’ve been meaning to talk about for a while is the ‘ideal/non-ideal’ distinction in political theory. A lot of ink has been spilled on exactly what the distinction is or should be, and up front I will say I haven’t read most of it (I really hope to soon get round to reading my former political theory tutor Lea Ypi’s new book ‘Global Justice and Avant-Garde Political Agency’, which has a discussion on the topic. Lea, if I screw this post up in some horribly obvious way – I’m sorry!). But the gist (I think) of the distinction is this: when doing ideal theory, we are thinking about what the perfectly just world would look like. When doing non-ideal theory, we are thinking about what justice demands of us in the imperfect world in which we actually live.
One way of thinking about what we are doing in political theory is that we establish what the ideal is (ideal theory), and then figure out how to make the actual world look as much like the ideal world as possible (non-ideal theory). I have a problem with this way of thinking about political philosophy. And it’s essentially the same problem I have with economic theories that are deduced from assumptions that are unrealistic, and whose conclusions are not backed up through some other methodology. The Lipsey-Lancaster Theorem (or ‘Theory of Second Best’) in economics states that when one of the optimality conditions for a theory cannot be satisfied, it does not follow that the other optimality conditions still hold. That is to say, if your proof that x is optimal requires y and z, and y does not hold, it is not necessarily the case that the best alternative to x has z as an optimality condition. When I first read the paper, by Richard Lipsey and Kelvin Lancaster, my mind was blown. But it’s just a particular instance of a very general phenomenon: when you have a deductive argument following from certain premises, and then weaken one of the premises – all bets are off. It may be the case that a weakened premise can still support a weakened conclusion, but it’s equally possible that nothing follows at all.
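The logic can be made concrete with a toy optimisation (my own made-up welfare function, not an example from Lipsey and Lancaster’s paper). Welfare depends on two policy variables x and y, with an interaction term linking them:

```python
# Toy illustration of the Theory of Second Best (hypothetical welfare
# function, not from the original paper).
def welfare(x, y):
    return -(x - 1) ** 2 - (y - 1) ** 2 - (x - y) ** 2

# First best: optimise over both variables on a grid.
grid = [i / 100 for i in range(0, 201)]
first_best = max((welfare(x, y), x, y) for x in grid for y in grid)
print(first_best[1:])  # -> (1.0, 1.0): both 'optimality conditions' hold

# Second best: x is stuck at 0 (one premise fails).
second_best = max((welfare(0, y), y) for y in grid)
print(second_best[1])  # -> 0.5, NOT 1.0: the condition y = 1 no longer holds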
I think this relates directly to the ideal/non-ideal theory question, because once you have weakened one of your assumptions about, say, what people are actually like (or especially what people can know, which is almost always ignored) then it just doesn’t follow that moving the world towards something more closely approximating the just world (as derived from ideal premises) is actually what justice demands of us. I take this to be a simple point of logic.
I think this can make the question of advocacy really difficult. If there is anything I have ever learned, it is that often an ‘answer’ to a problem requires a number of distinct elements in order to work. Once you take one of those elements away from me, I have to completely rethink what the right answer is (I find this to be especially true when thinking about financial regulation). If my ‘ideal’ answer would be a world featuring x, y and z, and I can’t have z, it simply doesn’t follow that I should still want x and y. X and y might be a deadly combination on their own! (For example, read x as ‘capital requirements for banks based on risk-weighted assets’, y as ‘having ratings agencies assess the riskiness of assets for regulatory purposes’ and z as ‘competition and competence in the ratings agency business’. We didn’t have z, and it worked out really badly). This means that people can so easily end up talking past each other, because they have different implicit assumptions as to what possibilities are allowed within the particular ‘non-ideal’ rules of the debate.
Of course, too often I use this as an excuse to be lazy about advocacy. I definitely have a tendency to go too far in the sceptical direction, and just throw my hands up in the air and say I have no idea what to do (although when it comes to effectively relieving deprivation, I generally trust GiveWell). But maybe I see difficulty where there is none. If that is the case, I would very much like to know.
Given the balance of the things I write about, I get the feeling that people probably think I’m pretty conservative. I write quite a lot about taxation, the inability of the government to ‘stimulate’ the economy through spending and the fact that states are by nature coercive. And while I have become more conservative in the last 2-3 years, it is at the margin. Now, it just so happens that this is also the stuff that I find really interesting and I just feel like I have more things to say about it. It’s also the case that the standard of argument around these topics is extremely high in the blogs I read, and it’s more fun to get involved in that rather than amateur political philosophy (and Matt Yglesias pretty much has that angle covered, with appropriate snarkiness).
But I have a whole set of prior beliefs that I haven’t written about nearly as much, but that massively inform my political thinking. For example, I think there are incredibly compelling arguments to the effect that I owe the relative level of my income to factors way beyond my control. Imagine tomorrow everyone woke up and loads more people were good at consulting and not many people at all were good at flipping burgers. Prediction: my salary goes way down, fast food workers’ salaries go way up. My salary is a function of the supply of people offering my skillset, and the demand by other people for those skills. And whatever you think about free will etc., I certainly don’t control other people’s desires or skillsets. I’m but a tiny cog in a massive machine called ‘the labour market’.
That kind of thinking makes me pretty egalitarian. And if total redistribution of income was totally costless, I’d be pretty inclined towards it. But it very much isn’t costless, and I think it would have appalling practical consequences. If human beings were perfectly charitable and unselfish, then it would be fine. But we aren’t. We value ourselves over others. We get greedy when we get power, and massive income redistribution through the government creates a massive concentration of power. We always, always need to be thinking about what will in fact happen if we try and change something.
And this is why I write about the things I do. Because I think people don’t really understand the consequences of policy – and even if they happen to be right, they are almost certainly way too sure about it. The fact that we probably understate the effective tax rate as a percentage of their income for people with savings* doesn’t remotely mean (if correct) that the capital gains tax rate should be zero, or even less than it is now. But it probably does mean that you should have a weaker preference for the tax at the margin. I want you to update your beliefs to reflect new information. Where you end up depends on where you started (FWIW, I started out thinking they should be taxed the same. I now think it is appropriate that cap gains rates should be lower than income tax rates).
One of the best books I’ve read in the last year is Selfish Reasons to Have More Kids by Bryan Caplan. Whilst the book as a whole is fantastic, it’s the structure of Bryan’s argument that is so wonderful. Bryan presents tons of evidence from twin and adoption studies to suggest that genetics has a much larger role to play in the kind of person you turn out to be in the long run than the environment you grew up in**. If true, this means that raising kids need be nowhere near as stressful as most people make it out to be. Suppose Bryan is right, and your beliefs about the effect of parenting on children have changed: you should have more kids than you were planning to beforehand. Why? Because the cost of having them has fallen (individual parenting decisions aren’t as big a deal as you thought, so you can stop stressing about it). General rule: if the cost of something falls, or the benefits of something increase – you should want more of that thing. This is still consistent with wanting none of that thing, if the costs still outweigh the benefits. But it’s a great example of updating our beliefs to reflect new information (and I highly recommend the book).
I can’t present you with a complete and coherent position on anything. There’s just too many things to know, too many factors to consider and I’m not clever enough. Don’t get too hung up trying to figure out what I think. I’m not too sure myself, a lot of the time! Think at the margin, for yourself. Is this right? Is this wrong? How does it affect what I already believe? Should I be so sure? It’s not the natural way to think, but it’s the smart way.
* note: that statement depends on the fact that savings were at one point earned, which is obviously not true in the case of inheritance or expropriation
**provided the environment is such that adoption would be legally approved
“Don’t think, but look!” – Wittgenstein, Philosophical Investigations (66)
I’ve been wanting to write something about the epistemology of economics for a while, and have been spurred on by Noah Smith’s post yesterday, linking back to an older piece from Frances Woolley at Worthwhile Canadian Initiative. I recommend you go and read them yourselves if you like, and here I take a very similar line to Noah.
As anyone who did the equivalent of Logic 101 will know, deduction is essentially a schema for truth preservation. A logically valid inference is one where it is not possible for the premises and the negation of the conclusion to be true at the same time. If you put in truth and do it right you will get more truth out at the end – but if you start with garbage, then all bets are off.
People often talk about the idea of there being a conflict or dichotomy between inductive and deductive styles of reasoning. I don’t think that’s the right way to think about it. Sure, if you deduce from premises that are self-evidently false (read: most premises in economic models) then you have no reason to think the conclusion will be true. But if you go about induction willy-nilly without some kind of vaguely deductive model of how the world works, then you will end up believing all kinds of crazy things. What I think we do most of the time is create models from simplified premises that may not be ‘true’, but are a starting point from which we can see whether the simplification still gets us to the right answer. But how can we know whether it gets us to the right answer?
By testing it. By making predictions and seeing if they come true.
Deductive models, when they begin with premises that are either obviously false or gross simplifications, are only ever worth anything as a preliminary to inductive testing. Furthermore, if you are beginning with premises that you think are self-evidently true, then you need to seriously question whether you are right*, or even saying anything at all**.
In Frances’ post, she describes a genuine distinction between the ways in which traditional economists and behavioural psychologists in fact go about trying to understand the world, and ends on a wistful note:
Dabbling in economic psychology or behavioural economics is a little like taking the red pill – you go down the rabbit hole, and wake up realizing that the entire world is an illusion…
I want a purple pill – a merging of the red and the blue – that would allow me to merge behavioural insights into a coherent model of economic behaviour…
But I don’t know if such a thing is even possible.
I can see there is a genuine issue with the mathematical complexity of a more realistic model of decision making. The model may become too complex to manipulate in order to generate novel results. But I don’t think this is the biggest problem. I’m pretty sanguine about the fact that straightforwardly false assumptions are knowingly put into models (it works in physics!). The economist’s biggest problem is that even with really simple assumptions, the phenomena he/she is attempting to model are often too complicated to be amenable to robust inductive testing. And this can be used as an excuse not to try***.
My belief that recessions are caused by monetary disequilibrium is basically the product of a model where you simplify the economy to assume there is only money, generic units of ‘output’ and sticky prices. It generates the prediction that reducing the value of currency in such circumstances will help achieve full employment of resources and an economic recovery. But who says this is the ‘right’ simplification to make? Well, this chart from a 1992 book by Barry Eichengreen is a good place to start (h/t Brad DeLong).
Obviously, this graph by itself doesn’t constitute proof. Some people who would disagree with me also cite this chart in support of their view, or disagree that it provides much by way of evidence. But if it wasn’t for the fact that the graph looked like this rather than the other way around, I’d think that maybe the money-output simplification is not a good one to make. The point being, the model generates predictions about what we should expect to happen in recessions when either the money supply is increased, or it is signalled that the central bank will do so if necessary. The model gives you an idea for where to look. It is not a substitute for looking.
People trained in economics are often very good at thinking at the margin. And at the margin, in macroeconomics I think we**** need a bit less abstract thinking and a lot more casual empiricism. It doesn’t take a genius to point out that some very popular explanations for the housing boom don’t even pass the laugh test once you take a cursory glance at the relevant data. It shouldn’t be a point of dispute, pace John Cochrane, that nominal changes obviously have real effects. People with some economics training ought to be very good at quickly correcting such things. But quite frankly, we could do a lot better – especially when it comes to correcting ourselves. Too often we don’t even bother to look; easily satisfied by a little just-so story about incentives, or an elegant but unrealistic model of a messy reality.
(File under self-admonishment.)
*I’m not going to argue now about what may or may not count as ‘self-evidently true’, but suffice to say that the set of things that are self-evidently true is a somewhat minuscule subset of the things people believe to be self-evidently true
**I’ve heard people defend the economist’s model of ‘rationality’ on the basis that you can take pretty much any action by any person and identify a set of preferences that makes that action ‘rational’. But if you say that, you haven’t created a ‘theory of action’ with any useful content, but rather created a schema by which we can model people’s actions if we can identify the terms of the schema empirically. If you have no way of telling what a person’s preferences are except insofar as they are revealed by their actions, and you stipulate the preferences are always consistent, then you don’t have a useful theory of anything. You can ‘model’ ex post facto anything I do by attributing some set of beliefs and desires to me, but that doesn’t mean you can remotely understand or predict what I will do or have done
***If you think that it is not in the business of economics to generate empirical predictions, then I have the right to ask why I should give a rat’s ass about economists’ views on policy
****I did IB economics and intro micro/macro at Oxford. It counts, OK?
Suppose you have an intuition, ‘P’*. Should you believe P? Should you believe not-P? For some propositions (e.g. the moral kind) there may not be much to go on other than our intuitions. So for the moral intuitionists in particular, here’s a little puzzle for you.
Suppose intuitions have a probability of being true that is greater than 0.5. Therefore, if you have an intuition that P, you should believe it rather than not-P. But if this is true, then the consensus of the masses’ intuitions is not only more reliable than yours, but is actually very reliable (Condorcet’s jury theorem). If intuitions have a probability of greater than 0.5 of being true, you should therefore not trust your intuition, but ask the masses.
Suppose instead intuitions have a probability of being true that is less than 0.5. Therefore, if you intuit P, not-P is more likely. Not only should you not trust your intuition – you can be pretty certain of the right answer by asking the masses and then believing the opposite!
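For the sceptical, the jury theorem is easy to check by simulation (a quick sketch with made-up numbers, assuming independent voters of equal reliability):

```python
import random

# Simulate Condorcet's jury theorem: how often does a majority of
# independent voters, each correct with probability p, get it right?
def majority_accuracy(p, voters=1001, trials=2000, seed=42):
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        correct_votes = sum(rng.random() < p for _ in range(voters))
        hits += correct_votes > voters // 2
    return hits / trials

print(majority_accuracy(0.55))  # individuals 55% reliable -> majority ~99.9%
print(majority_accuracy(0.45))  # individuals 45% reliable -> majority almost
                                # always wrong, so believe the OPPOSITE
```

Even a tiny individual edge over (or under) 0.5 gets amplified into near-certainty at the level of the crowd.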
Obviously I have posed the question in a very simplistic way, and there are questions of statistical independence, the different reliability of different kinds of intuitions** etc. But it does seem that whatever the probability of your intuitions being true is, in a lot of cases it could still make sense to ask around and then either believe what the masses say, or the opposite (depending on your view of the prior probability of the particular intuition being correct).
*This is philosopher-notation for an arbitrary proposition. P could be anything – ‘Murder is always wrong’, ‘I am the same person I was yesterday’ etc.
**There even may be classes of intuitions that are necessarily reliable on the level of the masses, because they are intuitions about proper conventions… but let’s not go into that here
Marshall: So when Lily and I get married… who’s gonna get the apartment?
Ted: Wow… that’s a tough one. Y’know who I think could handle a problem like that?
Ted: Future Ted & Future Marshall.
Marshall: Totally! Let’s let those guys handle it.
Ted: Dammit, Past Ted!
It’s a basic fact of human nature that we discount future benefits and costs in our decisions. If you offer me the choice between £1000 today and £1010 in a year, I’ll take the £1000 today even though the £1010 is straightforwardly more money (assuming away inflation and any interest on the £1000 you give me today, for the moment). I want to think about this in a different way to the way we normally do. For the moment, imagine that rather than my being the same person with inconsistent preferences (Richard in a year’s time would rather have waited for the extra tenner), we are two completely different people. Future Richard is a different person to Richard – he’s just a person I care about a lot more than other people.
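For what it’s worth, that choice pins down a bound on my personal discount rate (simple arithmetic, no assumptions beyond ignoring inflation and interest as above):

```python
# £1010 in a year is worth 1010 / (1 + r) today, so I'm indifferent when
# 1000 = 1010 / (1 + r). Preferring the £1000 now means my r exceeds this.
future_amount = 1010
now_amount = 1000
break_even_rate = future_amount / now_amount - 1
print(f"{break_even_rate:.1%}")  # -> 1.0%: taking the cash now reveals that
                                 # I discount Future Richard by more than 1%
```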
Like most people (I hope), I think we ought to be altruistic. There may be practical reasons for saying that it is either acceptable or indeed morally required to value yourself or your family over others, but these reasons are in service to the fact that it would be good for all if we did this*. One way of formalising altruism is that instead of simply acting to satisfy our own needs/desires/preferences, we ought to give everyone’s needs/desires/preferences equal weight in our decision making. But this is exactly what we fail to do towards our own future selves. We fail to give them appropriate weighting in our considerations, and it generates all kinds of deleterious effects on their welfare (hence why I’m writing a blog post rather than kicking off my CFA studies, or getting into the utter mountain of work that I need to get done in the next 7 days, for example). I guess I want to suggest that we are morally required to value the pains, pleasures, aspirations and hopes of our future selves to the same extent as today.
This is especially important if we think about the benefits of public policy from the standpoint of Net Present Value. This is because one of the factors that determines the discount rate is our preferences about future consumption – we have to be paid interest in order to induce us to save for the future, and contrariwise we have to pay others to lend us money to spend now rather than later. But we shouldn’t (from an altruistic standpoint) be looking to maximise value for current persons, but for all persons across time. The discount rate reflected in NPV calculations is morally prejudiced against future persons, because we don’t care about them as much.
That’s not to say that interest rates and discounting shouldn’t exist – far from it! Take, for example, the preference for consumption smoothing over a lifetime – the interest rate co-ordinates the desires of those below their long term average wage to borrow to maintain consumption, and those who are above it to save for the point where they are below it. Interest rates do not necessarily reflect a failure of treating all persons equally. But our failure to be altruistic towards ourselves means that the interest rate is higher than it otherwise would and ought to be. The discount factor does not in and of itself drown out the future voices of Future Ted and Future Marshall (because remember: the discount factor represents opportunity cost), but using NPV calculations gives the wrong result in public policy – which ought to be conducted towards the promotion of the common good – by using a discount rate that partially reflects our failure to be altruistic towards our future selves.
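To see how much the discount rate matters for the public policy verdict, here’s a sketch with entirely made-up numbers for a hypothetical project:

```python
# Hypothetical public project: pay 100 today, receive a benefit of 10 a year
# for 30 years. NPV discounts each future benefit back to the present.
def npv(rate, cost=100, benefit=10, years=30):
    return -cost + sum(benefit / (1 + rate) ** t for t in range(1, years + 1))

for r in (0.03, 0.10):
    print(f"r = {r:.0%}: NPV = {npv(r):+.1f}")
# At 3% the NPV is strongly positive; at 10% it turns negative.
```

The same stream of future benefits goes from clearly worth it to not worth it purely as a function of the discount rate – which is exactly why a rate inflated by our lack of altruism towards our future selves biases policy against them.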
Indeed, one might even make the argument that altruism towards our future selves is a much more achievable target than getting us to be altruistic towards each other. It would mean we would do fewer bad and stupid things to ourselves, and fewer bad and stupid things to each other (as we would be more concerned with the future consequences for our relationships). Punishment would be a bigger deterrent, and as such we would have to punish fewer people.
All it would take is for us to care more about the person we already care about the most.
*Yes, yes I know it’s a lot more complicated than this but it doesn’t affect the argument
When the facts change, I change my mind. What do you do, sir? – JM Keynes
At a restaurant somewhere back in 2008 I had a long argument with my father about private schooling. I was making the argument that private schools should be banned, on the grounds that it was unfair for the children of wealthy parents to receive better schooling, which then increases their chances of going to the best universities, which then increases their chance of getting the best jobs etc. My father objected not on the grounds that this wasn’t unfair, but that the ban was a gross invasion of personal liberty and wouldn’t work anyway, as how could you stop parents hiring tutors at home, or tutoring their kids themselves, without spying on them in their own homes? I was a philosophy student, arguing from the position of what I thought was theoretically ‘right’, whereas my father was approaching from a more pragmatic ‘would it work?’ perspective.
My recent journey from being a liberal to a liberal-libertarian (or ‘Liberaltarian’) is almost the opposite of the one that Will Wilkinson spoke of in a characteristically outstanding essay over at Big Think. Mr Wilkinson describes himself at one point as having undergone a “drift from right-leaning libertarian to libertarian-leaning liberal” who came to recognise the external forces that affect people’s lives and over which they have little or no authorship. At university I found myself falling into a much more extreme position – that if you consider the kinds of reasons that we accept as excuses or absolution of responsibility in normal cases, then it appears that we are not responsible (or have diminished responsibility) for almost everything we do. I still find these arguments extremely compelling. However, I have come to agree with Mr. Wilkinson that fostering individual conscientiousness and an ethos of responsibility are extremely important in practice for a prosperous and well-functioning society.
I would argue that in my own drift towards libertarianism I have not really revised my moral principles much, if at all. In a post last year (a time when I would still say things like Wilkinson was ‘the sole redeeming feature of the CATO Institute’), I argued that the state is justified only because the good it does outweighs the bad of its coercive nature, and actually took this to be an argument against libertarianism on the basis that even a radically libertarian state is unjustifiably coercive in normal social contract terms. Everyone is playing the ‘how good are the consequences of government’ game whether they like it or not, and I thought the consequences of liberal government were good.
What I have radically revised are my empirical beliefs about the world – namely, what is the actual effect of government policy and regulation, especially on the poor. I found myself completely persuaded by libertarian economists enamoured with public choice theory, which attempts to explain the systemic reasons why government policies so often fail to achieve their objectives. The absolutely superb Bleeding Heart Libertarians blog is a treasure-trove of compelling arguments that government regulation and initiatives designed to help the poor are often either so flawed or subject to co-opting by special interests that they end up hurting the very people they intended to protect.
Scott Sumner awakened me to the importance of monetary over fiscal policy (and, indeed, the arguable irrelevance of the latter) as the appropriate mechanism for macroeconomic management. He also argued that the falsity of the efficient markets hypothesis doesn’t have the policy consequences I thought it did.
Michael Mandel, amongst others, persuaded me that economies grow through constant innovation and invention.
Will Wilkinson, amongst others, persuaded me that inequality is a complicated phenomenon and a consequence of injustice, rather than constitutive of it.
Bryan Caplan, amongst others, persuaded me that the most powerful policy weapon against poverty by far would be open borders.
And finally, a year of reading philosophy, politics, economics and finance blogs for 2+ hours every day has demonstrated to me just how little I knew even as an ostensibly informed university student, and that the ability of people to understand and shape complex phenomena is much less than we ordinarily think. And thus I am persuaded by Hayek that the great thing about markets is that they co-ordinate all the little bits of knowledge that individual agents have and that simply cannot be aggregated by even an extremely intelligent, inhumanly diligent and miraculously benevolent group of policymakers.
I have always been a consequentialist of sorts, whose politics was concerned with successfully increasing the welfare of the least well off. I have now come to the view that this would be achieved through adopting a substantially more libertarian policy stance. But if you can persuade me that an interventionist policy with good intentions will actually achieve good results, or that a regulation won’t hamper innovation and reduce the welfare of the future poor, or that I shouldn’t worry about the powers that a new law gives to the state and the special interests that will attract – if you do all that, then I will change my mind.
The curious task of economics is to demonstrate to men how little they really know about what they imagine they can design – FA Hayek