…we think about taxes the wrong way around. Most people think that raising a 5% tax rate to 10% is more noticeable (and painful) than raising a 50% tax rate to 55%. After all, the first represents a doubling of your tax rate; the second is only a 10% increase. But this is exactly the wrong way to think about it. The pain of a tax hike is determined not by how your current tax rate compares to your earlier tax rate, but by how your current disposable income compares to your earlier disposable income. Doubling the tax rate from 5% to 10% cuts your disposable income from 95% of your earnings to 90% – a fall of about 5.3%. But raising it from 50% to 55% cuts your disposable income from 50% to 45% – a fall of 10%. That’s a much bigger whack.
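The arithmetic can be checked in a few lines of Python (the function name and rates are mine, purely to illustrate the point):

```python
def disposable_income_change(old_rate, new_rate):
    """Percentage change in disposable income when the tax rate
    moves from old_rate to new_rate (rates as fractions of income)."""
    old_disposable = 1 - old_rate
    new_disposable = 1 - new_rate
    return (new_disposable - old_disposable) / old_disposable * 100

print(disposable_income_change(0.05, 0.10))  # ≈ -5.26: the "doubling"
print(disposable_income_change(0.50, 0.55))  # ≈ -10.0: the "mere 10% increase"
```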
OK, I’m willing to call it – it’s not as good as season 1… but it’s still probably the best thing on TV. Spoilers (and some tough love) below the fold
[All spoilers have been put below the fold]
The Killing (Forbrydelsen in the original Danish), my favourite current show on television, returned to the BBC for a second season of intrigue, complexity and the realistic pacing of criminal investigation that led it to be included as part of an entertainment movement of dubious acclaim – ‘The New Boring’. Naturally, I completely disagree that The Killing is boring at all. On the contrary, it has a sweeping Dickensian quality to it that has quite rightly been compared to HBO’s The Wire (my favourite show ever) in terms of the rich depiction of its characters and the institutions in which they live and work. Of course, with a novel you can always turn to the next page, whereas with The Killing I have to wait a week for my next fix. I’m not sure how well I’m going to cope with this, as I got into Season 1 about three quarters of the way into its broadcast on BBC4 and could therefore watch large numbers of episodes back-to-back on iPlayer…
Anyway, before I get into the spoilers and my thoughts on the beginnings of what I saw unfold, I want to talk a little bit about some aspects of the first season that made it stand out in its depiction of particular issues that are not often done nearly as well. For example, I am a huge Law & Order fan, especially of Special Victims Unit, which features an NYPD department tasked with solving and preventing crimes of a sexual nature. The show explores a number of themes on sexual violence, and I would say on this front it has been a net force for good in raising awareness of the issue. But it is constrained by its one-crime-per-episode format, which doesn’t allow for much exploration of the victim’s background and relationships, or of the effects of such crimes on them (if they survive) and their families and friends. The horror is often restricted to descriptions of the crime, which can limit the total impact of the violence by focusing our attention on just the act itself rather than what it may represent, or how its effects ripple through all those affected by it. Indeed, there is always the very real danger of falling on the wrong side of the fine line between the horrifying and the titillating, where the draw of the show becomes the very graphic depictions of the violence it seeks to condemn*.
The crime in season I of The Killing had a significant sexual element to it, and it was made all the more horrendous – and all the more real – by paying close attention to the anguish and turmoil it unleashed on the lives of those who knew and loved Nanna Birk Larsen. This is especially true given the lamentable real-life fact that the perpetrator of a crime like this is often a relative or friend, which ends up placing the grief-stricken under suspicion of the very monstrosity which has brought down their world. The Killing also filled out in detail the relationships and aspirations of Nanna, which takes her from being (for want of a much better term) an otherwise ‘generic’ young woman to a complete and compelling depiction of a person whose life project, in all its glorious complexity, was brutally and unforgivably interrupted.
Of course, I would be seriously remiss if I didn’t comment on the show’s centre of gravity: Detective Sarah Lund (played by the utterly mesmerizing Sofie Gråbøl). It says something of the sophistication of the show that while Lund is in many ways a ‘typical’ detective – devoted to the job, more competent than her peers and superiors, possessing an uncanny knack for detail, totally indifferent to how her line of inquiry may inconvenience those in authority, and someone who struggles to balance the destructive elements of her personality which make her good at her job with her family and other commitments – the fact that she is a woman is simultaneously remarkable and unremarkable. Lund is remarkable for being cast in what is a much more traditionally ‘male’ character type, but the way the show is done makes her existence seem totally unremarkable at the same time. It would have been all too easy for Lund to have come across as contrived (a line SVU arguably stepped on the wrong side of by making Mariska Hargitay’s character a child of her mother’s rape). But luckily for us, what we have instead been offered is a realistic and powerful portrait, by an exceptionally talented actress, of a workaholic detective who just so happens to be a woman. I want to live in a world where Sarah Lund is remarkable for the right reasons, and I think The Killing is a positive depiction of a world where that is the case.
Now, this is the part where you go and watch season I if you haven’t seen it, watch season II episodes 1&2 on iPlayer if you have seen season I, and if you have done both those things – read on.
*I think this is a serious issue in Stieg Larsson’s ‘Millennium Trilogy’, but that’s a topic for a whole other discussion
[SERIOUS SPOILERS FOR EPISODES 1 & 2 BELOW]
[Thinking out loud, usual caveats on speculative reasoning etc. This line of thought was inspired by a Karl Smith post about a month ago, but I take a different angle here]
Something I have been thinking about recently is why government is often so incompetent at delivering services. I don’t think the gap with the private sector is fully explained by the profit motive; instead I’d point to a more general feature which I want to term ‘institutional decay’. I think that any institution geared towards providing a particular good/service, be it public or private, will tend towards incompetence over time.
The first reason is that the processes and systems of an institution are designed around the particular challenges of production and delivery at a particular point in time. I don’t see any reason why smart, creative and dedicated people in government can’t do this design work better than people in the private sector. But the costs of designing and implementing such systems are significant and largely fixed. It is very difficult for an institution to change its systems unless it is under some kind of existential threat (e.g. bankruptcy): people get used to working under the system and don’t want it to change, and the senior people responsible for the system take it as a kind of personal affront when it is no longer the right one for the job. Even if the system is initially designed very well, the evolving challenges of delivering the product/service successfully will eventually render it obsolete. This is just as true for an enormous private corporation as it is for an NHS hospital. The difference is that in a competitive capitalist economy, innovative new systems and processes don’t have to jump over all the internal hurdles inside existing institutions in order to be implemented – you can start a new firm, or go to a smaller company with a plan for how they can better serve their customers. And if you are innovative and serve customers better, you will grow and challenge the incumbents.
The second reason I think this is that even if the challenges of delivering a particular product are stable over time, institutions will become sclerotic under the pressure of internal rent-seeking behaviour. A few months ago I read Mancur Olson’s superb book ‘The Rise and Decline of Nations’, in which he describes how under conditions of stability the political institutions of a nation will tend towards creating distributional factions that co-opt the political process towards producing private goods for particular factions rather than public goods which benefit all. It’s a follow-up to his extremely influential book ‘The Logic of Collective Action’, which is also on my ever-lengthening reading list. Anyway, the point is that there is no reason to think Olsonian rent-seeking and ensuing sclerosis is restricted to the public sphere. It can occur in any institutional setting with distributional features, and the logic of collective action applies just as much to departments within companies as to political lobbying groups.
So I think we have a kind of dilemma: existing institutions tend towards incompetence under conditions of changing consumer needs and challenges of production, and also under conditions of stability as distributional factions tend to solidify and lead to incompetent and fractious governance.
The difference with a competitive capitalist system is that the dynamics of institutional decay can be overcome through the creation of new institutions. And if you look through history at how those companies praised for their innovation and dynamism almost never fail to eventually fail, this view of the world looks quite plausible to me.
So, this is a theme we have been hearing a lot from the MMT (Modern Monetary Theorist) crowd, and when I heard it over dinner last night from someone who is not in that camp, I felt more than ever that this was an argument in need of a clear, simple and decisive refutation. When people say quantitative easing is just an ‘asset swap’, they’re claiming that exchanging bonds for money won’t affect prices. It is true that it won’t necessarily cause inflation, but for reasons that have nothing to do with it being an asset swap. Here is my simple reductio example:
1) Conventional monetary policy operations work via swapping reserves for short-term government bonds
2) If the Fed/BOE/ECB decided to increase interest rates to 10% tomorrow and achieved this through conventional policy, there would be serious deflation
3) Therefore, the fact that anything is an ‘asset swap’ is irrelevant
The reason quantitative easing hasn’t been inflationary in Japan or the US or the UK is the expectation that as soon as prices start increasing, the central bank will vacuum the money up again. Temporary injections of money do not affect aggregate demand*^ – for QE to raise demand, it is a necessary ingredient that we believe the central bank will allow the inflation or NGDP growth to happen.
*Announcements of QE do seem to have had some very modest effects in the US, which is the result of people’s expectations about the Fed’s implicit target changing. You’d at least be more likely to think that Bernanke is determined to prevent deflation
^Thought experiment: imagine the Fed said they will double the money supply tomorrow, and then reduce it to half its current level by 2014. Would you buy a house?
So, I’m feeling fairly sanguine about my conceptual issues with NGDP. So long as the capital intensity of the economy does not change unduly over a short space of time, the fact that we treat capital goods differently to intermediate goods doesn’t bother me too much for NGDP-targeting purposes. I’d say that NDP is certainly the correct measure of welfare, and if you try and think about how you’d give GDP some kind of micro-foundation in utility theory, I think you’d see why that is pretty quickly. I remain very open to views as to what the best nominal expenditure figure for central banking purposes is, but I think NGDP is probably fine even if you get some issues with accounting decisions as to what to count as ‘capital’ in borderline cases. And I definitely think that we don’t want to be getting into the question of how (or whether) we should be depreciating capital when it is idle during a recession, and what impact that has on Nominal NDP (for example, straight line depreciation would make recessions look worse relative to, say, the machine hours method).
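To make the depreciation point concrete, here is a toy sketch (all numbers invented) of how nominal NDP behaves in a downturn under straight-line versus machine-hours depreciation:

```python
# Toy economy: a recession cuts nominal GDP by 10% and capacity
# utilization from 100% to 80%. All figures are illustrative.
CAPITAL_STOCK = 1000.0
DEP_RATE = 0.10  # 10% per year straight-line, or 10% at full utilization

def ndp(gdp, utilization, method):
    """Nominal NDP = nominal GDP minus depreciation."""
    if method == "straight_line":
        dep = CAPITAL_STOCK * DEP_RATE                # idle capital still depreciates
    else:  # "machine_hours"
        dep = CAPITAL_STOCK * DEP_RATE * utilization  # idle capital does not
    return gdp - dep

boom_gdp, bust_gdp = 500.0, 450.0
falls = {}
for method in ("straight_line", "machine_hours"):
    falls[method] = ndp(boom_gdp, 1.0, method) - ndp(bust_gdp, 0.8, method)
    print(method, falls[method])
# Straight-line NDP falls by 50; machine-hours NDP falls by only 30 -
# the same recession looks worse under straight-line accounting.
```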
Googling last night on the topic of calculating GDP in practice, I stumbled across a paper hosted on the BEA website, entitled ‘Taking the Pulse of the Economy: Measuring GDP’ (Landefeld, Seskin & Fraumeni), originally published in the Journal of Economic Perspectives. It’s actually quite interesting. Let me begin by quoting the paper:
In the United States, the GDP and the national accounts estimates are fundamentally based on detailed economic census data and other information that is available only once every five years. The challenge lies in developing a framework and methods that take these economic census data and combine them using a mosaic of monthly, quarterly, and annual economic indicators to produce quarterly and annual GDP estimates. For example, one problem is that the other economic indicators that are used to extrapolate GDP in between the five-year economic census data—such as retail sales, housing starts, and manufacturers’ shipments of capital goods—are often collected for purposes other than estimating GDP and may embody definitions that differ from those used in the national accounts.
Yikes! There’s a really good summary on p200-201 of all the various metrics that are used to calculate the various components of GDP. It’s safe to say that the list initially provided me with little comfort. However, it does appear that past revisions at the time of the census have been low:
For the last five benchmark revisions of GDP, which correspond to the census years 1982, 1987, 1992, 1997, and 2002, the nominal level of GDP was revised an average of 1.1 percent, and the growth rate between benchmark years was revised an average of 0.26 percentage point. The corresponding mean absolute revisions to the nominal level of GDP and the growth rate were similar in magnitude because most of the revisions were upward.
This does give me some comfort, subject to the key assumption that the benchmarking exercise itself is accurate. And I think it probably is – the assumptions you’d need to make to ‘fit’ the data in order to avoid a large one-time revision would likely, over time, get fishier and fishier. Eventually some government statistician would work out that they could make a name for themselves by blowing the whistle on the whole thing.
So, whilst I have one more farewell post in the works featuring GDP accounting, I will be looking at ‘G’ rather than ‘I’. What I would say is that it has been surprisingly difficult for me to get a grip on what GDP ‘means’. Market Monetarists are fond of saying that everyone else starts their analysis with real GDP, whereas they (‘we’, maybe? I think I count as one) start with goods/services and the money that is exchanged for them. This is what allows them (us) to diagnose a recession as monetary disequilibrium. But I guess this exercise has led me one step further back even from that – I see the building blocks as money and transactions, some of which count towards NGDP and some of which don’t, and it is surprisingly tricky to understand how we build those transactions up into more meaningful economic indicators.
Same caveats as before: I really think this can’t be right, so can someone please show me why it is wrong.
Intermediate good: Something used in the production of another good. Deducted from GDP when used up.
Capital good: Something used in the production of another good, but we don’t *really* know how much of it is used up at any point in time. Not deducted from GDP when used up.
Maybe it’s just me, but I don’t see why the fact we don’t *really* know how much of a capital good is used up should make a difference to national income.
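A toy example (the two-firm setup and all numbers are mine) of how pure classification changes measured GDP:

```python
# Two-firm toy economy, identical in every physical respect:
# firm A sells a tool for 100 to firm B, which wears it out entirely
# within the year producing final output sold to households for 300.
TOOL_PRICE = 100.0
FINAL_OUTPUT = 300.0

# Classified as an intermediate good: the tool is netted out of firm B's
# value added, so GDP is just the final output.
gdp_if_intermediate = FINAL_OUTPUT              # 300

# Classified as a capital good: the sale counts as investment, and the
# (physically identical) wearing-out is depreciation, which GDP ignores.
gdp_if_capital = FINAL_OUTPUT + TOOL_PRICE      # 400

print(gdp_if_intermediate, gdp_if_capital)
# Same economy, same transactions - measured GDP differs by a third.
```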
I agree with Josh and I agree with C.J. and I agree with Sam. And you know how that makes me crazy. – Toby Ziegler
Steve Waldman (of Interfluidity) makes a terse but thought-provoking remark on Twitter about the recent exchange (here, here, here and finally here) between myself and Lars Christensen on arguments for NGDP targeting
Best to replace the fiscal/monetary debate w/rules vs discretion debate that is catholic about means
Automatic stabilizers are the key to effective 1) policy and 2) expectation-setting. Because 1) They happen, and 2) People know they’re gonna happen. Could be fiscal or monetary, largely a question of where you inject the money.
Lars has also written a response to Steve here, which expresses some reservations about his fiscal policy ideas and reiterates the valuable point that the key to all this is rules. I don’t have a lot to add at this point, except to say that I think that central bank NGDP-targeting, if it works, is a solution to a specific problem, which Karl Smith today described better than I ever could
I can’t hammer this home enough. A recession is not when something bad happens. A recession is not when people are poor.
A recession is when markets fail to clear. We have workers without factories and factories without workers. We have cars without drivers and drivers without cars. We have homes without families and families without their own home.
Prices clear markets. If there is a recession, something is wrong with prices.
I personally think central bank NGDP-targeting is probably sufficient for solving this problem (my recent worries about GDP accounting aside for one moment). But this has a lot to do with central bank credibility, and I am not currently in possession of a model of central banking, taking into account public choice and other factors, that assures me it will always retain its credibility. Prudence seems to dictate baking in some helpful fiscal rules as well. Well-designed fiscal rules will do little harm if the central bank is credible, and will help if for whatever reason it is less credible in the face of a large fall in velocity. Put it this way: imagine we designed fiscal rules that aggressively amplified changes in the velocity of circulation of money. That would seem to make central banks (at the margin) less credible in targeting NGDP, as the amount of hypothetical QE they would have to do to signal seriousness about hitting the target gets larger. Fiscal rules that dampen changes in velocity should do the opposite – and making central banks more credible at the margin sounds like a good idea to me.
The ‘MMT’ers (‘Modern Monetary Theorists’) Steve and Lars refer to are right in their contention that the distinction between monetary and fiscal policy is artificial, insofar as you can conceive of fiscal policy as the government creating money through spending and destroying it through taxes and issuing bonds. That being said, there may be excellent reasons for leaving this artificial distinction enshrined in our institutions. Given the inconsistent preferences of the public on tax and spending, it’s probably a very good thing if people see the government as operating under a clear budget constraint.
Central bank NGDP-targeting in no way answers the question about the proper role or size of government, but rather liberates it from having to fight ‘recessions’ – indeed, I think adopting the rule would stop us from falling into them in the first place. My hope is that in an NGDP-targeting world, debates about what the government should or should not be doing will be of a higher quality, and there will be fewer hard choices to make. Now that would be progress.
WARNING: this is me in full thinking-out-loud mode. Please take it in that spirit. I would, in fact, be positively delighted to be shown this isn’t something I should worry about.
MV=PQ is a tautology. Of itself, it places no constraints on reality and is merely a device for organising our thoughts. But if you start empirically defining the variables, then it becomes useful. For example, it seems to me that there is an M pretty amenable to definition – the monetary base. And whilst a fair number of electrons have been spilled on the blogosphere about the impossibility of defining P, (almost) everyone seems to agree that PQ can be defined – as NGDP. V is then left over as whatever it is that happens to the monetary base in order to produce PQ. NGDP-targeting fans such as myself say the central bank should set a rule such that it will manipulate M (and, most importantly, our expectations about the future path of M) in such a way as to produce a certain level of NGDP. Whilst we can’t observe V empirically and can only infer it from M and PQ, we know the kinds of things that affect it just by thinking about how the economy works (the banking system, the payments system, anything that affects demand for money relative to goods and services etc.). So when we see changes in those things, we forecast changes in NGDP (assuming constant M), and NGDP-targeting advocates like me think the Fed should pinky swear to offset these changes by producing more or less M, as necessary. We don’t need an independent way of accurately measuring V, we just need to know the kinds of things that affect it in order to stabilize the forecast for PQ, which itself stabilizes PQ. Isn’t theory beautiful?
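The mechanics can be sketched in a few lines (the figures are illustrative, and V is simply backed out of the identity, exactly as described above):

```python
def implied_velocity(M, NGDP):
    """V is not observed directly; it is whatever makes MV = PQ hold."""
    return NGDP / M

def base_required(ngdp_target, V_forecast):
    """Monetary base needed to hit the NGDP target, given forecast V."""
    return ngdp_target / V_forecast

M, NGDP = 2.5e12, 15.0e12        # illustrative: $2.5tn base, $15tn NGDP
V = implied_velocity(M, NGDP)    # inferred V = 6.0
# Suppose some shock (banking, payments, money demand) is forecast to
# cut velocity by 10%. To keep NGDP on target, the base must grow ~11%:
M_new = base_required(NGDP, V * 0.9)
print(M_new / M)                 # ≈ 1.111
```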
…But, I have learnt to always be VERY wary when dealing with tautologies. One minute you can be saying something interesting and useful, and the next minute you can be saying, well, nothing at all. So, bearing in mind this is me thinking out loud, what if it is the case that we can’t define PQ? For example, because NGDP does not factor in depreciation, whether you count something as an intermediate good or a final investment good seems to affect NGDP (the using-up of intermediate goods is deducted from GDP, but the depreciation of capital is not). This distinction does not seem to me to be an economically relevant one, and it has big implications for what counts as ‘PQ’.
So, suppose I’m right that there is no economically relevant distinction between using up intermediate goods and depreciating the capital stock, or that there is no way of drawing a line between the two in reality (I am completely prepared to be proved wrong on this). What has happened to MV=PQ? Well, we can define M and… er… that’s it. To be consistent, we can either start counting (some) intermediate goods as final investment goods, or we can start factoring in the depreciation of final investment goods. Whichever way we go, we have some extremely nasty conceptual issues before we even get into the question of making oranges equal to apples for real GDP/price level disentangling purposes.
I’m inclined to say that there is no fact of the matter as to what NGDP is, because there is no fact of the matter as to what counts as ‘final goods and services’. And I’m also inclined to say that it is decidedly non-obvious that NGDP is the relevant nominal expenditure figure for macroeconomic stability, as we’re getting into the issue of what exactly it means for ‘the economy’ to be stable. When I have been saying things like ‘recessions are caused by an increase in the demand for money relative to goods and services’, I thought I had a pretty tight concept of what goods and services meant. Now I’m not so sure.
The reason this worries me is that the statistical organisations have some kind of way of producing a final number, and I haven’t got the foggiest idea how they can possibly do it. The measurement issues alone boggle the mind, but there seem to me some fairly profound questions about what it is they are trying to measure at all. Which leaves me with my biggest worry: that when trying to calculate a statistic with both deep conceptual issues and high potential for mismeasurement, even if you can arrive at an empirical definition of the concept, you will (consciously or otherwise) start making the tricky judgement calls in such a way that the result fits with your prior understanding of the world. And that, my friends, would be a problem.
Lars Christensen (the first person to subscribe to my blog, thank you Lars!) critiques my claim that we need monetary ‘stimulus’:
It might be because Richard is not an economist (no offence intended), but to a quasi-reactionary economist like myself when I hear the word “stimulus” I am reminded of discretionary policies. Market Monetarists are arguing strongly against discretionary policies and in favour of rules.
I am on the record here and here as saying that it is the creation of the expectation (which I think is best expressed through adopting an explicit rule) that makes the difference and I am absolutely in favour of making sure this part of the message is not diluted. I completely agree that the monetary policy we both favour is not ‘discretionary’ in the old Keynesian sense*. I termed the adoption of an explicit central bank target the ‘rule of law’ policy, and I think it’s a good description.
Linguistically, on the market monetarist position I think it remains ‘true enough’ to say that the Federal Reserve could ‘stimulate’ the US economy in terms of increasing output and employment by adopting a NGDP level target (at least against the baseline of the contraction that actually took place). This all being said, if it is considered misleading to term this action ‘stimulus’, then I am quite happy to bow to the convention. In particular, the point Lars makes here is excellent and should be taken on board:
In fact from a strategic point of view more QE without a clear monetary policy rule might in fact undermine the public/political support for NGDP level targeting, as another round of QE risks just increasing the money base without really increasing expectations for NGDP growth. This is a key reason why it is so important for me to stress why we are favouring NGDP targeting. We have to be right for the right reasons.
However, I also believe that if a person thinks the government should in some way manage ‘aggregate demand’ – which I at least implicitly used to think – then they are ripe for conversion to the cause. The road I personally travelled started from intuitive Keynesianism, moved to ‘the central bank moves last’, and ended at monetary disequilibrium theory. Alternatively, you could travel the (shorter) road from monetarism by saying the central bank should offset changes to the velocity of circulation, because it turns out velocity is not all that stable.
People start from lots of different places. The tent should be big. Of course, I agree that we must be right for the right reasons – but figuring out how to present your case to people who bring their own conceptual baggage to the table is important if you want to build a movement. As anyone who has ever spent a lot of time arguing about philosophy will have experienced, people can initially take the same words to mean vastly different things, and it often takes repeated interactions to learn what the other person intends by a certain term (see almost any comment thread on Scott Sumner’s blog). We need to be sensitive to other people’s priors, whilst also being careful to state our position clearly. That is a very difficult brief given the seemingly vast divergence of opinion on macroeconomic policy that has emerged over the last three years. But I fear it’s a necessary one.
On a personal note, I’d like to thank Lars both for responding to me, and for a very kind email he sent last week. I would not have upped my blogging without it.
*On my use of language, it’s worth noting that as well as my being a non-economist, the Great Moderation had already started when I was born. I missed out on what I am sure must have been an intensely frustrating debate in the prior years