Tuesday 27 December 2011

SOPA

The US Constitution declares that "The Congress shall have Power ... To promote the Progress of Science and useful Arts, by securing for limited Times to Authors and Inventors the exclusive Right to their respective Writings and Discoveries."  The House of Representatives is considering exercising this power by passing the so-called "Stop Online Piracy Act" - SOPA for short.  There is strong and well-reasoned concern about the technical requirements of the proposed Act, outlined here, and internet companies are near-unanimous in their opposition.

There's no doubt that SOPA would damage the internet; the question is what would be gained in return.  The media conglomerate Viacom has produced an online video putting its case.  They seem to be trying to persuade us that there will be no more SpongeBob Squarepants without new copyright protections online.  I don't believe them.  Successful films, television programmes, and books are more profitable than ever before.  J.K. Rowling is past half-way to being a sterling billionaire.  The truth is that the content that's attractive to 'pirates' is the content that's already enormously profitable.  This Act is aimed at making very rich people even richer.  It is not necessary for the promotion of making children's television programmes that there should be no unauthorized Rugrats toys, nor does songwriting depend on the ability of Warner Brothers to collect royalties for online performances of Happy Birthday.

Intriguingly, SOPA has the support of AFL-CIO, the American Federation of Labor and Congress of Industrial Organizations.  It says it believes that SOPA would increase employment.  In many circumstances Trades Unions add balance to the unequal relationship between employers and employees, but they are essentially economic actors who are interested not in employment in general but in the employment of their own actual and potential members.  It would be possible for a government to raise funds by auctioning the right to collect tolls on roads, and the Amalgamated Union of Tollbooth Operators would be pleased to support it.  That doesn't make it a good idea.


It's possible that if Warner Brothers made even more money out of Harry Potter and Happy Birthday it would spend more supporting potential new hits from otherwise struggling artists.  I'd be interested to hear from any such artists who believe this.  Pending that, let's not break the internet: I'm willing in exchange for that to allow media multi-millionaires to struggle on with what they've already got.

Saturday 24 December 2011

Slow roasting

Chris Dillow seasonally enjoins us to 'stick it to The Man' by cooking stuff. (I suppose it's implicit that The Man should not be allowed to eat what we cook.)  I claim no great proficiency in the kitchen, but what's on my mind today is the right way to cook turkey.  There is a way that gives much better results than the usual method, which is to roast it slowly overnight.

Before I try to persuade you of this, a word of warning: this method is widely advised against.  However you cook the bird, it's important that every part of it get hot enough: the USDA recommends 165F/74C.  This is comfortably above the temperature at which vegetative bacteria such as salmonella and staphylococci will die.  This paper takes a look at the bacteriology.  The concern with slow roasting is, or at least should be, that the meat will spend longer at intermediate temperatures at which bacteria multiply rapidly and may secrete toxins.  This creates "a small reason to set a minimum time for raw food cook come-up".

You should choose a bird that's been reared non-intensively.  I like to think that such birds are less likely to be contaminated with harmful bacteria, but in any case you owe it to a creature you're going to eat that it should have enjoyed a life worth living.  And it will taste better.  It will cost more too: most families in the UK can afford it; if you can't, then you'll have better things to worry about than my culinary advice.

So to the cooking.  The problem with roasting large pieces of meat at 160C or higher is that the outside will have spent a long time at a high temperature before the inside gets hot enough to be safe.  That's ok with meats like beef for which it creates an interesting variation in texture and flavour, especially if there's a good covering of fat to keep the outside moist.  But turkey just dries out.  You can avoid this by cooking at much more gentle temperatures.  And it makes the whole process of cooking a meal much easier, because there's no need to be exact with the timing provided you give it long enough.  In one way, this method is safer because you won't be tempted to take the turkey out too soon because the outside is getting overcooked or because the rest of the meal is ready.

I hesitate to point you to any specific procedure: I mix and match from various sources.  But the key points are:

- Cook stuffing separately.  This leaves the cavity empty for the turkey to cook from the inside too.
- Start the turkey at a high temperature to kill surface bacteria.  (Or boil it for a few minutes instead if you're equipped to do so.)
- Cook overnight on a rack at a temperature just below boiling point (cooler and bacterial toxins are more of a concern, hotter and it's harder to keep the meat moist, though a foil tent completely covering the roasting pan may do it)
- Use a meat thermometer to make sure the turkey is hot enough all the way through.  If you do this with time to spare you can always turn the heat up at the end to speed things up if necessary.

Best get started in the next hour or so...

Thursday 22 December 2011

Racial abuse update

There were developments yesterday in two of the cases I discussed six weeks ago.

First, John Terry is to face criminal charges detailed here:
On 23 October 2011 at Loftus Road Stadium, London W12, you used threatening, abusive or insulting words or behaviour, or disorderly behaviour within the hearing or sight of a person likely to be caused harassment, alarm or distress which was racially aggravated in accordance with section 28 of the Crime and Disorder Act 1998.
Contrary to section 31 (1) (c) of the Crime and Disorder Act 1998
31 (1) (c) says "a person is guilty of an offence under this section if he commits an offence under section 5 of the Public Order Act 1986 (harassment, alarm or distress) which is racially or religiously aggravated for the purposes of this section".  Section 28 defines "racially or religiously aggravated": I think it safe to assume that any case against Terry will have no problem satisfying that definition.  Section 5 of the POA says "A person is guilty of an offence if he...uses threatening, abusive or insulting words...within the hearing or sight of a person likely to be caused harassment, alarm or distress thereby...It is a defence for the accused to prove that he had no reason to believe that there was any person within hearing or sight who was likely to be caused harassment, alarm or distress, or...that his conduct was reasonable."

I confess that I overlooked in my earlier comment that section 5, unlike section 4A, does not require anyone actually to have been distressed.  It would be improper for me to speculate at this stage as to the outcome of the case, but I note that if convicted Terry faces a fine he would find trivial (a maximum of £2,500 if I read the scale correctly).  I cannot see how the public interest has been served by the police and CPS pursuing the case rather than allowing the FA to get on with its proceedings.

Meanwhile, the FA has shown how seriously it intends to take this sort of thing by banning Luis Suárez for eight matches and fining him £40,000.   The financial penalty imposed on Suárez, in whose case the police have shown no interest, is much heavier than the maximum fine faced by Terry, but even so it's the ban that will really hurt.  Suárez is paid about £4m a year by Liverpool, and they paid Ajax about the same per year again for his contract, so the value to the club of his services is of the order of £200,000 a game.  It's not surprising that the club is very disappointed by the penalty.
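The back-of-the-envelope sum above can be checked in a couple of lines (the salary and transfer figures are as quoted in the post; the number of competitive games per season is my own assumption):

```python
# Rough value of Suárez's services to Liverpool per game.
# Salary and amortized transfer cost are the figures quoted above;
# the ~40 competitive games per season is an assumption for illustration.
salary_per_year = 4_000_000         # ~£4m a year in wages
transfer_cost_per_year = 4_000_000  # roughly the same again, amortized
games_per_season = 40               # assumed

value_per_game = (salary_per_year + transfer_cost_per_year) / games_per_season
print(f"£{value_per_game:,.0f} per game")
```

which gives the £200,000-a-game order of magnitude claimed.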

Reportedly Suárez admits to calling Patrice Evra either "negro" or "negrito", speaking in Spanish, where the words do not carry all the same overtones.  And there's an unclear allegation that Evra started it by referring in some way to Suárez's origins in South America.  I suspect that neither player could hold butter unmelted in his mouth for very long, and that this case is on the borderline between racial abuse that ought to be stamped out and playground tit-for-tat that ought to be left on the field.  It's possible that the FA has decided that a salutary ban reduced on appeal to a slap on the wrist is the way to send the required message - the FA statement emphasized that Suárez "has the right to appeal" and suspended the ban to give him the chance to do so: good luck to them sorting this out.

Tuesday 20 December 2011

Intellectual-property rights: academic papers

I wrote earlier that I'm opposed to intellectual property rights wherever plausible alternatives exist.  I'll start my review of the alternatives with perhaps the easiest case: academic papers.

My relevant experience is all in science and finance: conceivably things work differently in the humanities.  In the fields I know about, which are overwhelmingly the ones that matter to the vast majority of the population, the way that journals work is that authors submit papers for publication, the editor asks experts in the field to review the paper, the reviewers make recommendations for changes and for or against publication, the authors are invited to make any changes the editor thinks advisable, and the editor eventually publishes the paper or rejects it.  The journal, which has contributed the least to this process, ends up with the copyright to the paper: the author and reviewers work for notional fees or none.

Copyright is therefore no incentive for the production of academic papers.  Its only function is to provide incentives for journals to carry out a filtering process by which readers get an indication of which papers are worth reading, and funding bodies get an indication of whose research is worth funding.  Since there are too many papers to read, and too many researchers to fund, both these filters are valuable.

In practice, authors often circulate their papers to peers before submitting them for publication, both as a courtesy to anyone whose work they cite, and in the hope of getting helpful feedback.  Also, they often make a version of the paper freely available on their personal websites - this is worth knowing if a paper you want to consult is hidden behind an expensive paywall.  I suppose that journals disapprove of this practice, but think it prudent not to draw attention to it by objecting publicly.

The alternative is simple: authors, as they do now, should consult whomsoever they wish until they think their paper ready for general release.  Then they should publish it on websites dedicated to the purpose.  arXiv does this already for some of the geekier fields.  Here's an outline of how it works, and here are some comments by its founder on its implications for academic publishing.  Here are some thoughts on its disadvantages: none of them seem to me to be fundamental to the question of copyright.  Interestingly, there is no suggestion that prestigious journals in Physics have been unable to operate without exclusive publication rights.

It may be that copyright restrictions are necessary in most cases to make it worthwhile to operate pre-publication peer review: here are some comments by Richard Smith, former editor of the BMJ, on how small a loss it would be to do without.

I submit that medical research in particular would benefit from free publication along the lines of arXiv.  That would get results out quicker, make them easier to consult online, and encourage publication of negative trial results.

If filtering mechanisms are required, something along the lines of Amazon's book-review system would be possible.  The user should have the option to apply weightings to the reviewers, favouring for example ones with high academic titles, or ones whose views, positive and negative, he shares regarding other specified papers.
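A minimal sketch of such a reviewer-weighting scheme might look like this (the reviewer names, weights, and scores are all invented for illustration):

```python
# Sketch of user-weighted paper scores, as suggested above.
# All names and numbers below are illustrative, not from any real system.
def weighted_score(reviews, weights, default_weight=1.0):
    """reviews: {reviewer: score}; weights: {reviewer: user-chosen weight}.
    Reviewers the user hasn't weighted get default_weight."""
    total = sum(weights.get(r, default_weight) * s for r, s in reviews.items())
    norm = sum(weights.get(r, default_weight) for r in reviews)
    return total / norm if norm else 0.0

reviews = {"prof_a": 5, "anon_b": 2, "peer_c": 4}
# This user trusts prof_a three times as much as an unknown reviewer.
weights = {"prof_a": 3.0}
print(round(weighted_score(reviews, weights), 2))
```

The point of the design is that the weighting lives with the reader, not the publisher: two users looking at the same reviews can reach different rankings according to whom they trust.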

Let's abolish copyright on academic papers now.  I predict that a few prestigious journals will survive, and the rest will be more than adequately replaced by free on-line publishing.

Wednesday 14 December 2011

Assholocracy

This blog is usually restrained in its use of language, fondly imagining itself to be read by relatives as well as its sometime trading-floor colleagues.  However, it is resolved to express itself with vigour when the occasion demands.  And so it now does: Geoffrey Pullum, whose book The Great Eskimo Vocabulary Hoax made me briefly regret never previously having perceived the attraction of a career in linguistics, thinks it important that the title of this post should get more google hits.  I am delighted to be able to render him this trifling service.

Intellectual-property rights

Tangible-property rights are a good idea.  They both encourage the supply of additional stuff and provide a mechanism for apportioning finite supply to where it's most wanted.  (This would work better if wealth were shared more equally.)

Intellectual-property rights are a bad idea.  They encourage the creation of new intellectual property, but they impoverish humanity by restricting the use of non-rivalrous goods.

Starting from scratch, would anyone really want the system we have now?  I think one would explore every other idea for rewarding innovation and creativity before being willing to settle on what we have as the least bad option.  So I'm going to explore other ideas in future posts...

Tuesday 13 December 2011

Alcohol-Attributable Fractions

This press release on Saturday reported that there were 1,173,386 hospital admissions related to alcohol in 2010-11, an increase of 11% from the previous year and more than double the number in 2002-3  (Hospital Episode Statistics are calculated from 1st April to 31st March).  Andrew Lansley for the government said:
These figures are disturbing evidence that, despite total consumption of alcohol not increasing recently, we have serious problems with both binge-drinking and long-term excessive alcohol abuse in a minority of people.
These consistent rises show that Labour took their eye off the ball on tackling alcohol abuse during their 13 years in power. Their reckless policies, such as the decision to unleash a 24-hour drinking culture in our country, only made matters worse.
Whereas Diane Abbott for the opposition was of the view that
The alarm bells should be ringing with the publication of these figures. It is clear that this Government is rapidly pushing us towards a binge-drinking crisis.
It is clear that for Andrew Lansley, the be-all and end-all is whether his friends in big business are happy, and, unfortunately, it is costing our NHS and British families an absolute fortune. A recent report predicted that binge-drinking will cost the NHS £3.8 billion by 2015, with 1.5 million A&E admissions a year.
So there you have it.  For the Tories, the problem is one of both binge drinking and long-term alcohol abuse, and it's Labour's fault.  For Labour it's just binge drinking encouraged by big business that's the problem, and the Tories are to blame.  (Ms Abbott was talking about a different report: I haven't traced it but it's mentioned here.)

The press release was reproduced, quotes and all, in most of the papers.  The Guardian fleshed it out a bit.  The Daily Mail took Ms Abbott's word for it that all the admissions were to A&E.  The Times [paywall] got a quote from Mark Bellis, director of the North West Public Health Observatory which compiled the figures "These things are working their way through the system from a massive increase in alcohol consumption over the past two or three decades.  We've probably got more of this to come...Particularly at this time of year, we've got to address our relationship with drunkenness."

There's one dissenting voice, which calls the story a lie and links to this description of how the figures are calculated (the analysis dates from when the 2009-10 figures were published in May this year - the calculation seems to be quicker now).  The statistics are compiled not, as you might think, by asking people admitted to hospital whether they've been drinking (in A&E) or how much they drink (on the heart, liver, and cancer wards), but by applying an "alcohol-attributable fraction" to each admission according to diagnosis, age, and sex.  This methodology is confirmed in a comment by Andy Sutherland of the NHS Information Centre (I'm fairly confident that it's really him, because the press release correction he promises did appear).

The calculation of AAFs specific to England is described in this pdf (the purpose of which is described here). The method for each diagnosis is to identify the best available research on the increased (or decreased) risk associated with various levels of alcohol consumption (by age and sex where the data were available), apply the levels of alcohol consumption determined by the 2005 General Household Survey (by age and sex, adjusted using new estimates of the alcoholic content of drinks), and hence calculate what proportion of hospital admissions in 2005 were related to alcohol consumption.  Ideally alcohol consumption figures integrated over time should be used for diseases which take many years to develop, but the method seems broadly reasonable to me.
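The attributable-fraction arithmetic described above is the standard one: for each diagnosis (by age and sex), combine the population shares at each consumption level with the relative risks at that level.  A sketch, using invented prevalences and relative risks rather than the 2005 figures:

```python
# Population attributable fraction for one diagnosis/age/sex group:
#   AAF = sum(p_i * (RR_i - 1)) / (sum(p_i * (RR_i - 1)) + 1)
# where p_i is the share of the population at consumption level i and
# RR_i the relative risk at that level relative to abstainers.
# The numbers below are illustrative only, not the 2005 English figures.
def attributable_fraction(prevalence, relative_risk):
    excess = sum(p * (rr - 1) for p, rr in zip(prevalence, relative_risk))
    return excess / (excess + 1)

p = [0.2, 0.6, 0.2]    # shares: abstainers, moderate, heavy (sum to 1)
rr = [1.0, 1.2, 3.0]   # relative risks at each consumption level
aaf = attributable_fraction(p, rr)
print(round(aaf, 3))
```

Multiplying each year's admissions for that diagnosis by its AAF gives the "alcohol-related" count - which is exactly why holding the AAF fixed while consumption changes is the weak point discussed below.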

The data for all other years since 2002-3 have been calculated using the same AAFs.  Collated data can be downloaded in a spreadsheet here, showing a steady rise in alcohol-related hospital admissions.

Let me say that again: "calculated using the same alcohol-attributable fractions".  So what has happened is that researchers into each diagnosis have analysed data on alcohol consumption for people with and without the disease, and fitted those data to a model in which the diagnosis is due to two perfectly uncorrelated factors, one for alcohol consumption and one for everything else.  Applying this model to data on alcohol consumption in 2005, statisticians have deduced what fraction of 2005 hospital admissions for each diagnosis was due to the alcohol factor - the 2005 AAF.  And then these two perfectly uncorrelated factors have been assumed to be perfectly correlated in every other year, so that the AAF remains constant.  I am shocked that reputable statisticians have put their names to this method.  I can see no good reason not to repeat the 2005 analysis each year (except perhaps that it would take longer to get the numbers out).  Certainly that would give different results, since current alcohol consumption would actually be an input to the analysis.

What are the data on alcohol consumption?  The General Lifestyle Survey reports on weekly alcohol consumption above safe limits:
Following an increase between 1998 and 2000, there has been a decline since 2002 in the proportion of men drinking more than, on average, 21 units a week and in the proportion of women drinking more than 14 units...This trend seems to be continuing under the new methodology; between 2006 and 2009 the proportion of men drinking more than 21 units a week fell from 31 per cent to 26 per cent and the proportion of women drinking more than 14 units a week fell from 20 per cent to 18 per cent. These falls were driven by falls in the younger age groups... 
and on average weekly consumption:
The British Beer and Pub Association (BBPA) makes annual estimates of per capita alcohol consumption using data provided by HM Revenue and Customs. These show a steady increase in consumption from 1998 to 2004, followed by a decline of about 5 per cent to 2006, and then a further decline of about 7 per cent from 2006 to 2009. The decline measured by the GHS is much greater, at about 15 per cent between 2002 and 2006.
(There was a change of methodology in 2006 that makes it difficult to produce a single set of numbers.)

It would seem that any increase in hospital admissions must be due either to the after-effects of long-term abuse which may have increased in the years up to 2002 or so before declining thereafter, or to occasional binge drinking not captured by weekly averages.  So I looked at data for three diagnoses: "toxic effect of alcohol", which should be an indication of binge drinking, "alcoholic liver disease", to capture the effects of long-term alcohol abuse, and "atrial fibrillation and flutter" to look at what's happening with a common diagnosis with a small but non-zero AAF.  These are available here, based on the same data as the alcohol-related admissions figures.

I've included series for total admissions and for the alcohol-related admissions data I'm writing about.  All the series are normalized to 2002-3, when the numbers were: total admissions 11,414,074; alcohol-related admissions 510,780; atrial fibrillation and flutter 68,731; alcoholic liver disease 11,582; toxic effect of alcohol 1407.
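The normalization is straightforward: express each year's count as a multiple of its own 2002-3 figure (the baselines below are the ones quoted above; the 2010-11 figure is from the press release):

```python
# Normalize each admissions series to its 2002-3 baseline (figures as quoted).
baselines = {
    "total admissions": 11_414_074,
    "alcohol-related admissions": 510_780,
    "atrial fibrillation and flutter": 68_731,
    "alcoholic liver disease": 11_582,
    "toxic effect of alcohol": 1_407,
}

def normalize(count, baseline):
    """Express a year's count as a multiple of the 2002-3 figure."""
    return count / baseline

# The 2010-11 alcohol-related figure from the press release:
ratio = normalize(1_173_386, baselines["alcohol-related admissions"])
print(f"alcohol-related admissions: {ratio:.2f}x the 2002-3 level")
```

which confirms the "more than double" in the press release - about 2.3x.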

What's striking is that the alcohol-related admissions numbers have increased far faster than any of the other series (admissions for the toxic effect of alcohol have not increased at all).  How can this be explained?  I looked through the diagnoses to find any that had at least doubled from a base of at least 10,000 since 2002-3: there were 27.  But only one of them had a non-zero AAF: hypertensive renal disease.  I must be looking at the wrong data - AAFs for many diagnoses are higher for younger patients, so there must have been a big increase in these admissions among the relatively young, which don't appear in the totals I looked at.  (The data are there in the spreadsheets, but you get only so much for your money.)

One thing did catch my eye however, which is the increase in admissions for "obesity" from 1,297 to 11,740. This may be associated with increasing availability of bariatric surgery, but it's no secret that there have been big increases in obesity.  Furthermore, obesity is linked to hypertension and diabetes, both of which will increase hospital admissions among the relatively young (not least for hypertensive renal disease).

This is speculative, but my guess is that the alleged rise in alcohol-related hospital admissions is in fact a rise in obesity-related hospital admissions, which are linked to some of the same diagnoses at similar ages.  Perhaps the statisticians behind this weekend's newspaper stories could find time to look into this hypothesis.

What the UK wanted

Frances Coppola links to this document, which the Telegraph tells us is the "UK protocol demand to the EU".  I reproduce here the demands, which in the document are interspersed with what look like briefing notes:
1. Unanimity on:-
  i) Transfer of powers from national level to EU agencies
  ii) Maximum harmonisation provisions that prevent member states imposing additional requirements
  iii) Fiscal interests of member states and imposition of taxes, levies etc.
  iv) The location of the European Supervisory Authorities
2) General provisions for:-
  i) Requirement for executive powers of ESAs to be clearly set out and not replace the exercise of discretion by member states' competent authorities
  ii) Ensuring that 3rd country financial institutions that operate only in one member state are authorised and supervised in that member state if they do not want a passport
  iii) No discrimination within the single market for financial services on the grounds of the member states within which an institution is established.
[The most relevant ESAs here are the European Banking Authority, based in London, and the European Securities and Markets Authority, based in Paris.]

The note on 1iii) explains that "...measures which entail very sizeable levies on the financial sector, such as the Deposit Guarantee Scheme Directive, are being pursued under QMV [Qualified Majority Voting] legal bases"

Coppola comments:
...they amount to imposing a UK veto in areas pertaining to financial markets and regulation. Existing EU practice allows decision-making in these areas to be done by Qualified Majority Voting, which would in effect mean that a tighter, more unified Eurozone could consistently out-vote the UK and therefore impose on the UK's financial sector regulation and taxation against the will of the UK government. It isn't correct to suggest, as some commentators have, that Cameron was trying to evade tighter regulation of the financial sector, or prevent imposition of a Financial Transactions Tax (FTT). In fact paragraph 2 of the proposed changes would allow the UK to impose higher capital requirements than the EU requires and unilaterally implement the ring-fence recommended by the Vickers committee. And the FTT is not mentioned in the proposals at all - and it would require all 27 nations to agree to it anyway. No, this was simply an attempt to preserve the UK's authority over its financial sector, which dominates its economy.
I'm not entirely convinced by Coppola's argument about the FTT.  First, not every leaked note is a complete and accurate representation of what was actually discussed.  Second, Cameron may have been concerned that some of the powers he wished to limit could be used to twist the UK's arm - "if you can't agree to the FTT we'll have to introduce a swingeing new levy instead".

Be that as it may, what are the deal-breakers in the demand?  Sarkozy spoke of a "lack of regulation of financial services".  Cameron claimed to have "protected Britain's financial services...from the development of eurozone integration".  1i) would seem to be relevant: the notes tell us that "agreed...restrictions are being tested routinely in new legislation seeking to extend the supervisory powers of the ESA".  Still, it's hard to see that if both parties wanted an agreement they couldn't reach a compromise on ESA powers.  I think we should take Henry Peterson's advice and follow the money, so my attention is more on 1iii) and 1ii): Merkel wants taxes and levies to be determined centrally, Cameron wants them to be in the hands of member states.

Incidentally, Paddy Ashdown in the Guardian asserts that a deal couldn't be reached because Sarkozy is utterly fed up with Cameron.  But I persist in believing that it's Sarkozy who talks, but Merkel who decides.  Because Merkel is the one with the money.

Update: Peter Mandelson thinks the EU will be able to force the FTT on us:
...EU financial regulation will be decided by majority vote and the majority will argue for strong regulation to curb the activities of the people who have done most to exacerbate, in their view, the eurozone crisis. The eurozone will introduce a financial transaction tax that will hurt the City and we will be powerless to halt it.

Monday 12 December 2011

MBA

I read in the Guardian that Osita Mba, the HMRC whistleblower, has a master's degree from Oxford.  In fact he is a Bachelor of Civil Law - at Oxford this is a postgraduate course but not a master's degree.

Which is disappointing, in that I was hoping to find that he had a qualification as reported, in business administration.

Saturday 10 December 2011

The capitalist yolk

The BBC has a story about the probability of getting six double-yolked eggs in a box of six.  It points out that young hens are more likely to lay double-yolked eggs, and those eggs are larger than normal, so if you buy a box of large eggs which happen to have been selected from eggs laid by young hens, the odds of their all being double-yolked are much shorter than the one in a million trillion they first thought of.
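For what it's worth, the "million trillion" figure comes from treating each egg as an independent double-yolker at the commonly quoted rate of about 1 in 1,000:

```python
# Naive odds behind "one in a million trillion": six independent eggs,
# each double-yolked with the commonly quoted ~1-in-1,000 probability.
p_double = 1 / 1000
p_box = p_double ** 6   # all six eggs in the box double-yolked
print(p_box)            # about 1e-18, i.e. one in a million trillion
```

The BBC's point, and mine below, is that the independence assumption is exactly what fails.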

Well yes, but the odds must still be pretty long.  And there have been other similar stories, not least this one about a woman who opened 29 double-yolked eggs in a row from a box of 30.

When something wildly improbable seems to have happened, it's as well to consider alternative explanations.  Here's a clue from David Spiegelhalter, Professor of the Public Understanding of Risk, who was intrigued to find he could buy a box of double-yolked eggs from Waitrose: he mentions it in this article.

It turns out that double-yolked eggs can be identified easily enough by shining a light through them - "egg candling".  (It's safe to try this at home.)  Egg producers operate egg-grading machines: I suppose the machines include automated candlers as part of the grading process.  So it's not unlikely that some machines automatically separate double-yolked eggs. Some of those eggs will be sold as such for a premium, if demand exists.  And if the demand is less than the number of double-yolked eggs produced, they'll just get put into regular boxes for the appropriate size of egg.  I reckon the probability of this happening sometimes would be very close to one.

Questions from the EU summit

We are told that 26 out of 27 European leaders agreed to amend the Lisbon treaty, but David Cameron exercised Britain's veto.  So the 26 will do as they wanted, but it won't be an EU agreement.

The leaders' statements raise more questions than they answer.  Among them:

- Were they really hoping to revise the Lisbon treaty?
A revised treaty would have to be approved by all 27 parliaments, and the Irish would probably have to have a referendum (unless the Supreme Court decided the revision didn't substantially alter the character of the Union).  It took two years and two Irish referendums to get the original Lisbon treaty ratified.

Cynics, including me, will have at least a passing suspicion that none of the countries really wanted unanimous agreement.  It would suit everyone if the deal could be kept just this side of requiring a new referendum in Ireland.  The 26 countries want to be seen to be tough on banks, especially Germany which has spent much more (Figure 1.6) than the UK on bail-outs.  Cameron wants to be seen to be tough on the EU.  This looks like a separation made in heaven.

- What exactly was it Merkel and Cameron couldn't agree on?
Sarkozy has made the clearest statement about this: he says the sticking point had been Mr Cameron's insistence on a protocol allowing London to opt out of proposed change on financial services. "We were not able to accept because we consider quite the contrary - that a very large and substantial amount of the problems we are facing around the world are a result of lack of regulation of financial services and therefore can't have a waiver for the United Kingdom."  The word 'because' is being strained to breaking point there: there's nothing about the substance of the plan that requires new powers to regulate financial services.

This document gives an outline of the plan.  The first ten clauses are concerned with the enforcement of budgetary discipline in the member states.  The rest is about the new "European Stability Mechanism" (ESM).  Clause 15 provides for qualified majority voting when emergency assistance is needed.  There's nothing about financial regulation.

The major disagreement between the UK and the others has been on the proposed European Financial Transactions Tax: the rest of the EU wants a tax on business in London to be paid directly to Brussels and the UK doesn't.  It's plausible that (unpublished) details of the proposed agreement would remove the UK's veto on this, and Cameron wouldn't agree.

- Why wouldn't Cameron tell us?
He said "We have protected Britain's financial services, and manufacturing companies that need to be able to trade their businesses, their products, into Europe. We've protected all these industries from the development of eurozone integration spilling over and affecting the non-euro members of the European Union". Which seems to be a suggestion that joining the other 26 would make it harder for the UK to sell them stuff. Colour me sceptical on that one.

If the real sticking point was that he wanted to retain a veto on a Financial Transactions Tax, why not say so?  OK, banks are not popular, but neither is giving money to the EU.  Couldn't he have said "they want to impose a tax on business in London that would be paid directly to Brussels.  I wouldn't agree to let them do that."?  The only explanation that makes sense to me is that the 27 agreed not to be specific about the problem, so that each could spin it in their own way.

- What is Merkel's plan to save the Euro?
No one seriously imagines that austerity alone is going to do it.  Cutting government spending never achieves the intended savings, because the government gets some proportion of its spending back in tax revenues.  The underlying problem is in the balance of trade: if each country had its own currency then FX rates would have adjusted to prevent the imbalances getting too large.  As it is, Italy, Spain, Portugal and Greece are all running large deficits.  It would in theory be possible for austerity to reduce imports enough (except perhaps in Greece which has a problem collecting taxes), but that's a theory that requires people not to mind having their living standards crushed.
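The clawback point is easy to sketch in numbers.  Here's a toy illustration in Python; the multiplier and tax share are invented for the purpose, not estimates:

```python
# Back-of-envelope illustration (my numbers, purely illustrative):
# a spending cut shrinks national income, and the government loses
# some of the headline saving in forgone tax revenue.
cut = 10.0          # headline spending cut, in billions
multiplier = 1.0    # assumed fiscal multiplier
tax_share = 0.4     # assumed share of income returning as tax
lost_revenue = cut * multiplier * tax_share
net_saving = cut - lost_revenue
print(net_saving)   # 6.0 - only 60% of the headline cut is actually saved
```

With those assumptions a £10bn cut saves only £6bn; the more plausible the multiplier, the worse the arithmetic gets.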

Last time I wrote about this I guessed that the markets guessed that she was going quietly to allow the ECB to undertake a massive programme of Quantitative Easing, using the money to buy PIGS bonds.  There's still no sign of that.  The plan as it stands seems to be to calm down the bond markets with more or less believable promises of austerity, with the ESM - a slightly souped up EFSF - to contain any local difficulties: it's hard to see that's going to be sufficient to let Italy refinance its debt at affordable interest rates.

What might work in the medium term would be for Germany to spend (you might prefer to say 'invest') its trade surplus in the PIGS.  For example, it likes solar power: how about building solar cell factories in those countries, buying up land there where the sun shines a lot (this is not the hardest part) and covering it in solar power stations?  That would help meet any undertakings the EU may make in Durban, and I think it would be a lot easier to persuade German voters to spend money on saving the planet than on supporting pensioners in Greece.

- Whose hand did Nicolas Sarkozy want to shake in Le Snub?
Sarkozy air kisses the hand of Dalia Grybauskaitė, president of Lithuania, then seems to swerve a handshake with Cameron, in favour of Dimitris Christofias, president of Cyprus.  According to the Telegraph it was just a trick of the camera angle - "Mr Sarkozy was making eye contact with a man beyond Mr Cameron". According to me, Sarkozy was making a beeline for the one man in the room whose hand he could shake without having to look up.  (It's interesting that none of the newspapers' European correspondents is able to identify minor European presidents.)

- Has Ed Miliband got a plan?
According to Miliband, Cameron "mishandled these negotiations spectacularly".  But how would he know?  He wasn't there.  Miliband has got a real chance of becoming Prime Minister in three years or so: he needs to start behaving like someone who can be taken seriously in that role.  When commenting on international affairs, he should be saying something statesmanlike, along the lines of "it's unfortunate for Great Britain that the Prime Minister was unable to reach an agreement in the best interests of the country.  I will be meeting with Mr Cameron to find out why he was unable to do so."

Friday 9 December 2011

More on marginal tax rates

In my post on optimum tax rates I mentioned as an afterthought that the 52% marginal direct tax rate in the UK goes through what seems to me to be a psychologically important level of 50%.  Comments on a blog that's easier to read than this one have led me to expand on the point.

The analysis of optimum marginal tax rates depends on how much taxable income changes when the tax rate changes.  Changes in taxable income can result from two causes: reduction of broad income and tax avoidance which reduces taxable income without the taxpayer actually earning less money.

Tax avoidance, through income timing or taking income in a different way, will involve careful planning, so all avoidable taxes should be considered.  But reduction of broad income by trying less hard to earn money, or by moving overseas to a friendly tax regime, will be caused not so much by considered analysis of what's worth it for the money as by one's gut reaction to the marginal tax rate - "Do I really want to do this piece of work just so that Osborne can get 52% of the reward?"

In that context it seems to me that 50% is an important level to breach.  It may be that the curve relating taxable income to marginal tax rate has a kink in it at about that level, so that either 42% or 62% might raise more revenue than 52%.

This is just speculation; empirical evidence would be hard to come by.  One can't simply experiment with tax rates from year to year: a temporary change will see more elasticity than a permanent change, because some top-rate taxpayers are able to advance or defer their income.

Optimum tax rates

Peter Diamond and Emmanuel Saez have published a paper which includes a calculation of the "optimal top marginal tax rate" in the USA, on the assumption that the only criterion is to maximize revenue - there is negligible social utility in letting rich people keep their income.  The calculation has attracted considerable interest from on-line commentators.  Nobel laureate Paul Krugman writes in the New York Times in defence of the criterion.  Brad DeLong observes that Adam Smith disagrees, partly because the rest of us take vicarious pleasure in the rich enjoying their wealth.  Richard Green fears that higher taxes on high earners might cause them to pay their servants less.  Kevin Drum reports with evident satisfaction that according to one number in the paper the peak of the Laffer Curve is at a (US) Federal income tax of 76%, far above the current top rate of 35%.

In the UK, the "#1 economics blogger" Richard Murphy quotes Drum at length, and concludes that we are comfortably below the peak of the Laffer Curve.  Murphy is not one to concern himself with details, but he seems simply to be noting that the top UK tax rate of 50% is a lot less than the 76% he's quoted.

Meanwhile, the UK's leading libertarian scandium oligopolist, Tim Worstall, asked his readers to calculate what tax rate in the UK, including employers' and employees' National Insurance and VAT, would be directly comparable with the tax numbers used by the paper for the USA, which includes the Medicare tax and state income and sales taxes.  He used the analyses they (I might say 'we') submitted to declare that Murphy is wrong (that's always Worstall's conclusion) and that the true optimum top UK income tax rate is 40%.

As you might expect, I am going to adjudicate.  First, an outline of what Diamond and Saez actually did:  They assume, in line with extensive empirical research, that income at the top end follows a Pareto distribution, that is, it has a probability density function falling off according to a power law, p(z) = C/z^(1+a).  They find empirically that the parameter 'a' in the USA is 1.5.  Then they assume, following various other authors, that taxable income is an elastic function of retained earnings (in the economic sense of 'elastic', i.e. a given change in the logarithm of the fraction of marginal income not taken in tax results in a proportional change in the logarithm of taxable income reported).  This is a convenient assumption, in that it means that a change in marginal tax rate leaves the power law shape unchanged, affecting only the value of 'C'.

As the tax rate increases the fraction of income retained falls, so that a given change in tax rate has a larger proportional effect on the retained fraction, and hence a larger effect on taxable income reported (e.g. a tax rate change from 0% to 1% takes away one hundredth of your net income, but a change from 90% to 91% takes away a tenth).  So with the elasticity assumption there is a critical tax rate at which the reduction in taxable income when the tax rate is increased balances out the extra tax raised by the higher rate - this is the optimum rate, which turns out to be 1/(1 + elasticity × a), as the mathematically inclined reader may care to prove.

The difficulty now is to determine the elasticity parameter.  They report a mid-range estimate from the empirical literature of 0.25, but go on to use figures from another paper, also co-authored by Saez, which finds an elasticity of taxable income for top earners of 0.57, but only 0.17 for 'broad income', which they define as "Total Income less Capital Gains [and] Social Security Benefits".  The implication is that most of the elasticity is due to tax avoidance rather than reduced income.
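To make the shape of the calculation concrete, here's a short numerical sketch in Python.  This is my own reconstruction of the mechanism, not code from the paper: if the whole Pareto tail (parameter a) scales as (1 - t)^e while the bracket threshold stays fixed, revenue from the top bracket comes out proportional to t(1 - t)^(e·a), and the peak of that curve is the formula quoted above.

```python
# Numerical check of the optimum-rate formula (my sketch, not the
# paper's code).  Under the Pareto and elasticity assumptions, top
# bracket revenue is proportional to t * (1 - t)**(e * a), so the
# revenue-maximizing rate should be 1/(1 + e*a).

def optimal_top_rate(a, e):
    return 1.0 / (1.0 + e * a)

def revenue(t, a, e):
    return t * (1.0 - t) ** (e * a)   # up to a constant factor

def argmax_revenue(a, e, steps=10000):
    # brute-force search over a grid of tax rates
    rates = [i / steps for i in range(1, steps)]
    return max(rates, key=lambda t: revenue(t, a, e))

a, e = 1.5, 0.25   # the paper's US Pareto parameter and mid-range elasticity
print(optimal_top_rate(a, e))   # 0.7272727272727273
print(argmax_revenue(a, e))     # numerical peak, agreeing to grid precision
```

With the mid-range elasticity of 0.25 the formula lands at about 73%; with their taxable-income elasticity of 0.57 it would be far lower, which is why the choice of elasticity is where all the argument is.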

How applicable is this to the UK?  The Pareto distribution seems to hold quite generally, but the power law may be different: this paper reports a=1.37 in the USA and a=1.06 in the UK.  I would expect the 'broad income' elasticity to be somewhat higher in the UK, because high-earning Britons are more likely than Americans to move overseas, if only because Americans are discouraged by the extraordinary geographical range of American tax laws.  But I would expect the taxable income elasticity to be smaller in the UK, because there are fewer deductions available.

What taxes is it appropriate to include in a UK calculation?  Income tax obviously, and the 2% top rate employees' national insurance contribution (it's the same for the self-employed).  This corresponds to the 1.45% employees' Medicare tax included in the US calculation.  Also included in the US calculation are the 1.45% employers' Medicare tax and 40% of average (state) sales taxes, which is 2.32%.  Analogously, we should include employers' NI of 13.8% and some fraction of the 20% VAT rate.  But I'm unconvinced that this is right.  The relevant taxes in the USA are quite small, so Diamond and Saez may have included them just to avoid argument.  In the UK the issue is more important, and deserves some consideration.  It seems to me that tax avoidance schemes are chosen by careful calculation of their benefits, but scaling of effort in response to tax changes is more emotional: I doubt that many people would think of employers' NI as a consideration there.  However, the incidence of employers' NI is considered to fall largely on the employee, so it may make working abroad relatively attractive financially.  Regarding VAT, I doubt that much of the marginal income of high earners goes on goods subject to VAT.  For the most part, a person earning well into six figures buys whatever retail goods they feel like already.  And psychologically, paying tax when you buy stuff does not affect your attitude to earning money in the same way as having to hand more than half of it over to the government as you get it.

My rough numbers: a = 1.25, broad income elasticity = 0.27, taxable income elasticity = 0.4, optimum combined marginal tax rate = 67%.  Employer's NI contributing to elasticity effect = 2%, VAT contributing to elasticity effect = 5%, marginal income tax rate net of 2% employees' NI to give 67% combined rate = 62%.
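As a quick check, the Pareto parameter and taxable income elasticity above plug into the 1/(1 + elasticity × a) formula from the Diamond and Saez outline like this (a sketch using my guessed numbers, nothing more):

```python
# Plugging my rough numbers into the optimum-rate formula
# 1/(1 + e*a): Pareto parameter 1.25, taxable income elasticity 0.4.
a, e = 1.25, 0.4
combined = 1.0 / (1.0 + e * a)
print(round(100 * combined))  # 67 - the combined marginal rate above
```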

There's a good bit of guesswork in the parameters I've used, so there's no reason why anyone else should get the same answer.  But I think it's pretty hard to defend the choice of elasticities of either 0.57 in the UK (Worstall) or 0.17 in the USA (Drum).

In the interests of full disclosure I should say that I've paid tax at the 50% rate ever since it was introduced.  I may not do so in the 2012-13 tax year.  I can tell you that there's a psychological impact from direct taxes exceeding half one's marginal earnings: it's OK for you not to care.

Tuesday 6 December 2011

And then there were none

There used to be one tolerably sane candidate seeking the Republican nomination for the US presidency.  Not any more.  Here's Jon Huntsman revising his position on climate change:
there are questions about the validity of the science — evidence by one university over in Scotland recently
I think he means the University of East Anglia. It's reassuring to note that he's wrong about the geography as well.

Saturday 3 December 2011

FTT and stock market crashes

Could a Financial Transactions Tax in Europe avert major falls in equity markets?  I'm going to consider this in the light of major equity index falls working backwards from now.



1) The Eurozone debt crisis sell-off starting in July 2011.
Between 7th July and 24th November this year the FTSE fell by 14.6% in reaction to the Eurozone debt crisis.  It's hard to see how an FTT on shares could have had much of an effect on that.  It's possible that an FTT on bonds (which is also part of the proposal) could have slightly reduced the falls in sovereign bond prices, but the underlying problem is the massive deficit and debt problems of several Euro countries.  (I've put an end date to the sell-off at the recent market low, but I'm not promising the decline in equity prices is over.)

2) The Flash Crash of 6th May 2010.
Starting at about 2:40pm on the east coast, the S&P and other major indexes fell about 5% over 5 minutes, then recovered their losses over the next 10 minutes.  The exact causes are not definitely known, but it's certain that high-frequency trading played an important part.  It's probable that the crash wouldn't have happened had there been an FTT in the US.

However, the temporary crash had no effect on European markets, which were closed.  Had the crash occurred earlier in the day, there would have been some reaction in Europe, which would have been smaller with an FTT than without.  There's no way to quantify this.

3) The Financial Crisis sell-off between October 2007 and March 2009.
Between the end of October 2007 and 3rd March 2009 the FTSE lost 47.7% of its peak value.  This was one effect of a global financial crisis caused by the collapse of a credit boom built on the back of rising US house prices.  Banks had built up extraordinary levels of exposure to mortgage-backed securities, but an FTT would have affected this not at all.

4) 9/11
The FTSE fell 5.72% on 11th September 2001, in reaction to terrorist attacks in New York.  This was the only one of the ten biggest one-day percentage falls not to have happened in either October 1987 or during 2008.  The fall was a rational response to the information then available about the attacks (which occurred during the European afternoon).  An FTT would have been irrelevant.

5) The collapse of the tech bubble at the beginning of the third millennium
After reaching a new high on the last trading day of the century, the FTSE fell progressively, reaching a low on 12th March 2003, by which time it had lost 52.6% of its peak value.  A lot happens in three years, but the simple explanation is that there was a gradual re-evaluation of the true value of the internet market.  An FTT would have no bearing on this.

6) The Russian Financial crisis and the collapse of LTCM, July-October 1998
Between 20th July and 8th October 1998 the FTSE fell by 24% as a result of a financial crisis in Russia and in a reaction to the (not unrelated) failure of the hedge fund Long-Term Capital Management on 23rd September.  The market recovered the LTCM part of its losses over the following eight days as it became apparent that the damage had been contained.

It would probably have been impossible for LTCM to have executed its strategies in the presence of an FTT in the USA, so its boom and bust never would have happened.  An FTT in Europe would have made little difference to it.  So an FTT in the USA could have averted the last 6.6% of the fall.

7) Black Monday, October 1987
The FTSE fell 5.4% on Friday 16th October, and 5.7% on Monday 19th.  The major action happened in the US market, which fell precipitously after the FTSE had closed: the S&P lost 20.4% on the day.  As a result, the FTSE opened on the 20th down another 18.1%.

The causes of this one were complex.  The losses on the 19th seem to have been accelerated by program trading and portfolio insurance strategies in the US.  It's possible that an FTT would have discouraged the development of these strategies.

***

Of the seven market falls I've looked at, one, which had no effect in Europe, would probably have been prevented by an FTT in the USA, one would probably have been reduced by it by about a quarter, and one might have been reduced by an unknown amount.  None would have been significantly affected by an FTT in Europe.

It's not surprising that an FTT would have more effect in the USA.  For regulatory reasons, most share trading in the USA is done on (electronic) exchanges, which makes automated trading much more profitable.  In Europe, similar exchanges exist but most large share trades are OTC (over-the-counter).


***

While writing this, I came across this BBC analysis which covers many of the same events (without reference to a Financial Transactions Tax).

Thursday 1 December 2011

Hypersensitivity

The BBC, in an article about AIDS funding, lists the ten leading causes of death in high, middle, and low-income countries.  It seems that about 2% of global mortality is due to "hypersensitive heart disease".  I think it's rather tactless of the BBC to say so - mightn't it hurt the poor darling's feelings?

This isn't spelling correction software doing its worst - "turberculosis" is also on the list.

But this is a serious subject, so on a more serious note: why won't the BBC link to its sources?  The data come from this WHO report (the WHO gets the names of the diseases right).  There's a list of countries by income group on page 170 here: the list is derived from a World Bank spreadsheet.

Update: The BBC has deleted the list.  Here's a screenshot:

The FTT, noise trading, and volatility

I summarized quite a lot of research in my post about the FTT and volatility with an airy "There are other high-frequency noise effects which have been theoretically analysed".  I'll say a bit more.

First, except in the special circumstances I discussed previously, speculative trading can increase volatility only if it loses money.  As Milton Friedman noted in his seminal 1953 paper The Case For Flexible Exchange Rates:
People who argue that speculation is generally destabilizing seldom realize that this is largely equivalent to saying that speculators lose money, since speculation can be destabilizing in general only if speculators on the average sell when the currency is low in price and buy when it is high.
That does not mean that individual speculators cannot profit from activities that increase volatility, but they can do so only at the expense of other speculators.  Consider a market in which trend-following speculators are active.  An ingenious speculator might create an artificial trend by buying a stock in sufficient volume, causing the trend-followers to start buying into the trend, driving the stock higher.  When the clever guy judges the trend-followers have filled their boots, he'll dump the stock at the higher price, locking in a profit.  The stock will thereafter gradually revert to whatever it's really worth, and at some point the trend-followers will sell out, realising their losses.

But there is only so much money that unsuccessful speculators are willing to lose, so the capacity for speculators to increase volatility is limited.  (Bankers have lost apocalyptic amounts of money in the last few years, but not on speculative trading of the sort that might be discouraged by an FTT.)

Nevertheless, the review paper I discussed previously cited no fewer than twelve papers attempting to predict the effect on volatility of a transaction tax.  (The ante-penultimate link doesn't now work, and the paper, which is good, doesn't discuss volatility directly.)  Each of them sets up a model of a securities market, and predicts how it will operate with and without a transaction tax by means of theoretical analysis, or by computer simulation of trading strategies, or by having humans play a trading game.

An essential component for a transaction tax to be able to reduce volatility in these models is the existence of what Fischer Black (of Black-Scholes) called "noise traders".  These are traders who speculate in the market without having any information not already priced in, as distinct from "information traders".  In Black's conception, traders often do not know which sort of trader they are, which creates uncertainty essential for the operation of a liquid market.  The papers I list do not all follow the definition exactly, but they incorporate the concept in some form.  Their results seem to depend on the extent to which the way the market is modelled causes the transactions tax to discourage noise traders more than others.

None of the papers has a set-up which is much like actual equity markets: this is partly because they are analysing something like the Tobin tax proposed for FX markets and partly because it's often easier to analyse something other than reality.  But it's the equity market that the proposed European FTT is mainly concerned with (the FX market is excluded).  And none of the papers includes the full range of trading strategies that operate in actual markets.

A noise trader whose decisions are indistinguishable from random will add a small amount of volatility, and tend gradually to lose money.  I am sceptical that there are many traders of this sort.  Most speculative traders follow some sort of strategy; broadly, the strategies are either trend-following or contrarian.  (One of the papers listed explicitly includes both these strategies; others may do so implicitly by using human traders in their simulations.)  It's important to include the trend-followers to give a transactions tax a fair chance to reduce volatility significantly.

If I were to attempt something like this I would want to include at least the following:
- large trades being executed gradually.  This would feed a series of trades in the same direction, with the broker varying the size and timing in an attempt to disguise what he's doing.  These trades are profitable for trend-followers.
- trades being done for exogenous reasons.  These are not strictly noise trades, and will not be deterred by a small transactions tax, but their size and direction look random.
- information trades (some authors call them fundamental trades).  These are done by traders who have used private aptitude to deduce fundamental valuations from public information.
- hedge trades.  These are trades done by option traders who are in aggregate either long or short gamma.  If option traders are short gamma their hedging tends to increase volatility, and vice versa.
- insider trades.  These are trades done using information that is not yet public, but is made public after some time delay.  These trades are profitable for trend-followers.
- trend-following speculative trades.
- contrarian speculative trades.
- speculative trades attempting to profit at the expense of other speculative strategies.
- a stochastic process for the fundamental value.  All traders will be aware of the direction of large changes in fundamental value (corresponding to obviously important news).
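For what it's worth, here's a toy sketch of the smallest version of such a set-up.  It is entirely my own construction, not taken from any of the papers cited, and every parameter in it is invented for illustration: a random-walk fundamental, fundamental traders pulling the price towards value, and trend-followers who trade the last price change unless a per-trade tax exceeds their assumed edge.

```python
import random

# Toy market simulation (my construction, all parameters invented):
# the tax switches off trend-followers whose assumed expected profit
# per trade is smaller than the tax, damping the positive feedback
# their trading adds to price moves.

def simulate(tax=0.0, steps=5000, seed=1):
    rng = random.Random(seed)
    value, price, prev_price = 100.0, 100.0, 100.0
    returns = []
    for _ in range(steps):
        value += rng.gauss(0, 0.1)            # fundamental news
        trend = price - prev_price            # trend-followers' signal
        # trend-followers stay out when the tax eats their assumed edge
        trend_demand = trend if abs(trend) * 0.5 > tax else 0.0
        fundamental_demand = 0.1 * (value - price)
        prev_price = price
        price += fundamental_demand + 0.4 * trend_demand + rng.gauss(0, 0.05)
        returns.append(price - prev_price)
    mean = sum(returns) / len(returns)
    # standard deviation of one-step returns, as a volatility measure
    return (sum((r - mean) ** 2 for r in returns) / len(returns)) ** 0.5

print(simulate(tax=0.0))    # return volatility with no tax
print(simulate(tax=0.05))   # with a small per-trade tax
```

On this set-up the tax damps the feedback from trend-following and so lowers measured volatility; whether anything similar survives in a model with the full list of ingredients above is exactly the open question.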

That's a lot of things to put in, and a lot of parameters and relative weightings to vary.  However, there are only three sorts of trades which tend to increase volatility - short gamma hedging, unsuccessful trend trades (successful trend trades don't increase volatility, they just bring the price change forward), and trades parasitic on trend trades.  If things are set up so that trend trading is profitable despite being unsuccessful quite often then a transactions tax sufficient to make it unprofitable can decrease volatility significantly.  Note that trend traders do need to trade quite often, because they need to unwind their position quickly if a trend they've traded on fails to continue.

My guess is that with some care it would be possible to create a set-up where this happens.  More tentatively, I guess that the actual market doesn't match it.