Tuesday, 27 December 2011

SOPA

The US Constitution declares that "The Congress shall have Power ... To promote the Progress of Science and useful Arts, by securing for limited Times to Authors and Inventors the exclusive Right to their respective Writings and Discoveries."  The House of Representatives is considering exercising this power by passing the so-called "Stop Online Piracy Act" - SOPA for short.  There is strong and well-reasoned concern about the technical requirements of the proposed Act, outlined here, and internet companies are near-unanimous in their opposition.

There's no doubt that SOPA would damage the internet; the question is what would be gained in return.  The media conglomerate Viacom has produced an online video putting its case.  They seem to be trying to persuade us that there will be no more SpongeBob Squarepants without new copyright protections online.  I don't believe them.  Successful films, television programmes, and books are more profitable than ever before.  J.K. Rowling is past half-way to being a sterling billionaire.  The truth is that the content that's attractive to 'pirates' is the content that's already enormously profitable.  This Act is aimed at making very rich people even richer.  The promotion of children's television programming does not require that there be no unauthorized Rugrats toys, nor does songwriting depend on the ability of Warner Brothers to collect royalties for online performances of Happy Birthday.

Intriguingly, SOPA has the support of the AFL-CIO, the American Federation of Labor and Congress of Industrial Organizations, which says it believes that SOPA would increase employment.  In many circumstances trades unions add balance to the unequal relationship between employers and employees, but they are essentially economic actors, interested not in employment in general but in the employment of their own actual and potential members.  It would be possible for a government to raise funds by auctioning the right to collect tolls on roads, and the Amalgamated Union of Tollbooth Operators would be pleased to support it.  That doesn't make it a good idea.


It's possible that if Warner Brothers made even more money out of Harry Potter and Happy Birthday it would spend more supporting potential new hits from otherwise struggling artists.  I'd be interested to hear from any such artists who believe this.  Pending that, let's not break the internet: in exchange, I'm willing to let media multi-millionaires struggle on with what they've already got.

Saturday, 24 December 2011

Slow roasting

Chris Dillow seasonally enjoins us to 'stick it to The Man' by cooking stuff. (I suppose it's implicit that The Man should not be allowed to eat what we cook.)  I claim no great proficiency in the kitchen, but what's on my mind today is the right way to cook turkey.  There is a way that gives much better results than the usual method: roast it slowly overnight.

Before I try to persuade you of this, a word of warning: this method is widely advised against.  However you cook the bird, it's important that every part of it get hot enough: the USDA recommends 165°F/74°C.  This is comfortably above the temperature at which vegetative bacteria such as salmonella and staphylococci will die.  This paper takes a look at the bacteriology.  The concern with slow roasting is, or at least should be, that the meat will spend longer at intermediate temperatures at which bacteria multiply rapidly and may secrete toxins.  This creates "a small reason to set a minimum time for raw food cook come-up".

You should choose a bird that's been reared non-intensively.  I like to think that such birds are less likely to be contaminated with harmful bacteria, but in any case you owe it to a creature you're going to eat that it should have enjoyed a life worth living.  And it will taste better.  It will cost more too: most families in the UK can afford it; if you can't, you'll have better things to worry about than my culinary advice.

So to the cooking.  The problem with roasting large pieces of meat at 160°C or higher is that the outside will have spent a long time at a high temperature before the inside gets hot enough to be safe.  That's OK with meats like beef, where it creates an interesting variation in texture and flavour, especially if there's a good covering of fat to keep the outside moist.  But turkey just dries out.  You can avoid this by cooking at much gentler temperatures.  And it makes the whole process of cooking a meal much easier, because there's no need to be exact with the timing provided you give it long enough.  In one way, this method is safer too: you won't be tempted to take the turkey out too soon because the outside is getting overcooked or because the rest of the meal is ready.

I hesitate to point you to any specific procedure: I mix and match from various sources.  But the key points are:

- Cook stuffing separately.  This leaves the cavity empty, so the turkey cooks from the inside too.
- Start the turkey at a high temperature to kill surface bacteria.  (Or boil it for a few minutes instead, if you're equipped to do so.)
- Cook overnight on a rack at a temperature just below boiling point.  (Cooler, and bacterial toxins are more of a concern; hotter, and it's harder to keep the meat moist, though a foil tent completely covering the roasting pan may do it.)
- Use a meat thermometer to make sure the turkey is hot enough all the way through.  If you do this with time to spare, you can always turn the heat up at the end to speed things up if necessary.

Best get started in the next hour or so...

Thursday, 22 December 2011

Racial abuse update

There were developments yesterday in two of the cases I discussed six weeks ago.

First, John Terry is to face criminal charges detailed here:
On 23 October 2011 at Loftus Road Stadium, London W12, you used threatening, abusive or insulting words or behaviour, or disorderly behaviour within the hearing or sight of a person likely to be caused harassment, alarm or distress which was racially aggravated in accordance with section 28 of the Crime and Disorder Act 1998.
Contrary to section 31 (1) (c) of the Crime and Disorder Act 1998
Section 31(1)(c) says "a person is guilty of an offence under this section if he commits an offence under section 5 of the Public Order Act 1986 (harassment, alarm or distress) which is racially or religiously aggravated for the purposes of this section".  Section 28 defines "racially or religiously aggravated": I think it safe to assume that any case against Terry will have no problem satisfying that definition.  Section 5 of the POA says "A person is guilty of an offence if he...uses threatening, abusive or insulting words...within the hearing or sight of a person likely to be caused harassment, alarm or distress thereby...It is a defence for the accused to prove...that he had no reason to believe that there was any person within hearing or sight who was likely to be caused harassment, alarm or distress, or...that his conduct was reasonable".

I confess that I overlooked in my earlier comment that section 5, unlike section 4A, does not require anyone actually to have been distressed.  It would be improper for me to speculate at this stage as to the outcome of the case, but I note that if convicted Terry faces a fine he would find trivial (a maximum of £2,500 if I read the scale correctly).  I cannot see how the public interest has been served by the police and CPS pursuing the case rather than allowing the FA to get on with its proceedings.

Meanwhile, the FA has shown how seriously it intends to take this sort of thing by banning Luis Suárez for eight matches and fining him £40,000.   The financial penalty imposed on Suárez, in whose case the police have shown no interest, is much heavier than the maximum fine faced by Terry, but even so it's the ban that will really hurt.  Suárez is paid about £4m a year by Liverpool, and they paid Ajax about the same per year again for his contract, so the value to the club of his services is of the order of £200,000 a game.  It's not surprising that the club is very disappointed by the penalty.

Reportedly Suárez admits to calling Patrice Evra either "negro" or "negrito", speaking in Spanish, where the words do not carry all the same overtones.  And there's an unclear allegation that Evra started it by referring in some way to Suárez's origins in South America.  I suspect that neither player could hold butter unmelted in his mouth for very long, and that this case is on the borderline between racial abuse that ought to be stamped out and playground tit-for-tat that ought to be left on the field.  It's possible that the FA has decided that a salutary ban reduced on appeal to a slap on the wrist is the way to send the required message - the FA statement emphasized that Suárez "has the right to appeal" and suspended the ban to give him the chance to do so: good luck to them sorting this out.

Tuesday, 20 December 2011

Intellectual-property rights: academic papers

I wrote earlier that I'm opposed to intellectual-property rights wherever plausible alternatives exist.  I'll start my review of the alternatives with perhaps the easiest case: academic papers.

My relevant experience is all in science and finance: conceivably things work differently in the humanities.  In the fields I know about, which are overwhelmingly the ones that matter to the vast majority of the population, journals work like this: authors submit papers for publication; the editor asks experts in the field to review the paper; the reviewers recommend changes, and for or against publication; the authors are invited to make any changes the editor thinks advisable; and the editor eventually publishes the paper or rejects it.  The journal, which has contributed the least to this process, ends up with the copyright to the paper: the author and reviewers work for notional fees or none.

Copyright is therefore no incentive for the production of academic papers.  Its only function is to provide incentives for journals to carry out a filtering process by which readers get an indication of which papers are worth reading, and funding bodies get an indication of whose research is worth funding.  Since there are too many papers to read, and too many researchers to fund, both these filters are valuable.

In practice, authors often circulate their papers to peers before submitting them for publication, both as a courtesy to anyone whose work they cite, and in the hope of getting helpful feedback.  Also, they often make a version of the paper freely available on their personal websites - this is worth knowing if a paper you want to consult is hidden behind an expensive paywall.  I suppose that journals disapprove of this practice, but think it prudent not to draw attention to it by objecting publicly.

The alternative is simple: authors, as they do now, should consult whomsoever they wish until they think their paper ready for general release.  Then they should publish it on a website dedicated to the purpose.  arXiv does this already for some of the geekier fields.  Here's an outline of how it works, and here are some comments by its founder on its implications for academic publishing.  Here are some thoughts on its disadvantages: none of them seems to me fundamental to the question of copyright.  Interestingly, there is no suggestion that prestigious journals in Physics have been unable to operate without exclusive publication rights.

It may be that copyright restrictions are necessary in most cases to make it worthwhile to operate pre-publication peer review: here are some comments by Richard Smith, former editor of the BMJ, on how small a loss it would be to do without.

I submit that medical research in particular would benefit from free publication along the lines of arXiv.  That would get results out quicker, make them easier to consult online, and encourage publication of negative trial results.

If filtering mechanisms are required, something along the lines of Amazon's book-review system would be possible.  The user should have the option to apply weightings to the reviewers, favouring for example ones with high academic titles, or ones whose views, positive and negative, he shares regarding other specified papers.
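Here's a minimal sketch of the kind of weighted scoring I have in mind (Python; the reviewer attributes, weights, and numbers are all hypothetical - my illustration, not Amazon's system):

```python
from dataclasses import dataclass

@dataclass
class Review:
    score: float        # e.g. 1-5 stars
    reviewer_rank: int  # 0 = anonymous ... 3 = professor, say
    agreement: float    # -1..1: how far this reviewer's past verdicts
                        # matched the user's own views on other papers

def weighted_score(reviews, rank_weight=0.5, agreement_weight=1.0):
    # Each user chooses their own weights; the paper's displayed score
    # is a weighted mean of its reviews.
    weights = [1 + rank_weight * r.reviewer_rank
                 + agreement_weight * r.agreement for r in reviews]
    return sum(w * r.score for w, r in zip(weights, reviews)) / sum(weights)

reviews = [Review(5, 3, 0.8), Review(2, 0, -0.5), Review(4, 1, 0.2)]
print(f"{weighted_score(reviews):.2f}")  # 4.42 with these made-up inputs
```

The point of user-chosen weights is that no central authority decides whose opinion counts: each reader builds their own filter.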

Let's abolish copyright on academic papers now.  I predict that a few prestigious journals will survive, and the rest will be more than adequately replaced by free on-line publishing.

Wednesday, 14 December 2011

Assholocracy

This blog is usually restrained in its use of language, fondly imagining itself to be read by relatives as well as its sometime trading-floor colleagues.  However, it is resolved to express itself with vigour when the occasion demands.  And so it now does: Geoffrey Pullum, whose book The Great Eskimo Vocabulary Hoax made me briefly regret never previously having perceived the attraction of a career in linguistics, thinks it important that the title of this post should get more google hits.  I am delighted to be able to render him this trifling service.

Intellectual-property rights

Tangible-property rights are a good idea.  They both encourage the supply of additional stuff and provide a mechanism for apportioning finite supply to where it's most wanted.  (This would work better if wealth were shared more equally.)

Intellectual-property rights are a bad idea.  They encourage the creation of new intellectual property, but they impoverish humanity by restricting the use of non-rivalrous goods.

Starting from scratch, would anyone really want the system we have now?  I think one would explore every other idea for rewarding innovation and creativity before being willing to settle on what we have as the least bad option.  So I'm going to explore other ideas in future posts...

Tuesday, 13 December 2011

Alcohol-Attributable Fractions

This press release on Saturday reported that there were 1,173,386 hospital admissions related to alcohol in 2010-11, an increase of 11% from the previous year and more than double the number in 2002-3  (Hospital Episode Statistics are calculated from 1st April to 31st March).  Andrew Lansley for the government said:
These figures are disturbing evidence that, despite total consumption of alcohol not increasing recently, we have serious problems with both binge-drinking and long-term excessive alcohol abuse in a minority of people.
These consistent rises show that Labour took their eye off the ball on tackling alcohol abuse during their 13 years in power. Their reckless policies, such as the decision to unleash a 24-hour drinking culture in our country, only made matters worse.
Whereas Diane Abbott for the opposition was of the view that
The alarm bells should be ringing with the publication of these figures. It is clear that this Government is rapidly pushing us towards a binge-drinking crisis.
It is clear that for Andrew Lansley, the be-all and end-all is whether his friends in big business are happy, and, unfortunately, it is costing our NHS and British families an absolute fortune. A recent report predicted that binge-drinking will cost the NHS £3.8 billion by 2015, with 1.5 million A&E admissions a year.
So there you have it.  For the Tories, the problem is one of both binge drinking and long-term alcohol abuse, and it's Labour's fault.  For Labour it's just binge drinking encouraged by big business that's the problem, and the Tories are to blame.  (Ms Abbott was talking about a different report: I haven't traced it, but it's mentioned here.)

The press release was reproduced, quotes and all, in most of the papers.  The Guardian fleshed it out a bit.  The Daily Mail took Ms Abbott's word for it that all the admissions were to A&E.  The Times [paywall] got a quote from Mark Bellis, director of the North West Public Health Observatory, which compiled the figures: "These things are working their way through the system from a massive increase in alcohol consumption over the past two or three decades.  We've probably got more of this to come...Particularly at this time of year, we've got to address our relationship with drunkenness."

There's one dissenting voice, which calls the story a lie and links to this description of how the figures are calculated (the analysis dates from when the 2009-10 figures were published in May this year - the calculation seems to be quicker now).  The statistics are compiled not, as you might think, by asking people admitted to hospital whether they've been drinking (in A&E) or how much they drink (on the heart, liver, and cancer wards), but by applying an "alcohol-attributable fraction" to each admission according to diagnosis, age, and sex.  This methodology is confirmed in a comment by Andy Sutherland of the NHS Information Centre (I'm fairly confident that it's really him, because the press release correction he promises did appear).

The calculation of AAFs specific to England is described in this pdf (the purpose of which is described here). The method for each diagnosis is to identify the best available research on the increased (or decreased) risk associated with various levels of alcohol consumption (by age and sex where the data were available), apply the levels of alcohol consumption determined by the 2005 General Household Survey (by age and sex, adjusted using new estimates of the alcoholic content of drinks), and hence calculate what proportion of hospital admissions in 2005 were related to alcohol consumption.  Ideally alcohol consumption figures integrated over time should be used for diseases which take many years to develop, but the method seems broadly reasonable to me.
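For concreteness, here's a minimal sketch of that calculation for a single diagnosis/age/sex cell (Python; the drinking bands and relative risks are illustrative numbers, not the ones used for England):

```python
def alcohol_attributable_fraction(prevalence, relative_risk):
    # Standard population-attributable-fraction formula:
    #   AAF = sum(p_i * (RR_i - 1)) / (1 + sum(p_i * (RR_i - 1)))
    # where p_i is the share of the population in drinking band i and
    # RR_i the relative risk of the diagnosis for that band.
    excess = sum(p * (rr - 1) for p, rr in zip(prevalence, relative_risk))
    return excess / (1 + excess)

# Illustrative numbers only: three drinking bands and their relative
# risks for some diagnosis, versus non-drinkers.
p = [0.20, 0.15, 0.05]   # share of population in each drinking band
rr = [1.1, 1.8, 3.0]     # relative risk of the diagnosis in each band

print(f"AAF = {alcohol_attributable_fraction(p, rr):.3f}")  # 0.194

# Attributed admissions = AAF x total admissions for the diagnosis.
# Holding the AAFs at their 2005 values therefore bakes 2005 drinking
# habits into every later year's figures.
```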

The data for all other years since 2002-3 have been calculated using the same AAFs.  Collated data can be downloaded in a spreadsheet here, showing a steady rise in alcohol-related hospital admissions.

Let me say that again: "calculated using the same alcohol-attributable fractions".  So what has happened is that researchers into each diagnosis have analysed data on alcohol consumption for people with and without the disease, and fitted those data to a model in which the diagnosis is due to two perfectly uncorrelated factors, one for alcohol consumption and one for everything else.  Applying this model to data on alcohol consumption in 2005, statisticians have deduced what fraction of 2005 hospital admissions for each diagnosis was due to the alcohol factor - the 2005 AAF.  And then these two perfectly uncorrelated factors have been assumed to be perfectly correlated in every other year, so that the AAF remains constant.  I am shocked that reputable statisticians have put their names to this method.  I can see no good reason not to repeat the 2005 analysis each year (except perhaps that it would take longer to get the numbers out).  Certainly that would give different results, since current alcohol consumption would actually be an input to the analysis.

What are the data on alcohol consumption?  The General Lifestyle Survey reports on weekly alcohol consumption above safe limits:
Following an increase between 1998 and 2000, there has been a decline since 2002 in the proportion of men drinking more than, on average, 21 units a week and in the proportion of women drinking more than 14 units...This trend seems to be continuing under the new methodology; between 2006 and 2009 the proportion of men drinking more than 21 units a week fell from 31 per cent to 26 per cent and the proportion of women drinking more than 14 units a week fell from 20 per cent to 18 per cent. These falls were driven by falls in the younger age groups... 
and on average weekly consumption:
The British Beer and Pub Association (BBPA) makes annual estimates of per capita alcohol consumption using data provided by HM Revenue and Customs. These show a steady increase in consumption from 1998 to 2004, followed by a decline of about 5 per cent to 2006, and then a further decline of about 7 per cent from 2006 to 2009. The decline measured by the GHS is much greater, at about 15 per cent between 2002 and 2006.
(There was a change of methodology in 2006 that makes it difficult to produce a single set of numbers.)

It would seem that any increase in hospital admissions must be due either to the after-effects of long-term abuse, which may have increased in the years up to 2002 or so before declining thereafter, or to occasional binge drinking not captured by weekly averages.  So I looked at data for three diagnoses: "toxic effect of alcohol", which should be an indication of binge drinking; "alcoholic liver disease", to capture the effects of long-term alcohol abuse; and "atrial fibrillation and flutter", to look at what's happening with a common diagnosis with a small but non-zero AAF.  These are available here, based on the same data as the alcohol-related admissions figures.

I've included series for total admissions and for the alcohol-related admissions data I'm writing about.  All the series are normalized to 2002-3, when the numbers were: total admissions 11,414,074; alcohol-related admissions 510,780; atrial fibrillation and flutter 68,731; alcoholic liver disease 11,582; toxic effect of alcohol 1407.

What's striking is that the alcohol-related admissions numbers have increased far faster than any of the other series (admissions for the toxic effect of alcohol have not increased at all).  How can this be explained?  I looked through the diagnoses to find any that had at least doubled since 2002-3 from a base of at least 10,000: there were 27.  But only one of them had a non-zero AAF: hypertensive renal disease.  I must be looking at the wrong data - AAFs for many diagnoses are higher for younger patients, so there must have been a big increase in these admissions among the relatively young, which doesn't appear in the totals I looked at.  (The data are there in the spreadsheets, but you get only so much for your money.)
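For anyone who wants to repeat the screen on the published spreadsheets, it amounts to a one-line filter (a sketch; the filename and column names are hypothetical stand-ins for the real layout):

```python
import pandas as pd

# Hypothetical layout: one row per diagnosis, annual admission counts
# in columns, plus that diagnosis's alcohol-attributable fraction.
df = pd.read_csv("admissions_by_diagnosis.csv")   # hypothetical filename

doubled = df[(df["2002-03"] >= 10_000) & (df["2010-11"] >= 2 * df["2002-03"])]
print(len(doubled))                  # 27 diagnoses in the data I looked at
print(doubled[doubled["AAF"] > 0])   # just one: hypertensive renal disease
```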

One thing did catch my eye however, which is the increase in admissions for "obesity" from 1,297 to 11,740. This may be associated with increasing availability of bariatric surgery, but it's no secret that there have been big increases in obesity.  Furthermore, obesity is linked to hypertension and diabetes, both of which will increase hospital admissions among the relatively young (not least for hypertensive renal disease).

This is speculative, but my guess is that the alleged rise in alcohol-related hospital admissions is in fact a rise in obesity-related hospital admissions, which are linked to some of the same diagnoses at similar ages.  Perhaps the statisticians behind this weekend's newspaper stories could find time to look into this hypothesis.

What the UK wanted

Frances Coppola links to this document, which the Telegraph tells us is the "UK protocol demand to the EU".  I reproduce here the demands, which in the document are interspersed with what look like briefing notes:
1) Unanimity on:-
  i) Transfer of powers from national level to EU agencies
  ii) Maximum harmonisation provisions that prevent member states imposing additional requirements
  iii) Fiscal interests of member states and imposition of taxes, levies etc.
  iv) The location of the European Supervisory Authorities
2) General provisions for:-
  i) Requirement for executive powers of ESAs to be clearly set out and not replace the exercise of discretion by member states' competent authorities
  ii) Ensuring that 3rd country financial institutions that operate only in one member state are authorised and supervised in that member state if they do not want a passport
  iii) No discrimination within the single market for financial services on the grounds of the member state within which an institution is established.
[The most relevant ESAs here are the European Banking Authority, based in London, and the European Securities and Markets Authority, based in Paris.]

The note on 1iii) explains that "...measures which entail very sizeable levies on the financial sector, such as the Deposit Guarantee Scheme Directive, are being pursued under QMV [Qualified Majority Voting] legal bases"

Coppola comments:
...they amount to imposing a UK veto in areas pertaining to financial markets and regulation. Existing EU practice allows decision-making in these areas to be done by Qualified Majority Voting, which would in effect mean that a tighter, more unified Eurozone could consistently out-vote the UK and therefore impose on the UK's financial sector regulation and taxation against the will of the UK government. It isn't correct to suggest, as some commentators have, that Cameron was trying to evade tighter regulation of the financial sector, or prevent imposition of a Financial Transactions Tax (FTT). In fact paragraph 2 of the proposed changes would allow the UK to impose higher capital requirements than the EU requires and unilaterally implement the ring-fence recommended by the Vickers committee. And the FTT is not mentioned in the proposals at all - and it would require all 27 nations to agree to it anyway. No, this was simply an attempt to preserve the UK's authority over its financial sector, which dominates its economy.
 I'm not entirely convinced by Coppola's argument about the FTT.  First, not every leaked note is a complete and accurate representation of what was actually discussed.  Second, Cameron may have been concerned that some of the powers he wished to limit could be used to twist the UK's arm - "if you can't agree to the FTT we'll have to introduce a swingeing new levy instead".

Be that as it may, what are the deal-breakers in the demand?  Sarkozy spoke of a "lack of regulation of financial services".  Cameron claimed to have "protected Britain's financial services...from the development of eurozone integration".  1i) would seem to be relevant: the notes tell us that "agreed...restrictions are being tested routinely in new legislation seeking to extend the supervisory powers of the ESA".  Still, it's hard to see why, if both parties wanted an agreement, they couldn't have reached a compromise on ESA powers.  I think we should take Henry Peterson's advice and follow the money, so my attention is more on 1iii) and 1ii): Merkel wants taxes and levies to be determined centrally, Cameron wants them in the hands of member states.

Incidentally, Paddy Ashdown in the Guardian asserts that a deal couldn't be reached because Sarkozy is utterly fed up with Cameron.  But I persist in believing that it's Sarkozy who talks, but Merkel who decides.  Because Merkel is the one with the money.

Update: Peter Mandelson thinks the EU will be able to force the FTT on us:
...EU financial regulation will be decided by majority vote and the majority will argue for strong regulation to curb the activities of the people who have done most to exacerbate, in their view, the eurozone crisis. The eurozone will introduce a financial transaction tax that will hurt the City and we will be powerless to halt it.

Monday, 12 December 2011

MBA

I read in the Guardian that Osita Mba, the HMRC whistleblower, has a master's degree from Oxford.  In fact he is a Bachelor of Civil Law - at Oxford this is a postgraduate course but not a master's degree.

Which is disappointing, in that I was hoping to find that he had a qualification as reported, in business administration.

Saturday, 10 December 2011

The capitalist yolk

The BBC has a story about the probability of getting six double-yolked eggs in a box of six.  It points out that young hens are more likely to lay double-yolked eggs, and those eggs are larger than normal, so if you buy a box of large eggs which happen to have been selected from eggs laid by young hens, the odds of their all being double-yolked are much shorter than the one in a million trillion they first thought of.

Well yes, but the odds must still be pretty long.  And there have been other similar stories, not least this one about a woman who opened 29 double-yolked eggs in a row from a box of 30.

When something wildly improbable seems to have happened, it's as well to consider alternative explanations.  Here's a clue from David Spiegelhalter, Professor of the Public Understanding of Risk, who was intrigued to find he could buy a box of double-yolked eggs from Waitrose: he mentions it in this article.

It turns out that double-yolked eggs can be identified easily enough by shining a light through them - "egg candling".  (It's safe to try this at home.)  Egg producers operate egg-grading machines: I suppose the machines include automated candlers as part of the grading process.  So it's not unlikely that some machines automatically separate double-yolked eggs.  Some of those eggs will be sold as such for a premium, if demand exists.  And if the demand is less than the number of double-yolked eggs produced, they'll just get put into regular boxes for the appropriate size of egg.  I reckon the probability of this happening sometimes is very close to one.
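To put numbers on it (a sketch assuming the commonly quoted rate of roughly one double-yolked egg per thousand):

```python
# Probability of a box of six all being double-yolked if eggs were
# independent, at an assumed rate of about 1 in 1,000 per egg.
p_box = (1 / 1000) ** 6
print(f"{p_box:.0e}")   # 1e-18: one in a million trillion

# But if a grading machine candles every egg and diverts double-yolkers
# into their own stream, a box packed from that stream is all
# double-yolked with probability 1.  The only question is how often
# surplus double-yolkers get packed into ordinary boxes.
```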

Questions from the EU summit

We are told that 26 out of 27 European leaders agreed to amend the Lisbon treaty, but David Cameron exercised Britain's veto.  So the 26 will do as they wanted, but it won't be an EU agreement.

The leaders' statements raise more questions than they answer.  Among them:

- Were they really hoping to revise the Lisbon treaty?
A revised treaty would have to be approved by all 27 parliaments, and the Irish would probably have to have a referendum (unless the Supreme Court decided the revision didn't substantially alter the character of the Union).  It took two years and two Irish referendums to get the original Lisbon treaty ratified.

Cynics, including me, will have at least a passing suspicion that none of the countries really wanted unanimous agreement.  It would suit everyone if the deal could be kept just this side of requiring a new referendum in Ireland.  The 26 countries want to be seen to be tough on banks, especially Germany, which has spent much more (Figure 1.6) than the UK on bail-outs.  Cameron wants to be seen to be tough on the EU.  This looks like a separation made in heaven.

- What exactly was it Merkel and Cameron couldn't agree on?
Sarkozy has made the clearest statement about this: he says the sticking point was Mr Cameron's insistence on a protocol allowing London to opt out of proposed changes on financial services.  "We were not able to accept because we consider quite the contrary - that a very large and substantial amount of the problems we are facing around the world are a result of lack of regulation of financial services and therefore can't have a waiver for the United Kingdom."  The word 'because' is being strained to breaking point there: there's nothing about the substance of the plan that requires new powers to regulate financial services.

This document gives an outline of the plan.  The first ten clauses are concerned with the enforcement of budgetary discipline in the member states.  The rest is about the new "European Stability Mechanism" (ESM).  Clause 15 provides for qualified majority voting when emergency assistance is needed.  There's nothing about financial regulation.

The major disagreement between the UK and the others has been on the proposed European Financial Transactions Tax: the rest of the EU wants a tax on business in London to be paid directly to Brussels and the UK doesn't.  It's plausible that (unpublished) details of the proposed agreement would remove the UK's veto on this, and Cameron wouldn't agree.

- Why wouldn't Cameron tell us?
He said "We have protected Britain's financial services, and manufacturing companies that need to be able to trade their businesses, their products, into Europe. We've protected all these industries from the development of eurozone integration spilling over and affecting the non-euro members of the European Union". Which seems to be a suggestion joining the other 26 would make it harder for the UK to sell them stuff. Colour me sceptical on that one.

If the real sticking point was that he wanted to retain a veto on a Financial Transactions Tax, why not say so?  OK, banks are not popular, but neither is giving money to the EU.  Couldn't he have said "they want to impose a tax on business in London that would be paid directly to Brussels.  I wouldn't agree to let them do that."?  The only explanation that makes sense to me is that the 27 agreed not to be specific about the problem, so that each could spin it in their own way.

- What is Merkel's plan to save the Euro?
No one seriously imagines that austerity alone is going to do it.  Cutting government spending never achieves the intended savings, because the government gets some proportion of its spending back in tax revenues.  The underlying problem is in the balance of trade: if each country had its own currency then FX rates would have adjusted to prevent the imbalances getting too large.  As it is, Italy, Spain, Portugal and Greece are all running large deficits.  It would in theory be possible for austerity to reduce imports enough (except perhaps in Greece which has a problem collecting taxes), but that's a theory that requires people not to mind having their living standards crushed.

Last time I wrote about this I guessed that the markets guessed that she was quietly going to allow the ECB to undertake a massive programme of Quantitative Easing, using the money to buy PIGS bonds.  There's still no sign of that.  The plan as it stands seems to be to calm down the bond markets with more or less believable promises of austerity, with the ESM - a slightly souped-up EFSF - to contain any local difficulties: it's hard to see how that's going to be sufficient to let Italy refinance its debt at affordable interest rates.

What might work in the medium term would be for Germany to spend (you might prefer to say 'invest') its trade surplus in the PIGS.  For example, it likes solar power: how about building solar cell factories in those countries, buying up land there where the sun shines a lot (this is not the hardest part) and covering it in solar power stations?  That would help meet any undertakings the EU may make in Durban, and I think it would be a lot easier to persuade German voters to spend money on saving the planet than on supporting pensioners in Greece.

- Whose hand did Nicolas Sarkozy want to shake in Le Snub?
Sarkozy air-kisses the hand of Dalia Grybauskaitė, president of Lithuania, then seems to swerve a handshake with Cameron in favour of Dimitris Christofias, president of Cyprus.  According to the Telegraph it was just a trick of the camera angle - "Mr Sarkozy was making eye contact with a man beyond Mr Cameron".  According to me, Sarkozy was making a beeline for the one man in the room whose hand he could shake without having to look up.  (It's interesting that none of the newspapers' European correspondents is able to identify minor European presidents.)

- Has Ed Miliband got a plan?
According to Miliband, Cameron "mishandled these negotiations spectacularly".  But how would he know?  He wasn't there.  Miliband has a real chance of becoming Prime Minister in three years or so: he needs to start behaving like someone who can be taken seriously in that role.  When commenting on international affairs, he should be saying something statesmanlike, along the lines of "it's unfortunate for Great Britain that the Prime Minister was unable to reach an agreement in the best interests of the country.  I will be meeting with Mr Cameron to find out why he was unable to do so."

Friday, 9 December 2011

More on marginal tax rates

In my post on optimum tax rates I mentioned as an afterthought that the 52% marginal direct tax rate in the UK goes through what seems to me to be a psychologically important level of 50%.  Comments on a blog that's easier to read than this one have led me to expand on the point.

The analysis of optimum marginal tax rates depends on how much taxable income changes when the tax rate changes.  Changes in taxable income can result from two causes: reduction of broad income, and tax avoidance, which reduces taxable income without the taxpayer actually earning less money.

Tax avoidance, through income timing or taking income in a different way, will involve careful planning, so all avoidable taxes should be considered.  But reduction of broad income by trying less hard to earn money, or by moving overseas to a friendly tax regime, will be caused not so much by considered analysis of what's worth it for the money as by one's gut reaction to the marginal tax rate - "Do I really want to do this piece of work just so that Osborne can get 52% of the reward?"

In that context it seems to me that 50% is an important level to breach.  It may be that the curve relating taxable income to marginal tax rate has a kink in it at about that level, so that either 42% or 62% might raise more revenue than 52%.

This is just speculation; empirical evidence would be hard to come by.  One can't simply experiment with tax rates from year to year: a temporary change will see more elasticity than a permanent change, because some top-rate taxpayers are able to advance or defer their income.

Optimum tax rates

Peter Diamond and Emmanuel Saez have published a paper which includes a calculation of the "optimal top marginal tax rate" in the USA, on the assumption that the only criterion is to maximize revenue - there is negligible social utility in letting rich people keep their income.  The calculation has attracted considerable interest from on-line commentators.  Nobel laureate Paul Krugman writes in the New York Times in defence of the criterion.  Brad DeLong observes that Adam Smith disagrees, partly because the rest of us take vicarious pleasure in the rich enjoying their wealth.  Richard Green fears that higher taxes on high earners might cause them to pay their servants less.  Kevin Drum reports with evident satisfaction that according to one number in the paper the peak of the Laffer Curve is at a (US) Federal income tax of 76%, far above the current top rate of 35%.

In the UK, the "#1 economics blogger" Richard Murphy quotes Drum at length, and concludes that we are comfortably below the peak of the Laffer Curve.  Murphy is not one to concern himself with details, but he seems simply to be noting that the top UK tax rate of 50% is a lot less than the 76% he's quoted.

Meanwhile, the UK's leading libertarian scandium oligopolist, Tim Worstall, asked his readers to calculate what tax rate in the UK, including employers' and employees' National Insurance and VAT, would be directly comparable with the tax numbers used by the paper for the USA, which include the Medicare tax and state income and sales taxes.  He used the analyses they (I might say 'we') submitted to declare that Murphy is wrong (that's always Worstall's conclusion) and that the true optimum top UK income tax rate is 40%.

As you might expect, I am going to adjudicate.  First, an outline of what Diamond and Saez actually did.  They assume, in line with extensive empirical research, that income at the top end follows a Pareto distribution, that is, it has a probability density function falling off according to a power law, p(z) = C/z^(1+a).  They find empirically that the parameter 'a' in the USA is 1.5.  Then they assume, following various other authors, that taxable income is an elastic function of retained earnings (in the economic sense of 'elastic', i.e. a given change in the logarithm of the fraction of marginal income not taken in tax results in a proportional change in the logarithm of taxable income reported).  This is a convenient assumption, in that it means that a change in marginal tax rate leaves the power-law shape unchanged, affecting only the value of 'C'.

As the tax rate increases the fraction of income retained falls, so that a given change in tax rate has a larger proportional effect on the retained fraction, and hence a larger effect on taxable income reported (e.g. a tax rate change from 0% to 1% takes away one hundredth of your net income, but a change from 90% to 91% takes away a tenth).  So with the elasticity assumption there is a critical tax rate at which the reduction in taxable income when the tax rate is increased balances out the extra tax raised by the higher rate - this is the optimum rate, which turns out to be 1/(1 + elasticity×a), as the mathematically inclined reader may care to prove.

The difficulty now is to determine the elasticity parameter.  They report a mid-range estimate from the empirical literature of 0.25, but go on to use figures from another paper, also co-authored by Saez, which finds an elasticity of taxable income for top earners of 0.57, but only 0.17 for 'broad income', which they define as "Total Income less Capital Gains [and] Social Security Benefits".  The implication is that most of the elasticity is due to tax avoidance rather than reduced income.
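For the less mathematically inclined, the closed form is easy to check numerically (a minimal sketch in Python; the parameter values are Diamond and Saez's as quoted above):

```python
import numpy as np

a = 1.5    # Pareto tail parameter for top incomes (Diamond & Saez, USA)
e = 0.25   # mid-range elasticity of taxable income from the literature

# With a Pareto tail and the elasticity assumption, a top marginal rate
# tau scales reported top incomes by (1 - tau)**e, so revenue from the
# top bracket is proportional to tau * (1 - tau)**(a * e).
tau = np.linspace(0.001, 0.999, 99_801)
revenue = tau * (1 - tau) ** (a * e)

print(f"numerical optimum   : {tau[revenue.argmax()]:.3f}")
print(f"closed form 1/(1+ae): {1 / (1 + a * e):.3f}")
# Both give 0.727.  With the paper's preferred e = 0.57 the optimum
# falls to about 0.54; with the UK-ish guesses I use below (a = 1.25,
# e = 0.4) it is 1/1.5, the 67% combined rate quoted later in this post.
```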

How applicable is this to the UK?  The Pareto distribution seems to hold quite generally, but the power law may be different: this paper reports a=1.37 in the USA and a=1.06 in the UK.  I would expect the 'broad income' elasticity to be somewhat higher in the UK, because high-earning Britons are more likely than Americans to move overseas, if only because Americans are discouraged by the extraordinary geographical range of American tax laws.  But I would expect the taxable income elasticity to be smaller in the UK, because there are fewer deductions available.

What taxes is it appropriate to include in a UK calculation?  Income tax obviously, and the 2% top-rate employees' national insurance contribution (it's the same for the self-employed).  This corresponds to the 1.45% employees' Medicare tax included in the US calculation.  Also included in the US calculation are the 1.45% employers' Medicare tax and 40% of average (state) sales taxes, which is 2.32%.  Analogously, we should include employers' NI of 13.8% and some fraction of the 20% VAT rate.  But I'm unconvinced that this is right.  The relevant taxes in the USA are quite small, so Diamond and Saez may have included them just to avoid argument.  In the UK the issue is more important, and deserves some consideration.  It seems to me that tax-avoidance schemes are chosen by careful calculation of their benefits, but scaling of effort in response to tax changes is more emotional: I doubt that many people would think of employers' NI as a consideration there.  However, the incidence of employers' NI is considered to be largely on the employee, so it may make working abroad relatively attractive financially.  Regarding VAT, I doubt that much of the marginal income of high earners goes on goods subject to VAT.  For the most part, a person earning well into six figures buys whatever retail goods they feel like already.  And psychologically, paying tax when you buy stuff does not affect your attitude to earning money in the same way as having to hand more than half of it over to the government as you get it.

My rough numbers: a = 1.25; broad-income elasticity = 0.27; taxable-income elasticity = 0.4; optimum combined marginal tax rate = 67%.  Employer's NI contributing to the elasticity effect = 2%; VAT contributing to the elasticity effect = 5%; marginal income tax rate, net of 2% employees' NI, to give a 67% combined rate = 62%.

There's a good bit of guesswork in the parameters I've used, so there's no reason why anyone else should get the same answer.  But I think it's pretty hard to defend the choice of elasticities of either 0.57 in the UK (Worstall) or 0.17 in the USA (Drum).

In the interests of full disclosure I should say that I've paid tax at the 50% rate ever since it was introduced.  I may not do so in the 2012-13 tax year.  I can tell you that there's a psychological impact from direct taxes exceeding half one's marginal earnings: it's OK for you not to care.

Tuesday, 6 December 2011

And then there were none

There used to be one tolerably sane candidate seeking the Republican nomination for the US presidency.  Not any more.  Here's Jon Huntsman revising his position on climate change:
there are questions about the validity of the science — evidence by one university over in Scotland recently
I think he means the University of East Anglia. It's reassuring to note that he's wrong about the geography as well.

Saturday, 3 December 2011

FTT and stock market crashes

Could a Financial Transactions Tax in Europe avert major falls in equity markets?  I'm going to consider this in the light of major equity index falls working backwards from now.

1) The Eurozone debt crisis sell-off starting in July 2011.
Between 7th July and 24th November this year the FTSE fell by 14.6% in reaction to the Eurozone debt crisis.  It's hard to see how an FTT on shares could have had much of an effect on that.  It's possible that an FTT on bonds (which is also part of the proposal) could have slightly reduced the falls in sovereign bond prices, but the underlying problem is the massive deficit and debt problems of several Euro countries.  (I've put an end date to the sell-off at the recent market low, but I'm not promising the decline in equity prices is over.)

2) The Flash Crash of 6th May 2010.
Starting at about 2:40pm on the east coast, the S&P and other major indexes fell about 5% over 5 minutes, then recovered their losses over the next 10 minutes.  The exact causes are not definitely known, but it's certain that high-frequency trading played an important part.  It's probable that the crash wouldn't have happened had there been an FTT in the US.

However, the temporary crash had no effect on European markets, which were closed.  Had the crash occurred earlier in the day, there would have been some reaction in Europe, which would have been smaller with an FTT than without.  There's no way to quantify this.

3) The Financial Crisis sell-off between October 2007 and March 2009.
Between the end of October 2007 and 3rd March 2009 the FTSE lost 47.7% of its peak value.  This was one effect of a global financial crisis caused by the collapse of a credit boom built on the back of rising US house prices.  Banks had built up extraordinary levels of exposure to mortgage-backed securities, but an FTT would have affected this not at all.

4) 9/11
The FTSE fell 5.72% on 11th September 2001, in reaction to terrorist attacks in New York.  This was the only one of the ten biggest one-day percentage falls not to have happened in either October 1987 or during 2008.  The fall was a rational response to the information then available about the attacks (which occurred during the European afternoon).  An FTT would have been irrelevant.

5) The collapse of the tech bubble at the beginning of the third millennium
After reaching a new high on the last trading day of the century, the FTSE fell progressively, reaching a low on 12th March 2003, by which time it had lost 52.6% of its peak value.  A lot happens in three years, but the simple explanation is that there was a gradual re-evaluation of the true value of the internet market.  An FTT would have had no bearing on this.

6) The Russian Financial crisis and the collapse of LTCM, July-October 1998
Between 20th July and 8th October 1998 the FTSE fell by 24% as a result of a financial crisis in Russia and in reaction to the (not unrelated) failure of the hedge fund Long-Term Capital Management on 23rd September.  The market recovered the LTCM part of its losses over the following eight days as it became apparent that the damage had been contained.

It would probably have been impossible for LTCM to have executed its strategies in the presence of an FTT in the USA, so its boom and bust never would have happened.  An FTT in Europe would have made little difference to it.  So an FTT in the USA could have averted the last 6.6% of the fall.

7) Black Monday, October 1987
The FTSE fell 5.4% on Friday 16th October, and 5.7% on Monday 19th.  The major action happened in the US market, which fell precipitously after the FTSE had closed: the S&P lost 20.4% on the day.  As a result, the FTSE opened on the 20th down another 18.1%.

The causes of this one were complex.  The losses on the 19th seem to have been accelerated by program trading and portfolio insurance strategies in the US.  It's possible that an FTT would have discouraged the development of these strategies.

***

Of the seven market falls I've looked at, one, which had no effect in Europe, would probably have been prevented by an FTT in the USA, one would probably have been reduced by it by about a quarter, and one might have been reduced by an unknown amount.  None would have been significantly affected by an FTT in Europe.

It's not surprising that an FTT would have more effect in the USA.  For regulatory reasons, most share trading in the USA is done on (electronic) exchanges, which makes automated trading much more profitable.  In Europe, similar exchanges exist but most large share trades are OTC (over-the-counter).


***

While writing this, I came across this BBC analysis which covers many of the same events (without reference to a Financial Transactions Tax).

Thursday, 1 December 2011

Hypersensitivity

The BBC, in an article about AIDS funding, lists the ten leading causes of death in high, middle, and low-income countries.  It seems that about 2% of global mortality is due to "hypersensitive heart disease".  I think it's rather tactless of the BBC to say so - mightn't it hurt the poor darling's feelings?

This isn't spelling correction software doing its worst - "turberculosis" is also on the list.

But this is a serious subject, so on a more serious note: why won't the BBC link to its sources?  The data come from this WHO report (the WHO gets the names of the diseases right).  There's a list of countries by income group on page 170 here: the list is derived from a World Bank spreadsheet.

Update: The BBC has deleted the list.  Here's a screenshot:

The FTT, noise trading, and volatility

I summarized quite a lot of research in my post about the FTT and volatility with an airy "There are other high-frequency noise effects which have been theoretically analysed".  I'll say a bit more.

First, except in the special circumstances I discussed previously, speculative trading can increase volatility only if it loses money.  As Milton Friedman noted in his seminal 1953 paper, The Case for Flexible Exchange Rates:
People who argue that speculation is generally destabilizing seldom realize that this is largely equivalent to saying that speculators lose money, since speculation can be destabilizing in general only if speculators on the average sell when the currency is low in price and buy when it is high.
That does not mean that individual speculators cannot profit from activities that increase volatility, but they can do so only at the expense of other speculators.  Consider a market in which trend-following speculators are active.  An ingenious speculator might create an artificial trend by buying a stock in sufficient volume, causing the trend-followers to start buying into the trend, driving the stock higher.  When the clever guy judges the trend-followers have filled their boots, he'll dump the stock at the higher price, locking in a profit.  The stock will thereafter gradually revert to whatever it's really worth, and at some point the trend-followers will sell out, realising their losses.

But there is only so much money that unsuccessful speculators are willing to lose, so the capacity for speculators to increase volatility is limited.  (Bankers have lost apocalyptic amounts of money in the last few years, but not on speculative trading of the sort that might be discouraged by an FTT.)

Nevertheless, the review paper I discussed previously cited no fewer than twelve papers attempting to predict the effect on volatility of a transaction tax.  (The ante-penultimate link doesn't now work, and the paper, which is good, doesn't discuss volatility directly.)  Each of them sets up a model of a securities market, and predicts how it will operate with and without a transaction tax by means of theoretical analysis, or by computer simulation of trading strategies, or by having humans play a trading game.

An essential component for a transaction tax to be able to reduce volatility in these models is the existence of what Fischer Black (of Black-Scholes) called "noise traders".  These are traders who speculate in the market without having any information not already priced in, as distinct from "information traders".  In Black's conception, traders often do not know which sort of trader they are, which creates uncertainty essential for the operation of a liquid market.  The papers I list do not all follow the definition exactly, but they incorporate the concept in some form.  Their results seem to depend on the extent to which the way the market is modelled causes the transaction tax to discourage noise traders more than other traders.

None of the papers has a set-up which is much like actual equity markets: this is partly because they are analysing something like the Tobin tax proposed for FX markets and partly because it's often easier to analyse something other than reality.  But it's the equity market that the proposed European FTT is mainly concerned with (the FX market is excluded).  And none of the papers includes the full range of trading strategies that operate in actual markets.

A noise trader whose decisions are indistinguishable from random will add a small amount of volatility, and tend gradually to lose money.  I am sceptical that there are many traders of this sort.  Most speculative traders follow some sort of strategy; broadly, the strategies are either trend-following or contrarian.  (One of the papers listed explicitly includes both these strategies; others may do so implicitly by using human traders in their simulations.)  It's important to include the trend-followers to give a transactions tax a fair chance to reduce volatility significantly.

If I were to attempt something like this I would want to include at least the following (a minimal skeleton sketch follows the list):
- large trades being executed gradually.  This would feed a series of trades in the same direction, with the broker varying the size and timing in an attempt to disguise what he's doing.  These trades are profitable for trend-followers.
- trades being done for exogenous reasons.  These are not strictly noise trades, and will not be deterred by a small transactions tax, but their size and direction look random.
- information trades (some authors call them fundamental trades).  These are done by traders who have used private aptitude to deduce fundamental valuations from public information.
- hedge trades.  These are trades done by option traders who are in aggregate either long or short gamma.  If option traders are short gamma their hedging tends to increase volatility, and vice versa.
- insider trades.  These are trades done using information that is not yet public, but is made public after some time delay.  These trades are profitable for trend-followers.
- trend-following speculative trades.
- contrarian speculative trades.
- speculative trades attempting to profit at the expense of other speculative strategies.
- a stochastic process for the fundamental value.  All traders will be aware of the direction of large changes in fundamental value (corresponding to obviously important news).
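Here's the skeleton I have in mind (Python; everything is illustrative - only two of the strategies above are sketched, and the parameters are arbitrary):

```python
import random

class Market:
    def __init__(self, fundamental=100.0, impact=0.01, tax=0.0):
        self.value = fundamental          # unobserved fundamental value
        self.price = fundamental
        self.impact = impact              # price move per unit of net order flow
        self.tax = tax                    # transactions tax rate
        self.history = [self.price]

    def step(self, orders):
        self.value += random.gauss(0, 0.5)        # stochastic fundamental
        self.price += self.impact * sum(orders)   # net flow moves the price
        self.history.append(self.price)

def trend_follower(m):
    # Buys recent risers and sells recent fallers, but only when the
    # expected edge looks bigger than the round-trip tax.
    if len(m.history) < 2:
        return 0
    move = m.history[-1] - m.history[-2]
    if abs(move) <= 2 * m.tax * m.price:
        return 0
    return 1 if move > 0 else -1

def fundamental_trader(m):
    # Trades towards fundamental value when the mispricing beats the tax.
    gap = m.value - m.price
    if abs(gap) <= 2 * m.tax * m.price:
        return 0
    return 1 if gap > 0 else -1

def realized_volatility(agents, tax, steps=1000):
    m = Market(tax=tax)
    for _ in range(steps):
        m.step([agent(m) for agent in agents])
    diffs = [b - a for a, b in zip(m.history, m.history[1:])]
    return (sum(d * d for d in diffs) / len(diffs)) ** 0.5

random.seed(0)
agents = [trend_follower] * 5 + [fundamental_trader] * 5
print("volatility, no tax  :", realized_volatility(agents, tax=0.0))
print("volatility, 0.5% tax:", realized_volatility(agents, tax=0.005))
```

Filling in the remaining strategies, and tuning the weights so that trend-following is profitable overall, is where the real work - and the real scope for wishful parameter choices - would lie.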

That's a lot of things to put in, and a lot of parameters and relative weightings to vary.  However, there are only three sorts of trades which tend to increase volatility - short gamma hedging, unsuccessful trend trades (successful trend trades don't increase volatility, they just bring the price change forward), and trades parasitic on trend trades.  If things are set up so that trend trading is profitable despite being unsuccessful quite often then a transactions tax sufficient to make it unprofitable can decrease volatility significantly.  Note that trend traders do need to trade quite often, because they need to unwind their position quickly if a trend they've traded on fails to continue.

My guess is that with some care it would be possible to create a set-up where this happens.  More tentatively, I guess that the actual market doesn't match it.

Monday, 28 November 2011

Mann, sind die dick, Mann

The title - roughly, "Man, are they fat, man" - is the Berliner Kurier's expression of appreciation for Britain's chip-eating habit (the link is from kevin writing at Understanding Uncertainty, as is the basic point of this post).  The BBC carried a similar story, based likewise on a Eurostat report comparing the proportions of obese people across the EU: "UK Women are fattest in Europe".

Certainly obesity has increased over the last few years in the UK, and I think that's a bad thing.  But the comparison with the rest of the EU is meaningless, because the statistics are collected in a way that's simply not comparable.  For the rest of the EU, data come from the EHIS survey in which the interviewer simply asks the participant how tall and heavy they are: question 21 here.  In England (the UK data in fact come from England only) data come from the Health Survey for England, in which the interviewer actually measures the participant's height and weight: section 3.1 here.  What proportion of obese people do you think will under-report their weight (or over-report their height) sufficiently to be classed as non-obese?  One in ten?  What about answers of "don't know" or refusals to answer - how far will they tend to reduce the observed obesity rate?
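A back-of-envelope simulation shows how far modest under-reporting can shift the headline figure (a sketch with made-up parameters - this is not the EHIS or HSE data):

```python
import random

random.seed(1)

def bmi(weight_kg, height_m):
    return weight_kg / height_m ** 2   # obese is BMI >= 30

# A made-up population of women: spreads chosen only for illustration.
pop = [(random.gauss(70, 15), random.gauss(1.62, 0.07)) for _ in range(100_000)]

measured = sum(bmi(w, h) >= 30 for w, h in pop) / len(pop)

# Self-reporting: suppose respondents shave 2 kg off their weight and
# add 1 cm to their height on average (numbers chosen for illustration).
reported = sum(bmi(w - 2, h + 0.01) >= 30 for w, h in pop) / len(pop)

print(f"measured obesity rate:      {measured:.1%}")   # what the HSE sees
print(f"self-reported obesity rate: {reported:.1%}")   # what EHIS-style surveys see
```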

My guess is that genuinely comparable statistics would put England in the top quarter of the table, but not at the top.  But I'm just guessing.  And so is Eurostat.

Teaching children to program computers

My first two years of secondary schooling were spent at a large and mediocre comprehensive.  Its computing facilities consisted of a Teletype connected to a mainframe at a polytechnic six miles away (this was at about the time the first personal computers were being developed).  And at the age of 11 or 12 what was certainly not called the top maths set was encouraged to write programs for it in FORTRAN 66 (being a heartwarming nerd I taught it to play noughts and crosses).

Decades later, there is more than one computer for every four children in UK schools (the ratio will be better in secondary schools).  And ICT - Information and Communication Technologies - is part of the school syllabus.  So how can it be that children are not learning computer programming?  The recent "Next Gen" report has the answer:
...instead of building on the BBC’s Computer Literacy Project in the 1980s, schools turned away from programming in favour of ICT. Whilst useful in teaching various proprietary office software packages, ICT fails to inspire children to study computer programming. It is certainly not much help for a career in games. In a world where technology affects everything in our daily lives, so few children are taught such an essential STEM skill as programming. Bored by ICT, young people do not see the potential of the digital creative industries. It is hardly surprising that the games industry keeps complaining about the lack of industry-ready computer programmers and digital artists.
I had a look at the OCR's ICT GCSE examination - this is the exam 16-year-olds in most of the UK will take to certify their computing skills.  One optional "unit", making up 30% of the overall marks, concerns itself with coding.  Within that, at most 11 out of 60 marks are available for the actual code: 30% × 11/60 = 5.5% of the overall marks.

It used to be the case that the nerds could be left to pick up programming skills for themselves.  But that's changed - why bother to write a noughts-and-crosses program when you can find a much better one on the internet in a few seconds?  We don't need everyone to be a programmer, but if we want to be a rich country we need enough programmers to support a thriving software industry.  The importance of literacy and numeracy is well recognized, but for most children learning how to use computers goes little further than the equivalent of learning how to take the cap off a biro.
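
For the curious, noughts and crosses is still a good first program.  A minimal Python version - brute-force minimax rather than 1970s FORTRAN - fits on one screen:

```python
# All eight winning lines on a 3x3 board, indexed 0-8.
LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8), (0, 3, 6),
         (1, 4, 7), (2, 5, 8), (0, 4, 8), (2, 4, 6)]

def winner(board):
    for a, b, c in LINES:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

def best_move(board, player):
    """Minimax: return (score, move) from `player`'s point of view,
    where +1 is a win, 0 a draw and -1 a loss under perfect play."""
    w = winner(board)
    if w is not None:
        return (1 if w == player else -1), None
    moves = [i for i, cell in enumerate(board) if cell == ' ']
    if not moves:
        return 0, None
    other = 'O' if player == 'X' else 'X'
    best_score, best = -2, None
    for m in moves:
        board[m] = player
        score, _ = best_move(board, other)   # opponent's best reply
        board[m] = ' '
        if -score > best_score:
            best_score, best = -score, m
    return best_score, best

# Perfect play from an empty board is a draw (score 0).
print(best_move(list(' ' * 9), 'X'))
```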

There's been an understandable trend in education towards teaching the same subjects to children of all abilities, with the difference between abilities being in the level of attainment expected.  Appropriately enough, computer programming skills are more binary than that.  We have to take an elitist approach with the most able 10% or 20%, while doing everything we can to minimize the risk of leaving out the wrong children.

The government intends to publish its response to the Next Gen report today: I hope it takes the issue seriously.

Saturday, 26 November 2011

Insurance for cyclists

The BBC has a story encouraging cyclists to purchase insurance, apparently based on a press release from the Association of British Insurers (but the press release is not yet on the ABI's website).  The ABI's spokesman says:
Some 230 cyclists a month are killed or seriously injured on the roads so there is a good chance you are going to be off work for weeks, if not months, so some sort of insurance to cover you for loss of income makes sense
The statistic is accurate - according to the Department for Transport, 111 cyclists were killed in Great Britain in 2010 and 2,660 seriously injured in accidents reported to the police, which combined comes to just over 230 a month.  And it's not surprising that British Insurers are in favour of Britons buying insurance.

But why focus on cyclists in particular?  The DfT statistics for pedestrians are 405 deaths and 5,200 serious injuries - twice as many serious injuries and nearly four times as many deaths (I wonder why the ratio of deaths to serious injuries should be so different).  A plausible estimate is that 27% of the adult population are cyclists, and I'm confident that less than 100% of the adult population are pedestrians, so the risks seem not to be very different.
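
Here is the rough per-participant arithmetic, treating every adult as a pedestrian.  The adult-population figure is my own round number, and exposure per person obviously differs, so this is indicative only:

```python
# Killed or seriously injured (KSI), Great Britain, 2010,
# from the DfT figures quoted above.
cyclist_ksi = 111 + 2660
pedestrian_ksi = 405 + 5200

adults = 50e6               # rough GB adult population - my assumption
cyclists = 0.27 * adults    # using the 27% estimate cited above

print(f"annual KSI per 100,000 cyclists:    {1e5 * cyclist_ksi / cyclists:.0f}")
print(f"annual KSI per 100,000 pedestrians: {1e5 * pedestrian_ksi / adults:.0f}")
```

On these crude numbers the two risks are within a factor of two of each other.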

The ABI spokesman goes on to say that third-party liability insurance is essential.  Well, on a bicycle or not, all of us are at risk of somehow causing someone an injury.  Few of us sue one another on account of it: I suspect that legal actions such as this one would be much rarer if personal liability insurance did not exist, partly because most of us don't have enough money to make bringing the action worth the lawyers' while (Jack of Kent has some interesting thoughts on the subject).  If you're not rich enough to be worth suing for your own money, you might think it your civic duty to carry liability insurance, but that should not be for cycling only.

It seems to me that there's something unnecessarily discouraging about attitudes to cycling in Britain.  I'm reminded of the debate about wearing cycling helmets.  Helmets provide some protection, and I often wear one when cycling, but they would protect pedestrians and car passengers too: no one tuts at people walking down the street without a helmet on their head.

Thursday, 24 November 2011

Anthropogenic Global Warming and democracy

More hacked emails from the University of East Anglia's Climate Research Unit have been released, timed apparently for the COP17 conference starting in Durban on Monday.  There's nothing recent - apparently this is all material appropriated at the time of the original "Climategate" hack in 2009.  Nevertheless, high-profile disbelievers in AGW are beside themselves with excitement: here for example is James Delingpole in the Telegraph.  Just in case there's something in it, I've looked at the first email Delingpole reproduces as an example of the revealed perfidy of UEA climate scientists, from Thorne/MetO (apparently Peter Thorne at the Met Office addressing Phil Jones at the CRU):
Observations do not show rising temperatures throughout the tropical troposphere unless you accept one single study and approach and discount a wealth of others. This is just downright dangerous. We need to communicate the uncertainty and be honest. Phil, hopefully we can find time to discuss these further if necessary [...]
Thorne seems to be saying that a claim Jones has made in a draft paper or report about "rising temperatures throughout the tropical troposphere" is not supported by the evidence and should be deleted.  You might or might not detect a slight note of reproach.  But there's nothing scandalous about this.  If the claim had then been published by people who still believed it to be unjustified, that would be scandalous.  But neither Delingpole nor anyone else seems to have any evidence of that.  Jocelyn Fong at Media Matters, by contrast, has looked into it, and concludes that this email was part of a discussion in February 2005 of an IPCC report eventually released in 2007.  The section about the upper troposphere, which is the only section discussing the troposphere directly, makes no strong claims at all: "the uncertainties about long-term change are substantial".

So the news story seems to be "old emails reveal no scientific dishonesty by Climate Scientists, in agreement with the conclusions of several enquiries".

What I find remarkable is the underlying assumption in much of what's published in newspapers and online, by believers in AGW as well as by disbelievers, that these questions can be settled by debate among people who are not experts in the scientific issues (I'd guess there are at most a few hundred experts qualified to give first-hand opinions).  There are five questions to be answered:
1) Is the climate getting warmer?
2) If it is, is the warming caused by human activity?
3) If it is, do we expect warming to continue if we carry on as before?
4) If we do, what can we change to reduce or halt the warming?
5) Is it worth changing the things we can change?

Only (5) is a matter suitable for political debate, ultimately to be decided by democratic vote.  If the climate is getting warmer, no lobby of Telegraph readers asserting that it isn't is going to stop it.  Yet it seems to be question (1) that climategate enthusiasts are most anxious to argue about.  This is a strange choice of argument in view of the story of Richard Muller.  In 2004, Muller, a professor of physics, came out in support of a paper (there's a version of it here) claiming that the famous "hockey stick" analysis, showing global temperatures rising sharply during the 20th century, was based on fatally flawed statistical methods.  This argument met with vigorous rebuttal, but eventually Muller set up the Berkeley Earth Surface Temperature project to analyse temperature data using statistical methods he was satisfied with.  A month ago, BEST released its first results, which it summarized here.  Its conclusion agrees closely with the previous consensus among climate scientists.  To his credit, Muller acknowledged in a piece in the Wall Street Journal the accuracy of the work done by prior groups: "We think that means that those groups had truly been very careful in their work, despite their inability to convince some skeptics of that".

How can one explain the extraordinary confidence, among so many people with no technical grasp of the issues, that the scientific consensus on question (1) is wrong?  Perhaps it comes from a misapplication of libertarian thought: it's right that I should be allowed to do what I want, I want to burn fossil fuels, climate scientists are trying to stop me, so climate scientists must be wrong.  That is, they are confusing their self-entitling theory of justice with scientific fact.

Update: commentator Belette rightly points out that I have been unclear about what BEST's results so far actually say.  BEST has analysed temperature measurements dating back to 1800, but it has not yet reported on the proxy temperature data used to create the "hockey stick" graphs going back first 600 and later 1,000 years.  (This is an ice hockey stick: the graph is roughly horizontal (the shaft of the stick) until it starts rising during the 20th century (the blade).)  It's still entirely possible that BEST will produce a reconstruction of longer-term temperatures outside the currently accepted ranges: that would be a result to be evaluated on its own merits.

My point is that now that an avowed sceptic has independently confirmed the global warming trend, it is madness to allege that it's a fabrication or a mistake arising from a self-reinforcing scientific consensus.

Wednesday, 23 November 2011

The Financial Transactions Tax and volatility

The Institute of Economic Affairs has published a report by Tim Worstall setting out a case against the proposed FTT.  The article introducing the report asserts, among other things, that "it won't reduce volatility, a desired aim, it will increase it".  Richard Murphy, commenting on the report in the name of Tax Research UK, flatly contradicts this: "If there was less liquidity in these markets there would, very obviously, be much less volatility than we are witnessing at present".  So who is right?

Neither writer troubles himself to say what he means by volatility.  When used by sombre-sounding financial news reporters, it tends to mean security prices going down a lot (prices going up a lot are just as volatile, but they don't evoke the same atmosphere of impending doom).  But as a term of art in finance, it means the standard deviation of the logarithm of price returns, scaled by the square root of the time interval over which each return occurs - Black and Scholes' seminal 1973 paper on option pricing uses the concept, describing it as the square root of the "variance rate".
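
In code, the term-of-art definition amounts to something like this (a sketch, assuming `prices` is sampled at a constant interval of `dt` years):

```python
import numpy as np

def realized_vol(prices, dt):
    """Annualized volatility: the standard deviation of log returns,
    divided by the square root of the sampling interval in years."""
    log_returns = np.diff(np.log(prices))
    return log_returns.std(ddof=1) / np.sqrt(dt)
```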

In a simple model, security prices follow a geometric Brownian motion.  In this model, the observed volatility is expected to be the same whatever time interval of returns is used.  This model is far from being an exact description of the real world - much published work on option theory is concerned with its flaws - but it is a useful approximation.  Under this model, what would be the effect on volatility of an FTT?  Assuming that trades become less frequent, but otherwise occur at the same prices, it would make no difference.  (The result is essentially the same if one introduces an additional drift to compensate investors for reduced liquidity.)
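
The invariance is easy to see in a simulation.  Below, a year of minute-by-minute geometric Brownian motion is sampled at three frequencies - less frequent trading, in effect - and the estimated volatility comes out the same each time (all figures illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)

sigma = 0.20                   # 20% annual volatility
dt = 1 / (252 * 390)           # one trading minute, in years
n = 252 * 390                  # a year of minutes
steps = rng.normal(-0.5 * sigma**2 * dt, sigma * np.sqrt(dt), n)
prices = 100 * np.exp(np.cumsum(steps))

for k in (1, 10, 60):          # every minute, every 10 minutes, hourly
    returns = np.diff(np.log(prices[::k]))
    print(k, returns.std(ddof=1) / np.sqrt(k * dt))   # all close to 0.20
```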

One minor flaw with the model is that if trading occurs at high frequency, prices may bounce between the bid and the offer, introducing additional volatility if the return periods used to observe it are very short.  An FTT would largely eliminate this effect.  But the effect does no one any harm, apart perhaps from causing some inconvenience to anyone engaged in high-frequency analysis of market-price data.  Could this be what Richard Murphy means when he says less liquidity very obviously gives much less volatility?  I don't think so; I think he just means that traded prices don't move when no trades occur.  Which is true, but not to anyone's advantage.
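
The bounce is equally easy to simulate: add a small random half-spread to the same sort of mid-price process, and volatility estimated from one-minute returns is noticeably inflated while the daily estimate is barely touched (the spread size is invented):

```python
import numpy as np

rng = np.random.default_rng(3)

sigma, n = 0.20, 252 * 390
dt = 1 / (252 * 390)
log_mid = np.cumsum(rng.normal(0.0, sigma * np.sqrt(dt), n))
# Traded prices bounce between bid and offer: a 5bp half-spread,
# hit at random on either side of the mid.
log_traded = log_mid + 0.0005 * rng.choice([-1.0, 1.0], n)

for k in (1, 390):             # one-minute and daily returns
    returns = np.diff(log_traded[::k])
    print(k, returns.std(ddof=1) / np.sqrt(k * dt))
```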

There are other high-frequency noise effects which have been theoretically analysed.  The conclusion tends to be that some reduction in short-term volatility is possible, at least in theory, if some sorts of high-frequency trading can selectively be discouraged.  (More on this below.)

A more important flaw in the model is that in practice there are far more large price moves than it predicts, unless one allows very large process volatilities to prevail temporarily.  This effect can be modelled by introducing a jump diffusion term to the price process.  Although these large price moves may not be related to an underlying volatility process, they nevertheless contribute substantially to observed volatility whenever they occur.  So if the incidence of these large moves could be reduced, there could be a substantial reduction in volatility, of precisely the sort one would wish to see if concerned about market price instability.
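
A jump-diffusion sketch makes the point: jumps that arrive only a few times a year still contribute disproportionately to the measured volatility (jump frequency and size invented):

```python
import numpy as np

rng = np.random.default_rng(4)

days, dt = 4 * 252, 1 / 252    # four years of daily returns
diffusion = rng.normal(0.0, 0.15 * np.sqrt(dt), days)   # 15% annual vol
# Roughly five jumps a year, each a move of a few percent either way.
jumps = rng.binomial(1, 5 / 252, days) * rng.normal(0.0, 0.05, days)
returns = diffusion + jumps

print(f"volatility, diffusion only: {diffusion.std(ddof=1) / np.sqrt(dt):.3f}")
print(f"volatility, with jumps:     {returns.std(ddof=1) / np.sqrt(dt):.3f}")
```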

Some large moves may be caused by speculative activity.  Holders of securities may be induced, or even compelled, to sell them if the price falls far enough.  For example, as I noted in another post, when sovereign debt yields get large enough, margin requirements are made more onerous, making it still more unattractive to hold the bonds and leading to further selling.  This can give two distinct possible prices for the same instrument.  Speculators may be able to gain by pushing the price from one possible value to the other.  An FTT would inhibit such speculation.  So there is at least a mechanism by which it could reduce volatility.  Whether this is significant could be determined only empirically.

Which brings us to Worstall's commentary.  Whereas his introductory article is emphatic that an FTT would increase volatility, his actual report is more measured: it quotes this from a report by the Institute of Development Studies at the University of Sussex:
The balance of evidence suggests that there is a positive relationship between transaction costs and volatility, although the size of this effect varies across different studies. Whether a Tobin Tax would affect volatility in the same way as underlying market transaction costs is not clear.
and concludes that "this suggests that a transaction tax would increase, not decrease volatility."  But the quotation seems to have been carefully selected to suit Worstall's position.  The discussion on volatility is much longer and more nuanced.  Ideally you should read the whole thing, but I'll offer a flavour of it by quoting the whole of the paragraph Worstall quotes from:
Nonetheless, the overall conclusion from the empirical evidence is more one sided than the theoretical work. The balance of evidence suggests that there is a positive relationship between transaction costs and volatility, although the size of this effect varies across different studies. Whether a Tobin Tax would affect volatility in the same way as underlying market transaction costs is not clear. The Swedish experience of imposing a tax on equity transactions may have increased volatility, but the size of the tax was large; there is no evidence that UK Stamp Duty had any effect on volatility, although it clearly affected returns on equity.
My summary is that the theoretical work tends to support the case that an FTT can reduce short-term volatility.  The empirical work suggests the opposite, but does not of course rule out the possibility that a carefully designed FTT could work as suggested by the theory.  But none of this matters very much because short-term volatility doesn't matter very much.

Importantly for my argument about jumps, the report notes that:
Unfortunately, to our knowledge, there are no papers which look at the impact of FTTs on the probability of a crash or adjustment taking place...We see this as a major gap in the literature.
To answer my original question, both Murphy and Worstall are wrong.  Murphy is completely wrong, except perhaps under some definition of volatility known only to himself.  Worstall is wrong in stating a definite answer to the question not supported by the evidence he cites.  The true answer is that we should not expect any great effect on short-term volatility, but that any small effect is somewhat more likely to be up than down.  And that there is no way of knowing whether there would be a useful reduction in the risk of the occasional large moves that we really care about when we worry about volatility.

Tuesday, 22 November 2011

Median survival time from cancer diagnosis

The BBC has a story about median survival times from diagnosis for various cancers, and how they have changed in the last 40 years.  For some cancers there's been a big improvement; for others there hasn't.  A spokeswoman for Cancer Research UK says that more research is urgently needed into cancers for which there has been little improvement.

The source is a research briefing paper by Macmillan Cancer Support.  Under the heading "Shocking Variation" the introduction says:
First the good news: overall median survival time for all cancer types 40 years ago was just one year, now it is predicted to be nearly six years. This improvement is testament to the improvements in surgery, diagnosis, radiotherapy, and new drugs. There have been particularly dramatic improvements in survival time for breast cancer, colon cancer and Non-Hodgkin’s Lymphoma – with many years added to median survival times.
But the good news is tempered by the woeful lack of improvement in other cancers. There has been almost no progress for cancers like lung and brain, where median survival times have risen by mere weeks. Shockingly pancreatic cancer median survival time has hardly risen at all. The NHS and cancer community must urgently look at why.
Apart from not being shocked, I don't disagree with that.  But there is something important left unsaid.  There are three ways to improve cancer survival time from diagnosis.
1) Better treatments
2) Earlier diagnosis, even if the treatment is ineffective
3) More effective treatment made possible by earlier diagnosis

Certainly treatments have got better for all cancers - medical science is a wonderful thing.  But there are few cancers in which this alone has given us a really big increase in median survival time: Non-Hodgkin's Lymphoma is one.  I suspect that most of the improvement has come from much earlier diagnosis, made possible by scanning technology invented in the early 70s, by endoscopy, and by testing for tumour markers such as PSA.  And it is hard to separate effects (2) and (3).
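
Effect (2) is lead-time bias, and a toy calculation shows how powerful it can be.  Suppose treatment changes nothing and diagnosis simply moves two years earlier (all numbers invented):

```python
import numpy as np

rng = np.random.default_rng(5)

n = 100_000
death_age = 72 + rng.normal(0.0, 8.0, n)      # hypothetical ages at death
old_dx = death_age - rng.gamma(2.0, 0.5, n)   # old regime: diagnosed ~1 year before death
new_dx = old_dx - 2.0                         # new regime: same cancers found 2 years sooner

print(f"median survival from diagnosis, old: {np.median(death_age - old_dx):.1f} years")
print(f"median survival from diagnosis, new: {np.median(death_age - new_dx):.1f} years")
print(f"median age at death, unchanged:      {np.median(death_age):.1f}")
```

Median survival from diagnosis jumps by two years, although nobody lives a day longer.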

Screening programmes are likely to become widely deployed only if there is evidence that they decrease mortality: that suggests that treatment following early diagnosis reduces mortality even in asymptomatic patients, but it doesn't tell us by how much it increases median survival.  (There's a helpful discussion of how to evaluate screening programmes here.)

The Macmillan report notes that the prostate cancer figures should be treated with caution because of the increased "incidence" of low-grade tumours following the introduction of PSA testing (they should say "diagnosis").  Similar caution should be applied to interpreting the data for all cancers.

Monday, 21 November 2011

A concocted quotation

Israel's first Prime Minister, David Ben-Gurion, wrote in 1937: "The Arabs will have to go, but one needs an opportune moment for making it happen, such as a war".
The quotation, from a 2008 column in The Independent by Johann Hari, gets "about 5,640" hits on Google.  Happily, after developments described below, the first few pages currently reported by Google are quoting it in order to point out that Ben-Gurion wrote no such thing.  But most of the pages using it treat it as authentic: it seems to be attractive to supporters of the Palestinian cause.

I was surprised when I read the column, because the quote seemed to contradict what I thought I knew about Ben-Gurion (not that I am an expert).  So I looked it up, and found that Hari had already used it, a year or so earlier, in a longer version: "I support compulsory transfer. I do not see in it anything immoral ... The Arabs will have to go, but one needs an opportune moment for making it happen, such as a war."  At that time, Benny Morris, an Israeli historian generally sympathetic to non-Zionist perspectives on the founding of Israel, wrote to the Independent saying that whereas the first part of the quotation is genuine (more on that here), everything after the ellipsis - that is, the quote at the head of this piece - is an invention.  I left it at that.

A few months ago, I saw these screenshots from the 2010 film With God on Our Side, which speaks against Christian Zionism, and decided to find out the truth of the quotation.

As I write this piece, I find that I've been overtaken by events.  CAMERA, a media-monitoring organization sympathetic to Israel, has done its own investigation and has persuaded the director of the film to issue a correction.  Nevertheless, the quotation is still widely used, and I'm going to try here to convince any believers that it's a dud.

The source of the quotation is a 2006 book by Ilan Pappé, The Ethnic Cleansing of Palestine, on page 23:
Ben-Gurion himself, writing to his son in 1937, appeared convinced that this was the only course of action open to Zionism: 'The Arabs will have to go', but one needs an opportune moment for making it happen, such as a war [40].
Reference [40] is given on page 266:
40. Ben-Gurion's Diary,12 July 1937, and in New Judea, August-September 1937, p.220
This is already strange.  How can there be two sources for the quotation, neither of them the letter mentioned in the text?  And what's going on with the quotation marks around the first six words only?  Pappé used the quotation again, in an essay titled "State of Denial: Israel 1948-2008"; you can find it on page 6 here:
This link between purpose and timing had been elucidated very clearly in a letter David Ben-Gurion had sent to his son Amos in July, 1937: “The Arabs will have to go, but one needs an opportune moment for making it happen, such as war.”
Pappé seems to have tidied up his earlier version.

David Ben-Gurion's diary is in the Ben-Gurion archive at Ben-Gurion University of the Negev.  I asked them for a facsimile of the 12th July 1937 entry and they kindly sent it to me.  It's nearly four pages of unpointed typewritten Hebrew, and not easy to make out.  But I needn't trouble you with my own attempts at translation, because in checking for alternative readings I found that Benny Morris has published one already, in his contribution to The War for Palestine, first published in 2001, on pages 41-43.  Morris doesn't say so, but, with minor elisions, his translation covers all but the last half page of the diary entry (the omitted part describes what Ben-Gurion did that day).

Morris is writing about Zionist interest in the idea of forced transfers of the Arab population of Palestine, and cannot reasonably be suspected of suppressing evidence favouring his argument.  Ben-Gurion discusses with some enthusiasm the proposals of the Peel Commission, including forced transfers, but he does not write the words Pappé quotes.  (Nor does he mention a letter to his son.)

The other source Pappé gives is "New Judea, August-September 1937, p.220".  The publication in fact calls itself "The New Judaea", and is available in Copyright Libraries.  I have a copy of page 220 in front of me; it contains a minute of Ben-Gurion's speech to the Twentieth Zionist Congress in August that year.  He is reported as saying "[Jews] would never dispute the rights of the Arabs in Palestine, and there was no contradiction between this and the principle that as many Jews as wished should come into Palestine".  There is nothing remotely like Pappé's quotation.

I am in no doubt that Pappé simply invented sources for his quotation.  I assume that he did so because no genuine source exists.  The quotation is a fabrication by Pappé.

(If you want to know more about Ben-Gurion's thinking on population transfers, including in his letters to his son, Chaim Simons has some useful pointers.)

Does this matter?  Well, if you think you can advance your argument by quoting Ben-Gurion, you should quote Ben-Gurion.  If you make a documentary film your facts should be factual - a later statement of correction doesn't change the film.  If you base an argument in a serious newspaper on something somebody is supposed to have said, you should take care that they have really said it (Hari has since got into a lot of trouble over his freedom with quotations).  If a university cares about its academic reputation, it should make sure that its employees' "incisive thought" on the "methodology of historical enquiry" does not extend to making stuff up.  And if you're a historian, you should write about what people said, not what you would have liked them to say.

But I don't think the truth of this has got anything to say about what should happen now in Israel.  Whatever Ben-Gurion's private thoughts, the events of 1948 - the Nakba that befell the Arabs of Palestine - are as much in the past as the building of the Dome of the Rock on the site of the Second Temple.  The writings of Ben-Gurion, like the histories of the Caliphate or the Kingdom of Israel, are not going to help in finding a compromise for the future.