Wednesday 7 March 2012

Competition in the NHS: the evidence

I return as foreshadowed to the evidence for the effectiveness of competition in the NHS.

Carol Propper's article summarizes four papers to support its view that "the evidence gives a more positive picture".  First, regarding Choose and Book "There is no systematic evidence that the choice agenda harmed patients".  Second, "The wave of mergers that the Blair administration undertook when it first came to power...did not realise the gains that were promised before the merger. As mergers tend to reduce the potential for competition in a local market, these findings too suggests that there are benefits from competition in an NHS type system."  Third, "findings from a recent study of management in the NHS shows that better management is associated with better outcomes in NHS hospitals and that management tends to be better where hospitals compete with each other." And fourth, "The Netherlands has had a mixed system of provision for many years and has slowly introduced competition. There is no evidence that this has massively harmed equity and is thought to have led to improvements in service delivery."

Only the third of these claims is to the effect that competition in the NHS has made things better, so that's the one I'll concentrate on: it's supported by this working paper.  The paper's main aim is to establish that competition between NHS hospitals causes them to have better management.  It creates measures of competition, management quality, and clinical quality, and looks for associations between them.  To establish causality, it uses an Instrumental Variable.

I'll attempt a handwaving explanation of the method of instrumental variables for any reader who, like me, happens not to be an econometrician.  The problem it addresses is that although it's often possible to show a correlation between two variables X and Y, that does not imply that X causes Y, which is what one might want to demonstrate.  Correlations can arise instead because Y causes X, or because some factor Z causes both X and Y, or because of coincidental trends in X and Y.  The approach is to identify an instrumental variable IV which can be shown by fundamental arguments (i) to affect X and (ii) to be correlated with Y only through its effect on X.  For example, suppose one wanted to measure how changes in the price of coffee affect coffee sales in England.  The problem is that if there's an independent rise in demand for coffee, the price will tend to go up.  One wouldn't want to interpret that increase in price as causing the increase in sales.  The solution is to find an instrumental variable affecting coffee prices: for example, one might identify a measure of the suitability of the weather in the various coffee-growing regions, with an appropriate time lag.  One would expect this measure to be negatively correlated with the price of coffee, and it might be positively correlated with coffee sales.  There are then some not very difficult statistical methods which can be used to derive an estimate of the sensitivity of coffee sales to prices.
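The coffee example can be made concrete with a toy simulation.  This is my own illustrative sketch, not anything from the paper: the variable names and all the numbers are invented, and the "IV estimate" is computed by the simplest (Wald ratio) formula.  An unobserved demand shock raises both price and sales, so a naive regression of sales on price is badly biased; the weather instrument, which moves price but (by construction) affects sales only through price, recovers the true effect.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Unobserved demand shock: raises both price and sales (the confounder).
demand = rng.normal(0, 1, n)
# Instrument: growing-region weather suitability.  Good weather lowers the
# price, and by assumption affects sales only through the price.
weather = rng.normal(0, 1, n)

price = 10 - 1.0 * weather + 1.0 * demand + rng.normal(0, 0.5, n)
sales = 100 - 2.0 * price + 3.0 * demand + rng.normal(0, 0.5, n)  # true effect: -2.0

# Naive OLS slope of sales on price: pulled toward zero by the demand shock.
ols = np.cov(price, sales)[0, 1] / np.var(price, ddof=1)

# IV (Wald ratio) estimate: Cov(weather, sales) / Cov(weather, price).
iv = np.cov(weather, sales)[0, 1] / np.cov(weather, price)[0, 1]

print(f"OLS estimate: {ols:+.2f}")  # roughly -0.67: badly biased
print(f"IV  estimate: {iv:+.2f}")   # close to the true -2.0
```

The point of the exercise is that the IV estimate is only trustworthy to the extent that the two criteria hold: the instrument must move X appreciably, and must touch Y through no other channel.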

Back to the paper about competition in the NHS: the authors find correlations between their three measures, and turn to the method of Instrumental Variables to try to demonstrate that competition is the causal factor.
Identifying the causal effect of competition is challenging, but the fact that exit and entry are strongly influenced by politics in a publicly run healthcare system, like the UK National Health Service (NHS), offers a potential instrumental variable - the degree of political competition.
It supports this theory of political competition by observing that
When Labour’s winning margin is small (under 5%) there are about 10% more hospitals than when it or the opposition parties (Conservatives and Liberal Democrats) have a large majority.
Figure 1 of the paper is a histogram of hospital density in English parliamentary constituencies against Labour majority in the 1997 general election, in which Labour under Tony Blair won a landslide victory.  It shows that the highest density is in constituencies with small Labour majorities: the authors say this validates their theory that the governing party will tend to open new hospitals, and not to close existing hospitals, in the constituencies where it feels vulnerable.  (The title of the histogram is somewhat misleading: hospital density is defined here as the number of qualifying hospitals within 30km of the centroid of the constituency. The number is quoted "per million population": I can't tell exactly how this scaling to population has been done.)

I note that the second highest density is in constituencies with small majorities for another party.  How does that fit the theory?  Surely the Labour government after the 1997 election wasn't much influenced by the hope of winning a still larger majority next time.  I have an alternative explanation: typically inner-city constituencies return Labour MPs, and rural constituencies return Conservative MPs.  Marginal constituencies tend to be in the transition zone between city and countryside.  Compare that to the pattern of hospital building in the 20th century, especially in the 60s and 70s.  Hospitals were built or extended on the outskirts of cities, where the space was available.  So it's a fair guess that one will tend to find more hospitals in and around marginal constituencies, even if political interference has been negligible.

Incidentally, "When Labour is not the winning party, the margin [plotted in the histogram] is the negative of the difference between the winning party (usually Conservative) and the next closest party".  Why the margin between the Conservatives and the Liberal Democrats should be relevant to their theory the authors do not say.  It would make more sense for them to use the margin by which Labour trailed the winning party.

If they wanted a fair test of their theory, the authors would look at changes in hospital densities after 1997, comparing the numbers particularly between constituencies with small Labour majorities and the (presumably quite similar) constituencies where Labour had finished close behind the winning party.

Having confirmed their theory, the authors tell us that
Using the share of government-controlled (Labour) marginal political constituencies as an instrumental variable for hospital numbers we find a significant causal impact of greater local competition on hospital management practices.  We are careful to condition on a wide range of confounding influences to ensure that our results are not driven by other factors (e.g. financial resources, different local demographics, the severity of patients treated at the hospital, etc.).
They are now taking as their Instrumental Variable the proportion of the political constituencies within 30km of the hospital which were Labour-held marginals.  The theory is that a hospital in a marginal area is likely to have other hospitals near it, because the government will be reluctant to close hospitals there, and keen to open them.

So is this a suitable IV according to the two criteria I listed?  First, how strongly can it predict hospital density?  There's a list of winning margins here: I count 18 English Labour marginals according to the criteria in the paper, out of 529 constituencies.  There are large parts of the country with no such constituencies; the statistical test will be ignoring most of the data.  So this IV performs poorly on the first criterion: it can account only weakly for variations in hospital density.  Second, is there good reason to think that it can affect management and clinical performance measures only through its effect on hospital competition?  No, certainly not.  Marginal constituencies tend to have particular geographical characteristics which may well affect their ability to recruit staff, among other things.  And if it's true that politicians care a lot about hospitals in those constituencies, they will be all too likely to intervene in ways other than just changing the number of them.  For example they might encourage their favourite managers to work for such hospitals, or direct resources to improve key statistics.  So the IV fails to satisfy the second criterion also.
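The first problem, a weak instrument, can be illustrated with another toy simulation of my own (again, invented numbers, nothing from the paper).  A binary instrument that is switched on in only about 18 of 529 observations, and which shifts X only slightly, yields IV estimates that bounce around wildly from sample to sample; a common, strong instrument pins down the true effect.

```python
import numpy as np

rng = np.random.default_rng(1)

def iv_estimates(p_on, strength, reps=500, n=529):
    """IV (Wald ratio) estimates of a true effect of 1.0, over many samples,
    using a binary instrument present with probability p_on."""
    out = []
    for _ in range(reps):
        z = (rng.random(n) < p_on).astype(float)  # the instrument
        u = rng.normal(0, 1, n)                   # unobserved confounder
        x = strength * z + u + rng.normal(0, 1, n)
        y = 1.0 * x + u + rng.normal(0, 1, n)     # true causal effect = 1.0
        out.append(np.cov(z, y)[0, 1] / np.cov(z, x)[0, 1])
    return np.array(out)

# Rare, feeble instrument (cf. 18 Labour marginals out of 529 seats)
weak = iv_estimates(p_on=18 / 529, strength=0.3)
# Common, strong instrument for comparison
strong = iv_estimates(p_on=0.5, strength=2.0)

iqr = lambda a: np.subtract(*np.percentile(a, [75, 25]))
print("spread (IQR) of weak-instrument estimates:  %.2f" % iqr(weak))
print("spread (IQR) of strong-instrument estimates: %.2f" % iqr(strong))
```

With the rare instrument the denominator Cov(Z, X) is barely distinguishable from zero, so the ratio is hugely unstable; and note that this simulation grants the authors their second criterion for free, which in the real data they are not entitled to.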

Beyond the discussion of why they think it should be related to hospital density, the authors give no consideration to the suitability of their choice of Instrumental Variable for the statistical analysis they present, and they make no mention of the scarcity of Labour marginal constituencies.  (To be fair, this is only a working paper.)  The fact is that this instrumental variable is wholly unsuitable for statistical purposes: no valid conclusions about causality can be drawn with it.

That makes the rest of the paper moot.  But I can't resist saying a few words about its measure of management quality.  The authors include a resigned note about responses to their questionnaire: "it was harder to obtain interviews with the physicians than managers (80% of the respondents were managers)".  Quite so.  The authors tell us that these measures correlate well with firm performance, but I suspect they translate less well to hospitals, where the important work is done by technical experts whose response to polysyllabic management-speak is likely to be expressed in words of one syllable.
