Denial of access in business banking and the abuse of economic power

The problem

Banks possess an extraordinary power over their business customers. They can deny a business access to its own funds, sitting in its account, thus preventing it from paying its bills, paying its taxes, or moving money to another account, including to an account provided by a competing bank.

This extraordinary power can be exercised at any moment, without notice, without any requirement to give reasons, and it can be sustained for an indefinite period of the bank’s own choosing. Moreover, to date at least, a bank can do these things without any ex post accountability for its actions.

When the power is exercised, denial of access (DoA) to the relevant money transmission services can obviously cause great harm to the customer. We can see this from those occasions on which there have been denial-of-service attacks by cyber-criminals: they cause great distress, tolerable only because the periods over which such attacks have been sustained have been short.

Banks themselves can, however, choose to cause such distress for much longer durations stretching into weeks and months, and it is a matter of recorded fact that that has happened. For an individual customer therefore, the economic harm caused by a bank’s denial of access to an account can be much greater than that caused by denial-of-service cyber attacks.

This capacity/power to cause harms to customers is frequently justified on the ground that its existence serves anti-fraud and anti-money laundering purposes. Therefore, so the argument goes, it provides a more general, albeit more diffuse, benefit to bank customers, which is at least as great as the harms caused.

There is, though, a country mile to travel before concluding that a practice that is known to cause egregious harms is justified because it may have some unquantified benefits. The benefits have to be substantiated and there is a need to establish that less harmful means of achieving them are not feasible. There are trade-offs here that stand in need of examination in the specific contexts of relevance and, as yet, no regulatory authority (in which category the UK’s Competition and Markets Authority (CMA) is to be included) appears to have engaged in detail with them. For banks an abstract ‘justification’ is a convenient assertion: for regulatory authorities it is a challenge.

How can a regulatory authority meet the challenge?

A good starting point for considering the relevant issues is to observe that the power in question is a ‘policing power’:  it is justified in terms of catching criminals and hindering criminality. That suggests using analogies drawn from the exercise of powers by the Police Force itself as opening footholds in an assessment process.

The obvious first point to note is that the exercise of the Police Force’s powers is constrained by a range of checks and balances (see, for example, Police powers of arrest: your rights – GOV.UK (www.gov.uk)). Those arrested/detained must be given a reason for the action, the period of detention ‘on suspicion’ is highly time-limited, and so on.

Such arrangements explicitly recognise that the powers create risks of potential harms to the public as well as benefits, and they implicitly take account of the old wisdom that all power corrupts, and that extraordinary power greatly strengthens the effect.  The powers create risks of abuse – risks that are known to eventuate in practice on a recurring basis. (The ‘good cop/bad cop’ trope is a familiar one in the police procedurals we watch on screens and the media routinely report on ex post investigations of whether or not, in a particular instance, there has been some or other abuse of power.) The arrangements amount to a system of checks and balances (constraints) on what would otherwise be unconstrained actions.

No such checks and balances have, as yet, been established for the ‘detention’ of the liquid assets of businesses entailed by a bank’s denial of access to an account, notwithstanding the great harms that it can cause and the potential for abusive conduct that such power brings. 

In what follows, I argue that a process of establishing the required constraints can be initiated by using existing laws (although I think that there is call for new legislation as well) and that the task of doing this falls to those regulatory authorities with responsibilities to enforce the UK’s Competition Act 1998.

The argument is consistent with a message given to banks by the Financial Conduct Authority (FCA) in 2016, at a time when the FCA was increasing the priority to be given to Anti-Money Laundering (AML) activities: “We note that banks, like all firms, are subject to competition law, in particular the prohibitions on anticompetitive agreements and abuse of market power contained in the UK Competition Act 1998, and in the Treaty on the Functioning of the European Union. They should be mindful of these obligations when deciding to terminate existing relationships or decline new relationships.”

Developments over the last year or so provide the relevant regulators with a near perfect opportunity to get on top of the DoA problem. That is because, while casual empiricism suggests that most banks have been acting responsibly in their use of DoA powers (i.e. not abusing them), at least one prime suspect for systematic ‘bad cop’ behaviour has come into view, namely HSBC UK Ltd. The conduct in question concerns both the nature of its ‘Safeguard’ programme, i.e. the nature of the chosen business policy itself, and the ways in which it has been implemented.

I therefore believe that now would be a very good time for one of the competent regulatory authorities to step up to the plate and establish some ‘rules of the game’ for what might, if unaddressed, become a ballooning problem with systemic economic effects: money transmission is a fundamental pillar of a commercial society, since access to a means of payment is indispensable for participation in commercial life.  Denial of access is therefore a big deal.

The Competition Act 1998

Chapter II of the UK Competition Act 1998 (CA 1998) states that:

Subject to section 19, any conduct on the part of one or more undertakings which amounts to the abuse of a dominant position in a market is prohibited if it may affect trade within the United Kingdom.

(Section 19 provides for the exclusion of some activities from the general prohibition.)

The prohibition raises two immediate questions:

What is meant by a dominant position in a market?

What distinguishes abusive commercial conduct from non-abusive (normal, acceptable) commercial conduct?

What is meant by dominance?

In European and UK law, dominance has come to be defined as a position of economic strength (‘market power’) that confers on a company/undertaking “the power to behave to an appreciable extent independently of its competitors, its customers and ultimately of its consumers.”  Put in terms of perhaps more familiar language, a position of dominance is one in which there are only weak checks and balances (constraints) on the commercial conduct of a business in some or other economically important aspect of its activities. 

There is a question of degree to be settled here since all businesses make their own decisions and, in that sense, act ‘independently’. In practice, therefore, the notion of dominance comes into play when the power in question is judged substantial in its capacity to cause economic harms, where ‘substantial’ is assessed relative to some ‘normally observable’ or normative benchmark.

What is meant by abuse of dominance?

The holding of a position of substantial economic power is not prohibited per se:  the purpose/intent of the legislation is, whenever and wherever such power exists, to prevent or hinder its use in ways that cause significant economic harms (‘abuse’). 

In the law as it has developed in practice, the primary domain/reach of the harms of concern in a particular case is the customers of the business under examination and, if the relevant customers are themselves businesses, the downstream, end customers of the relevant supply chain, i.e. ‘consumers’.

Chapter II can also be, and has been, applied to cases of harm to suppliers by the conduct of a business with substantial buying power and to cases of harms to competitive processes themselves.

In relation to ‘harm to competition’, business conduct can be judged abusive even if it does not cause immediate detriments to customers/consumers, if it can be expected to reduce the effectiveness of checks and balances (constraints) on harmful business conduct in the future. For example, predatory pricing might be good for a firm’s customers in the short run (lower prices), but, if it eliminates a significant rival, competitive pressures (a source of major checks and balances on power) may be weakened in the longer term.

Viewing its two central concepts (dominance and abuse) together, Chapter II can be seen to be a policy implementation of a general, ages-old social norm or normative principle to the effect that ‘with great power comes great responsibility’ (see With great power comes great responsibility – Wikipedia). The legislation serves to sustain and promote this ethical principle in everyday commercial conduct.

Recognition of this linkage to ethical standards in commercial life is to be found in the case law. Thus, in dealing with dominance cases, when faced with what are sometimes barrages of technical economic and legal points, Courts have talked of the ‘special responsibilities’ of dominant companies and have tended to proportion those additional/special responsibilities to the degree of dominance (economic power) itself. From this perspective abuse (which causes harms) can be seen as a significant deviation from these standards, whether the deviation is intentional or not.

Denial of access (DoA) in business banking

As indicated at the outset, denying a business access to its bank account for anything more than a few hours or a day or two can inflict significant harm, to the point of being an existential threat to the bank’s customer, i.e. it could put the customer out of business. Yet, right now, it is possible to observe DoA for periods of two months and possibly more, including for accounts that are manifestly unproblematic.

The domain/reach of the potentially abusive conduct of interest is all UK business bank customers, since access to their bank deposit(s) can be denied to each and every one of them.  They are all subject to the potential for unconstrained ‘police actions’. For CA 1998 purposes, this domain/reach is the most obvious first candidate ‘definition of the relevant market’.

In the business banking context, the harm caused by abusive use of DoA is not significantly mitigated by competition.  Once access to an account has been denied, a disgruntled or distressed customer cannot switch the deposit to, say, a competing bank.  This is, by and of itself, a manifestly anti-competitive effect.

Nor can it be said that the different practices in relation to DoA that are adopted by different banks can be expected to have a material impact on customers’ choices of banks.  In all likelihood, the great majority of customers are not aware of the DoA policies/practices of individual banks and there is little sign that banks advertise their differing policies on DoA in order to win customers from competitors.  

The bottom line is that, in the absence of effective enforcement of competition law, there are no effective checks and balances on abusive practices in respect of DoA.  Inter-bank competition fails in this respect. There is no legislation that addresses the issue.  There is no regulatory or supervisory framework that effectively constrains abusive practices. And to date there has been no enforcement of competition law by the relevant authorities. In a nutshell, use of the policing powers by banks has been inadequately supervised.

In respect of denial of access to money transmission services then, banks satisfy the criteria for dominance established in the Courts: each bank, or at a minimum each of the major banks, possesses the power to behave to an appreciable extent independently of its competitors, its customers and ultimately of its consumers. HSBC’s ‘Safeguard’ programme demonstrates that. More generally in relation to DoA, the capacity to act independently and in a harmful way can be substantiated by a few, simple, observable facts.

It is an anomalous situation.  The Police Force operates within a framework of checks and balances. There can be disputes about its effectiveness, but the framework is there.

Likewise, regulatory authorities exercise strong economic powers (including what can reasonably be called policing and monitoring powers), but they too are subject to supervisory frameworks. To open an investigation in the context of Chapter II enforcement the relevant regulator – whether the Payment Systems Regulator (PSR), the FCA or the CMA – must have reasonable grounds for suspicion that an infringement of the Competition Act 1998 has occurred. There are established processes for conducting an investigation, including consultation. Decisions can be judicially reviewed. Some types of decisions can be reviewed ‘on their merits’.

None of these safeguards — found in other, comparable contexts — exist in respect of bank decisions concerning DoA.  

So to the final point. As someone who has worked on a number of abuse of dominance cases and who, in a professional capacity, has been asked to provide answers to the enforcement Authorities’ first question – ‘are there reasonable grounds for suspicion that there has been an infringement of Chapter II?’ – I can say that, if asked that question in relation to DoA in UK business banking, the answer would be an unqualified yes.

I would, though, also be inclined to add a footnote, in recognition of the fact that enforcement authorities inevitably face the task of determining priorities when allocating limited resources to particular challenges. The additional points would be reminders that:

  • To open an investigation is to do no more than that: the resources that it might entail later depend on what is subsequently discovered, and resource usage can be cut back or enhanced at any time in light of information obtained. A stone is lifted and what follows will be determined by what is found there. That may be less than what might be expected ex ante; it might be more.
  • Abusive conduct thrives when blind eyes are turned, when stones are not lifted. Ending that state of affairs by and of itself inhibits the development of the unwanted organisational cultures, whatever the final outcome of a case taken on. ‘Rattling the cage’ can induce immediate behavioural changes and it can serve the more familiar ‘pour encourager les autres’ purpose of competition law.
  • Bank customers who have been harmed would not obtain any direct compensation from the financial penalties that might be levied if infringements are found, but such a finding would open the door to the possibility of class actions on their behalf for compensation.  It would therefore empower customers and that also is a relevant consideration for an authority or authorities to take into account.
  • The first case in any new economic context tends to carry more than usual influence on what follows.

Annex:  Examples of early-stage questions that might be asked by an investigating authority

What is the bank’s policy in relation to DoA?

If there is no explicit policy, what are the conventional practices that the bank typically follows?

In either case, what criteria are used when deciding whether to deny access to a particular account?

Are customers given reasons for the DoA?

How many DoAs for more than 24 hours have there been and what is the total value of funds affected?

What metrics are used in determining how successful DoA has been in preventing fraud?

Have potentially more customer-friendly approaches to fraud prevention been evaluated?

What are the arrangements for supervising, monitoring and evaluating the DoA decisions made?

What is the average length of time for which an account holder is denied access?

Is there a target maximum time limit on a period of DoA?

How are customer complaints handled?

Is there a customer compensation (for bad decisions) scheme?

Does the bank apply a ‘reasonable suspicion’ test before denying access?

Who in the bank makes these decisions?

Have the responsible decision makers received training in the implications of CA 1998 for their work?

Boris’s Onion

In one of the most striking chapters of Dostoevsky’s The Brothers Karamazov Grushenka tells Alexei a story, a parable of redemption:

Once upon a time there was a peasant woman and a very wicked woman she was. And she died and did not leave a single good deed behind. The devils caught her and plunged her into the lake of fire. So her guardian angel stood and wondered what good deed of hers he could remember to tell to God; ‘She once pulled up an onion in her garden,’ said he, ‘and gave it to a beggar woman.’ And God answered: ‘You take that onion then, hold it out to her in the lake, and let her take hold and be pulled out. And if you can pull her out of the lake, let her come to Paradise, but if the onion breaks, then the woman must stay where she is.’

It is a story grounded in a philosophy of the world rather different from those prevalent on social media today, a case of one good deed being sufficient to offer an opportunity for salvation, regardless of all else, rather than one bad deed or mistake being sufficient to warrant damnation, regardless of all else, as in the ideologies of the new sectarians of cancel culture.

For the latter audience the Prime Minister has, on multiple occasions, done more than enough to warrant damnation, but he does, I think, have a good claim to an onion in the way that he handled vaccine procurement decisions. The same might also be said of Sir Patrick Vallance, whose contribution to the establishment of the vaccine task force may have been indispensable.

By this singular decision-making process, many lives have been saved and many serious illnesses prevented, and one interpretation of recent movements in the opinion polls is that the old philosophy still has a considerable following in the imaginations of a large section of the public.

However, Grushenka’s story does not end with God’s judgment; it goes on:

“The angel ran to the woman and held out the onion to her. ‘Come,’ said he, ‘catch hold and I’ll pull you out.’ He began cautiously pulling her out. He had just pulled her right out, when the other sinners in the lake, seeing how she was being drawn out, began catching hold of her so as to be pulled out with her. But she was a very wicked woman and she began kicking them. ‘I’m to be pulled out, not you. It’s my onion, not yours.’ As soon as she said that, the onion broke. And the woman fell into the lake and she is burning there to this day. So the angel wept and went away.”

The moral of the tale now is that the vaccine episode offers only an opportunity for redemption, which can itself be cancelled by subsequent bad conduct. Empathy offered must be reciprocated, in the manner of Grushenka and Alexei in Dostoevsky’s novel. In contrast, the peasant woman reverts to type and thereby condemns herself.

So the question is: will Boris revert to type, or has he learned something important from the vaccine procurement exercise?  Will the empathy implicit in the Prime Minister’s instruction to Kate Bingham – “Stop people dying” – persist, or will what is to come be yet another illustration of the occupational mental disease of the powerful – hubris, the great destroyer of empathy?

In the Covid period, hubris has been the ever-present mentality in the development of those policies labelled as ‘non-pharmaceutical interventions’ (NPIs), a euphemism for politically repressive social policies. These have been mega-scale social experiments conducted without recourse to the normal methods of policy making – e.g. regulatory policy impact assessments are nowhere to be seen – and with apparent indifference to the sufferings they have caused, in many different ways, to millions.

Even now, with the onion lowered, the hubris continues. The Chief Medical Officer sees visions of 30,000 dead if the old ways are not maintained. Using a different metaphor, ‘the vaccines are not a get out of jail card’ says he, but that is precisely what they are for those who would play the card, a means of liberating a whole population from oppressive, harmful restrictions. It is understandable that a jailor might not be entirely happy about the emptying of his jail, but the rest of us should be able to see that that sentiment likely rests heavily on self-interest.

The hopes of those suffering most depend heavily now on being pulled from their troubles by Boris’s onion, at the fastest possible speed. Will he let them, or will he kick them away, letting them linger a while yet in the lake of fire?

First published by Reaction, 10 March 2021

The Lockdown Mysteries

The question I keep asking is:  Why, nearly a year after their first trialling, have there been no serious attempts by government or its advisors to assess the effectiveness of the packages of restrictive social and economic measures/interventions that, in their more extreme forms, are referred to as Lockdowns?  The absence of any such serious attempt to engage with this basic question of regulatory impact assessment is the first ‘mystery’ of Lockdown.

The position of those who strongly advocate Lockdowns seems to me to be like that of a physician who says to a patient: “I would advise you to take this medicine because I believe that it is effective, but I have to tell you that its effectiveness has never been subject to any rigorous testing and it does have very major, harmful side effects.”  There would no doubt be some takers – ‘believers’ who place heavy trust in the opinion of the physician and are willing to discount all that follows after the ‘but’ – but the second mystery of Lockdown is that there have been so many willing not only to take the prescription, but also to endorse its coercive application to all citizens.

In response to the question of why the assessments have not been made, the principal answer seems to have been that it is all too difficult and that the exercises would be unlikely to produce credible results. That, though, would be of no comfort to the patient:  it simply emphasises that there is no evidential basis for the advice.

Anyone familiar with policy impact assessments might also have some follow-up questions: “These types of assessments are routine; what is it about this one that makes it so much more difficult that it is not attempted?” “Is it lack of data (there seems to be a lot)?” “Is it because the intended impacts are too diffuse (they seem to be clearly focused on infection rates)?” “Is it because the timing of the measures is uncertain (they seem to occur at well-defined dates)?” And a keen student of politics might ask: “Is it because the assessments might produce (politically) inconvenient answers?”

Assessing the impact of a package of policy measures/regulations

To see that impact assessment is feasible, consider the most basic of epidemiological model frameworks, the Susceptible-Infected-Removed (SIR) framework.  It starts with an identity: if I(t) is the number of people who are infected & infectious at any one point in time, then the rate of change of I(t) is the rate of new infections at time t less the rate at which the sub-population of the infected ceases to be infectious.

Next it hypothesises that the rate of new infections is the product of beta, I(t) and S(t), where S(t) is the number of susceptibles in the population at the given time (t) and beta is a positive constant, a model parameter:

Rate of new infections = beta*I(t)*S(t).

Here I(t)*S(t) is just the total number of possible binary contacts between individuals in the infected group and individuals in the susceptible group.  It is a measure of the virus’s maximum number of potential opportunities for transmission at a given moment.  Beta is the effective contact rate, the fraction of the potential opportunities for transmission that are realized.

More complex models take things further than this, including by disaggregating these macro concepts into sub-sets, but this basic framework suffices for current purposes. 
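To make the mechanics concrete, here is a minimal discrete-time sketch of the framework in Python. Every parameter value and initial condition is an illustrative assumption of mine, chosen only to show how the pieces fit together, not an estimate of anything.

```python
# Minimal discrete-time SIR sketch (daily steps).
# beta, gamma, N and the initial conditions are illustrative
# assumptions, not estimates of any real epidemic.

N = 1_000_000        # population size
beta = 4e-7          # effective contact rate: fraction of possible S-I pairings realised per day
gamma = 0.1          # daily rate at which the infected cease to be infectious

S, I = N - 100, 100  # initial susceptibles and infected
for t in range(200):
    new_infections = beta * I * S      # beta * I(t) * S(t), as in the text
    S -= new_infections                # susceptibles are depleted by new infections
    I += new_infections - gamma * I    # rate of change of I(t): inflow less outflow
    if t % 20 == 0:
        print(f"day {t:3d}: new infections = {new_infections:,.0f}")
```

The product I * S counts the possible binary contacts between the two groups at a moment in time; beta scales that maximum down to the transmissions actually realised.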

The hypothesis to be tested is that a set of restrictive social, economic and political measures introduced at a particular time will substantially reduce the rate of new infections. The measures are typically referred to in epidemiological circles as non-pharmaceutical interventions or NPIs, which euphemistically insinuates that they are some other sort of medical intervention.  Here I will refer to them by what they are: socio-economic measures (SEMs) that place restrictions/constraints on individual human conduct.

Given the basic equation for new infections, the only way that SEMs can affect the rate of new infections is via their effects on the effective contact rate, beta.  When considering SEMs, therefore, beta ceases to be parametric and turns into something else, into what economic modellers would call an endogenous variable, something that is determined within a modelling framework.  Thus, what stands to be examined is a chain of causality that runs:  SEMs -> beta -> new infections.

Within this wider modelling framework, which is necessarily socio-economic-political (as well as epidemiological), the first link in the chain remains egregiously under-examined.

At this point, before proceeding, let me make three side comments:

  1. An even wider, more sophisticated political economy would recognise that the timings of the SEMs themselves are characterised by significant endogeneity. The introductions of Lockdowns do not ‘come out of the blue’ in a random or unpredictable way:  they tend to be first developed and then implemented as responses to upward surges in infections, cases, hospitalisations and deaths, particularly when those things are already high.  That matters because it should change, in rather fundamental ways, the specification and estimation of the models.  There is a reverse causality from infections to SEMs that needs to be addressed.  The most obvious issue is that the assessment & implementation lags involved can lead to measures being introduced after (unobserved) infections have already peaked.  It looks so simple – a Lockdown is imposed, infections fall, therefore the Lockdown caused the fall (post hoc, ergo propter hoc) – but what is being observed may be a Rain Dance. Closer investigations are required.
  2. It is a debilitating limitation of the use-in-practice of this type of model that it slips in an auxiliary assumption that the only changes that significantly affect beta are induced by either changes in virus transmissibility or by the SEMs. That is not plausible:  it is directly contradicted by evidence indicating that the known presence of a potentially threatening virus is sufficient to induce major, risk-avoiding adaptations in the behaviour of the public (what might be called ‘natural’ adaptation, where the word ‘natural’ is used, in a classic sense, for things that happen which are not at the will of Leviathan). To be blunt, this is extremely crude social theorising.
  3. The euphemism NPI serves to ‘medicalise’ the language in a way that insinuates that experts in epidemiological models possess expertise in assessing the social factors that co-determine the value of beta.  There is no reason to think that they do, and some reasons for thinking that they may be inferior for the task to an intelligent lay person, because they bring their own, cognitive ‘expert biases’ to the problem.  The language also helps to foreclose the contributions to understanding the evolution of a contagion which scholars focused on those social factors could bring. Among the latter are economists, who tend to place a heavy emphasis on incentive structures as sources of influence on human conduct: most economists would, I think, tend quickly toward a specification that made beta a function of I(t), with beta lower the higher the level of infections. That comes from very basic decision theory in conditions of risk, a relationship that has been empirically mapped in large numbers of different decision contexts. Such a re-specification changes the set of differential equations driving the models, and hence changes the shapes of the epidemiological curves they generate. By recognising a negative feedback – an increase in infections lowers beta and hence reduces the rate of increase of infections – it flattens the curves (see the sketch following this list).
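By way of illustration of the re-specification suggested in point 3, the sketch below makes beta a declining function of the current level of infections and compares peak daily infections with and without the feedback. The exponential functional form of the feedback, and every parameter value, are my own assumptions, chosen purely for simplicity.

```python
import math

# Compare a constant-beta SIR run with one in which beta falls as
# infections rise (a stylised 'natural adaptation' feedback).
# All numbers, and the exponential form, are illustrative assumptions.

N, beta0, gamma, k = 1_000_000, 4e-7, 0.1, 10.0

def peak_daily_infections(feedback: bool) -> float:
    S, I, peak = N - 100.0, 100.0, 0.0
    for _ in range(400):
        # With feedback on, beta declines as the share currently infected rises.
        beta = beta0 * math.exp(-k * I / N) if feedback else beta0
        new = beta * I * S
        S -= new
        I += new - gamma * I
        peak = max(peak, new)
    return peak

print(f"peak daily infections, constant beta:     {peak_daily_infections(False):,.0f}")
print(f"peak daily infections, beta falling in I: {peak_daily_infections(True):,.0f}")
```

The second run produces a lower peak, i.e. a flatter curve, without any SEM having been imposed.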

There is little doubt that the intent of SEMs is to reduce the value of beta.  The question that I therefore want to put on the table, for the umpteenth time, is this:  What does the evidence indicate concerning the effects on beta of the SEMs in general and of those more socially restrictive bundles of measures referred to as Lockdowns in particular?

It is a straightforward empirical question and it is not impossible to address. It is therefore reasonable to expect that it would be an exercise undertaken as part of wider regulatory impact assessments.  However, as indicated, in nearly a year of Covid experience now, such assessment exercises are notable by their absence. 

How is this kind of exercise conducted?  Although new infections are not observable, there are correlates of them to be found in the measurements of cases, hospitalisations and deaths, which can therefore be used as proxy variables, with suitable adjustments for the lags involved.  Echoes or footprints of changes in the rate of new infections should appear in these latter measurements, particularly if the changes are of substantial magnitude: a greater effect would serve to make it easier to pick out the signal from the noise.  (I have been involved in at least three impact assessments where it was sufficient to note, by eyeball, singular kinks in otherwise smooth curves that corresponded exactly with the policy measures under investigation. The first dates back as far as the 1970s and concerned the impact of the nationalisation of British Steel on the rate of diffusion of new steel-making technology.  Interestingly, the relevant diffusion models involved mathematics rather similar to those of basic epidemiological models: the diffusion curves are sigmoids, akin to those shown in charts of cumulative cases and cumulative deaths to be found in the Covid statistics.)

The analyst has a set of high frequency (daily) data time-series available.  The dates of the interventions are known, so it is clear approximately when it is to be expected that echoes/footprints of a significant reduction in beta might be found. It is not a matter of looking for a needle in a haystack.  The techniques involved are bread-and-butter methodologies in the social sciences.  In financial economics and financial accounting alone there have been literally thousands of research papers covering the ground, before getting to applied economics more generally.

So, if the conjecture to be tested – that a bundle of harshly restrictive SEMs has a substantial downward effect on beta – is right, what does the footprint of an effective bundle of measures look like?  The diagram below illustrates.

It shows a highly magnified, short segment of an epidemiological curve around the time, T, at which a bundle of restrictive SEMs is introduced.  Given the magnification, it suffices to represent the segment as linear (an approximation) to simplify the drawing (nothing substantive hinges on this simplification).  It is shown as a rising line ABC – new infections are increasing with time – but it could equally well have been drawn falling, or flat (as would be appropriate, if SEMs were introduced right at the peak of a new infections curve).

A reduction in beta at the given time will imply a discontinuous drop in new infections, shown as a drop from B to D.  A fall in beta of 10% would predict a 10% fall in daily infections, a fall of 20% in beta would predict a 20% drop in daily infections, and so on. 

But then what next, following the fall to D?  At D the evolution of the epidemic is moved to a different epidemiological curve, one with a lower beta but which passes through point D.  That lower curve will be subject to the same equation for new infections, but with a lower value of beta.  The equation also implies that the slope of the destination (lower) infections curve – the slope is the rate of change of new infections with respect to time – will also be lower.  The evolution of the curve from time T becomes, for a while at least, flatter.
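The arithmetic of this predicted footprint can be checked directly in a toy model of the kind sketched earlier. In the snippet below (all parameter values are again illustrative assumptions), beta is cut by 20% at day T; relative to the unchanged path, daily new infections fall by exactly 20% at T, and the slope thereafter is flatter.

```python
# Cut beta by 20% at day T and observe the predicted footprint:
# an immediate 20% drop in daily new infections relative to the
# no-change path, followed by a flatter evolution.
# All parameter values are illustrative assumptions.

N, beta0, gamma, T = 1_000_000, 2e-7, 0.1, 40

def run(cut: float) -> list[float]:
    S, I = N - 100.0, 100.0
    out = []
    for t in range(80):
        beta = beta0 * (cut if t >= T else 1.0)  # intervention at day T
        new = beta * I * S
        out.append(new)
        S -= new
        I += new - gamma * I
    return out

base = run(1.0)      # counterfactual: no intervention
treated = run(0.8)   # beta cut by 20% from day T onward

print(f"fall at T vs counterfactual: {1 - treated[T] / base[T]:.0%}")       # 20%
print(f"slope just after T, base:    {base[T + 2] - base[T + 1]:,.0f}/day")
print(f"slope just after T, treated: {treated[T + 2] - treated[T + 1]:,.0f}/day")
```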

There, then, is the signature of an effective SEM: a sharp and immediate drop in infections, followed by a flatter evolution of new infections thereafter.  Diagrams like this one did in fact appear in the public eye at the outset of the Spring 2020 epidemic, but they became rather harder to find in later periods.  The question is: have repeated signs of this pattern been found in the data?  The general answer is that, to the eye at least, they have not (which may account for the decline in production of diagrams showing discontinuities) and, more surprisingly, there appears to be an aversion to looking to see whether there are such effects by testing for them at a more sophisticated level than use of eyeballs. 

To repeat an earlier point, there is no lack of data available for the task, although there are major challenges in the fact that new infections themselves are difficult to get at and extensive reliance has to be placed on proxy variables.  By their own natures the proxy variables will tend to smooth out the discontinuity to some extent, but, for cases and hospitalisations at least (deaths are more problematic) the smoothing can be expected to be limited to a period of a few days.  The discontinuous fall in beta shown in the diagram can be expected to be echoed in the proxies by a sharp turn downwards, followed shortly by a sharp return to a more normal pattern. (Mathematically, the second derivative of the relevant, proxy data will turn sharply negative at some point after the introduction of the measures, followed shortly thereafter by a sharp turn positive (followers on Twitter might notice again an old emphasis on the significance of second derivatives!)). So that’s what to look for.
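A crude version of that test is easy to sketch. The snippet below uses a hypothetical daily proxy series (placeholder numbers standing in for real cases-by-specimen-date data): smooth the series, take discrete second differences, and look for a sharp negative-then-positive swing in a window around the known intervention date.

```python
import numpy as np

# Hypothetical daily proxy series (placeholders, not real data):
# steady exponential growth with a stylised 20% level drop at day T.
T = 40
proxy = np.array([100.0 * 1.06**t for t in range(60)])
proxy[T:] *= 0.8

smooth = np.convolve(proxy, np.ones(5) / 5, mode="valid")  # 5-day moving average
d2 = np.diff(smooth, n=2)  # discrete second derivative

# The predicted footprint of a fall in beta: the second difference
# swings sharply negative, then sharply positive, close to day T.
window = d2[T - 10 : T + 10]
print(f"most negative second difference near T: {window.min():.1f}")
print(f"most positive second difference near T: {window.max():.1f}")
```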

More than that, there are corollaries of the basic hypothesis that can also be tested.  The harsher SEMs have typically been imposed at regional or national levels, but there are time series of data at upper-tier local authority (UTLA) level. An effect might be difficult to detect from only one time series, but major regional or national level SEMs should have simultaneous impacts with similar geometries across all UTLAs to which they apply.  The existence or non-existence of such simultaneous impacts is therefore a matter that could be examined.

Similarly, there are observations of occasions when SEMs have been removed, in periods when the epidemic has been on the wane (illustrating the endogeneity of the timing of changes in SEMs).  In these cases we are moving in the opposite direction, so the underpinning proposition – that SEMs have major, downward effects on beta – implies that their removal should lead to a discontinuous upward jump in beta and in new infections, followed by a steeper decline in new infections thereafter.

Given these (and other) ways of assessing the effects of SEMs, the lack of curiosity about testable effects is something I find astonishing, at least among people who would want to think of themselves as scientists.  More restrictive SEMs are not the normal sort of placebo measures often to be found in public policy when politicians want to do ‘something’ to reassure the public or a relevant lobby group, whilst reasonably safe in the knowledge that the chosen ‘something’ is very unlikely to cause any major harms.  Policy advisors can shrug at that practice in good conscience – no material harm done – but Lockdowns are a very different kettle of fish:  they cause very major (because so widespread) harms.

Why the lack of scepticism?

“Science is the belief in the ignorance of experts” said the physicist Richard Feynman, indicating the importance of a Humean scepticism in scientific endeavour.  As in Hume’s day, however, scepticism is anathema to many ‘believers’ (I use one of the possible antonyms to ‘sceptics’).

There are likely a range of reasons why so many have simply accepted the assertion that restrictive SEMs lead to substantial reductions in infections in the absence of any rigorous assessment to test that claim against the evidence.  The sentiment ‘the wish is father to the thought’ (we believe things we want to be true) is likely one of them.  In the face of a fear-inducing increase in risks and in the absence of a vaccine, we can desperately want it to be true that there are things that can be done to substantially reduce those risks.

However, there are also advocates of lockdown who, in other contexts, would tend to look favourably on a desire to examine the evidence when developing and implementing public policies.  Why are they not their usual, scientifically-sceptical selves when it comes to this issue?

The explanation of it lies, I suspect, in those auld enemies of social scientists, intuitions.  Intuitions, an indispensable brain function for making snap judgments, can be helpful in suggesting hypotheses and theories, but they are best set aside when it comes to testing propositions.  Some may survive the tests, but very many won’t.

Two intuitions may be at work in relation to SEMs.  The first is that, other things being equal, social distancing reduces transmission (and hence the effective contact rate, beta), and there are good grounds for thinking that is true in most circumstances. The problem with this kind of intuition is simply a very general one:  other things rarely stay equal in the aftermath of major policy changes.

The second intuition, which is necessary to get from the first to a policy position, is much shakier.  It is that restrictive SEMs serve to increase social distancing in some aggregate sense. 

The problem is that the SEMs are ill-targeted, blunderbuss measures.  Similar restrictions are applied across a whole patchwork of different socio-economic contexts.  There can be contexts in which they will reduce transmission, others where they have little or no effect, and yet others where they have perverse (opposite to intended) effects. 

As a general matter, it might be posited that public spaces will be emptier, but that other spaces will be fuller (at any given time, people have to be located somewhere, they don’t just disappear for a while).  Shopping streets, restaurants and pubs might be emptier, but homes will likely be fuller for longer periods, and therein lies an illustration of the problem.  For millions of Britons, the home is one of those confined, relatively crowded spaces (which is maybe poorly ventilated in colder periods of the year) that Japanese Covid policy has advised positively avoiding.  Moreover this may be particularly true in areas of the country that rank high in tables based on metrics of social and economic deprivation where, as a matter of record, estimated infection rates have been highest.

Homes can also be locations where there are more face-to-face conversations in a given interval of time, and where those conversations can be louder:  it is a place where more intense emotions can be expressed loudly in ways that tend to be repressed in public, e.g. parents and their children have been known to yell at one another from time to time.  Again, face-to-face interactions and loud noise are things that Japanese Covid policy has advised avoiding as much as possible.

On top of these points is the fact that many millions of workers are still at work:  their activities are, for a variety of different reasons, deemed essential. Those workers return to homes when their day is done, connecting up the ‘at work’ and ‘at home’ contact-networks. Additionally, the critically important group of healthcare and care workers connect the other sub-networks to the set of those who are the most susceptible to the worst effects of the virus, the very elderly and those suffering from relevant co-morbidities.

Finally, of course, there are the usual, hugely important issues of compliance with the regulations, but enough said.  Contexts are almost uncountably varied and the detailed investigation of them would be an impossible task. 

We are left in the end, therefore, with the various time series of data that are available and with the question:  Is there anything here to suggest that the more restrictive bundles of SEMs have had material, downward impacts on the (aggregate) effective contact rate, beta?

As a Bayesian – a disposition that requires frequent adjusting of (probabilistic) beliefs to new evidence – I keep an open mind.  A rigorous impact assessment of the evidence could go either way, and could even suggest different directional effects of SEMs at different stages of the contagions and in different seasons. However, looking at the data I would bet that, whatever the directional effect indicated by the exercise, it would be found to be rather small in magnitude.  Big effects tend to lead to patterns of observations that should be easy to spot (like the effects of nationalisation on the diffusion of new steelmaking technology): major kinks in the curves of the predicted type would likely be apparent to the eye, and they are not.

Has Lockdown increased Covid infections?

It seems like a silly question, doesn’t it?  Lockdowns are intended to reduce infections and that they do so may seem intuitively obvious.  Maybe they are less effective than claimed or intended and are not worth the costs.  But increase infections?  Surely not.

However, an intuition that is inadequately tested against observations is just another name for a prejudice. As I learned in a brilliant, motivational, undergrad lecture introducing me to social science more than fifty years ago, the social value of social science chiefly lies in its ability to root out  false intuitions/prejudices and replace them with something in greater conformity to observable realities.  So, let’s put some beliefs about the effects of Lockdown to the test.

Infections are not observed and are not realistically observable.  What we have are a series of indicators from which the time pattern of infections might be inferred:  cases, hospitalisations, deaths.  Each comes with its own limitations and all come with a lag: they are records of events that are consequential on infections in the past and they become available some time after those infections have happened. 

Those lags pose major problems for public policy in a context where events are moving very rapidly.  The speed of change is exemplified, for example, by the UK Government’s Covid dashboard, which on 23/11/2020 headlined an estimate of ‘R’ in the range of 1.0 – 1.1, accompanied by the statement that cases were growing at a daily rate of 0% – 2%.  Beneath the headline could be found the then latest estimate of the (past) 7-day rate of growth in infections.  It was minus 22.9%.  You could, accurately, say that the website authors were somewhat ‘behind the curve’.

Covid ‘cases’, i.e. tests yielding a positive, are the statistics nearest in time to the infections that give rise to them.  They come with numerous limitations, but I have always thought that they can at least serve as useful indicators of the evolution of the epidemic over time.  For example, in the UK the official cases numbers stand well below estimates of new infections derived from other methodologies, such as those used in the ONS’s weekly surveillance report or in the ZOE app’s symptom reporting system.  Yet all three systems of measurement/estimation find a similar time profile for the epidemic.

On a day-by-day basis the cases data are noisy, inter alia being subject to the vagaries of the testing and reporting systems in play, but some clear patterns can be observed nonetheless.  It is these observable patterns in the data that contain the information required by policy makers who seek to look forward, not backwards.

One of the patterns is a consistently low reading in the date-of-specimen data (relative to other days of the week) for Sunday cases. We have reasonable sight of why that is so.  Many of the Covid tests are conducted at home and involve both the postal receipt of a testing kit and the return of the sample taken, but the postal service quietly sleeps on Sundays and recipients of the kits are advised not to post them on that day.  That tends to induce test-kit recipients to perform the tests either before or after a Sunday (and the weekly spike in cases found in the date-of-specimen data for Mondays suggests it is mostly the latter).

It can be inferred, then, that Sunday cases are more tilted toward people who have turned up at testing centres on that particular day of the week.  We might guess that these people are more symptomatic than average, but, if so, that bias is of little consequence if the aim is to understand the time profile of the epidemic’s evolution:  there is no strong reason for thinking that the proportion of infections that turn out to be more symptomatic varies greatly through the course of a contagion.  So long as like is being compared with like, in similar ways though at different times, the signals should be reasonably reliable.

There is clearly a trade-off in working with a highly-reduced dataset of Sunday observations only.  It takes out a lot of noise in the daily data and greatly reduces and simplifies the analytic load (allowing me, in a few hours, to conduct an essential exercise that has been neglected by a whole state apparatus over a period of more than eight months).  On the other hand, it does set aside large numbers of observations that can also be expected to carry relevant information. It is therefore appropriate to make at least some checks that the data reduction does not seriously mislead, although more comprehensive assessment exercises must necessarily be left to those with the resources to carry the greater burdens.
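For what it is worth, the data reduction itself is trivial to perform. A sketch using pandas, in which the file name and column names are hypothetical placeholders:

```python
import pandas as pd

# Reduce a daily date-of-specimen series to Sunday observations only.
# The file name and column names are hypothetical placeholders.
df = pd.read_csv("cases_by_specimen_date.csv", parse_dates=["date"])
sundays = df[df["date"].dt.dayofweek == 6].sort_values("date")  # Monday=0 ... Sunday=6
print(sundays[["date", "new_cases"]])
```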

The Sunday date of specimen data, starting from the first Sunday on which the weekly ‘low’ appears clearly in the data record of the Autumn epidemic in the UK, is shown in the first chart below.

As can be seen, the data follow the classic pattern of the upswing of an epidemic curve:  cases rise quickly, then the rate of growth slows and the curve turns.  This is what the epidemiological models tell us to expect to happen and it is what we have observed to happen in curves from around the world. 

The eye is drawn to the rising numbers in the chart, but it is the less obvious curvature exhibited that contains the clues as to the future evolution of cases.

Given the reduced dataset, it is comforting that the implied curve becomes flat between Sunday 25 October and Sunday 1 November.  The ONS surveillance study (much the most direct measure of new infections that we have) puts peak infections sometime in the week ending 25 October.  Since cases tend to lag infections by several days, that is consistent with a peak in cases in the week defined by these two Sundays (the peak of an epidemic is characterised by a zero slope in the daily infections curve).  The consistency between two very different assessment exercises suggests that nothing of major significance (for current purposes) is being missed by working with a reduced dataset (of Sunday observations).

The observed data pattern can be projected forward to generate a counter-factual in which there was no Lockdown (announced on 31 October, implemented on 5 November).  UK government ‘scientific’ advisors have consistently based forward projections on an assumption of sustained exponential growth in cases over the near future. However, the data in the chart above do not follow such a pattern and starting from an assumption that is falsified by the evidence has led only to nonsense.

In developing a data-consistent projection, I am going to work with logarithms of the cumulative cases data corresponding to the eight measurements in the chart.  This takes us back to the sort of curve that was much examined at the beginning of the Spring epidemic. These graphs showed cumulative deaths as a function of time on a logarithmic scale. They are still to be found on sites like Worldometer and Our World in Data.  The ‘cumulative’ curves go ever upward, but, past a certain point (which, empirically, is to be found at an early stage of the Covid epidemics around the world), they flatten as they rise. 

The reason for working with logarithms is that it facilitates examination of the curvature pattern in the data and, as already indicated, it is this curvature that contains the clues as to the possible future evolution of the epidemics.  It is a perhaps remarkable finding of fact that the rate of growth of cumulative cases/hospitalisations/deaths declines with time over almost the entire lifetime of an epidemic. It is a bivariate relationship between a growth rate and time that is amenable to estimation from a relatively limited data set.  Moreover, study of this type of relationship has been going on for more than two centuries in a range of different contexts.

Thus, in the early nineteenth century, a London actuary, Benjamin Gompertz, sat down with a whole set of mortality tables and found that a very simple relationship between the growth of cumulative deaths and time could provide a decent summary of the various, different records.  Gompertz’s proposition was that the rate of growth of cumulative mortalities was a negative exponential function of time.
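In modern notation (my paraphrase, not Gompertz’s own formulation), the proposition is that the growth rate of the cumulative count C(t) decays exponentially with time, which integrates directly to the familiar double-exponential Gompertz curve:

```latex
\frac{d\,\ln C(t)}{dt} = a\,e^{-kt}, \qquad a, k > 0
\quad\Longrightarrow\quad
C(t) = C_0 \exp\!\left(\frac{a}{k}\bigl(1 - e^{-kt}\bigr)\right)
```

As t grows, the growth rate tends to zero and C(t) flattens toward a plateau at C_0 e^{a/k}: precisely the ‘flatten as they rise’ pattern described above.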

Now read on, keeping the next chart in mind.  It shows a closely associated relationship (not quite the same as Gompertz’s) for the pre-Lockdown Autumn Covid epidemic in the UK.

The negative relationship is there to see.  It is something that SAGE/PHE have never focused on or utilised, even though attention was drawn to it at the beginning of the European epidemics by a Nobel bio-physicist, Stanford’s Prof Michael Levitt, based on his own, early study of Wuhan data.  Subsequently, Prof Levitt has mentioned Mr Gompertz quite a lot, but, for his labours he has suffered the intellectual ostracism familiar to truth-tellers through the ages.

The irony is that, whilst SAGE/PHE have been eager beavers when it comes to projecting positive exponentials out into the near future, the idea of projecting a negative exponential (or, indeed, anything going downwards) appears to be an anathema, a heresy that challenges the Party Line.  Positive exponentials are scary things (Good), negative exponentials are not (Bad).

Looking at the data in the growth chart, any fitted trend is going to be heavily affected by that first observation.  It is the most distant from the time period of interest (the four Lockdown Sundays), occurring at a relatively early stage of the Autumn epidemic when smaller numbers of cases were liable to cause greater volatility in growth estimates.  On a judgment call, I will omit it for the purposes of the required counterfactual exercise, whilst noting that the effect of that omission will likely be to increase the counterfactual projections that follow below and, relative to a strict data read, bias things in favour of finding Lockdown effects that are more benign.  It’s the sort of judgment call whose wisdom can be tested out ex post.

That (dropping the first observation) done, the next steps are to fit an exponential trend to the remaining data (as an approximation) and to use projection of that trend to estimate future growth rates.  Moving back from logarithms to daily case numbers then yields the profile for Sunday cases shown in the next Chart. 
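A sketch of the kind of calculation described, using numpy. The cumulative Sunday case numbers below are placeholders rather than the actual figures; the steps are those set out above: compute week-on-week logarithmic growth rates of cumulative cases, fit an exponential trend to those rates (a straight line to their logarithms), project the trend forward, and rebuild the implied counterfactual case numbers.

```python
import numpy as np

# Hypothetical cumulative cases on successive Sundays (placeholders,
# not the real figures), after dropping the first observation.
cum = np.array([120_000, 180_000, 250_000, 320_000, 385_000, 440_000, 485_000.0])

# Week-on-week growth rates of cumulative cases: g_i = ln(C_i / C_{i-1}).
g = np.diff(np.log(cum))
weeks = np.arange(len(g))

# Gompertz-style assumption: the growth rate itself decays exponentially,
# so ln(g) is (approximately) linear in time.  Fit, then project forward.
slope, intercept = np.polyfit(weeks, np.log(g), 1)
future = np.arange(len(g), len(g) + 4)
g_proj = np.exp(intercept + slope * future)

# Rebuild the cumulative counterfactual and difference it to get the
# implied Sunday-to-Sunday new cases over the projection period.
c = cum[-1]
for i, rate in enumerate(g_proj, start=1):
    c_next = c * np.exp(rate)
    print(f"Sunday +{i}: projected weekly new cases = {c_next - c:,.0f}")
    c = c_next
```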

The four numbers, for the Sundays of 8, 15, 22 and 29 November are the counterfactual estimates of the evolution of the Autumn epidemic in the Lockdown period.  That is, they are estimates of what things could be expected to look like, if the Lockdown decision had not been made and the government had continued to rely on pre-existing policy.

The general shape is a familiar one:  the curve goes up, turns and then goes down.  The Autumn wave curve for Switzerland, a country that has not introduced a national lockdown, looks like this, albeit that the epidemic there has been more intense and the Swiss peak is more sharply defined (more Matterhorn than Cross Fell).

Data for the actual out-turns for the first two Sundays in Lockdown are already available in finalised or near-finalised form and these are included on the next Chart, showing both the counterfactual projection and the deviation of the actual out-turns from it (the orange bars).  It points to Lockdown having had an initial, upward impact on cases. The upward impact is of quite substantial magnitude.

Given the relatively unfavourable conditions in which the virus is now attempting to spread, it can be expected that this initial impact effect will wash out over time, and the second of the orange blocks indicates that this is already happening.  An interesting corollary, which arises from the ‘overshoot’ estimated to have occurred on account of the Lockdown policy, is that the post-peak rate of decline in cases can be expected to be unusually fast. The washing out of the overshoot adds to the ‘natural’ downward momentum of a contagion in retreat.

As indicated at the beginning, I expect that many will find these results counter-intuitive, but the numbers are as the numbers are. It is natural though to ask whether there are any plausible explanations for them.  That is beyond scope here, except to say that intermediate micro-economic theory can give a plausible account as to how and why the identified, unintended consequences could have occurred (it’s to do with the economics of the choice of timing for consumption and leisure activities).  As a corollary of that, it cannot be said that, though unintended, the outcomes were unforeseeable. Some did indeed foresee them and drew attention to them as a likely negative consequence of Lockdown. As challengers of Groupthink, they were simply unheard, that’s all.

For those in authority it is an uncomfortable fact that a large research literature on the effects of regulation indicates that unintended consequences are near ubiquitous and that, not infrequently, those consequences turn out to be the opposite of what was intended, or at least it is claimed was intended, for the measures imposed.  For anyone steeped in that research base nothing in the above should come as a surprise. 

The above exercise cannot be read as definitive (nothing in science is), but I hope it will stimulate further work, not least by government advisers.  The critical first step is to recognise that a foundational exercise in this type of assessment work is to develop reasoned counterfactuals that can be substantiated on the basis of whatever evidence is available, however limited that is.  Absent that, all claims about the effects of this or that policy are little different from muttering gollum, gollum, gollum.  Members of Parliament and members of the public are right to expect more than that from our Government.

Postscript

The above was written in November 2020, before reliable case data were available for the 22nd and 29th November. It was subsequently updated in Twitter posts to include those two dates, yielding the Chart below.

At the time of updating, the increased, positive effect on cases (the orange bar) for 29th November was something of a puzzle. However, it coincides almost precisely with an unexpected upturn in hospital admissions whose potential significance seems to have flown well above the heads of most commentators. Looking back, the most obvious explanation is the increasing, national spread of the UK’s Kent variant of the virus, about which the public and the WHO were only informed on 14 December, even though it has been subsequently revealed that it was a major presence in Kent as early as October 2020.

In November 2020 policy and media attention was heavily focused on the Lockdown measures and it appears that a real and present danger, which did not fit into the preferred narratives of the time, was largely ignored.

Against Covidism

Barring a miracle, this week will see the English public facing a further wave of oppressive regulation that has been ‘justified’ on the basis of the forecasts emanating from a set of epidemiological models. The earlier counterparts of these models have produced some wildly exaggerated predictions: they have failed to accord with subsequent observations and measurements, and by large margins. 

Models, or more generally theories, that fail such a test are normally discarded by scientists, in the same way as would a prospective vaccine that had failed in trialling and testing.  The culture of science requires continual checking of models/theories against observational data and, ultimately, a submission to those observations/measurements/data.

That no such submission of theory to evidence has been forthcoming is, I think, a reflection of the now ideological nature of Covid-19 controversies. In this context, the models themselves can reasonably be regarded as ideological constructs in an everyday sense of the word ‘ideology’.  That is, they comprise whole sets of assumptions or propositions that, when taken together, lead to a particular view of the world. They become an aspect of what can properly be called science when, and only when, (a) the chief preoccupation of their developers is to achieve conformity with observational data and (b) there is a ready willingness to set them aside when they fail to do so.

Very little of that preoccupation and readiness is observable in the Covid context:  untested assumptions (which are inputs into the models) abound and rather radical deviations of the modelling forecasts from actual out-turns seem to bother their proponents little. Sometimes they appear a bit like Tom in the Tom and Jerry cartoons: sliced and diced by Jerry, Tom is ever able to reconstitute himself and go on as if nothing much has changed.

At a more general level, beyond narrow questions about models, there appears to be an ideological tendency at work which I will call Covidism and define as follows:  it is the political belief that suppression of one, particular human pathogen, SARS-CoV-2, is of sufficient priority to justify far reaching and sustained disempowerment of a whole population by a political executive.

Before teasing out one or two of the extraordinary assumptions/propositions on which such a general view rests, it should be said at the outset that there is nothing necessarily amiss in those public officials who are charged with addressing problems arising from the spread of the virus becoming engrossed by that particular challenge.  That intense interest might simply reflect a division of labour within government, something that is necessary for effective governance.

The mischief arises when an obsession with suppressing SARS-CoV-2 becomes a dominant influence on the policy development process as a whole, or, put another way, Covidism establishes itself as a ‘hegemonic’ ideology in government.

This is, de facto, a breach of any principled division of labour in government. It doesn’t just say that virus infections are a highly important problem that needs to be addressed, a proposition with which most people would readily agree. Rather, it says that all other considerations are, in comparison, relatively unimportant. That is a judgment about relative importance and it necessarily engages much broader balancing assessments of the effects of public policy.  Only elected politicians, subject to an established system of checks and balances, command the legitimacy required to perform that task.

Whilst the origins of Covidism as a political force might be traceable back to a medical establishment, it has therefore only become a dominant ideology by converting senior politicians to its core, defining belief, or by at least convincing them that these are matters that they should leave to others, e.g. advisors. That this is what has happened is a proposition that can be verified by observing and comparing (a) the attention and effort given to the assessment of the infections, morbidities and deaths the virus causes and to measures/actions taken to reduce those things, with (b) the attention and effort afforded to all other medical, social and economic effects that the responsive measures/actions (not the virus) are liable to cause.

It is striking that no systematic regulatory impact assessments of public health measures/actions taken or contemplated have been conducted and made available for public discussion. ‘Experts’ in various other fields, but with no experience of policy assessment, have pronounced on the anticipated effects of this or that policy measure, but substantiation for those speculations has been notable by its absence. The issues are of huge policy significance, yet there has been a continuing resistance to exposing the speculations to critical examination against evidence and data.

From a political economy perspective, Covidism makes no sense.  The broad objectives of public policy are usually cast in terms of promoting some notion of human wellbeing in the round and, whilst there can be arguments aplenty as to how that should be more precisely understood and measured — as well perhaps as arguments as to whether it is too species-specific (too homo-centric) — that overarching aim carries wide support and consent. 

To explore and make more precise the basis of this ‘no sense’ judgment we might first think of a measure of wellbeing, W, as being dependent on three factors or sets of factors:

A) The SARS-CoV-2 infection rate, I;

B) Public health and social care factors more generally, H; and

C) All other relevant factors, X, which will encompass a very wide range of social and economic matters.

Second, in seeking to promote wellbeing/welfare, W, it can be assumed that policymakers have the power to choose one of a set of measures, m, from a feasible set of alternative combinations of measures, M.  

In general, the set of measures taken will affect each of I, H and X, which can be expressed by writing them as I(m), H(m) and X(m) to indicate a functional dependence. In a typical economics teaching context the resulting challenge of finding the best set of measures (the best policy) would be specified as:

               Choose m from a feasible set M to maximise W(I(m),H(m),X(m)).

That is a well-defined optimization problem. Although its application to actual policy choices will pose an array of difficult, practical challenges, the simplicity of the formulation directs the focus to the right issues concerning aims, constraints, and trade-offs.

Covidism does not, however, adopt this approach. Starting from the largely uncontested assumption that, other things being equal, an increase in the infection rate, I, will lead to lower wellbeing, W, in its extreme form the doctrine says:

               Choose m from a feasible set M to minimise I(m).

That is a very different optimization problem. A wide public policy objective has been displaced by a very narrow one: the minimisation of Covid infections.

The two, different optimization problems can be expected to lead to different, and potentially very different, solutions (i.e. to different sets of measures being adopted). And, since it does not seek to maximise wellbeing, it can safely be concluded that the pursuit of such a Covidist agenda will be harmful to social welfare in the round.
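To make the contrast concrete, here is a minimal numerical sketch in Python of the two problems. Every functional form and weight in it is an illustrative assumption of my own, not an estimate of anything; the point is only that the welfare problem typically has an interior solution, whereas the Covidist problem is solved at a corner, i.e. at maximal stringency whatever the collateral costs.

    import math

    # m in [0,1] is the stringency of measures. Assumed, illustrative forms:
    # infections I fall with stringency; the health (H) and other (X)
    # components of wellbeing also fall with stringency.
    def I(m): return 100 * math.exp(-3 * m)
    def H(m): return 50 - 30 * m
    def X(m): return 80 - 60 * m

    def W(m): return -0.5 * I(m) + H(m) + X(m)   # an assumed wellbeing function

    grid = [i / 100 for i in range(101)]
    m_welfare = max(grid, key=W)    # choose m to maximise W(I(m),H(m),X(m))
    m_covidist = min(grid, key=I)   # choose m to minimise I(m)
    print(f"welfare-maximising m = {m_welfare:.2f}; Covidist m = {m_covidist:.2f}")

Under these assumed numbers the welfare problem selects a moderate stringency (m of about 0.17), while the Covidist problem always selects the most stringent feasible measures (m = 1).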

In sociology, the shift in the objective function from ‘maximising W(I(m),H(m),X(m))’ to ‘minimising I(m)’ could be classified as an example of ‘goal displacement’, a process in which the means intended to achieve the goals/ends of a wider social collectivity themselves come to be the goals or ends to be sought, at least for some members of that collectivity.

Goal displacement is ubiquitous in large bureaucracies and one observable form of it, which happens coincidentally to have been the focus of my own, early-years’ research, is the displacement of a business’s or an organisation’s publicly legitimised goals by the goals of its senior management (a body of people who are typically in possession of the influence and power required to effect the transformation in the organisation as a whole). In the case of Covidism, therefore, to my mind the senior management of the NHS and their advisors can be classified as prime suspects in the goal displacement process.

However, as indicated above, for the displacement to occur within a political (rather than a business or bureaucratic) setting, it is, I think, necessary for there to be a breakdown in the normal division of labour between policy developers, analysts and assessors on the one hand and political decision makers on the other hand.

What has been described thus far is the extreme form of Covidism, but ideologies are typically characterised by different degrees of zealotry among their followers.  In this case, a less extreme form might allow for some weight to be given in evaluating policy measures/actions to their effects on H and X, but only a small weight in comparison with that to be given to reducing the SARS-CoV-2 infection rate, I.

Even an administrative agency taking this less extreme view would, I think, have very great difficulty in defending its position in the event of any judicial review of its processes and decisions, unless it was compelled to do so by statutory duties.  The Court would immediately point to the failure to give adequate consideration to salient factors and evidence relevant to decisions, namely the effects of measures directed against SARS-CoV-2 infections on wider public health and on social and economic life more generally. The expected judgment would, I think, be that it had acted unlawfully.

Perhaps in implicit recognition of this potential legal constraint on what they advocate, leading Covidists in the UK have tended to resort to a further argument, although it is one that is usually not developed in any substantive way.  Rather it has mostly been put forward as a matter of simple assertion, not clearly reasoned and with no substantiating evidence offered.  The assertion is that getting SARS-CoV-2 infections down is a necessary condition for progress in promoting the other, wider public policy aims. This insinuates that suppression of infections remains a means to an end, and hence that it has not, in fact, displaced the general welfare objective.

There can, however, be no prospect of easy escape via resort to this piece of rhetorical trickery.  The necessity proposition itself has to be substantiated, which requires both (a) that the impacts that measures to minimise infections have on wider public health and on social and economic life be fully considered and evaluated and (b) that those effects have been appropriately weighted.  That has not been done.

The only viable defence against a claim of unlawfulness is, I think, to argue that minimising infections represents a social consensus, i.e. that public attitudes imply the relevant, wider welfare/wellbeing objective reduces to W(I(m)). Put another way, it requires that a substantial majority of the people are, by one means or other, converted to Covidism:  Covidism has to become a dominant belief system in wider society, not just in government.

In Britain there has been a quite explicit attempt to do exactly that, i.e. attempt a conversion of an entire population to a particular view of the world (ideology) and to the social conduct that it entails. As to the means, they have been a mix of persuasion, propaganda, deception, fear-creation and coercion that is broadly similar to those used in historical precedents for this type of exercise. 

Two types of exponential bias

Exponential growth bias (EGB) is one of a long list of cognitive biases identified by psychologists and perceptions of it and its prevalence have come to play a significant role in the development of policy responses to Covid-19. 

In brief, the bias is seriously to underestimate the future values of something that is growing at a constant proportionate rate. Its existence has been well attested in research stretching back over several decades and a classic illustration is an ancient puzzle:  Put one grain of wheat/sand on the first square of a chess board, double it on the second, keep doubling until the 64th square and then ask how many grains there will be on that last square. The numbers just race upwards in a way that most people cannot get their heads around.
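For anyone who wants to check the arithmetic, a two-line sketch in Python suffices:

    # Grains on the 64th square (one grain on square 1, doubling thereafter),
    # and the total across the whole board.
    print(f"{2 ** 63:,}")       # 9,223,372,036,854,775,808
    print(f"{2 ** 64 - 1:,}")   # 18,446,744,073,709,551,615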

At the opening of the encounter between the British public policy making establishment and the SARS-CoV-2 virus, a view was formed that the public would significantly underestimate the rate of spread of the virus and hence underestimate its risks.  In effect, there was an assumption that the public would exhibit EGB.  From this it was deduced that risk-avoiding social conduct taken of their own volition would be too low for the good of the community: the public would be insufficiently fearful.

The view that was adopted lacked any strong evidential support — it was not science as we know it.  It was an assumption/pre-judgment, but there were some justifying reasons for it. Most of the population are not familiar with exponential curves and their rather special properties.  Nor have they had repeated exposure to contexts in which something or other is growing exponentially, from which experiences they might have (inductively) developed rules of thumb that could guide them in relevant circumstances (as coastal dwellers did in charting the tides and linking them to lunar cycles long before the arrival of Newton’s theory of gravitation).

However, these reasons were well short of sufficient for safe adoption of the assumption.  There are psychological studies that indicate tendencies to over-estimate risks in particular contexts, for example risks of criminal harm perceived by older citizens, or, more generally, risks of rare and unfamiliar events that are characterised by adverse and vivid consequences that grip the mind (‘sensational events’). And perhaps the crucial observation in all this is that, back in March, Covid-19 certainly did appear to grip the minds of those charged with its management: the fear was almost tangible.

More self-aware policymakers would have recognised that there is a familiar motes and beams issue here:  an assumption that the high level of fear of the ‘managers’ was appropriate, but that the fears that the managers perceived (without much in the way of evidence) among the hoi polloi were inappropriately low. But, not for the first time in human history, the beam was not spotted. The resulting, narrow and unbalanced assessment of the various cognitive biases that might be in play led the British Government, under the guidance of people identified as scientists, deliberately to adopt a policy of inducing greater fear of the effects of SARS-CoV-2 among the general public.

If the narrow goal was to enhance fear to a level judged appropriate by the fearful, the policy was spectacularly successful in achieving it: in comparative international studies Britons are now assessed as the most fearful of peoples.  Judged against wider goals, however, it has been a spectacular failure, a policy folly: Britain stands very high in the international league for excess death rates and in the bottom reaches of the league for adverse economic impacts. Moreover, a former judge of the UK Supreme Court, Lord Sumption, has characterised the effects of policy as “… the greatest invasion of personal liberty in our entire history” – and that is a long history, not free of a tyrant or two.

In reality, and like sorrows, cognitive biases tend not to come as single spies, but in battalions. Most of us are subject to several, but there is one, hitherto largely unidentified bias that appears highly relevant for understanding our current situation.  I will refer to it as Type 2 Exponential Growth Bias, to distinguish it from the familiar EGB (henceforth Type 1 EGB).  It is simply defined as a tendency to ‘see’ or infer an exponential curve (or something broadly similar, a curve that is rising and bending upwards) where no exponential curve exists or is likely to exist.

An immediate implication of this definition is that it is a bias whose incidence is likely to be limited to that class of people who have some familiarity with, although not necessarily a full understanding of all, the properties of these curves/functions. To imagine an exponential where none exists it is necessary first to have some view of what an exponential curve looks like, and possession of such a vision is not ubiquitous.

I first became aware of the phenomenon back in 2007 when conducting a piece of research on obesity policy and hence was looking at a stack of data on the matter, as well as rifling through recent papers on the topic. Among the latter there was an editorial piece in a medical journal that contained a chart showing a hypothetical projection of obesity prevalence over time.  Given the availability of data, it was odd in that there were no units shown on the axes (not quite science as we know it), but even odder in being a rather sharply rising exponential curve. 

How could that be?  Percentage prevalence of obesity, however measured, is bounded above at 100%: that’s where the feasible per-cents run out.  If the rate of increase has gone up – as it had in the 1990s – it must later come down again, as it has done since circa 2005 on UK metrics. 
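The general point is that a prevalence bounded above must eventually follow something like a logistic curve, which looks exponential in its early stretch but is obliged to bend. A minimal sketch, with parameter values that are purely illustrative assumptions of mine:

    import math

    # A logistic curve bounded at 100% versus the naive exponential that
    # matches it at t = 0. Growth rate and inflection point are assumed.
    rate, inflection = 0.15, 40
    start = 100 / (1 + math.exp(rate * inflection))   # logistic value at t = 0
    for t in range(0, 81, 20):
        logistic = 100 / (1 + math.exp(-rate * (t - inflection)))
        naive = start * math.exp(rate * t)
        print(f"t={t:2d}: logistic {logistic:5.1f}%, naive exponential {naive:9.1f}%")

The two are indistinguishable early on; by the end of the run the naive exponential has raced past 40,000% while the logistic sits just under its 100% ceiling.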

History rhymes, and almost daily in the papers over the past few days there have been accounts of medical ‘scientists’ in Britain warning, like the obesity medics of fifteen years ago, of exponential growth (or something approximating it) in infections to come. And today (20th September 2020) we have the Chief Scientific Adviser to the Government saying that infections are doubling around every seven days and conjuring up visions of 50,000 infections a day by mid-October unless, by a familiar recourse to politicians’ logic, something is done about it.
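The arithmetic behind the vision is simple enough. Assuming, and this baseline figure is my own illustrative assumption, roughly 3,000 reported cases a day in mid-September:

    cases = 3_000              # assumed mid-September baseline, cases per day
    for week in range(1, 5):
        cases *= 2             # doubling every seven days, as claimed
        print(f"after week {week}: ~{cases:,} cases/day")
    # after week 4: ~48,000 a day, close to the 50,000 figure for mid-October

The projection is mechanically correct; the question, as ever, is whether the assumed constant doubling can persist.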

There could hardly be a more illuminating display of Type 2 EGB and I hope its vividness registers with Parliamentarians and the Public. In point of fact, there is no exponential curve to be found in the data and no such curve is predicted to exist by the epidemiological models. 

What is going on here is probably a familiar phenomenon.  Type 2 EGB can be viewed as a variant of the more widely studied confirmation bias.  We see what we want to see or find it convenient to see, for one or more of a mix of motivations:  money, status, power, fifteen minutes of fame, ideology, economy of cognitive effort (the brain being a large energy sink of the human body), avoidance of discomforting cognitive dissonance, not being shown to have been wrong about something, and so on. 

‘Seeing’ an exponential hobgoblin can serve these purposes for some, including those with an interest in scaring people towards conduct that is congruent with the seer’s own agenda. For the rest of us though, Type 2 EGB is a dangerous cognitive bias for anyone in authority to possess, arguably more dangerous to human welfare in the round than the physical virus that is SARS-CoV-2. 

Physicians, rid thyselves of this delusional affliction is an appropriate conclusion.  Adjust your views to the changing evidence, don’t try to adjust reality to your preferred belief system.  The latter is the path of tyrants, not scientists.

So where did it all go wrong?

Following Boris Johnson’s press conference on Sunday 24 May, at which he made it clear he had no intention of dismissing his main advisor Dominic Cummings, Professor Stephen Reicher had this to say on Twitter:

“As one of those involved in SPI-B, the Government advisory group on behavioural science, I can say that in a few short minutes tonight, Boris Johnson has trashed all the advice we have given on how to build trust and secure adherence to the measures necessary to control COVID-19.
Be open and honest, we said. Trashed.
Respect the public, we said. Trashed.
Ensure equity, so everyone is treated the same, we said. Trashed.
Be consistent we said. Trashed.
Make clear ‘we are all in it together’. Trashed.
It is very hard to provide scientific advice to a government which doesn’t want to listen to science. I hope, however, that the public will read our papers (publicly available at https://gov.uk./government/groups/scientific-advisory-group-for–emergencies-sage-coronavirus-covid-19-response) and continue to make up for this bad government with their own good sense.”

These are strong words and with them the finger of blame for subsequent loss of public trust is pointed firmly in the directions of Messrs Johnson and Cummings.

This is, however, a highly incomplete account of things and the question that I want to raise is this: how was it that a massively important public policy could come crashing to the ground in consequence of events that, in and of themselves, appear minor and quotidian (the decisions made in one family to mitigate risks to a child, and the decision made by a boss not to dispense with a key aide on account of the family decisions that were made)?

The general answer is straightforward: it’s the context, stupid. The same conduct in different contexts produces different effects. Shouting fire outside a burning building is one thing, shouting fire in a crowded theatre for the fun of it is another matter altogether. We are familiar with that sort of point. It is the context of the quotidian conduct that makes it significant.

That recognised, the next question is: what was it about this context that produced this scale of effect, this vulnerability to a seemingly small event? The answer can’t only be that one of the protagonists in the tale was Prime Minister. Heads of government are routinely surrounded by people who have or might have breached laws or regulations in minor ways. Receiving a fine for doing 36mph in a 30mph zone is not usually career threatening, although doing 70mph might be (because reckless and, by and of itself, causing much higher hazards to the public).

Similarly, whilst Dominic Cummings has many enemies in politics, attempts by them alone to make a mountain out of a mole hill cannot explain the kinds of public reactions observed. Most people have at some time or other infringed laws and regulations themselves, and they know it. That typically serves as a constraining factor on anger at others for similar, minor transgressions they think might have occurred.

The search for a sufficient cause therefore leads quickly on to consideration of another, obviously relevant contextual factor, the policy itself and the advice on which it was based.

The Government has claimed throughout that it has followed or been guided by ‘the science’. That is something of a nonsense statement: ‘science’ has no voice, let alone a single voice. Nevertheless, it is open to the generous interpretation that it means “following or being guided by advice from people who call themselves and each other ‘scientists’”.

Stephen Reicher’s Tweet is itself evidence that the Government did, in fact, follow the cited advice, right up to the moment that, in his view, it was trashed: it would be hard to trash something that wasn’t there.

The advice set out by Reicher is odd in that it doesn’t look at all like the kind of advice that governments would seek or expect from scientists. It looks much more like the advice of public relations/communications advisors. Two of the list’s five elements particularly struck my eye: “Ensure equity, so everyone is treated the same” and “Make clear ‘we are all in it together’”.

Take ‘equity’ first. There is an immediate elision from a general notion of fairness to an interpretation based on everyone being treated the same. It’s understandable that healthcare professionals might go to that interpretation of equity in their own trades, because the NHS is a distributor of benefits to the public in the form of the services it provides (and it does, in a literal sense, treat people). What that misses though is that the relevant context is not one where the issues concern the distribution of benefits. Rather, the issues raised are to do with the distribution of burdens among the population, and that is a very different kettle of fish.

That it is not necessarily at all equitable to treat everyone the same when it comes to the allocations of burdens seems to be pretty much a consensus position across major moral and political philosophies. St Luke wrote: “For unto whomsoever much is given, of him shall be much required.”  Karl Marx wrote: “From each according to their ability …”. There is no sense of ‘treating everyone the same’ in either of those sentiments. Capacities and abilities vary, and both writers link (normatively expected) behaviours to capacities/abilities. How many of us would be convinced that replacing a progressive income tax with a uniform poll tax was equitable because “everyone is treated the same”? Not many I expect.

The fact is that the advice and its implementation have placed disproportionately heavy burdens on very many British households, and that alone makes it bad advice, whether it has any claim to being based on ‘science’ or not. That it was followed by the Government was a poor judgment. Next time ministers would do better to follow the advice of the physicist Richard Feynman (given to teachers of science in schools): “When someone says, ‘Science teaches such and such,’ he is using the word incorrectly. Science doesn’t teach anything; experience teaches it. If they say to you, ‘Science has shown such and such,’ you might ask, ‘How does science show it? How did the scientists find out? How? What? Where?’”

Now consider the final piece of advice in the Reicher list: “Make clear ‘we are all in it together’”. It has been repeated multiple times by the Prime Minister, perhaps because of the Churchillian ring to it. Its provenance is not Churchill, however, and part of it is easily discoverable by virtue of the existence of an online book titled Together Apart, co-authored by Stephen Reicher. The book, which is addressed at Covid-19 issues, rests on and advocates a body of thought in social psychology known as Social Identity Theory.

The mission of the behaviourists seems to have been to render oppressive and inequitable measures more acceptable (and hence likely more readily enforceable) by modifying the social behaviour of an entire nation. The means to this – and this is where Social Identity Theory comes in – was an attempted creation of a whole-nation ‘we all’ identity, or at least the creation of the belief on the part of each individual that others in her/his social networks would identify as ‘we all’ in this way, opening up the possibility of her/him being socially sanctioned by those others for non-compliance.

Had ‘we are all in this together’ simply been expressed as a sentiment in a speech or two by Boris Johnson, it would have been little more than a politician’s encouraging rhetoric, but the advice and the policy that followed it was much more than that. It was something to be made clear and emphasised on a repeated basis, underpinned by rules that heavily stressed uniformity in behaviour, irrespective of individual circumstances. If, in consequence of the particularities of individual circumstances, there was a conflict between the prescriptive rules and the purposes of the laws (e.g. to mitigate the harms caused by a virus), the message was to resolve the conflict by following the letter of the rules, not their purposes (their ‘spirit’).

It was, at bottom, a giant experiment in identity politics based on social theorising, sometimes explicit, sometimes implicit, and there are a few things we know from experiences with such exercises, including: (a) grand social theories are usually wrong and (b) when they are discovered to have been wrong, it is usually also discovered that they themselves have caused great harms.

So what can be learned from the experience? My view is that the most important lesson is that the approach was one more manifestation of a characteristic that has become embedded in British policymaking over the past ten to fifteen years or so: a cavalier neglect of considerations of robustness in the design and development of policy strategies.

Non-robustness in a strategy means that, whilst the policy might work in achieving desired outcomes in particular contexts/circumstances, relatively modest changes in context/circumstances can comprehensively unravel it. It means that desired outcomes are at high risk from unfolding events. It is like a win-only bet on the Grand National.

Early on in the contagion the advice of the behaviourists did appear to come with warnings that the Lockdown option could likely only be sustained for a short period, and those warnings can be read as an appropriate recognition of the option’s fragility. But then came an unexpected finding: compliance with the rules turned out to be more comprehensive than anticipated. A large slice of the public was apparently more willing to ‘stick to the prescriptive rules’ than was expected, and less willing than usual to give weight in determining their behaviours to questions concerning the purposes of the rules and their effectiveness in promoting those purposes.

Rather than taking ‘excess compliance’ as a sign that the rules could be loosened by, for example, seeking to remove some of their most egregious injustices, advice and policy ploughed on. Consummate compliance with the rules morphed from being something that would assist achievement of the overarching aim of impeding the spread of a virus toward being a signifier of a ‘we all’ identity, an identity designed to be universal.

As implied by remarks above, we have seen the non-robustness problem before in UK public policies, on multiple occasions now. Major examples include: the Electricity Market Reform (EMR) programme; encouragement of fragile supply chains; and the hollowing out of frontline healthcare capacity. The stand-out example though was the financial crash of 2008, when an inbuilt tension between (a) policy-induced incentives to engage in activities that could be expected to increase systemic risk (risk of financial contagion) and (b) the regulatory capacity to contain the eventuation of such risks finally snapped. The problem is well encapsulated by the title of a book on those events and the policies that contributed to them: Fragile by Design.

In short, Covid-19 policy has been fragile by design. The ‘scientific’ advice summarised in Professor Reicher’s tweet would have been better ‘trashed’ at the outset.

A snippet from a commentary on the first four chapters of The Wealth of Nations

CHAPTER IV. OF THE ORIGIN AND USE OF MONEY.

1. When the division of labour has been once thoroughly established, it is but a very small part of a man’s wants which the produce of his own labour can supply. He supplies the far greater part of them by exchanging that surplus part of the produce of his own labour, which is over and above his own consumption, for such parts of the produce of other men’s labour as he has occasion for. Every man thus lives by exchanging, or becomes, in some measure, a merchant, and the society itself grows to be what is properly a commercial society.

Chapter 4 is concerned with the economic institution of ‘money’, but it opens with a summary of earlier arguments and ends with a sentence that is strikingly linked to the subsequent development of political economy and to matters that are highly salient in economic discussions today. Here Smith characterises and gives a name to the kind of society in which he lived. It is a “commercial society” and its defining characteristic is that every man “lives by exchanging, or becomes, in some measure, a merchant”. ‘Every man’ is not to be taken literally, but more in the spirit of ‘Everyman’, a character in a story or play who leads an ordinary, prevalent and ubiquitous way of life which audiences can instantly recognise and identify with.

Going back to the illustrations from antiquity in Chapter 3, it can be observed that the coastal cities and cities on navigable waterways might be characterised as being ‘commercial’, but, heading off into their hinterlands, in much of the surrounding territory the ubiquity aspect will typically be found lacking. Smith would therefore not want to characterise them as ‘properly’ commercial. As in the earlier chapters, Smith’s centre of attention is the labourer, his Everyman, and the test is whether he/she “lives by exchanging”.

Today the typical word used to characterise the economic system of a country like Britain is ‘capitalistic’, and that different terminology, which does not reflect well on post-Smithian intellectual development, is much more than the substitution of one word for another. ‘Capitalism’ is a vaguely defined concept. In ordinary use of language, it refers to an economic system for which it is easy to find advocates and opponents, often at vigorous odds with one another, but without any shared understanding of what precisely it is that they are arguing about.

De facto, and whatever its original meaning may have been, capitalism has become an ideograph, as defined by Michael Calvin McGee: “an ordinary-language term found in political discourse. It is a high order abstraction representing commitment to a particular but equivocal and ill-defined normative goal.” As to its origins, its first appearance in English occurred in a Thackeray novel and it was only used a couple of times by Marx. Marx typically referred to the capitalist mode of production, but a mode of production is a rather different thing to a whole society and the manufacturing sector, with which he was most concerned, was only the largest of the three major economic sectors – agriculture, manufacturing and services – for a relatively brief period of the 19th century. For most of the second millennium, including for most of the period that might be labelled the Industrial Revolution, services has been the sector that has contributed most to national output.

The superiority of Smith’s characterisation can be seen by asking two questions. First, is modern China a capitalist economy? As befits an ideograph, that will likely generate much heated debate – which might launch a new set of sects populated with zealots – but little light. Then ask: is modern China a commercial society? The appropriate answer is, I think: ‘Yes, next question please’. That doesn’t cast much immediate light on other characteristics of Chinese society, but it does at least provide the base for fruitful exchanges on those other matters, based on a common understanding of a term of art in political economy.

Social contact networks and the spread of Covid-19

Epidemiology is much to the fore at the moment and the curves that feature in the epidemiological modelling are to be seen in TV, newsprint and many a social media post. The models used vary according to the biological characteristics of the relevant bug and there is a good summary of a number of their various types here, where a general reader’s scan of the list will suffice to give a flavour: https://en.wikipedia.org/wiki/Compartmental_models_in_epidemiology#The_SEIR_model

They work via analysis of the interplay between variables that are defined by medical states, linked via a system of differential equations that may be deterministic or stochastic. The chief negative feedback mechanism is from the number of people who have been infected (N) – one of the medical state variables (one of the ‘compartments’ or boxes) – to the rate of new infections. The more people that have been infected, the fewer in number are those who remain susceptible to infection (S). The strength of the feedback pressure tends to gather force over time, becoming greatest in the later stages.

There is also a positive feedback loop which is dominant early on – the more people that are infected the greater the infection rate.  The changing balance between these two pressures determines the shapes of the contagion’s curves.
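For readers who would like to see these mechanics in miniature, here is a deterministic SIR-type sketch in Python. It is my own toy construction, not any specific published model, and the parameter values are illustrative assumptions only.

    # S: susceptible, I: infectious, R: recovered. The term beta*S*I/pop
    # carries both loops: more infectious people generate more new infections
    # (positive), while a shrinking susceptible share damps the spread (negative).
    pop = 1_000_000
    beta, gamma = 0.25, 0.1    # assumed transmission and recovery rates (Ro = 2.5)
    S, I, R = pop - 1, 1, 0
    for day in range(301):
        if day % 50 == 0:
            print(f"day {day:3d}: infectious {I:9.0f}, susceptible share {S/pop:.2f}")
        new_infections = beta * S * I / pop
        recoveries = gamma * I
        S, I, R = S - new_infections, I + new_infections - recoveries, R + recoveries

Early in the run the infectious count grows near-exponentially; later, the falling susceptible share takes over and the curve turns down.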

Looking at this picture, a social scientist will naturally ask: where are the social variables (where the word ‘social’ is to be construed broadly to include, for example, the settlement and movement patterns studied in human geography)? The bug is invading a highly complex, socially-ordered network of encounters between individual members of society. These encounters are very far from being random in general, though they may be in individual instances.

So, consider one of the important parameters to be found in epidemiological models, the number of close (physical) contacts (CCs) that an infected person might have with others. This directly affects the basic transmission rate of the virus (i.e. Ro, the number of infections of others that an individual might cause in circumstances where all contacts are with others who are susceptible to the disease) and we can expect that basic number (Ro) to vary significantly from one individual to another, in reflection of the heterogeneity of the circumstances in which individuals come into close contact with one another.

Starting from Ro, the ‘effective transmission/reproduction rate’,  R (the number of people that an individual can be expected to infect at any given, later time)  changes over time as the number of susceptibles falls.

Let me, at this point, make a shift from epidemiological jargon to economic jargon and rename Ro as the ‘basic propensity to infect’ and likewise rename R as simply the ‘propensity to infect’ (PI). An average PI (API) can be calculated for the population as a whole and it is such an average that is used when people talk about getting to a point where the API or R < 1, in consequence of the negative feedback mechanism of a falling number of susceptibles (S).
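Under the standard textbook assumption of homogeneous mixing, the relation between the two is simply that the API is the basic propensity scaled by the remaining susceptible share, so it falls below 1 once that share falls below 1/Ro. A quick check, with an assumed Ro:

    Ro = 2.5   # an assumed basic propensity to infect
    for share in (1.0, 0.8, 0.6, 0.4):
        print(f"susceptible share {share:.1f} -> API = {Ro * share:.2f}")
    # the API crosses 1 at a susceptible share of 1/Ro = 0.4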

What is often missing from the epidemiological models, however, are adequate examinations of the interactions between medical variables which can arise from the existence of social structures and from the social conduct that occurs within them.  This neglect has occurred despite an epidemiological ‘recognition in the abstract’ of their relevance.  Thus, the Wikipedia entry for ‘Basic Reproduction Rate’ records that “Ro is not a biological constant for a pathogen as it is also affected by other factors such as environmental conditions and the behaviour of the infected population [my emphasis].”

The behaviour of members of a society is, of course, what social scientists study and it is a feature of the current situation that their contribution to an understanding of the contagion has been notable chiefly by its absence.

Let me look at just one such correlation (since the aim of this note is to encourage thinking about the contagion in a different way, not to expound any new comprehensive view of it). It is that the close contacts (CCs) of many with high propensities to infect (PIs) will not be random, but will be tilted toward CCs with other high PI people, because of the social contact networks in which they move.

This could be true for professionals who work and socialise with other professionals (and they do tend to network quite a lot), or for inhabitants of multi-occupancy buildings in socially deprived areas, or for London commuters using its mass transit systems (who mix closely with one another on a daily basis). An immediate implication is that the early stages of a contagion will ‘naturally’ select for people at each end of a particular transmission who are both high PI individuals, and hence that the early spread can be very rapid indeed.

To get a feel for the numbers, imagine the first person infected among those who regularly used London Transport in, say, mid-February.  The average Ro for the UK might be 2.5, but what might it be for the community of London commuters?  If ‘infectee one’ makes a two-legged commute twice a day, that would, in effect, amount to participation in 20 super-spreading events in a working week.  Ro could be 100 or more, and at that rate daily infections rise very, very quickly.
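To see how a number of that order could arise, consider the back-of-envelope calculation below; the contacts-per-journey and transmission probability are purely illustrative assumptions of mine.

    legs_per_week = 2 * 2 * 5   # 2 legs per commute x 2 commutes a day x 5 days
    contacts_per_leg = 50       # assumed close contacts on a crowded journey
    p_transmit = 0.1            # assumed chance of infecting any one contact
    print(legs_per_week * contacts_per_leg * p_transmit)   # 100.0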

Taking this type of social factor into account requires the addition of new equations to the differential equation system of an epidemiological model, which will obviously affect the solutions of the equations.  The tendency for there to be high PIs at both ends of a transmission implies that, for analytical purposes, the familiar negative feedback loop reflecting decreases in the number of susceptibles among close contacts needs a socially structured disaggregation.  One differential equation needs to become a set of several equations.

Once the contagion gets going, then, it immediately starts rapidly to thin out infections characterised as being between high PI individuals, and this can radically affect the dynamics.

To illustrate let’s take an extreme, hypothetical, ‘teaching’ simplification, which introduces social differentiation in a very limited way.  Suppose there are two types of people with identical, intra-type characteristics (e.g. London mass transit commuters and everyone else). There are ‘Highs’, with a PI of 10, who make up 20% of the population, and there are ‘Lows’, with a PI of 0.8, who make up 80% of the population.  It is assumed (and it is the crudest of the simplifying assumptions) that the Highs and Lows never have inter-type close contacts. There are ‘two nations’.  How in this case does the infection go?

Not very far is the answer, if the virus only has a foothold among the Lows. It peters out. Given a foothold among the Highs, however, it will progress very rapidly at first and then slow as the number of Highs who are still susceptible to the disease decreases (because increasing numbers have been infected).

To track infections in the latter case it is necessary to solve the differential equations for the High part of the population, but it is easy to see that infection of 90% of the Highs would be sufficient to bring their API down to 1.  Nine of the ten people whom an infected High would otherwise have infected would no longer be susceptible: only one would be infected, and hence the spread would no longer be advancing.

This illustrates an important point that  has major implications for policy trade-offs: The early ‘selection’ of high PI individuals clearly assists the virus  in moving swiftly in its first stages, but it comes at a price (for the virus): the spread may peter out when infections reach only a modest fraction of the total population, in this case 18% (which would reduce further in the event of the existence of sources of innate immunity such as T cell activity or cross-immunity from other coronaviruses:  if innate immunity were, say, at 30%, the contagion would stop advancing at a whole population infection level of 12%). The rapid early progress is bought at the price of the virus being less able to find powerful allies later in the process.
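The arithmetic of the illustration can be checked with a short sketch. Note that, faithfully to the reasoning above, it computes the point at which each sub-population’s API falls to 1 (the point at which the spread stops advancing), not a full final-size calculation.

    shares = {"Highs": 0.20, "Lows": 0.80}   # population shares
    pi = {"Highs": 10.0, "Lows": 0.8}        # within-type propensities to infect
    innate = 0.0                             # set to 0.30 to reproduce the 12% figure

    for name in shares:
        if pi[name] <= 1.0:
            print(f"{name}: PI is already below 1, so the contagion peters out.")
            continue
        # API = PI x susceptible share; it reaches 1 when the susceptible
        # share falls to 1/PI, i.e. when 1 - 1/PI are no longer susceptible.
        threshold = 1 - 1 / pi[name]         # 0.90 for the Highs
        infected = threshold - innate        # share of the type needing infection
        print(f"{name}: spread stops once {infected:.0%} of the type is infected, "
              f"i.e. {infected * shares[name]:.0%} of the whole population.")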

Setting the crude simplification aside, the more general point is that there is a social-contact, natural selection process at work, particularly prominent in the contagion’s early stages.  In a social setting the natural selection of the biology picks out high PI victims. But, as it does so it has the immediate and direct effect of reducing the population API much faster than would be the case if all humans led similar social lives.

What I think is best called ‘community resistance’, which is the combined effect of all the negative feedback loops in play (two in the illustrative example, more in a fully developed model), will be determined not only by the aggregate numbers of those infected and those still susceptible (the prime focus of the epidemiological work), but also by the social characteristics of the heterogeneous individuals who make up the numbers. Who are they? Where do they live? What are their living conditions? Where do they work? What are their preferred lifestyles? What do their social (physical contact) networks look like? And so on. Without some knowledge of those things, it is, I think, impossible to understand the contagion in any detail. We are left playing guessing games.

A second general point that follows from all this is a classic health warning, given by all good advisors to politicians contemplating a perturbation to a social system: beware of unintended effects/consequences. In the case of Covid-19, it is easy to see that the social distancing effects of a general lockdown are bound to have a negative direct (all other things being equal) effect on the rate of spread of the virus. At least in its less mitigated forms, however, it is an undiscriminating, one-size-fits-all policy and, as such, it hinders the ‘natural’ selection process that, at least arguably, does most to inhibit the spread of the virus.  Under such a general lockdown, everyone is treated the same, irrespective of social characteristics.

It is easy to see that there might be political reasons for doing that, but it should be recognised that they come with a cost: Covid-management policies can be expected to be less effective.

Whether such a policy will, in the end, actually reduce the cumulative deaths attributable to the contagion is, therefore, very much an open question at the time of writing:  it cannot safely be presumed.  If a virus’s eye view of things suggests that finding its way quickly to the infection of high transmitters is good for it in the short term, but not so good for it in the longer term, that intelligence might be useful information for humans contemplating how best to counter the harms it brings.  Will extended lockdown simply be good for reducing transmission in the shorter-term, but bad for reducing transmission later, when it is relaxed?

Finally, to repeat a sentiment expressed above, this short piece is directed at encouraging policymakers to look at the issues in a different light, a light that illuminates the social factors that are involved in the contagion. Homo Sapiens is a highly social animal embedded in a highly complex social ecology, not a herd animal, still less like those atoms in the kinetic theory of gases that move about randomly, bumping into other atoms as they do so. We should not lose sight of that.

Regulatory Policy Assessment in the Covid-19 era: a Once and Future Pathway?

Back in the 1990s and the early years of the 21st century the UK government developed a relatively sophisticated handbook to guide the evaluation of alternative lines of regulatory policy development and implementation, largely under the stewardship of the Cabinet Office and the Business Department. This was part of an international movement in which it is fair to say that the UK played a leading role. By way of example, the UK promoted the establishment of a ‘Directors of Better Regulation’ group for Member States of the EU, outside of normal EU structures, where comparative experiences could be discussed and know-how could be shared. Other innovative institutional developments of the time included the establishment of the Better Regulation Executive and the Better Regulation Commission, itself a successor to an advisory Better Regulation Taskforce dating from 1997.

Disruption and resistance

The progress of this ever-increasing knowledge base was disrupted in the later part of the first decade of the new century (I tend to date it as ‘around 2007’, that year being a fateful one across a range of public policy matters, both locally and globally). Examination of the causes of that disruption is a complex exercise, but, in my own bailiwick at the time (energy sector regulation) the proximate cause was a growing tension between the implications of thought-through policy assessments and an emergent conventional wisdom on climate change policy. Behind that, and more fundamentally, was what in the Covid-19 era might be called a very major co-morbidity: Leviathan’s very powerful ‘immune system’.

By that I mean the resistance of our system of government to any intruding ‘intellectual virus’ that might significantly alter its own cellular activity, irrespective of the wider consequences, for good or ill, of such alterations for the public. In the most powerful, imagined conception of Leviathan’s immune response, George Orwell gave it the name ‘Crimestop’ in 1984’s Newspeak: “… the faculty of stopping short, as though by instinct, at the threshold of any dangerous thought. It includes the power of not grasping analogies, of failing to perceive logical errors, of misunderstanding the simplest arguments if they are inimical to Ingsoc, and of being bored or repelled by any train of thought which is capable of leading in a heretical direction. Crimestop, in short, means protective stupidity.”

Back in reality, a much more utilised common-or-garden form of defence mechanism is what I have called convenient, selective myopia, roughly meaning “don’t look at things you don’t want to see, ignore them”, a form of wilful blindness.

The original intent of regulatory impact assessment (RIA) was to discover, structure and analyse information that was judged relevant to an upcoming decision. Importantly, the intention/purpose of the exercise was to better inform those who would take the decision. It was not intended to be a cost-benefit analysis in which benefits and costs are monetized and totalled, still less a procedure via which the decision itself should be determined on the basis of a net-benefit criterion.

There is good reason for that: only those to whom decisions are formally entrusted have legitimate authority to place values on the assessed effects of regulatory measures, involving, as they not infrequently do, quite difficult balancings of positive and negative impacts/consequences on/for different communities and interests.

From a wider public perspective, the notion of ‘informative’ regulatory impact assessment (RIA) may look like a friendly intellectual virus, conferring only good (better information for decisions), but that is not typically the view of decision makers themselves. The problem lies in the risk that the assessment process will likely make transparent things they do not want discovered and made transparent. Considered decisions are not necessarily, indeed arguably rarely are, the decisions that bureaucrats and politicians want to take, and that is in large part because decision makers tend to possess ‘partial’ (meaning partisan or ‘private’) agendas of their own, distinct and separate from a wider public interest agenda. Individual politicians almost invariably want to increase their own power, civil servants are partisans in the cause of the size and influence of their own departments or units, and so on. Leviathan’s senior servants can, therefore, be expected to be, to varying degrees, resistant to evaluation processes that might threaten these interests.

In this struggle the RIA virus has a weakness: it itself is a process that takes time, a virus that, to push the metaphor to its limits, mutates as new information is discovered and new thoughts occur before it reaches its final form. As used to be said of the core working document itself, it is a “living document”. This lapse in time gives, to those to whom its unfettered expression might be unwelcome, ample opportunities to insert themselves into the development process and re-engineer it. Indeed, over time, it becomes unnecessary to do even that. The assessors tend to come to recognise the likely negative responses to certain types of evidence and of certain lines of thinking and, to avoid inevitable later hassle and near inevitable defeat, self-re-engineer the process from an early stage onwards. That which was intended to inform decisions is flipped to become a process driven by the goal of justifying a decision soon to be made, without the intrusion of inconvenient facts and considerations.

The QuickScan

In this battle between impartial (non-partisan) assessment and the partial agendas of officialdom, one idea that came out of the EU Directors of Better Regulation Group, originating from the know-how of a Dutch regulator, was what researchers attached to the Regulatory Policy Institute (RPI) came to call a QuickScan. This was conceived as a first, short-duration exercise that would take a wide-angle look at the relevant issues, broadly asking “What things do we need to examine and explore in order to ensure that all relevant information is available for the making of upcoming policy choices?” Doing that at least gets a wide set of considerations placed on the table in the form of a first, speedily produced document that can later be used as a checking mechanism: if something identified in the QuickScan is later ignored or omitted, there should be substantiated reasons for so doing.

Such a process provides at least some degree of defence against disruption by Leviathan’s antibodies, and the wide vision/perspective of it is perhaps to be particularly stressed. It is interesting to note, for example, that criticisms of Government responses to the Covid-19 crisis have increasingly been based on a perception that relevant authorities have taken overly narrow views of the crisis and have become unduly fixated on a narrow range of issues, problems and questions. Moreover, exactly that same critique has been made of the conduct of banking and financial supervision ahead of the 2008 crash. It was, for example, a key part of the confessional letter sent to the Queen by a group of Fellows of the British Academy in response to a (rather good) question of Hers when visiting the LSE in the post-crash period: “Why had nobody noticed that the credit crunch was on its way?” And the critique of narrowness can, I think, be generalised.

Regulatory measures can be viewed as perturbations to a complex, adaptive, socio-economic system, i.e. as perturbations to an ecosystem. It cannot be generally assumed, therefore, that the effects of such perturbations will be limited to a narrowly confined part of the system, the parts of which are generally interconnected in one way or another. It is clearly an impossible task to identify all the consequences of a measure, but there is a requirement for a sensibility that calls for identification and consideration of at least the most salient effects and for recognition that these might be neither localised nor intended by those ultimately responsible for decisions.

It is rare for a decision maker to want to see harmful consequences of a policy with which he/she is associated: it is much more common for them to want to claim positive consequences for their actions, whether or not they have any causal links with the perturbation. Therein lies an obvious problem. If negative consequences exist, the bias will be toward leaving them unexamined, for, if they are assessed, it will no longer be possible to claim later that they were unanticipated consequences and, as such, that less blame for them is merited.

This is what convenient, selective myopia (wilful blindness) looks like. The Competition Appeals Tribunal has referred to the approach as ‘pixelated’ – likening it to a propensity to focus on some blocks of pixels in a digital image and to ignore others – and that is a term that the RPI has also taken up in some of its work and thinking.

The concept of a QuickScan has some affinities with a 2006 proposal from the Better Regulation Commission (BRC) to establish a unit that it called the Fast Assessment of Regulatory Options (FARO) Panel, to examine calls for urgent government action in the event of some major health or safety risk such as an epidemic or a major rail crash. The BRC was, however, itself a victim of the disruption of the time: it was soon abolished and the proposal, like the QuickScan, was not taken up. The two approaches did, however, differ in at least two important ways.

First, the BRC sought the establishment of a distinct, new institution/unit, described as follows: “The Panel should be independent, politically neutral and external to government. It should provide timely advice to ministers on appropriate, cost-effective responses which have a real impact, having considered all aspects of the risks involved, trade-offs, priorities and policy alternatives.” In contrast, the QuickScan concept did not call for the establishment of a ‘sitting panel’ external to government, but rather for a capacity and willingness to assemble teams within departments or government agencies, including the sectoral regulators. Such teams would be determined on the basis of the requisite skills and knowledge, i.e. skills and knowledge that would be of value in the specific context of the relevant policy issue. Even within a department or agency specialised in a particular area of policy, such as communications, transport, energy or health, the wide variations in contexts almost necessarily imply some rotation of members from case to case, including specialists brought in from outside government.

Second, the FARO Panel was explicitly designed to respond to situations in which heavy pressures on politicians had led them already to conclude that, with high probability, ‘something must be done’. As its name implies, fast assessment of options was intended to focus only on options, on the ‘something’ that it would be best to do. The proposal sought a pause for thought between the first political conclusion and the eventual response. The hope was that this would help foreclose disadvantageous, knee-jerk policy responses. The focus is therefore narrowed at the outset (to options) and the evaluation process is necessarily ‘pixelated’.

The QuickScan, in contrast, is much less constrained. The whole purpose is to develop a wider field of vision so as to be better able to respond to the incoming results of the discovery process entailed by good regulatory impact assessment, of which the QuickScan is the starting point. It is the ‘good start’ of what became the RPI’s unofficial motto, an Irish proverb: Tús maith leath na hoibre, a good start is half the work/journey. Consistent with RIA guidelines, that always encompasses the option of doing nothing or, more accurately, doing nothing for now. More importantly, it starts with a detailed analysis of the problem or challenge to be faced and, in particular, of its context.

Verstehen, Verstehen, Verstehen

At this point I come to the most important of the concepts in play in policy evaluation, namely ‘understanding’ or Verstehen, the core notion in just about all the RPI’s own work. The first step is to understand the issues and their context, which in the latter case involves understanding the workings of the relevant parts of the ecosystem so as to be able to assess how their functioning might be affected by any potential policy/regulatory perturbations.

The concept of Verstehen comes to today’s social sciences chiefly via a German scholarly tradition, associated in particular with Max Weber, where it means an approach that examines a socio-economic question from the perspective of each of the potentially many actors who might be involved or affected, asking: how do, or will, they see things? In other words, it asks that the analyst be able to ‘put themselves in the shoes’ of those likely to be affected by policy measures, or by the lack of such measures, and see things from their point of view, taking account of their attitudes and their behaviours. It is therefore an activity that engages both empathy and imagination.

The approach is older than these 19th and 20th century developments, however. For example, in explaining his own work the great 17th century Dutch philosopher Baruch Spinoza said: “I have diligently tried not to laugh at human actions, nor to mourn them, nor to abhor them, but to understand them.” Perhaps surprisingly to many, particularly given that it is a foundational document in the history of political economy (and hence economics) in the English speaking world, the most systematic exposition of the approach is to be found in Adam Smith’s Theory of Moral Sentiments (TMS). Throughout the work, Smith invites us to put ourselves in the shoes of others, both actual others and, central to moral and social judgments, a hypothetical ‘impartial spectator’, someone who does not bring any private/partial agendas or interests to their judgments.

A flavour of the reasoning can be gleaned from one of the most cited passages of the work in which Smith criticises the approach of what he called a ‘man of system’. Today this term might be used to characterise someone with a proclivity for central planning or with fixed ideas about how things should be done. Since the TMS receives little coverage in modern economics courses, it is, I think, worth quoting the passage in full.
“The man of system is nothing like that. He is apt to be sure of his own wisdom, and is often so in love with the supposed beauty of his own ideal plan of government that he can’t allow the slightest deviation from any part of it. He goes on to establish it completely and in detail, paying no attention to the great interests or the strong prejudices that may oppose it. He seems to imagine that he can arrange the members of a great society as easily as a hand arranges the pieces on a chess-board! He forgets that the chessmen’s only source of motion is what the hand impresses on them, whereas in the great chess-board of human society every single piece has its own private source of motion, quite different from anything that the legislature might choose to impress on it. If those two sources coincide and act in the same direction, the game of human society will go on easily and harmoniously, and is likely to be happy and successful. If they are opposite or different, the game will go on miserably and the society will be in the highest degree of disorder all the time.”

The importance of understanding is here in part highlighted by considering the consequences of its absence. The ‘man of system’ has fixed ideas to which he is attached. He pays no attention to the ‘great interests or strong prejudices’ of those who might be affected by his plans. Other social actors are assumed to be passive (a false assumption), like chess pieces, sitting waiting for the legislature (or, more likely today, the executive branch of government) to move them around. There is no understanding that each member of society (an assumed chess piece) has her/his ‘own private source of motion’. In consequence, “the game will go on miserably and the society will be in the highest degree of disorder all the time”.

It can be noted as a general point that Smith here is not calling for a laissez-faire approach to public policy, a call he never made. Rather he is calling for an alignment of policy to the data, as that word might be used in philosophy (‘things known or assumed as facts, making the basis of reasoning or calculation’), where, critically, that data includes the views, beliefs, attitudes, intentions, heuristics and understandings of others who are caught up in the relevant situation (the ‘social facts’), and hence of their own likely conduct in the light of these things. The argument is that such alignment will lead to an outcome in which ‘the game of human society will go on easily and harmoniously, and is likely to be happy and successful’.

This is much closer to the ancient Chinese Daoist concept of wu wei (roughly ‘effortless effort’) than to laissez-faire. Smith uses the word ‘natural’ to signify the functioning of the complex, adaptive socio-economic system without the perturbations of Leviathan’s hand, and he calls for that hand to be applied in ways that are complementary to and support that functioning, that ‘go with the flow’, not in ways that seek to substitute for it or stand in opposition to it. And experience teaches that, all too often, we see regulatory policy interventions that create negative feedback loops that resist the intended effects, because they go against the grain of the system to which they are applied.

Verstehen as it appears in modern social science, then, emphasises seeing things as others see them, but that is only part of the relevant data. Typically, what all social and economic agents are looking at, from different angles, is the same context, the same ecosystem, of which all are part. For the political economist, therefore, the understandings of the actors provide only part of the picture of interest, which is the functioning of the system as a whole. If the aim is, so far as possible, to align policy with something, it is a good idea to understand how that something functions.

Since the relevant context invariably features the functioning of a complex system, a commonality of these different aspects of understanding is that each requires that matters be examined from a variety of different perspectives, from different viewpoints. In political psychology the ability of any one individual to be able to do this is captured in the notion of a cognitive style measured by ‘integrative complexity’: ‘a research psychometric that refers to the degree to which thinking and reasoning involve the recognition and integration of multiple perspectives and possibilities and their interrelated contingencies. Integrative complexity is a measure of the intellectual style used by individuals or groups in processing information, problem-solving, and decision making.’

Elsewhere in the Theory of Moral Sentiments Smith explains why it would be impossible for any ‘man of system’ fully to comprehend the workings of any significant part of a socio-economic system: the information required is too vast. Hayek would later call a belief otherwise “the fatal conceit”. An individual naturally inclined to integrative complexity, such as Smith’s notional ‘wise sovereign’, can get a little bit of the way. A team of people, with diverse skills and experiences, can probably get as far as can be got. From these points two rules of thumb for assessment purposes might be inferred: (a) first, do no harm; and (b) it takes a team, a team committed to achieving the best feasible understanding of things. And that’s a high-level intellectual exercise, not a routine bureaucratic task.

Thoughts on the novel coronavirus contagion

The above reflections were triggered by observing governmental responses to the arrival of Covid-19 in the UK. As with Smith’s chess metaphor, it provides an example of seeing the importance of Verstehen by reference to events that can occur in its absence. The context differs from Smith’s example, however, in that a ‘man of system’ was not the source of the major problems: that term would not be an appropriate characterisation of the UK Prime Minister! Rather, the Government found itself in an immediate cloud of uncertainties and struggled for want of early but considered, wide-vision advice.

The Government did, naturally, turn to experts for advice and (also naturally) first to epidemiologists and experts in medicine and the functioning of the National Health Service (NHS). Such expertise was clearly required, but it by no means encompassed the full diversity of knowledge and experience relevant to the assessments required. Notably absent were social scientists (I do not count the behavioural specialists here, since their focus is on behaviour modification, not on understanding) and also those with knowledge and know-how concerning the characteristics and functioning of networks.

The latter may seem unusual suggested additions to a team. Yet, putting oneself in the shoes of a hypothetically intelligent virus, the first thing the bug might have considered in its own strategy development is the networked structure of the ecology it was about to invade, with all its weaker and stronger lines of defence. That ecology is a socially and economically constructed network of connections between the multitudinous individual, potential hosts of the virus. It is not an animal herd, which has a much less structured and much less differentiated pattern of interactions between individual animals.

Just stating this obvious point is enough to demonstrate how easily thinking can go wrong at the very outset. The very language used can, almost instantly, lead to neglect of relevant socio-economic data, constituting a major failure of assessment. The vision is narrowed and a blind eye is turned to relevant evidence.

A QuickScan process would/should have had no fundamental difficulty in addressing the uncertainty problem: the uncertainty is a blindingly obvious feature of the context. Its existence is a relevant piece of data: we can see a fog from a stationary position, even if we can’t see through it, and seeing it will affect how we proceed.

As a quick, wide-vision first take of a challenge ahead, the QuickScan is intended to be a way of mapping out areas in which the discovery of new information can be expected to be particularly valuable and a way of charting the first steps forward. A first map is necessarily devoid of great detail, but it can identify areas where further, important discoveries might be made. The difference between Covid-19 and other issues for which a QuickScan might be deployed is largely one of degree, arising from the particularly high salience that a Covid-19 assessment would attach to the acquisition of new knowledge (the discovery process) and to the speed at which it would need to be achieved.

Moreover, a widely drawn team would, like a hypothetically intelligent bug, surely have recognised that the phenomenon at hand was a network contagion, with at least some characteristics in common with, say, a banking collapse or a power failure in an electricity system. Understanding of the latter problem, which runs deep among the relevant experts (not least because of the regularity with which it has to be addressed) would likely have particularly helped in framing thinking. The notion of ‘circuit breakers’ is central, both to contain the scope of a particular failure and, if that cannot be done in time, to protect those parts of the system where most harm could be done, for example by isolating a very localised part of the network and allowing on-site generation of power to replace public, grid supplies.
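The parallel can be made concrete in a few lines of code. The sketch below is a toy illustration only (the network and all names in it are invented; it is not a calibrated model of a grid or an epidemic), but it captures the circuit-breaker logic: sever links so that a contagion spreading from any seed cannot reach the nodes where most harm would be done.

    # Toy contagion on a network; the same spread/containment logic applies
    # to electrical cascades, bank runs and epidemics (hypothetical graph).
    from collections import deque

    network = {
        "A": ["B", "C"],
        "B": ["A", "D"],
        "C": ["A", "D"],
        "D": ["B", "C", "hospital"],
        "hospital": ["D"],
    }

    def spread(graph, seed):
        """Return the set of nodes a contagion starting at seed eventually reaches."""
        reached, queue = {seed}, deque([seed])
        while queue:
            node = queue.popleft()
            for neighbour in graph[node]:
                if neighbour not in reached:
                    reached.add(neighbour)
                    queue.append(neighbour)
        return reached

    def isolate(graph, node):
        """Circuit breaker: cut every link into and out of the given node."""
        return {n: [m for m in edges if node not in (n, m)]
                for n, edges in graph.items()}

    print(spread(network, "A"))                       # all nodes, 'hospital' included
    print(spread(isolate(network, "hospital"), "A"))  # 'hospital' is now shielded

The design point is that the breaker acts on links, not on the system as a whole: the rest of the network carries on functioning while the protected node replaces what the severed links used to supply (the on-site generator, in the electricity case).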

In fact, policy development in the face of Covid-19 did get to ideas of first suppression, then delay – a stage not to be found in electricity systems because the speed of the contagion in that case makes the speed of Covid-19 look like the slowest of snails – then ‘shielding’ (the hospital with its own, on-site back-up generators). Where discovery (or re-discovery) has been less successful, however, is in relation to the regulatory principles of targeting and proportionality, which the early developments in RIA had so clearly established, but which now appear forgotten.

It is to be expected, for example, that a QuickScan would have quickly recognised that, if a ‘shielding’ strategy was potentially needed at a later stage, its success would require urgent, not-to-be-delayed measures to put in place the necessary back-up arrangements, e.g. for hospitals, care homes (with circa 500k inhabitants) and households that are co-habited by older (at high risk) and younger (at much lower risk) generations (they account for about 15% of all households in the UK, but a significantly higher percentage than that in some communities). Hospitals may have back-up electricity generators, but there has been no equivalent attempt to develop emergency operational requirements for shielding in the face of events like the Covid-19 contagion. Shielding arrangements therefore require de novo development, and that would pose a very major challenge.

The benefits of a QuickScan team dedicated to the pursuit of integrative complexity are partly illustrated by the electricity example above, but they are of more general value in answering an early diagnostic question that should be asked when seeking to meet a new, complex challenge: have we seen a problem like this before? It is a question that should always come with a health warning: in answering, don’t over-privilege the previous experience. The purpose of the question is to provide initial lines of attack for thinking about a new problem, not to create ready-made options for tackling it. Experts with different experiences will be in a position to come up with different, analogous problems that they have seen before, each potentially informative in part. In this case there is one immediately obvious answer: SARS. Looking at pre-Covid-19 summaries of the SARS experience, the similarities are evident, and both viruses are part of the same family. Having failed to use previous experience to prepare for the next pandemic, the next best thing might have been to look quickly at nations that had learned from previous experience and were more fully prepared for the appearance of the novel virus, the most notable being South Korea.

Instead of these things, and not alone in the world in this, the UK Government has come to rely upon general lockdown and social distancing measures, which take no account of targeting and proportionality principles. At the time of writing, exit from these measures appears highly problematic. There is, therefore, indeed a threat that “the game will go on miserably and the society will be in the highest degree of disorder”, if not for all of the time, then at least for some time to come.

How to win incremental votes in the forthcoming UK General Election

The forthcoming General Election (GE) will not be exclusively concerned with Brexit, but Brexit issues can be expected to loom large. These issues are widely viewed as having disrupted party loyalties and moved electoral politics into a wholly new context, and they have had a disorienting effect on many.

In this essay I want to develop the argument that (a) there is a serious mismatch between the Brexit policies being offered by UK political parties and public preferences/attitudes on relevant Brexit matters, (b) that part at least of the mismatch is attributable to the ‘strategic incompleteness’ of the policies on offer, which in turn appears to be linked to a continuing weakness in analysing the sequencing of policy choices, and (c), as a corollary, there are potential votes to be won by filling in some of the strategic gaps.

The policy stances

First, consider the major national parties’ policies as they appear to stand at the time of writing (noting that they have shown a tendency to move around somewhat):

• Conservative Party (CP): Pass the Withdrawal Bill; sign the Withdrawal Agreement; withdraw from the Treaty of Lisbon in January; negotiate & implement a fairly standard FTA with the EU by the end of the transition period (currently set at 31/12/2020).

• Labour Party (LP): Renegotiate the Withdrawal Agreement and/or the Political Declaration to point them to a ‘softer’ Brexit that currently lacks any clear specification, but comes with a strong indication that it should include a CU; then put the provisional agreement to a confirmatory referendum (likely requiring a further extension of the A50 period of around 6 months or more); indicate that the LP, or at least most of its MPs, might vote against the new agreement and in favour of Remain.

• Liberal Democrats (LDs): Revoke the Article 50 notification and remain a member of the EU.

• Brexit Party (BXP): Put the Withdrawal Agreement in the trash can; negotiate a further extension of the A50 period to 30 June 2020; negotiate and implement an FTA with the EU by 1 July 2020; if that proves impossible, simply leave the EU without any overarching trade agreement in place (the No Deal outcome).

Even recognising the necessity of presenting policies in simplified forms, the public is ill served by these offerings. They are saturated with fantasies (about what could be realistically achieved and when), fail to specify policy relating to important elements of the form of separation, and are suffused with vagueness. By way of examples:

• The notion that a new FTA could be negotiated and implemented by end 2020 (CP) or, a fortiori, by end June 2020 and in the absence of a Withdrawal Agreement (BXP), is not credible. It is simply wishful thinking.

• No LD policy is specified for the realistically possible circumstances in which the Government succeeds in achieving withdrawal from the EU by end January 2020 (it’s one of the few aspirations in the list which is realistically attainable), yet those circumstances could eventuate within a few weeks of the General Election. That would render ‘Revoke’ meaningless, leaving the LDs with no policy at all.

• The LP’s notion of the ‘soft’ Brexit it wants to negotiate is ill defined, its feasibility is unexamined/unexplored, and its own position in any subsequent referendum is left vague. It is, in effect, a policy not to have a policy other than rejection of the No Deal possibility, an extended exercise in can kicking.

Part of the general problem is that, when discussing Brexit options, there has been a constant tendency to (i) conflate withdrawal issues and future relationship issues (despite equally constant warnings from experts in the wings that the two should be carefully distinguished), and (ii) conflate the transition period defined by the WA with what I have elsewhere called the ‘interim period’ stretching from Brexit day to the implementation of any new future relationship agreement, see https://gypoliticaleconomy.blog/2018/09/15/brexit-sequencing-and-the-interim-period-problem-limbo-in-our-time/ .

For example, there is constant reference to the WA as ‘the Deal’, when much the more important policy issues are connected with the future relationship agreement (or absence thereof).

Public attitudes

This dog’s breakfast menu put before voters hinders the tasks of (a) informing the public about the relevant issues and trade-offs and (b) inferring public preferences from polling results that, all too frequently, pose chalk and cheese alternatives.

In the general confusion, I continue to rate the attitudinal studies of researchers at King’s College London as the benchmark for sound analysis. The work is focused unambiguously on future relationship matters which, in shorthand form, are encompassed by the general question: how close a future relationship do you want to see between the UK and the EU?
(A summary is to be found here, and it is well worth a read in the current context:
https://ukandeu.ac.uk/we-asked-the-british-public-what-kind-of-brexit-they-want-and-the-norway-model-is-the-clear-winner/ )

The potential answers to that question are obviously not binary, i.e. very close or remote: the degree of ‘closeness’ is not to be measured by a single, 0-1, bit of information. The safest inference from the referendum result is simply that a majority of those voting, whose number is almost certainly underestimated by the actual Leave vote (because significant numbers of Remain votes will, rationally, have reflected an explicit or implicit assessment that change would just not be worth the hassle), were of the view that a relationship less close than that defined by the Treaty of Lisbon would be preferable.

The KCL study delves deeper into this issue, examining public attitudes on the various, detailed trade-offs that are involved. It does so by working within a conceptual framework that has been a standard part of economics teaching and research for many decades. By way of example, when assessing the likely value of a prospective, complex product that might be put on the market (and policy offerings can be viewed as political ‘products’), the approach seeks to define the main, component characteristics of the product (for a car: engine size and type, fuel efficiency, diesel/petrol, seating capacity, …) and to set about discovering evidence on the values placed on the individual characteristics by consumers. The results can then be used to assess whether the new combination of characteristics in contemplation would be a winner in the market.

Thus, instead of asking the public about their views on, say, a CU – a concept about whose entailments people are likely, like Mr Clarke, to have highly limited knowledge – the main characteristics of a CU are first identified (e.g. ‘how important do you rate the ability of the UK to negotiate its own FTAs’) and respondents are asked about attitudes to them, without mentioning the concept of a CU itself. One major advantage of this in current circumstances is that the meanings of terms like Customs Union and Single Market have become heavily polluted with political associations that influence responses. For example: ‘Mr Mogg dislikes CUs, Mr Clarke likes them, and in general my views are much more closely aligned with Mr Clarke’. (Here, Mr Clarke would be afforded ‘epistemic authority’, even though he is close to clueless on the economic issues at stake and took the CU idea on board for reasons of political expediency.)

The kinds of characteristics examined in the KCL study encompassed areas like freedom of movement rights, ability to conduct an independent commercial policy, budget contributions, regulatory influence, etc. Intensities of preference of respondents were then aggregated into four bundles that matched four future relationship outcomes: Remain, EEA (Norway), a CU, and No Deal (meaning no future relationship agreement of any significant depth). Each respondent was then allocated to the bundle/label for which that individual’s valuation of the combined characteristics was highest. Results were as follows.

[Figure: Rohr et al. – shares of respondents best matched to each of the four options in the two survey years, with bands showing switching between categories.]

Two points to note immediately are:

• Although there is a substantial body of support for the characteristics of the two end-of-spectrum options (Remain and No Deal), over half the sample had preferences more closely matched with one of the other two options (EEA/Norway and a CU).

• A significant volume of ‘switching’ was recorded in the two years covered by the research, although the totals in the two years are not much different. An immediate inference is that there is a substantial number of voters who could easily be tipped into another category, i.e. they are not deeply attached to just one option. The thickness of the bands running from Remain to EEA, from EEA to Remain, and from No Deal to EEA is striking in this regard. Public attitudes appear much less polarised between outcomes than parliamentary, party activist and media attitudes.

This last point is underpinned by more familiar polling results that indicate that the EEA is viewed by a substantial (not a marginal) majority of respondents as an ‘acceptable’ way forward. Moreover, the results come with an obvious intuition. The underlying trade-offs involved are many and public attitudes to each can be expected to be differentiated. Aggregating over these trade-offs can be expected to lead to a spectrum of valuations of ‘closeness’, not a simple binary division. The latter (the binary division) comes not from public attitudes, but rather from the nature of the referendum question: should the UK remain in or leave the EU?

Given the underlying data on more disaggregated public preferences, it is possible to construct and evaluate public attitudes to other broad policy positions, such as EEA+ (aka Norway+ or CM2.0), which is a combination of EEA/Norway + a Customs Union (CU).  The researchers report that they have done this and the results indicated by that exercise are a good demonstration of the value of the approach adopted. Prima facie it might be expected that Norway+ would be closest to the preferences of a larger number of people than unadorned Norway, but the reverse is true. The proportion of the sample for whom the EEA + CU would be the nearest approximation to their preferences falls significantly from the 42% shown for the EEA in 2018.

The reason is that many respondents placed a significant value on the prospect of the UK once again having an independent commercial policy, and EEA + CU, unlike EEA alone, would not offer that. These people would, if Norway+ were the only middling option on offer, tend to switch chiefly to the No Deal category, which, in a three-option categorisation, would be the only one offering an independent commercial policy.
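The matching logic, and the Norway+ reversal, can be illustrated with a short sketch. The characteristic levels and weights below are invented for the purpose (they are not the KCL/Rohr et al. data or estimation method); the point is only to show how allocation to a ‘best match’ bundle works, and why bolting a CU onto the EEA can push a respondent outward rather than inward.

    # Options as bundles of characteristics (levels invented for illustration).
    # 'own_trade_policy' = ability of the UK to negotiate its own FTAs.
    OPTIONS = {
        "Remain":  {"free_movement": 1, "own_trade_policy": 0, "budget_payments": 1, "rule_influence": 1},
        "EEA":     {"free_movement": 1, "own_trade_policy": 1, "budget_payments": 1, "rule_influence": 0},
        "CU":      {"free_movement": 0, "own_trade_policy": 0, "budget_payments": 0, "rule_influence": 0},
        "No Deal": {"free_movement": 0, "own_trade_policy": 1, "budget_payments": 0, "rule_influence": 0},
    }

    def best_match(weights):
        """Allocate a respondent to the option whose characteristics they value most."""
        def value(option):
            return sum(weights[c] * level for c, level in OPTIONS[option].items())
        return max(OPTIONS, key=value)

    # A respondent who prizes an independent trade policy, mildly values free
    # movement and rule influence, and dislikes budget payments (all invented):
    w = {"free_movement": 2, "own_trade_policy": 3, "budget_payments": -1, "rule_influence": 1}
    print(best_match(w))   # 'EEA'

    # Bolt a customs union onto the EEA bundle and the independent trade
    # policy characteristic disappears; the same respondent then matches:
    OPTIONS["EEA"]["own_trade_policy"] = 0   # EEA becomes 'EEA + CU' (Norway+)
    print(best_match(w))   # 'No Deal'

On these invented numbers, adding the ‘plus’ strips out the one characteristic this respondent values most, so the nearest match jumps across the spectrum to No Deal rather than edging toward Remain.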

That may come as a surprise to some politicians and commentators who read the plus as signifying something that would add value. It is not at all a surprise after a moment’s pause for thought. It implies the delegation, by a major (and ex hypothesi independent) trading nation, of its international trading policy to politicians and bureaucrats of another jurisdiction, and that is not at all a normal occurrence.

The bottom line is that a CU option is a vote loser, which is likely one of the reasons why some of those who have thought about the matter in Paris and Berlin tend to the view that it is unsustainable as a longer term future relationship between the EU and a non-EU UK (another is experience with the Turkish arrangements). Would France and Germany delegate their international commercial policies to a foreign power, with only marginal influence on the conduct of such policies? No, they would not.

Matching policy stances and public attitudes

If the attitudinal results are mapped into the binary question of Remain/Leave, with the additional assumption that the CU category would tilt to the Remain side of the binary, the implication is that, other things equal, the public splits around 65/35 in thinking that the EU is ‘too close’ a relationship for the UK, i.e. that the UK is better out. That, I think, is in line with the wider evidence. The UK is, after all, opted out of the EU’s major project, monetary and fiscal union (and the separating effects of that are increasing over time), there is little joy at the prospect of an EU army, and a positive case for EU membership has been noticeably lacking over the past three plus years of Brexit discourse (it has all been very defensive – ‘hold on to nurse for fear of something worse’).

The elevation of the CU issue has been a product of an inward-looking, factional domestic politics, accompanied by large dollops of wilful ignorance and bad faith, but that is only one example of how domestic parties have found themselves wandering in blind alleys and dead ends, remote from where the public would like to see the country. The prospect of a GE, however, allows scope for breaking out of the deadlock.

Given the size of the ‘unserved’ or ‘unrepresented’ (by any major political party) middling view – roughly that the EU is too close a relationship, but considerations of history and geography indicate that the UK should still seek a relationship that is significantly closer than that which it has with most nations on the planet – it would be normal in a democratic system for parties that are more polarised to reach out into that middle in the search for votes. In the business analogue noted above, a company that spotted a large, unserved demand for a product with a combination of characteristics that it could feasibly offer would likely be leaping at the opportunity to acquire new customers.

What we observe instead is the BXP and LDs going full throttle for the extremes of the ‘closeness/distance spectrum’, the CP being pulled toward the No Deal end of the spectrum in its competition for votes with the BXP, and the LP wandering in space, but drifting slowly toward the Remain end. Given this, the nervousness that is palpable on all sides is understandable. Each strategy is vulnerable to a pivot toward the middle by a competitor (in the business analogue, a rival firm could get to the new product first).

That pivot is most easily achievable by the LDs, because the current LD strategy suffers from an obvious and very major ‘incompleteness’: it contains no statement as to how the party would position itself in the (realistically possible, perhaps even likely) event that, in less than three months’ time, the UK will have left the EU. All it has to say is this: ‘Whilst Revoke is our first priority, if that should fail our policy will be to #rEEAmain, i.e. we will fight to keep the UK in the EEA.’ That exploits the fact that the Treaty of Porto (the EEA Agreement) is a separate treaty from Lisbon and, whilst the WA would see the UK out of the latter, it does not provide for exit from the former. It is the dog that has never barked.

It would also be not that difficult an adjustment for the LP, which has expressed the view that it would like its ill-defined ‘soft Brexit’ outcome to be as close as possible to Single Market arrangements. That could also be crystallized in #rEEAmain, although internal opposition from its now dominant Remain tendency might make that pivot less easy than it potentially is for the LDs, and it appears to be impaired by misreadings of EEAA provisions on state aid, competition policy and freedom of movement on the part of its Marxist ideologues.

These two possibilities make things tricky for the CP. It cannot easily reach out to the unserved middle without risking substantial leakage of votes to the BXP and a revival of its ERG faction. The vanilla FTA in the Political Declaration is there to prevent those things happening: it points to ‘No Deal’ as those words are to be understood in the context of the future relationship, i.e. that there will be no ‘special closeness’ to the EU.

The CP’s one great strategic advantage, on which it almost inevitably has to focus, is that it is the only party in a position to respond to a more immediate, widely shared public desire to get Brexit (in the sense of withdrawal from the Treaty of Lisbon) ‘done and dusted’ as soon as possible. That advantage has just been reinforced by the BXP’s repositioning to a policy that calls for Brexit to be delayed, yet again, until 1 July 2020 (which looks like a mis-step, but opinion polls will soon confirm whether or not that is the case).

The CP’s weaknesses are that (a) the vanilla FTA prospect is publicly unpopular and (b) getting it done by end 2020 is a unicorn: it can be expected to take substantially longer than that. If it could achieve the victory it seeks on 12 December, it could, like the LDs, easily pivot toward the centre of gravity of public attitudes (and, in their heart of hearts, ERG members know this, but will likely maintain their tradition of subordinating realities to wishful thinking in the interim). But it is a big ‘if’, possibly conditional on whether the LDs and/or the LP seize the opportunities opened up to them by the existence of a hitherto unserved middle.

Those parties have not done so thus far, but there is nothing like a GE (when politicians do tend to need to give more consideration to voters’ opinions than is the norm) for concentrating minds. With contingency planning in mind, Number 10 might advisedly have a chat with one of the Government’s own Ministers, George Eustice, who has already thought these things through. (Hint: the EEA is a far better place to be, on almost all counts, than a protracted transition (Limbo), and that is something that can be said early on, without abandoning the first priority of an eventual, new FTA.)

Brexit: the ‘Sunderland Option’

Below are to be found:

(a) the introduction to the paper “Brexit and the Single Market”, which was first published in July 2016. https://www.researchgate.net/publication/305721352_Brexit_and_the_Single_Market/link/579bc67808ae6a2882f1aa08/download 

(b) a section of the paper’s text located under the heading ‘Response speeds and asymmetries of power’, included now (28 July 2019) because of its potentially very high salience in current circumstances (on first publication it was scarcely noticed), followed by a few comments on the reasons for its suddenly elevated salience.

In my mind I think of it as the ‘Sunderland Option’ because it was conceived in the minutes following the Sunderland referendum declaration at around 00.20 on 24 June 2016.  The sentiment was: “OK, that’s a definitive answer from my home city and the task now is to figure out how to satisfy these aspirations in the best way possible.”

Looking back now (July 2019), the one big thing I would change in the text would be to make a very firm distinction between what are, in fact, two variants of the ‘Single Market’ that have subsequently been conflated: the EEA and the EU Internal Market. Speaking roughly, the former is aligned with Mrs Thatcher’s vision of the Single Market and the latter with Mr Delors’s vision. The UK will exit the latter automatically upon withdrawing from the Treaty of Lisbon, but will not automatically exit the former. The act of Brexit will therefore automatically separate the wheat from the chaff. The ‘Sunderland Option’ in effect says “hang on to the wheat: it will be of considerable value in seeing us through the next period”.

 

Preface to “Brexit and the Single Market”

Summary of main points

The UK is currently a Contracting Party to the European Economic Area (EEA) Agreement, and exit from the EU does not necessarily imply exit from the Single Market (i.e. withdrawal from the Agreement). Exit from the EEA would require that extra steps be taken, either unilaterally by the UK or by the other Contracting Parties to the Agreement.

There is no explicit provision in the Agreement for the UK to cease to be a Contracting Party other than by unilateral, voluntary withdrawal, which requires simply the giving of twelve months’ notice in writing (Article 127). A commonly held assumption that only EU and EFTA members can be Parties to the EEA Agreement – and hence that the UK has to be a member of one or other of these two organisations to be in the Single Market – is not well grounded, although UK consideration of an application for EFTA membership is an option well worth exploring in its own right.

In the absence of a prior withdrawal notice or of steps by other Contracting Parties to try to force UK exit (of a nature not yet identified and not necessarily feasible in the light of the Vienna Conventions on the Law of Treaties and on Succession of States in respect of Treaties), on Day 1 of the post-Brexit era the default position appears to be that the UK would still be a Party to the EEA Agreement.

This has major implications for any future negotiations. For example, with continuing UK participation there would be no requirement for an application for “access” to the Single Market.

Should the UK choose not to withdraw from the EEA there would be need for some textual adjustments to the Agreement, if only to reflect the UK’s changed status as a non-EU Contracting Party. The more substantive implications of continuing participation concern the operation of the institutions supporting the non-EU Contracting Parties – Iceland, Liechtenstein and Norway – not the EU institutions. Early discussions with the governments of these three countries are indicated: they need not await Article 50 Notification.

Continued participation in the EEA following Brexit would see substantial repatriation of powers covering the areas of agriculture, fisheries, trade policy, foreign and security policy, justice and home affairs, taxation, and immigration, consistent with the strong desire of many Leave voters to ‘take back control’. It would, for example, give the UK freedom to negotiate its own trade deals and set its own tariffs, as well as dispensing with the egregiously protectionist common agricultural policy.

Immigration is the most vexed issue, not least because of the difficulties in establishing a reasoned discourse on relevant matters. The underlying problem concerns the interpretation and application of the principle of free movement of persons, in respect of which EU political leaderships tend to favour a rather fundamentalist, ‘non-negotiable’ position, motivated by the goal of political union.

The EEA Agreement does not treat the ‘four freedoms’ (of goods, persons, services and capital) as absolutes. In each case it provides for limitations to be imposed when justified by some other aspect of public policy. Importantly, for non-EU Contracting Parties to the EEA Agreement the ‘decision maker of first instance’ is the relevant State, not the European Commission.

The commercial aim of the EEA Agreement affects the interpretation and application of the free movement principle in consequence of its significance when determining what measures can or can’t be justified. A ‘political’ interpretation and application of free movement of persons is not well adapted to the aim of the EEA Agreement set out in Article 1(1). Given this misalignment, free movement of persons is almost inevitably a highly contested issue.

Nevertheless, the Agreement provides scope for unilateral action on free movement of persons that is not currently possible for the UK as a member state of the EU. Post-Brexit the Agreement would allow scope for at least some degree of re-alignment of interpretation and application of the free movement principle to better fit with commercial policy objectives.

In relation to budgetary payments by the UK, the default position appears to be a zero contribution from Day 1 of the post-Brexit era, if the UK opts not to withdraw from the EEA Agreement. This again affects the negotiating position. If the UK subsequently agrees to make financial contributions it should expect a quid pro quo, for example increased influence in the rule-making process for the Single Market and/or more explicit recognition of greater flexibility in the interpretation and application of the free movement of persons principle (whilst still pledging allegiance to the principle itself). Such developments would also be of benefit to the other non-EU Contracting Parties, and arguably to EU Contracting Parties as well.

Crucially, the EEA Agreement does not foreclose future policy developments of the types suggested by those who favour immediate exit from the Single Market: it simply leaves those other options available for future consideration and possible adoption, allowing ‘market testing’ of new policy approaches in the interim. On this basis continued participation in the EEA Agreement can be said to be sufficient unto the day.

Such optionality coupled with the faster response speed of the UK governance system amounts to an asymmetric competitive advantage over the EU in policymaking, which serves to counteract the asymmetric disadvantage of smaller market size. The overall imbalance in power in Single Market rule-making is therefore somewhat less than it might appear at first sight.

The aim of the EEA Agreement (set out in Article 1(1)) is highly consistent with the longstanding aims of UK commercial policy, steady and dogged pursuit of which could be a stabilising factor that, inter alia, serves to reduce political uncertainty in markets.

 

From the section headed ‘Response speeds and asymmetries of power’ 

…  A second identifiable factor is the relative response speeds of different Contracting Parties in the face of unwanted conduct by others, i.e. the speed with which they can adjust their own policies in reaction to unwanted conduct by others. A fast response speed is a distinct advantage to whoever can command it. If the possessor is a dominant economic agent, it tends to reinforce the asymmetry of power and provide greater incentives for the use of that power; if the possessor is a weaker economic agent, it tends to mitigate the asymmetry of power and give rise to weaker incentives to use that power in the first place.

The UK’s governance arrangements can, when needed, be remarkably speedy by international standards: the system has what might be called a low inertia mode that can be switched on when circumstances dictate. In contrast, EU rule-making is a relatively cumbersome and slow process, which is unsurprising and difficult to avoid in a structure with so many members with differing interests. …

Comments

Game theory provides a conceptual framework for strategic thinking in the military arena as well as in economics, and indeed one of the key readings for my own postgraduate lectures in the 1980s used to be Thomas Schelling’s ‘The Strategy of Conflict’. That highly readable book (with a central theme deriving from Odysseus and the Sirens) was concerned mostly with US cold war strategy, but the shared conceptual framework is indicated by the fact that Schelling was subsequently awarded the Nobel Prize in Economics.

In the military field the importance of differential response speeds in determining outcomes has been particularly emphasised by the US Air Force strategist John Boyd, who argued that a key strategic advantage flows from being able to create situations in which it is possible to make appropriate decisions more quickly than opponents. 

‘Brexit and the Single Market’ pointed out that such a situation was already established at the outset of Brexit negotiations: it didn’t need first to be created by other strategic manoeuvring, and that fact allows for a great simplification. To be turned to advantage, though, it would be necessary first to ‘flick the low inertia switch’ (since in most circumstances UK governance sits in a high inertia mode). The tragedy of the last three years is that that was never done.

There are two immediate reasons why these points are, quite suddenly, highly salient:

  1. With the arrival of the new Government the switch has been flicked (arguably because it became necessary for the survival of the Conservative Party in anything like its current form).
  2. Dominic Cummings is deeply familiar with the work of John Boyd.

 

What does the UK want?

The governance system of the UK has suffered Brexit paralysis for 30 months in the face of a parent-to-child question: “What do you want?” It has been asked many times over in the capitals of Europe, without any apparent, great success in eliciting a clear response. It is time to end the stasis by testing out the preferences of MPs.

With a little bit of help from basic social science the initial exercise could be simple and very quick.  The aim is limited — to discover more about preference orderings — not to invent a voting mechanism that will be determinative in relation to major decisions.  It is the classic aim of regulatory impact assessment: to inform decisions, and to do no more than that.

When an issue is binary (Remain/Leave), a preference can be revealed by asking the subject to choose between the two options. When there are multiple options, preferences are discovered by asking for a multiplicity of binary choices to be made. A more comprehensive preference ordering is then built up from these binary comparisons.

That’s hard to do if there are myriad options available. But suppose there are only four, broad-brush options that require immediate assessment: A, B, C and D. There are then only six binary rankings/choices required: A vs B, A vs C, A vs D, B vs C, B vs D and C vs D. That entails entering a cross in six of 12 paired boxes.
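As a general point of combinatorics, complete pairwise comparison of n options requires

    \binom{n}{2} = \frac{n(n-1)}{2}

binary choices: six for four options, but already 28 for eight. The exercise stays trivially small only because the option set is kept broad-brush.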

Such a questionnaire could fit on a sheet of A4, with plenty of white space, and is a lot less complex than a Californian ballot paper. The full dataset for the UK House of Commons can be coded in a 650 (approx.) x 6 spreadsheet, most cells having 0 or 1 entries. That is a small-scale data exercise.

Some of the returns may, of course, violate the axioms of rational choice theory (e.g. by showing ‘cyclicity’ or ‘non-transitivity’), but most human decision-making does that anyway (don’t panic: it just means that the theory fails and that should disturb no-one other than those invested in it). The data will be what they will be.
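A minimal sketch of the data handling involved (the six responses below are invented, and deliberately include a cycle among B, C and D) shows both how small the exercise is and how cyclicity would surface:

    # Build one MP's preference relation from six binary choices and test it
    # for cycles (non-transitivity). All responses are hypothetical.
    from itertools import permutations

    options = ["A", "B", "C", "D"]

    # pair -> preferred option (invented example)
    choices = {("A", "B"): "A", ("A", "C"): "A", ("A", "D"): "A",
               ("B", "C"): "B", ("B", "D"): "D", ("C", "D"): "C"}

    # Re-express as a 'beats' relation of (winner, loser) pairs.
    beats = {(w, x if w == y else y) for (x, y), w in choices.items()}

    def cyclic(beats):
        """True if some trio violates transitivity: x beats y, y beats z, z beats x."""
        return any({(x, y), (y, z), (z, x)} <= beats
                   for x, y, z in permutations(options, 3))

    # A first mapping: order the options by number of pairwise wins.
    wins = {o: sum(w == o for w, _ in beats) for o in options}
    print(sorted(options, key=wins.get, reverse=True))   # ['A', 'B', 'C', 'D']
    print(cyclic(beats))   # True: B beats C, C beats D, D beats B

Aggregated over the House, the full dataset is just one such set of six choices per MP: the 650 (approx.) x 6 spreadsheet referred to above, interrogable by anyone with a laptop.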

The four broad-brush Brexit options I would suggest are: (A) Remain in the EU, (B) ‘Norway’ (Leave the EU, but remain in the EEA), (C) a stand-alone WA (‘stand-alone’ because it is the only framework Agreement sought), and (D) No Framework Agreement (‘No Deal’, but allowing for the possibility of specific agreements).

The stand-alone qualification is important because, for example, ‘Norway’ can be combined with a WA that covers matters not encompassed or not adequately addressed by the EEA Agreement itself, the most obvious of which are trade in agri-foods, trade in fish, and customs arrangements. Aspects of one approach can potentially be used to support or complement another, broad-brush approach.

The individual polling is not, however, a stand-alone exercise. The intent would be to form a view of opinions in the House of Commons as a whole. Knowledge of this intent may tempt respondents to play games with their individual responses.

MP X may prefer A to B to C, but, if B is thought to be the closest ‘competitor’ to A, he/she may be tempted to rank C above B in the binary choice between those two (less preferred) options, in the hope of increasing the prospects for A at the collective level. We are all familiar with this type of game playing from observing the Brexit process so far.

One defence against this is to make the dataset publicly available so that constituents, spreadsheet nuts, researchers, journalists et al can interrogate first the data and then the individual MP. In a representative democracy, MPs owe us their judgments for sure (Burke), but they also owe us some level of explanation for those judgments.

The prize in all this is an improved first mapping of individual and collective preferences, deliverable very quickly. There is a vote in the Commons next Tuesday. If the WA is voted down, an exercise like this could potentially be completed by the end of the week. If Government and Parliament won’t do it, a polling organisation or think tank could.

Will it work and be helpful? We don’t know. It is definitional that the outcomes of discovery processes are unknown: they don’t come with guarantees. We are already in ‘uncharted territory’, but, trained as a geographer, Mrs May should know that it might be a Good Thing to start charting it.

Conservative philosophy, then and now.

The Xmas and New Year break is a good time to catch up on background reading that has sat around on a ‘to do’ list for longer than it perhaps should have done. I’m currently about half way through a re-acquaintance, after several decades, with Edmund Burke’s Reflections on the Revolution in France, motivated by a passing thought that the current Parliament has seemed to exhibit some distinctly Jacobin tendencies.  There are many striking passages in what can reasonably be described as Burke’s long, sustained rant against the political developments in Paris toward the end of the 18th century, but the following passage had particular resonance for someone with strong interests in public policy.

“The science of constructing a commonwealth, or renovating it, or reforming it, is, like every other experimental science, not to be taught à priori. Nor is it a short experience that can instruct us in that practical science: because the real effects of moral causes are not always immediate; but that which in the first instance is prejudicial may be excellent in its remoter operation; and its excellence may arise even from the ill effects it produces in the beginning. The reverse also happens: and very plausible schemes, with very pleasing commencements, have often shameful and lamentable conclusions. In states there are often some obscure and almost latent causes, things which appear at first view of little moment, on which a very great part of its prosperity or adversity may most essentially depend. The science of government being therefore so practical in itself, and intended for such practical purposes, a matter which requires experience, and even more experience than any person can gain in his whole life, however sagacious and observing he may be, it is with infinite caution that any man ought to venture upon pulling down an edifice, which has answered in any tolerable degree for ages the common purposes of society, or on building it up again, without having models and patterns of approved utility before his eyes.”

The contrast with the sentiments of the current government is striking.  The referendum result gave an immediate mandate for extensive institutional demolition:  over 70% of the EU acquis was to go, including not just legislation focused chiefly on political integration, but also in areas with high economic salience such as the EU customs union and the common agricultural, fisheries and commercial policies.  EU legal supremacy was to be ended, and in future Britons would be governed by laws and regulations made exclusively by their home governments.

An immediate question for the Conservative Government was whether there should be a more comprehensive ‘pulling down’ than the extensive institutional demolition entailed by the referendum result itself. Burke’s ‘infinite’ caution was no doubt a rhetorical exaggeration, but ‘proceed with considerable caution’ might reasonably have been expected to have entered the minds of traditional conservatives. In the event, at the very outset of the Brexit process, a decision was taken to seek to pull down, in its entirety and not just in part, an edifice that has served a shared purpose to a ‘tolerable degree’ for some time. It is the structure of European trading rules, including, but going well beyond, the tariff rules to be found in less deep Free Trade Agreements. This institutional structure has functioned to reduce intra-European tariff and non-tariff trade barriers, covering close to 50% of the UK’s goods trade. It is commonly referred to as the Single Market [1].

This decision appears to have been taken by only three people (May, Hill and Timothy), exercising no caution whatsoever.  There was no thought-through assessment prior to the decision, and no realistically attainable ‘patterns of approved utility’ capable of ‘building it up again’ were considered in any depth.   Wishful thinking appears to have been judged sufficient:  “We know how to do this” is reported to have been the sentiment of the moment [2].  If so, that was Jacobin hubris, not Burkean prudence.

There is an interesting contrast here with that most economically radical of twentieth century Conservative Prime Ministers, Margaret Thatcher. The process of liberalization, privatization and regulatory reform associated with her name proceeded over the full 11+ years of her tenure of Downing Street, on a step-by-step basis, starting with easier policy exercises at its beginning and moving on to more difficult areas (like electricity and water) at its end (and even then not touching the yet more problematic cases of railways and postal services).  That allowed for sequential experimentation and  learning along the way.  At each stage an extensively considered new institutional structure was developed ahead of the abandonment of the old. There was no early equivalent of “we know how to do this”.  There was certainly boldness and innovation, but sagacity and prudence had not gone AWOL.

Despite the best efforts of the new Jacobins, the institutional edifice of the EEA Single Market  has not yet been destroyed.  The EEA acquis was established by means of an international Treaty, the EEA Agreement, which the UK made a solemn promise to observe when it first signed the Agreement (in Porto on 2 May 1992) and then ratified “according to [its own] constitutional requirements”.  As the Attorney General recently advised Cabinet colleagues in the context of the Ireland / Northern Ireland Protocol to the Withdrawal Agreement (another international Treaty), under international law such commitments do not simply melt away.  Concrete actions are required to end them.

It may be that the Government is now hoping to evade its EEA Treaty obligations by means of the Agreement just struck with Iceland, Liechtenstein and Norway concerning arrangements designed to follow a future, currently hypothetical, UK withdrawal from the EEA. The draft Agreement was published surreptitiously on 20 December 2018, possibly with the intent that, at a future moment of choice, it will serve as an agreement that supersedes the EEA, rendering parliamentary consent to EEA withdrawal otiose.  However, the new Agreement must also be ratified (see its Article 71(1)) and, if Parliamentarians smell a rat, they can forestall any such subterfuge.  Parliament therefore has the power to ensure that UK membership of the EEA is not ‘pulled down’ before the means to build it (or something similarly functional) up again are very firmly before its eyes.  A Jacobin government versus a Burkean parliament?  That would be an interesting conjunction.

 

[1]  There are in fact two ‘Single Markets’:  the EU’s  Internal Market and the EEA’s Single Market, each defined by a distinct international Treaty.  The former has a wider policy scope, including the EU customs union and common commercial policy, agriculture, and fisheries, which are not covered by the EEA Agreement.  The two, market governance structures are also substantially different.  Internal Market rule-making includes majority voting procedures (consent to which Mrs Thatcher came later to regret), and a single supervisory system based on the European Commission and the CJEU. In contrast, the EEA rule-making and supervisory system is dualistic in nature, relying on consensual agreement and reserving a right of veto for each of the non-EU States.

[2]  See Stewart Wood (2017), https://medium.com/@stewartwood6887/theresa-mays-mistaken-precedent-for-a-brexit-based-on-cherry-picking-1e2e6a3b9985 and Tim Shipman (2017), Fall Out (page 5).

 

 

The logic of the Attorney General

This note focuses on a narrow front in the Brexit wars, the Attorney General’s (AG’s) advice on the Ireland/Northern Ireland Protocol to the Withdrawal Agreement and the implications of that advice for a different issue, the UK’s status as a party to the European Economic Area Agreement (EEAA) immediately following Brexit. Though narrow, the front is nevertheless one of enormous significance for Brexit policy. Mr Cox’s letter to the Prime Minister of 13 November 2018, headed “Legal effect of the Ireland/Northern Ireland Protocol”, should, when duly considered, be a game changer.

The question of interest

Brexit has created circumstances that were not anticipated at the time of the drafting of the EEAA. The emergence of such circumstances is a very familiar, recurring phenomenon in the operation of complex agreements of all kinds, including international agreements.

These agreements typically take the form of incomplete contracts, meaning that they do not attempt to specify, with any great precision, the performance requirements of parties in all contingencies (all sets of circumstances). Hence, they frequently contain provision for arbitration arrangements to resolve the significant ambiguities that can easily arise and to settle the disputes that those ambiguities can trigger.

For international agreements, where the arrangements established by a Treaty are inadequate for resolving an ambiguity/dispute, the function of settling matters is served by the processes of international law. Brexit is such a case and the question of interest is: what are the implications of withdrawing from the EU for the UK’s participation in the EEAA?

The AG’s reasoning on the ‘indefiniteness’ of the Protocol

Mr Cox opens his substantive remarks at paragraph 3 of his letter of 13 November 2018 by noting that the Protocol on which he was asked to advise is part of an international agreement (the Withdrawal Agreement) that is binding on the parties and that must be performed by them in good faith and in accordance with the ordinary meaning to be given to its terms in their context and in the light of the Treaty’s object and purpose.

The words here are taken straight from the interpretive principles of the Vienna Convention on the Law of Treaties (VCLT, Article 31(1)), signifying the relevance of international law to the AG’s task. That is an important point to note, because very many of the blogs and comments on the EEAA issue have entirely ignored the implications of international law. In contrast, it is where the AG starts.

The references to context and to object/purpose are critical because international agreements rarely give rise to disputes that are easily adjudicated by reference to very narrow snippets of text alone. Thus, when it comes to the critical question of whether or not the Protocol is of an indefinite nature, the AG proceeds on a ‘wider-look’ basis as follows.

In the relevant sections of the advice (paras 12-16) he makes reference to Article 50 TEU, to Article 5 (good faith) and Article 184 (best endeavours) of the Withdrawal Agreement, to Articles 1.3, 1.4 and 2.1 of the Protocol, and to the Protocol’s preamble, thus covering material from a range of textual contexts. He first notes that different readings of this material can lead to different conclusions concerning ‘indefiniteness’. Hence there are ambiguities to be resolved.

The AG resolves them by reference to the overarching object or purpose of the Protocol, as summarised in the preamble where it recalls “the commitment of the UK to protect North-South cooperation and the UK’s guarantee of avoiding a hard border, including any physical infrastructure or related checks or controls”. Thus, the reasoning goes, if it turns out to be the case that the parties have negotiated in good faith and used best endeavours to reach an agreement, but have nevertheless failed to reach an agreement, that does not give grounds for disapplication of the Protocol, notwithstanding that there exist snippets of text suggesting otherwise. In a nutshell, that is because the disapplication of the Protocol in those (no-agreement) circumstances would work against the achievement of the Protocol’s overarching object/purpose. Only if another agreement is reached that commensurately serves the Protocol’s overarching object/purpose could the Protocol ‘fall away’.

The VCLT’s words “in the light of the Treaty’s object and purpose” are clearly central here: they are taken as the touchstone for resolving ambiguity.

The same reasoning applied to the EEAA

There has been a long-running debate since the referendum as to whether the UK will continue to be a party to the EEA Agreement following the UK’s withdrawal from the EU. The relevant question can be put as follows: will the UK’s EEAA obligations and rights continue ‘indefinitely’ until such time as a VCLT-compliant exit from the EEAA has been achieved, or will the obligations and rights end, ‘definitely’ and ‘automatically’, in consequence of Brexit?

This is not the time or place to rehash those arguments in detail, but a general feature of the interchanges can be noted. Those who conclude in favour of ‘indefiniteness’ – what I have referred to as “EEA continuity” – tend to emphasise the centrality of international law and of the object and purpose of a Treaty in resolving ambiguity, whereas those who argue for ‘automaticity’ tend to look at things through the lens of European Law and to rely on inferences that could possibly be drawn from narrow snippets of the EEAA’s main text (EEAA Articles 2(c), 126(1) and 128 are favourite sources of cited text).

In his approach to the Ireland/Northern Ireland Protocol, it will be obvious from the earlier remarks that Mr Cox has relied on the first of these two approaches, focusing on the primacy of international law (even for a Treaty so intimately linked to European Law as the Withdrawal Agreement) and on object or purpose as a touchstone for resolving the ambiguities that arise from different readings of snippets of text. His central conclusion on ‘indefiniteness’ was “… in international law the Protocol would endure indefinitely, until a superseding agreement took its place, in whole or in part …” [his emphases].

The relevance and centrality of international law are, I think, even clearer for the EEAA than for the Withdrawal Agreement. The EEAA is a multilateral Treaty that was originally drafted to accommodate seven fully sovereign states that were not members of the EU and which declined to accept the judicial authority of the CJEU. In contrast, the Withdrawal Agreement, if it is signed and ratified, will be a Treaty between the EU and a state that, at the time of its signing, was one of the EU’s own members and subject to the authority of the CJEU.

The EEAA issues also appear clearer when it comes to using the object and purpose of a Treaty to resolve ambiguities. The immediate questions here concern (a) the nature of the EEAA’s object and purpose and (b) the implications of alternative interpretations of the Treaty’s text for the achievement of that object or purpose.

The object/aim/purpose of the EEAA is admirably succinct and is of an economic nature. It is specified in Article 1(1):
“The aim of this Agreement of association is to promote a continuous and balanced strengthening of trade and economic relations between the Contracting Parties with equal conditions of competition, and the respect of the same rules, with a view to creating a homogeneous European Economic Area, hereinafter referred to as the EEA.”

I have emphasised “between the Contracting Parties” because the UK is one of those parties, each and all committed to pursuit of the shared aim/object/purpose. Under international law, it achieved that status by signing and ratifying the Agreement. All the other parties have therefore made commitments to promote a balanced strengthening of trade and economic relations with the UK, just as the UK has made the same commitment to each of the other parties. It is a basic principle of international law that these commitments must be met (pacta sunt servanda).

Given the overarching aim set out at Article 1(1), if there are conflicting interpretations of parts of the text of the EEAA, resolution of the ambiguities can proceed by asking the question: how would adoption of each of the ‘candidate interpretations’ bear upon the capacity to achieve the specified, shared aim/object/purpose?

If the object or purpose is of an economic nature, as it is in the case of the EEAA, then judges or arbitrators necessarily have to make economic assessments. Explicit statement of this fact can make even judges nervous (something I have directly experienced as a participant in judicial training exercises). But it is unavoidable: they do not get to pick and choose the factual matrix with which they are presented when their judgments are sought.

In the EEAA case, though, there should be no reason for judges/arbitrators to doubt their own capacity to undertake the relevant assessments: the economic points are very simple. All that needs to be assessed in the event of a dispute about interpretation is which of the contested ‘automaticity’ or ‘indefiniteness’ interpretations better serves the overarching aim of Article 1(1), and that is something of a no-brainer. It is very difficult to see how the UK falling out of the EEAA could do anything but weaken, rather than strengthen, the trade and economic relations between, say, the UK and Norway, given that the EEAA serves as an FTA between the two countries. Similarly, if WTO tariff schedules are taken as a relevant comparator, the ‘automaticity’ interpretation would lead to the sudden appearance of a 10% tariff on autos for all UK trade with the 27 remaining EU Member States. That again would serve to weaken trade between the Contracting Parties of the EEAA.

At a general level, it would I think be difficult to conclude that the aim of promoting trade and economic relations across an economic zone or area would not be significantly harmed by the removal of the zone’s second largest economy.

To avoid any misunderstanding, this is not an argument for the EEA as a policy choice. Some would no doubt prefer an immediate exit from the EEA in the UK’s own interests. However, the UK’s interests are not the relevant criterion in answering the question posed: the criterion is rather the impact on the Agreement’s shared purpose, its Article 1(1) aim. Moreover, those who favour EEA-exit on policy grounds should properly have sought to trigger the EEAA’s Article 127 (the exit provision), which would have led to UK withdrawal in a way that would have been compliant with the Agreement itself and hence with international law. The UK gave a solemn commitment to do things that way when it signed and ratified the Agreement, but the current Government has, de facto, not honoured the commitment, most likely because of a perception that nothing close to a majority could be mustered, whether in Parliament or among the public, for that course of action. Pacta non sunt servanda now.

Following the AG, therefore, it might be said, perhaps with greater force than for the Ireland/Northern Ireland Protocol, that in international law the EEAA will endure indefinitely, until such time as a withdrawal process that is not in breach of the Agreement is completed.

On that basis, in the absence of a consensus among the Contracting Parties, including the UK itself, that the UK should withdraw on 29 March 2019, a ‘deep and special’ trade and economic cooperation agreement with the EU, its Member States, Iceland, Liechtenstein and Norway will remain in place on 30 March 2019.

Comments

In the post-referendum period the Government has been faced with two questions on which legal advice has been sought, each of great importance for Brexit policy and each engaged with the same basic issue: whether or not an international Treaty is ‘temporally indefinite’. In one case (the Ireland/Northern Ireland Protocol) the Government has received advice from the Attorney General – now in the public domain and with its reasoning fully laid out for all to see and examine – that the answer is in the affirmative. In the other case, concerning the EEA Agreement (where prima facie the AG’s reasoning leads more quickly and more definitely to the same answer), the Government has proceeded on a presumption that the answer is in the negative.

The much earlier advice on the EEAA, which the Government has claimed justifies its view that the UK’s Treaty rights and obligations will be extinguished, definitely and automatically, on Brexit Day, has never been disclosed. Its authors have not been identified. The basis of the conclusion (which could have been provided without disclosing the advice itself) has never seen daylight. We can’t even be sure that the advice actually exists in any written-down or formally presented form. There is no indication in political speeches or articles that the object or purpose of the Agreement has been considered to be a relevant factor in reaching a conclusion on “EEA Continuity” (i.e. no indication that VCLT interpretive principles have been followed). Nor is there any indication that the Cabinet has been able to see and discuss the advice. It all seems to be locked away in a secret garden.

In these circumstances, there would be great merit in asking Mr Cox quickly to provide the PM and the Cabinet with a review of the integrity and robustness of any earlier advice on ‘automaticity’. If that were done, and if he were to reach the same conclusion as he did in relation to the Ireland/Northern Ireland Protocol, Brexit prospects and options ahead would come to look very different by the opening of the new year. Moreover, if that did indeed turn out to be the case, the Prime Minister, Cabinet and Parliament would then have the advice of Lord David Owen to turn to for one, immediately relevant and manifestly feasible answer to the question: What is to be Done?

31 letters making 8 points

In the Sunday Times today, in the form of a letter to Parliamentarians, David Owen summarises a suggestion that we have both advocated since 2016. It is an approach that has been consistently blocked by Mrs May’s red-line against participation in the European Economic Area (EEA).

If the Withdrawal Bill is defeated in Parliament on 11 December, it would pass beyond the bounds of sanity to maintain the red-line. In such an uncertain moment it would be certain folly to refuse seriously to contemplate and consider an option that could be of significant value in the new circumstances. And by serious consideration I mean an assessment stripped of myths and erroneous ‘assumed facts’ when engaging with issues such as rule-taking, budget contributions, and free movement of workers provisions in the EEA. If the facts change substantially, only the most inflexible of minds would fail to contemplate a change of opinion.

Lord Owen poses the issue in a very concrete context. The Withdrawal Bill has been defeated and we are faced with Lenin’s question: What is to be Done? The answer he gives is very specific: ‘this’ is what should be done.

Almost identical letters should be sent to each of the other 31 contracting parties of the EEA Agreement (EEAA) making 8 points, as follows:

• The UK reaffirms its full commitment to the Article 1(1) aim/purpose of the EEA Agreement and intends to continue its membership of the EEA from 29 March 2019.
• The UK assures all other parties that it will continue, post Brexit, to perform its obligations under the Agreement, recognising that these obligations will expand in scope as EEA competences currently lying with the EU are transferred to the UK (an automatic consequence of the transfer of sovereignty that Brexit entails).
• The UK reaffirms its commitment to the existing territorial scope of the application of the EEAA to the territories for which it has responsibility.
• The UK expects all other parties to the Agreement to continue to meet their own obligations to the UK under the Treaty, again recognising that, for the EU, these will be diminished by the transfer of competences that will occur in consequence of Brexit.
• The UK expects its obligations to be equivalent to those of Iceland, Liechtenstein and Norway.
• The UK notes that a switch of EEA governance pillar status occurred, with relative ease, when Austria, Finland and Sweden ceased to be ‘EFTA States’ and became members of the EU in 1995. Though it would have been arguable on a narrow reading of EEAA Art 128(1) that those countries should have re-applied to join the EEAA, that is not what happened.
• In the event of any serious dispute the UK will seek arbitration under international law, for example via the Permanent Court of Arbitration.
• The UK formally gives notice that it reserves its EEAA Treaty rights under international law, recognising that, after 29 March 2019, it will be international law that will be relevant for the settling of any disputes.

Taken together, these points amount to an exhortation to follow international law in relation to the issue of the UK’s immediate post-Brexit EEA status. No action has to date been taken by any party to change the contracting party status of the UK: the UK has not given Article 127 notice to withdraw from the EEAA, and no other party has taken steps to remove the UK from the Agreement. There would be modalities to settle as to the operation of the Agreement in the new circumstances, but no issue of membership to settle. The letters would resolve any uncertainties surrounding the UK’s intent.

The proposal merits some additional commentary. First, keeping minds focused on the proposal (and not letting them wander immediately off on to other issues), there is no question of feasibility. The government can write 31 letters making 8 points, if it so chooses. It can’t of course control the responses, but it has 100% control of the act itself. That may look to be a trivial point, but I suspect the sending of the letters would itself feel liberating. The people yearn for a government capable of taking initiatives and it would be liberating in the very real sense of releasing policy thinking from the choking grip of an irrational red-line.

Second, the act itself has near zero cost. It might be argued that it would eat up valuable time, but, if the alternative is further delay and dithering, with no other Parliamentary consensus in sight, it could also speed things up. And there is good reason to think that an EU first response would not be long in coming.

The EU negotiators have tracked the Article 127 debate in the UK since the Autumn of 2016; they have developed temporary (ultimately unsustainable) holding positions on the issue that can be, and have been, repeated by officials and friendly lawyers in those debates, and played with a straight bat when questioned by inquisitive journalists; and they have considered what to do on 29 March 2018, the last day on which Article 127 notice could have taken the UK out of the EEA on Brexit Day. They will have contemplated the situation that has now arisen and have likely formed views on how to respond.

As to what the response will be, we simply do not know, although over-confident assertions will no doubt abound, as they always do (it is amazing how many people think they have possession of a reliable crystal ball). There is a range of possibilities, at least some of which would be highly favourable to both parties, and that is enough to make the exercise valuable. At a minimum we will discover something new, gaining more insight into how things would stand in the absence of the limiting red-line.

The EU could say something along the lines of “Our position remains fixed, see you in Court”. Then again, it may not. It might be the case, for example, that Michel Barnier spoke truly when he offered the EEA as an option in the past and when he said that, if the UK’s red-lines were changed, that would draw a positive response from the EU (and I think he did speak truly, although the lateness of the hour would provide some ground for resiling from that position now). Or that the EU will behave as it usually has in the past, by being willing to keep talking until nearly the last moment in search of a better outcome. Or that it will be very wary of being seen to be operating beyond the limits of international law in its response: the parties to the EEAA have made promises/commitments to each other in signing an Agreement to work together in pursuit of the Treaty’s Article 1(1) aim and, as the Vienna Convention on the Law of Treaties states firmly, Pacta Sunt Servanda.

For the non-ideological empiricists who make up the bulk of the UK population, I think that the best way to look at David Owen’s proposal is to think of it as an experiment. That may sound scary, but it’s not. The situation calls for adjustments and adaptations and the only realistic way to find out what will and won’t work is to experiment, to try something new and different. The great bulk of human knowledge has accumulated in this way. Every significant advance, including in economic policy, has been made without foreknowledge of the full consequences of the step to be taken. (To which it might be added that where innovations are perceived as having potentially substantial effects, they are generally opposed by a much more numerous band of naysayers.)

What is more certain is that we will at least learn something new; that the response will not be long in coming; and that there are potentially valuable things to be discovered. Defeat of the Withdrawal Bill therefore presents the UK Government and Parliament with an opportunity and, as Sun Tzu said, and as all followers of the former ECJ Judge Franklin Dehousse will know, “Opportunities multiply as they are seized”, to which I will add “and diminish as they are not seized”.

Finally, I think the saddest type of domestic response to the proposal would be – and I anticipate there will be responses of this kind – ‘it just won’t work, so it’s not worth trying’. That is defeatism pure and simple: it is way out of line with what is advised by a cost-benefit analysis and, I suspect, with public attitudes too.

An explanatory note to accompany the letter is to be found on the website http://www.lorddavidowen.co.uk

Mrs May’s assault on freedom: an intellectual error?

The Prime Minister’s letter to the nation of 24/11/18 reveals again the over-riding priority she attaches to reducing immigration, which might reasonably be described as an obsession that has developed since she first became Home Secretary in 2010. For some this will be viewed as a moral failing, but I want to argue here that it is much more of an intellectual failing.

The issue is the fallacy of composition. For those who don’t know it, it refers to the false belief that something that is true for a part of a whole is necessarily true of the whole itself. For example, if I stand up at a football match I see better, but it cannot be inferred from that that if everyone stands up they will all see better.

Just about all students of macro-economics become immunised against it when they learn that, if they decide to save a higher proportion of their income, they can expect their savings to increase, but that, if everyone makes the same decision at the same time, aggregate savings for the economy as a whole may very well fall.
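
For readers who want to see the mechanics, a stylised version of that textbook result (the ‘paradox of thrift’ in the simplest fixed-investment Keynesian model, used here purely as an illustration, not as a claim about any particular economy) runs as follows:

$$Y = C + I, \qquad C = (1 - s)Y \;\Rightarrow\; Y = \frac{I}{s}, \qquad S = sY = I.$$

With investment $I$ fixed, a rise in the saving rate $s$ lowers income $Y$ proportionately and leaves aggregate saving $S = I$ unchanged; and if investment itself declines as income falls, aggregate saving falls. What is true for each saver individually is not true for all savers together.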

The key point to recognise is that economic freedom, of which free movement of persons is a dimension, is an attribute of individual parts of a whole. Thus, the provisions of the EU and EEA Treaties afford rights to individuals to move around freely within given sets of territories, unhindered by many (though not all) of the constraints that might otherwise be placed upon them by public authorities, e.g. by the Home Office in the UK. This is what freedom of movement means in these Treaties: they limit the degree of control over an individual that a national government might otherwise seek to exert.

It would, though, be to fall under the spell of the fallacy of composition if it were to be inferred that, because the control of a government over individual migration decisions is limited, its control over aggregate migration is similarly limited. That is a logical error, and policy practice shows that it just ain’t necessarily so.

Consider, for example, the EU Emissions Trading Scheme (EU ETS) for controlling the aggregate emissions of greenhouse gases, an arrangement that was heavily influenced by British liberal economic thinking of the time. The scheme does not constrain the freedom of an individual emitter of GHGs to increase or reduce its emissions, but it does constrain the aggregate emissions in the jurisdiction (the EU) as a whole.

How then does this reconciliation between the part and the whole occur? The answer is simple: by recourse to the price mechanism, the most familiar method of balancing supply and demand in a commercial society like ours. EU ETS makes available a given number of ‘carbon certificates’ that each entitle an emitter to discharge a defined (carbon equivalent) level of GHGs into the atmosphere. That first exercise determines a cap on emissions for the EU as a whole. The certificates are, however, tradeable, meaning that their holders are free to buy and sell certificates among themselves. If the owners of a particular production facility covered by the scheme want to increase or decrease their emissions in a given month, year or period of years, they are not constrained by public authority from doing just that: they simply buy or sell certificates to match.

EU ETS is a relatively recent regulatory innovation (2003), but the underlying practice has been obvious for centuries. Land is in relatively fixed supply from year to year (it might be said that the whole is highly constrained by nature). However, markets in land mean that individuals can vary their own, individual holdings from time to time via quotidian buying and selling transactions.

And so it could be for migration policy. A government with the sovereignty that a post-Brexit UK will possess could impose a cap on the aggregate number of residence permits on issue, i.e. on the whole, but allow trading of those permits, i.e. allow individuals the freedom to buy or sell. This tradability would render the overall cap consistent with individual freedom of movement.
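
To make the mechanism concrete, here is a minimal sketch of such a cap-and-trade market in Python. Everything in it – the size of the cap, the linear demand curve, the numbers – is invented purely for illustration; the only point being demonstrated is that a fixed cap on the whole is reconciled with unconstrained individual buying and selling through a market-clearing price.

```python
# Minimal illustrative sketch of a cap-and-trade permit market.
# The cap and the linear demand curve are invented for illustration only.

CAP = 100_000  # aggregate permits on issue: the constraint on the 'whole'

def permits_demanded(price: float) -> float:
    """Illustrative aggregate demand: permits sought, across all individuals, at a given price."""
    return max(0.0, 150_000 - 500.0 * price)

def market_clearing_price(lo: float = 0.0, hi: float = 10_000.0) -> float:
    """Bisect on price until aggregate demand just equals the fixed cap."""
    for _ in range(60):
        mid = (lo + hi) / 2.0
        if permits_demanded(mid) > CAP:
            lo = mid  # excess demand: the price must rise
        else:
            hi = mid  # excess supply: the price must fall
    return (lo + hi) / 2.0

if __name__ == "__main__":
    price = market_clearing_price()
    print(f"Cap: {CAP:,} permits; market-clearing price: {price:,.2f}")
    # Individuals remain free to buy and sell at this price;
    # only the aggregate number of permits is fixed by public authority.
```

The same logic applies whether the certificate entitles its holder to a tonne of emissions or to a residence permit: the cap binds the aggregate, and the price allocates it among individuals who remain free to trade.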

The UK cannot do this as a member of the EU, because it doesn’t yet possess the necessary sovereignty, although the EU as a whole does (which it might one day use, for example, because this type of arrangement could be expected to significantly reduce the incentives for illegal people-smuggling and simultaneously render the policing of illegality a smaller-scale challenge). As a member of the EEA with the same status as Norway, however, it could, which is why I looked at this approach in a little more detail in an earlier blog, “A cap and trade system for residency”. https://gypoliticaleconomy.blog/2018/05/05/the-journey-begins/

The bottom line is that there is no fundamental trade-off between membership of the EEA’s Single Market and a capacity to control aggregate migration flows. Rather, it is a matter of ensuring that the chosen means of control are EEAA-compliant. The fact that the Home Office’s currently preferred administrative methods, entailing gross interference in individual decisions, would be non-compliant means only that they should be abandoned in favour of market-based approaches that do not impede individual economic freedom. The risk of a protracted civil division over this issue is, in my view, largely attributable to a lack of (a) imagination and (b) relevant expertise in our own, not-so-dear, Home Office.

Article 129(1) of the draft Withdrawal Agreement: a modest suggestion

When first posted this blog referred to Article 124(1) of the draft Withdrawal Agreement of March 2018.  It has been adjusted to reflect the Article numbering in the November 2018 version. End comments have also been added in the light of the current (19/11/18) situation.

In current circumstances, the EU Brexit negotiators must necessarily consider the likely effects of their own conduct on the state of affairs in British parliamentary politics. This is not a question of interfering in British politics, it is simply a recognition of realities. Any new offer made by the EU affects the political balance in Britain, which in turn affects the prospects for a satisfactory outcome for the EU itself. The balance is simply part of a causal chain linking EU actions to EU consequences.

With this in mind, let me run a suggestion up the flagpole to see if it catches any wind. It is directed chiefly at EU negotiators, since things in the NW archipelago seem a bit stuck in a groove at the moment. It could possibly transform the state of play in Westminster in a way that would open up a path to a Brexit outcome that would be judged satisfactory on both sides. By the ‘British side’ here is meant majority public opinion, not the opinions to be found in the ideological factions of a fractured politics. The suggestion concerns Article 129(1) of the draft Withdrawal Agreement.

Article 129(1), which appears under the heading Specific arrangements relating to the Union’s external action, reads as follows:

Without prejudice to Article 127(2), during the transition period, the United Kingdom shall be bound by the obligations stemming from the international agreements concluded by the Union, by Member States acting on its behalf, or by the Union and its Member States acting jointly, as referred to in point (a)(iv) of Article 2.*  (The end asterisk points to a footnote.)

These international agreements include the much discussed Free Trade Agreements with third countries, e.g. Canada and Korea. Amongst them is the sui generis European Economic Area Agreement.

Article 129(1)’s confirmation of the EU’s support for the continued applicability of the EEA Agreement without recourse to any need for international dispute resolution has been of comfort to many in the UK, but there is one snag. The footnote to Article 129(1) says that:

The Union will notify the other parties to these agreements that during the transition period, the United Kingdom is to be treated as a Member State for the purposes of these agreements.

In relation to the EEA Agreement, it is the footnote that gives rise to the ‘vassal’ or ‘colonial’ status that has created and is creating significant opposition to the proposed Withdrawal Agreement. It obviously fans the flames of the nationalistic sentiments that can cause ruptures in the fabric of international cooperation, but that is not its only effect. It entails the following of market rules without any ability to shape or influence those rules. Its repudiation would therefore likely be supported by at least some Parliamentarians with strongly democratic, but not particularly nationalistic, sentiments.

My suggestion is therefore this. Provide for the sui generis EEA Agreement to be an exception to the general rule established in the footnote (that the UK is to be treated as an EU Member State). Specifically, introduce an option that the UK can instead choose to be treated as an EFTA State, subject of course to the consent of Iceland, Liechtenstein and Norway.

In respect of trading arrangements at least, that would make ‘vassal’ status optional and, when in a tight spot, additional options are nearly always good to have. Although the option would be exercisable by the UK, the amendment would have benefits for the EU too. The European Commission has a big agenda and is heavily stretched in terms of technical resources. Particularly if the transition period is to be extended, the retention of UK experts in the ‘engine rooms’ of regulation could be of significant value, just as Norwegian officials have added significant value in a number of important regulatory areas over the past years.

Brexit sequencing and the ‘interim period problem’: Limbo in our time?

Withdrawal from the EU (Brexit) will occur at an instant on 29 March 2019 and that moment divides the policy questions and processes entailed by Brexit into two periods. The post-Brexit period will itself be divided into two stages since any new, long-term trade arrangements will not be in place on the day after Brexit. It will take time for them to be negotiated, then ratified and implemented. The time sequence we face is therefore:

Article 50 period -> Interim period -> Operational long-term agreement period

Over the two years since the referendum the great arguments about Brexit have revolved largely around the first and third intervals of this sequence, but the interim period is important too. It will be when the substantive long-term negotiations take place and some of its features will be important influences on the outcomes of those later negotiations. The two that I will focus on are the likely length of the period and the degree of control over rule-making that the UK will enjoy during it.

Even now, close to its opening though we are, it is impossible to forecast the detail of how things will pan out during the interim period: there are too many possibilities to contemplate and assess. Even if it were feasible, there would be little value in trying to plan out now, in any great detail, what the UK’s ‘positions to take’ on particular issues should be. As Field Marshal von Moltke (the Elder) put it: “No plan of battle survives first contact with the enemy”. Flexibility is required to adjust to changing realities and to have influence on them. It depends on having (a) a menu of options to choose from and (b) the power to exercise those options. This is what is meant by ‘control’ or, in broader terms, possessing ‘sovereignty’ over decisions.

Since ‘taking back control’ was the major theme of Vote Leave’s referendum campaign, it is natural to ask the following question: In the early part of the interim period, what progress in ‘taking back control’ can be expected? The ‘taking back’ aspect of the question implies that the assessment is benchmarked against the current status quo, in which the UK is a Member State of the EU.

More specifically, consider first what the UK’s degree of control over market rules and regulations will look like on the day after Brexit. Ask of each of the two long-term Brexit proposals currently receiving most attention (Canada+ and Chequers): Will it lead to a stronger or weaker position for the UK on 30 March 2019? Will it provide more or less control/influence over rule-making?

Approaching things in this way, it is immediately apparent that the terms of the current, draft Withdrawal Agreement (WA) imply that Brexit Day will see a reduction in UK control. That is, in response to a popular injunction to ‘take back control’ the Government will have delivered a surrender of control (judged relative to the status quo ante, the EU system). This can be called the “interim period problem”, since it is likely that this initial control deficit will persist throughout the period (although its severity may vary over time). On the control/sovereignty agenda of the referendum campaigns, the UK will have paddled backwards.

That is a major point because, other things equal, very few voters would favour a reduction in sovereignty. Remainers are no different to Leavers in this respect: they might argue for a lesser sovereignty, but only if it were accompanied by the prospect of compensating benefits of greater value. Taken by and of itself (i.e. ‘other things being equal’) a loss of sovereignty is a negative factor.

As things stand under the draft WA of March 2018 there is a plan for a ‘transitional period’ that will last for 21 months, but ‘no plan survives its first contact with realities’. At the start of almost any complex process of negotiation it is difficult to be confident about how long it will take. Benchmarked on international FTA experiences, a four-year start-to-implementation length would be a very impressive achievement for an agreement of the depth, scope and complexity anticipated by the Canada+ and Chequers proposals. These things look simple in abstract, but they invariably turn out to be more challenging in practice. It has, after all, taken the Government more than two years even to come up with only a very broad outline of its own aspirations (Chequers).

The WA in its present form would therefore see a division of the interim period itself into two: (a) the first 21 months and (b) a yet-to-be-agreed extension, necessary to bridge the remaining gap until such time as a new long-term agreement is operative. There would likely be a further price for the UK to pay for the extension and the closest identifiable benchmark appears to be around £10 billion per annum (the sort of payment that the draft WA indicates has been agreed for its 21-month transition period, although things are not put that way in that document).

There is an underlying three-dimensional trade-off between depth, speed and cost at work here. Mrs May opted at the outset for a “deep and special relationship” and that aspiration remains UK policy. The interim period could be shortened by giving up on depth and opting for a shallower, simpler agreement, but that would entail a major shift in government policy and the likely benefits of the agreement would be lower.

A more basic Canada-style agreement, without pluses, might shorten the interim period, but even then it is not just a case of replicating an existing FTA template. The value of UK-EU trade is many times the value of Canada-EU trade and the goods and services mixes involved are rather different. A UK-EU agreement would be a significantly bigger thing than Canada-EU from the outset.

Then there are the customs issues to consider. The operation of even a basic agreement would require a major upgrade in systems, and businesses throughout the land would have to adjust to rules-of-origin reporting. These adaptations are perfectly feasible, but there is another trade-off to consider: the faster things need to be done, the higher will be the costs, including costs arising from operational failures.

The UK cannot unilaterally determine the length of time things will take. EU systems will need to be adjusted too and there is an obvious question to ask about the incentives of EU Member States regarding speed of progress. The transitional arrangements contemplated in the WA are very comfortable for the EU: the longer the interim period, the larger the financial contributions of the UK are likely to be. A UK request for greater speed could be expected to elicit a request for higher financial contributions to cover the EU’s own incremental adjustment costs that a greater pace would entail. There is also the issue that 28 counterparties with differing interests will have influence in the negotiations, each of which will be unlikely to stay silent on matters that touch on its own economic and political sensitivities.

In the case of the Chequers proposal there is a very real question as to whether it is realistically feasible at all. By and of itself the first sentence of the list of Chequers proposals raises enough difficult operational questions to indicate that this would be an administrative snake pit. For the EU and the other EEA contracting parties, the UK discretions (rights) sought in the proposal would serve as an ever-present risk to the well-functioning of the Single Market rule-books (which work as systems of rules – the addition or subtraction of a rule can affect the way the other rules function). The detail here can be left aside for current purposes: the only point that matters is that the interim period entailed by a negotiation based on Chequers could be expected to be particularly protracted.

The problem in all this is obvious. It can reasonably be expected that it will take four years or more, possibly several more years in the case of Chequers, to settle long-term future trading and commercial arrangements. The UK will face four or more years in a sort of fee-paying Limbo. The interim period could be expected to end in 2023 at the earliest, which lies at the far side of the next scheduled General Election.

A double, public stocktaking of how well things are going on the ‘take back control’ agenda can then be expected: the first around the time of Brexit (29 March 2019), the second at a subsequent General Election. On both occasions the government of the day will likely have to acknowledge that, relative to where the UK stood when in the EU, the UK will have surrendered control, the opposite of the Leave injunction to take back control. No doubt the word ‘temporary’ would be used a lot in the Conservative Party campaign and better things would be promised soon, but, at a General Election in 2022, a record of nearly six years of promising benefits that had not yet arrived would likely pose something of a credibility problem.

Fortuitously, history has presented the UK with a potential solution to the ‘interim period problem’. Evaluated against the principles of best-practice policy making (which in their regulatory version are enshrined in domestic statute) it is pretty much a bull’s eye: it comprehensively deals with the problem by eliminating the interim period. Being precisely targeted on the ‘interim period problem’, it causes minimal collateral economic harm and minimally forecloses other paths of policy development. Since it rests on an extant international trade agreement, already ratified by all its contracting parties, it can be implemented immediately upon a consensual agreement. Its most accurate shorthand descriptive label is “Norway First”.

The opportunity arises because of the existence of the European Economic Area Agreement (EEAA) and seizing the opportunity is facilitated by the fact that the UK is itself an existing contracting party to that Agreement, and indeed was one of its founding parties. Although not heavily advertised, the draft WA signifies acceptance by both the UK and the EU that the EEA should continue operating for at least the first 21 months of the interim period.

As things stand, the anticipated rollover of the EEAA will be on the basis that the UK continues to be treated as an EU Member State. If that remains the case the UK will fall within the EU governance pillar of the EEA Agreement. There it will be subject to decisions about EU and EEA legislation and about its operation that are taken on the UK’s behalf by the European Commission, supervised by the European Court of Justice, without any significant UK role in the making of that legislation or the taking of decisions.

The position would change, however, if the UK transitioned to the EFTA governance pillar of the EEA Agreement, in which sit Iceland, Liechtenstein and Norway. These sovereign nations do not share EEA competences with the EU as EU Member States do, nor are they subject to the authority of the ECJ. The EFTA States have their own supervisory arrangements.

This is the ‘Norway option’ in its full sense and, if the transition between pillars occurred simultaneously with Brexit (UK withdrawal from the Treaty of Lisbon), it would eliminate the interim period entirely, for trading and regulatory arrangements at least (there would still be need for transitional arrangements in other areas such as customs). If the transition to the EFTA pillar occurred at a later date (than 29/3/19), the interim period would end on that later date, for example three or six months after Brexit Day.

Under this pillar-switching approach, the question of the precise nature of the longer-term relationship would be a can kicked down the road until after Brexit Day, which is something that will likely happen anyway given the compressed timetable of the Article 50 process. What Norway First would add is UK empowerment during the interim period: in respect of trade-related matters, UK sovereignty would be increased, not diminished, relative to the EU benchmark.

It can be noted at this point that Norway First has a strong resonance with the strategy adopted by the Leave campaigns before the referendum: in a sense it can be viewed as the natural continuation of that strategy. As a matter of conscious choice, those campaigns did not try to specify a ‘plan’ for what should happen in the event of a Leave victory: that would be a matter for democratic determination in the post-referendum period. The electorate voted knowing that the basis of the choice was ‘Leave First and let Government and Parliament determine the future EU relationships later’ vs Remain (although I suspect that few of us expected such hapless governance to follow).  Nothing about the strategy was concealed: the can labelled ‘what next?’ was, transparently, to be kicked down the road.

Given that, the only mandate that properly needs to be met by 29 March 2019 is withdrawal from the Treaty of Lisbon. There is no mandate to leave the EEA: it was one of the matters that was put in the kicked can at the time of the referendum. Norway First is therefore a full and complete response to the referendum vote: it would deliver the Leave First result that the majority voted for.

A Prime Minister speaking on the day after Brexit could then truthfully say “We have delivered the Brexit mandate to withdraw from the EU and have already achieved the greater part of the implied injunction to take back control. We have done that in these areas: [insert list of areas here].” The list of areas would include free movement of workers, which may come as a surprise to many, but is no less true for that: the EU Treaties themselves allow for limitations to be placed on freedom of movement and the EEA Agreement provides greater scope to do that than do the EU Treaties (Lisbon and TFEU), but the really big point is that Norway First would transfer the competences to make the relevant decisions from the EU to the UK. This transfer of competences can be most easily seen in the final sentence of Article 113(3) EEAA, but it runs through the entirety of the EEA Agreement’s provisions.

At the time of writing, Norway First is receiving attention because of interest in a ‘Norway then Canada’ Brexit strategy that has been advocated by the Conservative MP Nick Boles. The strategy comprises an aspiration for a long-term Canada+ agreement, but with the earlier, interim period problem resolved by Norway First. It has very obvious attractions for a significant group of parliamentarians who favour Canada+ or some shallower type of FTA for the longer term, but its key aspect should command a wider support. That key aspect is the elimination of the interim period, i.e. the avoidance of a potentially protracted period of Limbo.

Norway First is directed solely at the interim period problem and can be combined with any of several alternative approaches to the operational long-term agreement period. These include, in increasing order of depth and scope: a bare bones FTA, Canada, Canada+, EFTA, EFTA v2.0, Chequers, Norway (EEA only), Norway (EEA+EFTA), and return to the EU.

The can labelled ‘long-term arrangements’ should therefore, advisedly, be kicked down the road, and in all probability it will be anyway. It is a can that has given rise to lots of noise, game-playing, and bitter personal rivalries, all of which have distracted attention from the second can in play, which carries the label ‘the interim period’. The contents of the latter can should be dealt with immediately, because the opening of the interim period is getting very close now. The most important question the can contains is: Empowerment or Limbo?

It is not the most difficult question a government has ever faced, but for some reason I cannot suppress a picture in my mind of a UK Prime Minister returning from Brussels with a document that says “Successful Agreement” on the cover, but whose content implies “Limbo in our time”.

The bespoke agreement option

This is an extract from a submission to the Scottish Parliament made on 15 August 2016.

One argument in circulation at the moment is that the UK should withdraw from the EEAA in order to negotiate a better, bespoke agreement with the EU. There are two points that I would make about this policy position.

First, there is the timing issue already raised. Bespoke arrangements may take a long time to be negotiated and hence might be expected to contribute to a protracted period of political and economic uncertainty. Added uncertainty can be expected to have adverse effects on investment. Such negotiations also tend to absorb significant administrative resources.

Second, whilst it is highly likely that there are arrangements that would be better for the UK/Scotland than the existing terms of the EEA Agreement – the Agreement was, after all, negotiated and drafted a quarter of a century ago and I think it would be fair to say that it is not one of the finest pieces of legal draftsmanship in existence – it should always be borne in mind that the possibility of achieving something better is accompanied by the possibility that something worse could be negotiated. One of the maxims I have used in my working life in public policy is “never underestimate the capacity of well-intentioned government to make matters worse” (and governments are not necessarily always well intentioned).

In current circumstances there are also some severe doubts about the availability of negotiating skills on the UK side. This is not just a matter of a dearth of experienced trade negotiators: the number of old-fashioned trade unionists (brought up in a culture of hard bargaining on behalf of their members) now to be found in front-line politics, who might in other circumstances have served, is much diminished.

A concrete example illustrates the possibility of ending up with something worse. In a referendum the Swiss rejected membership of the EEA at its outset and subsequently negotiated a series of bespoke agreements with the EU (reported to total over 120), including in relation to the free movement of persons. Much more recently, in a referendum on 9 February 2014, the Swiss voted to impose stricter immigration controls, but, under the terms of the relevant agreement, this has to be negotiated with the EU. Two and a half years later the negotiations are still ongoing. In contrast, had Switzerland been a Contracting Party to the EEAA, the relevant actions could have been taken unilaterally and without significant delay.

Negotiating bespoke arrangements could pose particular issues for Scotland. For example, I understand that Scottish fishermen have already expressed anxieties that the potentially beneficial effects of repatriation of fisheries policy powers will be bargained away in Brexit negotiations. My general view is that the Scottish Government and Parliament will have an easier task in monitoring developments and influencing outcomes in the context of a negotiation based on making “necessary amendments” to a relatively simple, existing Agreement than in staying abreast of the more complex, more protracted negotiations that starting from scratch would likely entail.

This last point is reinforced by the fact that the ‘off-the-shelf EEAA’ has been previously scrutinised by the Parliaments of Iceland and Norway, countries whose interests have a more than average degree of alignment with Scottish interests in some major policy areas.