The Lockdown Mysteries

The question I keep asking is:  Why, nearly a year after they were first trialled, have there been no serious attempts by government or its advisors to assess the effectiveness of the packages of restrictive social and economic measures/interventions that, in their more extreme forms, are referred to as Lockdowns?  This absence of engagement with such a basic question of regulatory impact assessment is the first ‘mystery’ of Lockdown.

The position of those who strongly advocate Lockdowns seems to me to be like that of a physician who says to a patient: “I would advise you to take this medicine because I believe that it is effective, but I have to tell you that its effectiveness has never been subject to any rigorous testing and it does have very major, harmful side effects.”  There would no doubt be some takers – ‘believers’ who place heavy trust in the opinion of the physician and are willing to discount all that follows after the ‘but’ – but the second mystery of Lockdown is that there have been so many willing not only to take the prescription, but also to endorse its coercive application to all citizens.

In response to the question of why the assessments have not been made, the principal answer seems to have been that it is all too difficult and that the exercises would be unlikely to produce credible results. That, though, would be of no comfort to the patient:  it simply emphasises that there is no evidential basis for the advice.

Anyone familiar with policy impact assessments might also have some follow-up questions: “These types of assessments are routine; what is it about this one that makes it so much more difficult that it is not attempted?” “Is it lack of data (there seems to be a lot)?” “Is it because the intended impacts are too diffuse (they seem to be clearly focused on infection rates)?” “Is it because the timing of the measures is uncertain (they seem to occur at well-defined dates)?” And a keen student of politics might ask: “Is it because the assessments might produce (politically) inconvenient answers?”

Assessing the impact of a package of policy measures/regulations

To see that impact assessment is feasible, consider the most basic of epidemiological model frameworks, the Susceptible-Infected-Removed (SIR) framework.  It starts with an identity: if I(t) is the number of people who are infected & infectious at any one point in time, then the rate of change of I(t) is the rate of new infections at time t less the rate at which the sub-population of the infected ceases to be infectious.

Next it hypothesises that the rate of new infections is the product of beta, I(t) and S(t), where S(t) is the number of susceptibles in the population at the given time (t) and beta is a positive constant, a model parameter:

Rate of new infections = beta*I(t)*S(t).

Here I(t)*S(t) is just the total number of possible binary contacts between individuals in the infected group and individuals in the susceptible group.  It is a measure of the virus’s maximum number of potential opportunities for transmission at a given moment.  Beta is the effective contact rate, the fraction of the potential opportunities for transmission that are realized.

More complex models take things further than this, including by disaggregating these macro concepts into sub-sets, but this basic framework suffices for current purposes. 
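For readers who prefer to see the mechanics, the basic framework can be put into a few lines of code.  The sketch below is my own minimal, discrete-time illustration, not any official model: the parameter values (beta, gamma, the initial conditions) are invented purely for illustration, with the population expressed as fractions.

```python
# Minimal discrete-time SIR sketch.  All numbers are illustrative assumptions.
# beta  : effective contact rate (fraction of potential transmissions realised)
# gamma : daily rate at which the infected cease to be infectious

def sir_step(S, I, beta, gamma):
    """One day of the basic identity: change in I = new infections - removals."""
    new_infections = beta * I * S   # rate of new infections = beta * I(t) * S(t)
    removals = gamma * I
    return S - new_infections, I + new_infections - removals

def simulate(S0, I0, beta, gamma, days):
    S, I = S0, I0
    series = [I]
    for _ in range(days):
        S, I = sir_step(S, I, beta, gamma)
        series.append(I)
    return series

# An epidemic curve: infections rise, peak and fall as the susceptible pool shrinks.
curve = simulate(S0=0.999, I0=0.001, beta=0.4, gamma=0.2, days=120)
```

Running it produces the familiar single-peaked epidemic curve; everything that follows is about how a change in beta would show up in such a series.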

The hypothesis to be tested is that a set of restrictive social, economic and political measures introduced at a particular time will substantially reduce the rate of new infections. The measures are typically referred to in epidemiological circles as non-pharmaceutical interventions or NPIs, a label that euphemistically insinuates that they are some other sort of medical intervention.  Here I will refer to them by what they are: socio-economic measures (SEMs) that place restrictions/constraints on individual human conduct.

Given the basic equation for new infections, the only way that SEMs can affect the rate of new infections is via their effects on the effective contact rate, beta.  When considering SEMs, therefore, beta ceases to be parametric and turns into something else: into what economic modellers would call an endogenous variable, something that is determined within the modelling framework.  Thus, what stands to be examined is a chain of causality that runs:  SEMs -> beta -> new infections.

Within this wider modelling framework, which is necessarily socio-economic-political (as well as epidemiological), the first link in the chain remains egregiously under-examined.

At this point, before proceeding, let me make three side comments:

  1. An even wider, more sophisticated political economy would recognise that the timings of the SEMs themselves are characterised by significant endogeneity. The introductions of Lockdowns do not ‘come out of the blue’ in a random or unpredictable way:  they tend to be first developed and then implemented as responses to upward surges in infections, cases, hospitalisations and deaths, particularly when those things are already high.  That matters because it should change, in rather fundamental ways, the specification and estimation of the models.  There is a reverse causality from infections to SEMs that needs to be addressed.  The most obvious issue is that the assessment & implementation lags involved can lead to measures being introduced after (unobserved) infections have already peaked.  It looks so simple – a Lockdown is imposed, infections fall, therefore the Lockdown caused the fall (post hoc, ergo propter hoc) – but what is being observed may be a Rain Dance. Closer investigations are required.
  2. It is a debilitating limitation of the use-in-practice of this type of model that it slips in an auxiliary assumption that the only changes that significantly affect beta are induced either by changes in virus transmissibility or by the SEMs. That is not plausible:  it is directly contradicted by evidence indicating that the known presence of a potentially threatening virus is sufficient to induce major, risk-avoiding adaptations in the behaviour of the public (what might be called ‘natural’ adaptation, where the word ‘natural’ is used, in a classic sense, for things that happen which are not at the will of Leviathan). To be blunt, this is extremely crude social theorising.
  3. The euphemism NPI serves to ‘medicalise’ the language in a way that insinuates that experts in epidemiological models possess expertise in assessing the social factors that co-determine the value of beta.  There is no reason to think they do, and some reasons for thinking that they may be worse suited to the task than an intelligent lay person, because they bring their own cognitive ‘expert biases’ to the problem.  The language also helps to foreclose the contributions to understanding the evolution of a contagion that scholars focused on those social factors could bring. Among the latter are economists, who tend to place a heavy emphasis on incentive structures as sources of influence on human conduct: most economists would, I think, tend quickly toward a specification that made beta a function of I(t), with beta lower the higher the level of infections. That comes from very basic decision theory in conditions of risk, a relationship that has been empirically mapped in large numbers of different decision contexts. Such a re-specification changes the set of differential equations driving the models, and hence changes the shapes of the epidemiological curves they generate. By recognising a negative feedback – an increase in infections lowers beta and hence reduces the rate of increase of infections – it flattens the curves.
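The third point can be illustrated with the same sort of minimal sketch. The functional form beta0/(1 + k·I) and every parameter value below are my own illustrative assumptions, chosen only to make the qualitative point: the negative feedback from infections to beta flattens the curve.

```python
# 'Natural adaptation' sketch: beta falls as infections rise.
# The form beta0 / (1 + k*I) and all numbers are illustrative assumptions.

def simulate(S0, I0, gamma, days, beta_fn):
    S, I = S0, I0
    series = [I]
    for _ in range(days):
        new_inf = beta_fn(I) * I * S   # new infections = beta(I) * I(t) * S(t)
        S, I = S - new_inf, I + new_inf - gamma * I
        series.append(I)
    return series

GAMMA, BETA0 = 0.2, 0.4
constant = simulate(0.999, 0.001, GAMMA, 200, lambda I: BETA0)                 # fixed beta
adaptive = simulate(0.999, 0.001, GAMMA, 200, lambda I: BETA0 / (1 + 50 * I))  # feedback

# The negative feedback produces a markedly lower peak: a flatter curve.
assert max(adaptive) < max(constant)
```

The comparison is the whole point: identical starting conditions, but the run in which the public responds to rising infections never climbs as high.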

There is little doubt that the intent of SEMs is to reduce the value of beta.  The question that I therefore want to put on the table, for the umpteenth time, is this:  What does the evidence indicate concerning the effects on beta of the SEMs in general and of those more socially restrictive bundles of measures referred to as Lockdowns in particular?

It is a straightforward empirical question and it is not impossible to address. It is therefore reasonable to expect that it would be an exercise undertaken as part of wider regulatory impact assessments.  However, as indicated, in nearly a year of Covid experience now, such assessment exercises are notable by their absence. 

How is this kind of exercise conducted?  Although new infections are not observable, there are correlates of them to be found in the measurements of cases, hospitalisations and deaths, which can therefore be used as proxy variables, with suitable adjustments for the lags involved.  Echoes or footprints of changes in the rate of new infections should appear in these latter measurements, particularly if the changes are of substantial magnitude: a greater effect would serve to make it easier to pick out the signal from the noise.  (I have been involved in at least three impact assessments where it was sufficient to note, by eyeball, singular kinks in otherwise smooth curves that corresponded exactly with the policy measures under investigation. The first dates back as far as the 1970s and concerned the impact of the nationalization of British Steel on the rate of diffusion of new steel-making technology.  Interestingly, the relevant diffusion models involved mathematics rather similar to those of basic epidemiological models: the diffusion curves are sigmoids, akin to those shown in charts of cumulative cases and cumulative deaths to be found in the Covid statistics.)   

The analyst has a set of high frequency (daily) data time-series available.  The dates of the interventions are known, so it is clear approximately when it is to be expected that echoes/footprints of a significant reduction in beta might be found. It is not a matter of looking for a needle in a haystack.  The techniques involved are bread-and-butter methodologies in the social sciences.  In financial economics and financial accounting alone there have been literally thousands of research papers covering the ground, before getting to applied economics more generally.

So, if the conjecture to be tested – that a bundle of harshly restrictive SEMs has a substantial downward effect on beta – is right, what does the footprint of an effective bundle of measures look like?  The diagram below illustrates.

It shows a highly magnified, short segment of an epidemiological curve around the time, T, at which a bundle of restrictive SEMs is introduced.  Given the magnification, it suffices to represent the segment as linear (an approximation) to simplify the drawing (nothing substantive hinges on this simplification).  It is shown as a rising line ABC – new infections are increasing with time – but it could equally well have been drawn falling, or flat (as would be appropriate if SEMs were introduced right at the peak of a new infections curve).

A reduction in beta at the given time will imply a discontinuous drop in new infections, shown as a drop from B to D.  A fall in beta of 10% would predict a 10% fall in daily infections, a fall of 20% in beta would predict a 20% drop in daily infections, and so on. 

But then what next, following the fall to D?  At D the evolution of the epidemic is moved to a different epidemiological curve, one with a lower beta but which passes through point D.  That lower curve will be subject to the same equation for new infections, but with a lower value of beta.  The equation also implies that the slope of the destination (lower) infections curve – the slope is the rate of change of new infections with respect to time – will also be lower.  The evolution of the curve from time T becomes, for a while at least, flatter.

There, then, is the signature of an effective SEM: a sharp and immediate drop in infections, followed by a flatter evolution of new infections thereafter.  Diagrams like this one did in fact appear in the public eye at the outset of the Spring 2020 epidemic, but they became rather harder to find in later periods.  The question is: have repeated signs of this pattern been found in the data?  The general answer is that, to the eye at least, they have not (which may account for the decline in production of diagrams showing discontinuities) and, more surprisingly, there appears to be an aversion to testing for such effects at any level more sophisticated than the eyeball.
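The predicted footprint is easy to generate in a toy model.  In this sketch (all numbers invented for illustration) beta is cut by 20% at day T; because the state of the epidemic at T is identical with or without the cut, daily new infections drop by exactly 20% at T, and growth thereafter is flatter.

```python
# Footprint of an 'effective' SEM in a toy SIR model (illustrative numbers).

def daily_new(S0, I0, gamma, days, beta_fn):
    """Return the series of daily new infections, beta(t)*I(t)*S(t)."""
    S, I, out = S0, I0, []
    for t in range(days):
        n = beta_fn(t) * I * S
        out.append(n)
        S, I = S - n, I + n - gamma * I
    return out

T = 20
base = daily_new(1 - 1e-6, 1e-6, 0.2, 40, lambda t: 0.3)                     # no SEM
cut  = daily_new(1 - 1e-6, 1e-6, 0.2, 40, lambda t: 0.3 if t < T else 0.24)  # 20% cut at T

drop = cut[T] / base[T]   # the discontinuous fall from B to D: ~0.8
g_cut, g_base = cut[T + 1] / cut[T], base[T + 1] / base[T]
assert abs(drop - 0.8) < 1e-9 and g_cut < g_base   # sharp drop, then flatter growth
```

The two asserted facts are precisely the two parts of the signature: the B-to-D discontinuity, and the lower slope of the destination curve.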

To repeat an earlier point, there is no lack of data available for the task, although there are major challenges in the fact that new infections themselves are difficult to get at and extensive reliance has to be placed on proxy variables.  By their own natures the proxy variables will tend to smooth out the discontinuity to some extent, but, for cases and hospitalisations at least (deaths are more problematic), the smoothing can be expected to be limited to a period of a few days.  The discontinuous fall in beta shown in the diagram can be expected to be echoed in the proxies by a sharp turn downwards, followed shortly by a sharp return to a more normal pattern.  (Mathematically, the second derivative of the relevant proxy data will turn sharply negative at some point after the introduction of the measures, followed shortly thereafter by a sharp turn positive – followers on Twitter might notice again an old emphasis on the significance of second derivatives!)  So that’s what to look for.
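A crude version of that second-derivative test can also be sketched.  Here I take logs of a simulated daily-infections series first – my own simplification, since on a log scale a constant growth rate gives a second difference of roughly zero, so the spikes stand out; the numbers are again invented.  A 20% cut in beta at day T leaves a sharp negative turn immediately before T and a sharp positive turn at T.

```python
import math

# Second-difference signature of a 20% beta cut at day T (illustrative numbers).

def daily_new(S0, I0, gamma, days, beta_fn):
    S, I, out = S0, I0, []
    for t in range(days):
        n = beta_fn(t) * I * S
        out.append(n)
        S, I = S - n, I + n - gamma * I
    return out

T = 20
series = daily_new(1 - 1e-6, 1e-6, 0.2, 40, lambda t: 0.3 if t < T else 0.24)
logs = [math.log(n) for n in series]

# Discrete second difference of log new infections, keyed by day t.
d2 = {t: logs[t + 1] - 2 * logs[t] + logs[t - 1] for t in range(1, len(logs) - 1)}

sharp_down = min(d2, key=d2.get)   # day of the sharp negative turn
sharp_up = max(d2, key=d2.get)     # day of the sharp positive turn just after
assert (sharp_down, sharp_up) == (T - 1, T)
```

In real proxy data the spikes would be blurred over the few days of smoothing, but the negative-then-positive pair is the pattern being looked for.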

More than that, there are corollaries of the basic hypothesis that can also be tested.  The harsher SEMs have typically been imposed at regional or national levels, but there are time series of data at UTLA (upper-tier local authority) level. An effect might be difficult to detect from only one time series, but major regional or national level SEMs should have simultaneous impacts with similar geometries across all the UTLAs to which they apply.  The existence or non-existence of such simultaneous impacts is therefore a matter that could be examined.
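That corollary is easy to state as a concrete check.  The sketch below uses synthetic regional series – invented scales and growth rates, not real UTLA data: if a national measure bites, the sharpest one-day slowdown in growth should land on the same date in every region it covers.

```python
# Cross-regional simultaneity check on synthetic series (not real UTLA data).

def sharpest_slowdown(series):
    """Day of the largest one-day fall in the day-on-day growth rate."""
    g = [b / a for a, b in zip(series, series[1:])]
    drops = [g[t] - g[t - 1] for t in range(1, len(g))]
    return 1 + drops.index(min(drops))

T = 10  # date of the hypothetical national measure

def regional_series(scale):
    series, x = [], scale
    for t in range(25):
        series.append(x)
        x *= 1.15 if t < T else 1.05   # growth slows at T in every region
    return series

# Regions differ in scale but share the dated footprint.
dates = [sharpest_slowdown(regional_series(s)) for s in (100.0, 250.0, 40.0)]
assert dates == [T, T, T]
```

Real data would of course be noisy; the point is only that coincidence of the detected dates across many UTLAs is itself a testable implication.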

Similarly, there are observations of occasions when SEMs have been removed, in periods when the epidemic has been on the wane (illustrating the endogeneity of the timing of changes in SEMs).  In these cases we are moving in the opposite direction, so the underpinning proposition – that SEMs have major, downward effects on beta – implies that their removal should lead to a discontinuous upward jump in beta and in new infections, followed by a steeper decline in new infections thereafter.

Given these (and other) ways of assessing the effects of SEMs, the lack of curiosity about testable effects is something I find astonishing, at least among people who would want to think of themselves as scientists.  More restrictive SEMs are not the normal sort of placebo measures often to be found in public policy when politicians want to do ‘something’ to reassure the public or a relevant lobby group, whilst reasonably safe in the knowledge that the chosen ‘something’ is very unlikely to cause any major harms.  Policy advisors can shrug at that practice in good conscience – no material harm done – but Lockdowns are a very different kettle of fish:  they cause very major (because so widespread) harms.

Why the lack of scepticism?

“Science is the belief in the ignorance of experts” said the physicist Richard Feynman, indicating the importance of a Humean scepticism in scientific endeavour.  As in Hume’s day, however, scepticism is anathema to many ‘believers’ (I use one of the possible antonyms to ‘sceptics’).

There are likely a range of reasons why so many have simply accepted the assertion that restrictive SEMs lead to substantial reductions in infections in the absence of any rigorous assessment to test that claim against the evidence.  The sentiment ‘the wish is father to the thought’ (we believe things we want to be true) is likely one of them.  In the face of a fear-inducing increase in risks and in the absence of a vaccine, we can desperately want it to be true that there are things that can be done to substantially reduce those risks.

However, there are also advocates of lockdown who, in other contexts, would tend to look favourably on a desire to examine the evidence when developing and implementing public policies.  Why are they not their usual, scientifically-sceptical selves when it comes to this issue?

The explanation of it lies, I suspect, in those auld enemies of social scientists, intuitions.  Intuitions, an indispensable brain function for making snap judgments, can be helpful in suggesting hypotheses and theories, but they are best set aside when it comes to testing propositions.  Some may survive the tests, but very many won’t.

Two intuitions may be at work in relation to SEMs.  The first is that, other things being equal, social distancing reduces transmission (and hence the effective contact rate, beta), and there are good grounds for thinking that is true in most circumstances. The problem with this kind of intuition is simply a very general one:  other things rarely stay equal in the aftermath of major policy changes.

The second intuition, which is necessary to get from the first to a policy position, is much shakier.  It is that restrictive SEMs serve to increase social distancing in some aggregate sense. 

The problem is that the SEMs are ill-targeted, blunderbuss measures.  Similar restrictions are applied across a whole patchwork of different socio-economic contexts.  There can be contexts in which they will reduce transmission, others where they have little or no effect, and yet others where they have perverse (opposite to intended) effects. 

As a general matter, it might be posited that public spaces will be emptier, but that other spaces will be fuller (at any given time, people have to be located somewhere, they don’t just disappear for a while).  Shopping streets, restaurants and pubs might be emptier, but homes will likely be fuller for longer periods, and therein lies an illustration of the problem.  For millions of Britons, the home is one of those confined, relatively crowded spaces (which may be poorly ventilated in colder periods of the year) that Japanese Covid policy has advised positively avoiding.  Moreover, this may be particularly true in areas of the country that rank high in tables based on metrics of social and economic deprivation where, as a matter of record, estimated infection rates have been highest.

Homes can also be locations where there are more face-to-face conversations in a given interval of time, and where those conversations can be louder:  the home is a place where more intense emotions can be expressed loudly in ways that tend to be repressed in public, e.g. parents and their children have been known to yell at one another from time to time.  Again, face-to-face interactions and loud noise are things that Japanese Covid policy has advised avoiding as much as possible.

On top of these points is the fact that many millions of workers are still at work:  their activities are, for a variety of different reasons, deemed essential. Those workers return to their homes when the day is done, connecting up the ‘at work’ and ‘at home’ contact-networks. Additionally, the critically important group of healthcare and care workers connects the other sub-networks to the set of those who are most susceptible to the worst effects of the virus, the very elderly and those suffering from relevant co-morbidities.

Finally, of course, there are the usual, hugely important issues of compliance with the regulations, but enough said.  Contexts are almost uncountably varied and the detailed investigation of them would be an impossible task. 

We are left in the end, therefore, with the various time series of data that are available and with the question:  Is there anything here to suggest that the more restrictive bundles of SEMs have had material, downward impacts on the (aggregate) effective contact rate, beta?

As a Bayesian – a disposition that requires frequent adjusting of (probabilistic) beliefs to new evidence – I keep an open mind.  A rigorous impact assessment of the evidence could go either way, and could even suggest different directional effects of SEMs at different stages of the contagions and in different seasons. However, looking at the data I would bet that, whatever the directional effect indicated by the exercise, it would be found to be rather small in magnitude.  Big effects tend to lead to patterns of observations that should be easy to spot (like the effects of nationalization on the diffusion of new steelmaking technology): major kinks in the curves of the predicted type would likely be apparent to the eye, and they are not.    

Author: gypoliticaleconomyblog

Lifetime student of political economy, retired academic and regulator.
