
Andrew Rix is an independent research and evaluation consultant, and Honorary Researcher at Swansea University School of Medicine.
At the end of the recent Rethinking Medicine webinar, I felt uneasy. At one level the ‘movement’ is repositioning medicine in the broader context of the socioeconomic determinants of health and wellbeing, but in practice what is happening on the ground is a whole range of changes to service delivery, concerning both the ‘what’ and the ‘how’.
New ways of thinking are leading to new ways of working, changed relationships and, most importantly, a reappraisal of the position of evidence-based medicine and the role of the patient.
We heard of a ‘narrative’ to plot progress, but it seemed to be mainly a retrospective account of initiatives that have been tried and have attracted support.
‘Evidence’ was highlighted as a tool for persuading others, but not necessarily as a way of distinguishing between options.
It brought to mind Rossi’s Iron Law of evaluation: ‘The expected value of any net impact assessment of any large-scale social programme is zero.’
This law is derived from experience in programme evaluation of multiple interventions, which individually have merit, but lack coherence of intent, form or measurement. Add them together, and the answer is nothing (or at least inconclusive).
Changes in the philosophy behind what services a health system chooses to offer have potentially far greater implications and impact on public and individual health and wellbeing than those arising from, for example, tweaking individual treatments.
Changes to clinical procedures are subject to rigorous research, while service evaluation is often given a ‘light touch’. The changes we are discussing here, which go way beyond small adjustments to delivery, may at best be evaluated on a one-off basis, probably ‘light touch’, or perhaps not at all.
A potpourri of evaluations, each designed to reflect the unique character of a specific approach, does not add up to evidence, no matter how large the pot. We need to be able to analyse its contents systematically, and that means a model, a typology and some means of measurement.
While accepting that more traditional, positivist methods may be inappropriate, and that qualitative approaches are used successfully in community development evaluation, we need to start thinking now about how to inject some rigour into the evaluation framework. Generalisations drawn from post hoc rationalisation are not credible evidence.
Slightly more convincing are the results of evaluations that bear out Rossi’s Stainless Steel Law: ‘The better designed the outcome evaluation, the less effective the intervention seems.’
In other words, tight definitions restrict the claims that can be made and reduce double counting. Not surprisingly, enthusiastic innovators steer clear of assessments with strictly defined outcomes (although the results are less liable to outright rejection). Such tightly defined outcomes cloud the ‘big picture’ claims made for innovation: shared decision making is a good example.
The reality is that we know very little about changes attributable to our rethinking of medicine. We need to start collecting data systematically so that we are at least in a position to categorise different approaches, theories of change, target groups and inputs. Only then do we stand any chance of convincing policymakers that the outcome of our thinking is worth applying in a system so profoundly rooted in the clinical model.
When enthusiasts talk to other enthusiasts, there is a temptation to see the world as full of enthusiasts. The reality is that the status quo holds most of the cards, including those relating to the rules of evidence.
Some informed thought now could save much disappointment later.