Monday, May 04, 2015

Monday Forum - more on evidence based approaches

In yesterday’s post, Sunday snippets - problems with evidence based public policy, I expressed my reservations about the weight currently placed on evidence based public policy. I thought, therefore, that I might make it the subject of today’s Monday Forum.

Looking back over past posts, it seems that I first wrote on evidence based approaches at the start of 2007. I was especially interested in the application of evidence based approaches in medicine and their spread to other areas. I also found an early comment from 2tanners on the topic, so he has been consistent. It was such a nice story that I am repeating it here:


"Just on evidence based Scottish doctors, perhaps the most famous is Sir Arthur Conan Doyle, who used to emphasise the importance of observation and evidence in medicine as a medical lecturer.

One of his favoured tricks was to take a beaker of foul tasting fluid, dip his finger into it, taste it, pull a face and then ask the class to do the same. Only after they had all tasted the fluid did he take them to task for not noticing that he had dipped one finger, but tasted another. Unlike them. :)" 

I don’t have time to summarise that earlier material, but at the time I pointed to some of the problems that can arise from the blind application of the approach. Two quotes to illustrate:

"In Real world medicine a Scottish doctor expresses his concerns about the application of evidence based medicine in a UK context. In doing so, he makes a distinction between the measurable and immeasurable, suggesting that the focus on the measurable could blind.

He has a point. Part of the reason for the development of evidence based medicine lay in the need to challenge and test previously accepted medical nostrums. However its blind formalised application can distort practice to just the measurable. This holds especially where application is mandated through formalised Government rules".

And again:

"In his interview Professor Sutton defines evidence based management as the simple willingness to find the best evidence you can, and then act on it. But this is not always easy: It's hard to tell what's right and what's wrong, and anybody can be a management expert. It's a signal-to-noise ratio kind of problem: There's just too much stuff out there. And what sells best is by no means the best way to actually practice management. 

He goes on: People are attracted to brand-new ideas and things. The way most knowledge is developed is that people build on one idea, or on nothing at all. A consequence is that the same new things get discovered every six or seven years, and just relabeled. Think of business process reengineering, which is built on a whole lot of earlier ideas."


Taking these quotes as an entry point, I wondered about your own experience with the application of evidence based approaches. As always, feel free to wander!

17 comments:

2 tanners said...

I wrote a huge long screed and deleted it. If you know what outcome you want and you fail to research the environment, policy options, current economic movements and past relevant policy failures and successes, the chances are very small that you will succeed.

And the chances are very large that you will recapitulate history, having refused to learn from it.

Jim Belshaw said...

I think that I've become perhaps too cynical, 2t.

I agree with your point; we tried to do this. But two problems arise. One is the way that outcomes are measured: today, only measurable outcomes are allowed. The second lies in the failure to recognise the importance of experimentation and failure.

Winton Bates said...

I watched Q&A last night. The question of evidence based policy was brought up at several points in the discussion of various issues.
I think that helped give me the impression that the level of discussion was unusually sensible.

Jim Belshaw said...

I think that's right, Winton. If you take Singer's point on the changing definition of rape and its impact on the stats, you have an evidence based point. He had to qualify it heavily at once, to make the distinction between the impact on measurement and the question of what's right.

2 tanners said...

Harkening back to your university philosophy, evidence will never determine the course you should follow (you cannot derive an "ought" from an "is" alone).

Having decided what you ought to do/achieve/etc, evidence becomes useful in determining the 'how'.

We use measurable outputs to support (not prove) claims of achievement in less measurable outcomes. E.g. We OUGHT to make women safer so we WILL build and staff refuges because there is evidence to show this works. So the number of sustainable refuges is the measurable output, building and staffing them is the action and safer women is the outcome.

What's so hard?

Jim Belshaw said...

Hi 2T. Back to you on this tonight. Jim

2 tanners said...

And while you are responding: the last Government organisation I worked for didn't like failure (in a pretty major way), but it prepared independent reports on its projects and carefully tried to avoid repeating those failures.

It was very prepared to experiment, and knew that context was vital.

Anonymous said...

Really? Is this the best way to make 'women safer'? It's a typically Australian, statist approach.

DG

2 tanners said...

DG, I didn't mention "best" anywhere, and I implied this was an imperfect approach through the need for monitoring and evaluation. The domestic violence problem is multifaceted, with societal, behavioural and economic roots. I am simply talking about one small policy to deal with one identified situation. I have not suggested that the policy is sufficient, simply that it has a chance, ON THE EVIDENCE, of being more effective than alternative policies.

I've put up; now it's your turn.

Show me an example of a successful non-evidence based policy, and why it was better than any alternative.

Jim Belshaw said...

Hi 2T. While most policy is based to some degree on evidence, the current formulation of evidence based policy with subsequent evaluation is quite recent. It’s also very rigid.

The NSW public school system is an example of a primarily non-evidence based policy. Other examples include the spread of adult education, which was driven by ideas of equality and social advancement.

2 tanners said...

Jim, I'm not familiar with your examples. How were the decisions as to WHAT to do (not why to do it) taken, and why do you say the examples were successful?

Jim Belshaw said...

Hi Bob. I have a little more time tonight so a fuller response. I’m not actually sure how far apart we really are. This might help clarify.

Recently I was looking at some papers connected with a major policy review. The document in question was the draft outcomes framework that was then to provide the base for the evaluation framework. The outcomes were expressed in hierarchical terms with broad result categories supported by performance indicators. The selection of indicators was based partly on research examples as to what was relevant, partly on the statistical measures available.

To my mind, the whole thing was a potentially dangerous waste of time because of two disconnects. The first disconnect was that between the indicators and the core policy problems that were driving the changes in the first place. The second was between the multiple statistical indicators and what was actually happening on the ground.

This is not a unique problem. Closing the Gap has many similar features; it is actually undeliverable in a statistical sense and is also disconnected from needs. It is based on certain types of evidence, with results measured by another set of indicators.

Turning now to the NSW public education example. The why was based on the belief that a certain minimum level of compulsory primary school education was good for the community. The what or how required the provision of schools. With a cash constraint accentuated by a rapidly spreading population, the challenge was to get some education out there, so there was a whole series of improvisations. The output and outcome measure was kids in schools.

Now going to your women’s refuge example and ignoring DG’s statist comment. The why is that women and children in certain impossible situations should have a way out. The what is refuges. Note that this deals with the worst cases but does not address the underlying cause. This doesn’t mean that it’s a bad thing, just a partial thing. You say that safer women is the outcome. That’s what needs to be measured, for it’s not always clear cut.

In the old policy environment, you would do something because you thought that it was a good thing based on a mixture of example, analysis and politics. Then, over time, and there was time, you would seek to improve. Is this still possible today?

Anonymous said...

Jim, your answer is a muddle. Any serious attempt to employ an evidence base invites at least either a test for allocative efficiency (cost benefit) or one for technical efficiency (cost effectiveness). In the case of the latter, taking a reduction in harms to women as the end point and comparing refuges against, say, a crime and punishment remedy might suggest that not all answers to public policy dilemmas reside in government intervention (with the corollary of additional layers of low productivity public sector workers).

DG

2 tanners said...

Are we in furious agreement? DG, to me your answer is exactly about an evidence base (including the option of doing nothing new), and Jim's answer seems to recapitulate my description of the process.

Perhaps as a clarification: there is a real problem, which I acknowledge, between contribution (what you do) and attribution (how what you do helps achieve your ultimate goal). This IS a weakness of evidence based policy.

Finally, any policy will always need alternatives/other actions. I've never seen a magic bullet solution that solved a large, complex problem.

Jim Belshaw said...

I must be tired tonight, for I'm getting a little lost. So keeping things very simple.

When I talk about evidence based policy or medicine or management, I am talking about a very particular thing, a currently popular model, that I happen to think has major problems. I am concerned with the way things work - or don't.

My concerns about the current approaches can be summarised thus: they lock us into the past; they ignore the importance of experience; they limit action to the measurable; they limit assessment of the results of action to the measurable; they attempt to incorporate too much; and they are applied in a blind, rigid fashion that, in a public policy context, slows decision making, prevents new ideas, precludes learning from experience and grossly adds to overhead costs.

Anonymous said...

Can't agree with your last para, Jim; or that use of evidence (which, if it is to be meaningful, should reduce to a metric) "precludes learning from experience".

DG

Jim Belshaw said...

I know that, DG!