I know I often write about the reasons for success and failure in public policy. Still, this week I have spent a fair bit of time musing over it, sparked in part by my personal response to John Button's death including Bob Quiggin's comment on the post.
Both my post and Bob's comment referred to the work we were doing at a particular period and the new approaches we were trying to develop. That work still informs and unifies my analytical approach whether it be my comments on indigenous policy or child welfare matters. I think that it is still relevant to the issues I discussed in Fred Argy's question - Equality of opportunity: is more policy intervention needed?
This post outlines that approach, focusing on the analytical tools and theoretical concepts involved. I hope that it is not too dry.
As I write I am conscious of the irony of it all. Some of the particular things that I supported have morphed into what I now see as major reasons for current policy failure!
Importance of the Right Intellectual and Political Climate
While the analytical tools and associated concepts are still useful, full effectiveness depends on the right intellectual and political climate.
I say intellectual as well as political, because both public policy and its administration in fact incorporate and are bound by sometimes unseen concepts and assumptions.
To bring about change, you have to identify and challenge those assumptions, or at least work around them. This was what Bob Quiggin was talking about in part when he referred to our attacks on sacred cows.
No matter how good the ideas one may have, they will wither if the climate isn't right.
In my post on John Button I suggested that the core role of Government was to set values. By this I do not mean the narrow ways the term is used today. We are now over-obsessed with "values", to the point that this has become a major impediment to effective action. Rather, I mean that Governments set the tone and direction for action.
The key thing about the election of the Hawke Government is that it provided a window for change, an atmosphere that encouraged new ideas and new approaches.
Complexity and Institutional Rigidities
The problems that Governments deal with are complex, as are the structures that evolve to manage the policy and program responses. This creates institutional and intellectual silos that impede broader action.
I have argued that the early periods of the Hawke Government were the time of greatest change and achievement. Even later achievements have their genesis in this earlier period.
As time passed, problems of complexity and institutional rigidity re-asserted themselves, slowly locking out new ideas and approaches. Things happened, but really new things became harder to achieve.
That is why this first period of the Rudd Government is so important. Its longer term achievements will depend upon the unleashed enthusiasm of this first period, the window before complexity and rigidity re-assert themselves.
My concern with Mr Rudd is simply this. He is a modern measurement man. Yet many of the things that he wants to achieve are long term and require new approaches. These things are harder to measure.
In all this, a key thing to remember is that the actual number of people in the system working on any topic is small, the number with the freedom to think outside the box smaller still.
This may sound strange, given the absolute number of public servants, but it's true. Most public servants are involved in doing; the numbers involved in policy development on specific topics are much fewer. Further, most of those working on policy development are locked into agendas set by others.
I think that the core reason for our success in some of the things that we did achieve lay in the fact that, through happenstance as well as our own efforts, we were placed to some degree outside existing systems, unbound by the institutional constraints faced by others. To use management jargon, we were a skunk works!
The Matrix Approach
I now want to introduce the first tool we used, what I called the matrix approach. This is no more than an analytical device, but a very useful one.
In the middle of 1983 our job was to carve out new policy approaches for the development of Australia's electronics, aerospace and information industries.
Part of the reason for this lay in the perception inside the Department of Industry and Commerce that there was too much focus on declining industries, too little on emerging industries.
This perception was correct at many different levels. As a simple example, the Department had an entire division concerned with the textile sector, but only part of a section concerned with the computer industry.
Our role was to turn this around. For someone like me, this was equivalent to being given the keys to the lolly shop: freedom to do new things.
In starting, we faced a key problem.
The electronics, aerospace and information industries covered a diverse and very large sector in global terms, linked together by a common focus on electronics, computing, communications and systems.
We had little information about these sectors from either a global or local perspective. When I asked for all our past files on the aircraft industry, registry needed a trolley to bring them all up! There was not one policy file in the true sense of the word, not a single detailed economic analysis. All the files dealt with very specific issues.
All this meant that we were starting from scratch. So fundamental research was required. However, there was another problem, the very wide range of policies, policy instruments and regulations affecting the sector at global and local level. We had to identify these and their impacts.
Finally, we needed a way to ensure integration in our policy approaches so that we dealt with like issues in a like way.
We adopted the matrix approach as a device to manage these various difficulties.
In concept, this was very simple.
We started by listing all our sectors along one axis. For example, we had the aerospace or computer hardware industries. Each in turn could be broken into sub-sectors. This provided the focus for our industry research.
Along the other axis we listed every variable that we could think of that might affect each sector. This included policies and policy instruments, as well as variables relating in a general sense to industry structure, conduct and performance. To do this, we had to research a range of economic and policy areas, many well outside our formal institutional ambit.
We then compared the two, looking at common and differential impacts across sectors and sub-sectors. This provided a guide to possible actions.
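For readers who think in code, the comparison step can be sketched as a simple cross-tabulation. This is purely my own illustration: the sector names, variables and impact scores below are invented, not drawn from the original work.

```python
# A toy sketch of the matrix approach: sectors on one axis,
# policy/structural variables on the other, with an illustrative
# impact score for each cell (all names and scores invented).
sectors = ["aerospace", "computer_hardware", "communications"]
variables = ["tariffs", "skills_shortage", "export_controls"]

# impact[sector][variable]: +1 helps, 0 neutral, -1 hinders
impact = {
    "aerospace":         {"tariffs": -1, "skills_shortage": -1, "export_controls": -1},
    "computer_hardware": {"tariffs":  0, "skills_shortage": -1, "export_controls":  0},
    "communications":    {"tariffs":  0, "skills_shortage": -1, "export_controls": -1},
}

def common_impacts(impact, sectors, variables):
    """Variables whose impact is the same across every sector."""
    return [v for v in variables
            if len({impact[s][v] for s in sectors}) == 1]

def differential_impacts(impact, sectors, variables):
    """Variables whose impact differs between sectors."""
    return [v for v in variables
            if len({impact[s][v] for s in sectors}) > 1]

print(common_impacts(impact, sectors, variables))        # ['skills_shortage']
print(differential_impacts(impact, sectors, variables))  # ['tariffs', 'export_controls']
```

The common impacts (here, the skills shortage) point to cross-sector action; the differential ones point to sector-specific measures.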
This sounds simple and in many ways it was. Yet the effects were quite profound, because it gave us a coherent approach that required us to act across policy areas in some ways very remote from our starting point.
Take, as an example, the problem of shortages of people with computer skills. This was a core issue in a number of sectors, requiring action within the education portfolio. The result was the first ever Commonwealth funding for specific tied places in universities.
Today I think that the precedent we established has had many perverse results, a problem that I addressed in Australia's Universities - a personal Mea Culpa. Yet we needed the skilled people.
In all, the matrix approach remains a valuable tool, one that could be applied, for example, in dealing with questions of social inequality.
Horizontal vs Vertical Measures
As our approach evolved, we had a constant problem with the view in the central coordinating agencies and especially Treasury that horizontal measures were always best. This view remains today and is equally a problem.
I am dealing with jargon here, so let me explain.
A horizontal measure simply means a universal policy, policy instrument or program that is applied across the board. A vertical measure is one targeted to the needs of a particular sector or geographic area.
There is a very strongly held view in Government that measures must be horizontal, universally applicable. Yet the reality is, as our work showed, that so-called horizontal measures have very different vertical effects.
The reason for this is simple.
Australia is not a homogenous whole. This means that any universal measure will have differential and sometimes perverse impacts in different areas and different sectors. By contrast, vertical measures can be better targeted so that they achieve the desired result. They can also be integrated in different ways to achieve horizontal impacts.
All this led us to coin the mantra "horizontal is vertical, vertical is horizontal". A decision as to what was better could only be made on the facts.
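The arithmetic behind "horizontal is vertical" is easy to demonstrate. In this toy sketch (sectors, spending figures and the rebate are all invented for illustration), a uniform measure lands very differently on sectors with different cost structures:

```python
# Toy illustration (invented numbers): a "horizontal" measure applied
# uniformly still has differential "vertical" effects, because the
# sectors it falls on are not homogenous.
sectors = {
    # sector: (turnover, r_and_d_spend), in $m, purely illustrative
    "textiles":  (1000, 5),
    "aerospace": (300, 60),
    "software":  (200, 40),
}

REBATE = 0.10  # a uniform ("horizontal") 10% rebate on R&D spending

for name, (turnover, rnd) in sectors.items():
    assistance = REBATE * rnd
    # The effective rate of assistance relative to turnover varies widely.
    print(f"{name}: {100 * assistance / turnover:.2f}% of turnover")
```

Here the R&D-intensive sectors receive forty times the effective assistance of the textile sector from the one universal instrument, which is exactly the kind of differential vertical effect the mantra was pointing at.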
We really struggled to get this across in the face of universal prescriptions.
If you look at my writing on, for example, indigenous policy you will see that I am still fighting this fight.
Indigenous policy is a vertical concept, but its application is horizontal in that it is often applied universally independent of on-ground variations among our indigenous people. I have argued that we must take these variations into account, so I want more differentiated vertical measures.
I have also argued that we need to distinguish between problems that are indigenous problems and those that are subsets of broader problems that must be addressed at a broader level. Here I am arguing for more horizontal measures.
I find all this just as hard to get across today as I did in the eighties.
In trying to get new things through we were forced to develop quite sophisticated underpinnings, concepts to support our work. Some of these came from my own research and work, some from others, some from experience.
In all this, I find it hard to distinguish between the "I" and the "we" in describing them. All staff contributed. How, for example, do I identify Bob Quiggin's role?
He had limited experience when he started with us. However, he is a bright bloke who was prepared to challenge. So his contributions came both from his work on specific projects and his contribution to debate.
To manage this, I will just use "we" and "our" unless there is something very specific I want to recognise.
Central to our thinking were the reasons for policy failure. We believed that most policy failed in whole or part for three key reasons:
- Failure to properly specify the problem to be addressed including, in particular, a focus on symptoms rather than causes.
- Failure to properly articulate the policy response to the problem even if the problem itself was properly identified.
- Failure to properly link the policy response to the problem, assuming both were properly specified.
This may not sound especially profound, but it has significant effects on thinking when turned into questions:
- What is the problem we are addressing?
- What is our policy response?
- How does this relate to the problem we want to address?
In thinking about these questions, we developed the idea of what Doug Stuart called the plus/plus field. This centred on four quadrants, minus/minus, plus/minus, minus/plus and plus/plus.
Policy in the minus/minus field always had negative results. In the plus/minus and minus/plus fields results were always neutral, with pluses offsetting negatives. Only in the plus/plus field could you be sure of positive outcomes.
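The quadrant scheme can be rendered as a tiny classifier. A caveat: the post does not spell out which two assessments define the axes, so the dimensions below (problem specification and policy fit) are my own assumption, chosen to match the three failure reasons above; only the quadrant names come from the original.

```python
# A toy rendering of the plus/plus field. The quadrant names are from
# the post; the two input dimensions are an assumed illustration.
def quadrant(problem_spec: int, policy_fit: int) -> str:
    """Classify a proposal by the sign of two assessments:
    problem_spec: is the problem properly specified? (+1 or -1)
    policy_fit:   does the response properly link to it? (+1 or -1)
    """
    names = {(1, 1): "plus/plus", (1, -1): "plus/minus",
             (-1, 1): "minus/plus", (-1, -1): "minus/minus"}
    return names[(problem_spec, policy_fit)]

print(quadrant(1, 1))    # plus/plus  -> worth backing
print(quadrant(-1, 1))   # minus/plus -> at best neutral
print(quadrant(-1, -1))  # minus/minus -> always negative
```

The operational point is the asymmetry: only a plus/plus classification justified moving without further refinement.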
This may sound very mechanistic, and indeed it was in some respects. Yet it encapsulated our thinking in important ways.
Our starting premise that most policy failed meant that we were very critical of initial ideas, including our own. We did not reject them, we wanted ideas to bloom, but they needed to be tested. This made us reluctant to support immediate solutions without testing them.
However, once we were sure that ideas were in the plus/plus field we wanted to move. So long as proposals were in the plus/plus field we did not have to worry too much about refining because we knew the results would be positive. We could always fix the details later.
All this placed us somewhat at odds with the conventional Canberra, indeed Australian, approach.
To be continued