Posted by David Gentry
It is well known that animals and other living things can detect a problem before people become aware of it. These are called sentinel animals or indicator species. A dog barking at noises that people cannot hear is an obvious example. More interesting are certain marine species that are sensitive to low levels of pollution. For more than a century, canaries were used in coal mines to detect toxic gases. They became sick before the miners did, giving the miners a chance to escape.
To be effective, an indicator species must be sensitive to a known danger, and it must allow the presence of that danger to be monitored easily, cheaply and accurately. It is plain to see whether the canary is active and happily chirping or not, and a reliable relationship must exist between the state of the canary and the presence of toxic gases.
But there are limits to what canaries can tell miners. A dead canary won’t say where the gas is coming from, or even which gas it is (canaries are sensitive to several gases injurious to humans, such as methane and carbon monoxide). Once the canary falls off its perch, the miners know they face a hazard, but they must investigate further to find the source, exact nature and severity of the problem, and then decide what to do about it.
Indicator species and performance indicators have much in common.
Performance indicators can tell us whether a problem is present, but they cannot tell us the exact nature of the problem facing a program or organization, or what to do about it. This is easily seen by comparing performance indicators in the annual budget with performance audits. In a performance budget, performance indicators are expected for all major programs every year. Performance audits, by contrast, cannot be conducted annually for all programs; it is simply too much work. Performance indicators are clearly rougher and more superficial measures than performance audits or program evaluations.
Performance indicators tell us that we need to look more closely. Closer examination means collecting additional information: conducting performance audits, evaluations, public hearings and interviews with program managers, and drawing on academic research and journalistic investigations. This information lies outside the scope of a normal budget submission.
Effective performance indicators are difficult to create. Many problems can afflict a public service program. An indicator should be understood well enough that, when it suggests a problem, we know approximately what the problem is and thus where to delve deeper. A dead canary means the presence of toxic gas, not the absence of coal.
A couple of examples illustrate these points. Incidence of disease is a good health program indicator: it measures the presence of a problem. But the indicator cannot tell a health program manager why incidence is at its current level, why it might be changing, or what to do to reduce it. These questions require additional investigation. And if a different health strategy is warranted, it will not be known whether more or less funding is required until the new strategy is decided. Similarly, a ministry of finance may use GDP growth as an indicator. This indicator is so all-encompassing that it does not help us understand the effect of particular actions of the ministry or focus attention on what the ministry might do differently in the future.
A large number of performance indicators for a single program is unnecessary. Indicators collected annually are never sufficient to fully diagnose and measure a problem, and they can never substitute for a performance audit or evaluation. Many practitioners have advocated a fairly strict limit on the number of indicators. This makes sense if we view indicators as triggers telling us when to investigate in more detail. The merit of indicators is their simplicity.
After a budget analyst has understood the problem, the next step is to decide what to do. Indicators may be associated with the underlying problem, but knowing what to do requires an understanding of its cause. Poor management is often the cause of poor program performance, and one of the biggest challenges facing a budget analyst is determining what to do when line ministry management must be improved. The principal power of a budget office is to adjust funding, but it is by no means certain that poor management should be punished with less funding or good management rewarded with more. The correlation between performance indicators and funding decisions is often weak.
Anyone who has worked as an analyst in a budget office knows that a few problem organizations or programs often consume a disproportionate share of an analyst’s time. This happens because even well-crafted performance indicators are not sufficient to diagnose a problem and determine what should be done about it. The analyst must dive deeper into the program or organization to figure out what is going on, using information not included in the normal budget submission.
There is much to be gained from well-designed performance indicators collected annually, but we must recognize their limitations and how best to use them. In their day, canaries in a coal mine were invaluable, yet they communicated the presence of a problem by silently expiring. That was a blunt instrument indeed.
Note: The posts on the IMF PFM Blog should not be reported as representing the views of the IMF. The views expressed are those of the authors and do not necessarily represent those of the IMF or IMF policy.