Thursday, February 05, 2009

Quantifying and Enforcing Irrelevance: The Proposed “Performance Measures for Historic Preservation.”

A report just issued by the National Academy of Public Administration (NAPA) – Towards More Meaningful Performance Measures for Historic Preservation – is worth examination by anyone concerned about the effectiveness, utility, and burdensomeness of the national historic preservation program.

Not that it’s a good report. It’s a dreadful report, and its preparation was a classic waste of taxpayer dollars on bureaucratic navel-gazing. But it’s worth looking at because it neatly encapsulates (and seeks to perpetuate) much of what’s made the national program as marginally relevant and maximally inefficient as it is. And because it places the program’s inanities in a larger public administration context, which helps account for them.

The idea of establishing and applying hard and fast quantitative performance measures has been popular in public administration for some time. Conceived in commercial and industrial contexts where they make a certain amount of sense – how many widgets you produce or sell is a clearly relevant measure of a factory worker’s or salesperson’s performance – such measures began to be applied with a vengeance to the performance of federal employees during the time I was in government back in the 1980s. Of course, they didn’t work, because most federal agencies are designed to do things other than produce widgets. So they kept being reworked and rethought, but seldom if ever made more useful. We senior management types found ourselves required almost annually to reformulate the standards we applied in judging how our staffs were doing; it became something of a joke, but was taken with great seriousness by the Office of Management and Budget, and therefore by all the agencies. Soon the plague spread to the measurement of performance by government grant holders such as State Historic Preservation Officers (SHPOs), and beyond government and business to many other sectors of society. Hence programs like “No Child Left Behind,” transforming our schools into factories for the production of test-takers.

There’s no question that government, like business (and school districts), needs to find ways of judging performance, both by programs and by employees. But the application of simplistic quantitative measures is often a waste of time, because what government does (like what a truly good school does) is not easily reduced to countable widgets. Moreover, a focus on quantitative measures can and does distract management from major non-quantitative issues that may confront and beset an agency. It is also a fact – most succinctly formulated in quantum physics, but very evident in public administration – that the act of measuring the performance of a variable affects the way that variable performs. This principle seems to be routinely ignored, or given very short shrift, by performance measure mavens; its doleful effects are very apparent in the way the national historic preservation program works (or fails to).

OK, let’s get specific. First, the NAPA report gives us nothing new; it largely regurgitates standards that have been used for decades by the National Park Service (NPS) in judging SHPOs, and by some land management agencies (and perhaps others) in purporting to measure their own performance. The report was not in fact prepared by NAPA; NAPA simply staffed a committee made up of the usual suspects – three NPS employees, two staffers from the Advisory Council on Historic Preservation (ACHP), three SHPOs, the Federal Highway Administration’s federal preservation officer, one local government representative, three tribal representatives, and one person from what appears to be a consulting firm. Most of the names are very familiar; they’ve been around in their programs for decades. The tribal representatives seem to have focused, understandably and as usual, on making sure the tribes would be minimally affected by the standards; there are a couple of pages of caveats about how different the tribes are. So in essence, the standards were cooked up by NPS, the ACHP, one federal agency, one local government, and a consultant. No wonder they merely tweak existing measures.

So what are the measures? How should the performance of historic preservation programs be measured? Let’s look at those assigned “highest priority” by the report.

The first performance measure is “number of properties inventoried and evaluated as having actual or potential historic value.” This one has been imposed on SHPOs for a long time, and is one of the reasons SHPOs think they have to review every survey report, and insist on substantial documentation of every property, while their staffs chafe at having to perform a lot of meaningless paperwork. Of course, it also encourages hair-splitting; you don’t want to recognize the significance of an expansive historic landscape if counting all the individual “sites” in it will get you more performance points. The report leaves the reader to imagine how this measure is thought to reflect the quality of a program’s performance.

The next one is “number of properties given historic designation.” We are not told why designating something improves the thing designated or anything else; it is merely assumed that designation is a Good Thing. This measure – also long applied in one form or another to SHPOs – largely accounts for SHPO insistence that agencies and others nominate things to the National Register, regardless of whether doing so serves any practical purpose.

Next comes – get ready – “number of properties protected.” We are given no definition of “protected,” and again it is assumed without justification that “protection” is a good thing. And apparently protecting a place that nobody really gives a damn about is just as good as protecting one a community, a tribe, or a neighborhood truly treasures – as long as it meets the National Register’s criteria. Of course, none of the parties whose performance might be measured – SHPOs, agencies, tribes – has any realistic way of knowing how many properties are “protected” over a given time period, unless they find a way to impose incredibly burdensome reporting requirements. This probably won’t happen; they’ll make something up.

The next three reflect the predilections of the ACHP, though they’ve been around in SHPO standards for decades: “number of … finding(s) of no adverse effect,” “number of … finding(s) of adverse effect,” and “number of Section 106 programmatic agreements.” For some reason memoranda of agreement are apparently not to be counted. Nor are cases in which effects are avoided altogether, despite the fact that such cases presumably “protect” properties more effectively than any of the case-types that are to be counted. The overall effect of these measures is to promote a rigid, check-box approach to Section 106 review, and to canonize the notion that programmatic agreements are ipso facto good things.

The next one is “private capital leveraged by federal historic preservation tax credits,” which seems relatively sensible to me where such tax credits are involved – though it obviously imposes a record-keeping burden on whoever keeps the records (presumably SHPOs and NPS).

The last two high-priority measures are “number of historic properties for which information is available on the internet” and “number of visitors to historic preservation websites.” The first measure, where it can be applied, would seem to favor entities with lots of historic properties and a predilection for recording them. The second seems almost sensible to me, but I should think it would present measurement problems – just what is an “historic preservation website”? The report “solves” this problem in a simplistic way by referring in detail only to SHPO, THPO, and NPS websites. This, of course, makes the measure wholly irrelevant to land management agencies, local governments, and others who maintain such sites.

There are other measures assigned lower priority by the report, but they’re pretty much of a piece with those just described.

It’s easy to see that these measures are pointless, that it was a waste of time to produce them. But they’re worse than useless because, to the extent they’re actually applied, they require program participants to spend their time counting widgets rather than actually performing and improving program functions. And as noted, some of them encourage nit-picking, hair-splitting, over-documentation, and inflexibility. They create systems in which participants spend their time – and insist that others spend their time – doing things that relate to the measures, as opposed to things that accomplish the purposes of the National Historic Preservation Act. Or any other public purpose.

“OK,” I hear the authors of the report fulminating – in the unlikely event they trouble themselves to read this – “so enough of your kvetching, King, where’s your alternative?” Indeed – given that there is a need to promote good performance in historic preservation programs, how can we “measure” it?

I put “measure” in quotes because I don’t think quantitative measures work – unless you truly are producing or disposing of widgets, which is mostly not what government does. But whether we can “measure” performance or not, I think we can judge performance.

Taking SHPOs as an example, suppose the NPS grant program procedures for SHPOs required each SHPO to have an annual public stakeholder conference in which all those who had had dealings with the SHPO over the preceding year got together with the SHPO (or in some contexts perhaps without him or her) and performed a critique. Make sure the public is aware of the conference and able to participate, so people who may have wanted to do something with the SHPO but been stiffed, or felt they’d been, could also take part. Maybe have a knowledgeable outside facilitator, a set of basic questions to explore, and program areas to consider. How satisfied are you with the way the SHPO is representing your interests in historic properties, how sensitive the SHPO is to your economic or other needs, how creative the SHPO is in finding solutions to problems? How do you like the way the SHPO is handling Section 106 review, the National Register, tax act certifications, provision of public information, treatment of local governments, tribal concerns? Do a report, and if we must have quantification, assign a score or set of scores – on a scale of one to ten, this SHPO is an eight. File the report with NPS, which can consider it in deciding how much money each SHPO will get next year.

There are doubtless other ways to do it – probably lots of ways, and some may make more sense than what I’ve just suggested. We ought to consider them – or someone should. What we should not do is remain tied to something as thoroughly idiotic as rating the performance of inherently non-quantitative programs based on how many widgets they’ve produced.

1 comment:

RCast said...

Tom~ Again, great comments. I was part of a conference call to discuss THPOs’ concerns. It seemed pretty much a waste of time. Each year it seems the THPOs have something else added to their duties that was not in their original Cooperative Agreements. I do know that part of the Federal Government’s job is to push paper and in most cases provide a lot of overkill when it comes to reporting. In the case of this report, your one suggestion, put plainly into a single paragraph, has more insight than the entire report and would probably save taxpayers a lot of money.