7/13/1997
The Washington Post
By David Osborne and Peter Plastrik
Most citizens care about the performance of their public institutions. Parents worry about the quality of their children’s schools. City dwellers anxiously scan the latest crime statistics. Drivers pray for better roads, transit riders for dependable buses and subways, air travelers for effective air traffic control.
But most elected officials pay little attention to performance. If the crime rate spikes, or student test scores plummet, or the snow doesn’t get plowed, politicians become intensely interested in performance — for a few weeks. But for the rest of the year, campaigns and debates center on policy issues and public images, not on how well public institutions perform. Politicians, responding to organized constituencies, argue over inputs — how much to spend on each program — but ignore the outcomes of that spending.
This disparity between what the public wants and what politicians deliver is fueling an explosion of efforts to compare the performance of different governments, schools and regions. Citizens’ groups, magazines and foundations are beginning to take performance information directly to the public. Once citizens can compare the performance of their institutions with others’, advocates argue, they can force their elected leaders to make improvement a priority.
Last month, for example, the Citizens’ League of Greater Cleveland published “Rating the Region,” which compared that metropolitan area with 25 others on a variety of measures, from the strength of its business climate to the quality of its education system and government.
The first time the league did so, in a 1994 report that focused on economic competitiveness and quality of life, it found greater Cleveland trailing all 13 regions it measured except Detroit. The resulting shock waves led to a major initiative on work force development, spearheaded by the Greater Cleveland Growth Association. Next time, the Citizens’ League plans to compare the performance of all counties within the region.
Citizens’ organizations in Jacksonville, Pittsburgh, St. Louis, Seattle and Philadelphia have also published regional comparisons. In 1989 the state of Oregon created a public-private council called the Oregon Progress Board, which has worked with thousands of citizens to come up with long-term goals for the state, called the “Oregon Benchmarks.” (For example, Oregon wants to have the best-educated work force in the nation and cut teenage pregnancy by more than half by the year 2010.)
Every two years, the Progress Board publishes a report measuring how the state is doing. Minnesota and Florida have followed suit, and close to a dozen counties in Oregon have also set benchmarks and begun measuring.
City governments are even getting into the act. Working with the International City and County Management Association and the Urban League, managers in 44 large cities and counties are sharing performance data on fire, police, neighborhood and internal business services. Their goal is not to inform citizens, however; it is to help managers and elected officials identify their city’s or county’s strengths and weaknesses, and learn from those doing better.
A few cities do share performance information with citizens. In Portland, Ore., the city auditor publishes an annual report detailing performance in six key service areas: police, fire, parks, water, sewer and streets. The report shows year-to-year trends on a variety of measures, including customer surveys, and compares Portland with half a dozen other cities of similar size. In 1995, the auditor distributed thousands of tabloid summaries through libraries, grocery stores and other public places.
Nationwide Report Cards: President Clinton is pushing hard for national education standards and exams, which would show citizens how schools and school districts stack up against their competition. And in January, Education Week published its first report card grading states on their commitments to public education.
Several other national efforts are on the drawing board. The Pew Charitable Trusts, which helped fund Education Week’s work, is also financing a partnership between Syracuse University and Governing magazine to rate states, large cities and counties. This effort will focus on internal management issues such as the use of technology and the quality of budget, personnel and other management systems, rather than on results citizens care about, such as safer streets.
The National Academy of Public Administration, chartered by Congress to improve governance, would like to fill the gap. Its Alliance for Redesigning Government hopes to publish comparisons of how well cities perform in public safety, education, public transit, support for neighborhoods, and infrastructure.
Will It Make a Difference?
Programs like Cleveland’s “Rating the Region” reports and Oregon’s Benchmarks have begun to change how public dollars are spent. But can comparative benchmarking actually force our public institutions to produce more bang for each buck? The answer may lie in Britain, where Prime Minister John Major’s 1991 initiative, the Citizen’s Charter, launched comparative performance ratings for schools, hospitals and local government services.
The ratings are published by the national Audit Commission, which gathers data on roughly 50 indicators for local services (and 30 more for police forces). Local councils are required to publicize their own performance on all of these indicators, and each year the Audit Commission publishes comparisons on 10 to 20 of the most important, such as total spending per capita, the percentage of taxes due that are actually collected, and the percentage of household waste recycled.
The commission uses bar graphs to show how each council ranks against similar councils (rural vs. rural, metropolitan vs. metropolitan, London boroughs vs. London boroughs). Bar graphs are revealing, because they highlight the spread between councils: If there is a wide gulf between best and worst, it stands out.
The media devour the annual reports. A few headlines give the flavor of the coverage: “Bottom of the League”; “Top at Spending but Poor Results”; “Councils Shamed Into Doing Better.”
The ratings have had the biggest impact on the poorest performers. “On almost every performance indicator that we have, those that were performing worst three years ago have improved in the second and third year,” reports Paul Vevers, associate director of the Audit Commission. “That doesn’t surprise us, because being at the bottom does trigger such massive attention that either people improve or heads roll.” Average performers have also improved on most measures, though more gradually.
Clearly, the public spotlight on performance forces local officials to concentrate less on political posturing and more on producing what citizens want. It gives citizens and advocacy groups a way to hold their elected leaders accountable for performance.
The British experience — combined with the rash of new efforts here — suggests we may be on the edge of a new era of public accountability. By the turn of the century you may be able to pick up a publication that tells you how well your school, police department and transit agency perform against the competition, just as you might pick up Consumer Reports to see how well a Ford Taurus compares with a Toyota Camry.
There’s an old adage in management: “What gets measured gets done.” In the 21st century, perhaps we can add a corollary: “What gets compared gets improved.”