The public declaration of goals like “no waits for emergency room care” or “all patients will have the option to receive necessary surgery within three months” has brought intense focus to our health care improvement work as a province. It has also brought to light the inadequacy of our health system’s efforts and capacity to measure quality in a manner that helps both clinicians and managers learn and monitor progress toward the goals.
It was standing room only last month at a pre-conference workshop on measurement for improvement, held in conjunction with this year’s Quality Summit. Based on previous years’ experience with measurement-flavored sessions, we had planned for 50 or 60 participants; we were blown away when the room was overflowing with more than 200 people. All of a sudden, measurement is sexy – or at least widely recognized as a serious need!
This month, the Health Council of Canada (HCC) – a federal agency charged with monitoring and reporting on the provinces’ progress in meeting goals set out in the 2004 Federal/Provincial/Territorial Health Accord – released a report called Measuring and reporting on health system performance in Canada: Opportunities for improvement. The report points out the virtual impossibility of their task, given the inadequate and somewhat “chaotic” state of performance measurement in health care across Canada.
There’s actually quite a bit of good measurement and reporting on quality and performance happening – at least for some sectors of health care. However, these efforts are being carried out in a largely uncoordinated manner by a variety of national, provincial, and local organizations – and this activity is too often generating more heat than light.
Now heat (i.e., information for judgment) isn’t necessarily a bad thing: judgment is one dimension of accountability, and there’s certainly room for better accountability in health care. But heat without light (i.e., information that enables improvement) is a recipe for frustration. It’s mighty hard to cook well in the dark – and the risk of burning the meal is much greater!
In a seminal paper written nearly 20 years ago, Brent James of Intermountain Healthcare (Utah) describes the core elements of an outcomes (or performance) measurement/management system. He offers guidance, drawn from Intermountain’s experience, on how to build such a system, and points out the perils of incomplete or poorly constructed ones. In a nutshell, James is saying:
In order to help health system managers, providers and other workers learn, improve, and be effectively accountable, a performance measurement system must:
- Include measurements of each of the three kinds of outcomes (products) of health care: clinical, patient experience, and cost;
- Provide the measurements of “local” care processes needed by point-of-care workers (derived bottom-up) as well as data on the external environment, client needs, and process outcomes (often established top-down) needed by managers; and
- Connect those different kinds of measurements (bottom-up and top-down), along with data about patient subpopulations and specifics about care processes, into an information system that supports learning and actionable accountability (as opposed to pure judgment).
The next few years are going to be exciting times to be working in Saskatchewan health care, as together we learn to apply a disciplined approach to organizing and delivering care so we achieve the strategic aims we’ve identified for ourselves. Our 5-year goals and interim improvement targets provide us with both the need and the opportunity to develop a “heat and light” system for measuring and reporting on health care performance.
PS – I’m on the road this week, visiting industries and health care organizations that are successfully using Lean in their operations. I am paying particularly close attention to how these organizations integrate measurement to meet the needs of leaders/managers, point-of-care workers, and support workers. Stay tuned. I’ll let you know what I find out.