More quality measures versus measuring what matters: a call for balance and parsimony
In the last half century, the USA has gone from defining quality, to measuring quality, to requiring providers to publicly report quality measures, and most recently, to beginning to hold providers accountable for those results. External groups requiring measures now include public and private payers, regulators, accreditors and others that certify performance levels for consumers, patients and payers. Our investment in required quality measures has served us well. First, it has stimulated the development of new quality measurement and improvement infrastructure within many health systems that was previously absent or underdeveloped.1 Second, it has helped to make good on the call for transparent results reporting that was a central feature of the Institute of Medicine Chasm Report.2 Consumers, providers, governing boards, employers, payers and accreditors can view comparative results and take action to enhance accountability.3 Third, publicly reported measures have been associated with improved levels of quality: the hospital core measures programme shows that evidence-based care for hospitalised patients has increased rapidly across the country;4 and the QUEST programme (which includes over 150 hospitals voluntarily seeking to improve performance on a small, standard set of measures) has demonstrated rapid improvement in death rates, evidence-based care and inpatient costs per discharge.5 Fourth, publicly reported measures have created new opportunities for researchers to conduct comparative-effectiveness studies and for medical educators to advance practice-based learning and improvement.
The number of quality measures that healthcare providers are required to report has skyrocketed over the past decade, and that trend is poised to continue. For example, the number of National Quality Forum approved measures has gone from fewer than 200 in 2005 to over 700 in 2011 (personal communication, Helen Burstin, MD, Senior Vice President for Performance Measures, National Quality Forum, 26 February 2011). In just the past year, the US Centers for Medicare and Medicaid Services recommended 65 quality performance measures to hold care organisations accountable and to base payments on these performance metrics,6 and new measures are being introduced to ensure that providers are meaningfully using electronic health records.7
Unfortunately, the accelerated deployment of quality measures has had some unintended consequences. First, the need to invest in capturing required metrics and to reach the top echelon of performance on them has caused some providers to overinvest measurement resources and improvement dollars in these high-profile, highly visible measures. This has led organisations to deplete their quality measurement budgets and neglect other important topics. To provide just one example, the Massachusetts General Hospital and Massachusetts General Physicians Organisation is required to report over 120 quality measures to regulators and payers, necessitating an infrastructure that consumes approximately 1% of net patient service revenue.8 Consequently, this organisation has little left in its measurement budget to pursue more important topics, such as patient-centred health outcomes and healthcare-associated harm. Second, different providers have different areas that are most in need of improvement. The most productive quality improvement for a specific organisation depends on where it is in its quality journey (eg, going from 10⁻¹ to 10⁻² harm events requires different approaches than going from 10⁻³ to 10⁻⁴ harm events).9 It may be better policy to have a small required set of quality metrics and large optional sets, so that organisations can target their improvements on the areas where they are most needed. Third, some providers appear to have made sham improvements (eg, distributing a smoking cessation leaflet to all heart failure patients at midnight to ensure 100% compliance with a particular core metric) that meet the measurement requirement but not the patient's need for a meaningful intervention. Fourth, many providers have reached high performance levels not by improving the design of efficient, high-quality care but by hiring a heart failure or pneumonia nurse to plug the process holes before patient discharge, thereby scoring high but adding costs without improving the reliability of the underlying process. This 'whack-a-mole' approach to quality improvement is unsustainable and will produce only marginal benefit. Fifth, there are statistical considerations. When the underlying measure is imperfect, marginal improvements from 96% to 99% may reflect measurement error and denominator management more than nearly perfect reliability of performance. Even with a more error-proofed measure, it is very difficult to rank providers (physicians or hospitals) accurately because of the problems of collecting accurate data across multiple sites, the challenges of attribution and the difficulties of forming comparable risk cohorts. Finally, a substantial number of studies have shown that there is often only a weak association between high scores on process quality measures for given conditions (eg, acute myocardial infarction, heart failure) and the health outcomes that matter most to patients and payers.10
If the USA continues with the proliferation of required quality measures, we will go from hundreds of required metrics to thousands. There are thousands of diseases, injuries, clinical states and interventional procedures with a large and growing list of evidence-based care processes,11 and every special interest group could lobby for 'their' disease, injury or procedure to enjoy the 'legitimacy' and claim on resources associated with designation as a required quality metric. At the same time, providers of care have a genuine need to develop internal mechanisms to continuously measure and improve the processes of care delivery (ie, what they do) and the outcomes and costs of the care that they provide.
We believe that if current trends in the growth of required quality measures continue, providers will need to invest so much money in reporting externally imposed measures that scant funds will be left to support the provider-specific internal measurement systems needed to monitor and improve quality, to capture longitudinal measures and to cascade them to major clinical programmes and front-line clinical microsystems. Unchecked growth in mandated quality measures will lead to commensurate growth in the share of quality metric budgets devoted to 'required metrics', leaving few 'discretionary' dollars for internal quality measurement systems or for the results that matter most to end users. In short, the drive to increase the scope and depth of required measures to judge quality could have the unanticipated consequence of decreasing providers' ability to manage and improve quality and to meet our need for better quality, better outcomes and lower per capita costs.12
To summarise, the growth in the number of publicly reported and externally mandated quality measures has generated both positive and negative effects. We ask: has the time come to provide guidance and principles for the future development of quality metrics that providers are required to produce? We think the answer is 'yes'. Even if we can shift some of the measurement burden to patients through greater reliance on patient-reported measures, a development we support, the exponential growth of other measures will overtake our limited resources.13 Thus, resources that might be devoted to 'end-user' value will be diverted to cover a plethora of quality-performance metrics that may have limited impact on the things that patients and payers want and need (ie, better outcomes, better care and lower per capita costs).
A proposition
We offer this proposition. The investment in required metrics is warranted to the extent that it meets external stakeholders' authentic need for accurate data to hold providers accountable for the overall quality and value of the care that they produce (ie, the degree to which care is guided by patients' informed decisions and is safe, timely, effective, efficient, equitable and patient centred, and thereby generates the best outcomes at the lowest cost).2 14 To achieve this, we should adopt a new policy on required quality and value metrics that embodies balance and parsimony and is guided first and foremost by the needs of end users (ie, patients, families, consumers, employers and payers) for data on providers' performance. This policy should be balanced, to meet both the need of end users to judge quality and cost performance and the need of providers to continuously improve the quality, outcomes and costs of their services; and parsimonious, measuring quality, outcomes and costs with appropriate metrics selected on the basis of end-user needs.
We make the following recommendations:
- Approximately 30% of the quality measurement dollars spent by providers should be invested in metrics required by external stakeholders, and 70% should be invested on the basis of the provider's assessment of what most needs improving 'now' (currently the balance is more on the order of 90% and 10%, respectively). The provider's assessment will vary across organisations and should reflect where they are on their improvement journey.
- The set of quality and performance measures should be balanced to address end-user value: better outcomes, better care and lower per capita costs.
How we can achieve balance and parsimony
A few simple rules can guide the selection and use of measures required for transparent quality reporting as well as for value-based payments. Examples of such rules are as follows:
- Measure process quality: select a balanced and small set of measures to assess the quality and safety of the process of delivering care, based on a small set of critical evidence-based practices that have a strong relationship with health outcomes.
- Measure value: select a balanced and small set of measures of health outcomes, patient experience and per capita costs for individual patients and clinical populations, to reflect the triple aim and to anticipate value-based payment mechanisms for accountable care organisations, bundled payments and patient-centred medical homes.
- Design data systems to support internal quality needs and spin off external quality measures: use a four-step process to support internal quality measurement and external reporting for selection and accountability: (1) build quality measures into workflows on the basis of key process analysis, to have the greatest impact on the most patients; (2) for a high-priority key process, explicitly design a data system (intermediate processes, final outcomes, patient experience and cost results) around the care delivery process; (3) 'roll up' accountability measures at the clinic, hospital, region, system, state and national levels; and (4) provide transparent reporting on quality and value to promote learning and healthy competition on key results and to ensure public accountability.16 Current systems are far from these capabilities, but they are consistent with the long-term goals of the Office of the National Coordinator's Healthcare Information Technology meaningful use programme.
- Use return on measurement investment: select measures taking into account the cost of data collection and reporting relative to the measure's impact on quality, outcomes and costs.
- Establish an ongoing process for refining and selecting core measures: build stakeholder agreement on vital, standard measures of performance that are used by payers, regulators, consumers and accreditors to promote public reporting and value-based purchasing schemes across different payers and to harmonise regulation, accreditation and certification.
Table 1 provides an example of a parsimonious yet powerful and balanced set of quality and cost metrics that the authors support and that our organisations would welcome testing. Our recommendations for required measures are based on multiple sources, including the Institute for Healthcare Improvement's Whole System Measures, national quality and safety strategies, and recommendations from the consumers and employers that our health systems serve.
Table 1: Illustrative, parsimonious set of quality, outcome and cost measures*
In conclusion, as representatives of organisations that are working to improve quality and value (and that are blessed with far more resources than most to do so), we are calling for a new, more practical quality measurement policy. We cannot wait for the ideal measure set, yet we need to move towards adopting an ever-improving set of metrics that can strengthen healthcare and improve results. It is time for a new direction; the clock is ticking. We must stop the avalanche of ever-increasing mandated quality metrics so that we can get to work using measures that really matter and thereby focus on what we need to do for our patients, our communities and our country to provide better health outcomes, better care and lower per capita costs.
Acknowledgments
The authors wish to acknowledge the thoughtful review of this manuscript by Drs Elliot Fisher, Carolyn Clancy and Jennifer Daley.
References
1. Bohmer RM, Bloom JD, Mort E, et al. Restructuring within an academic health center to support quality and safety: the development of the center for quality and safety at the Massachusetts General Hospital. Acad Med 2009;84:1663–71.
2. Institute of Medicine. Crossing the Quality Chasm: A New Health System for the 21st Century. Washington, DC: National Academy Press, 2001.
3. Puget Sound Health Alliance Community Checkup. http://www.wacommunitycheckup.org/?p=home (accessed 20 Jul 2011).
4. Chassin MR, Loeb JM, Schmaltz SP, et al. Accountability measures—using measurement to promote quality improvement. N Engl J Med 2010;363:683–8.
5. Premier Healthcare Alliance. QUEST®: High Performing Hospitals Year 2 Results. http://www.premierinc.com/quest/year2/quest-year-2-results.pdf (accessed 1 Sep 2011).
6. Centers for Medicare and Medicaid Services. Summary of Proposed Rule Provisions for Accountable Care Organizations Under the Medicare Shared Savings Program: Fact Sheet 2011. http://www.cms.gov/MLNProducts/downloads/ACO_NPRM_Summary_Factsheet_ICN906224.pdf (accessed 20 Jul 2011).
7. Centers for Medicare and Medicaid Services. CMS EHR Meaningful Use Overview. http://www.cms.gov/EHRIncentivePrograms/30_Meaningful_Use.asp (accessed 1 Sep 2011).
8. MGH Quality Measures Overview. http://www-958.ibm.com/software/data/cognos/manyeyes/visualizations/mgh-quality-measures (accessed 1 Sep 2011).
9. Pryor D, Hendrich A, Henkel RJ, et al. The quality 'journey' at Ascension Health: how we've prevented at least 1,500 avoidable deaths a year—and aim to do even better. Health Aff (Millwood) 2011;30:604–11.
10. Krumholz HM, Normand SL, Spertus JA, et al. Measuring performance for treating heart attacks and heart failure: the case for outcomes measurement. Health Aff (Millwood) 2007;26:75–85.
11. The Cochrane Collaborative. http://www.cochrane.org/about-us (accessed 1 Sep 2011).
12. Department of Health and Human Services. National Strategy for Quality Improvement in Health Care (National Quality Strategy). http://www.HealthCare.gov/center/reports (accessed 31 Jul 2012).
13. Cella D, Yount S, Rothrock N, et al; on behalf of the PROMIS cooperative group. The Patient Reported Outcomes Measurement Information System (PROMIS): progress of an NIH Roadmap Cooperative Group during its first two years. Med Care 2007;45(5 Suppl 1):S3–11. Current metrics: http://www.nihpromis.org/ (accessed 1 Sep 2011).
14. Porter ME, Teisberg EO. Redefining Health Care: Creating Value-Based Competition on Results. Boston, MA: Harvard Business Press, 2006.
15. Martin LA, Nelson EC, Lloyd RC, et al. Whole System Measures. IHI Innovation Series white paper. Cambridge, MA: Institute for Healthcare Improvement; 2007. http://www.ihi.org/
16. James BC, Savitz LA. How Intermountain trimmed health care costs through robust quality improvement efforts. Health Aff (Millwood) 2011;30:1185–91.