Thanks to the thought leadership of people like W. Edwards Deming and Peter Drucker, the quality movement in healthcare is marked by a strong emphasis on quality measurement. It’s by measuring improvements in clinical outcomes and processes that providers can confidently determine whether they’re moving in the right direction. As payers shift to becoming purchasers of healthcare, these quality measures will become the core indicators used to decide reimbursement in a “value-based” model.
As has been noted in this blog series, the National Quality Forum (NQF) has played a central role in developing these metrics. An important facet of the meaningful use framework of the Medicare and Medicaid EHR incentive programs is the reporting of quality measures from certified EHR technology.
For EHRs to be used for quality reporting, measures must be “retooled,” as discussed in the latest blog post on “Retooling.” These eMeasures allow providers to shift from manual chart abstraction or reliance on billing data to leveraging the clinical data captured in the EHR in the course of delivering patient care.
To ensure that the retooling process preserved the intent of the original measures, the Department of Health and Human Services contracted with the NQF to convene a panel of subject matter experts to review 113 retooled eMeasures. In addition to the panel review, NQF posted the 113 eMeasures for public comment.
The charge to the committee consisted of the following tasks:
- Review the original measure specification and assess each corresponding eMeasure to determine if there were any substantive changes as a result of retooling.
- Recommend, where possible, resolutions to the issues identified by the panel and those surfaced during the public comment process, for consideration by the measure stewards.
Findings by panel members were communicated to NQF as comments, which fell into five categories:
1. Codelists: 157 comments (30%)
2. Logic: 181 (33%)
3. Meaning: 28 (5%)
4. Quality data model (QDM) elements: 116 (21%)
5. Readability: 61 (11%)
Codelists have to do with the taxonomies used by the eMeasure. For example, were the correct SNOMED CT codes identified to represent a diagnosis?
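To make the codelist concept concrete, here is a minimal sketch of how an eMeasure might test a diagnosis against a value set. The value set name and the codes in it are illustrative placeholders, not an actual SNOMED CT value set from any retooled measure:

```python
# Hypothetical sketch: checking a coded diagnosis against a codelist
# (value set). The set name and codes below are placeholders only.

DIABETES_VALUE_SET = {"73211009", "44054006"}  # illustrative codes

def has_qualifying_diagnosis(problem_list, value_set):
    """True if any coded diagnosis on the problem list is in the value set."""
    return any(code in value_set for code in problem_list)

# A patient whose problem list contains a listed code matches the measure;
# one without any listed code does not.
assert has_qualifying_diagnosis(["44054006", "38341003"], DIABETES_VALUE_SET)
assert not has_qualifying_diagnosis(["38341003"], DIABETES_VALUE_SET)
```

If the wrong codes are placed in the value set, patients who should be counted are silently missed, which is why codelist findings dominated the panel's comments.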
Findings in this area made up a significant proportion of the panel comments. The panel also identified several issues with the logical structure of the eMeasures (i.e., how AND, OR, and NOT statements were constructed and combined).
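The kind of AND/OR/NOT logic the panel reviewed can be sketched as follows. The criteria names, thresholds, and patient data here are invented for illustration and do not come from any of the 113 eMeasures:

```python
# Hypothetical sketch: how an eMeasure's population logic combines
# AND / OR / NOT criteria over patient records. All names and
# thresholds below are illustrative, not from an actual measure.

def in_denominator(patient):
    """Qualifies if diagnosed with diabetes AND aged 18-75."""
    return patient["has_diabetes"] and 18 <= patient["age"] <= 75

def in_numerator(patient):
    """Numerator: an HbA1c result exists AND it is NOT above 9%."""
    return patient["hba1c"] is not None and not patient["hba1c"] > 9.0

patients = [
    {"has_diabetes": True, "age": 54, "hba1c": 7.2},   # numerator
    {"has_diabetes": True, "age": 61, "hba1c": 9.8},   # denominator only
    {"has_diabetes": False, "age": 40, "hba1c": None}, # excluded
]

denominator = [p for p in patients if in_denominator(p)]
numerator = [p for p in denominator if in_numerator(p)]
rate = len(numerator) / len(denominator)  # 1 of 2 = 0.5
```

A misplaced NOT or a mis-nested AND/OR in logic like this changes who is counted, which is why logic findings accounted for a third of the panel's comments.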
Panel members also identified issues with how elements of the quality data model were used. For example, is performance of the measure established when a diagnostic study is ordered, performed, or resulted? By working with NQF and the measure stewards, these findings were relatively straightforward to resolve.
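The ordered/performed/resulted distinction can be sketched in a few lines. The study name and the idea of a single status field are simplifications for illustration, not the QDM's actual representation:

```python
# Hypothetical sketch: why the state of a QDM data element matters.
# The same diagnostic study can satisfy one measure but not another,
# depending on which state the measure requires. Illustrative only.

from dataclasses import dataclass

@dataclass
class DiagnosticStudy:
    name: str
    status: str  # "ordered", "performed", or "resulted"

def satisfies_measure(study, required_status):
    """One measure may credit the order alone; another may require a result."""
    progression = ["ordered", "performed", "resulted"]
    return progression.index(study.status) >= progression.index(required_status)

echo = DiagnosticStudy("echocardiogram", "performed")
# Credited by a measure that counts the study once performed...
assert satisfies_measure(echo, "performed")
# ...but not by one that requires a finalized result.
assert not satisfies_measure(echo, "resulted")
```

Ambiguity about which state a retooled measure intended was exactly the kind of finding the panel could resolve with NQF and the measure stewards.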
Some of the more complex issues had to do with the readability of the eMeasure and with limitations of the quality data model itself. Certain measures, designed based on a paper medical record or claims data, proved problematic when retooled as eMeasures. This suggests that some performance metrics are simply not amenable to retooling and must be authored as eMeasures from the ground up.
Another benefit of the eMeasure format review was the identification of opportunities for enhancing the QDM. In only a very small number of cases were the eMeasures deemed to have deviated substantively from the original measure intent.
NQF developed a preliminary report summarizing the eMeasure feedback and delivered it to HHS in July 2011. A final report documenting public and panel comments, and their resolution reflected in the updated eMeasures, will be delivered to HHS in December 2011.
Ferdinand Velasco, MD, Vice President/CMIO, Texas Health Resources, is Chair, HIMSS Quality, Cost, Safety Committee. Contact him at Ferdinandvelasco@texashealth.org.