I have been working with Clinical Quality Measures (CQMs) for several years, leading two open source healthcare projects for the
Office of the National Coordinator for Health Information Technology (ONC), first
popHealth and more recently
Cypress. Both of these projects deal with the intimate details of CQMs. While working in this space, I have noticed a need for better techniques for the visualization and presentation of
Clinical Quality Measure results.
CQMs are reports that measure the quality of healthcare providers against patient-level data. They are designed to measure the performance and quality of care that a healthcare provider delivers to a population of patients. Many factors are included in CQMs, such as health outcomes, the processes and systems in place at a facility, patient perceptions, and treatments provided. The idea behind introducing CQMs into the healthcare provider workflow is that, by continuously measuring providers against these metrics, the US healthcare system can be gradually shaped toward higher quality and improved efficiency.
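For context, a CQM result is typically expressed as a performance rate: the fraction of a provider's relevant patient population that received the recommended care. Below is a minimal, hypothetical sketch of that arithmetic in Python; the patient attributes and criteria are invented for illustration and are far simpler than a real measure specification.

```python
# Hypothetical sketch of a single CQM calculation. Real measure specifications
# define precise value sets, timing windows, and exclusions; the attributes
# and criteria below are purely illustrative.

patients = [
    {"age": 52, "diabetic": True,  "hba1c_test_in_period": True},
    {"age": 67, "diabetic": True,  "hba1c_test_in_period": False},
    {"age": 45, "diabetic": False, "hba1c_test_in_period": False},
    {"age": 71, "diabetic": True,  "hba1c_test_in_period": True},
]

# Denominator: patients to whom the measure applies (e.g., adult diabetics).
denominator = [p for p in patients if p["diabetic"] and p["age"] >= 18]

# Numerator: denominator patients who actually received the recommended care.
numerator = [p for p in denominator if p["hba1c_test_in_period"]]

performance_rate = 100.0 * len(numerator) / len(denominator)
print(f"CQM result: {performance_rate:.1f}%")  # 66.7% for this toy population
```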
CQMs are a required component of
Meaningful Use requirements for the Medicare and Medicaid Electronic Health Record (EHR) Incentive Programs. This program is a significant part of the HITECH Act. The Meaningful Use program can supply healthcare providers with up to
$44,000 in incentives if they demonstrate that they meet requirements to "
Meaningfully Use" an Electronic Health Record software system in their practice.
While the notion of measuring clinical performance has been around for ages, the importance of CQMs within the Meaningful Use program has been a forcing factor for EHR software vendors: all are motivated to support calculation of the Meaningful Use CQM reports in their products. Since CQMs are a feature EHR vendors must support for this federal program, several visualization techniques have emerged to present CQM results to users:
This NextGen design requires a good amount of looking back and forth to understand the time interval being applied to the individual bar charts on the mammography report. That alone is my biggest grievance with the design. On top of that, the legend on the upper left of the bar chart actually cuts off the top of the results for women who have had a mammography screening in the past 12 months. The legend background is the same tone of black as the background of the bar chart illustration, so the visual representation of that metric reads lower than its actual numeric value.
This GE dashboard makes liberal use of color to differentiate the various clinical families, but provides no visual indication of the actual CQM results. The user is forced to click on each box to learn both the value and the name of the specific metric. I found this technique to be one of the more painful to use.
Admittedly, after several years leading popHealth, a Clinical Quality Measure
reference implementation software service, we have likewise struggled with the visualization we developed. Our popHealth CQM visualization has been warmly received by both clinicians and EHR vendors, but it uses significant
screen real estate for displaying numerous CQM results:
One approach to the visualization of Clinical Quality Measures that I haven't seen widely adopted is the Kiviat chart. Kiviat charts, sometimes referred to as
Radar Charts, are two-dimensional, multi-metric illustrations that use a variable number of evenly distributed radii. Each spoke is associated with one metric, and the length of the value overlaid on each spoke is proportional to the magnitude of the metric relative to the maximum value of that metric across all data points.
Kiviat diagrams work well for apples-to-apples comparisons across an arbitrary number of metrics. They can also make it easier to identify patterns in the data if the radii are arranged in a consistent order from diagram to diagram.
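To make the geometry concrete, here is a minimal Python sketch of how one might map a set of CQM results onto evenly distributed Kiviat spokes. The function name and example values are mine, purely for illustration; any plotting library could then draw the polygon defined by the returned vertices.

```python
import math

def kiviat_vertices(results, max_value=100.0):
    """Map a list of metric values onto evenly distributed Kiviat spokes.

    Each value is scaled against the maximum possible value and placed
    along its spoke, starting at 12 o'clock and proceeding clockwise.
    Returns the (x, y) vertices of the resulting polygon.
    """
    n = len(results)
    vertices = []
    for i, value in enumerate(results):
        radius = value / max_value                   # spoke length, 0.0 to 1.0
        angle = math.pi / 2 - 2 * math.pi * i / n    # evenly spaced radii
        vertices.append((radius * math.cos(angle), radius * math.sin(angle)))
    return vertices

# Six synthetic CQM results (percent of patients meeting each measure)
print(kiviat_vertices([82, 45, 90, 67, 73, 58]))
```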
After searching for Clinical Quality Measures
and Kiviat visualizations, I was able to find a good article from PIIM Research
detailing numerous
visualizations for clinical metrics, "
Advancing Meaningful Use: Simplifying Complex Clinical Metrics Through Visual Representation". This paper included a Kiviat visualization for some CQMs, and it also notes the value of overlaying national averages to highlight the delta from one provider's measured results with a Kiviat approach:
While I am unaware of target
metrics or a national average for the Meaningful Use CQM results, I like
the concept of showing targets for CQMs, presented here as a black underlay. One aspect that is absent from this approach is a way to show when a provider's results exceed the target metrics. The national-average approach above works well for highlighting a deficit in the results, but it does not offer a good way to visualize when CQM results exceed targets and expectations.
I do not endorse the liberal use of color in this design either. From my experience developing command and control systems, I have found that using red or green in any design will immediately convey goodness or badness associated with whatever metric is being presented. This PIIM design uses a spectrum of colors to indicate where each CQM's results fall, in addition to having the Kiviat line as a way to present this information.
I found the use of red and green to be very distracting when interpreting the notional results in the illustration. In particular, the same shade of green will always appear at the same numeric value across CQM metrics. However, the targets for the measures in the example vary on a measure-by-measure basis. I could easily imagine a light green CQM result (in the mid-80s) that might actually be very poor against a national average. Similarly, an orange/red CQM result (in the mid-60s) could easily exceed the national average and expectations.
The dark/light alternating background doesn't add any real value aside from showing the upper bounds of the entire space. While this is probably helpful for some users, I found it to be unnecessary.
What's my suggestion?
A few years ago, I worked with Involution Studio creative director
Juhan Sonin when he was at
MITRE. I have also attended and enjoyed designer
Stephen Few's course on
business dashboard design. Below is my attempt at an amalgam of Juhan's minimalist, clean Kiviat visualizations and Stephen's constant guidance to focus on the critical information:
Juhan Sonin's Kiviat illustration, re-purposed against synthetic data for Meaningful Use Stage 1 Clinical Quality Measures
Taking this design further by overlaying best practice targets on the individual results, users can quickly understand where the quality of care meets, exceeds, or falls short of expectations:
The decision to use red for the best practice numbers was made so that, when the red Kiviat is visible, it implies that the provider's performance is below the targets that she/he should be meeting. The use of blue for the measured results was primarily made to select a color that would complement the red. I couldn't find any good tones of green that would work without making the illustration look like it had a "Christmas" theme.
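For anyone who wants to experiment with this overlay idea, here is a rough matplotlib sketch. The measure names, measured results, and target values are all synthetic and invented for demonstration; wherever the red target polygon peeks out from behind the blue results polygon, the provider is falling short of a target.

```python
import numpy as np
import matplotlib.pyplot as plt

# Synthetic data: measure names, measured results, and best practice targets.
# All values are invented for demonstration purposes only.
measures = ["HbA1c Testing", "LDL Management", "Urine Screening",
            "Eye Exam", "Foot Exam", "BP Control"]
measured = [82, 45, 90, 67, 73, 58]
targets = [75, 80, 85, 70, 80, 65]

# Compute one angle per spoke, then close the polygons by repeating the
# first point at the end.
angles = np.linspace(np.pi / 2, np.pi / 2 - 2 * np.pi, len(measures),
                     endpoint=False)
angles = np.append(angles, angles[0])
measured = measured + measured[:1]
targets = targets + targets[:1]

ax = plt.subplot(111, polar=True)
ax.plot(angles, targets, color="red", linewidth=1)
ax.fill(angles, targets, color="red", alpha=0.25)     # best practice underlay
ax.plot(angles, measured, color="blue", linewidth=1)
ax.fill(angles, measured, color="blue", alpha=0.35)   # measured results
ax.set_xticks(angles[:-1])
ax.set_xticklabels(measures)
ax.set_ylim(0, 100)
plt.show()
```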
To my knowledge, the best practice metrics for any of the nationally recognized Clinical Quality Measures (Meaningful Use program,
PQRS program,
Pioneer ACO program, etc.) are not yet established. I don't know if there is work planned to identify what these targets need to be.
Both a strength of and a limitation to applying Kiviat illustrations in this way is the power to rapidly assess metrics "at a glance". For instance, the ability to recognize patterns through shapes alone could allow an analyst to rapidly review the results of numerous physicians and identify specific providers who need to improve the care they provide their patients for specific diseases and demographics.
What are some potential downsides to using Kiviat diagrams with CQMs?
A potential problem is that Kiviats benefit from standardizing the placement of the particular metrics to allow for rapid visual inspection of the CQM results. For example, changing the placement of the hemoglobin A1C metric after an analyst has been trained on one particular layout would probably result in increased cognitive load and/or errors. If the locations of the individual CQM metrics are standardized, users are spared from having to re-read the names of the metrics every time a diagram is presented.
Additionally, there are some limitations to Kiviat illustrations endemic to the visualization technique itself. One weak spot I have noticed is that changes in higher-ranging metrics tend to appear more exaggerated than changes of the same magnitude in smaller-ranging metrics. For example, a before/after 40-point increase (from 50% to 90%) on the Diabetes LDL Management CQM below yields a striking increase in the surface area of the chart. Meanwhile, a 40-point reduction in the Diabetes Urine Screening CQM (from 50% to 10%) represents the same change in the number of diabetic patients, but is far less pronounced.
See below for a before/after example that highlights changes of the same magnitude, with a perception of a larger change on the larger-valued metric:
Another challenge is that using Kiviat visualizations in this way assumes that the higher the CQM result, the better the performance and quality of care by the provider. This isn't always the case with the 44 Meaningful Use Stage 1 Ambulatory CQMs. For instance,
NQF 0059: Diabetes HbA1c Poor Control counts patients as meeting the requirement for this CQM if the hemoglobin A1c value in a diabetic patient is in an undesirable range (> 9%).
This type of CQM logic is perfectly "legal". However, the fact that the measure grades an undesirable result complicates visualizing it alongside other, more desirable metrics, such as
NQF 0575: Diabetes HbA1c Control, which looks for a hemoglobin A1c value < 8%.
One simple (albeit crude) solution to this problem would be to ensure that all future Clinical Quality Measures endorsed by the
National Quality Forum (NQF) are required to associate "goodness" with meeting the numerator criteria for any CQM.
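Short of that policy change, one pragmatic workaround in a visualization layer would be to flag these "inverse" measures and flip them for display purposes only, so that a longer spoke always means better care. The sketch below is purely illustrative; the set of inverse measure IDs is something each implementation would have to curate, not an official list.

```python
# Illustrative sketch: invert "lower is better" measures for display so that
# longer Kiviat spokes always indicate better care. The set below is a
# hand-curated example, not an official list of inverse measures.
INVERSE_MEASURES = {"NQF 0059"}  # e.g., Diabetes HbA1c Poor Control

def display_value(measure_id, performance_rate):
    """Return the value (0-100) to plot on the Kiviat spoke."""
    if measure_id in INVERSE_MEASURES:
        return 100.0 - performance_rate  # 20% "poor control" plots as 80
    return performance_rate

print(display_value("NQF 0059", 20.0))  # 80.0 -> long spoke, good care
print(display_value("NQF 0575", 70.0))  # 70.0 -> unchanged
```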
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivs 3.0 Unported License. © Rob McCready, 2012.