I often get asked what the biggest concerns are for organizations participating in MIPS. For the 2019 performance year, one of the biggest is the uncertainty around the Cost category. Since so much about this category is unknown, I'll focus on what we do know.
The Cost component of the total MIPS score continues to rise each year; 2018 was the first year in which 10% of the score was attributed to Cost. The weight will continue to ramp up each year until Cost reaches 30% of the score in 2022. Most organizations struggle with the inability to monitor cost data, because CMS calculates it from claims data six months after the performance year ends.
Additionally, you will only receive performance feedback based on how you chose to submit for the year.
For example, if you submitted individually, you will receive individual clinician cost scores. However, if you submitted as a group, you will receive one cost score for the entire TIN. The few pieces of data that we have for cost data include the 2016 QRUR, the 2017 Performance Feedback Reports, and the 2018 National Summary Cost Field Testing Report.
How can you build a strategy around a category you have no insights on?
New 2019 Measures & Scoring
In 2019, there are eight new episode-based measures for a total of ten measures (MSPB & TPCC remain from 2018). These ten measures are averaged to give a total cost score up to 15 possible points. All ten measures in 2019 fall into one of three measure types: (1) Annual Cost of Care, (2) Acute Inpatient Medical Condition, and (3) Procedural Episode. The measure type determines how the cost is attributed to the clinician(s).
Just like last year with Medicare Spending per Beneficiary & Total Per Capita Cost, there are minimum case thresholds for each measure. If any of the measures do not have enough attributed beneficiaries, then the remaining measures are averaged to calculate the score. If no measures meet the threshold, then the entire Cost category is re-weighted to the Quality category. This can be good and bad.
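To make the averaging and re-weighting rules concrete, here is a minimal sketch in Python. The measure names, case minimums, and the 0–10 point scale per measure are illustrative assumptions for this sketch, not CMS's official scoring methodology:

```python
# Illustrative sketch of the Cost category scoring logic described above.
# Assumptions (not official CMS methodology): each measure earns 0-10
# achievement points, and the category is worth 15 MIPS points in 2019.

def cost_category_score(measures):
    """Average the measures that meet their case minimums.

    measures: list of dicts with 'points' (0-10), 'cases' (attributed
    episodes/beneficiaries), and 'min_cases' (the measure's threshold).
    Returns the category score out of 15, or None if no measure is
    scorable (in which case Cost re-weights to the Quality category).
    """
    scorable = [m for m in measures if m["cases"] >= m["min_cases"]]
    if not scorable:
        return None  # entire category re-weights to Quality
    avg = sum(m["points"] for m in scorable) / len(scorable)
    return avg / 10 * 15  # scale 0-10 average to the 15-point weight

example = [
    {"name": "MSPB", "points": 7.0, "cases": 40, "min_cases": 35},
    {"name": "TPCC", "points": 5.0, "cases": 10, "min_cases": 20},  # below threshold, excluded
]
print(cost_category_score(example))  # only MSPB counts: 7/10 * 15 = 10.5
```

Note how excluding a below-threshold measure changes the result: with TPCC dropped, the whole score rides on MSPB alone, which is exactly the concentration risk discussed above.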
The good news is that most TINs or NPIs will not meet the threshold on all of the new episode-based measures, so you can be less overwhelmed when digesting ten cost measures. Table 3 below, from the National Summary Report, indicates that 83% of TINs were above the minimum case threshold for only one of the eight new episode-based measures. At the individual clinician level it's even more extreme: 98% of clinicians had only one measure attributed. This means an individual clinician could have their entire cost score ride on MSPB, TPCC, and one of the new episode-based cost measures.
The bad news is the flip side: if you do not meet the threshold for many measures, your entire cost score could ride on just a few.
Without any performance data, it's difficult to gauge where you stand and how to improve. The first piece of advice I can give is to ensure that you are properly coding all diagnosis codes for your patients so that the Hierarchical Condition Category (HCC) risk adjustment is properly applied. The HCC risk adjustment accounts for how high-risk your patients are. Without proper coding, your physicians could receive an inaccurate adjustment and be unfairly penalized compared with other clinicians.
Second, I recommend your team review the codes and services attributed as "clinically relevant" for each measure in the Cost category. All of these codes factor into the cost measure calculations, so it's important to understand what they are.
What is the National Summary Report?
Back in November 2017, CMS initiated a field test of all the new episode-based measures for 2019 to ensure they were properly developed before inclusion in MIPS. Over 14,000 TINs participated in the field test, and the measures each accumulated thousands of episodes, many reaching into the tens of thousands or more. CMS then summarized all the data it gathered into a National Summary Report to show performance trends and insights for each measure.
There are some very interesting tidbits in the report, including the reliability of each measure at the TIN level versus the individual clinician level. CMS also included statistics for each measure to show its importance and why it was included in the Cost category. For example, the Elective Outpatient Percutaneous Coronary Intervention (PCI) procedure is performed on 600,000 patients each year and has the highest aggregate cost of any cardiovascular procedure, totaling about $10B annually. Seeing the high dollar expenditures helps justify why these measures are of critical importance to CMS.
Organizations that participated in the field test did have the ability to download their individualized performance feedback on these episode-based cost measures. However, if an organization did not participate, the National Summary Report is the only new source of performance insight we have to determine how organizations are performing in 2019. I strongly encourage everyone to take a look at the data to see where the average cost falls for each measure.
As I mentioned at the beginning, the market is frustrated that there is not more provider- or TIN-specific information to let a practice zero in on the areas where it can best manage Cost. At this point, it's a level playing field, since no one has any performance data from CMS. Past performance is a starting point, so your 2017/18 Performance Feedback Reports *could* provide a baseline for two of the measures. Analysis of 2016 QRUR data might offer insight into the areas driving the most cost, albeit for providers attributed under the former PQRS program. That feedback report may be one of your best better-than-nothing resources. As stated earlier, proper coding and reviewing the trigger codes for each episode may provide insight into which costs will be attributed. I'm as anxious as you are to see some real performance data on these new measures!
This article was originally published on SA Ignite.