Seven CDI Performance Metrics You Should be Monitoring

by Lisa A. Eramo, MA
Excerpted with Permission from For The Record
January 2018, Vol. 30 No. 1 P. 14

All organizations should measure the following seven program metrics from the outset of any CDI effort and throughout the duration of the program:

1. Query rate/volume. Definition: Of the cases reviewed by CDI specialists, how many include a query?

Before measuring this metric, decide how the organization will calculate it, says Fran Jurcak, MSN, RN, CCDS, vice president of clinical innovation at Iodine Software. For example, will it count the total number of cases that include a query, or will it count the total number of queries per case? Counting individual queries more accurately indicates how much time CDI specialists spend reviewing each case, Jurcak says.
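
As a purely hypothetical illustration of the difference between the two approaches: if CDI specialists review 500 cases in a month and 150 of those cases include at least one query, the case-based query rate is 150 ÷ 500, or 30%. If those 150 cases generated 210 individual queries, counting queries instead yields 210 ÷ 500, or an average of 0.42 queries per reviewed case, a figure that better reflects how much work each review involves.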

Remember that it’s not only about the number of queries, says Glenn Krauss, RHIA, BBA, CCS, CCS-P, a senior consultant at Federal Advisory Partners, who notes that organizations must also examine the clinical validity of those queries. Do the queries actually improve the quality of the documentation so that it most accurately reflects patient severity? Krauss defines quality documentation as having the following four attributes:

  • valid chief complaint;
  • physical exam and assessment, both of which are congruent with the history of present illness;
  • definitive or provisional diagnosis with appropriate specificity; and
  • plan of care that’s congruent with the assessment.

“To me, that’s CDI—progress notes that tell the progress of the patient,” Krauss says.

If organizations intend to use the query rate/volume as a barometer of performance, they must ensure that all CDI specialists know when a query is appropriate, Jurcak says. “People are very subjectively making decisions about what they’re going to query as opposed to saying, ‘I want documentation integrity across the board regardless of how many queries are needed on an individual case,’” she says.

When CDI specialists don’t pose queries consistently, it becomes difficult to rely on the query rate/volume as a metric for effectiveness, Jurcak says. Posing queries inconsistently also sends mixed messages to physicians. For example, a CDI specialist decides not to query for heart failure because the case has already reached the maximum severity of illness and risk of mortality. “You send a message to the provider that you query for heart failure only when it matters [for reimbursement], and then you wonder why providers don’t comply,” Jurcak says.

2. Review rate. Definition: Of the total number of cases, how many did CDI specialists review?

Don’t be fooled into thinking that a high review rate indicates an effective CDI program, Jurcak says. “You have to know that your staff are reviewing records, but you don’t want them spinning their wheels on cases that don’t benefit from CDI,” she explains. “It’s about reviewing the right records every day.”

Leveraging technology with artificial intelligence to prioritize cases can help matters, says Jurcak, who suggests manually eliminating cases from the workflow that typically have limited documentation opportunities (eg, elective joint replacement surgeries with a length of stay of fewer than three days, first-day admissions for which a physician hasn’t yet documented a history and physical).

Jurcak, who was a nurse before moving into CDI, views performance metrics differently now that she works for a technology company. Organizations shouldn’t strive to review 100% of their cases because many won’t benefit from CDI, she explains. In fact, organizations leveraging technology to prioritize cases for CDI may ultimately witness a decrease in their review rate but an increase in their query volume, Jurcak says.

3. Response rate. Definition: Of the cases queried, how many elicited a physician response?

Physician responses to queries are critical; however, Jurcak says organizations must take a closer look at the type of responses they receive. A high response rate doesn’t mean an organization is necessarily improving documentation quality.

For example, many programs use query templates that automatically provide the option of “unable to determine” or “other.” When physicians check one of these nonspecific boxes, they’ve technically responded, but they may not have provided additional information to improve the quality of care or documentation specificity. Rather than default to these options on every template, consider including them only when necessary, Jurcak says.

For example, when a CDI specialist poses a compliant query with appropriate clinical evidence, it may not make sense to provide an answer of “unable to determine.” Jurcak says providing this option allows an easy out for physicians who don’t understand the query or who aren’t willing to take the time to accurately document the conditions being monitored and treated.

Krauss cautions organizations using this metric to consider the following question: Even when a query yields a codable diagnosis, does the documentation enable the organization to defend that diagnosis in the event of an audit? “If the clinical information, facts of the case, and context surrounding the diagnosis do not paint a picture of acuity in support of the diagnosis, the fact that the diagnosis is charted by the physician as a direct result of a query serves very little, if any, purpose,” he says. “The outside reviewers will simply refute the diagnosis and remove it from the claim, thereby downcoding the diagnosis-related group (DRG).”

4. Agreement rate. Definition: Up for debate.

Experts say there is no consensus within the industry on how to define this metric. Some organizations say agreement occurs when a physician provides a codable diagnosis rather than stating the clinical indicators aren’t relevant. Others say agreement occurs when a physician provides the anticipated or assumed diagnosis. A third interpretation is that agreement occurs when the physician agrees with the query—even when appropriate documentation is absent from the medical record.

The absence of a uniform definition makes nationwide program comparisons nearly impossible, Jurcak says. This metric is meaningful only when organizations take the time to formally define it—and then train staff on how to report data consistently, she says.

5. Complication or comorbidity (CC) or major complication or comorbidity (MCC) capture rate. Definition: Of the cases queried, how many yielded a CC and/or MCC?

Theoretically, the CC/MCC capture rate should increase as CDI efforts are initiated, says Amber Sterling, RN, BSN, CCDS, director of CDI services at TrustHCS. However, organizations shouldn’t assume that a low CC/MCC capture rate equates to ineffective CDI. In some cases, CCs and MCCs may simply be absent in the population.

6. Case-mix index (CMI). Definition: What is the average relative weight of all DRGs reported during a defined period of time?

In theory, the CMI should increase as CDI specialists capture additional CCs and MCCs. However, there are other factors that can influence CMI, such as the volume of surgical patients, removal of a service line, and the seasonality of certain diagnoses—none of which CDI specialists can impact using queries, Sterling says.
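
As a hypothetical example of the arithmetic behind the definition: if a hospital reports 100 cases in a month and the relative weights of their final-coded DRGs sum to 150, the CMI for that month is 150 ÷ 100, or 1.50. If a query on one case captures an MCC that moves that case from a DRG with a relative weight of 1.0 to one with a weight of 1.8, the sum rises to 150.8 and the CMI inches up to about 1.51, which illustrates why a single queried case moves the average only slightly while broader case-mix shifts can move it much more.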

7. Financial impact. Definition: How does the working DRG compare with the final-coded DRG?

The challenge with this metric is that staff assign impact inconsistently, Sterling says. “It seems relatively simple, but there are a lot of gray areas,” she says. “You see a lot of variance in CDI staff practice. It takes diligence by the program managers to continually educate and audit their team.”

For example, will the organization count the financial impact anytime a CDI review yields a CC or only when the review yields a CC that’s the only CC on the case (thus shifting the DRG)?
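
As a hypothetical illustration of why the distinction matters: suppose the working DRG carries a relative weight of 1.0, the final-coded DRG after the queried CC is added carries a weight of 1.3, and the hospital’s base payment rate is $6,000. The difference, 0.3 × $6,000, or $1,800, is fairly attributed to CDI only if the queried CC is what actually shifted the DRG; if another CC already documented on the case would have produced the same DRG, reporting the $1,800 overstates the program’s impact.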

“If you’re saying there’s a dollar impact on this case, the case must meet your standards for how you’re reporting impact,” Sterling says. “If you report $20 million of impact, but then you find out later there was an error on how things were reconciled and it was actually $12 million, your C-suite is not going to appreciate that. I’ve seen it happen. It can be significant.”

 


 

________________________________________________________________________________________________

 Lisa A. Eramo, MA, is a freelance writer and editor in Cranston, Rhode Island, who specializes in HIM, medical coding, and health care regulatory topics. Contact Lisa at [email protected] or https://lisaeramo.com/

 

 
