The anesthesia quality improvement program at Vanderbilt University Medical Center in Nashville can claim a variety of successes, notably reduced postoperative hypo- and hyperglycemia and fewer wound infections.
When anesthesia providers noticed they weren’t monitoring blood glucose in patients with diabetes as frequently as their own goals specified, they added an electronic notification screen to their anesthesia information system that pops up if a glucose level hasn’t been obtained in the past hour.
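The hourly reminder rule can be sketched as a simple check, shown below. This is a minimal illustration, not Vanderbilt’s actual implementation; the function and field names are assumptions, and only the one-hour interval comes from the article.

```python
from datetime import datetime, timedelta
from typing import Optional

REMINDER_INTERVAL = timedelta(hours=1)  # the department's stated goal

def needs_glucose_reminder(has_diabetes: bool,
                           last_glucose: Optional[datetime],
                           now: datetime) -> bool:
    """Return True when the pop-up reminder should fire for this patient."""
    if not has_diabetes:
        return False
    if last_glucose is None:  # no reading yet during this case
        return True
    return now - last_glucose >= REMINDER_INTERVAL
```

In a real anesthesia information system this check would run against live case data each time the record refreshes; here it is reduced to a pure function so the trigger logic is easy to audit.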
“Since doing that, the incidence of hypo- and hyperglycemia in the PACU [postanesthesia care unit] has diminished, and there’s been a decrease in wound infection rates in diabetic patients,” says Warren Sandberg, MD, PhD, chair of the department of anesthesiology.
Here is a closer look at a few components of Vanderbilt’s program.
Dr Sandberg says Vanderbilt electronically collects process measures data such as on-time starts and turnover time; clinical process data such as whether normothermia was maintained and antibiotics were given on time; and basic clinical care indicators such as whether the recommended care bundle was used for central venous catheterization, whether patients with diabetes were managed to departmental expectations, and whether departmental hand hygiene performance meets institutional expectations. Anesthesia providers can use a dashboard to measure their own performance against departmental aggregate data.
Vanderbilt is testing automatic notification of postoperative events of interest to providers, such as length of stay, unplanned ICU admission, or increased creatinine.
“The system scans the hospital records each day after surgery,” Dr Sandberg explains. “If anything is positive, you receive an email in the morning.” Providers also receive weekly feedback on factors such as length of stay and minor complications.
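A daily scan like the one Dr Sandberg describes could be structured as a set of event checks over each patient record. The record fields and the creatinine threshold below are assumptions for illustration; the article names only the event categories, not the system’s data model.

```python
def flagged_events(record: dict) -> list:
    """Return the postoperative events of interest found in one record.

    The field names and the 1.5x creatinine-rise threshold are
    hypothetical; the source describes only the event categories.
    """
    events = []
    if record.get("unplanned_icu_admission"):
        events.append("unplanned ICU admission")
    baseline = record.get("baseline_creatinine")
    latest = record.get("latest_creatinine")
    if baseline and latest and latest >= 1.5 * baseline:
        events.append("increased creatinine")
    los = record.get("length_of_stay_days", 0)
    if los > record.get("expected_los_days", float("inf")):
        events.append("prolonged length of stay")
    return events
```

A nightly job would run this over the previous day’s cases and email each provider whose patients returned a non-empty list.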
Vanderbilt developed the software for its quality program internally, but Dr Sandberg says, “What we do can be done with a commercial off-the-shelf product that can be configured by the end user or by a user group working with a vendor.” Although the program is accessible from any device, he says it’s most easily viewed on a traditional computer.
Another component of the program at Vanderbilt is peer review for its providers, who are hospital employees, as one part of the initiative to comply with the Joint Commission requirement for ongoing professional practice evaluation of medical staff.
“It’s designed to detect clinicians who are practicing substantially differently from the rest of the group,” says Dr Sandberg. The program is based on one developed by David Zvara, MD, professor and chair of anesthesiology at the University of North Carolina School of Medicine in Chapel Hill.
Reviews are assigned monthly by an algorithm that first assigns rater-subject pairs based on recent interactions, such as hand-offs, and then assigns the remaining few unpaired providers randomly.
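The two-stage assignment could look roughly like the sketch below. Representing recent interactions as (rater, subject) tuples is an assumed input shape; the article describes only the overall strategy of interaction-based pairing followed by random assignment of the remainder.

```python
import random

def assign_reviews(providers, recent_pairs):
    """Give each provider one peer to review.

    First pair raters with subjects they recently interacted with
    (e.g. hand-offs), then assign the remaining raters random subjects.
    `recent_pairs` is an iterable of (rater, subject) tuples, a
    hypothetical shape for the interaction data.
    """
    assignments = {}
    for rater, subject in recent_pairs:
        if (rater in providers and subject in providers
                and rater not in assignments and rater != subject):
            assignments[rater] = subject
    for rater in providers:
        if rater not in assignments:  # the "remaining few"
            assignments[rater] = random.choice(
                [p for p in providers if p != rater])
    return assignments
```

The key invariants are that every provider receives exactly one assignment and that no one reviews themselves.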
The reviewer completes a confidential survey of nine questions, each with a 4-point Likert score and an option for when the reviewer is unable to answer the question; no text comments are collected. “The questions map back to the Accreditation Council for Graduate Medical Education core competencies,” says Dr Sandberg. He believes the two most important questions are:
• Would you feel comfortable handing over a case to this provider?
• Would you recommend a family member be cared for by this provider?
No one in the anesthesia department’s clinical leadership sees the raw data, which are processed through the anesthesia information management system and protected by Vanderbilt’s peer review process. Only the chair of the department’s peer review committee, who is a respected senior clinician, sees the unblinded data.
If the chair identifies a provider who is an outlier, he notifies that individual’s division chief, describing the areas where the individual’s ratings depart from the group’s. This prompts a formal or informal focused professional evaluation.
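One simple way to flag such outliers is to compare each provider’s mean Likert score against the group distribution. The z-score rule and threshold below are assumptions for illustration; the article does not say what statistical rule the committee chair actually applies.

```python
from statistics import mean, stdev

def rating_outliers(ratings_by_provider, z_threshold=2.0):
    """Flag providers whose mean rating departs from the group.

    ratings_by_provider maps provider -> list of Likert scores (1-4).
    The z-score test and 2.0 threshold are hypothetical choices.
    """
    means = {p: mean(scores) for p, scores in ratings_by_provider.items()}
    group_mean = mean(means.values())
    group_sd = stdev(means.values())
    if group_sd == 0:  # everyone rated identically; no outliers
        return []
    return [p for p, m in means.items()
            if abs(m - group_mean) / group_sd >= z_threshold]
```

In practice the committee would also weigh the number of “unable to answer” responses and the per-question pattern, not just an aggregate score.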
When a formal focused professional practice evaluation (FPPE) is chosen, Vanderbilt’s medical staff policies guide its implementation. The goal of the FPPE is to give clinicians the tools and guidance to raise their practice level to the expectation of the group. “To date, nobody has failed to come back into alignment with the group,” Dr Sandberg says.
Each anesthesia provider completes about four evaluations a month. Dr Sandberg adds that peer review is better than trying to use general indicators and other metrics to evaluate individual anesthesia providers (sidebar).
Part of ensuring anesthesia quality is ensuring quality anesthesia providers. But how do you allow new residents to spread their wings while keeping patients safe?
At Vanderbilt, a mobile app that enables faculty members to see into the OR and access data such as patient vital signs gives residents independence while allowing for oversight.
“It gives residents the illusion they are in the room on their own,” Dr Sandberg says, adding, “We don’t record anything; it’s simply a visualization tool, but it’s really helpful for keeping an eye on things while training for independence.”
Dr Sandberg credits the success of Vanderbilt’s program to “a culture that is welcoming to continuous improvement. We look for system problems to fix rather than focus on the individual level.”
At quarterly multidisciplinary quality meetings, attended and led by personnel from all perioperative disciplines, both good and bad outcomes are discussed.
“This means everyone can contribute,” he says. Faculty and staff also are forgiving of innovation that’s not implemented perfectly because, Dr Sandberg says, “They are optimists about developing systems that work better and improve our ability to use information in real time, which will improve performance in the future.”
Bayman EO, Dexter F, Todd MM. Assessing and comparing anesthesiologists’ performance on mandated metrics using a Bayesian approach. Anesthesiology. 2015;123:101-115.
Wanderer JP, Shi Y, Schildcrout JS, et al. Supervising anesthesiologists cannot be effectively compared according to their patients’ postanesthesia care unit admission pain scores. Anesthesia & Analgesia. 2015;120:923-932.
Warren Sandberg, MD, PhD, chair of the department of anesthesiology at Vanderbilt University School of Medicine, says that attempts to use general indicators to evaluate the quality of individual anesthesia providers are often flawed in their implementation.
For example, a study from Dr Sandberg’s institution (Wanderer et al) found that pain scores in the postanesthesia care unit are, as Dr Sandberg says, “worthless” for rating individual anesthesiologists because the most significant determinant of the response turned out to be which nurse asked the question.
“Until this study, it probably hadn’t occurred to many people that such a seemingly simple measure is sensitive to implementation differences,” Dr Sandberg notes. In hindsight, though, the phrasing of the question clearly shapes the response: compare “You don’t have any pain, do you?” with “You look like you are having a lot of pain.”
He adds that a recent study by Bayman, Dexter, and Todd found that when using the appropriate analysis techniques, “it’s almost impossible to distinguish the work of one anesthesiologist from another.”
Dr Sandberg believes this and other studies show that there’s nothing about anesthesiologists as isolated practitioners that can easily be measured as the basis for rating their performance. “Nobody practices in isolation. Anesthesiologists, nurse anesthetists, residents, and nurses all work in a system, which makes it hard to attribute a change in performance to an individual,” he says.
Monitoring is important for detecting serious outliers, but it is not useful for improving the quality of individual performance. “You need to focus on the team, not individuals.”