Continuing from yesterday’s post (see: Just Read: Can peer grading be made to work in MOOCs? Can it work for doctors? | Ted Eytan, MD), another article I was pointed to is this one (H/T Bob Wachter, MD @Bob_Wachter and Harlan Krumholz, MD @HMKyale), which describes an experiment in physician peer review.
Thanks to the advent of video technology, and to physician, hospital, and payer collaboration around bariatric surgery in Michigan, 20 surgeons were recruited and asked to submit a representative video (of their choosing) of their performance of laparoscopic gastric bypass, considered among the more complex bariatric procedures.
The videos were edited to 25 to 40 minutes in length and distributed over the course of a year to 33 surgeons from 24 hospitals, who rated them on a scale of 1 to 5 (1=chief resident, 5=master surgeon). The researchers then compared the average ratings of each surgeon with their complication rates, and …
As compared with patients treated by surgeons with high skill ratings, patients treated by surgeons with low skill ratings were at least twice as likely to die, have complications, undergo reoperation, and be readmitted after hospital discharge.
Researchers also found that skill was not related to years in practice, advanced training, or practicing at a teaching hospital; it was related to one thing: volume. The most skilled surgeons performed almost three times as many procedures per year.
In the article online, you can access the videos of the most skilled and least skilled surgeons (again, as judged by their peers). If you have access, they are worth a look.
In contrast to yesterday’s post, which studied thousands of learners in a MOOC, the authors took some simple steps to ensure the accuracy of the grading, including statistical adjustment to correct for harsher or more lenient graders, review of a second video by the study authors, and review of the videos by 5 surgeons from outside Michigan (“Perhaps not surprisingly, the non-Michigan surgeons were somewhat harsher judges, particularly of surgeons in the bottom quartile.”).
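The article doesn’t spell out the adjustment method here, but the basic idea behind correcting for harsher or easier graders can be sketched simply: estimate each rater’s offset from the overall mean and subtract it before averaging a surgeon’s scores. This is a hypothetical illustration with made-up data and names, not the study’s actual statistical model:

```python
# Sketch of a simple rater-harshness correction (mean-centering per rater).
# Data, rater IDs, and surgeon IDs below are invented for illustration;
# the study's actual adjustment may have been more sophisticated.
from collections import defaultdict

# (rater_id, surgeon_id, score on the 1-5 scale)
ratings = [
    ("r1", "s1", 4.5), ("r1", "s2", 2.0),
    ("r2", "s1", 3.5), ("r2", "s2", 1.0),  # r2 rates consistently harshly
    ("r3", "s1", 5.0), ("r3", "s2", 2.5),  # r3 rates consistently leniently
]

grand_mean = sum(score for _, _, score in ratings) / len(ratings)

# Each rater's bias = that rater's average score minus the overall average.
by_rater = defaultdict(list)
for rater, _, score in ratings:
    by_rater[rater].append(score)
rater_bias = {r: sum(v) / len(v) - grand_mean for r, v in by_rater.items()}

# Remove each rater's bias, then average the adjusted scores per surgeon.
adjusted = defaultdict(list)
for rater, surgeon, score in ratings:
    adjusted[surgeon].append(score - rater_bias[rater])

surgeon_skill = {s: round(sum(v) / len(v), 2) for s, v in adjusted.items()}
print(surgeon_skill)
```

After the correction, a surgeon’s average no longer depends on whether they happened to draw the harsh or the lenient raters; the harsh rater’s low raw numbers and the lenient rater’s high ones converge on the same adjusted score.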
Reading this, it’s a little daunting to think that you might be operated on by a surgeon in the lower quartile of skill. The more daunting idea, though, is that surgeons may be operating at a lower skill level and no one knows it, including their peers, their patients, and their patients’ families. It’s possible that at some level their peers do know. Either way, knowing isn’t as important as doing something to improve skill, and the authors imply that this is possible.
If we wanted to extrapolate these findings to a non-surgical specialty, like, say, family medicine, we (other physicians, nurses, patients) might rate patient-physician interactions across a spectrum of care experiences (in person, over the phone, virtually, by email or video) to assess skill, and understand how that skill contributes to the outcomes of interest (the ones patients care about: optimal health to achieve life goals).
Innovations like Project ECHO point toward this concept, as one of its foundations is increasing the skill level of primary care clinicians so they can produce outcomes at the level of specialists in the field (see: Now Reading: Breaking knowledge monopolies within the health professions to end suffering – Project ECHO | Ted Eytan, MD).
As usual, I see many possibilities – do you?
From @tedeytan: Just Read: Physician Peer Review Can Work: Assessing Surgical Skill and Complication Rates http://t.co/94pgzXDbjc #medicine