Wednesday, August 18, 2021
The PHQ-9 is a very frequent topic of conversation on the listserv. I have long been concerned about our field's overconfidence in the ability of the PHQ-9 to inform treatment. While I use the PHQ-9 in practice nearly every day, I worry that some clinicians and administrators put too much emphasis on what a total score means. This paper gives us some fresh ideas about how we should be thinking about what a PHQ-9 score means. It is the first paper I have seen to associate PHQ-9 scores with meaningful, patient-centered outcomes.
First, let me say that the PHQ-9 is a fairly good tool for screening patients and identifying those who have an unmet behavioral health need. While I think computer adaptive testing is a better approach and the way of the future, there are still plenty of settings where a paper-and-pencil PHQ-9 is the best we can do.
I am more concerned about the value of the PHQ-9 in measuring treatment response. I trust that a clinician can use a PHQ-9 to monitor a patient over time and compare an individual patient’s responses at the beginning of treatment to a response 6 weeks later. Where things start to get far more murky is when an insurance company or health system starts to draw “quality” conclusions based on the patient’s change in PHQ-9 score.
This paper reviews many of the most common alleged “quality measures” such as:
≥ 50% decrease from baseline
Absolute decrease of ≥ 5
Score of <10
Score of <5
Those are just four of the metrics discussed in the paper. The first and most obvious thing that strikes me about them is that they all use numbers that are multiples of 5. The same goes for this guide to interpreting the PHQ-9, different versions of which I have seen over time:
5-10 = mild
10-15 = moderate
> 15 = severe
These metrics and interpretation guides have always seemed suspicious to me. What is the likelihood that actual, empirically derived numbers would all be multiples of 5? It seems far more likely to me that someone just made these numbers up. They don’t have anything to do with DSM criteria for depression severity, or if they do, it’s only by coincidence. Likewise, these numbers don’t have anything to do with actual, meaningful, patient-centered outcomes of depression treatment. By this I mean: what do these numbers actually tell us about a reduction in suicide risk? Or the ability of a patient to return to work? Or the ability of a patient to fulfill family responsibilities?
What this paper suggests is that patient-centered outcomes appear to be associated with a 7-9 point reduction in PHQ-9 scores:
"Broadly, our findings suggest that the first notable increase in the likelihood of having favorable dichotomized PCO values occurs after an absolute decrease of 7–9 points from the initial PHQ-9 score”
Shocking! The real-world, empirically derived number is not a multiple of 5.
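To make that concrete, here is a quick sketch of my own (in Python, not anything from the paper) that checks a baseline and follow-up PHQ-9 score against the common cut-point metrics above and against a 7-point absolute reduction, the lower bound of the range the authors report. The scores are made up purely for illustration.

```python
def phq9_metrics(baseline: int, follow_up: int) -> dict:
    """Check a baseline/follow-up PHQ-9 pair against common treatment metrics.

    The first four are the conventional cut-point metrics reviewed in the paper;
    the last uses the 7-point lower bound of the 7-9 point reduction the authors
    associate with favorable patient-centered outcomes.
    """
    decrease = baseline - follow_up
    return {
        ">= 50% decrease from baseline": decrease >= 0.5 * baseline,
        "absolute decrease of >= 5": decrease >= 5,
        "follow-up score < 10": follow_up < 10,
        "follow-up score < 5": follow_up < 5,
        "absolute decrease of >= 7 (paper's lower bound)": decrease >= 7,
    }


# Illustrative example: a drop from 18 to 12 clears the >= 5-point metric
# but falls short of the 7-9 point reduction the paper highlights.
for metric, met in phq9_metrics(baseline=18, follow_up=12).items():
    print(f"{metric}: {'met' if met else 'not met'}")
```

The point of the toy example is simply that the conventional metrics and the 7-9 point standard can disagree about the very same patient.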
While this study seems to be a secondary analysis of a 20-year-old dataset, it still impresses me. And I would love to see this replicated with a more robust set of patient-centered outcomes in a prospective study.
In the meantime, I am going to adjust my framework for thinking about PHQ-9 scores to align with a 7-9 point reduction standard. And I am going to continue to focus on question 10, which doesn’t even affect the score: “How difficult have these problems made it for you to do your work, take care of things at home, or get along with other people?” I believe this is the shortcut to understanding patient-centered outcomes for the patient sitting in front of you.
If you are in a system that is trying to pay or judge clinicians based on improvements in PHQ-9 scores, please push back. And if your system is being judged against the arbitrary depression metrics this article reviews, please use this paper to resist the tyranny of those metrics.
Carlo, A. D., Basu, A., & Unützer, J. (2021). Associations of Common Depression Treatment Metrics With Patient-centered Outcomes. Medical Care, 59(7), 579–587. https://doi.org/10.1097/mlr.0000000000001540