There was a recent article in the New York Times reporting on the findings of this paper comparing two treatment strategies for ADHD.
I sent the article around on social media, partly because I thought it was an interesting bit of work. The greater part of my spreading the article around, though, was curiosity about the reactions. Occupational hazard. Or requirement.
This is a complicated study. When complicated studies are reported in the media, I find a couple of things happen. First, there’s just a little distortion of emphasis in the reporting that fits the prevailing popular narrative about the condition. I’ve talked about examples of this before. (Here, and here, to start.)
Then there is the public reaction, which tends to be similar to the media distortion, but without the media’s attempt at balance. If there are camps on an issue, one (or both) sees exactly what they already believed.
So in this article the reactions I typically saw were along the lines of:
- “Of course these nonpharmacological treatments work better. They’re natural and this ADHD thing is mostly medicalizing normal kid stuff anyway.”
- “These medicines are dangerous, and we just shouldn’t use them on kids.”
The former was most interesting to me. Because, right there in the article, it says that’s not what the study found. It’s almost the opposite of what the study found. Here’s the whole paragraph:
> After two months, the yearlong study took an innovative turn. If a child had not improved, he or she was randomly assigned one of two courses: a more intense version of the same treatment, or an added supplement, like adding a daily dose of medication to the behavior modification. About two-thirds of the children who began with the behavior therapy needed a booster, and about 45 percent of those who started on medication did.
Got it? After two months, kids did better on the (relatively low dose) medication than they did with the behavior therapy: about two-thirds of the behavior therapy group needed either more intensive treatment or the addition of the other modality, versus about 45 percent of the medication group.
In other words, if the question is which treatment gets more kids better at two months, the answer seems to be the medicine (though that’s not exactly how the study was designed).
Which is not to say that non-pharmacological treatments don’t work. If anything, I’d say cognitive interventions for adults are pretty promising these days, if you can find them. In kids, it might be a little more iffy – some things look like they let you get away with less medication but aren’t necessarily better, full stop.
Happily, this isn’t a which-treatment-is-better study. We don’t have enough of those, so if it were I’d be happy. But it’s better: it’s a which-strategy-is-better study, which is far more useful. Yet, strangely, such studies can go badly wrong.
There is a fundamental trade-off I constantly face, so often that I’ve strongly considered throwing it in as a Law of Psychiatry. Usually I can do only two of the following, and sometimes just one:
- Use the treatment with the best chance of success.
- Use the treatment with the least bad effects.
- Use the treatment the patient is most likely to get.
Case in point, the recent study of SSRIs for major depressive disorder. Overall, using higher doses gives the best chance of remission. However, higher doses also produce more adverse events. From the population perspective, going high seems to produce the most wellness. On the other hand, if you have a patient who has recurrent depressions, lives in the real world, and may need to stay on this stuff for years, maybe tolerability ought to be a consideration.
Such is the paradox of this study. The behavioral treatment worked at getting kids to follow rules, but so did the medication. I expect the behavioral intervention probably didn’t work as well on the other aspects of ADHD – executive dysfunction and attention impairment. I could be wrong. Behavioral systems change behavior, so that’s what was supposed to happen.
Now here’s the rub: if you started off with behavior therapy and it failed to produce adequate improvement (as it did more often than not), the kids for whom behavior therapy failed did better than the kids for whom medicines failed.
Why didn’t the same effect appear when kids started with meds? Why on earth is A then B the magic sauce, rather than simply A + B, particularly when A doesn’t work as well as B the first time around? The strategy here is that starting with the less effective option and then adding in the more effective one gives better results than starting with the more effective one.
As is often the case with these strategic studies, the results are absolutely not common sense, and sometimes downright counterintuitive.
The authors had a suspicion, which I share: the parents who started with low-dose medication and were randomized to behavior therapy afterward didn’t work as hard at it. Could be they weren’t as highly motivated to try something else in the face of (presumably) partial improvement from the medication, or maybe they just didn’t want the trouble.
OK, enough of the armchair social critique, Dr. Bloggs. The study says teach the behavioral management techniques to improve problematic behaviors, then add medication for residual symptoms, and you wind up with someone who’s better off in the long run. So let’s just do that.
That’s probably exactly what we should do. I would also bet it will barely happen in the real world.
First off, behavioral interventions aren’t easy. Not every teacher, and not every parent, is going to participate fully; and that assumes you have the people available to teach the treatment and that insurance will pay for it (what with it “failing” more often than it succeeds, even in the domain to which it is best adapted).
Second, speaking of insurance, there is the insurance industry’s deep love of “fail first” criteria. What would happen, say, if an insurance company required the family to “fail first” at a trial of behavior therapy (8 weeks, plus a whole lot of work) before allowing medication, or vice versa?
Then there is this: Families who are doing behavior therapy explicitly instead of medicines are not following this protocol. They are picking the less effective initial treatment without committing to the strategy that makes it the better starting choice.
So what this study says is not that behavior therapy works better than medicines. What this study says is that starting with behavior therapy and adding medication later at a low dose produces better results, and maybe for some kids the behavioral improvement is adequate and you never need the meds. I’d really like to see their grade books before making that judgment, but it’s entirely possible.
It doesn’t serve the pharmaceutical industry, since it means a lower chance of being on medicines, and at lower doses. It might serve the insurers if fail-first criteria are used, but that won’t serve the kids or their families. It certainly does not fit the “medicines don’t work/are dangerous” narrative of those who are anti-medication. And it isn’t going to manufacture the behaviorally informed therapists and the teachers willing and able to implement such strategies.
So mainly it informs good practice, but beyond that doesn’t really serve anybody’s agenda.
Is it going to change the world?
I’ll be over here, holding my breath.