There was a recent article in the New York Times reporting on the findings of this paper comparing two treatment strategies for ADHD.
I sent the article around on social media, partly because I thought it was an interesting bit of work. The greater part, though, was curiosity about the reactions. Occupational hazard. Or requirement.
This is a complicated study. When complicated studies are reported in the media, I find a couple of things happen. First, there’s just a little distortion of emphasis in the reporting that fits the prevailing popular narrative about the condition. I’ve talked about examples of this before. (Here, and here, to start.)
Then there is the public reaction, which tends to be similar to the media distortion, but without the media’s attempt at balance. If there are camps on an issue, one (or both) sees exactly what they already believed.
So for this article, the reactions I saw were typically along the lines of:
- “Of course these nonpharmacological treatments work better. They’re natural and this ADHD thing is mostly medicalizing normal kid stuff anyway.”
- “These medicines are dangerous, and we just shouldn’t use them on kids.”
The former was most interesting to me. Because, right there in the article, it says that’s not what the study found. It’s almost the opposite of what the study found. Here’s the whole paragraph:
After two months, the yearlong study took an innovative turn. If a child had not improved, he or she was randomly assigned one of two courses: a more intense version of the same treatment, or an added supplement, like adding a daily dose of medication to the behavior modification. About two-thirds of the children who began with the behavior therapy needed a booster, and about 45 percent of those who started on medication did.
Got it? After two months, kids did better on the (relatively low dose) medication than they did with the behavior therapy, and more in the behavior therapy group needed either more intensive treatment or a switch of modality.
In other words, if the question is which treatment gets more kids better at two months, the answer seems to be the medicine (though that’s not exactly how the study was designed).
Which is not to say that non-pharmacological treatments don’t work. If anything, I’d say cognitive interventions for adults are pretty promising these days, if you can find them. In kids, it might be a little more iffy – some things look like they let you get away with less medication but aren’t necessarily better, full stop.
Happily, this isn’t a which-treatment-is-better study. We don’t have enough of those, so if it were I’d be happy. But it’s better. It’s a which-strategy-is-better study, which is far more useful – yet can, strangely, go badly wrong.
There is a fundamental three-way choice I constantly face – so often that I’ve strongly considered throwing it in as a Law of Psychiatry. Usually I can only do two of these, and sometimes just one:
- Use the treatment with the best chance of success.
- Use the treatment with the least bad effects.
- Use the treatment the patient is most likely to get.
Case in point: the recent study of SSRIs for major depressive disorder. Overall, higher doses give the best chance of remission. However, higher doses also give more adverse events. From the population perspective, going high seems to produce the most wellness. On the other hand, if you have a patient who has recurrent depressions, lives in the real world, and may need to stay on this stuff for years, maybe tolerability ought to be a consideration.
Such is the paradox of this study. The behavioral treatment worked at getting kids to follow rules, but so did the medication. I expect the behavioral intervention probably didn’t work as well on the other aspects of ADHD – executive dysfunction and attention impairment. I could be wrong. Behavioral systems change behavior, so that’s what was supposed to happen.
Now here’s the rub: if you start off with behavior therapy and it fails to produce adequate improvement (as it did more often than not), the kids for whom behavior therapy failed did better than the kids for whom medication failed.
Why didn’t you get the same effect when kids started with meds? Why on earth is A then B the magic sauce, instead of A + B – particularly when A doesn’t work as well as B the first time around? The strategy here is that starting with the less effective option and then adding in the more effective one gives better results than starting with the more effective one.
As is often the case for these strategic studies, the results are absolutely not commonsense and sometimes downright counter-intuitive.
The authors had a suspicion, which I share: the parents who started with low-dose medication and were randomized to behavior therapy afterward didn’t work as hard at it. Could be they weren’t as highly motivated to do something else in the face of the (presumably partial) improvement from the medication, or maybe they just didn’t want the trouble.
OK, enough of the armchair social critique, Dr. Bloggs. The study says teach the behavioral management techniques to improve problematic behaviors, then add medication for residual symptoms, and you wind up with someone who’s better off in the long run. So let’s just do that.
That’s probably exactly what we should do. I would also bet it will barely happen in the real world.
First off, behavioral interventions aren’t easy. Not every teacher, and not every parent, is going to participate fully; and that assumes you have the people available to teach the treatment and that insurance will pay for it (what with it “failing” more often than it succeeds even in the domain to which it is best adapted).
Second, speaking of insurance, there is the insurance industry’s deep love of “fail first” criteria. What would happen, say, if an insurance company required a family to “fail first” at a trial of behavior therapy (8 weeks, plus a whole lot of work) before allowing medication – or vice versa?
Then there is this: families who are doing behavior therapy explicitly instead of medication are not following this protocol. They are picking the less effective initial treatment without any plan to engage in the strategy that makes that less effective treatment the better starting choice.
So what this study says is not that behavior therapy works better than medication. What it says is that starting with behavior therapy and adding low-dose medication later produces better results – and that maybe, for some kids, you can get adequate behavioral improvement and never need the meds at all. I’d really like to see their grade books before I made that judgment, but it’s entirely possible.
It doesn’t serve the pharmaceutical industry, since it means a lower chance of being on medicines, and at lower doses. It might serve the insurers if fail-first criteria are used, but that won’t serve the kids or their families. It certainly does not fit the “medicines don’t work/are dangerous” narrative of those who are anti-medication. It also isn’t going to manufacture the behaviorally informed therapists and the teachers who are willing and able to implement such strategies.
So mainly it informs good practice, but beyond that doesn’t really serve anybody’s agenda.
Is it going to change the world?
I’ll be over here, holding my breath.
Hello! Great post. I haven’t read the paper, but you say that
“Now here’s the rub: if you start off with behavior therapy and it fails to produce adequate improvement (as it did more often than not), the kids for whom behavior therapy failed did better than the kids for whom medication failed.”
Maybe the patients who do not respond to meds are just a more enriched group (enriched for non-response) than those who don’t respond to therapy. Because there are fewer non-responders to meds. So the fact that meds non-responders go on to have poor outcomes might just reflect the fact that they’ve been selected, in a sense, for poor outcomes.
This is a very astute analysis. It’s complicated and I ended up not being sure if I agreed, but at least one sees a very good mind at work.
Solid point, which I hadn’t considered. I was sticking to the content of the NYT article because my interest was mainly in the reaction to it. But the results of the study itself were (as they must be) darn complicated. Among other things, the differences between conditions on the other measures (typically, combined symptom scales) were not earthshaking. Within the behavior therapy “failure” group, there seemed to be differences between classroom and home settings that, to my eye, clearly favored adding medication in that case. There was also a dramatic adherence difference for the behavioral intervention between the meds-first and behavior-first groups. I love studies like this, but they are confounding. BTW, I love your blog too. Some day I’ll grow up to be smart like you. ;->
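The enrichment argument can be sketched with a toy simulation. Everything here is hypothetical – a made-up latent “severity” score and response probabilities chosen only to roughly match the two-thirds vs. 45 percent augmentation rates from the article – but it shows the selection effect: the treatment with fewer non-responders leaves behind a non-responder group more concentrated with poor-prognosis kids.

```python
import random

random.seed(0)

N = 100_000
# Hypothetical latent severity in [0, 1]; higher = worse long-run prognosis.
severity = [random.random() for _ in range(N)]

def nonresponders(efficacy):
    # Assume P(respond) = efficacy * (1 - severity), i.e. response is less
    # likely in more severe kids. Returns severities of the non-responders.
    return [s for s in severity if random.random() >= efficacy * (1 - s)]

# Efficacy values tuned (hypothetically) to the reported augmentation rates:
meds_nr = nonresponders(1.1)   # ~45% non-response, like the meds-first arm
bt_nr = nonresponders(0.66)    # ~67% non-response, like the BT-first arm

mean = lambda xs: sum(xs) / len(xs)
print(f"meds non-responders: {len(meds_nr) / N:.0%} of kids, "
      f"mean severity {mean(meds_nr):.2f}")
print(f"BT non-responders:   {len(bt_nr) / N:.0%} of kids, "
      f"mean severity {mean(bt_nr):.2f}")
# The smaller meds non-responder group carries the higher average severity:
# it has been "selected" for poor prognosis more strongly than the BT group,
# so its worse downstream outcomes may partly reflect who is left, not what
# was added next.
```

Under these toy assumptions, the meds non-responders come out with noticeably higher average severity than the behavior-therapy non-responders, which is exactly the confound the comment raises.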
If you take “hyperactivity” out of the equation and look solely at students with attention problems, it’s been my experience that people have already TRIED behavior modification – not formally, but they’ve still tried it. It’s only after that fails that they start thinking maybe this kid has a more serious problem – and they get the student tested. Then it is improvement in attention on medication that convinces people they have the right diagnosis.
The problem is when school officials, or parents, target hyperactivity as the main symptom and the activity they want stopped. It’s much more likely to lead to misdiagnoses.
Another problem is that you have to match the right child to the right medication, and it’s very important to get the right dose.
After all that, medication will not be a panacea. Just because a kid has suddenly started paying attention doesn’t necessarily mean they’re paying attention to the right things. Depending on how long the student has gone undiagnosed, they have habits – in particular, they have developed goals external to what’s going on in school. A smart kid may have a very well-developed mechanism for avoiding things that make them feel, or be perceived as, “stupid.” That’s where counseling comes in. (That’s also where tutoring comes in, which is equally important – tutoring by someone who understands ADD.)
In today’s penurious society, all this is made much more difficult by the absence of counseling and tutoring for kids whose parents can’t afford to pay for it privately (even insurance companies that cover ADD limit the amount of counseling, and limit pay for counseling).
In the end, I think this study supports what should be common sense. Nice when a study does that.
I became deeply suspicious of common sense after the results of the hormone replacement therapy trials ~10 or so years ago, and seeing what’s been happening in chronic pain treatment over the last decade has done nothing to diminish that suspicion. I do like to see common sense tested.