…and back again.

The CDC guidelines on opioid prescribing just landed like a ton of bricks. The media blitz has begun.

They’re pretty straightforward. In the context of the most godawful opioid epidemic in living memory, you could probably guess the content without seeing them. Let’s try a game, shall we? Pick the item not included in the recommendations. The answer is at the end, NO PEEKING!


 

  1. Opioids are the work of Satan on earth.
  2. People who take opioids are likely to BECOME ADDICTED AND DIE.
  3. Nobody should ever take opioids for acute pain for more than 3 days or they could BECOME ADDICTED AND DIE.
  4. OPIOIDS = DEATH
  5. Ibuprofen is nice.

 

The answer is number 3; the guidelines actually recommend no more than 7 days of opioid therapy for acute pain.

OK. I’m exaggerating some.

My feelings about this, I admit, are a little paradoxical. I spend half my days withdrawing people from heroin and trying to get them into long-term treatment, and the other half dealing with folks whose pain treatment has gone completely to hell. I do all this in one of the most drug-ravaged cities in the country. If I found out what proportion of the inpatients I have seen over the years died in this overdose epidemic I probably wouldn’t get out of bed for a week. I have been up close and personal with the failures and horrors of opioids for my entire professional life.

To be clear, my discontent is not exactly with the guidelines and certainly not with their intent. Opioids were handed out as monotherapy way too often, with way too little evidence of long-term benefit, and with remarkably little appreciation of long-term risks.

Here’s my problem: I don’t think the steady drumbeat about the dangers of opioids is as much a change of course as it is part of the same cycle that produced the epidemic in the first place. If that’s true, then I suspect we’re just setting ourselves up for the next one, a generation or so down the line.

Here’s my take on the history of these epidemics. On the upswing, generational forgetting sets in, the horrors of the last epidemic fade, and maybe some new thing happens. Perhaps there’s a newest-latest-greatest drug that’s “different” – less stigmatized, believed to be less habit-forming, or what have you. Perhaps a higher-potency, cheaper version comes out. It could be any of those; but some combination of destigmatization, increased availability, a more behaviorally powerful drug, and perception of low risk gets a lot of people exposed.

At the crest, the drug spreads up the social ladder – suddenly it is not just the endemic population of impoverished, stigmatized “criminals” who are using it; it’s Aunt Effie who’s hooked on her pain pills. Alarm bells start ringing.

On the downswing, the stigma reasserts itself, new legal and regulatory controls are put in, and the drug’s perceived safety goes out the window. Any benefits are judged to be trivial relative to the risk and, in hindsight, any who believed otherwise are considered naive fools. The drug settles back into the endemic, stigmatized population and bides its time.

Thus we find ourselves at the crest of the opioid epidemic, just starting to notice that we’re in the midst of a benzodiazepine epidemic, while happily skipping along as marijuana slides from illegal to pseudo-medical to accepted recreational. As to the latter, I don’t think it’s a coincidence how many people have been showing up in EDs rip-roaring psychotic off the various synthetic cannabinoids you can buy in a 7-11 these days.

Here’s what doesn’t happen: we never figure out what the actual risks and benefits of these @#&#^ things are. In one generation we decide cannabis is the destroyer of youth and stifle the research; in the next, that very lack of information lets anecdotes fuel its return as a “treatment” for everything from glaucoma to nausea to mood disorders.

So now we’ve gone from “there is no top dose of opioids” to “over 50 mg of morphine equivalents a day is the danger zone” in the last ten years or so, with barely a shred of evidence in support of either. Pop back over to those CDC Guidelines and you’ll notice the evidence base for these recommendations is barely better than expert opinion – which is exactly the grade of “evidence” that led to the prior approach to pain, the one that got us into this mess.

So the pendulum swings back. Again.

Where is the sweet spot between deadly ignorance and “what have we done”? What is the rational approach to the use of opioids?

We still don’t know. 

As physicians we are advised to take functional outcomes into account, even though opioids have only moderate effects on chronic pain intensity and even less on function. We are advised to screen for risk of addiction, even though that entity is poorly defined for prescribed opioids and the instruments that purport to screen for it are poorly validated and barely break a sensitivity of 50%. And none of this addresses the public health consequences of putting opioids in more medicine cabinets.
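To make that sensitivity figure concrete, here is a back-of-the-envelope sketch. Everything in it except the roughly 50% sensitivity is a made-up assumption (the clinic size, the 10% prevalence of “risk,” the 80% specificity), so treat it as an illustration of the arithmetic, not a claim about any particular instrument:

# Illustrative arithmetic only – all numbers below are assumptions, not data from any study or instrument.

def screen_performance(n_patients, prevalence, sensitivity, specificity):
    """Return (at-risk patients missed, positive predictive value) for a hypothetical screen."""
    at_risk = n_patients * prevalence
    not_at_risk = n_patients - at_risk
    true_positives = at_risk * sensitivity
    false_negatives = at_risk - true_positives          # at-risk patients the screen misses
    false_positives = not_at_risk * (1 - specificity)   # flagged patients who are not at risk
    ppv = true_positives / (true_positives + false_positives)
    return false_negatives, ppv

# Hypothetical clinic of 1,000 patients on chronic opioids, 10% truly "at risk,"
# screened with 50% sensitivity and an assumed 80% specificity.
missed, ppv = screen_performance(1000, 0.10, 0.50, 0.80)
print(f"At-risk patients missed by the screen: {missed:.0f}")   # 50 of the 100
print(f"Positive predictive value: {ppv:.0%}")                  # about 22%

In other words, a coin-flip-level screen misses half the people it exists to find, and most of the patients it flags are not the ones you were worried about.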

So are we any closer to preventing the next epidemic, once we work our way through the shattered lives, and death, and the secondary heroin epidemic, and the tertiary spikes in HIV and hepatitis and crime and all that misery? Are we finally going to learn something?

Oh, I hope so. I just don’t see it happening. In my more optimistic moments, I think there are opportunities to learn. There are testable hypotheses here. Here are mine:

  1. Long-term (years) chronic opioid monotherapy for chronic pain probably doesn’t work all that well, though it might do something.
  2. The endless pseudoaddiction vs. addiction debate will only be settled by (mostly) ignoring it and agreeing on identifiable, countable aberrant behaviors as bad outcomes.
  3. The risks of chronic opioid therapy depend intimately on how risk-averse prescribers are. When prescribers think it’s high risk, only the most carefully selected patients are exposed and adverse outcomes are minimal. That implies that in the “opioids are evil” era the prevalence of bad outcomes will be low, which then primes the pump for the generational forgetting that can drive the next opioid epidemic – just as the success of vaccines opens space for the anti-vaccine movement.
  4. Attempting to treat chronic pain with a comorbid severe psychiatric condition without making treatment of the psychiatric condition co-primary (or, maybe even primary) is a losing game. So we have to fix the ridiculous divisions in our payment system.
  5. Standard addiction treatment systems are poor at managing people with opioid use problems and chronic pain. That has to be fixed or screening for addiction in this context becomes a circular mess. It’s a sick, sad thing when someone has to wind up switching over to street heroin to finally get into treatment for addiction, but that’s what I’m seeing aplenty these days.

In all fairness and in a moment of seriousness, I applaud what the CDC is trying to do. What I really, really want is for us to stop the cycle of stupidly lionizing these drugs, causing untold harm, then demonizing them in a panic. The problem isn’t whether they should be demonized or lionized.

The problem is the stupid (and the panic).

In the absence of any new, real data, I’m afraid we’ll win the battle only to set the stage for the next one. I may still be alive, and maybe even still practicing, to see it.

Here’s hoping I’m wrong. What I’m seeing right now is a bloody disaster, and it would break my heart to see it again.


If only something could be simple.

There was a recent article in the New York Times reporting on the findings of this paper comparing two treatment strategies for ADHD.

I sent the article around on social media, partly because I thought it was an interesting bit of work. The greater part of my spreading the article around, though, was curiosity about the reactions. Occupational hazard. Or requirement.

This is a complicated study. When complicated studies are reported in the media, I find a couple of things happen. First, there’s just a little distortion of emphasis in the reporting that fits the prevailing popular narrative about the condition. I’ve talked about examples of this before. (Here, and here, to start.)

Then there is the public reaction, which tends to be similar to the media distortion, but without the media’s attempt at balance. If there are camps on an issue, one (or both) sees exactly what they already believed.

So for this article the reactions I typically saw were along the lines of:

  1. “Of course these nonpharmacological treatments work better. They’re natural and this ADHD thing is mostly medicalizing normal kid stuff anyway.”
  2. “These medicines are dangerous, and we just shouldn’t use them on kids.”

The former was most interesting to me. Because, right there in the article, it says that’s not what the study found. It’s almost the opposite of what the study found. Here’s the whole paragraph:

After two months, the yearlong study took an innovative turn. If a child had not improved, he or she was randomly assigned one of two courses: a more intense version of the same treatment, or an added supplement, like adding a daily dose of medication to the behavior modification. About two-thirds of the children who began with the behavior therapy needed a booster, and about 45 percent of those who started on medication did.

Got it? After two months, kids did better on the (relatively low-dose) medication than they did with the behavior therapy: about two-thirds of the behavior-therapy group needed either more intensive treatment or the addition of the other modality, versus about 45 percent of the medication group.

In other words, if the question is which treatment gets more kids better at two months, the answer seems to be the medicine (though that’s not exactly how the study was designed).

Which is not to say that non-pharmacological treatments don’t work. If anything, I’d say cognitive interventions for adults are pretty promising these days, if you can find them. In kids, the picture is a little more iffy – some things look like they let you get away with less medication but aren’t necessarily better, full stop.

Happily, this isn’t a which-treatment-is-better study. We don’t have enough of those, so if it were I’d still be happy. But it’s better than that: it’s a which-strategy-is-better study, which is far more useful. Yet, strangely, it can go badly wrong.

There is a fundamental choice I constantly face – so often that I’ve strongly considered throwing it in as a Law of Psychiatry. Usually I can only do two of the following, and sometimes just one:

  1. Use the treatment with the best chance of success.
  2. Use the treatment with the least bad effects.
  3. Use the treatment the patient is most likely to get.

Case in point, the recent study of SSRIs for major depressive disorder. Overall, using higher doses gives the best chance of remission. However, higher doses also bring more adverse events. From the population perspective, going high seems to produce the most wellness. On the other hand, if you have a patient who has recurrent depressions, lives in the real world, and may need to stay on this stuff for years, maybe tolerability ought to be a consideration.

Such is the paradox of this study. The behavioral treatment worked at getting kids to follow rules, but so did the medication. I expect the behavioral intervention probably didn’t work as well on the other aspects of ADHD – executive dysfunction and attention impairment – though I could be wrong. Behavioral systems change behavior; that’s what they’re supposed to do.

Now here’s the rub: if you start off with behavior therapy and it fails to produce adequate improvement (as it did more often than not), the kids for whom behavior therapy failed did better than the kids for whom medication failed.

Why didn’t you get the same effect when they started with meds? Why on earth is A-then-B the magic sauce, instead of simply A + B – most particularly when A doesn’t work as well as B the first time around? The strategy here amounts to: start with the less effective option, then add in the more effective one, and you get better results than if you start with the more effective one.

As is often the case for these strategic studies, the results are not at all common sense, and sometimes downright counterintuitive.

The authors had a suspicion, which I share: the parents who started with low-dose medication and were randomized to the behavior therapy afterward didn’t work as hard at it. Could be they weren’t as highly motivated to do something else in the face of the (presumed) partial improvement from the medication, or maybe they just didn’t want the trouble.

OK, enough of the armchair social critique, Dr. Bloggs. The study says teach the behavioral management techniques to improve problematic behaviors, then add medication for residual symptoms, and you wind up with someone who’s better off in the long run. So let’s just do that.

That’s probably exactly what we should do. I would also bet it will barely happen in the real world.

First off, behavioral interventions aren’t easy. Not every teacher, and not every parent, is going to participate fully; and that assumes you have the people available to teach the treatment and that insurance will pay for it (what with it “failing” more often than it succeeds even in the domain to which it is best adapted).

Second, speaking of insurance, there is the insurance industry’s deep love of “fail first” criteria. What would happen, say, if an insurance company required the family to “fail first” at a trial of behavior therapy (8 weeks, plus a whole lot of work) before allowing medication, or vice versa?

 

Then there is this: Families who are doing behavior therapy explicitly instead of medicines are not following this protocol. They are picking the less effective initial treatment without the plan to engage in the strategy that renders that less effective initial treatment the better starting choice.

So what this study says is not that behavior therapy works better than medication. What this study says is that starting with behavior therapy and adding low-dose medication later produces better results, and that for some kids adequate behavioral improvement may come without the meds at all. I’d really like to see their grade books before I made that judgment, but it’s entirely possible.

It doesn’t serve the pharmaceutical industry, since it means a lower chance of being on medicines, and at lower doses. It might serve the insurers if fail-first criteria are used, but that won’t serve the kids or their families. It certainly does not fit the “medicines don’t work/are dangerous” narrative of those who are anti-medication. And it isn’t going to manufacture the behaviorally informed therapists and the teachers who are willing and able to implement such strategies.

So mainly it informs good practice, but beyond that doesn’t really serve anybody’s agenda.

Is it going to change the world?

I’ll be over here, holding my breath.