It’s About More Than Money

Conflicts of Interest in the Healthcare Industry Are Both Financial and Non-Financial

When your doctor prescribes a new medication for you, of course you want to believe that they have made the choice of which drug to prescribe based on science and your individual needs. When you read a scientific paper giving the results of a study evaluating a new intervention for a disease, you expect that all the benefits and risks discovered in the study have been accurately and completely described. When you go online seeking guidance from a patient advocacy organization about how to approach treatment for an illness, you assume that the information you will see on the nonprofit’s website is unbiased and reflects up-to-date science on the topic.

A study published last fall in the medical journal the BMJ tells us, however, that those beliefs, expectations, and assumptions must be tempered by an awareness of a considerable web of potential conflicts of interest. To come to that conclusion, the authors did a thorough (in technical terms, a “scoping”) review of the literature and obtained input from “an international panel of experts with broad expertise in industry ties and deep knowledge of specific parties and activities.” They looked at ties of manufacturers of drugs and medical devices with a variety of players in the healthcare industry, including researchers, practitioners, and non-profit organizations, and considered both financial and non-financial incentives. As such, this is perhaps the most comprehensive examination of potential conflicts of interest in the healthcare field that we have yet seen.

Tangled Web of Relationships

What they found is sobering. If you look at figures one, two, and three from their publication (which is available for free online), you will see a tangled web of relationships between the medical product industry and what they call the healthcare ecosystem. It is relatively easy to trace payments made from pharmaceutical companies to medical schools in the form of research grants and then to refer to the extensive literature showing that reports of the results of such research tend to emphasize benefits of the company’s products and minimize risks. It is also now straightforward to identify payments from drug and medical device companies to individual physicians because these are covered under various sunshine laws that mandate public disclosure. There is also a long literature documenting that paying doctors to give lectures or attend dinners influences what they prescribe.

When money changes hands between a medical industry company and a healthcare organization or provider, the potential for conflicts of interest arises. Some non-financial conflicts of interest can also be important (image: Shutterstock).

Perhaps we are less likely to consider the ways that pharmaceutical industry funding affects consumers directly, but we have only to remember the advertisements that the companies regularly place about their medications to see that they are influencing us as well. Studies show that these direct-to-consumer (DTC) advertisements affect how we understand the risks and benefits of drugs, sometimes in subtle and not always accurate ways.

We may not always realize that non-profit organizations that serve as patient advocates and educators also receive money from drug and device manufacturers. Much less is known about whether these payments adversely affect the advice these organizations give us and the work that they do.

Non-Financial Incentives Are Also Found

An important feature of the BMJ study is that it considers non-financial incentives as well as financial ones. A medical scientist might be included as a co-author of a paper about a study that was funded by a pharmaceutical company without receiving any actual money themselves. Even though no money changes hands in this case between scientist and drug company, the potential for the scientist to be biased about the study results is still present. Here is how the authors of the paper describe the financial and non-financial ties they looked at:

Many medical product industry ties to these parties are financial, involving money or items of financial value, as when companies negotiate prices with supply chain agents; purchase reprints from journals; make contributions to public officials for campaigns; provide consultancy, speaking, or key opinion leader payments to healthcare professionals; or financially support government agencies, healthcare organizations, and nonprofit entities through donations, grants, or fees. Other ties are non-financial, as in companies’ direct-to-consumer advertising to patients, advertising and detailing of prescribers, unpaid professional consultancy work, or the offer of data, authorship, and other professional opportunities to clinicians and researchers. All party types have financial ties to medical product companies. Only payers and distribution agents lack additional, non-financial ties.

         These potential conflicts of interest all involve the medical product industry, and they are extensive. Yet, there is more. The paper does not address potential conflicts of interest that do not involve industry. There are also instances in which a long career of advocating for a particular intervention may bias a scientist in how they talk and write about new studies. Let us say for example that an investigator has written several papers showing that a particular type of psychotherapy is good for treating headaches. That scientist’s career and reputation come to be associated with the benefits of the therapy, so when the results of a new study the scientist conducts do not replicate previously found benefits, the scientist could feel conflicted about reporting those results. That is why replication of studies by independent groups is always necessary before we conclude that a finding is solid.

There is some oversight now of direct pharmaceutical industry payments to researchers and prescribers, but much less of payments to non-profit organizations, the BMJ paper noted. And when it comes to potential non-financial conflicts of interest, including those that do not involve industry, it is hard to know what kind of oversight could be implemented.

Incentives Do Influence Behavior

It would be easy to dismiss what the BMJ study found by noting that these conflicts of interest are always expressed with the modifier “potential.” That is, a financial or non-financial relationship between industry and an individual or organization in healthcare does not automatically translate into actual behaviors. Many doctors we know insist that they can go to pharmaceutical industry-sponsored continuing medical education courses without coming away feeling more favorable about the drug the company makes or going on to prescribe it more often. Many medical journals now take great care to ensure that research studies supported by drug companies include balanced reporting of benefits and adverse side effects and even publish studies in which the company’s drug did not work.

         As we noted above, however, there is ample, recent literature attesting to the fact that incentives influence the behavior of scientists, clinicians, and consumers. For example, one recent study showed that physicians were more likely to implant cardioverter-defibrillator devices made by manufacturers who had paid them the most money. Although the proportion of physicians who receive payments from industry seems to be decreasing, a study showed that 45.0% took some form of payment in 2018. A second example comes from a study published last year in the Journal of the National Comprehensive Cancer Network. An analysis of editorials in oncology journals showed that 74% had at least one author with a disclosed conflict of interest with a pharmaceutical company, 39% had a direct conflict of interest with the company whose drug was being discussed in the editorial, and 12% of the editorials were judged as “unfairly favorable” to the product being discussed, of which a majority fell into the direct conflict of interest category.

         These are just two examples in which conflicts of interest with industry seem to directly influence behavior, one involving what doctors prescribe and the other what medical scientists write in scientific journals. There are many more of these kinds of studies, leaving little doubt that the money the drug companies spend has a real effect on how their products are used. Again, that does not necessarily mean patients are being harmed. The drug that a doctor who has taken money from a drug company prescribes may be exactly the best one for an individual patient. The study paid for by a pharmaceutical or medical device company may really show that its product is beneficial and has a tolerable adverse side effect profile. All too often we see commentators go in the opposite direction on this and try to claim that any amount of money that changes hands between companies and the healthcare system automatically invalidates the results.

         We need to be wary, however, about how medical industry money influences every aspect of the healthcare system. If you accept help with a co-pay for a drug from the company that makes it, be certain you and your doctor really think it is the best one for you. Be very skeptical about ads for drugs—that seemingly endless recitation of adverse side effects that is always accompanied by cheerful music and video representations of happy people does not really tell you much about how the drug will affect you. It is fair game to look up your doctor on one of the publicly available search engines to see if they accepted any money recently from a pharmaceutical company. Whenever a paper lists a company as a sponsor of a study, read it carefully for signs that benefits are being magnified and adverse events minimized. Before donating money to a non-profit healthcare organization or accepting its advice, inquire about donations they receive from industry sources.

         Much harder is screening for non-financial conflicts of interest, both the kinds the BMJ paper was able to uncover and the ones that are much harder to detect, like the emotional attachment an investigator has to their findings. Keeping an eye out for inflated language is one way to watch for bias. If findings are described as “breakthroughs” or “major,” remember that those assessments are usually in the eye of the person who ran the study. When a paper has the phrase “we have previously shown,” ask yourself if anyone else has also shown it.

         Without going to the extreme of reflexively rejecting anything that has a connection to a financial or non-financial incentive, it is important to recognize the web of influence that engulfs our healthcare system. Every one of us needs to be on guard for the insidious presence of bias.

Raining on the American Football Parade

Is Playing Football Bad for the Brain?

On Sunday, February 13, millions of people watched as two teams in the National Football League (NFL) played in the annual Super Bowl game. At the time of writing this commentary, we did not know which teams would be in the game or the outcome, but we could reliably predict that there would be a huge worldwide audience, lots of fanfare, and many excited fans of the American version of football eagerly watching the game.

Millions of people watched the National Football League’s championship game—the Super Bowl—in February, but concerns swirl that American football may cause a degenerative brain disease called chronic traumatic encephalopathy, or CTE (image: Shutterstock).

         We also predicted that very few people watching the Super Bowl would be giving much thought to the case of former NFL player Phillip Adams. Last April, at the age of 39, Adams shot and killed six people before shooting himself to death. At autopsy, as reported in the New York Times, Adams’ brain showed severe chronic traumatic encephalopathy, or CTE, a degenerative brain disease that is described as follows by experts at the Boston University CTE Research Center:

Chronic Traumatic Encephalopathy (CTE) is a progressive degenerative disease of the brain found in people with a history of repetitive brain trauma (often athletes), including symptomatic concussions as well as asymptomatic subconcussive hits to the head that do not cause symptoms. CTE has been known to affect boxers since the 1920’s (when it was initially termed punch drunk syndrome or dementia pugilistica).

According to the story in the New York Times by Jonathan Abrams, “More than 315 former N.F.L. players have been posthumously diagnosed with C.T.E., including 24 players who died in their 20s and 30s.” There are now a significant number of papers in scientific journals linking repetitive head trauma experienced during contact sports like American football to CTE, and reports linking CTE to a wide variety of abnormal behaviors, including violence and suicide.

Evidence is Not Definitive

It is important to note that the evidence linking playing American football to CTE and CTE to violence and suicide is not airtight. In fact, there have been questions raised about the quality of the data linking playing American football to adverse behavioral and cognitive outcomes. For example, one study found that overall, homicidal violence is rare among NFL players. A review of the literature on suicides among NFL and former NFL players found only weak evidence for a causal relationship with CTE.

As we have often noted in these commentary pages, the most robust type of study to prove a causal link between two things is the randomized controlled trial (RCT), in which a group of people is randomized to different conditions. In theory, and usually (but not always) in practice, the RCT design controls for all the differences between the randomized groups except for the randomized condition itself and therefore gives the clearest picture of whether one thing actually causes another.

One can easily see how biases could creep into the study of CTE in American football players. Perhaps only former players with signs of brain disease, like early dementia, or who exhibit violent behavior like Adams did, come to the attention of researchers and wind up undergoing postmortem brain examination. It could even be that the rate of CTE is in fact no higher among people who have engaged in contact sports and experienced repetitive head trauma than would be found among the general population. The main study behind the conclusion that CTE occurs at a high rate among people who have played football was conducted by the Boston University group led by Dr. Ann McKee and published in the Journal of the American Medical Association (JAMA) in 2017. This was a case series of examinations of 202 donated brains from deceased American football players. The research group found that overall, 87% of the brains showed pathological evidence of CTE, including 99% of those from former NFL players. The players whose brains showed evidence of CTE after death had exhibited many signs and symptoms of abnormal behavior and cognition during their lifetimes, including impulsivity, depression, suicidal ideation, and violence.

Nevertheless, the authors of this landmark study acknowledged in their paper several limitations to their work. They wrote:

“… a major limitation is ascertainment bias associated with participation in this brain donation program. Although the criteria for participation were based on exposure to repetitive head trauma rather than on clinical signs of brain trauma, public awareness of a possible link between repetitive head trauma and CTE may have motivated players and their families with symptoms and signs of brain injury to participate in this research. Therefore, caution must be used in interpreting the high frequency of CTE in this sample, and estimates of prevalence cannot be concluded or implied from this sample.”

The second major limitation they noted is the lack of a comparison group of people who were similar in every way to the American football players except that they never played any contact sports.

Other Study Designs Are Needed

We cannot, however, conduct an experiment in which people are prospectively randomized to play in the NFL or not and then, when they die, have all their brains autopsied. While this would answer the question definitively, it is obviously not something that could ever be done, and that allows people to cast doubt on the claims that playing tackle football is harmful to a person’s brain. Does this mean that the answer to the question of whether American football and other contact sports associated with head injuries cause CTE and consequent behavioral disturbances will always remain elusive?

Not necessarily. Remember that there are causal links between exposures and adverse health outcomes that we know about that were not proven by RCTs. The best example, of course, is cigarette smoking. No one ever randomized people to either smoke or not smoke cigarettes, waited decades, and then determined that the smokers had higher rates of lung cancer than non-smokers. So how do we know with such certainty that smoking causes lung cancer?

We know this from a variety of studies, including animal studies showing that smoking causes changes in lung cell biology and large human population studies in which it is clear that smokers have higher rates of lung cancer (and a lot of other terrible diseases) than non-smokers.

So, it is possible to determine causality without doing RCTs, but it usually takes large and expensive studies that are very carefully performed. Two recent studies once again raise alarms that playing American football may harm the brain.

The first is a study published in the journal Neurology in which 75 people with a history of repetitive head injuries, including 67 people who played an average of 12 years of American football, underwent magnetic resonance imaging (MRI) scans at an average age of 62 years. Their brains were then examined when they died, at an average age of 67 years. The investigators found a high rate of what are known as white matter hyperintensities, an abnormality in the long tracts that connect brain regions, in the MRI scans of the study participants, and the greater the number of these abnormalities, the longer the participant had played football. More than three quarters of the participants had CTE at autopsy and the burden of white matter hyperintensities on MRI scanning predicted the amount of CTE pathological markers in their brains that were detected postmortem. Once again, there are obvious limitations to this study, which may have suffered from ascertainment bias and lacked a comparison group. Nevertheless, a study like this adds a degree of biological plausibility to the idea that playing football can cause CTE because white matter abnormalities are exactly what one would predict would be found in brains of people who had experienced multiple head traumas.

Abnormalities in the white matter of the brain, called white matter hyperintensities, were detected at a high rate in former football players and were associated with markers of CTE in their brains (image: Shutterstock).

A second recent study of interest was published last December in one of the JAMA journals. It showed that among a group of 19,423 men who had played in the NFL, the rate of the fatal neurological disease amyotrophic lateral sclerosis (ALS, also known as Lou Gehrig’s disease) was four times higher than the general population rate. Furthermore, those players who had ALS also played significantly longer in the league than players who did not get ALS. The authors of the study noted that there may be links between ALS and CTE and concluded “Ultimately, this study provides additional evidence suggesting that NFL athletes are at increased risk of ALS and suggests that this risk may increase with more years of NFL exposure.”

Our society does permit adults to engage in potentially dangerous and even fatal behaviors. Thus, with some restrictions, adults may choose to smoke cigarettes, drink alcohol to excess, or eat unrestrained amounts of processed foods. One could therefore argue that if an athlete understands the risks of playing in the NFL, that is his choice. But we do not permit youths to smoke or drink, and our concern is with the children who engage in American football or other sports that involve repeated head trauma. One recent study showed evidence of brain damage in college football players who had played for only one year, even though they had not suffered concussions. This raises the question of what effects playing football might have on high school-aged players or even younger children who play in organized tackle football leagues.

         While we cannot definitively state that playing football causes CTE at this point, there is an abundance of evidence to cause significant concern that playing football may harm the brain. Just as we do not permit children to smoke cigarettes, drink alcohol, or drive cars, it is time we ask ourselves whether it is okay to let them play tackle football. And if football becomes prohibited until the age of consent, where would that leave the NFL and its ability to recruit new players? We urgently need studies now to investigate potential harms of football to developing brains. Much as many of us love watching football games, it is not worth risking brain damage to children.

A Post-Mortem on Glasgow and COP26

Is There Hope to Mitigate the Climate Crisis?

Last November, representatives from more than 100 countries gathered in Glasgow, Scotland for a much-anticipated United Nations-sponsored climate conference. The goal was to try to forge international agreements to keep the earth’s temperature from rising more than 1.5° Celsius above pre-industrial levels. The stakes were and remain high: climate scientists consider 1.5°C to be a tipping point; exceeding it will bring us more severe flooding, wildfires, heatwaves, species extinctions, zoonotic diseases, and human displacement.

An article in the November 19 edition of Science by Cathleen O’Grady laid out the Glasgow conference’s successes and failures. To be sure, many things were accomplished, giving us some reason for hope that countries are finally taking the climate crisis seriously and are prepared to take meaningful action. More than 100 countries present at COP26 pledged to adopt new curbs on greenhouse gas emissions and the conference concluded with a call for “phasing down” burning coal and other fossil fuels.

         Other notable accomplishments included:

·  Hundreds of companies and investors made voluntary pledges to phase out gasoline-powered cars, decarbonize air travel, protect forests, and ensure more sustainable investing.

·  Countries agreed to halt and reverse deforestation.

·  An agreement to cut methane emissions, which are more potent at warming the planet than carbon dioxide emissions, by 30% by 2030 gained international media attention.

·  Countries decided to review their goals annually instead of every five years, with next year’s conference slated to take place in Egypt.

Representatives from nearly 200 countries gathered last November in Glasgow, Scotland for the U.N.-sponsored COP26 conference on climate change. While there were hopeful signs, many were disappointed by the conference’s ultimate outputs (image: Shutterstock).

A great deal of attention was placed at the conference on how high-income countries would deal with their responsibilities to the low-income countries that have contributed the least to global warming but suffered its devastating effects the most. There were some new commitments made for funds to flow from the wealthy nations to help poor countries cut greenhouse gas emissions and build the infrastructure that they will need to adapt to the inevitable ravages that climate change will bring to them in the future. It was also decided to begin a discussion to create a fund to compensate low-income countries for the damage already done to them by the relentless burning of fossil fuels by high-income countries.

Another development that garnered a great deal of attention was a joint announcement by the U.S. and China to increase their cooperation on combating climate change. The two countries are at odds on many issues and thus their decision to cooperate on perhaps the most pressing issue of all—saving human civilization from climate change—struck many as highly significant.

Many Shortcomings, Disappointments

Yet despite what seem to be some signs of progress, the presence of hundreds of thousands of protestors in Glasgow during the conference signaled the many shortcomings and disappointments that attended COP26. “By the end of the meeting,” O’Grady wrote, “…it was clear that the international effort to limit global warming to 1.5°C above preindustrial levels…is on life support.”

None of the commitments or agreements made during the Glasgow conference will actually keep the world under the 1.5°C tipping point; in fact, we are now on a course to exceed that limit, with expectations that if we do not see dramatic decreases in greenhouse gas emissions soon, temperatures will rise by more than 2.0°C by the end of this century. That will bring us all the devastations noted above, with some small island nations disappearing totally as sea levels rise and parts of the world becoming virtually uninhabitable. At times it seemed as if some world leaders at the conference were more concerned with protecting the fossil fuel industry, which had ample representation of its own in Glasgow, than about the droughts and food shortages that the climate crisis is already bringing to many parts of the world. A major source of controversy is the subsidies that some wealthy nations provide to fossil fuel companies. The conference concluded with a call to phase them “down” instead of the hoped-for language of “phase out.”

On the critical subject of helping poor countries cope with the climate crisis, much of the conference’s concluding language is vague. “Developing nations did not get one big thing they wanted in Glasgow: a new ‘loss and damage’ fund,” O’Grady explained. “Fund advocates argued that developed nations, having produced the vast majority of historic emissions, should help developing countries cope with the costs of climate-related extreme events…In the end, the pact promised only a ‘dialogue’ on loss and damage.”

From a political point of view, perhaps our expectations for what might be accomplished at COP26 were too high all along. It is clearly going to be incredibly complicated to get nearly 200 countries to agree on plans of action that will cost billions of dollars and disrupt business in so many ways. From such a vantage point, what got done in Glasgow was impressive in the tone it set: country leaders are now seemingly united in recognizing that the climate crisis is real, ongoing, and threatening. They seem resigned now to taking it seriously and to trying to find meaningful solutions. There was a sense of urgency palpable to many in Glasgow that had not been felt at previous meetings.

More Climate Disasters Looming

At the same time, however, the results of the conference leave us feeling that we remain on the same collision course with disaster as before the Glasgow conference began. For instance, in December we learned that giant cracks in the Thwaites Glacier, one of Antarctica’s largest, are bringing it closer to collapse than experts had previously predicted. The melting of this massive glacier, which has been called an “icon of climate change,” will further contribute to the already perilous rise in sea levels. It “already loses around 50 billion tons of ice each year and causes 4% of global sea-level rise,” according to an article in the journal Nature. Now, new fractures in the Thwaites Glacier mean things will get even worse. It seems all around us there is one piece of evidence after another that the climate crisis is imperiling civilization, and so far the countries most responsible for what is happening and with the most power to do something about it have been unable or unwilling to take the decisive action needed.

Melting glaciers, like Antarctica’s Thwaites Glacier, are contributing to sea level rise at a pace even faster than experts originally warned. The effects will be devastating, especially for small island nations (image: Shutterstock).

This makes last month’s news that the Build Back Better legislation, which passed in the U.S. House of Representatives, is stalled in the Senate and may never see the light of day especially troubling. About a fourth of the money allocated by the bill ($500 billion) would be earmarked for climate investments. Now, at least at the time of this writing, the bill is imperiled, and some doubt its ultimate passage. If even one major high-income country cannot rouse itself to make the necessary investment in combating climate change, how can we expect 200 countries to agree to anything strong enough to have an impact on things like the melting of the world’s widest glacier?

This is not the time for despair but rather for bold political action. While it is important that individuals take actions in their own lives that reduce their carbon footprints, like switching to electric cars and eating less meat, the climate crisis can only be seriously approached by actions at national and international levels. The most important thing, then, that individuals can do is to support efforts, campaigns, and organizations that promote national climate legislation and international agreements aimed at significantly curbing greenhouse gas emissions. It is imperative that we all become involved in the political process if we are going to address the climate crisis.

Does Facebook Cause Depression?

Facebook and other social media platforms are under fire these days, accused of causing a wide variety of harms to individuals and societies. Among the charges is that spending time on social media can cause depression. A study published in November in the journal JAMA Network Open attempted to look at this question more closely and has garnered widespread attention from behavioral health experts and the media.

The paper, whose lead author is Roy Perlis, a psychiatrist at the Massachusetts General Hospital and Harvard University in Boston, is titled “Association between social media use and self-reported symptoms of depression in US adults.” The authors note that a number of studies have hinted at an association between social media and depression, but most of them have been cross-sectional or involved only a small number of participants, making it impossible to draw any cause-and-effect conclusion.

In the Perlis et al study, more than 5,000 people with a mean age of 55.8 years who had very low scores at baseline on the nine-item Patient Health Questionnaire (PHQ-9), indicating that they were not depressed, were surveyed approximately monthly between May 2020 and May 2021, with measures of social media use and a repeat PHQ-9 each month. The investigators found that about nine percent of the participants had a worsening of five points or more in their PHQ-9 scores, indicating that they had become significantly depressed over time. Those with worsening depression also reported the most use of three social media platforms: Snapchat, Facebook, and TikTok. For Facebook, participants with worsening depression scores had about 40% greater use of the platform than participants without worsening scores. Controlling for measures of social contact and social support did not alter these findings, suggesting that social media use substituting for real-life social contact was not the explanation.

Does It Show Causation?

         There are many strengths to this study, including the very large sample size, the use of a well-validated instrument to measure depression, and the longitudinal design. These were all people who were not depressed at study initiation, so the investigators were able to trace the relationship between social media use and the evolution of depression. The question becomes, then, can we say that this study suggests that looking at social media platforms like Facebook can actually cause depression?

         Some headlines on internet sites seemed to hint that this might be the case. “Social Media Use Tied to Self-Reported Depressive Symptoms,” read one. “Social media use linked to depression in adults,” read another. While neither of these used the word “cause” in its headline, the impression one gets from words like “tied to” and “linked to” might easily be misinterpreted to mean that a causal relationship had been demonstrated.

In fact, Dr. Perlis and co-authors note in the paper that “Notably, social media use may simply be a marker of underlying vulnerability to depression.” That is, even though they did not score in the depressed range on the PHQ-9 at baseline, it is still possible that those people who went on to develop worsening depression scores nevertheless had a vulnerability or predisposition to depression that both drives the illness and drives one to spend more time on Facebook and other social media platforms. As the authors themselves point out, the study did not control for innumerable confounding factors that could be the cause of this association between depression and social media use (e.g. previous history of depression; current life stress; reasons for going to social media platforms) and therefore causality cannot be established by this study.

How Would We Establish Causality?

         Yet the findings are troubling because they leave open the possibility that spending time on social media is in fact a causative factor for depression. Is it possible that someone who would not otherwise become depressed does so because they spend an excessive amount of time on Facebook, Snapchat, or TikTok? What kind of study could really answer the causality question here?

         Typically, our default study design to answer causality questions is the randomized trial, in which a group of people is randomized to different conditions. In studies to test whether a new medication is effective, for example, we would randomize a group of people who have a particular health condition to either receive the experimental drug or a placebo. Because nothing else is different between the two groups except whether they have received drug or placebo, the assumption is reasonably made that we have controlled for other potentially confounding factors and isolated the true effects of the drug (things aren’t really that simple, but that’s for another commentary). Could we do that with social media use? The basic design would have to be something like starting with a large group of people without depression at baseline and randomizing them to different levels of social media use. We would then follow them over time, assuming that by the randomization process we have controlled for all possible confounding factors except for how much social media is viewed. Then, we would see if people in high social media utilization groups were more likely to develop clinical depression than people in low utilization groups.
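The logic of that hypothetical design can be made concrete with a toy simulation. Every number here is an assumption for illustration only: the baseline depression risk, and especially the extra risk in the high-use arm, are invented, since no such causal effect has been established.

```python
import random

random.seed(0)  # fixed seed so the sketch is reproducible

def run_trial(n=10_000, base_risk=0.05, added_risk=0.03):
    """Toy randomized trial: assign each participant to low or high
    social media use, then count who develops depression under an
    ASSUMED (invented) extra risk in the high-use arm."""
    assigned = {"low": 0, "high": 0}
    depressed = {"low": 0, "high": 0}
    for _ in range(n):
        arm = random.choice(["low", "high"])   # randomization step
        assigned[arm] += 1
        risk = base_risk + (added_risk if arm == "high" else 0.0)
        if random.random() < risk:
            depressed[arm] += 1
    return {arm: depressed[arm] / assigned[arm] for arm in assigned}

rates = run_trial()
print(rates)  # depression rates expected to land near 0.05 (low) and 0.08 (high)
```

Because randomization balances the arms on everything except the assigned exposure, a rate difference in a real trial of this shape could be read causally; the practical and ethical obstacles discussed next are why no such trial exists.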

         It is easy to see why doing such a study would be challenging. First of all, it would be expensive, requiring that either the social media companies pay for it, which would raise issues of conflict of interest and bias, or that external funds from a foundation or federal funder be obtained. It is unclear how much appetite there would be for doing this.

         Next, it would be hard to recruit such a group of people because participants would have to agree to accept the level of social media use to which they were assigned. A person who spent little time on Facebook, for instance, might be randomized to a group that must view it at a high level while someone who is used to viewing a lot of social media might get randomized to a group in which they would be asked to stop altogether for an extended period of time. It would also be hard to enforce that people really adhered to the amount of time on social media to which they were assigned.

         Finally, if such a study did show that social media use caused depression, what would we do with that information? Note that in the Perlis et al study, 90% of the participants did not develop worsening depression scores during the year of observation. Whom would we tell, then, to refrain from too much social media use? And what demands could we make on social media companies to reduce the risk that their platforms harm mental health?

         One solution recently announced by the social media platform Instagram is to offer “take a break” reminders to teenage users. The idea here is that limiting the time people spend on social media could help prevent adverse mental health outcomes. Given that it is voluntary whether a person agrees to get these reminders or to follow their advice and actually get off the site, it is unclear how effective this would be.

         We probably cannot depend on randomized trials of social media use being completed any time soon to decide if it does in fact cause psychiatric illness. Remember, however, that we did not discover that cigarette smoking causes lung cancer by doing randomized trials; instead, we depended on large population studies to make that case. It may be that we will have to rely on studies like the Perlis et al one, large cross-sectional population studies, and other novel forms of study design to tease out whether there is a causal relationship here. Right now, it might be plausible to advise anyone with a history of depression or in situations that increase risk for depression to be judicious about how much time they spend on social media.

Don’t Call It Flip Flopping

Guidelines Should Change When the Data Say So

For decades people with even low risk for having a heart attack or stroke have been taking low-dose aspirin (usually 81 mg) daily to prevent a first heart attack or stroke. Perhaps because it’s called “baby” aspirin, it sounds so benign. What could be risky about something with the word “baby” in it?

         In an excellent article in the New York Times last October health columnist Tara Parker-Pope wrote “…it came as a shock to many this month when an influential expert panel, the U.S. Preventive Services Task Force, seemed to reverse decades of medical practice, announcing that daily low-dose aspirin should no longer be automatically recommended in middle age to prevent heart attack.” To some, Parker-Pope noted, that “shock” translated into the belief that medical experts had “flip-flopped” on the issue. Some people, she implied in the article, will lose faith in medical recommendations because of what seem like abrupt changes in advice.

         But Parker-Pope also does an excellent job of tracing the evolution of recommendations to take (or not to take) aspirin to prevent cardiovascular (e.g., heart attacks) and cerebrovascular (e.g., strokes) events and convinces us that this is not random flip-flopping but rather the orderly process of science.

Why Would Aspirin Work?

To start, it is logical to ask whether aspirin would be expected to have any effect in reducing a person’s risk of having a heart attack or stroke. We know that many heart attacks and strokes are caused by blood clots forming in arteries that carry blood to the heart and to the brain. Some of these blood clots are thrombotic, arising in those arteries themselves, and others are embolic, arising somewhere else, like a deep leg vein or the surface of a heart valve, and travelling through the circulation until they get stuck in a small artery. Either way, those blood clots reduce the ability of blood to flow through the artery where they lodge, decreasing the amount of oxygen that can be delivered to heart or brain tissue and thus causing cells to die. Because strokes and heart attacks are high on the list of the most common causes of death, it makes sense to try to prevent blood clots from forming.

Aspirin interferes with the ability of platelets to form clumps and therefore reduces blood clotting. It might help prevent disorders in which blood clots are a cause but can also increase the risk for bleeding (image: Shutterstock).

         The physiology of blood clotting is complex and involves an array of elements, one of which is a small cell fragment that circulates in the bloodstream called the platelet. One of the many actions of aspirin is to reduce platelet activity, thereby making the formation of abnormal blood clots less likely. So aspirin has been seen as one way of reducing the potential for abnormal clotting to occur.

At First It Seemed To Work

         The original study that seemed to suggest a benefit for aspirin in preventing first heart attacks—in men—was, as Parker-Pope points out, published in the New England Journal of Medicine in 1988. The study involved more than 22,000 male physicians who took either one 325 mg tablet of buffered aspirin every other day or a placebo. Compared to placebo, participants in the aspirin group had a nearly 50% reduction in both fatal and non-fatal heart attacks. The effect was so large that the trial was stopped early by its independent data monitoring board and the results published.

         There was also a slight increase in strokes in the aspirin group compared to the placebo group in that study. Experts attributed this to what are called hemorrhagic strokes—strokes that occur when a blood vessel bursts and blood leaks into surrounding brain tissue. Because aspirin decreases the blood’s ability to clot by interfering with platelets’ ability to form clumps, these strokes were probably an adverse side effect of taking aspirin. At the time, it seemed clear that the benefits of aspirin outweighed the risks, and many people began taking aspirin in various preparations and at different doses for primary prevention—that is, prevention of first heart attacks. Aspirin also came to be recommended by some guidelines for secondary prevention of heart attacks and strokes, that is, to prevent them in people who had already had serious cardiovascular or cerebrovascular disease or previous heart attacks and strokes.

         In its 2016 guidelines, the U.S. Preventive Services Task Force (USPSTF), an independent body of experts that reviews literature and data to make recommendations about how to prevent diseases from occurring, made the following statement about people between the ages of 50 and 59:

The USPSTF recommends initiating low-dose aspirin use for the primary prevention of cardiovascular disease (CVD) and colorectal cancer (CRC) in adults aged 50 to 59 years who have a 10% or greater 10-year CVD risk, are not at increased risk for bleeding, have a life expectancy of at least 10 years, and are willing to take low-dose aspirin daily for at least 10 years.

But in its 2021 guidelines the USPSTF says the following about people between the ages of 40 and 59:

The decision to initiate low-dose aspirin use for the primary prevention of CVD in adults ages 40 to 59 years who have a 10% or greater 10-year CVD risk should be an individual one. Evidence indicates that the net benefit of aspirin use in this group is small. Persons who are not at increased risk for bleeding and are willing to take low-dose aspirin daily are more likely to benefit.

Now, that is a big change: from a blanket recommendation that people with a relatively low risk of heart attack start taking low-dose aspirin daily to making the decision “an individual one” and saying the net benefit “is small.” Parker-Pope nicely points to three studies published in 2018 that either failed to show any benefit of aspirin in preventing heart attacks or found that the benefit didn’t outweigh the risk of bleeding as an adverse side effect. Those new studies influenced the USPSTF to propose altering its guidelines for aspirin use, which the task force is still finalizing.

What Accounts for the Difference in Findings?

What changed between 1988, when a study seemed clearly to indicate a benefit for aspirin, and 2018, when equally well-designed studies did not? While it is possible that the 1988 results were a fluke, that seems unlikely given the magnitude of the findings. But remember that the 1988 study included only male physicians, who are hardly representative of the entire population. When women and people who aren’t doctors are included in more recent studies, things might be expected to change.

Parker-Pope makes an interesting speculation about what else might have changed since 1988—our general health. She writes “Fewer people smoke, and doctors have better treatments to control diabetes, high blood pressure and cholesterol, issues that all affect risk for heart attack and stroke. Aspirin still works to protect the heart, but doctors say the benefits aren’t as pronounced now that other more effective treatments have emerged. As a result, the risks of aspirin, including gastrointestinal bleeding and brain hemorrhage, are of greater concern, though they remain low.”

It is plausible, as Parker-Pope suggests, that decades ago the benefit of daily aspirin in preventing fatal heart attacks was larger than its risk of causing serious bleeding, like hemorrhagic strokes. But as the population’s risk of having a fatal heart attack decreased because of things like less cigarette smoking and better control of high blood pressure, the added benefit of daily aspirin was no longer large enough to outweigh the risk of bleeding. This could account for the difference in findings between 1988 and 2018.
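A bit of toy arithmetic shows how this trade-off can flip. All of the numbers below are invented for illustration (an assumed 25% relative reduction in heart attacks from aspirin and an assumed 0.4% risk of serious bleeding); they are not figures from the studies discussed.

```python
def net_benefit(baseline_risk, relative_reduction, bleed_risk):
    """Heart attacks prevented minus serious bleeds caused, per person."""
    return baseline_risk * relative_reduction - bleed_risk

bleed_risk = 0.004          # assumed (invented) bleeding risk from daily aspirin
relative_reduction = 0.25   # assumed (invented) relative reduction in heart attacks

# As the baseline heart-attack risk falls, the same relative benefit
# shrinks in absolute terms while the bleeding risk stays fixed.
for baseline in (0.04, 0.02, 0.01):
    print(baseline, round(net_benefit(baseline, relative_reduction, bleed_risk), 4))
# roughly +0.006, +0.001, and -0.0015 net benefit per person, respectively
```

Under these made-up numbers, aspirin helps a higher-risk population but harms a lower-risk one, which is the shape of the argument Parker-Pope is making.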

Daily aspirin intake can increase the risk for abnormal bleeding as occurs in hemorrhagic stroke, when a blood vessel in the brain bursts and blood leaks out into surrounding brain tissue (image: Shutterstock).

There’s a lot more to these recommendations. We’ve only touched here on primary prevention for people who don’t have much risk for heart disease. Other recommendations apply to primary prevention in people who do have higher risk and to secondary prevention. One expert stated emphatically that “The easiest patient group to address is adults of any age who have a history of heart attack, stroke, or revascularization [e.g., having had a coronary artery stent placed] and are taking aspirin for secondary prevention. They should continue taking aspirin; the new recommendations don’t apply to them.” There are now more drugs that inhibit clotting than we had in 1988, so aspirin may not be the right choice for many or even most patients who should be on some form of anticoagulation therapy to prevent heart attacks and stroke.

The important point here is to notice that as new science gives us new data, guidelines are going to change. That is not flip-flopping, it’s science. It is of course critical that physicians and other healthcare providers are conversant with the data and the latest iterations of treatment guidelines. People should not despair or be frustrated when recommendations change if the changes are made on the basis of new and emerging science. The complete story about aspirin use to prevent cardiovascular and cerebrovascular events is probably not written yet; there will certainly be more studies reporting more data, some of which may complicate the picture. We need to be sure that the science is allowed to flow unhindered and that the information about the results of that science is carefully interpreted by experts and made widely available to the public.

Several things can be done to help people accept changes in scientific consensus. We can continue to hammer at journalists and their editors to be careful how they present new findings, refraining from calling everything a “breakthrough” and acknowledging in their stories that almost every new finding raises important questions that will be researched further. Then, journalists and editors could report more often on ongoing research that hasn’t yet reached the level of changing guidelines for care so that people can see how the process of scientific advances evolves over time. 

More fundamentally, we need to educate people, from elementary school onward, about how science really works. Once, science was taught to children and adolescents as a fixed body of facts, creating the impression that everything was set in stone and making it hard for anyone to accept change. Even with the evolution of more hands-on learning, in which students are encouraged to do “experiments” and to solve problems, there has been an emphasis on “getting the right answer.” “Experiments” in science class are often more like recipes, designed to lead to a single, correct final result. What we need to show students is that experiments rarely yield a set of totally “correct” or expected results; many experiments either produce unexpected and difficult-to-interpret results or fail altogether to produce a significant finding. From there, scientists keep designing more experiments, trying to work out what went wrong the first time(s), until they get something interesting. We need, therefore, to help people cope with uncertainty and change in science.

Regulating Social Media

How Much Evidence Do We Need?

It seems that not a day goes by without our reading some harsh indictment of Facebook, its subsidiary Instagram, Twitter, YouTube, and other social media platforms. They are accused of warping young minds, distorting election results, spreading misinformation about diseases, and generally imperiling modern society.

         An article by columnist Ishaan Tharoor in the Washington Post last October bore the headline “The indisputable harm caused by Facebook.” Tharoor wrote that “Facebook and the other apps it owns…are now increasingly seen through the prism of the harm they appear to cause. They have become major platforms for misinformation, polarization and hate speech. At the same time, [Mark] Zuckerberg [the Facebook founder and owner] and his colleagues rake in billions of dollars each quarter in profits.”

         But what is the evidence that we are actually harmed by Facebook, Instagram, WhatsApp, and other social media platforms? Last September the Wall Street Journal accused Instagram of having done research demonstrating it is “toxic” to teenage girls. This was part of a larger series of articles, based in part on testimony by a whistleblower, alleging that Instagram and its owner Facebook have hidden internal documents showing how much harm they do.

Social media stands accused of harming society in a variety of ways, but evidence may be lacking to determine if it is the direct cause of all these alleged harms (source: Shutterstock).

         The charge that Instagram harms young women is at first glance plausible. Numerous sources have shown that rates of depression and other mental health problems are rising among young people in the U.S. and elsewhere. For example, a 2019 Pew Research report showed that 17% of U.S. teenage girls had experienced at least one episode of major depression in 2017, compared to 7% in 2007. Over roughly the same period, the number of Instagram users rose at a rapid pace. So, is logging on to Instagram and scrolling for hours through its feed of photos and videos causing teenagers to become depressed?

The Data Are Lacking

         In October, Laurence Steinberg, a professor of psychology at Temple University, wrote about this in the New York Times and his findings are startling: “Amid the pillorying of Facebook that has dominated the latest news cycle there is an inconvenient fact that critics have overlooked: No research – by Facebook or anyone else – has demonstrated that exposure to Instagram, a Facebook app, harms teenage girls’ psychological well-being.”

         What is the research that the Wall Street Journal says showed that Instagram harms teenagers, especially girls? “Facebook conducted surveys and focus groups in which people were asked to report how they thought they had been affected by using the Instagram app,” Steinberg explained. “Three in ten adolescent girls reported that Instagram made them feel worse about themselves.” Now, surveys and focus groups are important tools for research attempting to understand how things affect us, but as Steinberg points out, they can rarely establish cause-and-effect relationships. That is, the research Facebook apparently did on its own that was leaked to the Wall Street Journal is merely suggestive and does not establish that Instagram causes depression or any other mental health disorder in anyone, including teenage girls. First, note that the data cited suggest that seven in ten adolescent girls did not indicate that looking at Instagram altered their self-esteem. Second, it could be that depression increases the chances that someone will spend time looking at Instagram and not the other way around. Perhaps lonely, depressed people seek answers to their problems on social media. Third, as Steinberg points out, a myriad of other factors could mediate any relationship between Instagram and mental health.

         Steinberg noted that there is some research looking at relationships between social media use and mental health. “Of the better studies that have found a negative correlation between social media use and adolescent mental health,” he wrote, “most have found extremely small effects—so small as to be trivial and dwarfed by other contributors to adolescent mental health.”

         We are not attempting here to defend Facebook, Instagram, or any other social media platform. [Full disclosure: Critica president Jack Gorman once owned about $2000 in Facebook stock, which he sold several months ago because of conflict of interest concerns]. We at Critica have had our own negative experiences with Facebook. One of our projects is to engage people who spread misinformation about COVID-19 vaccines on online platforms, including Facebook and Twitter. We have seen our posts, which attempt to provide correct information about health and science, deleted by Facebook’s algorithms, while misinformation about vaccines, sometimes containing what we consider to be akin to hate speech, is left standing. Facebook officials have acknowledged that what we post should not be deleted and have tried to help us with this problem, but it is a daunting one: with millions of posts circulating on its platforms, it is not possible for artificial intelligence to pick out with 100% accuracy which posts are worthy to let stand and which are misinformation that should be deleted.

         We recognize that because some of our work depends on our posting on Facebook, it could be argued that we are incentivized to be reluctant to criticize it. We are still fully cognizant of the problems with misinformation, hate speech, and political polarization that circulate throughout social media and agree that these pose a threat to society in multiple ways. Nevertheless, the example of the charge that Instagram harms teenage girls’ mental health should make us pause and ask the most basic question: do we know that social media is in fact causing harm, or is it merely reflecting adverse circumstances rather than causing them?

         This is not merely an academic question, because if social media is indeed causing the kind of harm of which it is accused, then it is perfectly appropriate for us to request government regulation as a remedy. Let us say, for instance, that it could be demonstrated that Instagram causes depression in 30% of teenage girls who use it. We would be remiss if we didn’t demand some intervention to prevent such a large number of young women from developing such a serious disorder. We might ask whether we would have such severe political polarization and hate speech leveled against society’s marginalized people if it weren’t for Facebook. If we didn’t have social media platforms to spread misinformation about vaccines, would more people be vaccinated against COVID-19 today, and would fewer people have died? Would we have fairer elections throughout the world if there were no social media platforms spreading disinformation? Clearly, if the answer to any of these questions is “yes,” then social media urgently needs very serious regulation.

Meaningful Research is Very Difficult to Accomplish

         It will not be easy to acquire such data, however. Sticking with the claim that Instagram causes depression in young women, how would we go about documenting a true causal relationship? Steinberg calls in his New York Times op-ed piece for randomized controlled trials, but it is not easy to imagine what those would look like. We would need to take a group of young women who were free of depression as the research participants and randomize them to have some exposure to Instagram versus no exposure or exposure to something else. Then we would see whether the group exposed to Instagram developed worsened self-image or even depression. You can see immediately what the problem with this design is, however, because implementing the exposure and deciding how much exposure is not at all straightforward. Some participants will already have considerable experience with Instagram and others less or none, so how to control for that in the experiment is tricky. Trickier still is figuring out how much exposure in the experiment is both practical and meaningful. Simply having the exposure group look at one Instagram post or stay on Instagram for a specified time period may be insufficient to show an effect; people who use Instagram can do so on a regular basis and it may be that prolonged and/or repeated exposure is needed to induce a mental health problem. Finally, even if we were able to come up with a viable randomized study design, there are obviously important ethical issues in attempting to actually see if something can make people feel worse about themselves or develop depression.

         So designing a meaningful experiment to show that a social media platform can cause a mental illness is not going to be easy. How much more difficult will it be, then, to show that social media actually causes adverse behaviors like not getting vaccinated or harms minority groups or causes elections to be unfair?

         Social scientists are working on designing experiments that will help us answer these questions in ways that are practical but do not pose ethical issues. When we contemplate regulating social media, we must ask a number of difficult questions. What exactly would we be regulating? Banning teenagers from using Instagram? Banning certain types of Instagram posts? Banning Instagram entirely? There is even the possibility that under some circumstances, looking at Instagram could be helpful for mental health. Perhaps depressed teenagers go to Instagram seeking answers for their problems and maybe some of them actually get help in making connections there to other people or to groups that specialize in mental health issues.

         One place to start would be to implement regulations mandating that social media companies make their platforms available to scientists working on these issues. Last summer Facebook banned a group of New York University investigators from using its platform to conduct research on political ads and disinformation. We see that as emblematic of a general hostility to research and transparency on the part of social media companies like Facebook, something we believe can only be addressed by a legislative intervention. Ryan Calo, professor of law at the University of Washington, has called for just such action. “Congress holds the power to stay Meta’s [the new name for the Facebook company] hand when it comes to threatening legal action or blocking accountability research…It could mandate transparency.”

         Accusations about harms caused by Facebook and other social media platforms are being made at present but in some cases we still lack sufficient data to know with a reasonable degree of certainty that social media is indeed causing those harms. But the accusations are serious and therefore we cannot be reticent about demanding the evidence to decide if they are warranted. If social media companies like Facebook refuse to allow researchers to gather the information society needs to judge their safety, then we believe it is appropriate for governments to take action to make them. It may be that the end result of good internet research is that social media is not the purveyor of the harm it is accused of; we need to find that out.

Do Americans Trust their Doctors?

The crisis of public trust in health and medicine may be more complex than you think.

It stands to reason that an important component of a functioning society is the population’s trust in government and public institutions. This trust is especially important in a democratic society, in which citizens need to have faith in and feel supported by their public representatives in order to feel that there is any point in participating in civic responsibilities such as voting. Without trust, there can be no reliable and open communication and essentially no way for the government to inform and guide people through sometimes difficult situations and decisions.

Even before the COVID-19 pandemic, there was evidence that Americans’ trust in government was declining. A Pew Center poll from 2019 found that in addition to Americans’ declining trust in government and each other, 64% of citizens believed that the declining trust in government made solving the country’s problems harder, and 70% said that declining trust in fellow citizens made this kind of problem-solving more difficult. There is reason to believe that trust in government, both federal and local, has further decreased during the pandemic. 

Is this distrust in government a symptom of a larger societal distrust in all authority figures? One arena where this trust issue has come up frequently in recent years is medicine and health. It has been assumed of late that Americans’ trust in doctors, the healthcare system, and public health officials is at an all-time low. And given the common refrains we hear from people about their confusion over COVID-19 guidelines and outrage over restrictions, it seems only logical that trust in public health would be extremely low.

However, some recent research conducted by our organization, Critica, suggests that this is not quite the case. While it is true that people hold many suspicious views, especially about government agencies, government health officials such as Dr. Anthony Fauci, and pharmaceutical companies, people still overwhelmingly trust and turn to their personal physicians and other healthcare providers for advice. When asked to whom they would turn for information on both the COVID-19 vaccine and the flu vaccine, an overwhelming majority of focus group participants in our study said that they would first and foremost ask their personal or family physician. Many also commented that for information on childhood vaccines, they would first go to their child’s pediatrician. While most of these people also said they would consult the internet, it was rare for anyone to list the internet as their sole source of information. When a friend or family member was a primary source of information or advice, it was almost always because that person was a healthcare worker.

These findings suggest that the issue of trust in physicians and health officials is much more nuanced than we might think. While people overwhelmingly cited their personal physicians as their primary source of information on all things health-related, they would also often voice what could almost be characterized as conspiracy theories in the same breath. Many people simultaneously believed their doctors were trustworthy while also stating that vaccine manufacturers and pharmaceutical companies more generally had corrupted doctors and the healthcare system, and that they were deeply suspicious of the whole operation. How can a person mistrust the healthcare system, and even physicians writ large, yet trust their own personal physician more than anyone else for essential decisions about their health and safety? While we need more research to answer that question, it is worth recognizing that people probably do not always carry over beliefs about large systems to their interactions with individuals. That is, while it may seem dissonant, it is actually possible, and quite common, for people to distrust "doctors" as an entity but still place an enormous amount of trust in their personal physician. This realization is important for several reasons. First, it allows us to more accurately explore the actual impact of conspiratorial or distrustful thinking on people's day-to-day decision-making: someone can hold conspiratorial thoughts but still trust an individual in the class they have suspicions about. Second, it gives us a window to intervene in conspiratorial or distrustful thinking by helping people recognize that there are people they trust, even within a category of people they claim to entirely distrust. Third, and perhaps most importantly, it allows us to have hope that our healthcare system might be able to regain people's trust.

How Much Trust Do We Have In Newly Approved Medications?

When a new medication is approved by the U.S. Food and Drug Administration (FDA), it means that the drug has undergone years of testing, first in the laboratory, then in animals, and finally in people with the condition that the new medication is intended to treat. We expect approved medications to have two features: first, that they work for treating at least the condition for which they are approved (this is called efficacy), and second, that they are safe.

A recent article published last October suggests a problem with that assumption in at least one area of FDA approvals. According to the article in Fierce Pharma, "Across six cancer indications for four medicines through the FDA's accelerated approval program which have since been pulled, Medicare had spent $224 million between 2017 and 2019, a new study published in JAMA Internal Medicine shows." Clinical trials conducted on these four cancer drugs after FDA approval showed a lack of efficacy, and the drugs were therefore withdrawn. What is going on here, and is this a recurring pattern or an isolated event?

The Accelerated Approval Pathway

The four drugs discussed in the article above were all approved under the FDA's accelerated approval pathway, which allows a drug company to win approval using what are called biomarkers as the outcome measure during clinical trials. Ordinarily, the FDA requires that a new medication (or an old medication being tested for a new indication) show efficacy in treating an actual disease. For example, if a pharmaceutical company has a drug that it thinks works for treating Lyme disease, it must show that the drug has a direct effect on Lyme disease itself in patients enrolled in clinical trials. But for some indications, showing a direct effect on the disease itself might take a long time, and using a surrogate marker for the disease can therefore speed along drug approval. This can be a great benefit when the disease in question is very severe, like cancer. The idea is that getting potentially effective medications on the market sooner rather than later for potentially life-threatening illnesses could save lives.

Questions have been raised about the FDA's accelerated approval program (image: Shutterstock)

Let's take a theoretical example in which a type of cancer ordinarily has a median survival time of five years. That is, half of patients with this cancer live for more than five years and half live for less than five years. It would take years to see whether a new anti-cancer drug improves survival time significantly more than already approved treatments. So the FDA may agree, under its accelerated approval program, that all the drug company needs to prove is that its new anti-cancer drug is better than existing treatments on a measure that responds to treatment more quickly, like tumor size. In our theoretical example, let's say existing treatments reduce the size of the tumor in this type of cancer within three months of treatment for 30% of patients. Tumor size becomes the biomarker in clinical trials of the new anti-cancer drug, and if it can reduce tumor size faster, or in a higher percentage of patients, than the existing treatments to which it is compared, then the FDA is inclined to grant it approval for broader use.

         But the fact that a drug works on a biomarker does not necessarily mean it actually improves overall disease outcome, so FDA only grants accelerated approval based on a biomarker on the condition that the drug company perform confirmatory trials, also called phase four trials. In confirmatory trials, the drug company must demonstrate that its new anti-cancer drug actually makes patients live longer. And it is on this count that some new anti-cancer drugs appear to fail, prompting the FDA to request their withdrawal.

According to the Fierce Pharma article, the FDA's accelerated approval program has been criticized often:

… the program has repeatedly come under fire for its lack of accuracy and follow-on oversight. A 2019 JAMA Internal Medicine study led by Harvard researcher Aaron Kesselheim, M.D., found that only 19 of 93 cancer drug indications the FDA waved through under the pathway between late 1992 to mid-2017 had positive overall survival data from confirmatory trials.

         If the accelerated approval program puts drugs on the market that turn out to lack efficacy, how often does this happen under the regular FDA approval process? Can we be secure in our belief that newly approved drugs are indeed safe and effective?

Withdrawals Are Uncommon

The FDA has the authority to ask a manufacturer to withdraw its medication from the market if it determines that the drug's benefits no longer outweigh its risks. It turns out that withdrawals of drugs by the FDA because of safety or efficacy problems are very uncommon. In a 2017 study, Yale University investigators found that of 222 drugs approved between 2001 and 2010, three were subsequently withdrawn by the FDA. It should be noted, however, that safety concerns were identified after approval for an additional 120 of these medications, some of which were deemed serious. Thus, while total withdrawal of a medication may be rare, the identification of previously undetected adverse side effects once a drug has been on the market for a few months or years is relatively more common.

Once a medication is approved by the FDA, withdrawal of that drug for safety reasons is very uncommon (image: Shutterstock). 

There have been a few cases of medications being withdrawn that have made headlines, like the story of the anti-inflammatory medication Vioxx. Vioxx was supposed to be a safer alternative to already marketed drugs like naproxen and ibuprofen, but after its approval, studies showed it caused an increased risk of heart problems and heart attacks, some of which were fatal. Five years after approval, Vioxx was withdrawn from the market, and the drug became the subject of thousands of lawsuits and a multi-billion-dollar settlement by its manufacturer, the pharmaceutical company Merck. Interestingly, in the original clinical trials involving more than 5,000 people that led to the FDA approval of Vioxx, these heart problems were not detected.

That’s the concern with new drugs—will serious adverse side effects turn up only years after they are approved, even though the approval process involves years of rigorous testing and hundreds to thousands of clinical trial participants? Is it safe to take a medication that has just been approved?

We can take some comfort from the fact that adverse side effects or lack of efficacy serious enough to cause the complete withdrawal of a medication are uncommon. When this does happen, as in the dramatic case of Vioxx, it makes headlines, but that situation is unusual. On the other hand, new adverse side effects do crop up that require FDA to inform prescribers and the public. And in the case of the accelerated approval process, problems with efficacy seem altogether too common.

The FDA clearly needs to review the accelerated approval process for drugs that treat the most serious and often life-limiting illnesses, like cancer. It needs to do a better job of determining how closely linked biomarkers are to actual outcomes and force drug companies to conduct and report on confirmatory trials after approval much more expeditiously than is now the case.

With drugs approved under the regular process, we can be reasonably reassured that most turn out to be safe and effective even after years of prescribing. It almost goes without saying that people taking any medication should report adverse side effects promptly to their prescribers and that prescribers should stay up to date on newly surfacing adverse events involving drugs they prescribe. It is also important that journalists not dramatize newly emergent safety and efficacy problems with already approved drugs, because in many cases these will turn out to be rare problems that can be managed, or issues that apply only under special circumstances that do not affect most people taking the medication. Perhaps the best we can say about taking new medications is that they should be tried when patient and prescriber agree that the potential benefits outweigh the risks; then both collaborators should remain vigilant for every piece of new information that emerges.

How to Express Risk in the Headlines

Two headlines on the same topic appeared recently, conveying very different messages about health risk. The first appeared in an online source called MEDPAGE TODAY on October 6. It read:

Israeli Data Favor Higher Estimates of Post-Vax Myocarditis.

The second appeared in the Israeli newspaper Haaretz the next day and read:

Study: Only 142 Out of 5 Million Vaccinated Israelis Suffered Heart Inflammation

Both headlines, and the stories they accompanied, report on studies that appeared in the New England Journal of Medicine examining the incidence of a type of inflammation of the heart muscle, called myocarditis, after receipt of the Pfizer/BioNTech COVID-19 vaccine.

The actual research studies about post-vaccination myocarditis are generally felt to be important contributions to our understanding of the risks and benefits of COVID-19 vaccines, and they paint a generally reassuring picture about vaccine safety. Stories in the media about this research, however, vary substantially in the amount of concern they are likely to cause among the reading public. The headline from MEDPAGE TODAY, admittedly intended for a medical audience, creates the impression that something new and dangerous has been discovered about the vaccines. One might infer from this headline that previously low rates of myocarditis attributed to the vaccines were wrong and that new research has revealed a much more significant problem. The Haaretz headline, by contrast, puts this new information immediately in context: only 142 cases of myocarditis out of 5 million people who received a COVID-19 vaccine, for an incidence of about 0.003%, or around 1 case of myocarditis in every 35,000 people vaccinated.
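The figures in the Haaretz headline are easy to check for yourself. Here is a quick back-of-the-envelope calculation (a sketch only; the two input numbers come from the headline, not from any analysis of our own):

```python
# Back-of-the-envelope check of the Haaretz headline figures.
cases = 142              # reported post-vaccination myocarditis cases
vaccinated = 5_000_000   # reported number of vaccinated Israelis

incidence_pct = cases / vaccinated * 100  # incidence as a percentage
one_in_n = vaccinated / cases             # the "1 in N" framing

print(f"Incidence: about {incidence_pct:.4f}%")          # about 0.0028%, i.e. ~0.003%
print(f"Roughly 1 case per {one_in_n:,.0f} vaccinations")  # roughly 1 per 35,211
```

The two framings describe exactly the same data; the "1 in 35,000" version simply makes the rarity easier to grasp at a glance.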

We can put this into even further perspective by looking at the two papers in the New England Journal of Medicine on which these stories report. Both were studies from Israel that looked at rates of myocarditis following the Pfizer/BioNTech COVID-19 vaccine. Both found very low rates of myocarditis after vaccination, and both found that the risk was highest in teenage boys and young men. And finally, both studies found that post-vaccination myocarditis was almost always mild or moderate and resolved without any further consequences.

What Is Myocarditis?

Myocarditis is an inflammation of the heart muscle, or myocardium, that occurs in about 1.5 million people worldwide every year, with an incidence of at least 10 to 20 cases per 100,000 people. Most cases are caused by viruses, although there are a number of other causes. The symptoms are usually not subtle and include relatively sudden onset of chest pain, difficulty breathing, and fatigue. Jack Gorman, the Critica president, remembers that the first patient he admitted to the hospital when he was an intern in pediatrics at a New York City hospital in July 1977 was a young man with myocarditis. The patient was in very mild distress, his admission to the hospital was precautionary, and he recovered uneventfully. That is the typical course of myocarditis, although it can be more serious, and in one of the New England Journal of Medicine studies there was one death from post-vaccination myocarditis.

Myocarditis is a disease in which the heart muscle or myocardium becomes inflamed. The most common cause of myocarditis is viral infection (image: Shutterstock).

As explained in an excellent article in the journal Nature, the risk of myocarditis from actual COVID-19 infection is 18 times the annual incidence rate cited in the paragraph above, "a much more significant risk than is observed following vaccination." Because the post-vaccination rate is somewhat higher among young men, some governments have decided not to authorize mRNA vaccines (that is, the Pfizer/BioNTech and Moderna/NIH vaccines) for that group. The article in Nature explains that the risk in teenage boys and young men "of developing myocarditis might be increased more by the vaccine than by the disease, particularly because children rarely develop severe COVID-19." In all other demographics, the risk of getting myocarditis from COVID-19 appears to be clearly higher than the risk from the vaccine. Hence, with the possible exception of teenage boys and young men, the myocarditis data leave unchanged the recommendation that most people should get a COVID-19 vaccine as soon as possible.

Putting the Myocarditis Risk in Perspective

The facts about post-vaccination myocarditis have not stopped anti-vaccination advocates from using the Israeli studies to bolster their argument that the COVID-19 vaccines are dangerous. This is curious at best, because they often cite as a reason against vaccination what they claim to be a 99% survival rate from COVID-19. That means, of course, that they believe that 1% of people (one out of every 100) infected with the virus that causes COVID-19 will die. Isn't that a much greater risk than an incidence of 0.003%, or around 1 case of myocarditis in every 35,000 vaccinated people, especially given that almost all of those people with myocarditis recover and death from post-vaccination myocarditis is very rare?
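The comparison in this paragraph can be made explicit. Taking, purely for the sake of argument, the anti-vaccination advocates' own implied 1% fatality figure alongside the Israeli myocarditis data cited above, the arithmetic looks like this (a sketch; both inputs are taken from the text, and the 1% figure is the advocates' claim, not an established fatality rate):

```python
# Compare the fatality risk anti-vaccination advocates themselves imply
# ("99% survival" means 1% die) with the reported post-vaccination
# myocarditis incidence from the Israeli studies.
claimed_covid_fatality = 0.01             # the advocates' own implied figure
myocarditis_incidence = 142 / 5_000_000   # about 0.003%

ratio = claimed_covid_fatality / myocarditis_incidence
print(f"The claimed fatality risk is roughly {ratio:.0f} times the myocarditis risk")
# ...and nearly all myocarditis cases resolved, while death is irreversible.
```

Even on the advocates' own numbers, then, the risk they cite as a reason to skip vaccination dwarfs the risk they cite as a reason to fear it, by a factor in the hundreds.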

What concerns us most, however, is not anti-vaccination misinformation but rather the fears that can be stoked by misleading headlines. The excellent headline in Haaretz about post-vaccination myocarditis immediately puts the finding in context, and the rest of the story tells us that while there is a concern about vaccinating young men (whose risk of developing a serious problem is still very, very low), people can be reassured that once again very careful surveillance and data analysis show the COVID-19 vaccines to be remarkably safe. Not all headlines we saw, however, were this careful and responsible, and some, obviously trying to entice readers to "click," implied a much more serious problem.

The Haaretz headline shows us that it is perfectly possible to keep the public informed about the latest scientific developments without scaring us. Do editors and journalists worry that a headline or story telling us that a health risk is exceedingly small won't attract readers? Perhaps. But we are hopeful that editors and journalists understand that they have a huge impact on the public's health and that nuance in their headlines and stories has great potential to affect our attitudes and behaviors. Editors and journalists know that it is a public health imperative that as many people as possible get vaccinated against COVID-19. That means crafting their messages carefully so that they convey the best possible public health information. This applies not only to stories about COVID-19 vaccines, but to all stories about health and science in which our safety is at stake.

It Is Not All In Your Head

But Where is It?

Chronic back pain is a debilitating condition that affects up to 20% of people. It causes real suffering and prompts billions of dollars in annual healthcare expenditures. Although serious underlying pathology can cause chronic back pain, in many cases even extensive diagnostic workups do not yield a reason for even very severe symptoms. But back pain really hurts, and people who have it do not want to be told the pain is "all in your head" just because doctors cannot find a medical reason for it.

How then to explain a recent study in which a psychological treatment for chronic back pain produced substantial, lasting relief? A collaboration of scientists led by investigators at Weill Cornell Medical College and Dartmouth College randomized people suffering from moderately severe chronic back pain to receive a psychological intervention called pain reprocessing therapy (PRT), saline injections in the back (placebo), or usual treatment. At one-year follow-up, two-thirds of the patients randomized to PRT reported being completely or nearly completely pain free, compared to 20% in the placebo group and 10% in the usual care group. The investigators also performed functional magnetic resonance imaging (fMRI) scans on the participants and found small but significant changes in brain activity in the PRT group in areas of the brain known to be involved in pain perception and interpretation, like the anterior insula and anterior cingulate cortex.

Chronic back pain is a common and debilitating condition that prompts extensive healthcare utilization but often has no apparent cause and is difficult to treat. A psychological treatment, however, was recently reported to be very effective in relieving pain (image: Shutterstock).

The psychological treatment in this study, pain reprocessing therapy (PRT), is described by the study authors as seeking to "promote patients' reconceptualization of primary… chronic pain as a brain-generated false alarm. PRT shares some concepts and techniques with existing treatments for pain and with the cognitive behavioral treatment of panic disorder." Patients randomized to PRT had one telehealth evaluation and educational session with a physician and then eight one-hour sessions with a PRT therapist. The results of the study, which were published in JAMA Psychiatry, are clearly impressive. Chronic back pain usually does not remit on its own, and indeed, in this study, patients who received placebo or usual care were for the most part still in pain at the one-year follow-up point.

Why Should a Psychological Treatment Work?

So, if a psychological treatment is effective in relieving pain that clearly emanates from someone's back, what does that tell us about the origin of the pain? There are many other instances in which pain and other symptoms experienced in various parts of the body, for which no underlying medical pathology can be found, respond to psychological treatment. Cognitive behavioral therapy (CBT), for example, is an effective treatment for irritable bowel syndrome, a condition characterized by abdominal pain and other gastrointestinal symptoms.

The reason why psychological treatments work for painful conditions is not a surprise to neuroscientists, but it seems hard for some people to accept. While back pain is not "all in your head," the brain is. Think for a moment about what happened the last time you accidentally touched a hot object, like a hot stove. Without thinking, you immediately withdrew your hand, and then a fraction of a second later you felt the pain. The reason for the delay in pain perception is that it takes longer for neural impulses to travel from sensory receptors in the skin of the hand to the brain than to neurons in the spinal cord, where the reflex arc that causes your hand to pull back is initiated. It is only when the neural impulses reach specific regions of the brain that you can feel pain. In that sense, all pain is "in your brain," and therefore interventions that operate at the level of the brain, like psychotherapy, have the capacity to alter our perception of pain.

Neural impulses from sensory receptors in the skin travel first to the spinal cord and then to the brain. This explains why we withdraw our hand from a hot object before we actually feel the painful sensation (image: Shutterstock). 

What we have described with the reflex arc does not mean that someone who touches a hot stove is "making the pain up" or that the resulting pain is "psychosomatic." It means that pain is a symptom dependent on a particular organ of the body, the brain, which is susceptible to psychological intervention. Other symptoms of common diseases are similarly dependent on the brain, like fatigue, memory problems, and, of course, "brain fog." Incidentally, the specific regions of the brain that perceive pain in the hot stove example, like the anterior insula and anterior cingulate cortex, are also the ones where changes in brain activity were seen on the fMRI scans of the patients with chronic back pain who received PRT.

         It is probably the case that almost all “physical” illnesses have at least some components that are brain related. Sometimes this is the sadness, anger, and other emotions that often occur to us when we are seriously sick. In other instances, however, the brain is intimately involved in the basic physiology of the illness, as appears to be the case with chronic back pain, irritable bowel syndrome, long COVID, and a host of other syndromes for which obvious medical causes are elusive but suffering and disability are real.

The chronic back pain study tells us that psychological interventions should be a prominent consideration in the approach to many illnesses. This starts with how healthcare professionals talk to patients about the steps they take to evaluate and treat an ailment. If a doctor first performs a lengthy work-up of some pain complaint that is bedeviling her patient, finds no abnormalities on a myriad of tests, and then says "I can't find anything wrong, it must be psychological," the patient will likely feel dismissed and angry. But if that doctor begins the evaluation by saying "it is often the case that even the most sophisticated tests don't show anything, so we won't be surprised if all the tests come back with normal results. We have good treatment options if that is what we find, and we'll work together to help you with your symptoms," she is far more likely to establish a collaborative and trusting relationship with her patient in pain, one that has a better chance of seeing the patient get better. Some of the effective interventions in such situations will, unsurprisingly, be psychological. We need to better prepare and educate both healthcare professionals and patients that this is so.