What Journalists Write Matters

Is There a “Do No Harm” For Reporters and Editors?

At graduation, physicians recite the Hippocratic Oath, some versions of which include the famous injunction “Primum non nocere” (First, do no harm). Healthcare professionals, however, are not the only ones who need to heed this warning.

         Last December 9, and then again on February 4, The New York Times published two articles that are causing a stir in the world of suicide prevention and research (note that Critica will not post links to either article for reasons that will become clear). The first article detailed a website that gives advice on how to kill oneself; the second told readers about a chemical, available for purchase online, that is increasingly used by people trying to kill themselves (again, Critica will not list the name of the website or of the substance). Between these two articles, a piece by four academic suicide researchers appeared on Medium titled “The New York Times might have increased suicide deaths. Here’s what it can do to fix it.”

In two articles, The New York Times gave details about a website that promotes suicide and a means of attempting suicide. Did it violate guidelines for reporting on suicide and did its actions increase the number of suicides? (image: Shutterstock).

         The authors of the first Times article, dealing with the website, apparently worked for a year researching the site and uncovering cases of people who seem to have been motivated to kill themselves by it. It is accompanied by a warning: “This article focuses on suicide and contains details about those who have taken their own lives. If you are having thoughts of suicide or are concerned that someone you know may be, resources are available here.” But the article then goes on to explain in detail how to find the suicide-promoting website. We had no trouble finding it with a simple Google search, and although the site says it is only for people over 18, there is of course no real way to prevent children from going on it. Once there, they will find postings by people seriously considering suicide and advice on how to take one’s life.

Uncovering a “Dark Corner” of the Internet

According to one report (again, Critica is not providing links), the Times authors considered the ethics of describing the website in detail because they felt compelled to bring to light a “dark corner” of the internet. If people are following the advice of this website and dying, then a thorough public investigation may be warranted to allow authorities to consider regulating it. The Times story identified 45 people whose deaths by suicide involved the website and noted that “The site now draws six million page views a month, on average—quadruple that of the National Suicide Prevention Lifeline…” They decided to name the website in their story and the substance used in many of the suicides “in order to fully inform readers about the dangers they pose, particularly to the young and vulnerable.”

But the four suicide researchers who authored the Medium article remind us that “Decades of research have also shown that when news and entertainment media share information about suicide method and location, there is a short-term increase in suicide deaths.” Over 30 years ago, Madelyn Gould and David Shaffer of Columbia University were among the first to show, in an article in the New England Journal of Medicine, that media depictions of suicide can lead to imitative suicide attempts. The New York Times named the website, gave many details about the people who had supposedly killed themselves because of the website, and named a means of attempting suicide that some of them had used. Critica is deliberately withholding links to the articles that do this because research shows that doing so can in fact increase the risk of suicide among some vulnerable people.

Guidelines for Reporting Suicides

         That research by Gould, Shaffer, and many others has led to the development of suicide reporting guidelines, which, among other things, recommend against printing the method by which someone ended their life. The second New York Times story clearly violated this recommendation by detailing a method apparently being used with increasing frequency and explaining how to obtain it. Again, we had no trouble finding the substance on Amazon and could easily have purchased it.

         The Medium article authors, writing about the first of the two Times articles, stated that “While we were grateful that this investigation could lead to the website and the novel method being shut down, we were equally concerned that disregarding the suicide reporting recommendations could result in more suicide deaths.” The authors then urged The New York Times to conduct research to find out if in fact their articles are associated with an increase in suicides. “If the journalists and editors believe that publishing this piece was in the service of saving lives,” they wrote, “then it is imperative for them to follow through in the ways outlined…”

         Of note, the Medium article authors say that The New York Times refused to publish their article.  

More Reporting Guidelines are Needed

         When journalists write their stories, they are undoubtedly interested in reaching as many readers as possible. Some are also interested in producing work that will advance a public good. But do journalists and editors consider their potential to do harm? Reporting on suicide is one way journalists can indeed do harm, and the guidelines that might minimize that harm are inconsistently followed. There is also evidence that reporting on mass shootings can encourage copycats, and guidelines have been issued for better reporting of mass shootings as well. It would be interesting to learn the extent to which these guidelines are followed.

         During the Covid-19 pandemic, we have seen that misinformation about vaccines can lead people to delay or refuse getting vaccinated. Thus, we know that what people see and read influences their behaviors. Journalists and editors, therefore, must be aware of the potential they have for causing harm if they do not carefully think through the ways that they report about important health issues.

Imagine the following possibility: There is consensus among medical experts that high blood pressure (hypertension) increases the risk for adverse cardiovascular and cerebrovascular events, like heart attacks and strokes. If diet and exercise do not lead to improvement, antihypertensive medication may be necessary for many people. A finding reported in a medical journal might identify a previously unknown but rare adverse side effect from one such medication. A reporter might think it their duty to make that adverse side effect known to the public in the spirit of informing people about the risks of their medications. That is an understandable thought. At the same time, however, will the journalist consider that the way they report on the adverse event might make people overestimate a rare risk, generalize the adverse side effect to all antihypertensive medications, or become distrustful of their doctors’ advice to continue taking their medication? Does the journalist think about the potential for their article to have the unintended consequence of motivating some people to stop taking necessary medication, risking serious adverse consequences?

         It is unclear to what extent journalists and editors are aware of the potential their communications have to do harm, although there has been considerable discussion among journalists about the adoption of stronger editorial codes. A Hippocratic Oath for journalists has even been suggested. We believe that a broader set of guidelines is needed around ensuring that what journalists write about and post in the areas of science and health are in the service of the public’s health. Then, we need to work on incentives that will motivate journalists to follow them. Critica is prepared to work with journalists and editors to craft such guidelines and incentives.

Long Covid and the Brain

Inject an agent that stimulates the immune system (usually something called lipopolysaccharide) into a rat and the animal will exhibit a constellation of behavioral symptoms called “sickness behavior.” The symptoms include reduced locomotion, decreased appetite (anorexia), sleepiness, and decreased engagement in usual activities like grooming. In other words, the rat seems to be depressed. Scientists can demonstrate that the immune system activation that causes sickness behavior in rodents also produces physical activation of immune markers in the brain.

         Humans who get infected with various viruses and bacteria can relate: when we are sick we generally have decreased interest in things (called “anhedonia” by psychiatrists and psychologists), decreased energy, sleep disturbances, reduced appetite, and a sad feeling. Is this because stimulation of the immune system by these infectious agents is having a physical effect on our brains or is it a psychological reaction to feeling sick that is not accompanied by any discernible physical brain changes?

         This question, which is very difficult to answer, has become especially relevant during the current coronavirus pandemic. The syndrome of Post-Acute Sequelae of SARS-CoV-2 Infection (PASC), better known as Long Covid, has sparked intense interest throughout media and the internet and is the subject of intensive research by basic and clinical scientists throughout the world. Many of the manifestations of Long Covid are neurological and psychological in nature, including headache, “brain fog,” confusion, loss of the senses of taste and smell, depression, and anxiety. This raises the question of how the virus that causes Covid-19 affects the brain. Are the neurological and psychological symptoms of Long Covid related to physical changes in any part of the central nervous system?

Virus Not Detected in Human Brain

         Some viruses are neurotropic, meaning they can infect nervous system tissue directly. Examples are the measles, polio, and rabies viruses. So far, however, scientists have not found evidence that the virus that causes Covid-19 (SARS-CoV-2) is neurotropic in humans, although the virus has been detected in the brains of mice and non-human primates. Autopsy studies have failed to produce any signs of direct infection of neurons or other cells in the brains of people who have died from Covid-19. That means that it is unlikely that ongoing viral brain infection is the cause of Long Covid’s psychological and cognitive symptoms.

         One theory that has been prominently advanced and studied to explain Long Covid’s brain-related symptoms is the possibility that an ongoing immunological process is affecting the brain. Given the well-established effect in rodents of inflammation elsewhere in the body on both immune function in the brain and on behavior, this is certainly possible. While the evidence for this is mounting, it is still far from definitive.

The Difficulties of Studying the Human Brain

         Studying the human brain is a difficult undertaking because it sits behind something called the blood-brain barrier, a protective layer of cells and blood vessels that keeps many things circulating in the bloodstream from entering the brain and many things in the brain from exiting into the blood. This means that blood tests generally do not reveal much about what is happening in the brain. And unlike other organs of the body, we usually cannot directly sample brain tissue in living people.

         Blood studies have shown that Covid-19 can be associated with the production of autoantibodies, antibodies that mistakenly recognize tissues of the body as if they were foreign invaders. These autoantibodies have been shown to attack central nervous system tissue in the laboratory and to persist in the blood for weeks and even months after recovery from acute Covid-19 infection, suggesting that this could be one mechanism by which abnormal immune system activation affects the brain. It is not yet known, however, whether autoantibodies stimulated by Covid-19 do in fact cross the blood-brain barrier in humans and adversely affect the brain.

         Studies using formal neuropsychological testing have shown cognitive abnormalities months after recovery from the acute phase of Covid-19 infection. Brain imaging studies can further tell us many things about brain structure and function. Using the technology of positron emission tomography (PET) scanning, for example, scientists reported decreased metabolism in parts of the brain in patients with severe cognitive impairment in the days following recovery from Covid-19. Six months after recovery from Covid-19, the same investigators found improvement in both cognition and brain metabolism in the study participants, but there were still residual abnormalities in both domains. These are important studies, but they do not provide us with an understanding of the mechanism by which SARS-CoV-2 causes disturbances in brain function.

Positron emission tomography (PET) scanning of the brain can detect abnormalities in regional brain activity (source: Shutterstock).

Animal studies can clarify the effects of the Covid-19 virus on the brain. Studies in a mouse model show that the virus can cause inflammation in the brain without actually entering the central nervous system, with activation of the brain’s immune cells, called microglia. The same pattern of microglia activation has been observed in autopsied human brains. Once again, however, it is unclear to what extent this is a direct result of previous Covid-19 infection or how directly correlated it is with Long Covid symptoms.

Examining the Cerebrospinal Fluid

         That leaves examination of the cerebrospinal fluid (CSF), the fluid that surrounds the brain and runs down alongside the spinal cord. Samples of CSF can be obtained by performing a lumbar puncture, also known as a spinal tap, a standard procedure when brain infections like meningitis are suspected. Although lumbar puncture is less painful than people often anticipate and generally quite safe, it is not something done casually and therefore research studies involving examination of CSF samples often involve relatively small numbers of patients. Furthermore, even though the CSF originates in the brain, what is in it doesn’t always represent exactly what is occurring within the brain tissue itself.

         Studies in which CSF has been sampled from patients with Long Covid are now available. They show elevation of a number of markers of abnormal immune system activation. As noted above, however, these studies involve small numbers of participants, and different studies have found different immune system markers to be activated. Although suggestive, the data do not yet nail down the exact role of immune system activation in Long Covid.

The cerebrospinal fluid (CSF) bathes the brain and spinal cord. It can be sampled via a lumbar puncture, or spinal tap, and can reveal abnormalities in brain chemistry and the presence of abnormal cells or pathogens (source: Shutterstock).

         It is also important to remember that rates of anxiety disorders, depression, and posttraumatic stress disorder have increased during the pandemic. When symptoms of these conditions emerge in people who have recovered from Covid-19 it is difficult to know whether it is the psychosocial burden of living under pandemic conditions that is responsible, the physical effects of ongoing neuroinflammation, or some combination of the two. Because of the stigma surrounding mental illness, it is sometimes difficult to tell patients with Long Covid that one of their problems is depression and that treatment with evidence-based psychotherapy and/or antidepressant medication might be helpful. Here, however, trying to make the distinction between what is “physical” and what is “psychological” may be misleading. There is a long literature on the relationship of depression to immune system abnormalities and overactivation. Hence, it would be unsurprising to find that both psychosocial and immunological factors are involved in Long Covid.

         Evidence is mounting that immune system activation is a factor in the cognitive, emotional, and neurological symptoms associated with Long Covid, but this is far from settled science. As a recent viewpoint article in Science noted, “The pathophysiological mechanisms are not well understood, although evidence primarily implicates immune dysfunction, including non-specific neuroinflammation and antineural autoimmune dysregulation.” The search for what causes Long Covid and its brain-related symptoms thus continues.

What Builds Trust in Health Care Institutions?

The psychology of trust should guide our response to health crises.

In January of 2022, at the beginning of a new year that has so far brought renewed surges of COVID-19 cases in the form of an even more contagious variant, The New York Times published an article titled “For CDC’s Walensky, A Steep Learning Curve on Messaging.” The article reviewed some of the perceived missteps that the Centers for Disease Control and Prevention’s (CDC’s) director Rochelle Walensky has made in communicating ever-changing COVID-19 guidelines to members of the American public. The article pointed to some areas of tension between Dr. Walensky and two other prominent health officials, Dr. Anthony Fauci and Dr. Vivek Murthy, specifically on the decision to reduce isolation time from 10 to 5 days for people with COVID-19 infection (vaccinated or not) without requiring a negative test first. The lack of a united front on issues such as these has certainly contributed to confusion among Americans about what they are supposed to do in various scenarios related to the virus.

Public Trust

While the Times article focuses on “messaging,” and particularly on the failure of a range of government health officials to provide consistent messaging, it’s not entirely clear that “messaging” is the heart of the problem. While it’s true that there are issues with the way the CDC communicates with Americans, the real problem seems to be that public officials in this crisis have not understood or paid enough attention to public trust.

Source: Postmodern Studio/Shutterstock

It may be no surprise that evidence suggests that trust in public officials increases the likelihood that people will follow directions in a crisis such as a pandemic. This is why it’s essential that officials understand how trust is built in these situations and how to communicate in a way that will foster this trust. They must stop seeing communication merely as a way to get information to people and start seeing it also, perhaps more importantly, as a medium for conveying trustworthiness. They should therefore constantly be asking themselves: what is the best way to craft this message so that it will encourage trust? Because health officials may be too busy or lack the requisite training to craft messages in this way, it is important that there be at least one person on staff at relevant health agencies who understands how to communicate well during a crisis.

Consistency and Expectations

Trust is built through consistency and regular, predictable expectations. This means that people expect to hear from health officials at regular intervals and expect the communications to build on one another rather than be full of shocks and reversals. As we have seen with this pandemic, it is impossible for there to be no surprises. We’ve seen changes in recommendations on masks, isolation time, and testing, among other things. What’s important in these situations is for health officials to couch changes in the context of things that aren’t changing and to acknowledge that the changes themselves might be jarring. Even these small gestures can make the delivery of unexpected news easier. When people feel that they can depend on officials to communicate in a timely fashion and to acknowledge how they might feel about unexpected news, they begin to trust those officials.

Other recommendations for building trust in a health crisis include presenting a diverse set of experts as communicators, modeling the behavior health officials would like to see in the public, and ensuring that information is always as transparent as possible. Health officials should never allude to something that cannot or will not be shared with the public and should work extra hard to reassure people that when they do not know something, it does not mean they are withholding information. It is also important that health officials communicate with each other and try to reach agreement on scientific issues before they go to the public.

Good communication during a crisis like a pandemic is about more than simple accuracy of information and consistency among health officials. It is fundamentally about what does and does not build trust. Once crisis communications become more focused on this aspect of the public’s relationship with health officials, then we will be able to make more headway in ensuring that more Americans follow guidelines that keep us all safe.

It’s About More Than Money

Conflicts of Interest in the Healthcare Industry are Both Financial and Non-financial

When your doctor prescribes a new medication for you, of course you want to believe that they have made the choice of which drug to prescribe based on science and your individual needs. When you read a scientific paper giving the results of a study evaluating a new intervention for a disease, you expect that all the benefits and risks discovered in the study have been accurately and completely described. When you go online seeking guidance from a patient advocacy organization about how to approach treatment for an illness, you assume that the information on the nonprofit’s website is unbiased and reflects up-to-date science on the topic.

         A study published last fall in the medical journal the BMJ tells us, however, that those beliefs, expectations, and assumptions must be tempered by awareness of an extensive web of potential conflicts of interest. To come to that conclusion, the authors did a thorough (in technical terms, a “scoping”) review of the literature and obtained input from “an international panel of experts with broad expertise in industry ties and deep knowledge of specific parties and activities.” They looked at ties between manufacturers of drugs and medical devices and a variety of players in the healthcare industry, including researchers, practitioners, and non-profit organizations, and considered both financial and non-financial incentives. As such, this is perhaps the most comprehensive examination of potential conflicts of interest in the healthcare field that we have yet seen.

Tangled Web of Relationships

         What they found is sobering. If you look at figures one, two, and three from their publication (which is available for free online), you will see a tangled web of relationships between the medical product industry and what they call the healthcare ecosystem. It is relatively easy to trace payments made from pharmaceutical companies to medical schools in the form of research grants and then to refer to the long literature that shows that reports of the results of such research tend to emphasize benefits of the company’s products and minimize risks. It is also now straightforward to identify payments from drug and medical device companies to individual physicians because these are covered under various sunshine laws that mandate public disclosure. There is also a long literature documenting that paying doctors to give lectures or attend dinners influences what they prescribe.

When money changes hands between a medical industry company and a healthcare organization or provider, the potential for conflicts of interest arises. Some non-financial conflicts of interest can also be important (image: Shutterstock).

          Perhaps we are less likely to consider the ways that pharmaceutical industry funding affects consumers directly, but we have only to remember the advertisements that the companies regularly place about their medications to see that they are influencing us as well. Studies show that these direct-to-consumer (DTC) advertisements affect how we understand the risks and benefits of drugs, sometimes in subtle and not always accurate ways.

         We may not always realize that non-profit organizations that serve as patient advocates and educators also receive money from drug and device manufacturers. There is much less known about whether these payments adversely affect the advice these organizations give us and the work that they do.

Non-Financial Incentives Are Also Found

         An important feature of the BMJ study is that it considers non-financial incentives as well as financial ones. A medical scientist might be included as a co-author of a paper about a study funded by a pharmaceutical company without receiving any actual money themselves. Even though no money changes hands between scientist and drug company in this case, the potential for the scientist to be biased about the study results remains. Here is how the authors of the paper describe the financial and non-financial ties they looked at:

Many medical product industry ties to these parties are financial, involving money or items of financial value, as when companies negotiate prices with supply chain agents; purchase reprints from journals; make contributions to public officials for campaigns; provide consultancy, speaking, or key opinion leader payments to healthcare professionals; or financially support government agencies, healthcare organizations, and nonprofit entities through donations, grants, or fees. Other ties are non-financial, as in companies’ direct-to-consumer advertising to patients, advertising and detailing of prescribers, unpaid professional consultancy work, or the offer of data, authorship, and other professional opportunities to clinicians and researchers. All party types have financial ties to medical product companies. Only payers and distribution agents lack additional, non-financial ties.

         These potential conflicts of interest all involve the medical product industry, and they are extensive. Yet, there is more. The paper does not address potential conflicts of interest that do not involve industry. There are also instances in which a long career of advocating for a particular intervention may bias a scientist in how they talk and write about new studies. Let us say for example that an investigator has written several papers showing that a particular type of psychotherapy is good for treating headaches. That scientist’s career and reputation come to be associated with the benefits of the therapy, so when the results of a new study the scientist conducts do not replicate previously found benefits, the scientist could feel conflicted about reporting those results. That is why replication of studies by independent groups is always necessary before we conclude that a finding is solid.

         There is some oversight now of direct pharmaceutical industry payments to researchers and prescribers, but much less to non-profit organizations, the BMJ paper noted. And when it comes to potential non-financial conflicts of interest, including those that do not involve industry, it is hard to know what kind of oversight could be implemented.

Incentives Do Influence Behavior

         It would be easy to dismiss what the BMJ study found by noting that these conflicts of interest are always expressed with the modifier “potential.” That is, a financial or non-financial relationship between industry and an individual or organization in healthcare does not automatically translate into actual behaviors. Many doctors we know insist that they can go to pharmaceutical industry-sponsored continuing medical education courses without coming away feeling more favorable about the drug the company makes or going on to prescribe it more often. Many medical journals now take great care to ensure that research studies supported by drug companies include balanced reporting of benefits and adverse side effects and even publish studies in which the company’s drug did not work.

         As we noted above, however, there is ample, recent literature attesting to the fact that incentives influence the behavior of scientists, clinicians, and consumers. For example, one recent study showed that physicians were more likely to implant cardioverter-defibrillator devices made by manufacturers who had paid them the most money. Although the proportion of physicians who receive payments from industry seems to be decreasing, a study showed that 45.0% took some form of payment in 2018. A second example comes from a study published last year in the Journal of the National Comprehensive Cancer Network. An analysis of editorials in oncology journals showed that 74% had at least one author with a disclosed conflict of interest with a pharmaceutical company, 39% had a direct conflict of interest with the company whose drug was being discussed in the editorial, and 12% of the editorials were judged as “unfairly favorable” to the product being discussed, of which a majority fell into the direct conflict of interest category.

         These are just two examples in which conflicts of interest with industry seem to directly influence behavior, one involving what doctors prescribe and the other what medical scientists write in scientific journals. There are many more of these kinds of studies, leaving little doubt that the money the drug companies spend has a real effect on how their products are used. Again, that does not necessarily mean patients are being harmed. The drug that a doctor who has taken money from a drug company prescribes may be exactly the best one for an individual patient. The study paid for by a pharmaceutical or medical device company may really show that its product is beneficial and has a tolerable adverse side effect profile. All too often we see commentators go in the opposite direction on this and try to claim that any amount of money that changes hands between companies and the healthcare system automatically invalidates the results.

         We need to be wary, however, about how medical industry money influences every aspect of the healthcare system. If you accept help with a co-pay for a drug from the company that makes it, be certain you and your doctor really think it is the best one for you. Be very skeptical about ads for drugs—that seemingly endless recitation of adverse side effects that is always accompanied by cheerful music and video representations of happy people does not really tell you much about how the drug will affect you. It is fair game to look up your doctor on one of the publicly available search engines to see if they accepted any money recently from a pharmaceutical company. Whenever a paper lists a company as a sponsor of a study, read it carefully for signs that benefits are being magnified and adverse events minimized. Before donating money to a non-profit healthcare organization or accepting its advice, inquire about donations they receive from industry sources.

         Much harder is screening for non-financial conflicts of interest, both the kinds the BMJ paper was able to uncover and the ones that are much harder to detect, like the emotional attachment an investigator has to their findings. Keeping an eye out for inflated language is one way to watch for bias. If findings are described as “breakthroughs” or “major,” remember that those assessments are usually in the eye of the person who ran the study. When a paper has the phrase “we have previously shown,” ask yourself if anyone else has also shown it.

         Without going to the extreme of reflexively rejecting anything that has a connection to a financial or non-financial incentive, it is important to recognize the web of influence that engulfs our healthcare system. Every one of us needs to be on guard for the insidious presence of bias.

Raining on the American Football Parade

Is Playing Football Bad for the Brain?

On Sunday, February 13, millions of people watched as two teams in the National Football League (NFL) played in the annual Super Bowl game. At the time of writing this commentary, we did not know which teams would be in the game or the outcome, but we could reliably predict that there would be a huge worldwide audience, lots of fanfare, and many excited fans of the American version of football eagerly watching the game.

Millions of people watched the National Football League’s championship game—the Super Bowl—in February, but concerns swirl that American football causes a degenerative brain disease called chronic traumatic encephalopathy, or CTE. (image: Shutterstock).

         We also predicted that very few people watching the Super Bowl would be giving much thought to the case of former NFL player Phillip Adams. Last April, at the age of 39, Adams shot and killed six people before taking his own life. At autopsy, as reported in the New York Times, Adams’ brain showed severe chronic traumatic encephalopathy, or CTE, a degenerative brain disease described as follows by experts at the Boston University CTE Research Center:

Chronic Traumatic Encephalopathy (CTE) is a progressive degenerative disease of the brain found in people with a history of repetitive brain trauma (often athletes), including symptomatic concussions as well as asymptomatic subconcussive hits to the head that do not cause symptoms. CTE has been known to affect boxers since the 1920’s (when it was initially termed punch drunk syndrome or dementia pugilistica).

According to the story in the New York Times by Jonathan Abrams, “More than 315 former N.F.L. players have been posthumously diagnosed with C.T.E., including 24 players who died in their 20s and 30s.” There are now a significant number of papers in scientific journals linking repetitive head trauma experienced during contact sports like American football to CTE, and reports linking CTE to a wide variety of abnormal behaviors, including violence and suicide.

Evidence is Not Definitive

It is important to note that the evidence linking playing American football to CTE and CTE to violence and suicide is not airtight. In fact, there have been questions raised about the quality of the data linking playing American football to adverse behavioral and cognitive outcomes. For example, one study found that overall, homicidal violence is rare among NFL players. A review of the literature on suicides among NFL and former NFL players found only weak evidence for a causal relationship with CTE.

As we have often noted in these commentary pages, the most robust type of study for proving a causal link between two things is the randomized controlled trial (RCT), in which a group of people is randomized to different conditions. In theory, and usually (but not always) in practice, the RCT design controls for all the differences between the randomized groups except for the randomized condition itself and therefore gives the clearest picture of whether one thing actually causes another.

One can easily see how biases could creep into the study of CTE in American football players. Perhaps only former players with signs of brain disease, like early dementia, or who exhibit violent behavior like Adams did, come to the attention of researchers and wind up undergoing post mortem brain examination. It could even be that the rate of CTE is in fact no higher among people who have engaged in contact sports and experienced repetitive head trauma than it would be in the general population. The main study behind the conclusion that CTE occurs at a high rate among people who have played football was conducted by the Boston University group led by Dr. Ann McKee and published in the Journal of the American Medical Association (JAMA) in 2017. This was a case series of examinations of 202 donated brains from deceased American football players. The research group found that overall, 87% of the brains showed pathological evidence of CTE, including 99% of those from former NFL players. The players whose brains showed evidence of CTE after death had exhibited many signs and symptoms of abnormal behavior and cognition during their lifetimes, including impulsivity, depression, suicidal ideation, and violence.

Nevertheless, the authors of this landmark study acknowledged in their paper several limitations to their work. They wrote:

“… a major limitation is ascertainment bias associated with participation in this brain donation program. Although the criteria for participation were based on exposure to repetitive head trauma rather than on clinical signs of brain trauma, public awareness of a possible link between repetitive head trauma and CTE may have motivated players and their families with symptoms and signs of brain injury to participate in this research. Therefore, caution must be used in interpreting the high frequency of CTE in this sample, and estimates of prevalence cannot be concluded or implied from this sample.”

The second major limitation they noted is the lack of a comparison group of people who were similar in every way to the American football players except that they never played any contact sports.

Other Study Designs Are Needed

We cannot, however, conduct an experiment in which people are prospectively randomized to play in the NFL or not and then, when they die, autopsy all their brains. Such a study would answer the question definitively, but it obviously could never be done, and that gap allows people to cast doubt on claims that playing tackle football is harmful to the brain. Does this mean that the answer to the question of whether American football and other contact sports associated with head injuries cause CTE and consequent behavioral disturbances will always remain elusive?

Not necessarily. Remember that there are causal links between exposures and adverse health outcomes that we know about that were not proven by RCTs. The best example, of course, is cigarette smoking. No one ever randomized people to either smoke or not smoke cigarettes, waited decades, and then determined that the smokers had higher rates of lung cancer than non-smokers. So how do we know with such certainty that smoking causes lung cancer?

We know this from a variety of studies, including animal studies showing that smoking causes changes in lung cell biology and large human population studies in which it is clear that smokers have higher rates of lung cancer (and a lot of other terrible diseases) than non-smokers.

So, it is possible to determine causality without doing RCTs, but it usually takes large and expensive studies that are very carefully performed. Two recent studies once again raise alarms that playing American football may harm the brain.

The first is a study published in the journal Neurology in which 75 people with a history of repetitive head injuries, including 67 people who played an average of 12 years of American football, underwent magnetic resonance imaging (MRI) scans at an average age of 62 years. Their brains were then examined when they died, at an average age of 67 years. The investigators found a high rate of what are known as white matter hyperintensities, an abnormality in the long tracts that connect brain regions, in the MRI scans of the study participants, and the greater the number of these abnormalities, the longer the participant had played football. More than three quarters of the participants had CTE at autopsy and the burden of white matter hyperintensities on MRI scanning predicted the amount of CTE pathological markers in their brains that were detected postmortem. Once again, there are obvious limitations to this study, which may have suffered from ascertainment bias and lacked a comparison group. Nevertheless, a study like this adds a degree of biological plausibility to the idea that playing football can cause CTE because white matter abnormalities are exactly what one would predict would be found in brains of people who had experienced multiple head traumas.

Abnormalities in the white matter of the brain, called white matter hyperintensities, were detected at a high rate in former football players and were associated with markers of CTE in their brains (image: Shutterstock).

A second recent study of interest was published last December in one of the JAMA journals. It showed that among a group of 19,423 men who had played in the NFL, the rate of the fatal neurological disease amyotrophic lateral sclerosis (ALS, also known as Lou Gehrig’s disease) was four times higher than the general population rate. Furthermore, those players who had ALS also played significantly longer in the league than players who did not get ALS. The authors of the study noted that there may be links between ALS and CTE and concluded “Ultimately, this study provides additional evidence suggesting that NFL athletes are at increased risk of ALS and suggests that this risk may increase with more years of NFL exposure.”

         Our society does permit adults to engage in potentially dangerous and even fatal behaviors. Thus, with some restrictions, adults may choose to smoke cigarettes, drink alcohol to excess, or eat unrestrained amounts of processed foods. One could therefore argue that if an athlete understands the risks of playing in the NFL, that is his choice. But we do not permit youths to smoke or drink, and our concern is with the children who engage in American football or other sports that involve repeated head trauma. One recent study found evidence of brain damage in college football players who had played for only one year, even without their having suffered a concussion. This raises the question of what effects playing football might have on high school-aged players or even younger children who play in organized tackle football leagues.

         While we cannot definitively state at this point that playing football causes CTE, there is an abundance of evidence to cause significant concern that it may harm the brain. Just as we do not permit children to smoke cigarettes, drink alcohol, or drive cars, it is time we ask ourselves whether it is okay to let them play tackle football. And if football were prohibited until adulthood, where would that leave the NFL and its ability to recruit new players? We urgently need studies now to investigate the potential harms of football to developing brains. Much as many of us love watching football games, it is not worth risking brain damage to children.

A Post-Mortem on Glasgow and COP26

Is There Hope to Mitigate the Climate Crisis?

Last November, representatives from nearly 200 countries gathered in Glasgow, Scotland for a much-anticipated United Nations-sponsored climate conference. The goal was to try to forge international agreements to keep the earth’s temperature from rising more than 1.5° Celsius above pre-industrial levels. The stakes were and remain high: climate scientists consider 1.5° C a tipping point, beyond which we will see more severe flooding, wildfires, heatwaves, species extinctions, zoonotic diseases, and human displacement.

         An article in the November 19 edition of Science by Cathleen O’Grady laid out the Glasgow conference’s success and failures. To be sure, many things were accomplished, giving us some reason for hope that countries are finally taking the climate crisis seriously and are prepared to take meaningful action. More than 100 countries present at COP26 pledged to adopt new curbs on greenhouse gas emissions and the conference concluded with a call for “phasing down” burning coal and other fossil fuels.

         Other notable accomplishments included:

·  Hundreds of companies and investors made voluntary pledges to phase out gasoline-powered cars, decarbonize air travel, protect forests, and ensure more sustainable investing.

·  Countries agreed to halt and reverse deforestation.

·  An agreement to cut emissions of methane, a gas more potent at warming the planet than carbon dioxide, by 30% by 2030 gained international media attention.

·  Countries decided to review their goals annually instead of every five years, with next year’s conference slated to take place in Egypt.

Representatives from nearly 200 countries gathered last November in Glasgow, Scotland for the U.N.-sponsored COP26 conference on climate change. While there were hopeful signs, many were disappointed by the conference’s ultimate outputs (image: Shutterstock).

A great deal of attention was placed at the conference on how high-income countries would deal with their responsibilities to the low-income countries that have contributed the least to global warming but suffered its devastating effects the most. There were some new commitments made for funds to flow from the wealthy nations to help poor countries cut greenhouse gas emissions and build the infrastructure that they will need to adapt to the inevitable ravages that climate change will bring to them in the future. It was also decided to begin a discussion to create a fund to compensate low-income countries for the damage already done to them by the relentless burning of fossil fuels by high-income countries.

Another development that garnered a great deal of attention was a joint announcement by the U.S. and China to increase their cooperation on combating climate change. The two countries are at odds on many issues and thus their decision to cooperate on perhaps the most pressing issue of all—saving human civilization from climate change—struck many as highly significant.

Many Shortcomings, Disappointments

Yet despite some signs of progress, the presence of hundreds of thousands of protestors in Glasgow during the conference signaled the many shortcomings and disappointments that attended COP26. “By the end of the meeting,” O’Grady wrote, “…it was clear that the international effort to limit global warming to 1.5° C above preindustrial levels…is on life support.”

None of the commitments or agreements made during the Glasgow conference will actually keep the world under the 1.5° C tipping point; in fact, we are now on course to exceed that limit, and unless greenhouse gas emissions drop dramatically soon, temperatures are expected to rise by more than 2.0° C by the end of this century. That would bring all the devastations noted above, with some small island nations disappearing entirely as sea levels rise and parts of the world becoming virtually uninhabitable. At times it seemed as if some world leaders at the conference were more concerned with protecting the fossil fuel industry, which had ample representation of its own in Glasgow, than with the droughts and food shortages that the climate crisis is already bringing to many parts of the world. A major source of controversy was the subsidies that some wealthy nations provide to fossil fuel companies; the conference concluded with a call to phase them “down” rather than the hoped-for “out.”

On the critical subject of helping poor countries cope with the climate crisis, much of the conference’s concluding language is vague. “Developing nations did not get one big thing they wanted in Glasgow: a new ‘loss and damage’ fund,” O’Grady explained. “Fund advocates argued that developed nations, having produced the vast majority of historic emissions, should help developing countries cope with the costs of climate-related extreme events…In the end, the pact promised only a ‘dialogue’ on loss and damage.”

From a political point of view, perhaps our expectations for what might be accomplished at COP26 were too high all along. It is clearly going to be incredibly complicated to get nearly 200 countries to agree on plans of action that will cost billions of dollars and disrupt business in so many ways. From such a vantage point, what got done in Glasgow was impressive in the tone it set: country leaders now seem united in recognizing that the climate crisis is real, ongoing, and threatening. They seem resigned now to taking it seriously and to trying to find meaningful solutions. There was a sense of urgency palpable to many in Glasgow that had not been felt at previous meetings.

More Climate Disasters Looming

At the same time, however, the results of the conference leave us feeling that we remain on the same collision course with disaster as before the Glasgow conference began. For instance, in December we learned that giant cracks in one of Antarctica’s largest glaciers, the Thwaites Glacier, are bringing it closer to collapse than experts had previously predicted. The melting of this massive glacier, which has been called an “icon of climate change,” will further contribute to the already perilous rise in sea levels. It “already loses around 50 billion tons of ice each year and causes 4% of global sea-level rise,” according to an article in the journal Nature. Now, new fractures in the Thwaites Glacier mean things will get even worse. All around us there is one piece of evidence after another that the climate crisis is imperiling civilization, and so far the countries most responsible for what is happening, and with the most power to do something about it, have been unable or unwilling to take the decisive action needed.

Melting glaciers, like the Antarctic’s Thwaites Glacier, are contributing to sea level rise at a pace even faster than experts originally warned. The effects will be devastating, especially for small island nations (image: Shutterstock).

This makes last month’s news that the Build Back Better legislation, which passed the U.S. House of Representatives, is stalled in the Senate and may never see the light of day especially troubling. About a fourth of the money allocated by the bill, some $500 billion, would be earmarked for climate investments. Now, at least at the time of this writing, the bill is imperiled, and some doubt its ultimate passage. If even one major high-income country cannot rouse itself to make the necessary investment in combating climate change, how can we expect 200 countries to agree to anything strong enough to affect things like the melting of the world’s widest glacier?

This is not the time for despair but rather for bold political action. While it is important that individuals take actions in their own lives that reduce their carbon footprints, like switching to electric cars and eating less meat, the climate crisis can only be seriously approached by actions at national and international levels. The most important thing, then, that individuals can do is to support efforts, campaigns, and organizations that promote national climate legislation and international agreements aimed at significantly curbing greenhouse gas emissions. It is imperative that we all become involved in the political process if we are going to address the climate crisis.

Does Facebook Cause Depression?

Facebook and other social media platforms are under fire these days, accused of causing a wide variety of harms to individuals and societies. Among the charges is that spending time on social media can cause depression. A study published in November in the journal JAMA Network Open attempted to look at this question more closely and has garnered widespread attention from behavioral health experts and the media.

         The paper, whose lead author is Roy Perlis, a psychiatrist at the Massachusetts General Hospital and Harvard University in Boston, is titled “Association between social media use and self-reported symptoms of depression in US adults.” The authors note that a number of studies have hinted at an association between social media and depression, but most of them have been cross-sectional or involved only small numbers of participants, making it impossible to draw any cause-and-effect conclusion.

         In the Perlis et al. study, more than 5,000 people with a mean age of 55.8 years who had very low baseline scores on the nine-item Patient Health Questionnaire (PHQ-9), indicating that they were not depressed, were surveyed approximately monthly between May 2020 and May 2021, with measures of social media use and a repeat PHQ-9 each month. The investigators found that about nine percent of the participants had a worsening of five points or more on the PHQ-9, indicating that they had become significantly depressed over time. Those whose depression worsened also reported the most use of three social media platforms: Snapchat, Facebook, and TikTok. For Facebook, participants with worsening depression scores reported about 40% more use of the platform than participants without worsening scores. Controlling for measures of social contact and social support did not alter these findings, suggesting that social media use substituting for real-life social contact was not the explanation.

Does It Show Causation?

         There are many strengths to this study, including the very large sample size, the use of a well-validated instrument to measure depression, and the longitudinal design. These were all people who were not depressed at study initiation, so the investigators were able to trace the relationship between social media use and the evolution of depression. The question becomes, then, can we say that this study suggests that looking at social media platforms like Facebook can actually cause depression?

         Some headlines on internet sites seemed to hint that this might be the case. “Social Media Use Tied to Self-Reported Depressive Symptoms,” read one. “Social media use linked to depression in adults,” read another. While neither of these used the word “cause” in its headline, the impression one gets from words like “tied to” and “linked to” might easily be misinterpreted to mean that a causal relationship had been demonstrated.

In fact, Dr. Perlis and co-authors note in the paper that “Notably, social media use may simply be a marker of underlying vulnerability to depression.” That is, even though they did not score in the depressed range on the PHQ-9 at baseline, it is still possible that those people who went on to develop worsening depression scores nevertheless had a vulnerability or predisposition to depression that both drives the illness and drives one to spend more time on Facebook and other social media platforms. As the authors themselves point out, the study did not control for innumerable confounding factors that could be the cause of this association between depression and social media use (e.g. previous history of depression; current life stress; reasons for going to social media platforms) and therefore causality cannot be established by this study.

How Would We Establish Causality?

         Yet the findings are troubling because they leave open the possibility that one way to explain them is that in fact spending time on social media is a causative factor for depression. Is it possible that someone who would not otherwise become depressed does so because they spend an excessive amount of time on Facebook, Snapchat, or TikTok? What kind of a study could really answer the causality question here?

         Typically, our default study design to answer causality questions is the randomized trial, in which a group of people is randomized to different conditions. In studies to test whether a new medication is effective, for example, we would randomize a group of people who have a particular health condition to either receive the experimental drug or a placebo. Because nothing else is different between the two groups except whether they have received drug or placebo, the assumption is reasonably made that we have controlled for other potentially confounding factors and isolated the true effects of the drug (things aren’t really that simple, but that’s for another commentary). Could we do that with social media use? The basic design would have to be something like starting with a large group of people without depression at baseline and randomizing them to different levels of social media use. We would then follow them over time, assuming that by the randomization process we have controlled for all possible confounding factors except for how much social media is viewed. Then, we would see if people in high social media utilization groups were more likely to develop clinical depression than people in low utilization groups.

         It is easy to see why doing such a study would be challenging. First of all, it would be expensive, requiring that either the social media companies pay for it, which would raise issues of conflict of interest and bias, or that external funds from a foundation or federal funder be obtained. It is unclear how much appetite there would be for doing this.

         Next, it would be hard to recruit such a group of people because participants would have to agree to accept the level of social media use to which they were assigned. A person who spent little time on Facebook, for instance, might be randomized to a group that must view it at a high level while someone who is used to viewing a lot of social media might get randomized to a group in which they would be asked to stop altogether for an extended period of time. It would also be hard to enforce that people really adhered to the amount of time on social media to which they were assigned.

         Finally, if such a study did show that social media use caused depression, what would we do with that information? Note that in the Perlis et al study, 90% of the participants did not develop worsening depression scores during the year of observation. Whom would we tell, then, to refrain from too much social media use? And what demands could we make on social media companies to reduce the risk that their platforms harm mental health?

         One solution recently announced by the social media platform Instagram is to offer “take a break” reminders to teenage users. The idea here is that limiting the time people spend on social media could help prevent adverse mental health outcomes. Given that it is voluntary whether a person agrees to get these reminders or to follow their advice and actually get off the site, it is unclear how effective this would be.

         We probably cannot depend on randomized trials of social media use being completed any time soon to decide if it does in fact cause psychiatric illness. Remember, however, that we did not discover that cigarette smoking causes lung cancer by doing randomized trials; instead, we depended on large population studies to make that case. It may be that we will have to rely on studies like the Perlis et al. one, large cross-sectional population studies, and other novel study designs to tease out whether there is a causal relationship here. Right now, it might be plausible to advise anyone with a history of depression or in situations that increase risk for depression to be judicious about how much time they spend on social media.

Don’t Call It Flip Flopping

Guidelines Should Change When the Data Say So

For decades people with even low risk for having a heart attack or stroke have been taking low-dose aspirin (usually 81 mg) daily to prevent a first heart attack or stroke. Perhaps because it’s called “baby” aspirin, it sounds so benign. What could be risky about something with the word “baby” in it?

         In an excellent article in the New York Times last October health columnist Tara Parker-Pope wrote “…it came as a shock to many this month when an influential expert panel, the U.S. Preventive Services Task Force, seemed to reverse decades of medical practice, announcing that daily low-dose aspirin should no longer be automatically recommended in middle age to prevent heart attack.” To some, Parker-Pope noted, that “shock” translated into the belief that medical experts had “flip-flopped” on the issue. Some people, she implied in the article, will lose faith in medical recommendations because of what seem like abrupt changes in advice.

         But Parker-Pope also does an excellent job of tracing the evolution of recommendations to take (or not to take) aspirin to prevent cardiovascular (e.g., heart attacks) and cerebrovascular (e.g., strokes) events and convinces us that this is not random flip-flopping but rather the orderly process of science.

Why Would Aspirin Work?

To start, it is logical to ask whether aspirin would be expected to have any effect in reducing a person’s risk of having a heart attack or stroke. We know that many heart attacks and strokes are caused by blood clots forming in arteries that carry blood to the heart and to the brain. Some of these blood clots are thrombotic, arising in those arteries themselves, and others are embolic, arising somewhere else like a deep leg vein or the surface of a heart valve, and travelling through the circulation until they get stuck in a small artery. Either way, those blood clots reduce the ability of blood to flow through the artery where they lodge, decreasing the amount of oxygen that can be delivered to the heart or brain tissue, and thus causing cells to die. Because strokes and heart attacks are high on the list of the most common causes of death, it makes sense to try to prevent blood clots from forming.

Aspirin interferes with the ability of platelets to form clumps and therefore reduces blood clotting. It might help prevent disorders in which blood clots are a cause but can also increase the risk for bleeding (image: Shutterstock).

         The physiology of blood clotting is complex and involves an array of elements, one of which is a small cell fragment that circulates in the bloodstream called the platelet. One of the many actions of aspirin is to reduce platelet activity, thereby making the formation of abnormal blood clots less likely. Aspirin has therefore been seen as one way of reducing the potential for abnormal clotting to occur.

At First It Seemed To Work

         The original study, as Parker-Pope points out, that seemed to suggest a benefit for aspirin in preventing first heart attacks—in men—was published in the New England Journal of Medicine in 1988. The study involved more than 22,000 male physicians who took one 325 mg tablet of buffered aspirin every other day or a placebo. Compared to placebo, participants in the aspirin group had a nearly 50% reduction in both fatal and non-fatal heart attacks. The effect was so large that the trial was stopped early by its independent data monitoring board and the results published.

         There was also a slight increase in strokes in the aspirin group compared to the placebo group in that study. Experts attributed this to what are called hemorrhagic strokes—strokes that occur when a blood vessel bursts and blood leaks into surrounding brain tissue. Because aspirin decreases the blood’s ability to clot by interfering with platelets’ ability to form clumps, these strokes were probably an adverse side effect of taking aspirin. At the time, it seemed clear that the benefits of aspirin outweighed the risks, and many people began taking aspirin in various preparations and at different doses for primary prevention, that is, prevention of first heart attacks. Aspirin also came to be recommended by some guidelines for secondary prevention of heart attacks and strokes, that is, to prevent them in people who already had serious cardiovascular or cerebrovascular disease or previous heart attacks and strokes.

         In its 2016 guidelines, the U.S. Preventive Services Task Force (USPSTF), an independent body of experts that reviews literature and data to make recommendations about how to prevent diseases from occurring, made the following statement about people between the ages of 50 and 59:

The USPSTF recommends initiating low-dose aspirin use for the primary prevention of cardiovascular disease (CVD) and colorectal cancer (CRC) in adults aged 50 to 59 years who have a 10% or greater 10-year CVD risk, are not at increased risk for bleeding, have a life expectancy of at least 10 years, and are willing to take low-dose aspirin daily for at least 10 years.

But in its 2021 guidelines the USPSTF says the following about people between the ages of 40 and 59:

The decision to initiate low-dose aspirin use for the primary prevention of CVD in adults ages 40 to 59 years who have a 10% or greater 10-year CVD risk should be an individual one. Evidence indicates that the net benefit of aspirin use in this group is small. Persons who are not at increased risk for bleeding and are willing to take low-dose aspirin daily are more likely to benefit.

Now, that is a big change: from a blanket recommendation that people with a relatively low risk of heart attack start taking low-dose aspirin daily to making the decision "an individual one" and saying the net benefit "is small." Parker-Pope nicely points to three studies published in 2018 that either failed to show any benefit of aspirin in preventing heart attacks or found that the benefit did not outweigh the risk of bleeding as an adverse side effect of aspirin. Those new studies influenced the USPSTF to propose altering its guidelines for aspirin use, which the task force is still finalizing.

What Accounts for the Difference in Findings?

What changed between 1988, when a study seemed to clearly indicate a benefit for aspirin, and 2018, when equally well-designed studies did not? While it is possible that the 1988 results were a fluke, that seems unlikely given the magnitude of the findings. But remember that the 1988 study included only male physicians, hardly a representative sample of the entire population. When women and people who are not doctors are included, as in the more recent studies, the results might be expected to change.

Parker-Pope makes an interesting speculation about what else might have changed since 1988—our general health. She writes “Fewer people smoke, and doctors have better treatments to control diabetes, high blood pressure and cholesterol, issues that all affect risk for heart attack and stroke. Aspirin still works to protect the heart, but doctors say the benefits aren’t as pronounced now that other more effective treatments have emerged. As a result, the risks of aspirin, including gastrointestinal bleeding and brain hemorrhage, are of greater concern, though they remain low.”

It is plausible, as Parker-Pope suggests, that decades ago the benefit of daily aspirin in preventing fatal heart attacks outweighed the risk of serious bleeding, such as hemorrhagic stroke. But as the population's risk of having a fatal heart attack decreased because of things like less cigarette smoking and better control of high blood pressure, the added benefit of daily aspirin may no longer be large enough to outweigh the risk of bleeding. This could account for the difference in findings between 1988 and 2018.
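The logic here can be made concrete with a small, purely hypothetical calculation: if aspirin cuts heart-attack risk by a fixed fraction, the absolute number of heart attacks it prevents shrinks as the baseline risk falls, while the bleeding harm it adds stays roughly constant. All numbers below are invented for illustration and are not clinical estimates.

```python
# Hypothetical sketch: a fixed *relative* risk reduction yields a shrinking
# *absolute* benefit as baseline heart-attack risk falls, while the added
# bleeding risk stays roughly constant. All figures are made up.

def net_benefit(baseline_risk, relative_risk_reduction, bleed_harm):
    """Heart attacks prevented minus serious bleeds caused, per person."""
    prevented = baseline_risk * relative_risk_reduction
    return prevented - bleed_harm

RRR = 0.25          # assumed relative risk reduction from daily aspirin
BLEED_HARM = 0.003  # assumed added risk of a serious bleed

for era, baseline in [("higher-risk era (e.g., 1988)", 0.05),
                      ("lower-risk era (e.g., 2018)", 0.01)]:
    nb = net_benefit(baseline, RRR, BLEED_HARM)
    print(f"{era}: net benefit per person = {nb:+.4f}")
```

With these invented numbers, the same drug flips from net-beneficial to net-harmful purely because the baseline risk fell, which is the shape of the argument Parker-Pope is making.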

Daily aspirin intake can increase the risk for abnormal bleeding as occurs in hemorrhagic stroke, when a blood vessel in the brain bursts and blood leaks out into surrounding brain tissue (image: Shutterstock).

There’s a lot more to these recommendations. We’ve only touched here on primary prevention for people who don’t have much risk for heart disease. Other recommendations apply to primary prevention in people who do have higher risk and to secondary prevention. One expert stated emphatically that “The easiest patient group to address is adults of any age who have a history of heart attack, stroke, or revascularization [e.g., having had a coronary artery stent placed] and are taking aspirin for secondary prevention. They should continue taking aspirin; the new recommendations don’t apply to them.” There are now more drugs that inhibit clotting than we had in 1988, so aspirin may not be the right choice for many or even most patients who should be on some form of anticoagulation therapy to prevent heart attacks and stroke.

The important point here is to notice that as new science gives us new data, guidelines are going to change. That is not flip-flopping; it is science. It is of course critical that physicians and other healthcare providers stay conversant with the data and the latest iterations of treatment guidelines. People should not despair or be frustrated when recommendations change, as long as the changes are made on the basis of new and emerging science. The complete story about aspirin use to prevent cardiovascular and cerebrovascular events is probably not written yet; there will certainly be more studies reporting more data, some of which may complicate the picture. We need to be sure that the science is allowed to flow unhindered and that information about the results of that science is carefully interpreted by experts and made widely available to the public.

Several things can be done to help people accept changes in scientific consensus. We can continue to press journalists and their editors to be careful how they present new findings, refraining from calling everything a "breakthrough" and acknowledging in their stories that almost every new finding raises important questions that will be researched further. Journalists and editors could also report more often on ongoing research that has not yet reached the level of changing guidelines for care, so that people can see how scientific advances evolve over time.

More fundamentally, we need to educate people from elementary school on about how science really works. Science was once taught to children and adolescents as a fixed body of facts, creating the impression that everything was set in stone and making it hard for anyone to accept change. Even with the evolution of more hands-on learning, in which students are encouraged to do "experiments" and to solve problems, there has been an emphasis on "getting the right answer." "Experiments" in science class are often more like recipes, designed to lead to a single, correct final result. What we need to show students is that experiments rarely lead to a set of totally "correct" or expected results; many experiments either produce unexpected and difficult-to-interpret results or fail altogether to produce a significant finding. From there, scientists keep designing more experiments, trying to work out what went wrong the first time(s), until they get something interesting. We need, therefore, to help people cope with uncertainty and change in science.

Regulating Social Media

How Much Evidence Do We Need?

It seems that not a day goes by without our reading some harsh indictment of Facebook, its subsidiary Instagram, Twitter, YouTube, and other social media platforms. They are accused of warping young minds, distorting election results, spreading misinformation about diseases, and generally imperiling modern society.

         An article by columnist Ishaan Tharoor in the Washington Post last October bore the headline "The indisputable harm caused by Facebook." Tharoor wrote that "Facebook and the other apps it owns … are now increasingly seen through the prism of the harm they appear to cause. They have become major platforms for misinformation, polarization and hate speech. At the same time, [Mark] Zuckerberg [the Facebook founder and owner] and his colleagues rake in billions of dollars each quarter in profits."

         But what is the evidence that we are actually harmed by Facebook, Instagram, WhatsApp, and other social media platforms? Last September the Wall Street Journal reported that Instagram's own internal research demonstrated that the app is "toxic" to teenage girls. This was part of a larger series of articles, based in part on whistleblower testimony, alleging that Instagram and its parent company Facebook have hidden internal documents showing how much harm they do.

Social media stands accused of harming society in a variety of ways, but evidence may be lacking to determine if it is the direct cause of all these alleged harms (source: Shutterstock).

         The charge that Instagram harms young women is at first glance plausible. Numerous sources have shown that rates of depression and other mental health problems are rising among young people in the U.S. and elsewhere. For example, a 2019 Pew Research report showed that 17% of U.S. teenage girls had experienced at least one episode of major depression in 2017, compared to 7% in 2007. Over the same period, the number of Instagram users rose rapidly. So, is logging on to Instagram and scrolling for hours through its feed of photos and videos causing teenagers to become depressed?

The Data Are Lacking

         In October, Laurence Steinberg, a professor of psychology at Temple University, wrote about this in The New York Times, and his conclusion is startling: "Amid the pillorying of Facebook that has dominated the latest news cycle there is an inconvenient fact that critics have overlooked: No research – by Facebook or anyone else – has demonstrated that exposure to Instagram, a Facebook app, harms teenage girls' psychological well-being."

         What is the research that the Wall Street Journal says showed that Instagram harms teenagers, especially girls? "Facebook conducted surveys and focus groups in which people were asked to report how they thought they had been affected by using the Instagram app," Steinberg explained. "Three in ten adolescent girls reported that Instagram made them feel worse about themselves." Surveys and focus groups are important research tools for understanding how things affect us, but as Steinberg points out, they can rarely establish cause-and-effect relationships. That is, the research Facebook apparently did on its own that was leaked to the Wall Street Journal is merely suggestive; it does not establish that Instagram causes depression or any other mental health disorder in anyone, including teenage girls. First, note that the data cited suggest that seven in ten adolescent girls did not indicate that looking at Instagram altered their self-esteem. Second, it could be that depression increases the chances that someone will spend time looking at Instagram and not the other way around; perhaps lonely, depressed people seek answers to their problems on social media. Third, as Steinberg points out, a myriad of other factors could mediate any relationship between Instagram and mental health.

         Steinberg noted that there is some research looking at relationships between social media use and mental health. “Of the better studies that have found a negative correlation between social media use and adolescent mental health,” he wrote, “most have found extremely small effects—so small as to be trivial and dwarfed by other contributors to adolescent mental health.”

         We are not attempting here to defend Facebook, Instagram, or any other social media platform. [Full disclosure: Critica president Jack Gorman once owned about $2,000 in Facebook stock, which he sold several months ago because of conflict-of-interest concerns.] We at Critica have had our own negative experiences with Facebook. One of our projects is to engage people who spread misinformation about COVID-19 vaccines on online platforms, including Facebook and Twitter. We have seen our posts, which attempt to provide correct information about health and science, deleted by Facebook's algorithms, while misinformation about vaccines, sometimes containing what we consider akin to hate speech, is left standing. Facebook officials have acknowledged that what we post should not be deleted and have tried to help us with this problem, but it is a daunting one: with millions of posts circulating on its platforms, artificial intelligence cannot pick out with 100% accuracy which posts are worthy to let stand and which are misinformation that should be deleted.

         We recognize that because some of our work depends on our posting on Facebook, it could be argued that we have an incentive to be reluctant to criticize it. We are still fully cognizant of the problems with misinformation, hate speech, and political polarization that circulate throughout social media and agree that these pose a threat to society in multiple ways. Nevertheless, the example of the charge that Instagram harms teenage girls' mental health should make us pause and ask the most basic question: do we know that social media is in fact causing harm, or is it merely reflecting adverse circumstances rather than causing them?

         This is not merely an academic question, because if social media is indeed causing the kind of harm of which it is accused, then it is perfectly appropriate for us to request government regulation as a remedy. Let us say, for instance, that it could be demonstrated that Instagram causes depression in 30% of the teenage girls who use it. We would be remiss if we did not demand some intervention to prevent so many young women from developing such a serious disorder. We might ask whether we would have such severe political polarization and hate speech leveled against society's marginalized people if it weren't for Facebook. If we didn't have social media platforms to spread misinformation about vaccines, would more people be vaccinated against COVID-19 today, and would fewer people have died? Would we have fairer elections throughout the world if there were no social media platforms spreading disinformation? Clearly, if the answer to any of these questions is "yes," then social media urgently needs very serious regulation.

Meaningful Research is Very Difficult to Accomplish

         It will not be easy to acquire such data, however. Sticking with the claim that Instagram causes depression in young women, how would we go about documenting a true causal relationship? In his New York Times op-ed, Steinberg calls for randomized controlled trials, but it is not easy to imagine what those would look like. We would need to recruit a group of young women free of depression and randomize them to some exposure to Instagram versus no exposure or exposure to something else. Then we would see whether the group exposed to Instagram developed a worsened self-image or even depression. The problem with this design is immediately apparent: implementing the exposure, and deciding how much exposure, is not at all straightforward. Some participants will already have considerable experience with Instagram and others less or none, so controlling for that in the experiment is tricky. Trickier still is figuring out how much exposure is both practical and meaningful. Simply having the exposure group look at one Instagram post, or stay on Instagram for a specified time period, may be insufficient to show an effect; people who use Instagram can do so on a regular basis, and it may be that prolonged and/or repeated exposure is needed to induce a mental health problem. Finally, even if we could come up with a viable randomized design, there are obviously serious ethical issues in attempting to see whether something can make people feel worse about themselves or develop depression.
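There is also a practical obstacle the paragraph above implies but does not spell out: if the true effect of social media on mental health is as small as the studies Steinberg cites suggest, a randomized trial would need enormous numbers of participants to detect it. The sketch below uses the standard two-sample normal approximation for comparing proportions; all depression rates and effect sizes are invented for illustration.

```python
# Rough, hypothetical sample-size sketch for the kind of randomized trial
# Steinberg calls for: comparing depression rates between an exposed group
# and a control group. All rates below are invented, not real estimates.
from statistics import NormalDist

def n_per_group(p_control, p_exposed, alpha=0.05, power=0.80):
    """Approximate participants needed per arm to detect a difference in
    proportions (two-sample normal approximation)."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)   # two-sided significance
    z_b = NormalDist().inv_cdf(power)           # desired power
    variance = p_control * (1 - p_control) + p_exposed * (1 - p_exposed)
    return (z_a + z_b) ** 2 * variance / (p_control - p_exposed) ** 2

# A large assumed effect (10% vs. 15% depression) needs roughly 700 per arm;
# a tenth of that effect (10% vs. 11%) needs well over 10,000 per arm.
print(round(n_per_group(0.10, 0.15)))
print(round(n_per_group(0.10, 0.11)))
```

The point is not the particular numbers but the scaling: halving the effect size roughly quadruples the required sample, which is why "extremely small" effects are so hard to pin down experimentally.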

         So designing a meaningful experiment to show that a social media platform can cause a mental illness is not going to be easy. How much more difficult will it be, then, to show that social media actually causes adverse behaviors like not getting vaccinated or harms minority groups or causes elections to be unfair?

         Social scientists are working on designing experiments that will help us answer these questions in ways that are practical but do not pose ethical issues. When we contemplate regulating social media, we must ask a number of difficult questions. What exactly would we be regulating? Banning teenagers from using Instagram? Banning certain types of Instagram posts? Banning Instagram entirely? There is even the possibility that under some circumstances, looking at Instagram could be helpful for mental health. Perhaps depressed teenagers go to Instagram seeking answers for their problems and maybe some of them actually get help in making connections there to other people or to groups that specialize in mental health issues.

         One place to start would be to implement regulations mandating that social media companies make their platforms available to scientists working on these issues. Last summer Facebook banned a group of New York University investigators from using its platform to conduct research on political ads and disinformation. We see that as emblematic of a general hostility to research and transparency on the part of social media companies like Facebook, something we believe can only be addressed by a legislative intervention. Ryan Calo, professor of law at the University of Washington, has called for just such action. “Congress holds the power to stay Meta’s [the new name for the Facebook company] hand when it comes to threatening legal action or blocking accountability research…It could mandate transparency.”

         Accusations about harms caused by Facebook and other social media platforms abound at present, but in some cases we still lack sufficient data to know with a reasonable degree of certainty that social media is indeed causing those harms. The accusations are serious, however, and we cannot be reticent about demanding the evidence needed to decide whether they are warranted. If social media companies like Facebook refuse to allow researchers to gather the information society needs to judge their safety, then we believe it is appropriate for governments to compel them. It may turn out that good internet research shows social media is not responsible for the harm of which it is accused; we need to find that out.

Do Americans Trust their Doctors?

The crisis of public trust in health and medicine may be more complex than you think.

It stands to reason that an important component of a functioning society is the population's trust in government and public institutions. This trust is especially important in a democratic society, in which citizens need to have faith in and feel supported by their public representatives in order to feel there is any point in participating in civic responsibilities such as voting. Without trust, there can be no reliable and open communication, and essentially no way for the government to inform and guide people through sometimes difficult situations and decisions.

Even before the COVID-19 pandemic, there was evidence that Americans' trust in government was declining. A Pew Research Center poll from 2019 found that, in addition to declining trust in government and in each other, 64% of Americans believed that declining trust in government made solving the country's problems harder, and 70% said that declining trust in fellow citizens made this kind of problem-solving more difficult. There is reason to believe that trust in government, both federal and local, has further decreased during the pandemic.

Is this distrust in government a symptom of a larger societal distrust in all authority figures? One arena where this trust issue has come up frequently in recent years is in medicine and health. It has been assumed of late that Americans’ trust in doctors, the healthcare system, and public health officials is at an all-time low. And given common refrains we hear from people about their confusion about various COVID-19 guidelines and outrage over various restrictions, it seems only logical that trust in public health would be extremely low. 

However, some recent research conducted by our organization Critica suggests that this is not quite the case. While it is true that people hold many suspicious views, especially about government agencies, government health officials such as Dr. Anthony Fauci, and pharmaceutical companies, people still overwhelmingly trust and turn to their personal physicians and other healthcare providers for advice. When asked to whom they would turn for information on both the COVID-19 vaccine and the flu vaccine, an overwhelming majority of focus group participants in our study said that they would first and foremost ask their personal or family physician. Many also commented that for information on childhood vaccines, they would first go to their child's pediatrician. While most of these people also said they would consult the internet, it was rare for anyone to list the internet as their sole source of information. When a friend or family member was a primary source of information or advice, it was almost always because that person was a healthcare worker.

These findings suggest that the issue of trust in physicians and health officials is much more nuanced than we might think. While people overwhelmingly cited their personal physicians as their primary source of information on all things health-related, they would also often voice what could almost be characterized as conspiracy theories in the same breath. Many people simultaneously believed their doctors were trustworthy while also stating that vaccine manufacturers and pharmaceutical companies more generally had corrupted doctors and the healthcare system, and that they were deeply suspicious of the whole operation. How can a person simultaneously mistrust the healthcare system, and even physicians writ large, but also trust their own personal physician more than anyone else for essential decisions about their health and safety? While we need more research to answer that question, it is worth recognizing that people probably do not always carry over beliefs about large systems to their interactions with individuals. That is, while it may seem dissonant, it is actually possible, and quite common, for people to distrust "doctors" as an entity but still place an enormous amount of trust in their personal physician. This realization is important for several reasons. First, it allows us to more accurately explore the actual impacts of conspiratorial or distrustful thinking on people's day-to-day decision-making: someone can hold conspiratorial thoughts yet still trust an individual member of the very class they have suspicions about. Second, it gives us a window to intervene in conspiratorial or distrustful thinking by helping people recognize that there are people they trust, even within a category of people they claim to entirely distrust. Third, and perhaps most importantly, it gives us hope that our healthcare system might be able to regain people's trust.